
Re: [nslu2-linux] Re: Optware's official build machine

  • Alex Potapenko
    Message 1 of 17, Feb 20, 2011
      > Regarding building packages using later versions of Linux, my
      > experience is historical difficulty building the tool chains with the
      > later tools.

      Agreed. I had, for example, to patch mbwe-bluering's toolchain a bit to get it to build, but it was fairly simple. I hope fixing the other problems will be just as simple...
      --
      Best regards,
      Alex Potapenko
    • Brian
      Message 2 of 17, Feb 20, 2011
        --- In nslu2-linux@yahoogroups.com, Robert Hammond <rob.hammond@...> wrote:
        >
        > Brian
        >
        > A small improvement that you may want to fix before re-building the build
        > machines.
        >
        > Currently perl-host builds in the optware package build directory rather
        > than in the host build directory, easily fixed by changing the two mk
        > files in the sources folder. I think best if you fix this.
        >

        I remember trying to build perl-host in host/builds; I ran into a problem related to building feeds for different architectures, and reverted. You're welcome to take another crack at it, but be sure to build multiple feeds.

        Regards,

        -Brian
      • Ian White
        Message 3 of 17, Feb 21, 2011
          Hi all,
           
          I am after some advice.
           
          I have a venerable slug that has over the years run all the Unslung versions from 1.1-beta through to 6.7-alpha.
           
          Its primary purpose has been to "pull" user-file store backups from various windows PCs and a NAS device *and then push these backups back out to various alternate locations*.
           
          To this end it has had a 750GB disk attached with some custom partitioning:
          700GB (ext3) root partition, 7.5GB (ext3) legacy root partition (un-used), 2.5GB (swap) and 32GB (FAT32/VFAT) partition.
           
          It then runs just over 30 separate scheduled "pull" or "push" backup "jobs", some daily and some weekly,
          such that I have a hot-standby NAS holding a mirror of the data on the primary NAS,
          and a remote/off-site backup NAS with data that is approximately one week old.
           
          The slug is set to automatically reboot every Saturday night.
          Once every 6 to 12 months or so, this reboot would fail because the disk checks failed.
           
          The fix is (was) to boot a live Linux CD on a PC and run an "fsck" of the ext3 partition(s).
           
          This had worked flawlessly until, you guessed it, last Saturday night.
          Unfortunately the "fsck -y" (yes, risky, I know, but it had never failed before)
          went OK at first but then encountered many, many "short reads" whose recovery apparently failed.
          Even trying another fsck using an alternate superblock didn't help.
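
          (The commands I mean are roughly the following -- /dev/sdb1 is just a stand-in for whatever the data partition shows up as on the live CD:)

            # list the backup superblock locations (read-only)
            dumpe2fs /dev/sdb1 | grep -i 'superblock at'
            # then retry the check against one of the backups it reports, e.g.
            fsck.ext3 -b 32768 -y /dev/sdb1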
           
          So whilst I have not lost any real data, I have lost my backup "strategy",
          and I think the partition/data (but not necessarily the drive) is "toast".
           
          But as the drive is also rather old (the oldest date on the "un-used" partition goes back to 2005) I already had plans to replace it.
          So I have a blank 2TB drive sitting in another USB enclosure ready to use.
           
          I think it is time for me to move away from the "unslung" firmware to SlugOS (BE),
          as I want the slug to take a more "passive" role by acting as the recipient of rdiff-backup (using rsync),
          where the backup is initiated/"pushed" by the machine hosting the "master" copy of the data.
           
          So the advice needed is as follows:
           
          1) Will SlugOS cope with a 2TB drive?
           
          2) What sort of partitioning scheme would work best?
              (I am inclined to partition the drive into at least 2 x 1TB partitions, just so the checks are quicker and only 1/2 of the data is "lost" if a partition becomes corrupt)
           
          3) I'm betting that I also need some swap space, because SlugOS will not cope with partitions of this size using RAM alone?
              Any recommended size for this swap space? (Do I need, say, 1GB of swap per 1TB of partition to be fsck'd?)
           
          Sorry for the long-ish post and thanks in advance for any advice/tips
           
          TIA
          Ian White
           
          P.S. The other NAS devices are a Qnap and 2 Linkstations, all with "root" access, but the Slug was my first NAS, so I think it deserves some continued effort on my part.
        • Mike Westerhof (mwester)
          Message 4 of 17, Feb 21, 2011
            On 2/21/2011 4:59 AM, Ian White wrote:
            [snip]
            > So the advice needed is as follows:
            >
            > 1) Will SlugOS cope with a 2TB drive?

            Yes. You might, however, need the latest development version of SlugOS
            if the chipset in your enclosure is not supported by the older kernel in
            SlugOS 5.3.

            > 2) What sort of partitioning scheme would work best?
            > (I am inclined to partition the drive into at least 2 x 1TB
            > partitions, just so the checks are quicker and only 1/2 of the data is
            > "lost" if a partition becomes corrupt)

            You'll need a rootfs, and that should be separate from the data. And
            breaking up such large partitions is a good idea. So that would imply
            at least three partitions - a small one (a GB or two is more than
            adequate) for the rootfs, and then the two data partitions.
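
            As a rough sketch only (assuming the new disk shows up as /dev/sda on
            whatever machine you partition it from -- adjust devices and sizes to
            taste), that layout might look like:

              parted -s /dev/sda mklabel msdos
              parted -s /dev/sda mkpart primary ext3 1MiB 2GiB          # small rootfs
              parted -s /dev/sda mkpart primary linux-swap 2GiB 3GiB    # swap (see below)
              parted -s /dev/sda mkpart primary ext3 3GiB 50%           # data 1
              parted -s /dev/sda mkpart primary ext3 50% 100%           # data 2
              mkfs.ext3 -L rootfs /dev/sda1
              mkfs.ext3 -L data1 /dev/sda3
              mkfs.ext3 -L data2 /dev/sda4
              mkswap /dev/sda2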

            > 3) I'm betting that I also need some swap space because SlugOS will not
            > cope with these sized partitions using ram alone?
            > Any recommended size for this swap space? (Like do I need say 1GB
            > swap per 1TB of partition to be "fsck"'d)

            You'll need swap -- but usually it's sized based on the RAM in the device
            (using the rule-of-thumb of 1x or 2x the amount of RAM), and then one
            sees if your workload will fit into that. In your case, there's no real
            way to know:

            a) For fsck, there are pathological cases of corruption that will run
            almost ANY system out of swap, so there's no guaranteed amount you can
            configure that will allow you to repair all types of damage. Moreover,
            once you get into the 4x-swap-to-memory range, your performance is
            probably such that you'll never finish the fsck anyway.

            b) For rsync, the amount of memory will depend on the size of the
            filelist, since it builds that data-structure in-memory. So there is an
            upper limit based on what you are presenting to rsync -- you'll have to
            present it with the worst-case workload, and measure its virtual memory
            consumption; that will tell you how much virtual memory you need (and
            make your swap space slightly above to double that figure, depending on what
            you do about the dreaded OOM Killer).
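
            One crude way to measure that (assuming a single rsync process on the
            slug; paths and hosts are only placeholders): run the worst-case job --
            a dry run ("rsync -a -n ...") works too, since the file list is still
            built -- and sample the process's memory while it runs:

              # VmPeak is the figure to size your virtual memory from
              grep -E 'VmPeak|VmSize' /proc/$(pidof rsync)/status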

            You'll have to sort what to do about that most evil of all Linux kernel
            creations, the Out-Of-Memory Killer. This ugly monster's job is to
            prevent the system from crashing by detecting a situation that might
            result in a shortfall of virtual memory and terminating a process so as
            to avoid that shortfall. It's rather like throwing passengers out of
            the airplane when low on fuel, in order to save the remaining
            passengers. [http://lwn.net/Articles/104185/]
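
            If you want to nudge it, the per-process knob on kernels of this
            vintage is oom_adj (oom_score_adj on newer kernels); <pid> below is
            whatever process you want to protect:

              # make a critical process (sshd, say) very unlikely to be chosen
              echo -17 > /proc/<pid>/oom_adj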

            You can either create a swap partition, or just use a swap file; it
            really makes no difference for the NSLU2 (any performance difference due
            to filesystem overhead is buried by the slowness of USB in the first place).
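
            A swap file is just (size and path below are only examples):

              dd if=/dev/zero of=/data1/swapfile bs=1M count=512
              mkswap /data1/swapfile
              swapon /data1/swapfile
              # and in /etc/fstab, so it survives reboots:
              #   /data1/swapfile  none  swap  sw  0  0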

            Also, you might like to tweak the "volatiles" settings (deep down
            beneath the /etc directory) -- you'll be wanting to move as much stuff
            out of the tmpfs filesystems as you can. In fact, since those
            filesystems come right out of the RAM, you might like to replace them
            entirely with real filesystems, thus making sure that nothing writing a
            temp file to /tmp or /var/tmp will end up consuming precious memory.
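
            One approach (a sketch only, assuming your /etc/fstab mounts
            /var/volatile as tmpfs the way the OE-based images usually do, and that
            your first data partition is mounted at /data1): give the volatile area
            a home on the disk instead, e.g.

              # in /etc/fstab: comment out the tmpfs line ...
              #   tmpfs            /var/volatile  tmpfs  defaults  0 0
              # ... and bind-mount a real directory in its place
              /data1/volatile    /var/volatile  none   bind      0 0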

            Here's a starting point for configuring your new, happier slug (after
            all, ALL slugs are happier when running SlugOS [<-- our new marketing
            slogan -- what do you think?? ;-)])
            http://www.nslu2-linux.org/wiki/SlugOS/InstallandTurnupABasicSlugOSSystem

            -Mike (mwester)

            > Sorry for the long-ish post and thanks in advance for any advice/tips
            >
            > TIA
            > Ian White
            >
            > P.S. The other NAS devices are a Qnap and 2 Linkstations, all with
            > "root" access, but the Slug was my first NAS, so I think it deserves
            > some continued effort on my part.
          • Ian White
            Message 5 of 17, Feb 22, 2011
              Hi Mike,

              Thanks for the info, this was just the information I was looking for.

              I have successfully flashed my slug with SlugOS(BE) 5.3-beta, run "turnup
              init", and formatted the disk (in the enclosure, using an Ubuntu live CD),
              and it is recognized by the slug.
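
              (The next step, as I understand it, will be roughly the following once
              the partitions are in place -- the device name is whatever the disk
              enumerates as:)

                turnup disk /dev/sda1 -t ext3   # move the rootfs from flash onto the disk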

              So I am ready to establish a better partitioning scheme and then progress
              towards my "happier slug"

              Thanks again
              Ian W.

              On 22/02/2011 12:55 am, Mike Westerhof wrote:
              > On 2/21/2011 4:59 AM, Ian White wrote:
              > [snip]
              >> So the advice needed is as follows:
              >>
              >> 1) Will SlugOS cope with a 2TB drive?
              >
              > Yes. You might, however, need the latest development version of SlugOS
              > if the chipset in your enclosure is not supported by the older kernel in
              > SlugOS 5.3.
              >
              >> 2) What sort of partitioning scheme would work best?
              >> (I am inclined to partition the drive into at least 2 x 1TB
              >> partitions, just so the checks are quicker and only 1/2 of the data is
              >> "lost" if a partition becomes corrupt)
              >
              > You'll need a rootfs, and that should be separate from the data. And
              > breaking up such large partitions is a good idea. So that would imply
              > at least three partitions - a small one (a GB or two is more than
              > adequate) for the rootfs, and then the two data partitions.
              >
              >> 3) I'm betting that I also need some swap space because SlugOS will not
              >> cope with these sized partitions using ram alone?
              >> Any recommended size for this swap space? (Like do I need say 1GB
              >> swap per 1TB of partition to be "fsck"'d)
              >
              > You'll need swap -- but usually it's sized based on the RAM in the device
              > (using the rule-of-thumb of 1x or 2x the amount of RAM), and then one
              > sees if your workload will fit into that. In your case, there's no real
              > way to know:
              >
              > a) For fsck, there are pathological cases of corruption that will run
              > almost ANY system out of swap, so there's no guaranteed amount you can
              > configure that will allow you to repair all types of damage. Moreover,
              > once you get into the 4x-swap-to-memory range, your performance is
              > probably such that you'll never finish the fsck anyway.
              >
              > b) For rsync, the amount of memory will depend on the size of the
              > filelist, since it builds that data-structure in-memory. So there is an
              > upper limit based on what you are presenting to rsync -- you'll have to
              > present it with the worst-case workload, and measure its virtual memory
              > consumption; that will tell you how much virtual memory you need (and
              > make your swap space slightly above to double that figure, depending on what
              > you do about the dreaded OOM Killer).
              >
              > You'll have to sort what to do about that most evil of all Linux kernel
              > creations, the Out-Of-Memory Killer. This ugly monster's job is to
              > prevent the system from crashing by detecting a situation that might
              > result in a shortfall of virtual memory and terminating a process that
              > would avoid that shortfall. It's rather like throwing passengers out of
              > the airplane when low on fuel, in order to save the remaining
              > passengers. [http://lwn.net/Articles/104185/]
              >
              > You can either create a swap partition, or just use a swap file; it
              > really makes no difference for the NSLU2 (any performance difference due
              > to filesystem overhead is buried by the slowness of USB in the first
              > place).
              >
              > Also, you might like to tweak the "volatiles" settings (deep down
              > beneath the /etc directory) -- you'll be wanting to move as much stuff
              > out of the tmpfs filesystems as you can. In fact, since those
              > filesystems come right out of the RAM, you might like to replace them
              > entirely with real filesystems, thus making sure that nothing writing a
              > temp file to /tmp or /var/tmp will end up consuming precious memory.
              >
              > Here's a starting point for configuring your new, happier slug (after
              > all, ALL slugs are happier when running SlugOS [<-- our new marketing
              > slogan -- what do you think?? ;-)])
              > http://www.nslu2-linux.org/wiki/SlugOS/InstallandTurnupABasicSlugOSSystem
              >
              > -Mike (mwester)
              >
              >> Sorry for the long-ish post and thanks in advance for any advice/tips
              >>
              >> TIA
              >> Ian White
              >>
              >> P.S. The other NAS devices are a Qnap and 2 Linkstations, all with
              >> "root" access, but the Slug was my first NAS, so I think it deserves
              >> some continued effort on my part.
              >
              >
            • Ian White
              Message 6 of 17, Mar 10, 2011
                I have re-flashed my formerly "unslung" slug with SlugOS/BE and set up my
                desired partitioning structure.

                But when I came to install the various optware packages, including coreutils,
                rsync, samba, python etc.,
                I found that there were no packages for librsync and rdiff-backup.

                Are these (librsync and rdiff-backup) available for SlugOS/BE
                or should I go for a Debian-based slug firmware/Debian install to obtain
                access to Debian's larger package collection?
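
                (What I tried, roughly -- the feed is whatever ipkg is configured with
                out of the box:)

                  ipkg update
                  ipkg list | grep -i -E 'librsync|rdiff'
                  # ... which turned up nothing, hence the question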

                Thanks
                Ian W.