
Re: Slow as a slug, Confirmed?

  • Phil Endecott
    Message 1 of 5, Apr 5, 2007
      > maybe that rsync hash-computation uses floating point calculations

      No, the rsync algorithm is integer-only.

      I suggest that vmstat is the best way to determine where the bottleneck is.

      Phil.
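
      (For anyone following along, the vmstat columns to watch are si/so
      for swap traffic and the cpu columns for where the time goes; a
      rough reading guide, not an exhaustive one:)

      vmstat 5
      # sustained si/so               -> swapping: not enough RAM for the job
      # high wa, or high id with b>0  -> waiting on the disk
      # high us+sy with si/so near 0  -> CPU-bound
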
    • John
      Message 2 of 5, Apr 5, 2007
        As I write this, e2fsck is still running after 3 hours of churning
        my hard disk. Phil, I include vmstat results below which show a lot of
        swapping, particularly during "pass 2". I also include atop results
        showing memory usage by process.

        I will try killing some unslung daemons to see if I can achieve
        Dave's fast debian-slug performance.
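
        (A minimal way to try that, assuming upnpd is one of the daemons
        worth stopping - the exact set of Unslung daemons and their init
        scripts will vary per setup:)

        kill $(pidof upnpd)   # stop the UPnP daemon (example target only)
        free                  # see how much RAM and swap come back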

        # vmstat 5
        procs -----------memory---------- ---swap-- -----io---- -system-- ----cpu----
        r b swpd free buff cache si so bi bo in cs us sy id wa
        0 1 6308 22840 260 1336 0 0 0 0 2109 27 0 3 97 0
        0 1 6308 22840 260 1336 0 0 0 0 2109 19 0 2 98 0

        Above is idle. Then "e2fsck -nvf /dev/sda1" starts:
        e2fsck 1.34 (25-Jul-2003)
        Warning! /dev/sda1 is mounted.
        Warning: skipping journal recovery because doing a read-only filesystem check.
        Pass 1: Checking inodes, blocks, and sizes
        At first vmstat shows a lot of swapping:

        2 3 40860 560 252 544 107 8181 262 8190 9104 6549 13 47 40 0
        1 2 61132 316 268 764 2559 2727 2787 2727 6013 4187 34 28 38 0
        3 1 63156 740 256 5580 676 2078 3590 2078 6191 4312 25 35 40 0
        [snip]
        After 10 minutes, the swapping (si/so) settles down somewhat:

        2 2 64208 856 228 16188 39 30 4438 30 5592 3766 30 33 37 0
        1 2 64056 748 228 16404 48 22 5398 22 6158 4303 25 42 32 0
        1 2 63912 652 228 16312 51 0 5294 0 6016 4134 37 34 29 0
        3 1 63768 736 228 16148 34 0 5103 0 5949 4101 30 39 31 0
        1 2 63784 804 228 15984 24 5 2290 5 4020 2147 32 22 46 0
        1 2 63764 792 228 16000 0 18 1360 18 3428 1555 27 12 61 0
        1 2 64276 520 228 16440 39 96 6554 96 6821 4898 39 41 20 0
        1 3 64564 796 228 16572 34 90 5730 90 6398 4537 35 40 25 0
        1 2 64668 868 228 16640 16 47 3107 47 5062 3425 19 28 53 0
        2 2 64772 532 232 17108 42 26 5597 26 6292 4426 33 39 28 0
        [snip]
        After Pass 2 starts, "Pass 2: Checking directory structure," swapping
        increases and stays high. The slug is about 80% I/O bound (see the cpu "id" column):

        1 3 80496 836 272 1156 1786 310 1950 310 4096 2194 5 17 78 0
        2 2 81316 812 272 1252 1313 354 1618 354 4088 2143 4 15 82 0
        0 3 81144 872 272 1564 1414 290 1600 290 3813 1882 3 11 86 0
        2 1 81320 620 236 1796 1222 286 1485 287 3874 1932 6 13 81 0

        During pass 2, I ran atop which shows memory usage by program. upnpd
        claims a lot of memory usage even though there are no Windows computers
        on the network.

        PRC | sys 1.24s | user 0.98s | #thr 58 | #zombie 0 | #exit ? |
        CPU | sys 15% | user 10% | irq 0% | idle 75% | wait 0% |
        CPL | avg1 3.09 | avg5 3.11 | avg15 3.09 | csw 21060 | intr 40310 |
        MEM | tot 29.8M | free 0.9M | cache 2.7M | buff 0.2M | slab 0.0M |
        SWP | tot 117.7M | free 35.4M | | vmcom 0.0M | vmlim 0.0M |
        PAG | scan 0 | stall 0 | | swin 4513 | swout 677 |
        NET | transport | tcpi 3 | tcpo 3 | udpi 6 | udpo 6 |
        NET | network | ipi 9 | ipo 9 | ipfrw 0 | deliv 9 |
        NET | dev ixp0 | pcki 3 | pcko 9 | si 0 Kbps | so 3 Kbps |

        PID MINFLT MAJFLT VSTEXT VSIZE RSIZE VGROW RGROW MEM CMD 1/1
        855 709 604 113K 92396K 13196K 0K -296K 43% e2fsck
        863 179 0 74K 2428K 2356K 0K 0K 8% atop
        211 0 0 6K 10908K 452K 0K 116K 1% upnpd
        219 0 0 6K 10908K 452K 0K 116K 1% upnpd
        220 17 17 6K 10908K 452K 0K 116K 1% upnpd
        223 7 4 6K 10908K 452K 0K 116K 1% upnpd
        224 0 0 6K 10908K 452K 0K 116K 1% upnpd
        225 3 5 6K 10908K 452K 0K 116K 1% upnpd
        860 71 0 270K 5864K 424K 0K 0K 1% sshd
        344 81 5 151K 4816K 380K 0K 20K 1% nmbd
        373 8 0 18K 1904K 264K 0K 0K 1% USB_Detect
        852 0 0 16K 1380K 260K 0K 0K 1% vmstat
        396 0 0 11K 1216K 80K 0K 4K 0% crond
        3 0 0 0K 0K 0K 0K 0K 0% ksoftirqd_CPU0
        4 0 0 0K 0K 0K 0K 0K 0% kswapd
        10 0 0 0K 0K 0K 0K 0K 0% usb-storage-0
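
        A quick cross-check of how much memory upnpd really holds (the six
        entries above share one VSIZE and are most likely threads of a single
        process, so the total is not six times that figure) would be
        something like:

        grep Vm /proc/211/status   # 211 = one of the upnpd PIDs listed above
        # VmRSS is resident memory; VmSize is the (shared) virtual size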


        On Thu, Apr 05, 2007 at 11:51:36AM +0100, Phil Endecott wrote:
        > > maybe that rsync hash-computation uses floating point calculations
        >
        > No, the rsync algorithm is integer-only.
        >
        > I suggest that vmstat is the best way to determine where the bottleneck is.
        >
        > Phil.
      • Rod Whitby
        Message 3 of 5, Apr 5, 2007
          Be aware that you are also comparing the performance of an old 2.4 kernel and utilities (Unslung) versus a brand new 2.6.18 kernel and utilities (Debian).
          Don't know if that will make any difference or not ...
          -- Rod

          -----Original Message-----
          From: John <jl.050877@...>
          Date: Friday, Apr 6, 2007 5:55 am
          Subject: Re: [nslu2-linux] Re: Slow as a slug, Confirmed?

          [John's message quoted in full - snipped]
        • Phil Endecott
          Message 4 of 5, Apr 7, 2007
            John wrote:
            > I include vmstat results below which show a lot of
            > swapping, particularly during "pass 2" [of fsck].

            That's why it's slow then :-)

            I recall that the problem with the Debian installer on 1GB flash drives
            was the result of excessive memory use from mkfs.ext2. Now here's
            another ext2 program that uses more RAM than the slug can provide.
            Perhaps it would be worthwhile contacting the ext2 people and asking
            for their opinions. It could be, for example, that fsck is
            deliberately using RAM to cache things that it has read from the disk
            in order to run faster; if so, disabling that caching would be the
            right trade-off on a machine as small as the slug.


            Phil.
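
            (For what it's worth, e2fsprogs releases newer than the 1.34 shown
            above added a [scratch_files] option to /etc/e2fsck.conf that makes
            pass 2 keep its directory bookkeeping in on-disk files instead of
            RAM, which is aimed at exactly this kind of low-memory machine;
            assuming such a version, the setup would look roughly like:)

            # /etc/e2fsck.conf (needs a reasonably recent e2fsprogs)
            [scratch_files]
                directory = /var/cache/e2fsck   # must exist; trades RAM for disk I/O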