Smokeping on Debian/NSLU2
- Sent over here by the moderator of the Debian/NSLU2 group ...
> I have a freshly minted NSLU2 running the Debian image installed by ISP's.
> My intent is to make a bunch of microprobes to place on various
> networks to measure response times of our web products. In the past,
> I've used Smokeping on BSD boxes that were discarded. I thought it
> would be cool to have an all in one headless dedicated box to do it.
> I have an NSLU2, a SanDisk CF reader and a 4GB microdrive. Debian
> installs, I take the defaults and it leaves me with a "top" like this:
> top - 17:52:03 up 48 min, 1 user, load average: 1.48, 1.43, 1.19
> Tasks: 48 total, 2 running, 46 sleeping, 0 stopped, 0 zombie
> Cpu(s): 13.0%us, 87.0%sy, 0.0%ni, 0.0%id, 0.0%wa, 0.0%hi, 0.0%
> Mem: 29988k total, 28800k used, 1188k free, 1396k buffers
> Swap: 96348k total, 9336k used, 87012k free, 8608k cached
> ulimit looks like this.
> # ulimit -a
> core file size (blocks, -c) 0
> data seg size (kbytes, -d) unlimited
> max nice (-e) 0
> file size (blocks, -f) unlimited
> pending signals (-i) unlimited
> max locked memory (kbytes, -l) unlimited
> max memory size (kbytes, -m) unlimited
> open files (-n) 1024
> pipe size (512 bytes, -p) 8
> POSIX message queues (bytes, -q) unlimited
> max rt priority (-r) 0
> stack size (kbytes, -s) unlimited
> cpu time (seconds, -t) unlimited
> max user processes (-u) unlimited
> virtual memory (kbytes, -v) unlimited
> file locks (-x) unlimited
> I've been able to 'apt-get' apache and smokeping packages and they
> work, sort of. Whenever I drill down in smokeping, I get the
> following error when it tries to render an rrd into an image:
> ERROR: malloc im->gdes[gdi].data
> which is a wrapper around a malloc call. It doesn't matter if I run
> smokeping.cgi under perl each time or speedy_cgi; they both blow up
> with the malloc error.
> Has anyone seen an error like this and/or got smokeping to run on an NSLU2?
> As an alternative, if I wanted to build the code myself, what all
> packages need to be installed to build a development environment?
> Thanks in advance.