Perlbal Out of Memory

Todd Lipcon todd at
Wed Jul 25 21:14:55 UTC 2007

Hi all,

This afternoon Perlbal mysteriously died with the output "Out of Memory" 
on the console. All I can find online says that this means that a malloc() 
failed within the Perl interpreter. My monitoring software shows that 
2.6GB was being used for the system cache at the time of death, and that 
all 1GB of swap was free. Interestingly, about 8 minutes before Perlbal 
died, the "mem_free" 
graph shows a sharp decline from 380M to 210M, then jumping up to 590M 
when Perlbal crashed. To me, this indicates that the Perlbal process was 
only using about 380M when it ran out. Other graphs indicate no abnormal 
CPU or disk usage during the 8 minute decline.

Here's ulimit as the perlbal user (I'm running perlbal as a non-root user 
on a high port and using iptables to forward port 80 to it):

$ ulimit -a
core file size          (blocks, -c) 0
data seg size           (kbytes, -d) unlimited
file size               (blocks, -f) unlimited
pending signals                 (-i) 1024
max locked memory       (kbytes, -l) 32
max memory size         (kbytes, -m) unlimited
open files                      (-n) 1024
pipe size            (512 bytes, -p) 8
POSIX message queues     (bytes, -q) 819200
stack size              (kbytes, -s) 10240
cpu time               (seconds, -t) unlimited
max user processes              (-u) 64511
virtual memory          (kbytes, -v) unlimited
file locks                      (-x) unlimited
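For context, the iptables forwarding mentioned above is set up roughly like
this (a sketch; 8080 is a placeholder for the actual high port Perlbal
listens on, and the rule must be added as root):

```shell
# Redirect inbound TCP port 80 to the unprivileged port Perlbal binds to.
# 8080 here is a stand-in for the real port number.
iptables -t nat -A PREROUTING -p tcp --dport 80 -j REDIRECT --to-port 8080
```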

Does perlbal ever lock memory? Should I remove the ulimit -l for this 
user?

Is there some flag I can pass to perl so that, if this happens again, I'll 
get some kind of useful stack trace or core dump? Will setting "ulimit -c 
unlimited" produce a core dump in this situation?
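For reference, what I'm picturing is something along these lines (a sketch,
assuming perlbal is started from a bash shell and that the kernel's
core_pattern writes cores to the working directory; the config path is the
Debian-style default and may differ on your system):

```shell
# Raise the core file size limit in the shell that launches perlbal;
# child processes inherit it, so a fatal malloc() failure should then
# leave a core file behind.
ulimit -c unlimited
perlbal --config=/etc/perlbal/perlbal.conf &

# After a crash, the core can be inspected against the perl binary:
#   gdb /usr/bin/perl core
```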

