Memory issue 2.1.1 request for info
Matthew T. Kromer:
I'm extremely curious that the cache doesn't clear when you restart Zope. One would normally not expect to see this. Can you do a "ps xauww" and mail it to me / the list?
USER       PID %CPU %MEM    VSZ    RSS TTY   STAT START  TIME COMMAND
root         1  0.0  0.0   1104     72 ?     S    May26  0:05 init [3]
root         2  0.0  0.0      0      0 ?     SW   May26  0:03 [kflushd]
root         3  0.0  0.0      0      0 ?     SW   May26  0:02 [kupdate]
root         4  0.0  0.0      0      0 ?     SW   May26  0:00 [kpiod]
root         5  0.0  0.0      0      0 ?     SW   May26  0:03 [kswapd]
root         6  0.0  0.0      0      0 ?     SW<  May26  0:00 [mdrecoveryd]
root         9  0.0  0.0      0      0 ?     SW   May26  0:00 [scsi_eh_0]
bin        330  0.0  0.0   1196      0 ?     SW   May26  0:00 [portmap]
root       359  0.0  0.0   1156    144 ?     S    May26  0:00 syslogd -m 0
root       370  0.0  0.0   1396      0 ?     SW   May26  0:00 [klogd]
daemon     386  0.0  0.0   1128    104 ?     S    May26  0:00 /usr/sbin/atd
root       402  0.0  0.0   1304    108 ?     S    May26  0:00 crond
root       418  0.0  0.0   1124      0 ?     SW   May26  0:00 [inetd]
root       434  0.0  0.0   1176      0 ?     SW   May26  0:00 [lpd]
root       457  0.0  0.0   2204      0 tty1  SW   May26  0:00 [login]
root       458  0.0  0.0   1076      0 tty2  SW   May26  0:00 [mingetty]
root       459  0.0  0.0   1076      0 tty3  SW   May26  0:00 [mingetty]
root       460  0.0  0.0   1076      0 tty4  SW   May26  0:00 [mingetty]
root       461  0.0  0.0   1076      0 tty5  SW   May26  0:00 [mingetty]
root       462  0.0  0.0   1076      0 tty6  SW   May26  0:00 [mingetty]
zope       465  0.0  0.0   1740      0 tty1  SW   May26  0:00 [bash]
root       498  0.0  0.0   2360     32 ?     S    May26  0:00 [in.rlogind]
root       499  0.0  0.0   2288      0 pts/0 SW   May26  0:00 [login]
jason      500  0.0  0.0   1752    484 pts/0 S    May26  0:00 -bash
zope      1992  0.0  0.0   3188      0 tty1  SW   May29  0:01 [python]
zope      1996  0.0 22.2 220152 216340 tty1  S    May29  1:33 /Zope/bin/python /Zope/z2.py -t 10 -D
zope      1998  0.0 22.2 220152 216340 tty1  S    May29  0:02 /Zope/bin/python /Zope/z2.py -t 10 -D
zope      1999  0.0 22.2 220152 216340 tty1  S    May29  0:00 /Zope/bin/python /Zope/z2.py -t 10 -D
zope      2000  0.0 22.2 220152 216340 tty1  S    May29  0:00 /Zope/bin/python /Zope/z2.py -t 10 -D
zope      2001  1.4 22.2 220152 216340 tty1  S    May29 23:39 /Zope/bin/python /Zope/z2.py -t 10 -D
zope      2002  0.0 22.2 220152 216340 tty1  S    May29  0:13 /Zope/bin/python /Zope/z2.py -t 10 -D
zope      2003  0.0 22.2 220152 216340 tty1  S    May29  0:08 /Zope/bin/python /Zope/z2.py -t 10 -D
zope      2004  0.0 22.2 220152 216340 tty1  S    May29  0:00 /Zope/bin/python /Zope/z2.py -t 10 -D
zope      2005  0.0 22.2 220152 216340 tty1  S    May29  0:42 /Zope/bin/python /Zope/z2.py -t 10 -D
zope      2006  0.0 22.2 220152 216340 tty1  S    May29  0:00 /Zope/bin/python /Zope/z2.py -t 10 -D
zope      2007  0.0 22.2 220152 216340 tty1  S    May29  0:00 /Zope/bin/python /Zope/z2.py -t 10 -D
zope      2008  1.1 22.2 220152 216340 tty1  S    May29 17:55 /Zope/bin/python /Zope/z2.py -t 10 -D
root      2474  0.0  0.0   2360      0 ?     SW   09:35  0:00 [in.rlogind]
root      2475  0.0  0.0   2292      0 pts/1 SW   09:35  0:00 [login]
jason     2476  0.0  0.0   1748      0 pts/1 SW   09:35  0:00 [bash]
jason     2610  0.0  0.0   2508    864 pts/0 R    11:41  0:00 ps xauww

The memory values are a significantly smaller percentage for the threads than they were originally. Shane had me set the "Target Time Between Accesses" to 3 seconds. I have futzed with it and now have it at 30 seconds. Also, the "Target Size" was at 20,000 objects and now I have it at 10,000.
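One caveat when reading the `ps` output above: on a Linux 2.2-era kernel with LinuxThreads, each thread of the Zope process appears as its own entry, but all of them share a single address space, so the 216 MB RSS is one allocation reported twelve times, not twelve separate ones. A quick sketch of how to confirm this from a pasted excerpt of the output (the excerpt lines are copied from the listing above):

```shell
# The /Zope/bin/python entries all report identical VSZ/RSS because they
# are threads sharing one address space; their RSS must not be summed.
# Count the distinct RSS values (field 6) in an excerpt of the ps output:
unique_rss=$(printf '%s\n' \
  "zope 1996 0.0 22.2 220152 216340" \
  "zope 2001 1.4 22.2 220152 216340" \
  "zope 2008 1.1 22.2 220152 216340" \
  | awk '{print $6}' | sort -u | wc -l)
echo "$unique_rss"   # 1 distinct RSS value across the thread entries
```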
A "free" would be helpful too.
             total       used       free     shared    buffers     cached
Mem:        971452     966896       4556       2448     627104      92608
-/+ buffers/cache:      247184     724268
Swap:       128480       6732     121748
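Note that the tiny "free" figure on the Mem line is misleading: most of the "used" memory is the kernel's reclaimable buffer and page cache, which is why `free` also prints the "-/+ buffers/cache" line. A quick sanity check, using the numbers copied from the output above:

```shell
# Real available memory = free + buffers + cached (all in KB).
# Values copied from the `free` output above.
free_kb=4556
buffers_kb=627104
cached_kb=92608
available_kb=$((free_kb + buffers_kb + cached_kb))
echo "$available_kb"   # 724268, matching the "-/+ buffers/cache" free column
```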
Under linux, shared memory is amongst the last to be scavenged in low memory conditions. I'm curious to see if something is allocating that memory as shared or what.
You might also try an "ipcs -a" command to make sure it's not SYSV type shared memory (which can stick around even after a process exits.)
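If the `ipcs` listing does turn up segments after the process exits, each stale segment can be removed by id with `ipcrm`. A sketch of picking out orphaned segments (those with `nattch` of 0); the sample line below is hypothetical, not taken from this machine:

```shell
# Hypothetical ipcs -m data line: key shmid owner perms bytes nattch.
# A segment with nattch == 0 after its owner exited is orphaned and
# could be cleaned up with: ipcrm -m <shmid>
sample='0x00000000 123456 zope 600 220152 0'
orphans=$(printf '%s\n' "$sample" | awk '$6 == 0 {print $2}')
echo "$orphans"   # 123456 -- the shmid to pass to ipcrm -m
```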
------ Shared Memory Segments --------
key        shmid      owner      perms      bytes      nattch     status

------ Semaphore Arrays --------
key        semid      owner      perms      nsems      status

------ Message Queues --------
key        msqid      owner      perms      used-bytes   messages

Things are moving pretty speedily since I adjusted the setcheckinterval. Thanks Matt!! I'll send the output of these commands after a Zope exit at the end of our intranet business here today. Just think, with ZEO I won't have to wait until the close of business ('cause the web is never closed :)

Thanks again,

--
Jason Spisak
444@hiretechs.com