On Wed, 2010-01-06 at 03:57 -0500, Tres Seaver wrote:
Gaute Amundsen wrote:
Hi
What's the difference between the two cache-size settings here? I've not been able to find a good description to guide me in choosing the right values.
<zodb_db main>
    mount-point /
    cache-size 10000            # 1.
    <zeoclient>
      server foo:9080
      storage main
      name zeostorage
      var $INSTANCE/var
      cache-size 20MB           # 2.
    </zeoclient>
</zodb_db>
As far as I've been able to determine, the second one (2.) refers to a separate cache at the ZEO connection level.
The ZEO client cache is a disk-based cache, separate from the RAM-based cache used by any ZODB connection. It doesn't use RAM, per se, except for an in-memory index of OID -> (layer, offset).
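Roughly, the two directives map onto the ZODB / ZEO APIs like this. This is only a sketch using the values from your config above; the var path is made up, and keyword names may differ a little between ZEO releases:

    from ZEO.ClientStorage import ClientStorage
    from ZODB import DB

    # 2. the ZEO client cache: disk-based, kept under the instance's var
    #    directory, shared by every connection this appserver opens
    storage = ClientStorage(('foo', 9080), storage='main', name='zeostorage',
                            var='/path/to/instance/var',   # hypothetical path
                            cache_size=20 * 1024 * 1024)   # "cache-size 20MB"

    # 1. the connection (pickle) cache: in-RAM, holds live objects,
    #    one per ZODB connection, i.e. one per worker thread
    db = DB(storage, cache_size=10000)                     # "cache-size 10000"
    conn = db.open()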
What would happen if either is set too small or too large?
Running 9 ZEO clients (2.7.9) like this on a frontend with 4G RAM, varnish, apache, and haproxy. Teetering on the edge of too much swapping. Needing to cut down..
Assuming you are running with the default number of "worker threads" (4): 10000 objects in the cache * 4 active connections per appserver * 9 appservers means you are caching 360,000 objects in RAM. If they are putting you into swap, then you are averaging something like 10 KB per object, which is pretty big.
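Back-of-the-envelope, that works out like this (a sketch that ignores whatever varnish, apache, and haproxy themselves use on the box):

    # per-connection cache * connections (threads) per appserver * appservers
    cached_objects = 10000 * 4 * 9           # = 360,000 objects
    ram = 4 * 1024 ** 3                      # 4 GB on the frontend
    print(ram / cached_objects / 1024.0)     # ~11.6 KB/object if the caches alone filled RAM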
Sorry, forgot to mention: 1 thread per appserver. Otherwise there's not much use in haproxy's load balancing, we figured.

Memory usage as reported by munin is avg. 160M, max ~220M, backend loads/stores are about 1000/10 a minute avg., and iowait is at 45% avg. today. Committed memory 3.7G avg.

This little one-liner for summing VSZ:
    ps auxf | grep zope | awk '{total+=$5} END{print total}'
reports:
    1918552 for zope
    1489768 for varnish
    2270732 for www-data
and for RSS:
    1375908 zope
    177200 varnish
    374624 www-data

In addition to the 9, that includes one appserver with 6 threads/60000 objects and one with 1 thread/20000, for developers and robots respectively.
Would it be better to spend all the memory on one? Is there a good way to find a balance?
I think I might cut down on the number of appservers on that host, if it truly is hitting swap. Or trim the number of threads per appserver down from 4 to 2 or 3.
Problem is we run out of them way too often now, and get 503's from haproxy. I just added another 3 in fact.. Perhaps I'll try with 6 tomorrow.. Thanks :) Gaute