zeo cache-size tuning - number of obj. vs MB
Hi,

What's the difference between the two cache-size settings here? I've not been able to find a good description to guide me in finding the right values.

    <zodb_db main>
        mount-point /
        cache-size 10000        # 1.
        <zeoclient>
            server foo:9080
            storage main
            name zeostorage
            var $INSTANCE/var
            cache-size 20MB     # 2.
        </zeoclient>
    </zodb_db>

As far as I've been able to determine, the second one (2.) refers to a separate cache at the ZEO connection level.

We are running 9 ZEO clients (2.7.9) like this on a frontend with 4 GB RAM, alongside Varnish, Apache, and HAProxy, and we're teetering on the edge of too much swapping. We need to cut down.

Would it be better to spend all the memory on 1? Is there a good way to find a balance?

Regards,
Gaute Amundsen
Gaute Amundsen wrote:
> Hi
>
> What's the difference between the two cache-size settings here? I've not been able to find a good description to guide me in finding the right values.
>
>     <zodb_db main>
>         mount-point /
>         cache-size 10000        # 1.
>         <zeoclient>
>             server foo:9080
>             storage main
>             name zeostorage
>             var $INSTANCE/var
>             cache-size 20MB     # 2.
>         </zeoclient>
>     </zodb_db>
>
> As far as I've been able to determine, the second one (2.) refers to a separate cache at the ZEO connection level.
The ZEO client cache is a disk-based cache, separate from the RAM-based cache used by any ZODB connection. It doesn't use RAM, per se, except for an in-memory index of OID -> (layer, offset).
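To get a feel for how small that in-memory index is relative to the disk cache itself, here is a back-of-the-envelope sketch. The average pickle size and the per-entry overhead are illustrative assumptions (not measured values from any real deployment):

```python
# Rough estimate of the RAM cost of the ZEO client cache's in-memory
# index for a 20 MB disk cache. avg_object_size and
# bytes_per_index_entry are assumptions for illustration only.
cache_size_bytes = 20 * 1024 * 1024    # the "cache-size 20MB" setting
avg_object_size = 8 * 1024             # assumed average pickled object size
bytes_per_index_entry = 100            # assumed overhead per OID -> (layer, offset) entry

entries = cache_size_bytes // avg_object_size
index_ram = entries * bytes_per_index_entry
print(entries, index_ram)   # 2560 entries, 256000 bytes (~250 KB)
```

Under these assumptions the index costs on the order of a few hundred kilobytes, so the disk cache setting is effectively free in RAM terms compared with the connection-level object cache.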
> We are running 9 ZEO clients (2.7.9) like this on a frontend with 4 GB RAM, alongside Varnish, Apache, and HAProxy, and we're teetering on the edge of too much swapping. We need to cut down.
Assuming you are running with the default number of "worker threads" (4): 10000 objects in the cache * 4 active connections per appserver * 9 appservers means you are caching 360,000 objects in RAM. If they are putting you into swap, then you are averaging something like 10 KB per object, which is pretty big.
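The estimate above works out as follows (treating the whole 4 GB as an upper bound on what the object caches could be consuming):

```python
# Reproducing the back-of-the-envelope estimate: total objects cached
# in RAM across all appservers, and the implied average object size
# if those caches alone filled the machine's 4 GB.
objects_per_cache = 10_000      # the zodb_db "cache-size 10000" setting
threads_per_appserver = 4       # default number of worker threads
appservers = 9

total_objects = objects_per_cache * threads_per_appserver * appservers
ram_bytes = 4 * 1024**3         # 4 GB, an upper bound

avg_object_size = ram_bytes / total_objects
print(total_objects)            # 360000
print(round(avg_object_size))   # ~11930 bytes, i.e. "something like 10 KB"
```

Each ZODB connection keeps its own object cache, which is why the per-thread count multiplies through.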
> Would it be better to spend all the memory on 1? Is there a good way to find a balance?
I think I might cut down on the number of appservers on that host, if it truly is hitting swap. Or trim the number of threads per appserver down from 4 to 2 or 3.

Tres.
--
Tres Seaver          +1 540-429-0999          tseaver@palladion.com
Palladion Software   "Excellence by Design"    http://palladion.com
On Wed, 2010-01-06 at 03:57 -0500, Tres Seaver wrote:
> Gaute Amundsen wrote:
>> Hi
>>
>> What's the difference between the two cache-size settings here? I've not been able to find a good description to guide me in finding the right values.
>>
>>     <zodb_db main>
>>         mount-point /
>>         cache-size 10000        # 1.
>>         <zeoclient>
>>             server foo:9080
>>             storage main
>>             name zeostorage
>>             var $INSTANCE/var
>>             cache-size 20MB     # 2.
>>         </zeoclient>
>>     </zodb_db>
>>
>> As far as I've been able to determine, the second one (2.) refers to a separate cache at the ZEO connection level.
> The ZEO client cache is a disk-based cache, separate from the RAM-based cache used by any ZODB connection. It doesn't use RAM, per se, except for an in-memory index of OID -> (layer, offset).
What would happen if it is set too small or too large?
>> We are running 9 ZEO clients (2.7.9) like this on a frontend with 4 GB RAM, alongside Varnish, Apache, and HAProxy, and we're teetering on the edge of too much swapping. We need to cut down.
> Assuming you are running with the default number of "worker threads" (4): 10000 objects in the cache * 4 active connections per appserver * 9 appservers means you are caching 360,000 objects in RAM. If they are putting you into swap, then you are averaging something like 10 KB per object, which is pretty big.
Sorry, forgot to mention: 1 thread per appserver. Otherwise there's not much use in HAProxy's load balancing, we figured.

Memory usage as reported by Munin is avg. 160 MB, max ~220 MB. Backend loads/stores are about 1000/10 a minute avg., iowait is at 45% avg. today, and committed memory is 3.7 GB avg.

This little one-liner for summing VSZ:

    ps auxf | grep zope | awk '{total+=$5} END{print total}'

reports: 1918552 for zope, 1489768 for varnish, 2270732 for www-data.

And for RSS: 1375908 for zope, 177200 for varnish, 374624 for www-data.

In addition to the 9, that includes one appserver with 6 threads/60000 objects and one with 1 thread/20000 objects, for developers and robots respectively.
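As an aside, a slightly more robust variant of the one-liner above (a sketch, adjust the process name to taste): summing column $6 (RSS) measures resident memory, which is what matters for swap pressure, whereas $5 (VSZ) overstates real usage because it includes shared and untouched pages. The bracket trick also keeps grep from matching its own command line.

```shell
# Sum RSS (resident set size, in KB on most Linux ps builds) of all
# zope processes; grep '[z]ope' never matches the grep process itself.
ps aux | grep '[z]ope' | awk '{total+=$6} END{print total+0}'
```

The `total+0` at the end forces awk to print `0` rather than an empty line when no processes match.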
>> Would it be better to spend all the memory on 1? Is there a good way to find a balance?
> I think I might cut down on the number of appservers on that host, if it truly is hitting swap. Or trim the number of threads per appserver down from 4 to 2 or 3.
Problem is we run out of them way too often now, and get 503s from HAProxy. I just added another 3, in fact. Perhaps I'll try with 6 tomorrow.

Thanks :)
Gaute
Gaute Amundsen wrote:
> On Wed, 2010-01-06 at 03:57 -0500, Tres Seaver wrote:
>> The ZEO client cache is a disk-based cache, separate from the RAM-based cache used by any ZODB connection. It doesn't use RAM, per se, except for an in-memory index of OID -> (layer, offset).
> What would happen if it is set too small or too large?
If it is too small, your app will have to go across the net too often to fetch an object's state from the ZEO server. If it is too large, then recovering after the client disconnects from the ZEO server can take a very long time.
>> Assuming you are running with the default number of "worker threads" (4): 10000 objects in the cache * 4 active connections per appserver * 9 appservers means you are caching 360,000 objects in RAM. If they are putting you into swap, then you are averaging something like 10 KB per object, which is pretty big.
> Sorry, forgot to mention: 1 thread per appserver. Otherwise there's not much use in HAProxy's load balancing, we figured.
I would probably test that assumption. One rule of thumb I use is one appserver process per CPU, and then tune number of threads / connection cache size until the machine stays just out of swap.
> Memory usage as reported by Munin is avg. 160 MB, max ~220 MB. Backend loads/stores are about 1000/10 a minute avg., iowait is at 45% avg. today, and committed memory is 3.7 GB avg.
> This little one-liner for summing VSZ:
>
>     ps auxf | grep zope | awk '{total+=$5} END{print total}'
>
> reports: 1918552 for zope, 1489768 for varnish, 2270732 for www-data.
> And for RSS: 1375908 for zope, 177200 for varnish, 374624 for www-data.
> In addition to the 9, that includes one appserver with 6 threads/60000 objects and one with 1 thread/20000 objects, for developers and robots respectively.
Nothing jumps out at me: I might look at pushing varnish or apache off onto another machine, maybe.
>> I think I might cut down on the number of appservers on that host, if it truly is hitting swap. Or trim the number of threads per appserver down from 4 to 2 or 3.
> Problem is we run out of them way too often now, and get 503s from HAProxy. I just added another 3, in fact.
> Perhaps I'll try with 6 tomorrow.
If your proxy is timing you out, then maybe you need to look at more hardware, or else spend some time profiling the application and trimming out unnecessary work there.

Tres.
participants (2)
- Gaute Amundsen
- Tres Seaver