Zope Performance with Multiple Mount Points
We've recently moved to a Zope configuration with approximately 30 separate databases mounted at mount points within a main database. Previously, we ran a single database which was approaching 300GB in size. We chose to split the database to reduce the Data.fs size, so we now run 30 separate ZEO servers (on a single machine).

However, after switching to this configuration we've noticed some performance issues. Initially, after a client is restarted and builds out its caches, the site runs very quickly. However, as memory usage grows, performance degrades. If the client gets into swap, it practically dies.

We had 10,000-object caches on each of the databases with 2 threads on the client: 30 x 10,000 x 2 = 600,000 objects in memory. We've since reduced the cache sizes to 1,000, or 30 x 1,000 x 2 = 60,000 objects. This seemed to extend the life of a client between restarts.

Generally, our client machines will hover around 97%-99% memory usage and 90%-100% CPU (on a 2-CPU machine). We still experience periodic performance problems and are looking for any input that might help us address them.

Thanks for your input,

--
Brian Brinegar
Web Services Coordinator
Engineering Computer Network
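[Editor's note: the cache arithmetic above can be sketched as a quick worst-case estimate of objects resident in client RAM; this is plain arithmetic, not tied to any Zope API.]

```python
def cached_objects(databases: int, cache_size: int, threads: int) -> int:
    """Upper bound on ZODB objects held in a single Zope client's caches:
    each mounted database keeps up to cache_size objects per thread."""
    return databases * cache_size * threads

# The two configurations described above:
before = cached_objects(30, 10_000, 2)  # 600,000 objects
after = cached_objects(30, 1_000, 2)    # 60,000 objects
print(before, after)
```

Note this bounds only the object *count*; actual memory use depends on average object size, which varies per site.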
Brian Brinegar wrote:
We've recently moved to a Zope configuration with approximately 30 separate databases mounted at mount points within a main database.
Previously, we ran a single database which was approaching 300GB in size. We chose to split the database to reduce Data.fs size. We now run 30 separate ZEO Servers (on a single machine).
However, after switching to this configuration we've noticed some performance issues.
Initially, after a client is restarted and builds out its caches, the site runs very quickly. However, as memory usage grows, performance degrades. If the client gets into swap, it practically dies.
We had 10,000 object caches on each of the databases with 2 threads on the client. 30 x 10,000 x 2 = 600,000 objects in memory. We've since reduced the cache sizes to 1,000 or 30 x 1,000 x 2 = 60,000 objects. This seemed to extend the life of a client between restarts.
No idea what's running inside your system and inside your Zope - 1,000 objects seems way too small for almost any case. Also, 30 ZEO servers seems kind of extreme. We are running similarly big applications with five or six ZEO servers - not 30.

-aj
Generally, our client machines will hover around 97%-99% memory usage and 90%-100% CPU (on a 2 CPU machine).
We still experience periodic performance problems and are looking for any input that might help us address them.
Thanks for your input,
--
ZOPYX Limited | zopyx group
Charlottenstr. 37/1 | The full-service network for Zope & Plone
D-72070 Tübingen | Produce & Publish
www.zopyx.com | www.produce-and-publish.com
E-Publishing, Python, Zope & Plone development, Consulting
Our goal with this configuration was to limit the ability of a single site to lock the entire database during a large transaction. Are ZEO server transaction locks held per storage or per server? If the locks are per storage rather than per server, I believe we could accomplish our goal with fewer servers. In that case, is there an upper limit on the number of storages a single server can reasonably handle?

Thanks,
-Brian

Andreas Jung wrote:
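[Editor's note: a single ZEO server process can serve several storages, and each storage handles its commits independently, so consolidation along the lines asked about above should be possible; a minimal zeo.conf sketch under that assumption (the address, storage names, and paths are hypothetical):]

```
# zeo.conf -- one ZEO server process exposing two storages
<zeo>
  address 8100
</zeo>

<filestorage site1>
  path /var/zeo/site1/Data.fs
</filestorage>

<filestorage site2>
  path /var/zeo/site2/Data.fs
</filestorage>
```

On the client side, each `<zodb_db>` section would point its `<zeoclient>` at the same `server localhost:8100` while selecting a different `storage` name.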
Brian Brinegar wrote:
We've recently moved to a Zope configuration with approximately 30 separate databases mounted at mount points within a main database.
Previously, we ran a single database which was approaching 300GB in size. We chose to split the database to reduce Data.fs size. We now run 30 separate ZEO Servers (on a single machine).
However, after switching to this configuration we've noticed some performance issues.
Initially, after a client is restarted and builds out its caches, the site runs very quickly. However, as memory usage grows, performance degrades. If the client gets into swap, it practically dies.
We had 10,000 object caches on each of the databases with 2 threads on the client. 30 x 10,000 x 2 = 600,000 objects in memory. We've since reduced the cache sizes to 1,000 or 30 x 1,000 x 2 = 60,000 objects. This seemed to extend the life of a client between restarts.
No idea what's running inside your system and inside your Zope - 1,000 objects seems way too small for almost any case. Also, 30 ZEO servers seems kind of extreme. We are running similarly big applications with five or six ZEO servers - not 30.
-aj
Generally, our client machines will hover around 97%-99% memory usage and 90%-100% CPU (on a 2 CPU machine).
We still experience periodic performance problems and are looking for any input that might help us address them.
Thanks for your input,
-- Brian Brinegar Web Services Coordinator Engineering Computer Network
participants (2):
- Andreas Jung
- Brian Brinegar