[ZODB-Dev] ZEO Limits
Toby Dickenson
tdickenson@geminidataloggers.com
Fri, 21 Dec 2001 13:20:52 +0000
On Fri, 21 Dec 2001 10:50:42 -0200, Fabiano Weimar dos Santos
<fabiano@x3ng.com.br> wrote:
>the project is, in part, described at http://www.redeescolarlivre.rs.gov.br.
>The main objective is to interchange data about learning projects at public
>schools in Rio Grande do Sul, Brasil.
Well, the pictures are nice ;)
>There, we are planning to install a "mega-cluster", with a central ZEO
>Storage Server and 2200 ZEO Client Storages, one at each school around the
>state.
There are some disadvantages to that system... I'm not sure how many
you already know about:
1. the ZEO server needs to trust its ZEO clients. If you care about
security, that means you have 2200 endpoints to lock down.
2. any upgrades to Python, Zope or Python products need to be
replicated across 2200 machines.
3. any write involves sending an invalidation message to every ZEO
client. That is a big load on the ZEO server for each write.
4. when connecting, each ZEO client has to revalidate its cached
objects. That is a big load on the ZEO server each time a client comes up.
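To put a rough number on point 3, here is a back-of-envelope sketch; the write rate and per-message size below are my own illustrative assumptions, not measured ZEO figures:

```python
# Back-of-envelope estimate of invalidation traffic on the central
# ZEO server.  The write rate and per-message size are illustrative
# assumptions, not measured ZEO figures.

clients = 2200               # one ZEO client per school
writes_per_second = 10       # assumed aggregate write rate
bytes_per_message = 100      # assumed size of one invalidation message

# Every committed write is fanned out to every connected client.
messages_per_second = clients * writes_per_second
bytes_per_second = messages_per_second * bytes_per_message

print(messages_per_second)   # 22000 invalidation messages per second
print(bytes_per_second)      # 2200000 bytes per second, before any real data
```

Even at a modest ten writes a second, the fan-out alone keeps the server busy.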
>Do you know anybody with a network installation with this number of
>machines?
No. Do you really need that many Zope servers? If the problem is one
of latency and bandwidth, would it help to have many caches (I like
squid) and fewer Zopes?
If you pursue this approach it may be interesting to consider a
hierarchy of ZEO servers. Each endpoint does not connect to the
central ZEO server, but rather to a regional hub ZEO server shared by
perhaps 100 other endpoints. That hub's storage is itself a ZEO client
(with a big cache) connecting to the master ZEO server (shared by 22
hubs). I've not *used* this configuration personally, but I have
thought about it, and it looks like it should be more scalable.
many ZOPE
  ClientStorage -------> hub ZEO Server
                           ClientStorage ----------> master ZEO Server
                                                       FileStorage
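For what it's worth, a hub along those lines might be wired up roughly like this; the host name, ports and cache size are made-up examples, and I haven't tested this exact setup:

```python
# Sketch of a regional hub: serve ZEO to the ~100 schools in this
# region, while the hub's own backing storage is itself a ZEO client
# of the central master server.  Host name, ports and cache size are
# illustrative assumptions, not a tested configuration.
from ZEO.ClientStorage import ClientStorage
from ZEO.StorageServer import StorageServer

# Connect upstream to the master ZEO server, with a big local cache.
upstream = ClientStorage(('master.example.org', 8100),
                         cache_size=200 * 1024 * 1024)

# Re-export that storage to the schools in this region.
server = StorageServer(('', 8100), {'1': upstream})
```

Each school's Zope would then point its ClientStorage at its regional hub rather than at the master.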
Toby Dickenson
tdickenson@geminidataloggers.com