[Zope] ZODB scalability in large Zope deployments

Toby Dickenson tdickenson at devmail.geminidataloggers.co.uk
Sat Oct 23 12:52:01 EDT 2004


On Saturday 23 October 2004 07:37, Jeff Rodriguez wrote:
> I'm planning on using Plone and Zope for our company intranet. If 
> there's one thing I've learned working at my company it's that we will 
> push whatever the product is to its absolute limit and then ask for more.
> 
> Which makes me wonder, how does ZODB scale in massive deployments? I'm 
> not really concerned with the number of hits so much as ZODB's ability 
> to handle:
>    1. Large quantities of large objects
>    2. Small quantities of very large objects.

ZODB's memory management policies currently ignore object size, so it is 
normal to split up very large objects so that ZODB sees them as many objects 
of normal size. This requires explicit application code.

For example, Zope's File object, which represents a downloadable file, splits 
the file body into 64k chunks, storing each chunk in a separate ZODB object.
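As a rough sketch of the idea (this is not Zope's actual implementation, and the function names are made up for illustration), chunking just means slicing the body into fixed-size pieces, each of which ZODB would store and cache as an ordinary-sized object:

```python
CHUNK_SIZE = 64 * 1024  # 64k, the chunk size Zope's File object uses

def split_into_chunks(data, chunk_size=CHUNK_SIZE):
    """Split a large byte string into normal-sized pieces. In ZODB each
    piece would be its own persistent object, so the cache can load and
    evict chunks individually instead of the whole body at once."""
    return [data[i:i + chunk_size] for i in range(0, len(data), chunk_size)]

def join_chunks(chunks):
    """Reassemble the original body, e.g. when serving a download."""
    return b"".join(chunks)

body = b"x" * (200 * 1024)           # a 200k file body
chunks = split_into_chunks(body)
assert len(chunks) == 4              # three 64k chunks plus one 8k tail
assert join_chunks(chunks) == body
```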

>    3. Large quantities of small objects.
> 
> I can really foresee millions of objects with sizes ranging from a couple 
> kilobytes to hundreds of megabytes. 

Let's say your *average* application object is a megabyte. That's up to a 
terabyte of data. If chunked into 64k, that is 16 million ZODB objects. That's 
way beyond my experience.
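A quick back-of-envelope check of those figures (assuming a binary terabyte and the 64k chunk size mentioned above):

```python
# One terabyte of application data, split into 64k chunks.
total_bytes = 2 ** 40                 # 1 terabyte (binary)
chunk_size = 64 * 1024                # 64k, as used by Zope's File object
n_objects = total_bytes // chunk_size
print(n_objects)                      # 16777216 -- roughly 16 million ZODB objects
```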

-- 
Toby Dickenson

