[ZWeb] ZCatalog Issues

Shane Hathaway shane at hathawaymix.org
Sat Jul 10 15:08:22 EDT 2004


On Friday 09 July 2004 16:43, Michael Bernstein wrote:
> 2. Attempting to re-index the ZCatalog (by clicking the 'Update Catalog'
> button in the 'Advanced' tab) causes a timeout or an error message.
> There are over 179k objects, but it is my understanding that ZCatalog
> should scale to that many objects fairly easily. I am given to
> understand that Casey and Shane have been working on fixing this issue.

Well, here is what I've learned so far from analyzing the zope.org catalog.  
Maybe this will help Casey.  Maybe others can help, too.

I exported the zope.org catalog as a .zexp and wrote a utility that roughly 
analyzes .zexp files.  The export took several hours, but since an export 
operation does not unpickle the objects, it finished successfully.
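
For anyone who wants to repeat the analysis, a minimal scanner along these 
lines should do it (a sketch in modern Python, assuming the standard ZODB 
export layout: a 4-byte 'ZEXP' magic, then one record per object consisting 
of an 8-byte oid, an 8-byte big-endian pickle length, and the pickle data, 
terminated by sixteen 0xFF bytes; the buckets match the breakdown below):

  import struct
  import sys

  END_MARKER = b'\xff' * 16
  BOUNDS = (64, 256, 1024, 4096, 16384, 65536, 131072, 1048576, 2147483648)

  def scan(path):
      # Tally record sizes without unpickling anything.
      counts = [0] * len(BOUNDS)
      f = open(path, 'rb')
      if f.read(4) != b'ZEXP':
          raise ValueError('%s is not a ZODB export file' % path)
      while True:
          header = f.read(16)
          if len(header) < 16 or header == END_MARKER:
              break
          (length,) = struct.unpack('>Q', header[8:])
          f.seek(length, 1)  # skip the pickle body itself
          for i, bound in enumerate(BOUNDS):
              if length < bound:
                  counts[i] += 1
                  break
      f.close()
      lo = 0
      for bound, count in zip(BOUNDS, counts):
          print('%d objects %d-%d bytes' % (count, lo, bound - 1))
          lo = bound

  if __name__ == '__main__':
      scan(sys.argv[1])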

The total size of the .zexp is 340 MB and it contains 572,231 objects.  Here 
is a breakdown of the sizes of the objects:

214387 objects        0 -         63 bytes
115033 objects       64 -        255 bytes
202881 objects      256 -       1023 bytes
 30591 objects     1024 -       4095 bytes
  5700 objects     4096 -      16383 bytes
  3434 objects    16384 -      65535 bytes
   194 objects    65536 -     131071 bytes
    11 objects   131072 -    1048575 bytes
     0 objects  1048576 - 2147483647 bytes

I decided to first look at the largest objects in detail.  I was happy to see 
there are no 1 MB objects, but there are two 500K objects and nine objects 
between 128K and 200K in size.  Each of those 11 objects is either an 
IOBucket, an IOBTree, or an IISet.  At least three of them unintentionally 
contain large, fully-rendered HTML pages (presumably because some indexed 
object generates HTML for the given attributes).
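
Finding the class of a record likewise needs no unpickling: ZODB pickles of 
this vintage use pickle protocol 1, where the class reference appears as a 
GLOBAL opcode near the front, and the standard pickletools module can 
disassemble a pickle without executing it.  A sketch (the helper name is 
mine):

  import pickletools

  def guess_class(pickle_bytes):
      # The first GLOBAL opcode names the class; its argument looks
      # like 'BTrees.IOBTree IOBucket'.  genops() only disassembles,
      # it never runs the pickle.
      for opcode, arg, pos in pickletools.genops(pickle_bytes):
          if opcode.name == 'GLOBAL':
              return arg.replace(' ', '.')
          if opcode.name == 'STOP':
              break
      return None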

Note that zope.org currently has its per-connection database cache size set to 
23,000 objects.  The catalog cannot fit in that space, and even if it did, 
we'd run out of memory.  The box has 2 GB, and between two app servers, there 
are eight connections.  Each connection maintains its own copy of the 
database.  340 MB is probably a low estimate of the catalog's full resident 
unpickled size, but I'll use it anyway: keeping this catalog in memory would 
take at least 340 MB * 8 = 2.7 GB.  That's also ignoring the size of other 
objects loaded from the database connections.
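
Spelled out (all figures from above):

  export_mb = 340    # .zexp size, a floor on the unpickled resident size
  connections = 8    # two app servers, eight ZODB connections in total
  print('%.1f GB' % (export_mb * connections / 1024.0))  # 2.7 GB vs. 2 GB RAM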

So should we pile RAM into the box and boost the cache size to 600,000?  I 
think that would be unwise.  I've seen evidence that the time spent managing 
expiration in the ZODB cache rises exponentially with the number of objects 
in the cache.  Flushing a cache containing 20,000 objects can take minutes, 
and flushing a cache containing 60,000 objects can take an hour.  Also, it's 
a bit difficult to work on this because it's all in C.
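
If anyone wants to measure this, a probe along these lines is the kind of 
measurement I mean (a sketch against a recent ZODB; MappingStorage stands in 
for a real database, so absolute times will certainly differ):

  import time
  import transaction
  from persistent import Persistent
  from ZODB import DB
  from ZODB.MappingStorage import MappingStorage

  class Leaf(Persistent):
      pass

  def flush_time(n):
      # Fill one connection cache with n live objects, then time a flush.
      db = DB(MappingStorage(), cache_size=n)
      conn = db.open()
      root = conn.root()
      for i in range(n):
          root[i] = Leaf()
      transaction.commit()
      for i in range(n):
          root[i]._p_activate()  # make sure each object is live, not a ghost
      start = time.time()
      conn.cacheMinimize()       # ghost everything, like a Zope cache flush
      elapsed = time.time() - start
      conn.close()
      db.close()
      return elapsed

  for n in (20000, 60000):
      print('%d objects: %.2f s' % (n, flush_time(n)))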

It seems like this catalog simply contains too many objects.  A third of 
them are very small (less than 64 bytes including the class name); I wonder 
if we
could combine some of these.  I think I'll next try to find out how many of 
the objects are in text indexes and lexicons.
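
One way to count them without unpickling: ZODB's referencesf() helper 
extracts just the persistent references from a raw pickle, which is enough 
to build a reachability graph over the export.  A sketch (modern import 
path; root_oid would be the 8-byte oid of an index or lexicon, read out of 
the catalog's own record):

  import struct
  from collections import deque
  from ZODB.serialize import referencesf

  def subtree_size(path, root_oid):
      # Map each oid in the export to the oids its pickle references.
      refs = {}
      f = open(path, 'rb')
      if f.read(4) != b'ZEXP':
          raise ValueError('not a ZODB export file')
      while True:
          header = f.read(16)
          if len(header) < 16 or header == b'\xff' * 16:
              break
          (length,) = struct.unpack('>Q', header[8:])
          refs[header[:8]] = referencesf(f.read(length))
      f.close()
      # Breadth-first walk from the index's root object.
      seen = set()
      queue = deque([root_oid])
      while queue:
          oid = queue.popleft()
          if oid in seen or oid not in refs:
              continue
          seen.add(oid)
          queue.extend(refs[oid])
      return len(seen)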

There is a bit of good news: zope.org is not consuming gobs of RAM due to a 
memory leak.  I wrote a small Python C extension that uses mallinfo() to 
reveal how much heap a Python process is actually using for objects, which is 
often much smaller than the process size as the operating system sees it.  
Whenever I flush the caches in Zope, its heap usage shrinks to less than 10% 
of its process size.  That means most of the memory is consumed by 
reclaimable ZODB objects.  (I'll post the C extension on the web if anyone is 
interested.)
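
In the meantime, the same counters can also be read with the ctypes package 
(a glibc-specific sketch; note that the struct fields are plain C ints, so 
they can wrap on very large heaps):

  import ctypes
  import ctypes.util

  class Mallinfo(ctypes.Structure):
      # The ten int fields of glibc's struct mallinfo.
      _fields_ = [(name, ctypes.c_int) for name in (
          'arena', 'ordblks', 'smblks', 'hblks', 'hblkhd',
          'usmblks', 'fsmblks', 'uordblks', 'fordblks', 'keepcost')]

  libc = ctypes.CDLL(ctypes.util.find_library('c'))
  libc.mallinfo.restype = Mallinfo

  def heap_in_use():
      # Bytes handed out by malloc: ordinary blocks plus mmap'd blocks.
      info = libc.mallinfo()
      return info.uordblks + info.hblkhd

  print('%.1f MB of heap in use' % (heap_in_use() / (1024.0 * 1024.0)))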

I'll post more information as I learn more.

Shane

