[ZODB-Dev] Cache Query (why doesn't RAM usage ever drop?)

Chris Withers chrisw@nipltd.com
Wed, 30 Oct 2002 14:00:33 +0000


Toby Dickenson wrote:
>>I just hit the help button:
>>http://x/HelpSys?help_url=/Control_Panel/Products/OFSP/Help/Database-Management_Database.stx
> 
> I don't see anything wrong with the 2.6 version????

Hmmm, I'm actually using the Alpha here; did you fix it up after that?

> A malicious script? Yes. I think this is a bogus argument for a change to 
> ZODB; a malicious script could also bring the server down by creating large 
> numbers of non-persistent objects.

Not malicious, consider this:

for brain in context.some_catalog(an_index='fish'):
    obj = brain.getObject()       # wakes the real object into the cache
    obj.doSomeMaintenance()

...now if an_index goes walkies, the catalog ignores the unknown query key and 
returns every record, so you've just loaded all the objects you've cataloged 
into memory by mistake.

> A well-intentioned script? You just need to call the garbage collector 
> intermittently (plus use subtransactions if you are modifying large numbers 
> of objects) and everything will be happy.

Can you do either of those from a Script (Python)?
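
For what it's worth, here's my understanding of what you mean, as it would 
look from trusted code (an External Method or Product method), since 
restricted code can't reach underscore attributes like _p_jar. A sketch only; 
the batch size and method names are invented:

def do_maintenance(self):
    connection = self._p_jar             # this object's ZODB connection
    count = 0
    for brain in self.some_catalog(an_index='fish'):
        obj = brain.getObject()
        obj.doSomeMaintenance()
        count = count + 1
        if count % 100 == 0:
            # flush changes in a subtransaction, then let the cache
            # deactivate anything it no longer needs in memory
            get_transaction().commit(1)
            connection.cacheMinimize()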

> The problem is not the hardness of the limit, but rather that the limit is 
> applied at the right time. Any automatic enforcement mid-transaction is 
> liable to cause problems (for example, think about _v_ attributes)

How would raising a MemoryError (as happens when you run out of RAM for real) 
affect these?
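
To check I understand the _v_ problem: volatile attributes are never stored 
and are silently dropped whenever an object is ghosted out of the cache, so 
any mid-transaction enforcement can pull them out from under running code. A 
sketch (Document, its source attribute and expensive_parse are invented):

from Persistence import Persistent

class Document(Persistent):

    def parsed(self):
        # _v_ attributes are volatile: not persisted, and discarded
        # whenever this object is deactivated. If an enforcer ghosts
        # the object mid-transaction, code that expected _v_parsed to
        # survive until commit loses it.
        if getattr(self, '_v_parsed', None) is None:
            self._v_parsed = expensive_parse(self.source)
        return self._v_parsed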

> If it is using Zope's 'File' object, then the 500k block-o-bytes will have been 
> split into eight chunks of up to 64k, and each chunk stored in its own ZODB 
> object.

Oh, cool, didn't know that :-)
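
If I've got that right, the picture is something like this linked list of 
chunk objects (a sketch of the idea, not the actual OFS.Image code; the 64k 
matches your description):

from Persistence import Persistent

CHUNK_SIZE = 1 << 16  # 64k per chunk

class Chunk(Persistent):
    # One block of bytes, stored as its own ZODB object, so the cache
    # can load and evict each piece of a big File independently.
    def __init__(self, data, next=None):
        self.data = data
        self.next = next              # the following Chunk, or None

def chunkify(data):
    # Build the list back-to-front so each new head can point at
    # the chunk that follows it.
    if not data:
        return None
    head = None
    last_start = (len(data) - 1) / CHUNK_SIZE * CHUNK_SIZE
    for start in range(last_start, -1, -CHUNK_SIZE):
        head = Chunk(data[start:start + CHUNK_SIZE], head)
    return head

So a 500k string comes out as seven full 64k chunks plus a 52k tail: eight 
ZODB objects in all.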

> I can suggest a change to your indexing loop that will make this happen 
> automatically. Would that help? ;-)

What would the change be?

cheers,

Chris