[ZODB-Dev] Cache Query (why doesn't RAM usage ever drop?)
Toby Dickenson
tdickenson@geminidataloggers.com
Wed, 30 Oct 2002 15:32:17 +0000
On Wednesday 30 October 2002 2:00 pm, Chris Withers wrote:
> Hmmm, I'm actually using the Alpha here, did you fix it up after that?
no. I'll spam you off-list about this.
> > A malicious script? yes. I think this is a bogus argument for a change
> > to ZODB; a malicious script could also bring the server down by creating
> > large numbers of non-persistent objects
>
> Not malicious, consider this:
>
> for brain in context.some_catalog(an_index='fish'):
>     object = brain.getObject()
>     object.doSomeMaintenence()
>
> ...now if an_index goes walkies, you've just loaded all the objects you've
> cataloged into memory by mistake.
Yes. IMO this script *needs*:
1. either batching, as is traditional for a search results page. The search
might be huge, but memory usage will be controlled if the batch size is
reasonable;
2. or a garbage collection tickler as part of the loop. See ZCatalog's
"reindex all" implementation for an example.
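A minimal sketch of option 2, the garbage collection tickler, in plain
Python. The catalog results and the ZODB connection are stood in for by
a range and a stub (real code would call `brain.getObject()` and the
connection's `cacheGC()`), so the names here are illustrative only:

```python
# Sketch of a GC "tickler" inside a long maintenance loop: every
# GC_INTERVAL objects, give the object cache a chance to evict the
# persistent objects we have already finished with.

GC_INTERVAL = 100  # illustrative tickle frequency


class StubConnection:
    """Stands in for a ZODB Connection; just counts cacheGC() calls."""

    def __init__(self):
        self.gc_calls = 0

    def cacheGC(self):
        self.gc_calls += 1


def process_all(brains, connection):
    """Process every result, tickling the cache GC periodically."""
    processed = 0
    for brain in brains:
        # Real code: object = brain.getObject(); object.doSomeMaintenence()
        processed += 1
        if processed % GC_INTERVAL == 0:
            connection.cacheGC()  # let already-processed objects be flushed
    return processed
```

With a batch-sized tickle interval, peak memory stays roughly
proportional to `GC_INTERVAL` rather than to the full result set.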
> Can you do either of those from Script (Python)'s?
Probably not, but I don't do much TTW work.
> > The problem is not the hardness of the limit, but rather that the limit
> > is applied at the right time. Any automatic enforcement mid-transaction
> > is liable to cause problems (for example, think about _v_ attributes)
>
> how would raising a MemoryError (as happens when you run out of RAM for
> real) affect these?
Not a bad idea. This could be done easily and efficiently in Connection.py.
Of course it still doesn't help with the case of memory usage by
non-persistent objects.
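A hedged sketch of what such a hard limit might look like. The class and
attribute names here (`hard_limit`, `_cache_size`, `register`) are
hypothetical stand-ins, not the actual Connection.py internals:

```python
# Hypothetical sketch: raise MemoryError once a connection's object
# cache grows past a hard limit, instead of waiting for the process to
# really exhaust RAM. Names are illustrative, not real ZODB internals.

class CacheLimitedConnection:
    def __init__(self, hard_limit=5000):
        self.hard_limit = hard_limit
        self._cache_size = 0

    def register(self, obj):
        """Called whenever a persistent object is loaded into the cache."""
        self._cache_size += 1
        if self._cache_size > self.hard_limit:
            # Mimic running out of RAM for real, but at a point the
            # application can anticipate and catch cleanly.
            raise MemoryError(
                "object cache exceeded hard limit of %d" % self.hard_limit)
```

The appeal of MemoryError over a custom exception is that scripts which
already guard against genuine memory exhaustion need no new handling.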
> > I can suggest a change to your indexing loop that will make this happen
> > automatically. Would that help? ;-)
>
> What would the change be?
sys.exit(1)
;-)