[Zope] Memory and Large Zope Page Templates
Santi Camps
scamps at earcon.com
Thu Jan 20 01:52:52 EST 2005
Chris Withers wrote:
> Santi Camps wrote:
>
>> Yes, I understand how the ZODB cache works with threads. The behaviour
>> I didn't understand is why, once the transaction is finished, the ZODB
>> cache decreases in number of objects but the process RAM usage does not.
>
>
> This is the pythonic behaviour I was talking about.
> If you want it fixed, bug Guido ;-)
>
>> I'm already using brains. I've tried your suggestion of
>> subtransactions, but there is no change (I think that's normal, since
>> I'm not changing any persistent objects). I'm not sure how a
>> _p_jar.sync() call could improve anything.
>
>
> ...you need to force the cache to garbage collect before there are too
> many objects in memory. The catalog does this using subtransactions.
> There's also a cache minimize call somewhere, but you're probably
> better off asking on zodb-dev at zope.org about where that is...
>
Hi again,
I'm replying just to put the answer here, so it may be useful to someone else.
Following Chris's instructions, I've looked at the ZCatalog code and found
the way to free memory:
1) Use subtransactions when changing data, adding a
get_transaction().commit(1) call every X steps
2) Explicitly call the garbage collector of the connection every X
steps, forcing the connection to release objects from its cache and
stay within the cache parameters defined in zope.conf. This can be done
with self._p_jar.cacheGC() (where self is a persistent object,
obviously)
In my case, a read-only report, the second method alone is enough to keep
memory usage under control (see the sketch below).
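
For reference, here is a minimal sketch of how the two points above can be
combined in a long-running loop, roughly in the style of the ZCatalog
indexing code. The batch size and the folder/process names are only
illustrative, and it assumes self is a persistent object (so it has a
_p_jar); get_transaction() is the old Zope 2 builtin.

    BATCH_SIZE = 100  # illustrative; tune against the zope.conf cache settings

    def long_running_job(self, folder, process):
        """Walk many persistent objects without letting RAM grow."""
        for i, ob in enumerate(folder.objectValues()):
            process(ob)
            if i % BATCH_SIZE == 0:
                # 1) subtransaction commit: only helps when persistent
                #    objects have actually been changed
                get_transaction().commit(1)
                # 2) trim the connection cache back to the limits defined
                #    in zope.conf; for a read-only report this alone is
                #    enough
                self._p_jar.cacheGC()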
Thanks very much to everybody
Santi Camps