Santi Camps wrote:
Yes, I understand how the ZODB cache works with threads. The behaviour I didn't understand is why, once the transaction is finished, the ZODB cache decreases in number of objects but the process RAM does not.
This is the Python behaviour I was talking about: the interpreter generally holds on to freed memory rather than returning it to the OS. If you want it fixed, bug Guido ;-)
I'm already using brains. I've tried your suggestion of subtransactions, but there is no change (I think that's expected, since I'm not changing any persistent objects). I'm also not sure how a _p_jar.sync() would improve anything.
...you need to force the cache to garbage collect before too many objects accumulate in memory. The catalog does this using subtransactions. There's also a cache minimize call somewhere, but you're probably better off asking on zodb-dev@zope.org about where that is...
I think the best solution will be to run the report in a new process: spawn a ZEO client to do the work and wait for it to finish.
...yeah, and put the ZEO client on a different machine :-) cheers, Chris -- Simplistix - Content Management, Zope & Python Consulting - http://www.simplistix.co.uk