Hi Van,

You're on the right track... nice deduction work! ;-)

The cache code in 2.5.1 and below is not adequate for keeping memory usage in check when lots of database stores happen in sequence. The cache code is much improved in 2.6 (thanks to Toby Dickenson), and it does keep memory usage in check. I suggest using 2.6 (or the Zope trunk) for this reason.

One further hint: in your object loader code, do a commit every "n" records (maybe 1000), and after the commit call "self._p_jar.cacheMinimize()" ("self" can actually be any persistent object; this attempts to gc the pickle cache for that database connection). This will keep memory usage steady.

HTH,
- C

On Tue, 2002-08-13 at 12:49, VanL wrote:
Hello,
We have a script that does a 'database import' into the ZODB. This script iterates through a series of records (currently in text files) and creates corresponding Zope objects. We can do small imports (<100 records) without trouble, but on a large import (1000+ records) the Zope process balloons in size and eventually becomes unresponsive. Sending a SIGHUP to Zope restores normal functionality and performance, and the created objects all seem to work fine. However, we don't want to have to restart Zope every time we do an import.
My suspicion, based on the above evidence, is that the ZODB caching mechanism is to blame for the slowdown. Can anybody confirm this, or offer an alternate explanation?
And, if caching is the culprit, is there a way to programmatically turn off caching for the duration of a script run?
Thanks,
VanL
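For what it's worth, here is a minimal sketch of the commit-every-"n"-records pattern suggested above. The `records`, `store`, `commit`, and `cache_minimize` names are hypothetical placeholders; in actual Zope 2.5/2.6 loader code the commit would be `get_transaction().commit()` and the cache call `obj._p_jar.cacheMinimize()` on any persistent object from the connection you're importing through.

```python
def batched_import(records, store, commit, cache_minimize, batch=1000):
    """Store each record; every `batch` records, commit the transaction
    and ask the connection to gc its pickle cache so memory stays steady.

    `store`, `commit`, and `cache_minimize` are callables standing in for
    the real Zope machinery (object creation, get_transaction().commit(),
    and obj._p_jar.cacheMinimize() respectively).
    """
    count = 0
    for rec in records:
        store(rec)                # create the corresponding Zope object
        count += 1
        if count % batch == 0:
            commit()              # flush pending changes to the storage
            cache_minimize()      # attempt to gc the pickle cache
    commit()                      # final commit for any trailing records
```

The point of committing in batches rather than once at the end is that `cacheMinimize()` can only evict objects whose changes have already been committed, so without the periodic commits the cache (and the process) just keeps growing.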
_______________________________________________
Zope maillist - Zope@zope.org
http://lists.zope.org/mailman/listinfo/zope
** No cross posts or HTML encoding! **
(Related lists -
http://lists.zope.org/mailman/listinfo/zope-announce
http://lists.zope.org/mailman/listinfo/zope-dev )