At 02:10 PM 9/11/99 -0400, A.M. Kuchling wrote:
> I'm experimenting with using ZODB3 to store a tree of objects, but don't understand what I'm doing. My expectation is that the amount of memory used should be bounded, because objects should be flushed from the cache when they haven't been accessed in a while. Instead, the Python process grows continually.
This is to be expected: FileStorage objects keep an in-memory index mapping persistent IDs to file seek positions. Even though the objects themselves are flushed from the cache, the index grows with each commit of newly created objects. Memory consumption will therefore grow by at least 32 bytes per object, and probably more, since there is overhead for the mapping object itself when it resizes as it grows, and it may cause some memory fragmentation as well.

This memory footprint will never shrink by much (unless objects are deleted and the DB packed), because any process opening the FileStorage builds up the exact same index on startup. If you plan on having a million-object DB, plan on having 32-64M of RAM available just for OID indexing. It is probably possible to cut this overhead by having FileStorage use a custom-built mapping object written in C specifically designed for FileStorage's needs, but I have not tried anything like this yet.
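The effect is easy to demonstrate with a plain Python dict standing in for FileStorage's internal index (the real index structure differs; `record_commit` and the record sizes below are purely illustrative):

```python
import sys

# Toy stand-in for FileStorage's in-memory index: maps a persistent
# object ID (OID) to the byte offset of that object's current record
# in the Data.fs file.
index = {}

def record_commit(oid, file_pos):
    # Every commit of a newly created object adds one entry that never
    # goes away until objects are deleted and the database is packed.
    index[oid] = file_pos

baseline = sys.getsizeof(index)
for oid in range(100_000):
    record_commit(oid, oid * 64)  # hypothetical 64-byte records

grown = sys.getsizeof(index)
per_entry = (grown - baseline) / len(index)
print(f"dict container overhead alone: ~{per_entry:.0f} bytes/entry")
```

Note that `sys.getsizeof` measures only the dict's own table; the key and value objects it references add further per-entry cost, which is why the real figure is a multiple of this number.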
-
Phillip J. Eby