On Thu, 13 Apr 2000, Michel Pelletier wrote:
> Only current revisions of objects are ever activated, so this is not
> a good measure. There is generally no correlation between Data.fs size
Michel, I am not seeking a good measure. I am seeking an easily estimated upper limit, given that my Zope accesses no files on the filesystem and has no external components active (DAs, etc.). Including all objects in Data.fs, current revisions or not, makes the estimation easier.
> and memory consumption. Also, the size of an object when serialized
> (pickled in the Data.fs file) does not correlate to the size of the
> object in memory in any straightforward way.
The correlation might not be straightforward, but an upper-limit estimate should be. A cPickle is a binary representation of the instance data plus some extra information declaring types, etc. Ignoring cached objects/data coming from external sources (RDBMSs, etc.), which I don't have, the pickled version of an object should place an approximate upper limit on its RAM usage, unless the object requires a lot of RAM to do its job during activation/utilization (Catalog comes to mind), or, for instance, it has a __setstate__ method that does something like:

    def __setstate__(self, state):
        for i in range(1000):
            a.append(a * 100)

In any case, any activity under the above constraints which increases RAM usage indefinitely is IMO a memory leak.

Pavlos
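P.S. For what it's worth, here is a rough sketch of the kind of comparison I have in mind. It is only an illustration, not Zope code: the Document class is made up, and modern Python's pickle and sys modules stand in for cPickle; sys.getsizeof is shallow, so the in-memory figure is itself only an approximation.

```python
import pickle
import sys

class Document:
    """A toy stand-in for a persistent object; purely for illustration."""
    def __init__(self, title, body):
        self.title = title
        self.body = body

doc = Document("estimate", "x" * 1000)

# Size of the pickled form: roughly what this object would
# contribute to Data.fs for one revision.
pickled_size = len(pickle.dumps(doc))

# Rough in-memory footprint: the instance dict plus its values.
# sys.getsizeof does not recurse, so we sum the attribute values
# by hand; this still ignores sharing and interpreter overhead.
in_memory = sys.getsizeof(doc.__dict__) + sum(
    sys.getsizeof(v) for v in doc.__dict__.values()
)

print("pickled:", pickled_size, "bytes; in memory (approx):", in_memory, "bytes")
```

For a plain object like this, the two numbers are of the same order of magnitude, which is all the upper-limit argument needs; it breaks down exactly in the cases mentioned above, where activation itself allocates a lot of extra RAM.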