Hi, if I get the theory right, it worked as expected: memory usage cannot be much more than the resulting Data.fs, which we would expect to be no more than 360MB. That is, during the packing process, ZODB goes from the top to the bottom of the Data.fs, locating each object and keeping it, say, in a list. Every further occurrence of an object lets ZODB skip over it, because the most recent version of that object must have appeared closer to the top and is in the list already. HTH Tino Wildenhain --On Friday, 15. November 2002 00:40 -0800 Howard Hansen <howardahansen@yahoo.com> wrote:
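The scan described above can be sketched in a few lines of Python. This is only an illustration of the idea, not ZODB's actual pack implementation: the record list, oids, and the `pack_records` helper are all hypothetical. Walking the records newest-first, the first time an oid appears is its most recent revision; every later (older) occurrence is skipped.

```python
# Hypothetical sketch of the pack idea described above: keep only the
# most recent revision of each object. Not ZODB's real implementation.
def pack_records(records):
    """records: list of (oid, data) pairs in file order (oldest first).
    Returns the surviving records, preserving their file order."""
    seen = set()   # oids we have already kept a revision for
    kept = []
    # Walk newest-to-oldest; the first occurrence of an oid is its
    # most recent revision, so keep it and skip all older copies.
    for oid, data in reversed(records):
        if oid not in seen:
            seen.add(oid)
            kept.append((oid, data))
    kept.reverse()  # restore original file order
    return kept

history = [("a", 1), ("b", 1), ("a", 2), ("c", 1), ("a", 3)]
print(pack_records(history))  # [('b', 1), ('c', 1), ('a', 3)]
```

Note that the `seen` set grows with the number of *distinct* objects, not with the raw file size, which is consistent with the memory behaviour discussed below: a heavily bloated Data.fs full of superseded revisions packs with relatively little RAM.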
Well, I've got 128GB of RAM on the server, so it wouldn't matter.
Joking!
It worked just fine. I watched the start of the process in top. It ate less CPU than I expected and didn't appreciably affect RAM usage. So, from empirical evidence, I'd say that if packing uses RAM in proportion to the db size, it's only a few KB per GB.
YMMV, though. My Data.fs was filled with an N-squared pile of transactions, each superseding its predecessor, so it had a very low signal-to-noise ratio. I don't know how it would have worked if it were clearing 350MB of transactions out of a 33GB database.
Howard Hansen http://howardsmusings.com
--- sean.upton@uniontrib.com wrote:
Hmm... I heard somewhere that packing a FileStorage used memory proportional to the storage size. I would consider posting to the ZODB-Dev list for some more pointers on how to get out of this.
Sean