Re: [Zope] ZODB size limitations?
(Not bragging.) My ZODB currently holds around 90,000 objects and grows from 270 MB to 500 MB. It used to house almost double that and grew to 1.5 GB with no problems; obviously I did some house cleaning. There are file size limitations in some OS's (I think Linux is 2GB). 150,000 objects is no biggie, and if it grew to 500,000 you would still be fine.

BZ
http://www.zopezone.com
http://www.bluewildfire.com
In an effort to put our Zope-based system into production, I ran an import script to pull in historical data external to Zope. We are now experiencing some very strange behaviors - seemingly related to traversal of the object tree - that did not occur prior to the import. At this point we have close to 150,000 objects, each containing a handful of metadata and nothing more. A large number of these are folderish. I expect this database to grow much larger in a very short time, provided it does not die a premature death.

Our file system is ReiserFS and our ZODB is running on DirectoryStorage rather than the Data.fs file. I worry that we're pushing Zope beyond its capabilities with this number of objects, but I don't really know. Does anyone have experience with databases of this size? Any ideas on limitations to the number of objects the ZODB can comfortably manage, or the size to which the ZODB can comfortably grow?

_______________________________________________
Zope maillist - Zope@zope.org
http://mail.zope.org/mailman/listinfo/zope
** No cross posts or HTML encoding! **
(Related lists -
http://mail.zope.org/mailman/listinfo/zope-announce
http://mail.zope.org/mailman/listinfo/zope-dev )
> There are file size limitations in some OS's (I think Linux is 2GB).
> 150,000 objects is no biggy and if it grew to 500,000, you would still
> be fine.
Only if you have a very old one. Linux 2.4 with, say, XFS can handle files of up to 64TB, and I'd be very impressed if you can afford one of those. Later, with 64-bit block devices, it'll become something like 9 million TB. Of course, the maximum filesystem size is only 2TB, which is something of a wet blanket, but that'll change. If you need to work around it, I suppose you can use DirectoryStorage.

--jcc
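[A quick way to verify whether a particular OS, filesystem, and Python build combination will actually take files past the 2GB mark - the limit debated above - is to seek beyond it in a sparse temporary file. A minimal sketch using only the standard library; the 3 GB offset is an arbitrary choice:]

```python
import os
import tempfile

# Large-file check: seek past the old 2 GB (2**31) boundary in a sparse
# temp file and write one byte. If the OS, filesystem, or Python build
# lacks large-file support, the lseek/write raises an error instead.
fd, path = tempfile.mkstemp()
try:
    os.lseek(fd, 3 * 1024 ** 3, os.SEEK_SET)  # 3 GB offset
    os.write(fd, b"x")
    size = os.path.getsize(path)
    print(size > 2 ** 31 - 1)  # True when >2 GB files work
finally:
    os.close(fd)
    os.remove(path)
```

Because the file is sparse, this consumes almost no actual disk space.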
I think that to have a Data.fs over 2GB you still have to build Python with large file support? Also, beware that the /tmp or /usr/tmp directory is often on a small partition, and that limits the size of any one uploaded object.

On Wednesday, October 15, 2003, at 05:28 PM, J Cameron Cooper wrote:
>> There are file size limitations in some OS's (I think Linux is 2GB).
>> 150,000 objects is no biggy and if it grew to 500,000, you would
>> still be fine.
>
> Only if you have a very old one.
>
> Linux 2.4 with, say, XFS can do up to 64TB, and I'd be very impressed
> if you can afford one of those. Later with 64-bit block devices it'll
> become something like 9 million TB.
>
> Of course, the maximum filesystem size is only 2TB, which is something
> of a wet blanket, but that'll change. If you need to work around it, I
> suppose you can use DirectoryStorage.
>
> --jcc
Marc Lindahl wrote at 2003-11-7 19:31 -0500:
> I think to have data.fs >2GB you still have to build python with large
> file support?
>
> Also, beware that often the /tmp or /usr/tmp directory is on a small
> partition and that limits the size of any one uploaded object.
There is an environment variable controlling where temporary files are placed. I think it is "TMPDIR".

-- Dieter
participants (4)
- BZ
- Dieter Maurer
- J Cameron Cooper
- Marc Lindahl