[Zope] zope.org down

ethan mindlace fremen mindlace@digicool.com
Mon, 24 Jul 2000 11:11:16 -0400


Cary O'Brien wrote:

> Hold on. I am confused, and I really need to understand this.
> Why do you need a 40 gb drive to store something that might get to 2GB?
> Do things get really big during packing?  (Thinking PostgreSQL index creation
> here, where it uses temp files for sorting, which has bit me more than
> once).  Or do you just want to keep N old versions around?

I need to keep N old versions around for various testing purposes; that
is what got me in trouble in the first place, since I was packing to try
to make room.  The pack makes a full backup of your data before it
compacts it, so it needs roughly 1.75 times the size of your current
Data.fs in free space.
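
As a rough sanity check before packing, something like this works (a
sketch, assuming a modern Python with os.statvfs; the Data.fs path and
the 1.75 factor are just the numbers from above):

    import os

    DATA_FS = '/var/zope/Data.fs'        # assumed location of your storage
    FACTOR = 1.75                        # rough multiplier from above

    current = os.path.getsize(DATA_FS)
    st = os.statvfs(os.path.dirname(DATA_FS))
    free = st.f_bavail * st.f_frsize     # free bytes on that filesystem

    print('Data.fs: %d bytes, pack wants ~%d, %d free'
          % (current, int(current * FACTOR), free))
    if free < current * FACTOR:
        print('Probably not enough room to pack safely.')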

> Also, (besides "upgrade to alpha"), what is the workaround for Data.fs
> bigger than 2GB?  Are there any plans to split it across multiple
> files? (Once again, the PostgreSQL people recently added this -- database
> files don't exceed 1GB, when the table gets that big it splits them
> across multiple files).

One solution is to split your Data.fs across mounted databases, which I
consider sub-optimal.  Another option is to store the entire ZODB in a
relational database, but as far as I know none of the products that
attempt to provide that are full-featured.
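
For what it's worth, here is a bare-bones illustration of splitting
storage across two FileStorages with ZODB's multi-database support (a
sketch against a modern ZODB API, not the mounted-database products
mentioned above; the file and database names are made up):

    from ZODB.DB import DB
    from ZODB.FileStorage import FileStorage
    import transaction

    databases = {}
    main = DB(FileStorage('main.fs'), databases=databases,
              database_name='main')
    big = DB(FileStorage('big.fs'), databases=databases,
             database_name='big')

    conn = main.open()
    # objects put in the 'big' connection's root are stored in big.fs,
    # but participate in the same transaction as the main database
    big_root = conn.get_connection('big').root()
    big_root['archive'] = {}
    transaction.commit()
    conn.close()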

The bottom line is that the OS needs to support the creation of very
large files.  Any of the BSD variants, Solaris for x86, or SCO Unix will
let you have a ZODB larger than 2GB, assuming Python was compiled with
large-file support.

I have spent literally about 40 hours trying to get Linux to support
large files.  While large-file support is integrated into kernels 2.3.27
and up, something still isn't quite right: I cannot get Python to create
large files, although I can get C to create 17GB files using lseek().
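
For anyone who wants to try the same thing against their own build, the
Python equivalent of that lseek() test looks roughly like this (a
sketch; the 3GB offset and file name are arbitrary, and it assumes a
filesystem that allows sparse files):

    import os

    fd = os.open('bigtest', os.O_CREAT | os.O_RDWR, 0o644)
    try:
        # seek well past the 2GB mark, then write one byte to force
        # the file size, much like the C lseek() test
        os.lseek(fd, 3 * 1024 * 1024 * 1024, os.SEEK_SET)
        os.write(fd, b'x')
        print('large files ok: %d bytes' % os.fstat(fd).st_size)
    except (OverflowError, OSError):
        print('this build cannot seek past 2GB')
    finally:
        os.close(fd)
        os.remove('bigtest')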

Hope that helps,
-- 
ethan mindlace fremen
Zopatista Community Liaison
Abnegate I!