[Zope-dev] >2GB Data.fs files on FreeBSD

R. David Murray bitz@bitdance.com
Thu, 13 Apr 2000 11:31:04 -0400 (EDT)


OK, I've seen it bandied about that Linux doesn't support large files
but FreeBSD does, and that you should run Zope on FreeBSD if you want a
large database file.  So how do you do that?  On my BSDI box I started
getting "File too large" errors when the file exceeded some magic number
around 2GB.  I moved the file to a FreeBSD box hoping to be able to trim
it there, but I get an error when I try to start Zope:

Traceback (innermost last):
  File "/usr/local/zope/Zope-2.1.2-src/z2.py", line 436, in ?
    exec "import "+MODULE in {}
  File "<string>", line 1, in ?
  File "/usr/local/zope/Zope-2.1.2-src/lib/python/Zope/__init__.py", line 109, in ?
    DB=ZODB.FileStorage.FileStorage(Globals.BobobaseName)
  File "/usr/local/zope/Zope-2.1.2-src/lib/python/ZODB/FileStorage.py", line 301, in __init__
    self._pos, self._oid, tid = read_index(
  File "/usr/local/zope/Zope-2.1.2-src/lib/python/ZODB/FileStorage.py", line 1231, in read_index
    if file_size < start: raise FileStorageFormatError, file.name
ZODB.FileStorage.FileStorageFormatError: /usr/local/zope/Zope-2.1.2-src/var/Data.fs


So it looks like there is a problem using Zope with a large database
no matter what the platform.  Has anyone figured out how to fix this?
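
Since the check that's blowing up is "if file_size < start", the first
thing I'm going to verify is whether Python on the FreeBSD box even sees
the full size of the file -- if the size is getting truncated or
misreported by a build without large file support, that check could fire
even on an undamaged Data.fs.  That's just a guess on my part; here's the
minimal sketch I'll run (the path is the one from my traceback):

import os

path = '/usr/local/zope/Zope-2.1.2-src/var/Data.fs'

# Either of these may raise an error or report a bogus (wrapped) number
# on a Python built without large file support -- which would itself be
# informative.
print("os.path.getsize: %d" % os.path.getsize(path))

f = open(path, 'rb')
f.seek(0, 2)                      # seek to end of file
print("seek/tell size:  %d" % f.tell())
f.close()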

My main goal is to be able to get the last transactions out of this
database, after which I'll be able to shrink it well under the 2GB
limit...I'm going to try just truncating the last few bytes to see
if I can get Zope to open it and let me recover most of the transactions.
(Side note: tranalyzer dies before it processes the whole file, probably
some sort of memory resource limit on my server.)
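
For the truncation experiment, I'll copy the first chunk of the file to
a new Data.fs rather than truncating in place, so the original stays
untouched.  Rough sketch only -- the cutoff below is a made-up number;
the real value would have to be wherever the last intact transaction
record ends:

src_name = '/usr/local/zope/Zope-2.1.2-src/var/Data.fs'
dst_name = '/usr/local/zope/Zope-2.1.2-src/var/Data.fs.trimmed'

cutoff = 2147400000        # made up: just under 2GB
chunk = 1024 * 1024        # copy 1MB at a time to keep memory use flat

src = open(src_name, 'rb')
dst = open(dst_name, 'wb')
left = cutoff
while left > 0:
    data = src.read(min(chunk, left))
    if not data:           # hit end of file early
        break
    dst.write(data)
    left = left - len(data)
src.close()
dst.close()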

But given the number of people who have said "use FreeBSD if you want
big files", I'm really wondering whether that advice holds up.  What if
I later have an application where I really need a >2GB database?

--RDM