On Monday 12 Aug 2002 4:50 pm, Joachim Werner wrote:
Hi!
I know of exactly two cases that could really cause ZODB to lose data: if you reach the 2GB limit with a Python not compiled for larger files, and if you reach the physical limit of your storage. That is, unless your case adds a third one ...
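The 2GB limit mentioned here can be probed directly. A minimal sketch (the function name and the temp-file approach are mine, not from the post): seek past the 2**31 byte boundary and write one byte; a Python built without large-file support fails at that offset.

```python
import os
import tempfile

def supports_large_files():
    """Probe whether this Python/OS build can address file offsets
    beyond 2 GB (2**31 bytes), the limit discussed above."""
    fd, path = tempfile.mkstemp()
    try:
        with os.fdopen(fd, "wb") as f:
            f.seek(2**31)      # seek past 2 GB; creates a sparse file
            f.write(b"\0")     # one byte at offset 2**31
        # st_size reports the logical size even for a sparse file
        return os.path.getsize(path) == 2**31 + 1
    except (OverflowError, OSError):
        # builds or filesystems without large-file support fail here
        return False
    finally:
        os.remove(path)
```

On any modern 64-bit Python this returns True; on the narrow builds common in 2002 it could return False, which is when FileStorage was at risk.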
FileStorage is robust and mature, but it's not as good as this statement suggests. There have been a number of bugs that caused packing to delete more than it should (a few very small holes still remain), bugs that caused FileStorage to overwrite the middle of its log file, and bugs that caused its position index to get muddled.
Have you already tried the usual things, i.e. running fstest.py and/or fsrecover.py? It's quite unlikely that you'd lose a whole tree, as the data is not physically stored in trees, but appended sequentially. You might have deleted a tree, but that can be rolled back by getting rid of the ZODB transaction that did the delete.
The first thing I would recommend trying today is shutting down, removing Data.fs.index, and restarting. In recent versions Data.fs.index makes very heavy use of BTrees, and all released versions of the BTree code have small bugs. <plug>I am currently developing DirectoryStorage, and one design goal is fault tolerance. http://dirstorage.sourceforge.net/ </plug>
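Deleting the index is safe because it is only a cache of object positions in the append-only data file. A toy model of that idea (this is an illustration of the principle, not real FileStorage code): records are appended sequentially, and the index can always be rebuilt by re-scanning the log, which is exactly what happens at restart after the index file is removed.

```python
# Toy model: an append-only record log ("Data.fs") plus a position
# index ("Data.fs.index") that is purely derived data.
records = []    # records appended sequentially, never rewritten
index = {}      # oid -> position of that oid's latest record

def append(oid, data):
    """Commit a new revision of oid to the end of the log."""
    index[oid] = len(records)
    records.append((oid, data))

def rebuild_index():
    """Equivalent of deleting Data.fs.index and restarting: scan the
    whole log; later records for an oid overwrite earlier positions."""
    fresh = {}
    for pos, (oid, _) in enumerate(records):
        fresh[oid] = pos
    return fresh

append("obj1", "a")
append("obj2", "b")
append("obj1", "a2")    # newer revision of obj1 at position 2

# The rebuilt index matches the live one: nothing was lost by
# throwing the index away.
assert rebuild_index() == index
```

The same reasoning is why a buggy index (BTree bugs included) is recoverable, while corruption in the data file itself is not.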