On Tuesday 14 January 2003 6:32 pm, Dieter Maurer wrote:
> > How does anyone know for sure that there is no problem in their storages today without using a "fsck" tool?
> You are much deeper in "Storages" than I am.
> I have the feeling that a "FileStorage" is a linear sequence of transaction records. When Zope builds the "FileStorage" index, I expect (I never verified) that it analyses the linear sequence and checks that at least the record sizes are correct.
When building the index, FileStorage scans through these headers as fast as possible. This will detect some damage, but not all. Of course this check is only performed when the index is built - on startup after an unclean shutdown. During a clean shutdown the old index is persisted in Data.fs.index. If you always shut down cleanly then this check may never happen!
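To make that structural check concrete, here is a minimal sketch. The record layout is a deliberate simplification of mine (8-byte id, 8-byte body length, body, then a redundant trailing copy of the length) - a real Data.fs transaction record carries more fields - but it shows the kind of scan-and-verify pass an index build can do:

```python
import struct

def scan_transactions(path):
    """Walk a linear sequence of transaction records and verify
    their sizes.  Assumed (simplified) layout per record: 8-byte
    transaction id, 8-byte big-endian body length, the body, then
    a redundant copy of the length.  Raises ValueError on the
    first structural inconsistency found."""
    with open(path, "rb") as f:
        f.seek(0, 2)
        end = f.tell()
        f.seek(0)
        pos = 0
        while pos < end:
            header = f.read(16)
            if len(header) < 16:
                raise ValueError("truncated header at %d" % pos)
            tid, tlen = struct.unpack(">8sQ", header)
            if pos + 16 + tlen + 8 > end:
                raise ValueError("record overruns file at %d" % pos)
            f.seek(tlen, 1)  # skip the record body
            (redundant,) = struct.unpack(">Q", f.read(8))
            if redundant != tlen:
                raise ValueError("length mismatch at %d" % pos)
            pos = f.tell()
            yield tid, tlen
```

Note that this only proves the records tile the file correctly; garbage *inside* a record body passes unnoticed, which is exactly the limitation described above.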
> Besides this elementary structure, there are backpointers from newer to older versions.
Yes, plus other redundant information. For FileStorage, all of this is thoroughly tested with the fstest.py script. IMO it is prudent to run fstest at least as often as you pack, perhaps on Data.fs.old.
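If you want that to happen on every pack without remembering to do it by hand, a small wrapper can run the checker over the pre-pack copy. The paths, and the assumption that fstest.py accepts the storage filename as its argument, are mine - adjust to your installation:

```python
import subprocess
import sys

def check_after_pack(datafs="Data.fs.old", fstest="fstest.py"):
    # Hypothetical wrapper: run the fstest script (assumed to take
    # the storage filename as its only argument) and fail loudly
    # if it exits non-zero, i.e. if it reported damage.
    result = subprocess.run([sys.executable, fstest, datafs],
                            capture_output=True, text=True)
    if result.returncode != 0:
        raise RuntimeError("fstest reported problems:\n"
                           + result.stdout + result.stderr)
```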
> Damage inside a transaction record may not be detected.
For FileStorage, damage at pickle level and at ZODB level (dangling references etc.) can be checked with fsrefs.py. For DirectoryStorage, checkds.py checks as much as it can, plus you can run it on a live storage. I'm not sure about the situation for BerkeleyStorages.

-- 
Toby Dickenson
http://www.geminidataloggers.com/people/tdickenson
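For what it's worth, once the object references have been extracted from each pickle, the ZODB-level dangling-reference check reduces to a simple existence test over the object graph. A toy version (names are mine; the real fsrefs.py also does the pickle-level extraction itself):

```python
def find_dangling(refs):
    """refs maps each object id to the set of object ids it
    references (in real life, pulled out of the object's pickle).
    Returns (oid, missing_target) pairs for references to objects
    that do not exist in the storage - the kind of ZODB-level
    damage a reference checker reports."""
    present = set(refs)
    return [(oid, target)
            for oid, targets in refs.items()
            for target in targets
            if target not in present]
```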