Hi!
> > I have a serious problem: I think that about a week ago my database
> > was damaged by this nasty pack() bug. Unfortunately, the error was
> > not noticeable until my Zope restart yesterday. :-(

> Hopefully, you did not pack again in the meantime. In that case, you
> at least get the state from the pack in "Data.fs.old".

In fact, I did. Every night a cronjob packs the database (days=7) and backs it up after packing... This is my usual procedure, and it ran last week as well because I didn't know about the problem yet. It is a pity that Zope doesn't check whether the result of the packing process is OK. Perhaps I should set up a separate Zope instance which checks the backup of the database after it is created and sends an error mail if Zope doesn't even start correctly with this backup. But I think that would not help very much, because not all errors in Data.fs prevent Zope from starting... :-/
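For what it's worth, a structural check of the backup doesn't need a full Zope instance; ZODB ships a checker for this (fstest), and the idea behind it can be sketched in a few lines. This is only an illustrative sketch assuming the documented FS21 file layout (magic, then transaction records whose header length must match the redundant trailing length); the function name check_filestorage is made up here:

```python
import struct

MAGIC = b"FS21"  # FileStorage 2.1 magic bytes at the start of Data.fs

def check_filestorage(path):
    """Walk every transaction record in a FileStorage file and verify
    that the redundant trailing length matches the header length.
    Returns the number of transaction records seen; raises ValueError
    on the first structural inconsistency."""
    count = 0
    with open(path, "rb") as f:
        if f.read(4) != MAGIC:
            raise ValueError("%s: bad magic, not a FileStorage file" % path)
        while True:
            pos = f.tell()
            head = f.read(16)          # 8-byte tid + 8-byte record length
            if len(head) < 16:
                break                  # clean end of file
            tid, tlen = struct.unpack(">8sQ", head)
            f.seek(pos + tlen)         # jump to the redundant length field
            tail = f.read(8)
            if len(tail) < 8 or struct.unpack(">Q", tail)[0] != tlen:
                raise ValueError("corrupt transaction record at offset %d"
                                 % pos)
            f.seek(pos + tlen + 8)     # start of the next transaction
            count += 1
    return count
```

Run nightly against the fresh backup, a nonzero exit from a wrapper around this would be exactly the error mail you want, without starting Zope at all. It only proves the record chain is intact, of course, not that every object loads.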
> > Is there any possibility to recover the database

> Pack has physically removed a lot of transaction and object records.
> There is no way to reconstruct them from this (corrupted) file.

OK, I expected and feared this... ;-)
or to "extract" the transactions of the last 8 days and to prevent the complete loss of data in that time? Has anybody an idea, which I could try? This is possible but it will not be easy: The idea is that emulate packing in some way.
> Packing essentially consists of 2 phases. [...] The idea to recover
> your data would be to emulate packing by processing your "Data.fs.old"
> in the packing phase and all new transaction records in the copying
> phase.
> This will require some tweaking, as in your case the two segments
> reside in different files (while usually they come from the same
> file).

OK, so I would have to write a program which does this? Uuhh... I think I have a very basic idea of how Zope works and some intermediate experience in Python, but I don't believe that I could actually write a program which performs the process you described... Anyway, because I packed the database in the meantime (several times), this procedure would not work here, would it? It seems I have to live with the data loss. :-( But nonetheless: thank you for your efforts. ;-)
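For anyone hitting this thread later with an unpacked Data.fs: the "copying phase" half of the recovery can at least be started mechanically. The sketch below (my own illustrative code, assuming the FS21 record layout; extract_after is not a real ZODB utility) appends the raw transaction records newer than a cutoff tid to another file. It deliberately does NOT rewrite the absolute file offsets (previous-record pointers and backpointers) stored inside the records, which is exactly the hard "tweaking" described above, so the output is salvaged raw data, not yet a loadable Data.fs:

```python
import struct

MAGIC = b"FS21"  # FileStorage 2.1 file magic

def extract_after(src_path, out_path, cutoff_tid):
    """Append the raw transaction records with tid > cutoff_tid from
    src_path to out_path.  Purely structural copy: the backpointers
    inside the data records still reference positions in src_path and
    would have to be fixed up separately.  Returns the record count."""
    copied = 0
    with open(src_path, "rb") as src, open(out_path, "ab") as out:
        if src.read(4) != MAGIC:
            raise ValueError("%s: not a FileStorage file" % src_path)
        while True:
            pos = src.tell()
            head = src.read(16)        # 8-byte tid + 8-byte record length
            if len(head) < 16:
                break                  # end of file
            tid, tlen = struct.unpack(">8sQ", head)
            src.seek(pos)
            record = src.read(tlen + 8)  # whole record incl. trailing length
            if tid > cutoff_tid:       # tids are big-endian timestamps,
                out.write(record)      # so byte comparison orders them
                copied += 1
    return copied
```

The cutoff tid would be the id of the last transaction that survived in "Data.fs.old". Since tids are monotonically increasing timestamps, everything after it in the damaged file is the 8 days of data in question.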
Bye, Dominik