Hi to all,

I run two instances of Zope on a Sun SPARC with Solaris 9. The Data.fs of one instance is 2GB big. The actual problem is the lack of space on its partition => there is about 2GB of disk space left.

One Zope instance had been running for 24 days and the 2GB of disk space was gone (although the Data.fs hadn't grown that much). Yesterday I made a copy on another server (as a backup) and restarted Zope => it wouldn't start! So I removed all the extra files around Data.fs (that is, all Data.fs.*) and retried to start Zope => then it finally started, BUT:

1) the 2GB that had gone during these 24 days came back => does Zope keep a temp file with a copy of Data.fs?

2) the users sent me mail telling me that they lost what they did for the last 4 days!!! => I thought Zope would gather all its temp (buffer) data into the Data.fs when it is asked to restart; does it do that or not?

What could I do to get back to the situation I was in before the restart (in terms of data => for my users)? Should I try to restart with the removed Data.fs.* files (kept somewhere else)?

When I used version 2.7.3 I had problems with restarts => I had to remove the extra Data.fs.* files to manage a restart => but I hadn't lost data with that!!! Since version 2.7.3 was leaking I moved to 2.7.4, but I'm wondering if 2.7.4 doesn't have another bug in storage?

Thank you for your answers!

serge

Here is my config:
Zope Version: Zope 2.7.4-0, python 2.3.4, sunos5
Python Version: 2.3.4 (#1, Nov 18 2004, 14:19:24) [GCC 3.3.2]
System Platform: sunos5
SOFTWARE_HOME: /prd/data/Zope/var/zope2.7.4/lib/python
ZOPE_HOME: /prd/data/Zope/var/zope2.7.4
INSTANCE_HOME: /prd/data/Zope/var/zope8080
CLIENT_HOME: /prd/data/Zope/var/zope8080/var
Network Services: ZServer.HTTPServer.zhttp_server (Port: 8080), ZServer.FTPServer.FTPServer (Port: 8021)
Process Id: 979 (7)
Running For: 16 hours 51 min 21 sec
Serge Renfer wrote:
I run two instances of zope on a Sun sparc with Solaris 9. Data.fs of one instance is 2Gb big
The actual problem is the lack of space on its partition => there is about 2 Gb of disk space left.
Right, so this instance of Zope has a Data.fs approaching 2GB in size and there is about 2GB of disk space free? So you're talking about a 4GB partition? Where is the other Zope instance?
Yesterday I made a copy on another server (as a backup) and restarted zope => it wouldn't start!
Well, okay, but you need to give us information: what exception type, exception value and traceback were shown?
So I removed all the extra files around Data.fs (that is, all Data.fs.*), and retried to start zope => then it finally started BUT:
Hmmm, okay, that'll certainly slow down your startup a lot...
1) the 2Gb that had gone during these 24 days came back
=> does Zope keep a temp file with a copy of Data.fs ?
Are you packing the ZODB?
2) the users sent me mail telling that they lost what they did for the 4 last days !!!
That sounds odd, did the Data.fs file become truncated in any way?
=> I thought Zope would gather all its temp (buffer) data into the Data.fs when it is asked to restart, does it do that or not ?
Generally, at the end of each transaction, all data ends up in Data.fs. A transaction typically exists for the length of time it takes to answer one web request. So, unless you're running an extremely high-volume site, a restart, graceful or not, is unlikely to lose much data.
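To make the commit behaviour above concrete, here is a toy sketch (this is NOT the real ZODB API, just a minimal append-only log) showing why committed data survives a restart while anything not yet committed lives only in memory:

```python
import os
import tempfile

# Toy model of an append-only storage file like Data.fs.
# All names here are illustrative, not ZODB's actual classes.
class ToyStorage:
    def __init__(self, path):
        self.path = path
        self.pending = []          # in-memory buffer; lost on crash/restart

    def write(self, record):
        self.pending.append(record)

    def commit(self):
        # On commit, pending records are appended and flushed to disk.
        with open(self.path, "a") as f:
            for record in self.pending:
                f.write(record + "\n")
            f.flush()
            os.fsync(f.fileno())   # only now is the data durable
        self.pending = []

    def load(self):
        # What a freshly restarted process sees: committed records only.
        with open(self.path) as f:
            return [line.rstrip("\n") for line in f]

path = os.path.join(tempfile.mkdtemp(), "Data.fs")
s = ToyStorage(path)
s.write("committed change")
s.commit()
s.write("uncommitted change")      # never committed: gone after a "restart"
print(ToyStorage(path).load())     # only the committed record is there
```

The point of the sketch: there is no separate buffer that gets "gathered into" Data.fs at shutdown; each commit is written out at the time it happens.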
What could I do to get back to the situation I was in before the restart (in terms of data => for my users) ?
Well, try any backed up Data.fs files you have...
/prd/data/Zope/var/zope8080/var Network Services
Urm, I do hope this isn't trying to say you have your Data.fs on a network-mounted drive?!

cheers,

Chris

--
Simplistix - Content Management, Zope & Python Consulting - http://www.simplistix.co.uk
Serge Renfer wrote at 2005-2-22 08:37 +0100:
... 1) the 2Gb that had gone during these 24 days came back
=> does Zope keep a temp file with a copy of Data.fs ?
After a pack, the old storage file (its state before the pack) remains there as "Data.fs.old". It is overwritten by the next pack.
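That pre-pack copy is likely what ate the disk space. A small self-contained sketch (the directory and file names below are simulated, not your real INSTANCE_HOME) of what a pack leaves behind and what is safe to remove:

```python
import os
import tempfile

# Simulate an instance's var/ directory right after a pack.
var = tempfile.mkdtemp()
for name in ("Data.fs", "Data.fs.index", "Data.fs.old"):
    open(os.path.join(var, name), "w").close()

print(sorted(os.listdir(var)))
# Data.fs.old is the pre-pack copy: it roughly doubles disk usage right
# after a pack, and can be removed once the packed Data.fs is verified.
os.remove(os.path.join(var, "Data.fs.old"))
print(sorted(os.listdir(var)))
```

Note that Data.fs.index is just a cache of object positions; deleting it costs a slow index rebuild at startup but no data, whereas Data.fs.old may be your only copy of pre-pack history.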
2) the users sent me mail telling that they lost what they did for the 4 last days !!!
=> I thought Zope would gather all its temp (buffer) data into the Data.fs when it is asked to restart, does it do that or not ?
It does on "commit". However, I once had a defective disk. I was able to work a complete day without any apparent data loss (although Linux was telling me about write failures to the disk). After a reboot, my data was lost... The reason was simple: as Linux could not write to disk, it kept the data in the cache, and after the reboot the cache was empty. A new disk fixed the problem. Similarly, you may lose data when Zope cannot write it because your disk is full, or because you are on a 2GB-limited file system and you have reached that limit.
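One way to guard against the full-disk scenario is to monitor free space on the partition holding Data.fs. A minimal sketch using the standard library (the path "." is a placeholder; point it at your CLIENT_HOME, and note os.statvfs is POSIX-only, which includes Solaris):

```python
import os

def free_bytes(path):
    # Bytes available to unprivileged processes on the filesystem
    # containing `path`.
    st = os.statvfs(path)
    return st.f_bavail * st.f_frsize

free = free_bytes(".")   # placeholder path; use your instance's var/ dir
print(f"{free / (1024 ** 2):.1f} MB free")
```

Running something like this from cron and alerting well before the partition fills would catch the problem before commits start failing silently.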
... What could I do to get back to the situation I was in before the restart (in terms of data => for my users) ?
You hopefully have a backup?
Should I try to restart with the removed Data.fs.* files (kept somewhere else) ?
For example... -- Dieter
participants (3)
-
Chris Withers -
Dieter Maurer -
Serge Renfer