Zope 2.3.0 (binary release, python 1.5.2, linux2-x86)
Python 1.5.2 (#10, Dec 6 1999, 12:16:27) [GCC 2.7.2.3]

I've uploaded pretty large files into Zope before and always had fairly decent success, but I just tried to upload a 150MB file, and these are the console logs:

2001-02-22T23:26:44 INFO(0) ZServer Successful login.
------
2001-02-22T23:26:48 PROBLEM(100) ZServer unhandled connect event
------
2001-02-22T23:26:50 PROBLEM(100) ZServer unhandled connect event
------
2001-02-23T00:45:00 ERROR(200) ZServer uncaptured python exception, closing channel <zope_ftp_channel connected 208.123.214.200:25172 at 87b8bd8> (socket.error: (9, 'Bad file descriptor') [/home/zope/Zope-2.3.0-linux2-x86/ZServer/medusa/asynchat.py|initiate_send|211] [/home/zope/Zope-2.3.0-linux2-x86/ZServer/medusa/asyncore.py|send|282])

Zope appears to keep every incoming transfer byte in memory and only writes it out to disk when the transfer completes. On the first attempt to upload this file, I was playing around and tried to upload a smaller file to Zope in another directory, and the original large-file connection broke. So I decided to just leave it alone until the transfer completed. When I got home the file was still transferring, and I watched the memory creep up to roughly the file size. It got to about 175MB and then something really odd happened: the CPU went from ~1% idle to 60% idle and the memory started ballooning (scaring me, because I only had ~400MB of total memory on the box). It reached about 380MB and the CPU went back to idle; I never ran out of virtual memory. When I logged into Zope, my file hadn't been saved. :(

I've uploaded large files (I believe larger than 100MB) before, but that was on a local network and I didn't have a problem; 'in the wild' it didn't work as I expected. Does anyone have any insight? Is uploading files >XXX bytes just not a good idea? Is this commonly known?
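For contrast with the buffer-everything behaviour described above, a streaming upload handler can keep peak memory bounded by spooling fixed-size chunks to a temporary file as they arrive. This is a minimal modern-Python sketch of that idea, not Zope's or Medusa's actual code; the function and constant names are illustrative:

```python
import io
import tempfile

CHUNK_SIZE = 64 * 1024  # read 64 KB at a time instead of the whole upload

def spool_upload(source, chunk_size=CHUNK_SIZE):
    """Copy an incoming stream to a temporary file in fixed-size chunks,
    so peak memory stays around chunk_size regardless of upload size."""
    spool = tempfile.TemporaryFile()
    total = 0
    while True:
        chunk = source.read(chunk_size)
        if not chunk:
            break  # end of stream
        spool.write(chunk)
        total += len(chunk)
    spool.seek(0)  # rewind so the caller can read the spooled data back
    return spool, total

# Usage: a 1 MB fake upload is never held in memory all at once
fake_upload = io.BytesIO(b"x" * (1024 * 1024))
spool, size = spool_upload(fake_upload)
print(size)  # 1048576
```

With an approach like this, a 150MB FTP upload would cost 150MB of disk in a spool area rather than 150MB (or more) of RAM before the object is ever written.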
As the Zope DB is a single large file, I would recommend using an external file, for example with the file system product. I noticed that when you pack the database, Zope makes a backup (which is a good thing), but then you need twice the space of your files...

Ideally, Zope would have a setting to store files bigger than xxx bytes (defined by the user) outside the Zope DB, in a dedicated folder on the server file system. The file should be transparently available in Zope, via FTP or any other method used to access files in Zope. The main advantage is that database backups would never be huge. Who needs to back up all his images? Only textual content and scripts need to be backed up.

Those large files are like BLOBs in any other database, and there are probably solutions for normal databases; why not use them with Zope?

Just my 0.0002

Philippe
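The threshold idea above can be sketched in a few lines: payloads under a user-defined cutoff go into the database, larger ones are spilled to a dedicated folder and only a path is kept. This is a hypothetical illustration, not a Zope product; `THRESHOLD`, `store`, and the dict standing in for the ZODB are all made-up names:

```python
import hashlib
import os
import tempfile

THRESHOLD = 1 * 1024 * 1024  # hypothetical user-defined cutoff: 1 MB

def store(data, blob_dir, db):
    """Keep small payloads in 'db' (a dict standing in for the object DB);
    spill anything over THRESHOLD to a file on disk, keyed by content hash."""
    key = hashlib.sha1(data).hexdigest()
    if len(data) <= THRESHOLD:
        db[key] = data          # small: lives inside the database
        return ("db", key)
    path = os.path.join(blob_dir, key)
    with open(path, "wb") as f:  # large: lives on the file system
        f.write(data)
    return ("fs", path)

# Usage: small data stays in the DB, large data goes to disk
blob_dir = tempfile.mkdtemp()
db = {}
print(store(b"just text", blob_dir, db)[0])               # db
print(store(b"x" * (THRESHOLD + 1), blob_dir, db)[0])     # fs
```

A backup of `db` then never contains the big binaries; the blob folder can be backed up (or skipped) separately, which is exactly the advantage argued for above.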
participants (2)
- alan runyan
- Philippe J