Re: [Zope] uploading large files w/ LocalFS
Hello Paul,

Sorry for not having elaborated on my reply. The major problem I had with big uploads/downloads is memory consumption: the last time I tried it (Zope 2.5.1 or 2.6), downloading an 800 MB file (stored in a LocalFS directory) took a lot of memory. I think ZServer has to keep the complete request/response object in memory before using/sending it. I asked this list and got the "put static/big stuff in Apache" reply.

This is point 1 of the 2 things I don't like about ZServer/Zope. Point 2 is the 'hanging' behaviour. I think there is no plan to solve these minor annoyances; they have been present (AFAIK) since Zope 0.9.

Gilles

----- Original Message -----
From: "Paul Winkler" <pw_lists@slinkp.com>
To: <Gilles.Lavaux@esa.int>
Cc: <zope@zope.org>
Sent: Wednesday, May 28, 2003 4:26 AM
Subject: Re: [Zope] uploading large files w/ LocalFS
On Wed, May 28, 2003 at 11:32:19AM -0500, Gilles wrote:
Yes, big upload/download with ZServer is very bad,
This could use a little more discussion, I think.

* Why is it bad? I can think of one reason: because each upload will keep one thread and one ZODB connection busy as long as it's going, and Zope has a fixed number of threads and ZODB connections (set at startup time).
* What are the alternatives? It's easy to say "use Apache", but this is rather glib if you're working on a content management system for which upload of large files is a requirement. And using an external server means you lose many nice things, like Zope's security management.
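The fixed-pool bottleneck in the first point can be illustrated with a minimal, hypothetical sketch in plain Python. None of this is Zope or ZServer code; the queue and the worker names are invented for illustration.

```python
import queue
import threading
import time

# Hypothetical illustration: like ZServer, a fixed number of worker
# threads is chosen at startup and never grows.
POOL_SIZE = 2
tasks = queue.Queue()
finished_at = {}
start = time.monotonic()

def worker():
    while True:
        name, duration = tasks.get()
        time.sleep(duration)  # stands in for a slow upload holding the thread
        finished_at[name] = time.monotonic() - start
        tasks.task_done()

for _ in range(POOL_SIZE):
    threading.Thread(target=worker, daemon=True).start()

# Two slow "uploads" occupy both threads; the quick request must wait.
for name, duration in [("upload-1", 0.3), ("upload-2", 0.3), ("quick-page", 0.0)]:
    tasks.put((name, duration))
tasks.join()

# The quick request needed no work at all, yet could not even start
# until one of the slow uploads released a thread.
print(round(finished_at["quick-page"], 1))
```

With only two threads, the zero-cost request still takes about 0.3 s of wall-clock time, which is the effect described above scaled down from an 800 MB upload.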
I've thought it would be nice to have something like "ExtBrowserFile" which would work as follows:
* Select "ExtBrowserFile" from the "Add..." menu (ZMI or CMF).
* The form has a typical file upload widget.
* The target of the form is a (configurable) external upload script running on e.g. Apache.
* Zope stores only a small object which contains properties such as title, image height & width (not sure how to get these), and most importantly, a valid URL at which the image or file can be viewed.
That solves a lot of the problems, but not security. You can of course secure the little meta-object in Zope, but what about the actual data?
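The small "meta-object" in the proposal above might look like the following sketch, assuming modern Python; the class name, field names, and URL are all invented for illustration, not an existing Zope API.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class ExtBrowserFile:
    """Hypothetical sketch of the small placeholder object Zope would
    store: metadata plus the URL where the real data lives, while the
    bytes themselves are served by an external server such as Apache."""
    title: str
    url: str                      # where the uploaded file can be viewed
    width: Optional[int] = None   # image dimensions, if known
    height: Optional[int] = None

    def tag(self) -> str:
        # Render an <img> tag pointing at the externally hosted file.
        dims = ""
        if self.width and self.height:
            dims = f' width="{self.width}" height="{self.height}"'
        return f'<img src="{self.url}" alt="{self.title}"{dims} />'

photo = ExtBrowserFile("Holiday", "http://static.example.com/holiday.jpg",
                       width=800, height=600)
print(photo.tag())
```

The point of the design is that Zope only ever persists this tiny record, so the ZODB and the request threads never touch the large file itself.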
--
Paul Winkler
http://www.slinkp.com
Look! Up in the sky! It's THE ADAMANTIUM!
(random hero from isometric.spaceninja.com)
_______________________________________________
Zope maillist - Zope@zope.org
http://mail.zope.org/mailman/listinfo/zope
** No cross posts or HTML encoding! **
(Related lists -
http://mail.zope.org/mailman/listinfo/zope-announce
http://mail.zope.org/mailman/listinfo/zope-dev )
On Wed, May 28, 2003 at 04:57:47PM -0500, Gilles wrote:
Hello Paul,
sorry for not having elaborated on my reply. The major problem I had with big uploads/downloads is memory consumption: the last time I tried it (Zope 2.5.1 or 2.6), downloading an 800 MB file (stored in a
oh, you mean REALLY big :)
LocalFS directory) took a lot of memory. I think ZServer has to keep the complete request/response object in memory before using/sending it.
It shouldn't. At least with Image and File objects, AFAICT they are loaded from the ZODB in 64k chunks as needed. If LocalFS tries to load everything into memory, I'd say that's a mis-feature of LocalFS.

Memory issues aside, I've found that downloading even a single ~40 MB file makes my Zope instance slow to a crawl until it finishes. Not good.

--
Paul Winkler
http://www.slinkp.com
Look! Up in the sky! It's APACHE PHASE!
(random hero from isometric.spaceninja.com)
LocalFS directory) took a lot of memory. I think ZServer has to keep the complete request/response object in memory before using/sending it.
It shouldn't. At least with Image and File objects, AFAICT they are loaded from the ZODB in 64k chunks as needed. If LocalFS tries to load everything into memory, I'd say that's a mis-feature of LocalFS.
Last time I checked, LocalFS read the whole file into memory; it's broken.

--
Andy McKay
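The 64k-chunk delivery Paul describes for File and Image objects amounts to the pattern sketched below in plain Python. This is an illustration, not LocalFS or Zope code, and `stream_file` is an invented name; but streaming like this, instead of a whole-file `read()`, is the kind of fix LocalFS would need.

```python
import os
import tempfile

CHUNK = 64 * 1024  # 64 KB, the chunk size mentioned for File/Image objects

def stream_file(path, write):
    """Stream a file to `write` in fixed-size chunks, so memory use
    stays at one chunk regardless of the file's total size."""
    with open(path, "rb") as f:
        while True:
            chunk = f.read(CHUNK)
            if not chunk:
                break
            write(chunk)

# Demo on a small temp file; the point is that only CHUNK bytes are
# ever held in memory at once, unlike a whole-file read().
fd, path = tempfile.mkstemp()
os.write(fd, b"x" * (3 * CHUNK + 17))
os.close(fd)

sent = []
stream_file(path, sent.append)
print(len(sent), sum(len(c) for c in sent))  # 4 chunks, 196625 bytes total
os.remove(path)
```

For an 800 MB download, this pattern keeps peak memory at 64 KB per request rather than 800 MB, at the cost of more (small) reads.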
Paul Winkler wrote at 2003-5-28 08:22 -0400:
... Memory issues aside, I've found that downloading even a single ~40 MB file makes my zope instance slow to a crawl until it finishes. Not good.
When I recently analysed a long upload, most of the time was spent before Zope's ZPublisher got any control (i.e. the time was spent in ZServer/medusa/cgi.py). Moreover, during this time, the processor had very high IOWait (50 to 90%) while the Zope process did not use any observable CPU time.

Maybe the way ZServer/medusa/cgi.py writes files to disk is somehow broken? I may analyse this further some day, but it is not yet planned.

Dieter
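The pre-ZPublisher phase Dieter is describing, where the incoming request body is spooled to disk before the application sees it, can be sketched generically as below. This is an illustration of the general pattern, not medusa's actual code; `spool_request_body` and the chunk size are invented for the example.

```python
import io
import tempfile

def spool_request_body(sock_file, content_length, chunk=8192):
    """Copy an incoming request body to a temp file in fixed-size
    reads before handing the request on, the general shape of what
    happens in ZServer/medusa/cgi.py before ZPublisher gets control.
    If the chunk size is small or the writes are poorly buffered,
    time goes to syscalls and disk waits, which could show up as the
    high IOWait with little CPU use that Dieter observed."""
    spool = tempfile.TemporaryFile()
    remaining = content_length
    while remaining > 0:
        data = sock_file.read(min(chunk, remaining))
        if not data:
            break  # client hung up early
        spool.write(data)
        remaining -= len(data)
    spool.seek(0)
    return spool

# Simulate a 100 KB upload arriving on a socket-like file object.
body = io.BytesIO(b"a" * (100 * 1024))
spooled = spool_request_body(body, 100 * 1024)
print(len(spooled.read()))  # 102400
```

Profiling where the time goes inside this loop (read vs. write vs. seek) would be one way to follow up on the IOWait observation.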
participants (4):
- Andy McKay
- Dieter Maurer
- Gilles
- Paul Winkler