WHY? -- Performance in serving large media files from ZODB vs from external file system
Hi Zopers,

In the "performance for large image files" discussion, Dieter wrote:

Both will be quite slow when served by ZServer.

What could be the reason for being "slow"? Since the files are large, the setup time (the time to locate the object) can be ignored. Once the object is found, all that remains is reading the file from the DB (which should be a plain read) and pushing it to the connection. I see no reason for the DB read to be slow once the data is located inside the DB; the data may or may not be in a contiguous block, as in the file system. I see no encode/decode or compress/decompress step in the process. So why should ZServer be slow? Am I missing something here? IMHO, moving some stuff out of Zope for minimal performance reasons breaks the Zope OOWEB idea!

Sedat Yilmazer
Kibele Iletisim Sis. ve Serv. Ltd.
-----Original Message-----
From: zope-admin@zope.org [mailto:zope-admin@zope.org] On Behalf Of Dieter Maurer
Sent: Thursday, July 05, 2001 1:48 AM
To: Dirksen
Cc: zope@zope.org
Subject: Re: [Zope] Performance in serving large media files from ZODB vs from external file system

Dirksen writes:
Judging only from response time, throughput handling, and download speed, which way of serving large files has better performance: from ZODB, or as external files via the LocalFS product?

Both will be quite slow when served by ZServer. With LocalFS, however, you can serve them with a standard WWW server, e.g. Apache.
Dieter
_______________________________________________
Zope maillist - Zope@zope.org
http://lists.zope.org/mailman/listinfo/zope
** No cross posts or HTML encoding! **
(Related lists - http://lists.zope.org/mailman/listinfo/zope-announce http://lists.zope.org/mailman/listinfo/zope-dev )
On Thu, 5 Jul 2001 11:46:37 +0300, "Sedat Yilmazer" <sedat@kibele.com> wrote:
Hi Zopers
In the "performance for large image files" discussion, Dieter wrote:

Both will be quite slow when served by ZServer.

What could be the reason for being "slow"? Since the files are large, the setup time (the time to locate the object) can be ignored. Once the object is found, all that remains is reading the file from the DB (which should be a plain read) and pushing it to the connection. I see no reason for the DB read to be slow once the data is located inside the DB; the data may or may not be in a contiguous block, as in the file system. I see no encode/decode or compress/decompress step in the process. So why should ZServer be slow? Am I missing something here? IMHO, moving some stuff out of Zope for minimal performance reasons breaks the Zope OOWEB idea!
The policy used by the default File and Image classes for large content is to spool the content from ZODB into a temporary file as quickly as possible, allowing ZServer to trickle data back to the client as quickly as the client will allow. This policy is a good one, but by itself it is not as quick as serving a flat file from a conventional HTTP server. The best of both worlds is to sit Zope behind a caching proxy (I favor Squid), so that Zope is not involved with most requests for big files.

I hope this helps,

Toby Dickenson
tdickenson@geminidataloggers.com
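The spool-then-trickle policy Toby describes can be sketched in a few lines. This is an illustrative model, not Zope's actual implementation: `spool_then_trickle` and the `send` callback are hypothetical names, and the real code lives in ZServer, not here.

```python
import shutil
import tempfile

CHUNK = 64 * 1024  # the 64K chunk size mentioned elsewhere in the thread

def spool_then_trickle(source, send):
    """Sketch of the policy described above (names are illustrative):
    first drain the whole body out of the database side into a temp
    file as fast as possible, freeing the DB connection quickly, then
    feed the client from the temp file at whatever pace it accepts."""
    spool = tempfile.TemporaryFile()
    shutil.copyfileobj(source, spool)  # fast copy out of the DB side
    spool.seek(0)
    while True:
        chunk = spool.read(CHUNK)
        if not chunk:
            break
        send(chunk)  # client-paced delivery, one chunk at a time
    spool.close()
```

The point of the intermediate temp file is that a slow client ties up only a file handle, not a database connection, for the duration of the download.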
From: Toby Dickenson <tdickenson@devmail.geminidataloggers.co.uk>
The policy used by the default File and Image classes for large content is to spool the content from ZODB into a temporary file as quickly as possible, allowing ZServer to trickle data back to the client as quickly as the client will allow.
I'm not sure I see where that is happening; can you provide some pointers? It looks to me like File and Image pull a large (>64KB) file out 64K at a time and feed it to RESPONSE, which would be in RAM, not a temp disk file...
marc lindahl wrote:
From: Toby Dickenson <tdickenson@devmail.geminidataloggers.co.uk>
The policy used by the default File and Image classes for large content is to spool the content from ZODB into a temporary file as quickly as possible, allowing ZServer to trickle data back to the client as quickly as the client will allow.
I'm not sure I see where that is happening; can you provide some pointers? It looks to me like File and Image pull a large (>64KB) file out 64K at a time and feed it to RESPONSE, which would be in RAM, not a temp disk file...
Yes, that is correct. Zope will stream large file data in 64K chunks so that it doesn't have to load the whole thing into RAM when it is requested. It doesn't use any files.

--
| Casey Duncan
| Kaivo, Inc.
| cduncan@kaivo.com
`------------------>
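The chunked storage Casey describes can be modelled as a linked list of small pieces, which is roughly how Zope's File objects keep large content (the real class is OFS.Image.Pdata). The sketch below is a toy model for illustration, not Zope's actual code.

```python
class Pdata:
    """Toy model of chunked file storage: each node holds one chunk
    (<=64K in Zope's case) and a reference to the next chunk, so the
    full body never needs to exist as one contiguous string in RAM."""

    def __init__(self, data, next=None):
        self.data = data  # one chunk of the file body
        self.next = next  # the following chunk, or None at the end

def iter_pdata(head):
    """Walk the chunk chain, yielding one chunk at a time."""
    node = head
    while node is not None:
        yield node.data
        node = node.next
```

Streaming then means walking the chain and writing each chunk to the response in turn, so peak memory use is one chunk, not the whole file.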
On Fri, 06 Jul 2001 11:25:30 -0400, marc lindahl <marc@bowery.com> wrote:
From: Toby Dickenson <tdickenson@devmail.geminidataloggers.co.uk>
The policy used by the default File and Image classes for large content is to spool the content from ZODB into a temporary file as quickly as possible, allowing ZServer to trickle data back to the client as quickly as the client will allow.
I'm not sure I see where that is happening; can you provide some pointers? It looks to me like File and Image pull a large (>64KB) file out 64K at a time and feed it to RESPONSE, which would be in RAM, not a temp disk file...
Look at the implementation of RESPONSE.write in ZServer/HTTPResponse.py. It checks whether the caller has set a Content-Length HTTP header, and spools chunks onto disk if the total specified size is large.

Toby Dickenson
tdickenson@geminidataloggers.com
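The decision Toby points at can be sketched as a response object that picks its backing store up front from the declared Content-Length. This is a toy model under assumed names (`SpoolingResponse` is not Zope's API); the real logic is in ZServer/HTTPResponse.py.

```python
import io
import tempfile

SPOOL_THRESHOLD = 128 * 1024  # the ~128K cutoff discussed in the thread

class SpoolingResponse:
    """Toy model of the spooling decision: if the caller declared a
    large Content-Length up front, write() sends chunks to an
    anonymous temp file on disk; otherwise it accumulates them in RAM.
    Class and method names here are illustrative, not Zope's API."""

    def __init__(self, content_length=None):
        if content_length is not None and content_length > SPOOL_THRESHOLD:
            self._buf = tempfile.TemporaryFile()  # disk-backed spool
            self.spooled = True
        else:
            self._buf = io.BytesIO()  # small body: keep it in RAM
            self.spooled = False

    def write(self, data):
        self._buf.write(data)

    def getvalue(self):
        self._buf.seek(0)
        return self._buf.read()
```

Note that the choice depends entirely on the declared Content-Length, which is why the header must be set before the first write for spooling to kick in.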
From: Toby Dickenson <tdickenson@devmail.geminidataloggers.co.uk>
Look at the implementation of RESPONSE.write in ZServer/HTTPResponse.py. It checks whether the caller has set a Content-Length HTTP header, and spools chunks onto disk if the total specified size is large.
Yeah, it sure does, if the size is >128K. I wonder what the reason is? Wouldn't it be better not to do that, and let the OS's VM system deal with it? Also, how does this compare with, say, Apache? Does it do something similar?
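The RAM-until-a-limit hybrid being debated here later became a standard-library facility: Python's `tempfile.SpooledTemporaryFile` (added well after this thread, in Python 2.6) buffers in memory up to `max_size` and transparently rolls over to a real on-disk temp file only when that limit is exceeded.

```python
import tempfile

# SpooledTemporaryFile implements the hybrid policy discussed above:
# data stays in RAM up to max_size, then rolls over to disk. The 128K
# value mirrors the threshold mentioned in the thread.
buf = tempfile.SpooledTemporaryFile(max_size=128 * 1024)
buf.write(b"x" * (256 * 1024))  # exceeds max_size, so it rolls to disk
buf.seek(0)
body = buf.read()
```

This keeps small responses cheap while capping the RAM cost of large ones, which is essentially the trade-off the 128K cutoff in RESPONSE.write is making.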
participants (4)
- Casey Duncan
- marc lindahl
- Sedat Yilmazer
- Toby Dickenson