[Zope] large images to a database via zope.

Toby Dickenson tdickenson@geminidataloggers.com
Tue, 17 Apr 2001 11:51:11 +0100


On Tue, 17 Apr 2001 05:00:35 -0400, ethan mindlace fremen
<mindlace@digicool.com> wrote:

>--On Tuesday, April 17, 2001 02:41:05 -0400 marc lindahl
><marc@bowery.com> wrote:
>
>> So much for size, now for performance.  Ethan, though zope isn't
>> 'optimized for the rapid delivery of large binary objects', is it
>> better at pulling them out of an object than the local FS?  Or via
>> a DB adapter?  For any particular reasons (multithreading, maybe?)
>
>Well, a thread is locked for the entire time it is writing out to the
>end user. So get 4 simultaneous requests for a large file, and your
>site is now unresponsive (in a stock install.)

I'm sure that's not true. Data is queued by medusa 'producers', and
the separate medusa thread takes care of trickling responses back to
clients.
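
The producer idea, roughly: an object with a more() method that the
async loop calls whenever the client socket is writable; an empty
result means the response is finished. A minimal sketch (the names
here are mine, not medusa's actual producers module):

    class FileProducer:
        """Stream a file back to the client in small chunks."""

        def __init__(self, file, chunk_size=8192):
            self.file = file
            self.chunk_size = chunk_size

        def more(self):
            # Next chunk for the async loop; empty means finished.
            data = self.file.read(self.chunk_size)
            if not data:
                self.file.close()
            return data

Since the async loop owns the socket, a slow client only delays its
own producer, not one of the worker threads.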

The worker threads (4 of them in a stock install) are only blocked
for as long as it takes them to calculate the response and then hand
it over to medusa.

Most responses get buffered in memory, but zope's File and Image
objects take some special steps to ensure that the producer buffers
data in a temporary file if the data is 'large' (see HTTPResponse.py
for details). This file copy is the only 'unnatural' overhead of
serving large objects from zope.
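
As a rough sketch of that buffering decision (the threshold and the
function are made up for illustration; the real logic is in
HTTPResponse.py), reusing the FileProducer above:

    import io
    import tempfile

    SPOOL_THRESHOLD = 128 * 1024   # hypothetical cutoff

    def body_producer(body):
        # Small bodies stay in memory; a large body is copied to a
        # temporary file first, so the worker thread can hand the
        # producer to medusa and move on to the next request.
        if len(body) < SPOOL_THRESHOLD:
            return FileProducer(io.BytesIO(body))
        tmp = tempfile.TemporaryFile()
        tmp.write(body)
        tmp.seek(0)
        return FileProducer(tmp)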




Toby Dickenson
tdickenson@geminidataloggers.com