[Zope-dev] Re: [Zope3-dev] proposal: serving static content faster

Shane Hathaway shane at zope.com
Thu Apr 8 23:22:00 EDT 2004


On Thu, 8 Apr 2004 zope at netchan.cotse.net wrote:

> I'm working on a product which serves files from the filesystem. The
> data retrieval method is the usual:
> 
> def pushData(self, f, outstream):
>     finished = False
>     while not finished:
>         block = f.read(blocksize)
>         if len(block) < blocksize:
>             finished = True
>         outstream.write(block)
> 
> f is the file on the filesystem, outstream is the request object.
> 
> Testing with a 1Mbyte file (ab -n 12 -c 4), I get ~4.4 req/sec -
> ~4.7Mbyte/sec after a few iterations (the OS caches the file).

Zope reads the file from a cache but then forces its entire contents into
dirty buffers.  If the file is sent before the OS decides to flush the
buffers to disk, and the OS is somehow smart enough to cancel the write of
those buffers, you're in luck--not much penalty.  You might even 
decide to put these files on a RAM disk.  Most network connections
aren't that fast, though, so you have to expect a concurrency level much
higher than 4.  The penalty comes when the OS has to write all those
concurrent copies of the same data to disk, then delete each of them when
the download is finished.  Zope could make a good file server if it just
didn't make so many temporary copies.
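
Roughly, the pattern looks like this (a sketch only, not ZServer's actual
code; the class and method names are made up for illustration):

import tempfile

class SpooledResponse:
    # Hypothetical illustration of the spool-to-a-temporary-file pattern:
    # the worker thread writes the whole body into its own temporary file,
    # and the async loop later streams that extra copy out to the socket.
    def __init__(self):
        self._spool = tempfile.TemporaryFile()

    def write(self, data):
        # the application "writes to the client", but the bytes only land
        # in dirty OS buffers backing the temporary file
        self._spool.write(data)

    def iter_blocks(self, blocksize=1 << 16):
        # later the network loop reads the copy back and pushes it to the
        # socket; every concurrent download carries its own full copy
        self._spool.seek(0)
        while True:
            block = self._spool.read(blocksize)
            if not block:
                break
            yield block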

> It seems from these results, that ZServer's tempfile strategy causes
> some (~20% if everything is cached) performance hit, but I think that
> there should be other bottleneck(s) besides this.

Your test is too optimistic.  Try a concurrency level of 200.  Another
bottleneck is the asyncore select loop, which has an O(n) delay for each
I/O operation, where n is the number of connections currently open.
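
A sketch of an asyncore-style poll pass (not the actual asyncore source)
shows where that O(n) cost comes from: every pass rescans the whole socket
map even if only one connection is ready.

import select

def poll_once(socket_map, timeout=1.0):
    # build the reader/writer lists by scanning every open channel -- O(n)
    readers = [fd for fd, ch in socket_map.items() if ch.readable()]
    writers = [fd for fd, ch in socket_map.items() if ch.writable()]
    r, w, e = select.select(readers, writers, [], timeout)
    # dispatch the ready channels; the scans above repeat on every pass,
    # so a single ready socket still pays for all n open connections
    for fd in r:
        socket_map[fd].handle_read_event()
    for fd in w:
        socket_map[fd].handle_write_event()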

Shane
