[ZODB-Dev] ZODB memory problems (was: processing a Very Large file)
Shane Hathaway
shane at hathawaymix.org
Sun May 29 05:08:45 EDT 2005
Tino Wildenhain wrote:
> On Sunday, 29.05.2005 at 09:51 +0200, Andreas Jung wrote:
>>The Pdata approach in general is not bad. I have implemented a CVS-like
>>file repository lately where we store binary content using a Pdata-like
>>structure. Our largest files are around 100 MB, and the performance and
>>efficiency are not bad, although they could be better. The bottleneck is
>>either the ZEO communication or just the network. I reach about 3.5
>>MB/second while reading such a large file from the ZEO server.
>
>
> That's not bad, given that this might at least saturate most customers'
> downstream :)
> Would a multi-threaded ZEO server improve anything here? Especially
> with concurrent access?
It's possible. Although ZEO talks over the network using async sockets,
it reads files synchronously, so I suspect it frequently sits around
doing nothing for 10 ms at a time (a typical disk seek), waiting for the
disk to return data. If your ZEO server has a load average of 1.0 or
more but low CPU usage, this is likely what is happening. The easiest
way to overcome this is to buy gigabytes of RAM for the ZEO
server--ideally, enough to hold your whole database in the operating
system's disk cache.
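One quick way to check where the time goes: time a sequential read of a
large file through ZEO and compare the rate to what the disk and the
network can each deliver. A minimal sketch (not from this thread; the
server address, the root key 'bigfile', and the chunk attributes 'data'
and 'next' are assumptions about a Pdata-like chain):

    import time
    from ZEO.ClientStorage import ClientStorage
    from ZODB import DB

    # Connect to an assumed ZEO server on localhost:8100.
    storage = ClientStorage(('localhost', 8100))
    db = DB(storage)
    conn = db.open()
    root = conn.root()

    start = time.time()
    total = 0
    chunk = root['bigfile']       # head of the Pdata-like chain (assumed)
    while chunk is not None:
        total += len(chunk.data)  # touching .data forces a load from ZEO
        chunk = chunk.next
    elapsed = time.time() - start
    print '%d bytes in %.1f s = %.2f MB/s' % (
        total, elapsed, total / elapsed / 1e6)

    conn.close()
    db.close()

If the measured rate is far below both disk and network bandwidth, the
server is probably spending its time waiting on the disk, one
synchronous read at a time.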
Also, the design of ZEO clients tends to serialize communication with
the ZEO server, so throughput between client and server is likely to be
limited significantly by network latency rather than bandwidth. "ping"
is a good tool for measuring latency; 1 ms is good and 0.1 ms is
excellent. There are ways to tune the network. You can also reduce the
effects of network latency by running many ZEO clients behind a load
balancer.
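To see why latency matters so much when requests are serialized, note
that one chunk load per round trip caps throughput at chunk size divided
by round-trip time, no matter how fast the disks are. A
back-of-the-envelope sketch (64 KB is an assumed chunk size, not a
measured one):

    # Latency ceiling for a single serialized ZEO client.
    chunk_size = 64 * 1024               # bytes per object load (assumed)
    for rtt_ms in (0.1, 1.0, 10.0):
        ceiling = chunk_size / (rtt_ms / 1000.0) / 1e6  # MB/s
        print 'RTT %4.1f ms -> ceiling %6.1f MB/s per client' % (
            rtt_ms, ceiling)

At 1 ms the ceiling is about 65 MB/s per client, so rates like the 3.5
MB/s quoted above suggest per-request overhead beyond raw latency;
running several clients in parallel still raises the aggregate.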
Shane