[ZODB-Dev] ZODB memory problems (was: processing a Very Large file)
Tino Wildenhain
tino at wildenhain.de
Sun May 29 04:37:50 EDT 2005
On Sunday, 29.05.2005 at 09:51 +0200, Andreas Jung wrote:
>
> --On 29 May 2005 11:29:06 +0200 Christian Theune <ct at gocept.com> wrote:
>
> > On Saturday, 21.05.2005 at 17:38 +0200, Christian Heimes wrote:
> >> Grab the Zope2 sources and read lib/python/OFS/Image.py. Zope's
> >> OFS.Image.Image class (and also Zope3's implementation) uses a
> >> so-called "possibly large data" class (Pdata) that is a subclass of
> >> Persistent.
> >>
> >> Pdata uses a simple and ingenious approach to minimize the memory
> >> usage when storing large binary data in ZODB. The data is read from a
> >> [...]
> >
> > Actually, Pdata has some drawbacks. Once the blobsupport branch is
> > declared stable (I don't think that will happen in 3.4, but nobody has
> > told me otherwise), we'll have really good blob support without this
> > black magic.
In particular, the ZEO handling of blobs could be improved, IIRC.
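
For anyone who hasn't read OFS/Image.py: the core idea is roughly what the
sketch below shows. This is a minimal, hypothetical illustration of a
Pdata-style chained chunk structure, not the real Zope code; the class
names, chunk size, and methods are made up for the example.

# Minimal sketch of a Pdata-style chunked storage scheme (hypothetical,
# not the actual OFS.Image code).  Each chunk is its own Persistent
# object, so ZODB can load and evict chunks independently instead of
# keeping the whole file in memory at once.
from persistent import Persistent

CHUNK_SIZE = 1 << 16  # 64 KB chunks (assumed for this sketch)

class Chunk(Persistent):
    def __init__(self, data):
        self.data = data
        self.next = None        # chunks form a simple linked list

class ChunkedFile(Persistent):
    def __init__(self, fileobj):
        self.head = None
        self.size = 0
        last = None
        while True:
            data = fileobj.read(CHUNK_SIZE)
            if not data:
                break
            chunk = Chunk(data)
            if last is None:
                self.head = chunk
            else:
                last.next = chunk
            last = chunk
            self.size += len(data)

    def iter_chunks(self):
        """Yield the file contents chunk by chunk; only one chunk needs
        to be in memory (and in the ZODB object cache) at a time."""
        chunk = self.head
        while chunk is not None:
            yield chunk.data
            chunk = chunk.next

Because every chunk is its own persistent object (its own record in the
storage), the cache can load and evict chunks individually, so neither
writing nor reading a large file requires holding all of it in RAM.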
>
> The Pdata approach in general is not bad. I recently implemented a CVS-like
> file repository where we store binary content using a Pdata-like structure.
> Our largest files are around 100 MB, and the performance and efficiency are
> not bad, although they could be better. The bottleneck is either the ZEO
> communication or just the network.
> I reach about 3.5 MB/second while reading such a large file from the ZEO
> server.
That's not bad, given that this would at least saturate most customers'
downstream :)
Would a multi-threaded ZEO server improve anything here, especially with
concurrent access?
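
Just to put rough numbers on that figure (pure back-of-the-envelope; the
64 KB chunk size and the 1 ms per-load round trip are assumptions, only the
100 MB file size and the 3.5 MB/s rate come from above):

# Rough estimate of where the time goes when reading a large chunked file
# from ZEO.  All inputs except the file size and the observed 3.5 MB/s are
# assumptions for illustration.
file_size = 100 * 1024 * 1024        # ~100 MB file (from above)
chunk_size = 64 * 1024               # assumed 64 KB chunks
rtt = 0.001                          # assumed 1 ms round trip per uncached chunk load

chunks = file_size / chunk_size                     # 1600 chunk loads
latency_overhead = chunks * rtt                     # ~1.6 s waiting on round trips
transfer_time = file_size / (3.5 * 1024 * 1024)     # ~29 s total at the observed rate

print("%d chunk loads, ~%.1f s latency, ~%.1f s transfer" %
      (chunks, latency_overhead, transfer_time))

If those per-chunk round trips are what dominates, prefetching the next
chunk (or serving several clients concurrently) should help; if the network
is already saturated, a multi-threaded server probably won't change the
single-client numbers much.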