[Zope] large images to a database via zope.

Casey Duncan cduncan@kaivo.com
Fri, 13 Apr 2001 10:22:51 -0600


marc lindahl wrote:
> 
> Is this really true?  Has anyone benchmarked it?  After all, data.fs is
> stored in the filesystem and benefits from the same RAID, cache, etc.  Some
> relevant questions would be:
> 
> 1. how much space would, say, a 5MB image take in data.fs (of course, it
> takes 5MB stored in the FS)
> 
> 2. how long would it take to serve the same 5MB image to 1000 separate
> requests, comparatively? (this should test caching)
> 
> 3. how long would it take to serve, 1000 different 5MB images to 1000
> requestors simultaneously, comparatively? (this should test uncached
> performance)
> 
> > From: "Philippe Jadin" <philippe.jadin@123piano.com>
> >
> > It must be the same in Zope: it must take more time to extract data
> > from a large file in the filesystem (the data.fs) than to store a link
> > to it in the db and store the file itself on the filesystem. I guess
> > the filesystem is always the fastest way to retrieve data (for example
> > because it uses caching, raid optimizations...)
> 

Speed is probably not the biggest issue with storing large objects in
the ZODB. What makes it less practical is the undo support. Every time
you change a property on one of these large objects, a complete new copy
of the object is written to the storage. This can quickly eat up space.
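To make that concrete, here is a small sketch (illustration only, not the real ZODB API) of the append-only behavior: each commit writes a fresh full copy of the object, so a one-byte property change on a 5MB image still costs roughly 5MB of Data.fs space until you pack.

```python
# Illustration only -- mimics the append-only storage pattern, not
# the actual ZODB FileStorage API.

class AppendOnlyStore:
    def __init__(self):
        self.records = []            # each entry is one committed revision

    def commit(self, obj_state):
        # A full copy of the object's state is appended on every commit.
        self.records.append(bytes(obj_state))

    def size(self):
        return sum(len(r) for r in self.records)

store = AppendOnlyStore()
image = bytearray(5 * 1024 * 1024)   # a 5MB "image"

store.commit(image)                  # initial save: ~5MB on "disk"
image[0] = 1                         # change a single byte
store.commit(image)                  # another full ~5MB revision

# Total storage is now two full revisions (10485760 bytes),
# even though only one byte of the object actually changed.
```

Packing the storage discards old revisions (and with them the undo history), which is the trade-off this behavior forces on you.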

Also, there are some issues with Data.fs files when they grow to 2GB in
size (both OS file-size limits and Python's). This can happen rather
quickly if you store many large objects that change often. There are
ways around this, but they take some effort.
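One cheap precaution is simply to watch the file size. A minimal sketch, assuming a Data.fs at a known path (the path in the comment is only an example):

```python
import os

TWO_GB = 2 * 1024 ** 3

def datafs_size(path):
    """Return the size of a Data.fs in bytes, warning near the 2GB mark."""
    size = os.path.getsize(path)
    if size > 0.9 * TWO_GB:
        print("Warning: %s is within 10%% of the 2GB limit" % path)
    return size

# e.g. datafs_size("/usr/local/zope/var/Data.fs")
```

Run something like this from cron so you can pack the storage before the limit becomes a problem.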

Speed-wise, I think you will find the bottleneck to be ZPublisher more
than the ZODB, although I don't have hard numbers on this. If speed is
the biggest issue, put an HTTP accelerator (like Squid) in front of
Zope.
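For reference, a minimal accelerator setup might look like the following squid.conf fragment (Squid 2.x-era syntax; host and port values are examples, assuming Zope listens on 8080 on the same machine):

```
# squid.conf -- run Squid on port 80 as an accelerator in front of Zope
http_port 80
httpd_accel_host 127.0.0.1      # where ZServer listens
httpd_accel_port 8080
httpd_accel_single_host on
httpd_accel_with_proxy off
```

Squid then serves cached images directly and only passes cache misses through to ZPublisher.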

-- 
| Casey Duncan
| Kaivo, Inc.
| cduncan@kaivo.com
`------------------>