[Zope] large images to a database via zope.
ethan mindlace fremen
mindlace@digicool.com
Mon, 16 Apr 2001 23:18:45 -0400
--On Friday, April 13, 2001 09:25:22 -0400 ghaley@mail.venaca.com wrote:
>
> hi,
>
> thank you, Philippe, for your thoughtful comments.
>
> The concern about the size of the Data.fs is exactly why we have used an
> external db to store most of the content for our site. I have worried
> about the integrity of the binary data being moved in and out of a db,
> especially the larger items. Things below a meg or so don't seem to be
> affected at all, but I've not done enough testing of the files that are
> split, then retrieved and rejoined, to say with the same confidence
> whether or not they are getting corrupted.
Hi,
The zope.org Data.fs recently cleared 2 GB in size, with something on the
order of 200,000 objects. We store a number of reasonably sized objects
(around 1.5 MB) in the ZODB, and while I am confident that the ZODB does
not corrupt them, I do recommend putting some proxy-cache technology in
front of Zope, because the ZODB is not optimized for rapid delivery of
large binary objects. The other issue is that Zope currently has a "hard"
timeout: if a request takes more than 30 minutes to send, Zope gives up,
which is another reason for the proxy cache.
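On the corruption question itself: one way to settle it end to end is to
record a checksum before the data goes in and compare it after the chunks
come back out and are rejoined. Here is a minimal sketch in plain Python;
the 1 MB chunk size and the dict standing in for the storage backend are
illustrative assumptions, not anything Zope or the ZODB provides:

    import hashlib

    CHUNK_SIZE = 1024 * 1024  # split into ~1 MB pieces (arbitrary choice)

    def split_into_chunks(data, chunk_size=CHUNK_SIZE):
        # Break one large binary string into a list of smaller pieces.
        return [data[i:i + chunk_size] for i in range(0, len(data), chunk_size)]

    def checksum(data):
        # MD5 is plenty for detecting accidental corruption in transit.
        return hashlib.md5(data).hexdigest()

    def store_and_verify(data, store):
        # 'store' is just a dict standing in for whatever backend
        # (ZODB, external RDBMS, ...) actually holds the chunks.
        original_digest = checksum(data)
        for index, chunk in enumerate(split_into_chunks(data)):
            store[index] = chunk
        # Retrieve the chunks in order, rejoin them, and compare digests.
        rejoined = b"".join(store[i] for i in sorted(store))
        if checksum(rejoined) != original_digest:
            raise ValueError("data was corrupted on the round trip")
        return original_digest

    # Example: round-trip about 5 MB of test data through the fake store.
    digest = store_and_verify(b"\x00\x01\x02" * (5 * 1024 * 1024 // 3), {})

If the digests match after a round trip through your real backend, the
split/retrieve/rejoin code is not losing bytes; if they don't, you know
exactly which layer to blame.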
Until I see compelling evidence otherwise, my stance is that the ZODB is
just as safe a place to store your data as a relational database, and soon
(with replicated storage and Berkeley storage) it will offer compelling
scalability for read-predominant environments that I don't think any
relational database can match for less than six figures.
--
-mindlace-
zopatista community liaison