[ZODB-Dev] [ZEO] Storage error with big transactions.
Jim Fulton
jim at zope.com
Tue Feb 13 06:03:03 EST 2007
On Feb 12, 2007, at 12:25 PM, Andreas Jung wrote:
> I have the following script to emulate a long running writing ZEO
> client
> by writing 100MB to a page template:
>
> import transaction
>
> pt = app.foo
> while 1:
>     data = '*' * 100000000
>     T = transaction.begin()
>     pt.pt_edit(data, 'text/html')
>     T.commit()
>     print 'done'
>
> This script fails badly during the first commit() call. Is this a
> bug or a feature? I am using Zope 2.10.2 on Mac OS X (Intel).
Based on the traceback you gave, this looks like a bug. I've
noticed, however, that large database records can lead to memory
errors at sizes much smaller than I would expect. If the problem is
ultimately traced to a hidden memory error, there's not much that can
be done. In the long run, I expect we'll advise that "large" objects
be put in blobs, where "large" might be smaller than one might
expect. For example, I've seen 90MB records lead to memory errors
even on machines with hundreds of megabytes free.
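For reference, the idea behind the blob advice above is that blob data is
streamed to its own file on disk, so the transaction only stores a small
reference instead of pickling one enormous string into a single database
record. A minimal sketch of that streaming idea using only the standard
library (the write_in_chunks helper and chunk size are illustrative, not
ZODB API):

```python
import tempfile

CHUNK_SIZE = 1 << 20  # write 1 MiB at a time instead of one 100 MB string

def write_in_chunks(dst, total_bytes, chunk=CHUNK_SIZE):
    """Stream `total_bytes` of filler data to a file-like object
    without ever materializing the full payload in memory."""
    written = 0
    block = b'*' * chunk
    while written < total_bytes:
        n = min(chunk, total_bytes - written)
        dst.write(block[:n])
        written += n
    return written

# Blob-style storage: the payload lives in its own file on disk,
# so no single in-memory string or database record has to hold it all.
with tempfile.TemporaryFile() as f:
    total = write_in_chunks(f, 5 * CHUNK_SIZE)
    print(total)  # 5242880
```

With a real ZODB blob the pattern is similar: you open the blob for
writing and stream data into it, rather than handing the storage one
giant string as the script above does.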
Jim
--
Jim Fulton mailto:jim at zope.com Python Powered!
CTO (540) 361-1714 http://www.python.org
Zope Corporation http://www.zope.com http://www.zope.org