RE: [Zope] Zope Scalability
Jens wrote:
There is no answer to this question because it cannot really be answered: as far as I know, no one has come up against a "limit" yet. Hitting RAM limits depends on how much RAM you put in and on the settings you apply to the Zope/ZEO instance(s) that serve the content; you can control the ZODB memory cache size via zope.conf. No one sets up Zope so that the whole ZODB is loaded into memory, if that's what you mean.
I got the impression that he was asking about the maximum object size. I don't know whether there is a limit in Zope itself, but I've run into problems with PloneExFile when uploading files larger than about 25 MB. Who knows - it might be a bug in PloneExFile, AttachmentField, or Archetypes. I lack the expertise required to troubleshoot the error, so take this with the proverbial grain of salt. BTW: I'm using LDAPUserFolder in a 10,000-user Active Directory environment, and it works great. Best wishes, Matthew
On 5 Oct 2005, at 20:35, Matthew X. Economou wrote:
There is no answer to this question because it cannot really be answered: as far as I know, no one has come up against a "limit" yet. Hitting RAM limits depends on how much RAM you put in and on the settings you apply to the Zope/ZEO instance(s) that serve the content; you can control the ZODB memory cache size via zope.conf. No one sets up Zope so that the whole ZODB is loaded into memory, if that's what you mean.
I got the impression that he was asking about the maximum object size.
Well, the question was actually "What is the maximum size of this file and/or the maximum object ID? => just how many objects can the ZODB hold?". There is only a theoretical limit, which I believe has to do with the largest index key for the ZODB index, and that is some ludicrously high number that no one has ever reached. jens
[Jens Vagelpohl]
Well, the question was actually "What is the maximum size of this file and/or the maximum object ID? => just how many objects can the ZODB hold?". There is only a theoretical limit, which I believe has to do with the largest index key for the ZODB index, and that is some ludicrously high number that no one has ever reached.
If you're using FileStorage, a technical detail in the implementation of the FileStorage index limits the maximum file offset that can be used to 2**48-1, or about 281 terabytes. Object IDs are effectively 64-bit integers (masquerading as 8-byte strings).
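Tim's two numbers are easy to verify. The sketch below, which is an assumption-laden illustration rather than actual FileStorage code, computes the 2**48-1 offset ceiling and shows 64-bit object IDs packed as 8-byte big-endian strings, in the style of ZODB's p64/u64 helpers:

```python
import struct

# FileStorage index entries store file offsets in 6 bytes (48 bits),
# so the largest addressable offset is 2**48 - 1.
MAX_OFFSET = 2**48 - 1
print(MAX_OFFSET)  # 281474976710655, i.e. about 281 terabytes (decimal)

# Object IDs are 64-bit integers masquerading as 8-byte strings;
# these helpers mimic the p64/u64 convention (big-endian packing).
def p64(v):
    """Pack an integer oid into an 8-byte big-endian string."""
    return struct.pack(">Q", v)

def u64(s):
    """Unpack an 8-byte string back into an integer oid."""
    return struct.unpack(">Q", s)[0]

oid = p64(1)
assert oid == b"\x00\x00\x00\x00\x00\x00\x00\x01"
assert u64(oid) == 1
```

So "about 281 terabytes" is the decimal reading of 2**48 bytes; in binary units it is 256 TiB.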
--On 5. Oktober 2005 15:57:14 -0400 Tim Peters <tim.peters@gmail.com> wrote:
[Jens Vagelpohl]
Well, the question was actually "What is the maximum size of this file and/or the maximum object ID? => just how many objects can the ZODB hold?". There is only a theoretical limit, which I believe has to do with the largest index key for the ZODB index, and that is some ludicrously high number that no one has ever reached.
If you're using FileStorage, a technical detail in the implementation of the FileStorage index limits the maximum file offset that can be used to 2**48-1, or about 281 terabytes. Object IDs are effectively 64-bit integers (masquerading as 8-byte strings).
This would require how much RAM for the index? :-) -aj
[Tim Peters]
If you're using FileStorage, a technical detail in the implementation of the FileStorage index limits the maximum file offset that can be used to 2**48-1, or about 281 terabytes. Object IDs are effectively 64-bit integers (masquerading as 8-byte strings).
[Andreas Jung]
This would require how much RAM for the index? :-)
Unfortunately, that's a complicated question -- the index is an OOBTree mapping 6-byte strings to a specialized kind of BTree mapping 2-byte strings to 6-byte strings. The complications add up. The good news is that if there are only two objects, say each consuming 128 terabytes, the index has only two entries and is actually very small <wink>.
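The two-level layout Tim describes can be modelled in a few lines. This is a toy sketch, not the real fsIndex: plain dicts stand in for the OOBTree and the specialized inner BTree, but the key splitting is the same idea - the 8-byte oid is cut into a 6-byte prefix (outer key) and a 2-byte suffix (inner key), and the value is the file offset packed into 6 bytes:

```python
import struct

class ToyFsIndex:
    """Toy model of the FileStorage index: oid -> 48-bit file offset."""

    def __init__(self):
        # 6-byte oid prefix -> {2-byte oid suffix: 6-byte packed offset}
        self._data = {}

    def __setitem__(self, oid, offset):
        prefix, suffix = oid[:6], oid[6:]
        # Pack the offset into 8 bytes, then keep only the low 6 bytes
        # (valid because offsets never exceed 2**48 - 1).
        self._data.setdefault(prefix, {})[suffix] = struct.pack(">Q", offset)[2:]

    def __getitem__(self, oid):
        packed = self._data[oid[:6]][oid[6:]]
        return struct.unpack(">Q", b"\x00\x00" + packed)[0]

index = ToyFsIndex()
oid = struct.pack(">Q", 42)
index[oid] = 2**48 - 1          # the largest representable offset
assert index[oid] == 2**48 - 1
```

Because many oids share a 6-byte prefix, the outer mapping stays small until the database holds a huge number of objects - which is Tim's point about a two-object index being tiny.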
On 10/5/05, Andreas Jung <lists@andreas-jung.com> wrote:
This would require how much RAM for the index? :-)
It can hold 16 billion billion pointers, and with each pointer being 48 bits, that's 96 exabytes (100 million terabytes) - just for the index, and as an absolute minimum. Since indexes are normally BTrees or something like that, you would surely need several times that in practice. :) Sorry for any calculation errors, but I can be rather a lot off without it making a difference, so I can't be bothered to check. ;) -- Lennart Regebro, Nuxeo http://www.nuxeo.com/ CPS Content Management http://www.cps-project.org/
On 5 Oct 2005, at 20:57, Tim Peters wrote:
[Jens Vagelpohl]
Well, the question was actually "What is the maximum size of this file and/or the maximum object ID? => just how many objects can the ZODB hold?". There is only a theoretical limit, which I believe has to do with the largest index key for the ZODB index, and that is some ludicrously high number that no one has ever reached.
If you're using FileStorage, a technical detail in the implementation of the FileStorage index limits the maximum file offset that can be used to 2**48-1, or about 281 terabytes. Object IDs are effectively 64-bit integers (masquerading as 8-byte strings).
If that doesn't fall into the "ludicrous" category, I don't know what does ;) I suppose we'll all be safe for a while. jens
participants (5)
- Andreas Jung
- Jens Vagelpohl
- Lennart Regebro
- Matthew X. Economou
- Tim Peters