[Zope-dev] 60GB Data.fs?
Bjorn Stabell
bjorn@exoweb.net
Wed, 6 Jun 2001 11:57:18 +0800
Hi there,
We're planning a Yahoo! Clubs-like system that should scale to about
30,000 users. Assuming about 3,000 groups and 20MB per group (group
functionality includes photo albums), that gives a database size of
roughly 60GB.
Assuming an average of 3,000 users per day and 20 page views per user,
that gives about 60,000 page views per day (not a lot, but what if it's
all dynamically generated?).
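For reference, here's the back-of-the-envelope math as a small Python
snippet; every figure in it is just one of the assumptions above:

    # Back-of-the-envelope sizing; every figure is an assumption from above.
    groups = 3000
    mb_per_group = 20                       # photo albums dominate
    db_size_gb = groups * mb_per_group / 1000.0
    users_per_day = 3000
    views_per_user = 20
    views_per_day = users_per_day * views_per_user
    avg_rate = views_per_day / 86400.0      # requests/second, averaged
    peak_factor = 10                        # guess: peak-hour multiplier
    print("db size:    %.0f GB" % db_size_gb)             # 60 GB
    print("page views: %d/day" % views_per_day)           # 60000
    print("avg %.2f req/s, peak maybe %.0f req/s"
          % (avg_rate, avg_rate * peak_factor))           # ~0.7 / ~7

Even fully dynamic, ~7 requests/second at peak doesn't sound scary; the
60GB store is the part I'm less sure about.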
We'd like to use Zope for this, if possible. Other options are okay
too; does anyone have experience with other ready-made systems?
At this scale, how would ZODB hold up with respect to memory use and
speed? I've heard that FileStorage loads an index of all objects into
memory on start-up.
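If that index maps every object id to a file offset, here's my rough
guess at what it could cost at 60GB; the average object size and the
per-entry overhead are pure assumptions:

    # Rough guess at FileStorage's in-memory index for a 60GB Data.fs.
    # Both figures below are assumptions, not measurements.
    db_size = 60 * 10**9             # 60GB store
    avg_object_size = 50 * 1024      # assume 50KB average (photos are big)
    per_entry_cost = 100             # assume ~100 bytes per oid->offset entry
    num_objects = db_size // avg_object_size
    index_mb = num_objects * per_entry_cost / 2.0**20
    print("%d objects -> index ~%.0f MB RAM" % (num_objects, index_mb))
    # ~1.2M objects, ~110 MB; very sensitive to the average object size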
How would using Oracle as a Storage (ZODB semantics) help?
Going full-scale RDBMS would mean reimplementing a lot of existing
useful tools, so we'd rather avoid that if we can stay with Zope.
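My understanding is that storages are pluggable, so the application
never notices which one is underneath; only the line that opens the
database changes. A minimal sketch (the OracleStorage name below is
hypothetical, just to show where it would plug in):

    # Opening the database; only the storage line changes if we swap
    # FileStorage for an Oracle-backed storage.
    from ZODB import DB
    from ZODB.FileStorage import FileStorage

    storage = FileStorage('Data.fs')       # what we'd start with
    # storage = OracleStorage(...)         # hypothetical drop-in swap
    db = DB(storage)
    conn = db.open()
    root = conn.root()                     # same ZODB semantics either way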
I know we'll have to play with caching as well, and as I see it there
are these options (a rough sketch of the basic caching idea follows the
list):
- SQL Method caching
- Using StandardCacheManagers to cache Python Scripts and DTML Methods
- Using StandardCacheManagers to cache whole pages (using, e.g., Squid
as an HTTP accelerator)
- ZEO client object caching
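As far as I can tell, these all boil down to keeping a rendered result
around for a maximum age. A toy sketch of that idea in plain Python
(not any Zope API, just the concept):

    import time

    # Toy version of the time-based caching idea: keep a rendered
    # result and reuse it until it's older than max_age seconds.
    class TTLCache:
        def __init__(self, max_age=60):
            self.max_age = max_age
            self.data = {}                 # key -> (timestamp, value)

        def get(self, key, render):
            now = time.time()
            hit = self.data.get(key)
            if hit is not None and now - hit[0] < self.max_age:
                return hit[1]              # fresh enough, skip rendering
            value = render()               # the expensive dynamic part
            self.data[key] = (now, value)
            return value

    cache = TTLCache(max_age=300)
    page = cache.get('/groups/42', lambda: "...rendered group page...")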
Any other ideas?
Bye,
--
Bjorn Stabell <bjorn@exoweb.net>