[Zope] The Principles of Designing a Heavy Load Site with Zope
sean.upton@uniontrib.com
Mon, 04 Mar 2002 13:13:24 -0800
I'll chime in with some of my thoughts and questions...
Storage: why not something like logical volume management, with an LV
group over a handful of DAS devices? That seems cheaper than one big
array. I would think 5 TB is doable on a 2-box cluster with 4-6
midrange DAS units per box, connected via dual-bus SCSI or Fibre, at
likely less than $250,000 if you shop for some bargains... Then again,
size may not be as much of an issue as load?
Load balancing: an L4 switch (or LVS?) in front of a group of Squid
boxes in front of a decent-sized ZEO cluster. Most L4 switches will do
DSR/OPR (direct-send or out-of-path return) instead of proxy mode,
sending packets directly from the balanced servers to the requesting
client. But this packet-rewriting technique means the load balancer
cannot detect whether the node it's sending packets to is up, since it
never sees a response, so you usually have to use IP-takeover based
clustering (Linux FailSafe or Heartbeat) to take over for a dead
proxy/caching server. In theory, this approach would eliminate a single
load balancer as a bottleneck, because it becomes a packet pusher
rather than a brokering proxy; that would make something like this
ideal for directing traffic to multiple proxy servers under high
load... Price tag: $20,000 for 2x L4 switches, $20,000 for 4 decent
proxy boxes to run Squid.
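A minimal sketch of that out-of-band liveness check in Python (the
hostnames are hypothetical; a Heartbeat/FailSafe resource script would
act on the result):

    # Probe each proxy node directly, since in DSR mode the balancer
    # never sees return traffic and can't judge liveness on its own.
    import socket

    def is_alive(host, port=80, timeout=2.0):
        """True if the node accepts a TCP connection and speaks HTTP."""
        try:
            s = socket.create_connection((host, port), timeout)
        except OSError:
            return False
        try:
            s.sendall(b"HEAD / HTTP/1.0\r\n\r\n")
            return s.recv(12).startswith(b"HTTP/")
        finally:
            s.close()

    for node in ("squid1.example.com", "squid2.example.com"):
        print(node, "up" if is_alive(node) else "DOWN - take over its IP")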
Caching/Proxying: in response to the original post, I would think it
would make more sense to bypass Apache altogether and proxy to ZServer
(with the ICP patches) directly from a competent load-balancing,
ICP-capable caching proxy like Squid. If nothing else, have Squid
cache the page elements that can be cached (all static pages, images,
client-loaded JavaScript source files) to alleviate some stress on
your Zope server while still keeping things relatively simple. Then
again, I've never tried Apache+PCGI with Zope...
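For Squid to cache those elements, Zope has to mark them cacheable
explicitly; a rough sketch as a Zope Script (Python), where the
one-hour TTL and the "logo" image are just example values:

    # Set explicit cache headers so an upstream Squid can answer
    # repeat requests itself instead of forwarding them to ZServer.
    # The one-hour lifetime is an arbitrary example value.
    response = context.REQUEST.RESPONSE
    response.setHeader('Cache-Control', 'public, max-age=3600')
    # Serve a hypothetical Image object ("logo") with that header.
    return context.logo.index_html(context.REQUEST, response)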
Rendering: my question is, when does a ZEO connection for ZODB-stored
content become the bottleneck? I realize a replicated RDB solves this
problem to some extent for those using ONLY relational data for
content in their apps, but I'm personally more curious about intensive
use of the ZODB. There are issues of bandwidth to, and processing
power of, a ZSS (the ZEO storage server). Can ZEO (yet) support using
ClientStorage on an intermediary ZSS, so that client caching works in
a hierarchy of ZSS boxes? I know this wasn't technically possible a
while back, at least from what I remember reading on the ZODB-dev
list, and I wonder whether it would actually yield better performance
in most real-world scenarios.
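For reference, a plain one-level ZEO client connection looks roughly
like this (the address and cache size are made-up values); the open
question is whether an intermediary ZSS could itself sit on such a
ClientStorage, turning its cache into a mid-tier:

    # Sketch of a ZODB/ZEO client connection. Each client keeps a
    # local object cache in front of the storage server; a hierarchy
    # of ZSS boxes would try to exploit that caching at an
    # intermediate tier rather than only at the leaves.
    from ZEO.ClientStorage import ClientStorage
    from ZODB import DB

    storage = ClientStorage(('zss.example.com', 8100),
                            cache_size=200 * 1024 * 1024)  # 200 MB cache
    db = DB(storage)
    conn = db.open()
    root = conn.root()  # object loads hit the local cache before the ZSS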
Page Impressions & Rendering: I'm looking for an example of how Zope
would handle extreme dynamic loads in a fairly straightforward
case... For a single Zope server (direct to ZServer), what kind of
numbers (Req/sec) would it be reasonable to assume you could serve for
an otherwise static page that used a dtml-method rendering results
from a Catalog query returning a dozen results (assuming direct reads
from metadata tables - no getObject() calls) on new, fast hardware
(say, an Athlon 1.6GHz + lots of DDR, using FileStorage on a striped
volume on fast disks)? Could a single box like this sustain 25-30
Req/sec in ideal conditions, assuming very aggressive use of
aggregated HTTP keepalive connections was put in front of it via a
proxy? Personally, what I would eventually like to be able to do is
serve 100 simple and uncached Req/sec along these lines from a 4-5
node cluster of ZEO clients on 2-CPU Athlon MP boxes running 2
Zope+ClientStorage instances each, with load distributed via Squid+ICP
(assuming these requests were always a cache MISS in Squid)...
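To pin down what "rendered from metadata only" means, this is the kind
of per-request work I have in mind, as a Script (Python) sketch
(catalog, index, and column names are hypothetical):

    # Query the catalog and build the page fragment from the result
    # "brains" alone: metadata-table reads, no getObject() calls, so
    # no extra ZODB object loads per request.
    results = context.Catalog(meta_type='Story', sort_on='date',
                              sort_order='descending')[:12]
    items = []
    for brain in results:
        # Title and summary are assumed metadata columns.
        items.append('<li>%s - %s</li>' % (brain.Title, brain.summary))
    return '<ul>\n%s\n</ul>' % '\n'.join(items)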
Sean
-----Original Message-----
From: Matthew T. Kromer [mailto:matt@zope.com]
Sent: Monday, March 04, 2002 7:49 AM
To: zope@zope.org
Cc: iap@y2fun.com
Subject: Re: [Zope] The Principles of Designing a Heavy Load Site with Zope
I think there are some fundamental misunderstandings going on here, but
I thought it would be interesting to try to respond anyway.
iap@y2fun.com wrote:
> Hi,
>
> This issue has been discussed again and again,
>
> I would like to clarify my idea, and your comments will be much
> appreciated.
>
> Suppose we want to provide a server which is:
>
> 1) Hosting 1,000,000 members' profiles. Each member's disk quota is 5MB.
>
> Which means we need at least 5,000 GB (5 TB) of disk space.
>
> 2) Assume concurrent access to a URL is 1000 requests per second
>
> 3) Assume all the requests retrieve dynamic content.
>
> 4) We want to leverage the power of Zope, which means all the pages
> should be rendered by Zope.
>
Having 5 TB of disk space usually means some very high-powered RAID
gear; my personal favorite is the EMC Symmetrix line. I think you would
probably want at least two of those units to provide your coverage.
Estimated cost for this is about $5,000,000 (but *very* dependent on
EMC's pricing strategies).
You could get by for less by distributing each disk with each CPU (the
breadrack approach).
1000 requests/second isn't terribly high; Zope installations have done
400/sec with no problem. However, those are situations where Zope is
heavily cached: less than 10% of the requests are actually being
rendered by Zope. So, if you want no caching (i.e., everything is
completely dynamic content), my estimate is that you would need
something like 100 1GHz Intel Pentium III class machines to perform
that amount of dynamic rendering. If each of those machines had a
50 GB disk drive, you'd theoretically have your 5 TB of disk space.
At a rough commercial cost of $4,000 per unit (probably a bit high),
that's only $400,000.
As a practical matter, you'd then need some pretty hefty load balancing
servers; at least two, possibly more.
However, that raises the question of how you organize that much disk
space. It's not an easy task. Whether or not you use an RDBMS is
irrelevant until you can work out a strategy for using the disk space
scattered amongst all of those machines.
* * *
Now, if you forget for a moment about requiring each page to be
dynamically rendered each time it is viewed, and you set aside the
storage questions, you could estimate that, with a 90% caching rate, you
could serve 1000 requests/sec with only about 14 machines (10 renderers,
2 cache servers, and 2 load balancers). Estimated cost for that is
$56,000.
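The arithmetic behind both estimates, spelled out in Python (the ~10
dynamically rendered requests/sec per box is the rate both figures
imply, not a measured number):

    total_rps = 1000
    per_box_rps = 10

    # Fully dynamic: 100 renderers, which also adds up to 5 TB of disk.
    dynamic_boxes = total_rps / per_box_rps    # 100 machines
    disk_tb = dynamic_boxes * 50 / 1000.0      # 100 x 50 GB = 5 TB
    dynamic_cost = dynamic_boxes * 4000        # $400,000

    # With a 90% cache hit rate, only 100 req/sec are actually rendered.
    rendered_rps = total_rps * (1 - 0.90)      # 100 req/sec
    renderers = rendered_rps / per_box_rps     # 10 boxes
    print(dynamic_boxes, disk_tb, dynamic_cost, renderers)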
What is most unrealistic about this scenario are your assumptions
about the member base and its ratio to expected activity. One million
users may only generate 1,000 requests/sec, but they certainly could
generate a lot more. In fact, a critical strategy for large systems
like this is anticipating "peak demand" events. Let's say you send an
e-mail out
to all million people, telling them to log in and check out a particular
URL. That timed event will generate a demand curve that is not evenly
distributed over time; in fact, it is usually very front-loaded. Within
about 5 minutes, more than 10% of the user base will respond.
This is a raw rate of about 333 requests/sec, but that presumes the
single URL is the only thing they load; usually, a page contains
images and other content (style sheets etc.) which must also be
fetched. Pages with high art content can have 25 elements or more on
them. That pushes the request rate up to 8333 requests/sec, way beyond
the 1000 requests/sec bound.
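In Python, for concreteness:

    # Peak-demand arithmetic: 10% of the member base responding
    # within five minutes, each page pulling about 25 elements.
    users = 1000000
    responders = users * 0.10       # 100,000 requests...
    window = 5 * 60                 # ...within 300 seconds
    page_rps = responders / window  # ~333 page requests/sec
    elements = 25
    total_rps = page_rps * elements # ~8333 requests/sec
    print(round(page_rps), round(total_rps))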
> The principles I would like to verify are:
>
> 1) Some database (RDBMS) should be used instead of FileStorage for ZODB.
>
> 2) ZEO should be used for constructing a computing cluster.
>
> 3) Apache should be the front end instead of ZServer.
>
> 4) PCGI should be the connection between Apache and Zope.
>
> 5) I shouldn't create static instances in the ZODB, but instead query
> the external database.
>
> 6) The "Cache" of zope is useless since all the responses are dynamic
> rendered.
>
> By the way, how much will this kind of system cost, regardless of the
> hardware?
>
> Iap, Singuan
>
--
Matt Kromer
Zope Corporation http://www.zope.com/