Terry Hancock wrote:
> I have a Zope site which I'm doing a lot of development on, and all internal safeguards aside, I feel there's a significant chance of wrecking Zope in the process. Then it might take anywhere from hours to days to get it back up again.
Aside from making sure you have a sane development and deployment process, you could consider using a load balancer, set up to point to the primary (development?) server normally and switched to a backup one when things fail. There are a bunch of hardware and software ones - I've used both. Hardware is better, obviously, but you could also consider something like the 'balance' tool from balance.sf.net.

Anthony
Linux Virtual Server also works well as a load balancer. Additionally, Toby Dickenson's recent patch for Zope that allows it to peer with Squid as an ICP server (http://www.zope.org/Members/htrd/icp) is really interesting: it allows for easy failover at the app level as well as rudimentary balancing at the IP level. In comparison, LVS requires that you poll each Zope behind the balancer every so often if you want to detect a failure and react, and unlike the fancy $30k load balancers it doesn't look at HTTP response codes to decide whether a server returning, for instance, a "503" error can be taken out of rotation. Zope as an ICP peer has this built in, because it takes the first non-error response it gets from any Squid peer.
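The selection rule described here can be sketched in a few lines (an illustrative model only, not the actual patch: the real mechanism speaks ICP, a UDP-based protocol, between Squid and its peers):

```python
# Sketch of ICP-style peer selection: query every peer and use the
# first non-error reply. All names here are illustrative; the real
# patch exchanges ICP messages over UDP rather than HTTP codes.

def first_non_error(replies):
    """Return the first (peer, status) pair whose status is a
    success code, or None if every peer reported an error."""
    for peer, status in replies:
        if status < 400:          # 2xx/3xx count as usable
            return peer, status
    return None

# A peer returning 503 is skipped automatically -- no separate
# "take it out of rotation" step is needed.
replies = [("zope1", 503), ("zope2", 200), ("zope3", 200)]
```

The point of the sketch is that failure handling falls out of the selection rule itself, rather than requiring an external monitor to edit balancer rules.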
On Tuesday 22 January 2002 08:48 am, Chris McDonough wrote:
> Linux Virtual Server works well also as a load balancer. [...]
Look at Piranha (ha.redhat.com), which is LVS plus some other tools. I'm currently using Piranha to load-balance ZEO clients, with nanny (a Piranha tool) set to check each ZEO client's availability by checking the HTTP response (200). If nanny receives a 200, all's OK; otherwise, nanny will take that particular ZEO client out of the available pool and add it back automatically when it gets the proper response again.
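The nanny behaviour described above might be sketched like this (all names and signatures are illustrative assumptions, not Piranha's actual code):

```python
# Sketch of a nanny-style health check (hypothetical, not Piranha's
# implementation): poll each backend over HTTP and keep only those
# that answer 200 in the available pool.
import http.client

def probe(host, port, path="/", timeout=2.0):
    """Return the HTTP status code, or None if the backend is unreachable."""
    try:
        conn = http.client.HTTPConnection(host, port, timeout=timeout)
        conn.request("GET", path)
        status = conn.getresponse().status
        conn.close()
        return status
    except OSError:
        return None

def update_pool(pool, backend, status):
    """Add or remove a backend from the pool based on its last probe."""
    if status == 200:
        pool.add(backend)
    else:
        pool.discard(backend)
    return pool
```

Run in a loop every few seconds, this gives exactly the behaviour described: a backend drops out on anything other than 200 and rejoins automatically when it recovers.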
On Mon, 21 Jan 2002 19:48:41 -0500, "Chris McDonough" <chrism@zope.com> wrote:
> Linux Virtual Server works well also as a load balancer. Additionally, Toby Dickenson's recent patch for Zope
Hey, that's me.
> that allows it to peer with Squid as an ICP server (http://www.zope.org/Members/htrd/icp) is a really interesting thing; it allows for easy failover at the app level
Failover seems to be working well.
> as well as rudimentary balancing at the IP level.
Load *balancing* was an unexpected bonus, and I am surprised how well it works. I actually developed the system because I wanted the opposite: load *clustering*. I have some large objects that are expensive to transfer from my ZEO server. It is better to send all the requests relating to such an object to Zopes that already have the object cached, even if that leads to an uneven load distribution.
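One way to approximate this "load clustering" is hash-based affinity, sketched below (an assumption for illustration; the ICP setup achieves affinity differently, via cache hits at the peers):

```python
# Sketch of hash-based object affinity (hypothetical; not how the
# ICP patch works). Requests for the same object always land on the
# same Zope, so that Zope's ZEO cache stays warm for it, at the cost
# of a possibly uneven load distribution.
import hashlib

BACKENDS = ["zope1", "zope2", "zope3"]

def pick_backend(object_path, backends=BACKENDS):
    """Deterministically map an object path to one backend."""
    digest = hashlib.md5(object_path.encode()).digest()
    return backends[int.from_bytes(digest[:4], "big") % len(backends)]
```

Because the mapping is deterministic, a given large object is only ever fetched from ZEO by one backend rather than by all of them.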
> In comparison, LVS requires that you poll each Zope behind the balancer every so often if you want to detect a failure and react, and unlike any of the fancy $30k load balancers it doesn't look at HTTP response codes to decide if a server returning a "503" error for instance can be taken out of rotation. Zope as an ICP peer has this built-in because it takes the first nonerror response it gets from any Squid peer.
That's the goal, but my ICP patches aren't quite there yet. It is possible for a half-dead Zope to still respond to ICP. I think the ideal solution would use both methods: ICP involves checking before *every* request, so the check has to be quick and it can't be very thorough; however, it can respond quickly to catastrophic failures. Separately, perform more thorough checks every so often, where you can tune the frequency of the checks against the cost and thoroughness of each one. I can't think of anything better than the LVS-style poll, since that data path exercises the whole system. Squid has a similar system, but it is not yet merged into the trunk: http://squid.sourceforge.net/rproxy/backend.html#healtcheck

Toby Dickenson
tdickenson@geminidataloggers.com
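The two-tier scheme proposed here might look roughly like this (a hypothetical sketch; all names are invented):

```python
# Sketch of two-tier health checking: a cheap per-request check
# (the ICP role) catches catastrophic failures immediately, while a
# slower periodic "thorough" check (the LVS-style poll) catches
# half-dead backends that still answer the quick check.

class Backend:
    def __init__(self, name):
        self.name = name
        self.alive = True       # result of the cheap per-request check
        self.healthy = True     # result of the periodic thorough check

    def usable(self):
        # A backend must pass BOTH tiers to receive traffic.
        return self.alive and self.healthy

def choose(backends):
    """Return the first backend that passes both tiers, else None."""
    for b in backends:
        if b.usable():
            return b
    return None
```

The design point is that the two checks have different budgets: the per-request check must cost almost nothing, while the periodic one can afford to exercise the whole request path.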
> I have some large objects that are expensive to transfer from my ZEO server. It is better to send all the requests relating to that object to Zopes that already have the object cached, even if that leads to an uneven load distribution.
"Object affinity" or somesuch. Nifty.
> > Zope as an ICP peer has this built-in because it takes the first nonerror response it gets from any Squid peer.
> That's the goal, but my ICP patches aren't quite there yet. It is possible for a half-dead Zope to still respond to ICP.
What makes this good on top of polling is that it reduces the chance of anybody seeing a proxy error, even during the short window it takes to detect the failure and take the client out of rotation. This makes maintenance pretty easy: if the ICP system works, you can take machines out and add them "willy-nilly" without worrying about changing balancer rules. I think this is pretty important... I like the patch! ;-)

- C
-> deployment process, you could consider using a load balancer,
-> set up to point to the primary (development?) server normally,
-> and switching it to a backup one when things fail.

FYI: "load balancing" is not the same as "fail over". "Switching it to a backup one when things fail" describes a fail-over system, where no load balancing is happening. (If it were load balanced, the 'backup' would already be taking traffic just like the 'primary' normally does.)

-> tool 'balance' from balance.sf.net

Thanks for the reference; I'm very interested in any load-balanced and/or fail-over systems for Zope.

Thanks,
Derek
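The distinction can be made concrete with a short sketch (illustrative code only):

```python
# Fail-over vs. load balancing in miniature.
# Fail-over: the backup only carries traffic when the primary is down.
# Load balancing: every live backend carries traffic all the time.
import itertools

def failover_pick(primary, backup, primary_up):
    """Fail-over: all traffic goes to the primary until it fails."""
    return primary if primary_up else backup

def round_robin(backends):
    """Load balancing: an iterator spreading traffic over all backends."""
    return itertools.cycle(backends)
```

In the round-robin case the "backup" is warm and serving requests continuously, which is exactly the difference being pointed out.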
participants (5)
- Anthony Baxter
- bak
- Chris McDonough
- Derek Simkowiak
- Toby Dickenson