On Thu, 11 Feb 1999, Michel Pelletier wrote:
Thinking a bit more about this, I guess the problem is that each virtual interface on the machine (the typical method of doing multihosting) must go through its own RewriteRule, thus leading to its own Zope process.
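To illustrate the setup being described, here is a hypothetical Apache config sketch: one RewriteRule per virtual host, each proxying to a separate Zope process on its own port (the hostnames and port numbers are made up for the example):

```apache
# Hypothetical: each virtual host proxies to its own Zope instance.
<VirtualHost *>
  ServerName site1.example.com
  RewriteEngine On
  RewriteRule ^/(.*) http://localhost:8081/$1 [P,L]
</VirtualHost>

<VirtualHost *>
  ServerName site2.example.com
  RewriteEngine On
  RewriteRule ^/(.*) http://localhost:8082/$1 [P,L]
</VirtualHost>
```

With this layout, ten virtual hosts means ten RewriteRules and ten Zope processes, which is exactly the multiplication being complained about.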
The reason we decided to launch multiple processes is so that we wouldn't compound the blocking issue that occurs when multiple requests come in at the same time. I'd love to have only one server handle all of this. Or, even better, have it behave like Apache, where I can launch 10 servers and have them serve information from 100 DBs.
Perhaps we can solve the problem a bit with ZServer: maybe different Medusa interfaces could hook into different branches of the same Zope module, with all of those interfaces then funnelling into the same database. This is obviously not a rigorous examination.
I want them in separate DBs. The DB is where most of the concurrency issues are, correct?
Paul and I would like to do a rigorous examination of the memory fluctuations that the zope.org Python process goes through throughout the day. Paul told the list that it was 21.4 meg this morning (8am); I told the list it was 19-some meg a couple of hours later; a half hour ago it was at 28(!) meg, and now it's at 31.
So all you Linux fellows out there need to suggest a good way for us to track this on a regular basis. We propose a cron job that sniffs the /proc filesystem. What kind of information do *you* want to see in this study? *Scripts welcome.*
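As a starting point, here is a minimal sketch of such a script. It reads the Vm* fields from /proc/&lt;pid&gt;/status on Linux and appends one timestamped line to a log file; the PID and log path are whatever your cron entry passes in, and the choice of fields is just an assumption about what's worth tracking:

```python
import sys
import time

# /proc/<pid>/status fields we assume are worth logging (Linux).
FIELDS = ("VmSize", "VmRSS", "VmData")

def parse_status(text):
    """Extract the interesting Vm* fields from a /proc/<pid>/status dump."""
    info = {}
    for line in text.splitlines():
        key, _, value = line.partition(":")
        if key in FIELDS:
            info[key] = value.strip()  # e.g. "21400 kB"
    return info

def log_memory(pid, logfile):
    """Append one timestamped line of memory figures for `pid`."""
    with open("/proc/%d/status" % pid) as f:
        info = parse_status(f.read())
    stamp = time.strftime("%Y-%m-%d %H:%M:%S")
    fields = " ".join("%s=%s" % (k, info.get(k, "?")) for k in FIELDS)
    with open(logfile, "a") as out:
        out.write("%s %s\n" % (stamp, fields))

if __name__ == "__main__":
    # Example cron line (every 10 minutes, hypothetical paths):
    #   */10 * * * * /usr/local/bin/memlog.py 1234 /var/log/zope-mem.log
    log_memory(int(sys.argv[1]), sys.argv[2])
```

Graphing the resulting log over a day or two should make the growth pattern obvious.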
What files report information under /proc? If you receive any of these scripts, please pass them on to me. I think we will create a few beta sites for anybody interested on the list and then monitor their usage.

---------------------------------------------------
- Scott Robertson       Phone: 714.972.2299       -
- CodeIt Computing      Fax:   714.972.2399       -
- http://codeit.com                               -
---------------------------------------------------