RE: [Zope] Better Memory Management Feature Request
-----Original Message----- From: Scott Robertson [mailto:sroberts@codeit.com]
You are very correct. That's why our first Zope Virtual Hosting Machine will have between 0.5GB and 1GB of memory to start with. But it only took me 5 minutes to get the process to swell up to 18MB. That means some industrious/malicious user could conceivably swell theirs up to 100MB, or even the full GB, and then nobody else would get to play.
Thinking a bit more about this, I guess the problem is that each virtual interface on the machine (the typical method of multihosting) must go through its own RewriteRule, thus leading to its own Zope process. Perhaps we can solve the problem a bit with ZServer: maybe different Medusa interfaces could hook into different branches of the same Zope module, with all of those interfaces then funnelling into the same database. This is obviously not a rigorous examination.

Paul and I would like to do a rigorous examination of the memory fluctuations that the zope.org Python process goes through throughout the day. Paul told the list that it was 21.4 meg this morning (8am), I told the list that it was 19-some meg a couple of hours later, a half hour ago it was at 28(!) meg, and now it's at 31.

So all you Linux fellows out there need to suggest a good way for us to track this on a regular basis. We propose a cron job that sniffs the /proc filesystem. What kind of information do *you* want to see in this study? *Scripts welcome*

-Michel
---------------------------------------------------
- Scott Robertson        Phone: 714.972.2299      -
- CodeIt Computing       Fax:   714.972.2399      -
- http://codeit.com                               -
---------------------------------------------------
_______________________________________________
Zope maillist - Zope@zope.org
http://www.zope.org/mailman/listinfo/zope
On Thu, 11 Feb 1999, Michel Pelletier wrote:
Thinking a bit more about this, I guess the problem is that each virtual interface on the machine (the typical method of multihosting) must go through its own RewriteRule, thus leading to its own Zope process.
The reason we decided to launch multiple processes is so that we wouldn't compound the blocking issue that occurs when multiple requests come in at the same time. I'd love to have only one server handle all this. Or, even better, have it behave like Apache, where I can launch 10 servers and have them serve information from 100 DBs.
Perhaps we can solve the problem a bit with ZServer: maybe different Medusa interfaces could hook into different branches of the same Zope module, with all of those interfaces then funnelling into the same database. This is obviously not a rigorous examination.
I want them in separate DBs. The DB is where most of the concurrency issues are, correct?
Paul and I would like to do a rigorous examination of the memory fluctuations that the zope.org Python process goes through throughout the day. Paul told the list that it was 21.4 meg this morning (8am), I told the list that it was 19-some meg a couple of hours later, a half hour ago it was at 28(!) meg, and now it's at 31.
So all you Linux fellows out there need to suggest a good way for us to track this on a regular basis. We propose a cron job that sniffs the /proc filesystem. What kind of information do *you* want to see in this study? *Scripts welcome*
What files report information under /proc? If you receive any of these scripts, please pass them on to me. I think we will create a few beta sites for anybody interested on the list and then monitor their usage.

---------------------------------------------------
- Scott Robertson        Phone: 714.972.2299      -
- CodeIt Computing       Fax:   714.972.2399      -
- http://codeit.com                               -
---------------------------------------------------
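[On Linux, per-process memory figures live in files like /proc/&lt;pid&gt;/status (the VmSize and VmRSS lines) and /proc/&lt;pid&gt;/statm. A minimal sketch of the kind of cron-driven sampler Michel asked for, in Python; the log path and the idea of running it from cron are illustrative assumptions, not an actual Zope convention:]

```python
#!/usr/bin/env python
"""Sketch of a cron-driven memory sampler for a Zope process.

Assumes a Linux /proc filesystem.  The PID and log path are
hypothetical placeholders.
"""
import time


def parse_status(text):
    """Extract VmSize/VmRSS (in kB) from the text of /proc/<pid>/status."""
    sizes = {}
    for line in text.splitlines():
        if line.startswith(("VmSize:", "VmRSS:")):
            key, value = line.split(":", 1)
            sizes[key] = int(value.split()[0])  # value looks like "  31240 kB"
    return sizes


def sample(pid, logfile):
    """Append one timestamped memory sample for `pid` to `logfile`."""
    with open("/proc/%d/status" % pid) as f:
        sizes = parse_status(f.read())
    with open(logfile, "a") as log:
        log.write("%s pid=%d VmSize=%dkB VmRSS=%dkB\n"
                  % (time.strftime("%Y-%m-%d %H:%M:%S"), pid,
                     sizes.get("VmSize", 0), sizes.get("VmRSS", 0)))
```

[Run from cron with something like `* * * * * /usr/local/bin/memsample.py` for a once-a-minute sample.]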
On Thu, 11 Feb 1999, Michel Pelletier wrote:
So all you Linux fellows out there need to suggest a good way for us to track this on a regular basis. We propose a cron job that sniffs the /proc filesystem. What kind of information do *you* want to see in this study?

That depends. If you want to have many samples (say, every minute), it would be cheaper to have a long-running process doing the collection.
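[A long-running collector along the lines Andreas suggests could look like the following sketch; the sampling interval, target PID, and log path are illustrative assumptions:]

```python
#!/usr/bin/env python
"""Sketch of a long-running memory collector: cheaper than forking
a cron job for every sample.  Assumes Linux /proc; all paths and
intervals here are illustrative."""
import time


def collect(pid, logfile, interval=60, samples=None):
    """Sample /proc/<pid>/statm every `interval` seconds.

    statm reports sizes in pages: total program size first, then
    resident set size.  If `samples` is given, stop after that many
    samples (handy for testing); otherwise run until the process exits.
    """
    taken = 0
    while samples is None or taken < samples:
        try:
            with open("/proc/%d/statm" % pid) as f:
                fields = f.read().split()
        except IOError:
            break  # target process has gone away
        total_pages, resident_pages = int(fields[0]), int(fields[1])
        with open(logfile, "a") as log:
            log.write("%d %d %d\n"
                      % (time.time(), total_pages, resident_pages))
        taken += 1
        if samples is None or taken < samples:
            time.sleep(interval)
```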
For general consumption, TRS and DRS (SWAP?) would be OK. For your internal analysis, you should probably write the data into the httpd access_log: this way you will have a reference to see which requests resulted in which changes. Perhaps even make Zope log its memory size after each request.

Andreas

--
Win95: n., A huge annoying boot virus that causes random spontaneous system crashes, usually just before saving a massive project. Easily cured by UNIX. See also MS-DOS, IBM-DOS, DR-DOS, Win 3.x, Win98.
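[Zope itself offers no documented hook for this in the thread, so as a hedged sketch only: a generic Python wrapper that logs the process's own size after each call to a request handler. `handler`, the log format, and the file paths are all illustrative, not real Zope APIs:]

```python
"""Sketch of per-request memory logging, after Andreas's suggestion.
Not a real Zope hook; reads this process's own /proc entry."""


def vm_size_kb():
    """Return this process's VmSize in kB from /proc/self/status,
    or 0 where /proc is unavailable."""
    try:
        with open("/proc/self/status") as f:
            for line in f:
                if line.startswith("VmSize:"):
                    return int(line.split()[1])
    except IOError:
        pass
    return 0


def log_memory(handler, logfile):
    """Wrap `handler` so every call appends the process size to
    `logfile`, alongside whatever the access log already records."""
    def wrapped(*args, **kw):
        result = handler(*args, **kw)
        with open(logfile, "a") as log:
            log.write("size=%dkB\n" % vm_size_kb())
        return result
    return wrapped
```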
participants (3)
- Andreas Kostyrka
- Michel Pelletier
- Scott Robertson