Mem: 13608K av, 12824K used, 784K free,
WHAT? Yes, only 12MB of memory... The machine has lost all of its memory and is just running on the system board memory. Oh yeah, and it is a 166 MHz Pentium.
You read correctly, just not the entire post. I said that the machine was broken and was not seeing the 128 MB of RAM I have installed; instead it was only using the 12 on the system board (DELL).

As for Python sucking up memory (and one of the reasons I posted), it is true. I have it on another machine with 256 MB of memory, and here is what it is doing..

TSIZE  SIZE SWAP RSS SHARE STAT LIB %CPU %MEM CTIME COMMAND
  403 44376    0 43M  1424    S   0  0.0 17.2  6:08 python
  403 44376    0 43M  1424    S   0  0.0 17.2  0:00 python
  403 44376    0 43M  1424    S   0  0.0 17.2  2:53 python
  403 44376    0 43M  1424    S   0  0.0 17.2  1:33 python
  403 44376    0 43M  1424    S   0  0.0 17.2  7:57 python
  403 44376    0 43M  1424    S   0  0.0 17.2  6:11 python

That is 43MB per! Zope uses what it can take. On the machine with 12MB it uses this..

SIZE SWAP RSS SHARE STAT LIB %CPU %MEM CTIME COMMAND
6592 816 332 S 0 0.0 5.9 3:06 python
6592 816 332 S 0 0.0 5.9 0:00 python
6592 816 332 S 0 0.0 5.9 2:04 python
6592 816 332 S 0 0.0 5.9 1:54 python
6592 816 332 S 0 0.0 5.9 1:39 python
6592 816 332 S 0 0.0 5.9 2:46 python

I am sure on a machine with 5 GB it would use most of it. Such is the nature of Linux (I think). It goes up and down with usage and with packing the database.

J
From: "Hung Jung Lu" <hungjunglu@hotmail.com> Date: Thu, 13 Apr 2000 10:18:01 PDT To: Jatwood@bwanazulia.com, zope@zope.org Subject: [Zope] Minimal Zope Install - True Story
Mem: 13608K av, 12824K used, 784K free,
WHAT? Yes, only 12MB of memory... The machine has lost all of its memory and is just running on the system board memory. Oh yeah, and it is a 166 MHz Pentium.
Sorry, but I don't quite understand what your message meant. (I am serious and not making fun.)
You only mentioned 12MB of memory. Is that all the RAM on the machine?
Do you know that a person just complained about Zope because it was sucking up 2GB of memory? No, that is not a typo. It's 2GB, not 2MB.
I mean, to run a website, at the very least be prepared to have 64MB; it's about US$70 nowadays. I hope I have not misunderstood, but I can't possibly imagine Zope running on a computer that has only 12MB! :)
Regards,
Hung Jung
On Thu, 13 Apr 2000, J. Atwood wrote:
Zope uses what it can take. On the machine with 12MB it uses this..
SIZE SWAP RSS SHARE STAT LIB %CPU %MEM CTIME COMMAND
6592 816 332 S 0 0.0 5.9 3:06 python
6592 816 332 S 0 0.0 5.9 0:00 python
6592 816 332 S 0 0.0 5.9 2:04 python
6592 816 332 S 0 0.0 5.9 1:54 python
6592 816 332 S 0 0.0 5.9 1:39 python
6592 816 332 S 0 0.0 5.9 2:46 python
I am sure on a machine with 5 GB it would use most of it. Such is the nature of Linux (I think). It goes up and down with usage and with packing the database.
What I fail to understand is how Zope can consume more memory than (number of threads * size of the Data.fs file), which assumes that all objects are awakened, even their existing older versions. If, on the other hand, one accesses files on the filesystem, then it is conceivable that those files (depending on the OS) will be cached, but why should they appear under RSS?

In any case, I am not accessing any files on the filesystem, so am I right to assume an upper limit of (number of threads * size of the Data.fs file)?

Pavlos
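The proposed bound is simple arithmetic; a sketch with hypothetical numbers (the thread count and Data.fs size below are illustration values, not figures from this thread):

```python
# Naive upper bound proposed above: memory <= threads * size(Data.fs).
# Both inputs are hypothetical.
threads = 4                        # assumed Zope worker-thread count
datafs_bytes = 20 * 1024 * 1024    # assumed 20 MB Data.fs
upper_bound_mb = threads * datafs_bytes // (1024 * 1024)
print(upper_bound_mb)  # 80
```

As the rest of the thread shows, this bound underestimates real usage, because in-memory objects are much larger than their pickles.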
Pavlos Christoforou wrote:
What I fail to understand is how Zope can consume more memory than (number of threads * size of the Data.fs file), which assumes that all objects are awakened, even their existing older versions. If, on the other hand, one accesses files on the filesystem, then it is conceivable that those files (depending on the OS) will be cached, but why should they appear under RSS?
In any case, I am not accessing any files on the filesystem, so am I right to assume an upper limit of (number of threads * size of the Data.fs file)?
Only current revisions of objects are ever activated, so this is not a good measure. There is generally no correlation between Data.fs size and memory consumption. Also, the size of an object when serialized (pickled in the Data.fs file) does not correlate with the size of the object in memory in any straightforward way. Lots of objects, like database adapters, cache run-time information such as result rows, connection objects, etc. None of this information gets serialized.

-Michel
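The point about run-time caches never reaching the pickle can be sketched with a `__getstate__` hook (a toy stand-in, not Zope's actual adapter code):

```python
import pickle

class Adapter:
    """Toy stand-in for a database adapter; not real Zope code."""
    def __init__(self):
        # Large run-time cache, e.g. result rows kept in memory.
        self._cache = ["row"] * 100000

    def __getstate__(self):
        # Exclude the cache when serializing, so the pickle stays tiny
        # while the live object holds a 100,000-entry list.
        state = self.__dict__.copy()
        del state["_cache"]
        return state

a = Adapter()
print(len(pickle.dumps(a)))  # a few dozen bytes, despite the big in-memory cache
```

So a pickle's size says nothing about the memory the live object occupies.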
On Thu, 13 Apr 2000, Michel Pelletier wrote:
Only current revisions of objects are ever activated, so this is not a good measure. There is generally no correlation between Data.fs size
Michel, I am not seeking a good measure. I am seeking an easily estimated upper limit, given that my Zope accesses no files on the filesystem and has no external components active (DAs, etc.). Including all objects in Data.fs, current revisions or not, makes my estimate easier.
and memory consumption. Also, the size of an object when serialized (pickled in the Data.fs file) does not correlate to the size of the object in memory in any straightforward way.
Correlation might not be straightforward, but upper-limit estimation should be. A cPickle is a binary representation of the instance data plus a lot of extra info declaring types, etc. Ignoring cached objects/data coming from external sources (RDBMs, etc.), which I don't have, the pickled version of the object should place an approximate upper limit on its RAM usage, unless during object activation/utilization the object requires a lot of RAM to do its job (Catalog comes to mind). Or, for instance, if you have a __setstate__ method that does something like:

    a = []
    for i in range(1000):
        a.append('x' * 100000)

In any case, any activity under the above constraints which increases RAM usage indefinitely is IMO a memory leak.

Pavlos
On Thu, 13 Apr 2000 15:01:59 -0400 (EDT), Pavlos Christoforou <pavlos@gaaros.com> wrote:
Correlation might not be straightforward but upper limit estimation should be. cPickle is a binary representation of the instance data plus a lot of extra info declaring types etc. Ignoring cached objects/data coming from external sources (RDBM, etc) which I don't have,
OK...
then the pickled version of the object should place an approximate upper limit for its RAM usage,
...almost...
unless during object activation/utilization the object requires a lot of RAM to do its job (Catalog comes to mind).
The catch is that this is true for almost all objects. Here are two examples:

1. An integer stored in a pickle may consume as little as 3 bytes. However, in memory it will exist in a separately malloced integer object, which typically takes 16 bytes.

2. A DTML method in a pickle is only a little bigger than the size of its text. However, in memory it will create many instance objects representing the DTML parse tree. These objects are the ones that actually perform the rendering. The text is stored in memory too, but it's only used for management operations. I haven't measured it, but I wouldn't be surprised if it was 10 times bigger than the pickle.

Toby Dickenson
tdickenson@geminidataloggers.com
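The integer example can be checked directly in a modern Python (byte counts vary by interpreter version and platform, and differ from the 1.5-era figures quoted above):

```python
import pickle
import sys

n = 7
pickled = pickle.dumps(n, protocol=2)
print(len(pickled))      # just a few bytes in the pickle
print(sys.getsizeof(n))  # separately allocated int object, typically 28 bytes on 64-bit CPython
```

The in-memory object is several times larger than its serialized form, which is the whole point of Toby's examples.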
On Fri, 14 Apr 2000, Toby Dickenson wrote:
2. A DTML method in a pickle is only a little bigger than the size of its text. However, in memory it will create many instance objects representing the DTML parse tree. These objects are the ones that actually perform the rendering. The text is stored in memory too, but it's only used for management operations. I haven't measured it, but I wouldn't be surprised if it was 10 times bigger than the pickle.
If it is that much bigger, then I suppose I couldn't easily place an upper limit ... darn, an upper limit could have caught the memory leak (if any) quickly.

Pavlos
"J. Atwood" wrote:
You read correctly, just not the entire post. I said that the machine was broken and was not seeing the 128 MB of RAM I have installed, instead it was only using the 12 on the system board (DELL).
Been there. :) Append "mem=128M" to the lilo boot line (no quotes) for the reboot (as root: 'lilo -R <imagename> mem=128M', where <imagename> is the kernel image you boot to). Should work fine ... unless of course the RAM (or motherboard) is dead :<
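For a permanent fix, the same option can live in the LILO config; a minimal sketch (the kernel path and image label here are assumptions for illustration):

```
# Fragment of /etc/lilo.conf; rerun /sbin/lilo after editing.
image=/boot/vmlinuz
    label=linux
    append="mem=128M"
```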
As for Python sucking up memory (and one of the reasons I posted) it is true. I have it on another machine with 256 MB or memory and here is what it is doing..
TSIZE  SIZE SWAP RSS SHARE STAT LIB %CPU %MEM CTIME COMMAND
  403 44376    0 43M  1424    S   0  0.0 17.2  6:08 python
  403 44376    0 43M  1424    S   0  0.0 17.2  0:00 python
  403 44376    0 43M  1424    S   0  0.0 17.2  2:53 python
  403 44376    0 43M  1424    S   0  0.0 17.2  1:33 python
  403 44376    0 43M  1424    S   0  0.0 17.2  7:57 python
  403 44376    0 43M  1424    S   0  0.0 17.2  6:11 python
That is 43MB per!
NO, IT ISN'T!

Sorry if that sounds snappy, but I have grown quite weary of all these repeated claims. That is not how it is, and this has been posted many, many times. For those who really need a low-level explanation, a trip down memory lane, combined with a visit to the kernel list, is in order.

Think about it; do the math. Is this machine swapping for every action it takes? It would have to be _IF_ you were correct. Just what you pasted above would be consuming 258MB of RAM. Nearly _anything_ you did with your machine would require swapping. Ask yourself this: what are the chances that _every_ thread is consuming the *exact* same amount of every resource reported, *all* the time, regardless of any individual thread's task?

I realize that not everyone here is versed in the particulars, but I would hope that we (collectively) could realize (after repeated posts by more than one or two people) that on Linux (at least), when you look at a process's data, threads report the process totals, not individual usage.

One of my machines has anywhere between 5 and 10 ZServers running at any given time. Many have thread counts of around 6. I also have other processes spawning _lots_ of threads. If I added up the 'totals' on a 'per-thread basis', it would amount to a lot more memory than is in the machine. Specifically, the above 'formula' would claim that my Roxen server on this machine is using (22 threads * 10MB/thread) 220MB of physical RAM! Roxen is certainly _not_ using 220MB. Yet I can compile, start up an additional X server, etc. with _no_ swapping (until I _actually_ exceed the physical RAM).

It would certainly help us track down the alleged memory leak if we could get to the actual data, and not the assumptions. BTW, RSS _includes_ library pages for ELF processes.
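The arithmetic here, spelled out (the per-line figures come from the top listing quoted above):

```python
# Six "python" lines in top, each showing 43 MB RSS.
# If each really owned its own 43 MB, Zope alone would need:
naive_total_mb = 6 * 43
print(naive_total_mb)  # 258, more than the 256 MB in the machine
# Under LinuxThreads, each thread shows the process-wide RSS,
# so the real footprint is roughly 43 MB total.
```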
Zope uses what it can take. On the machine with 12MB it uses this..
SIZE SWAP RSS SHARE STAT LIB %CPU %MEM CTIME COMMAND
6592 816 332 S 0 0.0 5.9 3:06 python
6592 816 332 S 0 0.0 5.9 0:00 python
6592 816 332 S 0 0.0 5.9 2:04 python
6592 816 332 S 0 0.0 5.9 1:54 python
6592 816 332 S 0 0.0 5.9 1:39 python
6592 816 332 S 0 0.0 5.9 2:46 python
I am sure on a machine with 5 GB it would use most of it. Such is the nature of Linux (I think). It goes up and down with usage and with packing the database.
AIUI, Python limits its memory use in that if memory isn't available, it doesn't try to force the issue. If it is available, _and_ it needs it, it will take it. In this case, it is only taking what it can get; that doesn't mean it doesn't _need_ more. :)

I hope that all didn't come off wrong (mean or something). As I said, it is really hard to verify and track memory-leak claims with incorrect data. When you are serving/proposing Zope with/to Fortune 500 companies, this becomes a very important issue, believe me. I am not saying there is not a memory leak, just that all these incorrect posts are making the process of verifying and identifying one very difficult. That, and those-who-don't-want-Zope-to-be-used see these posts (yes, they monitor the list solely to see problems), and my customers/prospects come questioning me about it. Then I have to explain this whole thing to them. Then we, as a community, look bad.

Bill

--
In flying I have learned that carelessness and overconfidence are usually far more dangerous than deliberately accepted risks.
  -- Wilbur Wright in a letter to his father, September 1900
<everything snipped>

Thanks Bill. No, you don't sound mean, but everyone has to learn and start somewhere. If Fortune 500 companies are reading this list to get an idea of Zope, they are certainly not very bright. Dev lists mostly contain problems, fixes, and newbies, not success stories and Yahoo-like sites.

As for top: after that message yesterday I went looking for more information on how to read top. I find it a very valuable tool for figuring out just what type of programs are eating what type of resources. From what I (NOW) understand, each one of those python threads reports the total amount of memory that all of the threads combined are using. That makes much more sense. I appreciate you helping out with this.

(now for all of those Fortune 500 CEOs and other lurkers)

<PLUG> I have been using Zope for 6 months now (after wanting to use it for 12). It is one of the most impressive pieces of software I have ever come across. In terms of its content management, ease of use, functionality, and speed (yes, SPEED) there is nothing out there that can compete. Don't worry about service and support; hire a smart person who knows basic Perl or Python, Apache, and some HTML, and set them down with all of the documentation, this list, and a shopping spree at the local bookstore. The Zope movement is strong and growing every day. Digital Creations is a huge positive factor, and their success only strengthens the overall power of Zope. Open Source allows freedom of development and plentiful resources. For the cost of a developer, $1000 in hardware, and a good connection you can build and support a site that will handle millions of hits a month. ZEO will allow your company to grow to its heart's desire and never let your website be the limiting factor. </PLUG>

J
<PLUG> I have been using Zope for 6 months now (after wanting to use it for 12). It is one of the most impressive pieces of software I have ever come across. In terms of its content management, ease of use, functionality, and speed (yes, SPEED) there is nothing out there that can compete. Don't worry about service and support; hire a smart person who knows basic Perl or Python, Apache, and some HTML, and set them down with all of the documentation, this list, and a shopping spree at the local bookstore. The Zope movement is strong and growing every day. Digital Creations is a huge positive factor, and their success only strengthens the overall power of Zope. Open Source allows freedom of development and plentiful resources. For the cost of a developer, $1000 in hardware, and a good connection you can build and support a site that will handle millions of hits a month. ZEO will allow your company to grow to its heart's desire and never let your website be the limiting factor. </PLUG>
A shopping spree at the local bookstore does little good if none of the documentation for Zope is available at the local bookstore....

While I agree with you that Zope is a very impressive product, I'd be an awful lot more impressed if it came with documentation that was actually useful. I really don't fancy the prospect of reverse-engineering code to figure out how to use something -- and that's about the only way to figure out how to use Zope at this time.
A shopping spree at the local bookstore does little good if none of the documentation for Zope is available at the local bookstore....
While I agree with you that Zope is a very impressive product, I'd be an awful lot more impressed if it came with documentation that was actually useful. I really don't fancy the prospect of reverse-engineering code to figure out how to use something -- and that's about the only way to figure out how to use Zope at this time.
What I meant about the books is that there are at least 4 Python books and countless Unix, Apache, etc. books that will help in any Zope installation (it's good to know more than just the Zope part).

The documentation is weak, but not the weakest. I tried installing Apache JServ a week ago, and it was the most frustrating experience ever. After 10 hours, many printed notes, FAQs, etc., I was able to get it working thanks to one stupid little configuration setting in one of the **4** config files. This also goes for most Open Source software products I have played with. Very basic, very crude, very matter-of-fact documentation for the beginner.

Zope will get better; we will help.

J
A shopping spree at the local bookstore does little good if none of the documentation for Zope is available at the local bookstore....
What I meant about the books is that there are at least 4 Python books and countless Unix, Apache etc, books that will help in any Zope installation (good to know more than just the Zope part).
Ah! Understood. Although books on UNIX don't interest me in the slightest (seeing as I escaped from the UNIX jail a long time ago), a good book on Apache is quite definitely worth my while. (I practically *BREATHE* Python these days, so the Python books aren't quite as useful to me....:-)
This also goes for most Open Source software products I have played with. Very basic, very crude, very matter-of-fact documentation for the beginner.
Agreed. Open Source documentation seems to be by geeks, for geeks, and specifically for geeks who think that firing up a debugger is the best way to get program documentation.

Personally, I'm more task- and tool-oriented. I have a task; I want to use a tool to accomplish this task. I don't want to have to wrestle with the tool (or the tool's documentation) before I wrestle with the task. For that reason, and primarily for that reason, I've not had much use for Open Source (or other free) software. Python is a notable exception -- it's so cleanly designed that the occasional hole in the documentation doesn't bug me as much. Zope is another exception. Its documentation is miserable, but it is such a powerful product that I'm willing to wrestle with it (although I'm not happy about it).
Zope will get better, we will help.
If I didn't think so, I'd have ditched Zope like I ditched most Open Source software I've evaluated. Instead I'm lurking in the Zope mailing list until I get my Z legs and can contribute.
participants (7)
- Bill Anderson
- Hung Jung Lu
- J. Atwood
- Michael T. Richter
- Michel Pelletier
- Pavlos Christoforou
- Toby Dickenson