Performance Testing / Expectations
Ok,

I know it has been asked a couple of times before on the list, but there haven't really been any firm conclusions on the kind of performance to expect from Zope and/or how to tweak it for the best performance. Over the last two days I have been doing some testing and would like to hear opinions on whether these results are similar to other people's.

Firstly, the machine specs: PIII 450MHz, 128MB RAM, IDE hard drive, running Debian Potato. Zope is running through Apache with PCGI. All tests were run using ab.

First I tested Apache itself as a baseline.

1000 requests, 25 concurrent:
/usr/sbin/ab -c 25 -n 1000 "http://www.eminecorp.com.au/docs/testemc.html"
Results: 384 requests per second (rps). Avg wait: 54ms. Max wait: 83ms. Throughput: 20MB/s

1000 requests, 1 at a time:
/usr/sbin/ab -c 1 -n 1000 "http://www.eminecorp.com.au/docs/testemc.html"
425 rps. Avg wait: 2ms. Max wait: 3ms. Throughput: 21MB/s

Then I tested the same document served statically through Zope (from Zope's root):
/usr/sbin/ab -c 25 -n 100 "http://www.eminecorp.com.au/testemc"
42 rps. Avg wait: 514ms. Max wait: 743ms. Throughput: 2MB/s
/usr/sbin/ab -c 1 -n 100 "http://www.eminecorp.com.au/testemc"
45 rps. Avg wait: 21ms. Max wait: 29ms. Throughput: 2.3MB/s

Next in line was a moderately dynamic page: DTML only, no DB or External Methods.
/usr/sbin/ab -c 25 -n 100 "http://www.eminecorp.com.au/eminecorp/ecom2/sesgroup?currentid=0&cur_page=http%3a//www.eminecorp.com.au/eminecorp/ecom2/siteinfo/index_html?"
4.97 rps. Avg wait: 4.4s. Max wait: 5.6s. Throughput: 21kb/s
/usr/sbin/ab -c 1 -n 100 "http://www.eminecorp.com.au/eminecorp/ecom2/sesgroup?currentid=0&cur_page=http%3a//www.eminecorp.com.au/eminecorp/ecom2/siteinfo/index_html?"
5.09 rps. Avg wait: 195ms. Max wait: 198ms. Throughput: 22kb/s

Then finally I tested a page with the same amount of DTML complexity, but with some External Methods which call a database using pygresql.
/usr/sbin/ab -c 25 -n 100 "http://www.eminecorp.com.au/eminecorp/ecom2/products/?startitem=1&currentid=22"
1.3 rps. Avg wait: 16s. Max wait: 20s. Throughput: 19.51 kb/s
/usr/sbin/ab -c 1 -n 100 "http://www.eminecorp.com.au/eminecorp/ecom2/products/?startitem=1&currentid=22"
1.3 rps. Avg wait: 768ms. Max wait: 789ms. Throughput: 19.55 kb/s

In summary: serving straight off Apache is very fast, and doesn't suffer if multiple people access the site at once. Static Zope (which you wouldn't really do anyway) was acceptable in speed and didn't suffer too badly under multiple accesses. Dynamic Zope was fair in speed, but probably still fast enough, at least for my site; it started to suffer badly under multiple accesses, with response times up to 5 seconds. Highly dynamic Zope was very bad in rps and hopeless under multiple access.

Now, I have only been in this game a while, so I don't know how critical the multiple-access issue is going to be. I am expecting at most 1000 visitors each day (say over a 10-hour period), so 100 visitors each hour shouldn't cause me any problems at the moment.

I guess what I would like to know is whether anyone out there knows how to improve this performance at all. Can I throw memory and processors at it and make the problem go away? Obviously there are some things in the highly dynamic part which I will need to speed up; however, how can I make plain DTML execute faster? The prospect of ZEO has certainly helped in this area, but surely I must be able to squeeze some more performance out of this machine.

Thanks in advance.

Benno
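As a sanity check on the traffic estimate above, a little arithmetic shows how the expected visitor load compares with the measured 1.3 rps on the fully dynamic pages. The pages-per-visit and peak-factor numbers here are assumptions for illustration, not figures from the post:

```python
# Rough capacity check. visitors_per_day comes from the post;
# pages_per_visit and the 5x peak factor are assumed, not measured.
visitors_per_day = 1000
hours = 10
pages_per_visit = 10          # assumption
peak_factor = 5               # assumption: peaks run ~5x the average

avg_rps = visitors_per_day * pages_per_visit / (hours * 3600.0)
peak_rps = avg_rps * peak_factor

print("average rps: %.3f" % avg_rps)   # ~0.278
print("peak rps:    %.3f" % peak_rps)  # ~1.389
```

Under these assumptions the average load is trivial, but the assumed peak already sits right at the measured 1.3 rps for the database-backed pages, which suggests there is essentially no headroom there.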
Ben Leslie wrote:
I guess what I would like to know is whether anyone out there knows how to improve this performance at all. Can I throw memory and processors at it and make the problem go away? Obviously there are some things in the highly dynamic part which I will need to speed up; however, how can I make plain DTML execute faster? The prospect of ZEO has certainly helped in this area, but surely I must be able to squeeze some more performance out of this machine.
1. Generate Expires headers for your pages, and maybe handle If-Modified-Since headers too (see OFS.Image.File for an example of the latter). This lets the browser cache the results of your pages, which won't raise your requests-per-second but will reduce the number of requests you have to handle.

2. Cache the results of dynamic pages and serve those up instead. This can speed things up by a factor of 5 for pages that are only a little dynamic.

I'll follow this up in a short while with an example of generating Expires headers in DTML.

-- Itamar S.T. itamars@ibm.net
The following DTML code will tell the browser that the page expires one hour in the future:

<dtml-call "RESPONSE.setHeader('Expires', _.DateTime(_.DateTime().timeTime() + 3600).toZone('GMT').rfc822())">

-- Itamar S.T. itamars@ibm.net
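For the other half of the suggestion, conditional requests, the core of an If-Modified-Since check can be sketched in plain Python. This is a hand-written illustration of the technique OFS.Image.File uses, not the actual Zope code; the response object's set_status/set_header interface is invented, and the header parsing uses the modern standard library:

```python
from email.utils import parsedate_to_datetime, formatdate

def maybe_not_modified(request_headers, last_modified_ts, response):
    """Answer a conditional GET: return True (after setting a 304 status)
    when the client's cached copy is still current.

    request_headers: dict of incoming request headers
    last_modified_ts: Unix timestamp of the object's last change
    response: anything with set_status/set_header (hypothetical interface)
    """
    ims = request_headers.get('If-Modified-Since')
    if ims:
        try:
            client_ts = parsedate_to_datetime(ims).timestamp()
        except (TypeError, ValueError):   # unparseable date: ignore the header
            client_ts = None
        if client_ts is not None and last_modified_ts <= client_ts:
            response.set_status(304)      # Not Modified: no body is sent
            return True
    # Always advertise Last-Modified so the client can revalidate next time.
    response.set_header('Last-Modified', formatdate(last_modified_ts, usegmt=True))
    return False
```

Each 304 answered this way skips rendering and body transfer entirely, which is where the request-handling savings come from.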
I guess what I would like to know is whether anyone out there knows how to improve this performance at all. Can I throw memory and processors at it and make the problem go away? Obviously there are some things in the highly dynamic part which I will need to speed up; however, how can I make plain DTML execute faster? The prospect of ZEO has certainly helped in this area, but surely I must be able to squeeze some more performance out of this machine.
1. Generate Expires headers for your pages, and maybe handle If-Modified-Since headers too (see OFS.Image.File for an example of the latter). This lets the browser cache the results of your pages, which won't raise your requests-per-second but will reduce the number of requests you have to handle.
Ok. Thanks, I didn't think of that. This should probably help quite a bit.
2. Cache results of dynamic pages and then serve that up instead. This can speed things up by a factor of 5 for pages that are only a little dynamic.
Yeah, I am currently thinking about the best way to do this. I guess I now reach another point: does anyone know how to make External Methods persistent, or would I have to store cached information on the filesystem? Does ZCache work for External Methods? Are there any plans to do this? I don't mind writing code for ZCache to do this if the current framework allows it easily.

Thanks, Benno
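The filesystem variant mentioned above is straightforward to do by hand. Here is a hedged sketch, not ZCache and not tied to Zope's API: write each rendered result to a file named after a hash of its inputs, and reuse it until it goes stale. All names here are invented for illustration:

```python
import hashlib
import os
import tempfile
import time

def cached_render(key, render, max_age=300, cache_dir=None):
    """Serve render()'s output from a disk cache when a fresh copy exists.

    key: string identifying the page (URL plus query string, say).
    render: zero-argument callable that produces the page text.
    The cache file name is a hash of the key, so any key string is safe.
    """
    cache_dir = cache_dir or tempfile.gettempdir()
    name = 'pagecache-' + hashlib.sha1(key.encode('utf-8')).hexdigest()
    path = os.path.join(cache_dir, name)
    try:
        if time.time() - os.path.getmtime(path) < max_age:
            with open(path, 'r') as f:
                return f.read()        # fresh cached copy: skip rendering
    except OSError:                    # no cached copy yet
        pass
    text = render()
    with open(path, 'w') as f:         # cache the result for next time
        f.write(text)
    return text
```

A disk cache like this survives process restarts, which in-memory caching inside an External Method does not; the trade-off is an extra stat and read per request.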
On Sun, 26 Mar 2000, you wrote:
2. Cache results of dynamic pages and then serve that up instead. This can speed things up by a factor of 5 for pages that are only a little dynamic.
Yeah, I am currently thinking about the best way to do this. I guess I now reach another point: does anyone know how to make External Methods persistent, or would I have to store cached information on the filesystem? Does ZCache work for External Methods? Are there any plans to do this? I don't mind writing code for ZCache to do this if the current framework allows it easily.
I usually create Python classes instead of plain External Methods and instantiate them as ZClasses. This way I can cache information within the Python class as class members. If there are hooks in the Zope framework to do something similar (i.e. make use of ZCache etc.), please post.

Thanks,
sathya

##########################
necessity is the mother of invention
##########################
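Sathya's approach, keeping results as instance state so repeat calls skip the expensive work, can be sketched outside Zope like this. The class and method names are made up for illustration; in real use the instance would be the object Zope holds on to between requests:

```python
import time

class CachingHelper:
    """Cache computed values on the instance itself: as long as the
    instance stays alive between requests, repeat lookups are free."""

    def __init__(self, max_age=60):
        self._cache = {}          # key -> (timestamp, value)
        self._max_age = max_age

    def lookup(self, key, compute):
        now = time.time()
        entry = self._cache.get(key)
        if entry is not None and now - entry[0] < self._max_age:
            return entry[1]       # fresh cached value: no recompute
        value = compute(key)      # expensive work (e.g. a DB query)
        self._cache[key] = (now, value)
        return value
```

One caveat worth noting: with multiple Zope processes (or ZEO clients), each process holds its own copy of such a cache, so hit rates drop as you scale out.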
Here are my tests...

Machine: 300 MHz, 256MB RAM, IDE hard drive. Apache on the same machine (different port).

Apache, static test document:
Document Length: 590 bytes
Concurrency Level: 25
Time taken for tests: 0.165 seconds
Complete requests: 100
Failed requests: 0
Total transferred: 92340 bytes
HTML transferred: 63720 bytes
Requests per second: 606.06
Transfer rate: 559.64 kb/s received
Connection Times (ms)   min  avg  max
Connect:                  0    6   14
Processing:              22   25   34
Total:                   22   31   48

Zope (ZServer only, test document, just a few HTML bits and <dtml-var standard_html_footer>):
Document Length: 682 bytes
Concurrency Level: 25
Time taken for tests: 2.451 seconds
Complete requests: 100
Failed requests: 0
Total transferred: 88500 bytes
HTML transferred: 68200 bytes
Requests per second: 40.80
Transfer rate: 36.11 kb/s received
Connection Times (ms)   min  avg  max
Connect:                  0   66  333
Processing:             339  467  342
Total:                  339  533  675

I upped it to 1000 requests and got...
Concurrency Level: 25
Time taken for tests: 23.074 seconds
Complete requests: 1000
Failed requests: 0
Total transferred: 885885 bytes
HTML transferred: 682682 bytes
Requests per second: 43.34
Transfer rate: 38.39 kb/s received

I put it against a normal page with a lot of DTML and a few heavily cached SQL calls (in DTML):
Document Length: 17000 bytes
Concurrency Level: 25
Time taken for tests: 13.314 seconds
Complete requests: 100
Failed requests: 0
Total transferred: 1727400 bytes
HTML transferred: 1700000 bytes
Requests per second: 7.51
Transfer rate: 129.74 kb/s received
Connection Times (ms)   min  avg  max
Connect:                  0   82  379
Processing:             386 2898 4312
Total:                  386 2980 4691

When I turned it on a highly dynamic page (lots of SQL, lots of DTML) it got worse...

Server Hostname: fundraising.gotschool.com
Server Port: 80
Document Path: /search?category=11
Document Length: 22373 bytes
Concurrency Level: 1
Time taken for tests: 25.820 seconds
Complete requests: 100
Failed requests: 73 (Connect: 0, Length: 73, Exceptions: 0)
Total transferred: 2263791 bytes
HTML transferred: 2236391 bytes
Requests per second: 3.87
Transfer rate: 87.68 kb/s received
Connection Times (ms)   min  avg  max
Connect:                  0    0    2
Processing:             197  257  315
Total:                  197  257  317

I upped the cache on the SQL and that did not really help... I then ran the test against just one of the parts of the page (a DTML method that calls another DTML method and has an SQL select in it):
Concurrency Level: 1
Time taken for tests: 19.768 seconds
Complete requests: 100
Failed requests: 0
Total transferred: 987100 bytes
HTML transferred: 966700 bytes
Requests per second: 5.06
Transfer rate: 49.93 kb/s received
Connection Times (ms)   min  avg  max
Connect:                  0    0    2
Processing:             151  197  211
Total:                  151  197  213

Which was not bad... I upped it to 25 concurrency...
Document Length: 9667 bytes
Concurrency Level: 25
Time taken for tests: 15.912 seconds
Complete requests: 100
Failed requests: 81 (Connect: 0, Length: 81, Exceptions: 0)
Total transferred: 691350 bytes
HTML transferred: 670542 bytes
Requests per second: 6.28
Transfer rate: 43.45 kb/s received
Connection Times (ms)   min  avg  max
Connect:                  0  158  718
Processing:             727 3342 4341
Total:                  727 3500 5059

Well, that is food for thought today. If anyone has some really good analysis of this (since this is my first time using ab) that would be great.

JMA

On Sun, 26 Mar 2000, Ben Leslie wrote:
_______________________________________________ Zope maillist - Zope@zope.org http://lists.zope.org/mailman/listinfo/zope ** No cross posts or HTML encoding! ** (Related lists - http://lists.zope.org/mailman/listinfo/zope-announce http://lists.zope.org/mailman/listinfo/zope-dev )
___________________________ Not for self, but for Bwana
On Sun, Mar 26, 2000 at 11:27:38PM +1000, Ben Leslie wrote:
Highly dynamic Zope was very bad in rps and hopeless under multiple access.
You are using postgres and the postgres adapter is only level 2 (serializing).
-Petru
Yes, I am using Postgres; however, I am _not_ using the Postgres adapter. I am using External Methods and Python to do all data access, since the data requires formatting after it comes out of the database, which is much more easily done in Python than in DTML.

Cheers, Benno
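As an illustration of the kind of post-query formatting that is easier in Python than in DTML, here is a sketch. The schema and field names are invented, since the thread never shows the real tables:

```python
def format_products(rows):
    """Turn raw (id, name, price_in_cents) tuples from the database into
    dicts ready for templating. Invented schema, for illustration only."""
    return [
        {
            'id': pid,
            'name': name.strip().title(),          # tidy up stored names
            'price': '$%.2f' % (cents / 100.0),    # cents -> display price
        }
        for pid, name, cents in rows
    ]
```

Doing this in a loop of DTML string operations would be both slower and harder to read, which is the trade-off Ben describes.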
On Mon, Mar 27, 2000 at 08:38:45PM +1000, Ben Leslie wrote:
On Sun, Mar 26, 2000 at 11:27:38PM +1000, Ben Leslie wrote:
Highly dynamic Zope was very bad in rps and hopeless under multiple access.
You are using postgres and the postgres adapter is only level 2 (serializing).
-Petru
Yes, I am using Postgres; however, I am _not_ using the Postgres adapter. I am using External Methods and Python to do all data access, since the data requires formatting after it comes out of the database, which is much more easily done in Python than in DTML.
Then you lose all the data caching done by the ZSQL methods. -Petru
Then you lose all the data caching done by the ZSQL methods.
-Petru
Yeah, I know, which is a shame; however, the Postgres DB adapter is only a development version and, as I said, we need to manipulate the data in Python after it comes out of the DB. I plan on putting in my own caching routines here, but this will still only get me to at most 8 requests per second, which is probably reasonable for my application.

Cheers, Benno
participants (6)
- Ben Leslie
- Itamar Shtull-Trauring
- J. Atwood
- J. Atwood
- Petru Paler
- sathya