Has anybody attempted to do this? I'm finding that even though I specify CC and CXX and put the Forte directories ahead of /usr/local/bin in my path, Python still falls back on gcc to build Zope. Where can I modify this behaviour?

Our Sun rep tells us that we can kiss performance goodbye using gcc... Since we have Forte anyway, I'd like to try it out.

thanks, Paul.

-- horbal@atips.ca
On Friday, November 30, 2001, at 08:13 PM, Paul Horbal wrote:
Has anybody attempted to do this?
I'm finding that even though I specify CC, CXX, put the Forte directories ahead of /usr/local/bin in my path, Python is still falling back on gcc to build Zope. Where can I modify this behaviour?
Our Sun rep tells us that we can kiss performance goodbye using gcc... Since we have Forte anyways, I'd like to try it out.
thanks, Paul.
Good luck Paul - there are some issues with Python under Sun hardware and/or Solaris, which inevitably lead to issues with Zope.

#1: speed: the pystone benchmark on our Enterprise 3500 (400 MHz UltraSparc) is about 4500 (using the gcc compiler). My TiPB running MacOS X gets 6500 and a PIII/700MHz gets over 10,000. This has led me to try ZEO on our system (which has multiple processors), but I ran up against a problem with CoreSessionTracking needing a mountable storage that I couldn't get my head around.

Don't get me wrong - all our Zope systems work (although I'm getting python2.1 core dumps which are *maddening*).

#2: threading: Zope is a multi-threaded beast, which allows it to handle more than one request at a time. This does not seem to work under Solaris. Check the archives for "threading, dtml and performance". Here's a URL from NIPltd http://zope.nipltd.com/public/lists/zope-archive.nsf/47ba74c812dbc5dd8025687f0024bb5f/ebe6af5e2cd7b04780256af1002b7440?OpenDocument&Highlight=0,threading and one from ActiveState http://aspn.activestate.com/ASPN/search?query=threading&type=Archive_zope-Li... Try out the script that Oliver Erlewein posted and see what happens on your own Solaris box.

What seems to indicate 'good' threading is this kind of output:

155[8:22]tonymcd@localhost ~ % python /usr/local/lib/python2.1/test/test_thread.py
creating task 1
task 1 will run for 1.9 sec
creating task 2
task 2 will run for 4.6 sec
creating task 3
task 3 will run for 3.0 sec
creating task 4
task 4 will run for 2.6 sec
creating task 5
task 5 will run for 8.8 sec
creating task 6
task 6 will run for 9.3 sec
creating task 7
task 7 will run for 0.8 sec
creating task 8
task 8 will run for 8.8 sec
creating task 9
task 9 will run for 0.9 sec
creating task 10
task 10 will run for 4.0 sec
waiting for all tasks to complete
..etc...

whilst Solaris gives:

201[8:25]tone@solaris-box ~ > python /usr/local/lib/python2.1/test/test_thread.py
creating task 1
creating task 2
creating task 3
creating task 4
creating task 5
creating task 6
creating task 7
creating task 8
creating task 9
creating task 10
waiting for all tasks to complete
..etc...

I have done the following to try and sort this out.

#1 built a Python 2.2b, as its notes mention that problems with Solaris threading may have improved. It does to an extent, in that this happens:

202[8:26]tone@solaris-box ~ > python2.2-thread /usr/local/lib/python2.2/test/test_thread.py
creating task 1
creating task 2
task 1 will run for 1.0 sec
creating task 3
creating task 4
creating task 5
task 3 will run for 9.1 sec
creating task 6
creating task 7
creating task 8
task 5 will run for 1.3 sec
creating task 9
task 2 will run for 1.6 sec
creating task 10
waiting for all tasks to complete

but Zope is not (yet) compatible with python2.2 (quite reasonable, as python2.2 is a beta), so I have no way of checking it.

#2 grabbed (what I think are) the relevant parts of the 2.2 source tree and built a hybrid 2.1 Python. That didn't work.

#3 inserted time.sleep(0.01) calls anywhere I could see a thread-create call (can't remember the exact method name) in a clean 2.4.1 Zope installation and used the python2.1 from #2. That didn't work.

#4 installed a whole host of Solaris patches on a spare Solaris 8 box that even remotely mentioned threads, built a clean python2.1 and Zope 2.4.1, and tried that out (that was my Friday night high-jinks - woo-hoo). That didn't work either.

(This is starting to sound like the Monty Python 'cardboard box in the road' sketch.)

So, I'm at a loss here. I may well be doing something unutterably stupid, but I'm not too sure that I'm not. I also wouldn't believe the Sun guy; sometimes they are *so* smug.

A combination of getting threading and CST via an external file storage working (I freely admit I'm at fault on the latter) would increase our throughput by a conservative factor of 24 (4 threads * 6 processors), so come on board!

I'd be interested in seeing what a Forte-compiled Python was capable of. If it has threading sorted out and is even just a few percentage points faster than gcc-built Python, I'd *really* like to download it!

Not a lot of help I know, but it's better you know the tiger whose tail you're tweaking, no? :)

hth, tone
Well... it looks like that Sun guy wasn't lying about performance with Forte... I rebuilt Python 2.1.1 from source and ran several passes of Pystone. With gcc, Pystone averages about 4000 pystones/sec. With Forte, it's 4300! Consistently. That's a good 7-8% increase in performance.

As for threads... both gcc- and Forte-compiled Python seem to do the same thing with threads (like you described... create all threads, then wait for completion). I've attached the output of both tests.

GCC Python was compiled with the following options:

./configure --with-threads

Forte Python was compiled with:

./configure --without-gcc --with-cxx=CC --with-threads --prefix=/opt/python

Both Python executables were 32-bit with no UltraSparc optimizations... I'm using the latest version of Forte - v6 update 2.

For fun, I was going to try building a 64-bit UltraSparc II optimized Python (didn't work, I'm still trying though), but I did find in the Python source distro README that when using Sun Workshop compilers you should add the -Xc -mt CFLAGS; -mt is for "multi-threading". I then went through and built another Python like this... Pystone benchmarks are now 4360-4380. That's an almost 10% boost over straight gcc (no change in the threads test). Looks like gcc has some work to do on Sparc.

I'd be happy to distribute the compiled versions of Python I have... What's a good way to do this for maximum portability?

cheers, Paul.

At 08:45 AM 12/1/2001 +0000, tone.mcdonald@ncl.ac.uk wrote:
...<snip>...
_______________________________________________ Zope maillist - Zope@zope.org http://lists.zope.org/mailman/listinfo/zope ** No cross posts or HTML encoding! ** (Related lists - http://lists.zope.org/mailman/listinfo/zope-announce http://lists.zope.org/mailman/listinfo/zope-dev )
Paul Horbal wrote:
.....
whilst solaris gives
201[8:25]tone@solaris-box ~ > python /usr/local/lib/python2.1/test/test_thread.py
creating task 1
creating task 2
creating task 3
creating task 4
creating task 5
creating task 6
creating task 7
creating task 8
creating task 9
creating task 10
waiting for all tasks to complete
..etc...

I just recognized that this is not necessarily a problem...
There are different policies as to when a new thread gets CPU control:

* give control to the new thread before the parent can continue
* let the parent continue and start the new thread later

Apparently, Linux uses the first and Solaris the second policy. The threads are indeed run in parallel (in coroutine fashion), as the order of task completion demonstrates: all tasks terminate in order of increasing execution length. Does not look bad for Solaris...

Dieter
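Dieter's point - that the scheduler only defers the *announcement* of each task, not its execution - can be checked directly: however the "creating task" messages interleave, tasks of different lengths should finish in order of increasing duration if they really run in parallel. A minimal sketch of that check (modern Python syntax; the 2.1-era test used the lower-level thread module, and the task names and durations here are made up for illustration):

```python
import threading
import time

completion_order = []
lock = threading.Lock()

def task(name, duration):
    # Simulated timed workload; sleep releases the GIL, so the
    # threads genuinely overlap.
    time.sleep(duration)
    with lock:
        completion_order.append(name)

# Distinct durations, deliberately not in sorted order.
durations = {"a": 0.3, "b": 0.1, "c": 0.5, "d": 0.2, "e": 0.4}
threads = [threading.Thread(target=task, args=(n, d))
           for n, d in durations.items()]
start = time.time()
for t in threads:
    t.start()
for t in threads:
    t.join()
elapsed = time.time() - start

# If the threads run concurrently, total wall time is close to the
# longest task (0.5 s), not the sum (1.5 s), and completion order
# follows duration - exactly what Dieter observed on Solaris.
print(completion_order)   # -> ['b', 'd', 'a', 'e', 'c']
print("elapsed: %.1f s" % elapsed)
```

Either scheduling policy (child-first or parent-continues) produces the same completion order here, which is why the test_thread output alone cannot prove threading is broken.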
----- Original Message ----- From: "Tony McDonald" <tony.mcdonald@ncl.ac.uk>
Good luck Paul - there are some issues with python under sun hw and/or solaris, which inevitably lead to issues with Zope.
...<snip>...
I'd be interested in seeing what a Forte compiled python was capable of. If it has threading sorted out and is just a few %-age points faster than gcc-built python, I'd *really* like to download it!.
Not a lot of help I know, but it's better you know the tiger whose tail you're tweaking, no? :)
hth, tone
Well, I have similar experiences in relation to Solaris, including _extremely_ poor performance, threading problems/strangeness (that affect even console logging), and repeated core dumps. With Tony's report about his Solaris troubles, I feel safe to exclude DCO2 as the major culprit. So...

Would it be reasonable to conclude that it **currently is not possible to deploy a High Availability Solaris based Zope solution**?

I ask because we have had very, very bad experiences during development using our Enterprise 450s, and I need to ask for funding for a proper Linux box if the Solaris path is impossible (Linux comes to mind since we need Oracle - which is not supported on BSD - and Windows is simply out of the question).

I have about 20 days until launch of a _major_ student portal application, and I need to decide whether we should stick with our Solaris boxen for deployment or change to another solution.

Thank you.

/dario

--------------------------------------------------------------------
Dario Lopez-Kästen, Systems Developer, Chalmers Univ. of Technology
dario@ita.chalmers.se (ICQ will yield no hits)
IT Systems & Services
Well, I have similar experices in relation to Solaris including _extremely_ poor performace, threading problems/strangness (that affect even console-logging), and repeated core dumps. With Tony's report about his Solaris troubles, I feel safe to exclude DCO2 as the mayor culprit. So...
Would it be reasonable to conclude that it **currently is not possible to deploy a High Availability Solaris based Zope solution**?

I have built several Zope sites under Solaris (for my old employer).
All have been slow, much slower than expected, but have been reasonably stable.

* Some lightly loaded sites sometimes stopped answering. The problem disappeared almost completely when they were regularly checked...

* For the main production site, Zope was much more stable than the Oracle backend database. Most outages were caused by Oracle problems calling for frequent Oracle restarts. After an Oracle restart, Zope needed to be restarted, too.

Threading was definitely working.

Dieter
On Saturday, December 1, 2001, at 07:38 PM, Dieter Maurer wrote:
Well, I have similar experices in relation to Solaris including _extremely_ poor performace, threading problems/strangness (that affect even console-logging), and repeated core dumps. With Tony's report about his Solaris troubles, I feel safe to exclude DCO2 as the mayor culprit. So...
Would it be reasonable to conclude that it **currently is not possible to deploy a High Availability Solaris based Zope solution**?

I have built several Zope sites under Solaris (for my old employer).
All have been slow, much slower than expected, but have been reasonably stable.
On our Solaris 7 system, most of our sites are stable. Under Solaris 8, I'm getting python SIGILL core dumps.
* Some lightly loaded sites sometimes stopped answering. The problem disappeared almost completely, when they were regularly checked.....
I can confirm that has happened with us too - maddeningly, it's very difficult to track down.
* For the main production site, Zope was much more stable than the Oracle backend database. Most outages was caused by Oracle problems calling for frequent Oracle restarts. After an Oracle restart, Zope needed to be restarted, too.
We don't use Oracle, MySQL works fine for the sorts of things we need to do.
Threading was definitely working.
Dieter, if you still have access to this Solaris box, can you try out the script that Oliver posted in the 'threading and performance' thread and see what happens?

What I've done is this (and if I'm doing something wrong here I would really appreciate people telling me); I run this script:

<dtml-var standard_html_header>
<dtml-var ZopeTime>
<br>
<dtml-in "_.range(80)"><dtml-in "_.range(499)">X</dtml-in><br></dtml-in>
<br>
<dtml-var ZopeTime>
<dtml-var standard_html_footer>

It takes about 15 seconds on a small Solaris 8 box I've been testing on.

I open up a management screen on Control_Panel and open up two browser windows *on separate browsers* (I use IE5 and Mozilla) on this script. Click 'view' on one, wait a few seconds, switch to the second browser, select 'view', and then switch to the Control_Panel / debuginfo view to see what's going on.

Output is... (Solaris)

2001/12/01 21:01:09.61767 GMT ... xxx ... 2001/12/01 21:01:25.8206 GMT

and

2001/12/01 21:01:27.995 GMT ... xxx ... 2001/12/01 21:01:42.5978 GMT

i.e. the second script *waits* until the first one has finished. This does *not* happen on MacOS X, i.e.

2001/12/01 22:10:13.1872 GMT ... xxx ... 2001/12/01 22:10:44.208 GMT

and

2001/12/01 22:10:21.1134 GMT ... xxx ... 2001/12/01 22:10:52.157 GMT

From the timestamps, you can tell it's late - and I'm very tired! :)

hth tone
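The check Tony does by hand with two browsers can be approximated outside Zope: start two timed "requests" in separate threads and see whether their start/end windows overlap. This is only a sketch of the methodology, not the DTML test itself - the durations and names are invented, and the work here is a sleep (which releases the GIL), so any working thread implementation should show overlap:

```python
import threading
import time

windows = {}

def request(name, seconds):
    # Record a start/end window for this simulated request.
    start = time.time()
    time.sleep(seconds)  # stand-in for the ~15-second DTML render
    windows[name] = (start, time.time())

t1 = threading.Thread(target=request, args=("first", 0.4))
t2 = threading.Thread(target=request, args=("second", 0.4))
t1.start()
time.sleep(0.1)          # "wait a few seconds, switch browsers"
t2.start()
t1.join()
t2.join()

# Serialized behaviour (what Tony saw on Solaris) would mean the
# second request starts only after the first one ends.
overlap = windows["second"][0] < windows["first"][1]
print("requests overlapped:", overlap)
```

Note that the real Zope case is CPU-bound DTML rendering, where the GIL and the check interval also come into play, so this sketch only establishes that the thread layer itself can interleave.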
Tony et al,

I went to the threads you originally mentioned about threading problems and tried some of the test scripts there. Specifically, I tried Oliver's script and I also ran Dieter's sleep script. Even while running BOTH concurrently, Zope's management interface was still responsive and I was able to browse my site with seemingly no difficulty. The same applies when I run your ZopeTime script (it takes about 20 seconds on my Netra X1, btw). So I think threading works for me. =) This is using my gcc-compiled Python (--with-threads).

Building Python w/ Forte is no picnic... my last build compiled fine and then the executable puked up something about not being able to find ELF. Beats me... I'll keep plugging away though.

One thing I've noticed... I ran the 'testall.py' script first using my mt Python built with Forte, then with the gcc build. For both pythons, the thread test did something like this:

test_thread
creating task 1
task 1 will run for 8.3 sec
creating task 2
task 2 will run for 5.4 sec
creating task 3
task 3 will run for 3.1 sec
creating task 4
task 4 will run for 2.7 sec
creating task 5
task 5 will run for 5.9 sec
creating task 6
creating task 7
creating task 8
creating task 9
creating task 10
waiting for all tasks to complete
task 6 will run for 7.9 sec
task 7 will run for 2.5 sec
task 8 will run for 7.6 sec
task 9 will run for 5.7 sec
task 10 will run for 9.3 sec
task 7 done
task 4 done
task 3 done
task 2 done
task 9 done
task 5 done
task 8 done
task 6 done
task 1 done
task 10 done
all tasks done

That seems like the kind of thing you were looking for previously... although when run independently, the thread test doesn't do that sort of thing. Interestingly, the gcc test hung once results were printed... I sent a few breaks and it gave me a traceback that seemed to indicate it had hung waiting to acquire a new thread??
At any rate, maybe test_thread.py's behaviour isn't an indication of multi-threading not working, just of the scheduling... Is it possible Solaris handles thread creation a little differently than BSD? Perhaps the overhead of creating a new thread is smaller on Solaris, so that the original thread is able to crank out a whole bunch in one shot before those threads get their turn? You mentioned that you were comparing with Mac OS X. Aqua is a huge resource hog... maybe your Python process isn't getting all the CPU it wants, and the new threads pop up sooner than they otherwise would? I don't know... this is all just pure speculation...

BTW, test results using Forte-compiled Python:

116 tests OK.
6 tests failed: test___all__ test_asynchat test_nis test_socket test_socketserver test_sundry
17 tests skipped: test_al test_bsddb test_cd test_cl test_dl test_gdbm test_gl test_imgfile test_largefile test_linuxaudiodev test_minidom test_openpty test_pyexpat test_sax test_sunaudiodev test_winreg test_winsound

Results using vanilla gcc Python:

119 tests OK.
2 tests failed: test_nis test_socketserver
18 tests skipped: test_al test_bsddb test_cd test_cl test_dl test_gdbm test_gl test_imgfile test_largefile test_linuxaudiodev test_minidom test_openpty test_pyexpat test_sax test_sunaudiodev test_sundry test_winreg test_winsound

I'm not sure what the failed tests are for exactly... gcc chose to skip the test_sundry test that failed with Forte, but I'd like to fix them up if possible. If they aren't show-stoppers... I might put my Forte-built, 10% faster Python into action...

cheers, Paul.

On Sat, Dec 01, 2001 at 10:21:12PM +0000, Tony McDonald wrote:
...<snip>...
-- horbal@atips.ca
Just to throw in some more performance tests for those interested... I've built a version of Python 2.1.1 from source using Sun's Forte C compiler. For anybody wanting to reproduce these results:

Environment:

CC='cc -Xc -mt'
cd /python-src/
./configure --without-gcc --with-cxx=CC --with-threads
make
make install

Now, running the Pystone benchmark:

python /pythonhome/lib/python2.1/test/pystone.py

Results:

Sun Netra X1 (UltraSparc IIe 400 MHz)
----
gcc   ~= 4000 pystones/sec
Forte ~= 4370 pystones/sec
====
difference ~= 10%

SunBlade 1000 (dual UltraSparc III 600 MHz)
----
ActiveState 2.1 build 210 ~= 5000 pystones/sec
gcc   ~= 6750 pystones/sec
Forte ~= 7500 pystones/sec
====
difference ~= 25% and 10%, respectively

SunFire 3800 (quad UltraSparc III 750 MHz)
----
ActiveState 2.1 build 210 ~= 7300 pystones/sec
gcc   ~= 8500 pystones/sec
Forte ~= 9300 pystones/sec
====
difference ~= 25% and 10%, respectively

(I'm not sure if the ActiveState build was the binary distro or built locally, since it was just on our system... I'd have to ask our sysadmin. But either way... it's getting the proverbial ass-kicking.)

cheers, Paul.
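The "several passes, then average" methodology behind these numbers is worth making explicit, since single benchmark runs jitter too much to support a 7-10% comparison between compilers. A sketch with a crude stand-in workload (real Pystone lives in Lib/test/pystone.py of the 2.1 tree; the loop body and rate formula below are illustrative, not Pystone's):

```python
import time

def workload(loops=200000):
    # Crude arithmetic loop standing in for Pystone's benchmark body.
    start = time.perf_counter()
    x = 0
    for i in range(loops):
        x += i % 7
    elapsed = time.perf_counter() - start
    return loops / elapsed   # "operations per second", like pystones/sec

# Several passes, then the average - comparing two interpreter builds
# on a single run each would mostly measure noise.
rates = [workload() for _ in range(5)]
average = sum(rates) / len(rates)
print("average rate: %.0f ops/sec" % average)
```

For a real comparison you would run this (or pystone.py itself) under each interpreter build on an otherwise idle machine, as Paul did.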
Paul Horbal writes:
Just to throw in some more performance tests for those interested... .... Results:
Sun Netra X1 (UltraSparc IIe 400 MHz)
----
gcc   ~= 4000 pystones/sec
Forte ~= 4370 pystones/sec

An 800 USD AMD Athlon 1.4 GHz PC under Linux with "gcc" features 17,543 pystones/sec.

It's a factor of 4 faster and (I think) about a factor of 4 less expensive...

Dieter
Well, in all fairness... your Athlon is clocked 3.5 times higher than my Netra and happens to be the architecture for which both Python and gcc likely receive hefty optimization (x86). The Netra X1 is also sub-$1000, fits in 1U, and has some really nice Lights Out Management features (I can power off/power on via an external console or have the watchdog automatically reboot a stalled system, among other things). There's also Sun's support, which - good or bad (I've had the former) - you just can't get for a commodity PC.

I don't dispute Athlon price/performance (writing this on an Athlon atm)... The UltraSparc II isn't exactly a performance leader these days, if it ever was. We're talking about a chip that was originally released over 5 years ago. But the Netra is not intended to be a speed demon, just a solid web server. That's what we bought it for... =)

At 08:23 PM 12/2/2001 +0100, Dieter Maurer wrote:
...<snip>...
Tony McDonald writes:
... Dieter, if you still have access to this Solaris box, can you try out the script that Oliver posted in the 'threading and performance' thread on it and see what happens?

As Paul reported, the problem you see is not a general Solaris problem.
Could you check whether your Zope is started with any special parameters, especially:

-t  controls the number of threads; -t0 or -t1 would produce something like what you see

-i  controls the frequency (number of Python interpreter instructions) before the Global Interpreter Lock is released and other threads get a chance to run; a very large value (several thousand) would be bad

If this is not the case, then maybe someone modified "z2.py". Or there is some environment variable that badly affects Zope's operation...

Dieter
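The knob behind Dieter's -i remark is the interpreter's check interval: in Python 2.x, sys.setcheckinterval(N) set how many bytecode instructions ran between points where the GIL could be released, and Zope's -i flag fed that value. Modern Pythons removed that API in favour of a time-based switch interval, but the effect Dieter warns about is the same; a sketch using the current API:

```python
import sys

# Python 2.x exposed sys.setcheckinterval(N): N bytecode instructions
# between GIL release points (several thousand would let one thread
# starve the others - Dieter's warning about a large -i).
# Python 3.2+ measures the interval in seconds instead.
default = sys.getswitchinterval()
print("default switch interval: %s seconds" % default)

# A very large value has the analogous effect: one thread can hog
# the interpreter for long stretches between switches.
sys.setswitchinterval(0.5)
print("now:", sys.getswitchinterval())
sys.setswitchinterval(default)   # restore the default
```

Either way, the interval only affects how responsively CPU-bound threads interleave; it cannot make a non-threaded build concurrent.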
Paul Horbal writes:
Has anybody attempted to do this?
I'm finding that even though I specify CC, CXX, put the Forte directories ahead of /usr/local/bin in my path, Python is still falling back on gcc to build Zope. Where can I modify this behaviour?

You may need to configure Python not to use "gcc":
In your Python source distribution folder:

./configure --without-gcc

Then "make", "make test", "make install".

From the "configure" source:

--without-gcc    never use gcc
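A quick way to verify the --without-gcc build actually took is to ask the interpreter which compiler built it: the compiler banner is embedded in sys.version, and later Pythons expose it directly via platform.python_compiler() (on 2.1 you would inspect the sys.version string itself). A small sketch:

```python
import platform
import sys

# sys.version ends with the compiler used to build this interpreter,
# e.g. "[GCC ...]" for a gcc build; a Forte/Sun Workshop build would
# show that compiler's banner instead.
print(sys.version)
compiler = platform.python_compiler()
print("built with:", compiler)
```

Running this under each installed interpreter removes any doubt about which binary the benchmarks were exercising.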
Our Sun rep tells us that we can kiss performance goodbye using gcc... Since we have Forte anyways, I'd like to try it out.

Please let us know whether or not they are right... I do not always trust reps...

Dieter
participants (6)
- Dario Lopez-Kästen
- Dieter Maurer
- Paul Horbal
- Paul Horbal
- tone.mcdonald@ncl.ac.uk
- Tony McDonald