Hi Matthew,

Thanks for your answer. I'm still battling furiously with Solaris and Zope. As opposed to Win NT / Mac OS X / Linux, Solaris seems to be running Zope in a single thread. For details, see the zope-news thread on "threading, dtml and performance". You can check this by running a complex DTML method and trying to access anything else on that site at the same time. My system freezes for as long as the huge task takes.

My Solaris servers:

Zopeserver 1 (Sun R280 / 4 GB RAM / 4 CPUs / Solaris 2.8, I think / not dedicated)
Pystone(1.1) time for 10000 passes = 2.03
This machine benchmarks at 4926.11 pystones/second

Zopeserver 2 (PIII 1 GHz Dell 19" / 512 MB RAM / 1 CPU / Solaris 2.7 / dedicated Zope)
Pystone(1.1) time for 10000 passes = 0.85
This machine benchmarks at 11764.7 pystones/second

That sort of justifies your hunch about what is cheaper (the Sun is about 15 times the price of the Dell rack). If you use the server for other programs as well, then the Intel loses its cutting edge because it has only one processor.

Oliver Erlewein
Hi Oliver,
Thread priority is computed dynamically by the Solaris scheduler: threads with lower priorities are dispatched first, but once a thread is dispatched, its priority rises if it finishes its timeslice without entering a wait state (well, that may be a simplification).
Because the Python interpreter is controlled by a single large central lock, only one thread can be doing useful work at any given moment. There are exceptions to this rule (the lock is released around I/O), but most of Zope is compute-bound, not I/O-bound. The net effect is that you rarely see more than a few threads in Zope doing work.
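You can see this effect with a small sketch (nothing Zope-specific; just a hypothetical compute-bound task): several threads all complete correctly, but under CPython's global lock they take roughly as long as running the same work sequentially.

```python
import threading

def busy_work(n):
    # A purely compute-bound task: no I/O, so the interpreter lock
    # is held for nearly the whole loop.
    total = 0
    for i in range(n):
        total += i * i
    return total

def run_in_threads(num_threads, n):
    # Run busy_work in several threads and collect the results.
    results = [None] * num_threads

    def worker(idx):
        results[idx] = busy_work(n)

    threads = [threading.Thread(target=worker, args=(i,))
               for i in range(num_threads)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return results

# All threads finish with the correct answer, but wall-clock time is
# roughly the same as doing the work one thread at a time, because
# only one thread executes Python bytecode at any instant.
results = run_in_threads(4, 100000)
```

I/O-bound threads (waiting on sockets or disks) do overlap usefully, which is why the compute-bound nature of most Zope requests matters here.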
More interesting is that Zope's performance tracks the results of the "pystone" benchmark almost linearly. So, if you run
python /usr/local/lib/python2.1/test/pystone.py
you might get a number like
blade(3)$ python /usr/local/lib/python2.1/test/pystone.py
Pystone(1.1) time for 10000 passes = 1.77
This machine benchmarks at 5649.72 pystones/second
That's my result on a SunBlade 100 (500 MHz UltraSPARC IIe).
It's worth noting that I get the following result on a 500 MHz Intel Celeron:
djinn(12)$ python2.1 /usr/local/python-2.1.2/lib/python2.1/test/pystone.py
Pystone(1.1) time for 10000 passes = 1.47
This machine benchmarks at 6802.72 pystones/second
from which you may draw two conclusions: 1) gcc on SPARC doesn't generate the best code, and 2) Intel CPUs run Python more cost-effectively than SPARC chipsets do.
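A back-of-the-envelope check of that cost-effectiveness claim, using Oliver's figures from earlier in the thread (the ~15x price ratio is his estimate, not a quoted price):

```python
# Pystones per relative price unit, from the numbers quoted above.
sun_pystones = 4926.11     # Sun R280, 4 CPUs
dell_pystones = 11764.7    # Dell PIII 1 GHz
sun_price = 15.0           # relative price (Oliver's estimate)
dell_price = 1.0

sun_value = sun_pystones / sun_price       # pystones per price unit
dell_value = dell_pystones / dell_price

ratio = dell_value / sun_value
print("Dell is about %.0fx more pystones per price unit" % ratio)
```

By this rough measure the Dell delivers on the order of 35x more pystones per unit of price, though it ignores the Sun's extra CPUs, RAM, and reliability features.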
-- Matt Kromer Zope Corporation http://www.zope.com/