oliver.erlewein@sqs.de wrote:
Hi all
Me again with my Solaris problems (threading, dtml and performance)...
I've now checked on my threads. I've got 8 of them. Most have differing priorities and the same nice level.
Could somebody running Zope on Solaris please run "ps -eflLj | grep python" and tell me if they get a similar result? I'm wondering about the thread with PRI=99 -- that's the thread doing all the work. Is that correct?? I don't suppose so. The other threads don't seem to do any work.
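[For comparison, here is a small sketch of how the interpreter-level view of those same threads can be inspected from inside a running Python process. Note that OS-level scheduling priorities (the PRI column of ps) are assigned by the Solaris scheduler and are not visible from this level; the attribute spellings below are the modern threading API.]

```python
import threading

# List every thread the Python interpreter knows about in this process.
# Zope's worker threads would show up here alongside the main thread.
for t in threading.enumerate():
    print(t.name, "daemon" if t.daemon else "non-daemon", "alive" if t.is_alive() else "dead")
```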
Thanx a lot! Oliver Erlewein
Hi Oliver,

Threading priority is a dynamic computation by the Solaris scheduler -- the threads with lower priorities are dispatched first, but once a thread gets dispatched its priority rises if it finishes its timeslice without entering a wait state (well, that may be a simplification).

Because Python is controlled by a large central lock, only one thread can be doing any useful work most of the time. There are exceptions to this rule, but most of Zope is compute-bound, not I/O-bound. The net effect is that you rarely see more than a few threads in Zope doing work.

More interesting to note is that the performance of Zope is pretty much linear with the results of the "pystone" benchmark. So, if you run

    python /usr/local/lib/python2.1/test/pystone.py

you might get a number like

    blade(3)$ python /usr/local/lib/python2.1/test/pystone.py
    Pystone(1.1) time for 10000 passes = 1.77
    This machine benchmarks at 5649.72 pystones/second

Those are my results on a SunBlade 100 (500 MHz UltraSPARC IIe). It's worthwhile noting that I get the following result on a 500 MHz Intel Celeron:

    djinn(12)$ python2.1 /usr/local/python-2.1.2/lib/python2.1/test/pystone.py
    Pystone(1.1) time for 10000 passes = 1.47
    This machine benchmarks at 6802.72 pystones/second

from which you may draw two conclusions: 1) GCC on SPARC doesn't emit the best code, and 2) Intel CPUs run Python more cost-effectively than SPARC chipsets do.

--
Matt Kromer
Zope Corporation  http://www.zope.com/
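[The "large central lock" point above -- CPython's global interpreter lock serializing compute-bound threads -- can be demonstrated with a minimal sketch: running the same pure-Python workload sequentially and then in two threads takes roughly the same wall time, because only one thread holds the lock at a time. The function and sizes here are illustrative, not from the original post.]

```python
import threading
import time

def burn(n):
    # Pure-Python, compute-bound loop: holds the GIL while it runs.
    total = 0
    for i in range(n):
        total += i
    return total

N = 2_000_000

# Two calls, one after the other.
t0 = time.perf_counter()
burn(N)
burn(N)
sequential = time.perf_counter() - t0

# The same two calls in two threads: the GIL lets only one thread
# execute Python bytecode at a time, so this does not get faster.
workers = [threading.Thread(target=burn, args=(N,)) for _ in range(2)]
t0 = time.perf_counter()
for w in workers:
    w.start()
for w in workers:
    w.join()
threaded = time.perf_counter() - t0

print("sequential: %.2fs  threaded: %.2fs" % (sequential, threaded))
```

On a compute-bound workload like this, the threaded timing is typically no better than the sequential one -- which is why a Zope process rarely shows more than one thread doing real work at any instant.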