Trevor Warren wrote at 2004-5-18 22:31 -0700:
... So this means if i have a single-CPU system, the max i can get from Zope handling 100 different threads simultaneously would be really meagre compared to any other application server architected to handle multiple threads simultaneously.
Python's threading model is good for *single* CPU systems. It usually cannot fully use the power of a *multi* processor system. The reason: Python uses reference counting as its primary "garbage" detection mechanism. Reference count fields are a global resource and require protection, and reference count updates are very frequent. Therefore, it is more efficient (at least on a single-CPU system) to protect large sequences of Python code with one lock (the so-called Global Interpreter Lock, or GIL) than to lock locally for every reference count update.
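A minimal sketch of the consequence (assuming CPython; the function and variable names are illustrative, not from Zope): several threads running a pure-Python, CPU-bound loop all produce correct results, but because each must hold the GIL while executing bytecode, they run one at a time rather than in parallel:

```python
import threading

def count_down(n, results, i):
    # Pure-Python loop: the thread holds the GIL for nearly its
    # entire run, so CPU-bound threads execute serially.
    total = 0
    while n > 0:
        total += n
        n -= 1
    results[i] = total

results = [0] * 4
threads = [threading.Thread(target=count_down, args=(100_000, results, i))
           for i in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

# All four threads finish with the correct sum, but on CPU-bound work
# the GIL serializes them: wall-clock time is roughly the same as
# running the four calls one after another, even on an SMP machine.
print(results)
```

Threads still help when requests spend their time blocked on I/O (database, network), since the GIL is released while waiting; it is pure-Python computation that does not scale across processors.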
Does this explain the 30 TPS, compared to the 100+ TPS, for the simple short-circuit (no SQL) tests conducted on my SMP system here at the labs?
You can use my "ZopeProfiler" product to find out where the time is spent. Be warned: profiling slows Zope down drastically; use it only to locate the bottlenecks. <http://www.dieter.handshake.de/pyprojects/zope> -- Dieter