On Thursday 23 October 2003 10:12, Bjorn Stabell wrote:
From: Toby Dickenson [mailto:tdickenson@geminidataloggers.com]

Yes. The default of 4 threads is arguably too high. All my applications work better with 2 threads and double the cache size (but still using the same total amount of memory).
I'm experimenting with that now. A catalog query that used to take 40-100 seconds with the default target number of objects in the cache now takes 1-2 seconds (or less) with the target raised to 15,000. This roughly 100x speed increase comes at the cost of about 0.5 GB of resident memory with 4 threads.
15k objects sounds like a lot for a catalog query. 0.5 GB / 15k = an average of 33 kB per object... that sounds more like the size of a document object than a BTree node. Both of these facts suggest you might be able to improve performance by optimising your application. There are pages in the Control Panel that will tell you what these 15k objects are.
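The back-of-envelope division above is easy to check. The numbers below are the ones quoted in this thread, not fresh measurements:

```python
# Figures as quoted in the thread -- not fresh measurements.
cache_target = 15_000        # target number of objects in the cache
resident_bytes = 0.5e9       # ~0.5 GB of resident memory reported

avg_object_size = resident_bytes / cache_target   # bytes per cached object
print(f"~{avg_object_size / 1000:.0f} kB per object")  # prints ~33 kB per object
```

An average of ~33 kB per cached object is far larger than a typical BTree bucket, which is what motivates the suggestion to look at what those objects actually are.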
The per-thread caches need to be separate to give isolation between transactions. Some classes that allocate large amounts of memory for immutable objects have custom optimisations to share those between threads.
This is very unfortunate, especially given that most of these objects won't change often. Something like the way virtual memory works (copy-on-write) would have saved a lot of memory, or allowed you to keep many more objects in the cache, increasing speed drastically.
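The copy-on-write idea proposed above can be sketched with a small wrapper: readers share one underlying dict, and the first write triggers a private copy. This illustrates the proposal only; it is not how ZODB's per-connection caches actually behave:

```python
class CowDict:
    """Share one underlying dict between readers; copy on first write."""

    def __init__(self, shared):
        self._data = shared
        self._owned = False

    def __getitem__(self, key):
        return self._data[key]

    def __setitem__(self, key, value):
        if not self._owned:               # first write triggers a private copy
            self._data = dict(self._data)
            self._owned = True
        self._data[key] = value

base = {"title": "doc1"}                  # hypothetical cached object state
r1, r2 = CowDict(base), CowDict(base)     # two threads share one copy
r1["title"] = "changed"                   # the writer pays for its own copy
assert r2["title"] == "doc1"              # the other thread is unaffected
```

Under this scheme, rarely-modified objects would cost memory only once no matter how many threads cache them, which is exactly the saving Bjorn is asking for.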
Yes, this is certainly true.
Is this in any way related to the inherent global interpreter lock limitation in Python?
No. -- Toby Dickenson