From: Toby Dickenson [mailto:tdickenson@geminidataloggers.com] Yes. The default of 4 threads is arguably too high. All my applications work better with 2 threads, and double the cache size (but still using the same total amount of memory)
I'm experimenting with that now. A catalog query that used to take 40-100 seconds with the default target number of objects in the cache now takes 1-2 seconds (or less) with the target increased to 15,000. This speedup of up to 100x comes at the cost of about 0.5 GB of resident memory with 4 threads.
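The trade-off being tuned here, a per-connection object cache with a target object count, can be sketched with a small LRU cache. This is only an illustration (the class and names are hypothetical, not ZODB's actual pickle-cache implementation): a larger target keeps more objects resident and avoids reloads, at the cost of memory per thread.

```python
from collections import OrderedDict

class ObjectCache:
    """Minimal sketch of a per-connection object cache with a target
    size, loosely modelled on the idea of ZODB's pickle cache.
    Names and structure here are illustrative, not the real API."""

    def __init__(self, target_size=400):
        self.target_size = target_size
        self._data = OrderedDict()  # oid -> object, in LRU order

    def get(self, oid):
        obj = self._data.get(oid)
        if obj is not None:
            self._data.move_to_end(oid)  # mark as recently used
        return obj

    def store(self, oid, obj):
        self._data[oid] = obj
        self._data.move_to_end(oid)
        # Evict least-recently-used objects beyond the target.
        while len(self._data) > self.target_size:
            self._data.popitem(last=False)
```

With a target of 15,000 instead of the default few hundred, far fewer evictions occur between queries, which is where the reported speedup comes from.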
They need to be separate to give isolation between transactions. Some classes that allocate large amounts of memory for immutable objects have custom optimisations to share those between threads.
This is very unfortunate, especially given that most of these objects won't change often. Something like the way virtual memory works (copy-on-write) would have saved a lot of memory, or allowed you to keep many more objects in the cache, increasing speed drastically. Is this in any way related to Python's inherent global interpreter lock limitation? Bye, -- Bjorn
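The sharing optimisation mentioned above for immutable objects can be sketched as a thread-safe intern table. This is a hypothetical sketch, not the actual Zope mechanism: because the cached values are immutable, every per-thread cache can safely hold a reference to one shared instance instead of its own copy, which is the memory saving copy-on-write would generalise.

```python
import threading

# Hypothetical shared intern table. Since the stored objects are
# immutable, handing the same instance to every thread is safe:
# no thread can mutate it, so no isolation is lost.
_shared = {}
_lock = threading.Lock()

def share(key, factory):
    """Return a single shared instance of the immutable object for
    `key`, creating it via `factory` on first use.
    Safe to call concurrently from any thread."""
    with _lock:
        obj = _shared.get(key)
        if obj is None:
            obj = _shared[key] = factory()
        return obj
```

Two threads asking for the same key get the identical object, so N threads cost one copy rather than N. Mutable objects still need per-transaction copies, which is where a real copy-on-write scheme would have to do more work.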