From: Toby Dickenson [mailto:tdickenson@geminidataloggers.com]

> 15k objects sounds like a lot for a catalog query.
> 0.5G/15k = average 33k per object... that sounds more like the size of
> a document object than a btree node.
>
> Both of these facts suggest you might be able to improve performance by
> optimising your application. There are pages in the Control Panel that
> will tell you what these 15k objects are.
I've been trying to figure that out (hence my other post about browsing the ZODB). Actually, it's 15,000 target objects per thread; with 4 threads, the actual number of objects in memory is around 60,000, so it's more like 8k per object. Which makes me wonder...

[Sorry for asking so much today, but this is really useful :)]

So you should really try to make sure the entire catalog fits in memory, or each catalog query will result in activating/deactivating (thrashing) objects? And what determines how much memory a catalog query actually uses? All of the indexes, loaded into memory? Only the indexes you use? Or something else? I guess the metadata and the objects themselves won't be touched.

Thanks for the clarifying answers, by the way!

Regards,
-- Bjorn
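[A small sketch of the per-object arithmetic discussed above, for reference. The byte counts are just the figures quoted in the thread (using decimal k, 1k = 1000 bytes, as the original 0.5G/15k = 33k calculation did), not measurements.]

```python
# Back-of-envelope check of the numbers in the thread.

def avg_object_kb(total_bytes, n_objects):
    """Average size per in-memory object, in (decimal) kilobytes."""
    return total_bytes / n_objects / 1000.0

TOTAL = 0.5e9  # the ~0.5 GB of resident objects quoted above

per_thread = avg_object_kb(TOTAL, 15_000)   # one thread's 15k objects
all_threads = avg_object_kb(TOTAL, 60_000)  # 4 threads x 15,000 objects

print(round(per_thread))   # -> 33, document-sized, as Toby notes
print(round(all_threads))  # -> 8, closer to a btree-node size
```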