[Zope-dev] Streaming Response
Toby Dickenson
tdickenson@geminidataloggers.com
Thu, 24 Apr 2003 11:46:10 +0100
On Thursday 24 April 2003 10:58 am, Oliver Bleutgen wrote:
> Toby Dickenson wrote:
> > Deactivating *every* block looks like a bad idea. This defeats the
> > in-memory cache that will be a big win for small files.
> >
> > The memory cache is only a disadvantage if your files are large enough that
> > they have enough chunks to disrupt the LRU cache policy. How about comparing
> > the number of chunks to a fraction of the cache target size?
> >
> > (sorry, no patch today. I've a large email backlog to clear)
>
> Just something which came to my mind when reading about cache policies:
>
> With the advent of more and more alternative storages, especially adaptable
> storage and directory storage, maybe the interaction with other caches
> should be taken into account.
> For instance, does it make sense for Zope to cache a file in memory when
> it's also cached in the kernel VFS, which it is likely to be when some
> FS-based storage is used? How can the choice of the backend storage
> affect caching policies inside Zope?
> For the "frontend" cache policies, Zope is quite configurable thanks to
> the various cache managers etc.; maybe it's also worth thinking about
> making such cache policies for the "backend" more configurable.
Yes, today's LRU policy is good enough, but far from optimal. A better cache
eviction scheme could also balance:
1. The cost of keeping an object in memory. This is largely dependent on the
byte size of the object.
2. The benefit of keeping an object in memory. Objects that know they are a
waste of space (like the chunks that started this thread) can pass on a hint
to the cache manager.
3. The cost of a subsequent cache miss. This depends on the pickling overhead
and the cost of touching the underlying storage.
The effect of this would be to keep a larger proportion of small objects in
memory. That is, more objects in the same space.
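
To make that a bit more concrete, here is a rough Python sketch (not a patch,
and nothing like the real pickle cache, which is a plain LRU implemented in C)
of how such a weighted eviction score could be computed. All of the names
(CacheEntry, byte_size, keep_hint, miss_cost) are invented for illustration,
and the cost and benefit estimates would need real tuning:

    import time

    class CacheEntry:
        def __init__(self, obj, byte_size, keep_hint=1.0, miss_cost=1.0):
            self.obj = obj
            self.byte_size = byte_size    # 1. cost of keeping it in memory
            self.keep_hint = keep_hint    # 2. object's own hint (0.0 = "waste of space")
            self.miss_cost = miss_cost    # 3. estimated cost of reloading it
            self.last_access = time.time()

        def touch(self):
            self.last_access = time.time()

    def eviction_score(entry, now):
        # Lower score = better eviction candidate. Recency still matters,
        # as in plain LRU, but it is weighed against byte size, the keep
        # hint, and the expected cost of a cache miss.
        age = now - entry.last_access
        benefit = entry.keep_hint * entry.miss_cost
        return benefit / (entry.byte_size * (1.0 + age))

    def evict(cache, target_bytes):
        # `cache` maps an oid to a CacheEntry. Evict the lowest-scoring
        # entries until the total size fits within target_bytes.
        now = time.time()
        candidates = sorted(cache.items(),
                            key=lambda item: eviction_score(item[1], now))
        total = sum(entry.byte_size for entry in cache.values())
        for oid, entry in candidates:
            if total <= target_bytes:
                break
            total -= entry.byte_size
            del cache[oid]    # a real cache would deactivate the object instead

The chunks that started this thread would set keep_hint close to zero and be
the first to go, while small objects that are expensive to reload stay put.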
--
Toby Dickenson
http://www.geminidataloggers.com/people/tdickenson