[Zope-dev] Re: Caching ZCatalog results
Roché Compaan
roche at upfrontsystems.co.za
Fri Feb 23 12:34:01 EST 2007
On Fri, 2007-02-23 at 12:09 -0500, Tres Seaver wrote:
>
> Tres Seaver wrote:
> > Roché Compaan wrote:
> >>> On Fri, 2007-02-23 at 06:55 -0500, Tres Seaver wrote:
> >>>> Roché Compaan wrote:
> >>>>> I'm curious, has anybody played around with the idea of caching ZCatalog
> >>>>> results and if I submitted a patch to do this would it be accepted?
> >>>>>
> >>>>> I quickly coded some basic caching of results on a volatile attribute
> >>>>> and I was really surprised with the amount of cache hits I got
> >>>>> (especially with a Plone site that is a heavy user of the catalog)
> >>>> +1. I think using the 'ZCacheable' stuff (e.g., adding a RAMCacheManager
> >>>> and associating a catalog to it) would be the sanest path here.
> >>> Cool idea. I haven't done any coding involving OFS.Cache though. Looking
> >>> at it briefly it looks like one can modify the catalog to subclass
> >>> OFS.Cache.Cacheable and then use the ZCacheable_get, ZCacheable_set and
> >>> ZCacheable_invalidate methods to interact with a cache manager. This
> >>> needs to be pretty explicit though. Are there any side effects that I
> >>> should guard against if the catalog subclasses OFS.Cache.Cacheable?
> >
> > I don't think so. Here are some random thoughts on the idea:
> >
> > - The 'searchResults' method must pass its keyword arguments as
> > part of the cache key.
> >
> > - I don't know if there is a reasonable way to do 'mtime' for
> > the catalog: we would like to be able to get an mtime cheaply
> > for the BTrees (indexes, the 'data' container), but I don't know
> > if that is possible.
> >
> > - The "right" place to do this feels like the 'searchResults' of
> > ZCatalog, just before it calls 'self._catalog.searchResults'.
> >
> > - The CMF's catalog overrides 'searchResults', but calls it at
> > the end, so everything there should work.
In my prototype I also wired the caching into searchResults:

def searchResults(self, REQUEST=None, used=None, _merge=1, **kw):
    ...
    cache_key = None
    if args:
        cache_key = self._makeCacheKey(args)
    result = self._getCachedResult(cache_key)
    if result:
        return result
    return self._cacheResult(cache_key, self.search(
        args, sort_index, reverse, sort_limit, _merge))
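
The helpers themselves are nothing fancy; something along these lines is
enough (simplified sketch: the cache lives on a volatile attribute, so it
is per connection and disappears whenever the catalog is flushed from the
ZODB cache, and it assumes args behaves like a plain mapping of the query
parameters):

def _makeCacheKey(self, args):
    # Normalise the query parameters into a hashable, order-independent
    # key; mutable values are flattened with repr().
    items = []
    for name in sorted(args.keys()):
        value = args[name]
        if isinstance(value, (list, dict)):
            value = repr(value)
        items.append((name, value))
    return tuple(items)

def _getCachedResult(self, cache_key):
    cache = getattr(self, '_v_result_cache', None)
    if cache is None or cache_key is None:
        return None
    return cache.get(cache_key)

def _cacheResult(self, cache_key, result):
    if cache_key is not None:
        cache = getattr(self, '_v_result_cache', None)
        if cache is None:
            cache = self._v_result_cache = {}
        cache[cache_key] = result
    return result

That takes care of your first point: the query arguments are the cache
key. Storing the Lazy itself is only reasonable because the cache is per
connection (more on that below), and there is no invalidation beyond the
volatile attribute going away, which is exactly why a cache manager looks
attractive.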
>
> Hmm, on further thought:
>
> - It isn't safe to stash persistent objects in the RAM Cache manager,
> because they can't be used safely from another database connection.
But the LazyMap of brains isn't persistent, is it?
>
> - The result set you get back from a query is a "lazy", which will
> be consumed by each client: no two clients will see the same
> thing.
I don't follow. The Lazy will contain a set of document ids that will be
the same for all clients, won't it?
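
If handing out the cached brains themselves is the problem, the cache
could store only the plain integer rids and build a fresh LazyMap on every
hit, so no two clients ever touch (or consume) the same Lazy. Here is an
untested variant of the two storage helpers above, assuming the brains
expose getRID() and that Catalog.__getitem__ turns a rid back into a
brain:

from Products.ZCatalog.Lazy import LazyMap

def _cacheResult(self, cache_key, result):
    if cache_key is not None:
        cache = getattr(self, '_v_result_cache', None)
        if cache is None:
            cache = self._v_result_cache = {}
        # Store only the integer record ids -- nothing persistent and
        # nothing bound to this connection ends up in the cache.
        cache[cache_key] = [brain.getRID() for brain in result]
    return result

def _getCachedResult(self, cache_key):
    cache = getattr(self, '_v_result_cache', None)
    if cache is None or cache_key is None:
        return None
    rids = cache.get(cache_key)
    if rids is None:
        return None
    # Rebuild a fresh LazyMap of brains for this request.
    return LazyMap(self.__getitem__, rids, len(rids))

Stale rids after unindexing would still need proper invalidation, of
course, but at least nothing persistent would be shared.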
I got satisfactory results by storing the results in a volatile attribute
(even though volatile attributes are not shared between clients, so each
connection warms up its own cache).
I'm still curious to see what can be achieved with ZCacheable to extend
the lifetime of the cache.
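
For what it's worth, this is roughly the wiring I would try first
(untested sketch: it assumes ZCatalog grows OFS.Cache.Cacheable as a base
class so that it can be associated with a RAMCacheManager in the ZMI; the
CacheableZCatalog name is only for illustration, a real patch would go
into ZCatalog.searchResults itself):

from OFS.Cache import Cacheable
from Products.ZCatalog.ZCatalog import ZCatalog

class CacheableZCatalog(ZCatalog, Cacheable):

    def searchResults(self, REQUEST=None, **kw):
        keywords = None
        if self.ZCacheable_isCachingEnabled():
            # The keyword arguments form the cache key; a complete
            # version would also fold in query parameters coming from
            # REQUEST, and the values need to be simple enough for the
            # cache manager to key on.
            keywords = kw.copy()
            result = self.ZCacheable_get(view_name='searchResults',
                                         keywords=keywords, default=None)
            if result is not None:
                return result
        result = ZCatalog.searchResults(self, REQUEST, **kw)
        if keywords is not None:
            # No mtime_func is passed: there is no cheap mtime for the
            # underlying BTrees, which is still the open question above.
            self.ZCacheable_set(result, view_name='searchResults',
                                keywords=keywords)
        return result

The invalidation side could simply call ZCacheable_invalidate() from
catalog_object and uncatalog_object, which would sidestep the BTree mtime
question.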
--
Roché Compaan
Upfront Systems http://www.upfrontsystems.co.za