[Zope] caching in zope; tips, tricks, & lessons learned
Jamie Heilman
jamie@audible.transient.net
Mon, 2 Jun 2003 18:28:53 -0700
We all care about speed, right? Sure we do. If you've been using a
RAM Cache Manager to reduce those painfully long render times wrought
by Zope Page Templates or complex ZSQL Methods, there are some things
you might not have known, which I plan to elucidate here.
First, it's important to understand exactly what knobs the caching API
gives us. Briefly, they are: the ability to set a key/value pair in
the cache, the ability to get a value from the cache given a key, and
the ability to delete any entries associated with an object. (It's
really not unlike a dictionary when you get right down to it.) For
the Product author, there are two areas of this API that deserve
special focus: control over the "key" used by the cache and control
over the peripheral logic dictating the overall "cost" of a cache
entry.
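To make the dictionary analogy concrete, here's a rough sketch of the
shape of those three operations (illustrative only -- this is not the
real OFS/Cache.py code, just the idea):

    # Illustrative sketch only, not the actual ZCache/Cacheable classes;
    # just the three operations the API boils down to.
    class SketchCache:
        def __init__(self):
            self._data = {}                    # (object path, key) -> value

        def set(self, ob_path, key, value):
            self._data[(ob_path, key)] = value

        def get(self, ob_path, key, default=None):
            return self._data.get((ob_path, key), default)

        def invalidate(self, ob_path):
            # drop every entry belonging to this object
            for k in list(self._data):
                if k[0] == ob_path:
                    del self._data[k]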
I'll quickly address the issue of cost first, as it's simple: the
Cacheable class really doesn't have much of a concept of "cost."
Objects implementing the Cacheable interface can define a function
which influences the mtime (last modification time) calculation of an
object, but that's it; there is no other notion of cost logic embedded
within the API. This significantly limits what kind of cache
replacement algorithms you can implement, but there probably isn't any
better alternative given the wide variety of object types in Zope.
(If you're confused by my rambling, it's OK; let me give a more
concrete example. Zope has no parameter in the ZCacheable_set method
that lets you tell the associated cache manager, "this data took me
2.453 seconds to generate, so I will assign its time-cost metric a
value of 2.453." Thus the cache manager has no way of gauging the
relative overhead of flushing a cache entry that took 5 seconds to
generate vs. one that took only 2 seconds. OTOH, I'm not sure whether
a per-thread job timer is a reasonable thing to ask Cacheable objects
to implement, or if it would even matter in the long run.)
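To put it in code, something like the following is pure fantasy -- the
cost argument below doesn't exist anywhere in the Cacheable API; it's
only here to show the kind of information the cache manager never
receives:

    import time

    # Hypothetical sketch; ZCacheable_set accepts no cost parameter, so
    # a cache manager never learns how expensive an entry was to make.
    class CostAwareCache:
        def __init__(self):
            self._data = {}            # key -> (value, cost in seconds)

        def set(self, key, value, cost):
            self._data[key] = (value, cost)

    def render_and_cache(cache, key, render):
        start = time.time()
        value = render()
        cache.set(key, value, cost=time.time() - start)  # imaginary argument
        return value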
It's worthwhile to note the RAM Cache Manager uses an LFU replacement
policy with some tweakable parameters. It could just as easily have
implemented an LRU policy, one that expires based on entry size, or a
hybrid thereof, but there's nothing wrong with LFU; different
replacement policies are good in different situations.
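If the distinction is fuzzy, here's a toy LFU cache (not RAM Cache
Manager's actual code): every hit bumps a counter and the
least-frequently-used entry is evicted first, whereas an LRU cache
would track recency instead of frequency.

    # Toy LFU cache, for illustration only.
    class LFUCache:
        def __init__(self, max_entries=100):
            self.max_entries = max_entries
            self._data = {}                    # key -> value
            self._hits = {}                    # key -> access count

        def get(self, key, default=None):
            if key in self._data:
                self._hits[key] += 1
                return self._data[key]
            return default

        def set(self, key, value):
            if key not in self._data and len(self._data) >= self.max_entries:
                victim = min(self._hits, key=self._hits.get)
                del self._data[victim], self._hits[victim]
            self._data[key] = value
            self._hits.setdefault(key, 0)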
Control over the 'key' used by the cache manager is arguably just as
important as the cost logic, and here the caching API gives a very
significant (bordering on limitless) number of options. Objects are
allowed to define a view_name (this is generally a ZClass thing, the
view_name being the id of the object which is acting as a method of
the ZClass instance), and a keyword dictionary that can contain
anything the object author thinks should be used in determining entry
uniqueness (DTML Methods let you control this through the ZMI).
Additionally, cache managers have access to anything which can be
acquired from the object itself. RAM Cache Managers make good use of
all this flexibility, but there are some gotchas I think people should
be aware of, which is why I'm writing this whole mess in the first
place.
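Roughly speaking (and this is a simplification, not the actual RAM
Cache Manager implementation), the entry key ends up being composed of
things like the object's path, the view_name, the keywords dictionary,
and whatever REQUEST variables the manager is configured to look at:

    # Simplified sketch of entry key composition; not RAM Cache
    # Manager's actual code.
    def build_key(physical_path, view_name, keywords, request, request_vars):
        parts = {
            'path': physical_path,
            'view': view_name,
            'keywords': sorted(keywords.items()),
        }
        for name in request_vars:              # e.g. ['AUTHORIZED_USER']
            parts[name] = request.get(name)
        return str(parts)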
The first gotcha is that RAM Cache Managers let you define a list of
variables to give to REQUEST.get(), the values of which will be added
to the cache entry key--this is both good and bad. It's good because
it lets you, for example, make sure AUTHORIZED_USER is part of the
entry key, ensuring user-specific data doesn't leak. It's bad because
it presents the user with a tempting but dangerous way to cache the
results of query submissions. If you allow RAM Cache Manager to add,
say, the result of REQUEST.get("foo") to the key of an entry, it's
entirely possible you just opened yourself up to a trivial DoS attack,
as you've just allowed clients to store up to whatever your configured
variable size limits are (probably the max POST query size for most
people) in the cache. Even if you have POST queries turned off and
only allow the normal 8k GET request, that's still quite a bit of data
that the client can now re-insert over and over again into your cache.
RAM Cache Manager has no size limits. There are a number of ways this
can be solved (message digests, length limits, etc.), but in the
interim just be aware the problem exists and use RAM Cache Managers
with appropriate caution.
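For what it's worth, the message digest fix is straightforward;
something along these lines (illustrative only) keeps a client-supplied
value from bloating the entry key, though you still want an overall
cache size limit to bound the number of entries:

    import hashlib

    # Sketch of the message digest / length limit mitigation: hash any
    # oversized client-supplied value before folding it into the key.
    def digest_key_part(value, max_len=128):
        text = str(value)
        if len(text) <= max_len:
            return text
        return hashlib.sha1(text.encode('utf-8')).hexdigest()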
The second gotcha is specific to Page Templates. DTML Methods let you
choose which bits of their namespace should be placed in the keywords
dictionary used by the caching API to indicate entry uniqueness. By
default, DTML Methods send an empty dictionary to the ZCacheable_set
function. Page Templates, OTOH, always send a dictionary that contains
some problematic entries:
'bound_names': {'traverse_subpath': [],
                'options': {'args': ()},
                'user': Anonymous User},
'here': <Folder instance at 896be80>
The bound_names entry hasn't been a problem yet for me as far as I've
been able to tell... but as RAM Cache Manager will str() every key and
value pair in the keywords dict, there's a possibility the order of
bound_names' keys could change between str()s and accidentally create a
cache miss. That said, I've never seen it happen. What I have seen
happen, though, is that the 'here' value can change if enough memory
activity occurs. I was seeing some confusing activity whereby I'd
cache a large bundle of Page Templates by running wget over a tree;
then I'd run wget over the same tree and the cache would miss for
every single one. At first I thought it was a cleanup timing issue,
but after examining it more carefully I saw that in fact the keys were
changing because the first time through the value of 'here' would be
<Folder instance at 8222222> and the second time it would be
<Folder instance at 8333333>. (Numbers obviously falsified.) This
tends to make RAM Cache Managers less effective than they should be at
caching Page Templates. If there's a way to alter the dictionary Page
Templates send to the caching infrastructure, it's not obvious what it
is. (I haven't looked particularly hard though.) But the lesson here
is clear: grabbing the str() of an object isn't always a cache-friendly
tactic.
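You can see the mechanism outside of Zope easily enough: the default
str()/repr() of an instance embeds its memory address, so a logically
identical object loaded at a new address stringifies differently.

    # Why str() of an object makes a poor cache key: the default repr
    # embeds the memory address.
    class Folder:
        pass

    a = Folder()
    b = Folder()              # stands in for "the same folder, reloaded"
    print(str(a))             # e.g. <__main__.Folder object at 0x7f3a...>
    print(str(b))             # different address, different string
    print(str(a) == str(b))   # False -> guaranteed cache miss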
That said, I've written a cache manager which avoids both of these
predicaments. It's called MemoryCache, and it's on zope.org, but
before you rush off to try it out, be forewarned it requires Python
2.2 and an enhanced version of the OFS/Cache.py module (available in
collector issue 911). I solved the first problem by using message
digests and enforcing a cache size limit, and the second by allowing
for extreme flexibility in determining key uniqueness: cache
controllers can write their own key aggregation function in restricted
Python. My tests indicate it uses far less memory when caching things
that tend to generate redundant data (like tree widgets), and that the
overhead of digesting everything was basically small enough to be lost
in the statistical noise of my benchmarks. It does, however, use an
LRU replacement policy rather than LFU like RAM Cache Manager, so it
may not be a suitable replacement for a RAM Cache Manager in all
situations.
--
Jamie Heilman http://audible.transient.net/~jamie/
"You came all this way, without saying squat, and now you're trying
to tell me a '56 Chevy can beat a '47 Buick in a dead quarter mile?
I liked you better when you weren't saying squat kid." -Buddy