On Thu, 16 Sep 1999, Damian Morton wrote:
If you could decompose a website into a kind of feedforward dataflow machine and add some intelligent caching, you would have a very efficient dynamic website.
By rendering and caching each document element separately as the source data changes, you get the best of both worlds - dynamic and static. The alternative is to cache each page and annotate it with a timeout, or some kind of trigger to cause a cache invalidation, and in my opinion this could get really messy.

Of course, many DOM structures have no inputs and depend on nothing, i.e. they are static - whole trees of these might be kept in a cached compact state, e.g. using a compression algorithm that decompresses blazingly fast. If you wanted to get really tricky, you could use some kind of hierarchical caching that ages entries over time, e.g. html -> compressed html -> compressed html filed on disk.
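The feedforward idea above can be sketched in a few lines: each element node caches its rendered output, and a change at a source invalidates that node and everything downstream of it, so only the affected fragments get re-rendered. The names here (Node, Source, render, set) are purely illustrative, not from any real framework:

```python
class Node:
    """A document element in the dataflow graph, with a cached render."""

    def __init__(self, render_fn, inputs=()):
        self.render_fn = render_fn   # builds this element's HTML from its inputs
        self.inputs = list(inputs)   # upstream nodes this element depends on
        self.dependents = []         # downstream nodes to invalidate on change
        self.cache = None
        for node in self.inputs:
            node.dependents.append(self)

    def invalidate(self):
        # Propagate invalidation downstream; stop early if already dirty.
        if self.cache is not None:
            self.cache = None
            for d in self.dependents:
                d.invalidate()

    def render(self):
        # Render on demand, reusing the cached output while it is valid.
        if self.cache is None:
            self.cache = self.render_fn([n.render() for n in self.inputs])
        return self.cache


class Source(Node):
    """A leaf holding raw data; changing it dirties all dependents."""

    def __init__(self, value):
        super().__init__(lambda _: value)

    def set(self, value):
        self.render_fn = lambda _: value
        self.cache = value           # mark dirty-able so invalidate propagates
        self.invalidate()
```

So a page built as `Node(lambda parts: f"<h1>{parts[0]}</h1>", [title])` is rendered once, served from cache on every subsequent hit, and re-rendered only after `title.set(...)` - which is the per-element granularity being argued for, rather than per-page timeouts.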
The problem with this is that it only works if the DOM is slowly changing WRT the queries. If a large part of the tree changes more often than it is referenced, you end up either spending a lot of resources rendering things that are never used, or your scheme reverts to the "render on demand" scheme we already have. In that case, the overhead of tracking cache validity may be greater than the savings you get from caching. It all depends on the characteristics of the content.
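A rough back-of-envelope model of that tradeoff (my own simplification, not from the thread): eager re-rendering pays one render per change whether or not anyone looks, while render-on-demand pays at most one render per read and at most one per change, so it is bounded by the smaller of the two rates:

```python
def renders_per_second(change_rate, read_rate):
    """Approximate render work for the two strategies, per second."""
    # Eager: every source change triggers a re-render, used or not.
    eager = change_rate
    # Render on demand: a render happens only when a read finds the
    # cache stale, so it is bounded by both the read and change rates.
    lazy = min(change_rate, read_rate)
    return eager, lazy

# Hot, slowly-changing page: both strategies do about the same work.
renders_per_second(change_rate=1, read_rate=100)   # -> (1, 1)

# Fast-changing, rarely-read page: eager wastes ~99 renders a second.
renders_per_second(change_rate=100, read_rate=1)   # -> (100, 1)
```

Which is just the point above in numbers: the eager scheme only wins when reads dominate changes, and whether that holds depends entirely on the content.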