[Zope-dev] what you see and what dont is all dom

Damian Morton morton@dennisinter.com
Thu, 16 Sep 1999 22:22:27 -0400


From: Martijn Faassen <faassen@vet.uu.nl>
> Damian Morton wrote:
> 
> [snip]
> 
> > In fact, a website as a whole could be viewed as a DOM structure, and
> > there are many advantages to this. For example, if we assumed that our
> > program's elements were functional DOM consumers and producers, you could
> > eliminate much of the dynamic nature of a website. In my experience, there
> > is a whole lot of stuff about a website that only changes slowly - it
> > doesn't need to be re-created for every page view, only when the source
> > data changes. If you could decompose a website into a kind of feed-forward
> > dataflow machine and add some intelligent caching, you would have a very
> > efficient dynamic website.
> 
> I'm not sure I really get this. I know caching would be useful, but I don't
> see how the DOM nature of a website helps here.

The way I see it, if you view a website and its backing database as a DOM structure (properly indexed), and further allow for the computational part of the website to be nodes that consume and produce DOM trees, then you end up with a computational model called a dataflow machine. Now, a dataflow machine is a very efficient computational structure. It only computes what it needs to compute and no more.

In my scheme, computational nodes could exist in many forms, from XSL declarative nodes, through traditionally coded nodes, to nodes which are SQL queries. Each node would know which nodes depend on its output. If any of a computational node's inputs change, it re-computes its output and feeds that output to the downstream nodes. Think of a fast, in-memory make that operates on DOM structures rather than files.
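
To make that concrete, here is a minimal Python sketch of the kind of node graph I have in mind. The node names and the string-producing compute functions are only illustrative - real nodes would consume and produce DOM trees, and the source nodes would be things like the ZODB or an SQL query:

# A minimal sketch of the dataflow idea, in plain Python.
class Node:
    def __init__(self, name, compute, inputs=()):
        self.name = name
        self.compute = compute        # function(*input_values) -> output value
        self.inputs = list(inputs)    # upstream nodes this node depends on
        self.dependents = []          # downstream nodes that depend on us
        self.cached = None
        self.dirty = True
        for upstream in self.inputs:
            upstream.dependents.append(self)

    def invalidate(self):
        """Mark this node and everything downstream as needing recomputation."""
        if not self.dirty:
            self.dirty = True
            for node in self.dependents:
                node.invalidate()

    def value(self):
        """Recompute only if an input changed; otherwise serve the cache."""
        if self.dirty:
            args = [node.value() for node in self.inputs]
            self.cached = self.compute(*args)
            self.dirty = False
        return self.cached

# Source node: changes only when a new item is submitted.
news = Node("news", lambda: "<ul><li>item 1</li></ul>")
# Rendering node: re-renders only when its input (news) changes.
page = Node("page", lambda body: "<html><body>%s</body></html>" % body, [news])

print(page.value())   # computes news, then page
print(page.value())   # both served from cache, nothing recomputed
news.invalidate()     # a new item was submitted
print(page.value())   # only nodes downstream of the change are recomputed

The point is that invalidation only walks downstream of whatever changed, so an ordinary page request mostly hits cached output.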

I'm thinking of a Squishdot-style application. In this kind of application, the page elements change at different rates: the news items only change when a new item is submitted, the ads change for every page view, and so on. Each rendered HTML part only changes when its source data changes. In the Squishdot application, the only part that needs constant re-rendering would be the ads; everything else changes at a much slower rate than the pages are requested.

By rendering and caching each document element separately as the source data changes, you get the best of both worlds - dynamic and static. The alternative is to cache each page and annotate it with a timeout, or some kind of trigger to cause a cache invalidation, and in my opinion this could get really messy. Of course, many DOM structures have no inputs and depend on nothing, i.e. they are static - whole trees of these might be kept in a cached, compact state, e.g. using a compression algorithm that decompresses blazingly fast. If you wanted to get really tricky, you could use some kind of hierarchical caching over time, e.g. HTML -> compressed HTML -> compressed HTML filed on disk.
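
As a rough sketch of what the hierarchical part could look like (the idle threshold, zlib as the compressor and the fragment itself are all just placeholders for illustration):

import time
import zlib

class TieredEntry:
    """A cached fragment that starts as plain HTML and is demoted to a
    compressed form when it has not been requested for a while."""
    def __init__(self, html):
        self.data = html.encode("utf-8")
        self.compressed = False
        self.last_access = time.time()

    def get(self):
        self.last_access = time.time()
        if self.compressed:
            # Decompression is cheap compared with re-rendering the fragment.
            self.data = zlib.decompress(self.data)
            self.compressed = False
        return self.data.decode("utf-8")

    def demote_if_idle(self, idle_seconds=300):
        """Compress fragments that have not been requested recently."""
        if not self.compressed and time.time() - self.last_access >= idle_seconds:
            self.data = zlib.compress(self.data)
            self.compressed = True

entry = TieredEntry("<div>static sidebar that almost never changes</div>")
print(entry.get())          # served as plain HTML
entry.demote_if_idle(0)     # force demotion just for the example
print(entry.get())          # transparently decompressed on the next request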

The other advantage of viewing a website as a DOM structure is that the programming is implicitly compartmentalised into structures that can be composed by object-oriented visual design tools such as Dreamweaver. For example, a Squishdot news item, a table of objects which are Squishdot news items, and so on, down to whatever level of granularity you care to work with.
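
For example, with Python's standard xml.dom.minidom module you can treat a news item as a reusable DOM fragment and build a listing by composing item fragments. The element names here are invented for illustration, not a real Squishdot schema:

from xml.dom.minidom import Document

def news_item(doc, title, body):
    """Build a news-item fragment as a DOM subtree."""
    item = doc.createElement("news-item")
    for tag, text in (("title", title), ("body", body)):
        child = doc.createElement(tag)
        child.appendChild(doc.createTextNode(text))
        item.appendChild(child)
    return item

def news_listing(doc, items):
    """Compose individual item fragments into a listing fragment."""
    listing = doc.createElement("news-listing")
    for item in items:
        listing.appendChild(item)
    return listing

doc = Document()
items = [news_item(doc, "First post", "Hello"),
         news_item(doc, "Second post", "World")]
doc.appendChild(news_listing(doc, items))
print(doc.toprettyxml(indent="  "))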

> > What would be needed for that would be for the
> > programmed part of the website to conform to functional or dataflow
> > programming - each module or function would take objects as input and would
> > create or modify existing objects. Might mesh well with the ZODB mechanism.
> 
> Actually, the Zope database is already exposed as an (XML) DOM tree in Zope
> 2. Look at ZDOM.py in lib/python/OFS. That's also how Zope 2 does XML export.
> 
> I haven't seen much other leveraging of the DOM-ness of the Zope database
> yet, though. Any ideas?
> 
> Regards,
> 
> Martijn