----- Original Message -----
From: damien morton <morton@dennisinter.com>
To: Jeffrey Shell <Jeffrey@digicool.com>
Sent: Thursday, September 16, 1999 6:55 PM
Subject: Re: [Zope-dev] wygiwys

----- Original Message -----
From: Jeffrey Shell <Jeffrey@digicool.com>
If the community were to come up with something REALLY good, we'd be interested. But there's a lot of difficulty matching the advanced philosophies of Zope (Acquisition, Security, and now ZClass-based objects) and DTML (namespaces, syntax, general concepts) with just about any visual editor out there. A _LOT_ of difficulty. It would be a project of some serious effort, and we haven't seen anybody with enough initiative to come up with a plan for tackling this.
On the other hand, the new syntax is easier to work into smart (non-visual) editors like Alpha (an Emacs-ish editor on the Mac), with syntax completion and all that. Some tags, like the contributed dtml-let tag, may cause difficulties in environments that prefer to keep code neat by quoting all attributes (a good thing, really), since dtml-let treats quoted and unquoted attribute values differently, e.g.:

<dtml-let bob=uncle lu="aunt()" billy="'cousin'">
The basic problem is that most good HTML designers (as opposed to HTML hackers) work with object-oriented design tools. They whack an object there, set its properties, attach some behaviours, and away they go. They are working at the DOM level. Many of the objections to integrating object-oriented HTML design tools with programmer-oriented tools such as Zope arise from the fact that programmers like to work with the raw text of the HTML. But the raw text is just the transport layer. What is really being handed around in an HTTP transaction is DOM trees that happen to be represented as HTML. If our work as programmers focused less on whacking out HTML text and instead focused on producing and consuming DOM nodes, the integration between object-oriented HTML design tools and our programming tools would become a lot easier.

In fact, a website as a whole could be viewed as a DOM structure, and there are many advantages to this. For example, if we assumed that our program elements were functional DOM consumers and producers, you could eliminate much of the dynamic nature of a website. In my experience, there is a whole lot of stuff about a website that only changes slowly - it doesn't need to be re-created for every page view, only when the source data changes. If you could decompose a website into a kind of feedforward dataflow machine and add some intelligent caching, you would have a very efficient dynamic website. What would be needed for that would be for the programmed part of the website to conform to functional or dataflow programming - each module or function would take objects as input and would create or modify existing objects. This might mesh well with the ZODB mechanism.
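The producer/consumer idea above can be sketched in Python with the standard xml.dom.minidom module - page fragments built as DOM nodes, with serialization to text left as a final transport step. The helper names are hypothetical, not any Zope API:

```python
# Sketch: page fragments as functional DOM producers/consumers,
# rather than raw HTML text. Names here are illustrative only.
from xml.dom.minidom import Document

def news_item(doc, title, body):
    """Produce a DOM fragment for one news item."""
    div = doc.createElement("div")
    h = doc.createElement("h3")
    h.appendChild(doc.createTextNode(title))
    p = doc.createElement("p")
    p.appendChild(doc.createTextNode(body))
    div.appendChild(h)
    div.appendChild(p)
    return div

def news_page(items):
    """Consume item data, produce a whole page as a DOM tree."""
    doc = Document()
    body = doc.createElement("body")
    doc.appendChild(body)
    for title, text in items:
        body.appendChild(news_item(doc, title, text))
    return doc

page = news_page([("Zope 2 released", "Now with Acquisition.")])
html = page.toxml()  # serializing to text is just the transport layer
```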
And there are always problems with this perennial favorite (DTML tags inside of HTML tags) --
<tr <dtml-if sequence-index-even>bgcolor="#c0c0c0"</dtml-if>>
And, well, a trying-to-be-SGML-compliant editor that is doing all these wonderful things to help you (syntax highlighting, completion, dialog boxes with tag attributes) panics and jumps over these.
But situations like that aside, the new syntax is significantly easier to work with. And with FTP and WebDAV integration, authoring outside of the management interface is made a little bit easier (but work still needs to be done in this area to help it along).
So, the dream lives on... :)
_______________________________________________
Zope-Dev maillist - Zope-Dev@zope.org
http://www.zope.org/mailman/listinfo/zope-dev
(To receive general Zope announcements, see: http://www.zope.org/mailman/listinfo/zope-announce )
For non-developer, user-level issues: zope@zope.org, http://www.zope.org/mailman/listinfo/zope
Damian Morton wrote: [snip]
In fact, a website as a whole could be viewed as a DOM structure, and there are many advantages to this. For example, if we assumed that our program elements were functional DOM consumers and producers, you could eliminate much of the dynamic nature of a website. In my experience, there is a whole lot of stuff about a website that only changes slowly - it doesn't need to be re-created for every page view, only when the source data changes. If you could decompose a website into a kind of feedforward dataflow machine and add some intelligent caching, you would have a very efficient dynamic website.
I'm not sure I really get this. I know caching would be useful, but I don't see how the DOM nature of a website helps here.
What would be needed for that would be for the programmed part of the website to conform to functional or dataflow programming - each module or function would take objects as input and would create or modify existing objects. This might mesh well with the ZODB mechanism.
Actually, the Zope database is already exposed as an (XML) DOM tree in Zope 2. Look at ZDOM.py in lib/python/OFS. That's also how Zope 2 does XML export. I haven't seen much other leveraging of the DOM-ness of the Zope database yet, though. Any ideas?

Regards,

Martijn
From: Martijn Faassen <faassen@vet.uu.nl>
Damian Morton wrote:
[snip]
In fact, a website as a whole could be viewed as a DOM structure, and there are many advantages to this. For example, if we assumed that our program elements were functional DOM consumers and producers, you could eliminate much of the dynamic nature of a website. In my experience, there is a whole lot of stuff about a website that only changes slowly - it doesn't need to be re-created for every page view, only when the source data changes. If you could decompose a website into a kind of feedforward dataflow machine and add some intelligent caching, you would have a very efficient dynamic website.
I'm not sure I really get this. I know caching would be useful, but I don't see how the DOM nature of a website helps here.
The way I see it, if you view a website and its backing database as a DOM structure (properly indexed), and further allow the computational part of the website to be nodes that consume and produce DOM trees, then you end up with a computational model called a dataflow machine. A dataflow machine is a very efficient computational structure: it only computes what it needs to compute and no more. In my scheme, computational nodes could exist in many forms, from XSL declarative nodes, through traditionally coded nodes, to nodes which are SQL queries. Each node would know which nodes depend on its output. If any of a computational node's inputs change, it re-computes its output and feeds that output to downstream nodes. Think of a fast in-memory make that operates on DOM structures rather than files.

I'm thinking of a squishdot-style application. In this kind of application, the page elements change at different rates. For example, the news items only change when a new news item is submitted, the ads change for every page view, etc. Each rendered HTML part only changes when its source data changes. In the squishdot application, the only part that needs constant re-rendering would be the ads; everything else changes at a much, much slower rate than the pages are requested.

By rendering and caching each document element separately as the source data changes, you get the best of both worlds - dynamic and static. The alternative is to cache each page and annotate it with a timeout, or some kind of trigger to cause a cache invalidation, and in my opinion this could get really messy. Of course, many DOM structures have no inputs and depend on nothing, i.e. they are static - whole trees of these might be kept in a cached compact state, e.g. using a compression algorithm that decompresses blazingly fast. If you wanted to get really tricky, you could use some kind of hierarchical caching, e.g. with time, html -> compressed html -> compressed filed html.

The other advantage of viewing a website as a DOM structure is that the programming is implicitly compartmentalised into structures that can be composed by object-oriented visual design tools such as Dreamweaver. For example, a squishdot news item, a table of objects which are squishdot news items, etc., down to whatever level of granularity you care to work with.
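The "in-memory make over DOM structures" idea above can be sketched as a small dependency graph. This is a simplified lazy variant (invalidation propagates downstream immediately; re-rendering happens on the next demand); all names are hypothetical, not Zope code:

```python
# Sketch of the dataflow/caching scheme: each node caches its rendered
# output and only re-renders when an upstream node has changed.

class Node:
    def __init__(self, render, inputs=()):
        self.render = render          # function: input values -> rendered text
        self.inputs = list(inputs)    # upstream nodes
        self.dependents = []          # downstream nodes to notify on change
        self.cache = None
        for n in self.inputs:
            n.dependents.append(self)

    def value(self):
        # render only if the cached output has been invalidated
        if self.cache is None:
            self.cache = self.render(*[n.value() for n in self.inputs])
        return self.cache

    def invalidate(self):
        # source data changed: drop the cache and propagate downstream,
        # like make marking targets stale when a prerequisite changes
        self.cache = None
        for d in self.dependents:
            d.invalidate()

# News items change rarely; the page depends on them.
news = Node(lambda: "<ul><li>item 1</li></ul>")
page = Node(lambda items: "<body>%s</body>" % items, inputs=[news])

first = page.value()   # rendered once
again = page.value()   # served from cache, nothing recomputed
news.invalidate()      # a new item arrives; only dependent nodes go stale
```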
What would be needed for that would be for the programmed part of the website to conform to functional or dataflow programming - each module or function would take objects as input and would create or modify existing objects. This might mesh well with the ZODB mechanism.
Actually, the Zope database is already exposed as an (XML) DOM tree in Zope 2. Look at ZDOM.py in lib/python/OFS. That's also how Zope 2 does XML export.
I haven't seen much other leveraging of the DOM-ness of the Zope database yet, though. Any ideas?
Regards,
Martijn
On Thu, 16 Sep 1999, Damian Morton wrote:
If you could decompose a website into a kind of feedforward dataflow machine and add some intelligent caching, you would have a very efficient dynamic website.
By rendering and caching each document element separately as the source data changes, you get the best of both worlds - dynamic and static. The alternative is to cache each page and annotate it with a timeout, or some kind of trigger to cause a cache invalidation, and in my opinion this could get really messy. Of course, many DOM structures have no inputs and depend on nothing, i.e. they are static - whole trees of these might be kept in a cached compact state, e.g. using a compression algorithm that decompresses blazingly fast. If you wanted to get really tricky, you could use some kind of hierarchical caching, e.g. with time, html -> compressed html -> compressed filed html
The problem with this is that it only works if the DOM is slowly changing with respect to the queries. If a lot of the tree is changed more often than it is referenced, you end up either spending a lot of resources rendering things that are never used, or your scheme reverts to the "render on demand" scheme that we already have. In that case, the overhead of sorting out cache validity may be greater than the savings you get by caching. It all depends on the characteristics of the content.
----- Original Message -----
From: Richard Wackerbarth <rkw@dataplex.net>
To: <morton@dennisinter.com>
Cc: <zope-dev@zope.org>
Sent: Friday, September 17, 1999 6:10 AM
Subject: Re: [Zope-dev] what you see and what dont is all dom
On Thu, 16 Sep 1999, Damian Morton wrote:
If you could decompose a website into a kind of feedforward dataflow machine and add some intelligent caching, you would have a very efficient dynamic website.
By rendering and caching each document element separately as the source data changes, you get the best of both worlds - dynamic and static. The alternative is to cache each page and annotate it with a timeout, or some kind of trigger to cause a cache invalidation, and in my opinion this could get really messy. Of course, many DOM structures have no inputs and depend on nothing, i.e. they are static - whole trees of these might be kept in a cached compact state, e.g. using a compression algorithm that decompresses blazingly fast. If you wanted to get really tricky, you could use some kind of hierarchical caching, e.g. with time, html -> compressed html -> compressed filed html
The problem with this is that it only works if the DOM is slowly changing with respect to the queries. If a lot of the tree is changed more often than it is referenced, you end up either spending a lot of resources rendering things that are never used, or your scheme reverts to the "render on demand" scheme that we already have. In that case, the overhead of sorting out cache validity may be greater than the savings you get by caching. It all depends on the characteristics of the content.
I guess my experience is with sites that tend to get up to a million hits a day. In this case, the DOM is slowly changing wrt the queries. However, even in a lower-demand situation - one in which the changes outstrip the demand - the dataflow scheme still has advantages. You are still only rendering that which needs to be rendered, which is generally a good thing.

You can also get more sophisticated about your dataflow, mixing a feedforward scheme with a demand-driven scheme based on usage statistics. If a given tree is demanded more frequently than it changes, then it is rendered as its inputs change. If the tree is changed more frequently than it is demanded, then it should be rendered on demand. In this way the scheme is adaptive with respect to both demand and design. I do think, however, that in most places, in most websites, the parts that change as frequently as (or more frequently than) they are demanded will be small in number and scope. About the only things I can think of that might fall into this category are time-based elements and non-deterministic elements.
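The adaptive policy described above - render eagerly when demand outstrips change, lazily otherwise - can be sketched with simple counters. A hypothetical illustration, not Zope code:

```python
# Sketch: track how often a fragment is demanded vs. changed, and render
# eagerly on change only when it is read at least as often as it changes.

class AdaptiveFragment:
    def __init__(self, render):
        self.render = render
        self.demands = 0
        self.changes = 0
        self.cache = None

    def eager(self):
        # feedforward path pays off only if reads keep up with writes
        return self.demands >= self.changes

    def changed(self):
        self.changes += 1
        # eager: re-render immediately; lazy: just drop the stale cache
        self.cache = self.render() if self.eager() else None

    def get(self):
        self.demands += 1
        if self.cache is None:        # lazy path: render on demand
            self.cache = self.render()
        return self.cache

counter = {"n": 0}
def render():
    counter["n"] += 1
    return "<p>render %d</p>" % counter["n"]

frag = AdaptiveFragment(render)
frag.get(); frag.get()   # demanded twice -> eager from now on
frag.changed()           # re-rendered immediately, ready for the next hit
```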
At 16:54 17/09/99 , Damian Morton wrote:
I guess my experience is with sites that tend to get up to a million hits a day. In this case, the DOM is slowly changing wrt the queries. However, even in a lower-demand situation - one in which the changes outstrip the demand - the dataflow scheme still has advantages. You are still only rendering that which needs to be rendered, which is generally a good thing. You can also get more sophisticated about your dataflow, mixing a feedforward scheme with a demand-driven scheme based on usage statistics. If a given tree is demanded more frequently than it changes, then it is rendered as its inputs change. If the tree is changed more frequently than it is demanded, then it should be rendered on demand. In this way the scheme is adaptive with respect to both demand and design. I do think, however, that in most places, in most websites, the parts that change as frequently as (or more frequently than) they are demanded will be small in number and scope. About the only things I can think of that might fall into this category are time-based elements and non-deterministic elements.
In Zope, the inputs to a rendered document depend on the acquisition path taken, and there is an unlimited number of combinations. I don't think that a dataflow scheme is therefore applicable to Zope.

--
Martijn Pieters, Web Developer | Antraciet
http://www.antraciet.nl | Tel: +31-35-7502100 Fax: +31-35-7502111
mailto:mj@antraciet.nl | http://www.antraciet.nl/~mj
PGP: http://wwwkeys.nl.pgp.net:11371/pks/lookup?op=get&search=0xA8A32149
On Mon, 20 Sep 1999, Martijn Pieters wrote:
In Zope, the inputs to a rendered document depend on the acquisition path taken, and there is an unlimited number of combinations. I don't think that a dataflow scheme is therefore applicable to Zope.
I tend to agree. Caching of targets, which is implicit in dataflow, only works when there is a limited number of targets and those targets are likely to be used in the future.
participants (4)
- Martijn Faassen
- Martijn Pieters
- morton@dennisinter.com
- Richard Wackerbarth