On Fri, 1 Mar 2002, Chris Withers wrote:
kosh@aesaeion.com wrote:
solution and so far I like it a lot more than ZPT. ZPT seems to have the idea that there should be a one-document-to-one-page mapping, which I don't like.
Where on earth did you get that idea?
Can you call ZPT documents from other ZPT documents, and so on, without having them pull in headers, footers, etc. at each step? Also, how do you deal with user agent detection, where a page changes depending on the requesting agent? That is, one URL maps to object x, and object x does not always look the same.
I prefer having lots of little objects around that get assembled together to build a page. Especially when working with smarter objects.
I think you may have missed METAL, or at least not understood it...
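For reference, a minimal METAL sketch (the template, macro, and slot names here are illustrative, not anyone's actual code): the shared layout is defined once as a macro, and each page fills only its own slot, so headers and footers never have to be pulled in at each step:

```html
<!-- main_template: the shared layout, defined once -->
<html metal:define-macro="main">
  <body>
    <div>common header goes here</div>
    <div metal:define-slot="body">default body</div>
    <div>common footer goes here</div>
  </body>
</html>

<!-- any page template: reuse the layout, fill only the slot -->
<html metal:use-macro="container/main_template/macros/main">
  <body>
    <div metal:fill-slot="body">page-specific content</div>
  </body>
</html>
```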
Overall, DTML is a good deal easier to use with this kind of model.
Not true, by a good deal.
<p tal:replace="/some/other/component">Bit bit goes here</p>
...seems a lot nicer (and more predictable) to me than:
<p><dtml-var "PARENTS[-1].some.other.component(_.None,_)"></p>
Well, I don't use PARENTS in DTML, since I think that belongs in the Python code. I also don't use the _.None,_ since I have never needed it. And I don't put the tags around it, so the DTML calls I do are just includes.
I doubt there is a simpler way to do <dtml-var header><dtml-var someobject><dtml-var footer>, with all of those being Python products, Python Scripts, etc. that know how to behave when called, using a little Python magic.
Two points:
1.
<tal:x tal:replace="here/header" /><tal:x tal:replace="here/someobject" /><tal:x tal:replace="here/footer" />
...if you really must. Although the concept of half your layout being in a header and the other half in a footer is broken by design.
Actually, I don't think it is broken, since I don't see them as the same objects. What you define as the layout of a page may not be the same for all user agents, or the same at all levels of a site. I like having these nice black boxes that can be called without having to worry about them explicitly, since I find that makes things easier to debug. If everything that needs a header calls the header, then it only needs to be fixed in one location. Overall, I don't think of a page as a single document; I think of it as a collection of objects that each does a job, and only that job. DTML looks to me to fit that concept better than ZPT does.
The correct way to do this is:
<html metal:use-macro="/some_template/macros/main"> <body metal:fill-slot="body" tal:content="here/someobject"> </body> </html>
However, now you have defined it as HTML, which is not always the case. Overall, that looks more complex to me than just calling the objects and allowing them to behave as needed.
...which is much more explicit, graceful and nice.
2.
"A little bit of Python magic" is what causes you to get bitten in the ass 6 months down the line, when you come to make a minor tweak to the code and break everything. Explicit is good!
Actually, we have been doing it for a lot more than 6 months now without any problems. Introspection and reflection are not really that magical, but they do allow an object to react based on its surroundings.
Overall, ZPT seems designed for HTML people who used WYSIWYG editors to write pages.
So EMACS is a WYSIWYG editor now? Glad to hear it ;-)
Overall, I don't do that, and neither does anyone I work with. Pages written that way are just not as clean as what you can get from someone who knows the spec and how to use it to best advantage.
Actually, I use Dreamweaver to rapid-prototype pages (and it produces pretty damn good HTML) and then do minor tweaks by hand when I'm adding in the TAL. I really like the fact that I can look at the source of the template in a browser and see what it's going to look like, without actually needing to feed loads of dynamic content into it.
How do you select which user agent you will be editing the source for? I have data objects that change based on user agent, and it seems that, with that approach, if you tried to view the unrendered version of a page, you would have to state which user agent you wanted to view it as. I prefer to be able to go to the object and use its interfaces for dealing with the various user agents, and to trust that the other objects will do their job without needing to be checked on.

Overall, if you adhere strictly to the DOM and follow the spec to the letter, that is not a problem. XHTML 1.0 Strict is really not hard, and if you use it properly and know how the browsers work, you can speed up page drawing and make development easier. Overall, I am not happy with any generated code from any application I have seen so far. It is often embarrassingly far from the spec, and most of it abuses tables, which slows down page rendering.
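The per-user-agent dispatch kosh describes can be sketched in plain Python (an illustrative sketch only, not his actual code; the function name and the agent table are made up): one object behind one URL, with the rendering chosen by inspecting the User-Agent header:

```python
# Illustrative sketch: one object serving one URL, rendered
# differently per requesting user agent. RENDERERS and
# render_for_agent are hypothetical names, not real Zope APIs.

RENDERERS = {
    # text-mode browsers get plain, markup-free output
    "lynx": lambda body: body,
    # everything else gets the normal XHTML wrapper
    "default": lambda body: '<div class="page">%s</div>' % body,
}

def render_for_agent(user_agent, body):
    """Pick a renderer by matching a known key against the User-Agent."""
    ua = user_agent.lower()
    for key, renderer in RENDERERS.items():
        if key != "default" and key in ua:
            return renderer(body)
    return RENDERERS["default"](body)
```

The point being made is that the dispatch lives inside the object, so callers never need to know which variant they will get back.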
Come and have a go if you think you're hard enough ;-)