On Tue, 2003-01-21 at 18:03, Tom Emerson wrote:
> OK, that is what I feared. I take it the magic insertion of a LIMIT on the SQL query happens automagically? Or does the whole thing get returned and bits dropped?
Yes, the row limit happens automatically, but it can be adjusted. Only as many rows as necessary (up to the limit) are conjured up from the underlying database; no bits are dropped. See http://www.zope.org/Documentation/Books/ZopeBook/2_6Edition/RelationalDataba... for more information.
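To make the "no bits are dropped" point concrete, here is a small stand-alone sketch using sqlite3 in place of whatever RDBMS sits behind the Z SQL Method (the table and data are made up for illustration). The LIMIT clause plays the role of the limit Zope injects: the database stops producing rows once the limit is hit, so the excess rows never cross the wire in the first place.

```python
import sqlite3

# Hypothetical table and data; sqlite3 stands in for the real backend.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE employees (id INTEGER, name TEXT)")
conn.executemany(
    "INSERT INTO employees VALUES (?, ?)",
    [(i, "user%d" % i) for i in range(1000)],
)

# Rough equivalent of a Z SQL Method with a row limit of 10: the
# database produces only the rows asked for, rather than all 1000.
rows = conn.execute(
    "SELECT id, name FROM employees ORDER BY id LIMIT 10"
).fetchall()
print(len(rows))  # 10
```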
> Within ZPT it looks like it would be possible to subclass ZTUtils.Batch to be more intelligent about fetching information...
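The real ZTUtils.Batch isn't importable outside Zope, but the laziness being discussed can be sketched in plain Python. The LazyBatch class below is a minimal stand-in (not Zope's actual class): constructing it costs nothing, and the underlying sequence is only consulted when an item inside the batch's window is actually read. The CountingSequence helper is invented here just to make the fetch count visible.

```python
class LazyBatch:
    """Minimal stand-in for ZTUtils.Batch: records the window it
    represents and only touches the underlying sequence when an
    item is actually asked for."""

    def __init__(self, sequence, size, start=0):
        self.sequence = sequence
        self.size = size
        self.start = start
        self.end = min(start + size, len(sequence))

    def __len__(self):
        return self.end - self.start

    def __getitem__(self, index):
        if index < 0 or index >= len(self):
            raise IndexError(index)
        # The fetch happens here, one item at a time.
        return self.sequence[self.start + index]


class CountingSequence:
    """Counts how often it is really consulted, to show that building
    a batch is free until items are read."""

    def __init__(self, data):
        self.data = data
        self.fetches = 0

    def __len__(self):
        return len(self.data)

    def __getitem__(self, i):
        self.fetches += 1
        return self.data[i]


seq = CountingSequence(list(range(100)))
batch = LazyBatch(seq, size=10, start=20)
print(seq.fetches)   # 0 -- constructing the batch fetched nothing
print(list(batch))   # items 20..29
print(seq.fetches)   # 10 -- only the window was touched
```

A "smarter" subclass along the lines Tom suggests would override the item access to issue a fresh LIMIT/OFFSET query per window instead of holding the whole result set.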
ZTUtils.Batch looks like it's as lazy as possible, but when you construct a new one on every request, the old one's state obviously goes away and you incur the expense all over again. If you aren't worried too much about dataset invalidation (i.e. your data is pretty static) or about getting potentially different results between threads, you could probably stick a Batch object on a convenient persistent object as an "_v_batch" attribute. Objects stored as "_v_" attributes of a persistent object are implicitly tied to a thread, and though their lifetime is unpredictable and not guaranteed, they typically last longer than a single request (sometimes much longer), so they are perfect for caching. See the Zope Developer's Guide for more info.
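The caching pattern described above can be sketched in plain Python. In the real thing the class would subclass Persistent from the ZODB, and the "_v_" prefix is what marks the attribute as volatile (never written to the database, thread-private, liable to vanish at any time); the class and method names below are hypothetical. The essential shape is: always be prepared for the cached attribute to be missing, and rebuild it when it is.

```python
class ReportFolder:
    """Plain-Python sketch of the _v_ caching pattern; a real Zope
    version would subclass Persistent.  Names here are made up."""

    def _build_batch(self):
        # Stand-in for the expensive query / Batch construction.
        return list(range(1000))

    def get_batch(self):
        # The volatile attribute may have vanished since last time,
        # so fall back to rebuilding rather than assuming it exists.
        batch = getattr(self, "_v_batch", None)
        if batch is None:
            batch = self._build_batch()
            self._v_batch = batch  # cached, but not guaranteed to stay
        return batch


folder = ReportFolder()
a = folder.get_batch()
b = folder.get_batch()
print(a is b)  # True: the second call reused the cached object
```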
> Anyway, thanks Chris for your help.
Sure!
-- Chris McDonough <chrism@zope.com> Zope Corporation