[Zope-dev] how bad are per-request-write-transactions
Shane Hathaway
shane@zope.com
Fri, 19 Apr 2002 16:01:53 -0400
Paul Everitt wrote:
> Let's say we had a queue in Zope. We could asynchronously send changes
> into the queue. Later, based on some policy (e.g. idle time, clock
> ticks, etc.), those changes would be enacted/committed.
>
> Imagine the queue itself is in a different storage, likely
> non-versioned. Imagine that the queue is processed every N seconds. It
> takes all the pending work and performs it, but in a subtransaction.
>
> Thus you might send the queue ten increments to a counter, but only one
> will be committed to the main storage.
>
> To let programmers think less about the queue (you send in the
> object reference, the method to call, and the parameters), you could
> make it look like a special form of subtransaction. That is, you say:
>
> tm.beginQueueingTransactions()
> self.incrementCounter()
> self.title = 'Simple change'
> self.body = upload_file
> tm.endQueueingTransactions()
>
> At the transaction level, all enclosed changes are queued for later
> commit. You don't have to think any differently than you do for
> regular object state management.
Wow, on the surface, that would be very easy to do.
Transaction.register() might dump to a long-lived queue instead of the
single-transaction queue.
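A minimal sketch of that idea, with entirely hypothetical names (this is not the real Zope transaction API): while queueing mode is on, registered changes go to a long-lived queue instead of the current transaction, and a separate processor drains the queue later in one batch.

```python
from collections import deque


class QueueingTransactionManager:
    """Hypothetical sketch: changes registered while queueing mode is
    on are parked in a long-lived queue for a later batch commit."""

    def __init__(self):
        self.long_lived_queue = deque()
        self._queueing = False

    def beginQueueingTransactions(self):
        self._queueing = True

    def endQueueingTransactions(self):
        self._queueing = False

    def register(self, obj, method_name, *args):
        # In queueing mode, record the work (object, method, parameters)
        # for later; otherwise perform it as part of the current
        # transaction, as usual.
        if self._queueing:
            self.long_lived_queue.append((obj, method_name, args))
        else:
            getattr(obj, method_name)(*args)

    def flush(self):
        # A cron-style processor would call this every N seconds,
        # applying all queued work inside one (sub)transaction.
        while self.long_lived_queue:
            obj, method_name, args = self.long_lived_queue.popleft()
            getattr(obj, method_name)(*args)
```

Ten queued counter increments would then land in a single flush, which is one commit of the counter object to the main storage rather than ten.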
> This pattern applies better when you have a lot of document cataloging
> to be done. A separate process can wake up, make a ZEO connection, and
> process the queue. I don't think that indexing documents *has* to be a
> transactional part of every document save.
Right. Here's another way to think about it: we could use a catalog
lookalike which, instead of updating indexes directly, asks a special
ZEO client to perform the reindexing. The special client might decide
to batch updates.
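A toy sketch of that catalog lookalike, with invented names standing in for the real Catalog API and the special ZEO client: the lookalike forwards reindex requests to a processor, which batches and deduplicates them instead of touching the indexes on every save.

```python
class BatchingProcessor:
    """Stands in for the special ZEO client: collects uids and
    reindexes them in one batch, deduplicating repeated requests."""

    def __init__(self):
        self.pending = []
        self.reindexed = []

    def enqueue_reindex(self, uid):
        self.pending.append(uid)

    def process_batch(self):
        # Deduplicate while preserving order: ten saves of the same
        # document cost one reindex.
        for uid in dict.fromkeys(self.pending):
            self.reindexed.append(uid)
        self.pending = []


class DeferredCatalog:
    """Hypothetical catalog lookalike: instead of updating indexes
    directly, it hands the request to the batching processor."""

    def __init__(self, processor):
        self.processor = processor

    def catalog_object(self, obj, uid):
        self.processor.enqueue_reindex(uid)
```

The document save then commits without paying for index updates; the processor wakes up later, makes its own ZEO connection, and does the indexing work.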
> Under this cron-style approach, you also pay less of a conflict-error
> penalty, as you can increase the backoff period. There's no web browser
> on the other end, impatiently waiting for their flaming logo. :^)
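The backoff point can be sketched as a retry loop. This is an illustration, not ZODB's actual retry machinery (the stub `ConflictError` stands in for `ZODB.POSException.ConflictError`): a background queue processor can afford much longer delays between conflict retries than a web request could.

```python
import random
import time


class ConflictError(Exception):
    """Stub standing in for ZODB.POSException.ConflictError."""


def run_with_backoff(work, attempts=5, base=0.01):
    # Retry the unit of work on conflicts, sleeping with exponential
    # backoff plus jitter between attempts. With no browser waiting,
    # `base` can be made much larger than a web request would tolerate.
    for attempt in range(attempts):
        try:
            return work()
        except ConflictError:
            if attempt == attempts - 1:
                raise
            time.sleep(base * (2 ** attempt) * random.random())
```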
A variant on your idea is that when the transaction is finishing, if
there are any regular objects to commit, the long-lived queue gets
committed too. That would be beneficial for counters, logs, and objects
like Python Scripts which have to cache the compiled code in ZODB, but
not as beneficial for catalogs.
Ok, thinking further... how about a Zope object called a "peer delegate"
which can act like other Zope objects, but which actually calls out to
another ZEO client to do the work? It could be very interesting... it
might use some standard RPC or RMI mechanism. We would want to be
careful to make it simple.
> Ahh well, fun to talk about. Maybe this time next year we can repeat
> the conversation. :^)
I hope we'll be talking about what we did instead of what we'll do. :-)
The change to transactions seems simple. Another thought: the
long-lived queue might be committed only when there are regular objects
to commit *and* a certain amount of time has passed since the last
commit of the long-lived queue. That might work well for catalogs. Cool!
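That combined policy is small enough to sketch directly (names hypothetical; the injectable clock is just for illustration): flush the long-lived queue only when a regular commit is happening *and* at least `interval` seconds have passed since the last flush.

```python
import time


class FlushPolicy:
    """Hypothetical sketch: the long-lived queue is committed only
    alongside a regular commit, and only after `interval` seconds
    have passed since its previous commit."""

    def __init__(self, interval, clock=time.monotonic):
        self.interval = interval
        self.clock = clock
        self.last_flush = clock()

    def should_flush(self, has_regular_objects):
        # Piggy-back on a regular commit...
        if not has_regular_objects:
            return False
        # ...but no more often than once per interval.
        now = self.clock()
        if now - self.last_flush >= self.interval:
            self.last_flush = now
            return True
        return False
```

For counters and logs the interval could be short; for catalogs it could be minutes, so many document saves share one round of index updates.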
Shane