On Thu, Apr 13, 2006 at 09:04:38AM -0500, Floyd May wrote:
> Paul Winkler wrote:
> > On Wed, Apr 12, 2006 at 01:56:58PM -0500, Floyd May wrote:
> > > One solution I've found is to buffer the writes to REQUEST.RESPONSE by using a python script which then calls granular page templates rather than a single monolithic template, outputting the results 25k or so at a time; it gives the rest of the server some time to catch up.
> >
> > Note that this doesn't buy you any improved responsiveness if you're running behind e.g. apache, because apache has to read the entire response from Zope before it starts sending it back to the client.
>
> Wasn't aware of that, but I've tested it from behind Squid, and it works like a charm.
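For concreteness, the buffered-write approach described above might be sketched like this. Everything here is an invented stand-in so the sketch runs outside Zope: `FakeResponse` mimics REQUEST.RESPONSE, and the "granular templates" are plain callables; in a real Script (Python) you'd write straight to context.REQUEST.RESPONSE and call actual page templates.

```python
BUFFER_SIZE = 25 * 1024  # flush roughly 25k at a time

class FakeResponse:
    """Minimal stand-in for Zope's REQUEST.RESPONSE; collects writes."""
    def __init__(self):
        self.chunks = []
    def write(self, data):
        self.chunks.append(data)

def stream_parts(response, parts):
    """Render each granular part in turn, flushing ~25k at a time."""
    buf, size = [], 0
    for render in parts:
        html = render()
        buf.append(html)
        size += len(html)
        if size >= BUFFER_SIZE:
            response.write(''.join(buf))
            buf, size = [], 0
    if buf:  # flush whatever is left over
        response.write(''.join(buf))

# Five fake "granular templates", each rendering ~32k of HTML.
parts = [lambda i=i: ('<p>section %d</p>' % i) * 2000 for i in range(5)]
response = FakeResponse()
stream_parts(response, parts)
```

Between flushes the server gets a chance to service other requests, which is the point of the technique.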
Actually I really should qualify that; it depends on what you're trying to do.

The only "problem" I have with streaming behind mod_proxy / mod_rewrite is that it does some buffering, and AFAIK there's no way to turn that off on a per-request basis. Even on a global basis, it looks like ProxyReceiveBufferSize can't be set to less than 512 bytes. (Which would probably make performance suck for everything else anyway.) So if you're trying to do some quick-and-dirty pre-AJAX-style status information, where you're streaming small bits of text to the browser, as I did in the ZSyncer UI, then you're out of luck.

It's trivial to verify this with a particular reverse proxy setup by visiting a script something like:

    # Assuming you've made time importable...
    import time
    response = context.REQUEST.RESPONSE
    msgs = '<br/>hello\n' * 20
    response.setHeader('content-type', 'text/html')
    response.setHeader('content-length', str(len(msgs)))
    for line in msgs.split():
        response.write(line + '\n')
        time.sleep(0.5)

If I view this directly at the Zope server, I see each "hello" appear after a short delay. If I view it via apache with mod_rewrite, I see nothing for 10 seconds, then the whole page at once.

(Note you can simply leave out the content-length header to get response.write() to use HTTP 1.1-style chunking, which is convenient if you can't pre-calculate an accurate size. This has the same buffering issue behind Apache, and additionally requires the client to be using HTTP 1.1.)

OTOH, if you're streaming large blobs in chunks of e.g. 64kb, streaming through apache seems to work just fine. This is probably a more common case.

-- 
Paul Winkler
http://www.slinkp.com
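As an illustration of that last case, a minimal sketch of streaming a large blob in 64kb chunks. `FakeResponse` and `stream_blob` are invented stand-ins so this runs outside Zope; in a real Script (Python), response would be context.REQUEST.RESPONSE and fileobj a real file.

```python
import io

CHUNK = 64 * 1024  # stream in 64kb chunks

class FakeResponse:
    """Stand-in for REQUEST.RESPONSE; records headers and written chunks."""
    def __init__(self):
        self.headers = {}
        self.written = []
    def setHeader(self, name, value):
        self.headers[name] = value
    def write(self, data):
        self.written.append(data)

def stream_blob(response, fileobj, size):
    # With an accurate content-length, response.write() streams as-is;
    # leave the header out and you get HTTP 1.1-style chunking instead.
    response.setHeader('content-type', 'application/octet-stream')
    response.setHeader('content-length', str(size))
    while True:
        data = fileobj.read(CHUNK)
        if not data:
            break
        response.write(data)

blob = b'x' * (200 * 1024)  # 200k of dummy data
response = FakeResponse()
stream_blob(response, io.BytesIO(blob), len(blob))
```

With chunks this size, the proxy's buffering only delays each write slightly rather than holding the whole response, which is why large-blob streaming through apache works fine.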