[Zope-dev] Streaming Response

Shane Hathaway shane@zope.com
Thu, 24 Apr 2003 10:02:30 -0400


Oliver Bleutgen wrote:
> Toby Dickenson wrote:
> 
>>
>> Deactivating *every* block looks like a bad idea. This defeats the 
>> in-memory cache that will be a big win for small files.
>> The memory cache is only a disadvantage if your files are large enough 
>> that they have enough chunks to disrupt the LRU cache policy. How about 
>> comparing the number of chunks to a fraction of the cache target size?
>>
>> (sorry, no patch today. I've a large email backlog to clear)
> 
> 
> Just something which came to my mind when reading about cache policies:
> 
> With the advent of more and more alternative storages, especially adaptable 
> storage and directory storage, maybe the interaction with other caches 
> should be taken into account.
> For instance, does it make sense for Zope to cache a file in memory when 
> it's also cached in the kernel VFS, as it likely would be when some 
> FS-based storage is used? How can the choice of the backend storage 
> affect caching policies inside Zope?
> For the "frontend" caches policies, zope is quite configurable thanks to 
> the various cache managers etc., maybe it's also worth thinking about 
> making such cache policies for the "backend" more configurable.

Here's a better alternative: when you can, avoid caching altogether. 
For example, if you're storing files using Ape, stream the data directly 
to/from the filesystem.  In fact, when sending a file, the server could 
close the database connection, freeing up precious database threads, and 
let asyncore stream the rest of the data from the file.
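
To make the streaming half concrete, here is a minimal sketch of the idea, 
in the spirit of the producers ZServer/medusa already use (a more() method 
that returns '' when exhausted); the class name and chunk size below are 
only illustrative:

class FileProducer:
    """Hand back successive chunks of an open file; '' means finished."""

    def __init__(self, fileobj, chunk_size=1 << 16):
        self.fileobj = fileobj
        self.chunk_size = chunk_size

    def more(self):
        data = self.fileobj.read(self.chunk_size)
        if not data:
            # Exhausted: close the file so nothing lingers.
            self.fileobj.close()
        return data

The request handler would commit or abort, give the ZODB connection back to 
the pool, and then queue a producer like this on the channel; asyncore pushes 
the chunks out and the file body never enters the object cache.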

To do this, we'd need minimal support from the application.  OFS.File 
needs to delegate the specifics of data storage and retrieval to a 
subobject.  Ape could take the opportunity to replace that subobject 
with something that reads and writes the file directly.
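
As a purely hypothetical illustration of that delegation (none of these 
names exist in OFS.File or Ape; they only show the shape of the idea):

class ZODBData:
    """Default subobject: the bytes stay in the ZODB, as File works now."""

    def __init__(self, data=''):
        self.data = data

    def read(self):
        return self.data

    def write(self, data):
        self.data = data


class FileSystemData:
    """What Ape might substitute: the bytes live in an ordinary file."""

    def __init__(self, path):
        self.path = path

    def read(self):
        f = open(self.path, 'rb')
        try:
            return f.read()
        finally:
            f.close()

    def write(self, data):
        f = open(self.path, 'wb')
        try:
            f.write(data)
        finally:
            f.close()

    def open(self):
        # Lets the publisher hand an open file to a streaming producer
        # instead of pulling the whole thing into memory.
        return open(self.path, 'rb')

The code that serves the File would then ask its storage subobject for the 
data (or an open file), and a File stored through Ape could stream straight 
from the filesystem without its contents ever landing in the object cache.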

Shane