At 14:26 2003-04-15 -0700, Brett Carter said:
What doesn't work in IE? Am I missing something? HTTP Push (client push, client streaming, etc.) is a client-based protocol, and IE browsers don't support it - they don't recognize the content type, so they'll just pop up a 'Download this file' box.
Ok.
I tried the Python Script below and got the same behavior in both IE (IE6) and Mozilla: each paragraph was written at intervals. (I don't know how to do a sleep in Python Scripts, so I used a for loop that I trimmed to produce measurable intervals on my server, a rather slow machine.)
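For reference, trusted Python code (an External Method, say) could use time.sleep between writes instead of a trimmed busy loop. A minimal sketch, with StubResponse standing in for Zope's RESPONSE object (the class and function names here are made up for illustration):

```python
import time

class StubResponse:
    # Minimal stand-in for Zope's RESPONSE, just to make the sketch runnable.
    def __init__(self):
        self.chunks = []

    def write(self, data):
        self.chunks.append(data)

def stream_paragraphs(response, paragraphs, delay=1.0):
    # Write each paragraph as it is produced, pausing between writes so
    # the client can observe incremental rendering, instead of burning
    # CPU in a busy loop trimmed to the speed of one particular machine.
    for text in paragraphs:
        response.write('<p>%s</p>\n' % text)
        time.sleep(delay)
    return ''
```

With delay=0 the function just writes each paragraph as a separate chunk, which is enough to observe the incremental behavior with a real response object.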
I'm not sure what you're trying to do - most browsers require the full content of the page to be downloaded before they display it, iirc. -Brett
Well, do they? Most browsers try to show content as fast as possible. There are even CSS standards that make it possible to predict a table's layout ahead of time, so it can be rendered before it is completely loaded. The main reason for this, I guess, is that people usually start reading at the top of the page, and showing it early makes the page appear to load faster for the visitor. Hence, your browser/site appears faster.

My tests show that both Mozilla and IE display the content sent so far, even if the session has not finished or the file is not completely transmitted. (I got the feeling that this was the intended behavior in the original question.) Also, interlaced GIF images are shown as soon as possible; I guess movie clips and Flash movies behave the same way.

This could be useful for other things as well (except streaming large objects, which should never be saved in the ZODB anyway, because they bloat memory when the object is loaded. They should be streamed from a file, not by reading the entire file, as most file system object products I reviewed do, including LocalFS, for which I have made a patch).

One use would be sending a response during a long-running pack or other maintenance activity. It's a great thing to see how the process is getting along and that it's not halted. In such a process you can't predict the Content-Length, because you don't know ahead of time what messages might pop up. So the question is what happens if the Content-Length is excluded. In my tests with text/html content nothing bad happens and it works fine. Other content types may behave differently depending on the client.

Regards, Johan Carlsson -- Easy Publisher Developers Team Johan Carlsson johanc@easypublisher.com Mail: Birkagatan 9 SE-113 36 Stockholm Sweden Phone +46-(0)8-31 24 94 Fax +46-(0)8-673 04 44 Mobil +46-(0)70-558 25 24 http://www.easypublisher.com
[snip]
This could be useful for other things as well (except streaming large objects, which should never be saved in the ZODB anyway, because they bloat memory when the object is loaded. They should be streamed from a file, not by reading the entire file, as most file system object products I reviewed do, including LocalFS, for which I have made a patch).
Just FYI: Zope file objects are not completely loaded into memory when they are accessed (and served). They are divided into 64K chunks in the ZODB (where each chunk is an individual record). Files > 64k are streamed using response.write() when they are served. Files smaller than that are served all at once. -Casey
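The chunk-and-stream scheme Casey describes can be sketched with plain Python stand-ins (Pdata and StubResponse below are simplified mock-ups for illustration, not the real OFS.Image classes):

```python
class Pdata:
    # Simplified stand-in for the chunk records Casey describes:
    # a singly linked list of strings, each at most 64K in the real thing.
    def __init__(self, data, next=None):
        self.data = data
        self.next = next

class StubResponse:
    def __init__(self):
        self.written = []

    def write(self, data):
        self.written.append(data)

def serve(data, response):
    # Small files are stored as a plain string and sent in one write;
    # larger files are walked chunk by chunk with response.write().
    if isinstance(data, str):
        response.write(data)
        return
    while data is not None:
        response.write(data.data)
        data = data.next
```

The point of the linked-list form is that each chunk can be written to the socket as soon as it is loaded, without ever assembling the whole file in one string.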
At 09:46 2003-04-16 -0400, Casey Duncan said:
[snip]
This could be useful for other things as well (except streaming large objects, which should never be saved in the ZODB anyway, because they bloat memory when the object is loaded. They should be streamed from a file, not by reading the entire file, as most file system object products I reviewed do, including LocalFS, for which I have made a patch).
Just FYI: Zope file objects are not completely loaded into memory when they are accessed (and served). They are divided into 64K chunks in the ZODB (where each chunk is an individual record). Files > 64k are streamed using response.write() when they are served. Files smaller than that are served all at once. -Casey
But isn't it unpickled into memory? I did some work on this last year (for 2.4 or something), and in my tests Zope hung quite effectively when using regular File objects. But streaming from a file on the file system worked every time. I tried this on a 200MB video clip, with Microsoft Media Player as the client. Has anything changed in recent Zope releases? Best Regards, Johan Carlsson
On Wednesday 16 April 2003 10:02 am, Johan Carlsson [EasyPublisher] wrote:
At 09:46 2003-04-16 -0400, Casey Duncan said:
[snip]
This could be useful for other things as well (except streaming large objects, which should never be saved in the ZODB anyway, because they bloat memory when the object is loaded. They should be streamed from a file, not by reading the entire file, as most file system object products I reviewed do, including LocalFS, for which I have made a patch).
Just FYI: Zope file objects are not completely loaded into memory when they are accessed (and served). They are divided into 64K chunks in the ZODB (where each chunk is an individual record). Files > 64k are streamed using response.write() when they are served. Files smaller than that are served all at once. -Casey
But isn't it unpickled into memory?
The file is divided into 64K chunks, each of which is pickled. Each chunk is unpickled separately and then discarded.
I did some work on this last year (for 2.4 or something), and in my tests Zope hung quite effectively when using regular File objects. But streaming from a file on the file system worked every time.
If that's still true, then I would regard it as a bug.
I tried this on a 200MB video clip and the client was Microsoft MediaPlayer.
Has anything changed in recent Zope releases?
It has been this way for a long time. -Casey
On Wednesday 16 April 2003 3:13 pm, Casey Duncan wrote:
The file is divided into 64K chunks, each of which is pickled. Each chunk is unpickled separately and then discarded.
If I remember correctly, chunks are not explicitly discarded. They will certainly stay in memory until the end of the transaction, then the garbage collector will drop the least recently used objects. A 200MB file will certainly cause a 200MB memory surge when downloaded. -- Toby Dickenson http://www.geminidataloggers.com/people/tdickenson
At 15:26 2003-04-16 +0100, Toby Dickenson said:
On Wednesday 16 April 2003 3:13 pm, Casey Duncan wrote:
The file is divided into 64K chunks, each of which is pickled. Each chunk is unpickled separately and then discarded.
If I remember correctly, chunks are not explicitly discarded. They will certainly stay in memory until the end of the transaction, then the garbage collector will drop the least recently used objects. A 200MB file will certainly cause a 200MB memory surge when downloaded.
Is there any way to flush them individually? For instance, after they have been written to the response. That would make it possible to choose whether large objects should be cached in memory or not. 200MB file objects tend to bloat the Data.fs as well, but that is another problem. Regards, Johan Carlsson
On Wednesday 16 April 2003 10:26 am, Toby Dickenson wrote:
On Wednesday 16 April 2003 3:13 pm, Casey Duncan wrote:
The file is divided into 64K chunks, each of which is pickled. Each chunk is unpickled separately and then discarded.
If I remember correctly, chunks are not explicitly discarded. They will certainly stay in memory until the end of the transaction, then the garbage collector will drop the least recently used objects. A 200MB file will certainly cause a 200MB memory surge when downloaded.
Here is the code in question from the index_html method of OFS/Image.py:

    data = self.data
    if type(data) is type(''):
        RESPONSE.setBase(None)
        return data
    while data is not None:
        RESPONSE.write(data.data)
        data = data.next
    return ''

Although a reference to data is not kept here, I imagine the ZODB cache may keep a reference. Perhaps this should be changed to explicitly discard it from the cache each time through the loop? If the whole thing is indeed loaded into memory, then it pretty much defeats the purpose of this code. -Casey
Casey Duncan wrote at 2003-4-16 11:56 -0400:
... Here is the code in question from the index_html method of OFS/Image.py:
    data = self.data
    if type(data) is type(''):
        RESPONSE.setBase(None)
        return data
    while data is not None:
        RESPONSE.write(data.data)
        data = data.next
    return ''
Although a reference to data is not kept here, I imagine the ZODB cache may keep a reference. Perhaps this should be changed to explicitly discard it from the cache each time through the loop?
If the whole thing is indeed loaded into memory, then it pretty much defeats the purpose of this code.
I agree. Change it to:

    while data is not None:
        RESPONSE.write(data.data)
        ndata = data.next
        data._p_deactivate()
        data = ndata

Dieter
Dieter Maurer wrote:
If the whole thing is indeed loaded into memory, then it pretty much defeats the purpose of this code.
I agree.
Change it to:
    while data is not None:
        RESPONSE.write(data.data)
        ndata = data.next
        data._p_deactivate()
        data = ndata
Has anyone tested this works? If so, has anyone checked it in? If not, could someone open a collector issue so I can check it in ;-) cheers, Chris
(caution; old thread) On Monday 21 April 2003 2:51 pm, Chris Withers wrote:
Dieter Maurer wrote:
If the whole thing is indeed loaded into memory, then it pretty much defeats the purpose of this code.
One purpose of this code is to reduce latency by allowing the first chunk to be written to the http socket before the last chunk is loaded from disk.
Change it to:
    while data is not None:
        RESPONSE.write(data.data)
        ndata = data.next
        data._p_deactivate()
        data = ndata
Has anyone tested this works?
If so, has anyone checked it in?
If not, could someone open a collector issue so I can check it in ;-)
Deactivating *every* block looks like a bad idea. This defeats the in-memory cache that will be a big win for small files. The memory cache is only a disadvantage if your files are large enough that they have enough chunks to disrupt the LRU cache policy. How about comparing the number of chunks to a fraction of the cache target size? (Sorry, no patch today. I've a large email backlog to clear.)
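Toby's heuristic could look roughly like this (the names, the size-derived chunk estimate, and the 0.25 fraction are all illustrative assumptions, not Zope code):

```python
CHUNK_SIZE = 64 * 1024  # the 64K chunk size discussed in this thread

def should_bypass_cache(file_size, cache_target_objects, fraction=0.25):
    # Estimate the file's chunk count from its byte size, and only
    # deactivate chunks when that count would occupy a significant
    # fraction of the ZODB cache's target object count. Small files
    # stay cached; huge files stop evicting everything else.
    num_chunks = (file_size + CHUNK_SIZE - 1) // CHUNK_SIZE
    return num_chunks > cache_target_objects * fraction
```

With a cache target of 1000 objects and the default fraction, a 10K file (one chunk) would stay cached, while the 200MB clip from earlier in the thread (about 3200 chunks) would be streamed with deactivation.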
Toby Dickenson wrote:
Deactivating *every* block looks like a bad idea. This defeats the in-memory cache that will be a big win for small files.
The memory cache is only a disadvantage if your files are large enough that they have enough chunks to disrupt the LRU cache policy. How about comparing the number of chunks to a fraction of the cache target size?
(Sorry, no patch today. I've a large email backlog to clear.)
Just something that came to my mind when reading about cache policies: with the advent of more and more alternative storages, especially AdaptableStorage and DirectoryStorage, maybe the interaction with other caches should be taken into account. For instance, does it make sense for Zope to cache a file in memory when it's also cached in the kernel VFS, which it likely would be when some filesystem-based storage is used? How can the choice of the backend storage affect caching policies inside Zope? For the "frontend" cache policies, Zope is quite configurable thanks to the various cache managers etc.; maybe it's also worth thinking about making such cache policies for the "backend" more configurable. cheers, oliver
On Thursday 24 April 2003 10:58 am, Oliver Bleutgen wrote:
Toby Dickenson wrote:
Deactivating *every* block looks like a bad idea. This defeats the in-memory cache that will be a big win for small files.
The memory cache is only a disadvange if your files are large enough that they have enough chunks to disrupt LRU cache policy. How about comparing the number of chunks to a fraction of the cache target size?
(sorry, no patch today. Ive a large email backlog to clear)
Just something which came to my mind when reading about cache policies:
With the advent of more and more alternative storages, especially AdaptableStorage and DirectoryStorage, maybe the interaction with other caches should be taken into account. For instance, does it make sense for Zope to cache a file in memory when it's also cached in the kernel VFS, which it likely would be when some filesystem-based storage is used? How can the choice of the backend storage affect caching policies inside Zope? For the "frontend" cache policies, Zope is quite configurable thanks to the various cache managers etc.; maybe it's also worth thinking about making such cache policies for the "backend" more configurable.
Yes, today's LRU policy is good enough, but far from optimal. A better cache eviction scheme could also balance:

1. The cost of keeping an object in memory. This is largely dependent on the byte size of the object.
2. The benefit of keeping an object in memory. Objects that know they are a waste of space (like the chunks that started this thread) can pass a hint to the cache manager.
3. The cost of a subsequent cache miss. This depends on the pickling overhead, and the cost of touching the underlying storage.

The effect of this would be to keep a larger proportion of small objects in memory. That is, more objects in the same space.
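One illustrative way to fold Toby's three factors into a single eviction score (the formula, the names, and the 10x hint multiplier are made up for the sketch; higher score means evict sooner):

```python
def eviction_score(size_bytes, age_seconds, reload_cost, wasteful_hint=False):
    # Factor 1: memory cost grows with byte size.
    # Factor 2: expected benefit shrinks as the object sits idle, and an
    #           object can hint that it is a waste of space (file chunks).
    # Factor 3: a high reload cost argues for keeping the object cached.
    score = float(size_bytes) * age_seconds / max(reload_cost, 1)
    if wasteful_hint:
        score *= 10.0
    return score
```

Under this scheme a large, idle chunk scores far higher than a small, recently used object, which matches the stated goal of keeping more small objects in the same space.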
Oliver Bleutgen wrote:
Toby Dickenson wrote:
Deactivating *every* block looks like a bad idea. This defeats the in-memory cache that will be a big win for small files. The memory cache is only a disadvantage if your files are large enough that they have enough chunks to disrupt the LRU cache policy. How about comparing the number of chunks to a fraction of the cache target size?
(Sorry, no patch today. I've a large email backlog to clear.)
Just something which came to my mind when reading about cache policies:
With the advent of more and more alternative storages, especially AdaptableStorage and DirectoryStorage, maybe the interaction with other caches should be taken into account. For instance, does it make sense for Zope to cache a file in memory when it's also cached in the kernel VFS, which it likely would be when some filesystem-based storage is used? How can the choice of the backend storage affect caching policies inside Zope? For the "frontend" cache policies, Zope is quite configurable thanks to the various cache managers etc.; maybe it's also worth thinking about making such cache policies for the "backend" more configurable.
Here's a better alternative: when you can, avoid caching altogether. For example, if you're storing files using Ape, stream the data directly to/from the filesystem. In fact, when sending a file, the server could close the database connection, freeing up precious database threads, and let asyncore stream the rest of the data from the file. To do this, we'd need minimal support from the application. OFS.File needs to delegate the specifics of data storage and retrieval to a subobject. Ape could take the opportunity to replace that subobject with something that reads and writes the file directly. Shane
Shane Hathaway wrote:
Here's a better alternative: when you can, avoid caching altogether. For example, if you're storing files using Ape, stream the data directly to/from the filesystem. In fact, when sending a file, the server could close the database connection, freeing up precious database threads, and let asyncore stream the rest of the data from the file.
To do this, we'd need minimal support from the application. OFS.File needs to delegate the specifics of data storage and retrieval to a subobject. Ape could take the opportunity to replace that subobject with something that reads and writes the file directly.
Maybe this could include the possibility of not serving the file via ZServer at all. I did a small patch to ExtFile for that. If the file object is accessed via Zope, it returns a redirect to a computed location from which the file is then served by Apache. I think there are a lot of use cases for that, even if ZServer's performance isn't considerably worse than e.g. Apache's. You might want to stream multimedia files with a streaming server but manage them in Zope; there might be cases where ZServer's HTTP implementation is not up to the task, as it was with PDF a while ago, IIRC. Maybe this would also help with certain quirks in some WebDAV implementations where Apache might have a workaround but ZServer does not (yet). cheers, oliver
At 17:01 2003-04-24 +0200, Oliver Bleutgen said:
Shane Hathaway wrote:
Here's a better alternative: when you can, avoid caching altogether. For example, if you're storing files using Ape, stream the data directly to/from the filesystem. In fact, when sending a file, the server could close the database connection, freeing up precious database threads, and let asyncore stream the rest of the data from the file. To do this, we'd need minimal support from the application. OFS.File needs to delegate the specifics of data storage and retrieval to a subobject. Ape could take the opportunity to replace that subobject with something that reads and writes the file directly.
Maybe this could include the possibility of not serving the file via ZServer at all. I did a small patch to ExtFile for that. If the file object is accessed via Zope, it returns a redirect to a computed location from which the file is then served by Apache. I think there are a lot of use cases for that, even if ZServer's performance isn't considerably worse than e.g. Apache's. You might want to stream multimedia files with a streaming server but manage them in Zope; there might be cases where ZServer's HTTP implementation is not up to the task, as it was with PDF a while ago, IIRC. Maybe this would also help with certain quirks in some WebDAV implementations where Apache might have a workaround but ZServer does not (yet).
In my opinion that is a little bit out of scope for the File object. Solutions for integrating content served by other servers should be implemented in separate classes (perhaps derived from File). There can exist any number of external servers, each with a different set of requirements for integration.
Oliver Bleutgen wrote:
You might want to stream multimedia files with a streaming server but manage them in zope,
This is a very good point... ...but surely this is already quite easy to do at the application level? cheers, Chris
Chris Withers wrote:
Oliver Bleutgen wrote:
You might want to stream multimedia files with a streaming server but manage them in zope,
This is a very good point...
...but surely this is already quite easy to do at the application level?
Nope, not if you were using something like Ape, I think, if I read Shane's comment correctly:
To do this, we'd need minimal support from the application. OFS.File needs to delegate the specifics of data storage and retrieval to a subobject. Ape could take the opportunity to replace that subobject with something that reads and writes the file directly.
But this doesn't expose enough; what you'd need would be something like an "external path", which could be the filesystem path in the case of Ape, or something more exotic when you are using other storages that offer alternative ways to serve their data (Oracle's file system, for instance). Since this information is only known to the storage, but can be valuable, there should be a way to get at it. cheers, oliver
[Oliver]
You might want to stream multimedia files with a streaming server but manage them in zope,
[Shane]
To do this, we'd need minimal support from the application. OFS.File needs to delegate the specifics of data storage and retrieval to a subobject. Ape could take the opportunity to replace that subobject with something that reads and writes the file directly.
[Oliver]
But this doesn't expose enough; what you'd need would be something like an "external path", which could be the filesystem path in the case of Ape, or something more exotic when you are using other storages that offer alternative ways to serve their data (Oracle's file system, for instance).
Here's what I have in mind. Simplified, OFS/File.py would change to something like this:

    class SimpleFileData:
        def __init__(self, bytes):
            self.bytes = bytes

        def send(self, RESPONSE):
            RESPONSE.write(self.bytes)

    class File(SimpleItem):
        def upload(self, data):
            self.data = SimpleFileData(data)

        def __call__(self, RESPONSE):
            self.data.send(RESPONSE)

An Ape mapper could replace the self.data attribute with an instance of the following class. The mapper would supply the filesystem path to the instance through the constructor.

    class StreamedFileData:
        def __init__(self, path):
            self.path = path

        def send(self, RESPONSE):
            bytes = open(self.path, 'rb').read()
            RESPONSE.write(bytes)

An even better implementation of StreamedFileData would arrange for the file to be sent asynchronously, but HTTPResponse doesn't currently expose such an API.
Since this information is only known to the storage, but can be valuable, there should be a way to get at it.
What I'm suggesting is that the storage might transparently enhance the application. On second thought, though, maybe that's not a good pattern, since it might surprise someone to find a StreamedFileData when they expected a SimpleFileData instance. Maybe this stuff should stay at the application layer. Shane
On Thursday 01 May 2003 4:19 pm, Shane Hathaway wrote:
What I'm suggesting is that the storage might transparently enhance the application. On second thought, though, maybe that's not a good pattern, since it might surprise someone to find a StreamedFileData when they expected a SimpleFileData instance. Maybe this stuff should stay at the application layer.
It would be less evil if the storage exposed extra capabilities to the application through _p_ attributes:

    class SimpleFileData:
        _p_path_to_data = None

        def __init__(self, bytes):
            self.bytes = bytes

        def send(self, RESPONSE):
            if self._p_path_to_data is not None:
                # Hurrah! a gift from our storage
                bytes = open(self._p_path_to_data, 'rb').read()
                RESPONSE.write(bytes)
            else:
                RESPONSE.write(self.bytes)
Toby Dickenson wrote:
On Thursday 01 May 2003 4:19 pm, Shane Hathaway wrote:
What I'm suggesting is that the storage might transparently enhance the application. On second thought, though, maybe that's not a good pattern, since it might surprise someone to find a StreamedFileData when they expected a SimpleFileData instance. Maybe this stuff should stay at the application layer.
It would be less evil if the storage exposed extra capabilities to the application through _p_ attributes:

    class SimpleFileData:
        _p_path_to_data = None

        def __init__(self, bytes):
            self.bytes = bytes

        def send(self, RESPONSE):
            if self._p_path_to_data is not None:
                # Hurrah! a gift from our storage
                bytes = open(self._p_path_to_data, 'rb').read()
                RESPONSE.write(bytes)
            else:
                RESPONSE.write(self.bytes)
I like that idea! It builds on the understanding that _p_ attributes are controlled by the database, not the application. Thanks, I'll keep that in mind. Shane
Shane Hathaway wrote:
It would be less evil if the storage exposed extra capabilities to the application through _p_ attributes:

    class SimpleFileData:
        _p_path_to_data = None

        def __init__(self, bytes):
            self.bytes = bytes

        def send(self, RESPONSE):
            if self._p_path_to_data is not None:
                # Hurrah! a gift from our storage
                bytes = open(self._p_path_to_data, 'rb').read()
                RESPONSE.write(bytes)
            else:
                RESPONSE.write(self.bytes)
I like that idea! It builds on the understanding that _p_ attributes are controlled by the database, not the application. Thanks, I'll keep that in mind.
Shane
Yes, this would, for example when using Ape, allow one to override the __str__ method of Image to render a URL to an external server that would serve directly from the filesystem. Nice. cheers, oliver
On Thursday 24 April 2003 05:41 am, Toby Dickenson wrote:
(caution; old thread)
On Monday 21 April 2003 2:51 pm, Chris Withers wrote:
Dieter Maurer wrote:
If the whole thing is indeed loaded into memory, then it pretty much defeats the purpose of this code.
One purpose of this code is to reduce latency by allowing the first chunk to be written to the http socket before the last chunk is loaded from disk.
Change it to:
    while data is not None:
        RESPONSE.write(data.data)
        ndata = data.next
        data._p_deactivate()
        data = ndata
Has anyone tested this works?
If so, has anyone checked it in?
If not, could someone open a collector issue so I can check it in ;-)
Deactivating *every* block looks like a bad idea. This defeats the in-memory cache that will be a big win for small files.
I agree.
The memory cache is only a disadvantage if your files are large enough that they have enough chunks to disrupt the LRU cache policy. How about comparing the number of chunks to a fraction of the cache target size?
I think possibly a better solution is to have an explicit switch on the file like "Disable ZODB caching (recommended for large files)" so that it can be decided as policy on a case-by-case basis. -Casey
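A per-file switch like the one Casey proposes might be sketched as follows (Chunk, StubResponse, and the disable_cache flag are hypothetical stand-ins for illustration, not actual OFS.File code):

```python
class Chunk:
    # Stand-in for a persistent 64K chunk, with a mock of ZODB's
    # _p_deactivate hook that records whether the chunk was ghosted.
    def __init__(self, data, next=None):
        self.data = data
        self.next = next
        self.in_memory = True

    def _p_deactivate(self):
        self.in_memory = False

class StubResponse:
    def __init__(self):
        self.written = []

    def write(self, data):
        self.written.append(data)

def serve(first, response, disable_cache=False):
    # Only ghostify chunks when the file is explicitly marked as too
    # large to be worth caching; small files keep the in-memory win.
    data = first
    while data is not None:
        response.write(data.data)
        nxt = data.next
        if disable_cache:
            data._p_deactivate()
        data = nxt
```

This keeps Dieter's deactivation loop, but makes it opt-in per file, which is exactly the case-by-case policy Casey describes.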
At 09:04 2003-04-24 -0400, Casey Duncan said:
Deactivating *every* block looks like a bad idea. This defeats the in-memory cache that will be a big win for small files.
I agree.
The memory cache is only a disadvantage if your files are large enough that they have enough chunks to disrupt the LRU cache policy. How about comparing the number of chunks to a fraction of the cache target size?
I think possibly a better solution is to have an explicit switch on the file like "Disable ZODB caching (recommended for large files)" so that it can be decided as policy on a case-by-case basis.
In the case of the explicit configuration option, wouldn't it be good to use a file-based caching mechanism instead? In other words, write the unpickled blocks to a cache file and stream that file to the response. I'm not sure what the overhead of reading data from the ZODB is, but I suspect streaming large files could be a bottleneck. Cache files could be removed when the database is flushed or packed. Best Regards, Johan Carlsson
On Thursday 24 April 2003 2:15 pm, Johan Carlsson [EasyPublisher] wrote:
In the case of the explicit configuration option, wouldn't it be good to use a file-based caching mechanism instead? In other words, write the unpickled blocks to a cache file and stream that file to the response.
With a (file-based) Squid cache in front of zope, and a (file-based) ZEO client cache behind it, I doubt that *another* filesystem cache can make an improvement here. -- Toby Dickenson http://www.geminidataloggers.com/people/tdickenson
At 14:37 2003-04-24 +0100, Toby Dickenson wrote:
On Thursday 24 April 2003 2:15 pm, Johan Carlsson [EasyPublisher] wrote:
In the case of the explicit configuration option, wouldn't it be good to use a file-based caching mechanism instead? In other words, write the unpickled blocks to a cache file and stream that file to the response.
With a (file-based) Squid cache in front of zope, and a (file-based) ZEO client cache behind it, I doubt that *another* filesystem cache can make an improvement here.
It would make a difference for people not using Squid or ZEO. Johan
In the case of the explicit configuration option, wouldn't it be good to use a file-based caching mechanism instead? In other words, write the unpickled blocks to a cache file and stream that file to the response.
With a (file-based) Squid cache in front of zope, and a (file-based) ZEO client cache behind it, I doubt that *another* filesystem cache can make an improvement here.
It would make a difference for people not using Squid or ZEO.
Those people probably shouldn't be streaming anyway ;) /Magnus
Well, do they? Most browsers try to show content as fast as possible. There are even CSS standards that make it possible to predict a table's layout ahead of time, so it can be rendered before it is completely loaded. The main reason for this, I guess, is that people usually start reading at the top of the page, and showing it early makes the page appear to load faster for the visitor. Hence, your browser/site appears faster.
Huh, well, perhaps I was wrong - I dunno, I've never tried something exactly like that.
This could be useful for other things as well (except streaming large objects, which should never be saved in the ZODB anyway, because they bloat memory when the object is loaded. They should be streamed from a file, not by reading the entire file, as most file system object products I reviewed do, including LocalFS, for which I have made a patch).
Cool, I'm actually working on a project where I need to stream out a large file from LocalFS, could you send me that patch? I've also made a patch to LocalFS to allow you to specify paths relative to INSTANCE_HOME, or any environment variable for that matter.
One use would be sending response in a long-time execution of a pack or other maintenance activities. It's a great thing to see how the process get along and that it's not halted.
Yeah, actually this is kind of a major problem if you're serving Zope via a reverse proxy and rewrite rules - if any page takes longer than 90-ish seconds to load, the reverse proxy will time out and close the connection, causing most browsers to re-send the request. This is a problem for long-running processes, since you get into an infinite loop, with a bunch of processes getting started on top of each other. That's exactly why I wrote the HTTPpush library: the stream of data prevents the connection from timing out.
In such a process you can't predict the Content-Length, because you don't know ahead of time what messages might pop up.
So the question is what happens if the Content-Length is excluded. In my tests with text/html content nothing bad happens and it works fine. Other content types may behave differently depending on the client.
I don't think Content-Length is required in some cases - check the HTTP RFC. -Brett
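For what it's worth, HTTP/1.1 addresses this with chunked transfer coding (RFC 2616): the body becomes self-delimiting, so no Content-Length header is needed. A minimal sketch of the wire format (illustrative only, not a full HTTP implementation):

```python
def chunked_body(chunks):
    # RFC 2616 chunked transfer coding: each chunk is its length in hex,
    # CRLF, the bytes, CRLF; a zero-length chunk terminates the body.
    # This is what lets a server stream a response of unknown length.
    parts = []
    for c in chunks:
        parts.append('%x\r\n%s\r\n' % (len(c), c))
    parts.append('0\r\n\r\n')
    return ''.join(parts)
```

Under HTTP/1.0 the alternative is simply to omit Content-Length and close the connection when the body ends, which matches the behavior observed in the tests above.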
participants (9)
-
Brett Carter -
Casey Duncan -
Chris Withers -
Dieter Maurer -
Johan Carlsson [EasyPublisher] -
Magnus Heino -
Oliver Bleutgen -
Shane Hathaway -
Toby Dickenson