On Wednesday 16 April 2003 10:26 am, Toby Dickenson wrote:
> On Wednesday 16 April 2003 3:13 pm, Casey Duncan wrote:
> > The file is divided into 64K chunks, each of which is pickled. Each chunk is unpickled separately and then discarded.
>
> If I remember correctly, chunks are not explicitly discarded. They will certainly stay in memory until the end of the transaction, then the garbage collector will drop the least recently used objects. A 200MB file will certainly cause a 200MB memory surge when downloaded.
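The chunking scheme described above can be sketched as follows. This is a simplified, hypothetical stand-in for Zope's actual Pdata class (the real one is a persistent ZODB object; the names Chunk, split_into_chunks, and reassemble are invented for illustration):

```python
# Sketch of the 64K-chunk linked-list scheme used for large files.
# Each chunk would be pickled and stored separately in the ZODB;
# here they are plain objects so the idea is self-contained.

CHUNK_SIZE = 1 << 16  # 64K, as described above

class Chunk:
    """Stand-in for a persistent data chunk: 64K of bytes plus a link."""
    def __init__(self, data):
        self.data = data
        self.next = None

def split_into_chunks(payload):
    """Build a linked list of CHUNK_SIZE pieces from a byte string."""
    head = tail = None
    for offset in range(0, len(payload), CHUNK_SIZE):
        chunk = Chunk(payload[offset:offset + CHUNK_SIZE])
        if head is None:
            head = tail = chunk
        else:
            tail.next = chunk
            tail = chunk
    return head

def reassemble(head):
    """Walk the list front to back, as index_html does when streaming."""
    parts = []
    while head is not None:
        parts.append(head.data)
        head = head.next
    return b"".join(parts)
```

Streaming chunk by chunk only saves memory if each chunk can actually be released after it is written, which is exactly the point at issue below.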
Here is the code in question from the index_html method of OFS/Image.py:

    data=self.data
    if type(data) is type(''):
        RESPONSE.setBase(None)
        return data
    while data is not None:
        RESPONSE.write(data.data)
        data=data.next
    return ''

Although a reference to data is not kept here, I imagine the ZODB cache may keep a reference. Perhaps this should be changed to explicitly discard it from the cache each time through the loop? If the whole thing is indeed loaded into memory, then it pretty much defeats the purpose of this code.

-Casey
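The explicit discard suggested above could look something like the loop below. This is an untested sketch, not a patch: _p_deactivate() is the real ZODB API for turning a persistent object back into a ghost and releasing its state from the cache, but FakePdata here is a toy stand-in so the example is self-contained. Note that the next pointer must be saved before deactivating, since touching an attribute of a ghost would reload it and defeat the purpose:

```python
# Sketch of the proposed fix: ghost each chunk after writing it, so the
# ZODB pickle cache can release its state immediately instead of holding
# every chunk until the end of the transaction.

class FakePdata:
    """Minimal stand-in for a persistent Pdata chunk."""
    def __init__(self, data, next=None):
        self.data = data
        self.next = next
        self.ghosted = False

    def _p_deactivate(self):
        # In real ZODB this drops the object's state from the cache,
        # leaving a "ghost" that reloads on next attribute access.
        self.ghosted = True

def stream(data, write):
    """The index_html inner loop, with an explicit per-chunk discard."""
    while data is not None:
        write(data.data)
        nxt = data.next          # grab the link BEFORE ghosting
        data._p_deactivate()     # discard this chunk from the cache now
        data = nxt
```

With real Pdata objects, the same shape of loop should keep the cache footprint near one chunk rather than the whole file.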