At 15:26 2003-04-16 +0100, Toby Dickenson said:
On Wednesday 16 April 2003 3:13 pm, Casey Duncan wrote:
The file is divided into 64K chunks, each of which is pickled. Each chunk is unpickled separately and then discarded.
If I remember correctly, chunks are not explicitly discarded. They stay in memory until the end of the transaction; only then does the garbage collector drop the least recently used objects. Downloading a 200MB file will therefore cause a roughly 200MB memory surge.
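The chunking scheme described above can be sketched in plain Python. This is an illustrative model only, not the real OFS.Image.Pdata implementation: the class and function names here are made up for the example, and only the 64K chunk size and the linked-list-of-pickled-chunks idea come from the discussion.

```python
CHUNK_SIZE = 64 * 1024  # Zope stores File data in 64K pieces


class Chunk:
    """One node in a linked list of data chunks, loosely modelled on
    the Pdata chain described above (names are illustrative)."""

    def __init__(self, data):
        self.data = data
        self.next = None


def chunkify(payload):
    """Split a byte string into a linked list of 64K chunks."""
    head = tail = None
    for i in range(0, len(payload), CHUNK_SIZE):
        node = Chunk(payload[i:i + CHUNK_SIZE])
        if head is None:
            head = tail = node
        else:
            tail.next = node
            tail = node
    return head


def stream(head, write):
    """Walk the chain, writing each chunk and then advancing past it,
    so chunks already sent become unreachable and collectable --
    the behaviour Johan is asking for below."""
    node = head
    while node is not None:
        write(node.data)
        node = node.next  # drop the reference to the sent chunk


out = []
stream(chunkify(b"x" * 200_000), out.append)
```

Note that `stream` only ever holds a reference to the current node; if the caller does not keep the head around, earlier chunks can be garbage collected while later ones are still being written. The point in the thread is that the real implementation does not get this benefit, because the ZODB cache keeps the loaded chunks alive until the transaction ends.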
Is there any way to flush them individually, for instance after they have been written to the response? That would make it possible to choose whether large objects should be cached in memory or not. 200MB file objects also tend to bloat the Data.fs, but that is another problem.

Regards,
Johan Carlsson

--
Easy Publisher Developers Team
Johan Carlsson  johanc@easypublisher.com
Mail: Birkagatan 9, SE-113 36 Stockholm, Sweden
Phone: +46-(0)8-31 24 94
Fax: +46-(0)8-673 04 44
Mobile: +46-(0)70-558 25 24
http://www.easypublisher.com