...from the "everything would be perfect if it weren't for those darn users" category...

I'm running Zope 2.4.3 on Linux (Red Hat 7.1 / Intel) and using the LocalFS product to let users easily upload and download files from their browsers. For "normal sized" files (up to a few dozen megabytes) everything works as expected. However, with bigger files (in particular, a 172 MB file someone tried to upload) the system grinds to a halt as the server memory gets chewed up.

I've seen a lot of discussion on this list and others about an old bug in the cgi.py code that caused an entire uploaded file to be held in memory during the transfer. It *sounds* like that's been fixed in the Python/Zope versions I'm using. Still, my symptoms match what was described for that bug. Is there a patch I'm missing? Or is there any solution other than asking users not to upload 172 MB files???

thanks--
Larry

p.s. the server has 256 MB of RAM.

+--------------------------------------------------------------+
Larry J. Prikockis
Internet Systems Specialist, NatureServe
Voice: 703-908-1833 / Fax: 703-908-1917
Email: larry_prikockis@NatureServe.org
www.NatureServe.org
+--------------------------------------------------------------+

"All programmers are optimists. Perhaps this modern sorcery especially attracts those who believe in happy endings and fairy godmothers. Perhaps the hundreds of nitty frustrations drive away all but those who habitually focus on the end goal. Perhaps it is merely that computers are young, programmers are younger, and the young are always optimists. But however the selection process works, the result is indisputable: 'This time it will surely run,' or 'I just found the last bug.'"
   -- Frederick P. Brooks, Jr., The Mythical Man-Month
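
p.p.s. for the archives: here is a quick way to check whether the local Python's cgi.py actually spools big uploads out to a disk-backed temp file instead of holding them in memory. This is only a minimal sketch for Python 2.x; the boundary string, field name, and 1 MB payload below are made-up test values, not anything taken from LocalFS or Zope:

# minimal sketch, Python 2.x: feed cgi.FieldStorage a fabricated
# multipart POST and see where the upload payload ends up. The
# boundary, field name, and payload are hypothetical test values.
import cgi
from StringIO import StringIO

BOUNDARY = "----testboundary"
body = ("--%s\r\n"
        'Content-Disposition: form-data; name="file"; filename="big.dat"\r\n'
        "Content-Type: application/octet-stream\r\n"
        "\r\n"
        + "x" * (1024 * 1024) + "\r\n"
        "--%s--\r\n") % (BOUNDARY, BOUNDARY)

environ = {
    "REQUEST_METHOD": "POST",
    "CONTENT_TYPE": "multipart/form-data; boundary=%s" % BOUNDARY,
    "CONTENT_LENGTH": str(len(body)),
}

form = cgi.FieldStorage(fp=StringIO(body), environ=environ)
item = form["file"]

# A healthy cgi.py spools any part past a small size threshold into a
# real temporary file (via make_file), so this should print a
# disk-backed file type rather than an in-memory StringIO instance.
print type(item.file)

If that prints an in-memory StringIO type for a payload this size (or the process balloons while the script runs), then the old buffering behavior is probably still in play on that box.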