...from the "everything would be perfect if it weren't for those darn users" category... I'm running Zope 2.4.3 on Linux (RedHat 7.1 / Intel) and using the LocalFS product to allow users to easily upload/download files from their browser. For "normal sized" files (up to a few dozen Meg.) everything works as expected. However, with bigger files- in particular a 172 Meg file someone tried to upload- the system grinds to a halt as the server memory gets chewed up. I've seen a lot of discussions on this list and others regarding a previous bug in the cgi.py code that caused an entire file being uploaded to get stored in memory while transferring. It *sounds* like that's been fixed in the versions of python/zope I'm using though... However, my symptoms seem to match what was being described re: the previous bug... is there some patch I'm missing? Or is there any other solution other than asking users not to upload 172 Meg files??? thanks-- Larry p.s. the server has 256Meg of RAM. + Larry J. Prikockis Internet Systems Specialist, NatureServe Voice:703-908-1833 / Fax:703-908-1917 / Email:larry_prikockis@NatureServe.org www.NatureServe.org + All programmers are optimists. Perhaps this modern sorcery especially attracts those who believe in happy endings and fairy godmothers. Perhaps the hundreds of nitty frustrations drive away all but those who habitually focus on the end goal. Perhaps it is merely that computers are young, programmers are younger, and the young are always optimists. But however the selection process works, the result is indisputable: "This time it will surely run," or "I just found the last bug." -- Frederick P. Brooks, Jr., The Mythical Man Month
On Thu, Jan 10, 2002 at 04:09:30PM -0500, larry_prikockis@NatureServe.org wrote:
...from the "everything would be perfect if it weren't for those darn users" category...
I'm running Zope 2.4.3 on Linux (RedHat 7.1 / Intel) and using the LocalFS product to allow users to easily upload/download files from their browser. For "normal sized" files (up to a few dozen Meg.) everything works as expected. However, with bigger files- in particular a 172 Meg file someone tried to upload- the system grinds to a halt as the server memory gets chewed up.
snip, snip
thanks-- Larry
p.s. the server has 256Meg of RAM.
I use an external method to do my upload; this is the salient part:

    import os
    import string

    def upload(self, REQUEST):
        once = 0
        filename = REQUEST.form['attachment'].filename
        # Windows browsers send the full client-side path (C:\dir\file);
        # normalize the separators so we can split off the base name.
        if filename[1:2] == ':':
            filename = string.replace(filename, '\\', '/')
        id = filename[string.rfind(filename, '/') + 1:]
        id = string.replace(id, ' ', '_')
        user = str(REQUEST['AUTHENTICATED_USER'])
        tmpfname = '/home/zope/shared_files/%s.tmp' % user
        fname = '/home/zope/shared_files/%s/%s' % (user, id)
        fdir = '/home/zope/shared_files/%s' % user
        url = ('https://my_server.my_domain.com/dynamic/protected/'
               'shared_files/shared_files/%s/%s' % (user, id))
        try:
            os.mkdir(fdir)          # may already exist
        except OSError:
            pass
        try:
            os.unlink(tmpfname)     # clear out any stale temp file
        except OSError:
            pass
        f = open(tmpfname, 'w')
        # Copy the upload in 4 KB chunks so the whole file is never
        # held in memory at once.
        while 1:
            s = REQUEST.form['attachment'].read(4096)
            if not s:
                break
            once = 1
            f.write(s)
        REQUEST.set('error_message', '')
        if not once:
            f.write('<HTML><TITLE>No Data Uploaded</title><body>'
                    'Due to unexplained failure - No Data Uploaded'
                    '</body></html>')
            REQUEST.set('error_message', '<p>Upload failed!')
        f.close()
        # Move the finished file into place only after the upload
        # has completed in full.
        os.rename(tmpfname, fname)
        REQUEST.set('upload_url', url)
        REQUEST.set('upload_filename', fname)

Note that it has some hard-coded stuff in it that should perhaps be factored out, namely the location stuff (this puts each upload in a per-user folder under /home/zope/shared_files).

Also note that it is a bit paranoid about placement: it uploads the entire file to a temporary location and moves it into place only after the upload finishes. That way, filling up your hard drive cannot leave an incomplete file in place of the old one (your drive is still full, but the user's old file has not been touched yet).

It will not fill up your memory, as it reads and writes the file in 4 KB chunks.

It does assume that the user will upload only one file at a time. With Windows users this tends not to be a problem; they just don't seem to think about launching a bunch of uploads simultaneously. It would be easy to fix, however: just make a smarter tmpname, as in the sketch below.

The largest file I currently have here is 104,486,083 bytes. This is quite a bit bigger than my server's RAM!

Jim Penny
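[Ed.: a minimal sketch of that "smarter tmpname", not from Jim's original post. It assumes the same /home/zope/shared_files layout, and the make_tmpname helper name is hypothetical. Zope 2 serves concurrent requests as threads in a single process, so a thread id plus a timestamp is one way to distinguish simultaneous uploads; the upload method above would call this in place of the fixed '%s.tmp' name.]

    import thread
    import time

    def make_tmpname(user):
        # Hypothetical helper, not part of the original post: build a
        # per-request temporary name so two simultaneous uploads by the
        # same user cannot clobber each other's partial file.  The
        # thread id distinguishes concurrent requests inside one Zope
        # process; the timestamp guards against later reuse of the
        # same thread id.
        return '/home/zope/shared_files/%s.%d.%d.tmp' % (
            user, thread.get_ident(), int(time.time()))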