[Zope] LocalFS and mega-uploads...
Jim Penny
jpenny@universal-fasteners.com
Thu, 10 Jan 2002 16:51:37 -0500
On Thu, Jan 10, 2002 at 04:09:30PM -0500, larry_prikockis@NatureServe.org wrote:
> ...from the "everything would be perfect if it weren't for those darn users"
> category...
>
> I'm running Zope 2.4.3 on Linux (RedHat 7.1 / Intel) and using the LocalFS
> product to allow users to easily upload/download files from their browser.
> For "normal sized" files (up to a few dozen Meg.) everything works as
> expected. However, with bigger files- in particular a 172 Meg file someone
> tried to upload- the system grinds to a halt as the server memory gets
> chewed up.
>
snip, snip
> thanks--
> Larry
>
> p.s. the server has 256Meg of RAM.
>
I use an external method to do my upload; this is the salient part:

import os, string

def upload(self, REQUEST):
    once = 0
    filename = REQUEST.form['attachment'].filename
    # IE sends the full client-side path, so normalize Windows paths.
    if filename[1:2] == ':':
        filename = string.replace(filename, '\\', '/')
    id = filename[string.rfind(filename, '/')+1:]
    id = string.replace(id, ' ', '_')
    user = str(REQUEST['AUTHENTICATED_USER'])
    tmpfname = '/home/zope/shared_files/%s.tmp' % user
    fname = '/home/zope/shared_files/%s/%s' % (user, id)
    fdir = '/home/zope/shared_files/%s' % user
    url = ('https://my_server.my_domain.com/dynamic/protected/'
           'shared_files/shared_files/%s/%s' % (user, id))
    try:
        os.mkdir(fdir)
    except OSError:
        pass        # the user's directory already exists
    try:
        os.unlink(tmpfname)
    except OSError:
        pass        # no stale temp file to remove
    f = open(tmpfname, 'wb')
    # Copy the upload in 4k chunks so the whole file never sits in RAM.
    while 1:
        s = REQUEST.form['attachment'].read(4096)
        if not s:
            break
        once = 1
        f.write(s)
    REQUEST.set('error_message', '')
    if not once:
        f.write('<HTML><TITLE>No Data Uploaded</title><body>Due to '
                'unexplained failure - No Data Uploaded</body></html>')
        REQUEST.set('error_message', '<p>Upload failed!')
    f.close()
    # Move the finished file into place in a single step.
    os.rename(tmpfname, fname)
    REQUEST.set('upload_url', url)
    REQUEST.set('upload_filename', fname)
Note that it has some hard-coded stuff in it that should perhaps be factored
out, namely the location strings (this puts the file in a per-user folder
under the shared_files directory of /home/zope); a sketch of that follows
below. Also note that it is a bit paranoid about placement: it uploads the
entire file to a temporary location and only then moves it into place. That
way a failed upload never leaves an incomplete file where the real one
belongs (your drive may still be full, but the user's old file has not been
touched yet). It will not fill up your memory either, as it reads and
writes the file in 4k chunks.
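For what it's worth, a minimal sketch of that factoring; the helper name
paths_for and the BASE_DIR/BASE_URL constants are my own invention, not
part of the method above:

BASE_DIR = '/home/zope/shared_files'
BASE_URL = ('https://my_server.my_domain.com/dynamic/protected/'
            'shared_files/shared_files')

def paths_for(user, id):
    # Return (temp file, final file, user directory, public URL) for
    # one upload, so upload() itself carries no hard-coded paths.
    tmpfname = '%s/%s.tmp' % (BASE_DIR, user)
    fdir = '%s/%s' % (BASE_DIR, user)
    fname = '%s/%s' % (fdir, id)
    url = '%s/%s/%s' % (BASE_URL, user, id)
    return tmpfname, fname, fdir, url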
It does assume that the user will upload only one file at a time. With
Windows users, this tends not to be a problem. They just don't seem to
think about launching a bunch of uploads simultaneously. It would be
easy to fix, however; just make a smarter tmpname, as sketched below.
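A minimal sketch of such a tmpname, assuming the standard library's
tempfile.mkstemp is available; creating the temp file inside the
destination directory keeps the final os.rename on the same filesystem:

import os, tempfile

def make_tmpname(fdir):
    # mkstemp creates a uniquely named file per call, so simultaneous
    # uploads by the same user can never share a temp file.
    fd, tmpfname = tempfile.mkstemp(suffix='.tmp', dir=fdir)
    os.close(fd)    # upload() reopens it by name with open(tmpfname, 'wb')
    return tmpfname

Each request then gets its own temp file, and the rest of upload() works
unchanged.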
The largest file I currently have here is 104,486,083 bytes. This is
quite a bit bigger than my server's RAM!
Jim Penny