I had to do this just yesterday, and the urllib module made it absolutely
painless. Here's the simplistic little script I came up with, which could be
called by cron or some other task manager. This is probably way less cool than
the answer you're looking for, but it might give someone else some ideas . . .

Later,
Jerry S.

---------------------------------------------------

"""
urlgrabr.py is a simple script to hit a URL, capture the generated
output, and manipulate it based on parameters -- e.g. save it as a file:

    python urlgrabr.py http://server/folder/object file=filename.html

Default is simply to request the URL and exit, discarding the data.
"""
import urllib
import sys
import string

c_url = sys.argv[1]
c_action = None

u_url = urllib.urlopen(c_url)

if len(sys.argv) > 2:
    c_action = sys.argv[2]
    if string.lower(c_action[:5]) == "file=":
        f_action = open(c_action[5:], "w")   # file name to save to
        for l in u_url.readlines():
            if string.strip(l[:-1]):         # skip blank lines
                f_action.write(l)
        f_action.close()
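
P.S. The script above is Python 2: `urllib.urlopen` and the module-level
`string` functions are gone in Python 3. A rough modern equivalent might look
like the sketch below (the `parse_file_action` and `save_url` helper names are
my own invention, not anything from the original):

    import sys
    import urllib.request

    def parse_file_action(action):
        """Return the target filename if action looks like 'file=name', else None."""
        if action is not None and action[:5].lower() == "file=":
            return action[5:]
        return None

    def save_url(url, filename):
        """Fetch url and write its non-blank lines to filename."""
        with urllib.request.urlopen(url) as response:
            text = response.read().decode("utf-8", errors="replace")
        with open(filename, "w") as f:
            for line in text.splitlines(keepends=True):
                if line.strip():             # skip blank lines, as the original did
                    f.write(line)

    if __name__ == "__main__" and len(sys.argv) > 1:
        # usage: python urlgrabr.py http://server/folder/object [file=filename.html]
        url = sys.argv[1]
        action = sys.argv[2] if len(sys.argv) > 2 else None
        target = parse_file_action(action)
        if target is not None:
            save_url(url, target)
        else:
            urllib.request.urlopen(url).read()   # just hit the URL, discard data

Same behavior, but the `with` blocks make sure the socket and the file get
closed even if something blows up mid-read.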