I'm going to try to make a long story short... and the story isn't even over... but I'm getting close.

One of our clients is a 'multimedia' company, and we're working with a group there that consists mostly of artists and designers who use tools like Photoshop and Macromedia Director. They came to us recently with a project for which they were *going* to use Macromedia Multiuser Server, but the complexity of their application is significant. Long story short: I've sold them on the concept of using Zope as the 'media/personality server' for this application. They will use Director (which can post data to a URL and can also parse XML). So I'm building a framework that permits them to use their favorite tools, but I get to use *my* favorite tool too. ;-)

The problem: Director is not a browser. There is no 'view source'. But (I think to myself) this is a great chance to use tcpwatch, which I've never used before. It's a little tricky since my favorite client machine is a Macintosh, and, well... let's just say that Tkinter for the Mac is not perfect, not to mention there is no thread module. But I do have a useful workaround: since I run Zope on a FreeBSD server, I just run tcpwatch on FreeBSD and use either MI/X or VirtualPC with Linux as my X server.

I noticed, however, that when I did a 'POST' the URL-encoded arguments were lost. I found that the proxy_receiver handle_close method was never called, so anything in a 'last line' that didn't end in '\n' was lost. I added the following patch that shows this... but why is handle_close not called? I can only guess that the socket is not being properly closed somehow. I use lib/python/ZPublisher/Client.py to test calls to Zope and it works fine, but the asyncore/asynchat machinery never calls handle_close for proxy_receiver.

Anyway... here's the patch. Comments welcome!
*** ./tcpwatch_orig.py	Sat Jan 20 16:55:43 2001
--- ./tcpwatch.py	Sun Jan 21 16:52:11 2001
***************
*** 130,135 ****
--- 130,137 ----
              pos = pos + 1
          else:
              # Last line, may be incomplete.
+             line = "Partial line? " + data[oldpos:] + '\r\n'
+             self.watch_output(line, byClient)
              return data[oldpos:]

      def cleanupRefs(self):

take care,
-steve
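[For readers following along: the behavior Steve describes can be reproduced outside tcpwatch. Below is a minimal, self-contained sketch of the pre-patch line-scanning logic — hypothetical names, not the actual tcpwatch source — showing why a final chunk with no trailing newline (a URL-encoded POST body, typically) is held back and never displayed unless the connection close is noticed.]

```python
def scan_lines(data):
    """Split a received chunk into complete lines plus an
    unterminated remainder, mimicking tcpwatch's display loop."""
    lines = []
    oldpos = 0
    while True:
        pos = data.find('\n', oldpos)
        if pos >= 0:
            lines.append(data[oldpos:pos + 1])  # complete line: displayed
            oldpos = pos + 1
        else:
            # Last line, may be incomplete: held back, not displayed
            return lines, data[oldpos:]

# An HTTP POST body is usually not newline-terminated, so the
# urlencoded arguments end up entirely in the hidden remainder.
lines, rest = scan_lines("POST /doc HTTP/1.0\r\n\r\nname=steve&tool=zope")
```

Here `lines` holds the request line and the blank header terminator, while the entire body `"name=steve&tool=zope"` sits in `rest` — which is exactly what the patch above makes visible.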
i'm guessing this has something to do with the default line terminators that medusa is looking for before it senses that a data buffer is ready to be processed. it seems like the url-encoded requests have different line terminators. you can adjust this on the fly by using the set_terminator() func on the dispatcher. set_terminator can look for either a set of input chars, or you can adjust the data buffers to fire off for processing at integer sizes.

hope that helps

kapil

On Sunday 21 January 2001 14:15, Steve Spicklemire wrote:
_______________________________________________
Zope maillist - Zope@zope.org
http://lists.zope.org/mailman/listinfo/zope
** No cross posts or HTML encoding! **
(Related lists -
 http://lists.zope.org/mailman/listinfo/zope-announce
 http://lists.zope.org/mailman/listinfo/zope-dev )
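[kapil's suggestion above can be illustrated without medusa itself. asynchat's set_terminator() accepts a terminator string, an integer byte count, or None. The following is a toy model of those three modes — an illustration of the semantics, not asynchat's implementation:]

```python
class TerminatorBuffer:
    """Toy model of asynchat's terminator logic: collect input until
    either a terminator string is seen or N bytes have arrived."""
    def __init__(self):
        self.buf = ""
        self.terminator = None
        self.ready = []          # chunks handed off for processing

    def set_terminator(self, term):
        # term may be a string (match), an int (byte count), or None
        self.terminator = term

    def feed(self, data):
        self.buf += data
        if isinstance(self.terminator, str):
            # fire once per occurrence of the terminator string
            while self.terminator in self.buf:
                chunk, self.buf = self.buf.split(self.terminator, 1)
                self.ready.append(chunk)
        elif isinstance(self.terminator, int):
            # fire in fixed-size blocks
            while len(self.buf) >= self.terminator:
                self.ready.append(self.buf[:self.terminator])
                self.buf = self.buf[self.terminator:]
        else:
            # terminator None: pass everything through as it arrives
            self.ready.append(self.buf)
            self.buf = ""

buf = TerminatorBuffer()
buf.set_terminator('\r\n\r\n')   # fire when the blank line ends the headers
buf.feed('POST /doc HTTP/1.0\r\nHost: zope\r\n\r\nname=steve')
```

With a string terminator the header block fires as one unit and the body stays buffered; switching to an integer terminator (e.g. the Content-Length) would then fire the POST body as soon as it is fully buffered, even though it carries no trailing newline.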
Well, proxy_receiver seems to set its terminator to None, which seems to indicate that the dispatcher should just send it all as it comes without checking for any special terminator (which is indeed what appears to be happening!). The problem is that this dispatcher code:

    def recv (self, buffer_size):
        try:
            data = self.socket.recv (buffer_size)
            if not data:
                # a closed connection is indicated by signaling
                # a read condition, and having recv() return 0.
                self.handle_close()
                return ''
            else:
                return data

is never called at a time when self.socket.recv() returns ''. I think this is the normal signal for a closed socket... so if it's never getting an empty string, does that mean the socket is not closed correctly? Should asyncore be changed so that handle_close() is called whenever close() is called (unless it has already been called in recv)? I'm guessing that the OS takes care of any handles that may be left dangling here... if any.

thanks,
-steve
"kapil" == ender <kthangavelu@earthlink.net> writes:
kapil> i'm guessing this has something to do with the default line
kapil> terminators that medusa is looking for before it senses
kapil> that a data buffer is ready to be processed. it seems like
kapil> the url encoded ones request seem to have different line
kapil> terminators. you can adjust this the fly by using the
kapil> set_terminator() func on the dispatcher. set_terminator can
kapil> look for either a set of input chars, or you can adjust the
kapil> data buffers to fire off for processing on integer sizes.

kapil> hope that helps

kapil> kapil
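[Steve's question above hinges on the asyncore convention that a peer close shows up as a zero-length read. A self-contained model of that convention — fake socket and a minimal receiver, not asyncore itself — makes the failure mode concrete:]

```python
class FakeSocket:
    """Stands in for a real socket: serves queued chunks, then
    returns '' forever, the way recv() does on a closed connection."""
    def __init__(self, chunks):
        self.chunks = list(chunks)

    def recv(self, buffer_size):
        return self.chunks.pop(0) if self.chunks else ''

class Receiver:
    """Minimal model of a dispatcher's recv(): an empty read is the
    only signal that the peer has closed."""
    def __init__(self, sock):
        self.socket = sock
        self.closed = False
        self.seen = []

    def handle_close(self):
        # this is where tcpwatch would flush any buffered partial line
        self.closed = True

    def recv(self, buffer_size):
        data = self.socket.recv(buffer_size)
        if not data:
            self.handle_close()
            return ''
        self.seen.append(data)
        return data

r = Receiver(FakeSocket(['name=steve', '&tool=zope']))
while r.recv(8192):
    pass
# handle_close fires only because the fake socket eventually returned
# '' -- if the event loop never performs that final read (e.g. the
# channel is torn down via close() without a read event), the flush
# never happens, which matches the symptom Steve reports.
```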
Steve Spicklemire wrote:
Anyway... here's the patch: Comments welcome!
*** ./tcpwatch_orig.py	Sat Jan 20 16:55:43 2001
--- ./tcpwatch.py	Sun Jan 21 16:52:11 2001
***************
*** 130,135 ****
--- 130,137 ----
              pos = pos + 1
          else:
              # Last line, may be incomplete.
+             line = "Partial line? " + data[oldpos:] + '\r\n'
+             self.watch_output(line, byClient)
              return data[oldpos:]

      def cleanupRefs(self):
So you're saying the data still goes through, it just doesn't get displayed, right?

What happens is tcpwatch receives a stream of data and immediately passes it along. It then scans the data just transferred for line feeds, calling watch_output() for each complete line it finds. At the end of a transferred block there is often an incomplete line, and in order for the little arrows to be consistent it can't display that data until it finds a line feed or the connection closes. This isn't usually a problem; in the case of HTTP connections, even keepalive connections usually close within 30 seconds.

But that logic is only necessary because of the arrows. A better way to display the data would be to use colors to distinguish transfer directions, and it could certainly be done IMHO. It would be the right way to solve the problem your patch is trying to solve. Perhaps next time I need TCPWatch I'll colorize it. Or maybe someone else will beat me to it. :-)

BTW, I tried to adapt TCPWatch for UDP and it *almost* worked... then I realized what a silly thing that was to do. In the case of UDP it's better to just dump packets to the console, in which case kernel-level packet monitoring serves nicely and you don't have to set up a proxy connection.

Shane
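[Shane's colorizing idea removes the need to buffer partial lines at all: if direction is carried by color rather than by per-line arrows, every chunk can be shown the moment it arrives. A minimal sketch using ANSI escape codes — a hypothetical helper, not part of TCPWatch:]

```python
# ANSI SGR color codes: green for client->server, red for server->client
CLIENT_COLOR = '\x1b[32m'
SERVER_COLOR = '\x1b[31m'
RESET = '\x1b[0m'

def colorize(data, by_client):
    """Wrap a raw chunk in a direction color so it can be displayed
    immediately -- no need to hold back an unterminated last line."""
    color = CLIENT_COLOR if by_client else SERVER_COLOR
    return color + data + RESET

# A newline-less POST body would be printed as soon as it arrives:
out = colorize('name=steve&tool=zope', by_client=True)
```

Because each chunk is self-describing, interleaved transfers in both directions stay readable without any per-line bookkeeping.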
participants (3)
- ender
- Shane Hathaway
- Steve Spicklemire