Pointless code in ZPublisher.HTTPRequest?
Can anyone enlighten me on what this is for?
From ZPublisher.HTTPRequest.HTTPRequest.keys:
    n=0
    while 1:
        n=n+1
        key = "URL%s" % n
        if not self.has_key(key): break

    n=0
    while 1:
        n=n+1
        key = "BASE%s" % n
        if not self.has_key(key): break

Unless I am particularly stupid today (quite possible), the above has no effect on the return value, and no direct side effects. I tried commenting it out and ran all unit tests with no apparent difference.

While digging deeper for indirect side effects, I see that self.has_key() calls self.__getitem__(), which calls self.get(). The only side effect of self.get('URLn') that I see is that if self.other.has_key('PUBLISHED'), the calculated URL will get cached for the duration of the request in self.other['URLn']. (And also in self._urls, which AFAICT is used only to keep track of which keys we've written to self.other, so we can quickly clean them out in _resetURLS().**)

So maybe that's the sole purpose of the code in question; some extra logging suggests that it does in fact have that effect. But it's hard for me to be sure. Can anyone confirm or deny? Or is there something else that I'm missing?

** btw, this too smells a bit funny to me... why bother to keep a separate data structure in sync, when we could just do:

    for i in range(9):
        try:
            del(self.other['URL%d' % i])
            i += 1
        except KeyError:
            break

--
Paul Winkler
http://www.slinkp.com
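[The caching side effect described above can be sketched with a small standalone mock. This is a hypothetical, heavily simplified stand-in for HTTPRequest (class name, URL semantics, and attribute layout are all assumptions, not the real ZPublisher code); it only shows how a has_key() probe for 'URLn' can end up writing a computed URL into self.other as a side effect.]

    # Hypothetical sketch, NOT the real ZPublisher.HTTPRequest:
    # has_key() -> __getitem__() -> get(), and get('URLn') caches the
    # computed URL in self.other when 'PUBLISHED' is present.
    class FakeRequest:
        def __init__(self, server_url, steps):
            # 'PUBLISHED' present, so computed URLs get cached
            self.other = {'SERVER_URL': server_url, 'PUBLISHED': object()}
            self._steps = steps

        def get(self, key, default=None):
            if key in self.other:
                return self.other[key]
            if key.startswith('URL') and key[3:].isdigit():
                n = int(key[3:])
                if n <= len(self._steps):
                    # assumption: URLn strips the last n path steps
                    url = '/'.join([self.other['SERVER_URL']] +
                                   self._steps[:len(self._steps) - n])
                    if 'PUBLISHED' in self.other:
                        self.other[key] = url   # the caching side effect
                    return url
            return default

        def __getitem__(self, key):
            v = self.get(key)
            if v is None:
                raise KeyError(key)
            return v

        def has_key(self, key):
            return self.get(key) is not None

    req = FakeRequest('http://localhost:8080', ['folder', 'doc'])
    assert 'URL1' not in req.other
    req.has_key('URL1')   # the probe alone triggers computation ...
    print(req.other['URL1'])   # ... and caching: http://localhost:8080/folder

Probing keys until one is missing (as the loops quoted above do) would therefore populate self.other with every valid URLn/BASEn before keys() reads it.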
Hi! Paul Winkler wrote:
From ZPublisher.HTTPRequest.HTTPRequest.keys:
    n=0
    while 1:
        n=n+1
        key = "URL%s" % n
        if not self.has_key(key): break

    n=0
    while 1:
        n=n+1
        key = "BASE%s" % n
        if not self.has_key(key): break
Unless I am particularly stupid today (quite possible), the above has no effect on the return value, and no direct side effects.
I tried commenting it out and ran all unit tests with no apparent difference.
While digging deeper for indirect side effects, I see that self.has_key() calls self.__getitem__() which calls self.get(). The only side effect of self.get('URLn') that I see is that if self.other.has_key('PUBLISHED'), the calculated URL will get cached for the duration of the request in self.other['URLn']. [...]
So maybe that's the sole purpose of the code in question; some extra logging suggests that it does in fact have that effect. But it's hard for me to be sure. Can anyone confirm or deny? Or is there something else that I'm missing?
In situations like that the CVS history is often quite useful:

<http://cvs.zope.org/Zope/lib/python/ZPublisher/Attic/HTTPRequest.py.diff?r1=1.17&r2=1.18>

I'd say the sole purpose is what you describe, but the result of the side effect is used in the next line of 'keys':

    keys.update(self.other)

This small script shows that the code in 'keys' triggers the computing of URLx and BASEx:

    print context.REQUEST.other.keys()
    context.REQUEST.keys()
    print context.REQUEST.other.keys()
    return printed

Cheers,
Yuppie
On Tue, Aug 31, 2004 at 09:51:26PM +0200, yuppie wrote:
I'd say the sole purpose is what you describe, but the result of the side effect is used in the next line of 'keys':
keys.update(self.other)
This small script shows that the code in 'keys' triggers the computing of URLx and BASEx:
    print context.REQUEST.other.keys()
    context.REQUEST.keys()
    print context.REQUEST.other.keys()
    return printed
Ahh, thanks, that clears it up. Methinks this code could use an explanatory comment; I'm checking one in on svn HEAD.

--
Paul Winkler
http://www.slinkp.com
Am Di, den 31.08.2004 schrieb Paul Winkler um 19:08:
Can anyone enlighten me on what this is for?
From ZPublisher.HTTPRequest.HTTPRequest.keys: ...
** btw, this too smells a bit funny to me... why bother to keep a separate data structure in sync, when we could just do:
    for i in range(9):
        try:
            del(self.other['URL%d' % i])
            i += 1

are you sure about the increment here?

        except KeyError:
            break
Regards Tino
On Tue, Aug 31, 2004 at 10:31:47PM +0200, Tino Wildenhain wrote:
Am Di, den 31.08.2004 schrieb Paul Winkler um 19:08:
Can anyone enlighten me on what this is for?
From ZPublisher.HTTPRequest.HTTPRequest.keys: ...
** btw, this too smells a bit funny to me... why bother to keep a separate data structure in sync, when we could just do:
    for i in range(9):
        try:
            del(self.other['URL%d' % i])
            i += 1

are you sure about the increment here?
Ugh, I sent that too quick. Also I forgot to delete BASEN. Maybe something more like this:

    def _resetURLS(self):
        """ Remove cached URLN and BASEN items. """
        other = self.other
        other['URL'] = '/'.join(
            [other['SERVER_URL']] + self._script + self._steps)
        for prefix in ('URL', 'BASE'):
            i = 0
            while True:
                try:
                    del self.other['%s%d' % (prefix, i)]
                    i += 1
                except KeyError:
                    break

I just tried this on a fresh checkout of HEAD and all tests pass and everything seems to work normally. Any interest in this little cleanup? It seems pretty safe and pretty trivial to me...

--
Paul Winkler
http://www.slinkp.com
On Tue, Aug 31, 2004 at 06:47:27PM -0400, Paul Winkler wrote:
On Tue, Aug 31, 2004 at 10:31:47PM +0200, Tino Wildenhain wrote:
Am Di, den 31.08.2004 schrieb Paul Winkler um 19:08:
Can anyone enlighten me on what this is for?
From ZPublisher.HTTPRequest.HTTPRequest.keys: ...
** btw, this too smells a bit funny to me... why bother to keep a separate data structure in sync, when we could just do:
for i in range(9): try: del(self.other['URL%d' % i]) i += 1 are you sure about the increment here?
Ugh, I sent that too quick. Also I forgot to delete BASEN. Maybe something more like this: (snip)
Further logging revealed an off-by-one error that resulted in nothing getting cleared at all :-P Here we go:

    def _resetURLS(self):
        """ Remove cached URLN and BASEN items. """
        import zLOG
        other = self.other
        other['URL'] = '/'.join(
            [other['SERVER_URL']] + self._script + self._steps)
        for prefix in ('URL', 'BASE'):
            i = 0
            while True:
                try:
                    i += 1
                    del self.other['%s%d' % (prefix, i)]
                except KeyError:
                    break

Once again, tests pass and some poking around revealed nothing odd.

--
Paul Winkler
http://www.slinkp.com
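[The off-by-one can be seen with a plain dict standing in for self.other (a hypothetical illustration, not Zope code): the cached keys start at URL1/BASE1, so a loop that deletes before incrementing probes URL0 first, hits KeyError immediately, and clears nothing. Incrementing before the delete, as in the corrected version above, starts the probe at URL1.]

    def clear_buggy(other, prefix):
        # delete first, then increment: the first probe is URL0/BASE0,
        # which never exists, so the loop breaks before clearing anything
        i = 0
        while True:
            try:
                del other['%s%d' % (prefix, i)]
                i += 1
            except KeyError:
                break

    def clear_fixed(other, prefix):
        # increment first: the first probe is URL1/BASE1
        i = 0
        while True:
            try:
                i += 1
                del other['%s%d' % (prefix, i)]
            except KeyError:
                break

    other = {'URL1': 'a', 'URL2': 'b'}
    clear_buggy(other, 'URL')
    print(sorted(other))   # ['URL1', 'URL2'] -- nothing cleared

    other = {'URL1': 'a', 'URL2': 'b'}
    clear_fixed(other, 'URL')
    print(sorted(other))   # [] -- both cleared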
participants (3):
- Paul Winkler
- Tino Wildenhain
- yuppie