[ZWeb] google results duplication
Simon Michael
simon@joyful.com
13 May 2001 15:57:44 -0700
Karl Anderson <karl@digicool.com> writes:
> Is there an easy way to exclude these? Will robots.txt work for page
> suffixes, or just subdirectories?
Karl - no, robots.txt is quite primitive: it excludes by URL path
prefix only, so it can match subdirectories but not page suffixes.
There are some newer meta tags which can work on a per-page basis with
newer robots. These seem like the best bet for keeping crawlers out of
editform, backlinks, reparent, etc. See
http://zwiki.org/TheRobotProblem for more.
Shane Hathaway <shane@digicool.com> writes:
> ZWiki just needs to be fixed to not generate infinite URLs based on
> acquisition. If you browse around a ZWiki long enough, you'll find
> that path elements in the URL get repeated. Or you can just browse
> the Zope.org logs to find googlebot and its excessive URLs.
The latest code should be quite resistant to this; it should never
happen to a normal user. It may well be fixed for robots in general
too, since the switch to using absolute URLs for wiki links.
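To illustrate why absolute URLs help (page names here are
hypothetical): with relative links, Zope's acquisition lets a
crawler's URL paths grow without bound, since each repeated path
element still resolves to the same object.

```html
<!-- relative link: followed from /FrontPage, this yields
     /FrontPage/OtherPage, then /FrontPage/OtherPage/FrontPage, ...
     each a "new" URL for a crawler, all the same page -->
<a href="OtherPage">OtherPage</a>

<!-- absolute link: always one canonical URL per page -->
<a href="http://zwiki.org/OtherPage">OtherPage</a>
```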
Completely disabling acquisition through ZWiki pages seems a bit
drastic, unless there's no other option..
Regards,
-Simon