20.11.3. Discussion
To keep marauding robots and web crawlers from hammering their servers,
sites are encouraged to create a file of access rules called
robots.txt. If you're fetching only one document, the load your script
places on a server is no big deal, but if it fetches many documents
from the same server, you could easily exhaust that site's bandwidth.
When writing scripts to run around the Web, it's important to be a
good net citizen: don't request documents from the same server too
often, and heed the advisory access rules in its
robots.txt file.
The easiest way to do both is to use the LWP::RobotUA module
instead of LWP::UserAgent to create your agent. A RobotUA agent
automatically spaces out repeated requests to the same server, and it
checks each site's robots.txt file to
see whether the file you're after is off-limits. If it is,
you'll get a response like this:
403 (Forbidden) Forbidden by robots.txt
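For instance, here's a minimal sketch of a polite fetch with LWP::RobotUA;
the robot name, contact address, and target URL are just placeholders:

use LWP::RobotUA;
use HTTP::Request;

# A robot agent must identify itself with a name and a contact
# address; both of these are made up for illustration.
my $ua = LWP::RobotUA->new('my-fetcher/1.0', 'webmaster@example.com');
$ua->delay(1);          # wait at least one minute between hits on the same host

my $response = $ua->request(HTTP::Request->new(GET => 'http://www.webtechniques.com/'));
if ($response->is_success) {
    print $response->content;
} else {
    print "Couldn't fetch it: ", $response->status_line, "\n";
}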
Here's an example robots.txt file, fetched using
the GET program that comes with the LWP module suite:
% GET http://www.webtechniques.com/robots.txt
User-agent: *
Disallow: /stats
Disallow: /db
Disallow: /logs
Disallow: /store
Disallow: /forms
Disallow: /gifs
Disallow: /wais-src
Disallow: /scripts
Disallow: /config
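If you'd rather consult the rules yourself, the WWW::RobotRules module
(also part of the libwww-perl suite) parses a robots.txt file and answers
allowed-or-not queries. Here's a rough sketch, again with a made-up robot name:

use WWW::RobotRules;
use LWP::Simple qw(get);

# The name is matched against User-agent lines in robots.txt.
my $rules = WWW::RobotRules->new('my-fetcher/1.0');

my $robots_url = 'http://www.webtechniques.com/robots.txt';
my $robots_txt = get($robots_url);
$rules->parse($robots_url, $robots_txt) if defined $robots_txt;

# /stats is disallowed above, so the second URL should be off-limits.
for my $page ('http://www.webtechniques.com/',
              'http://www.webtechniques.com/stats/index.html')
{
    print "$page: ", $rules->allowed($page) ? "allowed" : "off-limits", "\n";
}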