Wednesday, 4 June 2008

Using the Robots Exclusion Protocol

The Google Webmaster blog has just published a succinct summary of the Robots Exclusion Protocol - the standard websites use to tell search engines which parts of a site, whether the whole site or particular sections and pages, may be crawled and indexed. It is an element that many websites overlook, but it should be used to streamline the way that search engines visit and index a site.
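As a quick illustrative sketch (the paths below are hypothetical, not taken from the Google post), a minimal robots.txt file might allow crawling of everything except one private section:

```text
# robots.txt - served from the site root, e.g. http://www.example.com/robots.txt
User-agent: *
Disallow: /private/
```

Here `User-agent: *` addresses all crawlers, and `Disallow` blocks any URL whose path begins with `/private/`; an empty `Disallow:` line would permit the whole site.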

The Robots directives can be used either in a robots.txt file hosted at the root of the website's server, or in a robots meta tag at the page level of the site. The standard is now widely accepted by most search engines, although there has not been any common development between the main search tools in the way that the Sitemaps protocol has been developed jointly. However, this Google post outlines the main implementation requirements for a robots.txt file or meta tag, listing and defining the different directives that can be used.
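One way to see how crawlers interpret these rules is Python's standard-library robots.txt parser. This is a small sketch using a hypothetical example.com rule set, not an example from the Google post:

```python
import urllib.robotparser

# Parse a robots.txt body directly (a real crawler would fetch it
# from the site root with set_url() and read()).
rules = """\
User-agent: *
Disallow: /private/
"""

rp = urllib.robotparser.RobotFileParser()
rp.parse(rules.splitlines())

# The disallowed section is blocked for all user agents...
print(rp.can_fetch("*", "http://www.example.com/private/page.html"))  # False
# ...while the rest of the site remains crawlable.
print(rp.can_fetch("*", "http://www.example.com/index.html"))  # True
```

The page-level alternative mentioned above is a meta tag in the HTML head, for example `<meta name="robots" content="noindex, nofollow">`, which asks engines not to index that page or follow its links.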
