
How to crawl and index labels in Blogger

Labels are among the most powerful keywords on your blog, so you should know how search engines crawl them. You can control this with the Allow field of the robots.txt file. The field Disallow: /search tells search engines to block every URL whose path begins with the string "search". Keep in mind that instructions in a robots.txt file are crawl directives, not index directives: bots read the robots.txt file before requesting a URL and use it to decide whether they are allowed to crawl that URL. If someone visits an indexed label page through a search engine while it is blocked, they may get an "Invalid Category Filter" message (HTTP error 400).
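
For reference, a freshly created Blogger blog usually ships with a default robots.txt along these lines (the blogspot address below is a placeholder, not your actual URL):

User-agent: Mediapartners-Google
Disallow:

User-agent: *
Disallow: /search
Allow: /

Sitemap: https://yourblog.blogspot.com/sitemap.xml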


So according to Google, the Disallow: /search field tells bots with any user-agent not to crawl the "search" directory located in the root of your web server, while allowing all other directories and URLs to be crawled. Getting your labels crawled in Blogger is not easy because Blogger does not provide a sitemap for labels or tags.
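
To see this directive in action, here is a minimal sketch using Python's standard urllib.robotparser module; the example.blogspot.com domain, the label name, and the post URL are hypothetical placeholders:

from urllib import robotparser

# Rules mirroring Blogger's default Disallow: /search directive.
rules = [
    "User-agent: *",
    "Disallow: /search",
    "Allow: /",
]

parser = robotparser.RobotFileParser()
parser.parse(rules)

# Blogger label pages live under /search/label/<label-name>,
# so the default rules block crawlers from fetching them.
print(parser.can_fetch("*", "https://example.blogspot.com/search/label/SEO"))   # False
print(parser.can_fetch("*", "https://example.blogspot.com/2024/01/post.html"))  # True

Running this confirms that label URLs stay unreachable to crawlers until the Disallow rule is replaced with an Allow, as in the robots.txt below.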

Use the robots.txt file to allow crawling of labels:
User-agent: *
Allow: /search
Allow: /assets/*.css
Allow: /assets/*.js
Sitemap: http://www.yourdomain.com/sitemap.xml
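
You can apply these rules from the Blogger dashboard: enable the custom robots.txt option in your blog's crawler settings and paste the rules there, replacing the placeholder domain with your blog's own address. The exact menu path (currently Settings → Crawlers and indexing → Custom robots.txt) may change as Blogger's interface is updated.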
