
Deep understanding of the robots.txt file and Googlebot

Webmasters can control how Google interacts with their blog's webpages by using the robots.txt file. The directives in this file tell search engine crawlers how they should visit your website and its pages. Every Blogger weblog uses CSS and JavaScript code. These are usually internal and external files linked to your blog using <link>, <script>, and other HTML tags.
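For example, a typical template's head section links these assets like this (the file paths below are illustrative placeholders, not taken from a real Blogger template):

<head>
  <!-- external stylesheet linked with the <link> tag -->
  <link rel="stylesheet" href="/assets/style.css">
  <!-- external script linked with the <script> tag -->
  <script src="/assets/main.js"></script>
</head>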

For files and directories you do not want indexed by search engines, you use a robots.txt file to outline where robots should not go. It is a simple text file placed on your web server. Google needs access to these CSS and JavaScript resources in order to fully understand your webpages, but they are often blocked by the robots.txt file.
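As a hypothetical example, a single overly broad rule like the one below blocks an entire directory, including every stylesheet and script inside it, so Googlebot can no longer render the page the way visitors see it:

User-agent: *
Disallow: /assets/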
(Image: robots.txt file and Googlebot)

To get your blog's CSS and JS files crawled by search engines, allow all of your blog's assets to be crawled in the robots.txt file. Google uses a web crawler named Googlebot to crawl pages, and its indexing system renders webpages using the HTML of a page as well as its assets, such as images, CSS, and JavaScript files.

Every webmaster ought to grasp that a search engine crawler like Googlebot must be able to crawl and index your blog in order for it to be included in search engine results.
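You can check what crawlers currently see by opening your blog's robots.txt file in a browser or from the command line; replace the placeholder address below with your own blog's URL:

curl https://yourblog.blogspot.com/robots.txt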

Setting up a robots.txt file for a Blogger blog:
User-agent: *
Disallow: /?*
Allow: /*.css$
Allow: /*.js$
Allow: /assets/images/
Allow: /
Sitemap: [your XML sitemap URL here]
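A note on this sketch: Disallow: /?* keeps crawlers out of URLs that carry query strings, while Allow: /*.css$ and Allow: /*.js$ use the * wildcard and the $ end-of-URL anchor so stylesheets and scripts anywhere on the blog stay crawlable. Google follows the most specific matching rule, so these Allow lines take precedence over the Disallow for matching asset URLs. Remember to replace the sitemap placeholder with your blog's actual XML sitemap URL before publishing the file.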
