
SEO-Friendly Robots.txt File Settings for Blogger Blogs

How do you make your blog more SEO friendly using robots.txt? Configuring your Blogger blog's robots.txt file helps Googlebot crawl and index your blog on search engines more quickly. Beyond that, customizing robots.txt is itself part of search engine optimization. Before configuring the file, you need to know what a robots.txt file is and how it affects your blog's SEO.

A robots.txt file is simply a convention that tells web crawlers (search engine robots) which pages of a blog they may discover and which they must skip: all pages, or only a selected few.
For example, if there is a page you do not want indexed and shown to the public on search engines, you can use robots.txt to block it.
SEO Friendly Robots.txt File
Every blog on the Blogspot platform already has a robots.txt provided by Blogger. By default, a blog's robots.txt looks like this:
User-agent: Mediapartners-Google
Disallow:
User-agent: *
Disallow: /search
Allow: /
Sitemap: http://lablance.com/sitemap.xml
Let's analyze the robots.txt code above, line by line:

User-agent: Mediapartners-Google (the Google AdSense crawler; this group applies to blogs that partner with Google to serve ads)

Disallow: (left empty, so nothing is disallowed for that crawler)

User-agent: * (applies to all search engine robots, including Googlebot)

Disallow: /search (robots are not allowed to crawl anything under the search path, such as .../search/label/... and .../search?...)

Allow: / (allow all other pages to be crawled, except those listed under Disallow above; the slash [/] by itself stands for the blog's root URL)

Sitemap: http://blog-address/sitemap.xml (the blog's sitemap in XML format, which is search engine friendly)
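Before publishing changes, you can check what a robots.txt actually blocks by testing it locally with Python's standard urllib.robotparser module. This is a quick sketch against Blogger's default rules; the URLs are just examples:

```python
import urllib.robotparser

# Blogger's default robots.txt, as shown above
robots_txt = """\
User-agent: Mediapartners-Google
Disallow:

User-agent: *
Disallow: /search
Allow: /

Sitemap: http://lablance.com/sitemap.xml
"""

rp = urllib.robotparser.RobotFileParser()
rp.parse(robots_txt.splitlines())

# Label pages live under /search, so ordinary robots may not fetch them
print(rp.can_fetch("*", "http://lablance.com/search/label/SEO"))      # False
# Normal post URLs are allowed
print(rp.can_fetch("*", "http://lablance.com/2024/01/my-post.html"))  # True
# The AdSense crawler has an empty Disallow, so it may fetch anything
print(rp.can_fetch("Mediapartners-Google",
                   "http://lablance.com/search/label/SEO"))           # True
```

This confirms the behavior described above: everything under /search is off limits to ordinary crawlers, while Mediapartners-Google is unrestricted.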

With the default configuration above, robots should index all of your blog's articles and pages, because the crawler is only disallowed from the search result strings under /search. On a Blogspot blog, however, label and archive pages also live under /search, so blocking the whole path can keep useful pages out of the index. Rather than simply removing the Disallow: /search line, which would let crawlers index every search result page, you can edit the file to use a second, more precise configuration:
User-agent: Mediapartners-Google
Disallow:
User-agent: *
Disallow: /search?updated-min=
Disallow: /search?updated-max=
Disallow: /search/label/*?updated-min=
Disallow: /search/label/*?updated-max=
Allow: /
Sitemap: http://lablance.com/sitemap.xml
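The `*` in the label rules above is a wildcard: Googlebot (unlike the original robots.txt standard) treats `*` as "any sequence of characters" and `$` as an end-of-URL anchor. As a rough illustration of how Googlebot applies such a rule, here is a simplified sketch; `google_style_match` is a hypothetical helper written for this post, not part of any library:

```python
import re

def google_style_match(pattern: str, path: str) -> bool:
    """Simplified sketch of Googlebot's robots.txt path matching:
    '*' matches any run of characters, '$' anchors the end of the URL,
    and rules match as prefixes from the start of the path."""
    regex = ""
    for ch in pattern:
        if ch == "*":
            regex += ".*"        # wildcard: any run of characters
        elif ch == "$":
            regex += "$"         # end-of-URL anchor
        else:
            regex += re.escape(ch)
    return re.match(regex, path) is not None

# A label page filtered by date is caught by the rule above
print(google_style_match("/search/label/*?updated-min=",
                         "/search/label/SEO?updated-min=2024-01-01"))  # True
# The plain label page is not matched, so it stays crawlable
print(google_style_match("/search/label/*?updated-min=",
                         "/search/label/SEO"))                         # False
```

So this configuration blocks only the date-filtered variants of label pages while leaving the label pages themselves open to crawling.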
To prevent duplicate content or duplicate titles, especially the duplicates that occur because the blog can also be accessed from a mobile phone (URLs carrying ?m=1 or ?m=0), you can edit the robots.txt setting like this:
User-agent: Mediapartners-Google
Disallow:
User-agent: Googlebot
Disallow: /search
Disallow: /?m=1
Disallow: /?m=0
Disallow: /*?m=1
Disallow: /*?m=0
User-agent: *
Disallow: /search
Sitemap: http://lablance.com/sitemap.xml
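One caveat worth knowing when testing this configuration locally: Python's standard urllib.robotparser follows the original robots.txt convention and treats `*` in paths literally, so the wildcard lines only take effect for crawlers like Googlebot that support the extension. A quick sketch (example URLs only) shows the difference:

```python
import urllib.robotparser

# The mobile duplicate-content configuration from above
robots_txt = """\
User-agent: Mediapartners-Google
Disallow:

User-agent: Googlebot
Disallow: /search
Disallow: /?m=1
Disallow: /?m=0
Disallow: /*?m=1
Disallow: /*?m=0

User-agent: *
Disallow: /search

Sitemap: http://lablance.com/sitemap.xml
"""

rp = urllib.robotparser.RobotFileParser()
rp.parse(robots_txt.splitlines())

# The homepage's mobile variant is blocked by the literal /?m=1 rule
print(rp.can_fetch("Googlebot", "http://lablance.com/?m=1"))  # False
# But urllib.robotparser does not expand /*?m=1, so mobile post URLs
# still look fetchable here -- Googlebot's own wildcard matching is
# what actually blocks them.
print(rp.can_fetch("Googlebot",
                   "http://lablance.com/2024/01/my-post.html?m=1"))  # True
```

In other words, a local test with the standard library checks the literal rules, while the wildcard rules should be verified with Google's own robots.txt testing tools.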
