Google to stop supporting noindex directive in robots.txt


Effective September 1, Google will stop supporting unsupported and unpublished rules in the robots exclusion protocol, the company announced on the Google Webmaster blog. That means Google will no longer support robots.txt files with the noindex directive listed within the file.

“In the interest of maintaining a healthy ecosystem and preparing for potential future open source releases, we’re retiring all code that handles unsupported and unpublished rules (such as noindex) on September 1, 2019. For those of you who relied on the noindex indexing directive in the robots.txt file, which controls crawling, there are a number of alternative options,” the company said.
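For context, this is what the unofficial usage typically looked like: site owners added a Noindex line to robots.txt in the same way they would add a Disallow line. The path below is purely illustrative:

    User-agent: *
    Noindex: /private-page/

Google never documented this syntax, which is exactly why it is being retired.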

What are the alternatives? Google listed the following options, the ones you probably should have been using anyway (illustrative snippets follow the list):

(1) Noindex in robots meta tags: Supported both in the HTTP response headers and in HTML, the noindex directive is the most effective way to remove URLs from the index when crawling is allowed.
(2) 404 and 410 HTTP status codes: Both status codes mean that the page does not exist, which will drop such URLs from Google’s index once they’re crawled and processed.
(3) Password protection: Unless markup is used to indicate subscription or paywalled content, hiding a page behind a login will generally remove it from Google’s index.
(4) Disallow in robots.txt: Search engines can only index pages that they know about, so blocking the page from being crawled often means its content won’t be indexed. While the search engine may also index a URL based on links from other pages, without seeing the content itself, Google says it aims to make such pages less visible in the future.
(5) Search Console Remove URL tool: The tool is a quick and easy method to remove a URL temporarily from Google’s search results.
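As a rough illustration of options (1) and (4) above, a noindex can be set in the page’s HTML or in the HTTP response, and crawling can be blocked in robots.txt. The path used here is a placeholder, not a recommendation for any specific site:

    In the page HTML: <meta name="robots" content="noindex">
    As an HTTP response header: X-Robots-Tag: noindex
    In robots.txt (blocks crawling, but is not a guaranteed noindex):
    User-agent: *
    Disallow: /private/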

Becoming a standard. Yesterday, Google announced it is working on making the robots exclusion protocol a standard, and this is likely the first change to come out of that effort. Google also released its robots.txt parser as an open source project alongside that announcement.

Why is Google changing now. Google has been looking to make this change for years, and with its push to standardize the protocol, it can now move forward. Google said it “analyzed the usage of robots.txt rules” and focused on implementations unsupported by the internet draft, such as crawl-delay, nofollow and noindex. “Since these rules were never documented by Google, naturally, their usage in relation to Googlebot is very low,” Google said. “These mistakes hurt websites’ presence in Google’s search results in ways we don’t think webmasters intended.”

Why we care. The most important thing is to make sure you are not using the noindex directive in your robots.txt file. If you are, make the suggested changes above before September 1. Also check whether you are using the nofollow or crawl-delay rules in robots.txt and, if so, switch to the supported methods for those directives going forward.
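For nofollow, the supported mechanisms are the robots meta tag or the rel attribute on individual links; the snippet below is an illustrative example and example.com is a placeholder domain:

    <meta name="robots" content="nofollow">
    <a href="https://example.com/" rel="nofollow">Example link</a>

Crawl-delay has no supported robots.txt equivalent for Googlebot; at the time of this announcement, crawl rate could instead be limited through the crawl rate setting in Search Console.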


About The Author

Barry Schwartz is Search Engine Land’s News Editor and owns RustyBrick, a NY-based web consulting firm. He also runs Search Engine Roundtable, a popular search blog on SEM topics.


