Have you ever needed to prevent Google from indexing a particular URL on your web site and displaying it in their search engine results pages (SERPs)? If you manage web sites long enough, a day will likely come when you need to know how to do this.

The three methods most commonly used to prevent the indexing of a URL by Google are as follows:

Using the rel=”nofollow” attribute on all anchor elements used to link to the page to prevent the links from being followed by the crawler.
Using a disallow directive in the site’s robots.txt file to prevent the page from being crawled and indexed.
Using the meta robots tag with the content=”noindex” attribute to prevent the page from being indexed.

While the differences between the three methods appear subtle at first glance, their effectiveness can vary dramatically depending on which one you choose.

Using rel=”nofollow” to prevent Google indexing

Many inexperienced webmasters attempt to prevent Google from indexing a particular URL by using the rel=”nofollow” attribute on HTML anchor elements. They add the attribute to every anchor element on their site used to link to that URL.

Adding a rel=”nofollow” attribute to a link prevents Google’s crawler from following the link which, in turn, prevents them from discovering, crawling, and indexing the target page. While this method may work as a short-term solution, it is not a viable long-term one.
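As a quick sketch, a nofollowed link looks like this (the URL and anchor text here are hypothetical examples):

```html
<!-- The rel="nofollow" attribute asks crawlers not to follow this link -->
<a href="https://example.com/private-page.html" rel="nofollow">Private page</a>
```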

The flaw with this method is that it assumes all inbound links to the URL will include a rel=”nofollow” attribute. The webmaster, however, has no way to prevent other web sites from linking to the URL with a followed link. So the chance that the URL will eventually get crawled and indexed using this method is quite high.

Using robots.txt to prevent Google indexing

Another common method used to prevent the indexing of a URL by Google is to use the robots.txt file. A disallow directive can be added to the robots.txt file for the URL in question. Google’s crawler will honor the directive, which will prevent the page from being crawled and indexed. In some cases, however, the URL can still appear in the SERPs.
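A minimal sketch of such a directive, assuming a hypothetical page path, placed in a robots.txt file at the root of the site:

```
# robots.txt — served from the site root, e.g. https://example.com/robots.txt
User-agent: *
Disallow: /private-page.html
```

The `User-agent: *` line applies the rule to all crawlers; a rule targeting only Google would use `User-agent: Googlebot` instead.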

Sometimes Google will display a URL in their SERPs even though they have never indexed the contents of that page. If enough web sites link to the URL, Google can often infer the topic of the page from the anchor text of those inbound links.
As a result, they will show the URL in the SERPs for related searches. So while using a disallow directive in the robots.txt file will prevent Google from crawling and indexing a URL, it does not guarantee that the URL will never appear in the SERPs.

Using the meta robots tag to prevent Google indexing

If you need to prevent Google from indexing a URL while also preventing that URL from being displayed in the SERPs, the most effective method is to use a meta robots tag with a content=”noindex” attribute within the head element of the web page. Of course, for Google to actually see this meta robots tag they need to first be able to discover and crawl the page, so do not block the URL with robots.txt. When Google crawls the page and discovers the meta robots noindex tag, they will flag the URL so that it will never be shown in the SERPs. This is the most effective way to prevent Google from indexing a URL and displaying it in their search results.
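As a sketch, the tag sits in the page’s head element (the surrounding markup here is illustrative):

```html
<head>
  <title>Private page</title>
  <!-- Asks crawlers not to index this page or show it in search results -->
  <meta name="robots" content="noindex">
</head>
```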

