SEO, in its most basic sense, relies upon one thing above all others: search engine spiders crawling and indexing your site.
But nearly every website will have pages that you don’t want included in this exploration.
In a best-case scenario, these pages are not actively doing anything to drive traffic to your site, and in a worst-case, they could be diverting traffic away from more important pages.
Luckily, Google allows webmasters to tell search engine bots which pages and content to crawl and which to ignore. There are several ways to do this, the most common being the use of a robots.txt file or the meta robots tag.
We have an excellent and in-depth explanation of the ins and outs of robots.txt, which you should definitely read.
But in high-level terms, it’s a plain text file that lives in your site’s root directory and follows the Robots Exclusion Protocol (REP).
Robots.txt provides crawlers with instructions about the site as a whole, while meta robots tags include directions for specific pages.
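For instance, a minimal robots.txt might look like this (the disallowed path and sitemap URL here are purely illustrative):

    User-agent: *
    Disallow: /admin/
    Sitemap: https://www.example.com/sitemap.xml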
Some meta robots tags you might use include index, which tells search engines to add the page to their index; noindex, which tells them not to add a page to the index or include it in search results; follow, which instructs a search engine to follow the links on a page; nofollow, which tells it not to follow links; and a whole host of others.
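For example, a page that should stay out of the index and whose links should not be followed would carry a tag like this in its <head>:

    <meta name="robots" content="noindex, nofollow">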
Both robots.txt and meta robots tags are useful tools to keep in your toolbox, but there’s also another way to instruct search engine bots to noindex or nofollow: the X-Robots-Tag.
What Is The X-Robots-Tag?
The X-Robots-Tag is another way for you to control how your webpages are crawled and indexed by spiders. As part of the HTTP header response for a URL, it controls indexing for an entire page, as well as the specific elements on that page.
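To make that concrete, here is a sketch of an HTTP response carrying the header (the date and other header values are placeholders):

    HTTP/1.1 200 OK
    Date: Tue, 25 Jan 2022 21:42:43 GMT
    Content-Type: text/html; charset=UTF-8
    X-Robots-Tag: noindex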
And whereas using meta robots tags is fairly straightforward, the X-Robots-Tag is a bit more complicated.
But this, of course, raises the question:
When Should You Use The X-Robots-Tag?
According to Google, “Any directive that can be used in a robots meta tag can also be specified as an X-Robots-Tag.”
While you can set robots.txt-related directives in the headers of an HTTP response with both the meta robots tag and the X-Robots-Tag, there are certain situations where you would want to use the X-Robots-Tag – the two most common being when:
- You want to control how your non-HTML files are being crawled and indexed.
- You want to serve directives site-wide instead of at a page level.
For example, if you want to block a specific image or video from being crawled, the HTTP response method makes this easy.
The X-Robots-Tag header is also useful because it allows you to combine multiple tags within an HTTP response, or use a comma-separated list of directives to specify them.
Perhaps you don’t want a certain page to be cached and also want it to be unavailable after a specific date. You can use a combination of the “noarchive” and “unavailable_after” directives to instruct search engine bots to follow these instructions.
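As a sketch, the combined header could look like this (the date is just a placeholder):

    X-Robots-Tag: noarchive, unavailable_after: 25 Jun 2025 15:00:00 PST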
Essentially, the power of the X-Robots-Tag is that it is much more flexible than the meta robots tag.
The advantage of using an X-Robots-Tag with HTTP responses is that it allows you to use regular expressions to apply crawl directives to non-HTML content, as well as apply parameters on a larger, global level.
To help you understand the difference between these directives, it’s helpful to categorize them by type. That is, are they crawler directives or indexer directives?
Here’s a useful cheat sheet for reference:
Crawler directives:
- Robots.txt – uses the user-agent, allow, disallow, and sitemap directives to specify where on-site search engine bots are allowed to crawl and not allowed to crawl.

Indexer directives:
- Meta robots tag – allows you to specify and prevent search engines from showing particular pages of a site in search results.
- Nofollow – allows you to specify links that should not pass on authority or PageRank.
- X-Robots-Tag – allows you to control how specified file types are indexed.
Where Do You Put The X-Robots-Tag?
Let’s say you want to block specific file types. An ideal approach would be to add the X-Robots-Tag to the Apache configuration or a .htaccess file.
The X-Robots-Tag can be added to a site’s HTTP responses in an Apache server configuration via the .htaccess file.
Real-World Examples And Uses Of The X-Robots-Tag
So that sounds great in theory, but what does it look like in the real world? Let’s take a look.
Let’s say we wanted search engines not to index .pdf file types. On Apache servers, the configuration would look something like the below (a minimal sketch, assuming the mod_headers module is enabled):
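    <Files ~ "\.pdf$">
      Header set X-Robots-Tag "noindex, nofollow"
    </Files>

This matches every URL ending in .pdf and attaches a noindex, nofollow header to its HTTP response.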
In Nginx, it would look like the below:
    location ~* \.pdf$ {
      add_header X-Robots-Tag "noindex, nofollow";
    }
Now, let’s look at a different scenario. Let’s say we want to use the X-Robots-Tag to block image files, such as .jpg, .gif, .png, etc., from being indexed. You could do this with an X-Robots-Tag that would look like the below (again a sketch for an Apache .htaccess file, assuming mod_headers):
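    <Files ~ "\.(png|jpe?g|gif)$">
      Header set X-Robots-Tag "noindex"
    </Files>

Adjust the extension list to match the image formats you actually serve.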
Please note that understanding how these directives work and the impact they have on one another is crucial.
For example, what happens if both the X-Robots-Tag and a meta robots tag are present when crawler bots discover a URL?
If that URL is blocked in robots.txt, then certain indexing and serving directives cannot be discovered and will not be followed.
If directives are to be followed, then the URLs containing them cannot be disallowed from crawling. For example, if robots.txt disallows crawling of a directory whose files also carry a noindex X-Robots-Tag, crawlers never fetch those files, never see the header, and the URLs can still end up in search results via links alone.
Check For An X-Robots-Tag
There are a few different methods that can be used to check for an X-Robots-Tag on a site.
The easiest way to check is to install a browser extension, such as the Robots Exclusion Checker, that will show you X-Robots-Tag information about the URL.
Another plugin you can use to determine whether an X-Robots-Tag is being used is the Web Developer plugin.
By clicking on the plugin in your browser and navigating to “View Response Headers,” you can see the various HTTP headers being used.
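If you’d rather skip the browser, you can also inspect a URL’s response headers from the command line with curl (the URL below is a placeholder); any X-Robots-Tag that is set will appear among them:

    # -I asks the server for the response headers only
    curl -I https://www.example.com/sample.pdf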
Another method that can be used for checking at scale, in order to pinpoint issues on websites with millions of pages, is Screaming Frog.
After running a site through Screaming Frog, you can navigate to the “X-Robots-Tag” column.
This will show you which sections of the site are using the tag, along with which specific directives.
Using X-Robots-Tags On Your Website
Understanding and controlling how search engines interact with your website is the cornerstone of search engine optimization. And the X-Robots-Tag is a powerful tool you can use to do just that.
Just be aware: It’s not without its risks. It is very easy to make a mistake and deindex your entire site.
That said, if you’re reading this piece, you’re probably not an SEO beginner. So long as you use it wisely, take your time, and check your work, you’ll find the X-Robots-Tag to be a useful addition to your arsenal.