Metadata is just data about data. In the world of organic search, that means title tags, meta descriptions, schema.org structured data, canonical tags, meta robots tags, and the like. And yes, metadata does have a role to play in organic search. But that’s just one aspect of search engine optimization.
It’s time to dispel the myth that SEO is mostly about metadata.
Link Authority
Backlinks are one of the most important aspects of organic rankings. Each link from one site to another confers a sense of authority or importance to the page receiving the link.
The more links a page receives from high-quality, topically relevant sites, the more authority modern search engines place on that page. The links earned by all of the pages on a site also roll up to convey the authority of the entire site.
The concept of link authority is rooted in Google’s famous PageRank algorithm, which borrowed from scholarly research the idea of using citations to determine authority. All modern search engines incorporate some form of authority analysis.
Link authority is related to metadata in that there are certain types of data — such as nofollow attributes and link title attributes — that can modify the HTML code that creates a link. But the concept of link authority itself and the act of receiving links are independent of metadata.
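For illustration only, here is what those modifiers look like in a hypothetical link (the URL and anchor text are made up). The rel="nofollow" attribute asks search engines not to pass authority through the link, and the title attribute merely adds a descriptive label:

    <!-- A plain link passes authority to the target page -->
    <a href="https://www.example.com/buying-guide/">Read our buying guide</a>

    <!-- nofollow asks engines not to pass authority; title adds a descriptive label -->
    <a href="https://www.example.com/buying-guide/" rel="nofollow" title="Bass guitar buying guide">Read our buying guide</a>

Either way, the page still has to earn the link in the first place, and no amount of metadata does that.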
Contextual Relevance
In 2013, Google released its so-called Hummingbird algorithm update to improve the speed and accuracy of its search results. A significant code change, Hummingbird made the meaning of individual words, based on the context in which they are used, a major ranking signal, rather than taking each word at face value.
For example, “bass” is a musical sound, a stringed instrument, a fish, and a brand of shoes and clothing. The introduction of Hummingbird made it easier for Google to deliver the right “bass” to the right searcher based on contextual relevance.
Content is tricky where metadata is concerned, because content can be written into code, such as a meta description, and metadata can be layered onto content, such as structured data. But the concept of contextual relevance, which is the twin of link authority in ranking importance, has more to do with the words the user sees on the web page than with metadata lurking below the surface to define the page.
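To see the distinction, compare a meta description, which is metadata filled with content, against a snippet of schema.org structured data, which is metadata describing the visible content. Both examples below are hypothetical:

    <!-- Metadata describing the page for search result snippets -->
    <meta name="description" content="Shop bass guitars from leading brands, with free shipping on orders over $50.">

    <!-- Structured data (JSON-LD) describing a product shown on the visible page -->
    <script type="application/ld+json">
    {
      "@context": "https://schema.org",
      "@type": "Product",
      "name": "Four-String Bass Guitar",
      "brand": "ExampleBrand"
    }
    </script>

Neither tag matters much if the visible words on the page don’t establish what the page is about.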
User Experience
For the last couple of years, Google has promoted user experience as an important way to improve natural search performance. The theory is that sites with better user experience will have stronger engagement and will better fulfill searchers’ requests.
Google interprets lower bounce rates and longer time on site as higher engagement, and as a sign that the page should continue to rank well in the future.
User experience merges “the services of multiple disciplines, including engineering, marketing, graphical and industrial design, and interface design” to “meet the exact needs of the customer, without fuss or bother,” according to Jakob Nielsen, usability expert and co-founder of the user experience consultancy Nielsen Norman Group. No metadata here.
301 Redirects
Arguably one of the most important aspects of migration SEO, 301 redirects harvest the link authority from your old site’s URLs and recycle it into your new site’s URLs. Redirects are typically coded using regular expressions in configuration files on the server.
The 301 redirect tells search engines that the page has moved permanently to a new URL and triggers a near-instant load of the new page for customers and search engine bots alike. These bits of code are so important that I’ve seen sites lose 60 percent of their organic search traffic overnight solely because they launched a new site with new URLs and no 301 redirects.
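As a sketch, assuming an Apache server with mod_rewrite enabled and purely hypothetical URL patterns, a rule like the following in an .htaccess file would map every URL in an old directory to its new equivalent with a 301 status:

    # Permanently redirect /old-category/anything to /new-category/anything
    RewriteEngine On
    RewriteRule ^old-category/(.*)$ https://www.example.com/new-category/$1 [R=301,L]

The exact syntax depends on your server, but the principle is the same: the redirect lives in server configuration, not in the page itself.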
And, importantly, redirects are not a form of metadata. In fact, there’s a piece of metadata called a meta refresh that sneakily acts like a 301 redirect but carries none of its SEO benefits. So in that instance, metadata actually works against redirects.
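A meta refresh is just a tag in the page’s head, shown below with a hypothetical URL. It forwards visitors, but it does not reliably pass authority the way a server-side 301 does:

    <!-- Forwards the browser after 0 seconds, but is not a true 301 redirect -->
    <meta http-equiv="refresh" content="0; url=https://www.example.com/new-page/">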
Crawlability
At the most basic level of SEO, what a crawler can access determines a site’s ability to rank. All the metadata in the world won’t help your natural search performance if the search engine’s crawlers can’t make their way through your site to the page to read that metadata.
What blocks the crawl? Lots of things: using cookies as the sole mechanism to serve content in multiple languages, pages that require client-side rendering, content reachable only by submitting forms, links formed in JavaScript without explicit anchor tags, and more.
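For example, a link built only with JavaScript gives a crawler nothing to follow, while a standard anchor tag does. The markup below is hypothetical:

    <!-- Crawlable: a real anchor tag with an href -->
    <a href="/bass-guitars/">Bass guitars</a>

    <!-- Risky: no anchor tag or href for a crawler to discover -->
    <div onclick="window.location='/bass-guitars/'">Bass guitars</div>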
While there are ways to use metadata to block crawlers from indexing your content, such as the meta robots noindex directive, managing the crawl itself has little to do with metadata.
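That directive is a single tag in the page’s head, shown here for illustration, and it only works if the crawler can reach the page to read it:

    <!-- Tells crawlers not to include this page in the index -->
    <meta name="robots" content="noindex">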