How Tabbed Content Might Be Hurting Your Search Rankings


Putting content behind tabs is a common way to keep a website's body content clean and concise, giving the user the ability to show or hide content with a single click.

But is tabbed content a good thing for your search engine optimization efforts?

Here are a few insights into how tabbed content might be hurting your search engine rankings.

Why Tabbed Content Can Hurt Your Rankings

As you know, search engine crawlers have had a hard time reading JavaScript over the years.

Since 2014, Google has been working to better render and understand JavaScript, which has become essential in modern website design. However, its ability to do so is still far from perfect, as JavaScript is a complicated yet beautiful thing.

We don’t know exactly what Google can and can’t read with JavaScript, so the best thing you can do at this point is to make sure that your JS files are readable and not disallowed in your robots.txt.
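As a minimal sketch of that advice (the paths below are placeholders, not a recommendation for any particular site structure), a robots.txt that keeps your script assets crawlable might look like this:

User-agent: *
# Blanket rules such as "Disallow: /js/" prevent Googlebot from rendering
# your scripts at all, so avoid disallowing script directories.
Disallow: /private/
# If scripts happen to live under a disallowed path, explicitly re-allow them:
Allow: /private/js/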

When a crawler reads your JavaScript, it will either be able to understand it or not. If it isn't understood, any content generated by that script won't be rendered or indexed, meaning your well-structured content may be of no value to your search aspirations.

A good way to see if your content is being read is to use the Fetch as Google function within Google Search Console, which displays both the page as Googlebot renders it and the page as a visitor sees it.

So how does this relate to tabbed content?

Well, tabbed content is typically created with JavaScript acting on tab containers such as div.tabs. While search engines can usually read this markup, crawlers don't have the interactive ability a human has and won't click on a different tab. This is simply because the action that displays the tabbed content isn't a standard hyperlink, which is what crawlers are designed to follow.

Here's an example of a standard piece of tabbed content – notice how it doesn't contain a standard hyperlink:

<button class="tablinks" onclick="openTab(event, 'tabOne')" id="defaultOpen">Tab Title</button>
 
<script>
// Simulate a click so the default tab's content is shown on page load.
document.getElementById("defaultOpen").click();
</script>
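The openTab handler itself isn't shown above; a typical implementation (sketched here for illustration, assuming hypothetical "tabcontent" panels whose ids match the tab names) reveals a panel only in response to a click event, with no crawlable link anywhere in the chain:

<script>
function openTab(evt, tabName) {
  // Hide every tab panel on the page.
  var panels = document.getElementsByClassName("tabcontent");
  for (var i = 0; i < panels.length; i++) {
    panels[i].style.display = "none";
  }
  // Reveal only the panel whose id matches the clicked tab.
  document.getElementById(tabName).style.display = "block";
  // Mark the clicked button as active (purely cosmetic).
  evt.currentTarget.className += " active";
}
</script>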

Googlebot will be able to read your first tab because this is static on your page, but your other tabs may well be ignored.

If you have five tabs with 200 words in each, only the default tab is rendered statically, so you're losing out on the other 800 words on your page.

This is bad news for your webpage because it significantly lowers your content quality, not to mention losing those keyword-rich and relevant pieces of content.

What You Can Do to Fix This Issue

Because tabbed content depends on the kind of code showcased above, the most reliable fix is to remove tabbed content altogether. Choose a well-structured, well-designed page that enables Googlebot to effectively crawl your website, indexing every piece of content you have to offer.
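As a minimal sketch of the alternative (the headings and copy here are placeholders), the same content can be served as plain, always-visible HTML sections that require no JavaScript to render:

<!-- Every section is static HTML, so crawlers see all of the content -->
<h2>Tab Title One</h2>
<p>First block of keyword-rich content, visible without any clicks.</p>

<h2>Tab Title Two</h2>
<p>Second block of content, equally visible to Googlebot and visitors.</p>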

We ran tests over many months to test this theory. All pages in this example were optimized with keyword-rich content but did not receive any external links during the course of the test. We simply created the pages, noted which pages were not ranking well, and made one simple page edit: the removal of tabbed content.

From the table above, you can clearly see the effect that removing tabbed content had on search engine rankings. Pages that previously struggled to get onto Page 1 are now on Page 1 for their target keywords (and have remained there). Keywords that were already on Page 1 have gained positions, pushing for the top slot.

Keyword 8, for example, wasn't gaining any position. Once the tabbed content was removed, the page dropped slightly; over time, however, it climbed from Page 3 to Page 1 within a matter of months.

Keyword 2 gradually started to pick up position, then suddenly jumped to Page 1, which suggests the page was recrawled around that point. Once Googlebot noticed the rich content that was no longer hidden, it deemed the page worthy of Page 1 status.

Arguments Against This Theory

When establishing an SEO rule such as this, it is essential to consider other factors. Are there any other reasons why these pages might have improved?

Below is a short list of other possible factors that may have affected these search engine results:

  • Page age: As webpages get older, trust grows, which can improve position. The pages showcased in this research didn't contain any information indicating the date they were created.
  • Natural organic external links: No external links were built to these pages while we were monitoring the research.
  • Algorithm changes: The only algorithm changes that could have helped here are the ongoing content quality updates, which would benefit pages that surface more (and more relevant) content.
  • Page creation: The pages weren't gaining any position in the three months prior to the tabbed content being removed.
  • Existing on-page optimization: When the pages were created, standard SEO was applied, including meta optimization, content creation, header optimization and image optimization.

Will Google Ever Be Able To Read Tabbed Content?

We don't know if JavaScript-based code will ever be fully readable by search engine bots, so any future-proofing of this kind of web development is purely speculative.

For now, the research shows us that, typically, standard JavaScript isn't read, so SEO professionals should keep a keen eye out for any JavaScript that may be harming a website's ability to rank.


Image Credits
Featured Image: royguisinger/Pixabay

In-post Images: Screenshots by Cai Simpson. Taken August 2017.


