Google’s Featured Snippets are amazingly powerful. We’re seeing more snippets than ever before for more search queries. You need them.
We know this thanks to some brilliant articles and presentations from some super smart people in the industry, including Glenn Gabe (see: The Power of Google Featured Snippets in 2016 and a Google Featured Snippet Case Study – also, an extra big shout out to Glenn for helping me answer some important questions I had when writing this article!), Peter Meyers (see: Ranking #0: SEO for Answers), and Rob Bucci (see: How to Earn More Featured Snippets).
But even after reading everything I could find about Featured Snippets, one huge question remained unanswered: How the heck do you get these damn things?
All of this leads us to today’s experiment: How exactly does Google’s algorithm pick which snippet to feature?
Obviously, Google isn’t manually picking them. It’s an algorithm.
So what makes Google’s Featured Snippet algorithm tick?
If two competing domains both have great, snipp-able results, how does Google decide to pick one over the other? Take this example:
Why does WordStream (in Position 4) get the Featured Snippet instead of Moz (Positions 1 and 2) or Search Engine Watch (Position 3) on a search for [what is link building]?
What we know about Featured Snippets
Before we dive into the unknown, let’s briefly review what we know.
We know snippets, like unicorns, come in all shapes and sizes. Your content must provide the answer in the “right” format, which will vary depending on the specifications in Google’s algorithm. Snippets can be:
- Paragraphs.
- Lists (ordered or unordered).
- Tables.
- Knowledge Graph.
We also know that any website can earn a Featured Snippet. Large brands and sites have no advantage over smaller brands and sites.
Finally, we also know that winning a Featured Snippet lets you enjoy some spoils of war, including:
- More website traffic.
- Greater visibility in Google’s SERPs.
- Greater trust and credibility.
So that’s what you need to know about Featured Snippets. Now let’s dive into the unknown.
Featured Snippets pose a few problems that really complicate the analysis.
For one, snippets are finicky. You can do a search right now and see the snippet. But sometimes you can conduct the same exact search an hour later and the snippet won’t be there.
For another, we’re all working with limited data sets. We’re limited to analyzing just the snippets we have.
Finally, snippets impact your organic CTRs. Some snippets will increase the CTR to your site – for instance, if you’re ranked in fourth position but you have the featured snippet. But other times a snippet can actually decrease your CTR because the searcher already got their answer – no need to click through.
Google isn’t much help either. Gabe asked Google webmaster trends analyst Gary Illyes and got this frustratingly funny reply:
Theory #1: Snippets aren’t featured based on organic search ranking factors alone
This one is relatively easy to prove.
According to Gabe’s data, ranking position played some sort of role in whether you get Featured Snippets. Every single snippet was taken from a page that was good enough to rank in the top 10 organic positions.
If you look at Bucci’s data, however, he discovered that Google will take snippets from content that ranks on Page 2 of Google.
I found something a bit more incredible when I pulled a report of snippets – 981 in total – for my own website. Take a look:
- About 70 percent of the time, Google pulled snippets from pages in positions 1 to 3.
- About 30 percent of the time, the snippet’s source was a page ranking anywhere from position 4 to as deep as position 71 (wow!).
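You can run the same breakdown on your own snippet report. Here’s a minimal sketch; the sample positions below are made-up placeholder data standing in for an export of (query, average position) pairs from your own tools:

```python
from collections import Counter

# Placeholder data: one average organic position per snippet you own.
# In practice, pull these from your own snippet report/export.
snippet_positions = [1, 2, 3, 4, 10, 71, 2, 1, 3, 5]

def bucket(pos):
    # Split positions into "top 3" vs. everything deeper on the SERP.
    return "positions 1-3" if pos <= 3 else "positions 4+"

counts = Counter(bucket(p) for p in snippet_positions)
total = len(snippet_positions)

for label, n in sorted(counts.items()):
    print(f"{label}: {n / total:.0%}")
```

With a real export of 981 snippets, the same two-bucket split reproduces the 70/30 breakdown above.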
If Google’s algorithm were relying just on traditional search ranking factors (e.g., keywords and links), then Google would simply pick the first “snipp-able” content fragment from the highest-ranking piece of content every time. Google would never have to go to Page 2 or further (Page 8!) for snippets when there are other perfectly formatted snippets to choose from that rank higher.
Clearly, this isn’t happening. Something else is at play. But what?
Theory #2: Having your content in a snipp-able format matters (but isn’t the whole picture)
Is it all about being the most clear, concise, and thorough answer? We know Google is looking for something “snipp-able.”
For the best shot at getting a Featured Snippet, your content should be between 40 and 50 words, according to SEMrush’s analysis.
Without a doubt, format matters to Google’s algorithm. Your content needs to have the right format if you’re ever going to be eligible to be snipped.
But again, we’re back to the same question: how does Google pick between different pages that all have snipp-able content?
Theory #3: Engagement metrics seem to play a role in snippet selection
To figure out what was happening, I looked at the outliers. (Usually, the best way to crack an algorithm is to look at the unusual edge cases.)
Let’s look at one example: [how to get more Bing Rewards Points].
This page shows up as a snippet for all sorts of queries related to “getting bing rewards points,” yet the source of the snip is from position 10. What’s crazy is that our page ranks behind Bing’s official site and all sorts of other video tutorials and community forums discussing the topic.
Why the heck is this happening?
Well, when I look at this page in Search Console, I notice it gets an unusually high CTR of 21.43 percent, despite a ridiculously low average position of 10.
This CTR is 10x higher than what you’d expect to see at this position.
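One way to quantify “higher than expected” is to divide the observed CTR by a baseline CTR curve for each organic position. A minimal sketch follows; the baseline numbers are illustrative placeholders, not figures from any real CTR study, so substitute your own benchmark curve:

```python
# Illustrative baseline CTRs by organic position (placeholder values,
# NOT from a real industry study -- plug in your own benchmark curve).
baseline_ctr = {1: 0.30, 2: 0.15, 3: 0.10, 4: 0.07, 5: 0.05,
                6: 0.04, 7: 0.03, 8: 0.03, 9: 0.02, 10: 0.02}

def ctr_multiple(observed_ctr, position):
    """How many times higher the observed CTR is vs. the baseline."""
    return observed_ctr / baseline_ctr[position]

# The Bing Rewards page: 21.43% CTR at an average position of 10.
print(f"{ctr_multiple(0.2143, 10):.1f}x the baseline")
```

Any page scoring well above 1.0x on this ratio is a good candidate to investigate as a snippet outlier.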
The other thing I noticed was that the page had remarkably great engagement metrics. The time on site (a reasonable proxy for dwell time) was an amazing 14 minutes and 30 seconds.
This time on site is considerably higher than the site average – by nearly 3x!
Note: This is just one simple example. I did this for more than 50 pages (unfortunately I was limited by data here because I was looking specifically for pages that rank poorly, yet generate snippets).
What I found was that pages snipped from low positions on the SERP had dramatically higher time on page compared to the site average.
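The comparison behind that finding can be sketched as a simple ratio. The page names and the site average below are made up for illustration; in practice you’d pull both from Google Analytics:

```python
# Hypothetical data: time on page (seconds) for pages that earned
# snippets despite ranking poorly, vs. an assumed site-wide average.
site_avg_seconds = 5 * 60  # assume a 5-minute site average

snipped_pages = {
    "bing-rewards-guide": 14 * 60 + 30,  # 14:30, the example above
    "another-low-rank-page": 11 * 60,    # made-up second example
}

for page, seconds in snipped_pages.items():
    ratio = seconds / site_avg_seconds
    print(f"{page}: {ratio:.1f}x the site average")
```

With a 5-minute site average, the 14:30 example works out to nearly 3x, which matches the pattern described above.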
Basically, what I think might be going on is something like this:
Supporting fact #1: Marissa Mayer said it worked this way
In addition to this data, there are a couple more reasons why I think engagement metrics may be playing a key role in Google’s Featured Snippet algorithm. These examples indicate that Google has long-held beliefs around good engagement metrics reflecting quality content.
Does the past hold some important secrets to our current plot? Let’s see.
First, we’ll head back to 2007 for an interview with Marissa Mayer discussing the OneBox and how features like news, maps, and products would get promoted above the organic results into the OneBox, based on click-through rate:
“We hold them to a very high click-through rate expectation and if they don’t meet that click-through rate, the OneBox gets turned off on that particular query. We have an automated system that looks at click-through rates per OneBox presentation per query. So it might be that news is performing really well on Bush today but it’s not performing very well on another term, it ultimately gets turned off due to lack of click-through rates. We are authorizing it in a way that’s scalable and does a pretty good job enforcing relevance.”
Supporting fact #2: Google used the same algo in paid search a few years back
OK, now let’s go back to 2008 – back when Google still had AdWords ads on the right rail. (Unfortunately, with the death of the right-side ad rail, all ads appear above the organic search results now – a moment of silence for the right-side rail).
Google would promote three ads to appear above the organic search results. How did Google decide which paid search ads to feature above the organic search results?
Here’s what Google revealed in an AdWords blog post, “Improvements to Ads Quality”:
“To appear above the search results, ads must meet a certain quality threshold. In the past, if the ad with the highest Ad Rank did not meet the quality threshold, we may not have shown any ads above the search results. With this update, we’ll allow an ad that meets the quality threshold to appear above the search results even if it has to jump over other ads to do so. For instance, suppose the ad in position 1 on the right side of the page doesn’t have a high enough Quality Score to appear above the search results, but the ad in position 2 does. It’s now possible for the number 2 ad to jump over the number 1 ad and appear above the search results. This change ensures that quality plays an even more important role in determining the ads that show in those prominent positions.”
What’s important to know here is how incredibly important CTR is in the Quality Score formula. By far, CTR has the biggest impact on Quality Score.
So here we have spokespeople from both the organic search side and Google’s own ad system telling us that CTR can play a vital role in helping Google ensure that a piece of content or an ad meets a high enough quality threshold to qualify to appear in the very prominent and valuable space above the organic search results.
That’s why I strongly believe that Featured Snippets work very much the same way – with CTR and engagement metrics being the key element.
What does it all mean?
Featured Snippets give us yet another reason to focus on engagement rates. We’ve spent much of this year talking about the ways engagement rates impact your search performance. Any one of those alone is good reason to focus on improving your CTR. But wait, there’s more: I believe engagement rates also impact the selection of Featured Snippets.
A call to arms
One thing that’s hard about doing research and analysis on Featured Snippets is that we’re limited to the data we have. You need to have lots of snippets and access to all the CTR data (only the individual webmasters have this). You can’t just crawl a site to discover their engagement metrics.
Why don’t we team up here and try to crack this nut together?
Have you won Featured Snippets? What are your engagement rates like for your featured snippets – from the Search Console for CTR and Google Analytics for time on site? Do you see any patterns? Please share your insights with us in the comments!