
Testing Ideas From the Google API Leaks — Whiteboard Friday

Will Critchlow

The author's views are entirely their own (excluding the unlikely event of hypnosis) and may not always reflect the views of Moz.


How do we know what is and what isn’t a Google ranking factor? In this episode, Will reveals four testing ideas relating to the Google API leaks to help answer that question.

Click on the whiteboard image above to open a high-resolution version!

Hi, everyone. My name is Will Critchlow. I'm the founder and CEO of a company called SearchPilot, and we specialize in SEO testing on really large websites. So, we power the SEO testing program at a bunch of websites you've probably heard of: Marks & Spencer, Petco, Skyscanner, and the like.

That's why I'm here today to talk to you about some testing ideas that have come from the leaked Google API docs that all the search geeks, including myself, have been poring over for the last few weeks.

What is “testing”?


So, first off, I should say what I'm talking about in the context of this video today is specifically testing rather than trying things.

So the distinction I'm drawing there is a test is a scientific experiment where we're looking to get a quantification of the uplift and the impact with some kind of statistical confidence, maybe some error bars, those kinds of things. That's the kind of testing that we focus on.
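To make that concrete, here's a minimal sketch of what that kind of quantification can look like. This is my illustration, not SearchPilot's methodology (real SEO tests model time series, seasonality, and site-wide trends); it just compares organic sessions for a hypothetical variant group of pages against a control group and bootstraps error bars around the relative uplift.

```python
# Minimal sketch: quantify uplift with error bars by comparing organic
# sessions for a variant page group against a control group and
# bootstrapping a confidence interval on the relative uplift.
# Illustrative only -- real SEO tests account for time series effects.
import numpy as np

rng = np.random.default_rng(42)

def uplift_ci(control, variant, n_boot=10_000, alpha=0.05):
    """Bootstrap a (1 - alpha) confidence interval for relative uplift."""
    control, variant = np.asarray(control), np.asarray(variant)
    boots = []
    for _ in range(n_boot):
        c = rng.choice(control, size=control.size, replace=True).mean()
        v = rng.choice(variant, size=variant.size, replace=True).mean()
        boots.append((v - c) / c)
    lo, hi = np.percentile(boots, [100 * alpha / 2, 100 * (1 - alpha / 2)])
    point = (variant.mean() - control.mean()) / control.mean()
    return point, lo, hi

# Hypothetical daily organic sessions for each group over 60 days
control = rng.poisson(lam=100, size=60)
variant = rng.poisson(lam=106, size=60)
point, lo, hi = uplift_ci(control, variant)
print(f"uplift: {point:+.1%} (95% CI: {lo:+.1%} to {hi:+.1%})")
```

If the interval excludes zero, you'd call the result significant at that level; if it straddles zero, the change didn't produce a measurable uplift.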

There are other ideas that you might want to try, but you're nonetheless not going to be able to control them in the same way; you're not going to get the same quantification of impact. So those are ideas around greater brand visibility or diversifying your external link building efforts. They may well be a good idea. You're just not going to be able to quantify them in a scientific test in the same way. So that's fine. For these kinds of tests, we're mainly talking about the document-level factors.

So, a lot of this is contained in the per-doc data and associated sections of the leak. And personally, having been around the industry for a shockingly long time now, what I found is that we're mostly not getting completely fresh, new ideas that we might want to test.

A lot of this is going from 90% confident to maybe 95% confident that a particular thing is or isn't a ranking factor. In particular, at the top end, the mere existence of something in these docs doesn't 100% guarantee that it's a ranking factor. It might be experimental, deprecated, or never used in the first place. So we can't be sure, which is why we might want to do some of this testing.
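As a back-of-the-envelope illustration of that kind of confidence shift, here's a tiny Bayes'-rule calculation. The likelihood numbers are invented for the example, not taken from the leak:

```python
# Illustrative Bayes update: how seeing a feature named in the leaked docs
# might nudge a prior belief that it's a live ranking factor. The
# likelihoods below are assumptions made up for this example.
def posterior(prior, p_seen_if_factor=0.9, p_seen_if_not=0.5):
    """P(ranking factor | appears in docs) via Bayes' rule."""
    num = p_seen_if_factor * prior
    return num / (num + p_seen_if_not * (1 - prior))

print(f"{posterior(0.90):.0%}")  # ~94%, roughly the 90% -> 95% shift
```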

What we’ve already tested


We've been running tests for years now, so some factors that show up in the docs are ones we've already tested, and I thought I'd call out a few of those. I saw a few people expressing surprise at quite how much emphasis these docs place on titles, page titles.

And that wasn't much of a surprise to me, because title tests have been among our most consistently impactful tests. I would say that, of all the factors we test, title tags are the most likely to have an impact in one direction or the other. Unfortunately, it's very hard to tell without testing whether that impact will be positive or negative.

Writing good title tags turns out to be very hard. But nonetheless, they're really impactful, and I think that's because they're one of the few features of a page that can affect your ranking position, the keywords you rank for, and the click-through rate from the search results to your website: they're a strong ranking factor, and they also show up in the search snippet on the results page, so they influence click-through rate as well.
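For a flavour of the mechanics, here's a minimal sketch of a title-tag test setup: similar pages are split deterministically into control and variant buckets, and only the variant half gets the candidate title. The hashing scheme and the title pattern here are hypothetical, not SearchPilot's product.

```python
# Hedged sketch: split similar pages into stable control/variant buckets
# so a title-tag change can be served to one half and measured.
import hashlib

def bucket(url: str) -> str:
    """Assign a URL to 'control' or 'variant' by stable hash."""
    digest = hashlib.sha256(url.encode("utf-8")).hexdigest()
    return "variant" if int(digest, 16) % 2 == 0 else "control"

def title_for(url: str, base_title: str) -> str:
    """Serve the candidate title pattern only to variant pages."""
    if bucket(url) == "variant":
        return f"{base_title} | Buy Online"  # hypothetical new pattern
    return base_title

print(title_for("https://example.com/red-shoes", "Red Shoes"))
```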

We've run tons of those tests, and you can find them on our website. Incidentally, if you're interested in these test results, head to searchpilot.com: we publish a new test result every two weeks, and they're all listed there. There's also an email list if you want to get them in your inbox first. After titles, the next thing I want to call out is freshness. This is the idea that showing how recently a document was published or updated could make a big difference in how well it ranks.

And this is, for me, the most surprising of all these things: how powerful an impact it's turned out to have, so much so that some of these tests have felt like they're driving unsustainable uplifts. They almost seem manipulable, and the resulting gain is probably not something you want to exploit strategically, because it might harm the website in the long run.

Nonetheless, it's a powerful factor. Interstitials are one that came with an official announcement. I can't remember when it was, but Google announced that intrusive interstitials would harm your performance in search. Sure enough, we had the opportunity to test that and found it to be true, unlike some official statements. So these are things we've had a chance to test in the past.

New test ideas


But you're probably here more for the new ideas. These are hypotheses that came from reading the docs, ideas for things we could test that may or may not turn out to be impactful. I'm slightly cheating with the first one, which is more freshness. I already called out freshness as a factor, but one of the most tactical things in the docs I've read so far is just how many different kinds of specific freshness features show up, and they are very specific.

There are mentions of repeated dates on a page. There are mentions of dates in the copy on the page, as opposed to just in the metadata or the structured data, and there are mentions of dates in internal anchors as well. My suspicion is that this is quite focused on super-fresh content that is designed to be up to the minute, up to the second. Think live-blogging a political debate or something like that, where you can imagine all of these features would be naturally present.
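As a rough illustration, here's a small audit sketch that counts those three signals on a page before and after a change. The regexes and signal names are my assumptions, not definitions from the leak:

```python
# Hedged sketch: count the freshness signals mentioned above -- dates in
# visible copy, in structured data, and in internal anchor text.
# The date pattern and JSON-LD fields are illustrative assumptions.
import re

DATE = re.compile(r"\b(20\d{2}-\d{2}-\d{2}|[A-Z][a-z]+ \d{1,2}, 20\d{2})\b")

def freshness_signals(html: str) -> dict:
    anchors = re.findall(r"<a\b[^>]*>(.*?)</a>", html, re.I | re.S)
    meta = re.findall(r'"(?:datePublished|dateModified)"\s*:\s*"([^"]+)"', html)
    visible = re.sub(r"<[^>]+>", " ", html)  # crude tag strip for a sketch
    return {
        "dates_in_copy": len(DATE.findall(visible)),
        "dates_in_anchor_text": sum(bool(DATE.search(a)) for a in anchors),
        "dates_in_structured_data": len(meta),
    }

html = '<p>Updated June 7, 2024</p><a href="/live">June 7, 2024 update</a>'
print(freshness_signals(html))
# {'dates_in_copy': 2, 'dates_in_anchor_text': 1, 'dates_in_structured_data': 0}
```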

But nonetheless, I think there's possibly some opportunity to test this across large-scale sites and ask, "Does this, in fact, move the needle?" The reference to a commercial score is an interesting one, and definitely one I want to dig into a little deeper. We have one customer in particular that has been building some hypotheses around this.

I don't have the results yet. But they're a commercial website operating as an affiliate, so they don't have shopping cart and checkout functionality on their site. One of the hypotheses we've had is that they actually need to look more commercial to do well in the highly commercial query space they're targeting. That's not going to be universally true.

There are certainly going to be some queries where being more commercial is a bad thing. But I think it's quite easy to believe there'll be some where being highly commercial is a good thing, because that's the user intent, the searcher intent. So that's one we're definitely going to dig into a little more. Authorship is probably the more controversial end of things. This was a really big deal in the discourse back when Google+ was a thing.

A big part of the search impact of that, in theory, was the tying of the person who wrote something to the content they put out. There's been a lot of debate, discussion, and unanswerable questions around whether it in fact remains a factor, and whether it was ever that strong a factor. There are references to it in the docs, numerous kinds of references, and I think that's going to drive some new hypotheses for specific tests we can run.

As I said at the top, the existence of something in the docs doesn't guarantee that it's used. It certainly doesn't guarantee it's a strong factor. But if it's impactful, then we should be able to measure it. So watch this space for more on that. There's been quite a lot here where I've been saying, "Hmm, we guessed some of these things, or we were at 90% confidence already."

Surprises from the Google leaks


Generally, there's a lack of surprise in much of what's in the docs, which is, in my opinion, quite a good thing: it shows the industry was broadly on top of things. But there were a couple of things I found super surprising, and I thought I'd end with those, even though they're either site-wide factors or things completely outside our control, so they're not testable in the scientific way I was talking about earlier.

The first one is the existence of the big green button. We've known basically forever that there's a big red button: you can get a manual penalty and be kicked out of the search results just because humans at Google decide your website shouldn't be there, all the way back to BMW way back in the day, for those of us who've been around the industry far too long.

What I certainly wasn't confident existed was the equivalent big green button: the ability to say that this website should not be impacted by such-and-such an algorithm update, or such-and-such a re-ranking, or whatever penalty or feature might be going on. Again, we don't get to 100% confidence, but it does seem from the docs that there is some concept of a big green button.

Now, some people were confident about that already. I was probably less confident, and I'm now a bit more confident that it's a thing. The other one is this explicit, almost hardcoded pairing of queries and destinations for official results. The example given in the docs, showing its age slightly, is that the query "Britney Spears" in English in the United States should return britneyspears.com as the number one result, because it's the official website.

I probably would have guessed that wasn't a thing. I think I would have assumed that the number one ranking in those situations was just emergent from the regular ranking algorithm. But it seems as though, for at least some queries, there is some official pairing of query and official website.

Now, as I said, these things are not testable. There's no way you're going to get somebody to press the green button for you, and there's no way of getting your website paired to an interesting query you might be going after. But nonetheless, these were some of the most academically interesting surprises I found in the docs. So I hope that's been a useful whistle-stop tour of some of my ideas for testing things from the Google API docs leak.

My name is Will Critchlow. It's been a pleasure.

Transcription by Speechpad.

Will Critchlow

Will Critchlow is CEO of SearchPilot, a company that spun out of his previous business Distilled, which was acquired by Brainlabs in early 2020. SearchPilot is an enterprise SEO A/B testing platform that proves the value of SEO for the world’s biggest websites by empowering them to make agile changes and test their impact.
