Why use noindex, follow vs rel next/prev
-
Look at what www.shutterstock.com/cat-26p3-Abstract.html
does with its search results page 3 for 'Abstract' (the same applies to pages 2-N in the paginated series):
<meta name="robots" content="NOINDEX, FOLLOW">
Why is this a better alternative than using next/prev, per Google's official statement on pagination? http://support.google.com/webmasters/bin/answer.py?hl=en&answer=1663744
That page doesn't even mention this as an option. Any ideas? Does this improve the odds of the first page in the paginated series ranking for the target term? There can't be a 'view all' page because there are simply too many items.
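As a sketch of the pattern described above (an assumption about the template logic, not Shutterstock's actual code), the per-page robots directive could be generated like this:

```python
def robots_directive(page_number):
    """Return the robots meta content for a page in a paginated series.

    Illustrative assumption: page 1 stays indexable, while pages 2..N get
    noindex, follow so link equity still flows through to the items but
    the thin listing pages stay out of the index.
    """
    if page_number <= 1:
        return "index, follow"
    return "noindex, follow"


def meta_robots_tag(page_number):
    """Render the corresponding <meta> tag for the page's <head>."""
    return '<meta name="robots" content="%s">' % robots_directive(page_number)
```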
- Jeff
-
Hmmm - good thought. I wonder if Google is giving out deliberately bad advice for dealing with paginated sets, in that they never mention noindex, follow as a viable alternative to next/prev.
If each paginated page is all unique assets (photos), why would it be dupe?
J
-
I don't think they're "gaming" Googlebot. I think they're trying to help the bots properly crawl the site and index the relevant content without creating hundreds of thousands of empty pages that would simply dilute their index and lower the overall value of the site in the search engine's eyes. In other words, they're trying to keep the Panda hungry and not feed its appetite for low-quality content.
This is why they are noindexing the pages: not to game the system, but to actually play by the system's rules.
-
Thanks Mark - if you disable JavaScript or impersonate Googlebot using a browser extension, then click on one of the main categories in the homepage's bottom nav, you arrive here:
http://www.shutterstock.com/cat-5-Education.html
Click next and you get a URL like this: http://www.shutterstock.com/cat-5p2-Education.html
which is noindex, follow.
If I arrive at the site without impersonating Googlebot, page 2 is:
http://www.shutterstock.com/cat-5-Education.html#page=2
with a canonical back to http://www.shutterstock.com/cat-5-Education.html
So it seems they are literally trying to game Google - is there any evidence this works?
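For what it's worth, the reason the #page=2 URL can canonicalize to page 1 is that the fragment never reaches the server, so a crawler sees one resource. A small illustration using Python's stdlib (the URLs are just the ones quoted above):

```python
from urllib.parse import urldefrag


def canonical_for(url):
    """Strip the #page=N fragment to recover the canonical URL.

    Because fragments are client-side only, the paginated Ajax views all
    resolve to the same fragment-free resource, which is what the
    canonical tag points back to.
    """
    base, _fragment = urldefrag(url)
    return base
```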
-
It seems like they noindexed that page because it's part of an antiquated version of the site navigation/structure, or something generated by the CMS that they don't want to promote. Not sure how you got there, but when you reach the primary version of a category and click through to the next page, the items shown change via Ajax and the URL stays the same, apart from a fragment indicating that this is the second set of items.
With the URL staying the same on their primary navigation path, I don't think rel prev/next would be relevant. As for these other pages - probably created by the CMS but not easily accessible - they've noindexed them. That's my best guess.
-
There's more than one way to skin a cat. While rel next/prev is an option, you could also dump it all out on one page, or you could noindex your search pages and let your sitemap do the work of notifying Google of your pages. I don't know that it's better (I would guess not, but that's just a guess), but you could do it that way and not hurt yourself.
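A minimal sketch of the sitemap half of that approach, using Python's stdlib XML tools (the item URLs are hypothetical): the search/listing pages carry noindex, and discovery of the individual items is handled by a generated sitemap instead.

```python
import xml.etree.ElementTree as ET


def build_sitemap(item_urls):
    """Build a minimal sitemap XML string listing item detail pages.

    The paginated search results stay noindexed; this file tells the
    search engine about every item page directly.
    """
    ns = "http://www.sitemaps.org/schemas/sitemap/0.9"
    urlset = ET.Element("urlset", xmlns=ns)
    for url in item_urls:
        loc = ET.SubElement(ET.SubElement(urlset, "url"), "loc")
        loc.text = url
    return ET.tostring(urlset, encoding="unicode")
```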
Related Questions
-
Rel Sponsored on Internal Links
Hi all. Should you use rel sponsored on internal links? Here is the scenario: a company accepts money from one of their partners to place a prominent link on their home page. That link goes to an internal page on the company's website that contains information about that partner's service. If this was an external link that the partner was paying for, then you would obviously use rel="sponsored" but since this is a link that goes from awebsite.com to awebsite.com/some-page/, it seems odd to qualify that link in this way. Does this change if the link contains a "sponsored" label in the text (not in the rel qualifier)? Does this change if this link looks more like an ad (i.e. a banner image) vs. regular text (i.e. a link in a paragraph)? Thanks for any and all guidance or examples you can share!
Technical SEO | Matthew_Edgar
-
Confused about repeated occurrences of URL/essayorg/topic/ showing up as 404 errors in our site logs
Working on a WordPress website, https://thedoctorwithin.com. Scanning the site's 404 errors, I'm seeing a lot of requests for URL/essayorg/topic, coming from Bingbot as well as other spiders (Google, OpenSiteExplorer). We get at least 200 of these irrelevant requests per week. It seems each topic that follows /essayorg/ is unique, and some include typos: /dissitation/. I haven't yet verified that the spiders are who they say they are. It almost seems like there are many links 'in the wild' intended for Essay.org that are being directed towards the site I'm working on. I've considered redirecting any requests for URL/essayorg/ to our sitemap, figuring that might encourage further spidering of actual site content. Is redirecting to our sitemap XML file a good idea, or might doing so have unintended consequences? Interested in suggestions about why this might be occurring. Thank you.
Technical SEO | linkjuiced
-
Title Tag vs. H1 / H2
OK, title tag, no problem: it's the SEO juice, appears on the SERP, etc. Got it. But I'm reading up on H1 and getting conflicting bits of information:
- Only use H1 once?
- H1 is crucial for the SERP
- Use H1s for subheads
- Google almost never looks past H2 for relevance
So say I've got a blog post with three sections: do I use H1 three times (or does Google think you're playing them...)? Or do I create a "big" H1 subhead and then use H2s? Or just use all H2s because H1s are scary? 🙂 I frequently use subheads; it would seem weird to me to have one a font size bigger than another, but of course I can adjust that in settings. Thoughts? Lisa
Technical SEO | ChristianRubio
-
Excessive use of keyword?
Hey, I have an immigration website in South Africa, MigrationLawyers.co.za, and the website used to be divided into two categories:
1st part - South African Immigration
2nd part - United Kingdom Immigration
Because of that we made all the pages include the word "South Africa" in the titles, e.g.:
...ers.co.za/work-permit-south-africa
...ers.co.za/spousal-visa-south-africa
...ers.co.za/retirement-permit-south-africa
...ers.co.za/permanent-residence-south-africa
I'm sure you get the idea. We have since removed the UK part of the website and are now left with only the SA part. Now my question is: is it bad? Will Google see this as spammy, since I'm targeting "South Africa" in almost every link of the website? Should I stick to this structure for new pages, or try to avoid any more use of "South Africa"? Perhaps I can change something as it currently stands? Kind regards,
Nikita
Technical SEO | NikitaG
-
WordPress Page vs. Posts
My campaigns are telling me I have some duplicate content. I know the reason but I'm not sure how to correct it. Example site here: Bikers Blog is a "static page" referencing each actual "blog post" I write. This site is somewhat orphaned and about to be reconstituted, and I have a number of other sites with a similar problem. I'm not sure how to structure the "page" so it only shows a summary of each blog post rather than the whole post. Permalinks are set as "/%postname%/". I've posted on Wordpress.org with no answer. Since this is an SEO issue, I thought maybe someone with WP experience could chime in. Thanks, Don
Technical SEO | NicheGuy
-
Timely use of robots.txt and meta noindex
Hi, I have been checking every possible resource on content removal, but I am still unsure how to remove already-indexed content. When I use robots.txt alone, the URLs remain in the index; no crawl budget is wasted on them, but having e.g. 100,000+ completely identical login pages sitting in the omitted results still can't mean anything good. When I use meta noindex alone, I keep my index clean, but I also keep Googlebot busy crawling these no-value pages. When I use robots.txt and meta noindex together on existing content, I'm asking Google to ignore my content while at the same time blocking it from ever crawling the noindex tag. Robots.txt plus URL removal is still not a good solution either, as I have failed to remove directories this way; it seems only exact URLs can be removed like that. I need a clear solution which solves both issues (index and crawling). What I am trying now is the following: I remove these directories (one at a time, to test the theory) from the robots.txt file, and at the same time I add the meta noindex tag to all the pages within each directory. The number of indexed pages should start decreasing (while crawling of useless pages increases), and once the number of indexed pages is low or zero, I would put the directory back in robots.txt and keep the noindex on all of the pages within it. Can this work the way I imagine, or do you have a better way of doing it? Thank you in advance for all your help.
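The crawl/index conflict described in this question can be demonstrated with Python's stdlib robots.txt parser (the user agent and rules below are illustrative assumptions, not any real site's files): a URL disallowed in robots.txt can never be fetched, so a meta noindex on that page is never seen, which is exactly why the directory has to come out of robots.txt while the noindex does its work.

```python
from urllib.robotparser import RobotFileParser


def can_see_noindex(robots_txt_lines, url, agent="Googlebot"):
    """Return True if the crawler is allowed to fetch the URL at all.

    If this returns False, the page's meta noindex tag is unreachable:
    the robots.txt block prevents the very crawl that would discover it.
    """
    parser = RobotFileParser()
    parser.parse(robots_txt_lines)
    return parser.can_fetch(agent, url)
```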
Technical SEO | Dilbak
-
Trackbacks: how to use them
Hi, I am trying to learn how to use trackbacks as a way to get link exposure. Can anyone please explain the importance of them and how to use them? Would I use one by putting a link back to my site, or am I wrong on this? Any help would be great.
Technical SEO | ClaireH-184886
-
Google not using title tag for SERP?
Today I noticed that Google is not using my title tag for one of my pages. Search for "covered call search" and look at organic result 6:
Search - Covered Calls
Covered call screener filters 150000 options instantly to find the best high yield covered calls that meet your custom criteria. Free newsletter.
https://www.borntosell.com/search - Cached
Now, if you click through to that page, you see the meta title tag is: Covered Call Screener. Even the cached version shows the title tag as Covered Call Screener. I am not logged in, so I don't believe personalization has anything to do with it. Have others seen this before? It is possible that "search - covered calls" was the title tag 9 months ago (before I understood SEO); I honestly don't remember. I cleaned all my titles up at least 6 months ago. Can I force Google to re-index the page? Its content has changed a few times in the last few months, and Google crawls my site frequently according to Webmaster Tools.
Technical SEO | scanlin