Why do my HTTPS pages get indexed while noindexed?
-
I have some tag pages on one of my sites that I noindexed with a meta robots tag. This worked for the HTTP version, which the pages are canonicalized to, but now the https:// version is being indexed.
The HTTPS version is both noindexed and has a canonical pointing to the HTTP version, but these pages still show up! I even have WordPress set up to redirect all https:// requests to http://, yet for some reason these pages are STILL showing in the SERPs. Any experience or advice would be greatly appreciated.
Example page: https://www.michaelpadway.com/tag/insurance-coverage/
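For reference, the head of that tag page presumably carries both directives at once, along these lines (a sketch; the live markup may differ):

```html
<!-- Sketch of the HTTPS tag page's <head>, carrying both signals at once -->
<head>
  <!-- Asks search engines not to index this URL -->
  <meta name="robots" content="noindex, follow">
  <!-- Points search engines at the HTTP version as the preferred URL -->
  <link rel="canonical" href="http://www.michaelpadway.com/tag/insurance-coverage/">
</head>
```

Worth noting: a noindex combined with a cross-URL canonical sends somewhat contradictory signals - one says "don't index me", the other says "treat that other URL as the real me" - which may be part of why Google handles such pages inconsistently.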
Thanks all!
-
That is true, but I also have them 301'd to the HTTP version and canonicalized! That is pretty much every possible signal to tell Google those pages shouldn't be indexed.
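For reference, the kind of WordPress HTTPS-to-HTTP redirect described is commonly done in .htaccess along these lines (a sketch assuming Apache with mod_rewrite, not this site's actual config):

```apache
# Sketch: 301-redirect every HTTPS request to its HTTP counterpart
# (assumes Apache with mod_rewrite enabled)
RewriteEngine On
RewriteCond %{HTTPS} on
RewriteRule ^(.*)$ http://%{HTTP_HOST}/$1 [R=301,L]
```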
I suppose we can submit the URLs; unfortunately, there are a LOT of tag pages.
Thanks for the advice Dana!
-
Hi Spencer,
I am an in-house SEO for a fairly large e-commerce site (4,000 SKUs) that has the exact same problem. As I am sure you are aware, the meta robots noindex tag is only a suggestion to Googlebot. It is something the bot can choose to ignore, and it frequently does.
I would suggest submitting the individual URLs you want removed from Google via the "Remove URLs" tool in Google Webmaster Tools. It's not instantaneous, but it does work.
I hope that helps. I know it's frustrating. We have tons of content indexed that we'd rather wasn't. It takes time, patience, and intelligent work to get the job done.
Dana
Related Questions
-
Internal search pages (and faceted navigation) solutions for 2018! Canonical or meta robots "noindex,follow"?
There seems to be conflicting information on how best to handle internal search results pages. To recap: they are problematic because these pages generally result in lots of query parameters being appended to the URL string for every kind of search, while the title, meta description, and general framework of the page remain the same, which Moz Pro Site Crawl flags as duplicate meta descriptions/H1s, etc. The general advice these days is NOT to disallow these pages in robots.txt anymore, because there is still value in their being crawled for all the links that appear on the page. But in order to handle the duplicate issues, the advice splits into two camps on what to do: 1. Add a meta robots tag with "noindex,follow" to the page
Intermediate & Advanced SEO | | SWEMII
This means the page will not be indexed with all its myriad queries and parameters, and so takes care of any duplicate meta/markup issues, but any other links from the page can still be crawled and indexed = better crawling and indexing of the site; however, you lose any value the page itself might bring.
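In practice, option 1 comes down to a single line in the page's head (a generic sketch):

```html
<!-- Option 1: keep this search results page out of the index,
     but let crawlers follow the result links on it -->
<meta name="robots" content="noindex, follow">
```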
This is the advice Yoast recommends in 2017: https://yoast.com/blocking-your-sites-search-results/ - they are adamant that Google just doesn't like or want to serve this kind of page anyway... 2. Just add a canonical link tag - this will ensure that the search results page is still indexed as well.
All the different query-string URLs, and the array of results they serve, are 'canonicalised' as the same page.
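In markup terms, every parameterised search URL would declare the same canonical (a sketch with hypothetical example.com URLs):

```html
<!-- Option 2: on e.g. /search?q=widgets&page=3, point all variants
     at one canonical URL (hypothetical URLs) -->
<link rel="canonical" href="https://www.example.com/search">
```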
However, this seems a bit duplicitous, as the results in the page body could all be very different. Also, all the paginated results pages would be 'canonicalised' to the main search page, which Google states is not a correct implementation of the canonical tag:
https://webmasters.googleblog.com/2013/04/5-common-mistakes-with-relcanonical.html This picks up on an older discussion from 2012:
https://mza.bundledseo.com/community/q/internal-search-rel-canonical-vs-noindex-vs-robots-txt
There, the advice leaned towards using canonicals, because the asker was seeing a percentage of inbound traffic landing on these search result pages - but I wonder if that is still the case? As that discussion is now 6 years old, I'm just wondering if there is a newer approach, or how others have chosen to handle internal search. I think a lot of the same issues occur with faceted navigation, as discussed here in 2017:
https://mza.bundledseo.com/blog/large-site-seo-basics-faceted-navigation
-
Canonical & meta robots NOINDEX used on the same page?
I have a real estate company, www.company.com, with approximately 400 agents. When an agent gets hired, we allow them to pick a URL, which we then register and manage. For example: www.AGENT1.com. We then take this agent domain and 301 redirect it to a subdomain of our main site. For example:
Intermediate & Advanced SEO | | EasyStreet
Agent1.com 301's to agent1.company.com. We have each page on the agent subdomain canonicalized back to the corresponding page on www.company.com.
For example: agent1.company.com canonicalizes to www.company.com. What happened is that Google indexed many URLs on the subdomains, and it seemed like Google ignored the canonical in many cases. Although these URLs were being crawled and indexed by Google, I never noticed any of them rank in the results. My theory is that Google crawled the subdomain first, indexed the page, and then later crawled the main URL. At that point in time, the two pages actually looked quite different from one another, so Google did not recognize/honor the canonical. For example:
Agent1.company.com/category1 gets crawled on day 1
Company.com/category1 gets crawled 5 days later. The content (recently listed properties for sale) on these category pages changes every day. If Google crawled both the subdomain and the main domain pages on the same day, the content would look identical. If the URLs are crawled on different days, the content will not match. We had some major issues (duplicate content and site speed) on our www.company.com site that needed immediate attention. We knew we had an issue with the agent subdomains and decided to block crawling of the subdomains in the robots.txt file until we got the main site "fixed". We have seen a small decrease in organic traffic from Google to our main site since blocking the crawling of the subdomains, whereas with Bing our traffic has dropped almost 80%. After a couple of months, we have now got our main site mostly "fixed", and I want to figure out how to handle the subdomains in order to regain the lost organic traffic. My theory is that these subdomains have some link juice that is basically being wasted with the robots.txt block in place. Here is my question:
If we put a meta robots NOINDEX on all pages of the subdomains and leave the canonical (to the corresponding page of the company site) in place on each of those pages, will link juice flow to the canonical version? Basically, I want the link juice from the subdomains to pass to our main site, but I do not want the pages competing for a spot in the search results with our main site. Another thought I had was to place the noindex tag only on the category pages (the ones that seem to change every day) and leave it off the product pages (property detail pages that rarely ever change). Thank you in advance for any insight.
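For clarity, the combination being proposed for each subdomain page would look something like this (a sketch using the question's placeholder URLs):

```html
<!-- Sketch of a subdomain page's <head> with both tags in place -->
<head>
  <!-- Keep the subdomain copy out of the search results -->
  <meta name="robots" content="noindex, follow">
  <!-- Point signals at the corresponding main-site page -->
  <link rel="canonical" href="http://www.company.com/category1">
</head>
```
-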
Possible to Improve Domain Authority By Improving Content on Low Page Rank Pages?
My site's domain authority is only 23. The home page has a page authority of 32. My site consists of about 400 pages. The topic of the site is commercial real estate (I am a real estate broker). A number of the sites we compete against have a domain authority of 30-40. Would our overall domain authority improve if we re-wrote the content for the several hundred pages that have the lowest page authority (say 12-15)? Is the overall domain authority derived from an average of the page authority of each page on a domain? Alternatively, could we increase domain authority by setting the pages with the lowest page authority to "noindex"? By the way, our domain is www.nyc-officespace-leader.com. Thanks, Alan
Intermediate & Advanced SEO | | Kingalan1
-
Google de-indexed a page on my site
I have a site which is around 9 months old. For most search terms we rank fine (including top-3 rankings for competitive terms). Recently, one of our pages has been fluctuating wildly in the rankings and has now disappeared altogether for over a week. As a test I added a similar page to one of my other sites and it ranks fine. I've checked Webmaster Tools and there is nothing of note there. I'm not really sure what to do at this stage. Any advice would be much appreciated!
Intermediate & Advanced SEO | | deelo555
-
Is it possible to get a list of pages indexed in Google?
Is there a tool that will give me a list of pages on my site that are indexed in Google?
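The quickest rough check is Google's site: operator (example.com as a placeholder):

```
site:example.com
site:example.com/blog/
```

The counts it reports are approximate and the results are capped, so for large sites this is a sample rather than a complete list.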
Intermediate & Advanced SEO | | rise1
-
Number of Indexed Pages Is Continuously Going Down
I am working on an online retail store. Initially, Google had indexed 10K+ pages of my website. When I checked the number of indexed pages a week ago, it was 8K+. Today, the number of indexed pages is 7,680. I can't understand why this is happening or how I can fix it. I want as many pages of my website indexed as possible.
Intermediate & Advanced SEO | | CommercePundit
-
Getting 260,000 pages re-indexed?
Hey there guys, I was recently hired to do SEO for a big forum: to move the site to a new domain and to get it back to its old rankings after the move. This all went quite well, except for the fact that we lost about a third of our traffic. Although I expected some traffic drop, this is quite a lot, and I'm wondering what caused it. The big keywords are still pulling the same traffic, but I suspect that a lot of the small threads on the forums have been de-indexed. Now, with a site with 260,000 threads, do I just take my loss and focus on new keywords? Or is there something I can do to get all these threads re-indexed? Thanks!
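One common lever for nudging large-scale re-crawling is submitting XML sitemaps that enumerate the thread URLs (a sketch with placeholder URLs; sitemap files are capped at 50,000 URLs each, so a forum this size needs several files plus a sitemap index):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!-- Sketch: one <url> entry per forum thread (placeholder URL shown) -->
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url>
    <loc>https://forum.example.com/threads/12345</loc>
    <lastmod>2013-01-15</lastmod>
  </url>
</urlset>
```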
Intermediate & Advanced SEO | | StefanJDorresteijn
-
How long till pages drop out of the index
In your experience how long does it normally take for 301-redirected pages to drop out of Google's index?
Intermediate & Advanced SEO | | bjalc2011