HTTPS pages still in the SERPs
-
Hi all,
My problem is the following: our self-developed CMS produces https versions of our "normal" web pages, which means duplicate content.
Our IT department put a <meta name="robots" content="noindex,nofollow"> tag on the https pages about six weeks ago.
I check the number of indexed pages once a week and still see a lot of these https pages in the Google index. I know that I may hit a different data center and that these numbers aren't 100% reliable, but still... sometimes the number of indexed https pages even goes up.
Any ideas/suggestions? Wait longer? Or take the time and remove them via Webmaster Tools?
Another question: for a nice query, one https page ranks No. 1. If I kick that page out of the index, do you think the http page will take over the No. 1 position, or will the ranking be lost? (It sends some nice traffic :-))...
Thanks in advance
-
Hi Irving,
Yes, you are right. The https login page is the "problem": other pages I visit afterwards stay on https, as all the links on those pages are https links. So you could browse all the pages on the domain over https if you visited the login page first.
I spoke to our IT department about this problem and they told me it would take time to reprogram our CMS. My boss then told me to find another, cheaper solution - so I came up with the noindex,nofollow.
So, do you see another solution that doesn't require asking our IT department again? They are always very busy and have almost no time for anything.
-
Hi Malcolm,
Thanks for the help. Before we put the noindex,nofollow on these pages, I thought about using rel=canonical.
To be honest, I did not choose rel=canonical because I think the noindex,nofollow is a stronger signal for Google, while rel=canonical is more of a hint, which Google does not always follow... but sure, I could be wrong!
You are saying that the noindex could end up worse. The https pages only contain links to other https pages; think of them as "normal" pages with the same content, link structure, etc. Every URL, internal and external, is just https.
So I thought the noindex,nofollow would not hurt the http pages, because they cannot be reached from the https ones - what do you think?
-
Is there a reason you're supporting both http and https versions of every page? If not, 301 redirect each URL to either its http or https version. I'd leave only pages that need to be secure on https, e.g. purchase pages. Non-secure pages are generally a better user experience in terms of load time, since the browser can reuse cached files from previous pages and non-encrypted pages are more lightweight.
If you're out to support both for those secure users who like https everywhere, I'd go with Malcolm's solution and rel=canonical to the version you'd like to have indexed rather than using noindex,nofollow.
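As a sketch of the redirect approach, assuming an Apache server with mod_rewrite (the /login and /checkout paths below are placeholders for whatever actually has to stay secure on your site), an .htaccess rule could look like this:

```apache
# 301 every https request back to http, except the secure areas.
# Adjust the excluded paths to the pages that must remain https.
RewriteEngine On
RewriteCond %{HTTPS} on
RewriteCond %{REQUEST_URI} !^/(login|checkout) [NC]
RewriteRule ^(.*)$ http://%{HTTP_HOST}/$1 [R=301,L]
```

A 301 also passes link equity from the https URLs to the http ones, which noindex does not.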
-
Do you have absolute links on your site that are keeping visitors on https?
For example, if you go to a secure login page and then click a homepage navigation link on that secure https page, does the link take you back to http or keep you on https?
That is usually the cause of this problem, so you should look into it. I would not manually request removal of the pages in WMT; I would just fix the problem and let Google update the index itself.
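To illustrate what to look for in the templates (example.com is a placeholder, not the actual site):

```html
<!-- An absolute https link keeps the visitor (and Googlebot) on https: -->
<a href="https://www.example.com/">Home</a>

<!-- Linking non-secure pages back to http explicitly avoids that: -->
<a href="http://www.example.com/">Home</a>

<!-- Note: a root-relative link inherits the current protocol, so it
     would also stay on https once the visitor has hit the login page. -->
<a href="/">Home</a>
```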
-
Have you tried canonicalising to the http version?
Using a noindex,nofollow rule could end up being worse, as you are telling Google not to index the pages or follow their links, and this will apply to both the http and https versions.
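For reference, the canonical approach would mean serving this tag on both versions of each page, pointing at the http URL (example.com is a placeholder):

```html
<!-- On http://www.example.com/page AND https://www.example.com/page: -->
<link rel="canonical" href="http://www.example.com/page">
```

That way the https copies stay crawlable and consolidate to the http URL, instead of the robots meta tag dropping the page entirely.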