Switching from HTTP to HTTPS and Google Webmaster Tools
-
Hi,
I've recently moved one of my sites, www.thegoldregister.co.uk, to HTTPS. I'm using WordPress and have put a permanent 301 redirect in the .htaccess file to force HTTPS for all pages. I've also updated the settings in Google Analytics to HTTPS for the original site. All seems to be working well.
Regarding Google Webmaster Tools, what needs to be done? I'm very confused by the Google documentation on this subject around HTTPS. Does all my crawl and indexing data from the HTTP site still stand, and will it be inherited by the HTTPS version because of the redirects in place? I'm really worried I will lose all of this indexing data. I looked at the "change of address" option in the Webmaster settings, but this seems to refer to changing the actual domain name rather than the protocol, which I haven't changed at all.
I've also tried adding the HTTPS version to the console, but the HTTPS version is showing a severe warning: "Is robots.txt blocking some important pages?". I don't understand this error, as it's the same file as on the HTTP site, generated by All in One SEO Pack for WordPress (see below at bottom). The warning is against line 5, saying it will be ignored. What I don't understand is why I don't get this error in the Webmaster console for the HTTP version, which uses the same file.
Any help and advice would be much appreciated.
Kind regards
Steve
User-agent: *
Disallow: /wp-admin/
Disallow: /wp-includes/
Disallow: /xmlrpc.php
Crawl-delay: 10
-
You have a few more things to do:
-
Change the redirect between the HTTP and HTTPS sites from 302 to 301, as in the sketch below.
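A minimal .htaccess sketch, assuming Apache with mod_rewrite enabled (the hostname is taken from the question above):
# Force a permanent (301) redirect from HTTP to HTTPS for every URL
RewriteEngine On
RewriteCond %{HTTPS} off
RewriteRule ^(.*)$ https://www.thegoldregister.co.uk/$1 [L,R=301]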
-
You need to verify the HTTPS site in Search Console too, and then do a "change of address". Change of address can also be used when you switch protocols.
-
You need to change your pages so that canonicals, assets, and images all point to HTTPS URLs; see the example after this item. Internal links should also point only to HTTPS pages. I checked 2-3 pages of your site and they're still pointing to HTTP, which gives bots the wrong signal.
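For illustration, a canonical tag on an HTTPS page should reference the HTTPS URL itself (the path here is only a placeholder, not a real page on the site):
<link rel="canonical" href="https://www.thegoldregister.co.uk/example-page/" />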
-
Set up an HSTS header. This tells browsers (and bots) not to visit the HTTP site again for the max-age period, which here is two years (63072000 seconds). Put this in .htaccess:
Header always set Strict-Transport-Security "max-age=63072000; includeSubdomains; preload"
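A slightly fuller sketch, assuming Apache with mod_headers enabled; note that browsers only honour HSTS when the header arrives over an HTTPS response, so it should be set on the HTTPS site:
<IfModule mod_headers.c>
    # Instruct clients to use HTTPS only for the next two years,
    # including subdomains; "preload" permits inclusion in browser preload lists
    Header always set Strict-Transport-Security "max-age=63072000; includeSubdomains; preload"
</IfModule>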
-
About robots.txt: I think it's much better if you allow crawling of everything. Here is an example from my own site:
User-agent: *
Disallow:
Sitemap: http://peter.nikolow.me/sitemap_index.xml
As you can see, I allow bots to crawl everything within the WordPress folders.
Currently you have made only half the move to HTTPS, and this sends the wrong signals to bots because the site hasn't been moved properly. Fix everything above to avoid wasting crawl budget.
-
Related Questions
-
Google not indexing images on CDN
My URL is: https://bit.ly/2hWAApQ. We have set up a CDN on our own domain: https://bit.ly/2KspW3C. We have a main XML sitemap: https://bit.ly/2rd2jEb, and https://bit.ly/2JMu7GB is one of the sub-sitemaps with images listed within. The image sitemap uses the CDN URLs. We verified the CDN subdomain in GWT. The robots.txt does not restrict any of the photos: https://bit.ly/2FAWJjk. Yet GWT still reports that none of our images on the CDN are indexed. I've followed all the steps and still none of the images are being indexed. My problem seems similar to this ticket, https://bit.ly/2FzUnBl, but different because we don't have a separate image sitemap and instead have listed image URLs within the sitemaps themselves. Can anyone help please? I will promptly respond to any queries. Thanks
Technical SEO | TNZ, Deepinder
-
HTTP to HTTPS - Copy Disavow?
If the switch is made from HTTP to HTTPS (with 301 redirects from HTTP to HTTPS), should the disavow file be copied over in GWT so it is also uploaded against the HTTPS version as well as the HTTP version?
Technical SEO | twitime
-
Google Update Frequency
Hi, I recently found a large number of duplicate pages on our site that we didn't know existed (our third-party review provider was creating a separate page for each product whether it was reviewed or not; the ones not reviewed are almost identical, so they have been noindexed). Question: how long do you typically have to wait for Google to pick this up on our site? Is it a normal crawl, or do we need to wait for the next Panda review (if there is such a thing)? Thanks much.
Technical SEO | trophycentraltrophiesandawards
-
How to correct a Google canonical issue?
So when I initially launched my website, I had an issue where I didn't properly set my canonical tags and all my pages got crawled. Now, looking at the search engine results, I see a number of the pages that were meant to be canonicalized to the correct page showing up in the results. What is the best way to correct this issue with Google? Also, I noticed that while I was initially ranking well for the main pages, those results have now disappeared entirely, and deeper in the rankings I am finding the pages that were meant to be canonicalized. Please help.
Technical SEO | jackaveli
-
Tags showing up in Google
Yesterday a user pointed out to me that tags were being indexed in Google search results and that this was not a good idea. I went into my Yoast settings and checked "nofollow, index" in my Taxonomies, but when checking the source code for nofollow, I found nothing. So instead, I went into the robots.txt and disallowed /tag/. Is that OK, or is that a bad idea? The site is The Tech Block for anyone interested in looking.
Technical SEO | ttb
-
What can I do if Google Webmaster Tools doesn't recognize the robots.txt file?
I'm working on a recently hacked site for a client, and in trying to identify how exactly the hack is running I need to use the Fetch as Googlebot feature in GWT. I'd love to use this, but it thinks the robots.txt is blocking its access, when the only thing in the robots.txt file is a link to the sitemap. Under the Blocked URLs section of GWT it shows that the robots.txt was last downloaded yesterday, but that's incorrect information. Is there a way to force Google to look again?
Technical SEO | DotCar
-
Does Google pass link juice a page receives if the URL parameter specifies content and has the Crawl setting in Webmaster Tools set to NO?
The page in question receives a lot of quality traffic but is only relevant to a small percentage of my users. I want to keep the link juice received from this page, but I do not want it to appear in the SERPs.
Technical SEO | surveygizmo
-
What's the best way to deal with an entire existing site moving from HTTP to HTTPS?
I have a client that just switched their entire site from standard unsecure (HTTP) to secure (HTTPS) because of over-zealous compliance issues around protecting personal information in the health care realm. They currently have the server set up to 302 redirect from the HTTP version of a URL to the HTTPS version. My first inclination was to have them simply update that to a 301 and be done with it, but I'd prefer not to have to 301 every URL on the site. I know that putting a rel="canonical" tag on every page that refers to the HTTP version of the URL is a best practice (http://www.google.com/support/webmasters/bin/answer.py?hl=en&answer=139394), but should I leave the 302 redirects or update them to 301s? Something seems off to me about the search engines visiting an HTTP page, getting 301 redirected to an HTTPS page, and then being told by the canonical tag that it's actually the URL they were just redirected from.
Technical SEO | JasonCooper