Moving from HTTP to HTTPS: image duplicate issue?
-
Hello everyone,
We have recently moved our entire website virtualsheetmusic.com from http:// to https://, and now we are facing a question about images.
Here is the deal: all webpage URLs are properly redirected to their https counterparts when they are requested via former http links. However, due to compatibility issues, all image URLs can be requested over either http or https, so both of the following URLs work without any redirect:
http://www.virtualsheetmusic.com/images/icons/ResponsiveLogo.png
https://www.virtualsheetmusic.com/images/icons/ResponsiveLogo.png
Please note though that all internal links are relative and not absolute.
So, my question is: can that be a problem from the SEO standpoint? In particular: we have thousands of images indexed on Google, mostly preview images of our digital sheet music files, and many of them rank pretty well in the image pack search results. Could this change be detrimental in some way, or does it make no difference in the eyes of Google? As I wrote above, all internal links are relative, so the image tags themselves haven't changed at all; they are just loaded in an https context.
I'll wait for your thoughts on this. Thank you in advance!
-
No problem
-
Great! Glad to know that. Thank you, Dimitrii, I appreciate your help very much!
-
Oh, I see. Yeah, there shouldn't be any problems if someone else links to your images over http. And yes, your assumption is correct.
-
Thank you, Dimitrii, for clarifying. Actually, all our webpages now load images only via https://, but since many external websites hard-link to many of our images via the regular http:// protocol, I was thinking of still allowing them to be linked the "insecure" way when requested. Do you see my point? So, to better clarify my initial question, let's say Google is spidering one of those external affiliates and finds an image tag referencing our site over plain http.
Will Google consider the image found at:
http://www.virtualsheetmusic.com/image.jpg
a duplicate of:
https://www.virtualsheetmusic.com/image.jpg
This was my original question.
In any case, I did some testing today, and I was able to permanently redirect (301) all images to https:// via .htaccess. It looks like even when an image is requested over http:// from the browser, it still shows up correctly, because the browser handles redirects for images the same way it handles them for the web page itself.
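For reference, the kind of .htaccess rule described above might look like this (a minimal sketch, assuming Apache with mod_rewrite enabled; the image-extension list is just an example, and the exact rule on the live site may differ):

```apache
RewriteEngine On

# If the request came in over plain HTTP...
RewriteCond %{HTTPS} off
# ...permanently (301) redirect image requests to the HTTPS equivalent.
RewriteRule \.(png|jpe?g|gif)$ https://%{HTTP_HOST}%{REQUEST_URI} [R=301,L]
```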
So my concern should be solved this way. But in case, for any reason, I need to be able to serve the same image over both protocols (http or https), it is my understanding that it shouldn't be an issue anyway. Is my assumption correct?
Thanks again.
-
I did a quick search, and there are lots of good articles about why images are not duplicate content: http://bfy.tw/9Qy4
-
So, the reason I recommend loading images through only one protocol is the "insecure" flag on an https connection when any resources are loaded over plain http. You might have seen that sometimes, instead of a green lock in the browser bar, it shows a yellow exclamation mark; that's one of the reasons. And it's also just cleaner if everything is loaded the same way.
Here is a link to resource about mixed content: https://developers.google.com/web/fundamentals/security/prevent-mixed-content/fixing-mixed-content
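One related option worth mentioning (my own addition, not from the article above; it assumes Apache with mod_headers enabled, and browser support should be verified) is the `upgrade-insecure-requests` CSP directive, which tells supporting browsers to fetch http:// subresources over https:// automatically:

```apache
# Ask supporting browsers to upgrade http:// subresource requests to https://
Header always set Content-Security-Policy "upgrade-insecure-requests"
```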
-
Thank you, Dimitrii, for your reply.
Well, your two statements above contradict each other, in my opinion. You see, what really concerns me is your last suggestion:
"it's better to make sure that images (and all the other resources) are available only through one protocol - http or https."
And hence my original concern. If, as you say first, there is no such thing as duplicate content for images, why should we make sure that images are available only through one protocol? Why should we worry about that, then?
Sorry for the further request for clarification. I really appreciate your help!
-
Howdy.
As far as I understand, there is no such thing as duplicate content just for images; duplicate content applies more to the page as a whole. Especially since you redirected all the links, you shouldn't have any problems: Google will simply "realize" the change.
That said, it's better to make sure that images (and all the other resources) are available only through one protocol - http or https.
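As a quick way to audit that, here is a rough sketch in Python (a simple regex scan, not a full HTML parser; the markup below is made up for illustration) that flags subresources still referenced over plain http:

```python
import re

def find_insecure_resources(html):
    """Return src/href values that reference subresources over plain
    http://, which would trigger mixed-content warnings on an HTTPS page."""
    pattern = re.compile(r'(?:src|href)\s*=\s*["\'](http://[^"\']+)["\']',
                         re.IGNORECASE)
    return pattern.findall(html)

sample = """
<img src="http://www.example.com/image.jpg">
<img src="https://www.example.com/secure.jpg">
<script src="//www.example.com/app.js"></script>
"""
print(find_insecure_resources(sample))  # ['http://www.example.com/image.jpg']
```

Note that https:// and protocol-relative (`//`) references are left alone; only plain http:// ones are reported.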
Hope this helps