Can you have an SSL cert but still have http?
-
I was under the impression that if you got an SSL cert for your site, the site would change to https. I ran this site, http://thekinigroup.com/, through an SSL checker and it said it had one... but the site is still http.
1. Why didn't it change to https? Is there an extra step that needs to be done?
2. Is there a reason someone would choose to get an SSL cert, but not have https?
Thanks,
Ruben
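(Side note for readers: "having a certificate" and "serving https" are two separate facts, which is exactly what the SSL checker result shows. A minimal Python sketch of the distinction; the helper names are illustrative, and thekinigroup.com from the question is the only real host mentioned:)

```python
import socket
import ssl

def fetch_cert_subject(host, port=443, timeout=5):
    """Connect to the host's TLS port and return the certificate subject.
    A site can hold a valid cert here while still serving plain HTTP on port 80."""
    ctx = ssl.create_default_context()
    with socket.create_connection((host, port), timeout=timeout) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            cert = tls.getpeercert()
    # cert["subject"] is a tuple of single-pair tuples; flatten it to a dict
    return dict(pair[0] for pair in cert["subject"])

# Installing a cert does not rewrite any URLs: the server has to be
# configured to redirect. This is the decision such a redirect rule encodes.
def should_redirect_to_https(url):
    return url.startswith("http://")
```

So the "extra step" in question 1 is a server-side redirect (or updated links), not anything the certificate does on its own.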
-
Absolutely! Yeah I'm just not sold on...
A) The amount of work it would take to transition each site
B) The risk of something going wrong
C) The actual ranking benefit it would produce
D) The value of it for our users
And for that reason... I'm out! lol
-
Oh yeah, I have seen that before. Okay, that makes sense. Thanks so much!
- Ruben
-
Hi Bryan,
It seems like a lot of knowledgeable people are not jumping on the SSL bandwagon just yet, so you're in good company. I appreciate the help.
Thanks,
Ruben
-
Hey Ruben!
We have several eCommerce sites that only have the SSL on pages that "need" to be secured, like the checkout pages, etc. I'm not totally buying into all the SEO value of HTTPS just yet, and honestly doubt I ever will.
IMHO I don't feel that all pages of a site "NEED" to be secured. To me it just doesn't make sense and we have plenty of sites that are ranking #1 for competitive terms that only use the SSL on checkout. Hope this helps!!!
-
They could have the SSL just for transactions. It was very common a few years ago (and many sites still do this) for a site to be served over http: and then switch over to https: once you enter the transaction funnel (such as a purchase).
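The funnel-switching pattern described above can be sketched as a small routing rule. This is a minimal illustration, not any real site's configuration; the path prefixes are made-up examples:

```python
# Paths that should be served over TLS; everything else stays on plain http.
# These prefixes are hypothetical examples of a "transaction funnel".
SECURE_PREFIXES = ("/checkout", "/cart/payment", "/account/login")

def target_scheme(path):
    """Return the scheme a request for this path should be served under."""
    return "https" if path.startswith(SECURE_PREFIXES) else "http"

def redirect_if_needed(scheme, host, path):
    """Return a redirect URL when the request arrived on the wrong scheme,
    otherwise None (serve the page as-is)."""
    want = target_scheme(path)
    if scheme != want:
        return f"{want}://{host}{path}"
    return None
```

In practice the same logic usually lives in the web server config (e.g. rewrite rules) rather than application code, but the decision being made is the same.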