Home Page Deindexed Only at Google after Recovering from Hack Attack
-
Hello, I'm facing a strange issue. My WordPress blog, hghscience[dot]com, was hacked. When I checked, I found that index.php had been changed and was showing a page with a hacked message, and an index.html file had been added to the cPanel account. All pages were showing the same message. When I found it, I replaced index.php with the default WordPress index.php file and deleted index.html. I could not find any other file that looked suspicious.
The site started working fine and was indexed again, but the cached version was still the hacked page. I used Webmaster Tools to fetch and render it as Googlebot and submitted it for indexing. After that, I noticed the home page got deindexed by Google; all other pages are indexing as before. The site was hacked around 30th July and I fixed it on 1st Aug. Since then the home page is not getting indexed. I have fetched and submitted it multiple times via Google Webmaster Tools, but no luck so far.
One more thing I noticed: when I use info:mysite.com on Google from India, it shows some other hacked site (www.whatsmyreferer.com/), but the same info:mysite.com searched from the US shows a different hacked site (sigaretamogilev.by). However, when I search "mysite.com", my home page appears in Google search, but when I check its cached URL it shows the hacked sites mentioned above.
As far as I can tell, I have checked all SEO plugins and the home page code, and can't find anything that would keep the home page from being indexed.
PS: Webmaster Tools has shown no warnings for a penalty or malware.
I also noticed that I had disallowed the index.php file via robots.txt earlier, but I have now removed that rule. (Screenshots attached.)
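To double-check for other infected files beyond index.php, a quick signature scan of the docroot can help. This is only a sketch, demoed on a throwaway directory; the obfuscation signatures are common ones seen in hacked WordPress files, not an exhaustive list:

```shell
# Demo on a throwaway directory: plant one "infected" file and one clean file,
# then scan for PHP obfuscation signatures commonly left behind by hacks.
tmp=$(mktemp -d)
printf '<?php eval(base64_decode("aGk=")); ?>' > "$tmp/footer.php"
printf '<?php echo "clean"; ?>' > "$tmp/index.php"

# In a real cleanup, point this at the site docroot instead of $tmp.
hits=$(grep -rlE --include='*.php' 'eval\(base64_decode|gzinflate\(|str_rot13\(' "$tmp")
echo "suspicious: $hits"
rm -rf "$tmp"
```

A clean scan doesn't prove the site is clean, but a hit is worth investigating immediately.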
-
.htaccess file has nothing but:
# BEGIN WordPress
<IfModule mod_rewrite.c>
RewriteEngine On
RewriteBase /
RewriteRule ^index\.php$ - [L]
RewriteCond %{REQUEST_FILENAME} !-f
RewriteCond %{REQUEST_FILENAME} !-d
RewriteRule . /index.php [L]
</IfModule>
# END WordPress
Installed Plugins
Yoast SEO, Google XML Sitemaps, Akismet, Udinra All Image Sitemap, Social Share Bar (Digg Digg Alternative), Jetpack by WordPress.com, AuthorH Review.
Apart from Yoast, it seems nothing could block the site from being indexed, and the Yoast settings are fine; I have only disabled indexing of tags, subpages, and author archives.
I guess the problem is something else.
-
Hi Ankit,
Though I have checked the pages you're serving to bots, could you please have a look at your .htaccess file once? Does it contain something like:
RewriteEngine On
RewriteCond %{HTTP_USER_AGENT} (google|yahoo) [OR]
RewriteCond %{HTTP_REFERER} (google|aol|yahoo)
Do you have a copy of your code in GitHub, Bitbucket, or another source control tool? If yes, please scan the last few commits thoroughly.
You can make a list of recently installed plugins, remove them one by one, and submit your home page URL to GWT to fetch a fresh copy each time. I'm not sure what the issue is here; let's do some trial and error to dig deeper.
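For reference, a cloaking injection in .htaccess typically looks something like the following. This is illustrative only; the exact bot names vary and the redirect target is made up:

```apache
RewriteEngine On
# Send search-engine crawlers and search-referred visitors to a spam URL,
# while direct visitors see the normal site: the classic cloaking pattern.
RewriteCond %{HTTP_USER_AGENT} (googlebot|bingbot|slurp) [NC,OR]
RewriteCond %{HTTP_REFERER} (google|bing|yahoo)\. [NC]
RewriteRule ^(.*)$ http://spam-site.example/ [R=302,L]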
-
Hey Alan,
Do let me know if you find some solution or identify the problem.
-
That's the thing: I'm not able to find any good information to take the next step on this. But I'm still checking random things in hope.
-
The DomainTools domain report shows no further info that could be helpful, leaving me at a complete loss as to what else to check.
-
More info.
Because Nitin was able to run a ping and traceroute without problems, I went to DomainTools.com, the world's leading resource for forensic digital investigative research. I use it whenever I do investigations for my expert witness work.
When I ran the domain there, it had a screen capture of the home page from June. So I submitted a refresh, and it came back unable to provide a screenshot of the home page.
While not a smoking gun, it further erodes my confidence that the domain is actually functioning properly in the hosting environment, as I originally suspected it might not be.
I will run a deeper test to see if I can get more information, but I wanted to post this update because I believe it's relevant.
-
Well, this is probably one of the most interesting issues an SEO can come across. Google is showing a different cached version in different countries. That's strange to me too. Is that a usual thing?
-
Nitin
Thanks for doing that. Now I'm stumped; I've never had Pingdom fail with both ping and traceroute before. And I now wonder if it's a non-issue, or somehow part of the confused mess that Ankit referenced.
-
That's right, it's showing different cached versions in different countries. I just checked for the US here; screenshot attached.
-
I don't think the disallowed index.php was the issue. I took the suggestion and removed it, but many sites disallow index.php via robots.txt to avoid the duplicate content issue between site.com and site.com/index.php.
Here is an example: http://www.shoutmeloud.com/robots.txt
Still, I did that about 10-12 days ago, fetched and submitted to the index, and also put in a rendering request.
Attaching a current screenshot of the last rendering request.
I think it's some other issue. What's your view on info:site.com showing other hacked sites? How is this happening, and why do the sites keep changing? It's different in India and different in the US.
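As an aside, if the only reason for the robots.txt Disallow was the site.com vs. site.com/index.php duplicate, a 301 redirect is generally safer: a disallowed URL can linger in the index with a stale (here, hacked) cached copy because Googlebot can no longer recrawl it. A hedged sketch of that redirect, placed above the standard WordPress block in .htaccess:

```apache
RewriteEngine On
# Only redirect external requests for /index.php; checking THE_REQUEST avoids
# looping on WordPress's own internal rewrite to index.php.
RewriteCond %{THE_REQUEST} ^[A-Z]+\ /index\.php[\ ?] [NC]
RewriteRule ^index\.php$ / [R=301,L]
```

That way the duplicate consolidates to the root URL and the page stays crawlable.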
-
Ping and traceroute worked for me when I tried from my terminal (screenshot attached).
Well, I agree the problem is actually bigger. If you look at its cached version on Google, it was last cached on 16th Aug, i.e. after the index.php/index.html issue was fixed by the admin (another screenshot attached).
I tried to view the page as Googlebot as well and couldn't find the issue (I wanted to check it for cloaking too).
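One way to make that cloaking check repeatable is to fetch the page twice, once with a browser User-Agent and once as Googlebot, and diff the two responses. The fetch step is omitted here; this minimal sketch (the helper names and the 0.9 threshold are my own assumptions) just shows the comparison logic on canned HTML:

```python
import difflib

def similarity(html_a: str, html_b: str) -> float:
    """Return a 0..1 ratio of how similar two HTML responses are."""
    return difflib.SequenceMatcher(None, html_a, html_b).ratio()

def looks_cloaked(html_browser: str, html_googlebot: str, threshold: float = 0.9) -> bool:
    """Flag the page if the bot response diverges sharply from the browser one."""
    return similarity(html_browser, html_googlebot) < threshold

# Canned example: the same clean page twice, vs. a clean page against an
# injected spam redirect (spam.example is a made-up target).
clean = "<html><body><h1>My Blog</h1><p>Normal article text.</p></body></html>"
spam = "<html><body><script>location='http://spam.example/'</script></body></html>"

print(looks_cloaked(clean, clean))  # → False
print(looks_cloaked(clean, spam))   # → True
```

In practice you'd also compare against what "Fetch as Google" returns, since some hacks key on Googlebot's IP range rather than the User-Agent string.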
-
UPDATE TO MY ORIGINAL COMMENT
I initially found a problem running ping and traceroute tests via Pingdom.com: both returned an "invalid host name" error, something I had not previously seen for ping and traceroute simultaneously.
Nitin (see his comment below) ran a similar test locally and found both to be okay, though he has other thoughts.
I just want to clarify that my original finding may not be key to this issue, though I'd still like to understand why my test came back that way.
-
You said you removed index.php from robots.txt. When did that happen? After removal, it usually takes some time for the page to get back into the index (the crawler needs to recrawl the website accordingly).
My advice is to resubmit your robots.txt and updated sitemap.xml to the Webmaster console and wait for the next crawl; this should fix it.
Hope this helps!
-
Just sent the screenshot. Nothing has helped so far. It's quite strange that info:domain.com is now showing some other hacked URL. Screenshot attached.
-
It was quite strange for me as well. I've just attached a screenshot after fetching one more time.
One more thing I noticed: info:mysite.com is now showing some other hacked domain. Not sure how or why this is happening.
Sorry for the delay in replying; I was not getting email updates, so I thought no one had answered my question.
-
Hi Ankit! Did Nitin's suggestions help at all? And are you able to share the screenshot he asked for?
-
Check the following; maybe it'll help you resolve the issue:
https://mza.bundledseo.com/community/q/de-indexed-homepage-in-google-very-confusing
https://mza.bundledseo.com/community/q/site-de-indexed-except-for-homepage
-
That's really strange. Could you please share a screenshot of what happens when you try to fetch it as Google in GWT?