Crawling issues in Google
-
Hi everyone,
I think I have a crawling issue with one of my sites.
It has vanished from the Google rankings. It used to rank for all the services I offer, but it hasn't since September 29th.
I have submitted reconsideration requests to Google twice, and both times they came back with the same answer:
"
We reviewed your site and found no manual actions by the web spam team that might affect your site's ranking in Google. There's no need to file a reconsideration request for your site, because any ranking issues you may be experiencing are not related to a manual action taken by the webspam team.
Of course, there may be other issues with your site that affect your site's ranking. Google's computers determine the order of our search results using a series of formulas known as algorithms. We make hundreds of changes to our search algorithms each year, and we employ more than 200 different signals when ranking pages. As our algorithms change and as the web (including your site) changes, some fluctuation in ranking can happen as we make updates to present the best results to our users.
If you've experienced a change in ranking which you suspect may be more than a simple algorithm change, there are other things you may want to investigate as possible causes, such as a major change to your site's content, content management system, or server architecture. For example, a site may not rank well if your server stops serving pages to Googlebot, or if you've changed the URLs for a large portion of your site's pages. This article has a list of other potential reasons your site may not be doing well in search.
"
What makes me suspect a crawling issue is that two weeks ago I changed my meta tags. The updated metas have been very slow to show up in the search results, and for some of my pages they never updated at all.
Do you know of any good tools to check for bad code that could slow down crawling? I really don't know where else to look. I validated the website with the W3C validator, ran Xenu Link Sleuth, and cleaned up the errors they reported, but my site is still down in the rankings.
Any ideas are appreciated.
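As a first-pass check on "slow crawling", you can measure the same "Download Time" figure that Fetch as Googlebot reports. This is only a sketch: a throwaway local test server stands in for the real site, since the point is just to show how to time a fetch.

```python
import http.server
import threading
import time
import urllib.request

# A tiny stand-in server for the demo; in practice you would point the
# timed request at your own site's URLs instead.
class Handler(http.server.BaseHTTPRequestHandler):
    def do_GET(self):
        body = b"<html><body>ok</body></html>"
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # keep the demo quiet
        pass

server = http.server.HTTPServer(("127.0.0.1", 0), Handler)
threading.Thread(target=server.serve_forever, daemon=True).start()

# Time a single fetch, the way "Download Time (in milliseconds)" does.
start = time.monotonic()
with urllib.request.urlopen(f"http://127.0.0.1:{server.server_port}/") as resp:
    status = resp.status
elapsed_ms = (time.monotonic() - start) * 1000
print(f"HTTP {status} in {elapsed_ms:.0f} ms")
server.shutdown()
```

Consistently high numbers here (hundreds of milliseconds and up) are the kind of thing that can throttle how much Googlebot crawls.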
-
I ran another fetch:
Date: Saturday, November 10, 2012 4:52:04 PM PST
Googlebot Type: Web
Download Time (in milliseconds): 94
<code>HTTP/1.1 200 OK Date: Sun, 11 Nov 2012 00:52:04 GMT</code>
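One small sanity check on logs like this: the `Date` header is in GMT while the fetch timestamp above is local PST (UTC-8), so the two should describe the same moment. A quick standard-library conversion, using the values copied from the log above, confirms they line up:

```python
from datetime import timedelta
from email.utils import parsedate_to_datetime

# Parse the Date header from the Fetch as Googlebot response above (GMT).
server_date = parsedate_to_datetime("Sun, 11 Nov 2012 00:52:04 GMT")

# PST is UTC-8, so the local fetch time should be eight hours earlier.
local = server_date - timedelta(hours=8)
print(local.strftime("%A, %B %d, %Y %I:%M:%S %p"))
# Saturday, November 10, 2012 04:52:04 PM
```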
-
Hi,
I already did that a while ago... I still think I have a JavaScript issue.
I have already tried Google Webmaster Tools and W3C validation, but neither helped.
Date: Monday, October 22, 2012 5:00:59 PM PDT
Googlebot Type: Web
Download Time (in milliseconds): 115
<code>HTTP/1.1 200 OK Date: Tue, 23 Oct 2012 00:00:59 GMT</code>
-
Hey Cary,
The first thing I would do is go into Google Webmaster Tools -> Health -> Fetch as Googlebot.
Under Fetch Status you should see "Success" if Google managed to fetch the page. Click on it and check the first few rows; you should see something like:
<code>HTTP/1.1 200 OK Date: Fri, 10 Nov 2012 00:26:17 GMT</code>
If you see a different response, post it as a reply to this question so we can investigate further with you.
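If you end up eyeballing a batch of fetch logs, a tiny helper like this (hypothetical, not part of any tool mentioned above) flags anything other than a 200:

```python
def check_fetch_status(response_head: str) -> bool:
    """Return True if the first line of a Fetch as Googlebot response
    reports a 200 OK. Sketch only: assumes the standard status-line
    format "HTTP/1.1 <code> <reason>"."""
    status_line = response_head.splitlines()[0]
    parts = status_line.split()
    return len(parts) >= 2 and parts[1] == "200"

print(check_fetch_status("HTTP/1.1 200 OK\nDate: Fri, 10 Nov 2012 00:26:17 GMT"))  # True
print(check_fetch_status("HTTP/1.1 503 Service Unavailable"))  # False
```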
Gr.,
Istvan
Related Questions
-
Canonical error from Google
Moz couldn't explain this properly and I don't understand how to fix it. Google emailed this morning saying "Alternate page with proper canonical tag." Moz also complains about the main URL and the main URL/index.html being duplicates. Of course they are; the main URL doesn't work without the index.html page. What am I missing? How can I fix this duplicate problem, which to me isn't a problem?
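One common fix for the / vs /index.html duplicate (a sketch; whether it applies depends on the server setup) is to point a rel="canonical" tag on the index page at the bare directory URL. The URL mapping itself is just string trimming:

```python
def canonical_url(url: str) -> str:
    """Collapse a /index.html URL to its directory form, which is what a
    rel="canonical" tag on the index page would point at (sketch only,
    with hypothetical URLs)."""
    suffix = "index.html"
    if url.endswith("/" + suffix):
        return url[: -len(suffix)]
    return url

print(canonical_url("http://example.com/index.html"))  # http://example.com/
print(canonical_url("http://example.com/about/"))      # http://example.com/about/
```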
Technical SEO | RVForce
-
Issues Indexing Translated Pages
I'm having trouble getting http://www.procloud.ch/ to index their German pages. The English pages are being indexed but not the German ones. Any ideas? Chris
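A common cause when one language version of a site won't index (an assumption about this site, not a confirmed diagnosis) is missing hreflang alternates: each language page should declare every language version of itself. A small sketch that builds those tags, with hypothetical URLs:

```python
def hreflang_tags(alternates: dict) -> str:
    """Build <link rel="alternate" hreflang=...> tags for a page's
    language versions. The URL paths here are illustrative guesses,
    not the site's real structure."""
    return "\n".join(
        f'<link rel="alternate" hreflang="{lang}" href="{url}" />'
        for lang, url in sorted(alternates.items())
    )

tags = hreflang_tags({
    "en": "http://www.procloud.ch/en/",
    "de": "http://www.procloud.ch/de/",
})
print(tags)
```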
Technical SEO | ninel_P
-
Google crawl rate dropped after we activated CloudFront
Hello! Previously we've been using Amazon CloudFront for our static content (JS, CSS etc.). But to reduce load on our origin servers and to give our international users a good experience, we decided to deliver a couple of our sites through CloudFront. We noticed very nice drops in page load time, but when checking Google Webmaster Tools we noticed that all CloudFront-activated sites got a huge drop in pages crawled per day (from avg ~3500 to ~150). Also, one of the sites has issues with its Google sitemaps (just marked as "Pending" in GWT), and no new or updated pages seem to appear in the Google SERPs. The rest of the sites get some updates in the Google SERPs, but very few compared to before the CloudFront activation. Is there anybody here who has experience with full site delivery through CloudFront (or other CDNs) and its effects on SEO/Google? I would be very glad for any insights or suggestions. The risk is that we will need to remove CloudFront if this continues.
Technical SEO | Ludde
-
Google Publisher status
Hi all, I just wondered what the general opinion was with regard to getting Google Publisher status for medium to large organisations. Lots of our clients write a lot of articles and publications, and it would be interesting to get some thoughts on how others view Authorship and, in particular, Publisher credentials. Thanks!
Technical SEO | davidmaxwell
-
.htaccess Redirect 301 issues
I have completely rewritten my web site, adding structure to the file directories, and then added redirect information to the .htaccess file. The following example ...
Redirect 301 /armaflex.html http://www.just-insulation.com/002-brands/armaflex.html
Returns this response in the URL bar ...
http://www.just-insulation.com/002-brands/armaflex.html?file=armaflex
I am at a loss to understand why the suffix "?file=armaflex" is added. The following code is inserted at the top of the file ...
RewriteEngine On
# Redirect html pages to the root domain
RewriteRule ^index.html$ / [NC,R,L]
# Force www. prefix in URLs and redirect non-www to www
RewriteCond %{http_host} ^just-insulation.com [NC]
RewriteRule ^(.*)$ http://www.just-insulation.com/ [R=301,NC]
Any advice would be most welcome.
Technical SEO | Cyberace
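One plausible cause (an assumption, since the full .htaccess isn't shown): the mod_alias `Redirect` line and a mod_rewrite rule elsewhere both process the request, and a rule that internally maps pages to something like `?file=...` re-attaches its query string to the redirect target. Handling the redirect in mod_rewrite instead, with a trailing `?` on the target to discard any accumulated query string, is one way to test that theory:

```apache
# Hypothetical replacement for the "Redirect 301 /armaflex.html ..." line;
# the trailing "?" on the target drops any query string picked up along the way.
RewriteRule ^armaflex\.html$ http://www.just-insulation.com/002-brands/armaflex.html? [R=301,L]
```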
Indexing Issue
Hi, I am working on www.stjohnswaydentalpractice.co.uk. Google only seems to be indexing two of the pages when I search site:www.stjohnswaydentalpractice.co.uk. I have added the site to Webmaster Tools and created a new sitemap, which shows that only two of the pages have been submitted. Can anyone shed any light on why these pages are not being indexed? Thanks, Faye
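Before anything else, it's worth ruling out a robots.txt block on the missing pages. A generic sketch with the standard library (the rules and URLs below are made up for illustration, not this site's actual file):

```python
import urllib.robotparser

# Feed a robots.txt body to the parser; in practice you would fetch the
# site's real /robots.txt instead of this hypothetical example.
rp = urllib.robotparser.RobotFileParser()
rp.parse([
    "User-agent: *",
    "Disallow: /private/",
])

print(rp.can_fetch("Googlebot", "http://example.com/treatments.html"))    # True
print(rp.can_fetch("Googlebot", "http://example.com/private/notes.html"))  # False
```

If the unindexed pages come back `False` here, the sitemap won't help until the disallow rule is fixed.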
Technical SEO | dentaldesign
-
Do we need to manually submit a sitemap every time, or can we host it on our site as /sitemap and Google will see & crawl it?
I realized we don't have a sitemap in place, so we're going to get one built. Once we do, I'll submit it manually to Google via Webmaster tools. However, we have a very dynamic site with content constantly being added. Will I need to keep manually re-submitting the sitemap to Google? Or could we have the continually updating sitemap live on our site at /sitemap and the crawlers will just pick it up from there? I noticed this is what SEOmoz does at http://www.seomoz.org/sitemap.
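To answer the second part: yes, crawlers can discover the sitemap on their own if it lives at a stable URL and robots.txt points at it with a `Sitemap:` line, so a dynamic site just needs to regenerate the file; no manual resubmission is required each time. A minimal sketch of the XML itself, with hypothetical URLs:

```python
from xml.sax.saxutils import escape

def build_sitemap(urls):
    """Render a minimal XML sitemap per the sitemaps.org protocol.
    The URLs passed in are illustrative; a real site would regenerate
    this whenever content changes and serve it at a fixed path."""
    entries = "\n".join(f"  <url><loc>{escape(u)}</loc></url>" for u in urls)
    return (
        '<?xml version="1.0" encoding="UTF-8"?>\n'
        '<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">\n'
        f"{entries}\n"
        "</urlset>"
    )

xml = build_sitemap(["http://example.com/", "http://example.com/about"])
print(xml)
```

The matching robots.txt line would then be `Sitemap: http://example.com/sitemap.xml`.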
Technical SEO | askotzko
-
Issue with 'Crawl Errors' in Webmaster Tools
Have an issue with a large number of 'Not Found' webpages being listed in Webmaster Tools. In the 'Detected' column, the dates are recent (May 1st - 15th). However, clicking into the 'Linked From' column, all of the link sources are old, many from 2009-10. Furthermore, I have checked a large number of the source pages to double-check that the links don't still exist, and as I expected, they don't. Firstly, I am concerned that Google thinks there is a vast number of broken links on this site when in fact there is not. Secondly, if the errors do not actually exist (and never actually have), why do they remain listed in Webmaster Tools, which claims they were found again this month?! Thirdly, what's the best and quickest way of getting rid of these errors? Google advises that using the 'URL Removal Tool' will only remove the pages from the Google index, NOT from the crawl errors. The info is that if they keep returning 404s, they will automatically be removed. Well, I don't know how many times they need to get that 404 in order to get rid of a URL and link that haven't existed for 18-24 months?! Thanks.
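On the third question: for URLs that are gone for good, serving 410 (Gone) rather than 404 signals a permanent removal, which Google generally treats as a stronger hint to drop the URL than a plain 404. A sketch of the distinction, with hypothetical paths:

```python
# Hypothetical list of URLs removed permanently from the site; answering
# 410 for these tells crawlers not to keep retrying them as mere 404s.
REMOVED_FOREVER = {"/old-campaign-2009/", "/press/2010/launch.html"}

def status_for(path: str, known_paths: set) -> int:
    """Pick an HTTP status: 200 for live pages, 410 for permanently
    removed ones, 404 for everything else (sketch only)."""
    if path in known_paths:
        return 200
    if path in REMOVED_FOREVER:
        return 410
    return 404

print(status_for("/about/", {"/about/"}))              # 200
print(status_for("/old-campaign-2009/", {"/about/"}))  # 410
print(status_for("/typo", {"/about/"}))                # 404
```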
Technical SEO | RiceMedia