Crawling issue
-
Hello,
I am working on a new Magento website that is three weeks old. In GWT, under Index Status > Advanced, I can see only one crawl, on the fourth day after launch, and I don't see any numbers for indexed or blocked status.
| Total indexed | Ever crawled | Blocked by robots | Removed |
| 0 | 1 | 0 | 0 |

I can see the traffic in Google Analytics, I can see the website in the SERPs when I search for some of the keywords, and I can see the links appear on Google, but I don't see any numbers in GWT. As far as I can tell there is no noindex tag or robots.txt block, but Google doesn't crawl the website for some reason.
Any ideas why I cannot see any numbers for the indexed or crawled status in GWT?
Thanks
Seda
-
Thanks Davenport and Everett. I've already got an XML sitemap submitted, and I've checked robots.txt, noindex tags, etc., but there are no stats yet. I'll wait a few more weeks, but it just doesn't make sense to get no stats after a month. Meanwhile, if I figure anything out, I'll reply here.
-
The data in GWT is not always updated regularly. Also, for a new site that has never been indexed before and has no, or few, external links, it would not be surprising to experience infrequent crawls. The more links you earn and the more of a history of fresh content and updated pages you develop, the more often and deeply you'll be crawled.
As Davenport-Tractor mentioned, an XML sitemap submitted to GWT will also help if you haven't done that already.
If most of your pages are indexed when you do a (site:yourdomain.com) search on Google I wouldn't worry about it too much. If they aren't indexed, you may have a problem, such as inadvertently blocking the crawlers via robots meta tag or robots.txt file. I'd have to see the site to know that though.
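If you want to script the meta-robots half of that check, here is a minimal sketch (my own illustration, not from the thread) using Python's standard-library HTML parser to test a page's HTML for a noindex directive; the sample HTML strings are hypothetical:

```python
from html.parser import HTMLParser

class RobotsMetaParser(HTMLParser):
    """Collects the content of any <meta name="robots"> tag."""
    def __init__(self):
        super().__init__()
        self.robots_directives = []

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "meta" and attrs.get("name", "").lower() == "robots":
            self.robots_directives.append(attrs.get("content", "").lower())

def is_noindexed(html: str) -> bool:
    # True if any robots meta tag on the page contains "noindex"
    parser = RobotsMetaParser()
    parser.feed(html)
    return any("noindex" in d for d in parser.robots_directives)

blocked = '<html><head><meta name="robots" content="noindex,nofollow"></head></html>'
open_page = '<html><head><title>Home</title></head></html>'
print(is_noindexed(blocked))    # True
print(is_noindexed(open_page))  # False
```

Run it against the HTML of a few key pages; if it reports True anywhere unexpected, that would explain the missing index numbers.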
-
Seda,
Have you submitted a sitemap to GWMT?
That will greatly help the Google spiders crawl your site. Kind of like telling someone how to find your business vs providing them a road map. They will get there a whole lot quicker if you provide a map on how to find all the different locations.
There are quite a few different sitemap generator programs available. These programs will crawl your site and build the sitemap.xml file for you. You can then save the file to your website's root directory and point GWMT to the sitemap.
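For illustration, here is a minimal sketch of what those generators produce: a sitemap.xml listing each URL inside an urlset element, per the sitemaps.org protocol. The URLs below are placeholders; a real generator would crawl the site to collect them.

```python
import xml.etree.ElementTree as ET

def build_sitemap(urls):
    # Build a minimal sitemaps.org-compliant <urlset> document
    ns = "http://www.sitemaps.org/schemas/sitemap/0.9"
    urlset = ET.Element("urlset", xmlns=ns)
    for page in urls:
        url_el = ET.SubElement(urlset, "url")
        ET.SubElement(url_el, "loc").text = page
    return ET.tostring(urlset, encoding="unicode")

xml = build_sitemap([
    "https://www.example.com/",
    "https://www.example.com/products/widget",
])
print(xml)
```

Save that output as sitemap.xml in the site root, then submit its URL in GWMT under Sitemaps.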
Related Questions
-
Redirect Issue in .htaccess
Hi, I'm stumped on this, so I'm hoping someone can help. I have a WordPress site that I migrated to https about a year ago. Shortly after, I added some code to my .htaccess file. My intention was to force https and www on all pages. I did see a moderate decline in rankings around the same time, so I feel the code may be wrong. Also, when I run the domain through Open Site Explorer, all of the internal links show 301 redirects. The code I'm using is below. Thank you in advance for your help!

```apache
# Redirect HTTP to HTTPS
RewriteEngine On
# ensure www.
RewriteCond %{HTTP_HOST} !^www. [NC]
RewriteRule ^ https://www.%{HTTP_HOST}%{REQUEST_URI} [L,R=301]
# ensure https
RewriteCond %{HTTP:X-Forwarded-Proto} !https
RewriteCond %{HTTPS} off
RewriteRule ^ https://%{HTTP_HOST}%{REQUEST_URI} [L,R=301]

# BEGIN WordPress
<IfModule mod_rewrite.c>
RewriteEngine On
RewriteBase /
RewriteRule ^index.php$ - [L]
RewriteCond %{REQUEST_FILENAME} !-f
RewriteCond %{REQUEST_FILENAME} !-d
RewriteRule . /index.php [L]
</IfModule>
# END WordPress

# USER IP BANNING
<Limit GET POST>
order allow,deny
deny from 213.238.175.29
deny from 66.249.69.54
allow from all
</Limit>

# Enable gzip compression
AddOutputFilterByType DEFLATE text/plain
AddOutputFilterByType DEFLATE text/html
AddOutputFilterByType DEFLATE text/xml
AddOutputFilterByType DEFLATE text/css
AddOutputFilterByType DEFLATE application/xml
AddOutputFilterByType DEFLATE application/xhtml+xml
AddOutputFilterByType DEFLATE application/rss+xml
AddOutputFilterByType DEFLATE application/javascript
AddOutputFilterByType DEFLATE application/x-javascript

# Setting header expires
<IfModule mod_expires.c>
ExpiresActive on
ExpiresDefault "access plus 1 month"
ExpiresByType application/javascript "access plus 1 year"
ExpiresByType image/x-ico "access plus 1 year"
ExpiresByType image/jpg "access plus 14 days"
ExpiresByType image/jpeg "access plus 14 days"
ExpiresByType image/gif "access plus 14 days"
ExpiresByType image/png "access plus 14 days"
ExpiresByType text/css "access plus 14 days"
</IfModule>
```

Intermediate & Advanced SEO | JohnWeb12
-
Crawl efficiency - Page indexed after one minute!
Hey guys, a site has 5+ million pages indexed and 300 new pages a day. I hear a lot that at this level it's all about efficient crawlability. The pages of this site get indexed one minute after they go online.

1. Does this mean that the site is already crawled efficiently and there is not much else to do about it?
2. By increasing crawl efficiency, should I expect Google to crawl my site less (less bandwidth taken from my site for the same amount of crawling) or to crawl my site more often?

Thanks
Intermediate & Advanced SEO | Mr.bfz
-
Hey guys, I have this issue on my crawling report. What should I do to exclude the pages?
Overly-Dynamic URL: Although search engines can crawl dynamic URLs, search engine representatives have warned against using more than 2 parameters in a given URL. Search engines may also see dynamic versions of the same URL as unique URLs, creating duplicate content.
Intermediate & Advanced SEO | adulter
-
Meta noindex and Robots - Optimizing Crawl Budget
Hi, some time ago a few thousand pages got into Google's index - they were "product pop-up" pages, exact duplicates of the actual product page shown as a "quick view". So I deleted them via GWT and also put a meta noindex on these pop-up overlays to stop them being indexed and causing duplicate content issues. They are no longer in the index as far as I can see; when I do a site:www.mydomain.com/ajax search, nothing appears. So can I block these off now with robots.txt to optimize my crawl budget? Thanks
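As a side note, one quick way to sanity-check what a proposed robots.txt rule would actually block is Python's standard-library robots.txt parser. The rule and URLs below are hypothetical examples modeled on the /ajax pattern described in the question:

```python
import urllib.robotparser

# A candidate robots.txt rule to block the pop-up overlay URLs
rules = """
User-agent: *
Disallow: /ajax/
"""

rp = urllib.robotparser.RobotFileParser()
rp.parse(rules.splitlines())

# Overlay URLs are blocked, normal product pages are not
print(rp.can_fetch("*", "https://www.mydomain.com/ajax/quick-view?id=1"))  # False
print(rp.can_fetch("*", "https://www.mydomain.com/products/widget"))       # True
```

Testing the rule this way before deploying it helps avoid accidentally blocking real product pages.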
Intermediate & Advanced SEO | bjs2010
-
Should I let Google crawl my production server if the site is still under development?
I am building out a brand new site. It's built on WordPress, so I've been tinkering with the themes and plug-ins on the production server. To my surprise, less than a week after installing WordPress, I have pages in the index. I've seen advice in this forum about blocking search bots from dev servers to prevent duplicate content, but this is my production server, so that seems like a bad idea. Any advice on the best way to proceed? Block or no block? Or something else? (I know how to block, so I'm not looking for instructions.) We're around 3 months from officially launching (possibly less). We'll start to have real content on the site some time in June, even though we aren't planning to launch then. We should have a development environment ready in the next couple of weeks. Thanks!
Intermediate & Advanced SEO | DoItHappy
-
REL canonicals not fixing duplicate issue
I have a ton of query strings in one of the apps on my site, as well as pagination - both of which caused a lot of duplicate errors on my site. I added rel canonicals via a PHP condition, output every time a specific string (which only exists on these pages) occurs. The rel canonical notification shows up in my campaign now, but all of the duplicate errors are still there. Did I do it right and just need to ignore the duplicate errors? Is there further action to be taken? Thanks!
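For reference, the canonicalization logic described here (every query-string variant pointing at one clean URL) can be sketched like this. The domain and parameter names are made up, and a real site would implement the equivalent in PHP:

```python
from urllib.parse import urlsplit, urlunsplit

def canonical_url(url: str) -> str:
    """Drop the query string and fragment to get the canonical page URL."""
    parts = urlsplit(url)
    return urlunsplit((parts.scheme, parts.netloc, parts.path, "", ""))

def canonical_tag(url: str) -> str:
    # The <link> element to emit in the page <head>
    return '<link rel="canonical" href="%s" />' % canonical_url(url)

print(canonical_tag("https://www.example.com/app/results?sort=price&page=3"))
# <link rel="canonical" href="https://www.example.com/app/results" />
```

Note that crawl reports often keep flagging the duplicates until the pages are re-crawled, so a lag between adding the tag and the errors clearing is normal.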
Intermediate & Advanced SEO | Ocularis
-
Our keyword rank fell drastically since our last moz crawl. What gives?
Our site was just crawled last night, and we had a number of keywords fall 25% or so, from, say, #30 to #38. What would cause such a drastic fall? We haven't taken down pages or changed page titles, URLs, or content. One thing we are having trouble with is our meta description and meta keywords for each page. Drupal 7 seems to be giving us trouble, and the meta information isn't showing up anymore. Would that cause the fall? Any other suggestions?
Intermediate & Advanced SEO | drudalton
-
In-House SEO - Doubt about one SEO issue - Plz guys help over here =)
Hello, we want to promote some of our software products. I will give you one example below: http://www.mediavideoconverter.de/pdf-to-epub-converter.html We also have this domain: http://pdftoepub.de/ How can we deal with the duplicate content, and how can we improve the first domain's product page? If I use the canonical tag, don't index the second domain, and link it to the first domain, will that help, or make no difference? Keywords: pdf to epub, pdf to epub converter. What do you think about this technique? Good / bad? Does the second domain give any value to the first domain's page? Thanks in advance.
Intermediate & Advanced SEO | augustos