Crawl Diagnostics - unexpected results
-
I received my first Crawl Diagnostics report last night on my dynamic ecommerce site.
It showed errors on generated URLs that are simply never produced anywhere on my live site - they only appear when running on my local development server.
It appears that the Crawler doesn't think that it's running on the live site.
For example
http://www.nordichouse.co.uk/candlestick-centrepiece-p-1140.html
will go to a Product Not Found page, and therefore Duplicate Content errors are produced.
Running
http://www.nhlocal.co.uk/candlestick-centrepiece-p-1140.html
produces the correct product page, not a Product Not Found page.
Any thoughts?
-
Hi Nordichouse,
Sorry it took a while for me to get back to you on this.
I agree with the SEOmoz techs: it doesn't matter whether it's a crawler or an actual person - if you go to an invalid URL, you should be 301-redirected to the actual page. If the product doesn't exist, the site should not allow superfluous URLs.
So basically, what you should have is this: if the product exists, the site 301 redirects to the correct URL. If it doesn't exist, send any query for that product to the same page and display the osCommerce product-not-found message. By doing this you prevent the system from creating umpteen thousand URLs for each product.
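That decision logic can be sketched as follows. This is an illustration only, in Python (osCommerce does this in PHP via its rewriter plus application code); the product ids and slugs in the table are hypothetical:

```python
import re

# Hypothetical product table: products_id -> canonical slug.
PRODUCTS = {29: "nipple-clips", 1000: "linen-style-collection"}

def handle_request(path):
    """Return (status, location) for an incoming pretty URL."""
    m = re.match(r"^/(.*)-p-(\d+)\.html$", path)
    if not m:
        return (404, "/product_not_found")
    products_id = int(m.group(2))
    canonical_slug = PRODUCTS.get(products_id)
    if canonical_slug is None:
        # No such product: send every such request to one not-found page,
        # rather than letting it spawn umpteen thousand URLs.
        return (301, "/product_not_found")
    canonical = "/%s-p-%d.html" % (canonical_slug, products_id)
    if path != canonical:
        # Wrong or missing slug: 301 to the single canonical URL.
        return (301, canonical)
    return (200, canonical)
```

For example, `handle_request("/a-fake-url-p-29.html")` returns a 301 to `/nipple-clips-p-29.html`, while a request for a nonexistent product id gets the not-found redirect.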
If you want to test what I mean, you can visit our store at www.rubberstore.com/catalog and try a few URLs like:
catalog/nipple-clips-p-1000.html
We don't have a product with the id of 1000, so you'll get redirected to the not-found message and the root page. However, if you try:
catalog/a-fake-url-p-29.html
you'll get redirected to our actual product page matching that product id. Hope that makes sense. All this is done with the .htaccess URL rewriter I posted.
-
Don
Yes, that is how it is done and there is no problem with that. The above is just how inbound URLs get processed.
The issue here is how the crawler works. The only possible way for this particular URL to be generated is for a certain parameter to be appended to the URL - and that would be unusual (unless SEOmoz techies tell me different)
Alan
-
Did you ever have a product with the id of 1140? If you look at your products table, just check the auto-number in the products_id column.
If you did, and it was live at some point, the crawler could be finding the old product based on the old URL it used to have.
If you never made that product live, then I don't know how a crawler could have found a product that doesn't exist, unless they started using some technology I'm unaware of.
Since you said you use osC, this is what we use to deal with the problem I outlined above:
# Begin Ultimate SEO V2.2d
Options +FollowSymLinks
RewriteEngine On

# RewriteBase instructions
# Change RewriteBase dependent on how your shop is accessed as below.
# http://www.mysite.com = RewriteBase /
# http://www.mysite.com/catalog/ = RewriteBase /catalog/
# http://www.mysite.com/catalog/shop/ = RewriteBase /catalog/shop/

# Change the following line using the instructions above
RewriteBase /catalog/

RewriteRule ^(.*)-p-(.*).html$ product_info.php?products_id=$2&%{QUERY_STRING}
RewriteRule ^(.*)-c-(.*).html$ index.php?cPath=$2&%{QUERY_STRING}
RewriteRule ^(.*)-m-(.*).html$ index.php?manufacturers_id=$2&%{QUERY_STRING}
RewriteRule ^(.*)-pi-(.*).html$ popup_image.php?pID=$2&%{QUERY_STRING}
RewriteRule ^(.*)-by-(.*).html$ all-products.php?fl=$2&%{QUERY_STRING}
RewriteRule ^(.*)-t-(.*).html$ articles.php?tPath=$2&%{QUERY_STRING}
RewriteRule ^(.*)-a-(.*).html$ article_info.php?articles_id=$2&%{QUERY_STRING}
RewriteRule ^(.*)-au-(.*).html$ articles.php?authors_id=$2&%{QUERY_STRING}
#RewriteRule ^(.*)-pr-(.*).html$ product_reviews.php?products_id=$2&%{QUERY_STRING}
RewriteRule ^(.*)-pri-(.*).html$ product_reviews_info.php?products_id=$2&%{QUERY_STRING}
RewriteRule ^(.*)-f-(.*).html$ faqdesk_info.php?faqdesk_id=$2&%{QUERY_STRING}
RewriteRule ^(.*)-fc-(.*).html$ faqdesk_index.php?faqPath=$2&%{QUERY_STRING}
RewriteRule ^(.*)-fri-(.*).html$ faqdesk_reviews_info.php?faqdesk_id=$2&%{QUERY_STRING}
RewriteRule ^(.*)-fra-(.*).html$ faqdesk_reviews_article.php?faqdesk_id=$2&%{QUERY_STRING}
RewriteRule ^(.*)-i-(.*).html$ information.php?info_id=$2&%{QUERY_STRING}
RewriteRule ^(.*)-links-(.*).html$ links.php?lPath=$2&%{QUERY_STRING}
RewriteRule ^(.*)-pm-([0-9]+).html$ info_pages.php?pages_id=$2&%{QUERY_STRING}
RewriteRule ^(.*)-n-(.*).html$ newsdesk_info.php?newsdesk_id=$2&%{QUERY_STRING}
RewriteRule ^(.*)-nc-(.*).html$ newsdesk_index.php?newsPath=$2&%{QUERY_STRING}
RewriteRule ^(.*)-nri-(.*).html$ newsdesk_reviews_info.php?newsdesk_id=$2&%{QUERY_STRING}
RewriteRule ^(.*)-nra-(.*).html$ newsdesk_reviews_article.php?newsdesk_id=$2&%{QUERY_STRING}
RewriteRule ^(.*)-po-([0-9]+).html$ pollbooth.php?pollid=$2&%{QUERY_STRING}
# End Ultimate SEO V2.2d
You may try it to see if it helps fix your issue.
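To see why rules like these produce duplicate content without a canonical redirect, note that the `-p-` rule captures only the product id; the slug in front matches a `(.*)` wildcard and is discarded. A quick sketch of what that rule does (Python regex mirroring the mod_rewrite pattern, with the `.` before `html` escaped):

```python
import re

# Mirrors: RewriteRule ^(.*)-p-(.*).html$ product_info.php?products_id=$2
PRODUCT_RULE = re.compile(r"^(.*)-p-(.*)\.html$")

def rewrite(path):
    """Map a pretty URL to the internal query string, as mod_rewrite would."""
    m = PRODUCT_RULE.match(path)
    if not m:
        return None
    # Only group 2 (the id) is used; the slug in group 1 is thrown away.
    return "product_info.php?products_id=%s" % m.group(2)
```

Both `rewrite("a-fake-url-p-29.html")` and `rewrite("nipple-clips-p-29.html")` map to `products_id=29`, so every slug is a working alias for the same product unless the application layer 301-redirects to one canonical form.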
-
Thanks, Don
You are right in your analysis - it is osC, but highly modified by myself. Yes, it does redirect.
That, however, is not the point. On the live site, the URL containing 1140 (for example) is never generated.
The mystery is how the Crawler can find something that isn't there! Magic.
Alan
-
Hi nordichouse,
You may want to check with your CMS provider. The URLs are similar to osCommerce's, which I'm experienced with, but I can see this isn't an osCommerce setup. The system should have some sort of URL rewriter to deal with this problem.
The issue I see is that the system doesn't actually care what you type between .co.uk/ and -p-1140.html.
For example, try this URL to get a valid product:
http://www.nordichouse.co.uk/nipple-clips-p-1000.html
which is the same as
http://www.nordichouse.co.uk/-p-1000.html
but both should 301 redirect to: http://www.nordichouse.co.uk/linen-style-collection-p-1000.html. osCommerce has a 301 URL rewriter that prevents the system from using incorrect URLs; I would hope your system does as well.
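The canonical form comes from the product's name. A hedged sketch of how a rewriter might build the expected slug so it can decide whether a request needs that 301 (illustrative only; the function names are not osCommerce's actual code):

```python
import re

def slugify(name):
    """Reduce a product name to the hyphenated slug used in the pretty URL."""
    slug = re.sub(r"[^a-z0-9]+", "-", name.lower())
    return slug.strip("-")

def canonical_url(name, products_id):
    """Build the one URL every variant should 301 to."""
    return "/%s-p-%d.html" % (slugify(name), products_id)
```

For example, `canonical_url("Linen Style Collection", 1000)` yields `/linen-style-collection-p-1000.html`; any request for that product id whose path differs should get a 301 to exactly that URL.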
I'm not trying to avoid helping you, but without exact knowledge of how the system handles the URLs it generates, it's hard to troubleshoot. Since it's a CMS, though, somebody who works on it should already have this knowledge.
My best,
Don