Crawl Diagnostics showing 20k+ errors as duplicate content due to session IDs
-
Signed up for the trial version of SEOmoz today just to check it out, as I've decided I'm going to do my own SEO rather than outsource it (I've been let down a few times!). So far I like the look of things and have a feeling I'm going to learn a lot and get results.
However, I have just stumbled on something. After SEOmoz does its crawl diagnostics run on the site (www.deviltronics.com), it shows 20,000+ errors. From what I can see, almost 99% of these are flagged as duplicate content due to session IDs, so I'm not sure what to do!
I have done a "site:www.deviltronics.com" search on Google and this certainly doesn't pick up the session IDs/duplicate content. So could this just be an issue with the SEOmoz bot? If so, how can I get SEOmoz to ignore these on the crawl?
Can I get my developer to add some code somewhere?
Help will be much appreciated. Asif
-
Hello Tom and Asif,
First of all, Tom, thanks for the excellent blog post re: Google Docs.
We are also using the Jshop platform for one of our sites, and I'm not sure whether it is working correctly in terms of SEO. I just ran an SEOmoz crawl of the site and found that every single link in the list has a rel canonical on it, even the ones with session IDs.
Here is an example:
www.strictlybeautiful.com/section.php/184/1/davines_shampoo/d112a41df89190c3a211ec14fdd705e9
www.strictlybeautiful.com/section.php/184/1/davines_shampoo
As Asif has pointed out, the Jshop people say they have programmed it so that Google cannot pick up the session IDs. Firstly, is that even possible? And if I assume that's not an issue, then what about the fact that every single page on the site has a rel canonical link on it?
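For reference, a rel canonical on the session-ID version of a page normally points back at the clean URL. A minimal sketch based on the two example URLs above (an illustration, not Jshop's actual output):

```html
<!-- In the <head> of the session-ID variant:
     www.strictlybeautiful.com/section.php/184/1/davines_shampoo/d112a41df89190c3a211ec14fdd705e9 -->
<link rel="canonical"
      href="http://www.strictlybeautiful.com/section.php/184/1/davines_shampoo" />
```

If the tags already look like this, then a canonical on every page, including a self-referencing one on the clean URL, is normal and not itself a problem.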
Any help would be much appreciated.
-
Asif, here's the page with the information on the SEOmoz bot.
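In short, the SEOmoz crawler identifies itself with a user agent containing "rogerbot". As a rough sketch of the kind of recognised-spider check a developer could add (the helper name and bot list below are hypothetical illustrations, not Jshop code):

```python
# Hypothetical sketch: suppress session IDs in generated URLs when the
# requesting User-Agent belongs to a recognised crawler. "rogerbot" is
# SEOmoz's crawler; the other names cover common search engine bots.
KNOWN_BOTS = ("googlebot", "bingbot", "slurp", "rogerbot")

def should_append_session_id(user_agent: str) -> bool:
    """Return False for recognised crawlers so they see clean URLs."""
    ua = user_agent.lower()
    return not any(bot in ua for bot in KNOWN_BOTS)

print(should_append_session_id("Mozilla/5.0 (compatible; rogerbot/1.0)"))  # False
print(should_append_session_id("Mozilla/5.0 (Windows NT 6.1; rv:5.0)"))    # True
```

The point is simply that the platform's "recognised spiders" list needs an entry matching rogerbot's user agent string.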
-
Thanks for the reply, Tom. I spoke to our developer, and he told me that the website platform (Jshop) does not show session IDs to the search engines, so we are OK on that side. However, as it doesn't recognise the SEOmoz bot, it shows it the session IDs. Do you know where I can find info on the SEOmoz bot, so we can see what it identifies itself as and add it to the list of recognised spiders?
Thanks
-
Hi Asif!
Firstly - I'd suggest that you address the core problem as soon as possible: the use of session IDs in the URL. There are few upsides to that approach and many downsides. The fact that it doesn't show up with the site: command doesn't mean it isn't having a negative impact.
In the meantime, you should add a rel=canonical tag to all the offending pages, pointing to the URL without the session ID. Secondly, you could use robots.txt to block the SEOmoz bot from crawling pages with session IDs, but that may affect the bot's ability to crawl the site if every link it is presented with contains a session ID - which takes us back around to fixing the core problem.
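To sketch that second option: robots.txt can only match URL patterns, so it is easiest when the session ID is a query parameter. The parameter name below is purely hypothetical (on this platform the ID is actually a path segment, so the pattern would need adapting), and wildcard support varies by crawler:

```text
# Hypothetical robots.txt rule: keep only SEOmoz's crawler (rogerbot)
# away from URLs carrying a session-id query parameter (name assumed).
User-agent: rogerbot
Disallow: /*?sid=
```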
Hope this helps a little!
Related Questions
-
Error Code 804: HTTPS (SSL) Error Encountered
I'm seeing the following error in Moz, as below. I have not seen any errors when crawling with different tools - is this common, or is it something that needs to be looked at? So far I only have the info below. I assume I would need to open a ticket with the hosting provider for this? Thanks! Error Code 804: HTTPS (SSL) Error Encountered. Your page requires an SSL security certificate to load (using HTTPS), but the Moz Crawler encountered an error when trying to load the certificate. Our crawler is pretty standard, so it's likely that other browsers and crawlers may also encounter this error. If you have this error on your homepage, it prevents the Moz crawler (and some search engines) from crawling the rest of your site.
Moz Pro | w4rdy0
-
Campaign Crawl
I have a site with 8036 pages in my sitemap index. But the MozBot only Crawled 2169 pages. It's been several months and each week it crawls roughly the same number of pages. Any idea why I'm not getting fully crawled?
Moz Pro | JMFieldMarketing0
-
Duplicate Page Titles and Content
The SEOmoz crawler has found many pages like this on my site with /?Letter=Letter, e.g. http://www.johnsearles.com/metal-art-tiles/?D=A. I believe it is finding multiple caches of a page and identifying them as duplicates. Is there any way to screen out these multiple cache results?
Moz Pro | johnsearles0
-
Duplicate page content due to Sort By dropdown
Hi there, I have over 150 Duplicate Page Title errors showing up in SEOmoz, but on closer inspection these are related to the 'Sort By:' functionality on our ecommerce site that allows customers to sort our products by Price, Alphabetically, etc. To give an example: http://www.parklanechampagne.co.uk/park-lane-champagne/special-occasions/easter is showing as being duplicated by this page: http://www.parklanechampagne.co.uk/park-lane-champagne/special-occasions/easter?productlisting_page=1&sortorder=Price Does anyone know how I can resolve this? Any help greatly appreciated. Kind regards, Jon
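One generic option for sorted variants like the example above (an illustration, not this site's actual templates) is either a rel=canonical pointing at the unsorted URL, or a meta robots tag that keeps the variant out of the index while still letting crawlers follow its links:

```html
<!-- Hypothetical: on .../easter?productlisting_page=1&sortorder=Price -->
<meta name="robots" content="noindex, follow" />
```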
Moz Pro | jonmorse860
-
Duplicate page content and search in Magento
Hi all, Firstly, I am a business owner and not an SEO genius, but I work on my site and am learning how to "tweak" every day. That said, my site www.vintagetimes.com.au needs a bit more than a tweak. Here is problem 1: I have massive duplicate page content, driven primarily by search, and I'm not sure how to tackle the issue. I'm working in Magento. Could anybody give me instructions on how to steer robots away from search results? I would also like to know WHY a search result is indexed here at all. Example of about 20 pages of this type of result, all titled "Search results for: '1 carat' Vintage Times":
http://www.vintagetimes.com.au/catalogsearch/result/index/?q=1+carat&enable_googlecheckout=1
http://www.vintagetimes.com.au/catalogsearch/result/index/?q=1+carat&enable_googlecheckout=1&cat=21
http://www.vintagetimes.com.au/catalogsearch/result/index/?q=1+carat&enable_googlecheckout=1&cat=21&order=created_at&dir=asc
http://www.vintagetimes.com.au/catalogsearch/result/index/?q=1+carat&enable_googlecheckout=1&cat=21&order=metal&dir=asc
http://www.vintagetimes.com.au/catalogsearch/result/index/?q=1+carat&enable_googlecheckout=1&cat=21&order=name&dir=asc
http://www.vintagetimes.com.au/catalogsearch/result/index/?q=1+carat&enable_googlecheckout=1&cat=21&order=price&dir=asc
http://www.vintagetimes.com.au/catalogsearch/result/index/?q=1+carat&enable_googlecheckout=1&cat=21&order=relevance&dir=asc
http://www.vintagetimes.com.au/catalogsearch/result/index/?q=1+carat&enable_googlecheckout=1&cat=21&order=stone&dir=asc
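For the robots part of the question, the usual approach in Magento is a robots.txt rule over the search path. A sketch, assuming the default /catalogsearch/ URL structure shown in the examples above (verify against your own installation before deploying):

```text
# Keep all crawlers out of Magento's on-site search result pages.
User-agent: *
Disallow: /catalogsearch/
```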
Moz Pro | VintageTimesAustralia0
-
Can I exclude pages from my Crawl Diagnostics?
Right now my crawl diagnostic information is being skewed because it's including the onsite search from my website. Is there a way to remove certain pages, like search, from the errors and warnings of the crawl diagnostic? My search pages are coming up with: Long URL, Title Element Too Long, Missing Meta Description, Blocked by meta-robots (which is how I want it), Rel Canonical. Here is what the crawl diagnostic thinks my page URL looks like: website.com/search/gutter%2525252525252525252525252525252525252525252525252525252525252525252525252525252525252525252525252525252525252525252525252525252525252525252525252525252525252525252525252525252525252525252525252525252525252525252525252525252525252525252525252525252525252525252525252Bcleaning/ Thank you, Jonathan
Moz Pro | JonathanGoodman0
-
How to crawl the whole domain?
Hi, I have an e-commerce website with more than 4,600 products. I expected SEOmoz to check all URLs, and I don't know why this doesn't happen. The campaign name is Artigos para festa and it should scan the whole domain festaexpress.com, but it crawls only 100 pages. I even tried to create a new campaign named Festa Express - Root Domain to check if it would scan, but I had the same problem: it crawled only 199 pages. Hope to have a solution. Thanks,
Eduardo
Moz Pro | EduardoCoen0
-
What the . . ! Duplicate Pages and Titles WAY up?
My duplicate pages went up by 50-plus in the past week, and my duplicate page titles went up by more than 100. We recently redesigned the website, but it has been up for several weeks now. The only change I made last week, or late the week before, was to set up my 301 redirects so that the www version and the non-www version point to the same place (as well as a couple of other sites that point to it). I'm sure this is not enough info to figure out what went wrong... I'd love some help in figuring this out, though.
Moz Pro | damon12120