Crawl Diagnostics returning duplicate content based on session id
-
I'm just starting to dig into Crawl Diagnostics, and it is returning quite a few errors. Primarily, the crawl is flagging duplicate content (page titles, meta tags, etc.) because of a session ID in the URL.
I have set up a URL parameter in Google Webmaster Tools to help Google recognize this session ID. Is there any way to tell the SEOmoz spider the same thing? I'd like to get rid of these errors, since I've already handled them for the most part.
-
You the man! Thanks!
-
Hi Cody,
The best way is to block Rogerbot in your robots.txt from crawling specific pages of your site - in your case, keeping Rogerbot from seeing the pages with a session ID.
More information on Rogerbot can be found here. Be cautious and test it out, but the lines you would have to add to your robots.txt are probably:
User-agent: rogerbot
Disallow: /*sessionid
Hope this helps!
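Note that the "*" in that Disallow line is a wildcard extension honored by crawlers like Googlebot and rogerbot (Python's built-in urllib.robotparser, for instance, does not understand it). A rough sketch of how a wildcard-aware crawler interprets the rule - the robots_match helper and the sample paths are illustrative, not rogerbot's actual code:

```python
import re

def robots_match(pattern: str, path: str) -> bool:
    """Return True if a robots.txt Disallow pattern matches a URL path.

    Only the '*' wildcard (any run of characters) is implemented here;
    matching is anchored at the start of the path, so plain prefix
    rules like 'Disallow: /private' behave as expected too.
    """
    regex = ".*".join(re.escape(part) for part in pattern.split("*"))
    return re.match(regex, path) is not None

# A URL carrying a session ID is matched by the rule above...
print(robots_match("/*sessionid", "/products/list?sessionid=abc123"))  # True
# ...while a clean URL is not.
print(robots_match("/*sessionid", "/products/list"))  # False
```

This is also why testing before deploying matters: a too-broad pattern (say, "/*id") would silently block far more pages than intended.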
Related Questions
-
Moz crawl is stopped?
Moz has stopped indexing the links, possibly due to some updates? Can someone confirm? Thanks
Moz Pro | 42409300125323700
Problems with duplicate contents...
Hi folks, how's it going? I started using SEOmoz, and from the first crawl it appears that I have 11 pages with duplicate content... but this is not true; they are different pages with different URLs, content, and tags. Any ideas on how to solve the problem? Alessandro, MusicaNueva.es
Moz Pro | musicanueva
Why does the SEOmoz bot consider these duplicate pages?
Hello here, the SEOmoz bot has recently marked the following two pages as duplicates: http://www.virtualsheetmusic.com/score/PatrickCollectionFlPf.html?tab=mp3 http://www.virtualsheetmusic.com/score/PatrickCollectionFlPf.html?tab=pdf I don't personally see how these pages can be considered duplicates, since their content is quite different. Thoughts??!!
Moz Pro | fablau
Change crawl and rank report day?
Does anyone know if there is a way to get all of my account's campaigns crawled and rank-reported on the same day?
Moz Pro | CDUBP
Adding canonical still returns duplicate pages
According to SEOmoz, several of my campaigns show that I have duplicate pages (SEOmoz errors). Upon reading more about how to resolve the issue, I followed SEOmoz's suggestion to add rel='canonical' links to each page. After the next SEOmoz crawl, the number of SEOmoz errors related to duplicate pages remained the same, and the number of SEOmoz notices shot up, indicating that it recognized that I added rel='canonical'. I'm still puzzled as to why the duplicate page errors did not go down after I added rel='canonical', especially since SEOmoz noticed that I added them. Can anyone explain this to me? Thanks,
Scott
Moz Pro | MOZ2
SEOMOZ Crawl Test
Guys, I really have an issue that I know I have but cannot see, if that makes sense. Basically, 3 months ago I did a site-wide 301 from economyleasinguk.co.uk to www.economy-car-leasing.co.uk. Everything looks good: I get all the correct header responses, all canonicals work perfectly, Google Webmaster Tools is updated, and Fetch as Googlebot shows the old site is 301'd. I tried the SEOmoz crawl test today on the old domain and got this message: "Oh no! Looks like the page you were trying to access is temporarily down." At first I thought OK, because the site is not there it won't do it on an old 301'd domain. However, I tried it on a domain I know has just been 301'd, and I got this message: "The URL http://www.site1.com/ redirects to http://site2.com/. Do you want to crawl http://site2.com/ instead? Would you like to: Continue with www.site1.com, or Continue with site2.com." I really do not know what to do. Either the redirect script is missing something (however, it's doing what it should) or the server is a problem (but again, it's doing what it should), so why would SEOmoz not be able to crawl the old URL like the example site above? Now, the strange thing is that Open Site Explorer does see the 301 and asks if I want to check the new URL instead. PS: the redirect is done using a PHP redirect, which I am asking him to change to .htaccess as it's now on an Apache server, and I was wondering if this could be an issue. All pages go to the correct pages as requested. Thanks in advance
Moz Pro | kellymandingo
Crawl Errors Confusing Me
The SEOMoz crawl tool is telling me that I have a slew of crawl errors on the blog of one domain. All are related to the MSNbot, and to trackbacks (which we do want to block, right?) and attachments (it makes sense to block those, too). Any idea why these are crawl issues with MSNbot and not Google? My robots.txt is here: http://www.wevegotthekeys.com/robots.txt. Thanks, MJ
Moz Pro | mjtaylor
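A quick way to see which bot a robots.txt actually blocks is Python's stdlib parser - if the trackback rules sit under a "User-agent: msnbot" group, only MSNbot is affected and Google never reports them. The rules below are assumed examples in that spirit, not the actual contents of the file linked above:

```python
from urllib.robotparser import RobotFileParser

# Hypothetical robots.txt rules that block trackbacks/uploads for MSNbot only:
rules = [
    "User-agent: msnbot",
    "Disallow: /trackback",
    "Disallow: /wp-content/uploads",
]
rp = RobotFileParser()
rp.parse(rules)

print(rp.can_fetch("msnbot", "http://example.com/trackback/123"))     # False
print(rp.can_fetch("msnbot", "http://example.com/about"))             # True
print(rp.can_fetch("Googlebot", "http://example.com/trackback/123"))  # True: no rule group applies
```

If the intent is to block those paths for all crawlers, the group should use "User-agent: *" (or repeat the rules per agent).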
Duplicate page error from SEOmoz
SEOmoz's Crawl Diagnostics is complaining about a duplicate page error. I'm trying to use a rel=canonical, but maybe I'm not doing it right. This page is the original, definitive version of the content: https://www.borntosell.com/covered-call-newsletter/sent-2011-10-01 This page is an alias that points to it (each month the alias is changed to point to the then-current issue): https://www.borntosell.com/covered-call-newsletter/latest-issue The alias page contains a canonical tag (which is also updated each month when a new issue comes out) in its head section. Is that not correct? Is the https (vs. http) messing something up? Thanks!
Moz Pro | scanlin
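For a setup like this, the alias page's head should carry a link element whose href is the definitive issue's URL. A minimal sketch that checks for it with Python's stdlib HTML parser - the CanonicalFinder class is illustrative, and the sample head uses the URL from the question:

```python
from html.parser import HTMLParser

class CanonicalFinder(HTMLParser):
    """Collect the href of a <link rel="canonical"> element, if present."""
    def __init__(self):
        super().__init__()
        self.canonical = None

    def handle_starttag(self, tag, attrs):
        a = dict(attrs)
        if tag == "link" and a.get("rel") == "canonical":
            self.canonical = a.get("href")

# What the alias page's head should contain (href from the question):
head = ('<head><link rel="canonical" '
        'href="https://www.borntosell.com/covered-call-newsletter/sent-2011-10-01">'
        '</head>')

finder = CanonicalFinder()
finder.feed(head)
print(finder.canonical)
# https://www.borntosell.com/covered-call-newsletter/sent-2011-10-01
```

The key detail is that the href must match the indexed URL of the definitive page exactly, scheme included - mixing http and https between the href and the crawled URL is a common way for canonicals to silently fail.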