Rogerbot's crawl behaviour vs. Googlebot and other crawlers: disparate results have me confused.
-
I'm curious how accurately rogerbot replicates Google's search bot.
I currently have a site that is reporting over 200 pages of duplicate titles/content in the Moz tools. The pages in question all carry session IDs and were blocked in robots.txt about three weeks ago, yet the errors are still appearing.
I've also crawled the site with Screaming Frog SEO Spider. According to Screaming Frog, the offending pages are blocked and are not being crawled. Webmaster Tools is also reporting no crawl errors.
Is there something I'm missing here? Why would I get such different results, and which ones should I trust? Does rogerbot ignore robots.txt? Any suggestions would be appreciated.
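For reference, a typical way to block session-ID URLs looks like the fragment below. This is only a hypothetical sketch: the `sid` parameter name is an example, not necessarily what the site uses, and while Google documents support for the `*` wildcard, you should check Moz's documentation for how rogerbot interprets patterns. Also make sure the file is actually named robots.txt (not robot.txt) and sits at the site root.

```
# robots.txt at the site root, e.g. https://www.example.com/robots.txt
# "sid" is a hypothetical session-ID parameter name; adjust to your URLs.
User-agent: *
Disallow: /*?sid=
Disallow: /*&sid=
```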
-
Thanks for your response. I was beginning to think this question had been left to rot.
I'm not getting any errors in WMT. What is concerning is that Roger is returning almost 300 duplicate-content errors, which is obviously a problem. Screaming Frog is no longer finding the pages (they've been blocked in robots.txt). I guess what I'm really asking is: how can I be sure my duplicate content has been effectively blocked from Google's spider?
Is there any way to check?
Thanks for your help.
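One way to sanity-check your rules is Python's standard-library robots.txt parser, which reports whether a given URL is disallowed for a given user agent. This is a minimal sketch: the domain, the `sid` session-ID parameter, and the Disallow line are all hypothetical examples, and note the stdlib parser does simple prefix matching only; it does not evaluate Google-style `*` wildcards.

```python
# Minimal check of robots.txt rules with Python's stdlib parser.
# The URLs and the "sid" session-ID parameter are hypothetical examples.
# Caveat: urllib.robotparser matches rule paths by prefix only; it does
# not evaluate Google-style "*" wildcards.
from urllib.robotparser import RobotFileParser

rp = RobotFileParser()
# Parse rules directly instead of fetching them over the network.
rp.parse([
    "User-agent: *",
    "Disallow: /page?sid=",  # blocks /page?sid=... by prefix
])

# The session-ID variant is blocked for Googlebot...
print(rp.can_fetch("Googlebot", "https://www.example.com/page?sid=abc123"))  # False
# ...while the clean URL remains crawlable.
print(rp.can_fetch("Googlebot", "https://www.example.com/page"))  # True
```

Webmaster Tools also has its own robots.txt testing feature, which checks specific URLs against the live file exactly as Googlebot would see it, wildcards included.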
-
I've seen similar concerns from others; it seems rogerbot does ignore certain things that other bots respect.
Don't worry about it. If it's not being flagged in WMT, it shouldn't be an issue.
Treat Roger as a guide rather than an iron-fist bot like Googlebot.
Related Questions
-
Spam score is very high, but the cause isn't clear in Moz; it looks like a loop
Hey there, I have a problem with this domain: www.cornerjob.com. In OSE I saw a very high spam score, so I tried to review the backlink profile to find suspicious backlinks. But in OSE there's a loop: I know my domain has a high spam score, but I cannot see any bad backlinks. Actually, the only one I see is www.cornerjob.com itself, hence the loop. I tried other tools but wasn't able to see backlinks; actually we don't have many. What should I do to fix it? And what does spam score mean? Thanks in advance, Francesca
Moz Pro | Nuvolaa0
What to do with a site of >50,000 pages vs. crawl limit?
What happens if you have a site in your Moz Pro campaign that has more than 50,000 pages? Would it be better to choose a sub-folder of the site to get a thorough look at that sub-folder?

I track a few different large government websites to see how they are faring in rankings and SEO. They are not my own websites: I'm an academic studying science communication, and I want to see how these agencies are doing compared to what the public searches for on the technical topics and social issues the agencies manage. I'm in the process of re-setting up my campaigns to get better data than I have been getting. I'm a newbie to SEO, and the campaigns I slapped together a few months ago need to be set up better: all on the same day, making sure I've set whether the rankings include www or not, refining my keywords, etc. I'm stumped on what to do about the agency websites being really huge, and what the options are for getting good data given the 50,000-page crawl limit.

Here is an example of what I mean. To see how the EPA is doing in searches related to air quality, ideally I'd track all of EPA's web presence. www.epa.gov has 560,000 pages. If I put www.epa.gov into a campaign, what happens with the site having so many more pages than the 50,000 crawl limit? What do I miss out on? Can I "trust" what I get? www.epa.gov/air has only 1,450 pages, so if I track that in a campaign, the crawl will cover the sub-folder completely and I get a complete picture of this air-focused section. But (1) I'll miss out on air-related pages in other sub-folders of www.epa.gov, and (2) it seems like I'd leave so much of the 50,000-page crawl limit unused. (However, maybe that's not quite true: I'd also be tracking other sites as competitors, e.g. non-profits that advocate on air quality and industry air-quality sites. Do those competitors count towards the 50,000-page crawl limit and get me up to it? How do the competitors you choose figure into the crawl limit?)

Any opinions on what I should do in this kind of situation? The small sub-folder vs. the full humongous site, or is there some other way to go here that I'm not thinking of?
Moz Pro | scienceisrad0
What is the best free 'contact finder' tool?
What is the best free 'contact finder' tool? By 'contact finder' I mean a tool that can search multiple websites and display the contact details for each site. Why doesn't Moz provide such a tool? Thanks
Moz Pro | conor10050
OSE vs. Google Webmaster Tools link data
Hi experts, I am trying to understand OSE. One thing I noticed is that OSE shows 12 linking root domains to our site, while Google Webmaster Tools shows more than 90 (domains with links to pages on our site). And these are not new links; some date back to April 2011. Is there something very obvious I am missing here? Thanks for your help. Regards, Raman.
Moz Pro | ramangarg0
Crawl Test - Taking too long
The last crawl test I invoked seems to have been in progress for over 24 hours. The one before that completed in a few hours. I wish there were a progress indicator or an option to cancel. The crawl (from Tool > Crawl Test) should not take this long. Any ideas or suggestions? Also, the keyword research tool (plus a few others) has been down ever since I signed up. Is this normal?
Moz Pro | MomoMasta0
SEOmoz Crawling Our Site
Hi there, We get a report from SEOmoz every week which shows our performance in search. I noticed that for our website www.unifor.com.au it looks through over 10,000 pages, yet our website sells fewer than 500 products, so I'm not sure why or how so many pages are crawled. If someone could let me know, that would be great. Each of these crawls uses up a lot of bandwidth, so reducing the number of pages crawled would definitely help. Thanks, Geoff
Moz Pro | BeerCartel750
Only crawling one page
Hi there, A campaign was crawling fine, but on the last crawl, for some reason, SEOmoz could only crawl one page... any ideas? If I run a custom crawl I can still access all of the site's pages.
Moz Pro | harryholmes0070
API returns only 3 results
Hi all, I'm using the SEOmoz links API, and it's working great for me, except it only appears to return 3 results. It also seems to return only 3 results from the anchor-text API. Is there something basic I'm not doing to get more results? Thanks!
Moz Pro | jimbo_kemp0