Rogerbot crawls my site and causes errors as it uses URLs that don't exist
-
Whenever Rogerbot comes back to my site for a crawl, it seems to want to crawl URLs that don't exist, and thus causes errors to be reported...
Example: the correct URL is as follows:
/vw-baywindow/cab_door_slide_door_tailgate_engine_lid_parts/cab_door_seals/genuine_vw_brazil_cab_door_rubber_68-79_10330/
But it seems to want to crawl the following:
/vw-baywindow/cab_door_slide_door_tailgate_engine_lid_parts/cab_door_seals/genuine_vw_brazil_cab_door_rubber_68-79_10330/?id=10330
This format doesn't exist anywhere and never has, so I have no idea where it's getting this URL format from.
The user agent details I get are as follows:
IP ADDRESS: 107.22.107.114
USER AGENT: rogerbot/1.0 (http://moz.com/help/pro/what-is-rogerbot-, [email protected]) -
The first thing I would do is download the crawl report as an Excel sheet. You can do this from your crawl report page.
From there, sort by the 404 error column, bringing "True" to the top. The top of the list is now the broken URLs. One of the very last columns on the right is the "referrer" column. This will show you the page where Roger is getting the bad link from.
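If the export is large, the same sort-and-filter step can be scripted. This is a minimal sketch using only the standard library; the column names ("404", "URL", "Referrer") are assumptions, so check them against the headers in your actual Moz export:

```python
# Hypothetical sketch: pull (URL, referrer) pairs for rows flagged as 404
# errors in a Moz crawl export. Column names are assumptions -- verify
# them against the headers of your own CSV before running.
import csv

def broken_links(csv_path, error_col="404", url_col="URL", referrer_col="Referrer"):
    """Return (url, referrer) pairs for rows whose error column is "True"."""
    results = []
    with open(csv_path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            if row.get(error_col, "").strip().lower() == "true":
                results.append((row.get(url_col, ""), row.get(referrer_col, "")))
    return results
```

Each pair tells you which page (the referrer) links to the broken URL, so you can fix the bad link at its source.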
Make Sense?
Related Questions
-
How do I combat title tag issues in Moz because of long company/site name? [Currently using Yoast SEO Plugin]
I've only just started to dip my toe into the SEO pool, so please forgive me if this is a silly question. I've been working heavily in the Moz tool and the Yoast SEO plugin to clean up the meta descriptions on our site, as there were many pages without descriptions in place (we just transferred from a different CMS to WordPress within the last month or so). Now I'm noticing on our Site Crawl that we have 60+ pages with title elements that are considered too long. When I go in to fix these issues on our blog posts (as this is where most of the issues lie), Yoast is already giving me the green light on the title lengths. I've discovered that the reason Moz is telling us that many of our titles are too long is that "| Parker Staffing Services" is added to the end of our title tags automatically. Deep breath... Now that you've heard all of the backstory: is there a way I can prevent that site title (which is 26 characters all by itself! Talk about limiting!) from displaying on search engines when our blog posts or pages show up in search results? I've poked around in the Titles & Metas settings that come with Yoast, but I fear making too many changes will result in me screwing something up. Help an SEO beginner out? 🙂
Moz Pro | | giaciccone0 -
Pages with Temporary Redirects on pages that don't exist!
Hi there, another obvious question to some, I hope. I ran my first report using the Moz crawler and I have a bunch of pages with temporary redirects showing up as a medium-level issue. Trouble is, the pages don't exist, so they are being redirected to my custom 404 page. For example, I have a URL in the report being called up from lord only knows where: www.domain.com/pdf/home.aspx. This doesn't exist; I have only one home.aspx page and it's in the root directory! But it gives a temp redirect to my 404 page, as I would expect, and that then leads to a Moz error as outlined. So basically you could make up any random URL and it would give this error, so I am trying to work out how to deal with it before Google starts to notice, or before a competitor starts throwing all kinds of requests at my site generating these errors. Any steering on this would be much appreciated!
Moz Pro | | Raptor-crew0 -
What to do with a site of >50,000 pages vs. crawl limit?
What happens if you have a site in your Moz Pro campaign that has more than 50,000 pages? Would it be better to choose a sub-folder of the site to get a thorough look at that sub-folder? I have a few different large government websites that I'm tracking to see how they are faring in rankings and SEO. They are not my own websites. I want to see how these agencies are doing compared to what the public searches for on technical topics and social issues that the agencies manage. I'm an academic looking at science communication. I am in the process of re-setting up my campaigns to get better data than I have been getting -- I am a newbie to SEO and the campaigns I slapped together a few months ago need to be set up better, such as all on the same day, making sure I've set whether www or not counts for what ranks, refining my keywords, etc. I am stumped on what to do about the agency websites being really huge, and what the options are to get good data in light of the 50,000-page crawl limit. Here is an example of what I mean: to see how EPA is doing in searches related to air quality, ideally I'd track all of EPA's web presence. www.epa.gov has 560,000 pages -- if I put in www.epa.gov for a campaign, what happens with the site having so many more pages than the 50,000 crawl limit? What do I miss out on? Can I "trust" what I get? www.epa.gov/air has only 1,450 pages, so if I choose this for what I track in a campaign, the crawl will cover that sub-folder completely, and I am getting a complete picture of this air-focused sub-folder... but (1) I'll miss out on air-related pages in other sub-folders of www.epa.gov, and (2) it seems like I have so much of the 50,000-page crawl limit that I'm not using and could be using. (However, maybe that's not quite true -- I'd also be tracking other sites as competitors, e.g. non-profits that advocate on air quality and industry air quality sites -- and maybe those competitors count towards the 50,000-page crawl limit and would get me up to the limit? How do the competitors you choose figure into the crawl limit?) Any opinions on what I should do in general in this kind of situation? The small sub-folder vs. the full humongous site vs. some other way to go here that I'm not thinking of?
Moz Pro | | scienceisrad0 -
My site's domain authority is 1. Why is that?
Hi guys, my website's domain authority is 1 no matter whether I try www or non-www. Why is that? Can you please help? Thanks a lot in advance. http://www.opensiteexplorer.org/links?site=autoproject.com.au
http://www.opensiteexplorer.org/links?site=www.autoproject.com.au Jazz
Moz Pro | | JazzJack0 -
Why hasn't Rank Tracker worked for 2 days?
It doesn't work; it just keeps reloading and I get an error, "unable to retrieve your ranking" or something like that. Anyone have an answer?
Moz Pro | | sasa0370 -
How do I get the Page Authority of individual URLs in my exported (CSV) crawl reports?
I need to prioritize fixes somehow. It seems the best way to do this would be to filter my exported crawl report by the Page Authority of each URL with an error/issue. However, Page Authority doesn't seem to be included in the crawl report's CSV file. Am I missing something?
Moz Pro | | Twilio0 -
It won't let me print the second or third pages of site errors.
When I save the site errors page in PDF format, it won't let me print the second page or any of the other pages containing content about my site. Does anyone know why?
Moz Pro | | ibex0 -
Duplicate page content report finds duplicates, but the pages don't show duplication
My duplicate page report shows 376 pages with duplicate content. After reviewing the pages the report claims have duplicate content, I can't find duplications. Could this be an error, or is there some source code that doesn't display that could be causing this issue?
Moz Pro | | noonzie0