Crawl Errors from URL Parameter
-
Hello,
I'm running into an issue in SEOmoz's Crawl Diagnostics report. A large number of crawl errors are showing up on pages under /login.
I see URLs like site.com/login?r=http://.... along with several duplicate content issues for those URLs.
Seeing this, I checked Google Webmaster Tools (WMT) to see whether Google's crawler was reporting the same errors. It wasn't.
So what I ended up doing was editing robots.txt to disallow rogerbot.
It looks like this:
User-agent: rogerbot
Disallow: /login
However, SEOmoz has crawled the site again and is still picking up those URLs. Any ideas on how to fix this?
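For what it's worth, a `Disallow` rule is a prefix match, so it should cover the parameterised variants too. A quick local sanity check with Python's standard `urllib.robotparser` (using site.com as the placeholder domain from the question, not a real site) can confirm the rule does what's intended:

```python
# Verify that the robots.txt rules above block rogerbot from the
# parameterised /login URLs. site.com is a placeholder domain.
from urllib.robotparser import RobotFileParser

rules = """\
User-agent: rogerbot
Disallow: /login
"""

rp = RobotFileParser()
rp.parse(rules.splitlines())
rp.modified()  # mark the rules as loaded; can_fetch() refuses to answer otherwise

# Disallow is a prefix match, so the query-string variants are covered too.
print(rp.can_fetch("rogerbot", "http://site.com/login"))                 # False
print(rp.can_fetch("rogerbot", "http://site.com/login?r=http://a.com"))  # False
print(rp.can_fetch("rogerbot", "http://site.com/pricing"))               # True
```

If the check passes locally, the remaining question is whether the crawler has re-fetched robots.txt since the change: crawlers cache robots.txt, so an old copy may still be in effect for a crawl or two.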
Thanks!
-
Hi Tony,
I need more information from you in order to check this out. I'm going to send you a support ticket to reply to.
Thanks,
Joel.