Site Crawl 4xx Errors?
-
Hello!
When I check our website's critical crawler issues with Moz Site Crawler, I'm seeing over 1000 pages with a 4xx error.
All of the pages showing a 4xx error appear to be the brand and product pages on our website, but with /URL appended to the end of each permalink.
For example, we have a page on our site for a brand called Davinci. The URL is https://kannakart.com/davinci/. In the site crawler, I'm seeing the 4xx for this URL: https://kannakart.com/davinci/URL.
Could this be a plugin on our site that is generating these URLs? If they're going to be an issue, I'd like to remove them. However, I'm not sure exactly where to begin.
Thanks in advance for the help,
-Andrew
-
Paul,
Makes perfect sense. Thank you for the clear answer and explanation!
-
You have a defective link coded for the FAQ link in your site's footer, Andrew. It's currently coded with the literal text "URL" as the href, which then gets parsed as a relative link. This means the link resolves to www.example.com/currentpageurl/URL.
Because that link is in the sitewide footer, every page generates that link as a link to itself with /URL added to the end, which is what you're seeing detected as all those 404s.
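You can see the resolution behavior with Python's urllib.parse.urljoin, which follows the same relative-URL rules browsers use. This is just a demonstration of the mechanism, using the Davinci page from the question as the example base:

```python
from urllib.parse import urljoin

# An href with no scheme and no leading slash resolves relative to the
# current page, so the literal text "URL" becomes <page>/URL everywhere.
page = "https://kannakart.com/davinci/"
broken = urljoin(page, "URL")
print(broken)  # https://kannakart.com/davinci/URL
```

The same footer link therefore produces a different (and equally broken) URL on every single page of the site, which is why the crawler reports 1000+ distinct 4xx URLs from one bad href.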
This is important to fix not only because the FAQ link in the footer is broken, but also because you've doubled the number of URLs the site presents to search crawlers, meaning they waste time following thousands of useless links instead of focusing on real pages.
Fortunately, it's a quick fix: find that link in your footer template and correct it.
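If you want to double-check the footer markup before and after the fix, here's a minimal sketch using Python's stdlib HTMLParser that flags hrefs which will resolve relative to the current page. The footer_html string is a hypothetical stand-in for your actual footer template output:

```python
from html.parser import HTMLParser

class RelativeHrefFinder(HTMLParser):
    """Collects anchor hrefs that will resolve relative to the current page."""

    def __init__(self):
        super().__init__()
        self.suspects = []

    def handle_starttag(self, tag, attrs):
        if tag != "a":
            return
        for name, value in attrs:
            # Absolute URLs, root-relative paths, fragments, and mailto
            # links are fine; anything else resolves against the page URL.
            if name == "href" and value and not value.startswith(
                ("http://", "https://", "/", "#", "mailto:")
            ):
                self.suspects.append(value)

# Hypothetical footer markup illustrating the bug described above.
footer_html = '<footer><a href="URL">FAQ</a> <a href="/contact/">Contact</a></footer>'
finder = RelativeHrefFinder()
finder.feed(footer_html)
print(finder.suspects)  # ['URL']
```

Paste in the real footer HTML and an empty suspects list confirms the fix took.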
Make sense?
Paul