Question about Crawl Diagnostics - 4xx (Client Error) report
-
Hi there,
I was wondering if there is a way to find out, from the 4xx (Client Error) report, the originating page where a broken link is found. I can't find a way to know that, and without that information it is very difficult for me to fix any possible 404-related issues on my website.
Any thoughts are very welcome! Thank you in advance.
-
Thank you Paul, I have just voted and left a comment! Let's cross our fingers! That tool is great, but it can be much improved.
Thank you again.
-
I've created a Feature Request indicating this kind of info needs better documentation, Fabrizio. Would you go to the request I made, add your vote, and perhaps leave a comment describing how much time you lost trying to find your solution?
https://seomoz.zendesk.com/entries/23715458-Better-documentation-of-features-on-tool-report-pages
I've suggested this a number of times to staff while answering questions like this in the past, but in fairness, I can see how requests here in Q&A might not get passed to the folks responsible for considering these kinds of changes. So if we help them by using the Feature Request, we can make it easier for them, and hopefully for other users in the future.
Paul
-
Yes, thank you Paul, that's actually what I didn't know! I think SEOmoz should make that clear somewhere... I spent almost an hour trying to find that information on the site!
Thank you again very much.
-
Unfortunately the report page doesn't explain it, Fabrizio, but the information you are looking for is available if you download the full report as a CSV. Once you open the spreadsheet, you'll find that the very last column is titled "Referrer" and lists the page the broken link is found on, so you can go and fix it.
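If you want to act on that Referrer column programmatically rather than scanning the spreadsheet by hand, here's a minimal sketch that groups each broken URL under the page linking to it. The column names "URL" and "Referrer" are assumptions based on the description above; check them against the headers in your actual download.

```python
import csv
from collections import defaultdict

def broken_links_by_referrer(path):
    """Group each broken (4xx) URL under the page that links to it."""
    by_referrer = defaultdict(list)
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            # "Referrer" is described above as the last column of the export;
            # "URL" is assumed to hold the broken target.
            by_referrer[row.get("Referrer", "")].append(row.get("URL", ""))
    return dict(by_referrer)
```

That gives you one fix-it list per page, so you can correct all the broken links on a page in a single edit.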
Is that what you were looking for?
Paul
-
I've actually wondered this too, because my SEOmoz report shows 404s that Google Webmaster Tools does not, and I'd like to know how mozbot found them while googlebot didn't.
-
Google Webmaster Tools reports where 404 links originate from. I don't think the SEOmoz bundle can help you. GWMT will for sure.
Related Questions
-
Magento creating odd URLs, no idea why. GWT reporting 404 errors
Hi Mozzers! Problem 1: GWT and Moz are both reporting approximately one hundred 404 errors for certain URLs. Examples are shown below. We have no idea why or how these URLs are being created in Magento. Any hypothesis on the matter would be appreciated. The domain name in question is http://www.artorca.com/. These are valid URLs if /privacy is removed. The first URL is for a product, the second for an artist profile, and the third for a CMS page:
1. semi-abstract-landscape/privacy
2. jose-de-la-barra/privacy
3. seller-guide/privacy
What may be the source of these URLs? What solution should we implement to fix the existing 404s? Should 301 redirects be fine? Problem 2: Website pages also seem to be accessible with index.php in the URL, e.g. artorca.com/index.php/URL. Will this cause a duplicate content issue? Should we implement 301s, canonicals, or just leave it as is? Cheers! MozAddict
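As a rough sketch of the 301 mapping being asked about (a hypothetical helper, not actual Magento code): strip the stray /privacy suffix to recover the valid URL, and let anything else fall through to a normal 404.

```python
def redirect_target(path):
    """Return the 301 target for a stray '/privacy'-suffixed URL, else None."""
    suffix = "/privacy"
    if path.endswith(suffix) and len(path) > len(suffix):
        # "semi-abstract-landscape/privacy" -> "semi-abstract-landscape"
        return path[: -len(suffix)]
    return None  # not one of the stray URLs; let it 404 normally
```

The same suffix-stripping rule could be expressed as a rewrite rule in the web server instead; this is just the logic in one place.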
Moz Pro | MozAddict
-
Why is Moz saying I have a 404 error?
I recently asked a question on here about how to fix my 404 error. I figured it out, and now Webmaster Tools says it's not there any more. This has been confirmed by keeping watch for about a week now. But Moz just crawled my site today and is for some reason still showing an error. Am I missing something here, or is Moz a little slow sometimes?
Moz Pro | NateStewart
-
Crawl Errors and Notices drop to zero
Hi all, after setting up a campaign in Moz the crawl is successful, and it showed the Errors and Warnings in Crawl Diagnostics (each one had about 40-50), but after a few days the number dropped to zero. Only the "Notices" seem to stay normal, with a slight drop since the campaign was set up, but not dropping to zero. I set this campaign up in a colleague's account and the same thing happened shortly after setup. I didn't find any Q&A already posted, so any insight is appreciated!
Moz Pro | Vanessa12
-
Why does Crawl Diagnostics report this as duplicate content?
Hi guys, we've been addressing a duplicate content problem on our site over the past few weeks. Lately we've implemented rel canonical tags in various parts of our ecommerce store and have been observing the effects by tracking changes in both SEOmoz and Webmaster Tools. Although our duplicate content errors are definitely decreasing, I can't help but wonder why some URLs are still being flagged with duplicate content by the SEOmoz crawler. Here's an example, taken directly from our Crawl Diagnostics report:
URL with 4 duplicate content errors:
/safety-lights.html
Duplicate content URLs:
/safety-lights.html?cat=78&price=-100
/safety-lights.html?cat=78&dir=desc&order=position
/safety-lights.html?cat=78
/safety-lights.html?manufacturer=514
What I don't understand is that all of the URLs with URL parameters have a rel canonical tag pointing to the 'real' URL /safety-lights.html. So why is the SEOmoz crawler still flagging this as duplicate content?
Moz Pro | yacpro13
-
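The canonicalization described in the question above can be sketched as follows: every parameterized variant should emit a rel canonical tag pointing at the query-free URL. This is a minimal illustration of the mapping, not the store's actual template code.

```python
from urllib.parse import urlsplit, urlunsplit

def canonical_href(url):
    """Strip the query string so every parameterized variant shares one canonical URL."""
    s = urlsplit(url)
    return urlunsplit((s.scheme, s.netloc, s.path, "", ""))

def canonical_tag(url):
    """Build the rel canonical tag a parameterized page would emit."""
    return '<link rel="canonical" href="%s" />' % canonical_href(url)
```

For example, canonical_href("https://example.com/safety-lights.html?cat=78") yields "https://example.com/safety-lights.html", which is exactly the relationship the question describes.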
Crawl Diagnostics returning duplicate content based on session id
I'm just starting to dig into Crawl Diagnostics and it is returning quite a few errors. Primarily, the crawl is indicating duplicate content (page titles, meta tags, etc.) because of a session id in the URL. I have set up a URL parameter in Google Webmaster Tools to help Google recognize the existence of this session id. Is there any way to tell the SEOmoz spider the same thing? I'd like to get rid of these errors since I've already handled them for the most part.
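One way to handle this on the site side, rather than per crawler, is to normalize URLs by dropping the session parameter before they appear in links or canonical tags, so every session shares the same URL. A minimal sketch; the parameter names in SESSION_PARAMS are assumptions, so substitute whatever your platform actually uses.

```python
from urllib.parse import urlsplit, urlunsplit, parse_qsl, urlencode

# Assumed session-parameter names; replace with your platform's actual name.
SESSION_PARAMS = {"sid", "sessionid", "phpsessid"}

def strip_session_id(url):
    """Remove session-id query parameters while preserving all other parameters."""
    s = urlsplit(url)
    kept = [(k, v) for k, v in parse_qsl(s.query, keep_blank_values=True)
            if k.lower() not in SESSION_PARAMS]
    return urlunsplit((s.scheme, s.netloc, s.path, urlencode(kept), s.fragment))
```

With session ids stripped from internal links (or handled via a canonical tag pointing at the stripped URL), any crawler sees one URL per page, regardless of whether it supports a URL-parameter setting.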
Moz Pro | csingsaas
-
I have another duplicate page content question: why do my blog tags come up as duplicates when my page gets crawled, and how do I fix it?
I have a blog linked to my web page, and when rogerbot crawls my website it considers the tag pages for my blog duplicate content. Is there any way I can fix this? Thanks for your advice.
Moz Pro | PCTechGuy2012
-
Our recently searched Open Site Explorer backlink reports are identical to last month's across multiple clients. Why is this happening?
When processing our clients' backlink data for this month (searches done on 28th December), we're finding that the information gathered from Open Site Explorer is identical to the data found in searches done at the end of November. This appears to have happened for the reports we use for every single client, so it must be more than a coincidence. From our perspective, it's almost as if no data has been tracked since the last time we gathered backlink reports. Could you help us get to the bottom of this?
Moz Pro | queryclick
-
Trifecta Report (domain)
Recently I have found that the data in the Trifecta report is becoming less complete. For the last few sites, I found that the report returned about half of the metrics it was looking for. For instance, it mentioned that one of the domains did not have a DMOZ or Wikipedia link/page even though it had both. Also, it rarely returns the Yahoo Site Explorer metrics, even though the data is there when clicking to see the source. Are you guys planning on phasing out this tool? If not, it seems like some work needs to be done on cleaning it up. Thanks!
Moz Pro | kchandler1