4xx Client Error
-
I have 2 pages showing as errors in my Crawl Diagnostics, but I have no idea where these pages have come from; they don't exist on my site.
I have done a site-wide search for them and they don't appear to be referenced or linked to from anywhere on my site, so where is SEOmoz pulling this info from?
The two links are:
http://www.adgenerator.co.uk/acessibility.asp
http://www.adgenerator.co.uk/reseller-application.asp
The first link has a spelling mistake, and the second link should have an "S" on the end of "application".
-
Is your site verified in Google? Try logging into Google Webmaster Tools and looking at the 404 report there. It will often list the incoming link for a 404 error.
-
The only other time I've seen this happen was with inbound links that Google found on other sites. You can try searching Google itself; however, you may or may not be able to discover where they exist.
-
Nope, just checked the sitemap and there's no reference to them on there either.
-
Thanks, I will check the sitemap. I have done a find-all search and cannot find any reference to those URLs anywhere in my site.
Where does SEOmoz pull this info from?
-
Google picks up links in several ways. If you are 100% sure these links are not in your own site somewhere (even on a single page, either intentionally or accidentally), check your sitemap.xml file, but be aware that Google also picks up links that come from third-party websites.
If you can't find the source, consider setting up a 301 redirect to resolve both of them; alternatively, don't worry about them if you only have 2 listed and your site is otherwise doing well.
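If you do set up the redirects, a quick sanity check helps. Below is a minimal sketch using Python's third-party `requests` library (an assumption; nothing in this thread specifies tooling), with the corrected target URLs inferred from the question above:

```python
import requests

# The two URLs reported in Crawl Diagnostics, mapped to the pages they
# should permanently redirect to. The targets are my reading of the
# question ("acessibility" misspelled; "application" missing an "S").
redirects = {
    "http://www.adgenerator.co.uk/acessibility.asp":
        "http://www.adgenerator.co.uk/accessibility.asp",
    "http://www.adgenerator.co.uk/reseller-application.asp":
        "http://www.adgenerator.co.uk/reseller-applications.asp",
}

for source, expected in redirects.items():
    # allow_redirects=False lets us inspect the raw status code and
    # Location header instead of following the chain to the final page.
    response = requests.get(source, allow_redirects=False, timeout=10)
    location = response.headers.get("Location", "")
    ok = response.status_code == 301 and location == expected
    print(f"{source}\n  -> {response.status_code} {location or '(no Location)'}"
          f" {'OK' if ok else 'NEEDS ATTENTION'}")
```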
Related Questions
-
"Moz encountered an error on one or more pages on your site" Error
I have been receiving this error for a while: "Moz encountered an error on one or more pages on your site." It's a multi-lingual WordPress website, the robots.txt is set to allow crawlers on all links, and I have followed the same process as for other websites I've done, yet I'm receiving this error for this site.
Technical SEO | JustinZimri
-
Google PageSpeed Insights Error
Google PageSpeed Insights is showing this error: "An error occurred while fetching or analyzing the page." URL: http://westpalmbailbonds.com/ What should I do to fix this? Thanks in advance.
Technical SEO | Beachflower
-
Error after scanning with browseo.net
Good day! I have done a scan on my site with browseo.net (and a few other similar scanners) and got the mess seen in the screenshot. I've tried deleting all the files in the website folder and replacing them with a single image file, but it still shows the same error. What could this mean, and should I be worried?

P.S. Found my answer after contacting the helpful support of browseo.net:

"It took me some time to figure out what was going on, but it seems as if you are mixing content types. Browsers are quite smart when it comes to interpreting the contents, so they are much more forgiving than we are. Browseo crawls your website and detects that you are setting utf-8 as part of the meta information. By doing so, it converts the content into a different character encoding than what it is supposed to be. In a quick test, I tried to fetch the content type based on the response object, but without any success. So I am suspecting that in reality your content is not utf-8 encoded when you parse it into Joomla. The wrong character type is then carried over for the body (which explains why we can still read the header information). All of this explains the error.

In order for it to work in browseo, you'd have to set the content type correctly, or convert your own content into utf-8 before parsing. It may be that you are either storing this incorrectly in the database (check your db settings for a content type other than utf-8) or that other settings are a bit messed up. The good news is that Google is probably interpreting your website correctly, so you won't be punished for this, but perhaps something to look into..." -- Paul Piper
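To make the support reply's diagnosis concrete, here is a minimal standard-library sketch (my own, not from browseo.net) that fetches a page and tests whether the body actually decodes as the charset it declares; the URL is a placeholder for the page browseo flagged:

```python
import re
import urllib.request

def check_declared_charset(url):
    """Fetch a page and test whether its bytes actually decode as the
    charset it declares (HTTP Content-Type header first, then meta tag)."""
    with urllib.request.urlopen(url, timeout=10) as resp:
        raw = resp.read()
        header = resp.headers.get("Content-Type", "")

    match = re.search(r"charset=([\w-]+)", header)
    if match:
        declared = match.group(1)
    else:
        meta = re.search(rb'charset=["\']?([\w-]+)', raw)
        declared = meta.group(1).decode("ascii") if meta else None

    if not declared:
        print("No charset declared in header or meta tags.")
        return
    try:
        raw.decode(declared)
        print(f"OK: body decodes as the declared charset '{declared}'.")
    except (UnicodeDecodeError, LookupError):
        print(f"Mismatch: body does not decode as declared '{declared}'.")

# Hypothetical URL -- substitute the page that browseo.net flagged.
check_declared_charset("http://example.com/")
```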
Technical SEO | AlexElks
-
Error report in Bing: "Evaluated size of HTML..."
Hi, whilst checking Bing's SEO analyser I got this error message for our page www.tidy-books.co.uk/childrens-bookcases: "Evaluated size of HTML is estimated to be over 125 KB and risks not being fully cached. (Issue marker for this rule is not visible in the current view)" Just wondering what needs to be done about it and what it actually means? Thanks
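The warning concerns the size of the page's HTML payload, not its images or scripts. A quick sketch to measure it for the page in question -- the threshold constant comes from Bing's message, the rest is my own illustration:

```python
import urllib.request

URL = "http://www.tidy-books.co.uk/childrens-bookcases"
LIMIT_KB = 125  # the threshold Bing's SEO analyser warns about

# urllib does not request gzip by default, so this measures the
# uncompressed HTML payload.
with urllib.request.urlopen(URL, timeout=10) as resp:
    html_bytes = resp.read()

size_kb = len(html_bytes) / 1024
status = "over" if size_kb > LIMIT_KB else "under"
print(f"HTML payload: {size_kb:.1f} KB ({status} the {LIMIT_KB} KB threshold)")
```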
Technical SEO | tidybooks
-
Has Google Stopped Listing URLs with Crawl Errors in Webmaster Tools?
I went to Google Webmaster Tools this morning and found that one of my clients had 11 crawl errors. However, Webmaster Tools is not showing which URLs are experiencing the errors, which it used to do. (I checked several other clients that I manage, and they also list crawl errors without showing the specific URLs.) Does anyone know how I can find out which URLs are experiencing problems? (I checked with Bing Webmaster Tools and the number of errors is different.)
Technical SEO | TopFloor
-
How do crawl errors from the SEOmoz toolset affect rankings?
Hello - The other day I presented the crawl diagnostics report to a client. We identified duplicate page title errors, missing meta description errors, and duplicate content errors. After reviewing the report, we presented it to the client's web company, which operates a closed-source CMS. Their response was that these errors are not worth fixing and in fact are not hurting the site. We are having issues getting the errors fixed, and I would like your opinion on this matter. My question is: how bad are these errors? Should we not fix them? Should they be fixed? Will fixing the errors have an impact on our site's rankings? Personally, I think the question is silly. I mean, the errors were found using the SEOmoz toolkit; these errors have to be affecting SEO... right? The attached image is the result of the Crawl Diagnostics, which crawled 1,400 pages. NOTE: Most of the errors are coming from pages like blog/archive/2011-07/page-2, /blog/category/xxxxx-xxxxxx-xxxxxxx/page-2, and testimonials/147/xxxxx--xxxxx (xxxx represents information unique to the client). Thanks for your insight!
Technical SEO | Gabe
-
Seeking help correcting a large number of 404 errors; 95% traffic halt
Hi,

The following GWT screen tells a bit of the story: site: http://bit.ly/mrgdD0 http://www.diigo.com/item/image/1dbpl/wrbp

On about Feb 8 I decided to fix a large number of 'duplicate title' warnings being reported in GWT "HTML Suggestions" -- these were for URLs which differed only in parameter case, and which had canonical tags, but were still reported as duplicates in GWT. My traffic had been steady at about 1,000 clicks/day. At midnight on 2/10, Google traffic completely halted, down to 11 clicks/day. I submitted a reconsideration request and was told 'no manual penalty'. Also, the sitemap indexes in GWT showed 'pending' 24x7 starting then.

By about the 18th, the 'duplicate titles' count had dropped to about 600 or so... and the next day traffic hopped right back to about 800 clicks/day -- for a week -- then stopped again, down to 10/day, a week later, on the 26th.

I then noticed that GWT was reporting 20K page-not-found errors -- this has now grown to 35K such errors! I realized that bogus internal links were being generated because I had failed to disable PHP warning messages... so I disabled PHP warnings and fixed what I thought was the source of the errors. However, the not-found count continues to climb -- and I don't know where these bad internal links are coming from, because the GWT report lists the link sources as 'unavailable'.

I've been through a similar problem last year and it took months (4) for Google to digest all the bogus pages and recover. If I have to wait that long again, I will lose much $$. Assuming that the large number of 404 internal errors is the reason for the sudden shutoff:

a) How can I verify the source of these internal links, given that Google says the source pages are 'unavailable'?

b) Most critically, how can I do a 'reset' and have Google re-spider my site -- or block the signature of these URLs -- in order to get rid of these errors ASAP?

Thanks
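On question (a), one way around the 'unavailable' sources in GWT is to crawl your own site and record which page each broken link appears on. Below is a rough standard-library sketch of that idea (the start URL is a placeholder, and a real run would want politeness delays and a robots.txt check):

```python
import urllib.error
import urllib.request
from html.parser import HTMLParser
from urllib.parse import urljoin, urlparse

START_URL = "http://www.example.com/"  # placeholder -- use your own root URL
MAX_PAGES = 500                        # safety cap so the crawl terminates

class LinkCollector(HTMLParser):
    """Collects every href found in <a> tags on a page."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        href = dict(attrs).get("href")
        if tag == "a" and href:
            self.links.append(href)

def fetch(url):
    """Return (status, body); 404s come back via HTTPError."""
    try:
        with urllib.request.urlopen(url, timeout=10) as resp:
            return resp.status, resp.read().decode("utf-8", "replace")
    except urllib.error.HTTPError as err:
        return err.code, ""
    except Exception:
        return None, ""  # timeouts, malformed URLs, etc.

host = urlparse(START_URL).netloc
queue, statuses, sources = [START_URL], {}, {}
while queue and len(statuses) < MAX_PAGES:
    page = queue.pop()
    if page in statuses:
        continue
    status, body = fetch(page)
    statuses[page] = status
    if status != 200:
        continue
    collector = LinkCollector()
    collector.feed(body)
    for href in collector.links:
        target = urljoin(page, href).split("#")[0]
        if urlparse(target).netloc == host and target not in statuses:
            sources.setdefault(target, page)  # remember who linked to it first
            queue.append(target)

for url, status in statuses.items():
    if status == 404:
        print(f"404: {url}  (first linked from {sources.get(url, START_URL)})")
```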
Technical SEO | mantucket
-
Crawl Errors and Duplicate Content
SEOmoz's crawl tool is telling me that I have duplicate content at "www.mydomain.com/pricing" and at "www.mydomain.com/pricing.aspx". Do you think this is just a glitch in the crawl tool (because obviously these two URLs are the same page rather than two separate ones), or do you think this is actually an error I need to worry about? If so, how do I fix it?
Technical SEO | MyNet