Weird 404 error
I have two 404 errors on my site.
The pages that are coming up as errors look like this:
www.mywebsite.com/a-page-not-belong-to-wordpress.html
www.mywebsite.com/another-page-not-belong-to-wordpress.html
Just wondering if I can delete these pages, and if so, how?
Regards
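For stray URLs like these that were never real pages, the usual options are to leave them returning 404, 301-redirect them to a relevant live page, or mark them as permanently gone with a 410. Below is a minimal .htaccess sketch of the latter two options, assuming an Apache server with mod_alias available and using the placeholder paths from the question, which would need to match the actual requested URLs:

# Option 1: permanently redirect the stray URL to a relevant live page (here the home page)
Redirect 301 /a-page-not-belong-to-wordpress.html /
# Option 2: tell crawlers the page is gone for good (410)
Redirect gone /another-page-not-belong-to-wordpress.html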
Related Questions
How to fix an 803 error?
Error Code 803: Incomplete HTTP Response Received. How can I fix this error?
Technical SEO | netprodjb
Salvaging links from WMT “Crawl Errors” list?
When someone links to your website but makes a typo while doing it, those broken inbound links show up in Google Webmaster Tools in the Crawl Errors section as "Not Found". Often they are easy to salvage by just adding a 301 redirect in the htaccess file. But sometimes the typo is really weird, or the link source looks a little scary, and that's what I need your help with.

First, the weird typo problem. If it is something easy, like they just lost the last part of the URL (such as www.mydomain.com/pagenam), then I fix it in htaccess this way:

RewriteCond %{HTTP_HOST} ^mydomain.com$ [OR]
RewriteCond %{HTTP_HOST} ^www.mydomain.com$
RewriteRule ^pagenam$ "http://www.mydomain.com/pagename.html" [R=301,L]

But what about when the last part of the URL is really screwed up, especially with non-text characters, like these?

www.mydomain.com/pagename1.htmlsale
www.mydomain.com/pagename2.htmlhttp://
www.mydomain.com/pagename3.html"
www.mydomain.com/pagename4.html/

How should the htaccess RewriteRule be written to send these oddballs to the individual pages they were supposed to reach, without the typo?

Second, is there a quick and easy method or tool to tell us whether a linking domain is good or spammy? I have incoming broken links from sites like these:

www.webutation.net
titlesaurus.com
www.webstatsdomain.com
www.ericksontribune.com
www.addondashboard.com
search.wiki.gov.cn
www.mixeet.com
dinasdesignsgraphics.com

Your help is greatly appreciated. Thanks! Greg
Technical SEO | GregB123
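For the malformed URLs in the question above, here is a hedged sketch of one approach, assuming Apache mod_rewrite in an .htaccess file at the web root and using the hypothetical page names from the question. It strips anything that follows ".html" and redirects to the clean URL, so it is deliberately broad and should be tested before use:

RewriteEngine On
# Requests with junk appended after ".html" (e.g. pagename1.htmlsale,
# pagename3.html", pagename4.html/) are 301-redirected to the clean page.
# The pattern requires at least one extra character after ".html", so valid
# .html URLs are left alone and no redirect loop is created.
RewriteRule ^([^/]+\.html).+$ http://www.mydomain.com/$1 [R=301,L]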
Massive Increase in 404 Errors in GWT
Last June, we transitioned our site to the Magento platform. When we did so, we naturally got an increase in 404 errors for URLs that were not redirected (for a variety of reasons: we hadn't carried the product for years, Google no longer got the same string when it did a "search" on the site, etc.). We knew these would be there and were completely fine with them. We also got many 404s due to the way Magento had implemented their site map (putting in products that were not visible to customers, including all the different file paths to get to a product even though we use a flat structure, etc.). These were frustrating, but we did custom work on the site map and let Google resolve those many, many 404s on its own.

Sure enough, a few months went by and GWT started to clear out the 404s. All the poor, nonexistent links from the site map and missing links from the old site started disappearing from the crawl notices, and we slowly went from some 20k 404s to 4k 404s. Still a lot, but we were getting there.

Then, in the last 2 weeks, all of those links started showing up again in GWT and reporting as 404s. Now we have 38k 404s (way more than ever reported). I confirmed that these bad links are not showing up in our site map or anything, and I'm really not sure how Google found these again. I know, in general, these 404s don't hurt our site. But it just seems so odd. Is there any chance Google bots just randomly crawled a big ol' list of outdated links it hadn't tried for a while? And does anyone have any advice for clearing them out?
Technical SEO | Marketing.SCG
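One hedged illustration of "clearing them out": where retired product URLs share a recognizable pattern, returning a 410 (Gone) instead of a 404 tells Google the pages are permanently removed, which it generally drops from the error report faster. This is a generic mod_alias sketch, not Magento-specific advice, and the /old-catalog/ prefix is a made-up placeholder:

# Any URL under the retired /old-catalog/ path (placeholder) returns 410 Gone
RedirectMatch gone ^/old-catalog/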
What does this error mean?
We recently merged our Google+ and Google Local pages and sent a request to Webmaster Tools to connect the Google+ page to our website. The message was successfully sent. However, when clicking the 'Approve or reject this request' link, the following error message appears: 'Can't find associate request'. Anyone know what we are doing incorrectly? Thanks in advance for any insight.
Technical SEO | SEOSponge
Webmaster Tools Server Error
We recently did a build on our site, and after the build, one of the pieces of software we are using changed. This caused our server errors to go into the thousands. Right now Google Webmaster Tools gave us a list of the top 1,000 pages with errors, and we fixed them all. Is there a way to see the rest of the errors?
Technical SEO | DoRM
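Since the Webmaster Tools interface caps the downloadable list, one hedged alternative is to pull the server errors straight from the site's own access logs. The sketch below assumes an Apache combined-format log at a placeholder path, where the request path is the seventh field and the status code is the ninth:

# List the most frequent URLs returning 5xx status codes (log path is a placeholder)
awk '$9 ~ /^5/ { print $9, $7 }' /var/log/apache2/access.log | sort | uniq -c | sort -rn | head -50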
DNS error on webmaster tool
Google Webmaster Tools is showing a DNS error, and that is leading to many server errors (502, 500), almost 50+ in every crawl. Recently Google crawled one of our subdomains that we did not want Google to crawl. We blocked it via robots.txt and also removed all the URLs, and since then we have been having this issue. Any suggestions on how to fix this DNS error? Thanks in advance.
Technical SEO | tpt.com
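As background on the blocking step described above: robots.txt applies per hostname, so the subdomain needs its own file served from its own root; the main site's robots.txt does not cover it. A minimal sketch with a placeholder subdomain name follows; note that blocking crawling this way does not by itself cause or fix DNS-level errors, which point at name resolution rather than the pages themselves:

# Served at http://private.example.com/robots.txt (placeholder hostname)
User-agent: *
Disallow: /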
Why would SEOMoz and GWT report 404 errors for pages that are not 404ing?
Recently, I've noticed that nearly all of the 404 errors (not soft 404) reported in GWT actually resolve to a legitimate page. This was weird, but I thought it might just be old info, so I would go through the process of checking and "mark as fixed" as necessary. However, I noticed that SEOMoz is picking up on these 404 errors in the diagnostics of the site as well, and now I'm concerned with what the problem could be. Anyone have any insight into this? Rich
Technical SEO | secretstache
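One hedged way to double-check reports like this is to request the flagged URL directly and read the raw status line, since a page can look fine in a browser (for example via a custom error page or a soft redirect) while still answering 404 to crawlers. The URL below is a placeholder:

# Show only the response headers, including the HTTP status line
curl -I http://www.example.com/flagged-page/
# Repeat with Googlebot's user-agent string, in case the server answers crawlers differently
curl -I -A "Googlebot/2.1 (+http://www.google.com/bot.html)" http://www.example.com/flagged-page/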
How do crawl errors from the SEOmoz toolset affect rankings?
Hello - The other day I presented the crawl diagnostic report to a client. We identified duplicate page title errors, missing meta description errors, and duplicate content errors. After reviewing the report, we presented it to the client's web company, which operates a closed-source CMS. Their response was that these errors are not worth fixing and in fact are not hurting the site. We are having issues getting the errors fixed, and I would like your opinion on this matter.

My question is, how bad are these errors? Should we not fix them? Should they be fixed? Will fixing the errors have an impact on our site's rankings? Personally, I think the question is silly. I mean, the errors were found using the SEOmoz toolkit, so they have to be affecting SEO... right?

The attached image is the result of the Crawl Diagnostics that crawled 1,400 pages. NOTE: Most of the errors are coming from pages like:

blog/archive/2011-07/page-2
/blog/category/xxxxx-xxxxxx-xxxxxxx/page-2
testimonials/147/xxxxx--xxxxx

(xxxx represents information unique to the client) Thanks for your insight! c9Q33.png
Technical SEO | Gabe
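As a hedged illustration of what fixing the pagination-driven duplicates above might look like: giving each archive page in a series its own title and meta description is usually enough to clear the duplicate-title and missing-description warnings, without any other changes. The values below are invented placeholders:

<title>Blog Archive - July 2011 - Page 2 | Example Site</title>
<meta name="description" content="Older posts from July 2011, page 2 of the archive.">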