What should I do with all these 404 pages?
-
I have a website that I'm currently working on. It had been fairly dormant for a while and has just been given a facelift and brought back to life. I have some questions below about dealing with 404 pages.
In Google WMT/Search Console there are reports of thousands of 404 pages going back some years. It says there are over 5k in total, but it seems I am only able to download 1k or so from WMT.
I ran a crawl test with Moz and the report it sent back only had a few hundred 404s in it. Why is that?
I'm also not sure what to do with all the 404 pages. I know that both Google and Moz recommend a mixture of leaving some as 404s and redirecting others, and I'd like to know what the community here suggests.
The 404s are a mix of the following:
Blog posts and articles that have disappeared (some of these have good backlinks too)
URLs that look like they used to belong to users (the site used to have a forum), which were deleted when the forum was removed; some of them look like they were removed for spam reasons too, e.g. /user/buy-cheap-meds-online and others like that
Other URLs like /node/4455 (or some other random number)
I'm thinking I should permanently redirect the blog posts to the homepage or the blog, but I'm not sure what to do about all the others. Surely having so many 404s like this is hurting my crawl rate?
-
OK, will try that, thanks
-
Thanks, I had planned to do that; there are so many of them though
-
For the posts and articles with good backlinks: does that content still make sense on your renewed site? If so, I'd bring them back. If you don't have the content, you can try the Wayback Machine. The same goes for any old posts you think would be useful to your new readers.
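As a side note, the Wayback Machine has a public availability API (archive.org's documented `/wayback/available` endpoint) you can query to check whether a snapshot of an old URL exists before deciding what to restore. A minimal stdlib-only sketch; the helper names and example URL are just for illustration:

```python
import json
import urllib.parse
import urllib.request

WAYBACK_API = "https://archive.org/wayback/available"

def availability_url(page_url: str) -> str:
    """Build the Wayback availability API query for a given URL."""
    return WAYBACK_API + "?" + urllib.parse.urlencode({"url": page_url})

def closest_snapshot(api_response: dict):
    """Return the URL of the closest archived snapshot, or None if there isn't one."""
    snap = api_response.get("archived_snapshots", {}).get("closest")
    if snap and snap.get("available"):
        return snap.get("url")
    return None

def fetch_closest_snapshot(page_url: str):
    """Live lookup (requires network): fetch the API response and extract the snapshot URL."""
    with urllib.request.urlopen(availability_url(page_url)) as resp:
        return closest_snapshot(json.load(resp))
```

Feeding each dead blog-post URL through something like `fetch_closest_snapshot` tells you which ones still have recoverable copies.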
The problem with redirecting a bunch of 404s to the same page (like the homepage) is that you end up with soft 404s and not a very good user experience. Pick the ones that correspond to specific pages that you have on the updated site and redirect those to the equivalent page.
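For the ones that do have an equivalent page, the redirects can be as simple as one rule per retired URL. A hedged sketch in Apache .htaccess terms (the paths here are made up for illustration; adapt to whatever server your CMS runs on):

```apache
# One 301 per retired URL, each pointing at its closest equivalent --
# not everything funnelled to the homepage.
Redirect 301 /blog/old-guide-to-widgets /blog/new-guide-to-widgets
Redirect 301 /articles/2012-review /blog/annual-reviews
```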
Anything else, I'd let 404. A bunch of old posts with no good links, whose content you no longer have a use for on the site, don't represent value to searchers; those pages will just drop out of Google's index (and crawl attempts) over time.
[This isn't just theoretical. We changed domains back in November and we had lots of old content going back 10+ years, which is ancient history for a financial publisher. I ended up with about 6,000 404s. We are now down to about 4,000 as pages drop off. Google crawls us quickly and regularly, and our organic traffic is up 86.49%.]
-
Remove all internal links leading to 404 pages. And if you're using a redirect, don't leave internal links pointing at the old URL so that visitors hop through the redirect (old URL → 301/302 → new page) either; link straight to the new URL.
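One way to find those internal links is to parse each page's HTML and compare the hrefs against your list of known-404 paths. A minimal sketch using only the Python standard library; the sample page and dead-URL set are placeholders:

```python
from html.parser import HTMLParser

class LinkCollector(HTMLParser):
    """Collect the href value of every <a> tag in a page."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

def find_dead_links(html: str, dead_paths: set) -> list:
    """Return the links in `html` that point at known-404 paths."""
    collector = LinkCollector()
    collector.feed(html)
    return [href for href in collector.links if href in dead_paths]

# Example: /node/4455 is known to 404, /blog/ is fine.
page = '<a href="/node/4455">old</a> <a href="/blog/">blog</a>'
print(find_dead_links(page, {"/node/4455"}))  # ['/node/4455']
```

Run over every page (or over your crawler's export), this gives you the internal links to update rather than leave pointing through redirects.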