Dealing with 404 pages
-
I built a blog on my root domain while I worked on another part of the site at .....co.uk/alpha. I was really careful not to have any links go to /alpha, but it seems Google found and indexed it anyway. The problem is that part of /alpha was a copy of the blog, so now we have a lot of duplicate content. The /alpha part is now ready to be moved over to the root domain, and the initial plan was to then delete /alpha. But now that it's indexed, I'm worried that I'll have all these 404 pages, and I'm not sure what to do. I know I can just do a 301 redirect from all those pages to the new ones in case a link comes in, but I need to delete those pages because the server is already very slow. Or does a 301 redirect mean that I don't need those pages anymore? Will those pages still get indexed by Google as separate pages? Please assist.
-
After a 301 redirect, can I delete the pages and the databases/folders associated with them?
Yes. Think of a 301 redirect like mail forwarding. If you live at 1000 Main Street and then move to a new address, you would leave a forwarding order (the 301 redirect) with the post office. Once that is done, you can bulldoze the house (i.e. delete the webpage/database) and the mail will still be forwarded properly.
How does one create a 301 redirect?
The method of creating a 301 redirect varies based on your server setup. If you have a LAMP setup with cPanel, there is a Redirect tool. Otherwise I would suggest contacting your host and asking how to create a redirect for your particular setup.
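If your host runs Apache and allows `.htaccess` files, a single-page redirect can be as simple as one line in the `.htaccess` file at your document root. This is only a sketch with placeholder paths, not your actual URLs:

```apache
# Permanently (301) redirect one old page to its new location.
# Both paths below are placeholders - substitute your real URLs.
Redirect 301 /alpha/blog/example-post/ /blog/example-post/
```

The old page and its files can be deleted once the rule is in place, since Apache answers the redirect itself without ever touching the deleted page.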
-
Ryan,
Two things.
First, after a 301 redirect can I delete the pages and the databases/folders associated with them?
Second, how does one create a 301 redirect?
-
Hi Ryan,
I agree with you, but I thought I would provide an alternate solution to the problem. I know it is more difficult and not the one I would choose.
But as I said, only if he isn't getting any traffic from those pages should he delete them from the index. Plus, as he said earlier in the question, the /alpha folder was indexed by mistake, which seems to fit what you said in your comment: "That tool was designed to remove content which is damaging to businesses such as when confidential or personal information is indexed by mistake." That also seems to contradict your other statement, "The indexed content are pages you want in the index but simply have the wrong URL" - the wrong URL means a different page.
Anyway, he should definitely go with your solution, but sometimes having two options helps you choose the better one.
Thanks
-
Semil, your answer is a working solution, but I would like to share why it is not a best practice.
Once the /alpha pages were indexed you could have traffic on them. You cannot possibly know who has linked to those pages, e-mailed links, bookmarked them, etc. By providing a simple 301 the change will be completely seamless to users. All their links and bookmarks will still work. Additionally if any website did link to your /alpha pages, you will retain the link.
The site will also benefit because it is already indexed by Google. You will not have to wait for Google to index your pages. This means more traffic for the site.
The 301 is very quick and easy to implement. If you are simply moving from the /alpha directory to your main site then a single 301 redirect can cover your entire site.
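On an Apache server, that single site-wide rule can be one pattern-based line in `.htaccess`. This is a sketch that assumes the URL structure under /alpha mirrors the root site:

```apache
# Forward every /alpha URL to the matching root URL in one rule.
# Assumes paths under /alpha mirror the root site's structure.
RedirectMatch 301 ^/alpha/(.*)$ /$1
```

With this in place, a request for /alpha/about/ is answered with a 301 pointing at /about/, and every bookmark, external link, and indexed URL under /alpha keeps working.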
I will offer a simple best practice of SEO (my belief which not everyone agrees with) which I do my best to follow. NEVER EVER EVER use the robots.txt file unless you have exhausted every other possibility. The robots.txt file is an inferior solution that many people latch on to because it is quick and easy. In your case, there is no need to adjust your robots.txt file at all. The original poster stated an intention to delete the /alpha pages. Those pages will no longer exist. Why block URLs which don't exist? It doesn't offer any benefit.
Also, it makes no sense to use the Google removal tool. That tool was designed to remove content which is damaging to businesses such as when confidential or personal information is indexed by mistake. The indexed content are pages you want in the index but simply have the wrong URL. The 301 redirect will allow your pages to remain in the index and for the URL to be properly updated. In order for the 301 to work correctly, you would need to NOT block the /alpha pages with robots.txt.
The solution you shared would work, but it is not as friendly all around.
-
Whoops! Thanks for correcting my answer...
-
The reason behind not using a 301 is that /alpha is not a page or folder you created for your users, so I wouldn't put a 301 in place. It's indexed, that's it. Are you getting any traffic from it?
No? Then why do you need to redirect? Remove the pages and ask the search engine to remove them from its index. That is all.
-
Thanks Dan,
Is there a way of blocking an entire folder or do I have to add each link?
-
How can I ask them to remove it in Webmaster Tools? And how can I ask for everything in the /alpha folder not to be indexed - or do I have to write out each link?
Why do you think my case isn't a good fit for 301 redirects?
-
You have to be very careful from the start, but now Google has indexed your /alpha. So don't worry about that now.
Using a 301 is something I don't like to do in your case. Ask Google to remove those URLs from the index in Google Webmaster Tools, and use robots.txt to prevent /alpha from being indexed.
Thanks,
-
You can perform the 301 redirect and you will not need those pages anymore. Using the redirect would be a superior SEO solution over using the robots.txt file. Since the content is already indexed, it will stay indexed and Google will update each page over the next 30 days as it crawls your site.
If you block /alpha with robots.txt, Google will still retain the pages in its index, users will experience 404s, and your new pages won't start to be properly indexed until Google drops the existing pages, which takes a while. The redirect is better for everyone.
-
Hi
If you do not want them in the index, you should block them in your robots.txt file like so:
User-agent: *
Allow: /
Disallow: /alpha
-Dan
PS - Some documentation on robots.txt
EDIT: I left my answer, but don't listen to it. Do what Ryan says.