Over 1,000 pages de-indexed overnight
-
Hello,
On my site (www.bridgman.co.uk) we had a lot of duplicate page issues as reported by the Seomoz site report tool - this was due to database driven URL strings. As a result, I sent an excel file with all the duplicate pages to my web developer who put rel canonical tags on what I assumed would be all the correct pages.
I am not sure if this is a coincidence or a direct result of the canonical tags, but a few days afterwards (yesterday) the number of pages indexed by Google dropped from 1,200 to under 200.
The number is still declining, and other than the canonical tags I can't work out why Google would just start de-indexing most of our pages.
If you could offer any solutions that would be greatly appreciated.
Thanks, Robert.
-
Canonical tags and duplicate content are both interesting issues. Thanks for your answers!
-
The first question I would ask myself as a website owner is: how many pages have duplicate content on them? Try going the manual route and check for yourself; there will be pages you don't know about. Use a tool like Xenu's Link Sleuth to extract all the links on your website (every page reachable from at least one link on your site).
It may also be that Google is adjusting its index across the board. It's worth tracking whether your competitors are losing pages at a similar rate or not.
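As a rough stand-in for a crawler like Xenu, here is a minimal Python sketch that pulls the internal links out of a single page's HTML. The helper names and the sample markup are illustrative only; a real crawl would fetch each discovered URL in turn.

```python
from html.parser import HTMLParser
from urllib.parse import urljoin, urlparse

class LinkExtractor(HTMLParser):
    """Collects href values from <a> tags."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

def internal_links(html, base_url):
    """Return absolute, de-duplicated URLs on the same host as base_url."""
    parser = LinkExtractor()
    parser.feed(html)
    host = urlparse(base_url).netloc
    seen, result = set(), []
    for href in parser.links:
        absolute = urljoin(base_url, href)
        if urlparse(absolute).netloc == host and absolute not in seen:
            seen.add(absolute)
            result.append(absolute)
    return result

page = '<a href="/products.asp">Products</a><a href="http://other.example/x">Out</a>'
print(internal_links(page, "http://www.bridgman.co.uk/"))
```

Feeding each crawled page through `internal_links` and queueing the results gives you the full set of reachable URLs, which you can then compare against what Google has indexed.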
-
I would first remove the canonicals (or, better, implement them correctly) and wait for the pages to be re-crawled. If they aren't picked up again, re-submit them.
Also, problems with canonicals do show up in the page grader app, so when your admin rewrites the canonicals, test the page before Google sees it : )
-
It appears so
I've asked my webmaster to remove all of the canonical tags but after reading that article I feel that a lot of the damage has already been done!
Looks like I'll have to submit a reconsideration request and beg Google for forgiveness.
-
Hmm, I'd remove them.
The fact that all your individual product pages have a rel=canonical pointing to productsdetail.asp, and all your category pages point to products.asp, is telling Google that all your product pages are the same!
The problem is there are a lot of parameters in your URLs; let me go have a look at how to deal with that when using the canonical tag.
EDIT: I spent half my lunch on this but couldn't find much. My guess is that you can get away with parameters in the canonical tag, so decide which is the official URL for each piece of content and put that in the header. The only thing I really found was this: http://googlewebmastercentral.blogspot.com/2009/02/specify-your-canonical.html
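One quick way to sanity-check what the developer actually deployed is to parse each page's rel=canonical and compare it to the page's own URL. A minimal sketch, assuming you fetch the HTML yourself; the exact-match rule here is deliberately simple and would need relaxing for legitimate cross-page canonicals:

```python
from html.parser import HTMLParser

class CanonicalFinder(HTMLParser):
    """Records the href of the first <link rel="canonical"> tag seen."""
    def __init__(self):
        super().__init__()
        self.canonical = None

    def handle_starttag(self, tag, attrs):
        if tag == "link" and self.canonical is None:
            attr_map = dict(attrs)
            if attr_map.get("rel", "").lower() == "canonical":
                self.canonical = attr_map.get("href")

def check_canonical(html, page_url):
    """Return (canonical_href, matches_page) for one page's HTML."""
    finder = CanonicalFinder()
    finder.feed(html)
    return finder.canonical, finder.canonical == page_url

html = '<link rel="canonical" href="http://www.bridgman.co.uk/productdetail.asp">'
print(check_canonical(html, "http://www.bridgman.co.uk/productdetail.asp?id=42"))
```

Run over a full list of product URLs, this would have flagged every page whose canonical collapses to a bare productdetail.asp.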
-
Could you be in a canonical loop?
-
I don't think this is the result of an algo change. I've just had a look at the crawl diagnostics, and it appears that he has set the canonical value of the homepage (www.bridgman.co.uk/, where all the backlinks point) to www.bridgman.co.uk/default.asp!!
This has also been done to all of our product pages!! I.e. all of the duplicated product pages now have the canonical value www.bridgman.co.uk/productdetail.asp (a page which doesn't even exist on our site!!!)
Now I may be stating the obvious, but I guess this is the problem?
He did this 8 days ago. Where should I go from here? Ask him to remove all the rel=canonical tags and pray we bounce back?
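For database-driven URLs like these, the usual fix is a self-referencing canonical built from the page's own URL, keeping only the query parameters that actually select the content. A small sketch of that idea; the parameter name `productid` is illustrative, not taken from the actual site:

```python
from urllib.parse import urlsplit, urlunsplit, parse_qsl, urlencode

# Parameters that actually select the content; everything else
# (session IDs, tracking codes, etc.) is dropped. The names here
# are hypothetical examples, not the site's real parameters.
CONTENT_PARAMS = {"productid"}

def canonical_url(url):
    """Build a self-referencing canonical URL, keeping only content params."""
    parts = urlsplit(url)
    kept = [(k, v) for k, v in parse_qsl(parts.query)
            if k.lower() in CONTENT_PARAMS]
    return urlunsplit((parts.scheme, parts.netloc, parts.path,
                       urlencode(kept), ""))

url = "http://www.bridgman.co.uk/productdetail.asp?productid=42&sessionid=abc"
print(canonical_url(url))
# Each product page would then carry its own URL in
# <link rel="canonical" href="...">, rather than a shared bare productdetail.asp.
```

That way every distinct product keeps a distinct canonical, and only the junk parameters collapse.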
-
First question would be: do you think this may have had an effect on you? - http://searchengineland.com/google-forecloses-on-content-farms-with-farmer-algorithm-update-66071
Is all your content unique? Do any of your links come from sites that may have been affected?
If not, then I'd give it a few days to see if they come back (as HR128 says, it may just be Google working out what's changed and what to do). In the meantime, I'm doing a quick crawl of your site to see what I can see.
-
It could take a couple of days before you see results; Google most likely has to step back and reassess. Or your web developer has no clue what he's doing, haha.