Duplicate pages in GWT when redesigning a website
-
Hi, we recently redesigned our online shop. We set up 301 redirects from all of the old product pages to the new URLs and went live about a week and a half ago, but GWT is reporting the old product URL and the new product URL as two different pages with the same meta title tag (duplication), even though the old URL does 301 redirect to the new URL when visited.
I found this thread on the Google product forums: https://productforums.google.com/forum/#!topic/webmasters/CvCjeNOxOUw
It says we should either just wait for Google to re-crawl, or use the Fetch as Google function on the OLD URLs. The question is: after I fetch an OLD URL to show Google that it is being redirected, should I click the 'Submit to index' button or not? (See screengrab; please note that it was the OLD URL that was fetched, not the NEW URL.) In other words, if I click this button, am I telling Google:
a. 'This old URL has been redirected, so please index the new URL', or
b. 'Please keep this old URL in your index'?
What's your view on this? Thanks
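For reference, what Googlebot will see at an old URL can be checked from the raw response headers alone, before worrying about Fetch as Google. A minimal sketch of the classification logic (the hostnames and paths here are made up, and the status/Location values would come from something like `curl -sI http://shop.example.com/old-product`):

```python
from urllib.parse import urljoin

def classify_redirect(status, location, old_url, expected_new_url):
    """Classify the response served at an old URL from its status code and Location header."""
    if status == 200:
        return "no redirect: old page still served"
    if status in (301, 308):
        # Location may be relative, so resolve it against the old URL
        target = urljoin(old_url, location or "")
        if target == expected_new_url:
            return "permanent redirect"
        return "permanent redirect, but to an unexpected target"
    if status in (302, 303, 307):
        return "temporary redirect: signals the move less strongly than a 301"
    return f"unexpected status {status}"

# Hypothetical header values for an old product URL:
print(classify_redirect(
    301, "/new-product",
    "http://shop.example.com/old-product",
    "http://shop.example.com/new-product",
))  # permanent redirect
```

Only a clean single-hop 301 straight to the final URL gives Google the strongest possible signal; anything else in the classification above is worth fixing first.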
-
Hi,
I migrated a load of product category pages on one of my websites recently to cleaner URLs, and to force the crawl I submitted the new URLs (and their children) to the index via WMT. This was to pick them up quickly, and it worked (within seconds). The old URLs appearing alongside them were never a problem. However, there are limits on the number of times you can do this, which might be a sticking point for you, as I'm guessing you have lots of products. Try it with one page (a low-traffic, low-selling product!) and see what happens, and let us know.
It's possible Google is holding onto your old URLs because they have a number of inbound links; if you give it time, the crawl will eventually catch up and display only the new URLs.
Aside from agreeing with the sitemap submission suggestion, I'd also triple-check that the 301s / canonicals are set up properly on your website's old URLs by firing Screaming Frog or another crawler at them.
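One thing a crawler like Screaming Frog will flag is redirect chains (old URL → intermediate URL → final URL), which dilute the 301 signal. If you can export an old-URL → redirect-target mapping from your crawler, the chain check itself is simple enough to sketch (the URLs below are hypothetical):

```python
def redirect_chains(redirect_map, max_hops=10):
    """Find old URLs whose redirect target is itself redirected (i.e. chains of 2+ hops)."""
    chains = {}
    for start in redirect_map:
        hops = [start]
        url = start
        # Follow the mapping until we reach a URL that doesn't redirect (capped to avoid loops)
        while url in redirect_map and len(hops) <= max_hops:
            url = redirect_map[url]
            hops.append(url)
        if len(hops) > 2:  # more than one hop: old -> intermediate -> final
            chains[start] = hops
    return chains

# Hypothetical export: each old URL and where its redirect points
mapping = {
    "/old-product": "/interim-product",
    "/interim-product": "/new-product",
}
print(redirect_chains(mapping))
```

Any URL that shows up in the result should have its redirect updated to point directly at the final destination.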
George
-
Have you resubmitted your sitemap? That is a slightly simpler step. Personally, I would wait for the pages to be indexed; this should really only take about two weeks. The SERPs might reflect the old site until then, but if your rankings hold through that window, that's good news for your SEO.
I don't think that fetching in this case will correctly reindex your site. The wait-and-see game is your best chance of getting the natural response you want from Google without sacrificing your existing rankings.
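As a side note, a sitemap can also be resubmitted without logging in to WMT, via Google's sitemap ping endpoint: you just fetch a ping URL that carries your sitemap's address. A sketch of building that URL (example.com and the sitemap path are placeholders for your own):

```python
from urllib.parse import urlencode

def sitemap_ping_url(sitemap_url):
    """Build the Google ping URL; fetching it asks Google to re-read the sitemap."""
    return "https://www.google.com/ping?" + urlencode({"sitemap": sitemap_url})

print(sitemap_ping_url("https://www.example.com/sitemap.xml"))
```

The sitemap URL must be percent-encoded, which is what `urlencode` handles here; fetching the resulting address (with curl, a browser, or a scheduled job after large URL changes) queues the sitemap for re-crawling.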
-
Honestly speaking, I am tired of the delay in Google Webmaster Tools updates. Much of the time it shows data that is weeks or months old, and even when the website has completely changed it is still reporting the old problems.
My first suggestion is to wait; I believe that after a few crawls Google will understand that you have moved on from the problem you had before.
The test in the image you attached will only tell you whether the redirection is working properly: if the user is being sent from the old page to the new one, it is working.
Another thing you could do is give your new pages a social bump and, at the same time, request that Google remove the old pages from its index. GWT has a URL removal option for this.
Hope this helps!
-
Sorry, forgot to attach.
Related Questions
-
Why are my inner pages ranking higher than my main page?
Hi everyone, for some reason I have lately discovered that Google is ranking my inner pages higher than the main subfolder page. www.domain.com/subfolder --> target page to be ranked; www.domain.com/subfolder/aboutus --> page that is currently ranking. Also, most of the time the SERP shows both links in this manner, with www.domain.com/subfolder/aboutus above www.domain.com/subfolder. Thanks in advance.
Technical SEO | davidboh
-
Duplicate Page Content
Hello, after crawling our site Moz is detecting high-priority duplicate page content for our product and article listing pages. For example, http://store.bmiresearch.com/bangladesh/power and http://store.bmiresearch.com/newzealand/power are being listed as duplicate pages, although they have separate URLs, page titles and H1 tags. They have the same products listed, but I would have thought the differentiation in other areas would be sufficient for these not to be deemed duplicate pages. Is it likely this issue is impacting our search rankings? If so, are there any recommendations as to how it can be overcome? Thanks
Technical SEO | carlsutherland
-
Duplicate Page Content but where?
Hi all, Moz is telling me I have duplicate page content, and sure enough the PA, mR and mT are all 0, but it doesn't give me a link to this content! This is the page: http://www.orsgroup.com/index.php?page=Scanning-services But I can't find where the duplicate content is, other than on our own YouTube page, which I will get removed here: http://www.youtube.com/watch?v=Pnjh9jkAWuA Can anyone help please? Andy
Technical SEO | ORS-Group
-
Duplicate page/Title content - Where?
Hi, I have just run a crawl on a new client's site, and there are several 'duplicate page content' and 'duplicate page title' issues. But I cannot find any duplicate content, and to make matters worse, the actual report has confused me. For example, the about us page is showing in both reports, and for both it shows 1 under 'Other URLs'. Why? Does this mean there is one other page with a duplicate page title, or with duplicate page content? Where are the pages that have the duplicate page titles or duplicate page content? I have run scans using other software and a Copyscape scan, and apart from missing page titles I cannot find any page that has duplicate titles or content. I can find percentages of pages with similar/same page titles/content, but this is only partial and contextually correct. So I understand that SEOmoz may pick up a percentage of matching content and therefore note that there is duplicate content or duplicate page titles. But I cannot figure out the source of the duplicates, as there is only 1 listed in both reports under 'Other URLs'. Hopefully my long question has not confused you. Many thanks in advance for any help
Technical SEO | wood1e2
-
We have over 3000 duplicate page titles, please help!
Hi, we did a crawl report and have over 3000 duplicate page titles. I'm not sure why this is happening... could it be because we have put posts in multiple categories? Can anyone help us with a quick fix? our site is www.stayathomemum.com.au thank you kindly, Chris
Technical SEO | stayathomemum
-
Getting Rid of Duplicate Page Titles After URL Structure Change
I've had all sorts of issues with google when they just dropped us on our head a few weeks ago. Google is crawling again after I made some changes, but they're still not ranking our content like they were so I have a few questions. I changed our url structure from /year/month/date/post-title to just /post-title and 301 redirected the old link structure to the new. When I look I see over 3000 duplicate title errors listing both versions of the url. 1. How do I get google to crawl the old url structure and recognize the 301 redirect and update the index? 2. Google is crawling the site again, but they're not ranking us like they were before. We're in a highly competitive category and I'm aware of that, but we've always been an authority in our niche. We have plenty of quality backlinks and often we're originators of the content which is then rewritten by a trillion websites everywhere. We're not the best at writing and titles, but we're working on it and this did not matter much to google previously as it was ranking us pretty highly on the front page and certainly ranking us over many sites that are ranking above us today. Some backlinks http://www.alexa.com/site/linksin/dajaz1.com A few examples - if you google twista gucci louis prada you'll see many of the sites who trackbacked to us since we premiered the song rank much higher than us. 3 weeks ago we were ranking above them. http://dajaz1.com/twista-gucci-louis-prada/ google search jadakiss consignment mixtape 3 weeks ago we were ranking higher than all 4 sites ranking above us. The sites ranking above us even link to us or mention us, yet they rank above us now. original content here http://dajaz1.com/watch-jadakiss-confirms-cosignment-mixtape-2012-schedule/ I could throw out a ton of examples like this. How do we get google to rank us again. It should be noted that I'm not using any SEO plugin's on the site. I hand coded what's in there, and I know I can probably do it better so any tips or ideas is welcome. 
I'm pretty sure that our issues were caused by the Yoast SEO plugin, as when I search site:dajaz1.com the pages and topics that display were all indexed while the plugin was active. I've since removed it and all calls to it in the database, but I'm pretty nervous about plugins right now. Which brings me to my third and final question: how do I get rid of the category and topic pages that were indexed and seem to be ranking higher than the rest of our content? I lied, one more: for category URLs I've set it to remove the category base, so the URL is dajaz1.com/news or dajaz1.com/music. Is that preferable, or is this causing me issues? Any feedback is appreciated. Also, Google is crawling again (see attached image) but the kilobytes downloaded per day hasn't recovered. Should I be concerned about this?
Technical SEO | malady
-
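For a migration from `/year/month/date/post-title` to `/post-title` like the one described above, it's worth verifying that the rewrite rule really maps every old date-based path to the right slug. A sketch of that mapping in Python (the sample path is taken from the post; the exact date in it is assumed):

```python
import re

# Old permalink structure: /YYYY/MM/DD/post-title ; new structure: /post-title
OLD_PATTERN = re.compile(r"^/(\d{4})/(\d{2})/(\d{2})/(?P<slug>[^/]+)/?$")

def new_path(old_path):
    """Return the new URL path for an old date-based path, or None if it doesn't match."""
    m = OLD_PATTERN.match(old_path)
    return "/" + m.group("slug") if m else None

print(new_path("/2012/05/14/twista-gucci-louis-prada"))  # /twista-gucci-louis-prada
print(new_path("/about"))  # None: not a dated post URL, should not be rewritten
```

Running every URL from the old sitemap through a function like this, then confirming each result returns 200 and each old path 301s to it, catches mapping gaps before Google finds them.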
Mitigating duplicate page content on dynamic sites such as social networks and blogs.
Hello, I recently did an SEOmoz crawl for a client site. As is typical, the most common errors were duplicate page titles and duplicate content. The client site is a custom social network for researchers. Most of the pages showing as duplicates are simple variations of each user's profile, such as comment sections, friends pages, and events. So my question is: how can we limit duplicate content errors for a complex site like this? I already know about the rel=canonical and rel=next tags, but I'm not sure whether either of these will do the job. Also, I don't want to lose potential links/link juice for good pages. Are there ways of using the 'noindex' tag in batches? For instance: noindex all URLs containing a given string? Or do most CMSs allow this to be done systematically? Anyone with experience doing SEO for a custom social network or forum, please advise. Thanks!!!
Technical SEO | BPIAnalytics
-
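On the "noindex in batches" question above: on a custom-built site, one common approach is to decide at render time, from the URL path, whether to emit a robots meta tag. A sketch (in Python; the path patterns are made-up stand-ins for the comment/friends/events sub-pages described):

```python
# Hypothetical path fragments for thin profile sub-pages that should stay out of the index
NOINDEX_PATTERNS = ("/comments", "/friends", "/events")

def robots_meta_tag(path):
    """Return a noindex,follow robots meta tag for thin sub-pages, else an empty string."""
    if any(fragment in path for fragment in NOINDEX_PATTERNS):
        # noindex,follow keeps the page out of the index but still lets link equity flow
        return '<meta name="robots" content="noindex,follow">'
    return ""

print(robots_meta_tag("/users/jane/friends"))  # emits the noindex tag
print(robots_meta_tag("/users/jane"))          # main profile stays indexable
```

The same per-pattern decision can instead be made at the server level with an `X-Robots-Tag` response header, which avoids touching templates at all.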
Duplicate Page title - PHP Experts!
After running crawl diagnostics I was surprised to see 336 duplicate page titles. But I am wondering whether it is true or not. Most of them are not separate pages at all, but .php variations. For example, the following are all the same page, just with a different limit (5, 10, 15, 20, 25) on how many listings are shown:
.com/?lang=en&limit=5
.com/?lang=en&limit=5&limitstart=10
.com/?lang=en&limit=5&limitstart=15
.com/?lang=en&limit=5&limitstart=20
.com/?lang=en&limit=5&limitstart=25
The same kind of thing is going on all over the site, causing 228 duplicate content errors as well as the 336 duplicate page titles already mentioned. But is crawl diagnostics telling the truth, or is it just some PHP thing? I am not a PHP expert, so any input would be much appreciated. What should I do?
Technical SEO | nahopkin