Google keeps marking different pages as duplicates
-
My website has many pages like this:
mywebsite/company1/valuation
mywebsite/company2/valuation
mywebsite/company3/valuation
mywebsite/company4/valuation
...
These pages describe the valuation of each company.
These pages were never identical, but initially I included a few generic paragraphs (what is valuation, what is a valuation model, etc.) on all of them, so parts of their content overlapped.
Google marked many of these pages as duplicates in Google Search Console, so I modified their content: I removed the generic paragraphs and added information unique to each company. As a result, the pages now differ substantially from one another, with very little overlap.
Although more than a month has passed since I made these changes, and Google has already crawled the new versions, it still marks the majority of these pages as duplicates. Is there anything else I can do in this situation?
Thanks
-
Google may mark different pages as duplicates if they contain very similar or identical content. This can happen because of duplicate metadata, URL parameters, or syndicated content. To address it, make sure each page has unique and valuable content and use canonical tags where appropriate. (Google Search Console used to offer a URL Parameters tool for parameter handling, but it was retired in 2022, so canonical tags are now the main mechanism.)
-
Yes, there are a few other things you can do if Google is still marking your pages as duplicates after you have modified them to be unique:
-
Check your canonical tags. Canonical tags tell Google which version of a page is the preferred one to index. Each of your unique pages should carry a self-referencing canonical tag (or no canonical at all); if a canonical on one page points at another page, Google will keep folding them together as duplicates even after their content diverges.
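For example, a self-referencing canonical on one of the valuation pages would look like this (the `mywebsite.com` domain is a placeholder standing in for the asker's real domain):

```html
<!-- In the <head> of https://mywebsite.com/company1/valuation -->
<link rel="canonical" href="https://mywebsite.com/company1/valuation" />
```

If this tag instead pointed at `/company2/valuation` (e.g. left over from a shared page template), Google would treat the two pages as duplicates regardless of how different their body content is, so this is worth checking first.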
-
Use canonical tags instead of the old URL parameter tool. Google Search Console used to offer a URL Parameters tool for telling Google which parameters to treat as significant and which to ignore, but it was retired in 2022. If you have pages with similar content under different URL parameters, such as different sorting or filtering options, point those variants at a canonical URL instead.
-
Request a recrawl of your pages. In Google Search Console, use the URL Inspection tool on the affected URLs and click "Request Indexing". Once Google has recrawled them, it can evaluate the new, modified versions of your pages.
If you have done all of the above and Google is still marking your pages as duplicates, you can ask for help in the Google Search Central community forum; Google does not offer one-on-one support for indexing issues.
-
If Google is marking different pages on your website as duplicates, it can negatively impact your website's search engine rankings. Here are some common reasons why Google may be doing this and steps you can take to address the issue:
Duplicate Content: Google's algorithms are designed to filter out duplicate content from search results. Ensure that your website does not have identical or near-identical content on multiple pages. Each page should offer unique and valuable content to users.
URL Parameters: If your website uses URL parameters for sorting, filtering, or tracking purposes, Google may interpret these variations as duplicate content. Use canonical tags to specify which version of the URL you want indexed. (The URL Parameters tool in Google Search Console that used to handle this was retired in 2022.)
Pagination: For websites with paginated content (e.g., product listings, blog archives), note that Google announced in 2019 that it no longer uses rel="next" and rel="prev" for indexing. Instead, give each page in a series a distinct title and a self-referencing canonical so that the pages are not folded together as duplicates.
www vs. non-www: Make sure you have a preferred domain (e.g., www.example.com or example.com) and set up 301 redirects to the preferred version. Google may treat www and non-www versions as separate pages with duplicate content.
HTTP vs. HTTPS: Ensure that your website uses secure HTTPS. Google may view HTTP and HTTPS versions of the same page as duplicates. Implement 301 redirects from HTTP to HTTPS to resolve this.
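The www and HTTPS redirects described in the two points above can be handled in one place at the web-server level. A minimal sketch for Apache (assuming mod_rewrite is enabled, that www plus HTTPS is the preferred version, and that `example.com` is a placeholder domain; test on a staging server before deploying):

```apacheconf
# .htaccess: send any non-HTTPS or non-www request to the
# canonical https://www host with a single 301 redirect
RewriteEngine On
RewriteCond %{HTTPS} off [OR]
RewriteCond %{HTTP_HOST} !^www\. [NC]
RewriteRule ^ https://www.example.com%{REQUEST_URI} [R=301,L]
```

Using one combined rule avoids redirect chains (HTTP → HTTPS → www), which waste crawl budget and dilute link signals.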
Mobile and Desktop Versions: If you have separate mobile and desktop versions of your site (e.g., responsive design or m.example.com), use rel="alternate" and rel="canonical" tags to specify the relationship between the two versions.
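For a separate mobile site, the annotations mentioned above work as a pair: the desktop page declares the mobile alternate, and the mobile page canonicalizes back to the desktop URL. A sketch with placeholder URLs (a responsive site needs none of this, since there is only one URL per page):

```html
<!-- On the desktop page: https://www.example.com/page -->
<link rel="alternate" media="only screen and (max-width: 640px)"
      href="https://m.example.com/page" />

<!-- On the mobile page: https://m.example.com/page -->
<link rel="canonical" href="https://www.example.com/page" />
```

The canonical on the mobile page tells Google the two URLs are intentional variants of one document, not duplicates to be filtered.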
Thin or Low-Quality Content: Pages with little or low-quality content may be flagged as duplicates. Improve the content on such pages to provide unique value to users.
Canonical Tags: Implement canonical tags correctly to indicate the preferred version of a page when there are multiple versions with similar content.
XML Sitemap: Ensure that your XML sitemap is up-to-date and accurately reflects your website's structure. Submit it to Google Search Console.
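A minimal sitemap covering the valuation pages described in the question might look like this (URLs follow the asker's structure but are placeholders, and the dates are illustrative):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url>
    <loc>https://mywebsite.com/company1/valuation</loc>
    <lastmod>2024-01-15</lastmod>
  </url>
  <url>
    <loc>https://mywebsite.com/company2/valuation</loc>
    <lastmod>2024-01-15</lastmod>
  </url>
</urlset>
```

Updating `<lastmod>` when a page's content changes gives Google an extra hint to recrawl the modified versions, which is directly relevant to the asker's situation.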
Avoid Scraped Content: Ensure that your content is original and not scraped or copied from other websites. Google penalizes sites with duplicate or plagiarized content.
Check for Technical Errors: Use Google Search Console to check for crawl errors or other technical issues that might be causing duplicate content problems.
Structured Data: Ensure that your structured data (schema markup) is correctly implemented on your pages. Incorrectly structured data can confuse search engines.
Regularly monitor Google Search Console for any duplicate content issues and take prompt action to address them. It's essential to provide unique and valuable content to your website visitors while ensuring that search engines can correctly index and rank your pages.