Should I be using rel=author in this case?
-
We have a large blog, and it appears that one of our regional blogs (managed separately) is simply scraping content from our blog and adding it to theirs.
Would adding rel=author (for all of our guest bloggers) help prevent Google from seeing the regional blog's content as scraped or duplicate? Is rel=author the best solution here?
-
Check out Google's source attribution tag. That's the way I'd go in this situation rather than rel=author, unless the "scraped" copy changes the author info, which is a whole different issue.
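In case it helps, the source attribution tags are plain meta tags in the page head. A minimal sketch (the URL is a hypothetical placeholder, and note Google announced these as Google News metatags, so support outside News is not guaranteed):

```html
<!-- On a page that republishes someone else's content: credit the syndication source -->
<meta name="syndication-source" content="https://example.com/blog/original-post">
<!-- On the page where the content first appeared -->
<meta name="original-source" content="https://example.com/blog/original-post">
```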
Related Questions
-
Rank regional homepages using canonicals and hreflangs
Here’s a situation I’ve been puzzling over for some time.
Technical SEO | dmduco
The situation: Please consider an international website targeting 3 regions. (The real site has more regions, but I simplified the case for this question; see screenshot1.png.) There is no default language, and the content for each regional version is meant for that region only. The website.eu page is dynamic: when there is no region cookie, the page is identical to website.eu/nl/ (because the Netherlands is the most important region); when there is a region cookie (set by a modal), there is a 302 redirect to the corresponding regional homepage. What we want:
We want regional Google to index the correct regional homepages (eg. website.eu/nl/ on google.nl), instead of website.eu.
Why? Because visitors surfing to website.eu sometimes tend to ignore the region modal and therefore browse the wrong version.
For this, I set up canonicals and hreflangs as described in screenshot2.png. The problem:
It has been 40 days since the hreflangs and canonicals above were set up, but Google is still ranking website.eu instead of the regional homepages.
Search Console’s report for website.eu: see screenshot3.png. Any ideas why Google doesn’t respect our canonical? Maybe I’m overlooking something in this setup (the combination of hreflangs and canonicals might be confusing)? Should I remove the hreflangs on the dynamic page, because there is no self-referencing hreflang? Or maybe website.eu has gathered a lot of backlinks over the years, whereas the regional homepages have far fewer, which might be why Google chooses to ignore the canonical signals? Or maybe it’s a matter of time and I just need to wait longer? Note: I’m aware the language subfolders (e.g. /be_nl) are not according to Google’s recommendations. But I’ve seen similar setups (like adobe.com and apple.com) where the regional homepage shows up fine. Any help appreciated!
-
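For anyone following along without the screenshots, a typical self-canonicalizing setup for one of the regional homepages would look roughly like this; the hreflang region codes and subfolder names are assumptions based on the question:

```html
<!-- Head of https://website.eu/nl/ : each regional homepage canonicalizes to itself -->
<link rel="canonical" href="https://website.eu/nl/">
<!-- hreflang set listing every regional variant, including a self-reference -->
<link rel="alternate" hreflang="nl-nl" href="https://website.eu/nl/">
<link rel="alternate" hreflang="nl-be" href="https://website.eu/be_nl/">
<link rel="alternate" hreflang="fr-be" href="https://website.eu/be_fr/">
```

One caveat: if the dynamic website.eu page carries a canonical to /nl/ but also its own hreflang annotations, the signals can conflict, since hreflang URLs are expected to point at canonical, indexable pages.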
I want to use a domain that has previously been forwarded elsewhere. Any considerations?
There's a domain name (we will call it A) with no domain authority that is currently forwarded to a domain with 36 DA (we will call this domain B). B has been dormant for about two years. I am getting both domains, but domain A works better for what I will be using it for. So basically, I want to swap things around so B forwards to A, instead of A forwarding to B. Any dangers here or things to consider that I may be overlooking?
Technical SEO | CWBFriedman
-
The use of tabs on product pages: do or don't?
Does Google have any trouble reading content in tabs? The content is not loaded by AJAX and is already in the page source code.
Technical SEO | wilcoXXL
Looking at some big e-commerce websites (amazon.com, for example), they get rid of the content tabs and place the different sections below each other. Is this better for SEO purposes? But what about user experience? For users, I think it is easier to navigate by tabs than to scroll a long page. What do you guys think about this issue?
-
Using Moz to weed out bad backlinks
How do you use Open Site Explorer to weed out bad backlinks in your profile, and how do you remove them if you cannot contact the various webmasters?
Technical SEO | marketing-man1990
-
Timely use of robots.txt and meta noindex
Hi, I have been checking every possible resource on content removal, but I am still unsure how to remove already-indexed content.
When I use robots.txt alone, the URLs remain in the index; no crawl budget is wasted on them, but still, e.g. having 100,000+ completely identical login pages within the omitted results can't mean anything good.
When I use meta noindex alone, I keep my index clean, but I also keep Googlebot busy crawling these no-value pages.
When I use robots.txt and meta noindex together on existing content, I am asking Google to ignore my content while at the same time blocking it from crawling the pages, so it never sees the noindex tag.
robots.txt plus URL removal together is still not a good solution, as I have failed to remove directories this way; it seems only exact URLs can be removed like that.
I need a clear solution which solves both issues (index and crawling). What I am trying now is the following: I remove these directories (one at a time, to test the theory) from the robots.txt file, and at the same time I add the meta noindex tag to all pages within the directory. The number of indexed pages should start decreasing (while useless page crawling increases), and once the number of indexed pages is low or zero, I would put the directory back into robots.txt and keep the noindex on all pages within it. Can this work the way I imagine, or do you have a better way of doing it? Thank you in advance for all your help.
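To make the trade-off concrete, the two mechanisms live in different places, and noindex only works if crawling is allowed (the /login/ path is a hypothetical example):

```text
# robots.txt — stops crawling, but already-indexed URLs can linger in the index
User-agent: *
Disallow: /login/
```

```html
<!-- Meta noindex on each login page — drops the page from the index,
     but only if Googlebot is allowed to crawl the page and see this tag -->
<meta name="robots" content="noindex">
```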
Technical SEO | Dilbak
-
What is the ideal rel canonical URL structure?
I currently have the rel canonical pointing to wepay.com/donations/123456. Is it worth the effort to make it point to wepay.com/donations/donation-name-123456? With this new structure, I would also need to track URL histories if users change the vanity URL.
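For illustration, the change being weighed is just the href of the canonical tag (the IDs and slug are placeholders from the question):

```html
<!-- Current -->
<link rel="canonical" href="https://wepay.com/donations/123456">
<!-- Proposed: keyword-bearing slug, which would have to stay stable (or 301) if the vanity name changes -->
<link rel="canonical" href="https://wepay.com/donations/donation-name-123456">
```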
Technical SEO | wepayinc
-
Rel=canonical + noindex
We have been running an A/B test of our homepage, and although we placed a rel=canonical tag on the test page, it is still being indexed. In fact, at one point Google even showed it as a sitelink. We have this problem throughout our website. My question is: what is the best practice for duplicate pages? 1. Put only a rel=canonical pointing to the "wanted original page". 2. Put a rel=canonical (pointing to the wanted original page) and a noindex on the duplicate version. Has anyone seen any detrimental effect from doing #2? Thanks
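For reference, the two options on the duplicate (test) page would look roughly like this; example.com stands in for the real site:

```html
<!-- Option 1: canonical only -->
<link rel="canonical" href="https://example.com/">

<!-- Option 2: canonical plus noindex -->
<link rel="canonical" href="https://example.com/">
<meta name="robots" content="noindex">
```

A caution on option 2: the canonical says "treat this page as that one" while noindex says "drop this page", which are mixed signals, so some SEOs avoid combining them.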
Technical SEO | Morris77
-
Syndication: Link back vs. Rel Canonical
For content syndication, let's say I have the choice of (1) a link back or (2) a cross-domain rel canonical to the original page. Which one would you choose, and why? (I'm trying to pick the best option to save dev time!) I'm also curious what the difference would be in the SERPs between the link-back and the canonical solution, both for the original publisher and for the syndication partners. (I would prefer the syndication partners not disappear entirely from the SERPs; I just want to make sure I'm first!) A side question: what's the difference in real life between the Google source attribution tag and the cross-domain rel canonical tag? Thanks! PS: Don't know if it helps, but note that we can syndicate 1 article to multiple syndication partners (it wouldn't be impossible to see 1 article syndicated to 50 partners).
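To make the two options concrete, here is roughly what each looks like on a syndication partner's copy (URLs are hypothetical):

```html
<!-- Option 1: a plain link back to the original, in the article body -->
<p>This article originally appeared on
   <a href="https://original-publisher.example.com/article">Original Publisher</a>.</p>

<!-- Option 2: a cross-domain rel=canonical in the partner page's head -->
<link rel="canonical" href="https://original-publisher.example.com/article">
```

Note that option 2 asks Google to consolidate indexing to the original, so partner copies are less likely to appear in the SERPs at all.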
Technical SEO | raywatson