Rel="author" - This could be KickAss!
-
Google is now encouraging webmasters to attribute content to authors with rel="author". You can read what Google has to say about it here and here.
A quote from one of Google's articles:
When Google has information about who wrote a piece of content on the web, we may look at it as a signal to help us determine the relevance of that page to a user’s query. This is just one of many signals Google may use to determine a page’s relevance and ranking, though, and we’re constantly tweaking and improving our algorithm to improve overall search quality.
I am guessing that Google might use it like this: if you have several highly successful articles about "widgets", the author link on each of them will let Google know that you are a widget expert. Then, when you write future articles about widgets, Google will rank them much higher than normal - because Google knows you are an authority on that topic.
If it works this way, the rel="author" attribute could be the equivalent of a big load of backlinks for highly qualified authors.
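For reference, here is a minimal sketch of what the markup itself might look like (the URLs below are just placeholders for your own author page or profile):

    <!-- On the article page: link the byline to an author page on the same site -->
    <a href="http://www.example.com/about/egol" rel="author">EGOL</a>

    <!-- Or link the byline straight to a public profile -->
    <a href="https://profiles.google.com/123456789" rel="author">EGOL</a>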
What do you think about this? Valuable?
Also, do you think there is any way Google could use this as a "content registry" that would foil some attempts at content theft and content spinning?
Any ideas welcome! Thanks!
-
I own a company and usually write my own blog posts, but not every time. When I don't, I pay to have them written and thus own the copy. Can an author be a company, with the link pointing to the company's About Us page?
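If it were allowed, I imagine the markup would look something like the first example below. I have also seen rel="publisher" mentioned as the organization-level equivalent, which might be the better fit (all URLs are placeholders):

    <!-- rel="author" pointing at a company page - unclear whether Google will honor a non-person author -->
    <a href="http://www.example.com/about-us" rel="author">Example Company</a>

    <!-- rel="publisher" - markup aimed at organizations rather than individuals -->
    <a href="https://plus.google.com/1234567890" rel="publisher">Example Company</a>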
-
To anyone following this topic... A good thread at cre8asiteforums.com
-
Pretty sure both say they are interchangeable.
-
I was wondering if this is needed. Doesn't the specification at schema.org cover this? Or would Google use the author itemscope differently from rel="author"?
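For comparison, here is roughly what the two approaches look like - schema.org marks the author up as data inside the article, while rel="author" is an attribute on a link (names and URLs are placeholders):

    <!-- schema.org microdata: the author is a Person property of the Article -->
    <div itemscope itemtype="http://schema.org/Article">
      <h1 itemprop="name">How To Make Widgets</h1>
      <span itemprop="author" itemscope itemtype="http://schema.org/Person">
        <span itemprop="name">Jane Doe</span>
      </span>
    </div>

    <!-- rel="author": a link from the article to the author's page -->
    <a href="http://www.example.com/authors/jane-doe" rel="author">Jane Doe</a>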
-
Right now, rel="author" is only useful with intra-domain URLs. It does not "count" if you are linking to other domains.
BUT...
In the future it might, so doing this could either give you a nice head start, or not. Time will tell.
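If you want the head start, the pattern Google describes is roughly a chain of links it can verify - here is a sketch with placeholder URLs:

    <!-- 1. The article page links to the author's bio page on the same site -->
    <a href="http://www.example.com/about/egol" rel="author">EGOL</a>

    <!-- 2. The bio page links out to the author's profile elsewhere -->
    <a href="https://profiles.google.com/123456789" rel="me">My Google profile</a>

    <!-- 3. The profile links back to the site, closing the loop so the authorship can be verified -->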
-
I think it's a good idea and may open up some content syndication options that were discounted before...
In the past I have been firmly against content syndication - I want the content on my own site. However, if I think that the search engines are going to give me credit for doing it then I might do it when a great opportunity arrives.
-
I think it's a good idea and may open up some content syndication options that were discounted before (as per Dunamis' post); however, I've not seen the rel tag do much for me.
Tagging links to social media sites as rel="me" has not helped those pages get into the SERPs for my brand (though I've not been super consistent with it), rel="nofollow" obviously had the rug pulled from under it a while ago, and I even got carried away once and tried linking language sites together with rel="alternate" hreflang, but I didn't get the uplift in the other language versions that I hoped for (though it was a bit of a long shot to begin with).
I'm just wondering how much value this is going to have. I still like it in principle and will attempt to use it where I can.
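For anyone who hasn't played with these, here is roughly what each of them looks like in markup (URLs and language codes are placeholders):

    <!-- rel="me": linking your pages to your social profiles -->
    <a href="https://twitter.com/mybrand" rel="me">Follow us on Twitter</a>

    <!-- rel="nofollow": asking search engines not to pass credit through a link -->
    <a href="http://www.example.com/some-page" rel="nofollow">Untrusted link</a>

    <!-- rel="alternate" hreflang: pointing to the same page in another language -->
    <link rel="alternate" hreflang="es" href="http://es.example.com/page" />

    <!-- rel="author": linking content to its author's page -->
    <a href="http://www.example.com/about/me" rel="author">About the author</a>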
-
Or, the other issue could be that content sites could grab content from a non-web-savvy site owner. If the original owner didn't have an author tag, then the content site could slap their own author tag on and Google would think that they were the original author.
-
However, it wouldn't be hard for Google to have a system whereby they recognize that my site was the first one to have the rel author and therefore I'm likely the original owner. This is basically a content registry.
Oh, I really like that. I would like to see Google internally put a date on first publication. One problem some people might have is that their site is very new and weak, and content scrapers hit it more frequently than Googlebot does.
-
When I read it, I understood it to mean that the author tag was telling Google that I was the original author. (I actually thought of you, EGOL, as I know you have been pushing for a content registry.) Now, if someone steals my stuff I wouldn't expect them to put a rel author tag on it. However, I can see a few ways that the tag may be helpful:
-I recently had someone want to publish one of my articles on their site. I said no because I didn't want there to be duplicates of my stuff online. But perhaps with rel author I could let another site publish my article as long as it is credited to me. Then Google will know that my site deserves to be the top listing for this content.
-If I have stuff that I know scrapers are going to get, I can use the rel-author tag. My first thought was that a scraper site could sneakily put their own rel author on it and claim it as theirs. However, it wouldn't be hard for Google to have a system whereby they recognize that my site was the first one to have the rel author and therefore I'm likely the original owner. This is basically a content registry.
-
This might be helpful for you, especially if you can get the syndication sites to place author tags on the blog posts.
rel=canonical might also be worth investigating.
I am also confused about this. I'd like to see more information from Google on exactly how these will be used - especially in cross-domain situations.
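In the meantime, here is a sketch of what the syndicated copy on a partner site could carry - a cross-domain rel=canonical pointing back at the original, plus the author link (URLs are placeholders, and the partner site has to agree to add this):

    <head>
      <!-- cross-domain canonical: signals that the original lives on your domain -->
      <link rel="canonical" href="http://www.original-site.com/my-article" />
    </head>

    <!-- in the body: author attribution on the syndicated copy -->
    <a href="http://www.original-site.com/about/author" rel="author">Original Author</a>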
-
I actually have similar questions about this. The company I work for hosts a blog that is also syndicated across 4 to 5 other websites. The other sites have bigger reach on the web, and our blog isn't getting much direct traffic out of this. I have a feeling that adding the author tags to our content will eventually pay off by showing that the content originates on our site and is then syndicated. I am interested and excited to see other ways this will be used. I think it's a great fix for the scraping issue and will hopefully reduce the need for Panda updates.