Rel="author" - This could be KickAss!
-
Google is now encouraging webmasters to attribute content to authors with rel="author". You can read what Google has to say about it here and here.
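In markup terms it's just a link relation from a piece of content to an author page - a minimal sketch, with made-up URLs:

    <!-- On the article page: the byline links to the author's bio/profile page -->
    <p>by <a href="http://www.example.com/about/egol" rel="author">EGOL</a></p>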
A quote from one of Google's articles....
When Google has information about who wrote a piece of content on the web, we may look at it as a signal to help us determine the relevance of that page to a user’s query. This is just one of many signals Google may use to determine a page’s relevance and ranking, though, and we’re constantly tweaking and improving our algorithm to improve overall search quality.
I am guessing that Google might use it like this..... If you have several highly successful articles about "widgets", your author link on each of them will let Google know that you are a widget expert. Then when you write future articles about widgets, Google will rank them much higher than normal - because Google knows you are an authority on that topic.
If it works this way the rel="author" attribute could be the equivalent of a big load of backlinks for highly qualified authors.
What do you think about this? Valuable?
Also, do you think that there is any way that Google could be using this as a "content registry" that will foil some attempts at content theft and content spinning?
Any ideas welcome! Thanks!
-
I own a company and usually write my own blog posts, but not every time. When I don't, I pay to have them written and thus own the copy. Can an author be a company, with the link pointing to the company's About Us page?
-
To anyone following this topic... A good thread at cre8asiteforums.com
-
Pretty sure both say they are interchangeable.
-
I was wondering if this is needed? Doesn't the specification at schema.org cover this? Or would Google use the Author itemscope differently from rel="author"?
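As I read the schema.org docs, the microdata version looks roughly like this (names and URLs made up), next to the plain link relation - so there does seem to be overlap:

    <!-- schema.org microdata version (roughly, per the schema.org docs) -->
    <div itemscope itemtype="http://schema.org/Article">
      <h1 itemprop="name">All About Widgets</h1>
      <p itemprop="author" itemscope itemtype="http://schema.org/Person">
        by <span itemprop="name">Jane Doe</span>
      </p>
    </div>

    <!-- rel="author" version: just a link relation to an author page -->
    <a href="http://www.example.com/about/jane" rel="author">Jane Doe</a>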
-
Right now, rel="author" is only useful with intra-domain URLs. It does not "count" if you are linking to other domains.
BUT...
In the future it might, so doing this could either give you a nice head start, or not. Time will tell.
-
I think it's a good idea and may open up some content syndication options that were discounted before...
In the past I have been firmly against content syndication - I want the content on my own site. However, if I think that the search engines are going to give me credit for doing it then I might do it when a great opportunity arrives.
-
I think it's a good idea and may open up some content syndication options that were discounted before (as per Dunamis' post), however I've not seen the rel attribute do much for me.
Tagging links to social media sites with rel="me" has not helped those pages get into the SERPs for my brand (though I've not been super consistent with doing it), rel="nofollow" obviously had the rug pulled from under it a while ago, and I even once got carried away and tried linking language sites together with rel="alternate" lang="CC" but didn't get the uplift in the other language versions I hoped for (though it was a bit of a long shot to begin with).
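For reference, the markup I've been experimenting with looks roughly like this (example URLs only); one caveat is that Google's documentation shows hreflang rather than lang for the language alternates, which may be part of why I saw no uplift:

    <!-- social profile link marked up as "me" (example URL) -->
    <a href="https://twitter.com/mybrand" rel="me">@mybrand on Twitter</a>

    <!-- language alternates; Google's documented form uses hreflang, not lang -->
    <link rel="alternate" hreflang="de" href="http://de.example.com/page.htm" />
    <link rel="alternate" hreflang="fr" href="http://fr.example.com/page.htm" />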
I'm just wondering how much value this is going to have. I still like it in principle and will attempt to use it where I can.
-
Or, the other issue could be that content sites could grab content from a non-web-savvy site owner. If the original owner didn't have an author tag, then the content site could slap their own author tag on and Google would think that they were the original author.
-
However, it wouldn't be hard for Google to have a system whereby they recognize that my site was the first one to have the rel author and therefore I'm likely the original owner. This is basically a content registry.
Oh.... I really like that. I would like to see Google internally put a date on first publication. One problem that some people might have is that their site is very new and weak, and content scrapers hit them with a higher frequency than Googlebot does.
-
When I read it, I understood it to mean that the author tag was telling google that I was the original author. (I actually thought of you EGOL as I know you have been pushing for a content registry). Now, if someone steals my stuff I wouldn't expect them to put a rel author on it. However, I can see a few ways that the tag may be helpful:
-I recently had someone want to publish one of my articles on their site. I said no because I didn't want there to be duplicates of my stuff online. But, perhaps with rel author I could let another site publish my article as long as it is credited to me (see the sketch below). Then, Google will know that my site deserves to be the top listing for this content.
-If I have stuff that I know scrapers are going to get, I can use the rel-author tag. My first thought was that a scraper site could sneakily put their own rel author on it and claim it as theirs. However, it wouldn't be hard for Google to have a system whereby they recognize that my site was the first one to have the rel author and therefore I'm likely the original owner. This is basically a content registry.
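If I did let another site republish something, my rough idea of what their copy would carry (URLs made up) is just an author credit pointing back at my own author page:

    <!-- on the syndicating site's copy of my article -->
    <p>Originally written by
      <a href="http://www.mysite.com/about/author" rel="author">Author Name</a>
    </p>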
-
This might be helpful for you, especially if you can get the syndication sites to place author tags on the blog posts.
rel=canonical might also be worth investigating.
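If the syndication sites will cooperate, a cross-domain canonical on each of their copies would be one way to point the credit back - a sketch, with a hypothetical URL, placed in the head of the syndicated copy:

    <!-- in the <head> of the syndicated copy on the partner's domain -->
    <link rel="canonical" href="http://www.yourblog.com/original-post.htm" />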
I am also confused about this. I'd like to see more information from Google on exactly how these will be used - especially in cross-domain situations.
-
I actually have similar questions about this. The company I work for hosts a blog that is also syndicated across 4 to 5 other websites. The other sites have bigger reach on the web, and our blog isn't getting much direct traffic out of this. I have a feeling adding the author tags to our content will eventually pay off by showing that the content originates on our site and is then syndicated. I am interested / excited to see other ways this will be used. I think it's a great fix for the scraping issue and will hopefully prevent needing Panda updates X.X