Rel="author" - This could be KickAss!
-
Google is now encouraging webmasters to attribute content to authors with rel="author". You can read what Google has to say about it here and here.
A quote from one of Google's articles:
When Google has information about who wrote a piece of content on the web, we may look at it as a signal to help us determine the relevance of that page to a user’s query. This is just one of many signals Google may use to determine a page’s relevance and ranking, though, and we’re constantly tweaking and improving our algorithm to improve overall search quality.
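For anyone who hasn't seen it yet, the basic markup is just a link from the content page to an author page, flagged with the attribute. A minimal sketch (the URLs are made up):

```html
<!-- On the article page: an in-content link to the author's bio page -->
<a rel="author" href="http://www.example.com/about/egol">Written by EGOL</a>

<!-- Or as a link element in the head of the page -->
<link rel="author" href="http://www.example.com/about/egol" />
```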
I am guessing that Google might use it like this: if you have several highly successful articles about "widgets", your author link on each of them will let Google know that you are a widget expert. Then when you write future articles about widgets, Google will rank them much higher than normal, because Google knows you are an authority on that topic.
If it works this way, the rel="author" attribute could be the equivalent of a big load of backlinks for highly qualified authors.
What do you think about this? Valuable?
Also, do you think there is any way that Google could be using this as a "content registry" that would foil some attempts at content theft and content spinning?
Any ideas welcome! Thanks!
-
I own a company and usually write my own blog posts, but not every time. When I don't, I pay to have them written and thus own the copy. Can an author be a company, with the link pointing to the company's About Us page?
-
To anyone following this topic: there is a good thread at cre8asiteforums.com
-
Pretty sure both say they are interchangeable.
-
I was wondering if this is needed. Doesn't the specification at schema.org cover this? Or would Google use the Author itemscope differently from rel="author"?
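For comparison, schema.org marks up authorship with microdata properties rather than a link relation. A rough sketch of the two approaches side by side (names and URLs hypothetical):

```html
<!-- schema.org microdata: the author is a property of the Article itemscope -->
<div itemscope itemtype="http://schema.org/Article">
  <h1 itemprop="name">All About Widgets</h1>
  <span itemprop="author">Jane Doe</span>
</div>

<!-- rel="author": a link relation pointing at an author page -->
<a rel="author" href="http://www.example.com/about/jane-doe">Jane Doe</a>
```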
-
Right now, rel="author" is only useful with intra-domain URLs. It does not "count" if you are linking to other domains.
BUT...
In the future it might, so doing this could either give you a nice head start, or not. Time will tell.
-
I think it's a good idea and may open up some content syndication options that were discounted before...
In the past I have been firmly against content syndication - I want the content on my own site. However, if I think that the search engines are going to give me credit for it, then I might do it when a great opportunity arrives.
-
I think it's a good idea and may open up some content syndication options that were discounted before (as per Dunamis' post); however, I've not seen the rel tag do much for me.
Tagging links to SM sites with rel="me" has not helped those pages get into the SERPs for my brand (though I've not been super consistent with doing it). rel="nofollow" obviously had the rug pulled from under it a while ago, and I even once got carried away and tried linking language sites together with rel="alternate" lang="CC", but I didn't get the uplift in the other language versions that I hoped for (though it was a bit of a long shot to begin with).
I'm just wondering how much value this is going to have. I still like it in principle and will attempt to use it where I can.
-
Or, the other issue could be that scraper sites could grab content from a non-web-savvy site owner. If the original owner didn't use an author tag, the scraper site could slap its own author tag on, and Google would think it was the original author.
-
"However, it wouldn't be hard for Google to have a system whereby they recognize that my site was the first one to have the rel author and therefore I'm likely the original owner. This is basically a content registry."
Oh... I really like that. I would like to see Google internally put a date on first publication. One problem some people might have is that their site is very new and weak, and content scrapers hit it more frequently than Googlebot does.
-
When I read it, I understood it to mean that the author tag was telling Google that I was the original author. (I actually thought of you, EGOL, as I know you have been pushing for a content registry.) Now, if someone steals my stuff, I wouldn't expect them to put a rel author on it. However, I can see a few ways that the tag may be helpful:
-I recently had someone want to publish one of my articles on their site. I said no because I didn't want there to be duplicates of my stuff online. But perhaps with rel author I could let another site publish my article as long as it is credited to me. Then Google will know that my site deserves to be the top listing for this content.
-If I have stuff that I know scrapers are going to get, I can use the rel-author tag. My first thought was that a scraper site could sneakily put their own rel author on it and claim it as theirs. However, it wouldn't be hard for Google to have a system whereby they recognize that my site was the first one to have the rel author and therefore I'm likely the original owner. This is basically a content registry.
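To make the first-seen idea concrete, here is a toy sketch of such a content registry in Python. This is purely speculative illustration of the logic described above - the class, its names, and the fingerprinting idea are all hypothetical, not anything Google has described:

```python
# Toy "content registry": the first domain observed publishing a piece
# of content (with rel="author") is treated as the likely original owner.
class ContentRegistry:
    def __init__(self):
        # content fingerprint -> (domain, earliest crawl timestamp)
        self._first_seen = {}

    def record(self, fingerprint, domain, timestamp):
        """Record a crawl sighting; keep only the earliest one."""
        existing = self._first_seen.get(fingerprint)
        if existing is None or timestamp < existing[1]:
            self._first_seen[fingerprint] = (domain, timestamp)

    def likely_owner(self, fingerprint):
        """Return the domain that was seen with this content first."""
        entry = self._first_seen.get(fingerprint)
        return entry[0] if entry else None

registry = ContentRegistry()
registry.record("article-abc", "original-site.com", 100)
registry.record("article-abc", "scraper-site.com", 250)
print(registry.likely_owner("article-abc"))  # original-site.com
```

Note that this simple first-crawled-wins rule has exactly the weakness EGOL raised above: if scrapers hit a new, weak site before Googlebot does, the scraper's copy gets the earlier timestamp.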
-
This might be helpful for you, especially if you can get the syndication sites to place author tags on the blog posts.
rel=canonical might also be worth investigating.
I am also confused about this. I'd like to see more information from Google on exactly how these will be used - especially in cross-domain situations.
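For reference, a cross-domain rel=canonical goes in the head of the syndicated copy and points back at the original (URLs hypothetical):

```html
<!-- In the head of syndication-partner.com's copy of the article -->
<link rel="canonical" href="http://www.original-site.com/blog/my-article" />
```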
-
I actually have similar questions about this. The company I work for hosts a blog that is also syndicated across 4 to 5 other websites. The other sites have bigger reach on the web, and our blog isn't getting much direct traffic out of this. I have a feeling that adding the author tags to our content will eventually pay off by showing that the content originates on our site and is then syndicated. I am interested and excited to see other ways this will be used. I think it's a great fix for the scraping issue and will hopefully prevent the need for more Panda updates X.X