Rel="author" showing old image
-
I'm using http://www.google.com/webmasters/tools/richsnippets to test my rel="author" markup, which validated successfully, but I've decided I want to change my image in Google+, as the current one is not what I want.
I changed my image in Google+ over 14 hours ago, and the Rich Snippets tool is still not showing the new picture. I know Google can take at least a couple of weeks to show changes in search results, but I thought this Rich Snippets tool was immediate.
Am I missing something here, or am I just impatient? I want my new photo to show.
-
Just an update: on Oct 15th, 2012 I added the rel=author links to my various websites. Today, October 30th, 2012 (only 15 days later), I see my image live in the SERPs! Yahoo!
And that's for all the websites I added it to, including my company site, my books website, and my blog subdomain. Thanks, SEOmoz, for an awesome suggestion. I really stand out now.
P.S. I just did my happy dance before writing this.
-
OK, I just checked again and my new image is now showing in Google's Rich Snippets Tester. The not-so-good news is that it doesn't show in the live results yet. At least I know the tool updates the image pretty quickly; but from what I've read it can take a while to appear in the live SERPs, so I'll just have to be patient and hope it shows up soon.
-
I've never tested the turnaround time of the rich snippet tester, but actual Google SERPs for author markup change almost instantly. Have you checked a live search result for one of your articles to see how quickly it changes?
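In the meantime, one way to rule out a problem on your side is to confirm the rel="author" link is still in the page source, so you know any lag is on Google's end. A minimal sketch using Python's stdlib HTML parser; the snippet and profile URL below are placeholders, not your actual markup:

```python
# Minimal check that a page's HTML still carries a rel="author" link,
# so any display delay is on Google's side, not a markup regression.
from html.parser import HTMLParser

class AuthorLinkFinder(HTMLParser):
    def __init__(self):
        super().__init__()
        self.author_hrefs = []

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        # rel can hold several space-separated tokens; rel="author"
        # is valid on both <a> and <link> elements
        rel_tokens = (attrs.get("rel") or "").split()
        if tag in ("a", "link") and "author" in rel_tokens:
            self.author_hrefs.append(attrs.get("href"))

finder = AuthorLinkFinder()
finder.feed('<a rel="author" href="https://plus.google.com/123456789">Me</a>')
print(finder.author_hrefs)  # ['https://plus.google.com/123456789']
```

If the list comes back empty for your real article HTML, the markup (not Google) is the thing to fix first.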
Related Questions
-
Rel=canonical and redirect on same page
Hi guys, am I going slightly mad, or why would you want a redirect and a canonical pointing back to the same page? For instance, https://handletrade.co.uk/pull-handles/pull-handles-zcs-range/d'-pull-handle-19mm-dia.-19-x-150mm-ss/?tag=Dia.&page=2 has this in its source code: <link href="https://handletrade.co.uk/d'-pull-handle-19mm-dia.-19-x-150mm-ss/" rel="canonical" /> Perfect! Exactly what it is intended to do. But then that canonical URL is 301 redirected to https://handletrade.co.uk/pull-handles/pull-handles-zcs-range/d'-pull-handle-19mm-dia.-19-x-150mm-ss/ The site is built in OpenCart, and I think it's the SEO plugin that needs tweaking. Could this cause poor SERP visibility? It's happening across the whole site. Surely the canonical should just point to the proper page, so there is no need for an additional bounce.
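To make the extra hop concrete, here is a small offline sketch of the chain a crawler would walk in the situation described; the URLs and mappings are illustrative stand-ins, not fetched from handletrade.co.uk:

```python
# Sketch of the loop described above: the parameterised URL canonicalises
# to a short URL, but that short URL 301-redirects back to the long clean
# URL, adding an extra bounce. The URL maps below are illustrative.
canonicals = {
    "https://example.com/category/short-url/?tag=x&page=2":
        "https://example.com/short-url/",
}
redirects = {
    "https://example.com/short-url/": "https://example.com/category/short-url/",
}

def resolve(url, max_hops=5):
    """Follow the page's canonical once, then any 301s, returning the chain."""
    chain = [url]
    url = canonicals.get(url, url)   # where the page says "index this instead"
    chain.append(url)
    for _ in range(max_hops):
        if url not in redirects:
            break
        url = redirects[url]         # where the server actually sends crawlers
        chain.append(url)
    return chain

chain = resolve("https://example.com/category/short-url/?tag=x&page=2")
print(chain)
```

A two-entry chain is the healthy case; anything longer means the canonical points at a URL that bounces again, which is exactly the behaviour the plugin should be tweaked to stop.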
Technical SEO | nezona1
-
Do you Index your Image Repository?
On our backend system, when an image is uploaded it is saved to a repository. For example, if you upload a picture of a shark it goes to oursite.com/uploads as shark.png, and when you use that picture in a blog post its source shows as oursite.com/uploads/shark.png. This repository (/uploads) is currently being indexed. Is it a good idea to index our repository? And will Google be unable to see the images if it can't crawl the repository link? (We're in the process of adding alt text to all of our images.) Thanks
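Indexing the bare repository listing has little value, but be careful not to fix it by disallowing the directory in robots.txt: that would make the image files themselves uncrawlable, and an image Google can't crawl can't appear in image search even when the post embedding it is fine. A small sketch of that failure mode with Python's stdlib robots.txt parser; the site name and rules are illustrative:

```python
# Sketch: disallowing /uploads/ blocks the image files themselves,
# not just the repository listing. Rules and URLs are illustrative.
from urllib import robotparser

rp = robotparser.RobotFileParser()
rp.parse([
    "User-agent: Googlebot-Image",
    "Disallow: /uploads/",
])
rp.modified()  # mark the rules as loaded so can_fetch() answers

# The image itself is now uncrawlable...
print(rp.can_fetch("Googlebot-Image", "https://oursite.com/uploads/shark.png"))  # False
# ...even though the post that embeds it remains crawlable.
print(rp.can_fetch("Googlebot-Image", "https://oursite.com/blog/shark-post/"))   # True
```

If the goal is just to keep the bare /uploads listing out of results, prefer a noindex signal over a crawl block, so the images stay fetchable.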
Technical SEO | SteveDBSEO0
-
Google Webmaster Tools is saying "Sitemap contains URLs which are blocked by robots.txt" after HTTPS move...
Hi everyone, I really don't see anything wrong with our robots.txt file after the HTTPS move that just happened, but Google says all URLs are blocked. The only change I know we need to make is changing the sitemap URL to https. Do any of you see anything wrong with this robots.txt file?

# This file is to prevent the crawling and indexing of certain parts of your
# site by web crawlers and spiders run by sites like Yahoo! and Google. By
# telling these "robots" where not to go on your site, you save bandwidth
# and server resources. This file will be ignored unless it is at the root
# of your host:
#   Used:    http://example.com/robots.txt
#   Ignored: http://example.com/site/robots.txt
# For more information about the robots.txt standard, see:
#   http://www.robotstxt.org/wc/robots.html
# For syntax checking, see:
#   http://www.sxw.org.uk/computing/robots/check.html

# Website Sitemap
Sitemap: http://www.bestpricenutrition.com/sitemap.xml

# Crawlers Setup
User-agent: *

# Allowable Index
Allow: /*?p=
Allow: /index.php/blog/
Allow: /catalog/seo_sitemap/category/

# Directories
Disallow: /404/
Disallow: /app/
Disallow: /cgi-bin/
Disallow: /downloader/
Disallow: /includes/
Disallow: /lib/
Disallow: /magento/
Disallow: /pkginfo/
Disallow: /report/
Disallow: /stats/
Disallow: /var/

# Paths (clean URLs)
Disallow: /index.php/
Disallow: /catalog/product_compare/
Disallow: /catalog/category/view/
Disallow: /catalog/product/view/
Disallow: /catalogsearch/
Disallow: /checkout/
Disallow: /control/
Disallow: /contacts/
Disallow: /customer/
Disallow: /customize/
Disallow: /newsletter/
Disallow: /poll/
Disallow: /review/
Disallow: /sendfriend/
Disallow: /tag/
Disallow: /wishlist/
Disallow: /aitmanufacturers/index/view/
Disallow: /blog/tag/
Disallow: /advancedreviews/abuse/reportajax/
Disallow: /advancedreviews/ajaxproduct/
Disallow: /advancedreviews/proscons/checkbyproscons/
Disallow: /catalog/product/gallery/
Disallow: /productquestions/index/ajaxform/

# Files
Disallow: /cron.php
Disallow: /cron.sh
Disallow: /error_log
Disallow: /install.php
Disallow: /LICENSE.html
Disallow: /LICENSE.txt
Disallow: /LICENSE_AFL.txt
Disallow: /STATUS.txt

# Paths (no clean URLs)
Disallow: /*.php$
Disallow: /*?SID=
disallow: /*?cat=
disallow: /*?price=
disallow: /*?flavor=
disallow: /*?dir=
disallow: /*?mode=
disallow: /*?list=
disallow: /*?limit=5
disallow: /*?limit=10
disallow: /*?limit=15
disallow: /*?limit=20
disallow: /*?limit=25

Technical SEO | vetofunk
-
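For what it's worth, the plain directory rules in a file like the one above can be sanity-checked offline with Python's stdlib parser. Note that urllib.robotparser does simple prefix matching and ignores the * and $ wildcard extensions Google supports, so only the non-wildcard rules can be tested this way; the rules are a small excerpt and the URLs are illustrative:

```python
# Offline sanity check of a couple of directory rules. If ordinary URLs
# pass here, the rules themselves aren't the blocker; after an HTTPS move
# a more common culprit is a sitemap (or fetched robots.txt) still on
# http://. Rules are an excerpt; URLs are illustrative.
from urllib import robotparser

rp = robotparser.RobotFileParser()
rp.parse([
    "User-agent: *",
    "Disallow: /checkout/",
    "Disallow: /customer/",
])
rp.modified()  # mark the rules as loaded so can_fetch() answers

print(rp.can_fetch("*", "https://www.bestpricenutrition.com/whey-protein"))   # True
print(rp.can_fetch("*", "https://www.bestpricenutrition.com/checkout/cart"))  # False
```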
Combining variants of "last modified", cache duration, etc.
Hiya, as you know, you can specify the date of the last change of a document in various places, for example the sitemap, the HTTP headers (Last-Modified, ETag), and you can also declare an "expected" change, for example a cache duration via header/.htaccess (or even the changefreq in the sitemap). Is it advisable, or rather detrimental, to use multiple variants that essentially tell browsers/search engines the same thing? I.e., should I send a Last-Modified header AND an ETag AND maybe something else? Should I send a cache duration at all if I send Last-Modified? (Assume that I can keep them correct and consistent, as the data for each will come from the very same place.) Also: are there any clear recommendations on which change-indicating method should be used? Thanks for your answers! Nico
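The variants are complementary rather than redundant: Last-Modified and ETag are validators that make revalidation cheap (a 304 instead of a full response), while a cache duration (Cache-Control: max-age) governs how long clients may skip revalidating at all, so sending several is fine as long as they agree. A minimal server-side sketch of how they might fit together; all values are illustrative:

```python
# Sketch: Last-Modified and ETag as validators, plus a cache duration.
# The timestamp and body below are illustrative.
import hashlib
from email.utils import formatdate

body = b"<html>...page content...</html>"
last_modified_ts = 1349942400  # the CMS's change timestamp, for example

headers = {
    "Last-Modified": formatdate(last_modified_ts, usegmt=True),
    "ETag": '"%s"' % hashlib.md5(body).hexdigest(),
    "Cache-Control": "max-age=3600",  # clients may reuse for an hour
}

def respond(request_headers):
    """Return 304 when either validator matches, else 200 with the full body."""
    if request_headers.get("If-None-Match") == headers["ETag"]:
        return 304
    if request_headers.get("If-Modified-Since") == headers["Last-Modified"]:
        return 304
    return 200

print(respond({"If-None-Match": headers["ETag"]}))  # 304: skip re-sending
print(respond({}))                                  # 200: first visit
```

Because both validators derive from the same stored data, they stay consistent automatically, which is the condition you already assume.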
Technical SEO | netzkern_AG0
-
Rel="publisher" validation error in HTML5
Using HTML5, I am getting a validation error in my HTML: "Bad value publisher for attribute rel on element link: Not an absolute IRI. The string publisher is not a registered keyword or absolute URL." This just started showing up in validation errors on Tuesday; it never appeared in the past. Has something changed?
Technical SEO | RoxBrock0
-
"Fourth-level" subdomains. Any negative impact compared with regular "third-level" subdomains?
Hey Moz, a new client has a site that uses subdomains ("third-level" stuff like location.business.com) and "fourth-level" subdomains (location.parent.business.com). Are these fourth-level addresses at risk of being treated differently from the other subdomains? Screaming Frog, for example, doesn't return these fourth-level addresses when crawling business.com except in the External tab, but maybe I'm just configuring the crawls incorrectly. These addresses rank, but I'm worried that we're losing some link juice along the way. Any thoughts would be appreciated!
Technical SEO | jamesm5i0
-
"Not Selected" in index status rising continuously
Hello, after the Penguin update my site slowly suffered a loss in traffic, and now from 15K-18K daily it has dropped to 8K (6K on weekends). I have been trying to find out what the reasons are, but I haven't had any luck yet and it's been a few months now. I did notice this change in GWT, though: "Not selected" in index status has risen significantly (please see the attached image). My site is Designzzz, and I am continuously fixing the errors and problems shown in the SEOmoz Pro tools. If you guys can take a few minutes to evaluate what could be the reason for such a drop, I will be thankful. :} Cheers 6Xtkp.jpg
Technical SEO | wickedsunny10
-
Domain authority and rankings?
I have a site that sits at #1 for its keywords right now, but it only got there about a month ago. The site is only about six months old, with lots of link building. I checked its domain authority and it's only 37/100, while the #2 and #3 sites have domain authority of 57 and 82 respectively. My site has 800+ backlinks, while the #2 and #3 sites have 20,000+. Does this mean my site will LIKELY drop in rankings very soon? I know there is no certainty, but would you say it is highly probable my site will drop?
Technical SEO | jl2550