No description on Google/Yahoo/Bing, updated robots.txt - what is the turnaround time or next step for visible results?
-
Hello,
New to the Moz community and thrilled to be learning alongside all of you! One of our clients' sites is currently showing a 'blocked' meta description due to an old robots.txt file (e.g., "A description for this result is not available because of this site's robots.txt").
We have updated the site's robots.txt to allow all bots. The meta tag has also been updated in WordPress (via the Yoast SEO plugin).
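For reference, an 'allow all bots' robots.txt is about as minimal as it gets - the empty Disallow value means nothing is blocked, and the Sitemap line is optional:

User-agent: *
Disallow:

# Optional: point crawlers at the sitemap index (ours, generated by Yoast)
Sitemap: http://www.altaspartners.com/sitemap_index.xml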
See image here of Google listing and site URL: http://imgur.com/46wajJw
I have also ensured that the most recent robots.txt has been submitted via Google Webmaster Tools.
When can we expect these results to update? Is there a step I may have overlooked?
Thank you,
Adam
-
Great, the good news is that following submission of a sitemap via Webmaster Tools, things appear to be remedied on Google! It does seem, however, that the issue still persists on Bing/Yahoo.
Some of the 404s are links from an old site that weren't carried over in my redesign, so those will be handled shortly as well.
I've submitted the sitemap via Bing Webmaster Tools, so I presume it's a similar matter of simply 'waiting on Bing'?
Many thanks for your valuable insight!
-
Hi there,
It seems like there are some other issues tangled up in this.
- First off, it looks like some non-www URLs indexed in Google are 301 redirecting to www but then 404'ing. It's good that they redirect to www, but they should end up on active pages (there's a quick way to spot-check this sketched after this list).
- The non-www homepage is the one showing the robots.txt message. This should hopefully resolve in a week or two, once Google re-crawls the non-www URL and sees the 301. The actual fix is getting the non-www URL out of the index and having Google rank the www homepage instead; the www homepage's description shows up just fine.
- You may want to register the non-www version of the domain in Webmaster Tools as well, and make sure to clean up any errors that pop up there.
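If it helps, here is a rough sketch (Python with the third-party requests library; the hostnames are assumed from this thread) for spot-checking the redirect chain and final status code of each homepage variant:

# Spot-check redirect chains - a sketch, not production code.
# Hostnames below are assumptions based on the site discussed in this thread.
import requests

for url in ["http://altaspartners.com/", "http://www.altaspartners.com/"]:
    resp = requests.get(url, allow_redirects=True, timeout=10)
    # resp.history records each 301/302 hop before the final response
    hops = " -> ".join(str(r.status_code) + " " + r.url for r in resp.history)
    print(url, "|", hops or "no redirect", "| final:", resp.status_code, resp.url)

A healthy non-www URL should show a single 301 hop ending in a 200 on the www page, not a 404.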
-
I just got this figured out; let's try dropping this into Google!
-
The 404 error may stem from a common issue with Yoast sitemaps: http://kb.yoast.com/article/77-my-sitemap-index-is-giving-a-404-error-what-should-i-do
The first step is to reset the permalink structure; that alone can resolve the 404 error you're seeing. You definitely want to fix the sitemap 404 so you can submit a crawlable sitemap to Google.
-
Thanks! It would seem that the sitemap URL http://www.altaspartners.com/sitemap_index.xml brings up a 404 page, so I'm a bit confused by that step - but otherwise it all appears very clear!
-
In WordPress, go to the Yoast plugin and locate the sitemap URL/settings. Plug the sitemap URL into your browser and make sure that it renders properly.
Once you have that exact URL, drop it into Google Webmaster Tools and let it process. Google will let you know if it finds any errors that need correcting. Once the sitemap is submitted, you just need to wait for Google to update its index and reflect your site's meta description.
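If you'd rather verify it outside the browser, here's a small sketch (Python with requests and the standard-library XML parser; the sitemap URL is the one from this thread) that checks the sitemap index and each child sitemap it lists:

# Sanity-check a sitemap index before submitting it - a rough sketch.
import requests
import xml.etree.ElementTree as ET

SITEMAP_URL = "http://www.altaspartners.com/sitemap_index.xml"  # from this thread
NS = {"sm": "http://www.sitemaps.org/schemas/sitemap/0.9"}

resp = requests.get(SITEMAP_URL, timeout=10)
print(SITEMAP_URL, "-> HTTP", resp.status_code)

if resp.status_code == 200:
    # A sitemap index lists its child sitemaps in <sitemap><loc> elements
    root = ET.fromstring(resp.content)
    for loc in root.findall(".//sm:loc", NS):
        child = requests.head(loc.text, timeout=10)
        print("  ", loc.text, "-> HTTP", child.status_code)

Anything other than HTTP 200 there is worth fixing before you submit the sitemap to GWT.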
Yoast has a great blog that goes in depth about its sitemap features: https://yoast.com/xml-sitemap-in-the-wordpress-seo-plugin/
-
Sounds great, Ray. How would I go about checking these URLs for the Yoast sitemap?
-
Yoast sets up a pretty efficient sitemap. Make sure the sitemap URL settings are correct, load the URL in the browser to confirm it renders, and submit your sitemap through GWT - that will help trigger a fresh crawl of the site and, hopefully, an update to the index so your meta descriptions begin to show in the SERPs.
-
Hi Ray,
With Fetch as Googlebot, I see a redirect for the non-www version and a correct fetch for the www. Using Yoast SEO, it would seem the sitemap link leads to a 404?
-
Ha, that's exactly what I did.
I'm not showing any restrictions in your robots.txt file and the meta tag is assigned appropriately.
Have you tried fetching the site with the Webmaster Tools 'Fetch as Googlebot' tool? If there is an issue, it should be apparent there. Doing this may also help get your page re-crawled more quickly and the index updated.
If everything is as it should be and you're only waiting on a re-index, that usually takes no longer than two weeks (for very infrequently crawled websites). Fetching with Googlebot may speed things up, and getting an external link on a higher-trafficked page could help as well.
Have you tried resubmitting a sitemap through GWT as well? That could be another trick to getting the page re-crawled more quickly.
-
Hello Ray,
Specifically, the firm name, which is spelled a-l-t-a-s p-a-r-t-n-e-r-s (it is easy to confuse with "Atlas Partners", which is another company altogether).
-
What was the exact search term you used to bring up those SERPs?
When I search 'atlastpartners' and 'atlastpartners.com', it brings up your site with a meta description.