No description on Google/Yahoo/Bing after updating robots.txt - what is the turnaround time or next step for visible results?
-
Hello,
New to the Moz community and thrilled to be learning alongside all of you! One of our clients' sites is currently showing a 'blocked' meta description due to an old robots.txt file (e.g., "A description for this result is not available because of this site's robots.txt").
We have updated the site's robots.txt to allow all bots. The meta tag has also been updated in WordPress (via the Yoast SEO plugin).
See image here of Google listing and site URL: http://imgur.com/46wajJw
I have also ensured that the most recent robots.txt has been submitted via Google Webmaster Tools.
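For reference (a generic example, not the client's actual file), an allow-all robots.txt is just two lines - an empty Disallow directive means nothing is blocked:

```
User-agent: *
Disallow:
```

Any leftover `Disallow: /` line would keep producing the blocked-description message, so it's worth double-checking the live file in the browser.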
When can we expect these results to update? Is there a step I may have overlooked?
Thank you,
Adam -
Great, the good news is following submission of a sitemap via Webmaster Tools, things appear to be remedied on Google! It does seem, however, that the issue still persists on Bing/Yahoo.
Some of the 404's are links from an old site that weren't carried over following my redesign; so that will be handled shortly as well.
I've submitted the sitemap via Bing Webmaster Tools, as such I presume it's a similar matter of simply 'waiting on Bing'?
Many thanks for your valuable insight!
-
Hi There
It seems like there are some other issues tangled up in this.
- First off it looks like some non-www URLs indexed in Google are 301 redirecting to www but then 404'ing. It's good they redirect to www, but they should end up on active pages.
- The NON-www homepage is the one showing the robots.txt message. This should hopefully resolve in a week or two, once Google re-crawls the NON-www URL and sees the 301. The actual solution is getting the non-www URL out of the index and having Google rank the www homepage instead - the www homepage's description shows up just fine.
- You may want to register the non-www version of the domain in webmaster tools, and make sure to clean up any errors that pop up there as well.
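For reference, forcing every non-www request over to the www host on Apache can be sketched like this (a generic mod_rewrite example based on the domain mentioned in this thread, not the site's actual config):

```
RewriteEngine On
RewriteCond %{HTTP_HOST} ^altaspartners\.com$ [NC]
RewriteRule ^(.*)$ http://www.altaspartners.com/$1 [R=301,L]
```

The key point is that the redirect target should be a live page, so the 301 doesn't land on a 404.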
-
I just got this figured out, let's try dropping this into Google!
-
The 404 error could stem from a common issue with Yoast sitemaps: http://kb.yoast.com/article/77-my-sitemap-index-is-giving-a-404-error-what-should-i-do
The first step is to try resetting the permalink structure; that may resolve the 404 error you're seeing. You definitely want to fix the sitemap 404 so you can submit a crawlable sitemap to Google.
-
Thanks! It would seem that the Sitemap URL http://www.altaspartners.com/sitemap_index.xml brings up a 404 page, so I'm a bit confused with that step - but otherwise it appears to be very clear!
-
In WordPress, go to the Yoast plugin and locate the sitemap URL / settings. Plug the sitemap URL into your browser and make sure that it renders properly.
Once you have that exact URL, drop it into Google Webmaster Tools and let it process. Google will let you know if they found any errors that need correcting. Once submitted, you just need to wait for Google to update its index and reflect your site's meta description.
Yoast has a great blog that goes in depth about its sitemap features: https://yoast.com/xml-sitemap-in-the-wordpress-seo-plugin/
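For a rough idea of what a healthy Yoast sitemap index contains, here is a minimal sketch (the example.com URLs are placeholders) that parses a sitemap index and lists its child sitemaps - once the 404 is fixed, each of these `<loc>` URLs should load in a browser:

```python
import xml.etree.ElementTree as ET

# Hypothetical sitemap index, shaped like the one Yoast generates.
SITEMAP_INDEX = """<?xml version="1.0" encoding="UTF-8"?>
<sitemapindex xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <sitemap>
    <loc>http://www.example.com/post-sitemap.xml</loc>
  </sitemap>
  <sitemap>
    <loc>http://www.example.com/page-sitemap.xml</loc>
  </sitemap>
</sitemapindex>"""

NS = {"sm": "http://www.sitemaps.org/schemas/sitemap/0.9"}

def child_sitemaps(xml_text):
    """Return the <loc> URLs listed in a sitemap index document."""
    root = ET.fromstring(xml_text)
    return [loc.text for loc in root.findall(".//sm:loc", NS)]

print(child_sitemaps(SITEMAP_INDEX))
```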
-
Sounds great Ray, how would I go about checking these URLs for the Yoast sitemap?
-
Yoast sets up a pretty efficient sitemap. Make sure the sitemap URL settings are correct, load it up in the browser to confirm, and submit your sitemap through GWT - that will help get a new crawl of the site and hopefully an update to their index so your meta descriptions begin to show in the SERPs.
-
Hi Ray,
With Fetch as Googlebot, I see a redirection for the non-www, and a correct fetch for the www. Using Yoast SEO, it would seem the sitemap link leads to a 404?
-
Ha, that's exactly what I did.
I'm not showing any restrictions in your robots.txt file and the meta tag is assigned appropriately.
Have you tried to fetch the site with the Webmaster Tools 'fetch as googlebot' tool? If there is an issue, it should be apparent there. Doing this may also help get your page re-crawled more quickly and the index updated.
If everything is as it should be and you're only waiting on a re-index, that usually takes no longer than two weeks (even for infrequently crawled websites). Fetching with Googlebot may speed things up, and getting an external link from a higher-trafficked page could help as well.
Have you tried resubmitting a sitemap through GWT as well? That could be another trick to getting the page re-crawled more quickly.
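Besides resubmitting through GWT, Google also accepts a simple sitemap "ping" URL. A minimal sketch of building that request (using the sitemap URL from this thread; requesting it in a browser or script then notifies Google):

```python
from urllib.parse import urlencode

def google_ping_url(sitemap_url):
    """Build Google's sitemap ping URL for a given sitemap."""
    return "http://www.google.com/ping?" + urlencode({"sitemap": sitemap_url})

print(google_ping_url("http://www.altaspartners.com/sitemap_index.xml"))
```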
-
Hello Ray,
Specifically, the firm name, which is spelled a-l-t-a-s p-a-r-t-n-e-r-s (it is easy to confuse with "Atlas Partners," which is another company altogether).
-
What was the exact search term you used to bring up those SERPs?
When I search 'altaspartners' and 'altaspartners.com' it brings up your site with a meta description.