Fetching & rendering a non-ranking page in GWT to look for issues
-
Hi
I have a client's nicely optimised webpage that isn't ranking for its target keyword, so I just did a fetch & render in GWT to look for problems. I could only do a partial fetch, with the robots.txt-related messages below:
Googlebot couldn't get all resources for this page
Some boilerplate JS plugins were not found, and some JS (comment reply) was blocked by robots.txt (file below):
User-agent: *
Disallow: /wp-admin/
Disallow: /wp-includes/
As far as I understand it, the above is how it should be, but I'm posting here to ask whether anyone can confirm if this could be causing any problems, so I can rule it out (or not).
Pages targeting other, more competitive keywords are ranking well and are almost identically optimised, so I can't think why this one is not ranking.
Does fetch and render get Google to re-crawl the page? If I do this and then press 'Submit to Index', should I know within a few days whether there's still a problem or not?
All Best
Dan
-
OK, thanks!
Nothing has changed; I just hoped it might do something.
-
If anything changed between the 15th and today, it'll help ensure it gets updated. But that's all.
-
Thanks, Donna! Yes, it's all there and the cache date is 15 Jan, but I still thought it worthwhile fetching & rendering & submitting again. Or does that do nothing more, if it's already indexed, apart from asking Google to take another look?
-
Can you see if it's cached? Try cutting and pasting the entire URL into the search window, minus the http://. If it's indexed, it should show up in search results. Not the address bar, the search window.
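For example (this is a made-up URL, just to show the format), you would paste something like this into the search box:
www.example-client-site.co.uk/target-keyword-page/
If you want to jump straight to the cached copy and see its date, the cache: operator should do the same job:
cache:www.example-client-site.co.uk/target-keyword-page/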
-
Thanks for commenting, Donna!
And for providing the link to the interesting Q&A, although this isn't the scenario I'm referring to in my original question.
The page isn't ranking at all, although it's very well optimised (and not overly so), and the keyword isn't that competitive, so I would expect it to be somewhere in the first 3-4 pages, but it's not even in the first 100.
Very similarly optimised pages (for other target keywords which are more competitive) are ranking well. Hence the fetch & render and submit to index I did, just to double-check Google's seeing the page.
Cheers
Dan
-
Hi Dan,
You might find this Q&A helpful. It offers suggestions for what to do when an unexpected page is ranking for your targeted keyword phrase. I think most, if not all, suggestions apply in your case as well. Good luck!
-
Marvellous!
Many thanks, Robert!
All Best
Dan
-
Yes, there's a lot of overlap when it comes to GWT. For the most part, if you make a submission request for crawling, the page is indexed at the same time. I believe the difference lies in the approaches that let you fetch as Google (as a test) as opposed to submitting for the official index.
In other words, what you have done is a definitive step in crawling and indexing, as opposed to just seeing what Google would find if it were to crawl your site (as a test). "Submit to Index" is normally something I reserve for completed sites (as opposed to stub content), to avoid accidentally getting unfinished pages indexed.
In your circumstances, however, I don't think it will hurt you and it may help you identify any outstanding issues. Just remember to avoid it if you don't want a site indexed before it is ready!
Hope this helps,
Rob
-
Hi Robert,
Thanks for your help again!
That's great, thanks, but what about 'Submit to Index', which I did as well? Did I need to do that or not? (The sitemap section of GWT says all submitted pages are indexed, so I take it I didn't need to, but I did it anyway as a precaution.)
All Best
Dan
-
Hello again, Dan,
From what I can tell from your description, you have done what you can to make this work. We would expect JS to be blocked by that robots.txt file.
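If you ever do decide you want Googlebot to fetch those scripts for rendering, one option (just a sketch, and the exact paths are assumptions about what your theme actually loads) is to keep the existing Disallow rules but explicitly allow the script and style folders:
User-agent: *
Disallow: /wp-admin/
Disallow: /wp-includes/
Allow: /wp-includes/js/
Allow: /wp-includes/css/
Google treats the longer, more specific Allow rule as taking precedence over the shorter Disallow, so wp-admin stays blocked while the rendering resources get through.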
To answer your questions:
Fetch & render does allow Google to re-crawl the page using GWT. A request of this nature typically takes one to three days to process, so you should know where you stand at that point.
Feel free to put an update here and if there is further information I will see what I can do to help out.
Cheers!
Rob