Fetching & Rendering a non-ranking page in GWT to look for issues
-
Hi
I have a client's nicely optimised webpage that isn't ranking for its target keyword, so I just did a Fetch & Render in GWT to look for problems. I could only get a partial fetch, with the robots.txt-related messages below:
"Googlebot couldn't get all resources for this page"
Some boilerplate JS plugins were not found, and some JS (the comment-reply script, by the look of it) was blocked by robots.txt (file below):
User-agent: *
Disallow: /wp-admin/
Disallow: /wp-includes/
As far as I understand it, the above is how it should be, but I'm posting here to ask if anyone can confirm whether this could be causing any problems, so I can rule it out (or not).
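(From what I've read, if those blocked scripts did turn out to matter, the usual tweak is an Allow rule for the asset paths so Googlebot can still render them. Something like the below - I haven't applied this, just noting it as a possibility:
User-agent: *
Disallow: /wp-admin/
Disallow: /wp-includes/
Allow: /wp-includes/js/
As I understand it, Google treats the longer, more specific Allow as overriding the broader Disallow for URLs under that path.)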
Pages targeting other, more competitive keywords are ranking well and are almost identically optimised, so I can't think why this one isn't ranking.
Does Fetch & Render get Google to re-crawl the page? If I do this and then press 'Submit to Index', should I know within a few days whether there's still a problem?
All Best
Dan
-
OK, thanks!
Nothing has changed; I just hoped it might do something.
-
If anything changed between the 15th and today, it'll help ensure it gets updated. But that's all.
-
Thanks, Donna! Yes, it's all there and the cache date is 15 Jan, but I still thought it worthwhile to fetch & render and submit again. Or does that do nothing more, if it's already indexed, apart from asking Google to take another look?
-
Can you see if it's cached? Try cutting and pasting the entire URL into the search window, minus the http://. If it's indexed, it should show up in search results. Not the address bar, the search window.
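For example, with example.com standing in for your client's domain, you'd type this into the search box:
example.com/the-page-path
You can also pull up the stored copy directly with the cache: operator:
cache:example.com/the-page-path
If neither returns anything, the page most likely isn't indexed at all.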
-
Thanks for commenting, Donna!
And for providing the link to the interesting Q&A, although this isn't the scenario I was referring to in my original question.
The page isn't ranking at all, although it's very well optimised (and not overly so) and the keyword isn't that competitive, so I would expect to be somewhere in the first 3-4 pages, but it's not even in the first 100.
Very similarly optimised pages (for other target keywords, which are more competitive) are ranking well. Hence the Fetch & Render and Submit to Index I did, just to double-check Google's seeing the page.
Cheers
Dan
-
Hi Dan,
You might find this Q&A helpful. It offers suggestions for what to do when an unexpected page is ranking for your targeted keyword phrase. I think most, if not all, of the suggestions apply in your case as well. Good luck!
-
Marvellous!
Many thanks, Robert!
All Best
Dan
-
Yes, there is a lot of overlap when it comes to GWT. For the most part, if you make a submission request for crawling, the page is indexed at the same time. I believe the difference lies in the Fetch as Google option, which lets you crawl as Google would (as a test), as opposed to submitting for the official index.
In other words, what you have done is a definitive step in crawling and indexing, rather than just seeing what Google would find if it were to crawl your site. "Submit to Index" is normally something I reserve for completed sites (as opposed to stub content) to avoid unfinished pages being indexed by accident.
In your circumstances, however, I don't think it will hurt you, and it may help you identify any outstanding issues. Just remember to avoid it if you don't want a site indexed before it is ready!
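(As an aside, since robots.txt alone doesn't guarantee a page stays out of the index, the usual way to keep unfinished pages out is a robots meta tag in the page's head. A minimal example, to be removed once the page is ready:
<meta name="robots" content="noindex">)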
Hope this helps,
Rob
-
Hi Robert,
Thanks again for your help!
That's great, thanks, but what about 'Submit to Index', which I did as well? Did I need to do that or not? (The sitemap section of GWT says all submitted pages are indexed, so I take it I didn't need to, but I did it anyway as a precaution.)
All Best
Dan
-
Hello again, Dan,
From what I can tell from your description, you have done what you can to make this work. We would expect some JS to be blocked by that robots.txt file, since /wp-includes/ is where WordPress keeps its core scripts (the comment-reply script, for instance, lives there).
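If you want to double-check exactly which resources that file blocks for Googlebot, here's a quick sketch using Python's standard-library robots.txt parser. The domain and script paths below are placeholders, so swap in the real URLs from your Fetch & Render report:

from urllib.robotparser import RobotFileParser

# Point the parser at the live robots.txt (placeholder domain).
parser = RobotFileParser("http://example.com/robots.txt")
parser.read()  # downloads and parses the file

# Placeholder resource URLs - use the ones Fetch & Render flagged.
urls = [
    "http://example.com/wp-includes/js/comment-reply.min.js",
    "http://example.com/wp-content/themes/your-theme/main.js",
]

for url in urls:
    verdict = "allowed" if parser.can_fetch("Googlebot", url) else "BLOCKED"
    print(verdict, url)

Anything printed as BLOCKED is a resource Googlebot will skip when rendering the page.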
To answer your questions:
Fetch & Render does prompt Google to re-crawl the page via GWT. A request of this nature typically takes 1-3 days to process, so you should know where you stand at that point.
Feel free to post an update here, and if there is further information I will see what I can do to help out.
Cheers!
Rob
Related Questions
-
Our protected pages 302 redirect to a login page if not a member. Is that a problem for SEO?
We have a membership site with links out on our unprotected pages. If a non-member clicks one of these links, it 302-redirects to the login/join page. Is this an issue for SEO? Thanks!
Technical SEO | rimix1
-
How to activate Page Fetching and Rendering
We have a coupons and deals website. Coupons are added to and removed from the website on a daily basis, but the crawler isn't crawling it that often. Lately we have started fetching and rendering the pages, but that is a time-consuming task, as we have more than 500 stores with coupons. So I was looking for an API or some method by which the crawler would crawl the website on a defined schedule. Suppose store "x" should be crawled every other day, as we update its coupons daily, whereas store "y"'s coupons are updated fortnightly, so they can be crawled weekly. Can somebody suggest something?
Technical SEO | jaintechnosoft0
-
Log files vs. GWT: major discrepancy in number of pages crawled
Following up on this post, I did a pretty deep dive on our log files using Web Log Explorer. Several things have come to light, but one of the issues I've spotted is the vast difference between the number of pages crawled by the Googlebot according to our log files versus the number of pages indexed in GWT. Consider:
Number of pages crawled per log files: 2,993
Crawl frequency (i.e. number of times those pages were crawled): 61,438
Number of pages indexed by GWT: 17,182,818 (yes, that's right - more than 17 million pages)
We have a bunch of XML sitemaps (around 350) that are linked on the main sitemap.xml page; these pages have been crawled fairly frequently, and I think this is where a lot of links have been indexed. Even so, would that explain why we have relatively few pages crawled according to the logs but so many more indexed by Google?
Technical SEO | ufmedia0
-
If my home page never shows up in SERPS but other pages do, does that mean Google is penalizing me?
So my website that I do local SEO for, xyz.com, is finally getting better on some keywords (thanks, SEOmoz!). But only pages like xyz.com/better_widgets_ or xyz.com/mousetrap_removals are. Is Google possibly penalizing me for some duplicate content websites I have out there? (Working on it, I know, I know, it is bad...)
Technical SEO | greenhornet770
-
Descriptions missing from rankings associated with Google Place pages.
Can anyone help me figure out why my rankings that are associated with Google Place pages are missing descriptions? I have a number one result for the top searched keyword in my category but it just doesn't look the same without a description and I'm sure it's affecting CTR too.
Technical SEO | glideagency0
-
Non-Canonical Pages still Indexed. Is this normal?
I have a website that contains some products, and the old URL structure was definitely not optimal for SEO purposes. So I created new SEO-friendly URLs on my site and decided I would use canonical tags to transfer all the weight of the old URLs to the new URLs and ensure that the old ones would not show up in the SERPs. Problem is, this has not quite worked. I implemented the canonical tags about a month ago, but I am still seeing the old URLs indexed in Google, and I notice that the cache date of these pages was only about a week ago. This leads me to believe that the spiders have been to the pages and seen the new canonical tags but are not following them. Is this normal behavior, and if so, can somebody explain to me why? I know I could have just 301 redirected these old URLs to the new ones, but the process I would need to go through to have that done is much more of a battle than just adding the canonical tags, and I felt the canonical tags would have done the job. Needless to say, the client is not too happy right now and insists that I should have just used the 301s. In this case the client appears to be correct, but I do not quite understand why my canonical tags did not work. Examples below:
Old pages: www.awebsite.com/something/something/productid.3254235
New pages: www.awebsite.com/something/something/keyword-rich-product-name
Canonical tag on both pages: <link rel="canonical" href="http://www.awebsite.com/something/something/keyword-rich-product-name"/>
Thanks guys for the help on this.
Technical SEO | DRSearchEngOpt0
-
How to Solve Duplicate Page Content Issue?
I have created a campaign in the SEOmoz tools for my website and found 89 duplicate content issues in the report. Please look into the Duplicate Page Content issue. I am quite confused about how to resolve it. Can anyone suggest the best solution?
Technical SEO | CommercePundit0
-
Frequent updating of pages on website to rank better
Will updating each page often, for example every day or every few days, make the page and/or website rank better in Google? The reason I ask is that I made 18 websites about three months ago, and the traffic was initially a lot higher and has fallen little by little through the three months. How well my pages rank has also fallen. I just put out the websites and have done nothing else: no linking, no updates. It is evident that without new links coming in, a website will fall in rank (i.e., link acquisition velocity). But my question is: if I update the pages and change content frequently, will this improve my position in Google and other search engines? The traffic on the websites over the three months, if graphed, sort of looks like a staircase going down.
Technical SEO | mickey110