Googlebot size limit
-
Hi there,
There is about 2.8 KB of JavaScript above the content of our homepage. I know it isn't desirable, but is this something I need to be concerned about?
Thanks,
Sarah
-
Update: It's fine. Ran a Fetch as Google and it's rendering as it should be. I would delete my question if I could figure out how!
-
Agreed. Besides, maybe someone (a newbie like me!) with the same question could see how I figured it out, then try it on their own. Or someone can see what I did and say "wait, that's not right ... ".
I think it comes from my mentality of not wanting to waste people's time on questions I found the answer to - but, yes, we wouldn't want to punish the people putting time into answering, especially when it can help someone else. Thanks for bringing that up, Keri!
-
I would agree. A delete option is not necessary.
-
Roger is very reluctant to delete questions, and feels that in most cases it's not TAGFEE to do so. Usually, by the time the original poster wants to delete a question, there are multiple responses, and deleting the question would also remove the effort the other community members have put into answering it, and remove the opportunity for other people to learn from the experience.
-
Haven't figured that one out either :). Apparently Roger Mozbot does not like questions being deleted, only edited :)
Related Questions
-
Does sitemap size affect SEO?
So I've noticed that the sitemap I use has a capacity of 4,500 URLs, but my website is much larger. Is it worth paying for a commercial sitemap that encompasses my entire site? I also notice that of the 4,500 URLs which have been submitted, only 104 are indexed. Is this normal? If not, why is the index rate so low?
Technical SEO | moon-boots
-
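For context on the question above: the 4,500-URL cap is a limit of the particular sitemap tool, not of the sitemap protocol, which allows up to 50,000 URLs (and 50 MB uncompressed) per file, with a sitemap index file tying multiple sitemaps together. A minimal sketch of splitting a large URL list into protocol-sized files - the example.com URLs and counts are placeholders, not from the question:

```python
# Split a large URL list into multiple sitemap files. The sitemap
# protocol (sitemaps.org) allows up to 50,000 URLs per file; a site
# larger than that uses several files plus a sitemap index.
from xml.sax.saxutils import escape


def chunk_urls(urls, size=50_000):
    """Yield successive chunks of at most `size` URLs."""
    for i in range(0, len(urls), size):
        yield urls[i:i + size]


def build_sitemap(urls):
    """Render one <urlset> sitemap file for a chunk of URLs."""
    entries = "\n".join(f"  <url><loc>{escape(u)}</loc></url>" for u in urls)
    return ('<?xml version="1.0" encoding="UTF-8"?>\n'
            '<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">\n'
            f"{entries}\n</urlset>")


# Placeholder data: 120,000 URLs fit into three protocol-sized files.
all_urls = [f"https://www.example.com/page-{n}" for n in range(120_000)]
sitemaps = [build_sitemap(chunk) for chunk in chunk_urls(all_urls)]
print(len(sitemaps))  # → 3
```

The low indexation rate (104 of 4,500) is a separate issue from file size; submitting more URLs per file would not by itself change it.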
Site not getting indexed by Googlebot
The following question is in regards to http://footeschool.org/. This site is not getting indexed by Google (Googlebot). This only happens when the user agent is set to Googlebot. This is a recent issue. We are using DNN as our CMS. Are there any suggestions to help resolve this issue?
Technical SEO | bcmull
-
Is there a limit to internal redirects?
I know Google says there is no limit, but I have seen on many websites that too many 301 redirects can be a problem and might negatively affect your rankings in SERPs. I wanted to know, especially from people who have worked on large ecommerce sites: how do they manage internal redirects from one URL to another, and how many, in your view, are too many? I mean, if you get a website that contains 300-plus 301 redirections within the website, how would you deal with that? Please let me know if the question is not clear.
Technical SEO | MoosaHemani
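On the question above: the count of redirects matters less than whether they chain. Google follows 301 chains, but each extra hop adds latency and crawl overhead, so the usual practice on large ecommerce sites is to keep a single redirect map and flatten every chain so each old URL points straight at its final destination. A minimal sketch of detecting and flattening chains - the URLs and the ten-hop cutoff are made up for illustration:

```python
# Flatten 301 redirect chains: rewrite every source URL to point
# directly at its final target, and fail loudly on redirect loops.

redirects = {
    "/old-shoes": "/shoes",
    "/shoes": "/footwear",   # chain: /old-shoes -> /shoes -> /footwear
    "/sale-2019": "/sale",
}


def final_target(start, redirects, max_hops=10):
    """Follow a redirect chain to its end; return (target, hop count)."""
    seen, url, hops = {start}, start, 0
    while url in redirects:
        url = redirects[url]
        hops += 1
        if url in seen or hops > max_hops:
            raise ValueError(f"redirect loop starting at {start}")
        seen.add(url)
    return url, hops


# Flattened map: every source now reaches its target in one hop.
flattened = {src: final_target(src, redirects)[0] for src in redirects}
print(flattened["/old-shoes"])  # → /footwear
```

With 300-plus redirects, a periodic pass like this over the map (or over the .htaccess/redirect config that generates it) keeps chains from accumulating as URLs get moved more than once.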
GWT false reporting, or does Googlebot have weird crawling ability?
Hi, I hope someone can help me. I have launched a new website and am trying hard to make everything perfect. I have been using Google Webmaster Tools (GWT) to ensure everything is as it should be, but the crawl errors being reported do not match my site. I mark them as fixed and then check again the next day, and it reports the same or similar errors. Example: http://www.mydomain.com/category/article/ (this would be a correct structure for the site). GWT reports: http://www.mydomain.com/category/article/category/article/ 404 (it does not exist, never has and never will). I have been to the pages listed as linking to this page and they do not contain links in this manner. I have checked the page source code and all links from the given pages have the correct structure; it is impossible to replicate this type of crawl. This happens across most of the site. I have a few hundred pages, all ending in a trailing slash, and most pages of the site are reported in this manner, making it look like I have close to 1,000 404 errors when I am not able to replicate this crawl using many different methods. The site is using an .htaccess file with redirects and a rewrite condition.

Rewrite condition (needed to redirect when there is no trailing slash):

    RewriteCond %{REQUEST_FILENAME} !-f
    RewriteCond %{REQUEST_FILENAME} !\.(html|shtml)$
    RewriteCond %{REQUEST_URI} !(.*)/$
    RewriteRule ^(.*)$ /$1/ [L,R=301]

The above condition forces the trailing slash on folders. Then we are using redirects in this manner:

    Redirect 301 /article.html http://www.domain.com/article/

In addition to the above, we had a development site whilst I was building the new site, at http://dev.slimandsave.co.uk, which had been spidered without my knowledge until it was too late. So when I put the site live I left the development domain in place (http://dev.domain.com) and redirected it like so:

    <IfModule mod_rewrite.c>
    RewriteEngine on
    RewriteRule ^ - [E=protossl]
    RewriteCond %{HTTPS} on
    RewriteRule ^ - [E=protossl:s]
    RewriteRule ^ http%{ENV:protossl}://www.domain.com%{REQUEST_URI} [L,R=301]
    </IfModule>

Is there anything that I have done that would cause this type of redirect 'loop'? Any help greatly appreciated.
Technical SEO | baldnut
-
I have a mobile version and a standard version of my website. I'd like to show users some pages on the non-mobile site but keep Googlebot Mobile out. Is that OK?
On the mobile version, not all the content of the normal site is available to users. Since we didn't want Googlebot Mobile to index the non-mobile site, all the non-existent pages were returned with a 404 error. But now we'd like to show mobile users these pages and send them to the normal site. If we allow the users to see these pages, is it OK to block Googlebot Mobile so these non-mobile pages are not indexed by it, or will that create issues for Google?
Technical SEO | bgs
-
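One hedged approach to the question above: serve the desktop pages to mobile users normally, and use a crawler-specific robots.txt group to keep only the mobile crawler out of them. At the time this question was asked, Google's mobile crawler identified itself as Googlebot-Mobile; the path below is a placeholder, not from the question:

```text
# Keep the mobile crawler out of desktop-only pages,
# while leaving them open to all other crawlers.
User-agent: Googlebot-Mobile
Disallow: /desktop-only/

User-agent: *
Allow: /
```

Worth noting as a caveat: a robots.txt disallow prevents crawling, not indexing, and serving different content to a crawler than to users in the same context can look like cloaking. Google's documented preference for paired mobile/desktop sites is annotations (rel="alternate"/rel="canonical") between the two versions rather than blocking one crawler.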
Googlebot does not obey robots.txt disallow
Hi Mozzers! We are trying to get Googlebot to steer away from our internal search results pages by adding a parameter "nocrawl=1" to facet/filter links and then disallowing in robots.txt all URLs containing that parameter. We implemented this in late August, and since then the GWMT message "Googlebot found an extremely high number of URLs on your site" stopped coming. But today we received yet another. The weird thing is that many of the URLs Google gives as examples of URLs that may cause us problems are now disallowed in robots.txt. What could be the reason? Best regards, Martin
Technical SEO | TalkInThePark
-
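For reference on the question above: Googlebot does support `*` and `$` wildcards in robots.txt, so a parameter can be disallowed wherever it appears in the URL. A sketch of the rule (the parameter name comes from the question; the exact rule in use is not shown there):

```text
User-agent: *
Disallow: /*nocrawl=1
```

A plausible explanation for the continuing message, offered as an assumption rather than a diagnosis: robots.txt stops fetching, not discovery. Google can still find the disallowed URLs through internal links and count them, which would be consistent with disallowed URLs appearing as examples in the "extremely high number of URLs" warning even though they are never crawled.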
Googlebot cannot access your site
At the end of July I received a message in my Google Webmaster Tools saying "Googlebot can't access your site". We checked our robots.txt file and removed a line break in it, and then I had Google fetch the file again. I have not received any more messages since then. When we created the website I wrote all of the content and optimized each page for about one local keyword. A few weeks later I checked my keywords and did have a few on the first page of Google. Since then almost all of them have completely disappeared. Because we had no link building effort I would not expect to still be on the first page, but I should definitely be seeing them before the 5th or even 10th page of Google. The address is http://www.tile-pompanobeach.com. I'm not sure if these horrible results have something to do with the message from Google or something else. The problem is this client now wants to sign a contract with us for SEO, and I really have no idea what happened and whether I will be able to figure it out. The main keyword for my home page is "tile pompano beach", and I was also using "Pompano Beach tile store" for the About page, which was previously on the first page of Google. Does anyone have some input?
Technical SEO | DTOSI
-
Images on page appear as 404s to Googlebot
When I fetch my website as Googlebot, it returns 404s for all the images on the page. This is despite the fact that each image is hyperlinked! What could be causing this issue? Thanks!
Technical SEO | Netpace