Removing CSS & JS Files from Index
-
Hi,
Google has indexed a few .CSS and .JS files that belong to our WordPress plugins and themes. I had them blocked via robots.txt, but realized this doesn't prevent indexation (and can likely hurt us, since Google wants to access these files).
I've since removed the robots.txt instructions and submitted a removal request via Search Console, but I want to make sure they don't come back.
Is there a way to put a noindex tag within .CSS and .JS files? Or should I do something with .htaccess instead?
-
I figured .htaccess would be the best route. Thank you for researching and confirming. I appreciate it.
-
Hi Tim,
Assigning a noindex tag to these files will not block them, only prevent them from showing in SERPs. That is the intended goal, and the reason I deleted my robots.txt file, which prevented crawling.
-
There's quite a big difference between crawling directives, which block access, and indexing directives. This article by (former?) Moz user S_ebastian_ is a good foundation read.
This article at developers.google.com is a good second read. If I'm understanding it right, Google thinks in terms of crawling directives vs indexing / serving directives.
My attempt at a TL;DR:
crawling = looking, using in any way :: controlled via robots.txt
indexing / serving = indexing, archiving, displaying snippets in results, etc. :: controlled via HTML meta tags or web server .htaccess (or similar for other web servers)
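If it helps, the crawling side of that split can be sketched with Python's standard-library robots.txt parser. This is just an illustration; the blocked path and the URLs are hypothetical:

```python
import urllib.robotparser

# Hypothetical robots.txt that blocks crawling of a plugin directory
rp = urllib.robotparser.RobotFileParser()
rp.parse([
    "User-agent: *",
    "Disallow: /wp-includes/",
])

# Crawling directive: the disallowed path cannot be fetched at all...
print(rp.can_fetch("*", "https://example.com/wp-includes/script.js"))  # False
# ...while everything else remains crawlable
print(rp.can_fetch("*", "https://example.com/about/"))  # True
```

Note that nothing here says anything about indexing: a disallowed URL can still end up in the index if Google finds links to it, which is exactly the distinction above.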
I'm not convinced yet that asking for noindex via .htaccess causes the same sort of grief that Disallow in robots.txt causes.
-
I would seriously think again when it comes to blocking/no-indexing your CSS and JS files - Google has in the past stated that if they cannot fully render your site properly then this could lead to poorer rankings.
You will also likely get error notifications in Search Console for this.
Check out this great article from July this year, which goes into more detail.
-
I haven't encountered undesirable .css or .js indexing myself (yet), but as you surmised, this .htaccess directive might be worth trying:
<FilesMatch "\.(txt|log|xml|css|js)$">
Header set X-Robots-Tag "noindex"
</FilesMatch>
Google seems to support it.
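As a quick sanity check of what that kind of rule produces, here's a minimal Python sketch that mimics the header behavior on a throwaway local server; the file extensions mirror the pattern above, and everything else (paths, port) is made up:

```python
import http.server
import threading
import urllib.request

class Handler(http.server.BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(200)
        # Mimic the .htaccess rule: noindex header only on matching static files
        if self.path.endswith((".txt", ".log", ".xml", ".css", ".js")):
            self.send_header("X-Robots-Tag", "noindex")
        self.send_header("Content-Type", "text/plain")
        self.end_headers()
        self.wfile.write(b"ok")

    def log_message(self, *args):
        pass  # silence per-request logging

server = http.server.HTTPServer(("127.0.0.1", 0), Handler)
threading.Thread(target=server.serve_forever, daemon=True).start()
port = server.server_address[1]

resp_css = urllib.request.urlopen(f"http://127.0.0.1:{port}/style.css")
print(resp_css.headers.get("X-Robots-Tag"))   # noindex
resp_html = urllib.request.urlopen(f"http://127.0.0.1:{port}/page.html")
print(resp_html.headers.get("X-Robots-Tag"))  # None
server.shutdown()
```

On a real site you'd check the same thing with a HEAD request against one of the indexed .css or .js URLs and look for the `X-Robots-Tag: noindex` response header.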
-
Unless I'm severely misreading the links provided, which I've read before, it seems Google is stating that they read, render, and sometimes index .CSS and .JS files. Here's an article written a week after the second article you posted.
The aforementioned WordPress plugin and theme files hosted on my server are indeed showing up in Google SERPs.
I do not want to prevent Googlebot from reaching these files as they're needed for optimal site performance, but I do want them to be no-indexed. Thus, I don't want robots.txt to prevent crawling, only indexing.
Let me know if I'm misunderstanding.
-
TL;DR - You're worried about a problem that doesn't exist.
Googlebot doesn't index CSS or JS files. It indexes text files, HTML, PDF, DOC, XLS, etc., but not style sheets or JavaScript files.
All you need in WordPress is to create a robots.txt file in the directory where WP is installed, with this content:
User-agent: *
Disallow:
Sitemap: http://site/sitemap-file-name.xml
And that's all. This has been explained many times:
http://googlewebmastercentral.blogspot.bg/2014/05/understanding-web-pages-better.html
http://googlewebmastercentral.blogspot.bg/2014/10/updating-our-technical-webmaster.html
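For what it's worth, you can confirm that an empty Disallow blocks nothing using Python's standard-library parser (example.com and the theme path stand in for your site):

```python
import urllib.robotparser

# The suggested allow-everything robots.txt
rp = urllib.robotparser.RobotFileParser()
rp.parse([
    "User-agent: *",
    "Disallow:",
])

# An empty Disallow value blocks nothing, so CSS/JS stay fully crawlable
print(rp.can_fetch("Googlebot", "https://example.com/wp-content/themes/style.css"))  # True
```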