How to Hide Directories in Search?
-
I noticed bad 404 error links in Google Webmaster Tools, and they were pointing to directories that don't have an actual page but do hold files.
For example, there are links pointing to our PDF folder, which holds all of our PDF documents. If I type in example.com/pdf/, it brings up an unformatted page that lists all of our PDF links.
How do I prevent this from happening? Right now I am blocking these directories in my robots.txt file, but if I type them in, they still appear.
Or should I not worry about this?
-
Yes, a visit to example.com/dir should now return a 404 error (if you haven't done any redirecting/canonicalizing). This will increase your 404 count in Webmaster Tools, but it's far preferable to the alternative. If you're not redirecting, the robots.txt block will eventually take effect, and hopefully the links will just fall out of WMT.
-
My hosting company turned off directory browsing, and now everything is how it should be. So, to my understanding, if the server gets a request for a directory that doesn't have an index file, it should not be viewable and should come back as forbidden. This shouldn't affect us from an SEO standpoint, should it? My hosting company said they disabled browsing for all directories on our site, but everything still works, except for the now-forbidden file directories.
-
Basically, it shouldn't really have an effect; those unformatted file listings are literally the web server automatically saying "here are the files in this folder". There are no meta tags, no description, no on-page elements, etc.
If you have these pages and they're ranking well, you generally don't want them to be. The automatic file-browsing pages don't have your name, your company, etc. in them, and they're generally pretty ugly. They could also, in theory, be 'stealing' juice from your 'real' pages if your internal structure isn't flowing relevance properly.
Basically, what I'm saying is that if these pages are having some kind of SEO effect, you probably don't want them to, since they're so basic.
Also, I can't overstate the security concerns that directory browsing might be introducing. If someone can browse to where your code lives (.php, .aspx.vb, whatever), they may be able to read it. Code sometimes has important things like logins, passwords, merchant account IDs, etc. in it that you definitely don't want people reading.
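If a folder should never be served over the web at all (code, includes, and so on), you can usually lock it down at the server level. A minimal sketch for Apache, assuming an .htaccess file dropped inside the folder you want to protect and a host that allows these overrides:

    # Deny all direct HTTP access to this folder (Apache 2.4 syntax)
    Require all denied

    # On older Apache 2.2 servers the equivalent would be:
    # Order deny,allow
    # Deny from all

Your application can still read those files from disk (PHP includes keep working); this only stops visitors from requesting them over HTTP.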
-
Agreed with Valerie that step 1 is to turn off those directory listing pages - that can be a security issue and you don't necessarily want people to see/access the whole list. Also, make doubly sure you don't have any internal links to that directory (Google crawled it somehow).
Generally, robots.txt should prevent crawling, but it's not foolproof, and it's pretty bad at removing pages once they're indexed. If you can block the page from browsing and return a 404 for the root page, that should be fine. The other option would be to have the page removed in Google Webmaster Tools. You could request removal for the entire folder, but I'm guessing that you may want the actual PDFs indexed.
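For reference, the robots.txt rule for this is usually a one-liner; a minimal sketch, assuming the folder is /pdf/ as in your example:

    User-agent: *
    # Blocks crawling of the folder listing AND every file under it
    Disallow: /pdf/

    # Googlebot also understands a $ end-anchor, which would block only
    # the bare folder URL and leave the PDFs themselves crawlable:
    # Disallow: /pdf/$

Keep in mind the plain Disallow also tells crawlers to skip the PDFs inside the folder, and the $ anchor is a Google extension rather than part of the original robots.txt standard.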
-
Will turning off directory browsing affect search for all directories?
-
I really don't want to 301 redirect them, as they are just holding files. This is happening with my includes folder too, which holds our header, footer, navigation, etc. I can check with our hosting company to find out.
-
I'd create an index.html for the directory, and then redirect it somewhere. This way, you're capturing the inbound links and rescuing some of the inbound juice.
Otherwise, you can also check out this post for more info on other solutions and modifying your htaccess file to prevent the directory view - http://perishablepress.com/better-default-directory-views-with-htaccess/
-
Blocking it in robots.txt will work to hide it from search engines.
If you want to hide it from users who type in the URL directly, you can simply drop a blank index.html into the /pdf folder.
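The placeholder doesn't need anything fancy; a minimal sketch (the title and homepage link here are just illustrative, not required):

    <!DOCTYPE html>
    <html>
    <head><title>PDF Library</title></head>
    <body>
      <!-- Placeholder so the server serves this instead of a file listing -->
      <p><a href="/">Back to the homepage</a></p>
    </body>
    </html>

Once this file exists, the server returns it for example.com/pdf/ instead of generating the directory listing.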
-
I would suggest 301'ing them to their /index.htm or /pdf.htm equivalents. If you're not familiar, a 301 is a signal to a web browser (or search crawler) saying "this page has permanently moved, please go to (otherpage.htm) instead".
Here's a good SEOMoz article explaining it a bit more:
http://www.seomoz.org/learn-seo/redirection
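In Apache, a redirect like that is typically one line in an .htaccess file. A minimal sketch, assuming /pdf.htm is the page you'd keep; note that a plain "Redirect 301 /pdf/" would prefix-match and redirect every file inside the folder too, so an anchored RedirectMatch is safer if you want the PDF links themselves left alone:

    # Redirect only the bare directory URL; /pdf/somefile.pdf is untouched
    RedirectMatch 301 ^/pdf/$ /pdf.htm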
What might be more of a concern is that it sounds like your web server has directory browsing enabled. This could be a security issue (depending on your web server setup). Generally, you don't want to expose directories if you don't have to, because it gives a potential attacker insight into your system setup. Here's an example of how to do it in Apache:
www.camelrichard.org/topics/Apache/Turn_OffDirectoryBrowsing
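In most Apache setups, that boils down to a single directive; a minimal sketch for an .htaccess file at the site root (assuming your host allows .htaccess overrides):

    # Stop Apache from auto-generating file listings for folders
    # that lack an index document; they'll return 403 Forbidden instead
    Options -Indexes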
And IIS:
technet.microsoft.com/en-us/library/cc731109(v=ws.10).aspx
If you like, I can confirm whether you have open directories if you give me the link, either here or through a private message.