Adding academic content for a school in a subfolder, subdomain, or different site?
-
I manage the website for a school and we are planning to put our academic policies/student handbook online. I'm curious whether there is any SEO value in including this content on our main website.
This isn't stuff that anyone is going to link to externally (student orientation procedures, how to enroll/drop credits, academic warning policies, etc.), and there would be limited internal linking as well (someone looking for course information doesn't want to see this type of stuff).
I’m not interested in SERPs for this content, but I’m wondering if the additional content could help the site’s SEO overall? It is naturally rich with ‘academic’ keywords and the only websites that use this type of content are universities.
On a similar note, I need to put up student profiles for potential employers to view. Like the policies, this is not priority content for someone visiting our website, but it is still keyword-rich content, which would add to the overall 'size' of the site.
Should this stuff go in a folder, a subdomain, or in a different location altogether?
-
The more content that ends up ranking for its own topical focus, the better the whole site does - it all gets a lift. The critical factor, though, is topical relationships. If the content across a site becomes too diverse, it can weaken the site's topical consistency. The exception is a site intended to be a "general information" resource covering a vast range of topics. However, even in that case, when a site gets too diverse it becomes increasingly difficult to get individual topics to rank, because only so much time and energy can go into supporting any one topic.
-
Thanks - great answers.
Beyond the technical considerations, is there an SEO benefit to having a larger site with more content, especially if the content is keyword-rich?
Again, I'm not looking for this additional content to rank in search results (students who want it can navigate from the home page themselves) but does the additional content help the overall domain credibility/authority?
-
If the information contained in the academic policies does not need to be kept confidential, and if that information is valuable to students (or even to prospective students, or to parents of existing or potential students), then for those reasons it would be legitimate to make it accessible from the main site without visitors having to go hunt for it.
Given those reasons, I believe it would be perfectly valid to have it be crawlable and indexable by search engines.
I would also group it all together in a dedicated location (such as a sub-folder, hierarchically) with its own sectional sub-navigation, because it's no different from any other quality content - grouping topically similar content is proper for user experience.
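For example, a grouping like that might look something like this (the domain and paths here are purely hypothetical placeholders):

www.example-school.edu/policies/
www.example-school.edu/policies/enrollment/
www.example-school.edu/policies/academic-warning/
www.example-school.edu/policies/student-handbook/

Each page then sits in one topically focused section, with the sub-navigation tying the section together.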
As for the student profiles, that's a completely different issue. The reality is that most student profiles are likely to have very little depth of unique content. I assume students will fill out the content themselves, and that leaves the door open for all sorts of good, bad and ugly.
Further, even if there is some reason for students, faculty or other staff to need access without signing in to a secure area, that is not a reason to have the profiles findable in search engines.
There are also privacy concerns (in which case a secure area is, in fact, the best option).
Since the profiles will most likely be "thin" content, or even low-quality or perceived duplicate content, if they're not hidden behind a login they really should be blocked from search engines via the robots.txt file or a noindex,nofollow meta tag (there's no valid reason to use noindex,follow here).
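A minimal sketch of both options, assuming a hypothetical /student-profiles/ folder (whatever path the profiles actually end up living under). In the site's robots.txt:

# Hypothetical path - block crawling of the student profiles section
User-agent: *
Disallow: /student-profiles/

Or alternatively, in the <head> of each profile page:

<!-- Keep this page out of the index and don't follow its links -->
<meta name="robots" content="noindex,nofollow">

Note that you'd pick one or the other: if robots.txt blocks crawling, search engines never fetch the page and so never see the meta tag.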
Having said all that, I would suggest the profiles could just as well go in a sub-folder of the main site or on a separate sub-domain. Since they will be blocked from indexing/crawling, either would work.
One final reason they shouldn't be indexed or followed: as students come and go, you won't need to worry about a 301 redirect system to deal with removed profiles.
-
Related Questions
-
Images on subdomain fed from CDN
I have a client that uses a CDN to serve images from a subdomain (images.domain.com). We've made sure that the subdomain itself is not blocked. We've added a robots.txt file, we're creating an image sitemap file, and we've verified ownership of the domain within GWT. Yet any crawler that I use only sees the first page of the subdomain (which is .html) but none of the subsequent URLs, which are all .jpeg files. Is there something simple I'm missing here?
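For reference, a minimal sketch of the kind of image sitemap entry we're creating - the page and image file names are placeholders:

<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9"
        xmlns:image="http://www.google.com/schemas/sitemap-image/1.1">
  <url>
    <!-- The page the image appears on (main domain) -->
    <loc>http://www.domain.com/example-page.html</loc>
    <image:image>
      <!-- The image itself, served from the CDN subdomain -->
      <image:loc>http://images.domain.com/example-photo.jpeg</image:loc>
    </image:image>
  </url>
</urlset>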
Technical SEO | TammyWood
-
Do mobile and desktop sites that pull content from the same source count as duplicate content?
We are about to launch a mobile site that pulls content from the same CMS, including metadata. The two sites have different hostnames, however (www.abcd.com and www.m.abcd.com). How will this affect us in terms of search engine ranking?
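For context, a commonly recommended way to connect two such versions (sketched here with placeholder page URLs) is bidirectional annotation - the desktop page points to its mobile alternate, and the mobile page canonicals back to the desktop page:

<!-- On http://www.abcd.com/page.html (desktop version): -->
<link rel="alternate" media="only screen and (max-width: 640px)" href="http://www.m.abcd.com/page.html">

<!-- On http://www.m.abcd.com/page.html (mobile version): -->
<link rel="canonical" href="http://www.abcd.com/page.html">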
Technical SEO | ovenbird
-
Why is Google's cache preview showing a different version of the webpage (i.e. not displaying content)?
My URL is: http://www.fslocal.com. Recently, we discovered Google's cached snapshots of our business listings look different from what's displayed to users. The main issue? Our content isn't displayed in cached results (while the content isn't visible on the front-end of cached pages, the text can be found when you view the page source of that cached result). These listings are structured so everything is coded and contained within one page (e.g. http://www.fslocal.com/toronto/auto-vault-canada/). But even though the URL stays the same, we've created separate "pages" of content (e.g. "About," "Additional Info," "Contact," etc.) for each listing, and only one "page" of content will ever be displayed to the user at a time. This is controlled by JavaScript and display:none in CSS. Why do our cached results look different? Why would our content not show up in Google's cache preview, even though the text can be found in the page source? Does it have to do with the way we're using display:none? Are there negative SEO effects with regard to how we're using it (i.e. we're employing it strictly for aesthetics, but is it possible Google thinks we're trying to hide text)? Google's Technical Guidelines recommend against using "fancy features such as JavaScript, cookies, session IDs, frames, DHTML, or Flash." If we were to separate those business listing "pages" into actual separate URLs (e.g. http://www.fslocal.com/toronto/auto-vault-canada/contact/ would be the "Contact" page) and employ static HTML instead of complicated JavaScript, would that solve the problem? Any insight would be greatly appreciated. Thanks!
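To illustrate, a simplified sketch of the kind of toggling described (the class, id and function names here are made up for illustration, not our actual code):

<style>
  .listing-tab { display: none; }         /* every "page" hidden by default */
  .listing-tab.active { display: block; } /* only the active one is visible */
</style>
<div id="about" class="listing-tab active">About content...</div>
<div id="contact" class="listing-tab">Contact content...</div>
<script>
  // Swap which "page" is visible without changing the URL
  function showTab(id) {
    document.querySelectorAll('.listing-tab').forEach(function (el) {
      el.classList.toggle('active', el.id === id);
    });
  }
</script>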
Technical SEO | fslocal
-
Redirecting Entire Microsite Content to Main Site Internal Pages?
I am currently working on improving site authority for a client site. The main site has significant authority, but I have learned that the company owns several other resource-focused microsites which are stagnant, but which have accrued significant page authority of their own (though still less than the main site). Realizing the fault in housing good content on a microsite rather than the main site, my thought is that I can redirect the content of the microsites to internal pages on the main site as a "Resources" section. I am wondering a) if this is a good idea and b) the best way to transfer site authority from these microsites. I am also wondering how to organize the content and if, for example, an entire microsite domain (e.g. microsite.com) should in fact be redirected to internal resource pages (e.g. mainsite.com/resources). Any input would be greatly appreciated!
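For example, I'm picturing something like this in each microsite's .htaccess (the domain names and paths are placeholders):

Options +FollowSymLinks
RewriteEngine on
# Map every microsite URL to the matching page in the main site's resources section
RewriteRule ^(.*)$ http://www.mainsite.com/resources/$1 [R=301,L]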
Technical SEO | RightlookCreative
-
Geotargeting duplicate content to different regions - hreflang and canonical tag confusion
Say you duplicate content into a sub-folder for a new US-geotargeted site (to target keyword spelling differences) and, in addition to GWT geotargeting settings, implement canonical and hreflang tags on these new pages to show Google the different region and language version (en-us). Do the similar pages on the original/main site then also need canonical and hreflang tags? I don't really want to target a specific country with the main/original site's pages (existing signals such as hosting will be UK, the main site's primary target, but the pages show up in other countries' searches too, which we want). I'm presuming it's fine to leave the original/main site as it currently is, although the wording in Google blog/Webmaster Central articles etc. is a bit confusing, hence why I'm asking for anyone else's opinion/input on this. Also, is there any benefit (or is it just best practice) to using 'www.example.com/en-us/...' in the subdirectory URL as opposed to just 'www.example.com/us/'? Many thanks in advance to any commentators 🙂
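To make it concrete, a sketch of the tags as I understand them (placeholder URLs), with the open question being whether the main site's version needs the equivalent markup pointing back:

<!-- On http://www.example.com/en-us/page/ (new US version): -->
<link rel="canonical" href="http://www.example.com/en-us/page/" />
<link rel="alternate" hreflang="en" href="http://www.example.com/page/" />

<!-- On http://www.example.com/page/ (original/main version), presumably: -->
<link rel="canonical" href="http://www.example.com/page/" />
<link rel="alternate" hreflang="en-us" href="http://www.example.com/en-us/page/" />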
Technical SEO | Dan-Lawrence
-
Htaccess 301s to 3 different sites
Hi, I'm an htaccess newbie, and I have to redirect and split traffic to three new domains from site A. The original home page has most of the inbound links, so I've set up a 301 that goes to site B, the new corporate domain:

Options +FollowSymLinks
RewriteEngine on
RewriteRule (.*) http://www.newdomain.com/$1 [R=301,L]

Brand websites C and D need 301s for their folders in site A, but I have no idea how to write that in relationship to the first redirect, which really is about the home page, contact and only a few other pages. The URLs are duplicates except for the new domain names. They're all on Linux. Site A is about 150 pages; should I write it by page, or can I do some kind of catch-all (the first 301) plus the two folders? I'd really appreciate any insight you have, and especially if you can show me how to write it. Thanks 🙂
Technical SEO | ellenru
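For what it's worth, would something like this be the right shape? (The brand-c/brand-d folder names and target domains are just hypothetical placeholders for C and D; my understanding is that the more specific folder rules need to come before the catch-all so they match first.)

Options +FollowSymLinks
RewriteEngine on
# Hypothetical folder rules for brands C and D - these must come first
RewriteRule ^brand-c/(.*)$ http://www.brand-c.com/$1 [R=301,L]
RewriteRule ^brand-d/(.*)$ http://www.brand-d.com/$1 [R=301,L]
# Catch-all for everything else, to the new corporate domain
RewriteRule (.*) http://www.newdomain.com/$1 [R=301,L]
-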
Content loc and player loc tags for XML video sitemaps
I need a little help understanding how to create two of the required tags for an XML video sitemap for Google: 1. video:content_loc 2. video:player_loc

Google explains their video XML sitemap requirements here:
www.google.com/support/webmasters/bin/answer.py?answer=80472

Using the example on this Google Webmaster Help page (where they explain all six of the required tags), here are examples of the two tags I need help with:

<video:content_loc>www.example.com/video123.flv</video:content_loc>
<video:player_loc allow_embed="yes" autoplay="ap=1">www.example.com/videoplayer.swf?video=123</video:player_loc>

The video I am trying to optimize is located on a page on my site:
www.mountainbikingmaine.com/races/bradbury_hawk.html

This page has an embedded Vimeo video, so I don't have the video file on my domain; it is on Vimeo. Here is the source code from my page that I think provides the information needed to create the two tags:

<iframe src="http://player.vimeo.com/video/24580638?title=0&byline=0&portrait=0" width="400" height="533" frameborder="0"></iframe>
<p><a href="http://vimeo.com/24580638">Bradbury Mountain Maine Hawk Migration Count</a> from <a href="http://vimeo.com/user3219915">dan sexton</a></p>

Using this source from my site, can you suggest what to put in the two tags? Thanks! Dan
Technical SEO | dsexton10
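My best guess so far, for what it's worth - I'm not sure whether Google accepts an off-domain player URL here, and video:content_loc seems impossible in my case since the actual video file lives on Vimeo's servers rather than on my domain:

<!-- A guess, not a confirmed working example - player URL taken from the Vimeo embed above -->
<video:player_loc allow_embed="yes">http://player.vimeo.com/video/24580638</video:player_loc>
-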
Redirecting root domains to sub domains
Mozzers: We have an instance where a client is looking to 301 www.example.com to www.example.com/shop. I know of several issues with this, but wondered if anyone could chip in with any previous experiences of doing so, and what positive and negative outcomes came of it. Issues I'm aware of:
1. The root domain URL is the most-linked page, and an HTTP 301 redirect only passes about 90% of the value, so you'll lose 10-15% of the link value of these links.
2. Navigational queries (i.e. the "domain part" of "domain.tld") are less likely to produce Google sitelinks.
3. Less deep crawling: Google crawls top-down, starting with the most-linked page, which will most likely be your domain URL. As this no longer exists, you waste this zero level of crawling depth.
4. robots.txt is only allowed at the root of the domain.
A sketch of the kind of rule being considered is below. Your help as always is greatly appreciated. Sean
Technical SEO | Yozzer
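For context, a minimal .htaccess sketch of that root-only redirect (only the homepage moves to /shop; the /shop path is from the question above, the rest is generic):

Options +FollowSymLinks
RewriteEngine on
# In per-directory (.htaccess) context the root URL matches the empty pattern
RewriteRule ^$ /shop/ [R=301,L]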