Base href
-
I'm having a discussion with a third party that's building a website for a client I advise on SEO.
The site went live two weeks ago and isn't getting indexed very well, so my client asked me what the problem could be.
I checked several possible causes, such as a missing XML sitemap. But I also noticed something else in the source code:
<base href="http://www.domain.com/">
Can this be a problem for Google to follow internal links? I always thought that you should use the base href like this:
<base href="http://www.domain.com"> - that is, without the trailing slash after the TLD.
And wouldn't it be even better to use absolute links instead of relative ones?
-
I suggest looking through previous Q&A forum results for various answers that have been provided for this question. Here's the query to find them: https://www.google.com/search?q=seomoz+base+href.
Basically, you are right: you should use absolute links instead of relative links. You only need a base href if you are using relative URLs on the page, and the base href is needed for them to resolve correctly.
The base href you are seeing in the code shouldn't be causing any issues for Google when following internal links. The issues, if any, would be caused by the relative links themselves if they are not set up correctly (this should be easily tested).
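For what it's worth, the trailing-slash worry is easy to test: browsers resolve relative URLs against the base per RFC 3986, and Python's `urllib.parse.urljoin` follows the same rules. A quick sketch (domain.com stands in for the real site):

```python
from urllib.parse import urljoin

# With a trailing slash, a relative link resolves under the root as expected.
assert urljoin("http://www.domain.com/", "page.html") == "http://www.domain.com/page.html"

# Without the trailing slash, the bare host normalises the same way,
# so both forms of the base href resolve root-level links identically.
assert urljoin("http://www.domain.com", "page.html") == "http://www.domain.com/page.html"

# The slash only matters once the base includes a path segment:
assert urljoin("http://www.domain.com/blog/", "page.html") == "http://www.domain.com/blog/page.html"
assert urljoin("http://www.domain.com/blog", "page.html") == "http://www.domain.com/page.html"
```

So for a base href that is just the bare domain, the trailing slash makes no difference to how links resolve.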
Related Questions
-
Sitemap use for very large forum-based community site
I work on a very large site with two main types of content: static landing pages for products, and user-created forums & blogs under each product. The site has maybe 500k to 1 million pages. We do not have a sitemap at this time.
Technical SEO | CommManager
Currently our SEO discoverability in general is good, Google is indexing new forum threads within 1-5 days roughly. Some of the "static" landing pages for our smaller, less visited products however do not have great SEO.
Question is: could our SEO be improved by creating a sitemap, and if so, how should it be implemented? I see a few ways to go about it:
1. The sitemap includes only the "static" product category landing pages, i.e. the product home pages, the forum landing pages, and the blog list pages. This would probably end up being 100-200 URLs.
2. The sitemap contains the above but is also dynamically updated with new threads & blog posts.
Option 2 seems like it would mean the sitemap is unmanageably long (hundreds of thousands of forum URLs). Would a crawler even parse something that size? And with Option 1, could it cause our organically ranked pages to change ranking because Google re-prioritizes the pages within the sitemap?
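If it helps, the sitemaps protocol caps a single file at 50,000 URLs, so a site of this size is usually handled with a sitemap index that points at many smaller child sitemaps rather than one enormous file. A minimal sketch using Python's standard library (the example.com URLs and the way the files are split are hypothetical):

```python
import xml.etree.ElementTree as ET

NS = "http://www.sitemaps.org/schemas/sitemap/0.9"

def build_sitemap_index(sitemap_urls):
    """Build a <sitemapindex> document pointing at per-section child sitemaps."""
    root = ET.Element("sitemapindex", xmlns=NS)
    for url in sitemap_urls:
        sm = ET.SubElement(root, "sitemap")
        ET.SubElement(sm, "loc").text = url
    return ET.tostring(root, encoding="unicode")

# Hypothetical split: one file for static landing pages,
# plus chunked files for forum threads.
index = build_sitemap_index([
    "https://example.com/sitemap-landing.xml",
    "https://example.com/sitemap-threads-1.xml",
    "https://example.com/sitemap-threads-2.xml",
])
```

Each child file can then hold a manageable slice of thread URLs and be regenerated on its own schedule, so the "unmanageably long" problem largely goes away.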
Not a lot of information out there on this topic; appreciate any input. Thanks in advance.
-
Nofollow versus data-href
We have a couple of Tier-1 websites that contain a lot of affiliate links. These outgoing affiliate links currently carry the rel="nofollow" attribute. Yet I am seeing a lot of other websites and competitors use data-href="" instead of nofollow. Is the latter better for SEO purposes, or are they just using data-href for better tracking?
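One way to see the difference: an anchor that only has data-href carries no href at all, so anything that extracts links from the raw HTML finds nothing to follow, whereas rel="nofollow" leaves a real, discoverable URL in place. A small sketch with Python's html.parser (the markup and the affiliate URL are illustrative):

```python
from html.parser import HTMLParser

class LinkExtractor(HTMLParser):
    """Collect the href of every <a> tag, the way a link crawler would."""
    def __init__(self):
        super().__init__()
        self.hrefs = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            attrs = dict(attrs)
            if "href" in attrs:
                self.hrefs.append(attrs["href"])

html = """
<a href="https://affiliate.example/offer" rel="nofollow">Nofollowed link</a>
<a data-href="https://affiliate.example/offer">JS-only link</a>
"""
p = LinkExtractor()
p.feed(html)
# Only the nofollow anchor exposes a crawlable URL; the data-href one does not.
```

Note that a data-href element still needs JavaScript to behave like a link at all, so sites doing this are effectively hiding the link from HTML parsers, not just tracking clicks.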
Technical SEO | LoyensT
-
GWT giving me 404 errors based on old and deleted site map
I'm getting a bunch of 404 crawl errors in my Google Webmaster Tools because we just moved our site to a new platform with a new URL structure. We 301-redirected all the relevant pages, submitted a new sitemap, and then deleted all the sitemaps for the old URL structure. However, Google keeps crawling the OLD URLs and reporting back the 404 errors. It says the website is linking to these 404 pages via an old, outdated sitemap (which, if you go to it, shows a 404 as well, so it's not as if Google is reading these old sitemaps now). Instead, it's as if Google has cached the old sitemap and continues to use it to crawl these non-existent pages. Any thoughts?
Technical SEO | Santaur
-
Wiki/Knowledge bases
Hi. A client of mine is creating a knowledge base/wiki for their website. They're using their supplier's own knowledge base (basically they're a reseller). What would be best practice with regards to duplicate content? Would it be best to make all the pages "nofollow" and block the pages via robots.txt?
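For reference, a common pattern for reseller-supplied duplicate pages is a robots meta tag rather than a robots.txt block: noindex keeps the duplicated page out of the index, whereas a robots.txt Disallow would stop Google from crawling the page and therefore from ever seeing that tag. An illustrative fragment (the page placement is hypothetical):

```html
<!-- In the <head> of each duplicated knowledge-base page -->
<meta name="robots" content="noindex, follow">
```

The "follow" keeps internal links on those pages crawlable even though the pages themselves stay out of the index.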
Technical SEO | Cocoonfxmedia
-
How to handle city-based product selection and duplicate content?
Hi everyone, I've been searching the interwebs for a solution to my problem but haven't really found anything conclusive. I've got a client with duplicate content issues; they have not only a nation-wide website but also 10 different sub-categories for different cities, with each subcategory having the same content as the main website. The reason they wanted city-based sites was the changing product offering in each city: City 1 may not have all the products available that City 2 does. Needless to say, this has caused duplicate content issues, as most sections of the website have been multiplied by 10. When a visitor lands on any page of the website, they are greeted by a pop-up asking for their location, which then redirects them to their selected version of the site. As the copy cannot really be changed enough per city to make it unique, I've been looking into canonical tags, but this would mean the localised versions will not be indexed by Google. Has anyone had experience of a similar situation where the product range changes according to location, but it doesn't hurt SEO? Thanks in advance for any advice!
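For context, the usual canonical setup for near-identical city copies points each copy at the national version, and that does consolidate indexing on the canonical target, which matches the concern above about the localised versions dropping out of the index. An illustrative fragment (the example.com URLs are hypothetical):

```html
<!-- In the <head> of http://example.com/city-1/widgets -->
<link rel="canonical" href="http://example.com/widgets">
```

If the localised pages need to rank in their own right, the alternative is differentiating them enough (local copy, local product lists) that the canonical is not needed.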
Technical SEO | Nimbus3000
-
Value of key word based URL
I was researching some keywords and found something that kind of confuses me. If you search Google for "Denver IT Consulting", the second hit is for the site denveritconsulting.com. This is a one-page site with just a paragraph of text and links to the actual company's site, but it's getting second place for a pretty good keyword. I also checked Open Site Explorer, and they have no links at all. I am assuming their placement is based solely on the exact match of the keywords in the URL? Does anyone have any feedback on this? I have purchased and used keyword-based URLs in the past, but I have never done or seen something like this that is so successful. Any input on this would be great. Thanks!
Technical SEO | ZiaTG
-
What is campaign based rank tracking tool? How to use it?
I'm having difficulties with the SEOmoz Rank Tracker tool; it hasn't worked properly during the last month. It was suggested that I use the "campaign-based rank tracking" tool - I would like to know if anyone has already used it. Thanks, Sema
Technical SEO | WTGEvents
-
Do links count if they have no href parameter?
An SEOmoz report indicates that we have a large number of links on our pages, mainly due to an embedded mega drop-down menu and lots of product display options that are user-activated but otherwise hidden. Most of the links have the parameter href="#", because the links are used in combination with jQuery to trigger actions. It is still possible to trigger the actions without the href parameter, so the question is: do these links count towards the total number of links on the page, given that href="#" is really just a link to the page itself? Our site (this version of the site has not had empty tags removed): http://emilea.be/
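The distinction can be checked mechanically: href="#" is still an href (a fragment pointing at the current page), so link-counting tools see it as a link, while an anchor with no href attribute at all is not a hyperlink. A sketch with Python's html.parser (the markup is illustrative):

```python
from html.parser import HTMLParser

class LinkCounter(HTMLParser):
    """Count anchors that carry an href (hyperlinks) vs. anchors without one."""
    def __init__(self):
        super().__init__()
        self.with_href = 0
        self.without_href = 0

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            if dict(attrs).get("href") is not None:
                self.with_href += 1
            else:
                self.without_href += 1

html = """
<a href="/products">Real link</a>
<a href="#">jQuery trigger with placeholder href</a>
<a class="toggle">jQuery trigger with no href at all</a>
"""
c = LinkCounter()
c.feed(html)
# href="#" still counts as a link; dropping the attribute entirely removes
# the anchor from the link count.
```

So if the jQuery triggers work without any href, removing the attribute (rather than using "#") is what would actually reduce the reported link count.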
Technical SEO | Webxtrakt