I just found something weird I can't explain, so maybe you guys can help me out.
In Google (http://www.google.nl/#hl=nl&q=internet), the number 3 result is a big telecom provider in the Netherlands called Ziggo. The ranking URL is https://www.ziggo.nl/producten/internet/. However, if you click on it you'll be directed to https://www.ziggo.nl/#producten/internet/
HttpFox in Firefox, however, isn't showing any redirect, just a 200 status code.
The URL https://www.ziggo.nl/#producten/internet/ contains a hash, so the canonical URL should be https://www.ziggo.nl/. I can understand that. But why is Google showing the title and description of https://www.ziggo.nl/producten/internet/ when the canonical URL is clearly https://www.ziggo.nl/?
Can anyone confirm my guess that Google is using the bulk of the SEO value (link juice/authority) of the homepage at https://www.ziggo.nl/ because of the hash, while using the relevant content of https://www.ziggo.nl/producten/internet/, resulting in a top position for the keyword "internet"?
-
The site you've pointed to uses AJAX to load its content. When the page loads, a JavaScript snippet takes over and adds the # to the URL (which is why you're not seeing it as an HTTP redirect). If you click on any other link you'll see that the base URL stays the same, with some extra parameters on the end.
There are potential crawling issues with this and a number of fixes (some Google documentation here, although this isn't the fix that the site in question is using: http://code.google.com/intl/en-US/web/ajaxcrawling/).
So, in short, there's nothing fishy going on: it's just good old AJAX content loading.
- Matt
-
This is actually a fairly crude attempt at loading AJAX content. I say 'crude' because it's not quite using Google's documented AJAX crawling scheme with the hashbang (#!). There was a SEOmoz post about Google's protocol a while back that had some good examples:
http://www.seomoz.org/blog/how-to-allow-google-to-crawl-ajax-content
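For reference, the scheme described in that post maps a hashbang URL to an `_escaped_fragment_` URL that Googlebot can fetch. A rough illustration (the helper name below is my own, not part of any library):

```javascript
// Map a #! URL to the _escaped_fragment_ URL Googlebot would request
// under Google's AJAX crawling scheme.
function toEscapedFragmentUrl(url) {
  const i = url.indexOf('#!');
  if (i === -1) return url; // no hashbang: nothing to map
  const base = url.slice(0, i);
  const fragment = url.slice(i + 2);
  // Append the fragment as a URL-encoded query parameter
  const sep = base.includes('?') ? '&' : '?';
  return base + sep + '_escaped_fragment_=' + encodeURIComponent(fragment);
}
```

The site in this question uses a plain `#`, not `#!`, which is why it doesn't get this crawlable mapping.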
For this specific website, there actually is a JavaScript redirect involved. The original URL loads, then some JS does some work and eventually calls document.location.replace() to redirect to the URL with the hash. Googlebot won't necessarily follow that redirect and will index the original page.
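As an illustration of the kind of client-side redirect described above (a sketch, not the site's actual code; the helper only builds the target URL so it can be tested outside a browser):

```javascript
// Build the hash-based URL a page like this redirects to, e.g.
// https://www.ziggo.nl/producten/internet/ -> https://www.ziggo.nl/#producten/internet/
function toHashUrl(url) {
  const u = new URL(url);
  const path = u.pathname.replace(/^\//, ''); // drop the leading slash
  return u.origin + '/#' + path;
}

// In the browser the site would then do something like:
// document.location.replace(toHashUrl(document.location.href));
```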
One thing I want to caution: remember that this site is not exactly adhering to Google's recommendations on AJAX content. Coupled with the JS redirect, I would say there might be a risk of this being seen as cloaking. On the front end the content looks the same, and I would hope Google treats this scenario like its hashbang solution, since the site isn't intending to do anything tricky here. But we can't trust that Google will always give a free pass.
-
This looks more like a dynamic site using AJAX, rather than anchors in the page as you're thinking.
See: http://code.google.com/web/ajaxcrawling/docs/getting-started.html
No funny stuff. The page you see is the page Google intended to show you, with the page's own SEO value being responsible for its spot in the SERPs.
Related Questions
-
My Homepage Won't Load if JavaScript is Disabled. Is this an SEO/Indexation issue?
Hi everyone, I'm working with a client who recently had their site redesigned. I'm just going through to do an initial audit to make sure everything looks good. Part of my initial indexation audit goes through questions about how the site functions when you disable JavaScript, cookies, and/or CSS. I use the Web Developer extension for Chrome to do this. I know, more recently, people have said that content loaded by JavaScript will be indexed. I just want to make sure it's not hurting my client's SEO. http://americasinstantsigns.com/ Is it as simple as looking at Google's cached URL? The URL is definitely being indexed, and when looking at the text-only version everything appears to be in order. This may be an outdated question, but I just want to be sure! Thank you so much!
Technical SEO | | ccox10 -
Redirect URLs that don't end with a slash
Hi! I need to add to my .htaccess a way to redirect URLs that don't end with / to the ones that end with /. For example, I need http://example.com to redirect to http://www.example.com/. And, of course, this redirect should only apply to URLs that don't already end with /, so I don't get a double // at the end. I already have the code to redirect non-www to www, but can't find a way to do the slash part. Thanks!
Technical SEO | | arielbortz0 -
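A commonly used mod_rewrite pattern for this (a sketch, assuming Apache with mod_rewrite enabled; example.com stands in for the real domain) would be:

```apache
RewriteEngine On
# Skip real files (e.g. /style.css) so only page paths get a trailing slash
RewriteCond %{REQUEST_FILENAME} !-f
# Only act on URLs that don't already end in a slash
RewriteCond %{REQUEST_URI} !/$
RewriteRule ^(.*)$ http://www.example.com/$1/ [L,R=301]
```

Because the rule only fires when the URI doesn't already end in /, it won't produce a double slash.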
Joomla creating duplicate pages, then the duplicate page's canonical points to itself - help!
Using Joomla, every time I create an article a subsequent duplicate page is created, such as: /latest-news/218-image-stabilization-task-used-to-develop-robot-brain-interface and /component/content/article?id=218:image-stabilization-task-used-to-develop-robot-brain-interface, the latter being the duplicate. This wouldn't be too much of a problem, but the canonical tag on the duplicate is pointing to itself, creating mayhem in Moz and Webmaster Tools. We have hundreds of duplicates across our website and I'm very concerned with the impact this is having on our SEO! I've tried plugins such as sh404SEF and Styleware extensions, however to no avail. Can anyone help, or know of any plugins to fix the canonicals?
Technical SEO | | JamesPearce0 -
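For reference, the desired end state is for each duplicate /component/content/article URL to carry a canonical pointing at the primary article URL rather than at itself, something like (domain is a placeholder):

```html
<link rel="canonical" href="http://example.com/latest-news/218-image-stabilization-task-used-to-develop-robot-brain-interface" />
```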
Should you use the canonicalization tag when the content isn't exactly a duplicate?
We have a site that pulls data from different sources with unique URLs onto a main page, and we are thinking about using the canonical tag to keep those source pages from being indexed and to give any authority to the main page. But this isn't really what canonicalization is supposed to be used for, so I'm unsure if this is the right move.
Technical SEO | | Fuel
To give some more detail: We manage a site that has pages for individual golf courses. On the golf course page in addition to other general information we have sections on that page that show “related articles” and “course reviews”.
We may only show 4 or 5 of those on each course page, but we have hundreds of related articles and reviews for each course. So below "related articles" on the course page we have a link to "see more articles" that takes the user to a new page, which is simply an aggregate page housing all the article or review content related to that course.
Since we would rather have the overall course page rank in SERPs rather than the page that lists these articles, we are considering canonicalizing the aggregate news page up to the course page.
But, as I said earlier, this isn’t really what the canonicalization tag is intended for so I’m hesitant.
Has anyone else run across something like this before? What do you think?0 -
Can duplicate [*] reports be suppressed?
That's the best question title I could come up with! I have searched but can't find any info. New user: the first crawl error report shows listings of pages with the same titles/descriptions. In reality they are all the same page but with different parameters, e.g. Email_Me_When_Back_In_Stock.asp?productId=xxxxxxxxx etc. These have been excluded in both robots.txt (for some time, i.e. Disallow: /*?) and Google Webmaster Tools (just done). Will they still show in the updated report, and if so, is there a way to suppress them once the issues have been rectified, as can be done in Webmaster Tools? Is there a way to test whether they are being excluded by robots.txt and GWT?
Technical SEO | | RobWillox0 -
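As a rough way to sanity-check a wildcard rule like Disallow: /*? offline, you can approximate Google's documented handling of * and $ in robots.txt with a small script (a sketch of my own, not an official parser, and no substitute for the robots.txt tester in Webmaster Tools):

```javascript
// Convert a robots.txt path pattern (with * and $) into a RegExp,
// mirroring Google's documented wildcard handling: * matches any
// characters, a trailing $ anchors the end, otherwise prefix match.
function patternToRegExp(pattern) {
  const escaped = pattern
    .replace(/[.+?^${}()|[\]\\]/g, '\\$&') // escape regex metacharacters
    .replace(/\*/g, '.*');                 // * becomes .*
  return escaped.endsWith('\\$')
    ? new RegExp('^' + escaped.slice(0, -2) + '$')
    : new RegExp('^' + escaped);
}

// Would this path be blocked by the given Disallow pattern?
function isBlocked(path, disallowPattern) {
  return patternToRegExp(disallowPattern).test(path);
}
```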
Need some help with an old wordpress site we just merged with a new template
Sorry. The URL is awardrealty.com. I have a new website that we merged into a new WordPress theme. I just crawled the site with my SEOmoz crawl tool and it is showing a ridiculous number of 4xx pages (200+), and we can't find the 4xx pages in the sitemap or within WordPress. Need some help? Am I missing something easy?
Technical SEO | | Mark_Jay_Apsey_Jr.0 -
Site 'filtered' by Google in early July.... and still filtered!
Hi, Our site got demoted by Google all of a sudden back in early July. You can view the site here: http://alturl.com/4pfrj and you may read the discussions I posted in Google's forums here: http://www.google.com/support/forum/p/Webmasters/thread?tid=6e8f9aab7e384d88&hl=en http://www.google.com/support/forum/p/Webmasters/thread?tid=276dc6687317641b&hl=en Those discussions chronicle what happened, and what we've done since. I don't want to make this a long post by retyping it all here, hence the links. However, we've made various changes (as detailed), such as getting rid of duplicate content (use of noindex on various pages etc), and ensuring there is no hidden text (we made an unintentional blunder there through use of a 3rd party control which used CSS hidden text to store certain data). We have also filed reconsideration requests with Google and been told that no manual penalty has been applied. So the problem is down to algorithmic filters which are being applied. So... my reason for posting here is simply to see if anyone here can help us discover if there is anything we have missed? I'd hope that we've addressed the main issues and that eventually our Google ranking will recover (ie. filter removed.... it isn't that we 'rank' poorly, but that a filter is bumping us down, to, for example, page 50).... but after three months it sure is taking a while! It appears that a 30 day penalty was originally applied, as our ranking recovered in early August. But a few days later it dived down again (so presumably Google analysed the site again, found a problem and applied another penalty/filter). I'd hope that might have been 30 or 60 days, but 60 days have now passed.... so perhaps we have a 90 day penalty now. OR.... perhaps there is no time frame this time, simply the need to 'fix' whatever is constantly triggering the filter (that said, I 'feel' like a time frame is there, especially given what happened after 30 days). 
Of course the other aspect that can always be worked on (and oft-mentioned) is the need for more and more original content. However, we've done a lot to increase this and think our Guide pages are pretty useful now. I've looked at many competitive sites which list in Google and they really don't offer anything more than we do..... so if that is the issue it sure is puzzling if we're filtered and they aren't. Anyway, I'm getting wordy now, so I'll pause. I'm just asking if anyone would like to have a quick look at the site and see what they can deduce? We have of course run it through SEOMoz's tools and made use of the suggestions. Our target pages generally rate as an A for SEO in the reports. Thanks!
Technical SEO | | Go2Holidays0 -
I need some HTACCESS help (Magento)
Hi Guys, I need some help with an htaccess issue in Magento. Here is what I am trying to do: I wanted to change mysite.com/index.php/etc to mysite.com/etc, so I turned on the web-friendly URLs. That did that, BUT there are still two versions of every page on the site: www.mysite.com/etc and mysite.com/index.php/etc. That isn't good for SEO. So then I applied a 301 matching redirect: RedirectMatch 301 /index.php/(.*) http://www.mysite.com/$1. That solved that problem. But now I am not able to log into the admin. It is at mysite.com/index.php/pg45admin. It should redirect to mysite.com/pg45admin, but the page just hangs.... It goes into a continuous loop. I tried using the custom URL and then the site crashed and I had to redo it. So what do I need to do for this to work?
Technical SEO | | netviper0
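One way to avoid this kind of loop (a sketch, not tested against this exact Magento setup; pg45admin and mysite.com are taken from the question above) is to switch from RedirectMatch to mod_rewrite and exclude the admin path explicitly:

```apache
RewriteEngine On
# Leave the admin URL alone so it isn't caught in the redirect loop
RewriteCond %{REQUEST_URI} !^/index\.php/pg45admin
# 301 all other index.php URLs to their friendly equivalents
RewriteRule ^index\.php/(.*)$ http://www.mysite.com/$1 [R=301,L]
```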