Ajax #! URL support?
-
Hi Moz,
My site is currently following the convention outlined here:
https://support.google.com/webmasters/answer/174992?hl=en
Basically, since our pages are generated via Ajax, we are set up to direct bots that replace the #! in a URL with ?_escaped_fragment_= to cached versions of the Ajax-generated content.
For example, if the bot sees this URL:
http://www.discoverymap.com/#!/California/Map-of-Carmel/73
it will instead access this page:
http://www.discoverymap.com/?_escaped_fragment_=/California/Map-of-Carmel/73
In that case my server serves the cached HTML instead of the live page. This is all per Google's direction and is indexing fine.
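The server-side half of this arrangement can be sketched as follows. This is a hypothetical helper, not the site's actual code; the `/snapshots` directory and `.html` suffix are illustrative assumptions. The idea: when a request carries Google's `_escaped_fragment_` parameter, serve the pre-rendered snapshot; otherwise serve the live Ajax page.

```javascript
// Hypothetical routing helper: given a parsed query string, return the path
// of the cached HTML snapshot to serve, or null to serve the live Ajax page.
// The "/snapshots" prefix and ".html" suffix are illustrative assumptions.
function snapshotPathFor(query) {
  if (!('_escaped_fragment_' in query)) return null; // normal visitor: live page
  return '/snapshots' + query['_escaped_fragment_'] + '.html';
}

// A crawler requesting ?_escaped_fragment_=/California/Map-of-Carmel/73
// would be served /snapshots/California/Map-of-Carmel/73.html.
```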
However, the Moz bot does not do this. It seems like a fairly straightforward feature to support: rather than ignoring the hash, check whether it is a #! and then spider the URL with the #! replaced by ?_escaped_fragment_=. Our server does the rest.
If this is something Moz plans on supporting in the future, I would love to know. Any other information would be great as well.
Also, pushState is not practical for everyone due to limited browser support, etc.
Thanks,
Dustin
Updates:
I am editing my question because the site won't let me respond to my own question. It says I need to sign up for Moz Analytics. I was signed up for Moz Analytics?! Now I am not? I responded to my invitation weeks ago.
Anyway, you are misunderstanding how this process works. There is no sitemap involved. The bot reads this URL on the page:
http://www.discoverymap.com/#!/California/Map-of-Carmel/73
And when it is ready to spider the page for content, it spiders this URL instead:
http://www.discoverymap.com/?_escaped_fragment_=/California/Map-of-Carmel/73
The server does the rest. I am simply asking Roger to recognize the #! format and replace it with ?_escaped_fragment_=. I obviously do not know how Roger is coded, but it is a simple string replacement.
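The string replacement being described could look something like this. This is only a sketch of the mapping in Google's AJAX crawling scheme, not Moz's actual code:

```javascript
// Rewrite a hash-bang URL into its crawlable equivalent, as described above.
// Google's spec additionally percent-escapes %, #, &, and + inside the
// fragment; that step is omitted here since the example path has none of them.
function toEscapedFragmentUrl(url) {
  const bang = url.indexOf('#!');
  if (bang === -1) return url;           // no #!, so crawl the URL as-is
  const base = url.slice(0, bang);       // everything before the fragment
  const fragment = url.slice(bang + 2);  // everything after "#!"
  const sep = base.includes('?') ? '&' : '?';
  return base + sep + '_escaped_fragment_=' + fragment;
}

// toEscapedFragmentUrl('http://www.discoverymap.com/#!/California/Map-of-Carmel/73')
// returns 'http://www.discoverymap.com/?_escaped_fragment_=/California/Map-of-Carmel/73'
```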
Thanks.
-
Hello Dustin, this is Abe on the Moz Help team.
This question is a bit intricate; I apologize if I am not reading your question correctly.
With AJAX content like this, I know Google's full specification (https://developers.google.com/webmasters/ajax-crawling/docs/specification) indicates that the #! and ?_escaped_fragment_= technique works for their crawlers. However, Roger is a bit picky and isn't robust enough yet to use only the sitemap as the reference in this case. Luckily, one of our wonderful users came up with a solution using the pushState() method: http://www.moz.com/blog/create-crawlable-link-friendly-ajax-websites-using-pushstate explains how to create crawlable content using pushState. This should help our crawler read AJAX content. Let me know if this information works for you!
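In broad strokes, the pushState approach amounts to loading content via Ajax and then recording a real, crawlable path in the browser history instead of a #! fragment. A minimal sketch, with illustrative names not taken from the linked post:

```javascript
// Minimal pushState() sketch: after injecting Ajax content, record a plain
// path (no #!) in the address bar so crawlers can fetch the same URL directly.
// loadContent is an app-specific loader; the typeof guard lets this sketch
// run outside a browser, where history does not exist.
function navigate(path, loadContent) {
  loadContent(path); // e.g. fetch a fragment and inject it into the page
  if (typeof history !== 'undefined' && history.pushState) {
    history.pushState({ path }, '', path);
  }
  return path;
}

// navigate('/California/Map-of-Carmel/73', loadMapFragment) leaves the
// address bar at a clean URL instead of /#!/California/Map-of-Carmel/73.
```

The server must also respond to those plain paths directly, so a visitor (or crawler) hitting the URL without JavaScript still gets real content.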
I hope this helps!