Ajax #! URL support?
-
Hi Moz,
My site is currently following the convention outlined here:
https://support.google.com/webmasters/answer/174992?hl=en
Basically, since our pages are generated via Ajax, we are set up to direct bots that replace the #! in a URL with ?_escaped_fragment_= to cached versions of the Ajax-generated content.
For example, if the bot sees this URL:
http://www.discoverymap.com/#!/California/Map-of-Carmel/73
it will instead access the page:
http://www.discoverymap.com/?_escaped_fragment_=/California/Map-of-Carmel/73
In which case my server serves the cached HTML instead of the live page. This is all per Google's direction and is indexing fine.
However, the Moz bot does not do this. It seems like a fairly straightforward feature to support: rather than ignoring the hash, you check whether it is a #! and then spider the URL with the #! replaced by ?_escaped_fragment_=. Our server does the rest.
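In pseudo-terms, the rewrite I am asking for is just a string transform. Here is a rough sketch following Google's AJAX crawling scheme (the function name is mine, for illustration only):

```python
from urllib.parse import quote

def escaped_fragment_url(url: str) -> str:
    """Map a #! URL to its ?_escaped_fragment_= form per Google's
    AJAX crawling scheme; return the URL unchanged if it has no #!."""
    base, sep, fragment = url.partition("#!")
    if not sep:
        return url  # no hash-bang, nothing to rewrite
    # Percent-encode the fragment and append it as a query parameter,
    # using & if the base URL already carries a query string.
    joiner = "&" if "?" in base else "?"
    return base + joiner + "_escaped_fragment_=" + quote(fragment, safe="/")

print(escaped_fragment_url(
    "http://www.discoverymap.com/#!/California/Map-of-Carmel/73"))
# http://www.discoverymap.com/?_escaped_fragment_=/California/Map-of-Carmel/73
```

Anything like this before Roger fetches the page would be enough; the server handles everything from there.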
If this is something Moz plans on supporting in the future, I would love to know. Any other information would be great as well.
Also, pushState is not practical for everyone due to limited browser support, etc.
Thanks,
Dustin
Updates:
I am editing my question because it won't let me respond to my own question; it says I need to sign up for Moz Analytics. I was signed up for Moz Analytics?! Now I am not? I responded to my invitation weeks ago.
Anyway, you are misunderstanding how this process works. There is no sitemap involved. The bot reads this URL on the page:
http://www.discoverymap.com/#!/California/Map-of-Carmel/73
And when it is ready to spider the page for content, it spiders this URL instead:
http://www.discoverymap.com/?_escaped_fragment_=/California/Map-of-Carmel/73
The server does the rest. It is simply a matter of telling Roger to recognize the #! format and replace it with
?_escaped_fragment_=
I obviously do not know how Roger is coded, but it is a simple string replacement.
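To show how little the server side needs, here is a minimal sketch of the lookup step (the function name is mine, not our actual code; how the cached HTML is stored is up to the site):

```python
from urllib.parse import parse_qs, urlsplit

def snapshot_key(request_url: str):
    """Return the fragment the server should use to look up a cached
    HTML snapshot, or None if this is a normal (live) request."""
    query = parse_qs(urlsplit(request_url).query, keep_blank_values=True)
    values = query.get("_escaped_fragment_")
    return values[0] if values else None

print(snapshot_key(
    "http://www.discoverymap.com/?_escaped_fragment_=/California/Map-of-Carmel/73"))
# /California/Map-of-Carmel/73
print(snapshot_key("http://www.discoverymap.com/"))
# None
```

If the key is present, serve the cached HTML for that fragment; otherwise serve the live Ajax page.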
Thanks.
-
Hello Dustin, this is Abe on the Moz Help team.
This question is a bit intricate, so I apologize if I am not reading it correctly.
With AJAX content like this, I know Google's full specification
https://developers.google.com/webmasters/ajax-crawling/docs/specification
indicates that the #! and ?_escaped_fragment_= technique works for their crawlers. However, Roger is a bit picky and isn't robust enough yet to use only the sitemap as the reference in this case. Luckily, one of our wonderful users came up with a solution using the pushState() method. Click here:
http://www.moz.com/blog/create-crawlable-link-friendly-ajax-websites-using-pushstate
to find out how to create crawlable content using pushState. This should help our crawler read AJAX content. Let me know if this information works for you!
I hope this helps!