Meta tags in Single Page Apps
-
Since the deprecation of the AJAX Crawling Scheme last October, I have been curious about when Googlebot actually reads the meta tag information from a page.

We have a website at whichledlight.com that is implemented using Ember.js. Part of the site is our results pages (e.g. gu10-led-bulbs). These pages update the meta and link tags in the head of the document for things like canonicalisation and robots directives, but can only do so after the page has loaded and the JavaScript has run.

While the AJAX crawling scheme was still in place, we were able to prerender these pages (including the modified meta and link tags) and serve those snapshots to Googlebot. Now Googlebot no longer uses the prerendered snapshots and is instead sophisticated enough to load and run our site itself.

So the question I have is: does Googlebot read the meta and link tags from the original response, or does it wait until the page finishes rendering before reading them (including any modifications that have been performed on them)?
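For reference, the kind of update we perform looks roughly like the sketch below (simplified vanilla JS rather than our actual Ember code; the helper and route shape are illustrative):

```javascript
// Simplified sketch (not our actual Ember code): after each route
// transition we compute the head values for the results page, then
// write them into document.head. The route shape is illustrative.
function headTagsFor(route) {
  // Canonical always points at the clean, parameter-free results URL.
  const canonical = 'https://www.whichledlight.com/t/' + route.slug;
  // Filtered variants of a results page get noindexed; the base
  // results page stays indexable.
  const robots = route.filtered ? 'noindex, follow' : 'index, follow';
  return { canonical, robots };
}

// Applied after render, e.g.:
//   document.querySelector('link[rel=canonical]').href = tags.canonical;
//   document.querySelector('meta[name=robots]').content = tags.robots;
```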
-
Hi ipressman,
You've asked some great questions! I encourage you to start a new thread of your own for each one and provide a few more details in order to get the best responses.
As this thread is quite old, I'm going to lock it to new responses. Thanks for your understanding!
Christy
-
Hi TrueluxGroup! Did Oleg answer your question, and if so, would you mind marking his response as a Good Answer?
If not, what can we still help you with?
-
Thanks for getting back to us. I never knew I could look at the indexed source like that. Amazing! It is, however, as I expected: Google has not cached/indexed the updated head of the document that is altered by the JavaScript. That is going to be very problematic for us. Thanks for your help.
-
Here is what Google sees and indexes for that page: http://webcache.googleusercontent.com/search?q=cache:https://www.whichledlight.com/t/gu10-led-bulbs&num=1&strip=0&vwsrc=1
Related Questions
-
Best Practices For Angular Single Page Applications & Progressive Web Apps
Hi Moz Community, Is there a proper way to build an SPA (client-side rendered) and a PWA without a negative impact on SEO? Our dev team is currently trying to convert most of our pages to an Angular client-side-rendered single page application. I told them we should use a prerendering service, or use server-side rendering instead, since this would ensure that most web crawlers (and users with JS disabled) would be able to render and index all the content on our pages even with all the heavy JS use. Is there an even better way to do this, or some best practices?

In terms of the PWA they want to add along with changing the pages to an SPA, I told them this is pretty much separate from the SPA question, because the two are not dependent on each other; adding a manifest and service worker to our site would just be an enhancement. Also, if we do a complete PWA that uses JS for populating the content/data within the shell (i.e. not just the header and footer, but making the body a dynamic JS template as well), would that affect our SEO in any way? Any best practices here as well? Thanks!
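To illustrate what I suggested to them, user-agent-based prerendering usually boils down to something like this sketch (the bot list here is illustrative and deliberately incomplete):

```javascript
// Sketch of user-agent-based dynamic rendering: requests from known
// crawlers get the prerendered snapshot, everyone else gets the
// client-rendered Angular app. The pattern list is illustrative only.
const BOT_PATTERNS = [/googlebot/i, /bingbot/i, /duckduckbot/i, /baiduspider/i];

function shouldPrerender(userAgent) {
  return BOT_PATTERNS.some((re) => re.test(userAgent || ''));
}
```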
Technical SEO | | znotes0 -
Duplicate content: using the robots meta tag in conjunction with the canonical tag?
We have a WordPress instance on an Apache subdomain (let's say it's blog.website.com) alongside our main website, which is built in Angular. The tech team is using Akamai to do URL rewrites so that the blog posts appear under the main domain (website.com/more-keywords/here). However, due to the way they configured the WordPress install, they can't do a wildcard redirect in htaccess to force all the subdomain URLs to appear as subdirectories, so as you might have guessed, we're dealing with duplicate content issues. They could in theory do manual 301s for each blog post, but that's laborious and a real hassle given our IT structure (we're a financial services firm, so lots of bureaucracy and regulation). In addition, due to internal limitations (they seem mostly political in nature), a robots.txt file is out of the question. I'm thinking the next best alternative is the combined use of the robots meta tag (noindex, follow) alongside the canonical tag to try to point the bot to the subdirectory URLs. I don't think this would be unethical use of either feature, but I'm trying to figure out if the two would conflict in some way. Or maybe there's a better approach with which we're unfamiliar or that we haven't considered?
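To be concrete, the head of each subdomain post would end up carrying both tags, something like this (the URLs are the placeholders from above):

```html
<!-- Hypothetical head fragment for a post on blog.website.com; the
     canonical points at the rewritten subdirectory URL -->
<meta name="robots" content="noindex, follow">
<link rel="canonical" href="https://website.com/more-keywords/here">
```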
Technical SEO | | prasadpathapati0 -
Can You Use More Than One Google Local Rich Snippet on a Single Site / on a Single Page?
I am currently working on a website for a business that has multiple office locations. As I am trying to target all four locations, I was wondering if it is okay to have more than one local rich snippet on a single page. (For example, they list all four locations and addresses within their footer, and I was wondering if I could mark these up as local rich snippets.) What about having more than one on a single website? For example, if a company has multiple offices located in several different cities and has set up individual contact pages for these cities, can each page have its own local rich snippet? Will Google look at these multiple local rich snippets as spamming, or will it recognize the multiple locations and count them towards their local SEO?
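In case it's useful, the JSON-LD flavour of this is simply one LocalBusiness object per office; a sketch (the business details are invented placeholders):

```javascript
// Sketch: one LocalBusiness JSON-LD object per office. Each city's
// contact page would embed only its own object; the footer could
// embed all four. Business details are invented placeholders.
function localBusinessJsonLd(office) {
  return {
    '@context': 'https://schema.org',
    '@type': 'LocalBusiness',
    name: office.name,
    address: {
      '@type': 'PostalAddress',
      streetAddress: office.street,
      addressLocality: office.city,
    },
    telephone: office.phone,
  };
}
```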
Technical SEO | | webdesignbarrie1 -
Banned Page
I have been using a 3rd-party checker on pages indexed in Google. It has shown several banned pages. I type the page in and it comes up, but it is nowhere to be found for me to delete it. It is not in the WordPress pages. It also shows up in the duplicate content section in my campaigns on moz.com. I can't find the page to delete it. If it is banned then I do not want to redirect it to the correct page. Any ideas on how to fix this?
Technical SEO | | Roots70 -
How to remove the 4xx Client Error, the "Too many links on a single page" warning, and the Canonical notices
Firstly, I am getting around 12 errors in the 4xx Client Error category. The description says this means a bad or broken link. How can I repair this? Secondly, I am getting lots of warnings about too many links on a single page. I want to know how to tackle this. Finally, I don't understand the basics of Canonical notices. I have around 12 notices of this kind which I want to remove too. Please help me out in this regard. Thank you beforehand. Amit Ganguly http://aamthoughts.blogspot.com - Sustainable Sphere
Technical SEO | | amit.ganguly0 -
301 redirecting some pages directly, and the rest to a single page
I've already read through the Redirect guide here but can't get this down in my .htaccess. I want to redirect some pages specifically (/contactinfo.html to the new /contact.php), and I want all other pages (not all have equivalent pages on the new site) to redirect to my new homepage (index.php). How can I set it up so that some specific pages redirect directly, and all others go to one page? I already have the specific oldpage.html -> newpage.php redirects in place; I just need to figure out the broad one for everything else.
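To show what I mean, here is a sketch of the .htaccess I'm aiming for (untested; it assumes the rules live in the site root and uses mod_rewrite for both cases so that rule order is respected):

```apache
RewriteEngine On

# Specific one-to-one redirects first; the [L] flag stops processing
# on a match, so these win over the catch-all below.
RewriteRule ^contactinfo\.html$ /contact.php [R=301,L]

# Everything else that isn't an existing file on the new site goes
# to the new homepage.
RewriteCond %{REQUEST_FILENAME} !-f
RewriteRule ^ /index.php [R=301,L]
```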
Technical SEO | | RyanWhitney150 -
Backtracking from verification meta tag to the correct Google account is difficult
A Google verification meta tag was created and implemented on a site that I am now responsible for (I took over an SEO project after a long lapse), but no one seems to know what Google account was used to create the meta tag in the first place. I'm finding it very difficult to backtrack from verification meta tag to the Google account, and all the online help is for those having trouble moving forward with the verification. Any suggestions or advice?
Technical SEO | | MaryDoherty0 -
Using the Canonical Tag
Hi, I have an issue that I think can be solved with a canonical tag, but I am not sure yet. We are developing a page full of statistics, like this: www.url.com/stats/ It is filled with hundreds of stats, so users can come and select only the stats they want to see and share with their friends; this becomes like a new page with their selected stats: www.url.com/stats/?id=mystats The problem I see with this is that all of these pages will contain part of the content from the main page (1), and many of them will be exactly the same, so: duplicate content. My idea was to add a canonical tag pointing to "www.url.com/stats/" on all of these pages, similar to how Rand does it here: http://www.seomoz.org/blog/canonical-url-tag-the-most-important-advancement-in-seo-practices-since-sitemaps But I am not sure about this solution because the content is not exactly the same: page (2) will only have a part of the content that page (1) has, and in some cases just a very small part. Is the canonical tag useful in this case? Thank you!
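Concretely, the idea is that every selected-stats variant would carry the same canonical as the main page, e.g.:

```html
<!-- Served on www.url.com/stats/?id=mystats as well as on the main
     www.url.com/stats/ page itself -->
<link rel="canonical" href="https://www.url.com/stats/">
```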
Technical SEO | | andresgmontero0