How to handle JavaScript-paginated content for SEO
-
On our blog listings page, we limit the number of posts visible at one time to 10. However, all of the posts are loaded in the HTML of the page, and pagination links are added at the bottom.
Example page: https://tulanehealthcare.com/about/newsroom/
When a user clicks to the next page, JavaScript simply filters the content on the same page to show the next group of postings. Nothing in the HTML or the URL changes.
So the question is: does Google consider this hidden content, since all of the listings are in the HTML but only a handful are visible on the page at once?
Or is Googlebot smart enough to know that the content is being filtered by JavaScript pagination?
If this is indeed a problem, we have two possible solutions:
- not building the HTML for the later pages until the user clicks 'next'
- adding parameters to the URL so that each page of listings has its own address (see the sketch below)
Any other solutions that would be better for SEO?
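A minimal sketch of what the second option could look like, keeping the existing client-side filtering but recording the page number in the URL. The `showPage()` helper, the `.blog-listing` class, and the page size of 10 are assumptions for illustration, not taken from the actual site:

```js
// Minimal sketch of option 2 (illustrative only): keep the client-side
// filtering, but give each "page" its own URL via a ?page= parameter.
const PAGE_SIZE = 10;

// Stand-in for the site's existing filtering logic; assumes each listing
// is a hypothetical .blog-listing element already present in the HTML.
function showPage(page) {
  document.querySelectorAll('.blog-listing').forEach((post, i) => {
    post.hidden = Math.floor(i / PAGE_SIZE) + 1 !== page;
  });
}

function goToPage(page) {
  showPage(page);
  // Record the page number in the URL so each page of listings has its
  // own linkable address.
  const url = new URL(window.location.href);
  url.searchParams.set('page', String(page));
  history.pushState({ page }, '', url.toString());
}

// On load, honour a deep link such as /about/newsroom/?page=3.
const params = new URL(window.location.href).searchParams;
showPage(parseInt(params.get('page') || '1', 10));

// Keep the browser back/forward buttons working.
window.addEventListener('popstate', (event) => {
  showPage((event.state && event.state.page) || 1);
});
```

In this sketch `goToPage()` would be wired to the existing pager links; the parameter mainly makes each page linkable, since a crawler would still have to run the JavaScript to see which posts are visible.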
-
Thanks for the thorough response. I was leaning toward leaving it alone for the time being, and this helps affirm my decision. I don't think we would see much benefit from reworking it just to make it more Googlebot-friendly.
-
Content that only appears after JavaScript runs tends to be strongly de-valued, and the links may not be noticed or seen at all. Googlebot can render pages with headless browsers (something similar to driving Selenium or Windmill from Python, with elements targeted via XPath), but that takes far longer than basic source-code scraping: fetching and rendering the modified DOM with a headless browser can take 5-10 seconds instead of less than 1 second.
Since Google's mission is to 'index the web', it stands to reason that they won't take this colossal efficiency hit on every crawl, or for every site. Looking at the results for many sites with different builds, that's exactly what I see. Just because 'Google can' doesn't mean 'Google will' on every crawl of every website.
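For a sense of the efficiency gap being described, here is a rough Node.js sketch comparing a plain source fetch with a full headless render. Puppeteer stands in for 'a headless browser' purely as an illustration; it says nothing about how Googlebot is actually implemented:

```js
// Rough comparison of raw source scraping vs. rendering with a headless
// browser (Puppeteer used only as an example; Node 18+ assumed for fetch).
const puppeteer = require('puppeteer');

async function compare(url) {
  // 1) Plain source fetch: only the HTML the server sends, no JS executed.
  let start = Date.now();
  const rawHtml = await (await fetch(url)).text();
  console.log(`raw source: ${rawHtml.length} bytes in ${Date.now() - start} ms`);

  // 2) Headless render: launch a browser, let the page's JavaScript run,
  //    then read the modified DOM. Typically several seconds slower.
  start = Date.now();
  const browser = await puppeteer.launch();
  const page = await browser.newPage();
  await page.goto(url, { waitUntil: 'networkidle0' });
  const renderedHtml = await page.content();
  await browser.close();
  console.log(`rendered DOM: ${renderedHtml.length} bytes in ${Date.now() - start} ms`);
}

compare('https://tulanehealthcare.com/about/newsroom/');
```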
Some very large websites rely on such technologies, but they're usually household-name sites with a unique value proposition and strong trust signals for their audience. If you're not a titan of industry, you're probably not one of the favoured few that gets such regular special treatment from Googlebot.
This is an interesting post to read:
https://medium.com/@baphemot/whats-server-side-rendering-and-do-i-need-it-cb42dc059b38
... you may also have the option of building the HTML on the server side and serving each page of listings at its own URL. To me this sounds like a case where server-side rendering (SSR) might be the best option. You can still use your existing technologies (which are fast) to build the paginated HTML, but render it on the server and serve the finished, static HTML to users and crawlers. That's what I would start looking at, as it keeps the best of both worlds.
Implementation could be costly though!
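As a very rough, hypothetical sketch of what SSR for the listings could look like (Express is used purely for illustration; the route, page size, and `getAllPosts()` stub are all invented for the example):

```js
// Hypothetical server-side rendering of the paginated listings: each page
// of posts is built on the server and served as static HTML at its own URL.
const express = require('express');
const app = express();
const PAGE_SIZE = 10;

// Stand-in for however the site actually loads its posts.
function getAllPosts() {
  return Array.from({ length: 45 }, (_, i) => ({
    title: `Post ${i + 1}`,
    url: `/about/newsroom/post-${i + 1}/`,
  }));
}

app.get('/about/newsroom/', (req, res) => {
  const posts = getAllPosts();
  const totalPages = Math.ceil(posts.length / PAGE_SIZE);
  const requested = parseInt(req.query.page, 10) || 1;
  const page = Math.min(totalPages, Math.max(1, requested));
  const visible = posts.slice((page - 1) * PAGE_SIZE, page * PAGE_SIZE);

  // The finished HTML is built here, so users and crawlers both receive a
  // complete page without having to execute any JavaScript.
  const items = visible
    .map((p) => `<li><a href="${p.url}">${p.title}</a></li>`)
    .join('');
  const pager = Array.from({ length: totalPages }, (_, i) =>
    `<a href="/about/newsroom/?page=${i + 1}">${i + 1}</a>`
  ).join(' ');

  res.send(`<ul>${items}</ul><nav>${pager}</nav>`);
});

app.listen(3000);
```

In practice a framework with built-in SSR (Next.js, Nuxt, and the like) would handle this for you, but the principle is the same: the paginated HTML is finished before it leaves the server.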
I don't think you'd get accused of cloaking, but that doesn't change the fact that part of your site's architecture will be invisible to Google most of the time, which is not good for SEO at all.
Another option: instead of building the post listings with JavaScript at page-load (which can cause stutter between pages), include all of them in the source HTML from the start and use JavaScript only for the visual navigation from page to page. Let JS handle the visual effect, but keep every listing in the HTML from the get-go. That can work fine too, though SSR might still be better for you (I don't know).
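A minimal sketch of that approach, assuming (hypothetically) that all listings sit in the source HTML as `.post` elements inside `#post-list`, and that the pager links carry a `data-page` attribute:

```js
// Sketch of "JS for the visual effect only": every listing is already in
// the source HTML; clicking a pager link just hides or shows them.
// The selectors and data-page attribute are assumptions for illustration.
const PAGE_SIZE = 10;
const listings = document.querySelectorAll('#post-list .post');

document.querySelector('#pager').addEventListener('click', (event) => {
  const link = event.target.closest('a[data-page]');
  if (!link) return;
  event.preventDefault();

  const page = Number(link.dataset.page);
  listings.forEach((item, i) => {
    // Nothing is created or removed: the markup Googlebot fetches in the
    // raw source is the same markup that stays on the page.
    item.hidden = Math.floor(i / PAGE_SIZE) + 1 !== page;
  });
});
```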
...
After looking at your source code, it seems you have already done this. The only real problem would be if the links themselves were created through JS, which they are not (they are all present in the unmodified source). Yes, content that starts out hidden is slightly de-valued (but not ignored completely). This might impact you a little, but honestly I don't think splitting the listings onto entirely separate pages would be much better. It would help internal indexation of the architecture slightly, but would likely hamper content-loading speed significantly.
Maybe think about the SSR option. You might get the best of both worlds: keep the JS intact whilst also allowing deep-linking to paginated content (which is currently impossible, since you can't link directly to page 2 of the results).
Let me know if you have previously thought about SSR