How to handle JavaScript-paginated content for SEO
-
On our blog listings page, we limit the number of posts visible on the page to 10. However, all of the posts are loaded in the HTML of the page, and pagination links are added at the bottom.
Example page: https://tulanehealthcare.com/about/newsroom/
When a user clicks to the next page, the script simply filters the content on the same page to show the next group of postings. Nothing in the HTML or the URL changes; it is all done via JavaScript.
So the question is: does Google consider this hidden content, since all listings are in the HTML but only a handful are visible on the page at a time?
Or is Googlebot smart enough to know that the content is being filtered by JavaScript pagination?
If this is indeed a problem, we have two possible solutions:
- not building the HTML for the next pages until the user clicks 'next'
- adding parameters to the URL to signal that the content has changed (a rough sketch of this follows below)
Any other solutions that would be better for SEO?
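For illustration, here's a rough sketch of what the second option could look like using the History API (the function and class names here are made up for the example, not taken from our actual code):

```js
// Rough sketch of option 2: keep the existing JS filtering, but record
// the current page in the URL so each "page" has a distinct address.
// filterListings() stands in for our existing filter (hypothetical name).
function goToPage(page) {
  filterListings(page);
  const url = new URL(window.location.href);
  url.searchParams.set('page', String(page));
  // e.g. /about/newsroom/?page=2 -- updated without a reload
  history.pushState({ page: page }, '', url);
}
```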
-
Thanks for the thorough response. I was leaning toward leaving it alone for the time being, and this helps affirm my decision. I don't think we would see much benefit from tampering with it to make it more Googlebot-friendly.
-
It will be strongly devalued, and the links may not be noticed or seen at all. Googlebot can leverage headless browsers (something similar to Selenium or Windmill in Python, with targeting handled via XPath). The catch is that this takes far longer than basic source-code scraping: rendering the modified source with a headless browser can take 5-10 seconds instead of less than 1 second
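To give a feel for where that cost comes from, here's a rough sketch of rendered scraping using Node's selenium-webdriver (my choice for illustration; the XPath and the "post-link" class name are invented, not from any real crawler):

```js
const { Builder, By, until } = require('selenium-webdriver');

async function scrapeRenderedLinks(url) {
  // Launching a real browser is the expensive part: the page's
  // JavaScript must be fetched, parsed and executed before scraping.
  const driver = await new Builder().forBrowser('chrome').build();
  try {
    await driver.get(url);
    // Wait for the JS-built listings to exist, then target via XPath.
    await driver.wait(
      until.elementsLocated(By.xpath('//a[contains(@class, "post-link")]')),
      10000
    );
    const links = await driver.findElements(
      By.xpath('//a[contains(@class, "post-link")]')
    );
    return Promise.all(links.map((link) => link.getAttribute('href')));
  } finally {
    await driver.quit();
  }
}
```

A plain HTTP fetch of the raw source skips all of that browser work, which is exactly why it finishes in a fraction of the time.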
Since Google's mission is to 'index the web', you have to assume they won't take this colossal efficiency hit all the time, or for everyone. Looking at the results of many sites and their different builds, that's exactly what I see. Just because 'Google can' doesn't mean 'Google will' on every crawl and every website
Some very large websites rely on such technologies, but usually they're household-name sites with a unique value proposition and strong trust signals for their audience. If you're not a titan of industry, you're likely not one of the favoured few who get such special treatment from Googlebot so regularly
This is an interesting post to read:
https://medium.com/@baphemot/whats-server-side-rendering-and-do-i-need-it-cb42dc059b38
... you may also have the option of building the HTML on the server side and serving it at different URLs. To me this sounds like a case where server-side rendering (SSR) might be the best option. That way you can still use your existing technologies (which are FAST) to render the modified HTML, but do the render on the server and serve the resulting static HTML to users. That's personally where I would start looking, as it keeps the best of both worlds
Implementation could be costly though!
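As a rough illustration of the idea only (not your actual stack; the route, data and page size are all hypothetical), an SSR endpoint could look something like this in Express:

```js
const express = require('express');
const app = express();
const PAGE_SIZE = 10;

// Stand-in for the real post data source (hypothetical sample data).
const posts = Array.from({ length: 35 }, (_, i) => ({
  title: `Post ${i + 1}`,
  url: `/about/newsroom/post-${i + 1}/`,
}));

app.get('/about/newsroom/page/:page', (req, res) => {
  const page = Math.max(1, parseInt(req.params.page, 10) || 1);
  const slice = posts.slice((page - 1) * PAGE_SIZE, page * PAGE_SIZE);
  // Serve already-rendered HTML: Googlebot needs no JS to see the links,
  // and every page of listings has its own distinct, crawlable URL.
  res.send(`<!doctype html><ul>${slice
    .map((p) => `<li><a href="${p.url}">${p.title}</a></li>`)
    .join('')}</ul>${page * PAGE_SIZE < posts.length
      ? `<a href="/about/newsroom/page/${page + 1}">Next</a>`
      : ''}`);
});

app.listen(3000);
```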
I don't think you'd get accused of cloaking, but that doesn't change the fact that part of your site's architecture will be invisible to Google 90% of the time, which is not good for SEO at all
Another option: instead of building the post listings at page-load (which will cause stutter between pages), load all of them at once in the source code and use JavaScript only for the visual navigation from page to page. Let JS handle the visual effect, but keep all listings in the HTML right from the get-go. That can work fine too, though maybe SSR would be better for you (I don't know)
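A minimal sketch of that pattern, assuming each listing is a hypothetical `li.post` element (names and page size invented for the example):

```js
// All listings are present in the HTML from the start; JS only
// toggles which ones are visible. Hidden posts stay in the source,
// so crawlers reading the raw HTML still see every link.
const PAGE_SIZE = 10;
const posts = document.querySelectorAll('li.post');

function showPage(page) {
  posts.forEach((post, i) => {
    const postPage = Math.floor(i / PAGE_SIZE) + 1;
    post.hidden = postPage !== page;
  });
}

showPage(1);
```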
...
After looking at your source code, it seems you have already done this. The only real problem would be if the links themselves were created through JS, which they are not (they are all present in your unmodified source code). Yes, content that begins hidden is slightly devalued (but not completely). This might impact you slightly, but to be honest I don't think separating the listings out onto entirely separate pages would be much better. It would help internal indexation of the architecture slightly, but would likely hamper content-loading speeds significantly
Maybe think about the SSR option. You might get the best of both worlds: keeping the JS intact whilst also allowing deep-linking of paginated content (which is currently impossible; you can't link to page 2 of the results)
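To show how deep-linking could work even with the JS kept intact, here's a rough sketch that builds on the showPage() idea above (again, everything here is hypothetical, not your code):

```js
// Make paginated views deep-linkable: read the page number from the
// URL on load, and keep back/forward navigation in sync afterwards.
function currentPageFromUrl() {
  const page = Number(new URLSearchParams(window.location.search).get('page'));
  return Number.isInteger(page) && page > 0 ? page : 1;
}

// Honour links like /about/newsroom/?page=2 when the page first loads.
showPage(currentPageFromUrl());

// Keep the browser's back/forward buttons working after pushState() calls.
window.addEventListener('popstate', () => showPage(currentPageFromUrl()));
```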
Let me know if you have previously thought about SSR.