How can I deal with AJAX pagination?
-
Hello!
I would like your input on how to deal with a specific page on my website.
As you can see, we have a list of 76 ski resorts. Our pagination uses AJAX, which means we have only one URL, and just below that paginated list we have a simple plain list showing all 76 ski resorts in this mountain area.
I know this is quite bad, since the same ski resort can be reached through two different anchor links.
Thank you very much in advance,
Simon
-
Hi again,
I still have a question.
If all my content is accessible without JavaScript (my AJAX pagination), do you think Google will crawl all of my content?
Search engines can't read JavaScript, can they?
-
Hi,
There are three ways:
1. Add rel="nofollow" to your pagination links (that will promote your first page only).
2. Drop the AJAX pagination and change the title of each page, adding the page number at the end.
3. Detect whether the visitor is a crawler bot (Google, Yahoo, Bing), then set a variable to disable AJAX when the page is being crawled.
There is always a problem using AJAX on a website (it creates duplicates).
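To illustrate option 2, here is a minimal sketch of how a PHP-backed page might give each page of results its own crawlable URL and title. The parameter name, URL path, titles, and page size are illustrative assumptions, not details from the actual site:

<?php
// Sketch of option 2: every page of results gets its own crawlable
// URL, and the page number is appended to the end of the <title>.
$perPage    = 10;
$totalItems = 76;                                   // the 76 ski resorts
$totalPages = (int) ceil($totalItems / $perPage);   // 8 pages here
$page = isset($_GET['page']) ? (int) $_GET['page'] : 1;
$page = max(1, min($totalPages, $page));
$title = 'Ski resorts';                             // placeholder base title
if ($page > 1) {
    $title .= ' - Page ' . $page;                   // "...adding the page number at the end"
}
?>
<title><?php echo htmlspecialchars($title); ?></title>
<!-- Plain pagination links a crawler can follow; JavaScript can still
     intercept the clicks and load the next page via AJAX as an enhancement. -->
<?php for ($i = 1; $i <= $totalPages; $i++) { ?>
<a href="/ski-resorts?page=<?php echo $i; ?>"><?php echo $i; ?></a>
<?php } ?>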
-
Thanks a lot, I think we'll go for option 2.
Thanks again
-
Related Questions
-
How can I safely eliminate unused WordPress tags?
Hi. Over several years I have used many tags (more than 1,000) on my WordPress website 😞 but most of them get no views and no clicks from Google search. Now I want to delete these old, useless, unused tags, but I'm worried about SEO problems such as lots of 404 pages. Does anyone have a safe way to delete these WordPress tags? How can I safely remove them?
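One common way to address the 404 worry raised here is to 301-redirect the deleted tag archives. Below is a rough, hypothetical sketch for a theme's functions.php; the URL pattern and redirect target are illustrative assumptions, not a recommendation:

// Sketch: after deleting tags, send requests for the old /tag/...
// archives to the blog home with a 301 instead of letting them 404.
add_action('template_redirect', function () {
    if (is_404() && strpos($_SERVER['REQUEST_URI'], '/tag/') === 0) {
        wp_redirect(home_url('/'), 301);  // illustrative target
        exit;
    }
});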
Technical SEO | markdoel
-
Can I block HTTPS URLs using the Host directive in robots.txt?
Hello Moz Community, I recently found that Google's bots have started crawling the HTTPS URLs of my website, which is increasing the number of duplicate pages on our site. Instead of creating a separate robots.txt file for the HTTPS version of my website, can I use the Host directive in robots.txt to tell Google's bots which version of the website is the original? Host: http://www.example.com I was wondering if this method will work and suggest to Google's bots that the HTTPS URLs are a mirror of this website. Thanks for all of the great responses! Regards, Ramendra
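For reference, the proposed file would look something like the sketch below. One caveat worth flagging: Host is a Yandex-specific extension that Google does not support, so for Google a 301 redirect or rel="canonical" is the usual way to signal the original version:

# robots.txt sketch of what the question proposes
# (Host is only recognised by Yandex; Google ignores it)
User-agent: *
Disallow:

Host: http://www.example.com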
Technical SEO | TJC.co.uk
-
URL Structure for Deal Aggregator
I have a website that aggregates deals from various daily-deals sites. I originally had all the deals on one page, /deals; however, I thought it might be more useful to have several pages, e.g. /beautydeals or /hoteldeals. But if I give every section its own page, that means I either have no current deals on the main /deals page or I have duplicate content. I'm wondering what the best approach might be here? A few of the options that come to mind are:
1. Return to having all the deals on one page, /deals, and link internally to content within that page.
2. Keep both a main /deals page with all of the deals and other pages such as /beautydeals, but add rel="canonical" to point to the main /deals page (see the sketch after this list).
3. Create new content for the /deals page... however, I think people will probably want to see at least some deals straight away rather than having to click through to another page.
4. Display some sub-categories on the main /deals page, but have separate URLs for other, more popular sub-categories, e.g. /beautydeals (this is how it works at the moment).
I should probably point out that the site also has other content, such as events and a directory. Any suggestions on how best to approach this are much appreciated! Cheers, Andy
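A minimal sketch of the canonical tag that option 2 describes, placed in the <head> of a sub-category page such as /beautydeals (the domain is a placeholder):

<!-- In the <head> of /beautydeals, pointing search engines
     at the main deals page as the canonical version -->
<link rel="canonical" href="http://www.example.com/deals" />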
Technical SEO | andywozhere
-
What can I do if my reconsideration request is rejected?
Last week I received an unnatural link warning from Google. Sad times. I followed the guidelines and reviewed all my inbound links for the last 3 months. All 5,000 of them! Along with several genuine ones from trusted sites like the BBC, the Guardian and the Telegraph, there was a load of spam. About 2,800 of them were junk. As we don't employ an SEO agency and don't buy links (we don't even buy AdWords!), I know that all of this spam was generated by spam bots and site scrapers copying our content. As the bad links were not created by us and there are 2,800 of them, I cannot hope to get them removed. There are no 'contact us' pages on these Russian spam directories and Indian scraper sites. And as for the adult bookmarking website that has linked to us over 1,000 times, well, I couldn't contact that site in company time even if I wanted to! As a result, I did my manual review all day, made a list of 2,800 bad links and disavowed them. I followed this up with a reconsideration request to tell Google what I'd done, but a week later it was rejected: "We've reviewed your site and we still see links to your site that violate our quality guidelines." As these links are beyond my control and I've tried to disavow them, is there anything more to be done? Cheers Steve
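For anyone unfamiliar with the file format the question refers to, a disavow file is a plain text list along these lines (the domains here are invented for illustration):

# Disavow file sketch - lines starting with # are comments.
# Disavow every link from a whole domain:
domain:russian-spam-directory.example
domain:indian-scraper-site.example
# Or disavow a single linking page:
http://adult-bookmarks.example/page-that-links.html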
Technical SEO | SteveBrumpton
-
600+ Visitors a day after 6 months, can you do it?
So since the Penguin update, the clients of the company I work for have gradually been losing traffic and money. No one (except for me) has noticed this yet and connected the dots. Yesterday we all got called in for a bollocking, and the manager asked the head of our department if he would be confident of getting 600+ visitors a day to an 'average website' that has just started up, to which he replied 'yes'. Since I started here back in February, not a single new client has been able to gain that many visitors (many have not gained even 25% of that figure), which in the post-Panda and post-Penguin world I find completely understandable. When I first started here they were using SEO 'tactics' that people employed 5+ years ago, and they didn't even use exact-match keyword data. I have had a few talks with them about how SEO has changed over the last few years, and they still don't seem to understand that it is now significantly more difficult to gain traffic using SEO than it once was. If you were asked the same question, thinking about the 'average' client you might get, would you be confident enough to guarantee that at the 6-month mark they would be getting 600+ visitors a day?
Technical SEO | Kinsel
-
How does your crawler treat ajax links?
Hello! It looks like the SEOmoz crawler (and Google) follows AJAX links. Is this normal behavior? We have implemented the canonical element, and that seems to resolve most of the duplicate content issues. Anything else we can do? Example: Krom
Technical SEO | AJPro
-
Can Google read onClick links?
Can Google read and pass link juice in a link like this?
<a href="#Link123" onClick="window.open('http://www.mycompany.com/example','Link123')"><img src="../../img/example.gif" /></a>
Thanks!
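For comparison, a sketch of a more crawler-friendly variant of the same link, with the real URL in the href so it still works when scripts are ignored (illustrative markup, not the asker's actual code):

<a href="http://www.mycompany.com/example"
   onclick="window.open(this.href, 'Link123'); return false;">
  <img src="../../img/example.gif" />
</a>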
Technical SEO | jorgediaz
-
How can I prevent sh404SEF Anti-flood control from blocking SEOMoz?
I'm using sh404SEF on my Joomla 1.5 website. Last week, I activated the security functions of the tool, which include an anti-flood control feature. This morning when I looked at my new crawl statistics in SEOmoz, I noticed a significant drop in the number of webpages crawled, and I'm attributing that to the security configuration I made earlier in the week. I'm looking for a way to prevent this from happening so the next crawl is accurate. I was thinking of using sh404SEF's "UserAgent white list" feature. Does SEOmoz have a UserAgent string that I could try adding to my white list? Is this what you guys recommend as a solution to this problem?
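On the user-agent point: Moz's crawler identifies itself with the token rogerbot, so assuming sh404SEF matches white-list entries as user-agent substrings (an assumption worth verifying against the component's documentation), an entry like this might be what's needed:

# Hypothetical sh404SEF UserAgent white-list entry (exact field format assumed)
rogerbot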
Technical SEO | JBradySD