Pagination Issues
-
Hi,
My press release section is showing pagination issues (duplicate pages):
http://www.me-by-melia.com/press/
It is showing 27 duplicate page titles, which would be an easy fix. Do you recommend pagination?
Let me know if you see any other issues.
-
Click Start Capture; then, when you load the page, it will list all the requests and their status codes.
-
What do you recommend for changing the URL structure from http://www.me-by-melia.com/index3.html to www.melia-by-melia.com/London?
I tried pressing F12 in IE8 and clicking Network. How do you distinguish how many redirects there were?
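One way to count them outside the browser is with a short script. A minimal sketch, assuming Python 3 with the third-party requests library installed (the URL is the one mentioned above):

```python
# Minimal sketch: list every redirect hop for a URL outside the browser.
# Assumes Python 3 with the third-party "requests" library installed.
import requests

url = "http://www.me-by-melia.com/index3.html"  # URL from this thread
response = requests.get(url, allow_redirects=True, timeout=10)

# response.history holds one Response per redirect hop, in order.
print(f"Number of redirects: {len(response.history)}")
for hop in response.history:
    print(f"{hop.status_code} {hop.url} -> {hop.headers.get('Location')}")
print(f"Final: {response.status_code} {response.url}")
```

Each entry in response.history is one redirect hop, so the length of that list is exactly the number you are after.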
-
Excellent point, and I wholeheartedly agree!
-
Do you mean passing a title in the query string so they have individual titles? Yes, that would be a good idea. Duplicate titles are a waste of prime SEO real estate.
If the content changes significantly when you choose a different page, then of course DO NOT use canonical tags.
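If it helps, here is a rough way to audit the paginated titles yourself. A minimal sketch, assuming Python 3 with the third-party requests library; the /pressN URL pattern is only an illustration based on the /press26 and /press27 examples in this thread, not confirmed for the site:

```python
# Rough sketch: flag duplicate <title> tags across the paginated press pages.
# Assumes Python 3 with the third-party "requests" library installed;
# the /pressN URL pattern is illustrative, not confirmed for this site.
import re
from collections import defaultdict

import requests

titles = defaultdict(list)
for page in range(1, 28):  # the thread mentions 27 duplicate page titles
    url = f"http://www.me-by-melia.com/press{page}"
    try:
        html = requests.get(url, timeout=10).text
    except requests.RequestException:
        continue  # skip pages that fail to load
    match = re.search(r"<title>(.*?)</title>", html, re.IGNORECASE | re.DOTALL)
    if match:
        titles[match.group(1).strip()].append(url)

# Every title shared by more than one URL is wasted SEO real estate.
for title, urls in titles.items():
    if len(urls) > 1:
        print(f"Duplicate title {title!r} on {len(urls)} pages:")
        for u in urls:
            print(f"  {u}")
```

Any title that shows up on more than one URL is a candidate for a unique, query-string-driven title.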
-
You're right. I was referring to duplicate titles. Something as simple as that can be fixed by updating the title tags. Do you recommend changing the page titles in the URL string instead of showing /press26 or /press27?
Good suggestions though!
-
Sorry, I may have misunderstood.
Are they duplicate content? If so, I would do as I suggested.
If they are individual press releases, then why are they being reported as duplicates? You need to add enough content to make sure that they are seen as individual pages.
I cannot load the page; it seems to be offline or having some other problem, so I don't understand how you are using pagination.
-
Will the individual press release pages still get indexed and ranked individually in the search engines?
-
Add <link rel="canonical" href="http://www.me-by-melia.com/press/"/>
This will tell the search engines that, no matter what the parameters are, the credit should go to http://www.me-by-melia.com/press/
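To sanity-check the fix, here is a rough sketch of how you could confirm that every parameterised URL declares that same canonical (again assuming Python with the requests library; the ?page=N parameter is only an illustration, not confirmed for this site):

```python
# Rough sketch: check that each parameterised press URL declares the same
# canonical. Assumes the third-party "requests" library; the ?page=N
# parameter is illustrative, not confirmed for this site.
import re

import requests

CANONICAL = "http://www.me-by-melia.com/press/"

for page in range(1, 28):
    url = f"http://www.me-by-melia.com/press/?page={page}"
    try:
        html = requests.get(url, timeout=10).text
    except requests.RequestException:
        continue
    # Naive extraction: assumes rel appears before href inside the tag.
    match = re.search(
        r'<link[^>]+rel=["\']canonical["\'][^>]+href=["\']([^"\']+)["\']',
        html,
        re.IGNORECASE,
    )
    found = match.group(1) if match else None
    status = "OK" if found == CANONICAL else "MISSING/WRONG"
    print(f"{url}: canonical={found!r} [{status}]")
```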
Related Questions
-
Ranking issue for new website
Hi all, I have a specific SEO challenge. Six months ago, we started to build an eCommerce site (located in the UK). To speed up the site launch, we copied the entire site over from an existing site based in Ireland. The new UK site has now been running for five months. Google has indexed many pages, which is good, but we can't rank high (position 20-30 for most pages). We thought it was because of content duplication in spite of the different regions, so we tried to optimize the pages on the UK site to make them more UK-related and avoid content duplication. I've also used schema to tell Google it's a UK-based site, set up Google My Business, and got more local citations. If you could give me any suggestions, it'd be perfect. Thank you so much for your time and advice.
Intermediate & Advanced SEO | Insightful_Media
-
DeepCrawl Calls Incomplete Open Graph Tags and Missing Twitter Cards An Issue. How important is this?
Hi, Let me first say that I really like the tool DeepCrawl, so I'm not busting on them. I'm more interested in the relative importance of two items they flag as "Issues": "Incomplete Open Graph Tags" and "No Valid Twitter Cards." They call this out on every page. To define it a bit further, I'm interested in their importance as it relates to organic search. I'm also interested in whether there's some basic functionality we may have missed in our Share42 implementation; to me, it looks like the social sharing buttons work and are quite functional. If it would help, I could private message you an example URL. Thanks! Best... Mike
Intermediate & Advanced SEO | 94501
-
Website Indexing Issues - Search Bots will only crawl Homepage of Website, Help!
Hello Moz World, I am stuck on a problem and wanted to get some insight. When I attempt to use Screaming Spider or SEO PowerSuite, the software only crawls the homepage of my website. I have 17 pages associated with the main domain, i.e. example.com/home, example.com/services, etc. I've done a bit of investigating, and I have found that my client's website does not have a robots.txt file or a sitemap. However, under Google Search Console, all of my client's website pages have been indexed. My questions: Why is my software not crawling all of the pages associated with the website? If I integrate a robots.txt file and sitemap, will that resolve the issue? Thanks ahead of time for all of the great responses. B/R Will H.
Intermediate & Advanced SEO | MarketingChimp10
-
Rel=Alternate on Paginated Pages
I've a question about setting up the rel=alternate and rel=canonical tags between a desktop site and a dedicated mobile site, specifically in regard to paginated pages. On both the desktop and mobile site, all paginated pages have the rel=canonical set to a single URL, as per usual. On the desktop site, though, should the rel=alternate point to the relevant paginated page on the mobile site (i.e. a different rel=alternate on every paginated page) or to a single URL, just as it is vice versa? Cheers chaps.
Intermediate & Advanced SEO | eventurerob
-
WordPress to HubSpot CMS - I had major crawl issues post-launch and now traffic is down 400%
Hi there, good looking person! Our traffic went from 12k visitors in June to 3k visitors in July (www.thedsmgroup.com). When we moved our site from WordPress to the HubSpot COS (their CMS system), I didn't submit a new sitemap to Google Webmaster Tools. I didn't know that I had to... and to be honest, I've never submitted or re-submitted a sitemap to GWT. I have always built clean sites with fresh content and good internal linking and never worried about it. Yoast kind of took care of the rest, as all of my sites and our clients' sites were always on WordPress. Well, lesson learned. I got this message in GWT on June 27th: "http://www.thedsmgroup.com/: Increase in not found errors. Google detected a significant increase in the number of URLs that return a 404 (Page Not Found) error. Investigating these errors and fixing them where appropriate ensures that Google can successfully crawl your site's pages." One month after our site launched, we had 1,000 404s on our website. Ouch. Google thought we had a 1,200-page website with only 200 good pages and 1,000 error pages. Not very trustworthy... We never had a 404 before this, as we added a plugin to WordPress that would 301 any 404 to the homepage, so we never had a broken link on our site. That's not ideal for UX, but as far as Google was concerned, our site was always clean. Obviously I submitted a new sitemap to GWT a few weeks ago, and we are moving in the right direction... but have I taken care of everything I need to? I'm not sure. Our traffic is still around 100 visitors per day, not 400 per day as it was before we launched the new site. Thoughts? I'm not totally freaking out or anything, but a month ago we ranked #1 and #2 for "marketing agency nj", and now we aren't in the top 100. I've never had a problem like this. I added a few screen grabs from Google Webmaster Tools that should be helpful. Bottom line: have I done everything I need to, or do I need to do something with all of these "not found" error details that I have in GWT? None of these "not found" pages have any value and I'm not sure how Google even found them... For example: http://www.thedsmgroup.com/supersize-page-test/screen-shot-2012-11-06-at-2-33-22-pm Help! -Jason
Intermediate & Advanced SEO | Charlene-Wingfield
-
HTTPS Certificate Expired. Website with https URLs still in the index.
Hi guys, This week the security certificate of our website expired, and we basically now have to wait till next Tuesday for it to be reinstated. Our website is indexed with the https URLs, and we had to drop the https from our site so that people would not be faced with a security-risk screen, which most browsers show to ask if you are sure you want to visit the site, because it is seen as untrusted. So now we are basically sitting with only the www URLs... My question: what should we do to prevent Google from penalizing us, since obviously if Googlebot comes to crawl the https URLs, there will be nothing there? I did resubmit the site to Google to crawl, but I guess it's going to take time before Google picks up that we now only want the www URLs in the index. Can somebody please give me some advice on this? Thanks Dave
Intermediate & Advanced SEO | daveza
-
Pagination and links per page issue.
Hi all, I have a listings-based website that just doesn't seem to want to pass rank to the inner pages. See here for an example: http://www.business4sale.co.uk/Buy/Hotels-For-Sale-in-the-UK I know that there are far too many links on this page, and I am working on reducing the number by altering my grid classes to output fewer links. The page also displays a number of links to other page numbers for these results. My script adds the string " - Page2" to the end of the title, description and URL when the user clicks on page two of these results. My questions: Would an excessive number (200+) of links on a page result in less PR being passed to this page (looking spammy)? And would using rel=canonical on page numbers greater than 1 result in better trust/ranking? Thanks in advance.
Intermediate & Advanced SEO | Mulith