Photogallery and Robots.txt
-
Hey everyone
SEOmoz is telling us that there are too many on-page links on the following page:
http://www.surfcampinportugal.com/photos-of-the-camp/
Should we stop it from being indexed via Robots.txt?
best regards and thanks in advance...
Simon
-
Hey Ryan
Thanks a lot for your help and suggestions. I will try to get more links from rapturecamps.com to this domain. Also, your idea about the link offer is not bad; don't know why I didn't come up with that one.
Thanks again anyway...
-
Hi Joshua. Since the domain is so new, the tool is basically telling you that you don't have much "link juice" to go around, so you're easily going to have more links on the page than Google will consider important. This is natural, and as your new domain gains links from around the web you'll be fine. I noticed that www.rapturecamps.com is well established, so sending a few more relevant links directly from there will help with the situation.
Also, this is a clever offer that you could post to surfcampinportugal.com as well:
Add a Link and Get Discount
Got your own website, blog, forum?
If you add a link to Rapture Camps website you will receive a discount for your next booking.
Please contact us for further information.
-
Hey Aran
Thanks for your fast reply, and nice to hear you like the design.
best regards
-
Personally, I wouldn't stop it being indexed. It's not like you're being spammy with the on-page links.
P.s. awesome website, really love the photography on the background images.
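For anyone who does take the blocking route on a page like this, it's worth noting that a robots.txt Disallow only stops crawling; a blocked URL can still end up indexed if other sites link to it. A minimal sketch, using the path from the question above:

```
# robots.txt at the domain root - stops compliant crawlers fetching the page,
# but does not guarantee the URL stays out of the index
User-agent: *
Disallow: /photos-of-the-camp/
```

To keep a page out of the index entirely, the usual advice is the opposite: leave it crawlable and serve a meta robots "noindex" tag instead.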
Related Questions
-
Can I robots.txt an entire site to get rid of Duplicate content?
I am in the process of implementing Zendesk and will have two separate Zendesk sites with the same content to serve two separate user groups for the same product (B2B and B2C). Zendesk does not allow me the option to change canonicals (or meta tags). If I robots.txt one of the Zendesk sites, will that cover me for duplicate content with Google? Is that a good option, or is there a better one? I will also have to change some of the canonicals on my site (mysite.com) to use the Zendesk canonicals (zendesk.mysite.com) to avoid duplicate content. Will I lose ranking by changing the established page canonicals on my site to point to the new subdomain (the only option offered through Zendesk)? Thank you.
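For reference, blocking one of the Zendesk subdomains entirely would be a one-rule robots.txt served only on that subdomain. A sketch; note that Disallow stops crawling, so Google never reads the duplicate content, but blocked URLs can still be indexed bare if linked externally:

```
# robots.txt served on the duplicate subdomain only (e.g. the B2C Zendesk site)
User-agent: *
Disallow: /
```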
On-Page Optimization | RoxBrock
-
Robots.txt file issue on WordPress site.
I'm facing an issue with the robots.txt file on my blog. Two weeks ago I did some development work on the blog and added a few pages to the robots file. Now my complete site seems to be blocked. I have checked and updated the file but am still having the issue. The search results show "A description for this result is not available because of this site's robots.txt – learn more." Any suggestions on how to overcome this issue?
On-Page Optimization | Mustansar
-
Description tag not showing in the SERPs because page is blocked by Robots, but the page isn't blocked. Any help?
While checking some SERP results for a few pages of a site this morning, I noticed that some pages were returning this message instead of a description tag: "A description for this result is not available because of this site's robots.txt." The odd thing is the page isn't blocked in the robots.txt. The page is using the Yoast SEO plugin to populate meta data, though. Anyone else had this happen and have a fix?
On-Page Optimization | mac2233
-
How to exclude URL filter searches in robots.txt
When I look through my Moz reports I can see it has included 'pages' which it shouldn't have, i.e. URLs with filtering rules added, such as this one: http://www.mydomain.com/brands?color=364&manufacturer=505 How can I exclude all of these filters in the robots.txt? I think it'll be:
Disallow: /*?color=$
Is that the correct syntax with the $ sign in it? Thanks!
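A quick way to sanity-check that syntax: in Google's robots.txt matching, '*' matches any run of characters and a trailing '$' anchors the end of the URL, so '/*?color=$' would only match URLs that literally end in '?color=', while dropping the '$' matches any URL containing the parameter. A rough Python sketch of that matching logic (the helper name is made up for illustration):

```python
import re

def robots_pattern_matches(pattern: str, path: str) -> bool:
    """Rough sketch of Google-style robots.txt path matching:
    '*' is a wildcard, a trailing '$' anchors the end of the URL."""
    anchored = pattern.endswith("$")
    if anchored:
        pattern = pattern[:-1]
    # Build a regex: '*' -> '.*', everything else matched literally.
    regex = "^" + "".join(".*" if ch == "*" else re.escape(ch) for ch in pattern)
    if anchored:
        regex += "$"
    return re.match(regex, path) is not None

# With the trailing '$', only URLs that literally END in '?color=' match:
print(robots_pattern_matches("/*?color=$", "/brands?color=364&manufacturer=505"))  # False
# Without it, any URL containing '?color=' is matched (and so disallowed):
print(robots_pattern_matches("/*?color=", "/brands?color=364&manufacturer=505"))   # True
```

So for blocking every filtered URL, 'Disallow: /*?color=' (no '$') is the pattern that does what the question intends.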
On-Page Optimization | neenor
-
Site Maps / Robots.txt etc
Hi everyone, I have set up a site map using a WordPress plugin: http://lockcity.co.uk/site-map/ Can you please tell me if this is sufficient for the search engines? I am trying to understand the difference between this and having a robots.txt - or do I need both? Many thanks, Abi
On-Page Optimization | LockCity
-
How do we handle sitemaps in robots.txt when multiple domains point to same physical location?
We have www.mysite.net, www.mysite.se, www.mysite.fi and so on. All of these domains point to the same physical location on our web server, and we replace the text returned to the client depending on which domain was requested. My problem is this: how do I configure sitemaps in robots.txt when the same robots.txt is used by multiple domains? If I for instance put the rows
Sitemap: http://www.mysite.net/sitemapNet.xml
Sitemap: http://www.mysite.net/sitemapSe.xml
in robots.txt, would that result in some cross-submission error?
On-Page Optimization | nordicnetproducts
-
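A footnote on the sitemap cross-submission question above: the sitemaps.org protocol explicitly allows a sitemap location declared in robots.txt to live on a different host, and declaring it in robots.txt is itself the accepted cross-submission mechanism. So a single shared robots.txt could, in principle, list every localized sitemap (the third file name is extrapolated from the pattern in the question):

```
# One robots.txt served for every domain alias
Sitemap: http://www.mysite.net/sitemapNet.xml
Sitemap: http://www.mysite.net/sitemapSe.xml
Sitemap: http://www.mysite.net/sitemapFi.xml
```

Multiple Sitemap lines are valid in robots.txt, so listing several should not itself cause an error.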
Does Google respect User-agent rules in robots.txt?
We want to use an inline linking tool (LinkSmart) to cross link between a few key content types on our online news site. LinkSmart uses a bot to establish the linking. The issue: There are millions of pages on our site that we don't want LinkSmart to spider and process for cross linking. LinkSmart suggested setting a noindex tag on the pages we don't want them to process, and that we target the rule to their specific user agent. I have concerns. We don't want to inadvertently block search engine access to those millions of pages. I've seen googlebot ignore nofollow rules set at the page level. Does it ever arbitrarily obey rules that it's been directed to ignore? Can you quantify the level of risk in setting user-agent-specific nofollow tags on pages we want search engines to crawl, but that we want LinkSmart to ignore?
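On the robots.txt side of per-crawler targeting (a different mechanism than the meta tags in the question, but the same principle): a well-behaved crawler picks the single most specific User-agent group that matches its token and ignores every other group, so rules aimed at one bot do not restrict Googlebot. A sketch with a made-up token (check LinkSmart's documentation for their real user-agent string):

```
# Hypothetical token and path, for illustration only
User-agent: LinkSmartBot
Disallow: /archive/

# All other crawlers, including Googlebot, stay unrestricted
User-agent: *
Disallow:
```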
On-Page Optimization | lzhao