Will putting a one page site up for all other countries stop Googlebot from crawling my UK website?
-
I have a client that only wants UK users to be able to purchase from the UK site.
Currently, there are customers from the US and other countries purchasing from the UK site.
They want to show a single webpage to users outside the UK who try to access the UK site.
This is fine, but what impact would this have on Googlebot trying to crawl the UK website?
I have scoured the web for an answer but can't find one. Any help will be greatly appreciated. Thanks
-
Thank you kindly
-
If you're an SEO / marketing person, the likelihood is that you'll need a developer's help. They should know what you mean when you ask them to exempt a certain user agent (Googlebot) from the redirects.
-
Is there some guidance on how to do this? I have looked on the web but am struggling to find anything. Thanks
-
No impact, as long as you exempt Google's user agent ('Googlebot') from the redirects. That way Google can crawl properly without it turning into a fiasco, which it will if you don't exempt them: Googlebot crawls from different data centres (though only one at a time), and each data centre has a fixed location.
If you fail to exempt Googlebot, it will only see the page that is rendered for the location of the data centre it happens to be crawling from, so it will think your site keeps going up and down: one page, then many pages. That will cause a huge mess.
Remember: exempt Googlebot from the redirects or things will get messy.
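To make the exemption concrete, here is a minimal sketch of the server-side check a developer might write. The function name and the source of the country code (e.g. some GeoIP lookup earlier in the request pipeline) are hypothetical, not from any particular framework; note too that user-agent strings can be spoofed, and Google's own guidance is to verify Googlebot via a reverse DNS lookup if that is a concern.

```python
def should_geo_redirect(user_agent: str, country_code: str) -> bool:
    """Decide whether a visitor gets sent to the one-page 'outside the UK' site.

    country_code is assumed to come from a GeoIP lookup elsewhere in the
    stack (hypothetical), as an ISO 3166-1 alpha-2 code such as 'GB' or 'US'.
    Googlebot is exempted so crawling of the full UK site is unaffected.
    """
    if "googlebot" in user_agent.lower():
        return False  # let Googlebot through to the real site, wherever it crawls from
    return country_code != "GB"  # redirect everyone outside the UK
```

With this in place a US shopper gets the single page, while Googlebot crawling from a US data centre still sees the full UK site.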
Related Questions
-
Utilizing one robots.txt for two sites
I have two sites that are hosted in the same CMS. Rather than having two separate robots.txt files (one for each domain), my web agency has created a single file that lists the sitemaps for both sites, like this:
Technical SEO | eulabrant
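For illustration, a shared file of that shape might look like the following (the sitemap URLs are hypothetical). Sitemap directives in robots.txt may use absolute, cross-host URLs, but robots.txt itself is per-host, so each domain still has to serve the file from its own root:

```
User-agent: *
Disallow:

Sitemap: https://www.first-site.example/sitemap.xml
Sitemap: https://www.second-site.example/sitemap.xml
```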
My Website stopped being in the Google Index
Hi there, So my website is two weeks old, and I published it and it was ranking at about page 10 or 11 for a week, maybe a bit longer. The last few days it dropped off the rankings, which I assumed was the Google algorithm doing its thing, but when I checked Google Search Console it says my domain is not in the index: 'This page is not in the index, but not because of an error. See the details below to learn why it wasn't indexed.' I click request indexing, and after a bit it goes green, saying it was successfully indexed. Then when I refresh, it gives me the same message: 'This page is not in the index, but not because of an error. See the details below to learn why it wasn't indexed.' Not sure why it says this; any ideas or help is appreciated, cheers.
Technical SEO | sydneygardening
-
"Moz encountered an error on one or more pages on your site" Error
I have been receiving this error for a while: "Moz encountered an error on one or more pages on your site". It's a multilingual WordPress website, the robots.txt is set to allow crawlers on all links, and I have followed the same process as for other websites I've done, yet I'm receiving this error for this site.
Technical SEO | JustinZimri
-
Increase in pages crawled per day
What does it mean when GWT abruptly jumps from 15k to 30k pages crawled per day? I am used to seeing spikes: a 10k average, then a couple of times per month 50k pages crawled. But in this case, 10 days ago it moved from 15k to 30k per day and it's staying there. I know it's a good sign; the crawler is crawling more pages per day, so it's picking up changes more often. But I have no idea why it's doing it. What good signals usually drive the Google crawler to increase the number of pages crawled per day? Anyone know?
Technical SEO | max.favilli
-
Some pages on my site are not linked - should I add a Visual SiteMap?
Hello, I have a site that does not have a blog feed, and unless it is done manually there is no way to see the blog links. www.MigrationLawyers.co.za
Now, I submit the sitemap to Google, but would it be a good idea to also include an actual sitemap page on the site itself (for example linked in the footer): http://migrationlawyers.co.za/sitemap-immigration-south-africa. And should I make the "sitemap" link follow or nofollow? Thanks so much in advance
Nikita
Technical SEO | NikitaG
-
Discontinuing a site & Redirecting Traffic to an Internal Page
We are wondering the best way to redirect the traffic from a site that will no longer exist.
The Scenario:
Our client wants to discontinue this website http://www.animalcarepackaging.com/. We'd like to redirect the traffic from this site to an internal page on our client's other website: http://www.glenroy.com/packaging/. This internal page is the closest match to the content that appears on animalcarepackaging.com (as opposed to just the entire site glenroy.com).
Possible Options We Are Considering:
Option 1: Keep hosting animalcarepackaging.com and add a 301 redirect for all pages to glenroy.com/packaging/. Our concern with this option is that Google/Bing will see animalcarepackaging.com as a gateway, which could hurt glenroy.com.
Option 2: Keep hosting animalcarepackaging.com and add a 301 redirect so all pages are sent to glenroy.com/packaging/, AND file a change of address with Google and Bing. We believe this will allow people who have bookmarked animalcarepackaging.com to go to glenroy.com/packaging/, while people searching for animalcarepackaging.com will go to glenroy.com's home page. We would augment this by posting a message on the homepage of animalcarepackaging.com notifying users that the site will be discontinued and that info will be found at glenroy.com/packaging.
Option 3: Do a change of address with Google/Bing and send all traffic to glenroy.com (rather than an internal page). Post information on the homepage of animalcarepackaging.com that the site will be discontinued on X-date, and that info about animalcarepackaging.com will be found at glenroy.com/packaging.
Looking for feedback on our options and suggestions on how this can be handled.
Technical SEO | TopFloor
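For Options 1 and 2, the catch-all 301 on animalcarepackaging.com could be sketched like this, assuming Apache with mod_rewrite (a hypothetical fragment; adjust for the actual hosting setup):

```apache
# .htaccess at the root of animalcarepackaging.com (assumes Apache + mod_rewrite)
RewriteEngine On
# Permanently redirect every request to the packaging page on glenroy.com
RewriteRule ^ http://www.glenroy.com/packaging/ [R=301,L]
```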
Duplicate content on the one domain (related to country targetting)
Hi - We have a client with a single TLD that they wish to use to target several markets: .com/au, .com/us, .com/sg. Each will use duplicate content with slight variations to cater for the local market (spelling, industry jargon). They seem reluctant to register a separate TLD for each target market (which was our suggestion), and I am wondering what SEO penalties would apply for having a majority of duplicate content on the same domain; perhaps using subdomains would be better? Thanks!
Technical SEO | E2E
Is robots.txt a must-have for 150 page well-structured site?
By looking in my logs I see dozens of 404 errors each day from different bots trying to load robots.txt. I have a small site (150 pages) with clean navigation that allows the bots to index the whole site (which they are doing). There are no secret areas I don't want the bots to find (the secret areas are behind a Login so the bots won't see them). I have used rel=nofollow for internal links that point to my Login page. Is there any reason to include a generic robots.txt file that contains "user-agent: *"? I have a minor reason: to stop getting 404 errors and clean up my error logs so I can find other issues that may exist. But I'm wondering if not having a robots.txt file is the same as some default blank file (or 1-line file giving all bots all access)?
Technical SEO | scanlin
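For reference, the "default allow everything" file that question alludes to is only two lines. An empty Disallow blocks nothing, so the practical difference from having no robots.txt at all is simply that crawlers get a 200 response instead of filling the error logs with 404s:

```
User-agent: *
Disallow:
```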