How to test a geo-targeted homepage?
-
The e-commerce system we use has a geo-targeted homepage feature, so you can set up different homepages based on the country of the user's IP. I want to test which homepage is served as the default when the system cannot determine the user's IP. Does anyone know of a way to do this?
Also, does Googlebot crawl without a usable IP for geolocation, or is it always an American IP (even if your site is set to a different country)?
Thanks
-
Unfortunately, it seems the e-commerce platform (which is closed source) is built around this geo-IP location system.
When I checked the cached version of the site, it was indeed the American version, which was just the default, stripped-down homepage (no keyword text).
I then Googled strings of text from our UK/Irish homepages, and found no results.
So I created an American version of the homepage (just a duplicate of the UK/Irish homepages).
A week later I repeated my search test and got a hit.
Now we are starting to rank for a few more keywords on the homepage.
-
I do not recommend redirecting people to different content based on IP. Googlebot may change IP addresses, but it always crawls from the US, which makes it impossible for Googlebot to see any of your international content. You can use the IP address to ask users whether they want to switch their settings to a different country and remember that choice on future visits, but do not assume.
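A minimal sketch of that approach (hypothetical names, not any platform's actual API): serve everyone the default page, use the geo-IP result only to offer a country switch, and honour a saved preference on later visits.

```python
def choose_version(detected_country, saved_preference, versions,
                   default="US"):
    """Decide which site version to serve and whether to show a
    'switch country?' banner. Never silently redirects on IP alone.

    Returns (version_to_serve, country_to_suggest_or_None).
    """
    if saved_preference in versions:
        # The user already chose a country: honour it, no banner.
        return saved_preference, None
    if detected_country in versions and detected_country != default:
        # Serve the default, but offer the detected country as a choice.
        return default, detected_country
    # No usable IP (e.g. Googlebot, unknown ranges): plain default.
    return default, None
```

This way crawlers and users with undetectable IPs all see the same canonical default page.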
-
There are many proxy services out there for geo testing. Basically, they run a server in the target country; you funnel your requests through that server using their settings, and to the site it's just as if you were browsing from that country. I know there's WonderProxy, and I'm sure you could find others.
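As a sketch with Python's standard library (the proxy address below is a placeholder; a real service like WonderProxy gives you per-country endpoints and credentials):

```python
import urllib.request

def make_proxy_opener(proxy_url):
    # Route all requests through the given proxy, so the target site
    # sees the proxy's (in-country) IP instead of yours.
    handler = urllib.request.ProxyHandler({"http": proxy_url,
                                           "https": proxy_url})
    return urllib.request.build_opener(handler)

def fetch_homepage(url, proxy_url):
    # Fetch the page exactly as a visitor in the proxy's country would.
    opener = make_proxy_opener(proxy_url)
    with opener.open(url, timeout=15) as resp:
        return resp.read()

# e.g. fetch_homepage("http://example.com/",
#                     "http://uk.proxy.example:8080")
```

Fetching the same URL through proxies in different countries lets you diff the homepage variants directly.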
-
I'm not sure about a 'testing tool', but Google says that Googlebot's IP address changes from time to time. That same documentation says you can look in your website's logs and verify Googlebot with a reverse DNS lookup.
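Google's documented check works like this: reverse-DNS the IP from your logs, confirm the host is under googlebot.com or google.com, then forward-resolve that host and confirm it points back to the same IP. A sketch in Python:

```python
import socket

GOOGLE_SUFFIXES = (".googlebot.com", ".google.com")

def has_google_suffix(hostname):
    # The suffix check alone is spoofable; the forward lookup in
    # is_googlebot() closes that hole.
    return hostname.endswith(GOOGLE_SUFFIXES)

def is_googlebot(ip):
    try:
        hostname, _, _ = socket.gethostbyaddr(ip)  # reverse DNS
    except socket.herror:
        return False
    if not has_google_suffix(hostname):
        return False
    try:
        # Forward-confirm: the claimed hostname must resolve back to ip.
        return ip in socket.gethostbyname_ex(hostname)[2]
    except socket.gaierror:
        return False

# is_googlebot("66.249.66.1")  # requires a live network lookup
```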
In terms of a 'default' page: in my opinion you should ensure you have a standard (unmodified, non-redirected) home page. So if your primary market is the USA, that's the default, and you only redirect non-US visitors. That way you determine the default home page and aren't reliant on Googlebot's IP (or any other crawler's, for that matter).
Hope this helps.
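That setup can be sketched as a simple routing rule (the country codes and paths are illustrative, not specific to any platform): the US page is the canonical default, and only confidently identified non-US visitors are routed elsewhere.

```python
# Illustrative mapping of detected countries to localized homepages.
COUNTRY_HOMEPAGES = {"GB": "/uk/", "IE": "/ie/"}

def homepage_path(detected_country):
    # Unknown IPs, US visitors, and crawlers like Googlebot all land on
    # the default (US) page; only mapped countries are routed away.
    return COUNTRY_HOMEPAGES.get(detected_country, "/")
```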