Problem crawling a website with an age verification page.
-
Hi everyone,
I need your help urgently. I need to crawl a website that first shows a page where you have to enter your age for verification, and only after that are you redirected to the site itself. My problem is that SEOmoz crawls only that first page, not the whole website. How can I crawl the whole website? Do you need me to upload a link to the website?
Thank you very much,
Catalin
-
Hello Catalin,
Our crawler will not be able to get past an age verification page. You will need to find or unlock a subfolder or subdomain that bypasses it if you would like our crawlers to get through. Luckily, Google's crawlers are a bit more thorough and will be able to index your site properly. We are hoping to add this ability soon, and I hope you can find a way for us to get through in the meantime.
-
The problem is that the pages are not in a subfolder; I have to pass the verification page every time :(. SEOmoz is crawling only the first page.
-
Well, that's a small side note to your problem ;-). Are you able to set up a crawl for just a subfolder, or do you have to pass the verification at all times?
-
OK, thank you for your short answer, but the thing is I didn't understand anything from what you wrote :).
I want to add that I do not own the website. I don't have access to the back end, CMS, etc. The client just wants me to crawl the whole website to see if something is wrong. I can see with my own eyes that the website has duplicate content, but SEOmoz doesn't crawl the website because of that first verification page.
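Since this is a one-off audit, one workaround is to crawl the site yourself: if the age gate is just a form that sets a session cookie, a small script can complete the verification once and reuse that session for every page. Below is a rough sketch of the crawl logic only; the `fetch` callable stands in for an HTTP session that has already passed the gate, and everything site-specific would need to be adapted to the actual website:

```python
from html.parser import HTMLParser
from urllib.parse import urljoin, urlparse


class LinkParser(HTMLParser):
    """Collects href targets from anchor tags."""

    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)


def crawl(start_url, fetch, max_pages=50):
    """Breadth-first crawl of a single host.

    `fetch(url)` must return the HTML for a page. In a real run it would
    wrap an HTTP session that has already submitted the age-verification
    form, so the verification cookie rides along on every request.
    """
    host = urlparse(start_url).netloc
    seen, queue = set(), [start_url]
    while queue and len(seen) < max_pages:
        url = queue.pop(0)
        if url in seen:
            continue
        seen.add(url)
        parser = LinkParser()
        parser.feed(fetch(url))
        for href in parser.links:
            link = urljoin(url, href).split("#")[0]
            # Stay on the same host; skip pages already visited.
            if urlparse(link).netloc == host and link not in seen:
                queue.append(link)
    return seen
```

In practice `fetch` would be backed by something like a `requests.Session` whose first call POSTs the age form; the form URL and field names depend entirely on the site, so they are left out here.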
-
Hi Catalin,
The best way to do this is of course to include a link to the rest of the website (you could remove the link once Roger has come by). But if linking isn't an option, you could also redirect the visitor based on the user agent.
Hope this helps!
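If the site owner is willing to make that change, the user-agent check itself is simple. Here is a sketch of the decision logic only; the bot list and the gate mechanics are assumptions, and note that serving crawlers different content than users can be treated as cloaking by search engines, so this approach should be used cautiously:

```python
# Hypothetical server-side decision: requests from known crawler user
# agents skip the age-verification interstitial; everyone else sees it
# until they carry the verification cookie.
KNOWN_BOTS = ("rogerbot", "googlebot", "bingbot")  # assumed allowlist


def needs_age_gate(user_agent, has_verified_cookie):
    """Return True when this request should be sent to the age page."""
    if has_verified_cookie:
        return False
    ua = (user_agent or "").lower()
    return not any(bot in ua for bot in KNOWN_BOTS)
```

The server would call this on every request and redirect to the verification page only when it returns True, so Roger (and Googlebot) would pass straight through while human visitors still see the gate.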
-
Related Questions
-
I'm facing a direct-traffic issue on my website; how do I solve it?
I'm getting a huge amount of direct traffic on my website, but I'm not able to tell where the traffic is coming from, i.e. which source or website. I want to know the source of the traffic being generated on my website. Can anyone help with how to track this traffic? Can you recommend a tool or some other way to find the source? Due to direct traffic, the stats of my website are here: https://www.screencast.com/t/p3wxkLdJAL0
Moz Pro | | RoseAlvina
-
404 errors in crawl report: all pages are listed with index.html on a WordPress site
Hi Mozers, I have recently submitted a website using Moz, and the crawl has pulled up a second version of every page on the WordPress site as a 404 error with index.html at the end of the URL. For example:
Live page URL: http://www.autostemtechnology.com/applications/civil-blasting/
Report page URL: http://www.autostemtechnology.com/applications/civil-blasting/index.html
The permalink structure is set to /%postname%/. For some reason the report has listed every page with index.html at the end of the page URL. I have tried a number of redirects in the .htaccess file, but they don't seem to work. Any suggestions will be strongly appreciated. Thanks
Moz Pro | | AmanziDigital
-
Clearing our on-page ranking reports?
Is there a way to "bulk delete" on-page ranking reports which are no longer relevant? I know we can delete them one at a time, but the reason I ask is that I've done a fair bit of work changing URLs, so the reports are often for old URLs which no longer exist. (Yes, I made sure to set up 301 redirects to the new ones!) Thanks in advance for any help!
Moz Pro | | koalatm
-
Duplicate page report
We ran a CSV export of our crawl diagnostics related to duplicate URLs after waiting five days with no response on how Rogerbot can be made to filter. My IT lead tells me the label on the spreadsheet says "duplicate URLs", and that is, literally, what the spreadsheet is showing: it treats a database ID number as the only meaningful part of a URL. To replicate, just filter the spreadsheet for any ID number that you see on the page. For example, filtering for 1793 gives us the following result:
http://truthbook.com/faq/dsp_viewFAQ.cfm?faqID=1793
http://truthbook.com/index.cfm?linkID=1793
http://truthbook.com/index.cfm?linkID=1793&pf=true
http://www.truthbook.com/blogs/dsp_viewBlogEntry.cfm?blogentryID=1793
http://www.truthbook.com/index.cfm?linkID=1793
There are a couple of problems with the above:
1. It gives the www result as well as the non-www result.
2. It sees the print version (&pf=true) as a duplicate, but these pages are blocked from Google via the noindex header tag.
3. It treats different sections of the website that share an ID number (faq / blogs / pages) as the same thing.
In short, this particular report tells us nothing at all. I am trying to get a perspective from someone at SEOmoz on whether he is reading the result correctly or there is something he is missing. Please help. Jim
Moz Pro | | jimmyzig
-
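A side note on replicating the issue described in the question above: filtering the spreadsheet for a bare number like 1793 is a substring match, so it lumps faqID, linkID, and blogentryID URLs together even though they point at unrelated sections. Parsing the query string keeps each parameter distinct; a quick sketch using the URLs from the question:

```python
from urllib.parse import parse_qs, urlparse

urls = [
    "http://truthbook.com/faq/dsp_viewFAQ.cfm?faqID=1793",
    "http://truthbook.com/index.cfm?linkID=1793",
    "http://truthbook.com/index.cfm?linkID=1793&pf=true",
    "http://www.truthbook.com/blogs/dsp_viewBlogEntry.cfm?blogentryID=1793",
    "http://www.truthbook.com/index.cfm?linkID=1793",
]


def group_by_param(urls):
    """Bucket URLs by (parameter name, value) rather than by raw substring."""
    groups = {}
    for url in urls:
        query = parse_qs(urlparse(url).query)
        for name, values in query.items():
            for value in values:
                groups.setdefault((name, value), []).append(url)
    return groups


# faqID=1793, linkID=1793, and blogentryID=1793 now land in separate
# buckets instead of being treated as one "duplicate" group.
groups = group_by_param(urls)
```

This does not settle whether the www/non-www pair is a true duplicate (that one is a genuine canonicalization issue), but it stops the faq, blog, and link buckets from bleeding into each other.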
On-Page SEO Fixes - Are They Relative?
So, I'm implementing on-page fixes for a site that my company runs SEO services for (www.ShadeTreePowersports.com). However, I was wondering if there is a way to rate a page's SEO quality in general? As of now, it seems the recommendations can only be consumed and acted on keyword by keyword, and that seems to be the reason for a good number of my F grades. Since my website sells powersports apparel and accessories, we cover a variety of applicable (but different) keywords like 'motorcycle parts' or 'snow tubes', because we sell so many different types of products. But when I look at my F grades, SEOmoz is telling me my homepage ranks poorly for a multitude of those pertinent keywords, only because my page isn't catered specifically to each of them (e.g. 'snowmobile parts', 'water sport apparel'). With so many different types of products, catering to a specific one is impossible and would be detrimental. Is there a way to see how a page ranks without factoring in those keywords, or a better way to use these recommendations more efficiently? Thanks guys!
Moz Pro | | BrandLabs
-
Crawl Errors from URL Parameter
Hello, I am having this issue within SEOmoz's Crawl Diagnosis report. There are a lot of crawl errors happening with pages associated with /login. I will see site.com/login?r=http://... and several duplicate content issues associated with those URLs. Seeing this, I checked WMT to see if the Google crawler was showing this error as well. It wasn't. So what I ended up doing was going to the robots.txt and disallowing rogerbot. It looks like this:
User-agent: rogerbot
Disallow: /login
However, SEOmoz has crawled again and is still picking up on those URLs. Any ideas on how to fix this? Thanks!
Moz Pro | | WrightIMC
-
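For questions like the one above, it helps to first verify that the robots.txt rule really matches the offending URLs before assuming the crawler is ignoring it. Python's standard-library parser can check a rule set like the one in the question (a sanity-check sketch only; rogerbot's own matching behavior and crawl cadence may differ, and a recently added rule may simply not take effect until the next scheduled crawl):

```python
from urllib.robotparser import RobotFileParser

# Rules as described in the question: block rogerbot from /login.
ROBOTS_TXT = """\
User-agent: rogerbot
Disallow: /login
"""

parser = RobotFileParser()
parser.parse(ROBOTS_TXT.splitlines())

# /login URLs, including ones carrying the ?r= redirect parameter,
# match the Disallow rule for rogerbot but not for other crawlers.
rogerbot_blocked = not parser.can_fetch(
    "rogerbot", "http://site.com/login?r=http://site.com/home"
)
others_allowed = parser.can_fetch(
    "googlebot", "http://site.com/login?r=http://site.com/home"
)
```

If the rule checks out locally (as it does here), the remaining suspects are crawler-side: a cached copy of robots.txt or a crawl that started before the rule was added.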
Crawl Diagnostics - unexpected results
I received my first Crawl Diagnostics report last night for my dynamic ecommerce site. It showed errors on generated URLs which simply are not produced anywhere when running on my live site, only when running on my local development server. It appears that the crawler doesn't think it's running on the live site. For example, http://www.nordichouse.co.uk/candlestick-centrepiece-p-1140.html goes to a Product Not Found page, and therefore duplicate content errors are produced, while http://www.nhlocal.co.uk/candlestick-centrepiece-p-1140.html produces the correct product page rather than a Product Not Found page. Any thoughts?
Moz Pro | | nordichouse
-
Set crawl frequency
The current crawl frequency is weekly; is it possible for us to set this frequency ourselves?
Moz Pro | | bhanu2217