Crawling issue
-
Hi,
I have to set up a campaign for a webshop. This webshop is a subdomain itself.
First question: The two subfolders I need to track are /nl_BE and /fr_BE. What is the best way to handle this? Shall I set up two different campaigns for each subfolder, or shall I just make one campaign and add tags to keywords?
**Second question:** it seems like Moz can't crawl enough pages. There are no disallows in the robots.txt. Should I try putting the following at the top of my robots.txt?
User-agent: rogerbot
Disallow:
Or is it because I want to crawl only a subdomain that it doesn't work?
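As a quick sanity check before editing the live file, Python's standard `urllib.robotparser` can confirm that an explicit empty `Disallow` does allow rogerbot. A minimal sketch; the shop URLs are hypothetical:

```python
from urllib.robotparser import RobotFileParser

# Hypothetical robots.txt body: an empty Disallow value permits everything.
rules = """User-agent: rogerbot
Disallow:
"""

parser = RobotFileParser()
parser.parse(rules.splitlines())

# With an empty Disallow, rogerbot may fetch any URL on the host.
print(parser.can_fetch("rogerbot", "https://shop.example.com/nl_BE/"))  # True
print(parser.can_fetch("rogerbot", "https://shop.example.com/fr_BE/"))  # True
```

If a check like this returns True against your real robots.txt as well, the limited crawl is more likely down to the campaign/subdomain configuration than to a robots rule.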
Thanks
-
Hey,
Thanks for reaching out to us!
You can create a Campaign solely for a subdomain or subfolder by selecting the +Advanced setting in the Campaign set-up; just click the check box there and it will limit our Campaign audit to the pages on that specific subdomain or subfolder. From there, you can see in your Campaign Settings whether you've set it up for just that chunk of your site or for the entire root domain.
I've got a guide to this process that I think may help.
With regard to your second question, feel free to reach out to [email protected] so that we can take a closer look.
Looking forward to hearing from you,
Eli
Related Questions
-
Do I have to manually mark Metadata Issues as 'fixed' or 'Ignore' when completed?
When I have amended the missing-description metadata issue on an individual page, do I have to manually mark it as 'fixed' or 'Ignore'? I have attended to several pages with Missing Description metadata issues and not manually marked them as fixed; however, they still appear as missing-description issues after a second on-demand crawl. Will they continue to appear until I manually mark them as fixed?
Getting Started | LM_Marketing_Solutions_Ltd | 0 -
Moz site crawl doesn't work
The Moz Site Crawl isn't working for my campaign, but an on-demand crawl of the same site works. The crawl should not be disallowed by robots.txt or by the headers. I'd like to be able to track the website for the campaign so I can see SEO gains/losses and increases/decreases in indexing.
Getting Started | DrainKing | 0 -
Attempts to fix MOZ recommended issues resulted in drastic ranking drop.
I have a website built in Squarespace. https://www.ruffhaus.com/ I recently started working with Moz to track and improve organic SEO. After my initial site crawl, search visibility was reported at a whopping 2.38%. Moz showed several critical crawler issues. Most were redirect, 4xx, and long-URL issues, so I started working on fixing the redirect and 4xx issues first. I thought this was a good thing. But now my already sad search visibility has dropped to 0.07% (-95.97%!). I also went from #1 on the keyword "brand implementation plan" (and 3 variations) to #27. What? Wondering where I went wrong and how to remedy it? This is all new to me, so I am sure I'm not providing all the info you need to answer my question. Hoping providing the site URL will help. Fire away!
Getting Started | RuffHaus | 0 -
Standard Syntax in robots.txt doesn't prevent Moz bot from crawling
A client is getting many false-positive site crawl errors for things like duplicate titles and duplicate content on pages that include /tag/ in the URL. An example is https://needquest.com/place_tag/autism-spectrum-disorder/page/4/. To resolve this we have set up a disallow statement in the robots.txt file that says:
Disallow: /page/
For some reason this appears not to work, as the site crawl errors continue to list pages like this. Does anyone understand why that would be, and what we need to do to properly disallow crawling of these pages?
Getting Started | btreloar | 0 -
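The likely reason the rule misses these URLs is that robots.txt `Disallow` values are prefix matches against the start of the URL path, so `Disallow: /page/` only blocks paths that begin with /page/. A minimal sketch with Python's standard `urllib.robotparser` (the user-agent string here is just an example):

```python
from urllib.robotparser import RobotFileParser

# robots.txt rules are matched as prefixes of the URL path, so
# "Disallow: /page/" only covers paths that *start with* /page/.
rules = """User-agent: *
Disallow: /page/
"""

parser = RobotFileParser()
parser.parse(rules.splitlines())

# Blocked: the path begins with /page/.
print(parser.can_fetch("rogerbot", "https://needquest.com/page/4/"))  # False

# Not blocked: /page/ appears mid-path, so the prefix rule never matches.
print(parser.can_fetch(
    "rogerbot",
    "https://needquest.com/place_tag/autism-spectrum-disorder/page/4/",
))  # True
```

Some crawlers honor wildcard patterns like `Disallow: /*/page/`, which would cover the mid-path case; whether rogerbot supports that syntax is worth confirming with Moz's help team.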
Crawl test
Can anyone give me an idea of how to use the Moz Crawl Test results? I'm a little confused about how to read them. I have a lot of "No's" ... I think this is good?
Getting Started | sdwellers | 0 -
Moz Starter Crawl not happening
Hi, I added a new site 48+ hours ago and the starter crawl has not even begun collecting data. Any help would be appreciated. Cheers, Isaac
Getting Started | sodafizz | 0 -
Can MOZ crawl our website twice in a week?
I want to generate the Moz crawl errors report twice a week. Is that possible?
Getting Started | chandman | 0 -
What are the solutions for Crawl Diagnostics?
Hi Mozers, I am pretty new to SEO and wanted to know what the solutions are for the various errors reported in Crawl Diagnostics; if this question has already been asked, please guide me in the right direction. The following are queries specific to my site (I just need help with these 2): 1. Error 404 (about 60 errors): these are all PA 1 links and are no longer on the server; what do I do with these? 2. Duplicate Page Content and Title (about 5,000): most of these are automatic URLs that are generated when someone fills in any info on our website. What do I do with these URLs? They are, for example, www.abc.fr/signup.php?id=001 and then www.abc.fr/signup.php?id=002 and so on. What do I need to do, and how? Any help would be highly appreciated. I have read a lot on the forums about duplicate content but don't know how to implement this in my case; please advise. Thanks in advance. CY
Getting Started | Abhi81870
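For the duplicate signup.php?id=NNN pages above, the usual remedy is to point every parameter variant at a single canonical URL (for example via a rel=canonical tag) rather than deleting anything. A minimal sketch of the normalization idea, using a hypothetical `canonical_url` helper built on Python's standard library:

```python
from urllib.parse import urlsplit, urlunsplit

def canonical_url(url: str) -> str:
    """Drop the query string so every signup.php?id=NNN variant
    maps to the same canonical page."""
    scheme, netloc, path, _query, _fragment = urlsplit(url)
    return urlunsplit((scheme, netloc, path, "", ""))

# Both auto-generated variants collapse to one canonical address.
print(canonical_url("http://www.abc.fr/signup.php?id=001"))
print(canonical_url("http://www.abc.fr/signup.php?id=002"))
```

The resulting canonical address would then go into a `<link rel="canonical" href="...">` tag on each variant page, so crawlers consolidate the duplicates instead of reporting them as duplicate content.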