Is there a way to take notes on a crawled URL?
-
I'm trying to figure out the best way to keep track of the different things I've done to work on a page (for example, adding a longer description, changing h2 wording, or adding a canonical URL). Is there a way to take notes for crawled URLs? If not, what do you use to accomplish this?
-
Hey! Dave here from the Help Team.
There are a couple of different things you can do to mark items you have done. One of the new features we have implemented in Site Crawl is the ability to mark items as "Fixed". This can be handy if you know you have fixed issues on your site but are still waiting for your next crawl update. Another trick is to download your "all crawled pages" CSV and add a "Notes" column. It won't live in the Moz dashboard, but at least you would have a good record! Hopefully those options help you out!
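The CSV trick above is easy to script so your notes survive each fresh export. Here is a minimal sketch using only Python's standard library; the file names and the `URL` column header are assumptions, so match them to your actual Moz export.

```python
import csv

def add_notes_column(src, dst, notes=None):
    """Copy a crawl export CSV to `dst`, appending a 'Notes' column.

    `notes` maps a URL to a free-text note (e.g. "added canonical");
    URLs without an entry get an empty cell. Assumes the export has
    a header row containing a 'URL' column.
    """
    notes = notes or {}
    with open(src, newline="", encoding="utf-8") as fin, \
         open(dst, "w", newline="", encoding="utf-8") as fout:
        reader = csv.DictReader(fin)
        fieldnames = list(reader.fieldnames) + ["Notes"]
        writer = csv.DictWriter(fout, fieldnames=fieldnames)
        writer.writeheader()
        for row in reader:
            # Look up this page's note by URL; blank if none recorded.
            row["Notes"] = notes.get(row.get("URL", ""), "")
            writer.writerow(row)
```

Keeping the notes in a small dict (or a separate CSV keyed by URL) means you can re-apply them to every new export instead of re-typing them.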
Related Questions
-
Server blocking crawl bot due to DoS protection, and Moz Help Team not responding
First of all, has anyone else not received a response from the Help Team? I've sent four emails; the oldest is a month old. One of our most-used features, the Moz On-Demand Crawl for finding broken links, doesn't work, and it's really frustrating not to get a response when we're paying so much a month for a feature that doesn't work. OK, rant over, now on to the actual issue: our crawls are just getting 429 errors because our server has DoS protection and is blocking Moz's robot. I'm sure it will be as easy as whitelisting the robot's IP, but I can't get a response from Moz with the IP. Cheers, Fergus
Feature Requests | JamesDavison
Errors for URLs being too long and archive data showing as duplicate
I have hundreds of errors for the same three things. 1) The URL is too long. Currently the categories are domain.com/product-category/name main product/accessory product/. Do I somehow eliminate the product category? Not sure how to fix this. 2) It has all of my category pages listed as archives showing duplicates. I don't know why, as they are not blog posts; they hold products. I don't have an archived version of this. How do I fix this? 3) It is saying my page speed is slow. I am very careful to optimize all my photos in Photoshop, plus I have a tool on the site to further compress them, and I just moved to a new host that is supposed to be faster. Any ideas? I would so appreciate your help and guidance. All my best to everyone, be safe and healthy.
Feature Requests | PacificErgo
Moz Site Crawl - Ignore functionality question
Quick question about the Ignore feature in the Moz Site Crawl. We've made some changes to pages containing errors found by the Site Crawl. These changes should have resolved the issues, but we're not sure about the "Ignore" feature and don't want to use it without first understanding what happens when we do. Will it clear the item from the current list until the next Site Crawl takes place, and if Roger finds the issue again, relist the error? Or will it clear the item from the list permanently, regardless of whether it has been properly corrected?
Feature Requests | StickyLife
Is there any way to filter by relevancy first and then volume second? Right now I export the results of Keyword Explorer and do it offline; it would be great if I could do it online.
I'm trying to sort the results of a keyword search in Keyword Explorer by relevancy first and volume second, but the minute I select volume, the relevancy ordering is completely lost. I know I can export the results and manipulate them in Excel, but is there a feature in Moz that allows me to do this?
Feature Requests | Anerudh
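Until such a two-level sort exists in the tool itself, the offline step the asker describes can be automated. This is a minimal sketch of a two-key sort on the exported rows; the column names `Relevancy` and `Monthly Volume` are assumptions, so swap in whatever headers your Keyword Explorer export actually uses.

```python
def sort_keywords(rows, relevancy_key="Relevancy", volume_key="Monthly Volume"):
    """Sort keyword rows by relevancy (descending) first,
    breaking ties by volume (descending).

    `rows` is a list of dicts, e.g. from csv.DictReader over the
    Keyword Explorer export. Values are parsed as floats so string
    columns from CSV sort numerically, not lexicographically.
    """
    return sorted(
        rows,
        key=lambda r: (float(r[relevancy_key]), float(r[volume_key])),
        reverse=True,
    )
```

Because Python's `sorted` compares tuples element by element, volume only matters when two rows have equal relevancy, which is exactly the "relevancy first, volume second" ordering requested.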
Crawl error: 804 HTTPS (SSL) error
Hi, I have a crawl error in my report: "804: HTTPS (SSL) error encountered when requesting page." I checked all pages and the database to fix wrong http URLs to https, but this one persists. Do you have any ideas how I can fix it? Thanks. Website: https://ilovemypopotin.fr/
Feature Requests | Sitiodev
Crawl Test limitation - ways to handle large sites?
Hello, I have a large site (120,000+ pages) and Crawl Test is limited to 3,000 pages. Is there a way to crawl a site of this size? Can I use a regular expression, for example? Thanks!
Feature Requests | CamiRojasE
Crawl diagnostic errors due to query string
I'm seeing a large number of duplicate page titles, duplicate content, missing meta descriptions, etc. in my Crawl Diagnostics report due to URLs' query strings. These pages already have canonical tags, but I know canonical tags aren't considered in Moz's crawl diagnostic reports and therefore won't reduce the number of reported errors. Is there any way to configure Moz to not treat query-string variants as unique URLs? It's difficult to find a legitimate error among hundreds of these non-errors.
Feature Requests | jmorehouse
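If the report itself can't be configured to ignore query strings, the exported URL list can be collapsed after the fact. The sketch below (an assumption about your workflow, not a Moz feature) strips the query string and fragment from each URL so variants group together, making the genuinely distinct pages easy to spot.

```python
from urllib.parse import urlsplit, urlunsplit

def canonical_form(url):
    """Drop the query string and fragment so ?-variants collapse
    to a single base URL."""
    parts = urlsplit(url)
    return urlunsplit((parts.scheme, parts.netloc, parts.path, "", ""))

def group_by_base(urls):
    """Group report URLs by their query-stripped form.

    Returns a dict mapping base URL -> list of reported variants;
    a base with many variants is likely query-string noise, while
    a base with one entry may be a real error worth reviewing.
    """
    groups = {}
    for u in urls:
        groups.setdefault(canonical_form(u), []).append(u)
    return groups
```

Run over the exported error URLs, any group with more than one member is a candidate for "this is just a query-string duplicate" rather than a legitimate issue.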
Competitor changed URL. What data do I lose if I update Moz settings?
One of my competitors has changed their domain URL. I want to update this in the Moz campaign settings, but I get a warning that all historical data will be lost if I do. Is that really true? I replaced one of my competitors with another for a week and didn't seem to lose historical data. Incidentally, could this be a feature request to allow more competitors to be tracked?
Feature Requests | justin-brock