Difference between URLs and referring URLs?
-
Sorry, a bit new to this side of SEO.
We recently discovered we have over 200 critical crawler issues on our site (mainly 4xx errors).
We exported the CSV, and it shows both a URL and a referring URL. Both lead to a 'page not found', so I have two questions:
What is the difference between a URL and a referring URL?
What is the best practice/how do we fix this issue? Is it one for our web developer?
Appreciate the help.
-
No. The referring URL is a page on your site that contains the broken link. These are damaging your rankings, so fix them ASAP: go to each referring page and fix or remove the links that point to the broken URL.
-
I believe "URL" is the page on your website that is 404ing/broken, and the "Referring URL" is the page where someone found your URL and clicked through. For example, if you had a broken link in a Facebook post, the URL would show as "yourwebsite.com/example" (the broken page) and the Referring URL would be "facebook.com/yourprofile".
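To turn that distinction into a fix list, the crawl CSV can be grouped so each broken URL sits under the page that links to it. A rough sketch in Python, assuming the export has columns named 'URL' and 'Referring URL' (hypothetical header names - adjust them to your actual export):

```python
import csv
from collections import defaultdict

def broken_links_by_referrer(csv_path):
    """Group broken URLs by the referring page that links to them.

    Assumes 'URL' and 'Referring URL' columns, as in the CSV export
    described above; rename the keys to match your tool's headers.
    """
    pages = defaultdict(set)
    with open(csv_path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            pages[row["Referring URL"]].add(row["URL"])
    # Pages carrying the most broken links come first - fix those first.
    return sorted(pages.items(), key=lambda kv: len(kv[1]), reverse=True)
```

Handing the top of that list to your web developer gives them a concrete, prioritized set of pages to edit rather than 200 loose errors.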
Related Questions
-
Question on changing URL structures
We have a lot of "Long URL" errors in Moz, and our URLs have no helpful format to them. For example, our blog URLs currently include the date and the title, so they end up looking like this: blog/year/month/day/long title that never ends. If I were to set up a new URL structure using Moz best practices, can I just make the change going forward and redirect a few high-trafficked links to the new structure? Or do I really need to make the change for the website (specifically the blog) as a whole to see a positive impact? I know this may mean an initial drop in traffic, which I'd like to avoid.
Moz Pro | | ETerika0 -
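If the whole blog does get migrated, the old dated URLs can usually be redirected with one pattern rather than one rule per post. A minimal sketch, assuming the /blog/year/month/day/title layout described above (the exact pattern is hypothetical - match it against your real paths before wiring it into a 301):

```python
import re

# Hypothetical pattern for the dated structure described above:
# /blog/<year>/<month>/<day>/<slug> -> /blog/<slug>
DATED = re.compile(r"^/blog/\d{4}/\d{1,2}/\d{1,2}/(?P<slug>[^/]+)/?$")

def redirect_target(path):
    """Return the new URL for an old dated blog path, or None if no match."""
    m = DATED.match(path)
    return f"/blog/{m.group('slug')}" if m else None
```

The same regex translates directly into an Apache RewriteRule or nginx rewrite, so one server-side rule can cover every legacy post.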
Why is the Moz crawl returning URLs with variable results, showing a missing meta description? Example: http://nw-naturals.net/?page_number_0=47
Can you help me dig down into my website's guts to find out why the Moz crawl is returning URLs with variable results, and saying a description is missing on something that isn't really a page? Example: http://nw-naturals.net/?page_number_0=47. I've asked Moz, but it's a web-development issue so they can't help me with it. Has anyone had this issue on their website? Thank you!
Moz Pro | | lewisdesign0 -
WWW used in research URL, or not to WWW
Long-time user, infrequent poster... thanks for taking my question. When I go to gather a series of data elements on a company's URL, the data changes (sometimes dramatically) depending on whether the 'www.' is added to the URL, and the difference seems related more to Page data than Domain data. My question is: which data should I be using to assess the real strength of the site/page? Is this a best-practice question, a personal preference, or is there an actual difference in the performance of the www vs the non-www version?
Moz Pro | | SWGroves0 -
Rogerbot crawls my site and causes errors by requesting URLs that don't exist
Whenever Rogerbot comes back to my site for a crawl, it seems to want to crawl URLs that don't exist, and thus causes errors to be reported. Example: the correct URL is /vw-baywindow/cab_door_slide_door_tailgate_engine_lid_parts/cab_door_seals/genuine_vw_brazil_cab_door_rubber_68-79_10330/ but it seems to want to crawl /vw-baywindow/cab_door_slide_door_tailgate_engine_lid_parts/cab_door_seals/genuine_vw_brazil_cab_door_rubber_68-79_10330/?id=10330 instead. This format doesn't exist anywhere and never has, so I have no idea where it's getting this URL format from. The user-agent details I get are as follows: IP ADDRESS: 107.22.107.114
Moz Pro | | spiralsites
USER AGENT: rogerbot/1.0 (http://moz.com/help/pro/what-is-rogerbot-, [email protected])0 -
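Until the source of the phantom ?id= variant is found, one common defence against parameterized duplicates is to normalize them back to the canonical path (e.g. with a rel=canonical tag or a server-side redirect). A small sketch of the normalization step, using only the URL shape quoted above:

```python
from urllib.parse import urlsplit, urlunsplit

def strip_query(url):
    """Drop the query string and fragment so a URL like
    /page/?id=10330 collapses to /page/ - the form the site
    actually serves."""
    parts = urlsplit(url)
    return urlunsplit((parts.scheme, parts.netloc, parts.path, "", ""))
```

Running crawl-error URLs through this and comparing against your real sitemap quickly shows which "missing" pages are just query-string variants of pages that do exist.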
Does the keyword ranking report take into account all my website's URLs? Can I specify the URLs where I want to track the keywords?
I don't know if my weekly reports are reporting my keyword rankings correctly. I added some new keywords, and since then all my reports are in red numbers. I don't know if this is happening because I did something wrong, or because my rankings are really falling.
Moz Pro | | hockerty0 -
Tool for tracking actions taken on problem urls
I am looking for tool suggestions to help keep track of problem URLs, the actions taken on them, and the tracking and testing of a large number of errors gathered from many sources.

What I want is to be able to export lists of URLs and their problems from my current set of tools (SEOmoz campaigns, Google WM, Bing WM, Screaming Frog) and import them into a centralized DB that shows all of the actions that need to be taken on each URL, while removing duplicates, since each tool finds a significant amount of the same issues.

Example case: SEOmoz and Google identify URLs with duplicate title tags (example.com/url1 & example.com/url2), while Screaming Frog sees that example.com/url1 contains a link that is no longer valid (so it terminates in a 404). When I import the three reports into the tool, I would like to see that example.com/url1 has two issues pending, a duplicated title and a broken link, without duplicating the entry that both SEOmoz and Google found. I would also like to see historical information on the URL, such as whether I have written redirects to it (to fix a previous problem), or whether it used to be a broken page (i.e. a 4xx or 5xx error) and is now fixed.

Finally, I would like not to be bothered with the same issue twice. As Google is incredibly slow at updating its issues summary, I would like to avoid importing duplicate issues (the tool should recognize that the URL is already in the DB and that it has been resolved).

Bonus for any tool that uses the Google and SEOmoz APIs to gather this info for me. Bonus bonus for any tool that is smart enough to check incoming issues and mark them as resolved (for instance, if a URL has a 403 error, it would check on import whether it still resolves as a 403; if it did, it would be added to the issue queue; if not, it would be marked as fixed). Does anything like this exist?

How do you deal with tracking and fixing thousands of URLs and their problems, and the duplicates created by using multiple tools? Thanks!
Moz Pro | | prima-2535090 -
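The dedup-on-import behaviour described above mostly comes down to keying every report row by (url, issue). A rough sketch of that merge, with hypothetical 'url' and 'issue' keys - real exports from each tool would need their own column mapping:

```python
from datetime import date

def merge_issue_reports(reports, db=None):
    """Merge (url, issue) rows from several tool exports into one dict,
    collapsing duplicates reported by more than one tool.

    `reports` is an iterable of (tool_name, rows) pairs, where each row
    is a dict with hypothetical 'url' and 'issue' keys.
    """
    db = db if db is not None else {}
    for tool, rows in reports:
        for row in rows:
            key = (row["url"], row["issue"])
            entry = db.setdefault(key, {"sources": set(), "status": "open",
                                        "first_seen": date.today().isoformat()})
            if entry["status"] == "resolved":
                continue  # already fixed; don't reopen on a stale re-import
            entry["sources"].add(tool)
    return db
```

The "recheck on import" bonus would slot in where the status check happens: before adding a source, re-fetch the URL and flip the status to resolved if the error no longer reproduces.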
Moz points a difference in totals
I am not quite sure why this is happening, but the Moz points total shown when I hover over my logo is around six times higher than the one in my main section. Can someone explain why? (Screenshot attached: Screen-Shot-2011-11-23-at-01.21.22.png)
Moz Pro | | onlinemediadirect0 -
Metrics from Linkscape - DJ Passed, URL mozRank Passed and funny numbers
Hello, hoping someone can help me understand the difference between Domain Juice Passed and some interesting numbers found in the exported CSV file. I ran the Advanced Link Intelligence Report, focusing on the Links to Domain metrics. It looks like the report is sorted by mozRank passed, but next to each link we are given the DJ Passed instead. Why is that? My confusion is compounded by the fact that when I export the CSV of this report, it no longer includes the DJ Passed numbers but shows URL mozRank Passed instead. For example, in the web version of the Advanced Link Intelligence Report the top link is http://www.holdenouterwear.com/shop.php with mozRank: 5.56, mozTrust: 5.95, and DJ Passed: 4.49. In the CSV file we don't get the DJ Passed but get a URL mozRank Passed of 0.00051. Looking at the CSV file further, some links have a URL mozRank Passed of 4.00E-05. Does anyone have a clear explanation of why DJ Passed is not in the CSV file, how the mozRank passed is calculated, and what 4.00E-05 means? Thank you.
Moz Pro | | miloszpekala0
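Can't speak to why Moz swaps DJ Passed for URL mozRank Passed in the export, but the 4.00E-05 part is not a glitch - it's ordinary scientific notation, which spreadsheet software uses once a value gets tiny. A quick check:

```python
# '4.00E-05' is ordinary scientific notation: 4.00 * 10**-5.
value = float("4.00E-05")
print(value)            # 4e-05
print(f"{value:.5f}")   # 0.00004

# So 0.00051 and 4.00E-05 live on the same scale - the CSV simply
# switches to exponent form for the smaller passed values.
print(0.00051 > value)  # True
```

In other words, a link showing 4.00E-05 is passing roughly a tenth of what the 0.00051 link passes, not some separate metric.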