Why are old URL formats still being crawled by Rogerbot?
-
Hi,
In the early days of my blog, I used permalinks with the following format:
http://www.mysitesamp.com/2009/02/04/heidi-cortez-photo-shoot/
I then decided to change this format using .htaccess to this:
http://www.mysitesamp.com/heidi-cortez-photo-shoot/
My question is: why does Rogerbot still crawl my old URL format when these URLs no longer exist on my website or blog?
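For anyone making the same change, the rewrite described above might look roughly like this in .htaccess (a sketch, assuming Apache's mod_rewrite and the WordPress-style date-based permalinks shown in the question):

```apache
# Enable the rewrite engine (usually already on in a WordPress .htaccess)
RewriteEngine On

# 301-redirect old date-based permalinks (/YYYY/MM/DD/post-slug/)
# to the new short format (/post-slug/), preserving the slug
RewriteRule ^[0-9]{4}/[0-9]{2}/[0-9]{2}/([^/]+)/?$ /$1/ [R=301,L]
```

A 301 tells crawlers the old URLs have moved permanently, so they can associate the old format with the new one rather than treating the old pages as simply gone.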
-
Thanks Alan,
That solved my problem...
-
Hi Alan,
After disallowing the directory in robots.txt, Rogerbot still includes the non-existent URLs. Here is a sample URL being reported by Rogerbot:
www.lugaluda.com/2009/08/05/chase-online-banking-chase-checking-bonus/
-
If you give me the URL, I can crawl it for you if you like.
-
Thanks Alan, I really appreciate your help. You gave me an idea: since all the old URLs come from a virtual 2009 directory, I tried adding a disallow statement for that directory in robots.txt. Hopefully this will help solve the problem.
I will let you know the results after Rogerbot finishes recrawling my site...
Thanks Dude....
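For reference, a disallow rule for that virtual directory might look like this in robots.txt (a sketch; the /2009/ path matches the sample URL above, and separate rules would be needed for other years):

```
User-agent: rogerbot
Disallow: /2009/
```

Note that robots.txt only stops compliant bots from fetching those URLs; it does not remove the links themselves from your pages, so a 301 redirect is usually the cleaner long-term fix.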
-
You need to search your site. Bots start on a page and follow its links, so if the report lists those URLs, the crawler must have found them somewhere. Bots like Googlebot or Bingbot can find URLs on other sites, but Rogerbot only crawls within your site.
-
How will I know if they still exist on my site? When I try to access the specific URLs, they are no longer active.
-
The old format must still exist somewhere on your site; bots follow links from your home page through your site.
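One way to confirm this yourself is to scan your pages' HTML and flag any links that still use the date-based format. A minimal sketch in Python (standard library only; the regex assumes the /YYYY/MM/DD/slug/ pattern from the question, so adapt it to your own old format):

```python
import re
from html.parser import HTMLParser

# Matches date-based permalinks such as /2009/02/04/heidi-cortez-photo-shoot/
OLD_FORMAT = re.compile(r"/\d{4}/\d{2}/\d{2}/[^/]+/?$")

class LinkFinder(HTMLParser):
    """Collects href values from anchor tags in a page's HTML."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

def find_old_links(html):
    """Return the hrefs in `html` that still use the old date-based format."""
    parser = LinkFinder()
    parser.feed(html)
    return [link for link in parser.links if OLD_FORMAT.search(link)]

sample = (
    '<a href="/heidi-cortez-photo-shoot/">new</a> '
    '<a href="/2009/08/05/chase-online-banking-chase-checking-bonus/">old</a>'
)
print(find_old_links(sample))
# -> ['/2009/08/05/chase-online-banking-chase-checking-bonus/']
```

Run this over the HTML of each page (starting from the home page, the way a crawler would) and any hit shows you exactly where the old links still live.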