Stock lists - follow or nofollow?
-
A bit of a catch-22 here that I could use some advice on, please!
We look after a few car dealership sites with stock feeds that run daily (some three times a day), adding and removing cars from the site, which in turn creates and removes a page for each vehicle.
We all know how much search engines like sites whose content is updated regularly, but the frequency at which it happens on our sites means we are left with lots of indexed pages that are no longer there.
Now my question is: should I nofollow/disallow robots on all the vehicle detail pages, meaning the list pages will still be updated daily for "new content", or should I allow Google to index everything and manage the errors by redirecting to relevant pages?
Is there a "best practice" way to do this, or is it really personal preference?
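For reference, the "keep the detail pages out of search" option usually means a robots meta tag rather than (or in addition to) robots.txt: a Disallow line stops crawling but does not reliably remove URLs that are already indexed, while a noindex tag on each crawlable vehicle detail page does. A minimal sketch of what that tag could look like on a detail page:

```html
<!-- On each vehicle detail page: allow crawling, but keep the page out of the index.
     "follow" lets link equity still flow through to the rest of the site. -->
<meta name="robots" content="noindex, follow">
```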
-
I would take the "aggregation" route.
Instead of having lots of pages for each individual vehicle, I would make single pages that list all of the vehicles of a single make and model. These pages would be more substantive, permanent, impressive, useful, and competitive than a lot of skimpy single pages that appear and disappear from your website.
Competitors are probably not doing this because it is difficult rather than easy. Put checkboxes down the side of the page that visitors can "check to compare".
-
The idea of redirecting a user to a car that might not match their search does not seem like a very user-friendly option. If they wanted a Mustang and clicked a search listing for it, but none were available and they were redirected to a Camaro page, the user would not be happy. That flow is built only for the site and not the customer, IMO.
-
Hi Ben,
Is it possible to create a basic "sold" page with some dynamic info about the vehicle? After the vehicle is sold or no longer available, 301 the old page to the sold page, populated with the vehicle info via parameters and some other possible buying choices.
For example:
siteblah.com/Make/Model/Year/short-car-description
When sold, 301 the page to: siteblah.com/Vehicles/Sold/Vehicle-Sold.php?listing-id='12222190'
The benefit here is that the old page passes its link juice to the new page, so you don't lose it. With that content, the customer understands the car is sold, and you provide them with actionable options. The search engines learn about the new page and can treat it as such. Additionally, you'd only have to create one new page and plug in the parameters. Every three months or so you can probably remove the old pages and the 301 redirects, depending on server performance.
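The routing logic behind that suggestion is small. A minimal sketch in Python, assuming a hypothetical request handler (the sold-page path and field names follow the example URLs above; they are illustrations, not the poster's actual code):

```python
# Hypothetical sketch of the "sold page" 301 approach described above.
SOLD_PAGE = "/Vehicles/Sold/Vehicle-Sold.php"

def respond_for(vehicle):
    """Return (status, location) for a request to a vehicle detail URL."""
    if vehicle.get("sold"):
        # 301 to the shared sold page, carrying the listing id as a parameter
        return 301, f"{SOLD_PAGE}?listing-id={vehicle['listing_id']}"
    # Vehicle still in stock: serve the normal detail page
    return 200, None
```

The sold page itself then reads `listing-id` and renders the vehicle details plus alternative buying choices, so one template serves every sold listing.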
-
Thanks for the response.
The two routes I was looking at are both for the user. I'm looking at either not allowing search engines to serve the content that can expire, or redirecting visitors to similar vehicles/relevant content within the site.
I was purely wondering which would have additional benefits with Google, as the first option is the easier of the two development-wise.
-
My thoughts are: instead of worrying about what is best for Google, think about what will give a user the best experience and go with that. While it is nice to have a lot of pages indexed, if they are gone by the time they reach Google's results, what good does that do a visitor who was searching for a specific term your site no longer offers? They are much more likely to leave, which will affect your whole site negatively as bounces from search go up.