PDF Optimization Question: Does URL Structure Matter?
-
Hi Mozzers:
I am optimizing a bunch of PDF brochures within a client's website. Besides the typical optimization tactics I'm applying (like these), I have a question regarding the file/URL structure of the PDFs themselves. By default, the client locates PDFs in an 'uploads' folder of their WordPress site. So, a typical PDF might have a URL such as: https://www.Xyzinsurance.com/xyz-content/uploads/2015/06/Brochure-XYZ-Connect.pdf
My question: is there any advantage in eliminating all these sub-directories and moving the files into a main folder, simply titled '/brochures'?
Any insights or conjecture would be welcome!
-
Sorry for the late reply. Thank you so much, Nigel. We'll bear your comment in mind as we update this site's folder and file structure!
-
Hi David
I honestly would do exactly as you suggest.
The problem with the current structure is that it contains the date. If you want to present evergreen content, having the date in the URL is probably a bad choice. Presumably, if they are being offered in PDF form, they are useful downloads whose relevance is not restricted to the year and month in which they were published.
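If the PDFs do move to a flat '/brochures' folder, the old dated URLs should be 301-redirected to the new locations so any existing links keep their value. A minimal sketch in .htaccess, assuming the '/xyz-content/uploads/YYYY/MM/' pattern from the example URL in the question and a same-filename move (adjust paths to the real site):

```apache
# Hedged sketch: 301 any PDF under the dated uploads path to the
# same filename in a flat /brochures/ folder. The paths here are
# assumptions based on the example URL in the question.
RewriteEngine On
RewriteRule ^xyz-content/uploads/\d{4}/\d{2}/(.+\.pdf)$ /brochures/$1 [R=301,L,NC]
```

Test this against a handful of real brochure URLs before deploying; any PDFs that don't follow the uploads pattern would need their own rules.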
I hope that helps
Regards
Nigel
Related Questions
-
302 Redirect Question
After running a site crawl, I found two 302 redirects: site.com to www.site.com, and site.com/products to www.site.com/products. How do I fix the 302 redirects and change them to 301 redirects? I have no clue where to start. Thanks.
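On an Apache server this is usually a one-line change in .htaccess: the non-www host is matched and redirected with an explicit 301 flag. A hedged sketch, with site.com standing in for the real domain:

```apache
# Hedged sketch: permanently (301) redirect non-www requests to www,
# preserving the requested path, so it covers both / and /products.
# "site.com" is a placeholder for the real domain.
RewriteEngine On
RewriteCond %{HTTP_HOST} ^site\.com$ [NC]
RewriteRule ^(.*)$ http://www.site.com/$1 [R=301,L]
```

If the existing 302s come from a CMS or hosting-panel setting rather than .htaccess, the fix is the same idea: find where the redirect is declared and change its status from temporary to permanent.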
Technical SEO | Ryan_1320
.htaccess Question
Hi, I have a website www.contractor-accounts.co.uk that has an .htaccess file that strips .php and forces a trailing slash /. The site is now over 6 months old and still has a very low ranking, with Moz also rating the site as DA/PA = 1, which seems to indicate some sort of issue with the website. Can anyone offer any suggestions as to why this site is ranking poorly? Much of the on-page SEO has been completed to a level of 90%+ for specific keyterms, so I'm probably looking at either the routing of the framework or some other technical SEO issue. Any help much appreciated...

    <IfModule mod_rewrite.c>
        <IfModule mod_negotiation.c>
            Options -MultiViews
        </IfModule>

        RewriteEngine On

        # Redirect Trailing Slashes...
        # RewriteRule ^(.*)/$ /$1 [L,R=301]
        RewriteCond %{REQUEST_URI} /+[^.]+$
        RewriteRule ^(.+[^/])$ %{REQUEST_URI}/ [R=301,L]

        # Redirect non-WWW to WWW...
        RewriteCond %{HTTP_HOST} ^contractor-accounts.co.uk [NC]
        RewriteRule ^(.*)$ http://www.contractor-accounts.co.uk/$1 [L,R=301]

        # Handle Front Controller...
        RewriteCond %{REQUEST_FILENAME} !-d
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteRule ^ index.php [L]
    </IfModule>

Technical SEO | ecrmeuro
Folder Hierarchy Structure Theory
Hi, I was wondering if search engines, in particular Google, actually use folder hierarchy to determine how important a particular page on a website might be for ranking purposes, or whether on-site page inter-linking is the only thing taken into consideration. I know that external and internal links help to support the authority or 'page rank' of a particular webpage on a website. In a typical WordPress installation, for example, it is easy to create a page and assign child pages to support it. These sub-pages would naturally link to their parent pages via menu and/or body links, so they would theoretically 'support' the authority of the parent folder/page. My question is... would search engines see the parent folder page as more authoritative than a child page, even without a lot of on-site interlinking of child and parent pages, just because it is higher up in the folder structure? For example, I have a client who has a WordPress website, but is using a plugin to make all pages have a .htm ending. The site is fairly 'flat', hierarchically speaking, and does not use any /folders/, but the pages are inter-linked. In the following scenario, there are 4 testimonial pages... 1 main one and 3 supporting pages. The 3 supporting pages are linked to from the parent page and vice versa.

/testimonials.htm
/testimonials-quality.htm
/testimonials-price.htm
/testimonials-ease.htm

I was wondering if it is worth suggesting to my client that we remove that plugin so that we can more easily employ the natural folder hierarchy functions of WordPress, such as this scenario:

/testimonials/
/testimonials/quality/
/testimonials/price/
/testimonials/ease/

Would the loss of 'link juice' due to redirects and the work that would be involved be worth the possible ranking increases from structuring the website better... or are we fine just relying on the existing page interlinking to show the search engines which parent pages are important?
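If the client does migrate from the flat .htm pages to a folder hierarchy, each old URL would need a 301 redirect so existing link equity follows the move. A hedged sketch in .htaccess, using the testimonial URLs given in the question as stand-ins:

```apache
# Hedged sketch: 301 the flat .htm testimonial pages to their
# new folder-based equivalents. URLs are the examples from the
# question, not a real site's.
RewriteEngine On
RewriteRule ^testimonials\.htm$ /testimonials/ [R=301,L]
RewriteRule ^testimonials-(quality|price|ease)\.htm$ /testimonials/$1/ [R=301,L]
```

The pattern rule handles all three child pages at once; any pages that don't follow the parent-child naming convention would need individual rules.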
Technical SEO | OrionGroup
User Reviews Question
On my e-commerce site, I have user reviews that cycle in the header section of my category pages. They appear/cycle via a snippet of code that the review program provided me with. My question is... because the actual user-generated content is not in the page source, does Googlebot not see this content? Does it not treat the page as having fresh content even though the reviews are new? Does the bot only see the code that provides the reviews? Thanks in advance. Hopefully this question is clear enough.
Technical SEO | IOSC
Questionable Referral Traffic
Hey SEOMozers, I'm working with a client that has a suspicious traffic pattern going on. In October, a referral domain called profitclicking.com started passing visits to the site. Almost in parallel, overall visits decreased anywhere from 35 to 50%. Looking into profitclicking.com further, I found it promises more traffic "with no SEO knowledge". The client doesn't think this service was signed up for internally. Regardless, it obviously smells pretty fishy, and I'm searching for a way to block traffic from this site. Could I simply write a disallow statement in robots.txt and be done with it? Just wanted to see if anyone else had any other ideas before recommending a solution. Thanks!
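One point worth noting: a robots.txt Disallow only advises crawlers which URLs not to fetch; it has no effect on referral visits. Blocking by referrer is normally done server-side. A hedged sketch for Apache .htaccess:

```apache
# Hedged sketch: refuse (403) any request whose Referer header
# mentions profitclicking.com. robots.txt cannot do this, since
# it only instructs crawlers, not referring visitors.
RewriteEngine On
RewriteCond %{HTTP_REFERER} profitclicking\.com [NC]
RewriteRule ^ - [F]
```

If the referrals turn out to be "ghost" spam injected directly into the analytics account rather than real hits, a server-side block won't remove them and a filter in the analytics tool would be needed instead.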
Technical SEO | kylehungate
URL Structure: When to insert keywords?
I read the SEOmoz beginner's guide, and it said that it's beneficial to place keywords in the URL as long as you don't overdo it. However, this seems awkward for common pages such as "Home", "About", "Contact", etc. I've currently targeted a specific keyword for each page on my site, as follows:

Home: "Green Screen"
Work: "Greenscreen"
About: "Event Photography"
Pricing: "Green Screen Photography"

Should I rename the URLs as:

Home: ...com/green-screen-home.html
Work: ...com/greenscreen-work.html
About: ...com/about-event-photography.html
Pricing: ...com/green-screen-photography-pricing.html
Technical SEO | pharcydeabc
How to find original URLS after Hosting Company added canonical URLs, URL rewrites and duplicate content.
We recently changed hosting companies for our ecommerce website. The hosting company added some functionality such that duplicate content and/or mirrored pages appear in the search engines. To fix this problem, the hosting company created both canonical URLs and URL rewrites. Now, we have page A (the original page with all the link juice) and page B (the new page with no link juice or SEO value). Both pages have the same content, with different URLs.

I understand that a canonical URL is the way to tell the search engines which page is the preferred page in cases of duplicate content and mirrored pages: it tells the search engine that page B is a copy of page A, but page A is the preferred page to index. The problem we now face is that the hosting company made page A a copy of page B, rather than the other way around. As a result, the search engines are now prioritizing the newly created page over the original one.

I believe the solution is to reverse this and make page B (the new page) a copy of page A (the original page), by putting the original URL as the canonical URL on all the duplicate pages. The problem is, with all the rewrites and changes in functionality, I no longer know which URLs have the backlinks that created this SEO value. If I can find the backlinks to the original pages, then I can work out the original address of each one. My question is: how can I search for backlinks on the web in such a way that I can figure out the URL they all point to, in order to make that URL the canonical URL for the new, duplicate pages?
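Once an original URL is recovered (for example via a backlink tool such as Open Site Explorer), the duplicate can be pointed at it. If the pages themselves can't be edited easily, a canonical can also be declared at the server level with an HTTP Link header. A hedged sketch for Apache, where page-a.html and page-b.html are hypothetical stand-ins for the real URLs:

```apache
# Hedged sketch: declare page A as the canonical URL for the
# duplicate page B via an HTTP Link header. Requires mod_headers;
# both filenames and the domain here are placeholders.
<IfModule mod_headers.c>
    <Files "page-b.html">
        Header set Link "<https://www.example.com/page-a.html>; rel=\"canonical\""
    </Files>
</IfModule>
```

Search engines treat the Link header the same way as an in-page rel="canonical" tag, which makes it useful when a host's rewrite layer controls the pages.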
Technical SEO | CABLES
Meta tags question - imagetoolbar
We inherited some sites from another vendor, and they have these tags in the head of all pages. Are they of any value at all? Thanks for the help! Wick Smith
Technical SEO | wcksmith