Source code structure: Position of content within the &lt;body&gt; tag
-
Within the &lt;body&gt; section of the source code of a site I work on, there are a number of distinct sections.
The first one, appearing first in the source code, contains the code for the primary site navigation tabs and links. The second contains the keyword-rich page content.
My question is this: if I could fix the layout so that the page still displayed visually the same way it does now, would it be advantageous to move the keyword-rich content section to the top of the &lt;body&gt;, above the navigation?
I want search engines to reach the keyword-rich content sooner when they crawl pages on the site; however, I don't want to implement this change if it won't have any appreciable benefit, nor if it will harm the search engines' access to my primary navigation links.
Does anyone have any experience of this working, or thoughts on whether it will make a difference?
Thanks,
-
That will work, but don't try to game it too much. I optimized a site where there was a ton of account info before any of the content, so I used absolute positioning to move all of that markup to the bottom of the source (while it still displayed correctly on the page). This allowed the most valuable content and links to be crawled first.
I'm not sure I would try to move content above the nav, though.
Search engines will see your most important links in their order of appearance in the source.
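A minimal sketch of the approach described above: the keyword-rich content comes first in the source order, and CSS (absolute positioning here, as in the answer, though other repositioning techniques work too) keeps the navigation rendered at the top of the page. The class names and dimensions are illustrative, not taken from the original site.

```html
<head>
  <style>
    /* Reserve space at the top of the page for the repositioned nav. */
    body      { margin: 0; padding-top: 60px; }
    .site-nav { position: absolute; top: 0; left: 0;
                width: 100%; height: 60px; }
  </style>
</head>
<body>
  <div class="main-content">
    <!-- keyword-rich content: appears first in the source, so it is crawled first -->
  </div>
  <div class="site-nav">
    <!-- primary navigation: later in the source, but rendered at the top -->
  </div>
</body>
```

The visual result is unchanged; only the order in which a crawler reading the raw HTML encounters the two sections differs.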
Related Questions
-
Inconsistency between content and structured data markup
Hi everyone, what does Google think about inconsistency between the visible content and the structured data markup? Is this considered a form of cheating, and could it hurt my SEO?
Technical SEO | | intern2020120 -
Doubts about the technical URL structure
Hello, first we had this structure:
Category: https://www.stoneart-design.de/armaturen/
Subcategory: https://www.stoneart-design.de/armaturen/waschtischarmaturen/
Often I see structures like this: https://www.xxxxxxxx.de/badewelt/badmoebel/
I have heard it has something to do with layers, so Google can index the site better. Is that true? Is "badewelt" an extra layer?
So I thought maybe we would be better off changing to:
https://www.stoneart-design.de/badewelt/armaturen/
https://www.stoneart-design.de/badewelt/armaturen/waschtischarmaturen/
After seeing that, I thought we could also put the keyword on the left and, instead of "badewelt", use just a "c" at the end:
https://www.stoneart-design.de/armaturen/c/
https://www.stoneart-design.de/armaturen/waschtischarmaturen/c/
I don't understand anymore which is best; to me it seems to be the last one. My reasoning was that the current structure looks like keyword stuffing to me (see the attached picture), and Google was not consistently indexing the same URL, so I thought this might solve it. We could also use the word "whirlpools" only in the main category, and in the subcategories just the product type without "whirlpools" in the text. Thanks. Regards, Marcel
Technical SEO | | HolgerL0 -
Pages being flagged in Search Console as having a "noindex" tag do not have a meta robots tag?
Hi, I am running a technical audit on a site which is causing me a few issues. The site is small and awkwardly built, using lots of JS, animations, and dynamic URL extensions (a bit of a nightmare). Only 5 pages are being indexed in Google, despite over 25 pages being submitted to Google via the sitemap in Search Console. The beta Search Console is telling me that 23 URLs are marked with a 'noindex' tag, yet when I view the page source of these pages there are no meta robots tags at all, and I have also checked the robots.txt file. Both Screaming Frog and DeepCrawl are failing to pick up these URLs, so I am at a bit of a loss about how to find out what's going on. I believe the creative agency who built the site had little idea of general website best practice, and the dynamic URL extensions may have something to do with the noindexing. Any advice on this would be really appreciated. Are there any other ways of noindexing pages which the dev/creative team might have implemented by accident? What am I missing here? Thanks,
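One place a noindex can hide, which may be worth ruling out here, is the HTTP response rather than the page source: an `X-Robots-Tag: noindex` header set at the server or CDN level never appears in the HTML, and viewing the page source will not show it. (Since the site uses a lot of JS, a meta robots tag injected by JavaScript into the rendered DOM is another candidate that "view source" misses.) A rough sketch of a header check, using only the Python standard library:

```python
def header_says_noindex(headers):
    """True if an X-Robots-Tag response header contains 'noindex'.

    `headers` is any mapping of HTTP header names to values, e.g.
    dict(resp.headers) from urllib.request.urlopen(url).
    A noindex found here never shows up in the page source.
    """
    for name, value in headers.items():
        if name.lower() == "x-robots-tag" and "noindex" in value.lower():
            return True
    return False

print(header_says_noindex({"X-Robots-Tag": "noindex, nofollow"}))  # True
print(header_says_noindex({"Content-Type": "text/html"}))          # False
```

To check a live page you would pass it the real response headers, e.g. `header_says_noindex(dict(urllib.request.urlopen(url).headers))`; `curl -I` on the affected URLs shows the same information.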
Technical SEO | | NickG-1230 -
Duplicate Content
Hi, we need some help resolving this duplicate content issue. We have redirected both domains to this Magento website, and I guess Google now considers this duplicate content. Our client wants both domain names to go to the same Magento store. What is the safe way of letting Google know these are the same company? Or is this not an ideal thing to do? Thanks
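If both domains must keep resolving, the two usual safe options are a site-wide 301 redirect from the secondary domain to the primary one, or, at minimum, a rel=canonical tag on every page pointing at the primary domain so Google consolidates the signals there. A sketch of the canonical approach (the domain shown is a placeholder, not from the question):

```html
<head>
  <!-- On every page of the secondary domain, point at the
       equivalent page on the primary domain. -->
  <link rel="canonical" href="https://www.primary-domain.com/current-page/" />
</head>
```

A 301 redirect is the stronger of the two signals; the canonical tag is the fallback when the client insists that both domains keep serving pages directly.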
Technical SEO | | solution.advisor0 -
Fix duplicate content caused by tags
Hi everyone, TGIF. We are getting hundreds of duplicate content errors on our WordPress site from what appear to be our tags: for each tag and each post we see a duplicate content error. I thought I had this fixed, but apparently I do not. We are using the Genesis theme with Yoast's SEO plugin. Does anyone have the solution to what I imagine is an easy fix? Thanks in advance.
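Since Yoast's SEO plugin is already installed, the usual fix is to set tag archives to noindex in the plugin's taxonomy settings (the exact menu path varies by plugin version). Once enabled, WordPress emits a robots meta tag along these lines on every /tag/ archive page, and the duplicate pages drop out of the index while their internal links are still followed:

```html
<!-- Emitted on tag archive pages once tag archives are set to noindex -->
<meta name="robots" content="noindex, follow" />
```

The crawler warnings typically clear on the next crawl after the tag archives stop being indexable.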
Technical SEO | | okuma0 -
Status code 404??!
Among other things, I'm getting a 404 for this malformed URL: http://worldvoicestudio.com/blog/"http://worldvoicestudio.com/" Any ideas on how to fix this? Many thanks!
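A URL of that shape usually means some page links to the homepage with quote characters accidentally included inside the href value (e.g. an href whose value is literally `"http://worldvoicestudio.com/"`, quotes and all). Because the quoted string is not a valid absolute URL, crawlers resolve it as a relative path under /blog/, which produces exactly the 404 above. This is an inferred diagnosis, not confirmed from the site, but it can be reproduced with Python's standard-library URL resolver:

```python
from urllib.parse import urljoin

# An href value that accidentally contains literal quote characters
# is not a valid absolute URL, so it resolves as a relative path.
bad_href = '"http://worldvoicestudio.com/"'
resolved = urljoin("http://worldvoicestudio.com/blog/", bad_href)
print(resolved)  # the malformed /blog/"..." URL seen in the question
```

The fix is to find the template or post containing the doubly quoted link and correct the href; a crawl tool's "inlinks" report for the 404 URL will show which page carries it.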
Technical SEO | | malexandro0 -
How to tell if PDF content is being indexed?
I've searched extensively for this but could not find a definitive answer. We recently updated our website, and it contains links to about 30 PDF data sheets. I want to determine whether the text from these PDFs is being indexed by search engines. When I do this search http://bit.ly/rRYJPe (Google: site:www.gamma-sci.com filetype:pdf) I can see that the PDF URLs are getting indexed, but does that mean their content is getting indexed? I have read in other posts that if you can copy text from a PDF and paste it, Google can index the content. When I try this with PDFs from our site I cannot copy text, but I was told these PDFs were all created from Word docs, so they should be indexable, correct? Since WordPress has you upload PDFs as if they were images, could this be causing the problem? Would it make sense to take the time to extract all of the PDF content to HTML? Thanks for any assistance; this has been driving me crazy.
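One thing worth knowing before extracting everything to HTML: a PDF can have its copy/extract permission disabled while still containing a perfectly good text layer, so "I can't copy text" does not by itself mean the text is unindexable, and Word-generated PDFs normally do carry a text layer. You can get a first-pass signal locally without any SEO tooling. Below is a very rough heuristic using only the Python standard library: text-based PDFs reference /Font resources, while image-only scans usually do not. It only inspects raw bytes and can be fooled (compressed streams, unusual generators), so treat it as a quick triage; a proper check would use a PDF library or the `pdftotext` command.

```python
def pdf_probably_has_text(data: bytes) -> bool:
    """Rough heuristic: text-based PDFs reference /Font resources;
    image-only scans usually contain none. Raw-byte check only,
    so it can misfire on unusually structured files.
    """
    return b"/Font" in data

# Usage sketch (the path is a placeholder):
# with open("datasheet.pdf", "rb") as f:
#     print(pdf_probably_has_text(f.read()))

# Tiny inline demo: fragments resembling a text PDF vs. a scanned one.
textish = b"%PDF-1.4 /Type /Page /Resources << /Font << /F1 5 0 R >> >>"
scannish = b"%PDF-1.4 /Type /XObject /Subtype /Image /Filter /DCTDecode"
print(pdf_probably_has_text(textish))   # True
print(pdf_probably_has_text(scannish))  # False
```

Separately, checking Google's cached version of one of the indexed PDF URLs will show directly whether Google extracted the text.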
Technical SEO | | zazo0 -
Site Structure question
When deciding the site structure for an e-commerce site, is it better to keep everything flat, like mysite.com/widget.html, or to use categories, like mysite.com/Gifts/widget.html?
Technical SEO | | DavidKonigsberg0