Divs vs. Tables for styled data
-
Hello,
We're in the process of launching MultipleSclerosis.net and are a bit confused with how to present some specific information.
Looking at pages such as http://multiplesclerosis.net/symptoms/, http://multiplesclerosis.net/what-is-ms/courses-patterns/ and http://multiplesclerosis.net/treatment/prescription-nonprescription-medications/, is it better to keep this data structured as divs and style them to look like tables, or to mark it up as tables and style those accordingly?
Though it's not technically "tabular" data, I'm not sure how to handle this. The code-to-text ratio is quite high with the divs in the markup; while I'm not overly worried about that in itself, it could cause some issues with the site's indexability.
Thanks, I appreciate any feedback.
-
My opinion would be that DIV-based markup is the better choice here. As you said yourself, it's not really tabular data, so using DIVs lets you apply semantic markup, which is a positive for SEO.
You could improve and clean up the markup of that data, though, by doing the following:
-
Use heading tags (`<h2>`, `<h3>`, etc.). Even the bolded text in the left-hand column is basically a header for the text in the right-hand column.
-
You should remove the empty `<div class="hr">` tags, which I assume are in there to create the horizontal lines. It's nitpicky, because if you remove them you'll need to add a wrapper DIV around each row, so you won't really be cutting down on the code that much. But having empty tags that are only there for presentation purposes is generally frowned upon. You could create the same visual effect by using a CSS border, or by using a background image (if you want the line to stop short of the full row width).
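As a rough sketch of that suggestion (the class names here are invented for illustration, not taken from the site), the empty spacer DIVs could be replaced with a border or background image on each row wrapper:

```css
/* Full-width line: draw it with a border on the row wrapper
   instead of an empty <div class="hr"></div> element. */
.symptom-row {
  border-bottom: 1px solid #ccc;
  padding-bottom: 10px;
  margin-bottom: 10px;
}

/* Partial-width line: use a background image anchored to the
   bottom of the row so it doesn't extend the full width. */
.symptom-row-partial {
  background: url(hr-line.png) no-repeat left bottom;
  padding-bottom: 10px;
  margin-bottom: 10px;
}
```

Either way the presentation lives entirely in the stylesheet, so the markup carries no empty presentational tags.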
That's all pretty nitpicky coding stuff, though. For SEO purposes, I think the only change that might have an effect is using the heading (`<hx>`) tags.
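Putting both suggestions together, one row of that data might look something like this (a hypothetical sketch — the class names and content are invented for the example, and the heading level would depend on the page's existing heading hierarchy):

```html
<!-- One "row": the left-hand label becomes a real heading,
     and a single wrapper DIV replaces the empty spacer tag. -->
<div class="symptom-row">
  <h3 class="symptom-name">Fatigue</h3>
  <div class="symptom-description">
    <p>One of the most common symptoms, affecting the majority
       of people with MS at some point.</p>
  </div>
</div>
```

With this structure, search engines can read the left-hand labels as headings for the text beside them, which is the semantic benefit mentioned above.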
-
-
Hey Oliver,
Looking at those tables, I can't see that you would have any problems with how you've done it. There is a lot of markup, but it all seems well structured with divs, unordered lists, list items, etc.
I certainly would not worry about it in this case.
Marcus