Scientific Researchers Are Using Social Media All Wrong, Probably Should’ve Done More Research

Well, they're using it for research and not cat videos, so...


Hey, you know how every day a new study seems to pop up that analyzes social media data to form its conclusion? It turns out those little eggs of knowledge that the Twitter bird is laying might not be all they’re cracked up to be.

A study published in Science says that scientists are so busy concerning themselves with what all the data culled from social media means that they tend to overlook the importance of understanding the nuances of the social media platforms themselves. It’s like how your parents’ social media use seems erratic and bizarre, and you can’t get them to understand what a hashtag is.

Or how your little sibling can’t seem to explain the point of HappyChatFuntime to you. Wait. You don’t even know what HappyChatFuntime is? God, you’re so old. Get with the times.

So the big fountain of social media knowledge might not be more accurate than anything else you read on the Internet. Luckily, the problem is just that the use of such data in scientific research is still in its infancy, so with some methodological adjustment, the data really can give important insights into human behavior.

According to PCMag, computer scientists Derek Ruths of McGill University and Jürgen Pfeffer of Carnegie Mellon, the authors of the new study, outlined several avoidable pitfalls in social media research:

  • Different social media platforms attract different users (Pinterest, for example, is dominated by females aged 25 to 34), yet researchers rarely correct for the distorted picture these skewed populations can produce; a reweighting sketch follows this list.
  • Publicly available data feeds used in social media research don’t always provide an accurate representation of the platform’s overall data—and researchers are generally in the dark about when and how social media providers filter their data streams.
  • The design of social media platforms can dictate how users behave and, therefore, what behavior can be measured. For instance, on Facebook the absence of a “dislike” button makes negative responses to content harder to detect than positive “likes”.
  • Large numbers of spammers and bots, which masquerade as normal users on social media, get mistakenly incorporated into many measurements and predictions of human behavior.
  • Researchers often report results for groups of easy-to-classify users, topics, and events, making new methods seem more accurate than they actually are. For instance, efforts to infer political orientation of Twitter users achieve barely 65 percent accuracy for typical users—even though studies (focusing on politically active users) have claimed 90 percent accuracy.

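For the curious, here’s what “correcting for the distorted picture” can actually look like. This is a minimal, hypothetical sketch of post-stratification weighting, a standard survey-research fix, not code from the Ruths and Pfeffer paper; every group name and percentage below is invented for illustration.

```python
# Hypothetical example: reweight platform users so demographic groups
# count according to their share of the general population
# (post-stratification). All numbers are made up for illustration.

# Share of each group among the platform's users (skewed)
platform_share = {"women_25_34": 0.45, "men_25_34": 0.15, "everyone_else": 0.40}

# Share of the same groups in the population you actually want to describe
population_share = {"women_25_34": 0.09, "men_25_34": 0.09, "everyone_else": 0.82}

# Weight per group: how much one platform user "counts" in the corrected estimate
weights = {g: population_share[g] / platform_share[g] for g in platform_share}

# Observed outcome on the platform, e.g. fraction of each group posting about a topic
observed = {"women_25_34": 0.30, "men_25_34": 0.10, "everyone_else": 0.05}

# Naive (unweighted) estimate just mirrors the platform's skewed mix...
naive = sum(platform_share[g] * observed[g] for g in observed)

# ...while the reweighted estimate reflects the target population's mix instead
corrected = sum(platform_share[g] * weights[g] * observed[g] for g in observed)

print(f"naive estimate: {naive:.3f}, corrected estimate: {corrected:.3f}")
```

Run it and the “corrected” number drops sharply, which is the whole point: the platform’s loudest demographic was inflating the estimate.
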
So go forth, researchers of the future, and make sense of this “big data” mess. You are now armed with the knowledge that it’s not how big your big data is. It’s how you use it that matters.

(via PCMag, image via hackNY.org)




Dan Van Winkle
Dan Van Winkle (he) is an editor and manager who has been working in digital media since 2013, first at now-defunct Geekosystem (RIP), and then at The Mary Sue starting in 2014, specializing in gaming, science, and technology. Outside of his professional experience, he has been active in video game modding and development as a hobby for many years. He lives in North Carolina with Lisa Brown (his wife) and Liz Lemon (their dog), both of whom are the best, and you will regret challenging him at Smash Bros.