In Search of a Social Media Truth Meter
Like me, you’ve probably seen news items, blog posts, and web pages that are cringeworthy for a number of reasons – including their lack of truthfulness. If we held a “truth meter” next to every college course, Wikipedia article, or television newscast, how many of them would pass the test? Yes, truth matters more than ever now that we’re consuming so much news and information online, unverified.
But “truth” is a tricky thing.
Truth is in the eye of the beholder
Except for the most basic statements, many “facts” are anything but provable or definitive. Often, their accuracy depends on the personal opinion of each reader, which, in turn, is informed by their education, status, culture, location (e.g., country of residence), upbringing, gender, race, and other factors.
Our individual perceptions of truth are also influenced by what we expect or want the truth to be. Does a narrative fit our preconceived notions? Does it validate a trend or cause that’s important to us? If so, we’re more likely to view it as true … and share it widely.
Truth can be deceptive – words matter
“Experts say the colonization of Mars could begin by 2040.”
Could that statement be considered factual? Well, using the word “could” certainly gives the writer some leeway. And you can find a person (and call them an “expert”) who will espouse pretty much anything you want to promote – readership included.
On its face, though, I’d say that statement is not un-factual, but it can certainly make for a misleading headline, given our tendency as readers and social media consumers to:
- Discount the fine print (words like “could”).
- Assign relatively equal credibility to anyone claiming to be an expert.
Thus, determining “truth” requires careful consideration of both the statement and the source. (One must also decide whether truth is the only standard that should be used when moderating social media content. Well-intentioned censorship is a topic for another day.)
Social media administration
Social media companies like Facebook and Twitter have decided to become their own arbiters of what’s true and what’s not, promising to remove lies and liars. But doing so in a way that accounts for every country, culture, religion, and language is simply not possible – from a logistical standpoint if nothing else. All these companies can do, in my opinion, is remove the most egregious falsehoods, leaving many potentially deceptive gray-area posts in place for readers to weigh for themselves.
It’s important to keep in mind how most people use social media – especially Facebook and Twitter. First there’s commercial promotion (“Try our new menu”), entertainment (videos from Bob Dylan’s newest album), and personal life sharing (“Yay, Dwayne finally graduated from high school!”).
Then there’s the potentially darker side of social media: advocating positions and policies (and/or criticizing those of others) based on supposed sources or statements of truth.
Who’s to judge?
No doubt, posts of this type are the ones Facebook and Twitter will and should focus on. But as long as humans are making the decisions, there will be content curation biases similar to the personal biases noted earlier (what we want to be true, for example), plus perhaps a tendency to avoid controversial posts lest they upset readers, advertisers, or policymakers. Monitoring social media with that kind of objectivity is a tall order for a human being, but with the right training and guidance, perhaps these human editors can protect truth, or at least move us in that direction.
What about automating the content monitoring process, using artificial intelligence (AI) to separate fact from fiction? Someone will need to program the algorithms, of course, and if that can be done objectively and thoroughly, this could conceivably work too – at least as well as today’s automated spellcheck and grammar programs, which don’t always get it right.
No doubt social media companies are already using AI to some extent and will likely take bigger steps in that direction. It’s an interesting experiment in allowing machines to control societal discourse. What could possibly go wrong with that!
As always, if there is a point I missed, I’d love to hear your thoughts.