The world is increasingly revolving around data with each year that passes.
Every technological advancement, and every new iteration of our current methods of producing and communicating data, sees the information economy develop at a breathtaking pace. Some of the information produced and communicated will inevitably be wrong for a myriad of reasons; human fallibility and the human willingness to deceive others in pursuit of a goal are arguably chief among them.
‘Misinformation’ was declared ‘word of the year’ in 2018 by Dictionary.com and defined by them as ‘false information that is spread, regardless of whether there is intent to mislead’.
Other definitions, formal or otherwise, abound on the internet. These definitions share the commonality that they are framed in opposition to ‘truthful’ information, similarly to how something ‘false’ is defined in opposition to something ‘true’ – to know what is ‘false’ one must therefore know what is ‘true’.
Defining ‘truth’ is a daunting and philosophically complex task that escapes the confines of this piece and easily defies the scope of any legislation and code for communication. The curious reader can have a taste of this complexity by reading one of the several entries associated with ‘truth’ in the Stanford Encyclopedia of Philosophy.
The convolutedness and the apparent impossibility of the quest to define ‘truth’ inevitably morphs into the more practical and manageable quest of who gets to say what is true, in which the authority for ‘truth’ is not truth itself but whoever gets appointed as arbiter of truth.
From here, the quest for truth devolves into a power struggle to manage the flow of information, the limits of acceptable discourse, and an Inquisitional hunt for intellectual dissenters.
The reader might think this is the material of conspiracy theorists. But remember, for example, that social and legacy media companies branded the so-called ‘lab leak hypothesis’ (that is, that SARS-CoV-2 escaped from a laboratory instead of arising in nature) as ‘false information’, only to later retract that label after pressure mounted following the publication of an article on Medium.com by Nicholas Wade laying out all the ‘clues’ that strengthened the hypothesis.
The mention of the Inquisition in the preceding paragraph is no coincidence: the struggle to manage information, and to decide what is and what is not ‘true’, spans the whole history of humanity and is hardly confined to religious zealotry; the Inquisition is but one of many examples.
Large companies (think tobacco, oil, and sweets companies) have joined the fight in the past to push against claims that their products cause cancer, addiction, neurological effects, obesity, and more. We would be mistaken, however, if we thought that only the profit-seeking and the religious seek to control what is ‘true’: even scientists and experts at some point in history rejected things we now take for granted: evolution, quantum physics, universal gravity, DNA, and more.
Experts can fall prey to ‘misinformation’.
But I am being terribly unfair to some well-meaning people. It is not that humans know what is ‘true’ in the ultimate sense, meaning a truth that will never change no matter how much evidence mounts in whatever is left of human history. Knowledge, as far as my inexperience goes and by way of a functional reduction, is the construction of purposeful information as an extension of, and also limited by, our capacity to obtain prior information, be this a priori or a posteriori (that is, independent or dependent on experience).
In this light, it is wise to remember that we did not begin our journey as humanity knowing that the Earth was round, that it was not the centre of the universe, or that we orbited the Sun instead of the Sun orbiting us.
We did not start our earthly voyage as a species knowing we were mammals evolved from a common ancestor we shared with chimpanzees, and that we were not an empirical creation of an unknown celestial entity we decided to call God.
It is even wiser to remember that the Earth being flat and the centre of the universe, and humans being empirically created by God, were the mainstream positions of their time, considered ‘true’, and that it was only further progress and inquiry – often opposed by people from all walks of life, experts included – that has allowed us to know what we now hold as ‘true’, some of which might not hold in the future.
The darts of my criticism are understandably directed towards those who are more interested in flexing their authoritarian muscles to control the flow of information for self-serving purposes, rather than towards those who are simply limited by their current abilities to obtain information.
Ramping up our efforts to curtail discourse, dialectic, scientific inquiry, and communication in general in the name of what is ‘true’ and against ‘misinformation’ is also an exercise in philosophical short-sightedness: it is important to note that science is just one frame of reference among many to understand the world in its expansive sense. Many other frames of reference are better equipped to understand what lies beyond the grasp of the scientific method.
Am I engaging in ‘misinformation’ if I claim, with the power of truth, that God exists? A scientist and an atheist might agree to say ‘yes’, but they would have missed the fact that the claim is better analysed from a teleological or moral perspective, rather than an empirical one, and that I might have uttered the claim from the former standpoints instead of the latter.
This highlights that the frame of reference is of ultimate importance when analysing what is ‘true’, and that the frame of reference from which information springs is often unknown to the receptor of the message. Claims to ‘misinformation’ often have more to do with the perceptual limitations of the one interpreting the truthfulness of the message received than with the hidden motivations and frame of reference of the emitter.
Even within the scientific frame of reference (the allegedly safe realm of objectivity and things we can accurately and reproducibly measure, where ‘misinformation’ could easily be argued about), things can get complicated.
The implications of Simpson’s Paradox in statistics perfectly illustrate this problem, and things can get even more complicated when the definitions for the constructs under study allow for some elasticity, for instance when assessing ‘violence’.
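To see why Simpson’s Paradox complicates even ‘objective’ measurement, consider a minimal sketch with hypothetical numbers (modelled on the classic kidney-stone treatment example): one treatment outperforms the other within every subgroup, yet the ranking reverses once the subgroups are pooled.

```python
# Hypothetical success counts illustrating Simpson's Paradox.
# Format: group -> arm -> (successes, total patients).
data = {
    "small": {"A": (81, 87),   "B": (234, 270)},
    "large": {"A": (192, 263), "B": (55, 80)},
}

def rate(successes, total):
    return successes / total

# Within each subgroup, treatment A outperforms treatment B...
for group, arms in data.items():
    a, b = rate(*arms["A"]), rate(*arms["B"])
    print(f"{group}: A={a:.0%}  B={b:.0%}")
    assert a > b

# ...yet treatment B outperforms A once the subgroups are pooled,
# because A was mostly given to the harder ("large") cases.
totals = {
    arm: (sum(data[g][arm][0] for g in data),
          sum(data[g][arm][1] for g in data))
    for arm in ("A", "B")
}
a_all, b_all = rate(*totals["A"]), rate(*totals["B"])
print(f"pooled: A={a_all:.0%}  B={b_all:.0%}")
assert b_all > a_all
```

Both the per-group claim (‘A is better’) and the pooled claim (‘B is better’) are arithmetically true of the same data; which one counts as ‘misinformation’ depends entirely on the frame of analysis.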
It is imperative that people engage in critical thinking to parse information, rather than outsourcing it to media, authorities, and other self-proclaimed arbiters of truth, who might be as misguided as any other person or might not be acting in good faith.
How do you evaluate whether someone has a rightful claim that a given piece of information is ‘misinformation’?
A good start is asking the person why it is ‘misinformation’.
The more this person appeals to authority (be it a consensus, or a high-profile figure) or uses borrowed ideas and opinions he did not arrive at himself (because, when asked, he cannot explain them in detail), the less likely this person is a good evaluator of the truthfulness of said claim.
Truth is authoritative, rather than authoritarian: it stands on the strength of the evidence and reasoning that backs it, not on the loudness or pervasiveness of the voices supporting it. A good evaluator of information will use logic and evidence, as well as honesty, transparency, patience, and nuance to show you why a claim is false. Importantly, a good evaluator will take the time to explain it.
The push for ‘managing misinformation’ is not only an authoritarian endeavour masquerading with good intentions in the name of ‘safety’ and ‘truth’, it is also an exercise in futility: information is generated and spread much, much faster than it can be managed.
I offer a better, more sustainable alternative.
The biggest threat to truthful information is not ‘misinformation’, but lack of trust in the person uttering this supposedly truthful information.
Authorities (governments, medical bodies, the police, etc.), news outlets, and other institutions derive their legitimacy from the trust that people bestow on them, trust that is nowadays severely compromised.
The rise of the information economy and social media has helped erode this trust, but to lay all the fault at technology’s feet is to turn a blind eye to the many ways in which all these institutions have lied, overtly or covertly, in public or in private; have manipulated information or tried to shift blame when caught in order to save face; or have simply not been honest and transparent about their motives and procedures. The problem of ‘misinformation’ is also partly of their own making. Humans are extremely sensitive to deceit, and the badge of authority can only do so much.
It is in this sea of mistrust in authorities that trust in supposedly ‘untrustworthy’ actors breeds.
Contrary to public commentary, however, these actors are trusted not because of the undeniability and factuality of the information they communicate, but because they are honest, transparent, familiar, and offer explanations that ‘make sense’ while supposedly trustworthy institutions dismiss the vulgus in intellectual and social contempt without even attempting to address their concerns in a manner required to build trust.
And when no explanations or answers are provided, other explanations and answers, as erroneous as they may be, will inevitably fill the void.
‘These people should just shut up and do as they’re told’, some mutter, while completely forgetting that there is nothing as untameable as the human spirit for inquiry and for intellectual freedom and forgetting also that these dismissals not only breed resentment, but also damage the trust they so desperately crave to remain legitimate and effective.
The results of this iterative game of dishonesty and intellectual dismissal are painfully obvious and can easily be argued as unsustainable.
The solution I champion relies on building trust once more, this time from the top down, even if it can be amply considered wishful thinking. Trust might be argued to be a two-way street, but my experience in this world tells me that you must give it first, because the other person will often be reluctant to trust you. This is especially true if you are the one interested in being trusted, or if you need to be trusted again after people have lost trust in you due to past wrongdoings.
How do we start healing?
The institutions that have lost trust are encouraged to come clean (a mea culpa goes a long way, even if it is just the first step), honour the promises they once made but failed to keep (at least some of them, as an act of good faith), pay the price required to heal (this might involve dismissing or prosecuting government officials or the CEO of a company to show they are willing to atone for their sins), and show that they will actively work to avoid repeating their mistakes (for example, repealing laws that allowed them to act wrongfully, changing the business model to remove the incentives that encouraged the wrongdoing, and inviting public scrutiny and feedback).
In the context of ‘misinformation’, encouraging people to parse information themselves and helping them develop critical thinking skills – instead of acting in a paternalistic fashion of wanting to curate their social media feeds or trying to censor popular podcasts they listen to – can also be perceived as an act of trust towards the intelligence, autonomy, and conscience of the layperson. Acts of trust do not go unnoticed.
Moving forward, governments and both public and private institutions could also do with a bit more libertarianism if they seek to gain the people’s trust and remain legitimate.
Why? Because libertarianism is an act of trust in the people. Instead of micromanaging them through never-ending red tape, mandates, and policy, letting the people make their own decisions within reasonable limits – limits held in place by the reality of well-meaning, well-rounded people, rather than by the constant risk aversion that pervades some institutions hellbent on mitigating liabilities – will give the people the capacity to flourish and to seek out the best alternatives for managing their goals and projects, away from the prying and controlling eyes of institutional busybodies. The people will repay their officials in kind. This understandably also applies to finding and generating truthful information and keeping ‘misinformation’ in check.
We do not need to bend over backwards managing ‘misinformation’. This will only make the problem worse.
What is needed is trust in authorities and officials, so that the truthful information they convey is trusted by association. But now it is time these institutions gave the people reasons to trust them once more. Only then will ‘misinformation’ naturally be kept at bay.