Post-Post-Truth: The future of media and disinformation after Trump

Four years of the Trump presidency gave rise to the term ‘post-truth’, thanks to the President’s constant exaggerations, obfuscations, and outright lies. When questioned on these, the administration never admitted fault, instead denying any version of events not espoused by the President. Even after lies, conspiracies, and attempted bribes, it took an armed insurrection encouraged by the President for Republicans to turn on him.

Sadly, the extent to which his narratives have been peddled by supporters, sycophants, and true believers means we may not be done with Trumpism even after he leaves the White House.

The infamous ‘alternative facts’ he offered, while ridiculed by the media, have been widely believed by Trump’s loyal base, and have confused or converted surprisingly large numbers of people into sharing his worldview, or at least whatever happened to catch Trump’s attention on a particular day.

Journalists have been undermined, belittled, slandered, and attacked, all labelled as ‘enemies of the people’ by the President himself, while conspiracy theorists, grifters, and racist bloggers have been promoted, gaining thousands or millions of followers. This has led to a dangerous level of disinformation being shared on social media, much of which is absorbed and taken to be true before anything can be done to challenge it.

With Biden incoming as the next President and Trump impeached, it’s tempting to believe this era of fake news and disinformation is over.

Sadly, it is not.

Disinformation has been shown to be incredibly successful at driving large-scale change, and bad actors are not going to stop using it to sow dissent, undermine faith in institutions, or to win political power. It works. They know it works. And it’s very hard to stop.

So, what can the media do to curb disinformation? First, they must look at what it is, how it works, and why it is so widely believed.

What is the difference between disinformation and misinformation?

While misinformation is merely a factually incorrect statement, disinformation is far more insidious, actively and deliberately sowing false information, usually for a specific purpose.

Disinformation has a context. Even when the media points out disinformation for what it is, that often only feeds the carefully constructed narrative of a hostile media landscape protecting itself and the status quo, in which any challenge is of course branded as lies or disinformation.

It’s a no-win scenario that we have been grappling with for years. But why do people believe it?

Why do people believe disinformation?

Disinformation has flourished online thanks to being shared by people’s extended network of friends and family on social media. If someone you know and trust shares a post that seems to ‘expose’ fraud, you’re more likely to believe it because of your connection to the person who shared it.

Beyond that, disinformation is often designed to elicit an emotive response, and emotionally charged claims are more likely to be accepted as fact, as long as the anger they provoke is already in line with your worldview.

The following quote from Brian E. Weeks neatly summarises why disinformation is so often designed to make people angry, and why that anger reinforces belief:

‘Angry individuals are more likely to process information — including false information — in a way that is consistent with their existing political attitudes or beliefs, leaving them more susceptible to believing misinformation that is damaging to political opponents. Given that much of the political misinformation in circulation during the 2020 election was designed to elicit anger, it is unsurprising that so much misinformation was taken as true.’

Of course, individual pieces of disinformation aren’t always believed. But when they form part of a larger narrative, one that broadly conforms to what people already believe, they’re far more likely to accept it and assimilate it into their worldview. It then becomes cemented into how they view the world, as this quote from Saif Shahin expands upon:

‘Believing in and acting upon a piece of (dis)information, therefore, has little to do with truth and lies, right and wrong. Instead, it is closely related to people’s partisan identities and has become a form of identity performance — a ritual of who you are and where you belong in the increasingly fragmented body politic. But identities are always constructed in opposition to an “other”: distrust of and antipathy toward the “other” is fundamental to the conception of the “self.” That is the reason why so much of disinformation is accusatory of the “other” side […].’

We’ve seen this play out at scale over the last four years. Political factionalism has turned every single issue into a signifier of which side you’re on. While the broader culture wars have always been raging, they have now been co-opted into political discourse to such an extent that almost every issue carries wider significance, even something as simple as wearing a mask to slow the spread of a deadly disease.

This means publications or groups that win followers through content and disinformation on one topic are more likely to convert those followers to care about other topics, because everything is part of the political spectrum. It’s why people who believe Trump’s lies about immigration are more likely to end up believing in chemtrails or QAnon, or to refuse a vaccine because they believe it contains a microchip the deep state will use to control them. This all has a real-world impact, and it’s why we can’t risk letting disinformation spread.

It’s clear that fighting disinformation will be an uphill battle. But it’s one we can’t afford to give up on.

How important is the fight against disinformation?

With Trump out of the picture, only time will tell whether the Republicans move away from the ‘post-truth’ rhetoric that was emblematic of his speeches and approach to politics or, having seen how successful candidates can be when they ignore the truth, lean into the disinformation and blatant lies that won them power in 2016.

Don’t forget that more people voted for Trump in this election than for any other Republican candidate in history, in large part thanks to disinformation and lies. What he does works, so it seems unlikely the party will completely abandon the techniques that have earned such a loyal (in some cases fanatical) following. Even with his back against the wall following the insurrection at the Capitol, some Republicans are still defending him and continuing the narrative that the election was stolen.

If they decide to continue as they have, social media and news sites have an obligation to fight disinformation however they can – because we’ve seen the damage it can do.

So, what can the media do to fight back?

How social media companies and news sites can fight disinformation

Social media sites like Facebook and Twitter have recently tried to tackle disinformation by tagging posts that may contain disinformation, or making people click on links before sharing them. However, this isn’t likely to make much of an impact, as the content is still widely accessible and shareable, ensuring it will spread.

Thankfully, they have since gone further, banning accounts that spread extreme content.

However, even tech giants must tread carefully, as apps like Parler can spring up overnight to provide alternative spaces for discussion geared entirely around conspiracy, which only makes the spread of disinformation more rampant. We’ll likely see copycat apps in the future, with their own terms of use that allow or even encourage this kind of behaviour.

Small changes in how we use the internet could have a big impact on how we think about news in general. Sites like Facebook and Twitter should encourage users to read articles, check sources, and think critically about who stands to gain from them reading or sharing an article.

Banning and removing bad actors who create or share disinformation is another option. While this may draw cries of censorship from those who believe the fake news, stopping disinformation at the source means it can’t reach people who may be easy to convert.

Biden has pledged to bring high-speed broadband and 5G to every American, but if nothing is done to protect new internet users from malicious and misleading content, that may end up doing more harm than good.

Social media’s role in spreading disinformation

Of course, we can’t forget that social media played a huge role in the rise of disinformation, not just through making it easier than ever for users to share misleading content, but by the very design of the sites themselves.

Social media content is run through algorithms whose sole purpose is to drive engagement. Content that is emotive, divisive, or inflammatory will always garner more views, likes, comments, and shares, whether through people agreeing with it or arguing against it. Platforms like Facebook, Twitter, and YouTube serve billions of people with billions of pieces of content, all woven into a tapestry of what they think a given user will want to see and click on next.
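
To make the incentive concrete, here is a minimal, purely illustrative sketch of an engagement-optimised feed ranker. The field names, weights, and scoring formula are all hypothetical assumptions for illustration, not any real platform's system; the point is simply that when ranking optimises engagement alone, inflammatory content that provokes comments and shares wins, and truthfulness never enters the calculation.

```python
# Toy sketch of engagement-based feed ranking (all weights hypothetical).
from dataclasses import dataclass

@dataclass
class Post:
    title: str
    likes: int
    comments: int
    shares: int

def engagement_score(post: Post) -> float:
    # Comments and shares are weighted more heavily than likes because they
    # signal stronger engagement -- including angry arguments in the replies.
    return post.likes + 3 * post.comments + 5 * post.shares

def rank_feed(posts: list[Post]) -> list[Post]:
    # Sort purely by predicted engagement; accuracy is not a factor anywhere.
    return sorted(posts, key=engagement_score, reverse=True)

feed = rank_feed([
    Post("Calm, factual report", likes=120, comments=5, shares=10),
    Post("Outrage-bait conspiracy", likes=80, comments=60, shares=40),
])
print([p.title for p in feed])  # the inflammatory post ranks first
```

Even in this crude model, the conspiracy post outranks the factual one despite receiving fewer likes, because arguments in the comments and outraged shares count for more.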

Someone who spends a lot of time online can quickly get sucked into a never-ending stream of recommended videos, groups, and livestreams, all engineered to keep users engaged and on the platform. This is how communities get started, built entirely around a single idea that soon spirals out into connected beliefs and, eventually, ideologies.

There’s an argument to be made that social media companies cannot stop the spread of disinformation because it is part of their business model. That’s why Twitter and Facebook waited until the last possible moment to remove Trump and start deleting QAnon accounts. Conspiracy theorists are some of their best users: they live on the platform, post regularly, and use virtually all of its features.

Once the algorithm starts serving you disinformation, it is very hard to stop, because it is chasing your engagement. Most users have no real understanding of this, or of the fact that all social platforms, including many news websites, are personalised to their browsing habits. Contrary to what many pundits believe, this doesn’t create an echo chamber. It creates an alternate reality for the user.

Social media’s purpose is to take individual thoughts, opinions, beliefs, ideas, and messages and share them at scale. Tagging content as disinformation won’t stop those ideas from spreading. Radical action is needed, and it is unlikely to come from the very people who got us into this mess in the first place.

Disinformation has already done damage to our politics, our economy, and our culture. It’s resulted in people taking risks with their health, even leading to death. It led to an armed uprising against the US government, which, if successful, would have resulted in politicians’ executions being livestreamed on the same sites which enabled this in the first place.

It can encourage violence, embolden racists, and tear apart families. Stamping it out is the first step towards returning to a shared reality, instead of the strange and polarized world we find ourselves in now.

Recent events have shown social media companies that the power they wield could determine the future of the world’s democratic systems. All eyes are on what they do next, as well as the transition of power from Trump to Biden, and what happens to Trump and QAnon over the coming days, weeks, and months.

To change the speed and scale at which disinformation flourishes, social media companies have to look at how they operate, and decide how to change for the better. This is not going to be a quick fix, but in the last few weeks we’ve already seen progress. Let’s hope it continues.

  • Jack Terry,
    Content Manager