Wednesday, October 21, 2020

How COVID-19 infected the world with lies

"Misinformation has exploded during the coronavirus pandemic, spreading faster and further than ever before. How do we slow the deception down?

How fast does a lie travel? Cordell Hull, the longest-serving US Secretary of State and "father of the United Nations," thought he'd worked it out. "A lie will gallop halfway round the world," he proclaimed in 1948, "before the truth has time to pull its breeches on." 

Hull shared his adage in a time before social media, before satellites and smartphones. There were no tweets. No Facebook posts. He couldn't have known the rise of the internet and a worldwide pandemic would expose a critical flaw in his aphorism some 70 years later.

In 2020, a lie circles the world countless times before the truth has a chance to hit "Post." 

At no time has that been more obvious than during the coronavirus pandemic. Since it emerged in December 2019, COVID-19 has infected 33 million people and killed more than 1 million. It's also revealed significant failures in the way we consume and share information. At the center of this fight: Facebook, Twitter and YouTube -- the most popular digital platforms in the world. "There's been this explosion of mis- and disinformation spreading via social media," says Axel Bruns, a digital media researcher at the Queensland University of Technology in Australia. 

On one front, we've battled a virus. On the other, we've battled misinformation.

Efforts by social media giants to manage the deluge of misinformation have largely fallen short. Coronavirus conspiracy theories infect every corner of the web, driven by frenzied Facebook posts and fatalistic tweets. YouTube has struggled to contain the spread of misleading videos about vaccination, microchips and Bill Gates. The science we rely on to inform the pandemic response has, at times, been distorted by rushed reporting. Incremental updates to public health information have muddied messaging across all of the biggest social networks. 

We live in the Misinformation Age. 

Misinformation is not a new problem. Some predicted the risk of viral misinformation long before COVID-19 emerged. But the biggest health crisis in a century has underscored the ease with which doubt can be sown online. "It's an order of magnitude bigger than anything we've seen before," Bruns says. Digital media researchers, psychologists and informatics specialists are beginning to grapple with the extent of our misinformation problem. With a presidential election looming in the US, there's now a heightened sense of urgency. We must learn to slow down a lie.

On science

During the pandemic, the pace of scientific research has accelerated dramatically. 

As scientists were just starting to grapple with the severity of the coronavirus that causes COVID-19, they began probing its genome for clues to where it originated and why it was so infectious. At the end of January, an alarming paper appeared online. A team of researchers suggested the genetic code of SARS-CoV-2 showed similarities to HIV, the virus that causes AIDS. 

The study was a "preprint," scientific literature that has not been peer-reviewed, posted to a server known as bioRxiv that houses preliminary research. Preprints don't generally make a huge splash in the media or online. But shortly after being posted, it was shared by Eric Feigl-Ding, a Harvard public health researcher who became a prominent coronavirus commentator on Twitter. He tweeted the HIV study to around 60,000 followers, calling it "very intriguing." 

Except it wasn't intriguing. It was junk. Feigl-Ding's tweet and bioRxiv were flooded with comments pointing out the study's flaws. Jason Weir, a biological scientist at the University of Toronto, said it only took "10 minutes to determine this was not serious science." But the study hit social media just as discredited conspiracy theories about the virus being a "bioweapon" first appeared. The two stories became entangled. A brief panic ensued. A day after the study appeared, the authors withdrew it, but it remains the most downloaded preprint ever, with almost 1 million downloads.

Coronavirus preprints have made headlines, but nothing had the reach and impact of the HIV study tweeted by Feigl-Ding. (Rxivist)

Science is self-correcting, slow and methodical. Studies are repeated multiple times before they're accepted as fact. Accumulated evidence leads to widely accepted conclusions. That process worked with the HIV study, but it also exposed a significant blind spot: Social media could send shoddy research viral before researchers could adequately review it. 

The rapid sharing of COVID-19 study results, preprints, news reports and press releases has enabled preliminary research to spread further than ever before, even when it's misleading or overtly false. This kind of science is "simply not ready for prime-time consumption," according to Gary Schwitzer, a health journalist and founder of medical consumer watchdog site HealthNewsReview. 

Science isn't failing, but scientists are "drowning" in COVID-19 papers, making it difficult to dedicate time to adequately examine new research and counter false claims. Over 30 studies related to COVID-19 have been retracted in the past 10 months. Preprints, like the HIV study, make up 11 of those retractions. Other controversial studies, some of which include questionable data and have informed public health decisions in the pandemic, have not been withdrawn.

When slipshod claims spread on social media, they get distorted further, making it "harder for scientists to control their messages," says Naomi Oreskes, a science historian at Harvard University. The HIV study has been scrubbed from the academic literature, but six months later it still gets shared on Twitter and Facebook as if it appeared yesterday. 

On conspiracy

Sometimes, a lie can start a fire.

Fears about phone radiation date back to early rollouts of wireless technology at the turn of the century. When wireless carriers announced the next-gen mobile technology 5G, panic over its potential health effects reignited. But the coronavirus pandemic helped 5G fears mutate into something more sinister. 

The convergence of two confusing, unknown entities -- a new virus and a new technology -- created a new myth. "There was already a distrust of the technology and, as COVID-19 emerged, social media users slowly started to link the two together," says Wasim Ahmed, a social media researcher at Newcastle University in the UK. 

Some falsely claimed 5G was weakening people's immune systems. Others suggested lockdowns were a cover for the installation of 5G towers, allowing governments to wirelessly control the public's minds. Ahmed and other researchers found that every time one head was cut off the conspiracy Hydra, two more grew back. 

The 5G conspiracy resulted in the deliberate destruction of mobile towers across the globe. Telco workers were subjected to verbal and physical abuse by those who viewed them as complicit in 5G's spread. In Birmingham, England, a 5G mast providing services to a COVID-19 hospital was destroyed, preventing communication between the sick and their family members.

An investigation by the Australian Broadcasting Corporation traced the 5G conspiracy back to a tweet posted on Jan. 19. A week later, notorious right-wing conspiracy channel Infowars boosted the false claims. On April 1, actor Woody Harrelson posted a video to his more than 2 million Instagram followers showing a communications tower ablaze and claiming Chinese citizens were "bringing 5G antennas down." Harrelson had been fooled. The video originated from the Hong Kong protests of 2019. It had nothing to do with 5G. 

Celebrities like Harrelson became super-spreaders, sharing various forms of 5G misinformation on personal social media pages to huge audiences. On April 4, rapper Wiz Khalifa shared a tweet with his 36 million followers that simply asked "Corona? 5G? Or both?" Google Trends data show searches for "5G coronavirus" peaked in the week following the pair's posts.

On April 6, Facebook and YouTube began removing misinformation regarding 5G and COVID-19. But the myths had been seeded as early as February. Ahmed suggests social media networks were "a bit slow" in dealing with misleading posts. It was too late. 

On politics

One drug has dominated the increasingly polarized discourse during the pandemic: hydroxychloroquine. The antimalarial, in use for over 50 years, has been championed widely as a coronavirus quick fix but remains an enigmatic compound.

"Its exact mechanism of action isn't completely understood," says Ian Wicks, a clinician and rheumatologist at the Walter and Eliza Hall Institute of Medical Research in Melbourne, Australia. 

Hydroxychloroquine was thrust into the limelight when President Donald Trump touted the drug as having the potential "to be one of the biggest game changers in the history of medicine." Later, on May 18, he admitted he had been taking it as a preventative. The scientific consensus is at odds with Trump. "We have so many trials showing that it does not work for the prevention or treatment of COVID-19," says Jinoos Yazdany, a rheumatologist at Zuckerberg San Francisco General Hospital. It didn't matter. Hydroxychloroquine had become a political ideology. 

Hydroxychloroquine is not an effective coronavirus treatment. (George Frey/Getty)

And it continued to be championed. In July, a group of lab coat-clad doctors promoted hydroxychloroquine as a COVID-19 "cure" in a Facebook livestream. The event, covered predominantly by right-wing news publications like Breitbart, led to a second wave of misinformation more potent and widespread than the first. Trump himself retweeted a short clip of the doctors, doubling down on his earlier comments. Pro-Trump accounts on social media networks like Twitter and Facebook quickly spread it further. 

Wicks, who is evaluating hydroxychloroquine's potential as a preventative against COVID-19 infection, notes his clinical trials have "been made more difficult by the politicization of the issue." Politicization has become a common theme across social media. A study in the journal Science Advances in July showed "a substantial partisan divide" in how the pandemic got communicated by Republicans and Democrats on Twitter. Trump has publicly downplayed the need for face coverings, for instance, while many prominent Democrats made sure to wear them in public.

The doubt-mongering surrounding hydroxychloroquine followed an old pattern seen in previous health controversies, such as the fights over tobacco smoke and pesticide use. Political agendas were placed above public health concerns. Misinformation was rampant and, at times, used to deceive and disorient. Social media made it much easier to spread the confusion, Oreskes notes. 

On harmful BS

It's impossible to single out one aspect of the pandemic as the root cause of our disordered relationship with truth. Traditional media has helped propagate some of the most outrageous conspiracy theories, extreme outlets have polarized public discourse, and President Trump himself has been blamed as a major cause of health misinformation during the pandemic.

But in all of the examples above, and dozens more, social media is a pervasive thread, the horse that gallops lies around the world before truth has time to pull its breeches on.

This isn't a revelatory conclusion. The 2016 US presidential election demonstrated how social networks could be used to deliver hoaxes and falsehoods to potentially millions of people at the click of a mouse. Platforms like Facebook and Google said they'd clamp down on misinformation, but it's only gotten worse. 

"Technology enables the spread of misinformation in a way that wasn't possible before," says Sander van der Linden, a social psychologist at the University of Cambridge. News doesn't come from a TV station or a local paper anymore -- now it comes from your ill-informed uncle.

Data from the Pew Research Center study, conducted in June. (Pew Research Center)

On July 30, the Pew Research Center reported that US adults who get their news via social media are less likely than other news consumers to follow major news stories. They're also more exposed to unproven claims and conspiracy theories, and less likely to get the facts right about the coronavirus. That's concerning when you look at other Pew research showing that 26% of US adults consider YouTube an important source of news. It becomes problematic when we decide to share information without adequately vetting it.

"There have been some experiments to show that as the rate of information we are exposed to increases, the likelihood that we will share low credibility information also increases," says Adam Dunn, head of biomedical informatics and digital health at the University of Sydney. 

The major platforms have tried to keep misinformation at bay, particularly in regard to conspiracy theories. Reddit removed subreddits related to the QAnon conspiracy theory in 2018. Facebook has taken extensive action recently, and Twitter banned 150,000 accounts related to QAnon in July. But there has been a reluctance to remove misinformation outright, with the likes of Facebook falling back on the "free speech excuse" to eschew responsibility. 

"The inability or refusal of some online social media giants to enforce adequate policing of harmful BS is an ongoing, serious problem," says Schwitzer, the editor of HealthNewsReview.

Facebook doesn't actively remove false or misleading content unless it causes immediate physical harm. Instead, it alerts users with labels explaining Facebook's fact-checking team has rated the content as false. Erroneous claims still slip through. "Facebook can and should do a better job of screening out false claims that present a clear and present danger to its customers," Oreskes says. "They promised they would do so on climate change, but they really have not lived up to that promise."

A Facebook spokeswoman said the company has removed around 7 million posts and labeled 98 million as misleading since the beginning of the pandemic. Twitter said it's continuing to explore ways of reporting misleading health content and it's deploying warnings users must tap through if they wish to reshare information deemed misleading.

A YouTube spokesperson did not respond to a request for comment.

Facebook, Twitter and YouTube have also moved to elevate authoritative content in timelines and feeds, changing what users see when they search for problematic information. But this may not actually help. "This doesn't match how people actually use most social media platforms," Dunn says. "Modifying search results is really a poorly targeted solution."

Users are more likely to let information come to them, rather than seek it out, so information hubs may have little to no effect on stemming the spread of erroneous information. "If I follow people and organizations who share misinformation, then not only am I going to see it without searching for it, but I am more likely to trust it or find it salient," says Dunn.

Almost every researcher suggested the major platforms have taken steps to curb the spread of misinformation, but they could -- and should -- be doing more. "The focus is often on technological solutions and fact-checking, which we know isn't sufficient," says van der Linden.

On a world without social media

Throughout July and August, I posed a thought experiment to over a dozen researchers: What would the world look like without social media? 

Many pointed to the positive effects Facebook, Twitter and YouTube have on communication. "Never before in history have people been so well informed," says Sora Park, a digital media researcher at the University of Canberra in Australia.

Park's research has shown social media users can be highly skeptical of what they see online. In an April survey of over 2,000 Australians aged 18 and older, her team found social media users were more likely to undertake "verification activities," such as consulting a fact-checking website or established news sources, than those who got their news from politicians or TV. However, they were also more likely to share and forward misinformation to other people -- increasing its spread. 

Social media has also fundamentally changed our access to scientists. 

Scientific studies were once covered only sporadically by traditional media; now scientists discuss the minutiae of a discovery directly with their followers. During the pandemic, these experts have worked to inform audiences via social media, and their follower counts have often swelled by tens of thousands.

"I am impressed by how many intelligent physicians, researchers and other academics have found time in their hectic lives to help people understand complex topics," Schwitzer says.

We shouldn't "demonize" social media, Axel Bruns suggests. "What we should demonize is what people do with social media, if anything," he says. Bruns notes that the platforms are only amplifying the underlying distrust of government, science and traditional news media, not causing it. Social media can help rapidly debunk misleading content, too, he argues. He gives the example of tennis star Pat Cash getting slammed after publishing pandemic conspiracy theories on Twitter.

We must accept misinformation as part of the fabric of our ultra-connected world, says Dunn, who notes that without the likes of Facebook, Twitter or YouTube, "the rich and powerful could more easily control information." We'd be in a worse situation when it comes to equality and justice, too, because social media is undoubtedly a powerful tool to unify marginalized groups.

If the focus shifted from critiquing platforms to educating users, we might be able to slow a lie down more effectively. "I would rather see us spend more time supporting people with the tools they need to assess what they see online," Dunn says, noting that we must reconcile ourselves to the fact that what people see online is shaped by the communities they choose, rather than by international interference or bots.

On the speed of a lie

There's an obvious conflict of interest for the social media giants. They have an ethical and social responsibility to deal with misinformation, but their business models aim to trap users in the doom-scroll, engaging with post after post: liking, retweeting, reacting and sharing content endlessly. In this ecosystem, posts don't have to be true; they just have to inspire enough of an emotional response to keep users on the page. 

Campaigns to deactivate or detox from social media have failed to drive users away, self-regulation has put content moderators at risk and governmental oversight has struggled to get off the ground -- so what do we do?

The short, sobering answer: We're not entirely sure.

Misinformation is an increasingly complex problem that crosses many disciplines, from digital research to human behavior and psychology. The growing body of theory on how to deal with misinformation doesn't always translate into practice. As with the coronavirus pandemic itself, there's no simple solution. 

Researchers recognize the urgent need to immunize ourselves against misinformation. The social media giants must use their platforms to help users separate fact from fiction. "More investment needs to be made in media literacy to equip the public with better ways of identifying misinformation," says Caroline Fisher, deputy director of the News and Media Research Centre at Australia's University of Canberra. 

"The problem is usually that people don't have the basic skills or training to know what to look for, or the motivation to seek the truth," notes Douglas MacFarlane, a psychology Ph.D. candidate at the University of Western Australia studying health misinformation. We're enamored by listicles and emotionally engaging posts, which we consume and share more readily. Sometimes, when users share misinformation knowingly, they may be doing so as a form of social endorsement. "They are motivated to fly the flag of their worldview and group identity," says MacFarlane.

Bruns says controlling misinformation can only occur by "getting a greater number of people to be far more cautious about the information they encounter and pass on." He suggests we must build a greater awareness of where news is coming from so when we see misinformation shared by our friends, we aren't so prone to spreading it further. 

"Stop seeing this as a technological problem that has technological solutions, and start treating it as a social and societal problem," he says.

In late July, Margaret Sullivan at the Washington Post suggested America had lost the war against misinformation. It's true the scale of our misinformation problem is immense. It extends far beyond the pandemic, but we can't concede defeat. This is a critical juncture in the battle. The patchwork solutions provided by our social media overlords have clearly been insufficient.

Lies will always spread faster and further than the truth. Cordell Hull understood this in 1948. The pandemic hammered the point home. We can't doom-scroll past the problem any longer.
