Friday, April 01, 2022

DETECTING BULLSHIT: Studying the spread of misinformation should become a top scientific priority, says biologist Carl Bergstrom. (By Kai Kupferschmidt, SCIENCE Magazine)

It is April Fool's Day. Good article from Germany-based SCIENCE Magazine correspondent Kai Kupferschmidt about detecting "bullshit" -- the disinformation and misinformation making fools of us all.

Public policy must be based on good science, not junk science or no science at all.

Our dysfunctional gooberish Flori-DUH Governor RONALD DION DeSANTIS, our former Congressman from St. Johns County, is a dupe and devotee of junk science, as evidenced by his hiring of a malfeasant California physician as Florida Surgeon General.

Enough flummery, dupery and nincompoopery.

From SCIENCE Magazine, published by the American Association for the Advancement of Science (a scientific magazine where the late David Brian Wallace worked for thirteen years; I was inspired in representing him in a discrimination and wrongful termination case against AAAS, winning a settlement of his NLRB and D.C. Human Rights Commission charges):


Illustration of a scientist observing a nest full of Twitter birds. DAVIDE BONAZZI/SALZMANART

DETECTING BULLSHIT

Studying the spread of misinformation should become a top scientific priority, says biologist Carl Bergstrom

A version of this story appeared in Science, Vol 375, Issue 6587.

When Carl Bergstrom worked on plans to prepare the United States for a hypothetical pandemic, in the early 2000s, he and his colleagues were worried vaccines might not get to those who needed them most. “We thought the problem would be to keep people from putting up barricades and stopping the truck and taking all the vaccines off it, giving them to each other,” he recalls.

When COVID-19 arrived, things played out quite differently. One-quarter of U.S. adults remain unvaccinated against a virus that has killed nearly 1 million Americans. “Our ability to convince people that this was a vaccine that was going to save a lot of lives and that everyone needed to take was much, much worse than most of us imagined,” Bergstrom says.

He is convinced this catastrophic failure can be traced to social media networks and their power to spread false information—in this case about vaccines—far and fast. “Bullshit” is Bergstrom’s umbrella term for the falsehoods that propagate online—both misinformation, which is spread inadvertently, and disinformation, designed to spread falsehoods deliberately.

An evolutionary biologist at the University of Washington (UW), Seattle, Bergstrom has studied the evolution of cooperation and communication in animals, influenza pandemics, and the best ways to rank scientific journals. But over the past 5 years, he has become more and more interested in how “bullshit” spreads through our information ecosystem. He started fighting it before COVID-19 emerged—through a popular book, a course he gives at UW’s Center for an Informed Public, and, ironically, a vigorous presence on social media—but the pandemic underscored how persuasive and powerful misinformation is, he says.

“Misinformation has reached crisis proportions,” Bergstrom and his UW colleague Jevin West wrote in a 2021 paper in the Proceedings of the National Academy of Sciences (PNAS). “It poses a risk to international peace, interferes with democratic decision-making, endangers the well-being of the planet, and threatens public health.” In another PNAS paper, Bergstrom and others issued a call to arms for researchers to study misinformation and learn how to stop it.

That research field is now taking off. “You have scholars from so many different disciplines coming together around this common theme”—including biology, physics, sociology, and psychology—says Philipp Lorenz-Spreen, a physicist who studies social media networks at the Max Planck Institute for Human Development.


But the influx has yet to coalesce into a coherent field, says Michael Bang Petersen, a political scientist at Aarhus University. “It’s still spread out in different disciplines and we are not really talking that much together.” There’s also disagreement about how best to study the phenomenon, and how significant its effects are. “The field is really in its infancy,” Lorenz-Spreen says.

BERGSTROM GREW UP in Michigan, but as a child twice spent almost a year in Australia, where his father, an economist, was on a sabbatical. “That’s where I fell in love with birds,” he says. In his bedroom he had a poster of all the parrots of Australia: “It’s like you handed an 8-year-old a bunch of crayons and a bunch of outlines of parrots and just said, ‘Make them pretty.’” Corvids are his favorite birds because of their cognitive abilities. On Twitter, his avatar is a crow.

Carl Bergstrom, a birding enthusiast who in the past studied animal communication, looks at social media through an evolutionary lens. MEGAN FARMER

As a biology student at Stanford University, Bergstrom grew fascinated by communication—an evolutionary puzzle because of the potential for deception. “If I listen to you, I give you a handle over my behavior,” Bergstrom says. “I might do the things you want, even if they’re not in my interest.” In his Ph.D. thesis, he tackled the question of how communication can stay useful when there is so much to be gained from misusing it.

In nature, he concluded the answer is often that lies are costly. Begging for food makes baby birds vulnerable, for example, so they have an incentive to do it only when necessary. “If you’re just a defenseless ball of meat sitting in a nest and can’t go anywhere, yelling at the top of your lungs is amazingly stupid,” Bergstrom says. “If they’re not really hungry, they’ll just shut up.” On social media, such repercussions barely exist, he says: Liars have little incentive to shut up.
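
The logic is easy to put in back-of-envelope form. Here is a minimal sketch of the costly-signaling argument; the payoff numbers are invented for illustration, not taken from the article or from Bergstrom's work:

    # Back-of-envelope costly-signaling logic: begging is worth the
    # predation risk only when the chick is genuinely hungry, so the
    # cost itself keeps the signal honest. All payoffs are invented.
    PREDATION_COST = 3                       # expected cost of yelling in the nest
    FOOD_VALUE = {"hungry": 10, "full": 1}   # benefit of being fed, by state

    for state, benefit in FOOD_VALUE.items():
        beg = benefit > PREDATION_COST       # beg only if food is worth the risk
        print(f"{state:6s}: benefit {benefit:2d} vs cost {PREDATION_COST} -> "
              f"{'beg' if beg else 'stay quiet'}")

With a real cost attached, only genuinely hungry chicks come out ahead by begging; remove the cost, as social media does, and the honest equilibrium collapses.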

In the early 2000s, while a postdoc in the lab of population biologist Bruce Levin at Emory University, Bergstrom awoke to the threat of infectious disease. He collaborated with a fellow postdoc, Marc Lipsitch—now an epidemiologist at the Harvard T.H. Chan School of Public Health—on papers about pandemic preparedness. Then he delved into network theory, which aims to mathematically describe the properties of networks, including those that spread disease. “It appealed to multiple aspects of my intellectual interests,” he says.

Network theory in turn led Bergstrom to the spread of information. Together with Martin Rosvall, a physicist at Umeå University, he found a way to use citation data to create “maps” of the scientific enterprise that showed, for example, how fields cluster and which scientists are most influential. “The algorithms we came up with turned out to work much, much better than I expected,” Bergstrom says. “I still don’t really understand why they are so good.”
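
The article doesn't spell out the method, but the family of algorithms is well documented: Bergstrom's Eigenfactor journal rankings and the "map equation" he developed with Rosvall both score nodes by how often a random walker following citations lands on them. Below is a minimal sketch of that random-walk idea on an invented three-journal citation graph; it illustrates the general approach, not their published code:

    # Sketch of a random-walk influence score on a toy citation graph.
    # citations[a][b] = number of times journal a cites journal b.
    # Journal names and counts are made up for illustration.
    citations = {
        "J.Theor.Biol": {"Evolution": 30, "Am.Nat": 20},
        "Evolution":    {"J.Theor.Biol": 10, "Am.Nat": 40},
        "Am.Nat":       {"Evolution": 25, "J.Theor.Biol": 15},
    }

    journals = list(citations)
    damping = 0.85                      # probability of following a citation
    score = {j: 1.0 / len(journals) for j in journals}

    for _ in range(100):                # power iteration to a fixed point
        new = {j: (1 - damping) / len(journals) for j in journals}
        for src, targets in citations.items():
            total = sum(targets.values())
            for dst, n in targets.items():
                # a random reader at `src` follows a citation to `dst`
                # with probability proportional to the citation count
                new[dst] += damping * score[src] * n / total
        score = new

    for j, s in sorted(score.items(), key=lambda kv: -kv[1]):
        print(f"{j:14s} {s:.3f}")

Journals (or scientists) that absorb more of the walker's time score as more influential; clustering the walker's paths is what yields the "maps" of fields.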

Bergstrom’s path through different disciplines is a testament to his curiosity and creativity, West says: “You’d be hard-pressed to find someone that really has moved around in such disparate fields and had impacts in all these different areas.”

Around 2017, Bergstrom’s interests started to coalesce around the topic of misinformation. The debate about Russian misinformation and the role it played in the 2016 election of Donald Trump as U.S. president, and an inspiring 2017 meeting about misinformation organized by biologist Joe Bak-Coleman, then at Princeton University, made him realize “this is actually a huge problem and one that’s going to take all these different approaches to deal with it, many of which I personally found very interesting,” he says.

BERGSTROM SEES SOCIAL MEDIA, like many other things in life, through an evolutionary lens. The popular platforms exploit humanity’s need for social validation and constant chatter, a product of our evolution, he says. He compares it to our craving for sugar, which was beneficial in an environment where sweetness was rare and signaled nutritious food, but can make us sick in a world where sugar is everywhere. Facebook exploits humans’ thirst for contact, in his view, like a Coca-Cola for the mind, allowing people to connect with others in larger numbers during a single day than they might have over a lifetime in humanity’s past.

And whereas Coca-Cola cannot tweak its formula on a weekly basis, social media platforms can constantly change their algorithms and test out new strategies to keep us engaged. “The social media companies are able to run the largest scale psychological experiments in history by many orders of magnitude, and they’re running them in real time on all of us,” Bergstrom says.

Carl Bergstrom bird-watching.
Carl Bergstrom is fascinated by communication—an evolutionary puzzle because of the potential for deception. “If I listen to you, I give you a handle over my behavior,” he says.MEGAN FARMER

Often, engagement comes from crass conflict: “In a schoolyard people cluster around fights, and the same thing happens on Twitter,” he says. Zeynep Tufekci, a sociologist at Columbia University, agrees. “Social connection is ingroup/outgroup,” Tufekci says. That promotes polarization and tribalism as well as exaggeration and misinformation, she says.

Online networks also undermine traditional rules of thumb about communication. Before the advent of the internet, for example, hearing the same information from multiple people made it more trustworthy. “In the physical world, it would be almost impossible to meet anyone else who thinks the world is flat,” Stephan Lewandowsky, a psychologist at the University of Bristol, wrote in an email. “But online, I can connect with the other .000001% of people who hold that belief, and may gather the (false) impression that it is widely shared.”

Social media companies have little incentive to change their practices because they make money selling ads. “The network structures along which we share information have changed radically in the last 20 years, and they’ve changed without any kind of stewardship,” Bergstrom says. “They’ve changed basically just to help some tech startups sell ads.”

Seeing the problem does not mean he is immune to it. Bergstrom admits sometimes waking up at 4 a.m. and checking his Twitter mentions. “That’s the stupidest thing I could possibly do. Because an hour and a half later, I’m pissed off and can’t sleep,” he says. “It works on all of us, even those of us who know what they’re doing.”

In a perspective published in PNAS last year, Bergstrom and 16 other scientists from various fields argued that the study of how the information ecosystem influences human collective behavior needed to become a “crisis discipline,” much like climate science, that could also suggest ways to tackle the issue. “That paper was just pointing out that the building is on fire,” says Bak-Coleman, the lead author, who’s now also at UW’s Center for an Informed Public. The problem is we don’t know how to quench the fire, he says.

ONE KEY PROBLEM is that the way information spreads on social media is determined by the platforms’ proprietary algorithms, which scientists have not been able to study. “Even if there was a crisis discipline like Carl wants it, we simply do not have the data,” says Dietram Scheufele, who studies science communication at the University of Wisconsin, Madison. (The algorithms have helped bring about a “tectonic shift in the balance of power in science information ecologies,” Scheufele and his colleague Dominique Brossard argued in a recent perspective in Science.) “The only way in which we can create that discipline is by policymakers forcing tech companies to provide data access,” Petersen says.

Researchers have tried to understand the flow of mis- and disinformation in other ways, but the results are often not clear-cut. Last year, a report by the Center for Countering Digital Hate claimed that just 12 people—whom it dubbed the “disinformation dozen”—were the source of 73% of misinformation about COVID-19 on Facebook. Banning these “superspreaders” could reduce the amount of misinformation significantly, the authors suggested. But Meta, Facebook’s parent company, pushed back against what it called “a faulty narrative” in a blog post. The report was based on just 483 pieces of content from only 30 groups and “in no way representative of the hundreds of millions of posts that people have shared about COVID-19 vaccines in the past months on Facebook,” Meta said.


In 2018, researchers from the Massachusetts Institute of Technology published a study in Science showing false news spreads “farther, faster, deeper, and more broadly than the truth.” The reason is that people like novelty, and false stories are likely to be more novel, the authors suggested. If true, this might allow false news to be identified automatically, simply through the way it spreads.

But the picture is complicated. The paper looked specifically at a subset of news that had been fact-checked by independent organizations such as Snopes, which meant the stories had to spread far and wide to merit that kind of attention. The researchers replicated their results with a larger news sample that was not fact-checked, but whether the conclusions apply to all fake news is not clear. And a reanalysis by other researchers late last year found that whereas the fact-checked news stories did spread further, the pattern of their spread was not different from true news that reached a similar number of people.
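
The structural measures at issue are simple to state. Here is a minimal sketch of two of them, depth (the longest retweet chain) and maximum breadth (the widest "generation"), computed on an invented retweet tree; the study's real cascades came from Twitter data, not toy dictionaries like this:

    # Two cascade measures from the false-news literature: depth
    # (longest retweet chain) and max breadth (widest level).
    # The retweet tree below is invented for illustration.
    from collections import defaultdict, deque

    # child -> parent: who retweeted whom (the root has parent None)
    parent = {"a": None, "b": "a", "c": "a", "d": "b", "e": "b", "f": "d"}

    children = defaultdict(list)
    for node, p in parent.items():
        if p is not None:
            children[p].append(node)

    root = next(n for n, p in parent.items() if p is None)

    # breadth-first walk, tracking each node's level in the cascade
    depth, level_sizes = 0, defaultdict(int)
    queue = deque([(root, 0)])
    while queue:
        node, lvl = queue.popleft()
        depth = max(depth, lvl)
        level_sizes[lvl] += 1
        for child in children[node]:
            queue.append((child, lvl + 1))

    print("size       :", len(parent))               # users in the cascade
    print("depth      :", depth)                     # longest unbroken chain
    print("max breadth:", max(level_sizes.values())) # widest single level

On the fact-checked cascades, false news scored higher on such measures; the dispute raised by the reanalysis is whether that difference survives once you compare cascades that reached a similar number of people.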

Bak-Coleman, who has studied how schools of fish suppress false alarms from fish in the periphery of the swarm, believes the density of connections on social media makes it harder to filter out bad information. Bergstrom agrees. “If we actually cared about resisting disinformation, Twitter and Facebook might be better off if they said, ‘Look, you can have 300 friends and that’s it,’” he says.

But that, too, needs study, he says, as do strategic questions: “What would agents do if they wanted to try to inject misinformation into a network? Where would they want to be? And then the flip side: How do you try to counter that?”
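
Those questions invite simulation. Below is a toy independent-cascade model that frames Bergstrom's 300-friend thought experiment: the same rumor is seeded into random networks whose users have different connection caps. Every parameter is invented; the sketch frames the question rather than settles it:

    # Toy independent-cascade model: does capping each user's
    # connections dampen a rumor's reach? Network size, caps, and
    # the share probability are all invented.
    import random

    def mean_reach(n_users, friend_cap, p_share, trials=10, seed=1):
        rng = random.Random(seed)
        total = 0
        for _ in range(trials):
            # random friend lists, capped at friend_cap per user
            friends = [rng.sample(range(n_users), friend_cap)
                       for _ in range(n_users)]
            seen, frontier = {0}, [0]        # user 0 posts the rumor
            while frontier:
                nxt = []
                for u in frontier:
                    for v in friends[u]:
                        # each exposed friend reshares with probability p_share
                        if v not in seen and rng.random() < p_share:
                            seen.add(v)
                            nxt.append(v)
                frontier = nxt
            total += len(seen)
        return total / trials

    for cap in (30, 300, 900):
        print(f"friend cap {cap:3d}: rumor reaches "
              f"~{mean_reach(1000, cap, 0.01):.0f} of 1000 users")

The numbers are meaningless in themselves; the point is that reach rises sharply with connection density, which is the intuition behind Bergstrom's suggestion.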

TO MAKE PROGRESS, some researchers say, the budding field needs to focus less on the network’s properties and more on its human nodes. Like viruses, misinformation needs people to spread, Tufekci says. “So what you want to really do is study the people end of it,” including people’s reasons for clicking Like or Retweet, and whether misinformation changes their behavior and beliefs.

That’s also difficult to study, however. At UW’s Center for an Informed Public, billions of online conversations are captured every year. If a certain piece of misinformation is identified, “you can go about measuring how it’s amplified, how fast it grows, who’s amplifying it,” says West, who directs the center. “But it is very difficult to see whether that translates into behavior, and not just behavior, but beliefs.”

A review of 45 studies on misinformation about COVID-19 vaccines, recently published as a preprint by researchers in Norway, concluded that—although misinformation was rampant—there were few high-quality studies of its effects. “There is a need for more robust designs to become more certain regarding the actual effect of social media misinformation on vaccine hesitancy,” the authors concluded.

Scientists have tried to study the issue by isolating a very small part of the problem. A recent paper in Nature Human Behaviour, for example, reported the results of an experiment conducted in September 2020, before COVID-19 vaccines became available. Researchers asked 4000 people in both the United Kingdom and the United States whether they planned to get vaccinated, exposed them to either facts or false information about the vaccines in development, then measured their intent again. In both countries, exposure to misinformation led to a decline of six percentage points in the share of people saying they would “definitely” accept a vaccine.

Tufekci has no doubt social media has a major impact on society: “Just look around. It’s a complete shift in how the information ecology works at a social level. How can you not expect it to have an impact?” But small-scale lab studies simply can’t properly measure the problem, she says. “An ecological shift of this nature doesn’t lend itself to that kind of study.” People in the real world are likely exposed not to one piece of misinformation, but to a lot of it over time, often coming to them through friends, family, or other people they trust. And online misinformation cascades through the information ecosystem, pushing more traditional media to spread it as well. “Fox News is kind of competing with that stuff on Facebook,” Tufekci says. “Fox News doesn’t operate in a vacuum.”

On Twitter, Carl Bergstrom has battled against—and reflected on—the rise of disinformation.

But Scheufele isn’t so sure misinformation has a big impact. “There is a correlation of course between all this misinformation and the decision by so many people not to get vaccinated, but correlation does not mean causation,” he says. He believes people choose information to conform to their world view, not the other way around. In his view, misinformation is a symptom, but the real disease is polarization and a political system and societal climate that rewards it. Given the deep political polarization in the United States and Trump’s “unusual” presidency, the pandemic “was always going to be a shitshow” there, Scheufele says.

A recent review in Nature, however, argued that people do not fall for misinformation because of polarization. The authors cited studies suggesting true information is more likely to be believed than false information, even if it doesn’t align with one’s political views. “Politics does not trump truth,” they concluded. The real problem is people sharing information with little attention to whether it is true, the authors wrote. “Rather than being bamboozled by partisanship, people often fail to discern truth from fiction because they fail to stop and reflect about the accuracy of what they see on social media.”

Reining in such thoughtless sharing is the goal of two approaches to tackling misinformation. One, known in the field as “nudging,” includes anything from flagging suspicious information—for example because it’s based on few or anonymous sources—to making it harder to share something. A platform might force users to copy and paste material before sharing it, or put a limit on how often a post can be reshared. “It’s being shown again and again that it has some benefits when you give people a little time to think about their decision to share something,” Lorenz-Spreen says.
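
As one concrete illustration, here is a minimal sketch of the reshare-limit nudge mentioned above. The Post class and the cap of 5 are invented, not any platform's actual implementation (WhatsApp's forward limits work in this spirit):

    # Sketch of one nudge named above: capping a post's reshare chain.
    # The class and the cap are hypothetical.
    from dataclasses import dataclass

    RESHARE_LIMIT = 5          # hypothetical cap on a reshare chain

    @dataclass
    class Post:
        text: str
        reshares: int = 0      # how many hands this copy has passed through

    def reshare(post):
        """Return a forwarded copy, or None once the chain hits the cap."""
        if post.reshares >= RESHARE_LIMIT:
            return None        # the friction: copy-paste it yourself, and pause to think
        return Post(post.text, post.reshares + 1)

    p = Post("dubious claim")
    hops = 0
    while (p := reshare(p)) is not None:
        hops += 1
    print("reshares allowed:", hops)   # -> 5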

The other approach, “boosting,” is designed to improve users’ critical skills. This includes “prebunking”—teaching people how to spot misinformation before they encounter it—and “lateral reading”—verifying new information as you read it by consulting outside sources. Those skills are what Bergstrom has tried to teach in a course he taught with West and in their book, Calling Bullshit: The Art of Skepticism in a Data-Driven World.

WHEN THE PANDEMIC STARTED in early 2020, Bergstrom initially thought it would be a great opportunity to study the spread of misinformation, drawing on his background in network theory. “I thought when this has all gone away in March, then I can go through these data sets and figure out what were the spread patterns that we were seeing.” But the pandemic did not go away and Bergstrom was sucked into the practical work of explaining the science and calling out misinformation. He does so primarily in long threads on Twitter, where he now has more than 150,000 followers.

In early 2020, for example, he took on Eric Feigl-Ding, a nutritional epidemiologist then at Harvard Chan who amassed a huge following with what many scientists felt were alarmist tweets. When Feigl-Ding tweeted about a preprint claiming that SARS-CoV-2 contained sequences from HIV and was likely engineered, Bergstrom called him an “alarmist attention-seeker.” (The preprint was withdrawn within days.)

But the spat showed that defining misinformation is difficult. Feigl-Ding rang the alarm many times—he is “very, very concerned” about every new variant, Bergstrom says, and “will tweet about how it’s gonna come kill us all”—but turned out to be right on some things. “It’s misinformation if you present these things as certainties and don’t adequately reflect the degree of uncertainty that we have,” Bergstrom says.

Feigl-Ding says using Twitter early in the pandemic was “a big learning curve” for most scientists and that Bergstrom and he “have long made amends and got along well since mid 2020.” Indeed, Bergstrom worries that his tone stoked unnecessary division and helped spread the sense that scientists are just fighting over their egos. “The overall lesson, whether it’s dealing with Eric or others, is that I would have done better to be a bit drier, a bit more dispassionate.”

Bergstrom realizes his battle against misinformation is a Sisyphean task. He likes to quote Brandolini’s law, which says “the amount of energy needed to refute bullshit is an order of magnitude larger than is needed to produce it.” Tufekci concurs. “I like Carl’s stuff. I benefit from following him and I’m sure the people who follow him benefit from following him,” she says. “But the societal solution is not to need Carl.”

Correction, 24 March, 3:45 p.m.: An earlier version of this story summarized the findings of a Science paper about the spread of false news—and a reanalysis of its data—incorrectly. And Eric Feigl-Ding's affiliation has been corrected. 
