What We Now Know About Russian Disinformation
The Senate gave my research team data from social media companies. The picture that emerges is grim.
The Russian disinformation operations that affected the 2016 United States presidential election are by no means over. Indeed, as two new reports produced for the Senate Intelligence Committee make clear, Russian interference through social media — contrary to the suggestion of many prominent tech executives — is a chronic, widespread and identifiable condition that we must now aggressively manage.
The Senate committee asked two research teams, one of which I led, to investigate the full scope of the recent multiyear Russian operation to influence American opinion, executed by a company called the Internet Research Agency. The committee provided us with data attributed to the agency's operations, supplied by Facebook, Twitter and Alphabet (Google's parent company), companies whose platforms were manipulated for that purpose.
Our report, announced by the committee on Monday, concludes that Russia was able to masquerade successfully as a collection of American media entities, managing fake personas and developing communities of hundreds of thousands, building influence over a period of years and using it to manipulate and exploit existing political and societal divisions. While Russia is hardly the only geopolitical actor with a well-thumbed disinformation playbook, a look at the data — which concerned the Internet Research Agency’s operation over the last three years — reveals its enthusiasm for and commitment to modern information warfare.
Regardless of what any tech executives may have said, the data indicate that this was not a small-scale problem fixable by tweaking a platform’s advertising purchase policy. Rather, it was a cross-platform attack that made use of numerous features on each social network and that spanned the entire social ecosystem.
Tech executives have also stressed that Russian disinformation efforts were a small percentage of the total content on any individual platform during the years in question. That is correct. But it obscures the more important point: For subgroups of targeted Americans, such as African-Americans, the messaging was perhaps ubiquitous.
And disinformation does not stay within a subgroup. “Virality,” a term beloved by internet marketers, is apropos here: A vulnerable few contract an initial “infection” that then spreads exponentially through the broader population, ultimately enabling the infection to “jump” into entirely different populations (including offline populations, in this case).
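To make that dynamic concrete, here is a minimal toy model in Python. It is a sketch, not anything from the report: the growth rate and crossover fraction are invented for illustration. It shows exponential spread within one subgroup, with a small spillover rate that seeds a second population.

def simulate(steps: int = 10, r: float = 1.8, crossover: float = 0.02) -> None:
    # Toy branching model: every account reached in a step exposes r others in
    # its own group, and a small fraction of group A's new exposures spill over
    # into group B. All parameters are illustrative assumptions, not figures
    # from the report.
    reached_a = 10.0  # the "vulnerable few" initially exposed in group A
    reached_b = 0.0   # a second population, initially untouched
    for step in range(1, steps + 1):
        new_a = reached_a * r                          # exponential spread within A
        reached_b = reached_b * r + new_a * crossover  # spillover seeds, then compounds in, B
        reached_a = new_a
        print(f"step {step:2d}: group A ~{reached_a:10,.0f}   group B ~{reached_b:8,.0f}")

simulate()

Even with a spillover rate of just 2 percent, once the message jumps, the second group's exposure compounds at the same exponential rate as the first.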
In official statements to Congress, tech executives have said that they found it beyond their capabilities to assess whether Russia created content intended to discourage anyone from voting. We have determined that Russia did create such content. It propagated lies about voting rules and processes, attempted to steer voters toward third-party candidates and created stories that advocated not voting.
Our analysis underscores the fact that such influence operations are not specific to one platform, one malign actor or one targeted group. This is a global problem. The consolidation of the online social ecosystem into a few major platforms means that propagandists have ready audiences; they need only blanket a handful of services to reach hundreds of millions of people. And precision targeting, made possible by a decade of gathering detailed user behavior data (in the service of selling ads), means that it is easy and inexpensive to reach any targeted group.
Ultimately, the biggest lesson of the investigation the Senate committee asked us to conduct is the troubling absence of adequate structures for collaboration among stakeholders, public and private alike, to establish solutions and appropriate oversight.
The hard truth is that the problem of disinformation campaigns will never be fixed; it’s a constantly evolving arms race. But it can — and must — be managed. This will require that social media platforms, independent researchers and the government work together as partners in the fight. We cannot rely on — nor should we place the full burden on — the social media platforms themselves.
The landscape of disinformation is, frankly, a grim one. Russia has already signaled its intention to continue information operations. Terrorists strategically counter attempts to kick them off popular platforms. Domestic ideologues adopt the manipulative distribution tactics used by foreign propagandists.
But there is some cause for hope. With our report (and that of others) on the Internet Research Agency data, we now have a far more complete picture of what happened with Russian disinformation efforts from 2014 to 2017. There is heightened public interest in the topic, the social platforms are actively participating in trying to find solutions and the government’s investigation is fueling a conversation about regulation. Senator Mark Warner, Democrat of Virginia, has even proposed a comprehensive new cyberdoctrine.
With discipline, rigor and broad collaboration, we can meet this challenge, establishing standards, protocols and governance that will defend the integrity of our information.
Renée DiResta is the director of research at New Knowledge, a cybersecurity company that monitors disinformation.