Disrupting Disinformation

Prof. Chris Wiggins has six ways to understand and combat online disinformation

Nov 16, 2020 | By Mindy Farabee | Photo Credit: Courtesy of Chris Wiggins

A series of conversations on pioneering research.

As an applied mathematician, Chris Wiggins built his career applying machine learning to the basic sciences, designing computational tools that help illuminate biological processes like gene regulation and advance personalized medicine for cancer.

But these days he’s using that expertise to elucidate a very different social ill—the rapid proliferation of disinformation. And that means first answering broader questions about how to factor human experience into the equation.

Due to its digitized nature, the new disinformation ecosystem is vulnerable to detection and correction by the same computational tools that make it possible. But it isn’t enough to focus on technical tools, Wiggins argues; to create a truly healthy information environment, researchers need to design new rules of engagement.

How would you sum up the big idea animating your research?

I’ve long been interested in how we can adapt machine learning methods, designed largely for engineering and industrial applications, in ways that answer questions from the basic sciences and from health and medicine. A couple of years ago, I was looking for new ways to bring understanding of these tools to more students, so I partnered with History Professor Matt Jones to co-design a course on the history of data.

We thought it would be a great way for students in the humanities to gain an understanding of how we make sense of the world through data while also encouraging technologists to understand the impact of these methods. Interestingly, the students pushed us to go further: they wanted not just to cover the history of data but also to explore the ethics of these methods and their human impact. One thing we really focused on is how massive data collection, paired with unprecedented computational power, has allowed social media platforms to conduct large-scale online social experiments without any ethical oversight. Discussions in class pushed me to rethink the role technologists and researchers should play in our information ecosystem and were a main impetus for a paper I recently co-authored trying to sharpen what’s possible in disinformation research. In it, we make six specific recommendations, covering better detection at scale, measurement of impact, new data infrastructure, ethical guidelines, educational initiatives, and workforce training.

Much of the conversation around disinformation focuses on what tech companies can and should do to counter false narratives. Your work makes the case that we aren’t focusing enough on the “demand side” of the disinformation ecosystem. What have we been missing about the social side of this socio-technical problem?

My initial reaction to disinformation was to think about technological solutions. However, it’s more and more clear to me that this is not just a tech problem, but also one that encompasses the norms around the way people use technology and the way society regulates technology. Disinformation is an existential threat to democracy, and there’s just so much we do not understand about the long-term impacts on belief systems and social norms. I don’t think the solution is going to be a quick tech fix, but instead a long-term investment in, among other things, changing the way academics do research, and changing the way we educate future technologists about the impact and ethics of their work.


Speaking of, you and your colleagues note that research into disinformation can’t be effective until those efforts are much better coordinated. But right out of the gate, a major obstacle to that coordination lies in how universities conduct such human-centered research. Different institutions interpret the ethical guidelines governing human research in substantially different ways. Considering that these guidelines were designed long before the advent of social media, does tackling this problem first require a major update for the digital age?

The Belmont Report, which governs how academics conduct human-centered research, was issued in 1978. Our current disinformation ecosystem operates in a very different context, one in which the information platform companies now shape the landscape—in a way, their product decisions change the underlying effects we researchers wish to study.

Because of this, we can’t simply rely on observational data—to truly understand the impact disinformation has on individuals and on society, we need to design interventions and experiments and to collect data on real people and their communications. But that must be done in a way that balances the insights of careful research with an appropriate respect for rights, harms, and justice in the digital age. For instance, how do we respect privacy and consent in the context of using a public Twitter post? How do we assess justice and harms in research on facial recognition?

The information security community faced a similar dilemma a decade ago, which led to a new commission that helps researchers understand how information security research can be conducted in a way consistent with society’s broader understanding of ethical research. That effort clears up confusion and doubt by showing researchers, ethical review boards, funders, and policy makers alike how the field can advance ethically. Our paper argues, in part, that a similar consensus would dispel much of the uncertainty that can dissuade disinformation researchers from working to answer crucial questions about effects and causality.
