The Human Consequences of Computational Propaganda
Case Studies from the 2018 US Midterm Elections
New research from the Institute for the Future (IFTF) reveals how social and issue-focused groups are particularly susceptible to disinformation campaigns and how they were targeted with computational propaganda during the 2018 US midterm elections. It also shows why the targeting of these groups will continue, and potentially worsen, in 2020. The research, The Human Consequences of Computational Propaganda, led by IFTF's Digital Intelligence Lab, provides recommendations for fighting back.
The research includes case studies, informed by qualitative fieldwork and quantitative social media data analysis, into the tactics used to attack and disenfranchise groups including Muslim Americans, Latinos, moderate Republicans, Black women gun owners, environmental activists, pro-choice and pro-life campaigners, and immigration activists.
The research found that a variety of disinformation tactics were used against these groups during the 2018 US midterm elections, including targeted political trolling campaigns, bot-driven censorship, and intra-group harassment. These groups were targeted at a disproportionately high level compared to other groups, and the efforts succeeded in damaging both their political participation and their willingness to take part in public life.
“Computational propaganda does not simply aim to make unpopular opinions appear more popular in social media conversation; it silences and splinters under-represented groups integral to the functioning of democracy,” said Dr. Samuel Woolley, director of IFTF’s Digital Intelligence Lab. “It is important to quantify the magnitude of disinformation and bot-driven trolling, but to truly understand the repercussions on the democratic process we must also focus on the detrimental effects these tactics have on particular social groups and issue voters.”
The research lays the groundwork for technological, legal, and civil actions that will create a safer and more accessible online public sphere. Avenues for prevention and intervention include:
- Adapting current First Amendment prohibitions on hate speech to online platforms and technologies
- Modifying the Communications Decency Act of 1996, which protects social media platforms from legal liability for content posted by their users
- Building new user tools on Twitter that allow greater control over harassing and derogatory messaging, beyond muting, blocking, reporting, and unfollowing
- Standardizing social media platforms’ responses to onslaughts of trolling resulting from doxing and other targeted attacks
In the Media
- "Vulnerable Groups Could Be Targeted And Silenced Online Ahead Of 2020 Election, Researchers Warn"
by Craig Silverman and Jane Lytvynenko. Buzzfeed.
Read the executive summary and all reports in the series.
About Institute for the Future
Institute for the Future is the world’s leading futures thinking organization. For over 50 years, businesses, governments, and social impact organizations have depended upon IFTF global forecasts, custom research, and foresight training to navigate complex change and develop world-ready strategies. IFTF methodologies and toolsets yield coherent views of transformative possibilities across all sectors that together support a more sustainable future. Institute for the Future is a registered 501(c)(3) nonprofit organization based in Palo Alto, California.
About the Digital Intelligence Lab
This public-facing social data science lab was founded in the fall of 2017, continuing Director Samuel Woolley’s earlier work at the Computational Propaganda Project at the University of Oxford. Our team has been ahead of the curve in tracking the impact of bots, online disinformation, and algorithmic manipulation on democracy. The lab maintains a special focus on how these forms of manipulation relate to U.S. politics and works to translate findings to groups in Silicon Valley and beyond. Our evidence base includes analysis of large social media datasets, network analysis of online communities, and fieldwork with both those who build the technology used for manipulation and those on the receiving end of these campaigns.
For more information
If you are interested in learning more about how to work with IFTF's Digital Intelligence Lab, please contact [email protected].
For media inquiries, contact Jean Hagan.