Research
At the Zera Institute, our research examines how hate, extremism, and antisemitism spread through digital ecosystems—and how they erode the foundations of liberal democracy. We uncover what often goes unnoticed: the linguistic, visual, and algorithmic patterns through which radical ideologies become normalized, shared, and socially embedded.
Overview of research reports
July 2025 – January 2026
Liberal democracy is under strain: violence, especially political violence and violence against minorities, is growing at an alarming rate, while trust in institutions plummets. Online extremism and radicalization are major drivers of this trend, and antisemitism is a crucial component of both.
To date, the interaction between these three phenomena – threats to liberal democracy, online extremism, and antisemitism – has received limited systematic analysis, particularly in influencer-centered social media environments. Our primary report addresses this gap through an empirical study of how influencers and their audiences jointly produce and intensify illiberal discourse. The analysis focuses on three interrelated processes that pose acute challenges to contemporary liberal democracy: the normalization of political violence; the legitimation of violence against minorities, especially Jewish people; and the progressive delegitimization of democratic institutions.
The research project underlying the following reports was completed at the end of January 2026.
Primary report at a glance:
- Analyzes more than 11,000 user comments from Twitter/X, Instagram, YouTube, TikTok, and other social media platforms
- Undertakes a cross-national comparative study of over two dozen German and American influencer accounts and their audiences, spanning the political spectrum
- Builds on the long-standing methodology of the Decoding Antisemitism project, led by Dr. Matthias J. Becker
- Sheds light on the role played by online discourse in normalizing violence, especially political violence and violence against minorities, and the erosion of trust in institutions
- Provides insights into context-specific manifestations of illiberalism across German and American digital ecosystems
- Takes foundational steps toward translating qualitative findings into scalable, AI-assisted analytical prototypes
Second report
Our second report builds on the empirical findings of our first study by formulating hypotheses for future research and identifying four policy-relevant intervention pathways. We pay particular attention to the role of comment sections in amplifying, radicalizing, and legitimizing discourse beyond the initial framing provided by influencers.
- Social Media Policy & Regulation: Evidence-based criteria for platform governance; identification of amplification dynamics warranting intervention
- Security/National Security & Risk Assessment: Early-warning system indicators; improved detection of coordinated or foreign influence operations
- Education & Prevention: Empirically grounded insights into how radical frames normalize violence and foundations for counter-radicalization and resilience-oriented educational content
- Applied AI Development: Translation of qualitative discourse insights into monitoring and diagnostic systems, with an emphasis on prevention rather than purely reactive enforcement
Authors of the Study
Dr. Matthias J. Becker
Matthias is a linguist specializing in pragmatics, cognitive linguistics, discourse analysis, and social media studies. For more than a decade, he has researched antisemitism, prejudice, and hate in media contexts. As head of the international project Decoding Antisemitism at the University of Cambridge – to date the largest research project on online antisemitism in Europe – and of the New York think tank AddressHate, he brings together experts from the humanities, social sciences, and data sciences to analyze antisemitic discourse in digital spaces through qualitative, multimodal, and AI-based methods.
His work combines scholarly rigor with societal relevance, aiming to expose the mechanisms behind implicit hate, harness the potential of LLM-supported approaches, scale up social media analysis, and develop strategies for prevention and education.
Prof. Benjamin Folit-Weinberg
Ben is a classicist and expert on ancient Greek thought. His research sits at the intersection of philosophy, poetry, and the history of ideas, examining how concepts, language, and metaphors structure thought – from antiquity to the present.
Marcus Scheiber
Marcus Scheiber is a linguist with research interests in social semiotics, corpus linguistics, critical discourse analysis, and multimodality research. He received his MA from the University of Heidelberg in 2018 with a thesis on internet memes. Since 2020, he has been pursuing a joint PhD project at the University of Vechta and the University of Vienna entitled “The reality construction potential of multimodal communicative units in antisemitic communication”, examining internet memes as communication formats in antisemitic communication strategies.
Benjamin C. Rouda
Benjamin Rouda holds a BA in Psychology and Middle Eastern Studies from Tel Aviv University.
His research centers on social psychology, particularly the psychology of extremism and intergroup conflict. He also investigates the radicalization of online communities, with a special focus on incel groups.
Suneela Maddineni
Suneela Maddineni is a data scientist and NLP researcher with a focus on computational analysis of online harmful and antisemitic discourse. She holds a Master’s degree in Data Science from American University, where her work centered on text classification, clustering, and mixed-method approaches to understanding online hate and psychological themes in social media content.
Oksana Stanevich
Oksana is a physician-epidemiologist and data scientist with extensive experience in public health research, computational analysis, and bioinformatics. She has worked on open science initiatives and projects that address misleading or fragmented information in high-risk and crisis contexts, such as the COVID-19 pandemic.