Publication: Intersectional hatred - an application of large language models to detect hate and offensive speech targeted at congressional candidates in the 2024 U.S. election
KU Authors
Co-Authors
Finkel, Müge Kökten
Thakur, Dhanaraj
Finkel, Steven E.
Zaner, Amanda
Han, Jungmin
Embargo Status
No
Abstract
In this paper we take an intersectional approach to the problem of understanding hate and offensive speech targeted at all candidates who ran for Congress in the 2024 U.S. elections. We used a series of language models to analyze posts on X for instances of hate and offensive speech, based on a dataset of over 800,000 posts on X collected between May 20 and August 23, 2024. We found that, on average, more than 1 in 5 tweets targeted at Asian-American and African-American women candidates contained offensive speech, a higher proportion than for other candidates. We also found that, on average, African-American women candidates were four times more likely than other candidates to be targeted with hate speech: three times as likely as white women and more than 18 times as likely as white men. These findings support prior research showing that women of color political candidates are more likely to be targeted with online abuse, a pattern with important implications for the quality of American democracy.
Publisher
Association for Computing Machinery, Inc.
Subject
Computer science
DOI
10.1145/3701716.3716880
