A fine line between irony and sincerity: identifying bias in transformer models for irony detection

Publication type
C1
Publication status
Published
Authors
Maladry, A., Lefever, E., Van Hee, C., & Hoste, V.
Editors
Jeremy Barnes, Orphée De Clercq, and Roman Klinger
Series
Proceedings of the 13th Workshop on Computational Approaches to Subjectivity, Sentiment, and Social Media Analysis
Pagination
315-324
Publisher
Association for Computational Linguistics (ACL) (Toronto, Canada)
Conference
13th Workshop on Computational Approaches to Subjectivity, Sentiment & Social Media Analysis, co-located with ACL 2023 (WASSA 2023) (Toronto, Canada)

Abstract

In this paper we investigate potential bias in fine-tuned transformer models for irony detection. Bias is defined in this research as spurious associations between word n-grams and class labels that can cause the system to rely too heavily on superficial cues and miss the essence of the irony. For this purpose, we look for correlations between class labels and words that are prone to trigger irony, such as positive adjectives, intensifiers, and topical nouns. Additionally, we investigate our irony model’s predictions before and after manipulating the data set through irony trigger replacements. We further support these insights with state-of-the-art explainability techniques (Layer Integrated Gradients, Discretized Integrated Gradients, and Layer-wise Relevance Propagation). Both approaches confirm the hypothesis that transformer models generally encode correlations between positive sentiment and ironic texts, with even higher correlations between vividly expressed sentiment and irony. Based on these insights, we implement a number of modification strategies to enhance the robustness of our irony classifier.
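The first step the abstract describes, finding spurious associations between word n-grams and class labels, can be approximated by scoring how strongly each token co-occurs with a class. A minimal sketch using pointwise mutual information (PMI) over unigrams is shown below; the toy corpus, the `pmi_by_label` helper, and the label names are illustrative assumptions, not the paper's actual data or method.

```python
from collections import Counter
from math import log2

# Hypothetical toy corpus of (text, label) pairs for illustration only.
corpus = [
    ("what a great day to miss the bus", "ironic"),
    ("i absolutely love waiting in line", "ironic"),
    ("so great another monday", "ironic"),
    ("the bus was on time today", "not_ironic"),
    ("i love this song", "not_ironic"),
    ("monday meetings start at nine", "not_ironic"),
]

def pmi_by_label(corpus, label):
    """PMI between each unigram and a class label: log2(p(w, l) / (p(w) * p(l))).

    High PMI flags words the model could latch onto as superficial cues
    for that label (e.g. positive adjectives co-occurring with irony).
    """
    word_counts = Counter()       # documents containing each word
    word_label_counts = Counter() # documents with the word AND the label
    label_docs = 0
    for text, doc_label in corpus:
        tokens = set(text.split())
        word_counts.update(tokens)
        if doc_label == label:
            label_docs += 1
            word_label_counts.update(tokens)
    n = len(corpus)
    p_label = label_docs / n
    scores = {}
    for word, count in word_counts.items():
        p_word = count / n
        p_joint = word_label_counts[word] / n
        if p_joint > 0:
            scores[word] = log2(p_joint / (p_word * p_label))
    return scores

scores = pmi_by_label(corpus, "ironic")
# "great" occurs only in ironic texts (PMI = 1.0 here), while "love" and
# "bus" are balanced across classes (PMI = 0.0), so only "great" would be
# flagged as a potential irony trigger in this toy setting.
```

On real data the same idea would typically be run over n-grams with frequency cut-offs and significance testing; this sketch only illustrates the correlation-hunting step, not the trigger-replacement manipulation or the gradient-based attributions the abstract also mentions.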