What Does BERT actually Learn about Event Coreference? Probing Structural Information in a Fine-Tuned Dutch Language Model

Publication type
U
Publication status
Published
Authors
De Langhe, L., De Clercq, O., & Hoste, V.
Editors
Shabnam Tafreshi, Arjun Akula, João Sedoc, Aleksandr Drozd, Anna Rogers and Anna Rumshisky
Series
Proceedings of the Fourth Workshop on Insights from Negative Results in NLP
Pagination
103-108
Publisher
Association for Computational Linguistics (Dubrovnik, Croatia)

Abstract

We probe structural and discourse aspects of coreferential relationships in a fine-tuned Dutch BERT event coreference model. Previous research has suggested that no such knowledge is encoded in BERT-based models and that the classification of coreferential relationships ultimately rests on outward lexical similarity. While we show that BERT can encode a (very) limited number of these discourse aspects (thus disproving assumptions in earlier research), we also note that knowledge of many structural features of coreferential relationships is absent from the encodings generated by the fine-tuned BERT model.
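
The probing setup described in the abstract can be illustrated with a minimal diagnostic classifier: frozen representations are extracted from the encoder for event-mention pairs, and a simple linear classifier is trained to recover one structural property from those representations. The sketch below is a generic illustration, not the authors' pipeline; the publicly available Dutch BERT checkpoint (BERTje) stands in for their fine-tuned event coreference model, and the mention pairs and same-sentence labels are invented for demonstration.

```python
# Minimal probing-classifier sketch (assumptions: BERTje as a stand-in
# encoder; toy mention pairs and labels; [CLS] as the pair representation).
import torch
from transformers import AutoModel, AutoTokenizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

MODEL_NAME = "GroNLP/bert-base-dutch-cased"  # stand-in for the fine-tuned model

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModel.from_pretrained(MODEL_NAME)
model.eval()

def pair_embedding(text_a: str, text_b: str) -> torch.Tensor:
    """Encode a mention pair jointly and return the [CLS] vector."""
    enc = tokenizer(text_a, text_b, return_tensors="pt", truncation=True)
    with torch.no_grad():
        out = model(**enc)
    return out.last_hidden_state[0, 0]  # [CLS] token embedding

# Toy mention pairs labelled with one structural property, e.g. whether the
# two event mentions occur in the same sentence (labels are illustrative).
pairs = [
    ("De brand brak uit in Gent.", "Het vuur verwoestte twee huizen."),
    ("De wedstrijd begon om acht uur en de match eindigde op 2-1.",
     "De wedstrijd begon om acht uur en de match eindigde op 2-1."),
]
labels = [0, 1]

X = torch.stack([pair_embedding(a, b) for a, b in pairs]).numpy()

# A linear probe: if this simple classifier can recover the property from the
# frozen embeddings, the information is (linearly) encoded by the model.
probe = LogisticRegression(max_iter=1000).fit(X, labels)
print("probe accuracy:", accuracy_score(labels, probe.predict(X)))
```

In practice such a probe would be trained and evaluated on held-out data with a control task or baseline, so that high accuracy can be attributed to information in the encodings rather than to the probe itself.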