It's absolutely divine! Can fine-grained sentiment analysis benefit from coreference resolution?

De Clercq, O., & Hoste, V.
Proceedings of the Third Workshop on Computational Models of Reference, Anaphora and Coreference (CRAC 2020)
Association for Computational Linguistics (ACL)
CRAC workshop (Barcelona, Spain (online))


While it has been claimed that anaphora or coreference resolution plays an important role in opinion mining, it is not clear to what extent coreference resolution actually boosts performance, if at all. In this paper, we investigate the potential added value of coreference resolution for the aspect-based sentiment analysis of restaurant reviews in two languages, English and Dutch. We focus on the task of aspect category classification and investigate whether including coreference information prior to classification to resolve implicit aspect mentions is beneficial. Because coreference resolution is not a solved task in NLP, we rely on both automatically derived and gold-standard coreference relations, allowing us to investigate the true upper bound. By training a classifier on a combination of lexical and semantic features, we show that resolving the coreferential relations prior to classification is beneficial in a joint optimization setup. However, this is only the case when relying on gold-standard relations, and the effect is more pronounced for English than for Dutch. When validating the optimal models, however, we found that only the Dutch pipeline achieves satisfactory performance on a held-out test set, and it does so regardless of whether coreference information was included.
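The idea of resolving coreference before aspect category classification can be illustrated with a minimal sketch. This is not the paper's implementation: the coreference chains, the keyword lexicon, and both helper functions below are hypothetical stand-ins, meant only to show how substituting an antecedent for a pronoun can make an implicit aspect mention explicit for a downstream classifier.

```python
# Hypothetical sketch: resolve coreferential mentions before aspect
# category classification. All names and data here are illustrative.

def resolve_coreference(tokens, chains):
    """Replace anaphoric mentions with their antecedents.

    `chains` maps a token index to its antecedent string, standing in
    for the output of a coreference resolver (automatic or gold-standard).
    """
    return [chains.get(i, tok) for i, tok in enumerate(tokens)]

def classify_aspect(tokens):
    """Toy aspect-category classifier based on a small keyword lexicon."""
    lexicon = {"pizza": "FOOD", "waiter": "SERVICE", "terrace": "AMBIENCE"}
    for tok in tokens:
        if tok.lower() in lexicon:
            return lexicon[tok.lower()]
    return "GENERAL"  # implicit aspect: no explicit mention found

# The second sentence of "We ordered the pizza. It was divine" contains
# only an implicit aspect mention ("It").
sentence = ["It", "was", "divine"]
chains = {0: "pizza"}  # "It" corefers with "pizza" in the prior sentence

print(classify_aspect(sentence))                              # GENERAL
print(classify_aspect(resolve_coreference(sentence, chains)))  # FOOD
```

Without resolution the classifier falls back to a generic category; with the coreference link applied, the implicit mention is grounded in its antecedent and the correct aspect category can be assigned.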