Evaluating Automatic Cross-Domain Semantic Role Annotation

Publication type
P1
Publication status
Published
Authors
De Clercq, O., Hoste, V., & Monachesi, P.
Journal
Proceedings of the 8th Language Resources and Evaluation Conference (LREC'12)
Pagination
88-93
Projects
Stylene, SoNaR

Abstract

In this paper we present the first corpus in which one million Dutch words from a variety of text genres have been annotated with semantic roles. 500K words were completely manually verified and used as training material to automatically label the remaining 500K. All data has been annotated following an adapted version of the PropBank guidelines. The corpus's rich text-type diversity and the availability of manually verified syntactic dependency structures allowed us to experiment with an existing semantic role labeler for Dutch. In order to test the system's portability across domains, we experimented with training on individual domains and compared this with training on multiple domains by adding more data. Our results show that training on large data sets is necessary, but that including genre-specific training material is also crucial to optimize classification. We observed that even a small amount of in-domain training data is sufficient to improve our semantic role labeler.