Towards shared datasets for normalization research

Publication type
P1
Publication status
Published
Authors
De Clercq, O., Schulz, S., Desmet, B., & Hoste, V.
Editor
Nicoletta Calzolari, Khalid Choukri, Thierry Declerck, Hrafn Loftsson, Bente Maegaard, Joseph Mariani, Asuncion Moreno, Jan Odijk and Stelios Piperidis
Series
LREC 2014 - Ninth International Conference on Language Resources and Evaluation
Pagination
1218-1223
Publisher
European Language Resources Association (ELRA)
Conference
Language Resources and Evaluation Conference (Reykjavik, Iceland)
Projects
PARIS, SubTLe, AMiCA

Abstract

In this paper we present a Dutch and an English dataset that can serve as gold standards for evaluating text normalization approaches. Combining text messages, message board posts and tweets, these datasets represent a variety of user-generated content. All data were manually normalized to their standard form following newly developed guidelines. We perform automatic lexical normalization experiments on these datasets using statistical machine translation techniques, operating on both the word and the character level, and find that we can improve the BLEU score by ca. 20% for both languages. Before this user-generated content can be released publicly to the research community, some issues first need to be resolved. These are discussed in closer detail with a focus on current legislation and on previous, similar data collection projects. With this discussion we hope to shed some light on the difficulties researchers face when trying to share social media data.
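To make the evaluation setup concrete, the sketch below shows how a BLEU-style score compares a noisy input and a normalized output against a gold-standard sentence. This is a minimal, self-contained illustration, not the paper's evaluation code: the example sentences are invented, and the `bleu` function is a simplified sentence-level BLEU (modified n-gram precision with a brevity penalty, no smoothing).

```python
import math
from collections import Counter

def ngrams(tokens, n):
    """All contiguous n-grams of a token list."""
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

def bleu(candidate, reference, max_n=4):
    """Simplified sentence-level BLEU: geometric mean of modified
    n-gram precisions (orders 1..max_n) times a brevity penalty."""
    precisions = []
    for n in range(1, max_n + 1):
        cand = Counter(ngrams(candidate, n))
        ref = Counter(ngrams(reference, n))
        overlap = sum((cand & ref).values())  # clipped n-gram matches
        total = max(sum(cand.values()), 1)
        precisions.append(overlap / total)
    if min(precisions) == 0:
        return 0.0  # no smoothing: any zero precision zeroes the score
    log_avg = sum(math.log(p) for p in precisions) / max_n
    bp = min(1.0, math.exp(1 - len(reference) / len(candidate)))
    return bp * math.exp(log_avg)

# Hypothetical example: a raw message versus a normalized version,
# each scored against the manually normalized gold standard.
gold = "see you tomorrow at the station".split()
raw = "c u 2moro at the station".split()
normalized = "see you tomorrow at the station".split()

print(bleu(raw, gold, max_n=2))         # low: few n-grams match the gold form
print(bleu(normalized, gold, max_n=2))  # 1.0: identical to the gold standard
```

Normalization quality is then reflected in the gap between the two scores: the closer the system output gets to the gold form, the higher its BLEU.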