Current limitations in cyberbullying detection: on evaluation criteria, reproducibility, and data scarcity

Publication type
A1
Publication status
Published
Authors
Emmery, C., Verhoeven, B., De Pauw, G., Jacobs, G.M., Van Hee, C., Lefever, E., Desmet, B., Hoste, V., & Daelemans, W.
Journal
Language Resources and Evaluation
Volume
55
Issue
3
Pagination
597-633

Abstract

The detection of online cyberbullying has grown in societal importance, research popularity, and the amount of openly available data. Nevertheless, while computational power and the affordability of resources continue to increase, access restrictions on high-quality data limit the applicability of state-of-the-art techniques. Consequently, much recent research relies on small, heterogeneous datasets without a thorough evaluation of their applicability. In this paper, we further illustrate these issues, as we (i) evaluate many publicly available resources for this task and demonstrate the difficulties of data collection. These resources predominantly yield small datasets that fail to capture the required complex social dynamics and impede direct comparison of progress. We (ii) conduct an extensive set of experiments that indicate a general lack of cross-domain generalization in classifiers trained on these sources, and openly provide this framework to replicate and extend our evaluation criteria. Finally, we (iii) present an effective crowdsourcing method: simulating real-life bullying scenarios in a lab setting generates plausible data that can be used to enrich real data. This largely circumvents the restrictions on the data that can be collected and increases classifier performance. We believe these contributions can aid in improving the empirical practices of future research in the field.
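
The abstract's two empirical components lend themselves to a compact illustration. The Python sketch below is not the framework released with the paper; it only mimics the setup under assumed conditions: each corpus is a list of (text, label) pairs with binary labels, a linear bag-of-words classifier stands in for the models evaluated, and all dataset and variable names are hypothetical. Training on one source and testing on another exposes the cross-domain generalization gap described in (ii); enriching scarce real data with simulated examples probes the claim in (iii).

    # Minimal sketch (not the authors' released framework). Assumes each
    # corpus is a list of (text, label) pairs, with label 1 = bullying.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.metrics import f1_score
    from sklearn.pipeline import make_pipeline
    from sklearn.svm import LinearSVC

    def train_and_score(train_corpus, test_corpus):
        """Fit a bag-of-words classifier on one corpus, return F1 on another."""
        X_train, y_train = zip(*train_corpus)
        X_test, y_test = zip(*test_corpus)
        clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LinearSVC())
        clf.fit(list(X_train), list(y_train))
        return f1_score(list(y_test), clf.predict(list(X_test)), pos_label=1)

    def generalization_matrix(corpora):
        """Score every train/test pair of a {name: corpus} dict; off-diagonal
        cells show the cross-domain drop relative to in-domain performance."""
        return {(src, tgt): train_and_score(corpora[src], corpora[tgt])
                for src in corpora for tgt in corpora}

    def enrichment_gain(real_train, simulated, real_test):
        """Compare training on real data alone against real data enriched
        with simulated (crowdsourced) examples, on the same real test set."""
        baseline = train_and_score(real_train, real_test)
        enriched = train_and_score(real_train + simulated, real_test)
        return baseline, enriched

Comparing the diagonal (in-domain) and off-diagonal (cross-domain) cells of the matrix is the kind of comparison the paper's experiments report, and a positive gap between enriched and baseline scores corresponds to the performance gain the authors attribute to crowdsourced data; the actual feature sets, models, and metrics used in the paper may differ from this sketch.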