Representativeness as a forgotten lesson for multilingual and code-switched data collection and preparation

Publication type
C1
Publication status
Published
Authors
Doğruöz, A.S., Sitaram, S., & Yong, Z.
Editor
Houda Bouamor, Juan Pino, and Kalika Bali
Series
Findings of the Association for Computational Linguistics: EMNLP 2023
Pagination
5751-5767
Publisher
Association for Computational Linguistics
Conference
2023 Conference on Empirical Methods in Natural Language Processing (EMNLP 2023), Singapore

Abstract

Multilingualism is widespread around the world, and code-switching (CSW) is a common practice among different language pairs/tuples across locations and regions. However, there has been little progress in building successful CSW systems, despite recent advances in Massive Multilingual Language Models (MMLMs). We investigate the reasons behind this setback through a critical study of 68 existing CSW data sets across language pairs, focusing on the collection and preparation (e.g., transcription and annotation) stages. This in-depth analysis reveals that a) most CSW data involves English, ignoring other language pairs/tuples, and b) there are flaws in representativeness at the data collection and preparation stages due to ignoring location-based, socio-demographic, and register variation in CSW. In addition, a lack of clarity on the data selection and filtering stages obscures the representativeness of CSW data sets. We conclude by providing a short checklist to improve representativeness in forthcoming studies involving CSW data collection and preparation.