How 'open' are the conversations with open-domain chatbots? A proposal for Speech Event based evaluation

Publication type
P1
Publication status
Published
Authors
Doğruöz, A.S., & Skantze, G.
Editor
Haizhou Li, Gina-Anne Levow, Zhou Yu, Chitralekha Gupta, Berrak Sisman, Siqi Cai, David Vandyke, Nina Dethlefs, Yan Wu and Junyi Jessy Li
Series
SIGDIAL 2021: 22nd Annual Meeting of the Special Interest Group on Discourse and Dialogue
Pagination
392-402
Publisher
Association for Computational Linguistics (ACL)
Conference
22nd Annual Meeting of the Special Interest Group on Discourse and Dialogue (SIGDIAL), Singapore

Abstract

Open-domain chatbots are supposed to converse freely with humans without being restricted to a topic, task or domain. However, the boundaries and contents of open-domain conversations are not well defined. To clarify the boundaries of “openness”, we conduct two studies. First, we classify the types of “speech events” encountered in a chatbot evaluation data set (i.e., Meena by Google) and find that these conversations mainly cover the “small talk” category and exclude the other speech event categories encountered in real-life human-human communication. Second, we conduct a small-scale pilot study to generate online conversations covering a wider range of speech event categories between two humans vs. a human and a state-of-the-art chatbot (i.e., Blender by Facebook). A human evaluation of these conversations indicates a preference for the human-human conversations, since the human-chatbot conversations lack coherence in most speech event categories. Based on these results, we suggest (a) using the term “small talk” instead of “open-domain” for current chatbots, which are not yet that “open” in their conversational abilities, and (b) revising the evaluation methods to test chatbot conversations on other speech event categories as well.
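As a rough, non-authoritative illustration of the tallying step in the first study, the Python sketch below counts how often each speech event category occurs in a set of annotated conversations. The category labels and data are hypothetical placeholders, not the paper's annotation scheme or code.

    from collections import Counter

    # Hypothetical per-conversation speech event labels; in the paper's
    # first study, each conversation in the Meena evaluation data set
    # would be assigned a speech event category by human annotators.
    annotations = [
        "small talk", "small talk", "small talk",
        "making plans", "asking for a favour", "small talk",
    ]

    # Distribution of speech event categories across the data set.
    counts = Counter(annotations)
    total = len(annotations)
    for category, n in counts.most_common():
        print(f"{category}: {n}/{total} ({n / total:.0%})")

A skew toward one category in such a tally (here, “small talk”) is the kind of evidence the abstract cites for calling current chatbot evaluation data less “open” than the label suggests.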