Publication: L2 vocabulary teaching by social robots: the role of gestures and on-screen cues as scaffolds
Embargo Status
NO
Abstract
Social robots are receiving ever-increasing interest in popular media and the scientific literature. Yet, empirical evaluation of the educational use of social robots remains limited. In the current paper, we focus on how different scaffolds (co-speech hand gestures vs. visual cues presented on the screen) influence the effectiveness of a robot second language (L2) tutor. In two studies, Turkish-speaking 5-year-olds (n = 72) learned English measurement terms (e.g., big, wide) either from a robot or a human tutor. We asked whether (1) the robot tutor can be as effective as the human tutor when they follow the same protocol, (2) the scaffolds differ in how they support L2 vocabulary learning, and (3) the types of hand gestures affect the effectiveness of teaching. In all conditions, children learned new L2 words equally successfully from the robot tutor and the human tutor. However, the tutors were more effective when teaching was supported by on-screen cues that directed children's attention to the referents of target words, compared to when the tutor performed co-speech hand gestures representing the target words (i.e., iconic gestures) or pointing at the referents (i.e., deictic gestures). The types of gestures did not significantly influence learning. These findings support the potential of social robots as a supplementary tool to help young children learn language but suggest that the specifics of implementation need to be carefully considered to maximize learning gains. Broader theoretical and practical issues regarding the use of educational robots are also discussed.
Publisher
Frontiers
Subject
Psychology
Source
Frontiers in Education
DOI
10.3389/feduc.2020.599636