Published October 2018 | Accepted Version
Book Section - Chapter (Open Access)

A Neurorobotic Experiment for Crossmodal Conflict Resolution in Complex Environments

Abstract

Crossmodal conflict resolution is crucial for robot sensorimotor coupling through interaction with the environment, yielding swift and robust behaviour even in noisy conditions. In this paper, we propose a neurorobotic experiment in which an iCub robot exhibits human-like responses in a complex crossmodal environment. To better understand how humans deal with multisensory conflicts, we conducted a behavioural study exposing 33 subjects to congruent and incongruent dynamic audio-visual cues. In contrast to previous studies using simplified stimuli, we designed a scenario with four animated avatars and observed that the magnitude and extent of the visual bias are related to the semantics embedded in the scene, i.e., visual cues that are congruent with environmental statistics (moving lips and vocalization) induce the strongest bias. We implemented a deep learning model that processes stereophonic sound, facial features, and body motion to trigger a discrete behavioural response. After training the model, we exposed the iCub to the same experimental conditions as the human subjects, showing that the robot can replicate similar responses in real time. Our interdisciplinary work provides important insights into how crossmodal conflict resolution can be modelled in robots and introduces future research directions for the efficient combination of sensory observations with internally generated knowledge and expectations.
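The abstract describes a deep learning model that fuses three input streams (stereophonic sound, facial features, and body motion) into a discrete behavioural response. The authors' released code is linked below; the following is only a minimal, hypothetical sketch of such a late-fusion architecture in PyTorch, with all layer sizes, encoder choices, and the number of response classes assumed for illustration rather than taken from the paper.

import torch
import torch.nn as nn

class CrossmodalFusionNet(nn.Module):
    """Hypothetical late-fusion sketch: one encoder per modality, fused classifier."""
    def __init__(self, audio_dim=64, face_dim=128, motion_dim=32,
                 hidden_dim=64, num_responses=4):
        super().__init__()
        # One recurrent encoder per modality, each consuming a feature sequence.
        self.audio_enc = nn.GRU(audio_dim, hidden_dim, batch_first=True)
        self.face_enc = nn.GRU(face_dim, hidden_dim, batch_first=True)
        self.motion_enc = nn.GRU(motion_dim, hidden_dim, batch_first=True)
        # Fused embedding -> logits over discrete behavioural responses.
        self.classifier = nn.Sequential(
            nn.Linear(3 * hidden_dim, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, num_responses),
        )

    def forward(self, audio_seq, face_seq, motion_seq):
        # Use the final hidden state of each encoder as the modality embedding.
        _, h_audio = self.audio_enc(audio_seq)
        _, h_face = self.face_enc(face_seq)
        _, h_motion = self.motion_enc(motion_seq)
        fused = torch.cat([h_audio[-1], h_face[-1], h_motion[-1]], dim=-1)
        return self.classifier(fused)

if __name__ == "__main__":
    # Example forward pass with random feature sequences (batch of 2, 20 time steps).
    model = CrossmodalFusionNet()
    audio = torch.randn(2, 20, 64)
    face = torch.randn(2, 20, 128)
    motion = torch.randn(2, 20, 32)
    print(model(audio, face, motion).shape)  # torch.Size([2, 4])

For the actual architecture and training details, refer to the paper and the open-source code referenced in the Additional Information section.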

Additional Information

© 2018 IEEE. Open-source code: cml.knowledge-technology.info. This research was supported by the National Natural Science Foundation of China (NSFC), the China Scholarship Council, and the German Research Foundation (DFG) under project Transregio Crossmodal Learning (TRR 169). The authors would like to thank Jonathan Tong, Athanasia Kanellou, Matthias Kerzel, Guochun Yang, and Zhenghan Li for discussions and technical support.

Attached Files

Accepted Version - 1802.10408.pdf (1.7 MB)
md5:1ca9711d2cf2c3979a250ec9a0769bf9

Additional details

Created: August 19, 2023
Modified: October 20, 2023