Academic Journal

A comparison of visual and auditory EEG interfaces for robot multi-stage task control

Bibliographic Details
Title: A comparison of visual and auditory EEG interfaces for robot multi-stage task control
Authors: Arulkumaran, Kai; Di Vincenzo, Marina; Dossa, Rousslan Fernand Julien; Akiyama, Shogo; Ogawa Lillrank, Dan; Sato, Motoshige; Tomeoka, Kenichi; Sasai, Shuntaro
Contributors: Japan Science and Technology Agency
Source: Frontiers in Robotics and AI; volume 11; ISSN 2296-9144
Publisher: Frontiers Media SA
Publication Year: 2024
Collection: Frontiers (Publisher - via CrossRef)
Description: Shared autonomy holds promise for assistive robotics, whereby physically impaired people can direct robots to perform various tasks for them. However, a robot that is capable of many tasks also introduces many choices for the user, such as which object or location should be the target of interaction. In the context of non-invasive brain-computer interfaces for shared autonomy—most commonly electroencephalography-based—the two most common choices are to provide either auditory or visual stimuli to the user—each with their respective pros and cons. Using the oddball paradigm, we designed comparable auditory and visual interfaces to speak/display the choices to the user, and had users complete a multi-stage robotic manipulation task involving location and object selection. Users displayed differing competencies—and preferences—for the different interfaces, highlighting the importance of considering modalities outside of vision when constructing human-robot interfaces.
Document Type: article in journal/newspaper
Language: unknown
DOI: 10.3389/frobt.2024.1329270
Availability: http://dx.doi.org/10.3389/frobt.2024.1329270
https://www.frontiersin.org/articles/10.3389/frobt.2024.1329270/full
Rights: https://creativecommons.org/licenses/by/4.0/
Accession Number: edsbas.769B024F
Database: BASE
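
Note on the method named in the description: the oddball paradigm presents the attended choice rarely relative to all other stimuli, so it is expected to evoke a stronger P300-like response. The sketch below is a minimal illustration of how such a presentation schedule and the final choice selection could be organised; it is not taken from the paper, and the function names, parameters, and scoring step are hypothetical.

import random

def build_oddball_schedule(n_choices: int, n_repetitions: int, seed: int = 0) -> list[int]:
    """Build a presentation order in which each choice index appears once per
    repetition, shuffled so that no stimulus is presented twice in a row.
    Because the attended choice occupies only 1/n_choices of presentations,
    it acts as the rare "oddball" stimulus."""
    rng = random.Random(seed)
    schedule: list[int] = []
    for _ in range(n_repetitions):
        block = list(range(n_choices))
        rng.shuffle(block)
        # Avoid an immediate repeat across block boundaries.
        while schedule and block[0] == schedule[-1]:
            rng.shuffle(block)
        schedule.extend(block)
    return schedule

def pick_choice(scores_per_stimulus: dict[int, list[float]]) -> int:
    """Return the choice whose post-stimulus EEG epochs received the highest
    mean score from some single-trial P300 detector (detector not shown)."""
    return max(scores_per_stimulus,
               key=lambda c: sum(scores_per_stimulus[c]) / len(scores_per_stimulus[c]))

if __name__ == "__main__":
    # Example: four candidate objects/locations, five repetitions each.
    print(build_oddball_schedule(n_choices=4, n_repetitions=5))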