Human Gaze-Driven Spatial Tasking of an Autonomous MAV

Bibliographic Details
Title: Human Gaze-Driven Spatial Tasking of an Autonomous MAV
Authors: Liangzhe Yuan, Garrett Warnell, Giuseppe Loianno, Christopher Reardon
Contributors: Yuan, L., Reardon, C., Warnell, G., Loianno, G.
Source: IEEE Robotics and Automation Letters, 4:1343-1350
Publication Information: Institute of Electrical and Electronics Engineers (IEEE), 2019.
Publication Year: 2019
Subject Terms: Control and Optimization, Computer science, Computing Methodologies: Image Processing and Computer Vision, Biomedical Engineering, Engineering and technology, Virtual reality, Tracking, Artificial Intelligence, Inertial measurement unit, Computer vision, Orientation (computer vision), Mechanical Engineering, Gaze, Computer Science Applications, Human-Computer Interaction, Control and Systems Engineering, Robot, Eye tracking, Computer Vision and Pattern Recognition
Description: In this letter, we address the problem of providing human-assisted quadrotor navigation using a set of eye tracking glasses. The advent of these devices (i.e., eye tracking glasses, virtual reality tools, etc.) provides the opportunity to create new, noninvasive forms of interaction between humans and robots. We show how a set of glasses equipped with a gaze tracker, a camera, and an inertial measurement unit (IMU) can be used to estimate the relative position of the human with respect to a quadrotor and to decouple the gaze direction from the head orientation, which allows the human to spatially task the robot (i.e., send it new 3-D navigation waypoints) in an uninstrumented environment. We decouple the gaze direction from head motion by tracking the human's head orientation using a combination of camera and IMU data. To detect the flying robot, we train and use a deep neural network. We experimentally evaluate the proposed approach and show that our pipeline has the potential to enable gaze-driven autonomy for spatial tasking. The proposed approach can be employed in multiple scenarios, including inspection and first response, as well as by people with disabilities that affect their mobility.
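
Illustration: the geometric core of the pipeline described in the abstract can be sketched in a few lines: compose the eye-in-head gaze direction (from the glasses' gaze tracker) with the head pose (from fused camera/IMU head tracking) to obtain a world-frame gaze ray, then intersect that ray with a reference plane to produce a 3-D waypoint for the quadrotor. The sketch below is a minimal illustration under assumed conventions, not the authors' implementation; the function name gaze_to_waypoint, the frame names, and the ground-plane intersection are assumptions made for the example.

import numpy as np

def gaze_to_waypoint(R_world_head, p_head_world, gaze_dir_head, ground_z=0.0):
    # Minimal sketch (hypothetical names): turn an eye-in-head gaze
    # direction plus a head pose into a 3-D waypoint by intersecting the
    # gaze ray with a horizontal plane at height ground_z.
    #
    # Rotating the head-frame gaze by the camera/IMU head pose is the
    # step that decouples where the eyes point from where the head points.
    d = R_world_head @ np.asarray(gaze_dir_head, dtype=float)
    d /= np.linalg.norm(d)
    p = np.asarray(p_head_world, dtype=float)
    if abs(d[2]) < 1e-9:
        return None          # ray (nearly) parallel to the plane: no hit
    t = (ground_z - p[2]) / d[2]
    if t <= 0:
        return None          # intersection behind the viewer
    return p + t * d         # 3-D waypoint to send to the robot

# Example: head 1.7 m above the ground, gazing forward and 30 degrees down.
waypoint = gaze_to_waypoint(
    R_world_head=np.eye(3),
    p_head_world=[0.0, 0.0, 1.7],
    gaze_dir_head=[np.cos(np.radians(-30)), 0.0, np.sin(np.radians(-30))],
)
print(waypoint)  # approximately [2.944, 0.0, 0.0]
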
ISSN: 2377-3774
DOI: 10.1109/lra.2019.2895419
Access URL: https://explore.openaire.eu/search/publication?articleId=doi_dedup___::71fd5b991be2b08afdfdc2f1ce4de9c6
https://doi.org/10.1109/lra.2019.2895419
Rights: CLOSED
Accession Number: edsair.doi.dedup.....71fd5b991be2b08afdfdc2f1ce4de9c6
Database: OpenAIRE