Socially Pertinent Robots in Gerontological Healthcare

📅 2024-04-11
🏛️ arXiv.org
📈 Citations: 3
Influential: 0
🤖 AI Summary
This study investigates the practical utility and user acceptance of multimodal conversational social robots in geriatric healthcare. Two field experiments were conducted at a Parisian day-care center for older adults, deploying a full-size humanoid robot built on the software architecture of the H2020 SPRING project, in what the authors present as the first systematic validation of usability and acceptance in a real-world eldercare setting. Methodologically, the system integrates a modular social software framework, robust environmental perception, adaptive behavioral control, and multimodal interaction mechanisms. Acceptability and usability were evaluated with the Acceptance Evaluation Scale (AES) and the System Usability Scale (SUS) among more than 60 end-users (patients and their companions). Results show high overall acceptance, with a markedly better user experience when the robot's perception is robust to environmental clutter and its interaction flexible enough to handle many kinds of exchanges. This work provides empirical evidence and a reusable technical framework for deploying social robots in authentic geriatric care environments.

📝 Abstract
Despite the many recent achievements in developing and deploying social robotics, there are still many underexplored environments and applications for which systematic evaluation of such systems by end-users is necessary. While several robotic platforms have been used in gerontological healthcare, the question of whether or not a social interactive robot with multi-modal conversational capabilities will be useful and accepted in real-life facilities is yet to be answered. This paper is an attempt to partially answer this question, via two waves of experiments with patients and companions in a day-care gerontological facility in Paris with a full-sized humanoid robot endowed with social and conversational interaction capabilities. The software architecture, developed during the H2020 SPRING project, together with the experimental protocol, allowed us to evaluate the acceptability (AES) and usability (SUS) with more than 60 end-users. Overall, the users are receptive to this technology, especially when the robot perception and action skills are robust to environmental clutter and flexible to handle a plethora of different interactions.
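The abstract reports usability via the System Usability Scale (SUS). As background context (the scoring rule is the standard SUS procedure, not something taken from this paper): odd-numbered items contribute their rating minus 1, even-numbered items contribute 5 minus their rating, and the sum is scaled by 2.5 to a 0–100 score. A minimal sketch:

```python
def sus_score(responses):
    """Standard System Usability Scale score for one respondent.

    `responses` is a list of ten Likert ratings (1-5) in questionnaire
    order: odd-numbered items are positively worded, even-numbered
    items negatively worded.
    """
    if len(responses) != 10 or any(not 1 <= r <= 5 for r in responses):
        raise ValueError("SUS needs ten ratings between 1 and 5")
    odd = sum(r - 1 for r in responses[0::2])   # items 1, 3, 5, 7, 9
    even = sum(5 - r for r in responses[1::2])  # items 2, 4, 6, 8, 10
    return (odd + even) * 2.5                   # scale 0-40 up to 0-100

# Example: a fairly positive respondent
print(sus_score([4, 2, 4, 1, 5, 2, 4, 2, 4, 2]))  # → 80.0
```

Scores above roughly 68 are conventionally read as above-average usability, which is the benchmark against which results like those reported here are usually interpreted.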
Problem

Research questions and friction points this paper is trying to address.

Evaluate social robot's usability in gerontological healthcare
Assess multi-modal conversational robot acceptance
Test robot's interaction robustness in diverse settings
Innovation

Methods, ideas, or system contributions that make the work stand out.

Humanoid robot with social capabilities
Multi-modal conversational interaction system
Evaluation in gerontological healthcare settings
Xavier Alameda-Pineda
Research Director, Leader of the RobotLearn Team, Inria
Computer Vision, Audio Processing, Machine Learning, Human-Robot Interaction
Angus Addlesee
Interaction Lab, Mathematical and Computer Sciences, Heriot-Watt University, Edinburgh, EH14 4AS, United Kingdom.
Daniel Hernández García
Interaction Lab, Mathematical and Computer Sciences, Heriot-Watt University, Edinburgh, EH14 4AS, United Kingdom.
Chris Reinke
RobotLearn Team, Inria at Univ. Grenoble Alpes, CNRS, LJK, 655, Avenue de l’Europe, 38334, Montbonnot, France.
Soraya Arias
RobotLearn Team, Inria at Univ. Grenoble Alpes, CNRS, LJK, 655, Avenue de l’Europe, 38334, Montbonnot, France.
Federica Arrigoni
Politecnico di Milano
Computer Vision
Alex Auternaud
RobotLearn Team, Inria at Univ. Grenoble Alpes, CNRS, LJK, 655, Avenue de l’Europe, 38334, Montbonnot, France.
Lauriane Blavette
Lusage Living Lab, Assistance Publique - Hôpitaux de Paris, 54-56 Rue Pascal, 75013, Paris, France.
Cigdem Beyan
Associate Professor @University of Verona
Computer Vision, Deep Learning, Multimedia, Affective Computing, Human-Centered AI
Luis Gomez Camara
RobotLearn Team, Inria at Univ. Grenoble Alpes, CNRS, LJK, 655, Avenue de l’Europe, 38334, Montbonnot, France.
Ohad Cohen
Acoustic Signal Processing Laboratory, Bar-Ilan University, Ramat-Gan, 5290002, Israel.
Alessandro Conti
University of Trento
vision and language, multi-modal learning
Sébastien Dacunha
Lusage Living Lab, Assistance Publique - Hôpitaux de Paris, 54-56 Rue Pascal, 75013, Paris, France.
Christian Dondrup
Interaction Lab, Mathematical and Computer Sciences, Heriot-Watt University, Edinburgh, EH14 4AS, United Kingdom.
Yoav Ellinson
Acoustic Signal Processing Laboratory, Bar-Ilan University, Ramat-Gan, 5290002, Israel.
Francesco Ferro
PAL Robotics, C/ Pujades 77-79, 08005, Barcelona, Spain.
Sharon Gannot
Prof. of Data Engineering, Bar-Ilan University, Israel
Acoustic signal processing, Audio-video processing, Machine learning for audio processing
Florian Gras
ERM Automatismes, 561 allée Bellecour, 84200, Carpentras, France.
Nancie Gunson
Interaction Lab, Mathematical and Computer Sciences, Heriot-Watt University, Edinburgh, EH14 4AS, United Kingdom.
Radu Horaud
RobotLearn Team, Inria at Univ. Grenoble Alpes, CNRS, LJK, 655, Avenue de l’Europe, 38334, Montbonnot, France.
Moreno D'Incà
University of Trento
Generative AI, Multimodal LLM, Fairness, Safety
Imad Kimouche
ERM Automatismes, 561 allée Bellecour, 84200, Carpentras, France.
Séverin Lemaignan
Senior Scientist, PAL Robotics
Social Robotics, Cognitive Robotics, Artificial Intelligence, Human-Robot Interaction, Robotics for Learning
Oliver Lemon
Professor; Academic Lead National Robotarium; Director of Interaction Lab, Edinburgh
Conversational AI, Spoken Language Understanding, Spoken Dialog Systems, Dialog Systems, Human-Robot
Cyril Liotard
ERM Automatismes, 561 allée Bellecour, 84200, Carpentras, France.
Luca Marchionni
PAL Robotics, C/ Pujades 77-79, 08005, Barcelona, Spain.
Mordehay Moradi
M.Sc student in Data Information Processing at Bar Ilan University
signal processing, speech processing, deep and machine learning, computer vision
Tomás Pajdla
Czech Institute of Informatics, Robotics and Cybernetics, Czech Technical University in Prague, Jugoslávských partyzánů 1580/3, 160 00 Dejvice, Czechia.
Maribel Pino
Broca Living Lab, Hôpital Broca (APHP); Université Paris Cité
Dementia care, AI, social robots, Living Labs, Health Technology Assessment
Michal Polic
Czech Institute of Informatics, Robotics and Cybernetics, Czech Technical University in Prague, Jugoslávských partyzánů 1580/3, 160 00 Dejvice, Czechia.
Matthieu Py
RobotLearn Team, Inria at Univ. Grenoble Alpes, CNRS, LJK, 655, Avenue de l’Europe, 38334, Montbonnot, France.
Ariel Rado
Acoustic Signal Processing Laboratory, Bar-Ilan University, Ramat-Gan, 5290002, Israel.
Bin Ren
Department of Information and Computer Science, University of Trento, Via Sommarive 9, 38123, Trento, Italy.
Elisa Ricci
University of Trento & Fondazione Bruno Kessler
Computer Vision, Deep Learning, Robotics
Anne-Sophie Rigaud
Lusage Living Lab, Assistance Publique - Hôpitaux de Paris, 54-56 Rue Pascal, 75013, Paris, France.
Paolo Rota
Associate Professor @ University of Trento
Computer Vision, Video Understanding, Vision and Language, Motion Understanding
Marta Romeo
Heriot-Watt University
assistive robotics, human-robot interaction, social intelligence, trust
Nicu Sebe
University of Trento
computer vision, multimedia
Weronika Sieińska
Interaction Lab, Mathematical and Computer Sciences, Heriot-Watt University, Edinburgh, EH14 4AS, United Kingdom.
Pinchas Tandeitnik
Acoustic Signal Processing Laboratory, Bar-Ilan University, Ramat-Gan, 5290002, Israel.
Francesco Tonini
Department of Information and Computer Science, University of Trento, Via Sommarive 9, 38123, Trento, Italy.
Nicolas Turro
RobotLearn Team, Inria at Univ. Grenoble Alpes, CNRS, LJK, 655, Avenue de l’Europe, 38334, Montbonnot, France.
Timothée Wintz
RobotLearn Team, Inria at Univ. Grenoble Alpes, CNRS, LJK, 655, Avenue de l’Europe, 38334, Montbonnot, France.
Yanchao Yu
Interaction Lab, Mathematical and Computer Sciences, Heriot-Watt University, Edinburgh, EH14 4AS, United Kingdom.