SUBMISSIONS
Aimee Allen, Tom Drummond and Dana Kulić: Augmenting Consequential Sounds Produced by Robots to Improve Human Perceptions [PDF]
Gustavo Assunção, Bruno Patrão, Nuno Gonçalves, Miguel Castelo-Branco and Paulo Menezes: Sound-based Emotional Regulation for Improved HRI [PDF]
Roberto Bresin, Emma Frid, Adrian B. Latupeirissa and Claudio Panariello: Robust Non-Verbal Expression in Humanoid Robots: New Methods for Augmenting Expressive Movements with Sound [PDF]
Pauline Chevalier, Davide Ghiglino, Federica Floris, Tiziana Priolo and Agnieszka Wykowska: Motor Noises and Auditory Sensitivity [PDF]
Omar Eldardeer, Alessandra Sciutti, Matthew Tata and Francesco Rea: Auditory Perception for Interactive Robots: a Cognitive Framework to Include Motor Commands and Working Memory in the Process of Auditory Sound Localization [PDF]
Lukas Grasse and Matthew S. Tata: An End-to-End Platform for Human-Robot Speech Interaction [PDF]
Akihiro Matsufuji and Angelica Lim: How a Robot Should Speak Depends on Social, Environmental, Cognitive, Emotional, and Cultural Contexts [PDF]
Iain McGregor: What's In A Name? [PDF]
Elias Naphausen: The Voices of Otherness [PDF]
Bastian Orthmann, Ilaria Torre and Iolanda Leite: Auditory Displays of Robots’ Actions and Intentions [PDF]
Akanksha Saran, Kush Desai, Andrea Thomaz and Scott Niekum: A Case for Leveraging Human Prosody for Robot Learning
Sudhir Shenoy, Fateme Nikseresht, Yueyue Hou, Xinran Wang and Afsaneh Doryab: Adaptive Humanoid Robots for Pain Management [PDF]
Ella Velner, Khiet P. Truong and Vanessa Evers: Sound and Sensibility: To Beep or Not to Beep [PDF]
Brian J. Zhang, Christopher A. Sanchez and Naomi T. Fitter: Consequential Robot Sound: Should Robots Be Quiet and High-Pitched? [PDF]