
KEYNOTES

Our speakers bring together perspectives from across industry and academia, presenting insights from their work on using sound and music to create rich and engaging interactions between humans and robotic agents.

BEN GABALDON

Interaction Sound Designer, Facebook, Anki

Ben's introduction to audio was a second-hand multitrack tape recorder and no idea how to use it. Years of patience, tinkering, and a persistent curiosity for tweaking and improving recordings have led to a focus on audio design for games and, more recently, home robotics. Ben was lead sound designer for the robots Cozmo and Vector, which arguably feature some of the most elaborate sound design in any commercial robot to date.



GUY HOFFMAN

Assistant Professor, Cornell University

Guy’s research field is human-robot interaction. He is particularly interested in joint activities between humans and robots; robotic personal companions; anticipation and timing in HRI; musical performance robots; robot improvisation; and nonverbal communication in HRI. He has been involved in the creation of various musical robots, including the robotic musician Shimon.


DYLAN MOORE

Associate, McKinsey & Company, Stanford University

Dylan's work explores how people interact with automated systems. His research examines how robots and autonomous vehicles can communicate with pedestrians and other road users through implicit auditory and visual signals, identifying the communicative elements of natural messages in order to design broadly understood implicit messages that enhance interactions with autonomous systems.
