Title: Generating robot/agent backchannels during a storytelling experiment
Type: Conference proceeding
Department: Department of Mechanical Engineering
Publication year: 2009
ISBN: 978-1-4244-2788-8
ISSN: 1050-4729
E-ISSN: 2577-087X
Scopus ID: 2-s2.0-70350373886
Publisher URL: https://IEEExplore.IEEE.org/document/5152572/authors#authors
Handle: https://hdl.handle.net/20.500.14288/12591
Record date: 2024-11-09
Keywords: Automation; Automatic control; Robotics

Abstract: This work presents a real-time framework for research on multimodal feedback from robots and talking agents in the context of human-robot interaction (HRI) and human-computer interaction (HCI). To evaluate the framework, a multimodal corpus (ENTERFACE_STEAD) was built, and a study of the most important multimodal features was carried out to build an active robot/agent listener for a storytelling experience with humans. The experiments show that even when the same reactive behavior models are built for robots and talking agents, the interpretation and realization of the communicated behavior differ because of the different communicative channels they offer: physical but less human-like in robots, and virtual but more expressive and human-like in talking agents.