Research Outputs
Permanent URI for this communityhttps://hdl.handle.net/20.500.14288/2
Search Results (4)
Publication, Restricted
Advantage actor-critic deep reinforcement learning approach for paint shop planning and scheduling (Koç University, 2024)
Özcan, Mert Can; Türkay, Metin; ORCID: 0000-0003-4769-6714
Koç University Graduate School of Sciences and Engineering; Computational Sciences and Engineering; 24956

Publication, Restricted
Engaging human-robot interaction with batch reinforcement learning (Koç University, 2020)
Hussain, Nusrah; Erzin, Engin; ORCID: 0000-0002-2715-2368
Koç University Graduate School of Sciences and Engineering; Electrical and Electronics Engineering; 34503

Publication, Restricted
Keyframe demonstration seeded and Bayesian optimized policy search (Koç University, 2022)
Töre, Onur Berk; Akgün, Barış; ORCID: 0000-0002-4079-6889
Koç University Graduate School of Sciences and Engineering; Computer Science and Engineering; 258784

Publication, Open Access
Speech driven backchannel generation using deep Q-network for enhancing engagement in human-robot interaction (International Speech Communication Association (ISCA), 2019)
Department of Computer Engineering; Hussain, Nusrah; Erzin, Engin; Sezgin, Tevfik Metin; Yemez, Yücel; PhD Student; Faculty Member; Faculty Member; Faculty Member; Graduate School of Sciences and Engineering; College of Engineering; N/A; 34503; 18632; 107907

We present a novel method for training a social robot to generate backchannels during human-robot interaction. We address the problem within an off-policy reinforcement learning framework and show how a robot can learn to produce non-verbal backchannels, such as laughs, when trained to maximize the engagement and attention of the user. A major contribution of this work is the formulation of the problem as a Markov decision process (MDP), with states defined by the speech activity of the user and rewards generated by quantified engagement levels.
The problem that we address falls into the class of applications where unlimited interaction with the environment is not possible (our environment being a human), because interaction may be time-consuming, costly, impracticable, or even dangerous if a bad policy is executed. We therefore introduce a deep Q-network (DQN) in a batch reinforcement learning framework, where an optimal policy is learned from batch data collected under a more controlled policy. We propose using human-to-human dyadic interaction datasets as the batch of trajectories to train an agent for engaging interactions. Our experiments demonstrate the potential of our method to train a robot for engaging behaviors in an offline manner.
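The batch (offline) setting described in the abstract can be illustrated with a minimal sketch: Q-values are learned by repeatedly replaying a fixed set of recorded transitions, with no further interaction with the environment during training. The tabular update below is a stand-in for the paper's deep Q-network, and the state and action labels (speech-activity states, backchannel actions) are illustrative assumptions, not the paper's actual formulation.

```python
from collections import defaultdict

# Hypothetical discrete spaces, for illustration only:
# states encode the user's speech activity ("speaking", "pause"),
# actions are backchannel choices (0 = none, 1 = laugh/backchannel).

def batch_q_learning(batch, n_actions, gamma=0.9, sweeps=200, alpha=0.1):
    """Learn Q-values from a fixed batch of (s, a, r, s_next) transitions.

    The batch is replayed repeatedly (batch / offline RL); the agent
    never queries the environment while learning.
    """
    q = defaultdict(float)
    for _ in range(sweeps):
        for s, a, r, s_next in batch:
            # Standard Q-learning target computed from the stored transition.
            target = r + gamma * max(q[(s_next, b)] for b in range(n_actions))
            q[(s, a)] += alpha * (target - q[(s, a)])
    return q

def greedy_policy(q, n_actions):
    """Return a deterministic policy that picks the highest-valued action."""
    def act(s):
        return max(range(n_actions), key=lambda a: q[(s, a)])
    return act

# Toy batch: backchanneling while the user speaks yields positive
# (engagement-derived) reward; doing so during a pause is penalized.
batch = [
    ("speaking", 1, 1.0, "speaking"),
    ("speaking", 0, 0.0, "speaking"),
    ("pause", 0, 0.0, "pause"),
    ("pause", 1, -0.5, "pause"),
]
q = batch_q_learning(batch, n_actions=2)
act = greedy_policy(q, n_actions=2)
```

Replaying a fixed batch is what makes this "batch" reinforcement learning: the trajectories could come from human-to-human dyadic recordings, so no exploratory (and potentially disengaging) policy ever has to run on a live user.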