Researcher:
Eteke, Cem

Job Title

PhD Student

First Name

Cem

Last Name

Eteke

Name Variants

Eteke, Cem

Publications

Showing 1 - 3 of 3
  • Publication
    Reward learning from very few demonstrations
    (IEEE-Inst Electrical Electronics Engineers Inc, 2021) Eteke, Cem; Kebüde, Doğancan; Akgün, Barış; Graduate School of Sciences and Engineering
    This article introduces a novel skill learning framework that learns rewards from very few demonstrations and uses them in policy search (PS) to improve the skill. The demonstrations are used to learn a parameterized policy to execute the skill and a goal model, as a hidden Markov model (HMM), to monitor executions. The rewards are learned from the HMM structure and its monitoring capability. The HMM is converted to a finite-horizon Markov reward process (MRP), and a Monte Carlo approach is used to calculate its state values. The HMM and the values are then merged into a partially observable MRP to obtain execution returns for use with PS in improving the policy. In addition to reward learning, a black-box PS method with an adaptive exploration strategy is adopted. The resulting framework is evaluated with five PS approaches and two skills in simulation. The results show that the learned dense rewards lead to better performance than sparse monitoring signals, and that adaptive exploration leads to faster convergence with higher success rates and lower variance. The efficacy of the framework is validated in a real-robot setting by improving three skills from complete failure to complete success using learned rewards, where sparse rewards failed entirely.
  • Publication
    Communicative cues for reach-to-grasp motions: from humans to robots: robotics track
    (International Foundation for Autonomous Agents and Multiagent Systems (IFAAMAS), 2018) Kebüde, Doğancan; Eteke, Cem; Sezgin, Tevfik Metin; Akgün, Barış; Department of Computer Engineering; Graduate School of Sciences and Engineering; College of Engineering
    Intent communication is an important challenge in the context of human-robot interaction. The aim of this work is to identify subtle non-verbal cues that make communication among humans fluent and to use them to generate intent-expressive robot motion. A human-human reach-to-grasp experiment (n = 14) identified two temporal and two spatial cues: (1) relative time to reach maximum hand aperture (MA), (2) overall motion duration (OT), (3) exaggeration in motion (Exg), and (4) change in grasp modality (GM). Results showed a statistically significant difference in the temporal cues between the no-intention and intention conditions. In a follow-up experiment (n = 30), reach-to-grasp motions of a simulated robot containing different cue combinations were shown to the participants. They were asked to guess the target object during the robot's motion, based on the assumption that intent-expressive motion would result in earlier and more accurate guesses. Results showed that OT, GM, and several cue combinations led to faster and more accurate guesses, implying they can be used to generate communicative motion. However, MA had no effect, and surprisingly Exg had a negative effect on expressiveness.
  • Publication
    Communicative cues for reach-to-grasp motions: From humans to robots
    (Assoc Computing Machinery, 2018) Kebüde, Doğancan; Eteke, Cem; Sezgin, Tevfik Metin; Akgün, Barış; Department of Computer Engineering; Graduate School of Sciences and Engineering; College of Engineering
    Intent communication is an important challenge in the context of human-robot interaction. The aim of this work is to identify subtle non-verbal cues that make communication among humans fluent and to use them to generate intent-expressive robot motion. A human-human reach-to-grasp experiment (n = 14) identified two temporal and two spatial cues: (1) relative time to reach maximum hand aperture (MA), (2) overall motion duration (OT), (3) exaggeration in motion (Exg), and (4) change in grasp modality (GM). Results showed a statistically significant difference in the temporal cues between the no-intention and intention conditions. In a follow-up experiment (n = 30), reach-to-grasp motions of a simulated robot containing different cue combinations were shown to the participants. They were asked to guess the target object during the robot's motion, based on the assumption that intent-expressive motion would result in earlier and more accurate guesses. Results showed that OT, GM, and several cue combinations led to faster and more accurate guesses, implying they can be used to generate communicative motion. However, MA had no effect, and surprisingly Exg had a negative effect on expressiveness.
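
The Monte Carlo value estimation described in the first publication's abstract (computing the values of a finite-horizon Markov reward process derived from an HMM) can be sketched as follows. This is a minimal illustration only: the 4-state transition matrix, reward vector, and horizon are hypothetical placeholders, not the paper's actual goal model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 4-state MRP standing in for an HMM-derived goal model;
# state 3 is an absorbing "goal reached" state. Numbers are illustrative.
P = np.array([
    [0.7, 0.3, 0.0, 0.0],
    [0.0, 0.6, 0.4, 0.0],
    [0.0, 0.0, 0.5, 0.5],
    [0.0, 0.0, 0.0, 1.0],
])
R = np.array([0.0, 0.0, 0.0, 1.0])  # reward only while in the goal state
H = 25  # finite horizon

def mc_state_values(P, R, horizon, n_rollouts=2000):
    """Estimate V(s) for each state by averaging undiscounted
    finite-horizon Monte Carlo rollout returns."""
    n_states = P.shape[0]
    V = np.zeros(n_states)
    for s0 in range(n_states):
        total = 0.0
        for _ in range(n_rollouts):
            s, ret = s0, 0.0
            for _ in range(horizon):
                s = rng.choice(n_states, p=P[s])  # sample next state
                ret += R[s]                       # accumulate reward
            total += ret
        V[s0] = total / n_rollouts
    return V

V = mc_state_values(P, R, H)
print(V)  # values grow as states get closer to the absorbing goal
```

States nearer the goal accumulate more reward within the horizon, so the estimated values increase monotonically toward the absorbing state; in the paper's framework such values are then combined with the HMM's monitoring to score executions for policy search.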