Research Outputs

Permanent URI for this community: https://hdl.handle.net/20.500.14288/2

Search Results

Now showing 1 - 4 of 4
  • Publication
    Data-driven vibrotactile rendering of digital buttons on touchscreens
    (Academic Press Ltd - Elsevier Science Ltd, 2020) Sadia, Bushra (PhD Student); Emgin, Senem Ezgi (PhD Student); Sezgin, Tevfik Metin (Faculty Member); Başdoğan, Çağatay (Faculty Member); Department of Computer Engineering; Department of Mechanical Engineering; Graduate School of Sciences and Engineering; College of Engineering; 18632; 125489
    Interaction with physical buttons is an essential part of our daily routine: we use them to turn on lights, call an elevator, ring a doorbell, or even turn on our mobile devices. Physical buttons have distinct response characteristics and are easily activated by touch, yet only limited tactile feedback is available for their digital counterparts displayed on touchscreens. Although mobile phones incorporate low-cost vibration motors to enhance touch-based interactions, these motors cannot generate complex tactile effects on touchscreens, and it is difficult to relate their limited vibrotactile feedback to different types of physical buttons. In this study, we focus on creating vibrotactile feedback on a touchscreen, via piezo actuators attached to it, that simulates the feeling of physical buttons. We first recorded and analyzed force, acceleration, and voltage data from twelve participants interacting with three different physical buttons: latch, toggle, and push. Then, a button-specific vibrotactile stimulus was generated for each button based on the recorded data. Finally, we conducted a three-alternative forced-choice (3AFC) experiment with twenty participants to explore whether the resulting stimuli are distinct and realistic. In our experiment, participants matched the three digital buttons with their physical counterparts with a success rate of 83%. In addition, we harvested seven adjective pairs from the participants expressing their perceptual feeling of pressing the physical buttons, and all twenty participants rated the degree of their subjective feelings associated with each adjective for all the physical and digital buttons investigated in this study. Our statistical analysis showed that, for at least three adjective pairs, participants rated two of the three digital buttons similarly to their physical counterparts. (An illustrative code sketch of such a data-driven rendering pipeline appears after this list.)
  • Publication
    Gaze-based prediction of pen-based virtual interaction tasks
    (Academic Press Ltd - Elsevier Science Ltd, 2015) Çiğ, Çağla (PhD Student); Sezgin, Tevfik Metin (Faculty Member); Department of Computer Engineering; College of Engineering; 18632
    In typical human-computer interaction, users convey their intentions through traditional input devices (e.g. keyboards, mice, joysticks) coupled with standard graphical user interface elements. Recently, pen-based interaction has emerged as a more intuitive alternative to these traditional means. However, existing pen-based systems rely heavily on auxiliary mode-switching mechanisms during interaction (e.g. hard or soft modifier keys, buttons, menus). In this paper, we describe how the eye gaze movements that naturally occur during pen-based interaction can be used to reduce dependency on explicit mode-selection mechanisms in pen-based systems. In particular, we show that a range of virtual manipulation commands that would otherwise require auxiliary mode-switching elements can be issued with an 88% success rate with the aid of users' natural eye gaze behavior during pen-only interaction. (An illustrative classifier sketch for this kind of gaze-based prediction appears after this list.)
  • Publication
    Gaze-based predictive user interfaces: visualizing user intentions in the presence of uncertainty
    (Academic Press Ltd - Elsevier Science Ltd, 2018) Karaman, Çağla Çiğ (PhD Student); Sezgin, Tevfik Metin (Faculty Member); Department of Computer Engineering; Graduate School of Sciences and Engineering; College of Engineering; 18632
    Human eyes exhibit different characteristic patterns during different virtual interaction tasks, such as moving a window, scrolling a piece of text, or maximizing an image. The human-computer studies literature contains examples of intelligent systems that can predict a user's task-related intentions and goals based on eye gaze behavior. However, these systems are generally evaluated in terms of prediction accuracy on previously collected offline interaction data, and little attention has been paid to creating real-time interactive systems using eye gaze and evaluating them in online use. We make five main contributions that address this gap from a variety of aspects. First, we present the first line of work that uses real-time feedback generated by a gaze-based probabilistic task prediction model to build an adaptive real-time visualization system: our system dynamically provides adaptive interventions informed by real-time user behavior data. Second, we propose two novel adaptive visualization approaches that take into account the uncertainty in the outputs of prediction models. Third, we offer a personalization method that suggests which approach will be more suitable for each user in terms of system performance (prediction accuracy); personalization boosts system performance and provides each user with the more suitable visualization approach in terms of usability and perceived task load. Fourth, by means of a thorough usability study, we quantify the effects of the proposed visualization approaches and prediction errors on natural user behavior and on the performance of the underlying prediction systems. Finally, this paper demonstrates that our previously published gaze-based task prediction system, which was assessed as successful in an offline test scenario, can also be used successfully in realistic online usage scenarios. (An illustrative sketch of uncertainty-aware visualization and per-user approach selection appears after this list.)
  • Publication
    Modeling context-sensitive metacognitive control of focusing on a mental model during a mental process
    (Springer Nature, 2021) Treur, Jan; Canbaloğlu, Gülay (Undergraduate Student); Department of Computer Engineering; College of Engineering
    Focusing on a proper mental model during a mental process is often crucial. Metacognition is used to control such focusing in a context-sensitive manner. In this paper, a second-order adaptive mental network model is introduced for this form of metacognitive control. The resulting model is illustrated by a case scenario concerning social interaction. (An illustrative sketch of the generic adaptive network update behind such models appears after this list.)
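
The button-rendering study listed first turns recorded press data into a drive signal for piezo actuators attached to the touchscreen. The Python sketch below shows one plausible shape such a pipeline could take: it converts a recorded acceleration transient into a normalized actuator waveform. It is a minimal illustration only; the function name design_button_stimulus, the resampling choice, and the voltage scaling are assumptions, not the signal-processing chain used in the paper.

    # Illustrative sketch only: one plausible data-driven vibrotactile pipeline in the
    # spirit of the button-rendering study above. Names and parameters are assumptions.
    import numpy as np

    def design_button_stimulus(accel, fs, target_fs=10_000, max_voltage=50.0):
        """Turn a recorded press-acceleration profile into a piezo drive waveform."""
        accel = accel - np.mean(accel)                     # keep only the transient "click"
        t_in = np.arange(len(accel)) / fs                  # time axis of the recording
        t_out = np.arange(0.0, t_in[-1], 1.0 / target_fs)  # time axis of the actuator driver
        wave = np.interp(t_out, t_in, accel)               # simple linear resampling
        wave = wave / (np.max(np.abs(wave)) + 1e-12)       # normalize to unit amplitude
        return wave * max_voltage                          # scale to the amplifier range

    # Example with a synthetic decaying 250 Hz transient standing in for real data.
    fs = 2_000
    t = np.arange(0.0, 0.05, 1.0 / fs)
    recorded = np.sin(2 * np.pi * 250 * t) * np.exp(-t / 0.01)
    stimulus = design_button_stimulus(recorded, fs)
    print(stimulus.shape)  # waveform to send to the piezo actuator on touch-down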
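
The pen-interaction study predicts which manipulation command a user intends from natural gaze behavior, so that no explicit mode switch is needed. The sketch below shows the general idea as a small scikit-learn classification pipeline; the feature set, the stand-in command labels, and the SVM choice are assumptions for illustration, not the model reported in the paper.

    # Illustrative sketch only: predicting a virtual manipulation command from gaze
    # features recorded during pen interaction. Features, labels, and model are assumed.
    import numpy as np
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import SVC

    rng = np.random.default_rng(0)

    # Hypothetical per-trial gaze features, e.g. mean fixation duration, saccade count,
    # gaze-to-pen distance, and horizontal/vertical gaze dispersion.
    X = rng.normal(size=(300, 5))
    # Hypothetical command labels: 0 = move, 1 = scroll, 2 = maximize.
    y = rng.integers(0, 3, size=300)

    model = make_pipeline(StandardScaler(), SVC(kernel="rbf", probability=True))
    model.fit(X[:240], y[:240])

    # At run time, the predicted command replaces an explicit mode switch.
    probs = model.predict_proba(X[240:241])[0]
    print("predicted command:", int(np.argmax(probs)), "confidence:", float(probs.max()))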
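
The predictive-interface study feeds probabilistic task predictions into adaptive visualizations and then personalizes which visualization each user receives. The sketch below illustrates that flow with two toy strategies and a per-user selection rule; the strategy names, the 0.6 confidence threshold, and the accuracy-based selection are assumptions, not the paper's actual designs.

    # Illustrative sketch only: two toy uncertainty-aware visualization strategies for a
    # probabilistic task prediction, plus a simple per-user selection rule.
    from typing import Dict

    def visualize_single_best(probs: Dict[str, float], threshold: float = 0.6) -> Dict[str, float]:
        """Strategy A: commit to the top prediction only when it is confident enough."""
        task, p = max(probs.items(), key=lambda kv: kv[1])
        return {task: 1.0} if p >= threshold else {}      # empty dict = no intervention

    def visualize_weighted(probs: Dict[str, float]) -> Dict[str, float]:
        """Strategy B: surface every plausible candidate, emphasized by its probability."""
        return {task: p for task, p in probs.items() if p > 0.05}

    def pick_strategy_for_user(accuracy_a: float, accuracy_b: float) -> str:
        """Personalization: keep whichever strategy performed better for this user."""
        return "single-best" if accuracy_a >= accuracy_b else "weighted"

    probs = {"move window": 0.55, "scroll": 0.30, "maximize": 0.15}
    print(visualize_single_best(probs))   # {}  -> too uncertain to commit
    print(visualize_weighted(probs))      # graded emphasis over all candidates
    print(pick_strategy_for_user(0.78, 0.84))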
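
The metacognitive-control paper uses a second-order adaptive network model, in which connection weights adapt over time and the speed of that adaptation is itself controlled by a higher-order state. The sketch below conveys the flavor of such an update loop using the generic "state moves toward aggregated impact" rule common in network-oriented modeling; the combination functions, parameter values, and state names are assumptions for illustration, not the published model.

    # Illustrative sketch only: a tiny second-order adaptive network update loop in the
    # style of network-oriented modeling. All parameters and functions are assumed.
    import math

    def alogistic(v, sigma=5.0, tau=0.5):
        """Advanced-logistic combination function often used in this framework."""
        return ((1 / (1 + math.exp(-sigma * (v - tau)))
                 - 1 / (1 + math.exp(sigma * tau))) * (1 + math.exp(-sigma * tau)))

    dt, steps = 0.1, 200
    stim = 1.0    # context stimulus
    focus = 0.0   # base state: focusing on a mental model
    W = 0.2       # first-order state: adaptive connection weight stim -> focus
    H = 0.3       # second-order state: speed (control) of W's adaptation

    for _ in range(steps):
        focus += 0.5 * (alogistic(W * stim) - focus) * dt      # base-level state update
        hebb = stim * focus * (1.0 - W)                        # Hebbian-style target for W
        W += H * (hebb - W) * dt                               # first-order adaptation, at speed H
        H += 0.2 * (alogistic(stim * focus) - H) * dt          # second-order, context-sensitive control

    print(round(focus, 3), round(W, 3), round(H, 3))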