Publications without Fulltext

Permanent URI for this collection: https://hdl.handle.net/20.500.14288/3

Search Results

Now showing 1 - 3 of 3
  • Publication
    AffectON: Incorporating affect into dialog generation
    (IEEE-Inst Electrical Electronics Engineers Inc, 2023) Bucinca, Zana; Yemez, Yücel; Erzin, Engin; Sezgin, Tevfik Metin; Department of Computer Engineering; Koç University İş Bank Artificial Intelligence Center (KUIS AI); College of Engineering
    Due to its expressivity, natural language is paramount for explicit and implicit communication of affective states among humans. The same linguistic inquiry (e.g., "How are you?") might induce responses with different affects depending on the affective state of the conversational partner(s) and the context of the conversation. Yet, most dialog systems do not consider affect as a constitutive aspect of response generation. In this article, we introduce AffectON, an approach for generating affective responses during inference. To generate language with a targeted affect, our approach leverages a probabilistic language model and an affective space. AffectON is language-model agnostic, since it can work with probabilities generated by any language model (e.g., sequence-to-sequence models, neural language models, n-grams). Hence, it can be employed for both affective dialog and affective language generation. We experimented with affective dialog generation and evaluated the generated text objectively and subjectively. For the subjective part of the evaluation, we designed a custom user interface for rating and provided recommendations for the design of such interfaces. The results, both subjective and objective, demonstrate that our approach is successful in pulling the generated language toward the targeted affect, with little sacrifice in syntactic coherence.
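    A minimal illustrative sketch (in Python) of the general idea the abstract describes, model-agnostic affect-targeted decoding: candidate next tokens from any language model are re-ranked by blending their probability with an affect score toward a target affect. The function names, the affect scorer, and the blending weight are assumptions made for illustration only, not the authors' AffectON implementation.

        # Illustrative sketch only: re-rank next-token candidates by blending a
        # language model's probabilities with a (hypothetical) affect score.
        import math

        def affect_guided_choice(candidates, affect_score, target_affect, weight=0.5):
            """candidates: list of (token, probability) pairs from any language model.
            affect_score(token, target_affect) -> value in [0, 1] (assumed helper).
            Returns the token maximizing a log-linear blend of fluency and affect."""
            best_token, best_score = None, -math.inf
            for token, prob in candidates:
                score = (1 - weight) * math.log(prob + 1e-12) \
                        + weight * math.log(affect_score(token, target_affect) + 1e-12)
                if score > best_score:
                    best_token, best_score = token, score
            return best_token

        # Toy usage with a stand-in affect scorer: picks the token that best
        # balances fluency (probability) against the targeted positive affect.
        toy_affect = {"great": 0.9, "fine": 0.6, "terrible": 0.1}
        print(affect_guided_choice(
            [("great", 0.2), ("fine", 0.5), ("terrible", 0.3)],
            lambda tok, aff: toy_affect.get(tok, 0.5),
            target_affect="positive"))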
  • Publication
    Exploring users interested in 3D food printing and their attitudes: case of the employees of a kitchen appliance company
    (Taylor and Francis Inc, 2022) Kocaman, Yağmur (PhD Student); Mert, Aslı Ermiş (Faculty Member); Özcan, Oğuzhan (Faculty Member); Department of Sociology; Department of Media and Visual Arts; KU Arçelik Research Center for Creative Industries (KUAR); Graduate School of Social Sciences and Humanities; College of Social Sciences and Humanities
    3D Food Printing (3DFP) technology is expected to enter homes in the near future as a kitchen appliance. On the other hand, 3DFP is perceived as a non-domestic technology by potential users, and domestic users' attitudes and everyday habits have received less attention in previous 3DFP research. Exploring their perspective is needed to reflect their daily kitchen dynamics in the design process and to discover possible new benefits situated in the home kitchen. On this basis, this study focuses on finding potential 3DFP users and explores their attitudes towards using 3DFP technology in their home kitchens through a two-stage study. First, we prioritized potential users based on their relationship with food through a questionnaire and found six factors that positively affect their attitude towards 3DFP: cooking every day; ordering food less than once a month; eating out at least a couple of times a month; having a mini oven, a multicooker, or a kettle; liking to try new foods; and thinking that cooking is a fun activity. Second, we conducted semi-structured interviews with seven participants to discuss the possible benefits and drawbacks of 3DFP technology for their daily lives in the kitchen. Results revealed two new benefits that 3DFP at home may provide: risk-free cooking and cooking for self-improvement. We discuss the potential implications of these two benefits for design and HCI research, focusing on how to integrate automation and the pleasurable aspects of cooking into future 3DFP devices.
  • Publication
    Gaze-based predictive user interfaces: visualizing user intentions in the presence of uncertainty
    (Academic Press Ltd - Elsevier Science Ltd, 2018) Karaman, Çağla Çiğ (PhD Student); Sezgin, Tevfik Metin (Faculty Member); Department of Computer Engineering; Graduate School of Sciences and Engineering; College of Engineering
    Human eyes exhibit different characteristic patterns during different virtual interaction tasks, such as moving a window, scrolling a piece of text, or maximizing an image. The human-computer studies literature contains examples of intelligent systems that can predict a user's task-related intentions and goals based on eye-gaze behavior. However, these systems are generally evaluated in terms of prediction accuracy and on previously collected offline interaction data. Little attention has been paid to creating real-time interactive systems using eye gaze and evaluating them in online use. We have five main contributions that address this gap from a variety of aspects. First, we present the first line of work that uses real-time feedback generated by a gaze-based probabilistic task prediction model to build an adaptive real-time visualization system: our system is able to dynamically provide adaptive interventions that are informed by real-time user behavior data. Second, we propose two novel adaptive visualization approaches that take into account the presence of uncertainty in the outputs of prediction models. Third, we offer a personalization method to suggest which approach will be more suitable for each user in terms of system performance (measured as prediction accuracy). Personalization boosts system performance and provides users with the more suitable visualization approach (measured in terms of usability and perceived task load). Fourth, by means of a thorough usability study, we quantify the effects of the proposed visualization approaches and prediction errors on natural user behavior and on the performance of the underlying prediction systems. Finally, this paper also demonstrates that our previously published gaze-based task prediction system, which was assessed as successful in an offline test scenario, can also be successfully utilized in realistic online usage scenarios.
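    A minimal illustrative sketch (in Python) of one way an uncertainty-aware adaptive visualization, as described in the abstract, could map a task predictor's output distribution to an on-screen intervention: the intervention is suppressed when the prediction is too uncertain, and its visual strength otherwise scales with confidence. The function, threshold, and task names are hypothetical and do not reproduce the paper's system.

        # Illustrative sketch only: turn a gaze-based task predictor's output
        # distribution into an adaptive visualization decision under uncertainty.
        def choose_intervention(task_probs, confidence_threshold=0.6):
            """task_probs: dict mapping candidate tasks (e.g., 'scroll',
            'move_window') to predicted probabilities. Returns (task, opacity),
            where opacity in [0, 1] scales how strongly the predicted
            intervention is visualized, or (None, 0.0) when the prediction is
            too uncertain to act on."""
            task, prob = max(task_probs.items(), key=lambda kv: kv[1])
            if prob < confidence_threshold:
                return None, 0.0          # too uncertain: show no adaptive overlay
            # Linearly rescale confidence above the threshold into visible opacity.
            opacity = (prob - confidence_threshold) / (1.0 - confidence_threshold)
            return task, round(opacity, 2)

        # Toy usage: a fairly confident 'scroll' prediction yields a partial overlay.
        print(choose_intervention({"scroll": 0.72, "move_window": 0.18, "maximize": 0.10}))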