Research Outputs
Permanent URI for this community: https://hdl.handle.net/20.500.14288/2
Search Results
4 results
Publication (Metadata only)
Informing the design of question-asking conversational agents for reflection (Springer Science and Business Media Deutschland GmbH, 2024)
Authors: Karaturhan, Pelin; Orhan, İlayda; Yantaç, Asım Evren
Affiliations: Department of Media and Visual Arts; Koç University KARMA Mixed Reality Technologies Training, Implementation and Dissemination Centre (KARMA); KU Arçelik Research Center for Creative Industries (KUAR); Graduate School of Social Sciences and Humanities; College of Social Sciences and Humanities; School of Medicine
Abstract: Reflecting on everyday experiences offers valuable insights and has the potential to enhance psychological well-being, yet not everyone has access to a facilitator for reflection. Conversational agents hold promise as companions for these discussions. We surveyed individuals with therapy experience to understand user needs and identified interaction strategies used in therapy. We then evaluated these strategies with five therapists and, together with their input, distilled our data into a set of interaction strategies for conversational agents that support reflection. We implemented these strategies in an AI chatbot prototype and conducted a one-week in-the-wild study with 34 participants to evaluate the strategies and the experience of interacting with a chatbot for reflection. Findings reveal that participants are willing to engage with a chatbot, even one with limited capabilities. Critical aspects include the chatbot’s contextual awareness, statement repetition, and human-like qualities. Successfully balancing questions with non-question statements is essential for a pleasurable dialogue-driven reflection (a minimal turn-selection sketch follows this entry). Our paper presents implications for future design and research studies. © The Author(s), under exclusive licence to Springer-Verlag London Ltd., part of Springer Nature 2024.
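To make the question/statement balance concrete, here is a minimal turn-selection sketch in Python; the prompt pools, the 50% question ratio, and the four-turn lookback are illustrative assumptions, not the strategies derived in the paper.

import random

# Hypothetical prompt pools; the paper's strategies come from therapist
# input and are not reproduced here.
QUESTION_PROMPTS = [
    "What stood out to you about that?",
    "How did that make you feel?",
]
NON_QUESTION_PROMPTS = [
    "That sounds like it mattered to you.",  # reflective statement
    "Thank you for sharing that with me.",   # acknowledgement
]

def next_turn(agent_history, question_ratio=0.5):
    """Pick the agent's next utterance, balancing questions with
    non-question statements (one finding of the study)."""
    recent = agent_history[-4:]
    questions = sum(1 for turn in recent if turn.endswith("?"))
    # If the agent's recent turns were question-heavy, switch to a statement.
    if recent and questions / len(recent) > question_ratio:
        return random.choice(NON_QUESTION_PROMPTS)
    return random.choice(QUESTION_PROMPTS)

# After two consecutive questions, the agent backs off to a statement.
print(next_turn(["How was your day?", "What happened next?"]))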
Publication (Metadata only)
Motor memory in HCI (ACM SIGCHI, 2020)
Authors: Patibanda, Rakesh; Semertzidis, Nathan Arthur; Scary, Michaela; La Delfa, Joseph Nathan; Baytaş, Mehmet Aydin; Martin-Niedecken, Anna Lisa; Strohmeier, Paul; Fruchard, Bruno; Leigh, Sang-Won; Mekler, Elisa D.; Nanayakkara, Suranga; Wiemeyer, Josef; Berthouze, Nadia; Kunze, Kai; Rikakis, Thanassis; Kelliher, Aisling; Warwick, Kevin; Van Den Hoven, Elise; Mueller, Florian Floyd; Mann, Steve
Abstract: There is mounting evidence that embodiment is foundational to cognition. In HCI, this understanding has been incorporated into concepts like embodied interaction, bodily play, and natural user interfaces. However, while embodied cognition suggests a strong connection between motor activity and memory, the design of technological systems that target this connection has been largely overlooked. This presents an opportunity to extend human capabilities by augmenting motor memory. Augmenting motor memory is now possible with the advent of new and emerging technologies, including neuromodulation, electrical stimulation, brain-computer interfaces, and adaptive intelligent systems. This workshop aims to explore the possibility of augmenting motor memory using these and other technologies. In doing so, we stand to benefit not only from new technologies and interactions but also from a means to further study cognition.

Publication (Metadata only)
Realtime engagement measurement in human-computer interaction (Institute of Electrical and Electronics Engineers Inc., 2020)
Authors: Sezgin, Tevfik Metin; Yemez, Yücel; Erzin, Engin; Türker, Bekir Berker; Numanoğlu, Tuğçe; Kesim, Ege
Affiliations: Department of Computer Engineering, College of Engineering; Graduate School of Sciences and Engineering
Abstract: Social robots are expected to understand their interlocutors and behave accordingly, as humans do. Endowing robots with the capability to monitor user engagement during their interactions with humans is a crucial step toward this goal. In this work, an interactive game played with a robot is designed and implemented. During the interaction, user engagement is monitored in real time by detecting user gaze, turn-taking, laughter/smiles, and head nods from audio-visual data. In the experiments conducted, the engagement monitored in real time is found to be consistent with human-annotated engagement levels (an illustrative cue-fusion sketch follows the last entry below).

Publication (Metadata only)
Use of affective visual information for summarization of human-centric videos (2022)
Authors: Kopro, Berkay; Erzin, Engin
Affiliations: Department of Computer Engineering, College of Engineering
Abstract: The increasing volume of user-generated human-centric video content and its applications, such as video retrieval and browsing, require the compact representations addressed by the video summarization literature. Current supervised studies formulate video summarization as a sequence-to-sequence learning problem, and existing solutions often neglect the growing share of human-centric videos, which inherently carry affective content. In this study, we investigate affective-information-enriched supervised video summarization for human-centric videos. First, we train a visual-input-driven state-of-the-art continuous emotion recognition model (CER-NET) on the RECOLA dataset to estimate activation and valence attributes. Then, we integrate the estimated emotional attributes and their high-level embeddings from the CER-NET with the visual information to define the proposed affective video summarization (AVSUM) architectures. In addition, we investigate the use of attention to improve the AVSUM architectures and propose two new architectures based on temporal attention (TA-AVSUM) and spatial attention (SA-AVSUM). We conduct video summarization experiments on the TvSum and COGNIMUSE datasets. The proposed temporal-attention-based TA-AVSUM architecture attains competitive video summarization performance, with strong improvements for human-centric videos over the state of the art in terms of F-score, self-defined face recall, and rank correlation metrics (a temporal-attention sketch follows below).
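For the real-time engagement entry above, a minimal sketch of fusing per-frame cue detections into a windowed engagement score; the cue weights, window length, and binary detectors are assumptions for illustration, not the paper's trained audio-visual pipeline.

from collections import deque

class EngagementMonitor:
    """Fuses per-frame engagement cues (gaze, turn-taking, smiles,
    head nods) into a smoothed score; weights are illustrative."""

    def __init__(self, window=30):
        self.frames = deque(maxlen=window)  # sliding window of frame scores
        self.weights = {"gaze": 0.4, "turn": 0.2, "smile": 0.2, "nod": 0.2}

    def update(self, cues):
        # cues: per-frame binary detections, e.g. {"gaze": 1, "smile": 1}
        score = sum(w * cues.get(k, 0) for k, w in self.weights.items())
        self.frames.append(score)
        # A windowed average smooths noisy per-frame detections.
        return sum(self.frames) / len(self.frames)

monitor = EngagementMonitor()
print(monitor.update({"gaze": 1, "smile": 1}))  # 0.6 on the first frame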
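For the affective summarization entry, a minimal temporal-attention frame scorer in the spirit of TA-AVSUM, written in PyTorch; the feature dimensions, fusion by concatenation, and layer sizes are assumptions, not the published architecture.

import torch
import torch.nn as nn

class TemporalAttentionScorer(nn.Module):
    """Scores each frame's importance from visual features fused with
    estimated activation/valence attributes (layer sizes illustrative)."""

    def __init__(self, visual_dim=1024, affect_dim=2, hidden=128):
        super().__init__()
        self.proj = nn.Linear(visual_dim + affect_dim, hidden)
        self.attn = nn.MultiheadAttention(hidden, num_heads=4, batch_first=True)
        self.score = nn.Linear(hidden, 1)

    def forward(self, visual, affect):
        # visual: (B, T, visual_dim); affect: (B, T, affect_dim)
        x = torch.relu(self.proj(torch.cat([visual, affect], dim=-1)))
        # Self-attention lets each frame weigh every other frame in time.
        ctx, _ = self.attn(x, x, x)
        return torch.sigmoid(self.score(ctx)).squeeze(-1)  # (B, T) importance

frames = torch.randn(1, 120, 1024)  # 120 frames of visual features
affect = torch.randn(1, 120, 2)     # per-frame activation/valence estimates
print(TemporalAttentionScorer()(frames, affect).shape)  # torch.Size([1, 120])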