Publications without Fulltext

Permanent URI for this collection: https://hdl.handle.net/20.500.14288/3

Search Results

Now showing 1 - 10 of 51
  • Item
    Tactile perception of coated smooth surfaces
    (Institute of Electrical and Electronics Engineers Inc., 2023) Sezgin, Alperen; Er, Utku; Turkuz, Seniz; Aliabbasi, Easa; Aydıngül, Volkan; Başdoğan, Çağatay; Department of Mechanical Engineering; Graduate School of Sciences and Engineering; College of Engineering
    Although surface coating is commonly used in many industries to improve the aesthetics and functionality of the end product, our tactile perception of coated surfaces has not yet been investigated in depth. In fact, only a few studies have investigated the effect of coating material on our tactile perception of extremely smooth surfaces with roughness amplitudes on the order of a few nanometers. Moreover, the current literature needs more studies linking the physical measurements performed on these surfaces to our tactile perception, in order to further understand the adhesive contact mechanism underlying our percept. In this study, we first perform 2AFC (two-alternative forced-choice) experiments with 8 participants to quantify their ability to tactually discriminate 5 smooth glass surfaces coated with 3 different materials. We then measure the coefficient of friction between the human finger and those 5 surfaces via a custom-made tribometer, and their surface energies via sessile drop tests performed with 4 different liquids. The results of our psychophysical experiments and physical measurements show that coating material has a strong influence on our tactile perception, and that the human finger is capable of detecting differences in surface chemistry due, possibly, to molecular interactions.
  • Publication
    Eliciting parents' insights into products for supporting and tracking children's fine motor development
    (Assoc Computing Machinery, 2022) Gürbüzsel, İpek; Göksun, Tilbe; Coşkun, Aykut; Department of Psychology; Department of Media and Visual Arts; KU Arçelik Research Center for Creative Industries (KUAR); Graduate School of Social Sciences and Humanities; College of Social Sciences and Humanities
    Early development of fine motor skills is a critical milestone for children, one that also supports the formation and maturation of other developmental areas such as language. While toys and daily artefacts can support children's fine motor skills, parents play a profound role in monitoring their developmental progress. Although there are several products to support fine motor development and help parents monitor their children's progress, the literature lacks a source that might inform the design of such products. As the first step of a larger research project, we conducted semi-structured interviews with 13 parents to gather their insights into and expectations of such supportive products. We designed a sensor-embedded toy concept, ANIMO, aimed at supporting the fine motor development of 7 to 24-month-old children and assisting parents in tracking their children's developmental progress via a mobile app. We showed this concept to parents during interviews to facilitate the insight elicitation process. We present ANIMO, three themes summarizing parents' insights into and expectations of products supporting fine motor development, and implications for their design.
  • Publication
    Designing physical objects for young children's magnitude understanding: a TUI research through design journey
    (Assoc Computing Machinery, 2022) Beşevli, Ceylan; Göksun, Tilbe; Özcan, Oğuzhan; Department of Psychology; Department of Media and Visual Arts; KU Arçelik Research Center for Creative Industries (KUAR); Graduate School of Social Sciences and Humanities; College of Social Sciences and Humanities
    Magnitude understanding, an understudied topic in Child-Computer Interaction, entails making nonsymbolic "more-less" comparisons that influence young children's later math and academic achievements. To support this ability, designing tangible user interfaces (TUIs) demands considering many facets, ranging from elements within the physical world to the digital design components. This multifaceted activity brings many design decisions often not reflected in research. Therefore, we present this reflection via our research through design process in developing a vital design element, the physical form. We share our (i) physical object design criteria elicitation for magnitude understanding, (ii) hands-on making process, and (iii) preliminary studies with children engaging with objects. With the insights obtained through these steps, we project how this physical object-initiated research inspires the TUI in the upcoming steps and present design takeaways for CCI researchers.
  • Publication
    AffectON: Incorporating affect into dialog generation
    (IEEE-Inst Electrical Electronics Engineers Inc, 2023) Bucinca, Zana; Yemez, Yücel; Erzin, Engin; Sezgin, Tevfik Metin; Department of Computer Engineering; Koç University İş Bank Artificial Intelligence Center (KUIS AI); College of Engineering
    Due to its expressivity, natural language is paramount for explicit and implicit affective state communication among humans. The same linguistic inquiry (e.g., "How are you?") might induce responses with different affects depending on the affective state of the conversational partner(s) and the context of the conversation. Yet, most dialog systems do not consider affect as a constitutive aspect of response generation. In this article, we introduce AffectON, an approach for generating affective responses during inference. For generating language in a targeted affect, our approach leverages a probabilistic language model and an affective space. AffectON is language-model agnostic, since it can work with probabilities generated by any language model (e.g., sequence-to-sequence models, neural language models, n-grams). Hence, it can be employed for both affective dialog and affective language generation. We experimented with affective dialog generation and evaluated the generated text objectively and subjectively. For the subjective part of the evaluation, we designed a custom user interface for rating and provided recommendations for the design of such interfaces. The results, both subjective and objective, demonstrate that our approach is successful in pulling the generated language toward the targeted affect, with little sacrifice in syntactic coherence.
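    The abstract does not give AffectON's exact formulation, but the general idea it describes, pulling any language model's next-token distribution toward a targeted affect, can be sketched as follows. The `AFFECT_SCORE` lexicon, the multiplicative biasing rule, and the `weight` parameter are all illustrative assumptions, not the paper's method:

    ```python
    import math

    # Hypothetical vocabulary-level affect scores in [0, 1] (1 = strongly positive).
    # A real system would derive these from an affective space, not a toy lexicon.
    AFFECT_SCORE = {"great": 0.9, "fine": 0.6, "okay": 0.5, "terrible": 0.1}

    def bias_toward_affect(lm_probs, target_affect, weight=1.0):
        """Rescale next-token probabilities toward a target affect in [0, 1].

        lm_probs: dict token -> probability, from any language model.
        weight:   how strongly affect pulls the distribution (0 = pure LM).
        """
        scored = {}
        for token, p in lm_probs.items():
            affect = AFFECT_SCORE.get(token, 0.5)          # neutral if unknown
            # The closer a token's affect is to the target, the larger its boost.
            closeness = 1.0 - abs(affect - target_affect)
            scored[token] = p * math.exp(weight * closeness)
        total = sum(scored.values())
        return {t: s / total for t, s in scored.items()}   # renormalize

    lm = {"great": 0.2, "fine": 0.3, "okay": 0.3, "terrible": 0.2}
    positive = bias_toward_affect(lm, target_affect=1.0, weight=2.0)
    # "great" gains probability mass relative to "terrible" under a positive target.
    ```

    Because the biasing only rescales probabilities the model already assigns, any underlying model (n-gram, seq2seq, neural LM) can be plugged in, which is what makes such an approach model-agnostic.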
  • Publication
    Hotspotizer: end-user authoring of mid-air gestural interactions
    (Association for Computing Machinery, 2014) Baytaş, Mehmet Aydın; Yemez, Yücel; Özcan, Oğuzhan; Department of Computer Engineering; Department of Media and Visual Arts; KU Arçelik Research Center for Creative Industries (KUAR); Graduate School of Social Sciences and Humanities; College of Engineering; College of Social Sciences and Humanities
    Drawing from a user-centered design process and guidelines derived from the literature, we developed a paradigm based on space discretization for declaratively authoring mid-air gestures and implemented it in Hotspotizer, an end-to-end toolkit for mapping custom gestures to keyboard commands. Our implementation empowers diverse user populations, including end-users without domain expertise, to develop custom gestural interfaces within minutes, for use with arbitrary applications.
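    The abstract does not detail the space-discretization paradigm; a minimal sketch of the general idea is to quantize tracked 3D joint positions into grid cells ("hotspots") and declare a gesture as an ordered cell sequence. The cell size, the `swipe_right` gesture, and the matching rule below are illustrative assumptions:

    ```python
    def to_cell(point, origin=(0.0, 0.0, 0.0), cell_size=0.25):
        """Map a 3D joint position (metres) to a discrete grid-cell index."""
        return tuple(int((p - o) // cell_size) for p, o in zip(point, origin))

    def matches_gesture(trajectory, hotspot_sequence, origin=(0.0, 0.0, 0.0),
                        cell_size=0.25):
        """True if the trajectory visits the declared hotspot cells in order."""
        cells = iter(to_cell(p, origin, cell_size) for p in trajectory)
        # Ordered-subsequence match: each target cell must appear after the last.
        return all(any(c == target for c in cells) for target in hotspot_sequence)

    # A hypothetical "swipe right" declared as two adjacent cells along x.
    swipe_right = [(0, 0, 0), (1, 0, 0)]
    path = [(0.1, 0.1, 0.1), (0.2, 0.1, 0.1), (0.3, 0.1, 0.1), (0.4, 0.1, 0.1)]
    ```

    Declaring gestures as cell sequences, rather than as trained recognizer models, is what makes such a scheme authorable by end-users without machine-learning expertise.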
  • Publication
    An audio-driven dancing avatar
    (Springer, 2008) Balci, Koray; Kizoglu, Idil; Akarun, Lale; Canton-Ferrer, Cristian; Tilmanne, Joelle; Bozkurt, Elif; Erdem, A. Tanju; Yemez, Yücel; Ofli, Ferda; Demir, Yasemin; Erzin, Engin; Tekalp, Ahmet Murat; Department of Computer Engineering; Department of Electrical and Electronics Engineering; Graduate School of Sciences and Engineering; College of Engineering
    We present a framework for training and synthesis of an audio-driven dancing avatar. The avatar is trained for a given musical genre using the multicamera video recordings of a dance performance. The video is analyzed to capture the time-varying posture of the dancer's body whereas the musical audio signal is processed to extract the beat information. We consider two different marker-based schemes for the motion capture problem. The first scheme uses 3D joint positions to represent the body motion whereas the second uses joint angles. Body movements of the dancer are characterized by a set of recurring semantic motion patterns, i.e., dance figures. Each dance figure is modeled in a supervised manner with a set of HMM (Hidden Markov Model) structures and the associated beat frequency. In the synthesis phase, an audio signal of unknown musical type is first classified, within a time interval, into one of the genres that have been learnt in the analysis phase, based on mel frequency cepstral coefficients (MFCC). The motion parameters of the corresponding dance figures are then synthesized via the trained HMM structures in synchrony with the audio signal based on the estimated tempo information. Finally, the generated motion parameters, either the joint angles or the 3D joint positions of the body, are animated along with the musical audio using two different animation tools that we have developed. Experimental results demonstrate the effectiveness of the proposed framework.
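    The genre-classification step the abstract describes, assigning an audio interval to a learnt genre based on MFCC features, can be illustrated with a nearest-centroid rule. The toy 2-D vectors below stand in for real MFCC frames, and nearest-centroid matching is a generic illustration, not necessarily the classifier the authors used:

    ```python
    import numpy as np

    def genre_centroids(training):
        """Mean feature vector per genre; training: genre -> (frames, coeffs) array."""
        return {g: feats.mean(axis=0) for g, feats in training.items()}

    def classify_interval(mfcc_frames, centroids):
        """Assign the interval's mean MFCC vector to the nearest genre centroid."""
        mean_vec = mfcc_frames.mean(axis=0)
        return min(centroids, key=lambda g: np.linalg.norm(mean_vec - centroids[g]))

    # Toy 2-D "MFCC" features standing in for real cepstral coefficients.
    rng = np.random.default_rng(0)
    training = {
        "salsa": rng.normal(loc=[0.0, 0.0], scale=0.1, size=(50, 2)),
        "waltz": rng.normal(loc=[5.0, 5.0], scale=0.1, size=(50, 2)),
    }
    centroids = genre_centroids(training)
    test_interval = rng.normal(loc=[5.0, 5.0], scale=0.1, size=(20, 2))
    ```

    In the framework described above, the predicted genre then selects which set of trained HMM dance-figure models drives the synthesis, in synchrony with the estimated tempo.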
  • Publication
    On the convergence of ICA algorithms with symmetric orthogonalization
    (IEEE, 2008) Erdoğan, Alper Tunga; Department of Electrical and Electronics Engineering; College of Engineering
    We study the convergence behavior of Independent Component Analysis (ICA) algorithms that are based on contrast function maximization and that employ the symmetric orthogonalization method to guarantee the orthogonality of the search matrix. In particular, we characterize the critical points of the corresponding optimization problem and the stationary points of the conventional gradient ascent and fixed-point algorithms. As an interesting and useful feature of the symmetric orthogonalization method, we show that its use enables monotonic convergence of fixed-point ICA algorithms based on convex contrast functions.
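    The central operation here, symmetric orthogonalization W ← (WWᵀ)^(-1/2)W, replaces the current demixing matrix with its nearest orthogonal matrix; it can be computed via the SVD. The kurtosis-based fixed-point update below is a generic FastICA-style illustration of the class of algorithms the abstract analyzes, not the paper's exact derivation:

    ```python
    import numpy as np

    def symmetric_orthogonalize(W):
        """Return (W W^T)^(-1/2) W, the closest orthogonal matrix to W."""
        U, _, Vt = np.linalg.svd(W)  # equivalent to the eigendecomposition route
        return U @ Vt

    def fixed_point_step(W, X):
        """One fixed-point update with the kurtosis contrast, g(u) = u^3.

        X: whitened data, shape (components, samples); W: current demixing matrix.
        """
        Y = W @ X
        # E{g(y) x^T} - E{g'(y)} W, with E{g'(y)} = 3 for unit-variance y.
        W_new = (Y ** 3) @ X.T / X.shape[1] - 3 * W
        return symmetric_orthogonalize(W_new)

    # Two independent sub-Gaussian (uniform) sources, linearly mixed.
    rng = np.random.default_rng(1)
    S = rng.uniform(-1, 1, size=(2, 5000))
    A = np.array([[1.0, 0.6], [0.4, 1.0]])
    X = A @ S
    X -= X.mean(axis=1, keepdims=True)
    # Whiten X first; the orthogonal search space assumes whitened data.
    d, E = np.linalg.eigh(np.cov(X))
    X = E @ np.diag(d ** -0.5) @ E.T @ X
    W = symmetric_orthogonalize(rng.normal(size=(2, 2)))
    for _ in range(50):
        W = fixed_point_step(W, X)
    ```

    Because every iterate is re-orthogonalized symmetrically (rather than row by row, as in deflationary schemes), no component is privileged, which is the setting in which the monotonic-convergence result for convex contrasts applies.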
  • Publication
    Exploring users interested in 3D food printing and their attitudes: case of the employees of a kitchen appliance company
    (Taylor and Francis Inc, 2022) Kocaman, Yağmur; Mert, Aslı Ermiş; Özcan, Oğuzhan; Department of Sociology; Department of Media and Visual Arts; KU Arçelik Research Center for Creative Industries (KUAR); Graduate School of Social Sciences and Humanities; College of Social Sciences and Humanities
    3D Food Printing (3DFP) technology is expected to enter homes in the near future as a kitchen appliance. On the other hand, 3DFP is perceived as a non-domestic technology by potential users, and domestic users' attitudes and everyday habits have received little attention in previous 3DFP research. Exploring their perspective is needed to reflect their daily kitchen dynamics in the design process and to discover possible new benefits situated in the home kitchen. On this basis, this study focuses on finding potential 3DFP users and explores their attitudes towards using 3DFP technology in their home kitchens through a two-stage study. First, we prioritized potential users based on their relationship with food through a questionnaire and found six factors that positively affect their attitude towards 3DFP: cooking every day; ordering food less than once a month; eating out at least a couple of times a month; owning a mini oven, a multicooker, or a kettle; liking to try new foods; and thinking that cooking is a fun activity. Second, we conducted semi-structured interviews with seven participants to discuss the possible benefits and drawbacks of 3DFP technology for their daily lives in the kitchen. Results revealed two new benefits that 3DFP at home may provide: risk-free cooking and cooking for self-improvement. We discuss the potential implications of these two benefits for design and HCI research, focusing on how to build automation and the pleasurable aspects of cooking into future 3DFP devices.
  • Publication
    Audio-facial laughter detection in naturalistic dyadic conversations
    (IEEE-Inst Electrical Electronics Engineers Inc, 2017) Türker, Bekir Berker; Yemez, Yücel; Sezgin, Tevfik Metin; Erzin, Engin; Department of Computer Engineering; Graduate School of Sciences and Engineering; College of Engineering
    We address the problem of continuous laughter detection over audio-facial input streams obtained from naturalistic dyadic conversations. We first present a meticulous annotation of laughters, cross-talk and environmental noise in an audio-facial database with explicit 3D facial mocap data. Using this annotated database, we rigorously investigate the utility of facial information, head movement and audio features for laughter detection. We identify a set of discriminative features using mutual information-based criteria, and show how they can be used with classifiers based on support vector machines (SVMs) and time delay neural networks (TDNNs). Informed by the analysis of the individual modalities, we propose a multimodal fusion setup for laughter detection using different classifier-feature combinations. We also incorporate bagging into our classification pipeline to address the class imbalance problem caused by the scarcity of positive laughter instances. Our results indicate that a combination of TDNNs and SVMs leads to superior detection performance, and that bagging effectively addresses data imbalance. Our experiments show that our multimodal approach supported by bagging compares favorably to the state of the art in the presence of detrimental factors such as cross-talk, environmental noise, and data imbalance.
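    The bagging step for class imbalance can be sketched as follows: each bag keeps every scarce positive (laughter) example and an equal-size random undersample of negatives, a classifier is trained per bag, and predictions are combined by majority vote. The 1-D threshold "classifier" below is a deliberately trivial stand-in for the paper's SVM/TDNN models, and the toy features are assumptions:

    ```python
    import random
    import statistics

    def make_balanced_bags(positives, negatives, n_bags=5, seed=7):
        """Each bag keeps every scarce positive plus an equal-size negative sample."""
        rng = random.Random(seed)
        return [positives + rng.sample(negatives, len(positives))
                for _ in range(n_bags)]

    def train_threshold(bag):
        """Stand-in 'classifier': threshold at the midpoint of the class means."""
        pos = [x for x, y in bag if y == 1]
        neg = [x for x, y in bag if y == 0]
        return (statistics.mean(pos) + statistics.mean(neg)) / 2

    def bagged_predict(thresholds, x):
        """Majority vote over the per-bag classifiers."""
        votes = sum(1 for t in thresholds if x > t)
        return 1 if votes > len(thresholds) / 2 else 0

    # Toy 1-D feature: laughter frames score high, non-laughter low; heavy imbalance.
    positives = [(x / 10, 1) for x in range(8, 13)]    # 5 positive instances
    negatives = [(x / 100, 0) for x in range(60)]      # 60 negative instances
    bags = make_balanced_bags(positives, negatives)
    thresholds = [train_threshold(b) for b in bags]
    ```

    Training each bag on a balanced subset prevents the classifiers from simply predicting the majority (non-laughter) class, while the vote across bags still exploits most of the negative data.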
  • Publication
    Learning from the users for spatio-temporal data visualization explorations on social events
    (Springer Int Publishing Ag, 2016) Çay, Damla; Yantaç, Asım Evren; Department of Media and Visual Arts; KU Arçelik Research Center for Creative Industries (KUAR); Graduate School of Social Sciences and Humanities; College of Social Sciences and Humanities
    The amount of volunteered geographic information is on the rise through geo-tagged data on social media. While this growth opens new paths for designers and developers to create new geographical visualizations and interactive geographic tools, it also engenders new design and visualization problems. We can now turn almost any kind of data into information useful in daily life. This paper explores novel visualization methods for spatio-temporal data about what is happening in the city, planned or unplanned. We evaluate design students' work on visualizing social events in the city and share the results as design implications. We further contribute by presenting intuitive visualization ideas for social events, for use by interactive media designers and developers who build map-based interactive tools.