Publications without Fulltext

Permanent URI for this collection: https://hdl.handle.net/20.500.14288/3

Search Results

Now showing 1 - 10 of 366
  • Publication
    Virtual reality simulation-based training in otolaryngology
    (Springer London Ltd, 2023) Ünsaler, Selin; Hafız, Ayşenur Meriç; Gökler, Ozan; Özkaya, Yasemin Sıla; School of Medicine; Koç University Hospital
    VR simulators are expected to gain a wider place in medical education in order to ensure high-quality surgical training, and their integration into residency programs is needed more than ever in the post-pandemic era. In this review, the literature was searched for articles reporting validation results of different VR simulators designed for the field of otolaryngology. A total of 213 articles were retrieved from the PubMed and Web of Science databases in January 2022 using the keywords "virtual reality simulation" and "otolaryngology". After removal of duplicates, 190 articles were reviewed by two independent authors, and all accessible articles in English reporting validation studies of virtual reality systems were included. Thirty-three articles reported validation studies of otolaryngology simulators: twenty-one on otology simulators, eight on rhinology simulators, and four on pharyngeal and laryngeal surgery simulators. Otology simulators were shown to increase trainee performance. In some studies, the efficacy of simulators was found to be comparable to cadaveric bone dissection, and trainees reported that VR simulators were very useful in facilitating the learning process and improved learning curves. Rhinology simulators designed for endoscopic sinus surgery were shown to have the construct validity to differentiate between surgeons at different levels of expertise. Simulators for temporal bone surgery and endoscopic sinus surgery can mimic the surgical environment and anatomy along with different surgical scenarios, and can therefore be integrated more widely into surgical training and trainee evaluation in the future. Currently, there are no validated surgical simulators for pharyngeal and laryngeal surgery.
  • Publication
    Exploring projection based mixed reality with tangibles for nonsymbolic preschool math education
    (Assoc Computing Machinery, 2019) Salman, Elif; Beşevli, Ceylan; Göksun, Tilbe; Özcan, Oğuzhan; Ürey, Hakan; Department of Psychology; Department of Media and Visual Arts; Department of Electrical and Electronics Engineering
    A child's early math development can stem from interactions with the physical world. Accordingly, current tangible interaction studies focus on preschool children's formal (symbolic) mathematics, i.e., number knowledge. However, recent developmental studies stress the importance of nonsymbolic number representation in math learning, i.e., understanding quantity relations without counting (more/less). To our knowledge, there are no tangible systems based on this math concept. We developed an initial tangible-based mixed reality (MR) setup with a small tabletop projector and a depth camera. Our goal was to observe children's interaction with the setup to guide our further design process towards developing nonsymbolic math training. In this paper we present our observations from sessions with four 3-to-5-year-old children and discuss their implications for future work. Initial clues show that our MR setup leads to exploratory and mindful interactions, which might generalize to other tangible MR systems for child education and could inspire interaction design studies.
  • Publication
    A novel test coverage metric for concurrently-accessed software components (A work-in-progress paper)
    (Springer-Verlag Berlin, 2006) Taşıran, Serdar; Elmas, Tayfun; Bölükbaşı, Güven; Keremoğlu, M. Erkan; Department of Computer Engineering
    We propose a novel, practical coverage metric called "location pairs" (LP) for concurrently-accessed software components. The LP metric effectively captures common concurrency errors that lead to atomicity or refinement violations. We describe a software tool for measuring LP coverage and outline an inexpensive application of predicate abstraction and model checking for ruling out infeasible coverage targets.
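The abstract does not give the metric's formal definition, but the idea of counting pairs of code locations exercised on the same shared object by different threads can be sketched as follows. This is a minimal illustrative toy (all names and the coverage formula are assumptions, not the paper's actual tool):

```python
from itertools import product

class LPCoverage:
    """Toy tracker for 'location pair' coverage: records a pair (l1, l2)
    when code location l2 accesses a shared object that was last accessed
    at location l1 by a *different* thread."""
    def __init__(self):
        self.last_access = {}   # object id -> (thread, location)
        self.covered = set()    # set of (loc1, loc2) pairs observed

    def access(self, obj_id, thread, location):
        prev = self.last_access.get(obj_id)
        if prev is not None and prev[0] != thread:
            self.covered.add((prev[1], location))
        self.last_access[obj_id] = (thread, location)

    def coverage(self, feasible_pairs):
        """Fraction of feasible location pairs observed so far."""
        return len(self.covered & feasible_pairs) / len(feasible_pairs)

# Example: two threads interleave accesses to shared object "x"
cov = LPCoverage()
cov.access("x", thread=1, location="L10")
cov.access("x", thread=2, location="L20")   # covers the pair (L10, L20)
feasible = set(product(["L10", "L20"], repeat=2))
print(cov.coverage(feasible))  # 0.25
```

Ruling out infeasible pairs (the paper's predicate-abstraction step) would shrink the `feasible` set and make the reported coverage more meaningful.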
  • Publication
    Hotspotizer: end-user authoring of mid-air gestural interactions
    (Association for Computing Machinery, 2014) Baytaş, Mehmet Aydın; Yemez, Yücel; Özcan, Oğuzhan; Department of Computer Engineering; Department of Media and Visual Arts; KU Arçelik Research Center for Creative Industries (KUAR) / KU Arçelik Yaratıcı Endüstriler Uygulama ve Araştırma Merkezi (KUAR)
    Drawing from a user-centered design process and guidelines derived from the literature, we developed a paradigm based on space discretization for declaratively authoring mid-air gestures and implemented it in Hotspotizer, an end-to-end toolkit for mapping custom gestures to keyboard commands. Our implementation empowers diverse user populations, including end-users without domain expertise, to develop custom gestural interfaces within minutes for use with arbitrary applications.
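The space-discretization paradigm can be sketched roughly as follows: continuous joint positions are mapped onto a coarse 3D grid, and a gesture is declared as an ordered sequence of "hotspot" cells the tracked joint must visit. This is an illustrative sketch only; the cell size, coordinates, and matching rule are assumptions, not Hotspotizer's actual implementation:

```python
# Hypothetical grid cell edge length, in meters
CELL = 0.25

def to_cell(pos):
    """Map a continuous 3D position (x, y, z) to a discrete grid cell."""
    x, y, z = pos
    return (int(x // CELL), int(y // CELL), int(z // CELL))

def matches(gesture, trajectory):
    """A gesture is a sequence of hotspot cells; it matches when the
    tracked joint visits those cells in order (subsequence check)."""
    it = iter(to_cell(p) for p in trajectory)
    return all(any(cell == c for c in it) for cell in gesture)

# A 'swipe right' declared as two hotspots the hand must visit in order
swipe_right = [(0, 4, 2), (2, 4, 2)]
traj = [(0.1, 1.1, 0.6), (0.3, 1.1, 0.6), (0.6, 1.1, 0.6)]
print(matches(swipe_right, traj))  # True
```

A declarative representation like this is what makes end-user authoring feasible: the gesture is just data (a list of cells), so it can be drawn on a grid editor and mapped to a keyboard command without writing code.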
  • Publication
    Exploring users interested in 3D food printing and their attitudes: case of the employees of a kitchen appliance company
    (Taylor and Francis Inc, 2022) Kocaman, Yağmur; Mert, Aslı Ermiş; Özcan, Oğuzhan; Department of Sociology; Department of Media and Visual Arts; KU Arçelik Research Center for Creative Industries (KUAR) / KU Arçelik Yaratıcı Endüstriler Uygulama ve Araştırma Merkezi (KUAR)
    3D Food Printing (3DFP) technology is expected to enter homes in the near future as a kitchen appliance. On the other hand, 3DFP is perceived as a non-domestic technology by potential users, and domestic users' attitudes and everyday habits have received less attention in previous 3DFP research. Exploring their perspective is needed to reflect their daily kitchen dynamics in the design process and to discover possible new benefits situated in the home kitchen. On this basis, this study focuses on finding potential 3DFP users and explores their attitudes towards using 3DFP technology in their home kitchens through a two-stage study. First, we prioritized potential users based on their relationship with food through a questionnaire and found six factors that positively affect their attitude towards 3DFP: cooking every day; ordering food less than once a month; eating out at least a couple of times a month; having a mini oven, a multicooker, or a kettle; liking to try new foods; and thinking that cooking is a fun activity. Second, we conducted semi-structured interviews with seven participants to discuss the possible benefits and drawbacks of 3DFP technology for their daily lives in the kitchen. Results revealed two new benefits that 3DFP at home may provide: risk-free cooking and cooking for self-improvement. We discuss the potential implications of these two benefits for design and HCI research, focusing on how to build automation and the pleasurable aspects of cooking into future 3DFP devices.
  • Publication
    Multicamera audio-visual analysis of dance figures
    (IEEE, 2007) Ofli, Ferda; Erzin, Engin; Yemez, Yücel; Tekalp, Ahmet Murat; Department of Computer Engineering; Department of Electrical and Electronics Engineering
    We present an automated system for multicamera motion capture and audio-visual analysis of dance figures. The multiview video of a dancing actor is acquired using 8 synchronized cameras. The motion capture technique is based on 3D tracking of the markers attached to the person's body in the scene, using stereo color information without the need for an explicit 3D model. The resulting set of 3D points is then used to extract body motion features as 3D displacement vectors, whereas MFC coefficients serve as the audio features. In the first stage of multimodal analysis, we perform Hidden Markov Model (HMM) based unsupervised temporal segmentation of the audio and body motion features, separately, to determine the recurrent elementary audio and body motion patterns. In the second stage, we investigate the correlation of body motion patterns with audio patterns, which can be used for estimation and synthesis of realistic audio-driven body animation.
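The body motion features described above, frame-to-frame 3D displacement vectors of tracked markers, can be sketched in a few lines. The marker positions below are synthetic and the flattening scheme is an assumption; the paper's actual features come from its multicamera 3D tracking pipeline:

```python
# Illustrative sketch: body motion features as frame-to-frame 3D
# displacement vectors of tracked markers.

def displacement_features(frames):
    """frames: list of per-frame marker positions, each a list of (x, y, z).
    Returns one flat displacement vector per consecutive frame pair."""
    feats = []
    for prev, cur in zip(frames, frames[1:]):
        vec = []
        for (x0, y0, z0), (x1, y1, z1) in zip(prev, cur):
            vec += [x1 - x0, y1 - y0, z1 - z0]
        feats.append(vec)
    return feats

# Two markers tracked over three frames
frames = [
    [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0)],
    [(0.1, 0.0, 0.0), (1.0, 0.1, 0.0)],
    [(0.2, 0.0, 0.0), (1.0, 0.2, 0.0)],
]
feats = displacement_features(frames)
print(feats[0])  # [0.1, 0.0, 0.0, 0.0, 0.1, 0.0]
```

Sequences of such vectors, alongside the audio features, are what the HMM-based stage would segment into recurrent elementary patterns.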
  • Publication
    Audio-facial laughter detection in naturalistic dyadic conversations
    (IEEE-Inst Electrical Electronics Engineers Inc, 2017) Türker, Bekir Berker; Yemez, Yücel; Sezgin, Tevfik Metin; Erzin, Engin; Department of Computer Engineering
    We address the problem of continuous laughter detection over audio-facial input streams obtained from naturalistic dyadic conversations. We first present a meticulous annotation of laughters, cross-talk and environmental noise in an audio-facial database with explicit 3D facial mocap data. Using this annotated database, we rigorously investigate the utility of facial information, head movement and audio features for laughter detection. We identify a set of discriminative features using mutual information-based criteria, and show how they can be used with classifiers based on support vector machines (SVMs) and time delay neural networks (TDNNs). Informed by the analysis of the individual modalities, we propose a multimodal fusion setup for laughter detection using different classifier-feature combinations. We also effectively incorporate bagging into our classification pipeline to address the class imbalance problem caused by the scarcity of positive laughter instances. Our results indicate that a combination of TDNNs and SVMs leads to superior detection performance, and bagging effectively addresses data imbalance. Our experiments show that our multimodal approach supported by bagging compares favorably to the state of the art in the presence of detrimental factors such as cross-talk, environmental noise, and data imbalance.
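The bagging idea for class imbalance can be illustrated with a small self-contained sketch: each ensemble member trains on all of the rare positive instances plus an equally sized random subsample of negatives, and predictions are majority-voted. Everything here (the toy threshold "classifier", the sample counts) is a hypothetical stand-in for the paper's SVM/TDNN pipeline:

```python
import random

def bagged_ensemble(positives, negatives, train_fn, n_members=5, seed=0):
    """Train n_members classifiers, each on all positives plus a
    random negative subsample of the same size (balanced bagging)."""
    rng = random.Random(seed)
    members = []
    for _ in range(n_members):
        neg_sample = rng.sample(negatives, k=len(positives))
        members.append(train_fn(positives + neg_sample))
    return members

def majority_vote(members, x):
    votes = sum(clf(x) for clf in members)
    return 1 if votes > len(members) / 2 else 0

# Toy 1-D 'classifier': threshold at the mean of the training inputs
def train_fn(samples):
    xs = [x for x, _ in samples]
    thr = sum(xs) / len(xs)
    return lambda x: 1 if x > thr else 0

positives = [(0.9, 1), (0.8, 1)]  # scarce laughter-like instances
negatives = [(0.1, 0), (0.2, 0), (0.3, 0), (0.15, 0), (0.25, 0), (0.05, 0)]
members = bagged_ensemble(positives, negatives, train_fn)
print(majority_vote(members, 0.85))  # 1
```

The point of the subsampling is that no single member is swamped by the majority class, while the vote across members still uses all of the negative data.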
  • Publication
    Object placement for high bandwidth memory augmented with high capacity memory
    (IEEE, 2017) Laghari, Mohammad; Erten, Didem Unat; Department of Computer Engineering
    High bandwidth memory (HBM) is an emerging technology that aims to improve the performance of bandwidth-limited applications. Even though it provides high bandwidth, it must be augmented with DRAM to meet the memory capacity requirements of most applications. Due to this limitation, objects in an application should be placed judiciously across the heterogeneous memory subsystems. In this study, we propose an object placement algorithm that assigns program objects to fast or slow memory when the capacity of the fast memory is insufficient to hold all objects, in order to increase overall application performance. Our algorithm uses reference counts and the type of references (read or write) to make an initial placement of data. In addition, we perform various memory bandwidth benchmarks on the Intel Knights Landing (KNL) architecture for use in our placement algorithm. Not surprisingly, high bandwidth memory sustains higher read bandwidth than write bandwidth; however, placing write-intensive data on HBM results in better overall performance, because write-intensive data is penalized by the DRAM speed more severely than read-intensive data. Moreover, our benchmarks demonstrate that if a basic block makes references to both types of memories, it can in some cases perform worse than if it references only one type of memory. We test our proposed placement algorithm with 6 applications under various system configurations. By allocating objects according to our placement scheme, we achieve a speedup of up to 2x.
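The placement idea, score objects by their reference counts with writes weighted more heavily (since writes suffer more on DRAM), then fill HBM greedily by benefit per byte, can be sketched as follows. The weight, sizes, and counts are hypothetical; the paper's actual algorithm and benchmark-derived parameters differ:

```python
WRITE_WEIGHT = 2.0  # assumed: writes benefit more from HBM than reads

def place_objects(objects, hbm_capacity):
    """objects: list of (name, size, reads, writes) tuples.
    Greedily fills HBM with the highest benefit-per-byte objects;
    everything that does not fit goes to DRAM."""
    def score(obj):
        _, size, reads, writes = obj
        return (reads + WRITE_WEIGHT * writes) / size
    hbm, dram, free = [], [], hbm_capacity
    for obj in sorted(objects, key=score, reverse=True):
        name, size, _, _ = obj
        if size <= free:
            hbm.append(name)
            free -= size
        else:
            dram.append(name)
    return hbm, dram

# Object B is write-intensive, so it wins an HBM slot despite its size
objs = [("A", 4, 100, 10), ("B", 8, 20, 300), ("C", 8, 5, 5)]
hbm, dram = place_objects(objs, hbm_capacity=12)
print(hbm, dram)  # ['B', 'A'] ['C']
```

The greedy benefit-per-byte ordering is a standard knapsack heuristic; it captures the abstract's key observation that write-intensive objects deserve priority on HBM.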
  • Publication
    SecVLC: secure visible light communication for military vehicular networks
    (Association for Computing Machinery (ACM), 2016) Tsonev, Dobroslav; Burchardt, Harald; Uçar, Seyhan; Ergen, Sinem Çöleri; Özkasap, Öznur; Department of Electrical and Electronics Engineering; Department of Computer Engineering
    Technology coined as the vehicular ad hoc network (VANET) is harmonizing with the Intelligent Transportation System (ITS) and Intelligent Traffic System (ITF). An application scenario of VANET is military communication, where vehicles move as a convoy on roadways, requiring secure and reliable communication. However, the utilization of radio frequency (RF) communication in VANET limits its usage in military applications, due to the scarce frequency band and its vulnerability to security attacks. Visible Light Communication (VLC) has recently been introduced as a more secure alternative, limiting the reception of neighboring nodes with its directional transmission. However, secure vehicular VLC that ensures confidential data transfer among the participating vehicles is an open problem. In this paper, we propose a secure military light communication protocol (SecVLC) for enabling efficient and secure data sharing. We use the directionality property of VLC to ensure that only target vehicles participate in the communication. Vehicles use full-duplex communication, where infra-red (IR) is utilized to share a secret key and VLC is used to receive encrypted data. We experimentally demonstrate the suitability of SecVLC in outdoor scenarios at varying inter-vehicular distances with key metrics of interest, including security, data packet delivery ratio and delay.
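The two-channel flow, a secret key exchanged over the directional IR link, then symmetrically encrypted data over VLC, can be sketched end to end. The XOR keystream below is an illustrative stand-in for a real cipher (the abstract does not specify SecVLC's cryptographic construction), and the channel steps are simulated with plain variables:

```python
import hashlib
from itertools import count

def keystream(key, n):
    """Derive n pseudo-random bytes from the key (toy construction)."""
    out = b""
    for i in count():
        if len(out) >= n:
            return out[:n]
        out += hashlib.sha256(key + i.to_bytes(4, "big")).digest()

def xor_encrypt(key, data):
    """XOR with the keystream; applying it twice decrypts."""
    return bytes(a ^ b for a, b in zip(data, keystream(key, len(data))))

# Step 1: secret key shared over the directional IR channel
shared_key = b"exchanged-over-IR"
# Step 2: sender encrypts and transmits over VLC
ciphertext = xor_encrypt(shared_key, b"convoy position")
# Step 3: target vehicle decrypts with the IR-shared key
plaintext = xor_encrypt(shared_key, ciphertext)
print(plaintext)  # b'convoy position'
```

The security argument in the paper rests on directionality: a vehicle outside the IR beam never obtains `shared_key`, so intercepting the VLC ciphertext alone is not enough.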
  • Publication
    Gestanalytics: experiment and analysis tool for gesture-elicitation studies
    (Assoc Computing Machinery, 2017) Buruk, Oğuz Turan; Özcan, Oğuzhan; Department of Media and Visual Arts; KU Arçelik Research Center for Creative Industries (KUAR) / KU Arçelik Yaratıcı Endüstriler Uygulama ve Araştırma Merkezi (KUAR)
    Gesture-elicitation studies are common and important studies for understanding user preferences. In these studies, researchers aim at extracting gestures that users find desirable for different kinds of interfaces. During this process, researchers have to manually analyze many videos, which is a tiring and time-consuming process. Although current tools for video analysis provide annotation capabilities and features like automatic gesture analysis, researchers still need to (1) divide videos into meaningful pieces, (2) manually examine each piece, (3) match collected user data with these pieces, (4) code each video and (5) verify their coding. These processes are burdensome, and current tools do not aim to make them easier and faster. To fill this gap, we developed "GestAnalytics" with features of simultaneous video monitoring, video tagging and filtering. Our internal pilot tests show that GestAnalytics can be a beneficial tool for researchers who practice video analysis for gestural interfaces.