Researcher: Çiğ, Çağla
Name Variants: Çiğ, Çağla
Search Results
Now showing 1 - 6 of 6
1. New modalities, new challenges - annotating sketching and gaze data (Institute of Electrical and Electronics Engineers (IEEE), 2013) [Metadata only]
Authors: Çiğ, Çağla (PhD Student); Sezgin, Tevfik Metin (Faculty Member)
Affiliations: Department of Computer Engineering; Graduate School of Sciences and Engineering; College of Engineering
Abstract: One active line of research in the IUI community aims to build interfaces that combine multiple communication modalities to support more natural human-computer interaction. Multimodal interaction research relies heavily on the availability of carefully annotated data in various modalities. As a result, many authors have suggested general-purpose tools for annotation. However, existing tools do not support annotation of a number of recently emerging modalities. In particular, annotation of pen and eye gaze data is not fully supported by existing annotation systems, despite the increasing popularity of tablets and eye gaze-aware systems. This paper presents our efforts in designing and implementing a general-purpose annotator with comprehensive support for a large number of modalities.

2. Gaze-based proactive user interface for pen-based systems (Association for Computing Machinery, 2014) [Metadata only]
Author: Çiğ, Çağla (PhD Student)
Affiliation: Graduate School of Sciences and Engineering
Abstract: In typical human-computer interaction, users convey their intentions through traditional input devices (e.g. keyboards, mice, joysticks) coupled with standard graphical user interface elements. Recently, pen-based interaction has emerged as a more intuitive alternative to these traditional means. However, existing pen-based systems are limited by the fact that they rely heavily on auxiliary mode-switching mechanisms during interaction (e.g. hard or soft modifier keys, buttons, menus). In this paper, I describe the roadmap for my PhD research, which aims to use the eye gaze movements that naturally occur during pen-based interaction to reduce dependency on explicit mode-selection mechanisms in pen-based systems.
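The annotator described in entry 1 has to keep pen and gaze samples time-aligned with their labels. The sketch below illustrates one minimal way such data could be organized; the class and field names are illustrative assumptions, not the tool's actual API or data model.

```python
from dataclasses import dataclass, field

# Illustrative sketch only: names and fields are assumptions, not the annotator's actual API.

@dataclass
class Sample:
    t: float          # timestamp in seconds
    x: float          # horizontal coordinate (pen tip or gaze point)
    y: float          # vertical coordinate
    modality: str     # "pen" or "gaze"

@dataclass
class Annotation:
    start: float      # label start time (s)
    end: float        # label end time (s)
    label: str        # e.g. "drag", "scroll", "fixation"

@dataclass
class Recording:
    samples: list[Sample] = field(default_factory=list)
    annotations: list[Annotation] = field(default_factory=list)

    def samples_in(self, ann: Annotation, modality: str) -> list[Sample]:
        """Return the samples of one modality that fall inside an annotated interval."""
        return [s for s in self.samples
                if s.modality == modality and ann.start <= s.t <= ann.end]
```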
3. Gaze-based prediction of pen-based virtual interaction tasks (Academic Press Ltd - Elsevier Science Ltd, 2015) [Metadata only]
Authors: Çiğ, Çağla (PhD Student); Sezgin, Tevfik Metin (Faculty Member)
Affiliations: Department of Computer Engineering; College of Engineering
Abstract: In typical human-computer interaction, users convey their intentions through traditional input devices (e.g. keyboards, mice, joysticks) coupled with standard graphical user interface elements. Recently, pen-based interaction has emerged as a more intuitive alternative to these traditional means. However, existing pen-based systems are limited by the fact that they rely heavily on auxiliary mode-switching mechanisms during interaction (e.g. hard or soft modifier keys, buttons, menus). In this paper, we describe how eye gaze movements that naturally occur during pen-based interaction can be used to reduce dependency on explicit mode-selection mechanisms in pen-based systems. In particular, we show that a range of virtual manipulation commands, which would otherwise require auxiliary mode-switching elements, can be issued with an 88% success rate with the aid of users' natural eye gaze behavior during pen-only interaction. (C) 2014 Elsevier Ltd. All rights reserved.

4. Gaze-based virtual task predictor (Association for Computing Machinery, 2014) [Metadata only]
Authors: Çiğ, Çağla (PhD Student); Sezgin, Tevfik Metin (Faculty Member)
Affiliations: Department of Computer Engineering; Graduate School of Sciences and Engineering; College of Engineering
Abstract: Pen-based systems promise an intuitive and natural interaction paradigm for tablet PCs and stylus-enabled phones. However, typical pen-based interfaces require users to switch modes frequently in order to complete ordinary tasks. Mode switching is usually achieved through hard or soft modifier keys, buttons, and soft menus. Frequent invocation of these auxiliary mode-switching elements goes against the goal of intuitive, fluid, and natural interaction. In this paper, we present a gaze-based virtual task prediction system that has the potential to alleviate dependence on explicit mode switching in pen-based systems. In particular, we show that a range of virtual manipulation commands, which would otherwise require auxiliary mode-switching elements, can be issued with an 80% success rate with the aid of users' natural eye gaze behavior during pen-only interaction.

5. Gaze-based real-time activity recognition for proactive interfaces (IEEE, 2015) [Metadata only]
Authors: Sezgin, Tevfik Metin (Faculty Member); Çiğ, Çağla (PhD Student)
Affiliations: Department of Computer Engineering; College of Engineering; Graduate School of Sciences and Engineering
Abstract: One active line of research on gaze-based interaction aims to predict user activities during interaction with computerized systems. All of the existing studies, however, are able to detect the performed activity only after the activity ends. Therefore, it is not possible to employ these systems in real-time proactive user interfaces. In this paper, (1) an existing activity prediction system for pen-based mobile devices is modified for real-time activity prediction, and (2) an alternative time-based activity prediction system is introduced. The results of our comprehensive experiments demonstrate that the newly developed system is more successful than the existing system with respect to real-time activity prediction.
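Entries 3-5 share the same core idea: features computed from the gaze stream that accompanies a pen gesture are fed to a classifier that predicts the intended virtual manipulation task. The abstracts do not spell out the feature set or classifier, so the sketch below is only an assumed illustration, using a gaze-to-pen distance profile and a scikit-learn SVM as stand-ins.

```python
# Illustrative sketch only: the feature set, window handling, and classifier choice
# are assumptions; the papers' actual pipeline may differ.
import numpy as np
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

def gaze_pen_features(gaze_xy: np.ndarray, pen_xy: np.ndarray, n_bins: int = 20) -> np.ndarray:
    """Summarize one interaction window as a fixed-length gaze-to-pen distance profile.

    gaze_xy, pen_xy: arrays of shape (T, 2) with time-aligned gaze and pen coordinates,
    where the window is assumed to contain at least n_bins samples.
    """
    dist = np.linalg.norm(gaze_xy - pen_xy, axis=1)   # gaze-to-pen distance per sample
    bins = np.array_split(dist, n_bins)               # resample the profile to a fixed length
    profile = np.array([b.mean() for b in bins])
    return np.concatenate([profile, [dist.mean(), dist.std()]])

def train_task_predictor(windows, labels):
    """Train a task classifier from labelled windows (labels such as "drag" or "scroll")."""
    X = np.vstack([gaze_pen_features(g, p) for g, p in windows])
    clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", probability=True))
    clf.fit(X, labels)
    return clf
```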
6. Gaze-based biometric authentication: hand-eye coordination patterns as a biometric trait (Eurographics, 2016) [Open Access]
Authors: Sezgin, Tevfik Metin (Faculty Member); Çiğ, Çağla
Affiliations: Department of Computer Engineering; Graduate School of Sciences and Engineering; College of Engineering
Abstract: We propose a biometric authentication system for pointer-based systems including, but not limited to, increasingly prominent pen-based mobile devices. To unlock a mobile device equipped with our biometric authentication system, all the user needs to do is manipulate a virtual object presented on the device display. The user can select among a range of familiar manipulation tasks, namely drag, connect, maximize, minimize, and scroll. These simple tasks take around 2 seconds each and do not require any prior education or training [ÇS15]. More importantly, we have discovered that each user has a characteristic way of performing these tasks. Features that express these characteristics are hidden in the user's accompanying hand-eye coordination, gaze, and pointer behaviors. For this reason, as the user performs any selected task, we collect his/her eye gaze and pointer movement data using an eye gaze tracker and a pointer-based input device (e.g. a pen, stylus, finger, mouse, or joystick), respectively. Then, we extract meaningful and distinguishing features from this multimodal data to summarize the user's characteristic way of performing the selected task. Finally, we authenticate the user through three layers of security: (1) the user must have performed the manipulation task correctly (e.g. by drawing the correct pattern), (2) the user's hand-eye coordination and gaze behavior while performing this task should conform to his/her hand-eye coordination and gaze behavior model in the database, and (3) the user's pointer behavior while performing this task should conform to his/her pointer behavior model in the database.
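Entry 6 describes authentication as three consecutive checks: task correctness, a match against the enrolled hand-eye coordination and gaze model, and a match against the enrolled pointer model. The sketch below captures only that decision logic; the similarity scores and thresholds are placeholders I am assuming, not the paper's actual models.

```python
# Illustrative sketch of the three-layer decision described in the abstract.
# The scoring interface and threshold values are assumptions, not the paper's implementation.

def authenticate(task_ok: bool,
                 gaze_score: float,            # similarity of observed hand-eye/gaze behavior to the enrolled model
                 pointer_score: float,         # similarity of observed pointer behavior to the enrolled model
                 gaze_threshold: float = 0.8,
                 pointer_threshold: float = 0.8) -> bool:
    """Return True only if all three layers pass.

    Layer 1: the manipulation task itself was performed correctly.
    Layer 2: hand-eye coordination and gaze behavior match the enrolled model.
    Layer 3: pointer behavior matches the enrolled model.
    """
    if not task_ok:
        return False
    if gaze_score < gaze_threshold:
        return False
    return pointer_score >= pointer_threshold
```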