Publication: Gaze-based prediction of pen-based virtual interaction tasks
dc.contributor.department | Department of Computer Engineering | |
dc.contributor.department | Department of Computer Engineering | |
dc.contributor.kuauthor | Çiğ, Çağla | |
dc.contributor.kuauthor | Sezgin, Tevfik Metin | |
dc.contributor.kuprofile | PhD Student | |
dc.contributor.kuprofile | Faculty Member | |
dc.contributor.other | Department of Computer Engineering | |
dc.contributor.schoolcollegeinstitute | College of Engineering | |
dc.contributor.schoolcollegeinstitute | College of Engineering | |
dc.contributor.yokid | N/A | |
dc.contributor.yokid | 18632 | |
dc.date.accessioned | 2024-11-09T23:22:37Z | |
dc.date.issued | 2015 | |
dc.description.abstract | In typical human-computer interaction, users convey their intentions through traditional input devices (e.g. keyboards, mice, joysticks) coupled with standard graphical user interface elements. Recently, pen-based interaction has emerged as a more intuitive alternative to these traditional means. However, existing pen-based systems are limited by the fact that they rely heavily on auxiliary mode-switching mechanisms during interaction (e.g. hard or soft modifier keys, buttons, menus). In this paper, we describe how eye gaze movements that naturally occur during pen-based interaction can be used to reduce dependency on explicit mode-selection mechanisms in pen-based systems. In particular, we show that a range of virtual manipulation commands that would otherwise require auxiliary mode-switching elements can be issued with an 88% success rate with the aid of users' natural eye gaze behavior during pen-only interaction. (C) 2014 Elsevier Ltd. All rights reserved. | |
dc.description.indexedby | WoS | |
dc.description.indexedby | Scopus | |
dc.description.openaccess | NO | |
dc.description.publisherscope | International | |
dc.description.sponsoredbyTubitakEu | TÜBİTAK | |
dc.description.sponsorship | TUBITAK (The Scientific and Technological Research Council of Turkey) [110E175] | |
dc.description.sponsorship | TUBA (Turkish Academy of Sciences). The authors gratefully acknowledge the support and funding of TUBITAK (The Scientific and Technological Research Council of Turkey) under grant number 110E175 and TUBA (Turkish Academy of Sciences). | |
dc.description.volume | 73 | |
dc.identifier.doi | 10.1016/j.ijhcs.2014.09.005 | |
dc.identifier.eissn | 1095-9300 | |
dc.identifier.issn | 1071-5819 | |
dc.identifier.scopus | 2-s2.0-84908431699 | |
dc.identifier.uri | http://dx.doi.org/10.1016/j.ijhcs.2014.09.005 | |
dc.identifier.uri | https://hdl.handle.net/20.500.14288/11099 | |
dc.identifier.wos | 345479200009 | |
dc.keywords | Sketch-based interaction | |
dc.keywords | Multimodal interaction | |
dc.keywords | Predictive interfaces | |
dc.keywords | Gaze-based interfaces | |
dc.keywords | Feature selection | |
dc.keywords | Feature representation | |
dc.keywords | Multimodal databases | |
dc.keywords | Hand-eye coordination | |
dc.keywords | Movements | |
dc.keywords | Behavior | |
dc.language | English | |
dc.publisher | Academic Press Ltd - Elsevier Science Ltd | |
dc.source | International Journal of Human-Computer Studies | |
dc.subject | Computer science | |
dc.subject | Cybernetics | |
dc.subject | Ergonomics | |
dc.subject | Psychology | |
dc.title | Gaze-based prediction of pen-based virtual interaction tasks | |
dc.type | Journal Article | |
dspace.entity.type | Publication | |
local.contributor.authorid | N/A | |
local.contributor.authorid | 0000-0002-1524-1646 | |
local.contributor.kuauthor | Çiğ, Çağla | |
local.contributor.kuauthor | Sezgin, Tevfik Metin | |
relation.isOrgUnitOfPublication | 89352e43-bf09-4ef4-82f6-6f9d0174ebae | |
relation.isOrgUnitOfPublication.latestForDiscovery | 89352e43-bf09-4ef4-82f6-6f9d0174ebae |