Publication:
Gaze-based prediction of pen-based virtual interaction tasks

dc.contributor.department: Department of Computer Engineering
dc.contributor.department: Department of Computer Engineering
dc.contributor.kuauthor: Çiğ, Çağla
dc.contributor.kuauthor: Sezgin, Tevfik Metin
dc.contributor.kuprofile: PhD Student
dc.contributor.kuprofile: Faculty Member
dc.contributor.other: Department of Computer Engineering
dc.contributor.schoolcollegeinstitute: College of Engineering
dc.contributor.schoolcollegeinstitute: College of Engineering
dc.contributor.yokid: N/A
dc.contributor.yokid: 18632
dc.date.accessioned: 2024-11-09T23:22:37Z
dc.date.issued: 2015
dc.description.abstract: In typical human-computer interaction, users convey their intentions through traditional input devices (e.g. keyboards, mice, joysticks) coupled with standard graphical user interface elements. Recently, pen-based interaction has emerged as a more intuitive alternative to these traditional means. However, existing pen-based systems are limited by their heavy reliance on auxiliary mode-switching mechanisms during interaction (e.g. hard or soft modifier keys, buttons, menus). In this paper, we describe how the eye gaze movements that naturally occur during pen-based interaction can be used to reduce dependency on explicit mode selection mechanisms in pen-based systems. In particular, we show that a range of virtual manipulation commands that would otherwise require auxiliary mode-switching elements can be issued with an 88% success rate with the aid of users' natural eye gaze behavior during pen-only interaction. © 2014 Elsevier Ltd. All rights reserved.
dc.description.indexedby: WoS
dc.description.indexedby: Scopus
dc.description.openaccess: NO
dc.description.publisherscope: International
dc.description.sponsoredbyTubitakEu: TÜBİTAK
dc.description.sponsorship: TUBITAK (The Scientific and Technological Research Council of Turkey) [110E175]
dc.description.sponsorship: TUBA (Turkish Academy of Sciences)
dc.description.sponsorship: The authors gratefully acknowledge the support and funding of TUBITAK (The Scientific and Technological Research Council of Turkey) under grant number 110E175 and TUBA (Turkish Academy of Sciences).
dc.description.volume: 73
dc.identifier.doi: 10.1016/j.ijhcs.2014.09.005
dc.identifier.eissn: 1095-9300
dc.identifier.issn: 1071-5819
dc.identifier.scopus: 2-s2.0-84908431699
dc.identifier.uri: http://dx.doi.org/10.1016/j.ijhcs.2014.09.005
dc.identifier.uri: https://hdl.handle.net/20.500.14288/11099
dc.identifier.wos: 345479200009
dc.keywords: Sketch-based interaction
dc.keywords: Multimodal interaction
dc.keywords: Predictive interfaces
dc.keywords: Gaze-based interfaces
dc.keywords: Feature selection
dc.keywords: Feature representation
dc.keywords: Multimodal databases
dc.keywords: Hand-eye coordination
dc.keywords: Movements
dc.keywords: Behavior
dc.language: English
dc.publisher: Academic Press Ltd - Elsevier Science Ltd
dc.source: International Journal of Human-Computer Studies
dc.subject: Computer science
dc.subject: Cybernetics
dc.subject: Ergonomics
dc.subject: Psychology
dc.title: Gaze-based prediction of pen-based virtual interaction tasks
dc.type: Journal Article
dspace.entity.type: Publication
local.contributor.authorid: N/A
local.contributor.authorid: 0000-0002-1524-1646
local.contributor.kuauthor: Çiğ, Çağla
local.contributor.kuauthor: Sezgin, Tevfik Metin
relation.isOrgUnitOfPublication: 89352e43-bf09-4ef4-82f6-6f9d0174ebae
relation.isOrgUnitOfPublication.latestForDiscovery: 89352e43-bf09-4ef4-82f6-6f9d0174ebae
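
As a concrete illustration of the idea described in the abstract, the sketch below shows one plausible shape such a gaze-based predictor could take: hand-crafted features summarizing a window of synchronized gaze and pen samples are fed to a classifier that outputs the intended virtual manipulation command. This is a minimal, hypothetical sketch, not the authors' pipeline; the feature set, window length, command labels, and the scikit-learn SVM classifier are all illustrative assumptions.

    # Hypothetical sketch (not the paper's implementation): predicting an
    # intended command from gaze behavior recorded during pen-only interaction.
    import numpy as np
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import SVC

    def gaze_features(gaze_xy, pen_xy):
        # gaze_xy, pen_xy: (T, 2) arrays of synchronized screen coordinates.
        gaze_speed = np.linalg.norm(np.diff(gaze_xy, axis=0), axis=1)  # per-sample gaze velocity
        separation = np.linalg.norm(gaze_xy - pen_xy, axis=1)          # gaze-to-pen-tip distance
        return np.array([
            gaze_speed.mean(), gaze_speed.std(),   # eye-movement dynamics
            separation.mean(), separation.max(),   # how far gaze strays from the pen
        ])

    # Placeholder data standing in for labeled interaction windows; in practice
    # X would come from recorded gaze/pen traces and y from annotated commands
    # (e.g. 0 = drag, 1 = maximize, 2 = scroll -- assumed labels).
    rng = np.random.default_rng(0)
    X = np.stack([gaze_features(rng.random((50, 2)), rng.random((50, 2)))
                  for _ in range(120)])
    y = rng.integers(0, 3, size=120)

    clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
    clf.fit(X, y)
    print(clf.predict(X[:5]))  # predicted command labels for 5 windows

The point mirrored from the abstract is the input modality: the classifier sees only natural gaze behavior captured during pen-only interaction, so no explicit mode-switching widget is involved.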
