Publication:
Gaze-based virtual task predictor

dc.contributor.department: Department of Computer Engineering
dc.contributor.department: Graduate School of Sciences and Engineering
dc.contributor.kuauthor: Çığ, Çağla
dc.contributor.kuauthor: Sezgin, Tevfik Metin
dc.contributor.schoolcollegeinstitute: College of Engineering
dc.contributor.schoolcollegeinstitute: GRADUATE SCHOOL OF SCIENCES AND ENGINEERING
dc.date.accessioned: 2024-11-09T22:56:39Z
dc.date.issued: 2014
dc.description.abstract: Pen-based systems promise an intuitive and natural interaction paradigm for tablet PCs and stylus-enabled phones. However, typical pen-based interfaces require users to switch modes frequently in order to complete ordinary tasks. Mode switching is usually achieved through hard or soft modifier keys, buttons, and soft menus. Frequent invocation of these auxiliary mode-switching elements goes against the goal of intuitive, fluid, and natural interaction. In this paper, we present a gaze-based virtual task prediction system that has the potential to alleviate dependence on explicit mode switching in pen-based systems. In particular, we show that a range of virtual manipulation commands that would otherwise require auxiliary mode-switching elements can be issued with an 80% success rate with the aid of users' natural eye gaze behavior during pen-only interaction.
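The abstract describes predicting a virtual manipulation command from the user's natural gaze behavior during pen-only interaction. As a rough, hypothetical illustration of that idea (not the authors' implementation: the synthetic data, the per-stroke gaze features, and the SVM classifier below are all assumptions), mapping a gaze feature vector to a command label could look like this:

```python
# Hypothetical sketch: classify a per-stroke gaze feature vector into a
# virtual manipulation command. Feature names, data, and classifier choice
# are illustrative assumptions, not the method from the paper.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Synthetic stand-in data: each row is a gaze feature vector computed over
# one pen stroke (e.g., mean fixation duration, mean saccade amplitude,
# mean gaze-to-pen-tip distance); labels are commands such as
# 0 = "delete", 1 = "move", 2 = "resize".
X = rng.normal(size=(300, 3))
y = rng.integers(0, 3, size=300)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Train an RBF-kernel SVM and report held-out accuracy.
clf = SVC(kernel="rbf").fit(X_train, y_train)
print("held-out accuracy:", clf.score(X_test, y_test))
```

On real gaze recordings (rather than the random placeholders above), the reported 80% success rate would correspond to the held-out accuracy of such a predictor.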
dc.description.indexedby: Scopus
dc.description.openaccess: YES
dc.description.publisherscope: International
dc.description.sponsoredbyTubitakEu: N/A
dc.description.sponsorship: ACM SIGCHI
dc.identifier.doi: 10.1145/2666642.2666647
dc.identifier.isbn: 978-1-4503-0125-1
dc.identifier.scopus: 2-s2.0-84919372278
dc.identifier.uri: https://doi.org/10.1145/2666642.2666647
dc.identifier.uri: https://hdl.handle.net/20.500.14288/7418
dc.keywords: Feature representation
dc.keywords: Gaze-based interfaces
dc.keywords: Multimodal databases
dc.keywords: Multimodal interaction
dc.keywords: Predictive interfaces
dc.keywords: Sketch-based interaction
dc.language.iso: eng
dc.publisher: Association for Computing Machinery
dc.relation.ispartof: GazeIn 2014 - Proceedings of the 7th ACM Workshop on Eye Gaze in Intelligent Human Machine Interaction: Eye-Gaze and Multimodality, Co-located with ICMI 2014
dc.subject: Engineering
dc.subject: Electrical electronic engineering
dc.subject: Telecommunications
dc.title: Gaze-based virtual task predictor
dc.type: Conference Proceeding
dspace.entity.type: Publication
local.contributor.kuauthor: Çığ, Çağla
local.contributor.kuauthor: Sezgin, Tevfik Metin
local.publication.orgunit1: GRADUATE SCHOOL OF SCIENCES AND ENGINEERING
local.publication.orgunit1: College of Engineering
local.publication.orgunit2: Department of Computer Engineering
local.publication.orgunit2: Graduate School of Sciences and Engineering
relation.isOrgUnitOfPublication: 89352e43-bf09-4ef4-82f6-6f9d0174ebae
relation.isOrgUnitOfPublication: 3fc31c89-e803-4eb1-af6b-6258bc42c3d8
relation.isOrgUnitOfPublication.latestForDiscovery: 89352e43-bf09-4ef4-82f6-6f9d0174ebae
relation.isParentOrgUnitOfPublication: 8e756b23-2d4a-4ce8-b1b3-62c794a8c164
relation.isParentOrgUnitOfPublication: 434c9663-2b11-4e66-9399-c863e2ebae43
relation.isParentOrgUnitOfPublication.latestForDiscovery: 8e756b23-2d4a-4ce8-b1b3-62c794a8c164
