Publication: Gaze-based virtual task predictor
| Field | Value |
| --- | --- |
| dc.contributor.department | Department of Computer Engineering |
| dc.contributor.department | Graduate School of Sciences and Engineering |
| dc.contributor.kuauthor | PhD Student, Çığ, Çağla |
| dc.contributor.kuauthor | Faculty Member, Sezgin, Tevfik Metin |
| dc.contributor.schoolcollegeinstitute | College of Engineering |
| dc.contributor.schoolcollegeinstitute | GRADUATE SCHOOL OF SCIENCES AND ENGINEERING |
| dc.date.accessioned | 2024-11-09T22:56:39Z |
| dc.date.issued | 2014 |
| dc.description.abstract | Pen-based systems promise an intuitive and natural interaction paradigm for tablet PCs and stylus-enabled phones. However, typical pen-based interfaces require users to switch modes frequently to complete ordinary tasks. Mode switching is usually achieved through hard or soft modifier keys, buttons, and soft menus. Frequent invocation of these auxiliary mode-switching elements goes against the goal of intuitive, fluid, and natural interaction. In this paper, we present a gaze-based virtual task prediction system that has the potential to alleviate dependence on explicit mode switching in pen-based systems. In particular, we show that a range of virtual manipulation commands, which would otherwise require auxiliary mode-switching elements, can be issued with an 80% success rate with the aid of users' natural eye-gaze behavior during pen-only interaction. |
| dc.description.indexedby | Scopus |
| dc.description.openaccess | YES |
| dc.description.publisherscope | International |
| dc.description.sponsoredbyTubitakEu | N/A |
| dc.description.sponsorship | ACM SIGCHI |
| dc.identifier.doi | 10.1145/2666642.2666647 |
| dc.identifier.isbn | 978-1-4503-0125-1 |
| dc.identifier.scopus | 2-s2.0-84919372278 |
| dc.identifier.uri | https://doi.org/10.1145/2666642.2666647 |
| dc.identifier.uri | https://hdl.handle.net/20.500.14288/7418 |
| dc.keywords | Feature representation |
| dc.keywords | Gaze-based interfaces |
| dc.keywords | Multimodal databases |
| dc.keywords | Multimodal interaction |
| dc.keywords | Predictive interfaces |
| dc.keywords | Sketch-based interaction |
| dc.language.iso | eng |
| dc.publisher | Association for Computing Machinery |
| dc.relation.ispartof | GazeIn 2014 - Proceedings of the 7th ACM Workshop on Eye Gaze in Intelligent Human Machine Interaction: Eye-Gaze and Multimodality, Co-located with ICMI 2014 |
| dc.subject | Engineering |
| dc.subject | Electrical electronic engineering |
| dc.subject | Telecommunications |
| dc.title | Gaze-based virtual task predictor |
| dc.type | Conference Proceeding |
| dspace.entity.type | Publication |
| local.contributor.kuauthor | Çığ, Çağla |
| local.contributor.kuauthor | Sezgin, Tevfik Metin |
| local.publication.orgunit1 | GRADUATE SCHOOL OF SCIENCES AND ENGINEERING |
| local.publication.orgunit1 | College of Engineering |
| local.publication.orgunit2 | Department of Computer Engineering |
| local.publication.orgunit2 | Graduate School of Sciences and Engineering |
| relation.isOrgUnitOfPublication | 89352e43-bf09-4ef4-82f6-6f9d0174ebae |
| relation.isOrgUnitOfPublication | 3fc31c89-e803-4eb1-af6b-6258bc42c3d8 |
| relation.isOrgUnitOfPublication.latestForDiscovery | 89352e43-bf09-4ef4-82f6-6f9d0174ebae |
| relation.isParentOrgUnitOfPublication | 8e756b23-2d4a-4ce8-b1b3-62c794a8c164 |
| relation.isParentOrgUnitOfPublication | 434c9663-2b11-4e66-9399-c863e2ebae43 |
| relation.isParentOrgUnitOfPublication.latestForDiscovery | 8e756b23-2d4a-4ce8-b1b3-62c794a8c164 |
