Department: Department of Computer Engineering
Date: 2024-11-09
Publication year: 2014
ISBN: 978-1-4503-0125-1
DOI: 10.1145/2666642.2666647
Scopus EID: 2-s2.0-84919372278
DOI URL: http://dx.doi.org/10.1145/2666642.2666647
Handle: https://hdl.handle.net/20.500.14288/7418

Abstract: Pen-based systems promise an intuitive and natural interaction paradigm for tablet PCs and stylus-enabled phones. However, typical pen-based interfaces require users to switch modes frequently in order to complete ordinary tasks. Mode switching is usually achieved through hard or soft modifier keys, buttons, and soft menus. Frequent invocation of these auxiliary mode-switching elements goes against the goal of intuitive, fluid, and natural interaction. In this paper, we present a gaze-based virtual task prediction system that has the potential to alleviate dependence on explicit mode switching in pen-based systems. In particular, we show that a range of virtual manipulation commands that would otherwise require auxiliary mode-switching elements can be issued with an 80% success rate with the aid of users' natural eye gaze behavior during pen-only interaction.

Subjects: Engineering; Electrical electronic engineering; Telecommunications
Title: Gaze-based virtual task predictor
Type: Conference proceeding
Scopus record: https://www.scopus.com/inward/record.uri?eid=2-s2.0-84919372278&doi=10.1145%2f2666642.2666647&partnerID=40&md5=ad6f4470c9cb14620e106478cff9448b
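Note: the abstract does not describe the predictor's internals, so the snippet below is only a minimal, hypothetical sketch of the general idea, namely using natural gaze behavior around a pen-down event to guess an intended virtual manipulation command instead of requiring an explicit mode switch. The feature choices (dwell time near the pen tip vs. near a candidate target), the thresholds, and the command labels ("move", "ink") are illustrative assumptions, not the authors' method or results.

```python
# Hypothetical illustration of gaze-based task prediction around a pen-down
# event. Features, thresholds, and command names are assumptions for the
# sketch, not taken from the paper.

from dataclasses import dataclass
from typing import List


@dataclass
class GazeSample:
    t: float  # timestamp in seconds
    x: float  # gaze x coordinate (screen px)
    y: float  # gaze y coordinate (screen px)


def dwell_near(samples: List[GazeSample], cx: float, cy: float,
               radius: float = 60.0) -> float:
    """Total time (s) that gaze stays within `radius` px of a point."""
    dwell, prev = 0.0, None
    for s in samples:
        inside = (s.x - cx) ** 2 + (s.y - cy) ** 2 <= radius ** 2
        if prev is not None and inside:
            dwell += s.t - prev.t
        prev = s
    return dwell


def predict_task(gaze_window: List[GazeSample],
                 pen_x: float, pen_y: float,
                 target_x: float, target_y: float) -> str:
    """Guess the intended command from gaze behavior in the window
    preceding a pen-down event (simple heuristic for illustration)."""
    dwell_on_target = dwell_near(gaze_window, target_x, target_y)
    dwell_on_pen = dwell_near(gaze_window, pen_x, pen_y)
    if dwell_on_target > 0.5 and dwell_on_pen < 0.1:
        # Eyes lead the hand to a distant object: treat as a manipulation
        # command (e.g., move/drag) rather than inking.
        return "move"
    if dwell_on_pen > 0.5:
        # Gaze anchored at the pen tip: likely free-form inking, no command.
        return "ink"
    return "unknown"


if __name__ == "__main__":
    # 40 gaze samples (20 ms apart) fixated on an object far from the pen.
    window = [GazeSample(t=i * 0.02, x=400.0, y=300.0) for i in range(40)]
    print(predict_task(window, pen_x=100.0, pen_y=500.0,
                       target_x=405.0, target_y=298.0))  # -> "move"
```

In a real system the heuristic would be replaced by a trained classifier over richer gaze features, but the sketch shows how implicit eye-gaze evidence could stand in for an explicit mode-switching widget.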