Title: Combined gesture-speech analysis and speech driven gesture synthesis
Type: Conference proceeding
Year: 2006
Departments: Department of Computer Engineering; Department of Electrical and Electronics Engineering
ISBN: 978-1-4244-0366-0
DOI: 10.1109/ICME.2006.262663 (http://dx.doi.org/10.1109/ICME.2006.262663)
Scopus ID: 2-s2.0-34247646607
Handle: https://hdl.handle.net/20.500.14288/15126
Date: 2024-11-09
Subjects: Computer science; Artificial intelligence; Imaging science; Photographic technology; Telecommunications

Abstract: Multimodal speech and speaker modeling and recognition are widely accepted as vital aspects of state-of-the-art human-machine interaction systems. While the correlations between speech and lip motion, and between speech and facial expressions, have been widely studied, relatively little work has been done to investigate the correlation between speech and gesture. Detection and modeling of the head, hand, and arm gestures of a speaker have been studied extensively, and these gestures have been shown to carry linguistic information; a typical example is the head gesture accompanying "yes/no". In this study, the correlation between gestures and speech is investigated. In speech signal analysis, keyword spotting and prosodic accent event detection have been performed. In gesture analysis, hand positions and the parameters of global head motion are used as features. The detection of gestures is based on discrete pre-designated symbol sets, which are manually labeled during the training phase. The gesture-speech correlation is modeled by examining co-occurring speech and gesture patterns. This correlation can be used to fuse the gesture and speech modalities in edutainment applications (e.g., video games, 3-D animations), where the natural gestures of talking avatars are animated from speech. A speech-driven gesture animation example has been implemented for demonstration.
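The abstract describes modeling the gesture-speech correlation from co-occurring discrete events: spotted keywords or prosodic accents on the speech side, and manually labeled gesture symbols on the other. A minimal sketch of that co-occurrence idea follows, assuming hypothetical timestamped event lists, symbol names, and a 0.5-second alignment window; none of these specifics come from the paper, which does not disclose its actual model in this record.

```python
from collections import defaultdict

def cooccurrence_counts(speech_events, gesture_events, window=0.5):
    """Count how often each gesture symbol occurs within `window`
    seconds of each speech symbol. Both inputs are hypothetical
    (timestamp_seconds, symbol) lists."""
    counts = defaultdict(lambda: defaultdict(int))
    for t_s, s_sym in speech_events:
        for t_g, g_sym in gesture_events:
            if abs(t_g - t_s) <= window:
                counts[s_sym][g_sym] += 1
    return counts

def most_likely_gesture(counts, speech_symbol):
    """Pick the gesture symbol that most often co-occurs with a given
    speech symbol: the simplest stand-in for speech-driven synthesis."""
    gesture_hist = counts.get(speech_symbol)
    if not gesture_hist:
        return None
    return max(gesture_hist, key=gesture_hist.get)

# Example: the head gesture accompanying "yes"/"no" from the abstract.
speech = [(1.2, "yes"), (3.4, "no"), (5.0, "yes")]
gestures = [(1.3, "head_nod"), (3.5, "head_shake"), (5.1, "head_nod")]
counts = cooccurrence_counts(speech, gestures)
print(most_likely_gesture(counts, "yes"))  # -> "head_nod"
```

Selecting the most frequent co-occurring gesture is only an illustration of the correlation-based fusion the abstract mentions; the paper's actual pattern modeling and avatar animation pipeline are not specified in this record.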