Title: Multimodal speaker identification with audio-video processing
Type: Conference proceeding
Departments: Department of Computer Engineering; Department of Electrical and Electronics Engineering
Publication year: 2003
Date deposited: 2024-11-10
ISBN: 0-7803-7750-8
Scopus ID: 2-s2.0-0345565788
URI: https://hdl.handle.net/20.500.14288/16652
Keywords: Computer Science; Artificial intelligence; Imaging systems; Photography
Other identifiers: 1870105000029376

Abstract: In this paper we present a multimodal audio-visual speaker identification system. The objective is to improve recognition performance over conventional unimodal schemes. The proposed system decomposes the information in a video stream into three components: speech, face texture, and lip motion. Lip motion between successive frames is first computed in terms of optical flow vectors and then encoded as a feature vector in a magnitude-direction histogram domain. The feature vectors obtained along the whole stream are then interpolated to match the rate of the speech signal and fused with the mel-frequency cepstral coefficients (MFCC) of the corresponding speech signal. The resulting joint feature vectors are used to train and test a Hidden Markov Model (HMM) based identification system. Face texture images are treated separately in the eigenface domain and integrated into the system through decision fusion. Experimental results are included to demonstrate the system performance.
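The sketch below is an illustrative reconstruction of the fusion pipeline described in the abstract, not the authors' implementation: dense optical flow over the lip region is binned into a magnitude-direction histogram, the per-frame histograms are interpolated to the MFCC frame rate and concatenated, and one HMM per speaker is trained on the joint features. The choice of OpenCV/librosa/hmmlearn, the Farneback parameters, the bin counts, and the number of HMM states are all placeholder assumptions; the eigenface decision-fusion branch is omitted.

```python
# Minimal sketch (assumed libraries and parameters, not the paper's code):
# lip-motion magnitude-direction histograms fused with MFCCs, HMM per speaker.
import numpy as np
import cv2
import librosa
from hmmlearn.hmm import GaussianHMM


def lip_motion_histogram(prev_gray, next_gray, mag_bins=8, dir_bins=8):
    """Dense optical flow between two lip-region frames, encoded as a
    joint magnitude-direction histogram (flattened, normalised)."""
    flow = cv2.calcOpticalFlowFarneback(prev_gray, next_gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    mag, ang = cv2.cartToPolar(flow[..., 0], flow[..., 1])
    hist, _, _ = np.histogram2d(mag.ravel(), ang.ravel(),
                                bins=[mag_bins, dir_bins],
                                range=[[0, mag.max() + 1e-6], [0, 2 * np.pi]])
    return hist.ravel() / (hist.sum() + 1e-6)


def video_features(lip_frames):
    """Motion histogram for each successive frame pair: (n_frames-1, D)."""
    return np.array([lip_motion_histogram(a, b)
                     for a, b in zip(lip_frames[:-1], lip_frames[1:])])


def fuse_features(lip_frames, audio, sr, n_mfcc=13):
    """Interpolate video features to the MFCC frame rate and concatenate."""
    mfcc = librosa.feature.mfcc(y=audio, sr=sr, n_mfcc=n_mfcc).T  # (T_audio, n_mfcc)
    vid = video_features(lip_frames)                              # (T_video, D)
    t_audio = np.linspace(0.0, 1.0, len(mfcc))
    t_video = np.linspace(0.0, 1.0, len(vid))
    vid_up = np.stack([np.interp(t_audio, t_video, vid[:, d])
                       for d in range(vid.shape[1])], axis=1)
    return np.hstack([mfcc, vid_up])                              # joint feature vectors


def train_speaker_models(train_data, n_states=5):
    """train_data: {speaker_id: joint feature matrix}; one HMM per speaker."""
    return {spk: GaussianHMM(n_components=n_states,
                             covariance_type='diag').fit(X)
            for spk, X in train_data.items()}


def identify(models, X):
    """Return the speaker whose HMM assigns the highest log-likelihood."""
    return max(models, key=lambda spk: models[spk].score(X))
```

In this sketch the interpolation step stands in for the rate matching mentioned in the abstract; in practice the upsampling scheme, the lip-region extraction, and the decision-fusion of the eigenface classifier would follow the paper's own design.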