Title: The use of lip motion for biometric speaker identification
Alternative title (Turkish): Biyometrik konuşmacı tanıma için dudak devinimi kullanımı
Type: Conference proceeding
Departments: Department of Electrical and Electronics Engineering; Department of Computer Engineering
Publication year: 2004
Date deposited: 2024-11-09
ISBN: 0-7803-8318-4
DOI: 10.1109/SIU.2004.1338280 (http://dx.doi.org/10.1109/SIU.2004.1338280)
Handle: https://hdl.handle.net/20.500.14288/15357
Scopus: https://www.scopus.com/inward/record.uri?eid=2-s2.0-18844379701&partnerID=40&md5=b24367477f8c2264d8f5367e9bd58a9f
Subjects: Electrical electronics engineering; Computer engineering

Abstract: This paper addresses the selection of the best lip motion features for biometric open-set speaker identification. The best features are those that yield the highest discrimination of individual speakers in a population. We first detect the face region in each video frame. The lip region for each frame is then segmented following registration of successive face regions by global motion compensation. The initial lip feature vector is composed of the 2D-DCT coefficients of the optical flow vectors within the lip region at each frame. We propose to select the most discriminative features from the full set of transform coefficients by using a probabilistic measure that maximizes the ratio of intra-class to inter-class probabilities. The resulting discriminative feature vector of reduced dimension is expected to maximize identification performance, and experimental results support that it does improve identification performance.
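
The abstract outlines a two-stage feature pipeline: dense optical flow inside the registered lip region is summarized by its low-frequency 2D-DCT coefficients, and the most discriminative coefficients are then selected. The sketch below is only an illustration of that idea under stated assumptions, not the authors' implementation: it uses OpenCV's Farneback optical flow and SciPy's 2D DCT, assumes pre-cropped, motion-compensated grayscale lip regions, and substitutes a Fisher-style between/within-class variance ratio for the paper's probabilistic intra/inter-class measure, whose exact form is not given in the record. The crop size and the keep=8 coefficient block are arbitrary choices for the example.

# Hedged sketch of the lip-motion feature pipeline described in the abstract.
# Assumptions (not from the paper): Farneback optical flow, SciPy 2D DCT,
# pre-cropped motion-compensated lip regions, and a Fisher-style variance
# ratio standing in for the paper's probabilistic discrimination measure.
import numpy as np
import cv2
from scipy.fft import dctn


def lip_motion_features(prev_lip_gray, curr_lip_gray, keep=8):
    """2D-DCT coefficients of the dense optical flow inside the lip region.

    prev_lip_gray, curr_lip_gray: grayscale lip-region crops from consecutive,
    motion-compensated frames (uint8 arrays of identical size).
    Returns the keep x keep lowest-frequency DCT coefficients of each flow
    component, concatenated into a vector of length 2 * keep * keep.
    """
    # Dense optical flow between the two lip crops (positional args:
    # flow=None, pyr_scale, levels, winsize, iterations, poly_n, poly_sigma, flags).
    flow = cv2.calcOpticalFlowFarneback(
        prev_lip_gray, curr_lip_gray, None, 0.5, 3, 15, 3, 5, 1.2, 0)
    feats = []
    for c in range(2):                      # horizontal and vertical flow components
        coeffs = dctn(flow[:, :, c], norm="ortho")
        feats.append(coeffs[:keep, :keep].ravel())
    return np.concatenate(feats)


def fisher_ratio_ranking(X, y):
    """Rank features by between-class / within-class variance.

    A stand-in for the paper's probabilistic intra/inter-class measure.
    X: (n_samples, n_features) feature matrix; y: (n_samples,) speaker labels.
    Returns feature indices sorted from most to least discriminative.
    """
    overall_mean = X.mean(axis=0)
    between = np.zeros(X.shape[1])
    within = np.zeros(X.shape[1])
    for label in np.unique(y):
        Xc = X[y == label]
        between += len(Xc) * (Xc.mean(axis=0) - overall_mean) ** 2
        within += ((Xc - Xc.mean(axis=0)) ** 2).sum(axis=0)
    ratio = between / (within + 1e-12)      # guard against zero within-class variance
    return np.argsort(ratio)[::-1]

In such a scheme, the indices returned by the ranking step would be fixed on training data and then used to keep only the top-ranked coefficients of each frame's feature vector at identification time, giving the reduced-dimension discriminative feature vector the abstract refers to.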