Title: Continuous emotion tracking using total variability space
Type: Conference proceeding
Departments: Department of Electrical and Electronics Engineering; Department of Computer Engineering
Publication year: 2015
Record date: 2024-11-09
ISBN: 978-1-5108-1790-6
Scopus ID: 2-s2.0-84959168683
Handle: https://hdl.handle.net/20.500.14288/810
Keywords: Acoustics; Computer science
Format: pdf

Abstract: Automatic continuous emotion tracking (CET) has received increased attention, with expected applications in medical, robotic, and human-machine interaction areas. The speech signal carries useful cues for estimating the affective state of the speaker. In this paper, we present Total Variability Space (TVS) for CET from speech data. TVS is a widely used framework in speaker and language recognition applications; in this study, we apply TVS as an unsupervised emotional feature extraction framework. Assuming low temporal variation in the affective space, we discretize the continuous affective state and extract i-vectors. Experimental evaluations are performed on the CreativeIT dataset, and fusion with a pool of statistical functions over mel-frequency cepstral coefficients (MFCCs) shows a 2% improvement in emotion tracking from speech.
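As a rough illustration of the MFCC baseline mentioned in the abstract, the sketch below (not from the paper; the file path, sampling rate, number of coefficients, and choice of functionals are assumptions) computes frame-level MFCCs and pools them with simple statistical functionals to obtain a fixed-length utterance descriptor; the TVS/i-vector front end itself would typically be built with a dedicated speaker-recognition toolkit.

import numpy as np
import librosa

def mfcc_functionals(wav_path, sr=16000, n_mfcc=13):
    """Summarize frame-level MFCCs of one utterance with a small pool of
    statistical functionals (mean, std, min, max) per coefficient."""
    y, sr = librosa.load(wav_path, sr=sr)
    # MFCC matrix has shape (n_mfcc, n_frames)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)
    feats = np.concatenate([
        mfcc.mean(axis=1),  # per-coefficient mean
        mfcc.std(axis=1),   # per-coefficient standard deviation
        mfcc.min(axis=1),   # per-coefficient minimum
        mfcc.max(axis=1),   # per-coefficient maximum
    ])
    return feats  # fixed-length vector, usable alongside i-vectors for fusion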