Publication:
Spontaneous smile intensity estimation by fusing saliency maps and convolutional neural networks

dc.contributor.coauthor: Wei, Qinglan
dc.contributor.coauthor: Morency, Louis-Philippe
dc.contributor.coauthor: Sun, Bo
dc.contributor.department: N/A
dc.contributor.kuauthor: Bozkurt, Elif
dc.contributor.kuprofile: PhD Student
dc.contributor.schoolcollegeinstitute: Graduate School of Sciences and Engineering
dc.contributor.yokid: N/A
dc.date.accessioned: 2024-11-09T23:19:46Z
dc.date.issued: 2019
dc.description.abstract: Smile intensity estimation plays an important role in applications such as affective disorder prediction, life satisfaction prediction, camera technique improvement, etc. In recent studies, many researchers applied only traditional features, such as local binary patterns (LBP) and local phase quantization (LPQ), to represent smile intensity. To improve the performance of spontaneous smile intensity estimation, we introduce a feature set that combines a saliency map (SM)-based handcrafted feature with non-low-level convolutional neural network (CNN) features. We took advantage of the opponent-color characteristic of SMs and of multiple convolutional-level features, which were assumed to be mutually complementary. Experiments were conducted on the Binghamton-Pittsburgh 4D (BP4D) database and the Denver Intensity of Spontaneous Facial Action (DISFA) database. We set the local binary patterns on three orthogonal planes (LBPTOP) method as a baseline, and the experimental results show that the CNN features can better estimate smile intensity. Finally, through the proposed SM-LBPTOP feature fusion with the median- and high-level CNN features, we obtained the best result (52.08% on BP4D, 70.55% on DISFA), demonstrating that our hypothesis is reasonable: the SM-based handcrafted feature is a good supplement to CNNs in spontaneous smile intensity estimation. (C) 2019 SPIE and IS&T
dc.description.indexedby: WoS
dc.description.indexedby: Scopus
dc.description.issue: 2
dc.description.openaccess: NO
dc.description.publisherscope: International
dc.description.sponsorship: Beijing Natural Science Foundation [4182031]. This work was supported by the Beijing Natural Science Foundation (Grant No. 4182031) on students' affect recognition research based on a deep spatial filter network and multitask learning.
dc.description.volume: 28
dc.identifier.doi: 10.1117/1.JEI.28.2.023031
dc.identifier.eissn: 1560-229X
dc.identifier.issn: 1017-9909
dc.identifier.quartile: Q4
dc.identifier.scopus: 2-s2.0-85065475503
dc.identifier.uri: http://dx.doi.org/10.1117/1.JEI.28.2.023031
dc.identifier.uri: https://hdl.handle.net/20.500.14288/10600
dc.identifier.wos: 473731200045
dc.keywords: Smile intensity
dc.keywords: Saliency maps
dc.keywords: Convolutional neural network
dc.language: English
dc.publisher: Spie-Soc Photo-Optical Instrumentation Engineers
dc.source: Journal of Electronic Imaging
dc.subject: Engineering
dc.subject: Electrical electronic engineering
dc.subject: Optics
dc.subject: Imaging science
dc.subject: Photographic technology
dc.title: Spontaneous smile intensity estimation by fusing saliency maps and convolutional neural networks
dc.type: Journal Article
dspace.entity.type: Publication
local.contributor.authorid: N/A
local.contributor.kuauthor: Bozkurt, Elif
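The fusion step described in the abstract — combining the SM-based handcrafted descriptor (SM-LBPTOP) with median- and high-level CNN activations — can be sketched as a simple feature concatenation. This is a minimal illustration only; the function name, array shapes, and dimensionalities below are assumptions, not the authors' actual pipeline.

```python
import numpy as np

def fuse_features(sm_lbptop, cnn_mid, cnn_high):
    """Concatenate a handcrafted descriptor with mid- and high-level CNN
    feature vectors into a single fused descriptor (hypothetical sketch)."""
    return np.concatenate([sm_lbptop, cnn_mid, cnn_high])

# Toy example with made-up feature dimensionalities:
sm = np.random.rand(177)    # SM-LBPTOP histogram (length is illustrative)
mid = np.random.rand(256)   # median-level CNN activations (assumed size)
high = np.random.rand(128)  # high-level CNN activations (assumed size)

fused = fuse_features(sm, mid, high)
print(fused.shape)          # (561,) — one vector per smile sample
```

In practice the fused vector would then be fed to an intensity estimator; the paper reports results on BP4D and DISFA against an LBPTOP baseline.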
