Publication:
Real-time audiovisual laughter detection

dc.contributor.department: N/A
dc.contributor.department: N/A
dc.contributor.department: Department of Computer Engineering
dc.contributor.department: Department of Computer Engineering
dc.contributor.department: Department of Computer Engineering
dc.contributor.kuauthor: Türker, Bekir Berker
dc.contributor.kuauthor: Buçinca, Zana
dc.contributor.kuauthor: Sezgin, Tevfik Metin
dc.contributor.kuauthor: Yemez, Yücel
dc.contributor.kuauthor: Erzin, Engin
dc.contributor.kuprofile: PhD Student
dc.contributor.kuprofile: Master Student
dc.contributor.kuprofile: Faculty Member
dc.contributor.kuprofile: Faculty Member
dc.contributor.kuprofile: Faculty Member
dc.contributor.other: Department of Computer Engineering
dc.contributor.schoolcollegeinstitute: Graduate School of Sciences and Engineering
dc.contributor.schoolcollegeinstitute: Graduate School of Sciences and Engineering
dc.contributor.schoolcollegeinstitute: College of Engineering
dc.contributor.schoolcollegeinstitute: College of Engineering
dc.contributor.schoolcollegeinstitute: College of Engineering
dc.contributor.yokid: N/A
dc.contributor.yokid: N/A
dc.contributor.yokid: 18632
dc.contributor.yokid: 107907
dc.contributor.yokid: 34503
dc.date.accessioned: 2024-11-09T23:34:30Z
dc.date.issued: 2017
dc.description.abstract: Laughter detection is an essential aspect of effective human-computer interaction. This work addresses the problem of laughter detection in a real-time environment. We use annotated audio and visual data collected from a Kinect sensor to identify discriminative features for audio and video separately. We show how these features can be used with classifiers such as support vector machines (SVMs). The two modalities are then fused into a single output to form a decision. We test our setup by emulating real-time data with the Kinect sensor and compare the results with the offline version of the setup. Our results indicate that our laughter detection system gives promising performance for real-time human-computer interaction.
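The abstract describes per-modality classification (audio and video scored separately, SVM-style) followed by fusing the two outputs into one decision. A minimal sketch of that decision-level fusion idea, in pure Python: the feature values, weights, fusion weights, and threshold below are all illustrative placeholders, not taken from the paper.

```python
# Sketch of decision-level (late) fusion of an audio classifier and a
# video classifier, in the spirit of the audiovisual setup the abstract
# describes. All numbers here are hypothetical, for illustration only.

def linear_score(features, weights, bias):
    """SVM-style signed score: distance to a linear decision boundary."""
    return sum(f * w for f, w in zip(features, weights)) + bias

def fuse_decisions(audio_score, video_score,
                   w_audio=0.6, w_video=0.4, threshold=0.0):
    """Weighted score-level fusion; fused score above threshold -> laughter."""
    fused = w_audio * audio_score + w_video * video_score
    return fused > threshold, fused

# Hypothetical per-frame feature vectors and pre-trained weights.
audio_feats = [0.8, 0.3]   # e.g. summary statistics of acoustic features
video_feats = [0.5, 0.9]   # e.g. facial-motion features from Kinect
audio_score = linear_score(audio_feats, [1.2, -0.4], -0.2)
video_score = linear_score(video_feats, [0.7, 1.1], -1.0)

is_laughter, fused = fuse_decisions(audio_score, video_score)
print(is_laughter, round(fused, 3))
```

In a real-time loop, the two scores would be recomputed per analysis window and the fused decision emitted continuously; the fusion weights and threshold would be tuned on held-out annotated data.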
dc.description.indexedby: WoS
dc.description.indexedby: Scopus
dc.description.openaccess: YES
dc.description.publisherscope: International
dc.identifier.doi: 10.1109/SIU.2017.7960598
dc.identifier.isbn: 978-1-5090-6494-6
dc.identifier.link: https://www.scopus.com/inward/record.uri?eid=2-s2.0-85026307863&doi=10.1109%2fSIU.2017.7960598&partnerID=40&md5=b014fd61e6156a8d292056b989613be0
dc.identifier.scopus: 2-s2.0-85026307863
dc.identifier.uri: http://dx.doi.org/10.1109/SIU.2017.7960598
dc.identifier.uri: https://hdl.handle.net/20.500.14288/12363
dc.identifier.wos: 413813100461
dc.keywords: Affective computing and interaction
dc.keywords: Applied machine learning
dc.keywords: Real-time laughter detection
dc.language: Turkish
dc.publisher: Institute of Electrical and Electronics Engineers (IEEE)
dc.source: 2017 25th Signal Processing and Communications Applications Conference, SIU 2017
dc.subject: Acoustics
dc.subject: Computer Science
dc.subject: Artificial intelligence
dc.subject: Computer science
dc.subject: Software engineering
dc.subject: Electrical and electronics engineering
dc.title: Real-time audiovisual laughter detection
dc.title.alternative: Çok kipli ve gerçek zamanlı gülme sezimi
dc.type: Conference proceeding
dspace.entity.type: Publication
local.contributor.authorid: N/A
local.contributor.authorid: N/A
local.contributor.authorid: 0000-0002-1524-1646
local.contributor.authorid: 0000-0002-7515-3138
local.contributor.authorid: 0000-0002-2715-2368
local.contributor.kuauthor: Türker, Bekir Berker
local.contributor.kuauthor: Buçinca, Zana
local.contributor.kuauthor: Sezgin, Tevfik Metin
local.contributor.kuauthor: Yemez, Yücel
local.contributor.kuauthor: Erzin, Engin
relation.isOrgUnitOfPublication: 89352e43-bf09-4ef4-82f6-6f9d0174ebae
relation.isOrgUnitOfPublication.latestForDiscovery: 89352e43-bf09-4ef4-82f6-6f9d0174ebae
