Title: Real-time audiovisual laughter detection
Alternative title: Çok kipli ve gerçek zamanlı gülme sezimi (Multimodal and real-time laughter detection)
Department: Department of Computer Engineering
Type: Conference proceeding
Year: 2017
Date deposited: 2024-11-09
ISBN: 978-1-5090-6494-6
ISSN: 2165-0608
Handle: https://hdl.handle.net/20.500.14288/13798
Keywords: Acoustics; Computer science; Artificial intelligence; Engineering; Electrical and electronic engineering; Telecommunications

Abstract: Laughter detection is essential for effective human-computer interaction. This work addresses the problem of laughter detection in a real-time environment. We use annotated audio and visual data collected with a Kinect sensor to identify discriminative features for audio and video separately, and show how these features can be used with classifiers such as support vector machines (SVMs). The two modalities are then fused into a single output to form a decision. We test our setup by emulating real-time data from the Kinect sensor and compare the results with an offline version of the setup. Our results indicate that the laughter detection system delivers promising performance for real-time human-computer interaction.
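
The abstract describes training one SVM per modality and fusing the two outputs into a single decision. The paper's Kinect-derived features and fusion rule are not given here, so the following is a minimal sketch using synthetic features and a simple score-level (late) fusion by probability averaging; the feature dimensions, split sizes, and the 0.5 threshold are illustrative assumptions, not the authors' settings.

```python
# Sketch of score-level (late) fusion of audio and video SVM classifiers.
# Features below are synthetic stand-ins for the paper's Kinect-derived
# audio/video features; the fusion-by-averaging rule is an assumption.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n = 200
labels = rng.integers(0, 2, n)  # 1 = laughter, 0 = non-laughter
# Synthetic per-modality features, loosely correlated with the label
audio = labels[:, None] + rng.normal(0, 0.8, (n, 10))
video = labels[:, None] + rng.normal(0, 0.8, (n, 6))

# One SVM per modality; probability outputs are needed for score fusion
audio_clf = SVC(probability=True).fit(audio[:150], labels[:150])
video_clf = SVC(probability=True).fit(video[:150], labels[:150])

# Late fusion: average the per-modality laughter probabilities,
# then threshold the fused score to form a single decision
p_audio = audio_clf.predict_proba(audio[150:])[:, 1]
p_video = video_clf.predict_proba(video[150:])[:, 1]
fused = (p_audio + p_video) / 2
decision = (fused > 0.5).astype(int)
accuracy = (decision == labels[150:]).mean()
print(f"fused accuracy: {accuracy:.2f}")
```

For real-time use, the same two classifiers would be applied to a sliding window of incoming audio and video frames, with fusion performed per window.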