Department of Computer Engineering
Date: 2024-11-09
Publication year: 2018
ISBN: 978-1-5386-1501-0
DOI: 10.1109/SIU.2018.8404522
Scopus EID: 2-s2.0-85050809300
DOI URL: http://dx.doi.org/10.1109/SIU.2018.8404522
Handle: https://hdl.handle.net/20.500.14288/11148

Title: Food intake detection using autoencoder-based deep neural networks
Alternative title (Turkish): Otokodlayıcı tabanlı derin sinir ağları kullanarak gıda tüketiminin tespit edilmesi
Type: Conference proceeding
Keywords: Civil engineering; Electrical electronics engineering; Telecommunication

Abstract: Wearable systems have the potential to reduce the bias and inaccuracy of current dietary monitoring methods. The analysis of food intake sounds provides important guidance for developing an automated diet monitoring system. Most attempts in recent years can be regarded as impractical because they require multiple sensors that specialize separately in swallowing or chewing detection. In this study, we present a unified system that uses a laryngeal microphone placed on the neck to detect swallowing and chewing activities, as well as everyday activities such as speech, coughing, or throat clearing. Our proposed system is trained on a dataset of 10 different food items collected from 8 subjects. Spectrograms extracted from 276 minutes of recordings in total are fed into a deep autoencoder architecture. In the three-class evaluation (chewing, swallowing, and rest), we achieve an F-score of 71.7% and an accuracy of 76.3%. These results are a promising contribution toward an automated food monitoring system operating under everyday conditions.

Scopus URL: https://www.scopus.com/inward/record.uri?eid=2-s2.0-85050809300&doi=10.1109%2fSIU.2018.8404522&partnerID=40&md5=93897485c25f3ed4a321be91554018e6
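The pipeline outlined in the abstract (audio frames → magnitude spectrograms → deep autoencoder) can be illustrated with a minimal sketch. Everything below is an assumption for illustration, not the authors' architecture: the STFT window and hop sizes, the single-hidden-layer tied-weight autoencoder, and all function names are hypothetical.

```python
import numpy as np


def spectrogram(signal, win=256, hop=128):
    """Magnitude spectrogram via a simple STFT with a Hann window.

    Window/hop sizes are illustrative, not taken from the paper."""
    window = np.hanning(win)
    frames = []
    for start in range(0, len(signal) - win + 1, hop):
        frame = signal[start:start + win] * window
        frames.append(np.abs(np.fft.rfft(frame)))
    return np.array(frames)  # shape: (n_frames, win // 2 + 1)


class TiedAutoencoder:
    """One-hidden-layer autoencoder with tied weights, trained by plain
    gradient descent on mean-squared reconstruction error. A stand-in for
    the paper's deep autoencoder, kept shallow for brevity."""

    def __init__(self, n_in, n_hidden, lr=0.05, seed=0):
        rng = np.random.default_rng(seed)
        self.W = rng.normal(0.0, 0.1, (n_in, n_hidden))
        self.lr = lr

    def step(self, X):
        H = np.tanh(X @ self.W)      # encoder: bottleneck activations
        X_hat = H @ self.W.T         # decoder reuses the tied weights
        err = X_hat - X
        loss = np.mean(err ** 2)
        G = 2.0 * err / err.size                       # dLoss/dX_hat
        grad = G.T @ H                                 # decoder part
        grad += X.T @ ((G @ self.W) * (1.0 - H ** 2))  # encoder part
        self.W -= self.lr * grad
        return loss


# Illustrative run on synthetic audio (white noise stands in for the
# laryngeal-microphone recordings, which are not available here).
rng = np.random.default_rng(1)
audio = rng.standard_normal(4000)
spec = spectrogram(audio)
spec = (spec - spec.mean()) / (spec.std() + 1e-8)  # normalize features
ae = TiedAutoencoder(n_in=spec.shape[1], n_hidden=16)
losses = [ae.step(spec) for _ in range(30)]
print(f"reconstruction MSE: {losses[0]:.4f} -> {losses[-1]:.4f}")
```

In a full system the bottleneck activations `H` would feed a classifier over the three classes (chewing, swallowing, rest); the sketch stops at unsupervised reconstruction, which is the part the abstract actually specifies.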