Publication: Food intake detection using autoencoder-based deep neural networks
dc.contributor.department | N/A | |
dc.contributor.department | Department of Computer Engineering | |
dc.contributor.kuauthor | Turan, Mehmet Ali Tuğtekin | |
dc.contributor.kuauthor | Erzin, Engin | |
dc.contributor.kuprofile | PhD Student | |
dc.contributor.kuprofile | Faculty Member | |
dc.contributor.other | Department of Computer Engineering | |
dc.contributor.schoolcollegeinstitute | Graduate School of Sciences and Engineering | |
dc.contributor.schoolcollegeinstitute | College of Engineering | |
dc.contributor.yokid | N/A | |
dc.contributor.yokid | 34503 | |
dc.date.accessioned | 2024-11-09T23:22:52Z | |
dc.date.issued | 2018 | |
dc.description.abstract | Wearable systems have the potential to reduce bias and inaccuracy in current dietary monitoring methods. The analysis of food intake sounds provides important guidance for developing an automated diet monitoring system. Most attempts in recent years can be regarded as impractical because they require multiple sensors that specialize separately in swallowing or chewing detection. In this study, we provide a unified system that uses a laryngeal microphone placed on the neck to detect swallowing and chewing activities, as well as daily activities such as speech, coughing, and throat clearing. Our proposed system is trained on a dataset containing 10 different food items collected from 8 subjects. The spectrograms, extracted from 276 minutes of recordings in total, are fed into a deep autoencoder architecture. In the three-class evaluation (chewing, swallowing, and rest), we achieve an F-score of 71.7% and an accuracy of 76.3%. These results are a promising contribution toward an automated food monitoring system to be developed under everyday conditions. | |
dc.description.indexedby | WoS | |
dc.description.indexedby | Scopus | |
dc.description.openaccess | YES | |
dc.description.publisherscope | International | |
dc.description.sponsorship | Aselsan | |
dc.description.sponsorship | Huawei | |
dc.description.sponsorship | IEEE Signal Processing Society | |
dc.description.sponsorship | IEEE Turkey Section | |
dc.description.sponsorship | Netas | |
dc.identifier.doi | 10.1109/SIU.2018.8404522 | |
dc.identifier.isbn | 978-1-5386-1501-0 | |
dc.identifier.link | https://www.scopus.com/inward/record.uri?eid=2-s2.0-85050809300&doi=10.1109%2fSIU.2018.8404522&partnerID=40&md5=93897485c25f3ed4a321be91554018e6 | |
dc.identifier.scopus | 2-s2.0-85050809300 | |
dc.identifier.uri | http://dx.doi.org/10.1109/SIU.2018.8404522 | |
dc.identifier.uri | https://hdl.handle.net/20.500.14288/11148 | |
dc.identifier.wos | 511448500375 | |
dc.keywords | Automated dietary monitoring | |
dc.keywords | Eating activity detection | |
dc.keywords | Sparse autoencoders | |
dc.keywords | Throat microphone | |
dc.keywords | Wearable sensors | |
dc.language | Turkish | |
dc.publisher | Institute of Electrical and Electronics Engineers (IEEE) | |
dc.source | 26th IEEE Signal Processing and Communications Applications Conference, SIU 2018 | |
dc.subject | Civil engineering | |
dc.subject | Electrical electronics engineering | |
dc.subject | Telecommunication | |
dc.title | Food intake detection using autoencoder-based deep neural networks | |
dc.title.alternative | Otokodlayıcı tabanlı derin sinir ağları kullanarak gıda tüketiminin tespit edilmesi | |
dc.type | Conference proceeding | |
dspace.entity.type | Publication | |
local.contributor.authorid | 0000-0002-3822-235X | |
local.contributor.authorid | 0000-0002-2715-2368 | |
local.contributor.kuauthor | Turan, Mehmet Ali Tuğtekin | |
local.contributor.kuauthor | Erzin, Engin | |
relation.isOrgUnitOfPublication | 89352e43-bf09-4ef4-82f6-6f9d0174ebae | |
relation.isOrgUnitOfPublication.latestForDiscovery | 89352e43-bf09-4ef4-82f6-6f9d0174ebae |