Publication:
A deep learning approach for data driven vocal tract area function estimation

dc.contributor.department: Department of Computer Engineering
dc.contributor.department: Department of Electrical and Electronics Engineering
dc.contributor.kuauthor: Erzin, Engin
dc.contributor.kuauthor: Asadiabadi, Sasan
dc.contributor.kuprofile: Faculty Member
dc.contributor.other: Department of Computer Engineering
dc.contributor.other: Department of Electrical and Electronics Engineering
dc.contributor.schoolcollegeinstitute: College of Sciences
dc.contributor.schoolcollegeinstitute: Graduate School of Sciences and Engineering
dc.contributor.yokid: 34503
dc.contributor.yokid: N/A
dc.date.accessioned: 2024-11-09T13:45:07Z
dc.date.issued: 2018
dc.description.abstract: In this paper we present a data-driven vocal tract area function (VTAF) estimation using deep neural networks (DNNs). We approach the VTAF estimation problem with sequence-to-sequence learning neural networks, where regression over a sliding window is used to learn an arbitrary non-linear one-to-many mapping from the input feature sequence to the target articulatory sequence. We propose two schemes for efficient estimation of the VTAF: (1) direct estimation of the area function values and (2) indirect estimation via prediction of the vocal tract boundaries. We consider acoustic speech and phone sequences as two possible input modalities for the DNN estimators. Experimental evaluations are performed over a large dataset comprising acoustic and phonetic features with parallel articulatory information from the USC-TIMIT database. Our results show that the proposed direct and indirect schemes perform VTAF estimation with mean absolute error (MAE) rates lower than 1.65 mm, with the direct estimation scheme observed to perform better than the indirect scheme.
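The abstract's sliding-window regression idea (direct scheme) can be sketched roughly as follows. This is a minimal, hypothetical NumPy illustration, not the authors' implementation: all sizes (window width, feature dimension, number of area-function sections, hidden units) and the single-hidden-layer network are assumptions made for the example.

```python
import numpy as np

def sliding_windows(features, win=5):
    """Stack each frame with its neighboring context frames (edge-padded)."""
    half = win // 2
    padded = np.pad(features, ((half, half), (0, 0)), mode="edge")
    return np.stack([padded[i:i + win].ravel() for i in range(len(features))])

def mlp_forward(x, w1, b1, w2, b2):
    """One hidden ReLU layer, linear output over the area-function sections."""
    h = np.maximum(0.0, x @ w1 + b1)
    return h @ w2 + b2

rng = np.random.default_rng(0)
T, feat_dim, win, n_sections = 100, 13, 5, 20    # hypothetical sizes
features = rng.standard_normal((T, feat_dim))    # stand-in acoustic frames
x = sliding_windows(features, win)               # (T, win * feat_dim) inputs
w1 = rng.standard_normal((win * feat_dim, 64)) * 0.1  # untrained toy weights
b1 = np.zeros(64)
w2 = rng.standard_normal((64, n_sections)) * 0.1
b2 = np.zeros(n_sections)
vtaf = mlp_forward(x, w1, b1, w2, b2)            # (T, n_sections) area estimates
```

The point of the sketch is the data flow: each output frame of the area function is regressed from a short window of input frames, so one input sequence maps to a same-length articulatory sequence.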
dc.description.fulltext: YES
dc.description.indexedby: WoS
dc.description.indexedby: Scopus
dc.description.openaccess: YES
dc.description.publisherscope: International
dc.description.sponsoredbyTubitakEu: N/A
dc.description.sponsorship: N/A
dc.description.version: Author's final manuscript
dc.format: pdf
dc.identifier.doi: 10.1109/SLT.2018.8639582
dc.identifier.embargo: NO
dc.identifier.filenameinventoryno: IR01885
dc.identifier.isbn: 9781538643341
dc.identifier.issn: 2639-5479
dc.identifier.link: https://doi.org/10.1109/SLT.2018.8639582
dc.identifier.quartile: N/A
dc.identifier.scopus: 2-s2.0-85063083027
dc.identifier.uri: https://hdl.handle.net/20.500.14288/3581
dc.identifier.wos: 463141800025
dc.keywords: Speech articulation
dc.keywords: Vocal tract area function
dc.keywords: Deep neural network
dc.keywords: Convolutional neural network
dc.language: English
dc.publisher: Institute of Electrical and Electronics Engineers (IEEE)
dc.relation.grantno: NA
dc.relation.uri: http://cdm21054.contentdm.oclc.org/cdm/ref/collection/IR/id/8568
dc.source: 2018 IEEE Workshop on Spoken Language Technology (SLT)
dc.subject: Computer science
dc.subject: Artificial intelligence
dc.subject: Engineering, electrical and electronic
dc.title: A deep learning approach for data driven vocal tract area function estimation
dc.type: Journal Article
dspace.entity.type: Publication
local.contributor.authorid: 0000-0002-2715-2368
local.contributor.authorid: N/A
local.contributor.kuauthor: Erzin, Engin
local.contributor.kuauthor: Asadiabadi, Sasan
relation.isOrgUnitOfPublication: 89352e43-bf09-4ef4-82f6-6f9d0174ebae
relation.isOrgUnitOfPublication: 21598063-a7c5-420d-91ba-0cc9b2db0ea0
relation.isOrgUnitOfPublication.latestForDiscovery: 21598063-a7c5-420d-91ba-0cc9b2db0ea0

Files

Original bundle

Name: 8568.pdf
Size: 342.28 KB
Format: Adobe Portable Document Format