Researcher: Turan, Mehmet Ali Tuğtekin
Search Results
17 results, now showing 1 - 10 of 17
Publication (Metadata only)
Enhancement of throat microphone recordings by learning phone-dependent mappings of speech spectra (Institute of Electrical and Electronics Engineers (IEEE), 2013)
Authors: Turan, Mehmet Ali Tuğtekin (PhD Student, Graduate School of Sciences and Engineering); Erzin, Engin (Faculty Member, Department of Computer Engineering, College of Engineering)
Abstract: We investigate the spectral envelope mapping problem with joint analysis of throat- and acoustic-microphone recordings to enhance throat-microphone speech. A new phone-dependent GMM-based spectral envelope mapping scheme, which performs minimum mean square error (MMSE) estimation of the acoustic-microphone spectral envelope, is proposed. Experimental evaluations compare the proposed mapping scheme to the state-of-the-art GMM-based estimator using both objective and subjective evaluations. Objective evaluations are performed with the log-spectral distortion (LSD) and the wideband perceptual evaluation of speech quality (PESQ) metrics. Subjective evaluations are performed with an A/B pair comparison listening test. Both objective and subjective evaluations show that the proposed phone-dependent mapping consistently improves performance over the state-of-the-art GMM estimator.

Publication (Metadata only)
Source and filter estimation for throat-microphone speech enhancement (IEEE-Inst Electrical Electronics Engineers Inc, 2016)
Authors: Turan, Mehmet Ali Tuğtekin; Erzin, Engin (Department of Computer Engineering)
Abstract: In this paper, we propose a new statistical enhancement system for throat microphone recordings through source and filter separation. Throat microphones (TM) are skin-attached piezoelectric sensors that capture speech sound signals in the form of tissue vibrations.
Due to their limited bandwidth, TM-recorded speech suffers from reduced intelligibility and naturalness. In this paper, we investigate learning phone-dependent Gaussian mixture model (GMM)-based statistical mappings, using parallel recordings of an acoustic microphone (AM) and a TM, for enhancement of the spectral envelope and excitation signals of TM speech. The proposed mappings address the phone-dependent variability of tissue conduction in TM recordings. While the spectral envelope mapping estimates the line spectral frequency (LSF) representation of AM from TM recordings, the excitation mapping is constructed based on the spectral energy difference (SED) of the AM and TM excitation signals. The excitation enhancement is modeled as an estimation of the SED features from the TM signal. The proposed enhancement system is evaluated using both objective and subjective tests. Objective evaluations are performed with the log-spectral distortion (LSD), the wideband perceptual evaluation of speech quality (PESQ), and mean-squared error (MSE) metrics. Subjective evaluations are performed with an A/B comparison test. Experimental results indicate that the proposed phone-dependent mappings yield enhancements over phone-independent mappings. Furthermore, enhancement of the TM excitation through statistical mappings of the SED features introduces significant objective and subjective performance improvements to the enhancement of TM recordings.

Publication (Metadata only)
Artificial bandwidth extension of speech excitation (IEEE, 2015)
Authors: Erzin, Engin; Turan, Mehmet Ali Tuğtekin (Department of Computer Engineering)
Abstract: In this paper, a new approach that extends narrowband excitation signals to synthesize wideband speech is proposed.
The bandwidth extension problem is analyzed in a source-filter separation framework, where the speech signal is decomposed into two independent components. For spectral envelope extension, our earlier work based on hidden Markov models is used. For excitation signal extension, the proposed method shifts the spectrum based on correlation analysis, so that the distance between the harmonics and the structure of the excitation signal are preserved in the high bands. In experimental studies, we also apply two other well-known extension techniques for excitation signals and evaluate the overall performance of the proposed system using the PESQ metric. Our findings indicate that the proposed extension method outperforms the other two techniques. © 2015 IEEE.
Öz (Turkish abstract, translated): In this study, a new approach is proposed that synthesizes wideband speech by extending the bandwidth of narrowband source signals. The bandwidth extension problem is addressed separately for two independent components with the help of source-filter analysis. While the spectral envelope that shapes the filter structure is enhanced using our earlier hidden-Markov-model-based work, we propose a new spectral-copying method for extending the narrowband source signal. In this method, the harmonic structure of the high-frequency components of the narrowband source signal is extended through correlation analysis to synthesize the wideband source signal. To measure the performance of the proposed enhancement, two other extension methods frequently used in the literature are also evaluated comparatively.
In experimental studies, the objective performance of the proposed extension is demonstrated with the PESQ metric.

Publication (Metadata only)
A new statistical excitation mapping for enhancement of throat microphone recordings (International Speech and Communication Association, 2013)
Authors: Turan, Mehmet Ali Tuğtekin; Erzin, Engin (Department of Computer Engineering)
Abstract: In this paper we investigate a new statistical excitation mapping technique to enhance throat-microphone speech using joint analysis of throat- and acoustic-microphone recordings. In a recent study we employed source-filter decomposition to enhance the spectral envelope of throat-microphone recordings. Within the source-filter decomposition framework, we observed that the spectral envelope difference between the excitation signals of throat- and acoustic-microphone recordings is an important source of degradation in throat-microphone voice quality. In this study we model the spectral envelope difference of the excitation signals as a spectral tilt vector, and we propose a new phone-dependent GMM-based spectral tilt mapping scheme to enhance the throat excitation signal. Experiments evaluate the proposed excitation mapping scheme against state-of-the-art throat-microphone speech enhancement techniques using both objective and subjective evaluations. Objective evaluations are performed with the wideband perceptual evaluation of speech quality (ITU-PESQ) metric. Subjective evaluations are performed with an A/B pair comparison listening test.
Both objective and subjective evaluations show that the proposed statistical excitation mapping consistently delivers higher improvements than statistical mapping of the spectral envelope for enhancing throat-microphone recordings.

Publication (Metadata only)
Improving phoneme recognition of throat microphone speech recordings using transfer learning (Elsevier, 2021)
Authors: Turan, Mehmet Ali Tuğtekin; Erzin, Engin (Department of Computer Engineering)
Abstract: Throat microphones (TM) are skin-attached non-acoustic sensors that are robust to environmental noise but have a narrower signal bandwidth than traditional close-talk microphones (CM). Attaining high phoneme recognition performance is a challenging task when the training data from a degraded channel, such as TM, is limited. In this paper, we address this challenge for TM speech recordings using a transfer learning approach based on stacked denoising autoencoders (SDA). The proposed transfer learning approach defines an SDA-based domain adaptation framework that maps the source-domain CM representations and the target-domain TM representations into a common latent space, where the mismatch between TM and CM is eliminated to better train an acoustic model and improve TM phoneme recognition. For the phoneme recognition task, we use a hybrid system based on convolutional neural networks (CNN) and hidden Markov models (HMM), which delivers better acoustic modeling performance than conventional Gaussian mixture model (GMM) based models.
In the experimental evaluations, we observed more than 12% relative phoneme error rate (PER) improvement for the TM recordings with the proposed transfer learning approach compared to baseline performances.

Publication (Metadata only)
A subjective listening test of six different artificial bandwidth extension approaches in English, Chinese, German, and Korean (Institute of Electrical and Electronics Engineers (IEEE), 2016)
Authors: Turan, Mehmet Ali Tuğtekin; Erzin, Engin (Department of Computer Engineering)
Abstract: In studies on artificial bandwidth extension (ABE), there is a lack of international coordination in subjective tests across multiple methods and languages. Here we present the design of absolute category rating listening tests evaluating 12 ABE variants of six approaches in multiple languages, namely American English, Chinese, German, and Korean. Since the number of ABE variants would have made a single listening test longer than recommended, the variants were distributed into two separate listening tests per language. The paper focuses on the listening test design, which aims at merging the subjective scores of both tests and thus allows a joint analysis of all ABE variants under test at once.
A language-dependent analysis, evaluating ABE variants in the context of the underlying coded narrowband speech condition, showed statistically significant improvement in English, German, and Korean for some ABE solutions.

Publication (Metadata only)
Monitoring infant's emotional cry in domestic environments using the capsule network architecture (International Speech and Communication Association, 2018)
Authors: Turan, Mehmet Ali Tuğtekin; Erzin, Engin (Department of Computer Engineering)
Abstract: Automated recognition of an infant's cry from audio can be considered a preliminary step for applications such as remote baby monitoring. In this paper, we implement a recently introduced deep learning topology called the capsule network (CapsNet) for the cry recognition problem. A capsule in the CapsNet is a group of neurons whose activity vector represents the probability that an entity exists. Active capsules at one level make predictions, via transformation matrices, for the parameters of higher-level capsules; when multiple predictions agree, a higher-level capsule becomes active. We employ spectrogram representations of short segments of the audio signal as input to the CapsNet. For experimental evaluations, we apply the proposed method to the INTERSPEECH 2018 Computational Paralinguistics Challenge (ComParE) crying sub-challenge, a three-class classification task using an annotated database (CRIED). The provided audio samples contain recordings from 20 healthy infants, categorized into three classes: neutral, fussing, and crying.
We show that the multi-layer CapsNet is competitive with the baseline performance on the CRIED corpus and is considerably better than a conventional convolutional network.

Publication (Metadata only)
A phonetic classification for throat microphone enhancement (IEEE, 2014)
Authors: Turan, Mehmet Ali Tuğtekin; Erzin, Engin (Department of Computer Engineering)
Abstract: In this analysis paper, we investigate the effect of phonetic clustering based on place and manner of articulation for the enhancement of throat-microphone speech through spectral envelope mapping. Place of articulation (PoA) and manner of articulation (MoA) dependent GMM-based spectral envelope mapping schemes are investigated using the reflection coefficient representation of the linear prediction model. Reflection coefficients are expected to localize mapping performance within the concatenated lossless-tubes model of the vocal tract. In experimental studies, we evaluate spectral mapping performance within PoA and MoA clusters using the log-spectral distortion (LSD), and as a function of reflection coefficient mapping using the mean-square error distance. Our findings indicate that the highest degradations after spectral mapping occur with the stops and liquids of the MoA, and the velar and alveolar classes of the PoA.
The MoA classification attains higher improvements than the PoA classification.

Publication (Metadata only)
Food intake detection using autoencoder-based deep neural networks (Institute of Electrical and Electronics Engineers (IEEE), 2018)
Authors: Turan, Mehmet Ali Tuğtekin; Erzin, Engin (Department of Computer Engineering)
Abstract: Wearable systems have the potential to reduce bias and inaccuracy in current dietary monitoring methods. The analysis of food intake sounds provides important guidance for developing an automated diet monitoring system. Most recent attempts can be regarded as impractical due to the need for multiple sensors that specialize in swallowing or chewing detection separately. In this study, we provide a unified system for detecting swallowing and chewing activities with a laryngeal microphone placed on the neck, as well as some daily activities such as speech, coughing, or throat clearing. Our proposed system is trained on a dataset containing 10 different food items collected from 8 subjects. The spectrograms, extracted from 276 minutes of recordings in total, are fed into a deep autoencoder architecture. In the three-class evaluation (chewing, swallowing, and rest), we achieve an F-score of 71.7% and an accuracy of 76.3%.
These results provide a promising contribution to an automated food monitoring system to be developed for everyday conditions.

Publication (Metadata only)
Classification of ingestion sounds using Hilbert-Huang transform (Institute of Electrical and Electronics Engineers (IEEE), 2017)
Authors: Turan, Mehmet Ali Tuğtekin; Erzin, Engin (Department of Computer Engineering)
Abstract: Automatic classification of food ingestion provides a precise and objective solution for dietary monitoring, which is an active research area. In this study, we aim to classify ingestion sounds of six different food types recorded with a throat microphone. We observe that these recordings show a different energy distribution than normal speech signals. To reveal the characteristics of the intake signals, we choose a model that reflects their energy distributions. Using the Hilbert-Huang transform, we decompose the signal on a local time scale. From this hierarchical decomposition, zero-crossing rates and short-term energies are calculated for each component. These feature sets are then classified using a support vector machine classifier. In the experimental studies, a classification accuracy of 72% is obtained for the six-class classifier, indicating that the proposed methodology is promising for further studies.
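Several of the abstracts above rest on the same core operation: an MMSE estimate of an acoustic-microphone feature given a throat-microphone feature under a joint GMM. The sketch below illustrates that operation on scalar features with made-up component statistics; it is a minimal illustration of the general technique, not the papers' actual models, features, or numbers.

```python
import math

# Each component holds toy joint statistics of a (TM, AM) feature pair:
# weight w, means (mx, my), variances (sxx, syy), cross-covariance sxy.
COMPONENTS = [
    # (w,   mx,   my,   sxx,  syy,  sxy)
    (0.5, -1.0, -0.8, 0.30, 0.30, 0.20),
    (0.5,  1.0,  1.2, 0.30, 0.30, 0.20),
]

def _gauss(x, mu, var):
    # 1-D Gaussian density
    return math.exp(-0.5 * (x - mu) ** 2 / var) / math.sqrt(2 * math.pi * var)

def mmse_map(x):
    """E[y | x] under the joint GMM: the posterior-weighted sum of the
    per-component linear regressions my + (sxy / sxx) * (x - mx)."""
    likes = [w * _gauss(x, mx, sxx) for (w, mx, my, sxx, syy, sxy) in COMPONENTS]
    total = sum(likes)
    post = [l / total for l in likes]          # component posteriors p(k | x)
    return sum(p * (my + sxy / sxx * (x - mx))
               for p, (w, mx, my, sxx, syy, sxy) in zip(post, COMPONENTS))

# A TM feature at the first component's mean maps close to that
# component's AM mean (-0.8):
print(round(mmse_map(-1.0), 3))
```

The phone-dependent schemes in the papers train one such mapping per phonetic class and pick the mapping according to the phone label, instead of a single global GMM.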
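The objective evaluations above repeatedly use the log-spectral distortion (LSD) metric. A minimal frame-level version is sketched below; the spectra are toy values, and real evaluations would average over frames of actual magnitude spectra.

```python
import math

def lsd(spec_ref, spec_est):
    """Frame-level log-spectral distortion in dB: the RMS difference of the
    two magnitude spectra on a 20*log10 (dB) scale."""
    assert len(spec_ref) == len(spec_est)
    sq = [(20 * math.log10(r / e)) ** 2 for r, e in zip(spec_ref, spec_est)]
    return math.sqrt(sum(sq) / len(sq))

print(lsd([1.0, 2.0, 4.0], [1.0, 2.0, 4.0]))  # identical spectra give 0.0 dB
print(round(lsd([1.0, 2.0], [2.0, 1.0]), 2))  # each bin is 6.02 dB off
```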
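The Hilbert-Huang-based ingestion classifier above summarizes each decomposed component by its zero-crossing rate and short-term energy. The sketch below shows only these two feature computations on a toy stand-in for one intrinsic mode function; the empirical mode decomposition itself and the SVM classifier are omitted.

```python
def zero_crossing_rate(x):
    """Fraction of consecutive sample pairs whose signs differ."""
    crossings = sum(1 for a, b in zip(x, x[1:]) if (a < 0) != (b < 0))
    return crossings / (len(x) - 1)

def short_term_energy(x):
    """Mean squared amplitude of the segment."""
    return sum(v * v for v in x) / len(x)

imf = [0.5, -0.5, 0.5, -0.5]          # alternating toy component
print(zero_crossing_rate(imf))        # crosses zero between every sample pair
print(short_term_energy(imf))
```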