Researcher:
Tekalp, Ahmet Murat

Job Title: Faculty Member

First Name: Ahmet Murat

Last Name: Tekalp

Name Variants: Tekalp, Ahmet Murat

Search Results

Now showing 1 - 10 of 244
  • Publication
    Performance measures for video object segmentation and tracking
    (IEEE-Inst Electrical Electronics Engineers Inc, 2004) Erdem, Çiğdem Eroğlu; Sankur, Bülent; Tekalp, Ahmet Murat (Department of Electrical and Electronics Engineering, College of Engineering)
    We propose measures to evaluate quantitatively the performance of video object segmentation and tracking methods without ground-truth (GT) segmentation maps. The proposed measures are based on spatial differences of color and motion along the boundary of the estimated video object plane and temporal differences between the color histogram of the current object plane and its predecessors. They can be used to localize (spatially and/or temporally) regions where segmentation results are good or bad; and/or they can be combined to yield a single numerical measure to indicate the goodness of the boundary segmentation and tracking results over a sequence. The validity of the proposed performance measures without GT has been demonstrated by canonical correlation analysis with another set of measures with GT on a set of sequences (where GT information is available). Experimental results are presented to evaluate the segmentation maps obtained from various sequences using different segmentation approaches.
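The temporal color-histogram cue described above can be illustrated with a short sketch. The function name and the chi-square form of the distance are illustrative assumptions, not the paper's exact formulation, which combines several spatial and temporal measures.

```python
import numpy as np

def histogram_difference(curr_pixels, prev_pixels, bins=16):
    """Chi-square distance between the color histograms of the current
    object plane and its predecessor; a large value suggests that the
    tracked object boundary has drifted. (Illustrative single cue; the
    paper combines several spatial and temporal measures.)"""
    h1, _ = np.histogram(curr_pixels, bins=bins, range=(0, 256))
    h2, _ = np.histogram(prev_pixels, bins=bins, range=(0, 256))
    h1 = h1 / h1.sum()   # normalize counts to probability distributions
    h2 = h2 / h2.sum()
    denom = h1 + h2
    mask = denom > 0     # skip empty bins to avoid division by zero
    return 0.5 * np.sum((h1[mask] - h2[mask]) ** 2 / denom[mask])
```

Identical object planes score 0, and completely disjoint color distributions score 1, so the measure can be thresholded or accumulated over a sequence.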
  • Publication
    Robust speech recognition using adaptively denoised wavelet coefficients
    (IEEE, 2004) Tekalp, Ahmet Murat; Erzin, Engin; Akyol, Emrah (Department of Electrical and Electronics Engineering, College of Engineering; Graduate School of Sciences and Engineering)
    The existence of additive noise affects the performance of speech recognition in real environments. We propose a new set of feature vectors for robust speech recognition using denoised wavelet coefficients. The use of wavelet coefficients in speech processing is motivated by the ability of the wavelet transform to capture both time and frequency information and the non-stationary behaviour of speech signals. We use one set of noisy data, such as data with car noise, and we use hard thresholding in the best basis for denoising. We use isolated digits as our database in our HMM based speech recognition system. A performance comparison of hard thresholding denoised wavelet coefficients and MFCC feature vectors is presented.
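The hard-thresholding step can be sketched with a one-level Haar transform. This is a simplification: the paper thresholds in an adaptively chosen best basis rather than a fixed single-level decomposition.

```python
import numpy as np

def haar_dwt(x):
    # one-level orthonormal Haar transform; len(x) must be even
    a = (x[0::2] + x[1::2]) / np.sqrt(2)   # approximation coefficients
    d = (x[0::2] - x[1::2]) / np.sqrt(2)   # detail coefficients
    return a, d

def haar_idwt(a, d):
    # inverse of haar_dwt
    x = np.empty(a.size * 2)
    x[0::2] = (a + d) / np.sqrt(2)
    x[1::2] = (a - d) / np.sqrt(2)
    return x

def denoise_hard(x, threshold):
    # hard thresholding: zero out detail coefficients below the threshold
    a, d = haar_dwt(x)
    d = np.where(np.abs(d) >= threshold, d, 0.0)
    return haar_idwt(a, d)
```

Small detail coefficients (mostly noise) are discarded, while large ones (signal structure) pass through unchanged, which is the defining property of hard as opposed to soft thresholding.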
  • Publication
    End-to-end service-level management framework over multi-domain software defined networks
    (Institute of Electrical and Electronics Engineers (IEEE), 2016) Bağcı, Kadir Tolga; Nacaklı, Selin; Şahin, Kemal Emrecan; Tekalp, Ahmet Murat (Department of Electrical and Electronics Engineering, College of Engineering; Graduate School of Sciences and Engineering)
    We introduce a distributed, dynamic, end-to-end (E2E) service-level management framework over a multi-domain SDN in order to enable end users to negotiate with their service providers a level of service according to their needs and budget. In this framework, the service provider offers multiple levels of service and allocates network resources to each user to satisfy specific service level requests in a fair manner. To this effect, controllers of different domains negotiate with each other to satisfy the service level parameters of service requests, where functions that manage E2E services collaborate with functions that manage network resources of respective domains. The proposed framework and procedures have been verified over a newly developed large-scale multi-domain SDN emulation environment.
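The "fair allocation of network resources" mentioned above is commonly formalized as max-min fairness; the progressive-filling sketch below illustrates that standard notion and is not the framework's actual negotiation protocol.

```python
def maxmin_fair(capacity, demands):
    """Max-min fair split of a shared capacity among user demands.
    demands: {user: requested amount}. Users asking less than the equal
    share get exactly what they ask; the rest split what remains.
    (Standard progressive-filling sketch, not the paper's protocol.)"""
    alloc = {}
    remaining = dict(demands)
    cap = capacity
    while remaining:
        share = cap / len(remaining)
        satisfied = {u: d for u, d in remaining.items() if d <= share}
        if not satisfied:
            # no demand fits under an equal share: split what is left equally
            for u in remaining:
                alloc[u] = share
            return alloc
        for u, d in satisfied.items():
            alloc[u] = d       # fully satisfy small demands
            cap -= d
            del remaining[u]
    return alloc
```

For example, with capacity 10 and demands 2, 5, and 8, the small demand is fully served and the remaining 8 units are split 4 and 4.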
  • Publication
    An audio-driven dancing avatar
    (Springer, 2008) Balci, Koray; Kizoglu, Idil; Akarun, Lale; Canton-Ferrer, Cristian; Tilmanne, Joelle; Bozkurt, Elif; Erdem, A. Tanju; Yemez, Yücel; Ofli, Ferda; Demir, Yasemin; Erzin, Engin; Tekalp, Ahmet Murat (Department of Computer Engineering and Department of Electrical and Electronics Engineering, College of Engineering; Graduate School of Sciences and Engineering)
    We present a framework for training and synthesis of an audio-driven dancing avatar. The avatar is trained for a given musical genre using the multicamera video recordings of a dance performance. The video is analyzed to capture the time-varying posture of the dancer's body whereas the musical audio signal is processed to extract the beat information. We consider two different marker-based schemes for the motion capture problem. The first scheme uses 3D joint positions to represent the body motion whereas the second uses joint angles. Body movements of the dancer are characterized by a set of recurring semantic motion patterns, i.e., dance figures. Each dance figure is modeled in a supervised manner with a set of HMM (Hidden Markov Model) structures and the associated beat frequency. In the synthesis phase, an audio signal of unknown musical type is first classified, within a time interval, into one of the genres that have been learnt in the analysis phase, based on mel frequency cepstral coefficients (MFCC). The motion parameters of the corresponding dance figures are then synthesized via the trained HMM structures in synchrony with the audio signal based on the estimated tempo information. Finally, the generated motion parameters, either the joint angles or the 3D joint positions of the body, are animated along with the musical audio using two different animation tools that we have developed. Experimental results demonstrate the effectiveness of the proposed framework.
  • Publication
    Multicamera audio-visual analysis of dance figures
    (IEEE, 2007) Ofli, Ferda; Erzin, Engin; Yemez, Yücel; Tekalp, Ahmet Murat (Department of Computer Engineering and Department of Electrical and Electronics Engineering, College of Engineering; Graduate School of Sciences and Engineering)
    We present an automated system for multicamera motion capture and audio-visual analysis of dance figures. The multiview video of a dancing actor is acquired using 8 synchronized cameras. The motion capture technique is based on 3D tracking of the markers attached to the person's body in the scene, using stereo color information without need for an explicit 3D model. The resulting set of 3D points is then used to extract the body motion features as 3D displacement vectors, whereas mel-frequency cepstral coefficients (MFCC) serve as the audio features. In the first stage of multimodal analysis, we perform Hidden Markov Model (HMM) based unsupervised temporal segmentation of the audio and body motion features, separately, to determine the recurrent elementary audio and body motion patterns. Then, in the second stage, we investigate the correlation of body motion patterns with audio patterns, which can be used for estimation and synthesis of realistic audio-driven body animation.
  • Publication
    Application QoS fairness in wireless video scheduling
    (Institute of Electrical and Electronics Engineers (IEEE), 2006) Özçelebi, Tanır; Tekalp, Ahmet Murat; Civanlar, Mehmet Reha; Sunay, Mehmet Oğuz (Department of Electrical and Electronics Engineering, College of Engineering; Graduate School of Sciences and Engineering)
    The video pre-roll delay required to fill the client buffer cannot be made arbitrarily long, due to user-experience and buffer limitations in wireless point-to-multipoint streaming systems. Cross-layer design that deals with both physical and application layer aspects jointly is necessary for this purpose. We present a cross-layer optimized multiuser video adaptation and user scheduling framework for wireless video communication, where Quality-of-Service (QoS) fairness among users is provided with maximum video quality and video throughput. Both protocol layers are jointly optimized using a single Multi-Objective Optimization (MOO) framework that aims to schedule the user with the least remaining playback time and the highest video throughput (delivered video seconds per transmission slot) with maximum video quality. Experiments carried out in the IS-856 (1×EV-DO) standard and ITU pedestrian and vehicular environments demonstrate the improvements over the state-of-the-art schedulers in terms of video QoS fairness, video quality and throughput.
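The scheduling objective (least remaining playback time, highest throughput) can be illustrated with a toy weighted-sum scalarization; the dictionary field names and weights below are illustrative, not the paper's actual MOO formulation.

```python
def schedule(users, w_urgency=1.0, w_throughput=1.0):
    """Pick the user to serve in the next transmission slot.
    Each user is a dict with 'remaining_playback' (seconds still
    buffered) and 'throughput' (deliverable video seconds per slot).
    Less buffered playback means more urgent; higher throughput is
    preferred. (Illustrative weighted-sum scalarization of the
    multi-objective scheduling goal, not the paper's formulation.)"""
    def score(u):
        return -w_urgency * u['remaining_playback'] + w_throughput * u['throughput']
    return max(users, key=score)
```

A user about to underflow its buffer wins the slot even against a user with somewhat better channel throughput, which is how the scalarization trades quality for QoS fairness.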
  • Publication
    Optimal rate and input format control for content and context adaptive video streaming
    (IEEE, 2004) Tekalp, Ahmet Murat; Civanlar, Mehmet Reha; Özçelebi, Tanır (Department of Electrical and Electronics Engineering, College of Engineering; Graduate School of Sciences and Engineering)
    A novel dynamic programming based technique for optimal selection of input video format and compression rate for video streaming based on "relevancy" of the content and user context is presented. The technique uses context dependent content analysis to divide the input video into temporal segments. User selected relevance levels assigned to these segments are used in formulating a constrained optimization problem, which is solved using dynamic programming. The technique minimizes a weighted distortion measure and the initial waiting time for continuous playback under maximum acceptable distortion constraints. Spatial resolution and frame rate of input video and the DCT quantization parameters are used as optimization variables. The technique is applied to encoding of soccer videos using an H.264 [1] encoder. The improvements obtained over a standard H.264 implementation are demonstrated by experimental results.
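The dynamic-programming step can be sketched as a knapsack-style selection of one encoding option per temporal segment under a bit budget. This is a simplified assumption: the paper's optimizer also covers spatial resolution, frame rate, and initial waiting time, which are omitted here.

```python
def select_rates(segments, budget):
    """Choose one (bits, weighted_distortion) option per temporal segment
    so that total weighted distortion is minimized within a bit budget.
    segments: list of option lists, one list per segment.
    Returns (min total distortion, chosen option index per segment).
    (Knapsack-style DP sketch; the paper also optimizes resolution,
    frame rate and initial delay, which are omitted here.)"""
    INF = float('inf')
    # dp[b] = (best distortion using at most b bits, option choices so far)
    dp = [(0.0, [])] * (budget + 1)
    for options in segments:
        new = []
        for b in range(budget + 1):
            best = (INF, [])
            for i, (bits, dist) in enumerate(options):
                if bits <= b:
                    cand = dp[b - bits][0] + dist
                    if cand < best[0]:
                        best = (cand, dp[b - bits][1] + [i])
            new.append(best)
        dp = new
    return dp[budget]
```

Relevance levels enter through the weighted distortion values: highly relevant segments carry larger distortion penalties, steering bits toward them.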
  • Publication
    Emerging 3-D imaging and display technologies
    (Institute of Electrical and Electronics Engineers (IEEE), 2017) Javidi, Bahram; Tekalp, Ahmet Murat (Department of Electrical and Electronics Engineering, College of Engineering)
    We have become an information-centric society vastly dependent on the collection, communication, and presentation of information. At any given moment, it is likely that we are in the vicinity of some form of a display as displays play a prominent role in a variety of devices and applications. Three-dimensional imaging and display technologies are important components for presentation and visualization of information and for creating real-world-like environments in communication. There are broad applications of 3-D imaging and display technologies in computers, communication, mobile devices, TV, video, entertainment, robotics, metrology, security and defense, healthcare, and medicine.
  • Publication
    OpenQoS: an OpenFlow controller design for multimedia delivery with end-to-end quality of service over software-defined networks
    (IEEE, 2012) Tekalp, Ahmet Murat; Eğilmez, Hilmi Enes; Dane, Said Tahsin; Bağcı, Kadir Tolga (Department of Electrical and Electronics Engineering, College of Engineering; Graduate School of Sciences and Engineering)
    OpenFlow is a Software Defined Networking (SDN) paradigm that decouples control and data forwarding layers of routing. In this paper, we propose OpenQoS, which is a novel OpenFlow controller design for multimedia delivery with end-to-end Quality of Service (QoS) support. Our approach is based on QoS routing where the routes of multimedia traffic are optimized dynamically to fulfill the required QoS. We measure performance of OpenQoS over a real test network and compare it with the performance of the current state-of-the-art, HTTP-based multi-bitrate adaptive streaming. Our experimental results show that OpenQoS can guarantee seamless video delivery with little or no video artifacts experienced by the end-users. Moreover, unlike current QoS architectures, in OpenQoS the guaranteed service is handled without having adverse effects on other types of traffic in the network.
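QoS routing of the kind described above is often formulated as a constrained shortest-path problem: minimize delay while using only links that meet a bandwidth floor. The sketch below is a generic Dijkstra-based illustration of that idea, not the specific optimization used by OpenQoS.

```python
import heapq

def qos_route(graph, src, dst, min_bw):
    """Shortest-delay path restricted to links with sufficient bandwidth.
    graph: {node: [(neighbor, delay, bandwidth), ...]}
    Returns (total delay, path) or None if no feasible route exists.
    (Generic constrained-shortest-path sketch, not OpenQoS itself.)"""
    pq = [(0.0, src, [src])]
    seen = set()
    while pq:
        delay, node, path = heapq.heappop(pq)
        if node == dst:
            return delay, path
        if node in seen:
            continue
        seen.add(node)
        for nxt, d, bw in graph.get(node, []):
            if bw >= min_bw and nxt not in seen:   # prune thin links
                heapq.heappush(pq, (delay + d, nxt, path + [nxt]))
    return None
```

Re-running the search as link loads change gives the dynamic rerouting behavior: when the low-delay path no longer meets the bandwidth floor, traffic shifts to the next-best feasible path.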
  • Publication
    Lossless watermarking for image authentication: a new framework and an implementation
    (IEEE-Inst Electrical Electronics Engineers Inc, 2006) Çelik, Mehmet Utku; Sharma, Gaurav; Tekalp, Ahmet Murat (Department of Electrical and Electronics Engineering, College of Engineering)
    We present a novel framework for lossless (invertible) authentication watermarking, which enables zero-distortion reconstruction of the un-watermarked images upon verification. As opposed to earlier lossless authentication methods that required reconstruction of the original image prior to validation, the new framework allows validation of the watermarked images before recovery of the original image. This reduces computational requirements in situations when either the verification step fails or the zero-distortion reconstruction is not needed. For verified images, integrity of the reconstructed image is ensured by the uniqueness of the reconstruction procedure. The framework also enables public(-key) authentication without granting access to the perfect original and allows for efficient tamper localization. Effectiveness of the framework is demonstrated by implementing the framework using hierarchical image authentication along with lossless generalized-least significant bit data embedding.
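The invertibility idea behind lossless LSB embedding can be shown with a toy sketch: payload bits overwrite pixel LSBs, but the original LSBs are kept so the image is restored bit-exactly. In the paper's generalized-LSB scheme the original LSBs are losslessly compressed and carried inside the payload itself; here they are simply returned alongside it for clarity.

```python
def embed(pixels, payload_bits):
    """Replace each pixel's least significant bit with a payload bit,
    returning the watermarked pixels plus the original LSBs needed for
    perfect restoration. (Toy illustration of invertibility; the paper
    compresses the original LSBs into the payload itself.)"""
    saved = [p & 1 for p in pixels]                       # original LSBs
    marked = [(p & ~1) | b for p, b in zip(pixels, payload_bits)]
    return marked, saved

def extract_and_restore(marked, saved):
    """Read the payload back out and rebuild the exact original pixels."""
    payload = [p & 1 for p in marked]
    original = [(p & ~1) | b for p, b in zip(marked, saved)]
    return payload, original
```

The round trip is exact: extraction recovers the payload, and writing the saved LSBs back yields the original pixels with zero distortion, which is the "lossless" property the framework builds on.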