Research Outputs
Permanent URI for this community: https://hdl.handle.net/20.500.14288/2
Search Results
Publication (Metadata only): An analytical framework for self-organizing peer-to-peer anti-entropy algorithms (Elsevier, 2010). Authors: Özkasap, Öznur; Çağlar, Mine; Yazıcı, Emine Şule; Küçükçifçi, Selda. Departments: Department of Computer Engineering, College of Engineering; Department of Mathematics, College of Sciences.
An analytical framework is developed for establishing exact performance measures for peer-to-peer (P2P) anti-entropy paradigms used in biologically inspired epidemic data dissemination. Major benefits of these paradigms are that they are fully distributed, self-organizing, utilize only local data via pair-wise interactions, and provide eventual consistency, reliability and scalability. We derive exact expressions for infection probabilities through elaborated counting techniques on a digraph. Considering the first passage times of a Markov chain based on these probabilities, we find the expected message delay experienced by each peer and its overall mean as a function of the initial number of infectious peers. Further delay and overhead analysis is given through simulations and the analytical framework. The number of contacted peers at each round of the anti-entropy approach is an important parameter for both delay and overhead. These exact performance measures and theoretical results would be beneficial when utilizing the models in several P2P distributed system and network services such as replicated servers, multicast protocols, loss recovery, failure detection and group membership management.

Publication (Open Access): Analysis of push-type epidemic data dissemination in fully connected networks (Elsevier, 2014). Authors: Sezer, Ali Devin; Çağlar, Mine. Department: Department of Mathematics, College of Sciences.
Consider a fully connected network of nodes, some of which have a piece of data to be disseminated to the whole network. We analyze the following push-type epidemic algorithm: in each push round, every node that has the data, i.e., every infected node, randomly chooses c ∈ Z+ other nodes in the network and transmits, i.e., pushes, the data to them. We write this round as a random walk, each step of which corresponds to a random selection of one of the infected nodes; this gives recursive formulas for the distribution and the moments of the number of newly infected nodes in a push round. We use the formula for the distribution to compute the expected number of rounds so that a given percentage of the network is infected and continue a numerical comparison of the push algorithm and the pull algorithm (where the susceptible nodes randomly choose peers) initiated in an earlier work. We then derive the fluid and diffusion limits of the random walk as the network size goes to infinity and deduce a number of properties of the push algorithm: (1) the number of newly infected nodes in a push round, and the number of random selections needed so that a given percentage of the network is infected, are both asymptotically normal, (2) for large networks, starting with a nonzero proportion of infected nodes, a pull round infects slightly more nodes on average, (3) the number of rounds until a given proportion λ of the network is infected converges to a constant for almost all λ ∈ (0, 1). Numerical examples for theoretical results are provided.
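The push round described in the abstract above is easy to prototype. The following Python sketch (illustrative only; the function simulate_push and its parameters are hypothetical names, not from the paper) estimates by direct simulation how many push rounds are needed before a given fraction of a fully connected network is infected, which can be checked against the recursive formulas and asymptotic results stated above.

```python
import random

def simulate_push(n, c, initially_infected, target_fraction):
    """Simulate synchronous push rounds in a fully connected network of n nodes.

    Each round, every infected node pushes the data to c peers chosen uniformly
    at random from the other n - 1 nodes.  Returns the number of rounds until
    at least target_fraction of the network is infected.
    """
    infected = set(initially_infected)
    rounds = 0
    while len(infected) < target_fraction * n:
        newly_contacted = set()
        for node in infected:
            peers = random.sample([v for v in range(n) if v != node], c)
            newly_contacted.update(peers)
        infected |= newly_contacted
        rounds += 1
    return rounds

# Example: 1000 nodes, c = 1, a single initially infected node, infect 90%.
print(simulate_push(1000, 1, {0}, 0.9))
```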
Publication (Open Access): Classification of imbalanced data with a geometric digraph family (Journal of Machine Learning Research (JMLR), 2016). Authors: Manukyan, Artur; Ceyhan, Elvan. Departments: Department of Mathematics; Graduate School of Sciences and Engineering; College of Sciences.
We use a geometric digraph family called class cover catch digraphs (CCCDs) to tackle the class imbalance problem in statistical classification. CCCDs provide graph theoretic solutions to the class cover problem and have been employed in classification. We assess the classification performance of CCCD classifiers by extensive Monte Carlo simulations, comparing them with other classifiers commonly used in the literature. In particular, we show that CCCD classifiers perform relatively well when one class is more frequent than the other in a two-class setting, an example of the class imbalance problem. We also point out the relationship between the class imbalance and class overlapping problems, and their influence on the performance of CCCD classifiers and other classification methods, as well as some state-of-the-art algorithms which are robust to class imbalance by construction. Experiments on both simulated and real data sets indicate that CCCD classifiers are robust to the class imbalance problem. CCCDs substantially undersample from the majority class while preserving the information on the discarded points during the undersampling process. Many state-of-the-art methods keep this information by means of ensemble classifiers, but CCCDs yield only a single classifier with the same property, making them both appealing and fast.

Publication (Metadata only): Exact performance measures for peer-to-peer epidemic information diffusion (Springer-Verlag Berlin, 2006). Authors: Özkasap, Öznur; Yazıcı, Emine Şule; Küçükçifçi, Selda; Çağlar, Mine. Departments: Department of Computer Engineering, College of Engineering; Department of Mathematics, College of Sciences.
We consider peer-to-peer anti-entropy paradigms for epidemic information diffusion, namely the pull, push and hybrid cases, and provide exact performance measures for them. Major benefits of the proposed epidemic algorithms are that they are fully distributed, utilize local information only via pair-wise interactions, and provide eventual consistency, scalability and communication topology-independence. Our contribution is the derivation of exact expressions for infection probabilities through elaborated counting techniques on a digraph. Considering the first passage times of a Markov chain based on these probabilities, we find the expected message delay experienced by each peer and its overall mean as a function of the initial number of infectious peers. In terms of these criteria, the hybrid approach outperforms the pull and push paradigms, and push is better than the pull case. Such theoretical results would be beneficial when integrating the models in several peer-to-peer distributed application scenarios.
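The pull, push and hybrid anti-entropy rounds compared in the abstract above can be mimicked with a short Monte Carlo experiment. The sketch below is a simplification under assumed semantics (each node contacts a single random peer per round; the names anti_entropy_round and rounds_to_full_infection are hypothetical) and is meant only for a rough empirical comparison of the three modes, not as a reproduction of the paper's exact Markov chain analysis.

```python
import random

def anti_entropy_round(infected, n, mode):
    """One synchronous anti-entropy round over nodes 0..n-1.

    mode 'push':   every infected node contacts one random peer and infects it.
    mode 'pull':   every susceptible node contacts one random peer and becomes
                   infected if that peer was infected at the start of the round.
    mode 'hybrid': both exchanges happen in the same round.
    """
    updated = set(infected)
    if mode in ('push', 'hybrid'):
        for node in infected:
            peer = random.choice([v for v in range(n) if v != node])
            updated.add(peer)
    if mode in ('pull', 'hybrid'):
        for node in range(n):
            if node not in infected:
                peer = random.choice([v for v in range(n) if v != node])
                if peer in infected:
                    updated.add(node)
    return updated

def rounds_to_full_infection(n, mode, seed=1):
    random.seed(seed)
    infected, rounds = {0}, 0
    while len(infected) < n:
        infected = anti_entropy_round(infected, n, mode)
        rounds += 1
    return rounds

for mode in ('pull', 'push', 'hybrid'):
    print(mode, rounds_to_full_infection(200, mode))
```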
Publication (Metadata only): Minimum covering for hexagon triple systems (Springer, 2004). Authors: Lindner, C. C.; Küçükçifçi, Selda. Department: Department of Mathematics, College of Sciences.
A hexagon triple is a graph consisting of three triangles of the form (a, x, b), (b, y, c), and (c, z, a), where a, b, c, x, y, z are distinct. The triangle (a, b, c) is called the inside triangle and the triangles (a, x, b), (b, y, c), and (c, z, a) are called outside triangles. A 3k-fold hexagon triple system of order n is a pair (X, H), where H is an edge-disjoint collection of hexagon triples which partitions the edge set of 3kK_n with vertex set X. Note that the outside triangles form a 3k-fold triple system. If the 3k-fold hexagon triple system (X, H) has the additional property that the inside triangles form a k-fold triple system, then (X, H) is said to be perfect. A covering of 3kK_n with hexagon triples is a triple (X, H, P) such that: (1) 3kK_n has vertex set X; (2) P is a subset of E(λK_n) with vertex set X for some λ; and (3) H is an edge-disjoint partition of E(3kK_n) ∪ P into hexagon triples. If P is as small as possible, (X, H, P) is called a minimum covering of 3kK_n with hexagon triples. If the inside triangles of the hexagon triples in H form a minimum covering of kK_n with triangles, the covering is said to be perfect. A complete solution for the problem of constructing perfect 3k-fold hexagon triple systems and perfect maximum packings of 3kK_n with hexagon triples was given recently by the authors [2]. In this work, we give a complete solution of the problem of constructing perfect minimum coverings of 3kK_n with hexagon triples.

Publication (Metadata only): Multicast transport protocol analysis: self-similar sources (Springer-Verlag Berlin, 2004). Authors: Çağlar, Mine; Özkasap, Öznur. Departments: Department of Mathematics, College of Sciences; Department of Computer Engineering, College of Engineering.
We study the traffic that scalable multicast protocols generate in terms of message delays over the network as well as traffic counts at the link level in the case of self-similar sources. In particular, we study the Bimodal Multicast and Scalable Reliable Multicast protocols proposed for scalable reliable multicasting. These protocols are based on different mechanisms for recovering from message losses and providing scalability. We discuss the protocol mechanisms as the main underlying factor in our empirical results. Our results can be considered as a contribution to the general problem of integrating multicast communication at large scale.

Publication (Metadata only): Multiclass G/M/1 queueing system with self-similar input and non-preemptive priority (Elsevier, 2008). Authors: Iftikhar, Mohsin; Singh, Tejeshwar; Landfeldt, Bjorn; Çağlar, Mine. Department: Department of Mathematics, College of Sciences.
In order to deliver innovative and cost-effective IP multimedia applications over mobile devices, there is a need to develop a unified service platform for the future mobile Internet, referred to as the Next Generation (NG) all-IP network. It is convincingly demonstrated by numerous recent studies that modern multimedia network traffic exhibits long-range dependence (LRD) and self-similarity. These characteristics pose many novel and challenging problems in traffic engineering and network planning. One of the major concerns is how to allocate network resources efficiently to diverse traffic classes with heterogeneous QoS constraints. However, much of the current understanding of wireless traffic modeling is based on classical Poisson-distributed traffic, which can yield misleading results and hence poor network planning. Unlike most existing studies, which primarily focus on the analysis of single-queue systems based on the simplest First-Come-First-Served (FCFS) scheduling policy, in this paper we introduce the first analytical performance model of its kind for multiple-queue systems with self-similar traffic scheduled by priority queueing to support differentiated QoS classes. The proposed model is based on a G/M/1 queueing system that takes into account multiple classes of traffic that exhibit long-range dependence and self-similarity. We analyze the model on the basis of non-preemptive priority and find the exact packet delay and packet loss rate of the corresponding classes. We develop a finite-queue Markov chain for non-preemptive priority scheduling, extending previous work on infinite-capacity systems. We extract a numerical solution for the proposed analytical framework by formulating and solving the corresponding Markov chain. We further present a comparison of the numerical analysis with comprehensive simulation studies of the same system. We also implement a Cisco-router-based test bed, which serves to validate the mathematical, numerical, and simulation results as well as to support understanding of the QoS behaviour of realistic traffic input.
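The non-preemptive priority discipline analyzed in the queueing abstract above can be illustrated with a toy discrete-event simulation. The sketch below is a simplification: it uses Poisson arrivals and exponential service only to keep the code short, whereas the paper feeds self-similar, long-range dependent input into a G/M/1 system; the name simulate_nonpreemptive_priority and its parameters are hypothetical.

```python
import heapq
import random

def simulate_nonpreemptive_priority(arrival_rates, service_rate, jobs_per_class, seed=0):
    """Toy single-server queue with non-preemptive priority scheduling.

    arrival_rates: Poisson rate per class, index 0 = highest priority.
    service_rate:  exponential service rate shared by all classes.
    Returns the mean waiting time (time in queue before service) per class.
    """
    rng = random.Random(seed)
    arrivals = []  # (arrival_time, class_id)
    for cls, lam in enumerate(arrival_rates):
        t = 0.0
        for _ in range(jobs_per_class):
            t += rng.expovariate(lam)
            arrivals.append((t, cls))
    arrivals.sort()

    waiting = [[] for _ in arrival_rates]
    queue = []            # heap of (class_id, arrival_time): class 0 served first
    server_free_at = 0.0
    i = 0
    while i < len(arrivals) or queue:
        # Admit every arrival that occurs before the server next frees up;
        # if the queue is empty, the server idles until the next arrival.
        while i < len(arrivals) and (not queue or arrivals[i][0] <= server_free_at):
            t, cls = arrivals[i]
            heapq.heappush(queue, (cls, t))
            i += 1
        cls, t = heapq.heappop(queue)
        start = max(server_free_at, t)
        waiting[cls].append(start - t)
        server_free_at = start + rng.expovariate(service_rate)
    return [sum(w) / len(w) for w in waiting]

# Two classes with equal load; class 0 should see noticeably shorter waits.
print(simulate_nonpreemptive_priority([0.3, 0.3], 1.0, 5000))
```

The essential behaviour sits in the event loop: whenever the server frees up it takes the highest-priority waiting job, but it never interrupts a job already in service.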
Publication (Metadata only): On maximal partial Latin hypercubes (Springer, 2023). Authors: Donovan, Diane M.; Grannell, Mike J.; Yazıcı, Emine Şule. Department: Department of Mathematics, College of Sciences.
A lower bound is presented for the minimal number of filled cells in a maximal partial Latin hypercube of dimension d and order n. The result generalises and extends previous results for d = 2 (Latin squares) and d = 3 (Latin cubes). Explicit constructions show that this bound is near-optimal for large n > d. For d > n, a connection with Hamming codes shows that this lower bound gives a related upper bound for the same quantity. The results can be interpreted in terms of independent dominating sets in certain graphs, and in terms of codes that have covering radius 1 and minimum distance at least 2.

Publication (Metadata only): Overall and pairwise segregation tests based on nearest neighbor contingency tables (Elsevier, 2009). Author: Ceyhan, Elvan. Department: Department of Mathematics, College of Sciences.
Multivariate interaction between two or more classes (or species) has important consequences in many fields and may cause multivariate clustering patterns such as spatial segregation or association. Spatial segregation occurs when members of a class tend to be found near members of the same class (i.e., near conspecifics), while spatial association occurs when members of a class tend to be found near members of the other class or classes. These patterns can be studied using a nearest neighbor contingency table (NNCT). The null hypothesis is randomness in the nearest neighbor (NN) structure, which may result from, among other patterns, random labeling (RL) or complete spatial randomness (CSR) of points from two or more classes (called CSR independence henceforth). New versions of overall and cell-specific tests based on NNCTs (i.e., NNCT-tests) are introduced and compared with Dixon's overall and cell-specific tests and various other spatial clustering methods. Overall segregation tests are used to detect any deviation from the null case, while the cell-specific tests are post hoc pairwise spatial interaction tests that are applied when the overall test yields a significant result. The distributional properties of these tests are analyzed and the finite-sample performance of the tests is assessed by an extensive Monte Carlo simulation study. Furthermore, it is shown that the new NNCT-tests have better performance in terms of Type I error and power estimates. The methods are also applied to two real-life data sets for illustrative purposes.
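A nearest neighbor contingency table of the kind used by the segregation tests above is straightforward to compute from labeled point data. The following sketch (the helper nnct and the synthetic two-cluster example are illustrative, not taken from the paper) builds the table whose cell (i, j) counts points of class i whose nearest neighbor belongs to class j; the NNCT-tests described above are functions of these counts.

```python
import numpy as np

def nnct(points, labels):
    """Build a nearest neighbor contingency table (NNCT).

    points: (n, d) array of coordinates.
    labels: length-n sequence of class labels.
    Cell (i, j) counts points of class i whose nearest neighbor
    (excluding the point itself) belongs to class j.
    """
    points = np.asarray(points, dtype=float)
    labels = np.asarray(labels)
    classes = sorted(set(labels.tolist()))
    index = {c: k for k, c in enumerate(classes)}
    table = np.zeros((len(classes), len(classes)), dtype=int)
    # Pairwise squared distances; exclude self-distances via +inf diagonal.
    d2 = ((points[:, None, :] - points[None, :, :]) ** 2).sum(axis=-1)
    np.fill_diagonal(d2, np.inf)
    nearest = d2.argmin(axis=1)
    for i, j in enumerate(nearest):
        table[index[labels[i]], index[labels[j]]] += 1
    return classes, table

# Two-class example: well-separated clusters give a diagonal-heavy NNCT
# (nearest neighbors are mostly conspecifics, i.e., spatial segregation).
rng = np.random.default_rng(0)
pts = np.vstack([rng.normal(0, 1, (50, 2)), rng.normal(5, 1, (50, 2))])
labs = ["A"] * 50 + ["B"] * 50
print(nnct(pts, labs)[1])
```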
Publication (Metadata only): QC-LDPC codes from difference matrices and difference covering arrays (IEEE-Inst Electrical Electronics Engineers Inc, 2023). Authors: Donovan, Diane M.; Rao, Asha; Üsküplü, Elif; Yazıcı, Emine Şule. Department: Department of Mathematics, College of Sciences.
We give a framework that generalizes LDPC code constructions using transversal designs or related structures such as mutually orthogonal Latin squares. Our constructions offer a broader range of code lengths and code rates. Similar earlier constructions rely on the existence of finite fields of order a power of a prime, which significantly restricts the functionality of the resulting codes. In contrast, the LDPC codes constructed here are based on difference matrices and difference covering arrays, structures that are available for any order a, resulting in LDPC codes across a broader class of parameters, notably length a(a - 1) for all even a. Such values are not possible with earlier constructions, thus establishing the novelty of these new constructions. Specifically, the codes constructed here satisfy the RC constraint and, for a odd, have length a^2 and rate 1 - (4a - 3)/a^2, and, for a even, length a^2 - a and rate at least 1 - (4a - 6)/(a^2 - a). When 3 does not divide a, these LDPC codes have stopping distance at least 8. When a is odd and neither 3 nor 5 divides a, our construction delivers an infinite family of QC-LDPC codes with minimum distance at least 10. We also determine lower bounds for the stopping distance of the code. Further, we include simulation results illustrating the performance of our codes. The BER and FER performance of our codes over the AWGN channel (via simulation) is at least equivalent to that of codes constructed previously.
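The length and rate formulas quoted in the abstract above can be tabulated directly. The helper below (qc_ldpc_parameters is a hypothetical name; it only evaluates the stated formulas and does not construct a parity-check matrix) prints, for a few orders a, the code length and the rate, which is exact for odd a and a lower bound for even a.

```python
def qc_ldpc_parameters(a):
    """Code length and rate from the formulas stated in the abstract.

    For odd a:  length a^2,     rate 1 - (4a - 3) / a^2.
    For even a: length a^2 - a, rate at least 1 - (4a - 6) / (a^2 - a).
    """
    if a % 2 == 1:
        length = a * a
        rate = 1 - (4 * a - 3) / (a * a)
    else:
        length = a * a - a
        rate = 1 - (4 * a - 6) / (a * a - a)
    return length, rate

for a in (7, 8, 11, 16):
    n, r = qc_ldpc_parameters(a)
    print(f"a = {a:2d}: length = {n:4d}, rate >= {r:.3f}")
```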