Research Outputs
Permanent URI for this community: https://hdl.handle.net/20.500.14288/2
Search Results (6 results)
Detection and mitigation of targeted data poisoning attacks in federated learning (IEEE, 2022)
Erbil, Pınar; Gürsoy, Mehmet Emre (Department of Computer Engineering; College of Engineering)
Federated learning (FL) has emerged as a promising paradigm for distributed training of machine learning models. In FL, several participants collaboratively train a global model by sharing only model parameter updates while keeping their training data local. However, FL was recently shown to be vulnerable to data poisoning attacks, in which malicious participants send parameter updates derived from poisoned training data. In this paper, we focus on defending against targeted data poisoning attacks, where the attacker's goal is to make the model misbehave for a small subset of classes while leaving the rest of the model relatively unaffected. To defend against such attacks, we first propose a method called MAPPS for separating malicious updates from benign ones. Building on MAPPS, we propose three attack detection methods: MAPPS + X-Means, MAPPS + VAT, and their Ensemble. We then propose an attack mitigation approach in which a "clean" model (i.e., a model that is not negatively impacted by the attack) can be trained despite the existence of a poisoning attempt. We empirically evaluate all of our methods on popular image classification datasets. Results show that we can achieve true positive rates above 95% while incurring false positive rates below 2%.
Furthermore, the clean models trained using our proposed methods achieve accuracy comparable to models trained in an attack-free scenario.

FractalRG: advanced fractal region growing using Gaussian mixture models for left atrium segmentation (Academic Press Inc Elsevier Science, 2024)
Firouznia, Marjan; Koupaei, Javad Alikhani; Faez, Karim; Jabdaragh, Aziza Saber; Demir, Çiğdem Gündüz (Department of Computer Engineering; Koç University İş Bank Artificial Intelligence Center (KUIS AI); College of Engineering)
This paper presents an advanced region growing method for precise left atrium (LA) segmentation and estimation of atrial wall thickness in CT/MRI scans. The method leverages a Gaussian mixture model (GMM) and fractal dimension (FD) analysis in a three-step procedure to enhance segmentation accuracy. The first step employs the GMM for seed initialization based on the probability distribution of image intensities. The second step uses fractal-based texture analysis to capture image self-similarity and texture complexity. An enhanced approach for generating 3D fractal maps is proposed, providing valuable texture information for region growing. In the last step, fractal-guided 3D region growing is applied for segmentation. This process expands seed points iteratively by adding neighboring voxels that meet specific similarity criteria. GMM estimates and fractal maps are used to constrain the region growing process, reducing the search space for global segmentation and improving computational efficiency. Experiments on a dataset of 10 CT scans with 3,947 images yielded a Dice score of 0.85, demonstrating superiority over traditional techniques. On a dataset of 30 MRI scans with 3,600 images, the proposed method achieved a competitive Dice score of 0.89 ± 0.02, comparable to deep learning-based models.
These results highlight the effectiveness of our approach in accurately delineating the LA region across diverse imaging modalities.

Histopathological classification of colon tissue images with self-supervised models (IEEE, 2023)
Erden, Mehmet Bahadır; Cansız, Selahattin; Demir, Çiğdem Gündüz (Department of Computer Engineering; Graduate School of Sciences and Engineering; College of Engineering)
Deep learning techniques have demonstrated their ability to facilitate medical image diagnostics by offering more precise and accurate predictions. Convolutional neural network (CNN) architectures have been the primary approach to automated diagnosis for a decade. More recently, architectures based on vision transformers (ViTs) have shown success in various computer vision tasks. However, their efficacy in medical image classification remains largely unexplored because they require large datasets. Nevertheless, significant performance gains can be achieved by pretraining with self-supervised learning techniques. This paper analyzes the performance of self-supervised pretrained networks in medical image classification tasks.
Results on colon histopathology images revealed that CNN-based architectures are more effective when trained from scratch, while pretrained models can achieve similar levels of performance with limited data.

HyperE2VID: improving event-based video reconstruction via hypernetworks (IEEE-Inst Electrical Electronics Engineers Inc, 2024)
Ercan, Burak; Eker, Onur; Sağlam, Canberk; Erdem, Erkut; Erdem, Aykut (Department of Computer Engineering; Koç University İşbank Center for Infectious Diseases (KU-IS CID); College of Engineering)
Event-based cameras are becoming increasingly popular for their ability to capture high-speed motion with low latency and high dynamic range. However, generating videos from events remains challenging due to the highly sparse and varying nature of event data. To address this, we propose HyperE2VID, a dynamic neural network architecture for event-based video reconstruction. Our approach uses hypernetworks to generate per-pixel adaptive filters guided by a context fusion module that combines information from event voxel grids and previously reconstructed intensity images. We also employ a curriculum learning strategy to train the network more robustly.
Our comprehensive experimental evaluations across various benchmark datasets reveal that HyperE2VID not only surpasses current state-of-the-art methods in reconstruction quality but also does so with fewer parameters, reduced computational requirements, and faster inference times.

Implications of node selection in decentralized federated learning (IEEE, 2023)
Lodhi, Ahnaf Hannan; Akgün, Barış; Özkasap, Öznur (Department of Computer Engineering; Graduate School of Sciences and Engineering; College of Engineering)
Decentralized Federated Learning (DFL) offers a fully distributed alternative to Federated Learning (FL). However, the lack of global information in a highly heterogeneous environment negatively impacts its performance. Node selection has been suggested in FL to improve both communication efficiency and convergence rate. To assess its impact on DFL performance, this work evaluates node selection using performance metrics. It also proposes and evaluates a time-varying, parameterized node selection method for DFL that employs validation accuracy and its per-round change. These criteria are evaluated using both hard and stochastic (soft) selection on sparse networks. The results indicate that the bias associated with node selection adversely impacts performance as training progresses.
Furthermore, under extreme conditions, soft selection is observed to yield higher diversity for better generalization, while completely random selection is preferable when participation is very limited.

Role of audio in video summarization (IEEE, 2023)
Shoer, İbrahim; Köprü, Berkay; Erzin, Engin (Department of Computer Engineering; Koç University İş Bank Artificial Intelligence Center (KUIS AI); Graduate School of Sciences and Engineering; College of Engineering)
Video summarization has attracted attention as a means of efficient video representation, retrieval, and browsing that eases volume and traffic surge problems. Although video summarization mostly relies on the visual channel for compaction, the benefits of audio-visual modeling have emerged in recent literature. The information from the audio channel can result from audio-visual correlation in the video content. In this study, we propose a new audio-visual video summarization framework that integrates four ways of fusing audio-visual information with GRU-based and attention-based networks. Furthermore, we investigate a new explainability methodology that uses audio-visual canonical correlation analysis (CCA) to better understand and explain the role of audio in the video summarization task. Experimental evaluations on the TVSum dataset attain improvements in F1 and Kendall-tau scores for audio-visual video summarization. Furthermore, splitting the video content of the TVSum and COGNIMUSE datasets into positively and negatively correlated videos based on audio-visual CCA yields a strong performance improvement on the positively correlated videos for both audio-only and audio-visual video summarization.
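The data poisoning abstract above hinges on separating malicious parameter updates from benign ones before aggregation. MAPPS itself is not specified in the abstract, so the sketch below only illustrates the general separation idea with a simple robust baseline that is not from the paper: flagging updates that lie far from the coordinate-wise median update (the distance rule and threshold are hypothetical).

```python
# Hypothetical sketch: flag suspicious FL updates by their distance from
# the coordinate-wise median update. This is NOT MAPPS; it only shows the
# generic "separate outlier updates" idea the abstract describes.

def median(xs):
    s = sorted(xs)
    n = len(s)
    return s[n // 2] if n % 2 else (s[n // 2 - 1] + s[n // 2]) / 2

def flag_suspicious(updates, threshold):
    """Return indices of updates far from the coordinate-wise median."""
    dims = len(updates[0])
    center = [median([u[d] for u in updates]) for d in range(dims)]
    flagged = []
    for i, u in enumerate(updates):
        dist = sum((a - b) ** 2 for a, b in zip(u, center)) ** 0.5
        if dist > threshold:
            flagged.append(i)
    return flagged

# Three benign-looking updates plus one outlier (toy 2-D "models").
benign = [[0.1, 0.2], [0.12, 0.18], [0.09, 0.21]]
poisoned = [[2.0, -1.5]]
print(flag_suspicious(benign + poisoned, threshold=1.0))  # -> [3]
```

In practice the flagged updates would simply be excluded from the aggregation round, which is the spirit of the paper's mitigation of training a clean model despite the attack.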
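The FractalRG abstract describes region growing that expands seed points iteratively by adding neighboring voxels meeting similarity criteria. A minimal 2D sketch of that generic step follows, with a plain intensity-difference rule standing in for the paper's GMM- and fractal-guided criteria (which are not detailed in the abstract):

```python
# Minimal sketch of seeded region growing on a 2D grid. The paper works
# in 3D and constrains growth with GMM probabilities and fractal maps;
# here a simple intensity-difference test is a stand-in for those criteria.
from collections import deque

def region_grow(image, seed, tol):
    """Grow a region from `seed`, adding 4-neighbours whose intensity
    differs from the seed intensity by at most `tol`."""
    h, w = len(image), len(image[0])
    sy, sx = seed
    base = image[sy][sx]
    region = {seed}
    queue = deque([seed])
    while queue:
        y, x = queue.popleft()
        for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
            if (0 <= ny < h and 0 <= nx < w and (ny, nx) not in region
                    and abs(image[ny][nx] - base) <= tol):
                region.add((ny, nx))
                queue.append((ny, nx))
    return region

img = [
    [10, 11, 50, 52],
    [10, 12, 51, 53],
    [ 9, 11, 10, 54],
]
print(sorted(region_grow(img, seed=(0, 0), tol=3)))
# -> [(0, 0), (0, 1), (1, 0), (1, 1), (2, 0), (2, 1), (2, 2)]
```

Replacing the `abs(...) <= tol` test with a probabilistic or texture-based criterion is where methods like FractalRG differentiate themselves from this classic baseline.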
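The HyperE2VID abstract rests on hypernetworks that generate per-pixel adaptive filters from context features. A toy sketch of that idea is below; the fixed matrix `W` is a hypothetical stand-in for a trained hypernetwork, and a per-pixel scale-and-bias plays the role of the "filter" (the real model uses learned CNNs over voxel grids and prior reconstructions).

```python
# Toy sketch of the hypernetwork idea: a small network maps a per-pixel
# context vector to that pixel's filter weights, which are then applied
# to the input pixel. W is a hypothetical stand-in for trained weights.

def hypernet(context, W):
    """Map a context vector to filter weights (one linear layer)."""
    return [sum(w * c for w, c in zip(row, context)) for row in W]

def apply_dynamic_filter(pixels, contexts, W):
    """Filter each pixel with weights generated from its own context."""
    out = []
    for p, ctx in zip(pixels, contexts):
        k = hypernet(ctx, W)
        out.append(k[0] * p + k[1])   # per-pixel scale and bias
    return out

W = [[1.0, 0.0], [0.0, 0.5]]          # hypothetical "trained" hypernet
pixels = [0.2, 0.8]
contexts = [[1.0, 0.0], [2.0, 1.0]]   # e.g. fused event/image features
print(apply_dynamic_filter(pixels, contexts, W))  # -> [0.2, 2.1]
```

The key property the sketch preserves is that the filter weights differ per pixel and are produced by a second network, rather than being shared static parameters.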
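The node selection abstract contrasts hard and stochastic ("soft") selection driven by validation accuracy. The abstract does not give the exact parameterization, so the following is a hypothetical sketch only: nodes are sampled without replacement with probability proportional to a softmax over validation accuracy, where the temperature could be varied per round.

```python
# Hypothetical sketch of stochastic ("soft") node selection: sample k
# distinct nodes weighted by softmax(validation_accuracy / temperature).
# The temperature schedule and weighting are assumptions, not the paper's.
import math
import random

def soft_select(accuracies, k, temperature, rng):
    """Sample k distinct node indices, weighted by softmax(acc / T)."""
    weights = [math.exp(a / temperature) for a in accuracies]
    idx = list(range(len(accuracies)))
    chosen = []
    for _ in range(k):
        total = sum(weights[i] for i in idx)
        r = rng.random() * total
        acc = 0.0
        for i in idx:
            acc += weights[i]
            if r <= acc:
                chosen.append(i)
                idx.remove(i)
                break
    return chosen

rng = random.Random(0)
print(soft_select([0.9, 0.5, 0.7, 0.95], k=2, temperature=0.1, rng=rng))
```

A low temperature approaches hard (greedy) selection of the most accurate nodes; a high temperature approaches the uniform random selection the paper finds preferable under very limited participation.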
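The audio summarization abstract splits videos into positively and negatively correlated sets using audio-visual CCA. The sketch below illustrates only the splitting step, with plain Pearson correlation on 1-D feature sequences as a simplified stand-in for CCA (which handles multi-dimensional features):

```python
# Sketch of the correlation-based split: partition videos by whether
# their audio and visual feature sequences correlate positively or
# negatively. Pearson correlation on 1-D features stands in for CCA.

def pearson(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

def split_by_correlation(videos):
    """Partition (name, audio, visual) triples by correlation sign."""
    pos, neg = [], []
    for name, audio, visual in videos:
        (pos if pearson(audio, visual) >= 0 else neg).append(name)
    return pos, neg

videos = [
    ("clip_a", [1, 2, 3, 4], [2, 4, 6, 8]),   # audio tracks visuals
    ("clip_b", [1, 2, 3, 4], [8, 6, 4, 2]),   # audio opposes visuals
]
print(split_by_correlation(videos))  # -> (['clip_a'], ['clip_b'])
```

Evaluating a summarizer separately on the two partitions, as the paper does on TVSum and COGNIMUSE, then isolates where the audio channel actually helps.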