Publication: Defending Against Beta Poisoning Attacks in Machine Learning Models
| dc.conference.location | Chania; Minoa Palace Resort | |
| dc.contributor.coauthor | Gulciftci, Nilufer (60056869900) | |
| dc.contributor.coauthor | Gürsoy, Mehmet Emre (56888513800) | |
| dc.date.accessioned | 2025-12-31T08:20:18Z | |
| dc.date.available | 2025-12-31 | |
| dc.date.issued | 2025 | |
| dc.description.abstract | Poisoning attacks, in which an attacker adversarially manipulates the training dataset of a machine learning (ML) model, pose a significant threat to ML security. Beta Poisoning is a recently proposed poisoning attack that disrupts model accuracy by making the training dataset linearly nonseparable. In this paper, we propose four defense strategies against Beta Poisoning attacks: kNN Proximity-Based Defense (KPB), Neighborhood Class Comparison (NCC), Clustering-Based Defense (CBD), and Mean Distance Threshold (MDT). The defenses are based on our observations regarding the characteristics of poisoning samples generated by Beta Poisoning, e.g., poisoning samples have close proximity to one another, and they are centered near the mean of the target class. Experimental evaluations using MNIST and CIFAR-10 datasets demonstrate that KPB and MDT can achieve perfect accuracy and F1 scores, while CBD and NCC also provide strong defensive capabilities. Furthermore, by analyzing performance across varying parameters, we offer practical insights into the defenses' behavior. © 2025 Elsevier B.V., All rights reserved. | |
| dc.description.fulltext | Yes | |
| dc.description.harvestedfrom | Manual | |
| dc.description.indexedby | Scopus | |
| dc.description.publisherscope | International | |
| dc.description.readpublish | N/A | |
| dc.description.sponsoredbyTubitakEu | N/A | |
| dc.identifier.doi | 10.1109/CSR64739.2025.11130050 | |
| dc.identifier.embargo | No | |
| dc.identifier.isbn | 9798331535919 | |
| dc.identifier.quartile | N/A | |
| dc.identifier.scopus | 2-s2.0-105016248787 | |
| dc.identifier.uri | https://doi.org/10.1109/CSR64739.2025.11130050 | |
| dc.identifier.uri | https://hdl.handle.net/20.500.14288/31511 | |
| dc.keywords | AI Security | |
| dc.keywords | Cybersecurity | |
| dc.keywords | Machine Learning | |
| dc.keywords | Supervised Learning | |
| dc.keywords | Nearest Neighbor Search | |
| dc.keywords | Network Security | |
| dc.keywords | Clusterings | |
| dc.keywords | Cyber Security | |
| dc.keywords | Machine Learning Models | |
| dc.keywords | Machine-learning | |
| dc.keywords | Mean Distances | |
| dc.keywords | Neighbourhood | |
| dc.keywords | Poisoning Attack | |
| dc.keywords | Poisoning Attacks | |
| dc.keywords | Training Dataset | |
| dc.keywords | Learning Systems | |
| dc.language.iso | eng | |
| dc.publisher | Institute of Electrical and Electronics Engineers Inc. | |
| dc.relation.affiliation | Koç University | |
| dc.relation.collection | Koç University Institutional Repository | |
| dc.relation.ispartof | 5th IEEE International Conference on Cyber Security and Resilience, CSR 2025 | |
| dc.relation.openaccess | Yes | |
| dc.rights | CC BY-NC-ND (Attribution-NonCommercial-NoDerivs) | |
| dc.rights.uri | https://creativecommons.org/licenses/by-nc-nd/4.0/ | |
| dc.title | Defending Against Beta Poisoning Attacks in Machine Learning Models | |
| dc.type | Conference Proceeding | |
| dspace.entity.type | Publication |
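
The abstract notes that Beta Poisoning samples lie close to one another and near the mean of the target class, which is the observation behind the Mean Distance Threshold (MDT) defense. The sketch below is a minimal illustration of that idea only, not the paper's implementation: the function name, the threshold rule (a fraction of the mean intra-class distance), and the `factor` parameter are all illustrative assumptions.

```python
# Illustrative sketch of a mean-distance-threshold filter: samples that sit
# unusually close to their class mean are flagged as suspected poison.
# The threshold heuristic here is an assumption, not taken from the paper.
import numpy as np

def mdt_filter(X, y, factor=0.5):
    """Return a boolean mask, True = suspected poisoning sample.

    X: (n_samples, n_features) training data
    y: (n_samples,) class labels
    factor: fraction of the mean intra-class distance used as the
            flagging threshold (illustrative choice)
    """
    suspicious = np.zeros(len(X), dtype=bool)
    for cls in np.unique(y):
        idx = np.where(y == cls)[0]
        mu = X[idx].mean(axis=0)                      # class centroid
        dists = np.linalg.norm(X[idx] - mu, axis=1)   # distance to centroid
        threshold = factor * dists.mean()
        suspicious[idx] = dists < threshold           # too close -> flag
    return suspicious
```

For example, appending a tight cluster of points at a class's centroid to an otherwise well-spread class would cause those points to fall under the threshold and be flagged, while most legitimate samples remain above it.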
