Publication:
Defending against beta poisoning attacks in machine learning models

dc.conference.locationChania; Minoa Palace Resort
dc.contributor.coauthorGulciftci, Nilufer
dc.contributor.departmentDepartment of Computer Engineering
dc.contributor.kuauthorGürsoy, Mehmet Emre
dc.contributor.schoolcollegeinstituteCollege of Engineering
dc.date.accessioned2025-12-31T08:20:18Z
dc.date.available2025-12-31
dc.date.issued2025
dc.description.abstractPoisoning attacks, in which an attacker adversarially manipulates the training dataset of a machine learning (ML) model, pose a significant threat to ML security. Beta Poisoning is a recently proposed poisoning attack that degrades model accuracy by making the training dataset linearly nonseparable. In this paper, we propose four defense strategies against Beta Poisoning attacks: kNN Proximity-Based Defense (KPB), Neighborhood Class Comparison (NCC), Clustering-Based Defense (CBD), and Mean Distance Threshold (MDT). The defenses are based on our observations regarding the characteristics of poisoning samples generated by Beta Poisoning, e.g., poisoning samples have close proximity to one another, and they are centered near the mean of the target class. Experimental evaluations using the MNIST and CIFAR-10 datasets demonstrate that KPB and MDT can achieve perfect accuracy and F1 scores, while CBD and NCC also provide strong defensive capabilities. Furthermore, by analyzing performance across varying parameters, we offer practical insights into the defenses' behavior under different conditions.
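The abstract's key observation — that Beta Poisoning samples cluster near the mean of the target class — suggests a simple distance-based filter. The sketch below is an illustrative interpretation only, not the paper's actual MDT algorithm (whose exact rule and parameters are not given in this record): it flags a training sample as suspicious when it lies much closer to another class's mean than to the mean of its own labeled class.

```python
import numpy as np

def mdt_flag(X, y, ratio=0.5):
    """Illustrative mean-distance heuristic (hypothetical sketch,
    not the paper's MDT). A sample whose distance to some other
    class's mean is less than `ratio` times the distance to its
    own class mean is flagged as a potential poisoning sample."""
    classes = np.unique(y)
    # Per-class means computed over the (possibly poisoned) training set.
    means = {c: X[y == c].mean(axis=0) for c in classes}
    flagged = np.zeros(len(X), dtype=bool)
    for i, (x, label) in enumerate(zip(X, y)):
        d_own = np.linalg.norm(x - means[label])
        d_other = min(np.linalg.norm(x - means[c])
                      for c in classes if c != label)
        flagged[i] = d_other < ratio * d_own
    return flagged
```

On a toy two-class set where one point labeled class 0 sits at the center of class 1, only that point is flagged; the `ratio` threshold trades off false positives against missed poisoning samples.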
dc.description.fulltextYes
dc.description.harvestedfromManual
dc.description.indexedbyScopus
dc.description.indexedbyWOS
dc.description.publisherscopeInternational
dc.description.readpublishN/A
dc.description.sponsoredbyTubitakEuN/A
dc.identifier.doi10.1109/CSR64739.2025.11130050
dc.identifier.embargoNo
dc.identifier.isbn9798331535919
dc.identifier.quartileN/A
dc.identifier.scopus2-s2.0-105016248787
dc.identifier.urihttps://doi.org/10.1109/CSR64739.2025.11130050
dc.identifier.urihttps://hdl.handle.net/20.500.14288/31511
dc.identifier.wos001575967100010
dc.keywordsAI security
dc.keywordsCybersecurity
dc.keywordsMachine learning
dc.keywordsPoisoning attacks
dc.keywordsSupervised learning
dc.language.isoeng
dc.publisherInstitute of Electrical and Electronics Engineers (IEEE)
dc.relation.affiliationKoç University
dc.relation.collectionKoç University Institutional Repository
dc.relation.ispartofProceedings of the 2025 IEEE International Conference on Cyber Security and Resilience, CSR 2025
dc.relation.openaccessYes
dc.rightsCC BY-NC-ND (Attribution-NonCommercial-NoDerivs)
dc.rights.urihttps://creativecommons.org/licenses/by-nc-nd/4.0/
dc.subjectComputer science
dc.titleDefending against beta poisoning attacks in machine learning models
dc.typeConference Proceeding
dspace.entity.typePublication
person.familyNameGürsoy
person.givenNameMehmet Emre
relation.isOrgUnitOfPublication89352e43-bf09-4ef4-82f6-6f9d0174ebae
relation.isOrgUnitOfPublication.latestForDiscovery89352e43-bf09-4ef4-82f6-6f9d0174ebae
relation.isParentOrgUnitOfPublication8e756b23-2d4a-4ce8-b1b3-62c794a8c164
relation.isParentOrgUnitOfPublication.latestForDiscovery8e756b23-2d4a-4ce8-b1b3-62c794a8c164