Publication:
Beta poisoning attacks against machine learning models: extensions, limitations and defenses

dc.contributor.department: Department of Computer Engineering
dc.contributor.kuauthor: Gürsoy, Mehmet Emre
dc.contributor.kuauthor: Kara, Atakan
dc.contributor.kuauthor: Köprücü, Nursena
dc.contributor.schoolcollegeinstitute: College of Engineering
dc.date.accessioned: 2024-12-29T09:36:00Z
dc.date.issued: 2022
dc.description.abstract: The rise of machine learning (ML) has made ML models lucrative targets for adversarial attacks. One of these attacks is Beta Poisoning, which is a recently proposed training-time attack based on heuristic poisoning of the training dataset. While Beta Poisoning was shown to be effective against linear ML models, it was originally developed with a fixed Gaussian Kernel Density Estimator (KDE) for likelihood estimation, and its effectiveness against more advanced, non-linear ML models has not been explored. In this paper, we advance the state of the art in Beta Poisoning attacks by making three novel contributions. First, we extend the attack so that it can be executed with arbitrary KDEs and norm functions. We integrate Gaussian, Laplacian, Epanechnikov and Logistic KDEs with three norm functions, and show that the choice of KDE can significantly impact attack effectiveness, especially when attacking linear models. Second, we empirically show that Beta Poisoning attacks are ineffective against non-linear ML models (such as neural networks and multi-layer perceptrons), even with our extensions. Results imply that the effectiveness of the attack decreases as model non-linearity and complexity increase. Finally, our third contribution is the development of a discriminator-based defense against Beta Poisoning attacks. Results show that our defense strategy achieves 99% and 93% accuracy in identifying poisoning samples on MNIST and CIFAR-10 datasets, respectively.
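For context on the mechanism the abstract describes, the idea of likelihood estimation with a pluggable KDE and norm function can be sketched in a few lines. The following is a minimal illustration under assumed conventions, not the paper's implementation: the kernel profiles are unnormalized, and the names kde_likelihood and KERNELS are hypothetical.

    import numpy as np

    # Kernel profiles k(u) evaluated on a scaled distance u = ||x - x_i|| / h.
    # These are the four kernel families named in the abstract; the exact
    # normalization constants are omitted (assumption, not the authors' code).
    KERNELS = {
        "gaussian":     lambda u: np.exp(-0.5 * u**2),
        "laplacian":    lambda u: np.exp(-np.abs(u)),
        "epanechnikov": lambda u: np.maximum(0.0, 1.0 - u**2),
        "logistic":     lambda u: 1.0 / (np.exp(u) + 2.0 + np.exp(-u)),
    }

    def kde_likelihood(x, samples, kernel="gaussian", norm_ord=2, bandwidth=1.0):
        """Unnormalized KDE likelihood of point x under `samples`,
        with interchangeable kernel and norm (e.g. L1, L2, L-infinity)."""
        dists = np.linalg.norm(samples - x, ord=norm_ord, axis=1)
        return np.mean(KERNELS[kernel](dists / bandwidth))

    # Hypothetical usage: score a candidate poisoning point against the
    # target class's training samples (random stand-ins for real features).
    rng = np.random.default_rng(0)
    target_class = rng.normal(size=(100, 784))
    candidate = rng.normal(size=784)
    score = kde_likelihood(candidate, target_class, kernel="laplacian", norm_ord=1)

Swapping the kernel or norm changes only the distance weighting, which is consistent with the abstract's observation that the choice of KDE can significantly affect attack effectiveness.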
dc.description.indexedby: WOS
dc.description.indexedby: Scopus
dc.description.publisherscope: International
dc.description.sponsoredbyTubitakEu: N/A
dc.identifier.doi: 10.1109/TPS-ISA56441.2022.00031
dc.identifier.isbn: 978-1-6654-7408-5
dc.identifier.quartile: N/A
dc.identifier.scopus: 2-s2.0-85150678321
dc.identifier.uri: https://doi.org/10.1109/TPS-ISA56441.2022.00031
dc.identifier.uri: https://hdl.handle.net/20.500.14288/21888
dc.identifier.wos: 978301700021
dc.keywords: Machine learning
dc.keywords: Network layers
dc.keywords: Network security
dc.language.iso: eng
dc.publisher: IEEE
dc.relation.ispartof: 2022 IEEE 4th International Conference on Trust, Privacy and Security in Intelligent Systems, and Applications, TPS-ISA
dc.subject: Computer science
dc.subject: Artificial intelligence
dc.title: Beta poisoning attacks against machine learning models: extensions, limitations and defenses
dc.type: Conference Proceeding
dspace.entity.type: Publication
local.contributor.kuauthor: Kara, Atakan
local.contributor.kuauthor: Köprücü, Nursena
local.contributor.kuauthor: Gürsoy, Mehmet Emre
local.publication.orgunit1: College of Engineering
local.publication.orgunit2: Department of Computer Engineering
relation.isOrgUnitOfPublication: 89352e43-bf09-4ef4-82f6-6f9d0174ebae
relation.isOrgUnitOfPublication.latestForDiscovery: 89352e43-bf09-4ef4-82f6-6f9d0174ebae
relation.isParentOrgUnitOfPublication: 8e756b23-2d4a-4ce8-b1b3-62c794a8c164
relation.isParentOrgUnitOfPublication.latestForDiscovery: 8e756b23-2d4a-4ce8-b1b3-62c794a8c164