Publication:
SplitGuard: detecting and mitigating training-hijacking attacks in split learning

dc.contributor.coauthor: Çiçek, A. Ercument
dc.contributor.department: Department of Computer Engineering
dc.contributor.kuauthor: Erdoğan, Ege
dc.contributor.kuauthor: Küpçü, Alptekin
dc.contributor.schoolcollegeinstitute: College of Engineering
dc.date.accessioned: 2024-11-09T23:43:48Z
dc.date.issued: 2022
dc.description.abstract: Distributed deep learning frameworks such as split learning provide great benefits with regard to the computational cost of training deep neural networks and the privacy-aware utilization of the collective data of a group of data holders. Split learning, in particular, achieves this goal by dividing a neural network between a client and a server so that the client computes the initial set of layers and the server computes the rest. However, this method introduces a unique attack vector for a malicious server attempting to steal the client's private data: the server can direct the client model towards learning any task of its choice, e.g., towards outputting easily invertible values. With a concrete example already proposed (Pasquini et al., CCS '21), such training-hijacking attacks present a significant risk for the data privacy of split learning clients. In this paper, we propose SplitGuard, a method by which a split learning client can detect whether it is being targeted by a training-hijacking attack or not. We experimentally evaluate our method's effectiveness, compare it with potential alternatives, and discuss in detail various points related to its use. We conclude that SplitGuard can effectively detect training-hijacking attacks while minimizing the amount of information recovered by the adversaries.
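For context, a minimal sketch of the client/server division described in the abstract (assuming PyTorch; the layer sizes, variable names, and input shape are illustrative assumptions, not taken from the paper):

    import torch
    import torch.nn as nn

    # Client holds the initial layers and the private data.
    client = nn.Sequential(nn.Flatten(), nn.Linear(784, 256), nn.ReLU())
    # Server holds the remaining layers and drives training.
    server = nn.Sequential(nn.Linear(256, 10))

    x = torch.randn(32, 1, 28, 28)   # a private client batch (hypothetical size)
    smashed = client(x)              # only this intermediate activation leaves the client
    logits = server(smashed)         # the server completes the forward pass

    # During training, the server back-propagates and returns the gradient of
    # the loss with respect to `smashed`. A malicious server can instead send
    # gradients that steer the client layers toward an easily invertible
    # mapping (the training-hijacking attack that SplitGuard detects).

Note that in this sketch the only values exchanged are the intermediate activations and, during training, their gradients; the server's control over those gradients is what makes training hijacking possible.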
dc.description.indexedby: Scopus
dc.description.indexedby: WOS
dc.description.openaccess: YES
dc.description.publisherscope: International
dc.description.sponsoredbyTubitakEu: N/A
dc.identifier.doi: 10.1145/3559613.3563198
dc.identifier.isbn: 978-1-4503-9873-2
dc.identifier.scopus: 2-s2.0-85143253952
dc.identifier.uri: https://doi.org/10.1145/3559613.3563198
dc.identifier.uri: https://hdl.handle.net/20.500.14288/13557
dc.identifier.wos: 1138986700013
dc.keywords: Data privacy
dc.keywords: Machine learning
dc.keywords: Model inversion
dc.keywords: Split learning
dc.keywords: Deep neural networks
dc.keywords: Learning systems
dc.keywords: Multilayer neural networks
dc.keywords: Attack vector
dc.keywords: Client models
dc.keywords: Computational costs
dc.keywords: Learning frameworks
dc.keywords: Neural networks
dc.keywords: Privacy aware
dc.keywords: Private data
dc.language.iso: eng
dc.publisher: Association for Computing Machinery
dc.relation.ispartof: WPES 2022 - Proceedings of the 21st Workshop on Privacy in the Electronic Society, co-located with CCS 2022
dc.subject: Neural networks (Neurobiology)
dc.subject: Instructional systems
dc.title: SplitGuard: detecting and mitigating training-hijacking attacks in split learning
dc.type: Conference Proceeding
dspace.entity.type: Publication
local.contributor.kuauthor: Küpçü, Alptekin
local.contributor.kuauthor: Erdoğan, Ege
local.publication.orgunit1: College of Engineering
local.publication.orgunit2: Department of Computer Engineering
relation.isOrgUnitOfPublication: 89352e43-bf09-4ef4-82f6-6f9d0174ebae
relation.isOrgUnitOfPublication.latestForDiscovery: 89352e43-bf09-4ef4-82f6-6f9d0174ebae
relation.isParentOrgUnitOfPublication: 8e756b23-2d4a-4ce8-b1b3-62c794a8c164
relation.isParentOrgUnitOfPublication.latestForDiscovery: 8e756b23-2d4a-4ce8-b1b3-62c794a8c164

Files

Original bundle

Name: IR05714.pdf
Size: 10 MB
Format: Adobe Portable Document Format