Publication:
UnSplit: data-oblivious model inversion, model stealing, and label inference attacks against split learning

dc.contributor.coauthor: Çiçek, A. Ercument
dc.contributor.department: Department of Computer Engineering
dc.contributor.kuauthor: Erdoğan, Ege
dc.contributor.kuauthor: Küpçü, Alptekin
dc.contributor.schoolcollegeinstitute: College of Engineering
dc.date.accessioned: 2024-11-10T00:08:17Z
dc.date.issued: 2022
dc.description.abstract: Training deep neural networks often forces users to work in a distributed or outsourced setting, which raises privacy concerns. Split learning aims to address these concerns by splitting the model between a client and a server. The scheme supposedly provides privacy, since the server cannot see the client's model or inputs. We show that this is not true via two novel attacks. (1) We show that an honest-but-curious split learning server, equipped only with knowledge of the client's neural network architecture, can recover the input samples and obtain a model functionally similar to the client's, without being detected. (2) We show that if the client keeps only the output layer of the model hidden to "protect" the private labels, the honest-but-curious server can infer the labels with perfect accuracy. We test our attacks on various benchmark datasets and against proposed privacy-enhancing extensions to split learning. Our results show that plaintext split learning can pose serious risks, ranging from data (input) privacy to intellectual property (model parameters), and provide no more than a false sense of security.
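For context, the first attack the abstract describes can be illustrated with a minimal sketch, not the authors' code: a PyTorch toy with an assumed two-layer CNN split, where the names SurrogateClient and invert are illustrative and the paper's full method may include regularization and training details omitted here. The server, knowing only the client architecture, alternately fits a dummy input and a surrogate client model so that the surrogate reproduces the intermediate activations ("smashed data") it receives.

import torch
import torch.nn as nn
import torch.nn.functional as F

class SurrogateClient(nn.Module):
    # Server-side clone: the attack assumes the server knows the client's
    # architecture, so the surrogate mirrors it (here, a toy 2-layer CNN).
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(),
            nn.Conv2d(8, 16, 3, padding=1), nn.ReLU(),
        )

    def forward(self, x):
        return self.net(x)

def invert(smashed, input_shape, steps=500):
    # Alternately update a dummy input and the surrogate's weights so the
    # surrogate's output matches the received activations. The converged
    # dummy input approximates the client's private sample, and the fitted
    # surrogate approximates the client's model functionally.
    surrogate = SurrogateClient()
    dummy = torch.rand(input_shape, requires_grad=True)
    opt_x = torch.optim.Adam([dummy], lr=0.01)
    opt_w = torch.optim.Adam(surrogate.parameters(), lr=0.001)
    for _ in range(steps):
        for opt in (opt_x, opt_w):  # coordinate-style alternation
            opt.zero_grad()
            loss = F.mse_loss(surrogate(dummy), smashed)
            loss.backward()
            opt.step()
    return dummy.detach(), surrogate

# Demo: a hidden "true" client stands in for a real split learning client.
true_client = SurrogateClient()
secret = torch.rand(1, 1, 28, 28)           # the client's private input
smashed = true_client(secret).detach()      # what the server observes
recovered, stolen = invert(smashed, secret.shape)
print(F.mse_loss(recovered, secret).item())  # reconstruction error

The same alternating optimization yields both outcomes the abstract names at once: the dummy input recovers the private sample (model inversion), and the fitted surrogate serves as the stolen model.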
dc.description.indexedby: Scopus
dc.description.indexedby: WOS
dc.description.openaccess: YES
dc.description.publisherscope: International
dc.description.sponsoredbyTubitakEu: N/A
dc.identifier.doi: 10.1145/3559613.3563201
dc.identifier.isbn: 978-1-4503-9873-2
dc.identifier.scopus: 2-s2.0-85143252442
dc.identifier.uri: https://doi.org/10.1145/3559613.3563201
dc.identifier.uri: https://hdl.handle.net/20.500.14288/16931
dc.identifier.wos: 1138986700012
dc.keywords: Data privacy
dc.keywords: Label leakage
dc.keywords: Machine learning
dc.keywords: Model inversion
dc.keywords: Model stealing
dc.keywords: Split learning
dc.keywords: Deep neural networks
dc.keywords: Learning systems
dc.keywords: Network architecture
dc.keywords: Client models
dc.keywords: Inference attacks
dc.keywords: Inversion models
dc.keywords: Neural network architecture
dc.keywords: Privacy concerns
dc.language.iso: eng
dc.publisher: Association for Computing Machinery
dc.relation.ispartof: WPES 2022 - Proceedings of the 21st Workshop on Privacy in the Electronic Society, co-located with CCS 2022
dc.subject: Neural networks (Neurobiology)
dc.subject: Instructional systems
dc.title: UnSplit: data-oblivious model inversion, model stealing, and label inference attacks against split learning
dc.type: Conference Proceeding
dspace.entity.type: Publication
local.contributor.kuauthor: Küpçü, Alptekin
local.contributor.kuauthor: Erdoğan, Ege
local.publication.orgunit1: College of Engineering
local.publication.orgunit2: Department of Computer Engineering
relation.isOrgUnitOfPublication: 89352e43-bf09-4ef4-82f6-6f9d0174ebae
relation.isOrgUnitOfPublication.latestForDiscovery: 89352e43-bf09-4ef4-82f6-6f9d0174ebae
relation.isParentOrgUnitOfPublication: 8e756b23-2d4a-4ce8-b1b3-62c794a8c164
relation.isParentOrgUnitOfPublication.latestForDiscovery: 8e756b23-2d4a-4ce8-b1b3-62c794a8c164

Files

Original bundle

Name: IR05715.pdf
Size: 6.64 MB
Format: Adobe Portable Document Format