The TSC-PFed Architecture for Privacy-Preserving FL
Department of Computer Engineering
Conference proceeding, 2021
ISBN: 978-1-6654-1623-8
DOI: 10.1109/TPSISA52974.2021.00052 (http://dx.doi.org/10.1109/TPSISA52974.2021.00052)
Scopus ID: 2-s2.0-85128765442
Handle: https://hdl.handle.net/20.500.14288/7878
Date: 2024-11-09
Subjects: Computer science; Artificial intelligence; Information systems; Theory and methods

Abstract: In this paper we introduce our system for trust and security enhanced customizable private federated learning: TSC-PFed. We combine secure multiparty computation and differential privacy to allow participants to leverage known trust dynamics, enabling increased ML model accuracy while preserving privacy guarantees, and we introduce an update auditor to protect against malicious participants launching label flipping data poisoning attacks. We additionally introduce customizable modules into the TSC-PFed ecosystem which (a) allow users to customize the type of privacy protection provided and (b) provide a tiered participant selection approach which considers variation in privacy budgets.
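The record does not spell out the protocol, but the ingredients the abstract names (differentially private client updates, secure aggregation, and budget-aware participant tiers) can be sketched roughly. The Python sketch below is illustrative only: every name and parameter in it (dp_noisy_update, masked_updates, select_tier, clip_norm, sigma, budget_threshold, epsilon_remaining) is hypothetical rather than taken from the paper, and the pairwise additive masking is a toy stand-in for a real secure multiparty computation protocol.

```python
import numpy as np

rng = np.random.default_rng(0)

def dp_noisy_update(update, clip_norm=1.0, sigma=0.5):
    # Clip the client's update to bounded L2 norm, then add Gaussian noise
    # (the standard Gaussian mechanism for differential privacy).
    norm = np.linalg.norm(update)
    clipped = update * min(1.0, clip_norm / max(norm, 1e-12))
    return clipped + rng.normal(0.0, sigma * clip_norm, size=update.shape)

def masked_updates(updates):
    # Toy secure aggregation: each pair of clients shares an additive mask
    # that cancels in the aggregate, so the server learns only the sum of
    # updates, never any individual update.
    masked = [u.copy() for u in updates]
    n = len(updates)
    for i in range(n):
        for j in range(i + 1, n):
            mask = rng.normal(0.0, 1.0, size=updates[0].shape)
            masked[i] += mask
            masked[j] -= mask
    return masked

def select_tier(clients, budget_threshold=1.0):
    # Tiered participant selection: group clients by remaining privacy
    # budget so rounds can draw from clients with comparable budgets.
    high = [c for c in clients if c["epsilon_remaining"] >= budget_threshold]
    low = [c for c in clients if c["epsilon_remaining"] < budget_threshold]
    return high, low

# Toy round: three clients with different remaining budgets, 4-dim updates.
clients = [{"id": i, "epsilon_remaining": e} for i, e in enumerate([2.0, 0.5, 1.5])]
high_tier, _ = select_tier(clients)
updates = [dp_noisy_update(rng.normal(size=4)) for _ in high_tier]
aggregate = np.mean(masked_updates(updates), axis=0)  # pairwise masks cancel here
print(aggregate)
```

In this sketch the masks cancel only in the sum over all participating clients, which is why the server computes an aggregate rather than inspecting individual updates; the update-auditing and trust-dynamics components described in the abstract are not modeled here.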