Separation of Powers in Federated Learning

Pau-Chen Cheng, Kevin Eykholt, Zhongshu Gu, Hani Jamjoom, K. R. Jayaram, Enriquillo Valdez, and Ashish Verma
The 1st Workshop on Systems Challenges in Reliable and Secure Federated Learning (ResilientFL'21)
Virtual Event, October 2021
Abstract. In federated learning (FL), model updates from mutually distrusting parties are aggregated in a centralized fusion server. This concentration of model updates simplifies FL's model-building process, but may lead to unforeseeable information leakage. The problem has become acute due to recent FL attacks that can reconstruct large fractions of training data from ostensibly "sanitized" model updates. In this paper, we re-examine the current design of FL systems under the new security model of reconstruction attacks. To break down information concentration, we build TRUDA, a new cross-silo FL system that employs a trustworthy and decentralized aggregation architecture. Based on the unique computational properties of model-fusion algorithms, we disassemble all exchanged model updates at parameter granularity and re-stitch them to form random partitions designated for multiple hardware-protected aggregators. Thus, each aggregator has only a fragmentary and shuffled view of model updates and is oblivious to the model architecture. The security mechanisms deployed in TRUDA effectively mitigate training-data reconstruction attacks, while still preserving the accuracy of trained models and keeping performance overheads low.
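The abstract's core mechanism, disassembling model updates at parameter granularity, shuffling them, and assigning random partitions to multiple aggregators, can be sketched in a few lines. This is an illustrative sketch under assumptions, not TRUDA's actual implementation: the function names, the use of a single shared permutation, and simple contiguous sharding are all choices made here for clarity.

```python
import numpy as np

def partition_updates(client_updates, num_aggregators, seed=0):
    """Shuffle each client's flattened update with a shared random
    permutation, then split it into one shard per aggregator.
    Each aggregator thus sees only a shuffled fragment of every update
    and cannot map parameters back to the model architecture.
    (Hypothetical sketch; not TRUDA's API.)"""
    rng = np.random.default_rng(seed)
    n = client_updates[0].size
    perm = rng.permutation(n)  # shared secret permutation of parameter indices
    chunks = np.array_split(np.arange(n), num_aggregators)
    shards = [[u.ravel()[perm][idx] for u in client_updates] for idx in chunks]
    return perm, shards

def fuse_and_restitch(perm, shards, shape):
    """Each aggregator averages its shard across clients; the fused
    fragments are then re-stitched and the permutation inverted."""
    fused = np.concatenate([np.mean(s, axis=0) for s in shards])
    inv = np.argsort(perm)  # inverse permutation
    return fused[inv].reshape(shape)
```

Because averaging commutes with permutation and partitioning, the re-stitched result equals plain federated averaging over the full updates, which is why fusion algorithms of this form tolerate the shuffle without any accuracy loss.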
@inproceedings{cheng2021separation,
  author    = {Pau-Chen Cheng and Kevin Eykholt and Zhongshu Gu and Hani Jamjoom and K. R. Jayaram and Enriquillo Valdez and Ashish Verma},
  title     = {{Separation of Powers in Federated Learning}},
  booktitle = {The 1st Workshop on Systems Challenges in Reliable and Secure Federated Learning (ResilientFL'21)},
  address   = {Virtual Event},
  month     = {Oct},
  year      = {2021}
}