# Verifiable Coded Computing: Towards Fast, Secure and Private Distributed Machine Learning

@article{Tang2021VerifiableCC,
  title={Verifiable Coded Computing: Towards Fast, Secure and Private Distributed Machine Learning},
  author={Tingting Tang and Ramy E. Ali and Hanieh Hashemi and Tynan Gangwani and Amir Salman Avestimehr and Murali Annavaram},
  journal={ArXiv},
  year={2021},
  volume={abs/2107.12958}
}

Stragglers, Byzantine workers, and data privacy are the main bottlenecks in distributed cloud computing. Several prior works proposed coded computing strategies to jointly address all three challenges, but they require either a large number of workers, a significant communication cost, or significant computational complexity to tolerate malicious workers. Much of the overhead in prior schemes comes from tightly coupling the coding for all three problems into a single framework. In…

#### 3 Citations

List-Decodable Coded Computing: Breaking the Adversarial Toleration Barrier

- Computer Science, Mathematics
- IEEE Journal on Selected Areas in Information Theory
- 2021

The results show that FLCC outperforms LCC by breaking the barrier on the number of adversaries that can be tolerated, and the corresponding threshold in FLCC is improved by a factor of two compared to that of LCC.

Secure Private and Adaptive Matrix Multiplication Beyond the Singleton Bound

- Computer Science, Mathematics
- ArXiv
- 2021

A framework for security against malicious adversaries in private matrix-matrix multiplication, called SRPM3, provides a computationally efficient security check that detects malicious workers with high probability and can tolerate an arbitrary number of malicious workers.
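The snippet above only summarizes SRPM3's check. As a generic illustration of how a computationally efficient, probabilistic verification of a matrix product can work, here is a minimal Freivalds-style sketch in NumPy. This is a standard randomized technique, not necessarily SRPM3's exact check: each round costs three matrix-vector products instead of a full matrix multiplication.

```python
import numpy as np

def freivalds_check(A, B, C, rounds=20):
    """Probabilistically verify that C == A @ B.

    Each round draws a random 0/1 vector r and compares A @ (B @ r)
    against C @ r; a wrong C passes a single round with probability
    at most 1/2, so it escapes all rounds with probability <= 2**-rounds.
    """
    n = C.shape[1]
    for _ in range(rounds):
        r = np.random.randint(0, 2, size=n).astype(float)
        if not np.allclose(A @ (B @ r), C @ r):
            return False  # caught a mismatch: worker result is bad
    return True  # consistent with A @ B in every round
```

Each round is O(n^2) for n-by-n matrices, versus O(n^3) for recomputing the product outright, which is what makes checks of this flavor attractive for verifying untrusted workers.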

ApproxIFER: A Model-Agnostic Approach to Resilient and Robust Prediction Serving Systems

- Computer Science, Mathematics
- ArXiv
- 2021

Approximate Coded Inference (ApproxIFER) is proposed, an approach that requires no training of parity models; it is therefore agnostic to the model hosted by the cloud and can readily be applied to different data domains and model architectures.

#### References

Showing 1–10 of 33 references

List-Decodable Coded Computing: Breaking the Adversarial Toleration Barrier

- Computer Science, Mathematics
- IEEE Journal on Selected Areas in Information Theory
- 2021

The results show that FLCC outperforms LCC by breaking the barrier on the number of adversaries that can be tolerated, and the corresponding threshold in FLCC is improved by a factor of two compared to that of LCC.

Slalom: Fast, Verifiable and Private Execution of Neural Networks in Trusted Hardware

- Computer Science, Mathematics
- ICLR
- 2019

Slalom is proposed, a framework that securely delegates execution of all linear layers in a DNN from a TEE to a faster, yet untrusted, co-located processor, enabling high-performance execution of deep neural networks in TEEs.

A Scalable Approach for Privacy-Preserving Collaborative Machine Learning

- Computer Science, Mathematics
- NeurIPS
- 2020

COPML, a fully decentralized training framework that achieves scalability and privacy protection simultaneously, is proposed, with strong statistical privacy guarantees against colluding parties (adversaries) with unbounded computational power.

Verifiable local computation on distributed data

- Computer Science
- SCC '14
- 2014

This paper proposes a multi-server verifiable local computation (VLC) model where the client can privately outsource data blocks m=(m1, ..., mn) to cloud servers and later verify computations on any portion of the outsourced data.

DRACO: Byzantine-resilient Distributed Training via Redundant Gradients

- Computer Science, Mathematics
- ICML
- 2018

DRACO is presented, a scalable framework for robust distributed training that uses ideas from coding theory and comes with problem-independent robustness guarantees; it is shown to be several times to orders of magnitude faster than median-based approaches.

Machine Learning with Adversaries: Byzantine Tolerant Gradient Descent

- Computer Science
- NIPS
- 2017

Krum is proposed, an aggregation rule satisfying a resilience property that captures the basic requirements for guaranteeing convergence despite f Byzantine workers; it is argued to be the first provably Byzantine-resilient algorithm for distributed SGD.
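The Krum rule admits a compact sketch. The NumPy implementation below (a minimal illustration, not the authors' code) scores each of the n submitted gradients by the sum of squared distances to its n − f − 2 nearest neighbors and returns the lowest-scoring one, so far-flung Byzantine gradients are never selected:

```python
import numpy as np

def krum(gradients, f):
    """Select one gradient via the Krum rule.

    Each candidate i is scored by the sum of squared Euclidean
    distances to its n - f - 2 closest other gradients; the candidate
    with the smallest score is returned.
    """
    n = len(gradients)
    assert n >= 2 * f + 3, "Krum requires n >= 2f + 3 workers"
    G = np.stack(gradients)  # shape (n, d)
    # Pairwise squared distances between all submitted gradients.
    d2 = np.sum((G[:, None, :] - G[None, :, :]) ** 2, axis=-1)
    scores = []
    for i in range(n):
        others = np.delete(d2[i], i)            # distances to the other n-1
        closest = np.sort(others)[: n - f - 2]  # n - f - 2 nearest neighbors
        scores.append(closest.sum())
    return gradients[int(np.argmin(scores))]
```

With honest gradients clustered together and up to f outliers placed arbitrarily, the neighbor count n − f − 2 guarantees an honest gradient always has a smaller score than any sufficiently distant Byzantine one.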

INTERPOL: Information Theoretically Verifiable Polynomial Evaluation

- Computer Science, Mathematics
- 2019 IEEE International Symposium on Information Theory (ISIT)
- 2019

By generalizing INTERPOL to a multiparty setting consisting of a network of n untrusted nodes, where each node is interested in evaluating the same polynomial, it is demonstrated that it can achieve an overall computational complexity comparable to a trusted setup, while guaranteeing information-theoretic verification at each node.

Polynomial Codes: an Optimal Design for High-Dimensional Coded Matrix Multiplication

- Mathematics, Computer Science
- NIPS
- 2017

We consider a large-scale matrix multiplication problem where the computation is carried out using a distributed system with a master node and multiple worker nodes, where each worker can store parts…
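The polynomial-code construction can be sketched concretely. In this scheme, A is split into m row-blocks and B into n column-blocks; worker i evaluates Ã(x_i) = Σ_j A_j x_i^j and B̃(x_i) = Σ_k B_k x_i^{km} and returns their product, an evaluation of a degree-(mn − 1) matrix polynomial, so the master can recover A·B from any mn results by interpolation. A minimal NumPy sketch over the reals (illustrative only; the paper's construction operates over a finite field for exact recovery):

```python
import numpy as np

def encode(A, B, m, n, xs):
    """Polynomial-code encoding: split A into m row-blocks and B into
    n column-blocks, then evaluate A~(x) = sum_j A_j x^j and
    B~(x) = sum_k B_k x^(k*m) at each worker's point x."""
    A_blocks = np.split(A, m, axis=0)
    B_blocks = np.split(B, n, axis=1)
    tasks = []
    for x in xs:
        Ax = sum(Aj * x**j for j, Aj in enumerate(A_blocks))
        Bx = sum(Bk * x**(k * m) for k, Bk in enumerate(B_blocks))
        tasks.append((Ax, Bx))  # worker at point x computes Ax @ Bx
    return tasks

def decode(results, xs, m, n):
    """Recover A @ B from any m*n worker products.

    Each result is C(x) = A~(x) @ B~(x), an evaluation of a degree
    m*n - 1 matrix polynomial whose coefficient at power j + k*m is
    the block A_j @ B_k; interpolate entrywise and reassemble.
    """
    V = np.vander(np.asarray(xs, float), m * n, increasing=True)
    stacked = np.stack(results)  # (m*n, rows, cols) of evaluations
    coeffs = np.linalg.solve(V, stacked.reshape(m * n, -1))
    coeffs = coeffs.reshape(m * n, *results[0].shape)
    rows = [np.hstack([coeffs[j + k * m] for k in range(n)])
            for j in range(m)]
    return np.vstack(rows)
```

With m = n = 2 and five workers at points x = 1, …, 5, any four results suffice, so the master can ignore one straggler; this recovery threshold of mn is what makes the design optimal for this setting.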

Speeding Up Distributed Machine Learning Using Codes

- Computer Science, Mathematics
- IEEE Transactions on Information Theory
- 2018

This paper focuses on two of the most basic building blocks of distributed learning algorithms, matrix multiplication and data shuffling, and uses codes to reduce communication bottlenecks by exploiting the excess in storage.
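The coded matrix multiplication idea above can be illustrated with the smallest case: a (3, 2) code for a matrix-vector product, where a third worker holds a parity block so the result survives one straggler. This is a toy sketch of the general principle, not the paper's full construction:

```python
import numpy as np

def mds_coded_matvec(A, x, straggler):
    """(3, 2)-coded matrix-vector multiply.

    Split A into row-blocks A1, A2 and give a third worker the parity
    block A1 + A2. A @ x is then recoverable from ANY two of the three
    worker results, so the slowest worker's result can be ignored.
    """
    A1, A2 = np.split(A, 2, axis=0)
    jobs = {0: A1, 1: A2, 2: A1 + A2}  # worker index -> coded block
    # Collect results from the two workers that finished.
    done = {i: blk @ x for i, blk in jobs.items() if i != straggler}
    if straggler == 0:
        return np.concatenate([done[2] - done[1], done[1]])  # A1x from parity
    if straggler == 1:
        return np.concatenate([done[0], done[2] - done[0]])  # A2x from parity
    return np.concatenate([done[0], done[1]])  # parity not needed
```

Uncoded, the master must wait for both of two workers; with the parity worker, it waits only for the fastest two of three, which is where the straggler speedup comes from.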

Collaborative Decoding of Polynomial Codes for Distributed Computation

- Computer Science, Mathematics
- 2019 IEEE Information Theory Workshop (ITW)
- 2019

We show that Polynomial codes (and some related codes) used for distributed matrix multiplication are interleaved Generalized Reed-Solomon codes and hence, can be collaboratively decoded. We consider…