This library is an implementation of the algorithm described in "Distributed Trajectory Estimation with Privacy and Communication Constraints: a Two-Stage Distributed Gauss-Seidel Approach".
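For orientation, the sketch below shows a plain (centralized, single-stage) Gauss-Seidel sweep for a linear system, the basic building block that the two-stage distributed approach generalizes; it is only an illustration and does not reproduce the library's distributed algorithm. The example matrix and vector are made up.

```python
import numpy as np

def gauss_seidel(A, b, x0=None, iters=100):
    """Classic Gauss-Seidel sweep for A x = b: each variable is updated in
    turn using the most recent values of all the others."""
    n = len(b)
    x = np.zeros(n) if x0 is None else x0.copy()
    for _ in range(iters):
        for i in range(n):
            x[i] = (b[i] - A[i, :i] @ x[:i] - A[i, i + 1:] @ x[i + 1:]) / A[i, i]
    return x

# Small diagonally dominant system (made-up data), so the sweeps converge.
A = np.array([[4.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 5.0]])
b = np.array([1.0, 2.0, 3.0])
print(gauss_seidel(A, b))
```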
We present an algorithm that dynamically adjusts the amount of data assigned to each worker at every epoch during training in a heterogeneous cluster. We empirically evaluate the performance of this dynamic partitioning by training deep neural networks on the CIFAR10 dataset.
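As a rough illustration of this kind of epoch-by-epoch repartitioning (not the repository's actual API), the sketch below splits the training set in proportion to each worker's throughput measured during the previous epoch; the worker names and throughput numbers are hypothetical.

```python
def partition_sizes(num_samples, throughputs):
    """Split num_samples across workers in proportion to their measured
    throughput, so faster workers receive more data in the next epoch."""
    total = sum(throughputs.values())
    sizes = {w: int(num_samples * t / total) for w, t in throughputs.items()}
    # Hand any rounding remainder to the fastest worker.
    fastest = max(throughputs, key=throughputs.get)
    sizes[fastest] += num_samples - sum(sizes.values())
    return sizes

# Hypothetical samples/second measured during the previous epoch.
throughputs = {"worker0": 1200.0, "worker1": 800.0, "worker2": 400.0}
print(partition_sizes(50000, throughputs))
# -> {'worker0': 25001, 'worker1': 16666, 'worker2': 8333}
```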
We present a set of all-reduce-compatible gradient compression algorithms that significantly reduce communication overhead while maintaining the performance of vanilla SGD. We empirically evaluate the compression methods by training deep neural networks on the CIFAR10 dataset.
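One simple way a compression scheme can remain all-reduce compatible is random-k sparsification with a seed shared across workers, so every worker selects the same coordinates and the compressed vectors can be summed directly; the sketch below illustrates that idea and is an assumption, not necessarily one of the algorithms implemented here.

```python
import numpy as np

def compress(grad, k, seed):
    """Random-k sparsification: all workers draw the same index set from the
    shared seed, so their compressed vectors stay aligned and can be summed
    element-wise by a standard all-reduce."""
    rng = np.random.default_rng(seed)
    idx = rng.choice(grad.size, size=k, replace=False)
    return idx, grad.flat[idx]

def decompress(idx, values, shape):
    """Scatter the (all-reduced) values back into a dense gradient."""
    dense = np.zeros(int(np.prod(shape)))
    dense[idx] = values
    return dense.reshape(shape)

# Made-up gradient; in practice `values` would go through an all-reduce
# (summed across workers) before decompression and averaging.
grad = np.random.randn(1024)
idx, values = compress(grad, k=64, seed=7)  # seed typically derived from the epoch
restored = decompress(idx, values, grad.shape)
```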
This repository contains the code that produces the numerical section of "On the Use of TensorFlow Computation Graphs in combination with Distributed Optimization to Solve Large-Scale Convex Problems".
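To make the setting concrete, here is a minimal sketch of expressing a convex objective as a TensorFlow computation graph and minimizing it with gradient descent; the least-squares problem, the data, and the step size are arbitrary assumptions, and the paper's actual distributed decomposition is not reproduced here.

```python
import numpy as np
import tensorflow as tf

# Illustrative convex problem (made up): least squares  min_x ||A x - b||^2.
A = tf.constant(np.random.randn(100, 10), dtype=tf.float32)
b = tf.constant(np.random.randn(100, 1), dtype=tf.float32)
x = tf.Variable(tf.zeros([10, 1]))

optimizer = tf.keras.optimizers.SGD(learning_rate=0.1)

@tf.function  # traced into a TensorFlow computation graph
def step():
    with tf.GradientTape() as tape:
        loss = tf.reduce_mean(tf.square(tf.matmul(A, x) - b))
    grads = tape.gradient(loss, [x])
    optimizer.apply_gradients(zip(grads, [x]))
    return loss

for _ in range(200):
    step()
```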
We present UDP-based aggregation algorithms for federated learning, along with a scalable framework for practical federated learning. We empirically evaluate the performance by training deep convolutional neural networks on the MNIST and CIFAR10 datasets.
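As a loose illustration of UDP-based transport for model updates (not the framework's actual wire protocol), the sketch below sends a flattened float32 update in a single datagram and has the aggregator average whatever arrives before a timeout; the port, update size, packet format, and averaging rule are all assumptions.

```python
import socket
import numpy as np

PORT = 9999   # hypothetical port
DIM = 1000    # hypothetical flattened update size (must fit in one datagram)

def send_update(update, server_addr=("127.0.0.1", PORT)):
    """Client side: ship a float32 update vector in one UDP datagram.
    UDP gives no delivery guarantee, so the aggregator must tolerate loss."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.sendto(update.astype(np.float32).tobytes(), server_addr)
    sock.close()

def aggregate(num_expected, timeout=5.0):
    """Server side: average whichever client updates arrive before the timeout."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(("0.0.0.0", PORT))
    sock.settimeout(timeout)
    received = []
    try:
        while len(received) < num_expected:
            data, _ = sock.recvfrom(DIM * 4)
            received.append(np.frombuffer(data, dtype=np.float32))
    except socket.timeout:
        pass  # lost or late datagrams are simply left out of the average
    finally:
        sock.close()
    return np.mean(received, axis=0) if received else None
```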