distributed-computing
Here are 964 public repositories matching this topic...
If enter_data() is called with the same train_path twice in a row and the data itself hasn't changed, a new Dataset does not need to be created.
We should add a column that stores some kind of hash of the actual data. When a Dataset would be created, if the metadata and data hash are exactly the same as an existing Dataset, nothing should be added to the ModelHub database and the existing Dataset should be reused.
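One way to picture this is the hedged sketch below, which is not the project's actual implementation: it hashes the contents of the file at train_path and looks for an existing Dataset with the same metadata and hash before creating a new one. The helpers file_hash, get_or_create_dataset, db.find_dataset, and db.create_dataset are hypothetical stand-ins for the ModelHub database layer.

```python
import hashlib

def file_hash(train_path, chunk_size=1 << 20):
    """Return a SHA-256 hex digest of the file at train_path."""
    digest = hashlib.sha256()
    with open(train_path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

def get_or_create_dataset(db, metadata, train_path):
    # Hypothetical lookup: if a Dataset with identical metadata and data hash
    # already exists, return it instead of adding a new row.
    data_hash = file_hash(train_path)
    existing = db.find_dataset(metadata=metadata, data_hash=data_hash)
    if existing is not None:
        return existing  # nothing new is added to the ModelHub database
    return db.create_dataset(metadata=metadata, data_hash=data_hash,
                             train_path=train_path)
```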
It seems that the number of joining clients (not the number of computing clients) is fixed in fedml_api/data_preprocessing/**/data_loader and cannot be changed, except for the CIFAR10 dataset.
In other words, the total number of clients appears to be determined by the dataset rather than by the input from run_fedavg_distributed_pytorch.sh.
https://github.com/FedML-AI/FedML/blob/3d9fda8d149c95f25ec4898e31df76f035a33b5d/fed
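To illustrate the concern, here is a hypothetical sketch, not FedML's actual code: a data loader whose partition count is baked into the loader itself, so the client number passed from the launch script has no effect. The flag name --client_num_in_total is shown only for illustration.

```python
import argparse

def load_partition_data(data_dir, requested_client_num):
    # Imagine the dataset ships pre-sharded into a fixed number of user files;
    # the partition count then comes from the data layout, not the argument.
    fixed_client_num = 3400  # e.g. one shard per writer in FEMNIST-style data
    return fixed_client_num

if __name__ == "__main__":
    parser = argparse.ArgumentParser()
    parser.add_argument("--client_num_in_total", type=int, default=10)
    args = parser.parse_args()

    actual = load_partition_data("./data", args.client_num_in_total)
    print(f"requested {args.client_num_in_total} clients, loader produced {actual}")
```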
This could be an example that uses the supported syntax and APIs:
https://github.com/couler-proj/couler/tree/d34a690/couler/core/syntax
What happened:
When creating a LocalCluster object, the comm is started on a random high port, even if there are no other clusters running.
What you expected to happen:
Should use port 8786.
Minimal Complete Verifiable Example:
The `d
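A minimal reproduction along these lines might look like the following sketch (assuming dask.distributed is installed); the scheduler_port keyword is used here to pin the port explicitly, and the addresses in the comments are illustrative.

```python
from dask.distributed import LocalCluster

# Reporter's observation: the scheduler comes up on a random high port.
cluster = LocalCluster()
print(cluster.scheduler_address)   # e.g. tcp://127.0.0.1:5xxxx rather than :8786
cluster.close()

# Pinning the port explicitly gives the expected address.
pinned = LocalCluster(scheduler_port=8786)
print(pinned.scheduler_address)    # tcp://127.0.0.1:8786
pinned.close()
```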