distributed-computing
Here are 1,051 public repositories matching this topic...
What happened:
When creating a LocalCluster object, the comm is started on a random high port, even if there are no other clusters running.
What you expected to happen:
It should use port 8786.
Minimal Complete Verifiable Example:
$ conda create -n dask-lc-test -c conda-forge -y python=3.8 ipython dask distributed
$ conda activate dask-lc-test
The `d
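As a quick stdlib-only check (not Dask-specific; the helper name is illustrative), one can verify whether the default scheduler port 8786 is actually free before concluding that the fallback to a random high port is unexpected:

```python
import socket

def port_is_free(port, host="127.0.0.1"):
    """Return True if nothing is listening on (host, port)."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        # connect_ex returns 0 on a successful connection, i.e. the
        # port is occupied; any error code means it is free.
        return s.connect_ex((host, port)) != 0

# If this prints True, no other scheduler holds 8786, so a fresh
# LocalCluster landing on a random high port is surprising.
print(port_is_free(8786))
```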
It seems that the number of joining clients (not the number of computing clients) is fixed in fedml_api/data_preprocessing/**/data_loader and cannot be changed, except for the CIFAR10 dataset.
That is, the total number of clients appears to be determined by the dataset rather than by the input from run_fedavg_distributed_pytorch.sh.
https://github.com/FedML-AI/FedML/blob/3d9fda8d149c95f25ec4898e31df76f035a33b5d/fed
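A minimal sketch of the change being requested: make the client count a parameter of the data loader instead of a dataset-specific constant. All names here are illustrative, not FedML's actual API.

```python
def load_partition_data(data_dir, client_number, batch_size):
    """Partition sample indices evenly across `client_number` clients.

    Hypothetical sketch: the point is that `client_number` comes from
    the caller (e.g. the run script), not from a hard-coded value tied
    to the dataset.
    """
    all_indices = list(range(1000))  # stand-in for real sample indices
    per_client = len(all_indices) // client_number
    return {
        cid: all_indices[cid * per_client:(cid + 1) * per_client]
        for cid in range(client_number)
    }

parts = load_partition_data("./data", client_number=8, batch_size=32)
print(len(parts))  # 8 partitions, one per joining client
```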
If enter_data() is called with the same train_path twice in a row and the data itself hasn't changed, a new Dataset does not need to be created.
We should add a column that stores some kind of hash of the actual data. When a Dataset would be created, if the metadata and data hash are exactly the same as an existing Dataset's, nothing should be added to the ModelHub database and the existing Dataset should be reused.
Summary
What happened / what you expected to happen?
We expect the container to use the existing volume defined inside the workflow, such as in volumes-existing.yaml.
Example for testing:
import os
import couler.argo as couler
from couler.argo_submitter import ArgoSubmitter
from couler.core.t
An evaluate_loader method for the Python API, similar to .train and .predict_loader.
Motivation
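A rough sketch of what such an evaluate_loader method could look like: iterate over batches from a loader, run prediction on each, and aggregate a metric. All names here are hypothetical stand-ins, not the library's real API.

```python
class Model:
    def predict_batch(self, xs):
        # Stand-in prediction: echo the inputs unchanged.
        return xs

    def evaluate_loader(self, loader, metric):
        """Run the model over every (inputs, targets) batch from
        `loader` and return the mean metric across batches."""
        scores = []
        for xs, ys in loader:
            preds = self.predict_batch(xs)
            scores.append(metric(preds, ys))
        return sum(scores) / len(scores)

# Usage with plain lists standing in for a DataLoader:
accuracy = lambda preds, ys: sum(p == y for p, y in zip(preds, ys)) / len(ys)
loader = [([1, 2], [1, 2]), ([3, 4], [3, 0])]
model = Model()
print(model.evaluate_loader(loader, accuracy))  # 0.75
```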