gpu
Here are 2,267 public repositories matching this topic...
Updated Jul 4, 2021 - Jupyter Notebook
Updated Jun 23, 2021 - Makefile
At the moment the relu_layer op doesn't allow threshold configuration, while the legacy RELU op does.
We should add a threshold configuration option to relu_layer.
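For reference, a minimal sketch of what a configurable threshold could mean for a ReLU op (framework-agnostic and purely illustrative; the exact semantics would have to follow whatever the legacy RELU op actually does):

import numpy as np

def thresholded_relu(x, threshold=0.0):
    # Pass values strictly above `threshold` through unchanged, zero out the rest.
    # `threshold` is the knob the legacy RELU op exposes and relu_layer currently lacks.
    return np.where(x > threshold, x, 0.0)

x = np.array([-1.0, 0.2, 0.7, 3.0])
thresholded_relu(x)                 # [0.0, 0.2, 0.7, 3.0]
thresholded_relu(x, threshold=0.5)  # [0.0, 0.0, 0.7, 3.0]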
Updated Jun 9, 2021 - JavaScript
Updated Jul 4, 2021 - Python
Updated Jun 14, 2021 - Python
Problem: the approximate method can still be slow for many trees
catboost version: master
Operating System: Ubuntu 18.04
CPU: i9
GPU: RTX 2080
It would be good to be able to specify how many trees to use for Shapley value computation. The model.predict and prediction_type versions allow this, and LightGBM/XGBoost allow it as well.
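As a sketch of what is being asked for, here is the existing truncation knob on predict next to the Shapley computation that lacks one (synthetic data; only predict's ntree_end is real API, the Shapley call is just today's full-ensemble version):

import numpy as np
from catboost import CatBoostRegressor, Pool

# Tiny synthetic dataset so the sketch is self-contained.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
y = 2.0 * X[:, 0] + rng.normal(scale=0.1, size=200)

model = CatBoostRegressor(iterations=300, verbose=False)
model.fit(X, y)

# predict() already lets you truncate the ensemble to the first N trees:
preds_100 = model.predict(X, ntree_end=100)

# Shapley values are currently computed over the full ensemble; the request
# is for an ntree_end-style option here as well, as LightGBM/XGBoost offer:
shap_full = model.get_feature_importance(Pool(X, y), type="ShapValues")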
Updated Jun 10, 2021 - Python
Updated Jul 4, 2021 - Jupyter Notebook
Updated Jun 16, 2021 - Python
Hi,
I have tried both loss.backward() and model_engine.backward(loss) in my code and observed several subtle differences. For one, retain_graph=True does not work with model_engine.backward(loss). This is creating a problem, since buffers are not being retained between runs for some reason.
Please look into this if you can.
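A minimal sketch of the two call patterns being compared (toy model and config, typically launched via the deepspeed launcher; whether the engine accepts a retain_graph-style argument is exactly the gap reported above):

import torch
import deepspeed

model = torch.nn.Linear(10, 1)
ds_config = {"train_batch_size": 8, "optimizer": {"type": "Adam", "params": {"lr": 1e-3}}}

model_engine, optimizer, _, _ = deepspeed.initialize(
    model=model, model_parameters=model.parameters(), config=ds_config
)

x = torch.randn(8, 10).to(model_engine.device)
loss = model_engine(x).pow(2).mean()

# Plain PyTorch keeps the graph alive for a second backward pass:
#     loss.backward(retain_graph=True)
# The DeepSpeed engine call, per the report, has no working equivalent:
model_engine.backward(loss)
model_engine.step()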
Our users are often confused when the output from programs such as zip2john is very large (multi-gigabyte). Maybe we should identify these programs and enhance them to print a message to stderr explaining that such large output is normal - either always, or only when the output size exceeds a threshold (e.g., 1 million bytes?).
Updated Jul 2, 2021 - C++
Updated Jul 3, 2021 - C++
Updated Apr 24, 2020 - Jsonnet
Updated Jun 13, 2020 - HTML
Describe the bug
Clipping a DataFrame or Series using ints causes a cuDF failure, because clip won't handle the mixed dtypes (int bounds against a float column).
Steps/Code to reproduce bug
import cudf
data = cudf.Series([-0.43, 0.1234, 1.5, -1.31])
data.clip(0, 1)
...
File "cudf/_lib/replace.pyx", line 216, in cudf._lib.replace.clip
File "cudf/_lib/replace.pyx", line 198, in cudf._lib.replace.clamp
Describe the Problem
plot_model currently has the save argument, which can be used to save the plots. It does not provide a way to choose where to save the plot or under what name; right now it saves the plot with predefined names in the current working directory.
Describe the solution you'd like
We could add another argument, save_path, which is used whenever the save argument is set to True.
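A short sketch of today's behaviour next to the requested save_path argument (save_path does not exist yet; the proposed call is shown only as a comment):

from sklearn.datasets import load_breast_cancer
from pycaret.classification import setup, create_model, plot_model

# Small built-in dataset so the sketch is self-contained.
data = load_breast_cancer(as_frame=True).frame.rename(columns={"target": "label"})

setup(data=data, target="label", silent=True)  # silent=True skips the confirmation prompt in pycaret 2.x
model = create_model("lr")

# Today: the plot is written under a predefined name in the current working directory.
plot_model(model, plot="auc", save=True)

# Proposed (not current API): let the caller pick the directory and file name, e.g.
# plot_model(model, plot="auc", save=True, save_path="reports/lr_auc.png")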
The current implementation of join can be improved by performing the operation in a single call to the backend kernel instead of multiple calls.
This is a fairly easy kernel and may be a good issue for someone getting to know CUDA/ArrayFire internals. Ping me if you want additional info.
Updated Jul 2, 2021 - CMake
PR NVIDIA/cub#218 fixes this in CUB's radix sort. We should:
- Check whether Thrust's other backends handle this case correctly.
- Provide a guarantee of this in the stable_sort documentation.
- Add regression tests to enforce this on all backends.
Updated Jul 4, 2021 - C++
Add drop_remainder: bool = True support to torch.chunk and torch.split, similar to drop_last in torch.utils.data.DataLoader.
Add redistribute: bool = True support to torch.chunk and torch.split.
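Neither keyword exists on torch.chunk or torch.split today; the helper below is a rough sketch of what drop_remainder=True could mean, built on the current API (split_drop_remainder is a made-up name):

import torch

def split_drop_remainder(t, split_size, dim=0):
    # Hypothetical drop_remainder=True behaviour for torch.split: keep only the
    # leading chunks of exactly `split_size` elements and drop the short trailing
    # chunk, mirroring drop_last=True in torch.utils.data.DataLoader.
    n_full = (t.size(dim) // split_size) * split_size
    return torch.split(t.narrow(dim, 0, n_full), split_size, dim=dim)

x = torch.arange(10)
[c.tolist() for c in torch.split(x, 3)]            # [[0, 1, 2], [3, 4, 5], [6, 7, 8], [9]]
[c.tolist() for c in split_drop_remainder(x, 3)]   # [[0, 1, 2], [3, 4, 5], [6, 7, 8]]
# The proposed redistribute=True would presumably spread the remainder across
# the chunks instead, e.g. sizes 4, 3, 3 for the same input.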