gpu
Here are 2,840 public repositories matching this topic...
Updated May 11, 2022 - Jupyter Notebook
The taichi.lang.util.warning function just prints the warning without consulting the current state of Python's standard-library warnings module. For example, the following suppression has no effect:
import warnings
import taichi as ti

with warnings.catch_warnings():
    warnings.simplefilter("ignore")
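A minimal sketch of the suggested fix, routing messages through `warnings.warn` so user-installed filters apply (the helper name mirrors the report; this is not taichi's actual implementation):

```python
import warnings

def warning(msg: str, category: type = UserWarning) -> None:
    # Route through the stdlib machinery so user-installed filters
    # (simplefilter, catch_warnings, the -W flag) are honored.
    warnings.warn(msg, category, stacklevel=2)

with warnings.catch_warnings():
    warnings.simplefilter("ignore")
    warning("this is suppressed")  # no output, unlike a bare print
```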
Updated May 3, 2022 - Makefile
At the moment the relu_layer op doesn't allow threshold configuration, while the legacy RELU op does.
We should add a threshold configuration option to relu_layer.
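For reference, a sketch of the thresholded-ReLU semantics the legacy op supports, in plain NumPy (illustrative only, not the op's implementation):

```python
import numpy as np

def relu(x: np.ndarray, threshold: float = 0.0) -> np.ndarray:
    # Pass values strictly above the threshold; zero out the rest.
    return np.where(x > threshold, x, 0.0)

relu(np.array([-1.0, 0.5, 2.0]), threshold=1.0)  # -> [0., 0., 2.]
```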
Updated May 10, 2022 - JavaScript
I want to preemptively start this thread to survey for suggestions. A cursory search led me to this promising repository: https://github.com/enigo-rs/enigo
Since closing the window is a common point of failure, that will be the focus for the first pass of testing as I learn how to use the library.
Components for testing:
- bridge
- editor
- renderer
- settings
- wind
Hi,
I have tried both loss.backward() and model_engine.backward(loss) in my code. I have observed several subtle differences; for one, retain_graph=True does not work with model_engine.backward(loss). This is creating a problem, since for some reason buffers are not being retained each time I run the code.
Please look into this if you can.
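For context, plain PyTorch allows a second backward pass over the same graph only when the first pass retained it; a minimal sketch with bare torch (no DeepSpeed involved):

```python
import torch

x = torch.tensor([2.0], requires_grad=True)
y = x * x  # dy/dx = 2x = 4 at x = 2

y.backward(retain_graph=True)  # keep the graph for a second pass
y.backward()                   # works only because the graph was retained

x.grad  # gradients accumulate across passes: 4 + 4 -> tensor([8.])
```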
Updated May 11, 2022 - C++
Problem:
_catboost.pyx in _catboost._set_features_order_data_pd_data_frame()
_catboost.pyx in _catboost.get_cat_factor_bytes_representation()
CatBoostError: Invalid type for cat_feature[non-default value idx=1,feature_idx=336]=2.0 : cat_features must be integer or string, real number values and NaN values should be converted to string.
Could you also print the feature name, not only the index?
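As the error message advises, real-valued categorical columns (including NaN) can be converted to strings before fitting; a minimal pandas sketch (the column name is hypothetical):

```python
import pandas as pd

df = pd.DataFrame({"cat_feature": [2.0, 3.0, float("nan")]})

# CatBoost requires cat_features to be int or str; map floats and
# NaN to strings explicitly before passing the frame to fit().
df["cat_feature"] = df["cat_feature"].map(
    lambda v: "nan" if pd.isna(v) else str(int(v))
)
df["cat_feature"].tolist()  # -> ['2', '3', 'nan']
```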
Our users are often confused by the output from programs such as zip2john sometimes being very large (multi-gigabyte). Maybe we should identify and enhance these programs to output a message to stderr to explain to users that it's normal for the output to be very large - maybe always or maybe only when the output size is above a threshold (e.g., 1 million bytes?)
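A sketch of the threshold idea in Python (the real tools are C programs; the function name and the 1 MB cutoff are illustrative):

```python
import sys

SIZE_THRESHOLD = 1_000_000  # bytes; illustrative cutoff from the issue

def warn_if_large(output_bytes: int) -> None:
    # Tell users on stderr that a huge hash file is expected, so the
    # note does not interleave with the hash output on stdout.
    if output_bytes > SIZE_THRESHOLD:
        print(
            f"note: output larger than {SIZE_THRESHOLD} bytes "
            "is normal for this format",
            file=sys.stderr,
        )
```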
Updated Mar 12, 2022 - Python
Description
https://numpy.org/doc/stable/reference/generated/numpy.corrcoef.html
https://docs.cupy.dev/en/stable/reference/generated/cupy.corrcoef.html
The arguments seem to differ.
Additional Information
The dtype argument was added in NumPy version 1.20.
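For reference, NumPy accepts the keyword while, per the report, CuPy's signature does not; a minimal check with NumPy >= 1.20:

```python
import numpy as np

x = np.array([[1.0, 2.0, 3.0],
              [4.0, 6.0, 8.0]])  # second row is linear in the first

# dtype controls the precision of the result (NumPy >= 1.20).
r = np.corrcoef(x, dtype=np.float32)
r.dtype  # -> dtype('float32'); r is all ones for linearly related rows
```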
Updated May 11, 2022 - Jupyter Notebook
Is your feature request related to a problem? Please describe.
In the time-series plotting module, many plots are customized at the end (template, figure size, etc.). Since the same code is repeated across all these plots, it could be modularized and reused:
with fig.batch_update():
    template = _resolve_dict_keys(
        dict_=fig_kwargs, key="template", defaults=fig_default
    )
Updated May 11, 2022 - C++
Describe the bug
series.unique() returns a cudf.Series, while pandas returns a numpy.ndarray.
Steps/Code to reproduce bug
In [1]: import cudf
In [2]: import pandas as pd
In [3]: type(pd.Series([1,1]).unique())
Out[3]: numpy.ndarray
In [4]: type(cudf.Series([1,1]).unique())
Out[4]: cudf.core.series.Series
Expected behavior
I would expect the return types to match pandas (an array rather than a Series).
Description
I'm trying to port an existing application from GLSL to wgpu, so I have existing complex shaders I want to modify to be compatible. While trying to get them working, I found that if a shader has (something which naga considers) a syntax error, wgpu panics via .unwrap():
https://github.com/gfx-rs/wgpu/blob/326af60df8623e93b47a0de090e6cb449c8507f5/wgpu/src/bac
Updated May 10, 2022
Environment
1. System environment:
2. MegEngine version: 1.6.0rc1
3. Python version: 3.8.10
The program got stuck at net.load when I tried to use MegFlow. I waited for more than 10 minutes and there was no sign of it finishing.
Updated Apr 24, 2020 - Jsonnet
Follow-up to pytorch/pytorch#74955 (comment).
It turns out that cmake version was simply bad, and we can now unpin cmake once again.
cc @seemethere @malfet @pytorch/pytorch-dev-infra