gpu
Here are 2,928 public repositories matching this topic...
This is a nice first issue: add types/docments to the Learner.get_preds function. This function is essential to any fastai user and has almost no documentation. Add types and a short description to each parameter, as we have done in many other places.
The taichi.lang.util.warning function just prints the warning without consulting the current state of Python's standard library warnings module.
For example:
import warnings
import taichi as ti
with warnings.catch_warnings():
    warnings.simplefilter("ignore")
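For contrast, here is a minimal, taichi-free sketch of the behavior the issue asks for: a library that routes diagnostics through the stdlib warnings module automatically respects whatever filters the caller has installed.

```python
import warnings

def library_warn(msg):
    # A library that uses the stdlib `warnings` machinery (instead of a bare
    # print) honors any filters installed by the caller.
    warnings.warn(msg, UserWarning)

with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("ignore")  # caller asks to silence warnings
    library_warn("kernel arg deprecated")

# The "ignore" filter suppressed the warning before it could be shown.
print(len(caught))  # 0
```

A plain print bypasses this machinery entirely, which is exactly why the filter in the snippet above has no effect on taichi's output.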
Updated Jun 20, 2022 · Makefile
At the moment the relu_layer op doesn't allow threshold configuration, while the legacy RELU op does. We should add a threshold configuration option to relu_layer.
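As a sketch of the requested semantics (assuming the common "thresholded ReLU" definition, where values at or below the threshold are clamped to zero — the actual op may differ):

```python
def relu(x, threshold=0.0):
    # Hypothetical thresholded ReLU: pass `x` through unchanged only when it
    # exceeds `threshold`; otherwise output zero.
    return x if x > threshold else 0.0

print(relu(2.0))                 # 2.0
print(relu(0.5, threshold=1.0))  # 0.0
print(relu(-1.0))                # 0.0
```

With `threshold=0.0` this reduces to the standard ReLU, so adding the option would be backward compatible.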
Updated May 31, 2022 · JavaScript
Updated Jul 12, 2022 · Python
Is your feature request related to a problem? Please describe.
On macOS, if Neovide is in the background and I click on its window, that brings Neovide to the foreground and also moves the cursor to where I clicked.
Describe the solution you'd like
On macOS, it's common for apps to receive focus via a mouse click without the click being interpreted as an interaction with the UI element.
Updated Jul 3, 2022 · Python
Hi,
I have tried both loss.backward() and model_engine.backward(loss) in my code. There are several subtle differences I have observed; for one, retain_graph=True does not work with model_engine.backward(loss). This is creating a problem, since buffers are not being retained every time I run the code, for some reason.
Please look into this if you can.
Updated Jul 12, 2022 · C++
Problem:
_catboost.pyx in _catboost._set_features_order_data_pd_data_frame()
_catboost.pyx in _catboost.get_cat_factor_bytes_representation()
CatBoostError: Invalid type for cat_feature[non-default value idx=1,feature_idx=336]=2.0 : cat_features must be integer or string, real number values and NaN values should be converted to string.
Could you also print the feature name, not only its index?
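The workaround the error message suggests can be sketched as a small pre-processing helper (a hypothetical function, not part of CatBoost itself) that converts real-number and NaN categorical values to strings before training:

```python
import math

def to_cat_string(value):
    # Hypothetical pre-processing step implied by the CatBoostError:
    # categorical features must be int or str, so map floats and NaN
    # to a stable string representation first.
    if isinstance(value, float):
        if math.isnan(value):
            return "nan"
        if value.is_integer():
            return str(int(value))  # 2.0 -> "2", matching the failing value
        return str(value)
    return value

print(to_cat_string(2.0))           # "2"
print(to_cat_string(float("nan")))  # "nan"
print(to_cat_string("blue"))        # "blue"
```

Applying such a mapping to every categorical column avoids the error above, though the original request — printing the feature name alongside the index — would still make the failure easier to locate.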
Our users are often confused by the output from programs such as zip2john sometimes being very large (multi-gigabyte). Maybe we should identify these programs and enhance them to print a message to stderr explaining that very large output is normal, either always or only when the output size is above a threshold (e.g., 1 million bytes?).
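The threshold variant could look something like this sketch (the function name and 1 MB cutoff are illustrative assumptions; the real tools are C programs):

```python
import sys

SIZE_WARN_THRESHOLD = 1_000_000  # bytes; example threshold from the proposal

def maybe_warn_large_output(n_bytes, stream=sys.stderr):
    # Sketch of the proposed behavior: tell the user on stderr that very
    # large output is expected, but only above a size threshold.
    if n_bytes > SIZE_WARN_THRESHOLD:
        print(f"Note: output is {n_bytes} bytes; large output is normal "
              "for this input format.", file=stream)
        return True
    return False

print(maybe_warn_large_output(5_000_000_000))  # True
print(maybe_warn_large_output(1024))           # False
```

Writing the note to stderr keeps it out of the hash output itself, so piping to a file or to john is unaffected.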
Updated Jul 1, 2022 · Python
Description
https://numpy.org/doc/stable/reference/generated/numpy.corrcoef.html
https://docs.cupy.dev/en/stable/reference/generated/cupy.corrcoef.html
It seems the arguments differ.
Additional Information
The dtype argument was added in NumPy version 1.20.
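Concretely, NumPy (>= 1.20) accepts a dtype argument that CuPy's signature lacks; mirroring it would align the two APIs:

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0])
y = np.array([2.0, 4.0, 6.0, 8.0])

# `dtype` controls the precision of the result (NumPy >= 1.20).
r = np.corrcoef(x, y, dtype=np.float32)
print(r.dtype)   # float32
print(r.shape)   # (2, 2)
```

The same call against cupy.corrcoef currently raises a TypeError for the unexpected keyword, which is the gap this issue describes.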
Is your feature request related to a problem? Please describe.
In the time series plotting module, many plots are customized at the end (template, figure size, etc.). Since the same code is repeated in all these plots, it could be modularized and reused.
with fig.batch_update():
    template = _resolve_dict_keys(
        dict_=fig_kwargs, key="template", defaults=fig_default
    )
Updated Jul 12, 2022 · Jupyter Notebook
Updated Jun 29, 2022 · Python
Updated Jun 29, 2022
There are a number of items in wgpu whose documentation contains examples using GLSL syntax or other references to GLSL elements. Since WGSL is now the standard shading language for WebGPU, it would benefit readers if these examples were presented first in WGSL. (Keeping the GLSL would still be helpful for new users arriving from WebGL.)
Relevant occurrences of the text "GLSL" in documentation:
Updated Jul 12, 2022 · C++
The API lists::drop_list_duplicates operates on a pair of keys-values input lists columns with duplicate_keep_option; it is a Spark-specific feature. We now have lists::distinct, which purely extracts distinct list elements from the input lists column. This API is more standard and is used by both Python and Spark. Therefore, we should remove lists::drop_list_duplicates completely.
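For readers unfamiliar with the libcudf API, a pure-Python sketch of what lists::distinct computes per row of a lists column (keeping the first occurrence of each element):

```python
def list_distinct(lists):
    # Pure-Python sketch of `lists::distinct`: for each row of a lists
    # column, keep only the first occurrence of every element.
    out = []
    for row in lists:
        seen = set()
        out.append([x for x in row if not (x in seen or seen.add(x))])
    return out

print(list_distinct([[1, 2, 2, 3], [4, 4], []]))  # [[1, 2, 3], [4], []]
```

drop_list_duplicates additionally pairs a values column with the keys and chooses which duplicate's value to keep, which is the Spark-specific behavior the proposal would drop.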
How can I import a model or weight file trained with PyTorch?
Updated Apr 24, 2020 · Jsonnet
The current implementation of the Zero Redundancy optimizer has its own implementation of object broadcasting.
We should replace it with c10d's [broadcast_object_list](https://pytorch.org/docs/stable/distributed.html#torch.distributed.broadcast_object_list).
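Conceptually, broadcast_object_list serializes each Python object on the source rank and has every other rank deserialize the same bytes. A single-process, torch-free sketch of that idea (the function here is purely illustrative, not the c10d API):

```python
import pickle

def broadcast_object_list_sketch(objects, src_rank, ranks):
    # Conceptual, single-process sketch of c10d's broadcast_object_list:
    # the source rank pickles each object, and every rank unpickles the
    # same payload, so all ranks end up with equal copies.
    payload = [pickle.dumps(obj) for obj in objects]  # done on src_rank
    return {rank: [pickle.loads(b) for b in payload] for rank in ranks}

result = broadcast_object_list_sketch([{"lr": 0.1}, [1, 2]], src_rank=0, ranks=[0, 1, 2])
print(result[2])  # [{'lr': 0.1}, [1, 2]]
```

Reusing the c10d primitive would delete the bespoke serialization path and inherit its tested handling of collectives across process groups.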