gpu
Here are 2,450 public repositories matching this topic...
Currently, if you run pylint python/taichi/ there are a lot of warnings/errors. We'd like to get rid of these and run pylint in CI. Also, in .pylintrc we skip a few checkers, but it would be nice to enable some of them again, e.g. import-outside-toplevel. This work is good for new contributors, and you're welcome to submit one PR per p
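For context, a generic sketch (not taken from the taichi tree) of the pattern the import-outside-toplevel checker flags, and its usual fix:

def to_numpy(values):
    import numpy as np  # import inside a function body triggers import-outside-toplevel
    return np.asarray(values)

# the usual fix is to move the import to module level:
# import numpy as np
# def to_numpy(values):
#     return np.asarray(values)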
At the moment the relu_layer op doesn't allow threshold configuration, while the legacy RELU op does. We should add a threshold configuration option to relu_layer; a sketch of the intended behaviour follows.
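A minimal sketch of the thresholded ReLU behaviour being requested (NumPy-based; the function name and default value are assumptions, not the project's actual API):

import numpy as np

def relu(x, threshold=0.0):
    # pass values above the threshold through unchanged, zero out the rest
    x = np.asarray(x, dtype=float)
    return np.where(x > threshold, x, 0.0)

relu([-1.0, 0.5, 2.0], threshold=1.0)  # -> array([0., 0., 2.])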
Usually, after training a model, I save it in C++ format with code like:
cat_model.save_model('a', format="cpp")
cat_model.save_model('b', format="cpp")
But my C++ program needs to use multiple models. In my main.cpp:
#include "a.hpp"
#include "b.hpp"

int main() {
    // do something
    double a_pv = ApplyCatboostModel({1.2, 2.3}); // I want to apply a.hpp's model here
    double b_pv = ApplyCatboostModel({1.2, 2.3}); // ...and b.hpp's model here, but both generated headers define the same ApplyCatboostModel function
    return 0;
}
Hi,
I have tried both loss.backward() and model_engine.backward(loss) in my code, and there are several subtle differences I have observed; for one, retain_graph=True does not work with model_engine.backward(loss). This is creating a problem, since buffers are not being retained each time I run the code, for some reason.
Please look into this if you can.
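A minimal, PyTorch-only sketch of the retain_graph behaviour being contrasted (toy tensors; model_engine is assumed to come from deepspeed.initialize as in the DeepSpeed docs and is not constructed here):

import torch

x = torch.randn(4, 3, requires_grad=True)
loss = (x * 2).sum()

# plain PyTorch: retaining the graph allows a second backward pass
loss.backward(retain_graph=True)
loss.backward()  # succeeds only because the graph was retained above

# with DeepSpeed, the engine path replaces the calls above with:
#   model_engine.backward(loss)
# the report above is that passing retain_graph=True there does not have the same effect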
The .pcd file format allows fields to be extended, which means it can neatly hold data about the label or object of a point. This can be very handy for ML tasks. However, the Open3D file IO does not appear to be able to read fields other than xyz, rgb, normals, etc. I haven't been able to find where in the Open3D structure the code for the PCD file IO loading is implemented to att
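A small illustration of the limitation described (the file name is hypothetical; read_point_cloud and the points/colors/normals attributes are the standard Open3D API):

import numpy as np
import open3d as o3d

# "labeled.pcd" is a hypothetical file whose PCD header declares an extra
# per-point field such as "label" alongside x, y, z
pcd = o3d.io.read_point_cloud("labeled.pcd")

print(np.asarray(pcd.points).shape)   # xyz is loaded
print(np.asarray(pcd.colors).shape)   # rgb is loaded if present
# no attribute exposes the extra "label" field; it is dropped on read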
Our users are often confused by the output from programs such as zip2john sometimes being very large (multi-gigabyte). Maybe we should identify and enhance these programs to output a message to stderr to explain to users that it's normal for the output to be very large - maybe always or maybe only when the output size is above a threshold (e.g., 1 million bytes?)
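A tiny sketch of the kind of notice being proposed (Python purely for illustration; the real *2john converters are C programs, and the 1,000,000-byte threshold is just the example value above):

import sys

OUTPUT_SIZE_NOTICE_THRESHOLD = 1_000_000  # bytes, per the example above

def maybe_explain_large_output(bytes_written):
    # write a one-line note to stderr so it does not pollute the hash output on stdout
    if bytes_written > OUTPUT_SIZE_NOTICE_THRESHOLD:
        print(f"Note: the output is {bytes_written} bytes; very large output is "
              "normal for this input format.", file=sys.stderr)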
Is your feature request related to a problem? Please describe.
I have to place nvim\bin into the PATH environment variable just so that neovide can find nvim.exe.
Describe the solution you'd like
Add an option to specify the nvim path in the neovide settings.
Is your feature request related to a problem? Please describe.
The row_equality_comparator constructor currently accepts a bool to indicate whether nulls are considered equal or not. In libcudf, we prefer to use scoped enums instead of bools, and we already have a
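A language-agnostic sketch of the bool-to-scoped-enum refactor being requested (Python's enum is used for illustration only; the actual change is in libcudf C++, and the NullEquality/row_equal names here are hypothetical):

from enum import Enum

class NullEquality(Enum):
    # a scoped-enum-style replacement for a bare bool flag
    EQUAL = 0
    UNEQUAL = 1

def row_equal(lhs, rhs, nulls=NullEquality.UNEQUAL):
    # None stands in for a null element
    if lhs is None and rhs is None:
        return nulls is NullEquality.EQUAL
    if lhs is None or rhs is None:
        return False
    return lhs == rhs

row_equal(None, None, nulls=NullEquality.EQUAL)  # True, and the call site reads better than row_equal(None, None, True)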
The current implementation of join can be improved by performing the operation in a single call to the backend kernel instead of multiple calls.
This is a fairly easy kernel and may be a good issue for someone getting to know CUDA/ArrayFire internals. Ping me if you want additional info.
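A sketch of the difference between repeated pairwise joins and a single multi-way join (NumPy's concatenate is used purely to illustrate the call pattern; the actual work happens in the ArrayFire backend kernel):

import numpy as np

a, b, c, d = (np.random.rand(4, 3) for _ in range(4))

# current pattern described above: several pairwise calls, each doing work of its own
out_pairwise = np.concatenate((np.concatenate((a, b)), np.concatenate((c, d))))

# proposed pattern: hand all inputs to one call so the backend can handle them in a single kernel
out_single = np.concatenate((a, b, c, d))

assert np.array_equal(out_pairwise, out_single)  # same result either way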
Add information to this error message which will help an end-user actually figure out what is wrong:
RuntimeError: Expected from <= to to be true, but got false. (Could this error message be improved? If so, please report an enhancement request to PyTorch.)
Motivation
The error message itself told me to ask for an improvement... I cannot find any useful info on thi