gpu
Here are 2,415 public repositories matching this topic...
- Updated Sep 29, 2021 - Jupyter Notebook
Concisely describe the proposed feature
Currently, each IMGUI window in GGUI is delimited by a window.GUI.begin() and a window.GUI.end() call. For example, to create a subwindow with a single button in it, one would write:
window.GUI.begin("window", 0.1, 0.1, 0.5, 0.5)
window.GUI.button("my button")
window.GUI.end()
This API could be made more elegant using Python's with statement (context manager).
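The proposal can be sketched with contextlib. Here, sub_window and FakeGUI are hypothetical names, assuming only that window.GUI exposes begin()/end() as shown above:

```python
from contextlib import contextmanager

@contextmanager
def sub_window(gui, name, x, y, w, h):
    # Hypothetical helper: pairs begin()/end() automatically, even if the
    # body raises, so a subwindow is never left open.
    gui.begin(name, x, y, w, h)
    try:
        yield gui
    finally:
        gui.end()

# Stub standing in for window.GUI, just to show the resulting call order.
class FakeGUI:
    def __init__(self):
        self.calls = []
    def begin(self, name, x, y, w, h):
        self.calls.append("begin")
    def button(self, label):
        self.calls.append("button")
    def end(self):
        self.calls.append("end")

gui = FakeGUI()
with sub_window(gui, "window", 0.1, 0.1, 0.5, 0.5) as g:
    g.button("my button")
```

With a real GGUI window, the same pattern would replace the explicit begin()/end() pair in the example above.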
- Updated Sep 28, 2021 - Makefile
At the moment, the relu_layer op doesn't allow threshold configuration, while the legacy RELU op does. We should add a threshold configuration option to relu_layer.
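For reference, the requested behavior is just the thresholded variant of ReLU. A minimal plain-Python sketch (function name is hypothetical):

```python
def relu(x, threshold=0.0):
    # Thresholded ReLU: pass values strictly above `threshold`, zero the rest.
    # With the default threshold of 0.0 this is the ordinary ReLU.
    return [v if v > threshold else 0.0 for v in x]
```

The legacy op's configurable threshold corresponds to the `threshold` parameter here.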
- Updated Jun 9, 2021 - JavaScript
- Updated Sep 29, 2021 - Python
- Updated Sep 8, 2021 - Python
Usually, after training a model, I save it in C++ format with:
cat_model.save_model('a', format="cpp")
cat_model.save_model('b', format="cpp")
But my C++ program needs to use multiple models. In my main.cpp:
#include "a.hpp"
#include "b.hpp"
int main() {
    // do something
    double a_pv = ApplyCatboostModel({1.2, 2.3}); // I want a.hpp's model here
    double b_pv = ApplyCatboostModel({1.2, 2.3}); // ...and b.hpp's model here, but both headers define the same function name
}
- Updated Sep 28, 2021 - C++
- Updated Jun 10, 2021 - Python
Hi,
I have tried both loss.backward() and model_engine.backward(loss) in my code, and I have observed several subtle differences. For one, retain_graph=True does not work with model_engine.backward(loss). This is a problem for me, since buffers are not being retained across backward passes.
Please look into this if you can.
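For context, in plain PyTorch autograd, backward() frees the computation graph unless retain_graph=True is passed — this is the behavior the report says is missing from model_engine.backward(loss). A minimal illustration (assumes torch is installed):

```python
import torch

x = torch.ones(2, requires_grad=True)
loss = (x * 2).sum()
loss.backward(retain_graph=True)  # graph kept alive for a second pass
loss.backward()                   # would raise without retain_graph above
# gradients accumulate: d(2x)/dx = 2 per pass, so x.grad is [4., 4.]
```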
- Updated Sep 29, 2021 - Jupyter Notebook
The .pcd file format allows fields to be extended, which means it can neatly hold per-point data such as a label or object ID. This is very handy for ML tasks. However, the Open3D file I/O does not appear to read fields other than xyz, rgb, normals, etc. I haven't been able to find where in the Open3D code base the PCD file loading is implemented.
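To illustrate what the extended fields look like, here is a minimal sketch that reads the FIELDS line of an ASCII .pcd header. read_pcd_fields is a hypothetical helper, not Open3D API; the sample declares an extra "label" field alongside x/y/z:

```python
import os
import tempfile

SAMPLE_PCD = """\
# .PCD v0.7 - Point Cloud Data file format
VERSION 0.7
FIELDS x y z label
SIZE 4 4 4 4
TYPE F F F U
COUNT 1 1 1 1
WIDTH 1
HEIGHT 1
POINTS 1
DATA ascii
0.0 0.0 0.0 7
"""

def read_pcd_fields(path):
    # Return the names declared on the FIELDS line; extra per-point
    # attributes such as "label" appear here alongside the usual x/y/z.
    with open(path) as f:
        for line in f:
            if line.startswith("FIELDS"):
                return line.split()[1:]
    return []

path = os.path.join(tempfile.mkdtemp(), "sample.pcd")
with open(path, "w") as f:
    f.write(SAMPLE_PCD)
fields = read_pcd_fields(path)
```

A reader that honors FIELDS generically, rather than a fixed whitelist, would pick up such label data automatically.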
- Updated Sep 11, 2021 - Python
Our users are often confused when the output from programs such as zip2john is very large (multi-gigabyte). Maybe we should identify such programs and enhance them to print a message to stderr explaining that very large output is normal, either always or only when the output size exceeds a threshold (e.g., 1 million bytes?).
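A minimal sketch of the suggested behavior; the threshold value, function name, and message wording are all placeholders:

```python
import sys

LARGE_OUTPUT_THRESHOLD = 1_000_000  # bytes; example value from the discussion

def note_if_large(output_size, threshold=LARGE_OUTPUT_THRESHOLD):
    # Emit a one-line explanation to stderr when the produced "hash"
    # is unusually big, so users know nothing went wrong.
    if output_size > threshold:
        print("Note: very large output is normal for this input format.",
              file=sys.stderr)
        return True
    return False
```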
-
I want to preemptively start this thread to survey for suggestions. A cursory search led me to this promising repository: https://github.com/enigo-rs/enigo
Since closing the window is a common point of failure, that will be the focus of the first pass of testing as I learn how to use the library.
Components for testing:
- bridge
- editor
- renderer
- settings
- window
- Updated Sep 29, 2021 - C++
Is your feature request related to a problem? Please describe.
The row_equality_comparator constructor currently accepts a bool to indicate whether nulls are considered equal. In libcudf, we prefer to use scoped enums instead of bools, and we already have a suitable scoped enum for null equality.
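The style preference can be illustrated in plain Python. NullEquality and compare_values are hypothetical names that mirror the idea of replacing an opaque bool with a named enum:

```python
from enum import Enum

class NullEquality(Enum):
    # Mirrors a C++ scoped enum: the call site reads
    # NullEquality.EQUAL instead of an opaque `true`.
    EQUAL = 0
    UNEQUAL = 1

def compare_values(a, b, nulls=NullEquality.UNEQUAL):
    # None stands in for a null value in this sketch.
    if a is None and b is None:
        return nulls is NullEquality.EQUAL
    if a is None or b is None:
        return False
    return a == b
```

The enum version is self-documenting at the call site and leaves room for future variants without another bool parameter.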
- Updated Apr 24, 2020 - Jsonnet
- Updated Sep 27, 2021 - Jupyter Notebook
- Updated Jun 13, 2020 - HTML
The current implementation of join can be improved by performing the operation in a single call to the backend kernel instead of multiple calls.
This is a fairly easy kernel and may be a good issue for someone getting to know CUDA/ArrayFire internals. Ping me if you want additional info.
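The idea, in plain terms: repeated pairwise concatenation recopies earlier results, while a single-call join copies each input exactly once. A sketch with Python lists standing in for device arrays (join_all is a hypothetical name, not ArrayFire API):

```python
def join_all(arrays):
    # Preallocate the output, then copy each input exactly once,
    # instead of launching one concatenation per pair of inputs
    # (which recopies earlier results on every call).
    total = sum(len(a) for a in arrays)
    out = [None] * total
    pos = 0
    for a in arrays:
        out[pos:pos + len(a)] = a
        pos += len(a)
    return out
```

On a GPU backend, the analogous single kernel would compute each output index's source array and offset, avoiding both the extra kernel launches and the redundant copies.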
When the build fails, sccache stats are not relevant. They are also not relevant at any point during testing (or perhaps should not even be used there).
cc @ezyang @seemethere @malfet @pytorch/pytorch-dev-infra