# gpu

Here are 2,840 public repositories matching this topic...

j4qfrost commented Apr 2, 2020

I want to preemptively start this thread to survey for suggestions. A cursory search led me to this promising repository: https://github.com/enigo-rs/enigo

Since closing the window is a common point of failure, that will be the focus for the first pass of testing as I learn how to use the library.

Components for testing:

  • bridge
  • editor
  • renderer
  • settings
  • wind
enhancement help wanted good first issue
rsn870 commented Aug 21, 2020

Hi,

I have tried out both loss.backward() and model_engine.backward(loss) for my code. There are several subtle differences that I have observed; for one, retain_graph = True does not work for model_engine.backward(loss). This is creating a problem, since buffers are not being retained every time I run the code for some reason.

Please look into this if you could.

enhancement good first issue
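For reference, plain PyTorch does honor retain_graph across repeated backward calls; model_engine.backward is DeepSpeed's wrapper, and the report above is that it does not. A minimal sketch of the baseline PyTorch behavior (assumes PyTorch is installed; the tensors are illustrative):

```python
import torch

# A tiny graph: y = sum(2 * x).
x = torch.ones(2, requires_grad=True)
y = (x * 2).sum()

# retain_graph=True keeps the autograd graph alive after the first pass,
# so a second backward over the same graph succeeds and gradients accumulate.
y.backward(retain_graph=True)
y.backward()

print(x.grad)  # each backward adds 2 per element, so 4.0 each
```

This is what the issue expects model_engine.backward(loss) to support as well.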
fingoldo commented Mar 24, 2022

Problem:

_catboost.pyx in _catboost._set_features_order_data_pd_data_frame()

_catboost.pyx in _catboost.get_cat_factor_bytes_representation()

CatBoostError: Invalid type for cat_feature[non-default value idx=1,feature_idx=336]=2.0 : cat_features must be integer or string, real number values and NaN values should be converted to string.

Could you also print a feature name, not o
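The error message itself points at the workaround: CatBoost accepts only integer or string values in categorical columns, so real numbers and NaN must be stringified first. A hypothetical sketch (the column name and data are illustrative, not from the report):

```python
import pandas as pd

# A categorical column that arrived as floats with a missing value,
# which is exactly the shape of data the CatBoostError above rejects.
df = pd.DataFrame({"city_code": [2.0, 7.0, float("nan")]})

# Convert every value (including NaN) to a string before passing the
# column to CatBoost as a cat_feature.
df["city_code"] = df["city_code"].apply(str)

print(df["city_code"].tolist())  # ['2.0', '7.0', 'nan']
```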

solardiz commented Jul 19, 2019

Our users are often confused when the output from programs such as zip2john is very large (multi-gigabyte). Maybe we should identify and enhance these programs to print a message to stderr explaining that very large output is normal - either always, or only when the output size exceeds a threshold (e.g., 1 million bytes?).
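A hypothetical sketch of the proposed stderr notice, using the 1,000,000-byte threshold floated above; the function and constant names are illustrative, not John the Ripper's actual code:

```python
import sys

# Example threshold from the discussion: warn above ~1 MB of output.
LARGE_OUTPUT_THRESHOLD = 1_000_000

def warn_if_large(output_size: int) -> bool:
    """Print a one-line explanation to stderr when output is unusually large.

    Returns True if the warning was emitted, so callers (and tests) can
    check the behavior without capturing stderr.
    """
    if output_size <= LARGE_OUTPUT_THRESHOLD:
        return False
    print(
        "Note: very large output is normal for this input format.",
        file=sys.stderr,
    )
    return True

warn_if_large(2_000_000_000)  # multi-gigabyte case: warns
warn_if_large(512)            # small output: silent
```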

H2O is an Open Source, Distributed, Fast & Scalable Machine Learning Platform: Deep Learning, Gradient Boosting (GBM) & XGBoost, Random Forest, Generalized Linear Modeling (GLM with Elastic Net), K-Means, PCA, Generalized Additive Models (GAM), RuleFit, Support Vector Machine (SVM), Stacked Ensembles, Automatic Machine Learning (AutoML), etc.

  • Updated May 11, 2022
  • Jupyter Notebook
ngupta23 commented Apr 16, 2022

Is your feature request related to a problem? Please describe.
In the time series plotting module, a lot of plots are customized at the end (template, fig size, etc.). Since the same code is repeated in all these plots, it could be modularized and reused.

with fig.batch_update():
    template = _resolve_dict_keys(
        dict_=fig_kwargs, key="template", defaults=fig_default
    )
enhancement good first issue time_series plots
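A hypothetical sketch of the shared helper the snippet above calls; the name mirrors the snippet, but this body is an assumption, not pycaret's actual implementation:

```python
# Resolve a per-figure option: prefer the user-supplied value in dict_,
# fall back to the library default otherwise.
def _resolve_dict_keys(dict_, key, defaults):
    """Return dict_[key] if present, else defaults.get(key)."""
    return dict_.get(key, defaults.get(key))

# Illustrative values, matching the names used in the snippet above.
fig_kwargs = {"template": "plotly_dark"}
fig_default = {"template": "plotly_white", "renderer": "browser"}

print(_resolve_dict_keys(dict_=fig_kwargs, key="template", defaults=fig_default))
# -> plotly_dark (user value wins)
print(_resolve_dict_keys(dict_=fig_kwargs, key="renderer", defaults=fig_default))
# -> browser (falls back to the default)
```

Centralizing the template/size/renderer resolution in one helper like this is the modularization the request describes.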
VibhuJawa commented May 5, 2022

Describe the bug
Series.unique() returns a cudf.Series, while for pandas it returns a numpy.ndarray.

Steps/Code to reproduce bug

In [1]: import cudf
In [2]: import pandas as pd

In [3]: type(pd.Series([1,1]).unique())
Out[3]: numpy.ndarray

In [4]: type(cudf.Series([1,1]).unique())
Out[4]: cudf.core.series.Series

Expected behavior
I would exp

bug good first issue cuDF (Python)
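The pandas half of the comparison above is runnable without a GPU, and shows the behavior the report expects cuDF to match:

```python
import numpy as np
import pandas as pd

# pandas Series.unique() returns a NumPy array of the distinct values.
u = pd.Series([1, 1]).unique()

print(type(u))      # <class 'numpy.ndarray'>
print(u.tolist())   # [1]
```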
wgpu
kpreid commented Mar 21, 2022

Description
I'm trying to port an existing application using GLSL to wgpu, so I have existing complex shaders I want to modify to be compatible. While trying to get them working, I have found that if the shader has (something which naga considers) a syntax error, wgpu will panic via .unwrap():

https://github.com/gfx-rs/wgpu/blob/326af60df8623e93b47a0de090e6cb449c8507f5/wgpu/src/bac

type: bug help wanted good first issue area: validation
