hyperparameter-optimization
While trying to speed up my single-shot detector, the following error comes up. Is there any way to fix it?
/usr/local/lib/python3.8/dist-packages/nni/compression/pytorch/speedup/jit_translate.py in forward(self, *args)
363
364 def forward(self, *
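For context, a minimal sketch of how the pruning speedup step is typically invoked in NNI; the tiny placeholder network, the 300x300 input shape, and the mask-file path are assumptions standing in for the reporter's single-shot detector:

```python
import torch
import torch.nn as nn
from nni.compression.pytorch import ModelSpeedup

# Placeholder network standing in for the pruned single-shot detector.
model = nn.Sequential(
    nn.Conv2d(3, 16, 3, padding=1),
    nn.ReLU(),
    nn.Conv2d(16, 8, 3, padding=1),
)
dummy_input = torch.rand(1, 3, 300, 300)  # assumed SSD-style input size

# 'masks.pth' is the mask file exported by the pruner (path is a placeholder).
ModelSpeedup(model, dummy_input, 'masks.pth').speedup_model()
```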
Motivation
optuna.testing.integration.create_running_trial creates an invalid trial which is RUNNING but has a value.
Description
We can use study.ask to create a trial. create_running_trial is no longer needed.
Alternatives (optional)
No response
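A minimal sketch of the suggested replacement; the parameter and objective value below are placeholders for illustration, not part of the proposal:

```python
import optuna

# study.ask() creates a genuine trial in the RUNNING state, which is what the
# create_running_trial helper used to fake.
study = optuna.create_study()
trial = study.ask()                       # RUNNING trial, no value yet
x = trial.suggest_float("x", -10, 10)     # placeholder parameter
study.tell(trial, (x - 2) ** 2)           # report a value to finish the trial
```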
We currently fit and predict upon loading autosklearn.experimental.askl2 for the first time. In environments with a non-persistent filesystem (autosklearn is installed into a new filesystem each time), this can add quite a bit of delay, as experienced in #1362. It seems more applicable to export the
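For illustration, the behaviour described above can be observed simply by timing the first import; the import path is the real one, the timing harness is just a sketch:

```python
import time

start = time.time()
# On the first import, the ASKL2 module fits/predicts to build its selector,
# which is the one-time cost described above.
from autosklearn.experimental.askl2 import AutoSklearn2Classifier  # noqa: F401

print(f"first import of askl2 took {time.time() - start:.1f}s")
```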
Related: awslabs/autogluon#1479
Add a scikit-learn compatible API wrapper of TabularPredictor:
- TabularClassifier
- TabularRegressor
Required functionality (may need more than listed; a rough sketch follows this list):
- init API
- fit API
- predict API
- works in sklearn pipelines
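A hypothetical sketch of what the classifier half could look like; the class name matches the request, but the constructor parameters and the internal label column are assumptions, not existing AutoGluon API:

```python
import pandas as pd
from sklearn.base import BaseEstimator, ClassifierMixin
from autogluon.tabular import TabularPredictor


class TabularClassifier(BaseEstimator, ClassifierMixin):
    """scikit-learn style wrapper around TabularPredictor (illustrative only)."""

    def __init__(self, time_limit=None, presets=None):
        self.time_limit = time_limit
        self.presets = presets

    def fit(self, X, y):
        train = pd.DataFrame(X).copy()
        train["__label__"] = y  # assumed internal label column name
        self.predictor_ = TabularPredictor(label="__label__").fit(
            train, time_limit=self.time_limit, presets=self.presets
        )
        return self

    def predict(self, X):
        return self.predictor_.predict(pd.DataFrame(X)).to_numpy()
```

Because it subclasses BaseEstimator, a wrapper along these lines could drop straight into sklearn pipelines and model-selection utilities.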
When using r2 as the eval metric for a regression task (with 'Explain' mode), the metric values reported in the Leaderboard (in the README.md file) are multiplied by -1.
For instance, the metric value for some model shown in the Leaderboard is -0.41, while clicking the model name leads to the detailed results page, where the value of r2 is 0.41.
I've noticed that when one of the R2 metric values in the L
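For illustration only, a common convention that would produce exactly this discrepancy (whether it is the actual cause in mljar-supervised is an assumption): an optimizer that always minimizes stores maximize-type metrics such as R2 with the sign flipped, and a leaderboard that prints the stored value rather than the original one shows the negated number.

```python
from sklearn.metrics import r2_score

y_true = [3.0, 1.5, 2.0, 7.0]
y_pred = [2.6, 1.9, 2.4, 6.1]

r2 = r2_score(y_true, y_pred)  # the value a detail page would report
stored = -r2                   # negated so that "smaller is better" internally
print(r2, stored)              # printing `stored` in a leaderboard flips the sign
```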
I published a new v0.1.12 release of HCrystalBall, which updates some package dependencies and fixes some bugs in cross-validation.
Should the original pin for 0.1.10 be updated? Unfortunately I won't have time soon to submit a PR for this.
If enter_data() is called with the same train_path twice in a row and the data itself hasn't changed, a new Dataset does not need to be created.
We should add a column which stores some kind of hash of the actual data. When a Dataset would be created, if the metadata and data hash are exactly the same as an existing Dataset, nothing should be added to the ModelHub database and the existing
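A minimal sketch of the proposed check, assuming a SQLAlchemy-style session and a Dataset model with a new data_hash column; the function names and fields are illustrative, not the existing ATM schema:

```python
import hashlib


def file_hash(path, chunk_size=1 << 20):
    """SHA-256 digest of a file's contents, read in chunks."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()


def get_or_create_dataset(session, Dataset, train_path, **metadata):
    """Reuse an existing Dataset when metadata and data hash both match."""
    data_hash = file_hash(train_path)
    existing = (
        session.query(Dataset)
        .filter_by(train_path=train_path, data_hash=data_hash, **metadata)
        .first()
    )
    if existing is not None:
        return existing  # nothing new is written to the ModelHub database
    dataset = Dataset(train_path=train_path, data_hash=data_hash, **metadata)
    session.add(dataset)
    return dataset
```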
Describe the bug
The code could conform more closely to PEP 8 and similar conventions.
Expected behavior
Less code st
Description
There are multiple user requests for feeding GraphNN data (node and edge lists) as sample batches into a custom RLlib model.
https://discuss.ray.io/t/rllib-variable-length-observation-spaces-without-padding/726
https://discuss.ray.io/t/working-with-graph-neural-networks-varying-state-space/5730/2
The recommended method today is to use the Repeated observation space and VariableVal
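A minimal sketch of the Repeated-space approach mentioned above; the feature dimensions and maximum lengths are placeholders:

```python
import numpy as np
from gym.spaces import Box, Dict
from ray.rllib.utils.spaces.repeated import Repeated

MAX_NODES, MAX_EDGES = 50, 200

observation_space = Dict({
    # up to MAX_NODES nodes, each with an 8-dim feature vector (placeholder size)
    "nodes": Repeated(Box(-np.inf, np.inf, shape=(8,)), max_len=MAX_NODES),
    # up to MAX_EDGES edges, each a (src, dst) node-index pair
    "edges": Repeated(Box(0, MAX_NODES, shape=(2,)), max_len=MAX_EDGES),
})
```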