hyperparameter-optimization
Here are 567 public repositories matching this topic...
While trying to speed up my single-shot detector, the following error comes up. Is there any way to fix this?

```
/usr/local/lib/python3.8/dist-packages/nni/compression/pytorch/speedup/jit_translate.py in forward(self, *args)
    363
    364     def forward(self, *
```
Motivation

We can reduce the number of calls to storage.get_best_trial.

Description

In _log_completed_trial, best_trial is called twice: https://github.com/optuna/optuna/blob/a07a36e124d6523677d718819cad61628e8621e7/optuna/study/study.py#L1051-L1052

Alternatives (optional)

No response

Additional context (optional)

No response
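The proposed change can be sketched as follows. The `Study` class below is a minimal stand-in with a call counter, not optuna's actual implementation; only the "fetch once, reuse locally" pattern is the point.

```python
# Minimal stand-in for a storage-backed study, assuming each access to
# best_trial performs one storage.get_best_trial round-trip (as in the
# linked optuna code). Not optuna's real classes.

class Study:
    def __init__(self):
        self.get_best_trial_calls = 0  # counts simulated storage round-trips
        self._best_value = 0.42

    @property
    def best_trial(self):
        # Each access hits storage in the real code.
        self.get_best_trial_calls += 1
        return {"value": self._best_value, "number": 7}


def log_completed_trial_before(study):
    # Current shape: best_trial accessed twice -> two storage calls.
    return (study.best_trial["value"], study.best_trial["number"])


def log_completed_trial_after(study):
    # Proposed shape: fetch once, then reuse the local reference.
    best = study.best_trial
    return (best["value"], best["number"])
```

With this stand-in, the "before" version performs two lookups and the "after" version performs one, which is the whole motivation of the issue.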
We currently fit and predict upon loading autosklearn.experimental.askl2 for the first time. In environments with a non-persistent filesystem (auto-sklearn is installed into a new filesystem each time), this can add quite a bit of time delay, as experienced in #1362. It seems more applicable to export the
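The suggestion is cut off, but the gist appears to be caching the fitted artifact instead of refitting on every import. A hedged sketch of that idea; `load_or_fit` and the cache path are hypothetical, not auto-sklearn's API:

```python
import os
import pickle

# Hypothetical helper: persist a fitted object so later loads reuse it
# instead of refitting. The function name and cache layout are
# illustrative, not auto-sklearn's actual mechanism.
def load_or_fit(cache_path, fit_fn):
    if os.path.exists(cache_path):
        with open(cache_path, "rb") as f:
            return pickle.load(f)   # reuse the previously exported artifact
    model = fit_fn()                 # the expensive fit happens only once
    with open(cache_path, "wb") as f:
        pickle.dump(model, f)
    return model
```

On a non-persistent filesystem the cache file would have to live on (or be shipped with) persistent storage for this to help, which is presumably what "export" refers to.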
Related: awslabs/autogluon#1479

Add a scikit-learn compatible API wrapper of TabularPredictor:

- TabularClassifier
- TabularRegressor

Required functionality (may need more than listed):

- init API
- fit API
- predict API
- works in sklearn pipelines
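A minimal sketch of what such a wrapper could look like, assuming only the standard scikit-learn estimator contract. `LogisticRegression` stands in for `TabularPredictor` so the example runs without autogluon installed; the `time_limit` parameter is an illustrative AutoGluon-style knob, not a confirmed signature.

```python
from sklearn.base import BaseEstimator, ClassifierMixin
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

class TabularClassifier(BaseEstimator, ClassifierMixin):
    """Sketch of a sklearn-compatible wrapper around TabularPredictor."""

    def __init__(self, time_limit=None):
        # sklearn contract: __init__ only stores params, no validation/fitting.
        self.time_limit = time_limit

    def fit(self, X, y):
        # Stand-in for TabularPredictor.fit; a real wrapper would convert
        # X/y to the DataFrame format TabularPredictor expects.
        self._model = LogisticRegression().fit(X, y)
        self.classes_ = self._model.classes_
        return self

    def predict(self, X):
        return self._model.predict(X)

# Because it follows the estimator contract, it composes in a Pipeline:
pipe = make_pipeline(StandardScaler(), TabularClassifier())
```

A `TabularRegressor` would mirror this with `RegressorMixin` and a regression stand-in.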
When using r2 as the eval metric for a regression task (with 'Explain' mode), the metric values reported in the Leaderboard (in the README.md file) are multiplied by -1. For instance, the metric value for some model shown in the Leaderboard is -0.41, while clicking the model name leads to the detailed results page, where the value of r2 is 0.41. I've noticed that when one of the R2 metric values in the L
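A plausible explanation, offered as an assumption rather than a confirmed diagnosis: frameworks that minimize internally negate greater-is-better metrics such as r2, and the negated internal value can leak into a report. scikit-learn's `make_scorer` demonstrates the same sign convention:

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import make_scorer, r2_score

# Near-linear toy data so r2 is clearly positive.
X = np.array([[1.0], [2.0], [3.0], [4.0]])
y = np.array([1.1, 1.9, 3.2, 3.9])
model = LinearRegression().fit(X, y)

raw = r2_score(y, model.predict(X))  # the value a detail page would show

# greater_is_better=False makes the scorer return the negated metric --
# the same sign flip as the Leaderboard vs. detail-page mismatch.
neg = make_scorer(r2_score, greater_is_better=False)(model, X, y)
assert abs(raw + neg) < 1e-12
```

If this is what happens in mljar-supervised, the fix is to undo the negation before writing the Leaderboard, not to change the optimization direction.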
That is a good suggestion. Another option is to have a keyword argument on fit which is a dictionary mapping estimator to kwargs, to eliminate any potential for unnamed kwargs.

Originally posted by @camer314 in microsoft/FLAML#451 (comment)
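The suggested dictionary-of-kwargs argument could look roughly like this. All names here (`fit`, `estimator_kwargs`, the estimator keys, `default_lr`) are illustrative, not FLAML's actual API:

```python
# Hypothetical sketch: fit accepts a dict mapping estimator name -> kwargs,
# so per-estimator settings are explicitly addressed instead of being passed
# as ambiguous unnamed keyword arguments.
def fit(X, y, estimator_list=("lgbm", "xgboost"), estimator_kwargs=None):
    estimator_kwargs = estimator_kwargs or {}
    configured = {}
    for name in estimator_list:
        # Each estimator only sees the kwargs addressed to it.
        configured[name] = {"default_lr": 0.1, **estimator_kwargs.get(name, {})}
    return configured

configs = fit(None, None, estimator_kwargs={"lgbm": {"n_estimators": 50}})
```

The design advantage over flat kwargs is that a misspelled or misdirected option fails loudly per estimator rather than being silently swallowed by `**kwargs`.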
If enter_data() is called with the same train_path twice in a row and the data itself hasn't changed, a new Dataset does not need to be created. We should add a column which stores some kind of hash of the actual data. When a Dataset would be created, if the metadata and data hash are exactly the same as an existing Dataset, nothing should be added to the ModelHub database and the existing
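The proposed hash check could be sketched like this. `enter_data` and the in-memory registry below are simplified stand-ins for the real function and the ModelHub database:

```python
import hashlib

def data_hash(path):
    # Hash the raw bytes of the training file; any change to the data
    # changes the digest.
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()

registry = {}  # (train_path, hash) -> Dataset id; stands in for ModelHub

def enter_data(train_path):
    key = (train_path, data_hash(train_path))
    if key in registry:
        return registry[key]  # unchanged data: reuse the existing Dataset
    registry[key] = f"dataset-{len(registry)}"
    return registry[key]
```

Hashing file bytes is the simplest variant; a real implementation might hash the parsed data instead so that irrelevant byte differences (line endings, column order) don't defeat the dedup check.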
Describe the bug

The code could conform more closely to PEP 8 and so forth.

Expected behavior

Less code st