hyperparameter-optimization
Here are 339 public repositories matching this topic...
What would you like to be added: As title
Why is this needed: All pruning schedules except AGPPruner support only the level, L1, and L2 pruners, while FPGM, APoZ, MeanActivation, and Taylor also exist. It would be much better if we could choose any pruner with any pruning schedule.
Without this feature, how does current nni work:
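To make the request concrete, here is a toy sketch of the decoupling being asked for; agp_sparsity and l1_mask are hypothetical illustrations of a schedule and a criterion, not NNI's actual API. Because the schedule only emits a target sparsity and the criterion only consumes one, any criterion (L1, FPGM, APoZ, ...) could be paired with any schedule.

import torch

def agp_sparsity(step, total_steps, final_sparsity):
    # AGP-style schedule: sparsity grows from 0 toward final_sparsity cubically.
    t = min(step / total_steps, 1.0)
    return final_sparsity * (1 - (1 - t) ** 3)

def l1_mask(weight, sparsity):
    # L1 criterion: zero out the smallest-magnitude fraction of weights.
    k = int(weight.numel() * sparsity)
    if k == 0:
        return torch.ones_like(weight)
    threshold = weight.abs().flatten().kthvalue(k).values
    return (weight.abs() > threshold).float()

weight = torch.randn(64, 64)
for step in range(100):
    # Swapping l1_mask for any other criterion leaves the schedule untouched.
    weight = weight * l1_mask(weight, agp_sparsity(step, 100, 0.8))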
Motivation
Judging from the following resources, the search space in examples/xgboost_simple.py does not seem practical:
- https://www.analyticsvidhya.com/blog/2016/03/complete-guide-parameter-tuning-xgboost-with-codes-python/
- https://www.youtube.com/watch?v=VC8Jc9_lNoY
- https://www.amazon.co.jp/dp/B07YTDBC3Z/
Description
Improve the search space of examples/xgboost_simple.py.
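As a sketch of what a more practical space might look like, here is an Optuna objective over a handful of the XGBoost parameters such resources emphasize; the ranges are illustrative assumptions, not the ones the example ships with.

import optuna
import xgboost as xgb
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import cross_val_score

def objective(trial):
    # Illustrative ranges only; tune them for your own data.
    params = {
        "max_depth": trial.suggest_int("max_depth", 3, 10),
        "learning_rate": trial.suggest_float("learning_rate", 1e-3, 0.3, log=True),
        "subsample": trial.suggest_float("subsample", 0.5, 1.0),
        "colsample_bytree": trial.suggest_float("colsample_bytree", 0.5, 1.0),
        "min_child_weight": trial.suggest_int("min_child_weight", 1, 10),
        "reg_lambda": trial.suggest_float("reg_lambda", 1e-3, 10.0, log=True),
    }
    X, y = load_breast_cancer(return_X_y=True)
    clf = xgb.XGBClassifier(n_estimators=200, **params)
    return cross_val_score(clf, X, y, cv=3).mean()

study = optuna.create_study(direction="maximize")
study.optimize(objective, n_trials=50)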
There can be a situation where all features are dropped during feature selection. This needs to be handled, perhaps by throwing an exception or raising a warning.
Code to reproduce:
import numpy as np
from supervised import AutoML

# Pure-noise features and random labels, so feature selection can
# legitimately drop every column.
X = np.random.uniform(size=(1000, 31))
y = np.random.randint(0, 2, size=(1000,))

automl = AutoML(
    algorithms=["CatBoost", "Xgboost", "LightGBM"],
    # the original report is truncated here ("model_t..."); remaining arguments omitted
)
automl.fit(X, y)  # fit() added so the snippet actually exercises feature selection
If enter_data() is called with the same train_path twice in a row and the data itself hasn't changed, a new Dataset does not need to be created. We should add a column which stores some kind of hash of the actual data. When a Dataset would be created, if the metadata and data hash are exactly the same as an existing Dataset, nothing should be added to the ModelHub database and the existing Dataset should be reused.
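A minimal sketch of that deduplication, assuming hypothetical find_dataset/create_dataset helpers on the database layer; only hashlib is real here, the rest names the idea rather than ATM's actual schema.

import hashlib

def data_hash(train_path, chunk_size=1 << 20):
    # Hash the raw file bytes so unchanged data always maps to the same key.
    h = hashlib.sha256()
    with open(train_path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

def get_or_create_dataset(db, train_path, metadata):
    # Hypothetical helper: reuse the existing Dataset row when both the
    # metadata and the content hash match, instead of inserting a duplicate.
    digest = data_hash(train_path)
    existing = db.find_dataset(metadata=metadata, data_hash=digest)  # assumed query API
    if existing is not None:
        return existing
    return db.create_dataset(metadata=metadata, data_hash=digest)  # assumed insert API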
I often run into issues by accidentally starting a new cluster when one is already running. This later causes problems when I try to connect and there are two running clusters. I'm then forced to ray stop both clusters and ray start my new one again. My workflow would be improved if I just got an error when trying to start the second cluster, so I knew to immediately tear down the existing one.
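In the meantime, a guard like the following sketch can be scripted around the CLI; it assumes ray.init(address="auto") raises ConnectionError when no local cluster is found, which matches common Ray behavior but should be verified against your version.

import subprocess
import ray

def start_cluster_if_absent():
    # Probe for a running local cluster before starting a new head node.
    try:
        ray.init(address="auto")
    except ConnectionError:
        # Nothing running: safe to start a fresh head node.
        subprocess.run(["ray", "start", "--head"], check=True)
        return
    ray.shutdown()
    raise RuntimeError("A Ray cluster is already running; `ray stop` it first.")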