
hyperparameter-optimization

Here are 567 public repositories matching this topic...

architkulkarni
architkulkarni commented Mar 24, 2022

Here's the reproduction:

import os
import tempfile
from pathlib import Path
from ray._private.runtime_env.packaging import _zip_directory
from zipfile import ZipFile

with tempfile.TemporaryDirectory() as tmp_dir:
    # Prepare test directory
    path = Path(tmp_dir)
    subdir = path / "subdir"
    subdir.mkdir(parents=True)
    file1 = subdir / "file1.txt"
    with file
bug good first issue P2
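The reproduction above is cut off in the preview. As a Ray-free sketch of the same setup, the snippet below builds a temp directory with a nested file, zips it with the standard library, and inspects the archive entries; it only mirrors what `_zip_directory` is expected to do and is not Ray's implementation.

```python
import tempfile
from pathlib import Path
from zipfile import ZipFile

with tempfile.TemporaryDirectory() as tmp_dir:
    # Prepare a test directory with one nested file.
    path = Path(tmp_dir)
    subdir = path / "subdir"
    subdir.mkdir(parents=True)
    (subdir / "file1.txt").write_text("hello")

    zip_path = path / "archive.zip"
    with ZipFile(zip_path, "w") as zf:
        # Store entries relative to tmp_dir so the archive is portable.
        for f in path.rglob("*"):
            if f.is_file() and f != zip_path:
                zf.write(f, f.relative_to(path))

    with ZipFile(zip_path) as zf:
        names = zf.namelist()

print(names)  # ['subdir/file1.txt']
```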
nni
shenoynikhil98
shenoynikhil98 commented Mar 23, 2022

https://github.com/microsoft/nni/blob/8d5f643c64580bb26a7b10a3c4c9accf617f65b1/nni/compression/pytorch/speedup/jit_translate.py#L382

While trying to speed up my single-shot detector, the following error comes up. Is there any way to fix this?

/usr/local/lib/python3.8/dist-packages/nni/compression/pytorch/speedup/jit_translate.py in forward(self, *args)
    363 
    364         def forward(self, *
not522
not522 commented Mar 24, 2022

Motivation

We can reduce the number of calls to storage.get_best_trial.

Description

In _log_completed_trial, the best_trial property is accessed twice, so storage.get_best_trial is called twice. https://github.com/optuna/optuna/blob/a07a36e124d6523677d718819cad61628e8621e7/optuna/study/study.py#L1051-L1052

Alternatives (optional)

No response

Additional context (optional)

No response

code-fix contribution-welcome good first issue
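The fix suggested by the issue can be sketched with simplified stand-ins for Optuna's classes (the `Storage` and `Study` classes below are illustrative, not Optuna's real API): read the property once into a local variable instead of accessing it twice.

```python
class Storage:
    """Stand-in for an Optuna storage backend; counts lookups."""
    def __init__(self):
        self.calls = 0

    def get_best_trial(self):
        self.calls += 1
        return {"number": 7, "value": 0.42}


class Study:
    def __init__(self, storage):
        self._storage = storage

    @property
    def best_trial(self):
        # Each property access goes back to storage.
        return self._storage.get_best_trial()

    def _log_completed_trial(self):
        # Read best_trial once and reuse the result, instead of
        # accessing the property twice while building the message.
        best = self.best_trial
        return f"Trial {best['number']} finished, best value: {best['value']}"


storage = Storage()
msg = Study(storage)._log_completed_trial()
print(storage.calls)  # one storage call instead of two
```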
mljar-supervised
moshe-rl
moshe-rl commented Nov 30, 2021

When using r2 as the eval metric for a regression task (with 'Explain' mode), the metric values reported in the Leaderboard (in the README.md file) are multiplied by -1.
For instance, the metric value for some model shown in the Leaderboard is -0.41, while clicking the model name leads to the detailed results page, where the value of r2 is 0.41.
I've noticed that when one of R2 metric values in the L

bug help wanted good first issue
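A plausible cause (an assumption, not confirmed from mljar's source): frameworks that drive a single minimizing optimizer often negate "higher is better" metrics such as R2 internally, and the negated internal value can leak into a report unless the sign is flipped back for display. A minimal sketch:

```python
def to_minimized(name, value):
    """Negate maximization metrics so one optimizer can always minimize.
    The set of metric names here is illustrative."""
    maximized = {"r2", "auc", "accuracy"}
    return -value if name in maximized else value


internal = to_minimized("r2", 0.41)   # optimizer sees -0.41
buggy_leaderboard_value = internal    # printed as-is: -0.41 (the bug)
detailed_value = -internal            # sign flipped back for display: 0.41
print(buggy_leaderboard_value, detailed_value)
```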

Notes, programming assignments and quizzes from all courses within the Coursera Deep Learning specialization offered by deeplearning.ai: (i) Neural Networks and Deep Learning; (ii) Improving Deep Neural Networks: Hyperparameter tuning, Regularization and Optimization; (iii) Structuring Machine Learning Projects; (iv) Convolutional Neural Networks; (v) Sequence Models

  • Updated Feb 7, 2022
  • Jupyter Notebook
Gradient-Free-Optimizers

A list of high-quality (newest) AutoML works and lightweight models including 1.) Neural Architecture Search, 2.) Lightweight Structures, 3.) Model Compression, Quantization and Acceleration, 4.) Hyperparameter Optimization, 5.) Automated Feature Engineering.

  • Updated Jun 19, 2021
bcyphers
bcyphers commented Jan 31, 2018

If enter_data() is called with the same train_path twice in a row and the data itself hasn't changed, a new Dataset does not need to be created.

We should add a column that stores a hash of the actual data. When a Dataset would be created, if the metadata and data hash exactly match an existing Dataset, nothing should be added to the ModelHub database and the existing
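The proposal above can be sketched as follows. `datasets` stands in for the ModelHub table and the function names are illustrative, not ATM's actual API: hash the training file, and reuse the existing record when both the path and the hash match.

```python
import hashlib
import os
import tempfile


def file_hash(path):
    """SHA-256 of a file's contents, read in chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()


def enter_data(train_path, datasets):
    """Create a Dataset record unless an identical one already exists."""
    digest = file_hash(train_path)
    for ds in datasets:
        if ds["train_path"] == train_path and ds["data_hash"] == digest:
            return ds  # reuse the existing Dataset, add nothing
    ds = {"train_path": train_path, "data_hash": digest}
    datasets.append(ds)
    return ds


# Demo: calling enter_data twice with unchanged data adds one record.
datasets = []
with tempfile.NamedTemporaryFile("w", suffix=".csv", delete=False) as tmp:
    tmp.write("a,b\n1,2\n")
    train_path = tmp.name
enter_data(train_path, datasets)
enter_data(train_path, datasets)
os.unlink(train_path)
print(len(datasets))  # 1
```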

Neuraxle
