Hyperparameter optimization for a PyTorch model

Solution 1

Many researchers use Ray Tune. It's a scalable hyperparameter tuning framework built specifically for deep learning. You can easily use it with any deep learning framework (the integration in the example below takes only a couple of lines), and it provides most state-of-the-art algorithms, including HyperBand, Population Based Training, Bayesian Optimization, and BOHB.

import torch.optim as optim
from ray import tune
from ray.tune.examples.mnist_pytorch import get_data_loaders, ConvNet, train, test


def train_mnist(config):
    # Tune calls this function once per trial, passing in the sampled config.
    train_loader, test_loader = get_data_loaders()
    model = ConvNet()
    optimizer = optim.SGD(model.parameters(), lr=config["lr"])
    for i in range(10):
        train(model, optimizer, train_loader)
        acc = test(model, test_loader)
        tune.report(mean_accuracy=acc)  # send the metric back to Tune


# Launch one trial per learning rate in the grid.
analysis = tune.run(
    train_mnist, config={"lr": tune.grid_search([0.001, 0.01, 0.1])})

print("Best config: ", analysis.get_best_config(metric="mean_accuracy"))

# Get a dataframe for analyzing trial results.
df = analysis.dataframe()

[Disclaimer: I contribute actively to this project!]

Solution 2

What I found is the following:

Younger projects:

UPDATE: something new:

Also, I found a useful comparison table in a post by @Richard Liaw:

[image: a table comparing hyperparameter optimization libraries]

Solution 3

You can use Bayesian optimization (full disclosure, I've contributed to this package) or Hyperband. Both of these methods attempt to automate the hyperparameter tuning stage. Hyperband is supposedly the state of the art in this space, and it is the only parameter-free method I've heard of other than random search. You can also look into using reinforcement learning to learn the optimal hyperparameters if you prefer.
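
As an illustration, here is a minimal sketch using the open-source bayesian-optimization package (bayes_opt), assuming that is the kind of package meant; the toy data and training loop are placeholders for a real train/validate cycle:

import torch
import torch.nn as nn
import torch.nn.functional as F
from bayes_opt import BayesianOptimization

# Toy data so the sketch is self-contained; replace with your own
# train/validate loop in practice.
x = torch.randn(256, 10)
y = x.sum(dim=1, keepdim=True)


def objective(log_lr):
    # Train a tiny model at the proposed learning rate and return a
    # score to maximise (here, negative training loss).
    model = nn.Sequential(nn.Linear(10, 32), nn.ReLU(), nn.Linear(32, 1))
    opt = torch.optim.SGD(model.parameters(), lr=10 ** log_lr)
    for _ in range(100):
        opt.zero_grad()
        loss = F.mse_loss(model(x), y)
        loss.backward()
        opt.step()
    return -loss.item()


optimizer = BayesianOptimization(
    f=objective,
    pbounds={"log_lr": (-4, -1.5)},  # search the learning rate on a log scale
    random_state=0,
)
optimizer.maximize(init_points=5, n_iter=15)
print(optimizer.max)

bayes_opt maximises its objective, so the sketch returns negative loss; each call fits a Gaussian process to the evaluations so far and proposes the next learning rate to try.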

Solution 4

The simplest parameter-free way to do black-box optimisation is random search, and it will explore high-dimensional spaces faster than grid search. There are papers on this, but the tl;dr is that with random search you get different values on every dimension each time, while with grid search you don't.
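
As a minimal sketch of random search for a PyTorch model (the tiny model and regression task here are hypothetical placeholders for a real training pipeline and validation metric):

import random

import torch
import torch.nn as nn
import torch.nn.functional as F


def sample_config():
    # Unlike grid search, every trial draws a fresh value on every dimension.
    return {
        "lr": 10 ** random.uniform(-4, -1),           # log-uniform learning rate
        "hidden": random.choice([32, 64, 128, 256]),  # hidden layer width
    }


def run_trial(config, steps=200):
    # Toy regression task so the sketch is self-contained; swap in your
    # own model, data, and validation metric in practice.
    x = torch.randn(256, 10)
    y = x.sum(dim=1, keepdim=True)
    model = nn.Sequential(nn.Linear(10, config["hidden"]),
                          nn.ReLU(),
                          nn.Linear(config["hidden"], 1))
    opt = torch.optim.SGD(model.parameters(), lr=config["lr"])
    for _ in range(steps):
        opt.zero_grad()
        loss = F.mse_loss(model(x), y)
        loss.backward()
        opt.step()
    return loss.item()


best_loss, best_config = float("inf"), None
for _ in range(20):  # 20 independent random trials
    config = sample_config()
    loss = run_trial(config)
    if loss < best_loss:
        best_loss, best_config = loss, config

print("Best config:", best_config, "loss:", best_loss)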

Bayesian optimisation has good theoretical guarantees (despite the approximations), and implementations like Spearmint can wrap any script you have; there are hyperparameters, but users don't see them in practice. Hyperband got a lot of attention by showing faster convergence than naive Bayesian optimisation. It was able to do this by running different networks for different numbers of iterations, which Bayesian optimisation doesn't support naively. While it is possible to do better with a Bayesian optimisation algorithm that can take this into account, such as FABOLAS, in practice Hyperband is so simple that you're probably better off using it and manually adjusting the search space at intervals.
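
For intuition, here is a minimal sketch of the successive-halving loop at the core of Hyperband; sample_config and train_for are hypothetical stand-ins (in practice train_for would train a PyTorch model for budget epochs and return validation loss):

import random


def sample_config():
    # Hypothetical one-dimensional search space: learning rate on a log scale.
    return {"lr": 10 ** random.uniform(-4, -1)}


def train_for(config, budget):
    # Hypothetical stand-in for training a model from scratch for `budget`
    # epochs and returning validation loss; here a toy function where loss
    # falls with budget and depends on how close lr is to an optimum.
    return abs(config["lr"] - 0.01) + 1.0 / budget + random.gauss(0, 0.01)


def successive_halving(n=16, budget=1):
    configs = [sample_config() for _ in range(n)]
    while len(configs) > 1:
        # Evaluate every surviving config at the current budget...
        scored = sorted(((train_for(c, budget), c) for c in configs),
                        key=lambda t: t[0])
        # ...keep the best half, and double the budget for the survivors.
        configs = [c for _, c in scored[: len(scored) // 2]]
        budget *= 2
    return configs[0]


print(successive_halving())

Bad configurations are discarded after only a small budget, so most of the compute goes to the promising ones; full Hyperband additionally sweeps over different trade-offs between the number of starting configurations and the initial budget.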

Author: Alex

Updated on September 07, 2020

Comments

  • Alex over 3 years

    What is the best way to perform hyperparameter optimization for a PyTorch model? Should I implement, e.g., random search myself? Use scikit-learn? Or is there anything else I am not aware of?

  • Krishna Penukonda almost 4 years
    Looks like you've linked to the wrong repo for HyperOpt. This is the correct URL.
  • LamaMo almost 4 years
    I tried that example, but I got an error, which I posted here: stackoverflow.com/questions/62371787/… Could you please help me with that?
  • drevicko almost 4 years
    There is also SHERPA, which has a nice table of comparisons as well.