Bayesian Optimization for Supervised Models

When tuning hyperparameters, Grid Search and Random Search are common approaches, but they often require a large number of evaluations, especially as the number of hyperparameters grows. Bayesian Optimization offers a more efficient alternative by intelligently selecting hyperparameters to evaluate based on past performance, making it an ideal method for more complex supervised learning models.

In this article, we’ll explore how Bayesian Optimization works, its advantages over other tuning methods, and how to implement it using popular libraries like HyperOpt and Scikit-Optimize.


What is Bayesian Optimization?

Bayesian Optimization is a hyperparameter tuning technique that builds a probabilistic model of the objective function (the function that maps hyperparameters to model performance) and uses this model to select the next set of hyperparameters to evaluate. Unlike Grid Search or Random Search, which select hyperparameters blindly, Bayesian Optimization uses past evaluation results to make informed guesses about which hyperparameter combinations are likely to perform best.

Key Components:

  1. Surrogate Model: Bayesian Optimization uses a surrogate model (often a Gaussian Process) to approximate the objective function. The surrogate model is much cheaper to evaluate than the actual objective function (e.g., training and evaluating a model).
  2. Acquisition Function: The acquisition function decides which hyperparameter values to evaluate next, balancing exploration of new areas against exploitation of known promising regions. (Both components are illustrated in the sketch after this list.)
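
To make these two components concrete, here is a minimal, self-contained sketch (illustrative only, not taken from any particular library) that fits a Gaussian Process surrogate to a few made-up hyperparameter/loss observations and uses the Expected Improvement acquisition function to propose the next point to evaluate. All data values and names are hypothetical.

import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor

# Past evaluations: hyperparameter values and the losses they produced (made up)
X_observed = np.array([[0.1], [0.4], [0.7]])
y_observed = np.array([0.35, 0.22, 0.30])

# Surrogate model: a Gaussian Process fit to the past results
surrogate = GaussianProcessRegressor().fit(X_observed, y_observed)

# Acquisition function: Expected Improvement over the best loss so far
def expected_improvement(X_candidates, xi=0.01):
    mu, sigma = surrogate.predict(X_candidates, return_std=True)
    improvement = y_observed.min() - mu - xi
    z = improvement / np.maximum(sigma, 1e-9)
    return improvement * norm.cdf(z) + sigma * norm.pdf(z)

# Score a grid of candidates and pick the most promising next point
candidates = np.linspace(0, 1, 100).reshape(-1, 1)
next_point = candidates[np.argmax(expected_improvement(candidates))]
print("Next hyperparameter value to evaluate:", next_point[0])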

The Process (a runnable toy version follows the list):

  1. Start by training the model on an initial set of randomly selected hyperparameters.
  2. Use the surrogate model to predict how the model will perform with different hyperparameter combinations.
  3. Use the acquisition function to select the next set of hyperparameters to evaluate based on the surrogate model’s predictions.
  4. Update the surrogate model with the new hyperparameter-performance data.
  5. Repeat the process until a stopping criterion is met (e.g., time limit, maximum iterations).
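
The following toy sketch runs this loop end to end, minimizing a made-up one-dimensional "objective" in place of real model training; the objective, ranges, and iteration counts are all illustrative.

import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor

def objective(x):  # stand-in for "train the model and measure its loss"
    return (x - 0.3) ** 2 + 0.05 * np.sin(20 * x)

rng = np.random.default_rng(0)
X = rng.uniform(0, 1, size=(5, 1))               # step 1: random initial points
y = np.array([objective(x[0]) for x in X])
grid = np.linspace(0, 1, 200).reshape(-1, 1)

for _ in range(20):                              # steps 2-5: the loop
    gp = GaussianProcessRegressor().fit(X, y)    # refit the surrogate
    mu, sigma = gp.predict(grid, return_std=True)
    imp = y.min() - mu                           # Expected Improvement
    z = imp / np.maximum(sigma, 1e-9)
    ei = imp * norm.cdf(z) + sigma * norm.pdf(z)
    x_next = grid[np.argmax(ei)]                 # acquisition picks the next point
    X = np.vstack([X, [x_next]])                 # evaluate and record it
    y = np.append(y, objective(x_next[0]))

print("Best value found:", X[np.argmin(y)][0], "with loss", y.min())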

Why Bayesian Optimization is Efficient

Unlike Grid Search or Random Search, which treat each hyperparameter combination independently, Bayesian Optimization builds a probabilistic model that incorporates knowledge from previous evaluations. This allows it to focus on promising hyperparameter regions, reducing the total number of evaluations required to find the best combination.

Key Benefits:

  • Efficiency: Fewer evaluations are needed to find optimal or near-optimal hyperparameters, especially when the objective function is expensive to evaluate.
  • Exploration vs. Exploitation: The acquisition function intelligently balances exploration (trying new hyperparameter combinations) with exploitation (refining already promising areas).
  • Applicable to Complex Models: Bayesian Optimization works well for models with expensive training cycles, like deep learning networks, or models with many hyperparameters, like gradient boosting machines.

Bayesian Optimization with HyperOpt

HyperOpt is a popular Python library for implementing Bayesian Optimization. Let’s look at an example of tuning a Random Forest classifier using HyperOpt:

Example: Bayesian Optimization with HyperOpt

from hyperopt import fmin, tpe, hp, Trials
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split

# Load dataset
iris = load_iris()
X_train, X_test, y_train, y_test = train_test_split(iris.data, iris.target, test_size=0.2, random_state=42)

# Define the objective function
def objective(params):
    clf = RandomForestClassifier(**params)
    score = cross_val_score(clf, X_train, y_train, cv=5, scoring='accuracy').mean()
    return -score  # fmin minimizes, so return the negative accuracy

# Define the search space for hyperparameters
space = {
    'n_estimators': hp.choice('n_estimators', [50, 100, 200]),
    'max_depth': hp.choice('max_depth', [5, 10, 20, None]),
    'min_samples_split': hp.uniform('min_samples_split', 0.1, 1.0),
    'min_samples_leaf': hp.uniform('min_samples_leaf', 0.1, 0.5)
}

# Create a Trials object to track results
trials = Trials()

# Run Bayesian Optimization using the Tree-structured Parzen Estimator (TPE)
best = fmin(fn=objective,
            space=space,
            algo=tpe.suggest,
            max_evals=50,  # number of iterations
            trials=trials)

print("Best Hyperparameters:", best)

In this example:

  • Objective function: This is the model training and evaluation function that we want to minimize (in this case, the negative accuracy).
  • Search space: We define a range of hyperparameters for the RandomForestClassifier.
  • TPE algorithm: HyperOpt uses the Tree-structured Parzen Estimator (TPE) as its surrogate model, and tpe.suggest picks the next hyperparameter values to try. (Note the caveat about hp.choice below.)
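
One caveat worth knowing: for dimensions defined with hp.choice, the best dictionary returned by fmin contains the indices of the chosen options, not the option values themselves. HyperOpt's space_eval helper maps them back, as in this short follow-up to the example above:

from hyperopt import space_eval

# hp.choice dimensions come back as indices; space_eval recovers the actual values
print("Best Hyperparameters:", space_eval(space, best))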

Bayesian Optimization with Scikit-Optimize

Scikit-Optimize (also known as skopt) is another popular library for Bayesian Optimization. It’s designed to be simple and easy to use, with an interface similar to scikit-learn.

Example: Bayesian Optimization with Scikit-Optimize

from skopt import gp_minimize
from skopt.space import Real, Integer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split

# Load dataset
iris = load_iris()
X_train, X_test, y_train, y_test = train_test_split(iris.data, iris.target, test_size=0.2, random_state=42)

# Define the objective function
def objective(params):
    n_estimators, max_depth, min_samples_split, min_samples_leaf = params
    clf = RandomForestClassifier(n_estimators=n_estimators, max_depth=max_depth,
                                 min_samples_split=min_samples_split,
                                 min_samples_leaf=min_samples_leaf)
    return -cross_val_score(clf, X_train, y_train, cv=5, scoring='accuracy').mean()

# Define the search space for hyperparameters
space = [
    Integer(50, 200, name='n_estimators'),
    Integer(5, 20, name='max_depth'),
    Real(0.1, 1.0, name='min_samples_split'),
    Real(0.1, 0.5, name='min_samples_leaf')
]

# Run Bayesian Optimization
result = gp_minimize(objective, space, n_calls=50, random_state=42)

print("Best Hyperparameters:", result.x)

Here, we use gp_minimize from Scikit-Optimize, which employs a Gaussian Process as the surrogate model to optimize the hyperparameters of a RandomForestClassifier. The values in result.x are ordered to match the dimensions in space, so the first entry is the best n_estimators, the second the best max_depth, and so on.
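
Scikit-Optimize also provides BayesSearchCV, a scikit-learn-style wrapper that handles the cross-validation loop internally, so you can skip writing an objective function altogether. Here is a minimal sketch of the same search (the parameter ranges mirror the example above; check your installed skopt and scikit-learn versions, since compatibility between the two has varied across releases):

from skopt import BayesSearchCV
from skopt.space import Real, Integer
from sklearn.ensemble import RandomForestClassifier
from sklearn.datasets import load_iris

iris = load_iris()

# BayesSearchCV runs the fit/score/cross-validation loop for us
opt = BayesSearchCV(
    RandomForestClassifier(random_state=42),
    {
        'n_estimators': Integer(50, 200),
        'max_depth': Integer(5, 20),
        'min_samples_split': Real(0.1, 1.0),
        'min_samples_leaf': Real(0.1, 0.5),
    },
    n_iter=50,   # number of hyperparameter settings to evaluate
    cv=5,
    random_state=42,
)
opt.fit(iris.data, iris.target)
print("Best Hyperparameters:", opt.best_params_)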


Best Practices for Using Bayesian Optimization

  1. Limit the Number of Evaluations: While Bayesian Optimization is more efficient than Grid or Random Search, it’s important to set a limit on the number of iterations to avoid excessive computation.
  2. Define Reasonable Search Spaces: Narrowing down the search space to sensible ranges can help the optimization process converge faster and find better hyperparameters.
  3. Combine with Cross-Validation: Use cross-validation when evaluating hyperparameters to ensure that the model generalizes well to unseen data.
  4. Consider the Trade-off Between Exploration and Exploitation: Tuning the acquisition function to balance exploring new areas of the search space against refining already promising ones is key to successful optimization (see the sketch after this list).
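
In Scikit-Optimize, for instance, the acquisition function and its exploration appetite are exposed as arguments to gp_minimize. This sketch reuses the objective and space from the earlier example; the specific xi value is illustrative, not a recommendation:

result = gp_minimize(objective, space,
                     n_calls=50,
                     acq_func='EI',  # Expected Improvement
                     xi=0.05,        # larger xi favors exploration, smaller favors exploitation
                     random_state=42)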

Conclusion

Bayesian Optimization offers an efficient and intelligent approach to hyperparameter tuning, especially when training machine learning models is expensive or when there are many hyperparameters to explore. By using surrogate models and acquisition functions, Bayesian Optimization focuses on promising areas of the search space, reducing the number of evaluations needed to find the best hyperparameters. Whether you use HyperOpt or Scikit-Optimize, this method can be a valuable addition to your hyperparameter tuning toolkit.

In the next article, we’ll explore how cross-validation can be integrated into hyperparameter tuning to ensure robust and reliable performance evaluation.