Inside Optuna: How Its Core Components Enable Hyper‑Parameter Optimization

This article dissects Optuna’s internal design by building three miniature versions (Minituna v1‑v3) that illustrate its main components, storage layer, sampling APIs, pruning mechanisms, and joint‑sampling concepts, while comparing them with Optuna’s full implementation.

Hyper-parameter optimization is a key step for achieving high performance in machine-learning models, and Optuna is a Python library that provides a flexible framework for this task. The article introduces a pedagogical mini-implementation called Minituna, which comes in three versions (roughly 100, 200, and 300 lines of code) that expose Optuna's design step by step.

Minituna v1 – Core Components

The first version implements the essential classes Study, Trial, Sampler, Storage, and FrozenTrial. An example objective function shows how a Trial samples two uniform parameters and returns a simple quadratic loss. The code also demonstrates creating a study with minituna.create_study() and running ten trials.

import minituna_v1 as minituna

def objective(trial: minituna.Trial) -> float:
    # Two uniform parameters; the optimum is at (x, y) = (3, 5).
    x = trial.suggest_uniform("x", 0, 10)
    y = trial.suggest_uniform("y", 0, 10)
    return (x - 3) ** 2 + (y - 5) ** 2

if __name__ == "__main__":
    study = minituna.create_study()
    study.optimize(objective, 10)  # run the objective for 10 trials
    print("Best trial:", study.best_trial)

The article explains each component (a minimal sketch of how they fit together follows this list):

Study: manages the optimization task and holds a Storage and a Sampler.

Trial: represents a single evaluation; its suggest_* APIs delegate to the sampler and record the chosen parameters in storage.

Storage: persists FrozenTrial objects, which is what enables RDB storage and distributed optimization.

FrozenTrial: an immutable snapshot of a trial's parameters, value, and state.

Sampler: implements the sampling algorithm (random sampling in this minimal version).
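
To make the division of labor concrete, here is a minimal sketch of how these classes could cooperate. The class and method names follow the description above, but the bodies are illustrative assumptions, not Minituna's exact code:

import random

class FrozenTrial:
    def __init__(self, trial_id):
        self.trial_id = trial_id
        self.state = "running"   # later "completed" or "failed"
        self.value = None
        self.params = {}

class Storage:
    def __init__(self):
        self.trials = []
    def create_new_trial(self):
        trial = FrozenTrial(len(self.trials))
        self.trials.append(trial)
        return trial.trial_id

class Sampler:
    def sample_independent(self, name, low, high):
        return random.uniform(low, high)   # v1 only does random search

class Trial:
    def __init__(self, study, trial_id):
        self.study, self.trial_id = study, trial_id
    def suggest_uniform(self, name, low, high):
        # Sampling is delegated to the study's sampler, and the result is
        # recorded in storage before being returned to the objective.
        value = self.study.sampler.sample_independent(name, low, high)
        self.study.storage.trials[self.trial_id].params[name] = value
        return value

class Study:
    def __init__(self):
        self.storage, self.sampler = Storage(), Sampler()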

Minituna v2 – Categorical, Integer, and Log‑Uniform APIs

Version 2 adds suggest_categorical, suggest_int, and suggest_loguniform. An Iris‑classification example illustrates conditional search spaces:

import sklearn.datasets
import sklearn.ensemble
import sklearn.model_selection
import sklearn.svm

def objective(trial):
    iris = sklearn.datasets.load_iris()
    x, y = iris.data, iris.target
    # The classifier choice decides which hyper-parameters are suggested next.
    classifier_name = trial.suggest_categorical("classifier", ["SVC", "RandomForest"])
    if classifier_name == "SVC":
        svc_c = trial.suggest_loguniform("svc_c", 1e-10, 1e10)
        classifier = sklearn.svm.SVC(C=svc_c, gamma="auto")
    else:
        rf_max_depth = trial.suggest_int("rf_max_depth", 2, 32)
        classifier = sklearn.ensemble.RandomForestClassifier(max_depth=rf_max_depth, n_estimators=10)
    score = sklearn.model_selection.cross_val_score(classifier, x, y, n_jobs=-1, cv=3)
    return 1 - score.mean()

The article introduces an abstract BaseDistribution class with to_internal_repr and to_external_repr methods, showing how Optuna stores every parameter as a float (internal representation) while exposing the original type (external representation).
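
As an illustration, here is a minimal sketch of such distribution classes, assuming the interface described above; the class names mirror the article's description, but the bodies are illustrative rather than copied from Minituna:

import abc

class BaseDistribution(abc.ABC):
    @abc.abstractmethod
    def to_internal_repr(self, external_repr) -> float: ...
    @abc.abstractmethod
    def to_external_repr(self, internal_repr: float): ...

class CategoricalDistribution(BaseDistribution):
    def __init__(self, choices):
        self.choices = choices
    def to_internal_repr(self, external_repr) -> float:
        return float(self.choices.index(external_repr))   # store the index as a float
    def to_external_repr(self, internal_repr: float):
        return self.choices[int(internal_repr)]           # recover the original choice

class IntUniformDistribution(BaseDistribution):
    def __init__(self, low, high):
        self.low, self.high = low, high
    def to_internal_repr(self, external_repr) -> float:
        return float(external_repr)                       # e.g. 3 -> 3.0
    def to_external_repr(self, internal_repr: float) -> int:
        return int(internal_repr)                         # e.g. 3.0 -> 3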

Understanding the Storage Layer

Before diving into v3, the article reviews Optuna's storage models (the SQLAlchemy definitions of StudyModel, TrialModel, and TrialParamModel). It notes two design differences: Minituna's storage holds only a single study, whereas Optuna's supports multiple studies, and because trial IDs in Optuna are issued across all studies, trial_id is not guaranteed to be consecutive within a single study.
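
For orientation, a heavily simplified sketch of what storage models along these lines look like in SQLAlchemy; this is not Optuna's actual schema, which carries many more columns (state, timestamps, JSON-serialized distributions, user attributes):

import sqlalchemy as sa
from sqlalchemy.orm import declarative_base

Base = declarative_base()

class StudyModel(Base):
    __tablename__ = "studies"
    study_id = sa.Column(sa.Integer, primary_key=True)
    study_name = sa.Column(sa.String(512), unique=True)

class TrialModel(Base):
    __tablename__ = "trials"
    trial_id = sa.Column(sa.Integer, primary_key=True)   # auto-incremented across all studies
    study_id = sa.Column(sa.Integer, sa.ForeignKey("studies.study_id"))
    state = sa.Column(sa.String(16))
    value = sa.Column(sa.Float)

class TrialParamModel(Base):
    __tablename__ = "trial_params"
    param_id = sa.Column(sa.Integer, primary_key=True)
    trial_id = sa.Column(sa.Integer, sa.ForeignKey("trials.trial_id"))
    param_name = sa.Column(sa.String(512))
    param_value = sa.Column(sa.Float)                    # the internal (float) representation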

Minituna v3 – Median‑Stopping Pruning

Version 3 (≈300 lines) implements early stopping via a median-pruning rule. The objective reports intermediate error values and raises TrialPruned when trial.should_prune() returns True.

import numpy as np
import sklearn.datasets
import sklearn.model_selection
from sklearn.neural_network import MLPClassifier
import minituna_v3 as minituna

# Any classification split works here; the digits dataset is used for illustration.
digits = sklearn.datasets.load_digits()
x_train, x_valid, y_train, y_valid = sklearn.model_selection.train_test_split(digits.data, digits.target)
classes = np.unique(y_train)

def objective(trial):
    clf = MLPClassifier(
        hidden_layer_sizes=tuple(trial.suggest_int(f"n_units_l{i}", 32, 64) for i in range(3)),
        learning_rate_init=trial.suggest_loguniform("lr_init", 1e-5, 1e-1))
    for step in range(100):
        clf.partial_fit(x_train, y_train, classes=classes)
        error = 1 - clf.score(x_valid, y_valid)
        trial.report(error, step)  # report the intermediate value for this step
        if trial.should_prune():
            raise minituna.TrialPruned()
    return error

The pruning algorithm works as follows (as described in the Optuna paper [1]): after a warm-up period, the median of the intermediate values that completed trials reported at the current step is computed, and the running trial is pruned if its own intermediate value is worse than (exceeds) this median. The implementation uses NumPy's nanmedian so that trials with missing values at that step are simply ignored.
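
A compact sketch of this median-stopping rule, assuming the caller collects each completed trial's value at the current step (with np.nan for trials that never reached it); the function name and warm-up default are illustrative, not Optuna's exact MedianPruner:

import numpy as np

def should_prune(step, intermediate_value, completed_values_at_step, n_warmup_steps=5):
    """Prune if this trial's value at `step` is worse than the median of the
    values completed trials reported at the same step."""
    if step < n_warmup_steps:
        return False  # never prune during the warm-up period
    others = np.asarray(completed_values_at_step, dtype=float)
    if others.size == 0 or np.all(np.isnan(others)):
        return False  # nothing to compare against yet
    median = np.nanmedian(others)          # NaN entries (missing steps) are ignored
    return intermediate_value > median     # larger error than the median -> prune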

Joint Sampling and Define‑by‑Run

Optuna separates the Sampler and the Pruner, allowing arbitrary combinations of the two. The article then explains "joint sampling", a mechanism used by SkoptSampler and CmaEsSampler: parameters that appear in every trial regardless of conditional branch (the "joint search space") are sampled together, while the remaining parameters fall back to independent samplers such as RandomSampler or TPESampler. Conditional search spaces are illustrated with two examples (SVC vs. RandomForest) that show how the define-by-run search space changes at runtime; a sketch of the joint-space computation follows.
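
A minimal sketch of how such a joint (intersection) search space could be computed, assuming completed trials expose their sampled parameters as a params dict; this is illustrative rather than Optuna's actual implementation:

def intersection_search_space(completed_trials):
    # Keep only the parameter names that were suggested in every completed trial.
    search_space = None
    for trial in completed_trials:
        names = set(trial.params)
        search_space = names if search_space is None else search_space & names
    return search_space or set()

# In the SVC / RandomForest example, "classifier" appears in every trial, while
# "svc_c" and "rf_max_depth" appear only in some; so only "classifier" lands in
# the joint space, and the rest are sampled by the independent sampler.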

Differences Between Optuna and Minituna

Optuna provides further distributions such as DiscreteUniformDistribution and IntLogUniformDistribution; Minituna omits these and implements only random sampling.

Optuna's suggest_float API (with an optional log flag) mirrors suggest_int, whereas Minituna v2 exposes a separate suggest_loguniform (see the example below).

Optuna serializes trial data to JSON for RDB or Redis storage; Minituna’s storage is simplified and supports only a single study.
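
For reference, this is how the suggest APIs mentioned above look in current Optuna user code; the objective body here is only a placeholder for illustration:

import optuna

def objective(trial: optuna.Trial) -> float:
    lr = trial.suggest_float("lr", 1e-5, 1e-1, log=True)  # log-scaled float parameter
    depth = trial.suggest_int("depth", 2, 32)             # integer parameter
    return lr * depth  # placeholder objective

study = optuna.create_study()
study.optimize(objective, n_trials=10)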

References

[1] T. Akiba et al., "Optuna: A Next-generation Hyperparameter Optimization Framework," KDD 2019.

[2] D. Golovin et al., "Google Vizier: A Service for Black-Box Optimization," KDD 2017.

[3] K. Jamieson and A. Talwalkar, "Non-stochastic Best Arm Identification and Hyperparameter Optimization," AISTATS 2016.

[4] L. Li et al., "A System for Massively Parallel Hyperparameter Tuning," MLSys 2020.

[5] L. Li et al., "Hyperband: A Novel Bandit-Based Approach to Hyperparameter Optimization," JMLR 2017.
