exporters

orchard.optimization.orchestrator.exporters

Study Result Export Functions.

Handles serialization of Optuna study results to various formats:

  • Best trial configuration (YAML)
  • Complete study metadata (JSON)
  • Top K trials comparison (Excel)

All export functions handle edge cases (no completed trials, missing timestamps) and provide informative logging; the Excel export additionally applies professional formatting.

TrialData(number, value, params, datetime_start=None, datetime_complete=None, state=None, duration_seconds=None) dataclass

Immutable snapshot of Optuna trial metadata for serialization.

Attributes:

  • number (int): Trial number within the study.
  • value (float | None): Objective value (None for incomplete trials).
  • params (dict[str, Any]): Hyperparameter values sampled for this trial.
  • datetime_start (str | None): ISO-formatted start timestamp.
  • datetime_complete (str | None): ISO-formatted completion timestamp.
  • state (str | None): Trial state name (COMPLETE, PRUNED, FAIL, etc.).
  • duration_seconds (float | None): Wall-clock duration in seconds.

from_trial(trial) classmethod

Build from an Optuna FrozenTrial, computing duration if timestamps are available.

Parameters:

  • trial (FrozenTrial, required): Frozen trial from study.

Returns:

  • TrialData: Immutable trial snapshot with computed duration.

Source code in orchard/optimization/orchestrator/exporters.py
@classmethod
def from_trial(cls, trial: optuna.trial.FrozenTrial) -> TrialData:
    """
    Build from an Optuna FrozenTrial, computing duration if timestamps are available.

    Args:
        trial: Frozen trial from study.

    Returns:
        Immutable trial snapshot with computed duration.
    """
    duration = None
    if trial.datetime_complete and trial.datetime_start:
        duration = (trial.datetime_complete - trial.datetime_start).total_seconds()
    return cls(
        number=trial.number,
        value=trial.value,
        params=trial.params,
        state=trial.state.name,
        datetime_start=trial.datetime_start.isoformat() if trial.datetime_start else None,
        datetime_complete=(
            trial.datetime_complete.isoformat() if trial.datetime_complete else None
        ),
        duration_seconds=duration,
    )

to_dict()

Serialize to plain dictionary for JSON export.

Returns:

  • dict[str, Any]: Dictionary representation with all fields.

Source code in orchard/optimization/orchestrator/exporters.py
def to_dict(self) -> dict[str, Any]:
    """
    Serialize to plain dictionary for JSON export.

    Returns:
        Dictionary representation with all fields.
    """
    return asdict(self)

export_best_config(study, cfg, paths)

Export the best trial configuration as a YAML file.

Creates a new Config instance with the best hyperparameters applied, validates it, and saves it to reports/best_config.yaml.

Parameters:

  • study (Study, required): Completed Optuna study with at least one successful trial.
  • cfg (Config, required): Template configuration (used for non-optimized parameters).
  • paths (RunPaths, required): RunPaths instance for output location.

Returns:

  • Path | None: Path to the exported config file, or None if no completed trials exist.

Note

Skips export with warning if no completed trials exist.

Example

export_best_config(study, cfg, paths)

Creates: {paths.reports}/best_config.yaml

Source code in orchard/optimization/orchestrator/exporters.py
def export_best_config(study: optuna.Study, cfg: Config, paths: RunPaths) -> Path | None:
    """
    Export best trial configuration as YAML file.

    Creates a new Config instance with best hyperparameters applied,
    validates it, and saves to reports/best_config.yaml.

    Args:
        study: Completed Optuna study with at least one successful trial
        cfg: Template configuration (used for non-optimized parameters)
        paths: RunPaths instance for output location

    Returns:
        Path to exported config file, or None if no completed trials

    Note:
        Skips export with warning if no completed trials exist.

    Example:
        >>> export_best_config(study, cfg, paths)
        # Creates: {paths.reports}/best_config.yaml
    """
    if not has_completed_trials(study):
        logger.warning("No completed trials. Cannot export best config.")
        return None

    # Build config dict with best parameters
    config_dict = build_best_config_dict(study.best_params, cfg)

    # Create and validate new config
    best_config = Config(**config_dict)

    # Save to YAML
    output_path = paths.reports / "best_config.yaml"
    save_config_as_yaml(best_config, output_path)

    return output_path

export_study_summary(study, paths)

Export complete study metadata to JSON.

Serializes all trials with parameters, values, states, timestamps, and durations. Handles studies with zero completed trials gracefully.

Parameters:

  • study (Study, required): Optuna study (may contain failed/pruned trials).
  • paths (RunPaths, required): RunPaths instance for output location.

Output structure::

{
    "study_name": str,
    "direction": str,
    "n_trials": int,
    "n_completed": int,
    "best_trial": {...} or null,
    "trials": [...]
}
Example

export_study_summary(study, paths)

Creates: {paths.reports}/study_summary.json

Source code in orchard/optimization/orchestrator/exporters.py
def export_study_summary(study: optuna.Study, paths: RunPaths) -> None:
    """
    Export complete study metadata to JSON.

    Serializes all trials with parameters, values, states, timestamps,
    and durations. Handles studies with zero completed trials gracefully.

    Args:
        study: Optuna study (may contain failed/pruned trials)
        paths: RunPaths instance for output location

    Output structure::

        {
            "study_name": str,
            "direction": str,
            "n_trials": int,
            "n_completed": int,
            "best_trial": {...} or null,
            "trials": [...]
        }

    Example:
        >>> export_study_summary(study, paths)
        # Creates: {paths.reports}/study_summary.json
    """
    completed = get_completed_trials(study)

    # Build best trial data (may be None if no completed trials)
    best_trial_data = build_best_trial_data(study, completed)

    summary = {
        "study_name": study.study_name,
        "direction": study.direction.name,
        "n_trials": len(study.trials),
        "n_completed": len(completed),
        "best_trial": best_trial_data.to_dict() if best_trial_data else None,
        "trials": [TrialData.from_trial(trial).to_dict() for trial in study.trials],
    }

    output_path = paths.reports / "study_summary.json"
    with open(output_path, "w") as f:
        json.dump(summary, f, indent=2)

    logger.info(
        "%s%s %-22s: %s",
        LogStyle.INDENT,
        LogStyle.ARROW,
        "Study Summary",
        Path(output_path).name,
    )
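The summary assembly and JSON write can be sketched with plain dictionaries standing in for `TrialData.to_dict()` output (the trial values and study name here are illustrative):

```python
import json
import tempfile
from pathlib import Path

# Mock trial snapshots standing in for TrialData.to_dict() output.
trials = [
    {"number": 0, "value": 0.82, "state": "COMPLETE"},
    {"number": 1, "value": None, "state": "PRUNED"},
]
completed = [t for t in trials if t["state"] == "COMPLETE"]

summary = {
    "study_name": "demo",
    "direction": "MAXIMIZE",
    "n_trials": len(trials),
    "n_completed": len(completed),
    "best_trial": completed[0] if completed else None,  # serialized as null when absent
    "trials": trials,
}

out_path = Path(tempfile.mkdtemp()) / "study_summary.json"
with open(out_path, "w") as f:
    json.dump(summary, f, indent=2)

loaded = json.loads(out_path.read_text())
```

Note that pruned and failed trials stay in `"trials"` with `"value": null`, so the JSON preserves the full study history, not just the successes.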

export_top_trials(study, paths, metric_name, top_k=10)

Export the top K trials to an Excel spreadsheet with professional formatting.

Creates a human-readable comparison table of the best-performing trials, including hyperparameters, metric values, and durations, with Excel styling that matches the TrainingReport format.

Parameters:

  • study (Study, required): Completed Optuna study with at least one successful trial.
  • paths (RunPaths, required): RunPaths instance for output location.
  • metric_name (str, required): Name of the optimization metric (used for the column header).
  • top_k (int, default: 10): Number of top trials to export.

DataFrame Columns:

  • Rank: 1-based ranking
  • Trial: Trial number
  • {METRIC_NAME}: Objective value
  • {param_name}: Each hyperparameter
  • Duration (s): Trial duration if available
Example

export_top_trials(study, paths, "auc", top_k=10)

Creates: {paths.reports}/top_10_trials.xlsx

Source code in orchard/optimization/orchestrator/exporters.py
def export_top_trials(
    study: optuna.Study, paths: RunPaths, metric_name: str, top_k: int = 10
) -> None:
    """
    Export top K trials to Excel spreadsheet with professional formatting.

    Creates human-readable comparison table of best-performing trials
    with hyperparameters, metric values, and durations. Applies professional
    Excel styling matching TrainingReport format.

    Args:
        study: Completed Optuna study with at least one successful trial
        paths: RunPaths instance for output location
        metric_name: Name of optimization metric (for column header)
        top_k: Number of top trials to export (default: 10)

    DataFrame Columns:

    - Rank: 1-based ranking
    - Trial: Trial number
    - {METRIC_NAME}: Objective value
    - {param_name}: Each hyperparameter
    - Duration (s): Trial duration if available

    Example:
        >>> export_top_trials(study, paths, "auc", top_k=10)
        # Creates: {paths.reports}/top_10_trials.xlsx
    """
    completed = get_completed_trials(study)
    if not completed:
        logger.warning("No completed trials. Cannot export top trials.")
        return

    reverse = study.direction == optuna.study.StudyDirection.MAXIMIZE
    # Filter out trials with None or NaN values before sorting
    valid_trials = [
        t
        for t in completed
        if t.value is not None and not (isinstance(t.value, float) and math.isnan(t.value))
    ]
    sorted_trials = sorted(valid_trials, key=lambda t: cast(float, t.value), reverse=reverse)[
        :top_k
    ]

    df = build_top_trials_dataframe(sorted_trials, metric_name)

    output_path = paths.reports / "top_10_trials.xlsx"

    wb = Workbook()
    ws = wb.active
    ws.title = "Top Trials"

    _write_styled_rows(ws, df)
    _auto_adjust_column_widths(ws)

    wb.save(output_path)
    logger.info(
        "%s%s %-22s: %s (%d trials)",
        LogStyle.INDENT,
        LogStyle.ARROW,
        "Top Trials",
        Path(output_path).name,
        len(sorted_trials),
    )

build_best_config_dict(best_params, cfg)

Construct config dictionary from best trial parameters.

Maps Optuna parameters back to Config structure using map_param_to_config_path and restores full training epochs.

Parameters:

  • best_params (dict[str, Any], required): Dictionary from study.best_params.
  • cfg (Config, required): Template config for structure and defaults.

Returns:

  • dict[str, Any]: Config dictionary ready for validation.

Source code in orchard/optimization/orchestrator/exporters.py
def build_best_config_dict(best_params: dict[str, Any], cfg: Config) -> dict[str, Any]:
    """
    Construct config dictionary from best trial parameters.

    Maps Optuna parameters back to Config structure using
    map_param_to_config_path and restores full training epochs.

    Args:
        best_params: Dictionary from study.best_params
        cfg: Template config for structure and defaults

    Returns:
        Config dictionary ready for validation
    """
    config_dict = cfg.model_dump()

    for param_name, value in best_params.items():
        section, key = map_param_to_config_path(param_name)
        config_dict[section][key] = value

    # Restore normal epochs for final training (not Optuna short epochs)
    config_dict["training"]["epochs"] = cfg.training.epochs

    return cast(dict[str, Any], config_dict)
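The flat-to-nested parameter mapping can be sketched with a hand-rolled lookup table; `PARAM_PATHS` and the template layout below are hypothetical, since the real module delegates this to `map_param_to_config_path` and a Pydantic `Config`:

```python
from typing import Any

# Hypothetical parameter -> (section, key) mapping; the real module
# resolves this via map_param_to_config_path.
PARAM_PATHS = {
    "learning_rate": ("training", "learning_rate"),
    "dropout": ("model", "dropout"),
}

def build_best_config_dict(best_params: dict[str, Any], template: dict[str, Any]) -> dict[str, Any]:
    # Shallow-copy each section so the template is left untouched
    config_dict = {section: dict(values) for section, values in template.items()}
    for param_name, value in best_params.items():
        section, key = PARAM_PATHS[param_name]
        config_dict[section][key] = value
    return config_dict

template = {"training": {"learning_rate": 0.01, "epochs": 50}, "model": {"dropout": 0.1}}
best = build_best_config_dict({"learning_rate": 0.003, "dropout": 0.25}, template)
```

Non-optimized keys such as `epochs` pass through from the template unchanged, which is exactly why the real function restores the full training epochs afterwards.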

build_best_trial_data(study, completed)

Build best trial metadata as an immutable snapshot.

Parameters:

  • study (Study, required): Optuna study instance.
  • completed (list[FrozenTrial], required): List of completed trials.

Returns:

  • TrialData | None: Immutable trial snapshot, or None if no completed trials.

Source code in orchard/optimization/orchestrator/exporters.py
def build_best_trial_data(
    study: optuna.Study, completed: list[optuna.trial.FrozenTrial]
) -> TrialData | None:
    """
    Build best trial metadata as an immutable snapshot.

    Args:
        study: Optuna study instance.
        completed: List of completed trials.

    Returns:
        Immutable trial snapshot, or None if no completed trials.
    """
    if not completed:
        return None

    try:
        return TrialData.from_trial(study.best_trial)
    except ValueError:
        # No best trial available
        return None
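The double guard (empty list plus `ValueError`) can be sketched with a stub study; `StudyStub` is hypothetical, standing in for Optuna's behavior of raising `ValueError` from `best_trial` when no trial has completed:

```python
class StudyStub:
    # Hypothetical stand-in: Optuna's Study.best_trial raises ValueError
    # when no trial has completed successfully.
    def __init__(self, best=None):
        self._best = best

    @property
    def best_trial(self):
        if self._best is None:
            raise ValueError("No trials are completed yet.")
        return self._best

def best_or_none(study, completed):
    if not completed:
        return None
    try:
        return study.best_trial
    except ValueError:
        return None

empty_result = best_or_none(StudyStub(), [])
found_result = best_or_none(StudyStub(best="trial-5"), ["trial-5"])
```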

build_top_trials_dataframe(sorted_trials, metric_name)

Build DataFrame from sorted trials.

Parameters:

  • sorted_trials (list[FrozenTrial], required): List of trials sorted by performance.
  • metric_name (str, required): Name of the optimization metric (used for the column header).

Returns:

  • DataFrame: Pandas DataFrame with trial comparison data.

Source code in orchard/optimization/orchestrator/exporters.py
def build_top_trials_dataframe(
    sorted_trials: list[optuna.trial.FrozenTrial], metric_name: str
) -> pd.DataFrame:
    """
    Build DataFrame from sorted trials.

    Args:
        sorted_trials: list of trials sorted by performance
        metric_name: Name of optimization metric (for column header)

    Returns:
        Pandas DataFrame with trial comparison data
    """
    rows = []
    for rank, trial in enumerate(sorted_trials, 1):
        row = {
            "Rank": rank,
            "Trial": trial.number,
            f"{metric_name.upper()}": trial.value,
        }
        row.update(trial.params)

        # Add duration if available
        if trial.datetime_complete and trial.datetime_start:
            duration = (trial.datetime_complete - trial.datetime_start).total_seconds()
            row["Duration (s)"] = int(duration)

        rows.append(row)

    return pd.DataFrame(rows)
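The row-building step can be sketched without pandas, using plain dictionaries in place of `FrozenTrial` objects (trial numbers and values here are illustrative):

```python
# Dict stand-ins for FrozenTrial, already sorted by performance.
trials = [
    {"number": 7, "value": 0.93, "params": {"lr": 0.003}},
    {"number": 2, "value": 0.88, "params": {"lr": 0.010}},
]

metric_name = "auc"
rows = []
for rank, trial in enumerate(trials, 1):
    row = {"Rank": rank, "Trial": trial["number"], metric_name.upper(): trial["value"]}
    row.update(trial["params"])  # one extra column per hyperparameter
    rows.append(row)
# pd.DataFrame(rows) then yields the final comparison table
```

Because each hyperparameter becomes its own column via `row.update`, trials sampled from different search spaces simply leave NaN in the columns they lack once pandas assembles the frame.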