# objective

`orchard.optimization.objective`

Optuna objective components for the training pipeline.

This package provides the Optuna objective function and its supporting components, structured around single-responsibility modules for configuration building, metric extraction, and training execution.
## TrialConfigBuilder(base_cfg)

Builds trial-specific Config instances for Optuna trials.

Handles parameter mapping from Optuna's flat namespace to Config's hierarchical structure, preserves dataset metadata excluded from serialization, and validates via Pydantic.

Attributes:

| Name | Type | Description |
|---|---|---|
| base_cfg | | Base configuration template |
| optuna_epochs | | Number of epochs for Optuna trials (from cfg.optuna.epochs) |
| base_metadata | | Cached dataset metadata |

Example:

```python
builder = TrialConfigBuilder(base_cfg)
trial_params = {"learning_rate": 0.001, "dropout": 0.3}
trial_cfg = builder.build(trial_params)
```
Initialize config builder.

Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| base_cfg | Config | Base configuration template | required |

Source code in `orchard/optimization/objective/config_builder.py`
### build(trial_params)

Build trial-specific Config with parameter overrides.

Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| trial_params | dict[str, Any] | Sampled hyperparameters from Optuna | required |

Returns:

| Type | Description |
|---|---|
| Config | Validated Config instance with trial parameters |

Source code in `orchard/optimization/objective/config_builder.py`
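The flat-to-hierarchical parameter mapping can be sketched as follows. This is a minimal illustration of the idea, not the package's actual implementation; `set_nested`, `build_overrides`, and the dotted-key convention are assumptions made for the example.

```python
from typing import Any


def set_nested(cfg: dict[str, Any], dotted_key: str, value: Any) -> None:
    """Walk a dotted key path, creating intermediate dicts as needed."""
    parts = dotted_key.split(".")
    node = cfg
    for part in parts[:-1]:
        node = node.setdefault(part, {})
    node[parts[-1]] = value


def build_overrides(trial_params: dict[str, Any]) -> dict[str, Any]:
    """Map Optuna's flat parameter namespace onto a nested config dict."""
    overrides: dict[str, Any] = {}
    for key, value in trial_params.items():
        set_nested(overrides, key, value)
    return overrides


nested = build_overrides({"training.learning_rate": 0.001, "model.dropout": 0.3})
# nested == {"training": {"learning_rate": 0.001}, "model": {"dropout": 0.3}}
```

In a Pydantic-backed builder, a dict like `nested` would then be merged into the base config's dump and re-validated, so invalid trial parameters fail fast.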
## MetricExtractor(metric_name, direction='maximize')

Extracts and tracks metrics from validation results.

Handles metric extraction with validation and maintains the best metric value achieved during training. Direction-aware: uses max() for maximize objectives, min() for minimize.

Attributes:

| Name | Type | Description |
|---|---|---|
| metric_name | | Name of metric to track (e.g., 'auc', 'accuracy') |
| direction | | Optimization direction ('maximize' or 'minimize') |
| best_metric | | Best metric value achieved so far |

Example:

```python
extractor = MetricExtractor("auc", direction="maximize")
val_metrics = {"loss": 0.5, "accuracy": 0.85, "auc": 0.92}
current = extractor.extract(val_metrics)  # 0.92
best = extractor.update_best(current)  # 0.92
```
Initialize metric extractor.

Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| metric_name | str | Name of metric to track | required |
| direction | str | 'maximize' or 'minimize' | 'maximize' |

Source code in `orchard/optimization/objective/metric_extractor.py`
### extract(val_metrics)

Extract target metric from validation results.

Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| val_metrics | Mapping[str, float] | Dictionary of validation metrics | required |

Returns:

| Type | Description |
|---|---|
| float | Value of target metric |

Raises:

| Type | Description |
|---|---|
| KeyError | If metric_name not found in val_metrics |

Source code in `orchard/optimization/objective/metric_extractor.py`
### reset()
### update_best(current_metric)

Update and return best metric achieved within current trial.

Direction-aware: uses max() for maximize, min() for minimize. NaN values are ignored to prevent poisoning the best-metric state (a NaN argument propagates through comparisons — max(float("nan"), x) returns NaN in Python — which would permanently corrupt the tracked best value).

Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| current_metric | float | Current metric value | required |

Returns:

| Type | Description |
|---|---|
| float | Best metric value achieved so far |

Source code in `orchard/optimization/objective/metric_extractor.py`
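The NaN guard described above can be sketched as follows — a minimal, self-contained illustration of the idea rather than the class's actual code:

```python
import math


def update_best(best: float, current: float, direction: str = "maximize") -> float:
    """Direction-aware best-metric update that ignores NaN values."""
    if math.isnan(current):
        # Skip NaN: max(nan, x) returns NaN, which would corrupt every
        # later comparison for the rest of the trial.
        return best
    return max(best, current) if direction == "maximize" else min(best, current)


best = float("-inf")
for metric in [0.80, float("nan"), 0.92, 0.88]:
    best = update_best(best, metric)
# best == 0.92 — the NaN epoch is simply skipped
```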
## OptunaObjective(cfg, search_space, device, dataset_loader=None, dataloader_factory=None, model_factory=None, tracker=None)

Optuna objective function with dependency injection.

Orchestrates hyperparameter optimization trials by:

- Building trial-specific configurations
- Creating data loaders, models, and optimizers
- Executing training with pruning
- Tracking and returning best metrics

All external dependencies are injectable for testability:

- dataset_loader: Dataset loading function
- dataloader_factory: DataLoader creation function
- model_factory: Model instantiation function

Attributes:

| Name | Type | Description |
|---|---|---|
| cfg | | Base configuration (single source of truth) |
| search_space | | Hyperparameter search space |
| device | | Training device (CPU/CUDA/MPS) |
| config_builder | | Builds trial-specific configs |
| metric_extractor | | Handles metric extraction |
| dataset_data | | Cached dataset (loaded once, reused across trials) |

Example:

```python
objective = OptunaObjective(
    cfg=config,
    search_space=search_space,
    device=torch.device("cuda"),
)
study = optuna.create_study(direction="maximize")
study.optimize(objective, n_trials=50)
```
Initialize Optuna objective.

Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| cfg | Config | Base configuration (reads optuna.* settings) | required |
| search_space | Mapping[str, Any] | Hyperparameter search space | required |
| device | device | Training device | required |
| dataset_loader | DatasetLoaderProtocol \| None | Dataset loading function (default: load_dataset) | None |
| dataloader_factory | DataloaderFactoryProtocol \| None | DataLoader factory (default: get_dataloaders) | None |
| model_factory | ModelFactoryProtocol \| None | Model factory (default: get_model) | None |
| tracker | TrackerProtocol \| None | Optional experiment tracker for nested trial logging | None |

Source code in `orchard/optimization/objective/objective.py`
### __call__(trial)

Execute single Optuna trial.

Samples hyperparameters, builds trial configuration, trains model, and returns best validation metric. Failed trials return the worst possible metric instead of crashing the study.

Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| trial | Trial | Optuna trial object | required |

Returns:

| Type | Description |
|---|---|
| float | Best validation metric achieved during training, or worst-case metric if the trial fails. |

Raises:

| Type | Description |
|---|---|
| TrialPruned | If trial is pruned during training |

Source code in `orchard/optimization/objective/objective.py`
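The failed-trial fallback can be sketched as follows. `run_trial` is a hypothetical stand-in for the real training call; note that in the actual objective, `optuna.TrialPruned` is re-raised rather than caught, so pruning still works.

```python
from typing import Callable


def worst_case(direction: str) -> float:
    """Worst possible value for the study's optimization direction."""
    return float("-inf") if direction == "maximize" else float("inf")


def safe_objective(run_trial: Callable[[], float], direction: str = "maximize") -> float:
    """Return the trial's metric, or the worst-case value if training fails."""
    try:
        return run_trial()
    except Exception:
        # A crashed trial must not abort the whole study; reporting the
        # worst possible metric lets the sampler deprioritize this region.
        # (In the real objective, optuna.TrialPruned would be re-raised here.)
        return worst_case(direction)


assert safe_objective(lambda: 0.91) == 0.91
assert safe_objective(lambda: 1 / 0) == float("-inf")
assert safe_objective(lambda: 1 / 0, direction="minimize") == float("inf")
```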
## TrialTrainingExecutor(model, train_loader, val_loader, optimizer, scheduler, criterion, training, optuna, log_interval, device, metric_extractor)

Executes training loop with Optuna pruning integration.

Orchestrates a complete training cycle for a single Optuna trial, including:

- Training and validation epochs
- Metric extraction and tracking
- Pruning decisions with warmup period
- Learning rate scheduling
- Progress logging

Pruning and warmup parameters are read from the optuna sub-config; training hyperparameters from training.

Attributes:

| Name | Type | Description |
|---|---|---|
| model | | PyTorch model to train. |
| train_loader | | Training data loader. |
| val_loader | | Validation data loader. |
| optimizer | | Optimizer instance. |
| scheduler | | Learning rate scheduler. |
| criterion | | Loss function. |
| device | | Training device (CPU/CUDA/MPS). |
| metric_extractor | | Handles metric extraction and best-value tracking. |
| enable_pruning | | Whether to enable trial pruning. |
| warmup_epochs | | Epochs before pruning activates. |
| monitor_metric | | Name of the metric driving scheduling. |
| scaler | GradScaler \| None | AMP gradient scaler (None when use_amp is False). |
| mixup_fn | callable \| None | Mixup augmentation function (None when alpha is 0). |
| epochs | | Total training epochs. |
| log_interval | | Epoch interval for progress logging. |
| _loop | TrainingLoop | Shared epoch kernel for training steps (train only, no validation). |

Example:

```python
executor = TrialTrainingExecutor(
    model=model,
    train_loader=train_loader,
    val_loader=val_loader,
    optimizer=optimizer,
    scheduler=scheduler,
    criterion=criterion,
    training=trial_cfg.training,
    optuna=trial_cfg.optuna,
    log_interval=trial_cfg.telemetry.log_interval,
    device=device,
    metric_extractor=MetricExtractor("auc"),
)
best_metric = executor.execute(trial)
```
Initialize training executor.

Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| model | Module | PyTorch model to train. | required |
| train_loader | DataLoader[Any] | Training data loader. | required |
| val_loader | DataLoader[Any] | Validation data loader. | required |
| optimizer | Optimizer | Optimizer instance. | required |
| scheduler | LRScheduler | Learning rate scheduler. | required |
| criterion | Module | Loss function. | required |
| training | TrainingConfig | Training hyperparameters sub-config. | required |
| optuna | OptunaConfig | Optuna pruning/warmup sub-config. | required |
| log_interval | int | Epoch interval for progress logging. | required |
| device | device | Training device. | required |
| metric_extractor | MetricExtractor | Metric extraction and tracking handler. | required |

Source code in `orchard/optimization/objective/training_executor.py`
### execute(trial)

Execute full training loop with pruning.

Runs training for cfg.training.epochs, reporting metrics to Optuna after each epoch. Applies pruning logic after the warmup period.

Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| trial | Trial | Optuna trial for reporting and pruning | required |

Returns:

| Type | Description |
|---|---|
| float | Best validation metric achieved during training |

Raises:

| Type | Description |
|---|---|
| TrialPruned | If trial should terminate early |

Source code in `orchard/optimization/objective/training_executor.py`
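The report-then-prune loop with a warmup period can be sketched as follows. `FakeTrial` is a stand-in for `optuna.Trial` (whose `report`/`should_prune` methods have the same shape), and the local `TrialPruned` stands in for `optuna.TrialPruned`; the decision rule inside `should_prune` is invented for the example.

```python
class TrialPruned(Exception):
    """Stand-in for optuna.TrialPruned."""


class FakeTrial:
    """Minimal optuna.Trial stand-in: prunes when the last metric drops below 0.5."""

    def __init__(self) -> None:
        self.reported: list[tuple[float, int]] = []

    def report(self, value: float, step: int) -> None:
        self.reported.append((value, step))

    def should_prune(self) -> bool:
        return self.reported[-1][0] < 0.5


def execute(trial, metrics_per_epoch, warmup_epochs: int = 2) -> float:
    """Report each epoch's metric; consult the pruner only after warmup."""
    best = float("-inf")
    for epoch, metric in enumerate(metrics_per_epoch):
        best = max(best, metric)
        trial.report(metric, step=epoch)
        # Pruning is suppressed during warmup so early noise cannot kill the trial.
        if epoch >= warmup_epochs and trial.should_prune():
            raise TrialPruned(f"pruned at epoch {epoch}")
    return best


trial = FakeTrial()
try:
    execute(trial, [0.4, 0.6, 0.7, 0.3, 0.8])
except TrialPruned:
    pass  # pruned at epoch 3: metric 0.3 < 0.5 and past the warmup period
```

Note that the epoch-0 metric of 0.4 does not trigger pruning, because the warmup window covers epochs 0 and 1.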