orchard.trainer.setup
Optimization Setup Module.
This module provides factory functions to instantiate PyTorch optimization components (optimizers, schedulers, and loss functions) based on the training configuration sub-model.
compute_class_weights(labels, num_classes, device)
Compute balanced class weights (sklearn formula: N / (n_classes * count_c)).
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `labels` | `NDArray[Any]` | Training set labels (1D array). | *required* |
| `num_classes` | `int` | Total number of classes. | *required* |
| `device` | `device` | Target device for the weight tensor. | *required* |

Returns:

| Type | Description |
|---|---|
| `Tensor` | 1D tensor of per-class weights, shape `(num_classes,)`. |
Source code in orchard/trainer/setup.py
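The balanced-weight formula can be sketched in plain Python (function name hypothetical; the real implementation returns a `torch.Tensor` moved to the requested device):

```python
from collections import Counter

def balanced_class_weights(labels, num_classes):
    # sklearn "balanced" heuristic: weight_c = N / (num_classes * count_c).
    # Pure-Python sketch; assumes every class appears at least once in labels.
    counts = Counter(labels)
    n = len(labels)
    return [n / (num_classes * counts[c]) for c in range(num_classes)]
```

For example, `balanced_class_weights([0, 0, 0, 1], 2)` up-weights the rare class 1 relative to class 0, since the weight is inversely proportional to each class's count.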
get_criterion(training, class_weights=None)
Universal Vision Criterion Factory.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `training` | `TrainingConfig` | Training sub-config with criterion parameters. | *required* |
| `class_weights` | `Tensor \| None` | Optional per-class weights for imbalanced datasets. | `None` |

Returns:

| Type | Description |
|---|---|
| `Module` | Loss module (CrossEntropyLoss or FocalLoss). |

Raises:

| Type | Description |
|---|---|
| `OrchardConfigError` | If the configured criterion type is unsupported. |
Source code in orchard/trainer/setup.py
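For intuition on the FocalLoss option, the per-sample focal term and its relation to cross-entropy can be sketched in scalar form (`gamma` and `alpha` are the conventional parameter names and may differ from the actual config fields):

```python
import math

def focal_term(p_t, gamma=2.0, alpha=1.0):
    # -alpha * (1 - p_t)**gamma * log(p_t); with gamma=0 and alpha=1 this
    # reduces to the ordinary cross-entropy term -log(p_t). Larger gamma
    # down-weights well-classified samples (p_t close to 1).
    return -alpha * (1.0 - p_t) ** gamma * math.log(p_t)
```

With the default `gamma=2.0`, a confident prediction (`p_t = 0.9`) contributes far less loss than an uncertain one (`p_t = 0.5`), which is what makes focal loss useful on imbalanced data.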
get_optimizer(model, training)
Factory function to instantiate optimizer from config.
Dispatches on `training.optimizer_type`:
- sgd — SGD with momentum, suited for convolutional architectures.
- adamw — AdamW with decoupled weight decay, suited for transformers.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `model` | `Module` | Network whose parameters will be optimised. | *required* |
| `training` | `TrainingConfig` | Training sub-config with optimizer hyper-parameters. | *required* |

Returns:

| Type | Description |
|---|---|
| `Optimizer` | Configured optimizer instance. |

Raises:

| Type | Description |
|---|---|
| `OrchardConfigError` | If `training.optimizer_type` is unsupported. |
Source code in orchard/trainer/setup.py
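The dispatch can be sketched as follows (only the `sgd`/`adamw` values come from the docs above; the error class and helper name are stand-ins, and the real factory instantiates `torch.optim.SGD` or `torch.optim.AdamW` with the model's parameters and the config's hyper-parameters):

```python
class OrchardConfigError(ValueError):
    """Stand-in for the project's config error type (assumed)."""

def dispatch_optimizer(optimizer_type):
    # Maps the documented optimizer_type values to optimizer class names.
    table = {"sgd": "torch.optim.SGD", "adamw": "torch.optim.AdamW"}
    try:
        return table[optimizer_type]
    except KeyError as exc:
        raise OrchardConfigError(
            f"unsupported optimizer_type: {optimizer_type!r}"
        ) from exc
```

The lookup-table-plus-`KeyError` pattern keeps the supported values and the error path in one place, which matches the "raises on unknown type" contract documented above.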
get_scheduler(optimizer, training)
Advanced Scheduler Factory.
Supports multiple LR decay strategies based on `TrainingConfig`:
- cosine — Smooth decay following a cosine curve.
- plateau — Reduces LR when `monitor_metric` stops improving (mode="max").
- step — Periodic reduction by a fixed factor.
- none — Maintains a constant learning rate.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `optimizer` | `Optimizer` | Optimizer whose learning rate will be scheduled. | *required* |
| `training` | `TrainingConfig` | Training sub-config with scheduler hyper-parameters. | *required* |

Returns:

| Type | Description |
|---|---|
| `CosineAnnealingLR \| ReduceLROnPlateau \| StepLR \| LambdaLR` | Configured learning rate scheduler instance. |

Raises:

| Type | Description |
|---|---|
| `OrchardConfigError` | If the configured scheduler type is unsupported. |
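Source code in orchard/trainer/setup.py

The cosine strategy follows the standard CosineAnnealingLR closed form; a scalar sketch (function name hypothetical, `lr_min` defaulting to 0 as PyTorch's `eta_min` does):

```python
import math

def cosine_lr(step, t_max, lr_max, lr_min=0.0):
    # eta_t = eta_min + (eta_max - eta_min) * (1 + cos(pi * t / T_max)) / 2
    # Starts at lr_max (step 0) and decays smoothly to lr_min (step t_max).
    return lr_min + 0.5 * (lr_max - lr_min) * (1.0 + math.cos(math.pi * step / t_max))
```

At the midpoint (`step = t_max / 2`) the rate is halfway between `lr_max` and `lr_min`, which is what gives cosine annealing its slow-start, slow-finish shape.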