orchard.tasks.classification

Classification Task Adapters.

Exports the four classification strategy adapters. Registration in the task registry is handled by `orchard.tasks`, which owns the relationship between adapters and `core.task_registry`.

ClassificationCriterionAdapter

Builds classification loss functions (CrossEntropy / Focal).

get_criterion(training, class_weights=None)

Delegate to the existing criterion factory.

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| `training` | `TrainingConfig` | Training sub-config with criterion parameters. | *required* |
| `class_weights` | `Tensor \| None` | Optional per-class weights for imbalanced datasets. | `None` |

Returns:

| Type | Description |
| --- | --- |
| `Module` | Loss module (`CrossEntropyLoss` or `FocalLoss`). |

Source code in orchard/tasks/classification/criterion_adapter.py
def get_criterion(
    self,
    training: TrainingConfig,
    class_weights: torch.Tensor | None = None,
) -> nn.Module:
    """
    Delegate to the existing criterion factory.

    Args:
        training: Training sub-config with criterion parameters.
        class_weights: Optional per-class weights for imbalanced datasets.

    Returns:
        Loss module (CrossEntropyLoss or FocalLoss).
    """
    return get_criterion(training, class_weights=class_weights)
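
The adapter ultimately hands back a standard loss module. A minimal sketch of what that module computes, assuming the cross-entropy path with hypothetical per-class weights (the actual weights come from the dataset, not from this literal tensor):

```python
import torch
import torch.nn as nn

# Hypothetical stand-in for the criterion the adapter would return when
# the training config selects cross-entropy: per-class weights rebalance
# the loss for imbalanced datasets (here the second class is up-weighted).
class_weights = torch.tensor([1.0, 3.0])
criterion = nn.CrossEntropyLoss(weight=class_weights)

logits = torch.tensor([[2.0, 0.5], [0.2, 1.5]])  # one row per sample
targets = torch.tensor([0, 1])
loss = criterion(logits, targets)
print(loss.item())
```

With `weight` set, `CrossEntropyLoss` computes a weighted mean over the batch, so misclassifying the rare class costs proportionally more.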

ClassificationEvalPipelineAdapter

Orchestrates classification inference, visualization, and reporting.

run_evaluation(model, test_loader, train_losses, val_metrics_history, class_names, paths, training, dataset, augmentation, evaluation, arch_name, aug_info='N/A', tracker=None)

Delegate to the existing final evaluation pipeline.

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| `model` | `Module` | Trained model (already on target device). | *required* |
| `test_loader` | `DataLoader[Any]` | DataLoader for test set. | *required* |
| `train_losses` | `list[float]` | Training loss history per epoch. | *required* |
| `val_metrics_history` | `list[Mapping[str, float]]` | Validation metrics history per epoch. | *required* |
| `class_names` | `list[str]` | List of class label strings. | *required* |
| `paths` | `RunPaths` | RunPaths for artifact output. | *required* |
| `training` | `TrainingConfig` | Training sub-config. | *required* |
| `dataset` | `DatasetConfig` | Dataset sub-config. | *required* |
| `augmentation` | `AugmentationConfig` | Augmentation sub-config. | *required* |
| `evaluation` | `EvaluationConfig` | Evaluation sub-config. | *required* |
| `arch_name` | `str` | Architecture identifier. | *required* |
| `aug_info` | `str` | Augmentation description string. | `'N/A'` |
| `tracker` | `TrackerProtocol \| None` | Optional experiment tracker for final metrics. | `None` |

Returns:

| Type | Description |
| --- | --- |
| `Mapping[str, float]` | Mapping of metric names to float values. |

Source code in orchard/tasks/classification/evaluation_adapter.py
def run_evaluation(
    self,
    model: nn.Module,
    test_loader: DataLoader[Any],
    train_losses: list[float],
    val_metrics_history: list[Mapping[str, float]],
    class_names: list[str],
    paths: RunPaths,
    training: TrainingConfig,
    dataset: DatasetConfig,
    augmentation: AugmentationConfig,
    evaluation: EvaluationConfig,
    arch_name: str,
    aug_info: str = "N/A",  # pragma: no mutate
    tracker: TrackerProtocol | None = None,
) -> Mapping[str, float]:
    """
    Delegate to the existing final evaluation pipeline.

    Args:
        model: Trained model (already on target device).
        test_loader: DataLoader for test set.
        train_losses: Training loss history per epoch.
        val_metrics_history: Validation metrics history per epoch.
        class_names: List of class label strings.
        paths: RunPaths for artifact output.
        training: Training sub-config.
        dataset: Dataset sub-config.
        augmentation: Augmentation sub-config.
        evaluation: Evaluation sub-config.
        arch_name: Architecture identifier.
        aug_info: Augmentation description string.
        tracker: Optional experiment tracker for final metrics.

    Returns:
        Mapping of metric names to float values.
    """
    macro_f1, test_acc, test_auc = run_final_evaluation(
        model=model,
        test_loader=test_loader,
        train_losses=train_losses,
        val_metrics_history=val_metrics_history,
        class_names=class_names,
        paths=paths,
        training=training,
        dataset=dataset,
        augmentation=augmentation,
        evaluation=evaluation,
        arch_name=arch_name,
        aug_info=aug_info,
        tracker=tracker,
    )
    return MappingProxyType(
        {
            METRIC_F1: macro_f1,
            METRIC_ACCURACY: test_acc,
            METRIC_AUC: test_auc,
        }
    )
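
The `MappingProxyType` wrapper gives callers a read-only view of the final metrics. A small sketch of that behavior, with hypothetical key strings standing in for `METRIC_F1` / `METRIC_ACCURACY` / `METRIC_AUC` (their actual values are defined elsewhere in orchard):

```python
from types import MappingProxyType

# Illustrative metric values only; the real numbers come from
# run_final_evaluation. The proxy rejects any mutation attempt.
metrics = MappingProxyType({"f1": 0.91, "accuracy": 0.93, "auc": 0.97})

print(metrics["f1"])
try:
    metrics["f1"] = 0.0  # read-only view: this raises TypeError
except TypeError as exc:
    print("rejected:", exc)
```

Returning an immutable mapping means downstream consumers (loggers, trackers) cannot accidentally alter the evaluation record.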

ClassificationMetricsAdapter

Computes per-epoch classification metrics (loss, accuracy, AUC, F1).

compute_validation_metrics(model, val_loader, criterion, device)

Delegate to the existing validation engine.

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| `model` | `Module` | Neural network model to evaluate. | *required* |
| `val_loader` | `DataLoader[Any]` | Validation data provider. | *required* |
| `criterion` | `Module` | Loss function. | *required* |
| `device` | `device` | Hardware target. | *required* |

Returns:

| Type | Description |
| --- | --- |
| `Mapping[str, float]` | Immutable mapping with keys: `loss`, `accuracy`, `auc`, `f1`. |

Source code in orchard/tasks/classification/metrics_adapter.py
def compute_validation_metrics(
    self,
    model: nn.Module,
    val_loader: DataLoader[Any],
    criterion: nn.Module,
    device: torch.device,
) -> Mapping[str, float]:
    """
    Delegate to the existing validation engine.

    Args:
        model: Neural network model to evaluate.
        val_loader: Validation data provider.
        criterion: Loss function.
        device: Hardware target.

    Returns:
        Immutable mapping with keys: loss, accuracy, auc, f1.
    """
    return validate_epoch(model, val_loader, criterion, device)
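
A minimal sketch of what a validation pass in the spirit of `validate_epoch` typically does; `validate_sketch` below is a hypothetical stand-in, not the orchard engine, and it omits the AUC and F1 computations for brevity:

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

def validate_sketch(model, loader, criterion, device):
    """Compute mean loss and accuracy over a validation loader."""
    model.eval()
    total_loss, correct, count = 0.0, 0, 0
    with torch.no_grad():  # no gradients needed during evaluation
        for x, y in loader:
            x, y = x.to(device), y.to(device)
            logits = model(x)
            total_loss += criterion(logits, y).item() * y.size(0)
            correct += (logits.argmax(dim=1) == y).sum().item()
            count += y.size(0)
    return {"loss": total_loss / count, "accuracy": correct / count}

model = nn.Linear(4, 2)  # toy classifier over 4 features, 2 classes
data = TensorDataset(torch.randn(8, 4), torch.randint(0, 2, (8,)))
metrics = validate_sketch(model, DataLoader(data, batch_size=4),
                          nn.CrossEntropyLoss(), torch.device("cpu"))
print(metrics)
```

Weighting each batch loss by `y.size(0)` keeps the mean correct even when the final batch is smaller.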

ClassificationTrainingStepAdapter

Computes classification training loss with optional MixUp blending.

compute_training_loss(model, inputs, targets, criterion, mixup_fn=None, device=None)

Execute classification forward pass and compute loss.

When mixup_fn is provided, inputs and targets are blended before the forward pass and the loss is computed as a convex combination of the two target sets.

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| `model` | `Module` | Neural network producing logits. | *required* |
| `inputs` | `Any` | Batch of input tensors. | *required* |
| `targets` | `Any` | Batch of target tensors. | *required* |
| `criterion` | `Module` | Loss function (e.g. `CrossEntropyLoss`). | *required* |
| `mixup_fn` | `Callable[..., Any] \| None` | Optional MixUp augmentation callable. | `None` |
| `device` | `device \| None` | Target device for tensor placement. | `None` |

Returns:

| Type | Description |
| --- | --- |
| `Tensor` | Scalar loss tensor for backward pass. |

Source code in orchard/tasks/classification/training_step_adapter.py
def compute_training_loss(
    self,
    model: nn.Module,
    inputs: Any,
    targets: Any,
    criterion: nn.Module,
    mixup_fn: Callable[..., Any] | None = None,
    device: torch.device | None = None,
) -> torch.Tensor:
    """
    Execute classification forward pass and compute loss.

    When ``mixup_fn`` is provided, inputs and targets are blended
    before the forward pass and the loss is computed as a convex
    combination of the two target sets.

    Args:
        model: Neural network producing logits.
        inputs: Batch of input tensors.
        targets: Batch of target tensors.
        criterion: Loss function (e.g. CrossEntropyLoss).
        mixup_fn: Optional MixUp augmentation callable.
        device: Target device for tensor placement.

    Returns:
        Scalar loss tensor for backward pass.
    """
    if device is not None:
        inputs = inputs.to(device)
        targets = targets.to(device)
    if mixup_fn is not None:
        inputs, y_a, y_b, lam = mixup_fn(inputs, targets)
        outputs = model(inputs)
        loss: torch.Tensor = lam * criterion(outputs, y_a) + (1 - lam) * criterion(outputs, y_b)
        return loss
    outputs = model(inputs)
    result: torch.Tensor = criterion(outputs, targets)
    return result
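
The MixUp path can be sketched end to end. The `mixup` function below is a hypothetical callable matching the `(inputs, targets) -> (mixed, y_a, y_b, lam)` contract the adapter expects; real implementations usually draw `lam` from a Beta distribution rather than fixing it:

```python
import torch
import torch.nn as nn

def mixup(inputs, targets, lam=0.7):
    """Blend each sample with a randomly permuted partner."""
    index = torch.randperm(inputs.size(0))
    mixed = lam * inputs + (1 - lam) * inputs[index]
    return mixed, targets, targets[index], lam

model = nn.Linear(4, 3)  # toy classifier: 4 features, 3 classes
criterion = nn.CrossEntropyLoss()
x, y = torch.randn(8, 4), torch.randint(0, 3, (8,))

# Convex combination of the losses against both target sets,
# mirroring the adapter's mixup branch.
mixed, y_a, y_b, lam = mixup(x, y)
out = model(mixed)
loss = lam * criterion(out, y_a) + (1 - lam) * criterion(out, y_b)
print(loss.item())
```

Because the inputs are blended with weight `lam`, weighting the two losses by `lam` and `1 - lam` keeps the training signal consistent with the blended batch.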