
tasks

orchard.tasks

Task Strategy Packages.

Each sub-package exports its adapter classes. Registration in the core task registry is handled by `orchard` (the top-level `__init__`), which is the natural junction point between core and tasks.
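The registration step can be pictured with a small sketch. This is a hypothetical illustration: the registry object, its name, and the `register()` signature are assumptions for exposition, not orchard's actual API.

```python
from typing import Any

# Hypothetical stand-in for the core task registry: a simple name -> class map.
TASK_REGISTRY: dict[str, Any] = {}


def register(name: str, adapter_cls: type) -> None:
    """Register an adapter class under a task name (illustrative only)."""
    TASK_REGISTRY[name] = adapter_cls


class ClassificationCriterionAdapter:
    """Stub standing in for the real adapter exported by the sub-package."""


# What the top-level orchard/__init__.py would do at import time:
register("classification.criterion", ClassificationCriterionAdapter)
```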

ClassificationCriterionAdapter

Builds classification loss functions (CrossEntropy / Focal).

get_criterion(training, class_weights=None)

Delegate to the existing criterion factory.

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| `training` | `TrainingConfig` | Training sub-config with criterion parameters. | *required* |
| `class_weights` | `Tensor \| None` | Optional per-class weights for imbalanced datasets. | `None` |

Returns:

| Type | Description |
| --- | --- |
| `Module` | Loss module (`CrossEntropyLoss` or `FocalLoss`). |

Source code in orchard/tasks/classification/criterion_adapter.py
```python
def get_criterion(
    self,
    training: TrainingConfig,
    class_weights: torch.Tensor | None = None,
) -> nn.Module:
    """
    Delegate to the existing criterion factory.

    Args:
        training: Training sub-config with criterion parameters.
        class_weights: Optional per-class weights for imbalanced datasets.

    Returns:
        Loss module (CrossEntropyLoss or FocalLoss).
    """
    return get_criterion(training, class_weights=class_weights)
```
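For intuition on the Focal option, here is the standard per-sample focal-loss term in pure Python. This is the textbook formula, not orchard's `FocalLoss` implementation; the `alpha` and `gamma` defaults are illustrative.

```python
import math


def focal_loss_term(p_true: float, gamma: float = 2.0, alpha: float = 1.0) -> float:
    """Per-sample focal loss: -alpha * (1 - p)**gamma * log(p),
    where p is the model's predicted probability for the true class."""
    return -alpha * (1.0 - p_true) ** gamma * math.log(p_true)


# With gamma = 0 the term reduces to plain cross-entropy, -log(p);
# with gamma > 0, confident (well-classified) samples are down-weighted.
```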

ClassificationEvalPipelineAdapter

Orchestrates classification inference, visualization, and reporting.

run_evaluation(model, test_loader, train_losses, val_metrics_history, class_names, paths, training, dataset, augmentation, evaluation, arch_name, aug_info='N/A', tracker=None)

Delegate to the existing final evaluation pipeline.

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| `model` | `Module` | Trained model (already on target device). | *required* |
| `test_loader` | `DataLoader[Any]` | DataLoader for test set. | *required* |
| `train_losses` | `list[float]` | Training loss history per epoch. | *required* |
| `val_metrics_history` | `list[Mapping[str, float]]` | Validation metrics history per epoch. | *required* |
| `class_names` | `list[str]` | List of class label strings. | *required* |
| `paths` | `RunPaths` | RunPaths for artifact output. | *required* |
| `training` | `TrainingConfig` | Training sub-config. | *required* |
| `dataset` | `DatasetConfig` | Dataset sub-config. | *required* |
| `augmentation` | `AugmentationConfig` | Augmentation sub-config. | *required* |
| `evaluation` | `EvaluationConfig` | Evaluation sub-config. | *required* |
| `arch_name` | `str` | Architecture identifier. | *required* |
| `aug_info` | `str` | Augmentation description string. | `'N/A'` |
| `tracker` | `TrackerProtocol \| None` | Optional experiment tracker for final metrics. | `None` |

Returns:

| Type | Description |
| --- | --- |
| `tuple[float, float, float]` | 3-tuple of (macro_f1, test_acc, test_auc). |

Source code in orchard/tasks/classification/evaluation_adapter.py
```python
def run_evaluation(
    self,
    model: nn.Module,
    test_loader: DataLoader[Any],
    train_losses: list[float],
    val_metrics_history: list[Mapping[str, float]],
    class_names: list[str],
    paths: RunPaths,
    training: TrainingConfig,
    dataset: DatasetConfig,
    augmentation: AugmentationConfig,
    evaluation: EvaluationConfig,
    arch_name: str,
    aug_info: str = "N/A",  # pragma: no mutate
    tracker: TrackerProtocol | None = None,
) -> tuple[float, float, float]:
    """
    Delegate to the existing final evaluation pipeline.

    Args:
        model: Trained model (already on target device).
        test_loader: DataLoader for test set.
        train_losses: Training loss history per epoch.
        val_metrics_history: Validation metrics history per epoch.
        class_names: List of class label strings.
        paths: RunPaths for artifact output.
        training: Training sub-config.
        dataset: Dataset sub-config.
        augmentation: Augmentation sub-config.
        evaluation: Evaluation sub-config.
        arch_name: Architecture identifier.
        aug_info: Augmentation description string.
        tracker: Optional experiment tracker for final metrics.

    Returns:
        3-tuple of (macro_f1, test_acc, test_auc).
    """
    return run_final_evaluation(
        model=model,
        test_loader=test_loader,
        train_losses=train_losses,
        val_metrics_history=val_metrics_history,
        class_names=class_names,
        paths=paths,
        training=training,
        dataset=dataset,
        augmentation=augmentation,
        evaluation=evaluation,
        arch_name=arch_name,
        aug_info=aug_info,
        tracker=tracker,
    )
```
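Callers typically unpack the returned 3-tuple positionally. A small sketch; the adapter call itself is elided and the numbers are dummies:

```python
def report(results: tuple[float, float, float]) -> str:
    """Format the (macro_f1, test_acc, test_auc) tuple that run_evaluation returns."""
    macro_f1, test_acc, test_auc = results
    return f"macro_f1={macro_f1:.3f} acc={test_acc:.3f} auc={test_auc:.3f}"


print(report((0.91, 0.94, 0.97)))
```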

ClassificationMetricsAdapter

Computes per-epoch classification metrics (loss, accuracy, AUC, F1).

compute_validation_metrics(model, val_loader, criterion, device)

Delegate to the existing validation engine.

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| `model` | `Module` | Neural network model to evaluate. | *required* |
| `val_loader` | `DataLoader[Any]` | Validation data provider. | *required* |
| `criterion` | `Module` | Loss function. | *required* |
| `device` | `device` | Hardware target. | *required* |

Returns:

| Type | Description |
| --- | --- |
| `Mapping[str, float]` | Immutable mapping with keys: loss, accuracy, auc, f1. |

Source code in orchard/tasks/classification/metrics_adapter.py
```python
def compute_validation_metrics(
    self,
    model: nn.Module,
    val_loader: DataLoader[Any],
    criterion: nn.Module,
    device: torch.device,
) -> Mapping[str, float]:
    """
    Delegate to the existing validation engine.

    Args:
        model: Neural network model to evaluate.
        val_loader: Validation data provider.
        criterion: Loss function.
        device: Hardware target.

    Returns:
        Immutable mapping with keys: loss, accuracy, auc, f1.
    """
    return validate_epoch(model, val_loader, criterion, device)
```
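The documented return shape can be sketched with a stand-in. Using `MappingProxyType` as the immutability mechanism is an assumption, and the values are dummies, but the key set matches the docstring:

```python
from types import MappingProxyType
from typing import Mapping


def fake_validate_epoch() -> Mapping[str, float]:
    """Stand-in returning the documented key set with dummy values."""
    return MappingProxyType({"loss": 0.35, "accuracy": 0.88, "auc": 0.93, "f1": 0.86})


metrics = fake_validate_epoch()
assert set(metrics) == {"loss", "accuracy", "auc", "f1"}
```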