orchard.tasks.detection.metrics_adapter

Detection Validation Metrics Adapter.

Computes mAP-family metrics using torchmetrics.detection.MeanAveragePrecision to satisfy orchard.core.task_protocols.TaskValidationMetrics.
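As a sketch of the contract this adapter fulfils (the real protocol lives in orchard.core.task_protocols and may differ in its type annotations), the protocol plausibly requires a single method matching the signature documented below; argument types are simplified to Any here because torch types are not imported:

```python
from collections.abc import Mapping
from typing import Any, Protocol, runtime_checkable


@runtime_checkable
class TaskValidationMetrics(Protocol):
    """Hypothetical sketch of the validation-metrics protocol."""

    def compute_validation_metrics(
        self,
        model: Any,
        val_loader: Any,
        criterion: Any,
        device: Any,
    ) -> Mapping[str, float]: ...
```

Any class providing a compatible `compute_validation_metrics` method satisfies such a protocol structurally, with no inheritance required.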

DetectionMetricsAdapter

Computes mAP validation metrics for object detection.

compute_validation_metrics(model, val_loader, criterion, device)

Run detection inference and compute mAP metrics.

Iterates the validation loader, collects predictions and targets, then computes mean Average Precision at multiple IoU thresholds.

Detection models do not produce a single validation loss in eval mode, so "loss" is returned as 0.0.

Parameters:

Name        Type             Description                                            Default
model       Module           Detection model to evaluate.                           required
val_loader  DataLoader[Any]  Validation data provider.                              required
criterion   Module           Ignored (detection models compute losses internally).  required
device      device           Hardware target for inference.                         required

Returns:

Type                 Description
Mapping[str, float]  Immutable mapping with keys: loss, map, map_50, map_75.
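Because the result is wrapped in types.MappingProxyType, callers get a read-only view. A minimal sketch of the shape to expect (the numeric values below are illustrative, not real outputs):

```python
from types import MappingProxyType

# Illustrative values only; real numbers come from torchmetrics' compute().
metrics = MappingProxyType(
    {"loss": 0.0, "map": 0.412, "map_50": 0.630, "map_75": 0.441}
)

# The proxy is read-only: any attempt to write raises TypeError.
try:
    metrics["map"] = 1.0  # type: ignore[index]
except TypeError:
    writable = False
else:
    writable = True
```

Treat the mapping as a snapshot; copy it into a plain dict if mutation is needed downstream.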

Source code in orchard/tasks/detection/metrics_adapter.py
def compute_validation_metrics(
    self,
    model: nn.Module,
    val_loader: DataLoader[Any],
    criterion: nn.Module,  # noqa: ARG002
    device: torch.device,
) -> Mapping[str, float]:
    """
    Run detection inference and compute mAP metrics.

    Iterates the validation loader, collects predictions and targets,
    then computes mean Average Precision at multiple IoU thresholds.

    Detection models do not produce a single validation loss in eval
    mode, so ``"loss"`` is returned as ``0.0``.

    Args:
        model: Detection model to evaluate.
        val_loader: Validation data provider.
        criterion: Ignored (detection models compute losses internally).
        device: Hardware target for inference.

    Returns:
        Immutable mapping with keys: ``loss``, ``map``, ``map_50``, ``map_75``.
    """
    model.eval()
    metric = MeanAveragePrecision(iou_type="bbox")

    with torch.no_grad():
        for images, targets in val_loader:
            images = [img.to(device) for img in images]
            predictions = model(images)
            metric.update(
                [to_cpu(p) for p in predictions],
                [to_cpu(t) for t in targets],
            )

    result = metric.compute()

    return MappingProxyType(
        {
            METRIC_LOSS: 0.0,  # sentinel — detection models don't expose validation loss
            METRIC_MAP: float(result["map"]),
            METRIC_MAP_50: float(result["map_50"]),
            METRIC_MAP_75: float(result["map_75"]),
        }
    )
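The helper to_cpu used above is imported from elsewhere in orchard. A hypothetical reconstruction, assuming predictions and targets are dicts of tensors (keys such as boxes, labels, scores) and that MeanAveragePrecision.update() is fed host tensors:

```python
from typing import Any


def to_cpu(sample: dict[str, Any]) -> dict[str, Any]:
    # Hypothetical sketch; the real helper may differ. Each tensor is
    # detached from the autograd graph and moved to the CPU so that all
    # inputs to metric.update() live on the same device. Non-tensor
    # values (e.g. plain ints) pass through unchanged.
    return {
        key: value.detach().cpu() if hasattr(value, "detach") else value
        for key, value in sample.items()
    }
```

Detaching before .cpu() avoids carrying gradient history into the metric accumulator.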