detection_visualization

orchard.evaluation.detection_visualization

Detection visualization utilities.

Renders bounding-box overlays on sample images from the test set, showing ground-truth boxes (green) and predicted boxes (red) with confidence scores. Follows the same PlotContext / _finalize_figure pattern used by the classification visualization module.

show_detections(model, loader, device, classes, save_path=None, ctx=None, n=None, confidence_threshold=_DEFAULT_CONFIDENCE)

Visualize detection predictions with bounding-box overlays.

Renders a grid of sample images from the test loader with ground-truth boxes (green) and predicted boxes (red) above a configurable confidence threshold.

Parameters:

Name                  Type                Description                                                Default
model                 Module              Trained detection model in eval mode.                      required
loader                DataLoader[Any]     DataLoader yielding (list[Tensor], list[dict]) batches.    required
device                device              Target device for inference.                               required
classes               list[str]           Human-readable class label names.                          required
save_path             Path | None         Output file path. If None, displays interactively.         None
ctx                   PlotContext | None  PlotContext with layout and normalization settings.        None
n                     int | None          Number of samples to display. Defaults to ctx.n_samples.   None
confidence_threshold  float               Minimum score to display a predicted box.                  _DEFAULT_CONFIDENCE
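To make the role of confidence_threshold concrete, the sketch below shows the kind of per-box score filtering that decides which predicted boxes get drawn. filter_predictions is a hypothetical helper for illustration only, not part of this module's API.

```python
def filter_predictions(boxes, scores, labels, threshold=0.5):
    """Keep only detections whose score meets the threshold.

    Hypothetical helper; show_detections applies the same idea when
    choosing which predicted boxes to overlay on each image.
    """
    keep = [i for i, s in enumerate(scores) if s >= threshold]
    return (
        [boxes[i] for i in keep],
        [scores[i] for i in keep],
        [labels[i] for i in keep],
    )

# Example: only the two boxes scoring >= 0.5 survive.
boxes = [(0, 0, 10, 10), (5, 5, 20, 20), (1, 1, 4, 4)]
scores = [0.9, 0.3, 0.6]
labels = ["apple", "pear", "apple"]
kept_boxes, kept_scores, kept_labels = filter_predictions(
    boxes, scores, labels, threshold=0.5
)
# kept_scores -> [0.9, 0.6]
```

Raising the threshold trades recall for a cleaner overlay; a low threshold shows more candidate boxes but clutters the grid.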
Source code in orchard/evaluation/detection_visualization.py
def show_detections(
    model: nn.Module,
    loader: DataLoader[Any],
    device: torch.device,
    classes: list[str],
    save_path: Path | None = None,
    ctx: PlotContext | None = None,
    n: int | None = None,
    confidence_threshold: float = _DEFAULT_CONFIDENCE,
) -> None:
    """
    Visualize detection predictions with bounding-box overlays.

    Renders a grid of sample images from the test loader with
    ground-truth boxes (green) and predicted boxes (red) above
    a configurable confidence threshold.

    Args:
        model: Trained detection model in eval mode.
        loader: DataLoader yielding ``(list[Tensor], list[dict])`` batches.
        device: Target device for inference.
        classes: Human-readable class label names.
        save_path: Output file path. If None, displays interactively.
        ctx: PlotContext with layout and normalization settings.
        n: Number of samples to display. Defaults to ``ctx.n_samples``.
        confidence_threshold: Minimum score to display a predicted box.
    """
    model.eval()
    style = ctx.plot_style if ctx else "seaborn-v0_8-muted"  # pragma: no mutate

    with plt.style.context(style):
        num_samples = n or (ctx.n_samples if ctx else 12)  # pragma: no mutate
        images, targets, predictions = _get_detection_batch(model, loader, device, num_samples)

        # pragma: no mutate start
        grid_cols = ctx.grid_cols if ctx else 4
        rows = int(np.ceil(len(images) / grid_cols))
        base_w, base_h = ctx.fig_size_predictions if ctx else (12, 8)
        # pragma: no mutate end

        _, axes = plt.subplots(
            rows,
            grid_cols,
            figsize=(base_w, (base_h / 3) * rows),
            constrained_layout=True,
        )
        axes_flat: npt.NDArray[Any] = np.atleast_1d(axes).flatten()

        for i, ax in enumerate(axes_flat):
            if i < len(images):
                _plot_single_detection(
                    ax,
                    images[i],
                    targets[i],
                    predictions[i],
                    classes,
                    ctx,
                    confidence_threshold,
                )
            ax.axis("off")

        if ctx:
            plt.suptitle(
                f"Detection Samples — {ctx.arch_name} | Resolution: {ctx.resolution}",
                fontsize=14,
            )

        _finalize_figure(plt, save_path, ctx)
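The grid geometry in the source above can be reproduced in isolation. The helper below is a sketch of the same layout math (rows via ceiling division, figure height scaled at base_h/3 per row); the function name and defaults are illustrative, not part of the module.

```python
import math

def detection_grid_shape(n_images, grid_cols=4, base_size=(12, 8)):
    # Mirrors the layout math in show_detections: rows grow with the
    # sample count, and the figure height scales at base_h/3 per row.
    base_w, base_h = base_size
    rows = math.ceil(n_images / grid_cols)
    figsize = (base_w, (base_h / 3) * rows)
    return rows, figsize

rows, figsize = detection_grid_shape(12, grid_cols=4)
# 12 images in 4 columns -> 3 rows; height scales to roughly base_h.
```

With the defaults (ctx absent), 12 samples in 4 columns yield a 3-row grid whose height returns to the base 8 inches, so the per-image aspect stays stable as n varies.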