# loader

`orchard.data_handler.loader`
Data Loader Orchestration Module.
Provides the DataLoaderFactory for building PyTorch DataLoaders with advanced features: class balancing via WeightedRandomSampler, hardware-aware configuration (workers, pinned memory), and Optuna-compatible resource management.
Architecture:
- Factory Pattern: Centralizes DataLoader construction logic
- Hardware Optimization: Adaptive workers and memory pinning (CUDA/MPS)
- Class Balancing: WeightedRandomSampler for imbalanced datasets
- Optuna Integration: Resource-conservative settings for hyperparameter tuning
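The class-balancing step above feeds per-sample weights to `WeightedRandomSampler`. The factory's exact weighting scheme is not shown here; inverse class frequency is the standard choice, and the helper below is an illustrative sketch, not part of this module's API:

```python
from collections import Counter

def balanced_sample_weights(labels: list[int]) -> list[float]:
    # Inverse-frequency weight per sample: rarer classes get larger
    # weights, so a weighted sampler draws them more often.
    counts = Counter(labels)
    return [1.0 / counts[y] for y in labels]

# Imbalanced toy labels: three samples of class 0, one of class 1.
weights = balanced_sample_weights([0, 0, 0, 1])
# The minority sample's weight (1.0) is 3x the majority's (1/3), so each
# class is drawn equally often in expectation, e.g. via:
#   sampler = torch.utils.data.WeightedRandomSampler(weights, num_samples=len(weights))
```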
Key Components:
- `DataLoaderFactory`: Main orchestrator for train/val/test loader creation
- `get_dataloaders`: Convenience function for direct loader retrieval

Example:

```python
>>> from orchard.data_handler import get_dataloaders, load_dataset
>>> data = load_dataset(ds_meta)
>>> train_loader, val_loader, test_loader = get_dataloaders(
...     data, cfg.dataset, cfg.training, cfg.augmentation, cfg.num_workers
... )
>>> print(f"Batches: {len(train_loader)}")
```
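The `len(train_loader)` printed above is just the sample count divided by the batch size, rounded up unless the loader drops the last partial batch (standard PyTorch semantics for map-style datasets; `num_batches` is an illustrative helper, not part of this module):

```python
import math

def num_batches(n_samples: int, batch_size: int, drop_last: bool = False) -> int:
    # What len(loader) reports for a map-style dataset.
    if drop_last:
        return n_samples // batch_size  # partial final batch discarded
    return math.ceil(n_samples / batch_size)

# 1000 samples at batch size 32: 31 full batches plus one partial batch of 8.
```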
## `DataLoaderFactory(dataset_cfg, training_cfg, aug_cfg, num_workers, metadata, task_type='classification')`
Orchestrates the creation of optimized PyTorch DataLoaders.
This factory centralizes the configuration of training, validation, and testing pipelines. It ensures that data transformations, class balancing, and hardware settings are synchronized across all splits.
Attributes:

| Name | Type | Description |
|---|---|---|
| `dataset_cfg` | `DatasetConfig` | Dataset sub-config. |
| `training_cfg` | `TrainingConfig` | Training sub-config. |
| `aug_cfg` | `AugmentationConfig` | Augmentation sub-config. |
| `num_workers` | `int` | Resolved worker count from hardware config. |
| `metadata` | `DatasetData` | Data path and raw format information. |
| `ds_meta` | `DatasetMetadata` | Official dataset registry specifications. |
| `logger` | `Logger` | Module-specific logger. |
Initializes the factory with environment and dataset metadata.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `dataset_cfg` | `DatasetConfig` | Dataset sub-config (splits, classes, resolution). | *required* |
| `training_cfg` | `TrainingConfig` | Training sub-config (batch size, seed). | *required* |
| `aug_cfg` | `AugmentationConfig` | Augmentation sub-config (transforms pipeline). | *required* |
| `num_workers` | `int` | Resolved worker count from hardware config. | *required* |
| `metadata` | `DatasetData` | Metadata from the data fetcher/downloader. | *required* |
| `task_type` | `str` | Task type. | `'classification'` |
Source code in orchard/data_handler/loader.py
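The hardware-aware and Optuna-conservative settings the factory applies might look roughly like this. Everything here is an illustrative assumption rather than the factory's actual code: the `loader_kwargs` name, the worker cap of 2 during Optuna trials, and the device-string check are all hypothetical.

```python
def loader_kwargs(device: str, num_workers: int, is_optuna: bool = False) -> dict:
    # Hypothetical mirror of the factory's hardware/Optuna logic.
    workers = min(num_workers, 2) if is_optuna else num_workers  # assumed cap
    return {
        "num_workers": workers,
        # Pinned host memory speeds async host-to-GPU copies; assumed here
        # to be enabled only for CUDA.
        "pin_memory": device == "cuda",
        # Persistent workers skip re-spawn cost between epochs but hold RAM,
        # so they are disabled for resource-conservative Optuna trials.
        "persistent_workers": workers > 0 and not is_optuna,
    }
```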
### `build(is_optuna=False)`
Constructs and returns the full suite of DataLoaders.
Assembles train/val/test splits with transforms, optional class balancing, and hardware-aware infrastructure settings.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `is_optuna` | `bool` | If True, use memory-conservative settings for hyperparameter tuning (fewer workers, no persistent workers). | `False` |
Returns:

| Type | Description |
|---|---|
| `tuple[DataLoader[Any], DataLoader[Any], DataLoader[Any]]` | A tuple of `(train_loader, val_loader, test_loader)`. |
Source code in orchard/data_handler/loader.py
## `get_dataloaders(metadata, dataset_cfg, training_cfg, aug_cfg, num_workers, is_optuna=False, task_type='classification')`
Convenience function for creating train/val/test DataLoaders.
Wraps DataLoaderFactory for streamlined loader construction with automatic class balancing, hardware optimization, and Optuna support.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `metadata` | `DatasetData` | Dataset metadata from `load_dataset` (paths, splits). | *required* |
| `dataset_cfg` | `DatasetConfig` | Dataset sub-config (splits, classes, resolution). | *required* |
| `training_cfg` | `TrainingConfig` | Training sub-config (batch size, seed). | *required* |
| `aug_cfg` | `AugmentationConfig` | Augmentation sub-config (transforms pipeline). | *required* |
| `num_workers` | `int` | Resolved worker count from hardware config. | *required* |
| `is_optuna` | `bool` | If True, use memory-conservative settings for hyperparameter tuning. | `False` |
| `task_type` | `str` | Task type. | `'classification'` |
Returns:

| Type | Description |
|---|---|
| `tuple[DataLoader[Any], DataLoader[Any], DataLoader[Any]]` | A 3-tuple of `(train_loader, val_loader, test_loader)`. |
Example:

```python
>>> data = load_dataset(ds_meta)
>>> loaders = get_dataloaders(
...     data, cfg.dataset, cfg.training, cfg.augmentation, cfg.num_workers
... )
```