hardware
orchard.core.environment.hardware
¶
Hardware Acceleration & Computing Environment.
This module provides high-level abstractions for hardware discovery (CUDA/MPS) and compute resource optimization. It detects available accelerators and synchronizes PyTorch threading with system capabilities.
configure_system_libraries()
¶
Configures libraries for headless environments and reduces logging noise.
- Sets Matplotlib to 'Agg' backend on Linux/Docker (no GUI)
- Configures font embedding for PDF/PS exports
- Suppresses verbose Matplotlib warnings
Source code in orchard/core/environment/hardware.py
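The backend-selection step can be sketched as follows. `configure_headless_plotting` is a hypothetical helper, not the module's actual implementation; it relies on Matplotlib honoring the `MPLBACKEND` environment variable at import time:

```python
import os
from typing import Optional

def configure_headless_plotting(force: bool = False) -> Optional[str]:
    """Pick a GUI-free Matplotlib backend when no display is present."""
    # On POSIX systems, an unset/empty DISPLAY usually means a headless session
    headless = force or (os.name == "posix" and not os.environ.get("DISPLAY"))
    if headless:
        # Matplotlib reads MPLBACKEND before its first import, so no GUI toolkit loads
        os.environ["MPLBACKEND"] = "Agg"
        return "Agg"
    return None
```

Font embedding for PDF/PS exports is typically handled separately, e.g. by setting `matplotlib.rcParams["pdf.fonttype"] = 42` after import.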
has_mps_backend()
¶
Checks whether the Apple Silicon MPS backend is available.
Source code in orchard/core/environment/hardware.py
detect_best_device()
¶
Detects the most performant accelerator (CUDA > MPS > CPU).
Returns:
| Type | Description |
|---|---|
| `str` | Device string: 'cuda', 'mps', or 'cpu' |
Source code in orchard/core/environment/hardware.py
to_device_obj(device_str, local_rank=0)
¶
Converts device string to PyTorch device object.
In distributed multi-GPU setups, uses local_rank to select the
correct GPU and calls torch.cuda.set_device() for CUDA affinity.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| `device_str` | `str` | 'cuda', 'cpu', or 'auto' (auto-selects best available) | *required* |
| `local_rank` | `int` | Node-local process rank for GPU assignment. Used to select the correct GPU on multi-GPU nodes. | `0` |
Returns:
| Type | Description |
|---|---|
| `device` | `torch.device` object |
Raises:
| Type | Description |
|---|---|
| `ValueError` | If CUDA is requested but unavailable, or the device string is invalid |
Source code in orchard/core/environment/hardware.py
get_accelerator_name()
¶
Returns accelerator model name (CUDA GPU or Apple Silicon) or empty string.
Source code in orchard/core/environment/hardware.py
get_vram_info(device_idx=0)
¶
Retrieves VRAM availability for a CUDA device.
Note
MPS (Apple Silicon) does not expose VRAM info via PyTorch —
torch.mps.mem_get_info() does not exist. Returns 'N/A' for
non-CUDA devices until Apple provides a public API.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| `device_idx` | `int` | GPU index to query | `0` |
Returns:
| Type | Description |
|---|---|
| `str` | Formatted string 'X.XX GB / Y.YY GB' or a status message |
Source code in orchard/core/environment/hardware.py
get_num_workers()
¶
Determines the optimal number of DataLoader workers, capped for RAM stability.
Returns:
| Type | Description |
|---|---|
| `int` | Recommended number of subprocesses (2-8 range) |
Source code in orchard/core/environment/hardware.py
apply_cpu_threads(num_workers)
¶
Sets optimal compute threads to avoid resource contention.
Synchronizes PyTorch, OMP, and MKL thread counts.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| `num_workers` | `int` | Active DataLoader workers | *required* |
Returns:
| Type | Description |
|---|---|
| `int` | Number of threads assigned to compute operations |