Hardware Package API Reference
Mindtrace Hardware Module
A comprehensive hardware abstraction layer providing unified access to cameras, PLCs, sensors, and other industrial hardware components with lazy imports to prevent cross-contamination between different backends.
Key Features
- Lazy import system to avoid loading all backends at startup
- Unified interface for different hardware types
- Async-first design for optimal performance
- Thread-safe operations across all components
- Comprehensive error handling and logging
- Configuration management system
- Mock backends for testing and development
Hardware Components
- CameraManager: Unified camera management (Basler, OpenCV)
- PLCManager: Unified PLC management (Allen-Bradley, Siemens, Modbus)
- SensorManager: Sensor data acquisition and monitoring (MQTT, HTTP, Serial)
- SensorManagerService: Service wrapper for SensorManager with MCP endpoints
- ActuatorManager: Actuator control and positioning (Future)
Design Philosophy
This module uses lazy imports to prevent SWIG warnings from pycomm3 appearing in camera tests, and to avoid loading heavy SDKs unless they are actually needed. Each manager is only imported when accessed.
Usage
Import managers only when needed
```python
from mindtrace.hardware import CameraManager, PLCManager, SensorManager
from mindtrace.hardware.services.sensors import SensorManagerService
```
Camera operations
```python
async with CameraManager() as camera_manager:
    cameras = camera_manager.discover()
    camera = await camera_manager.open(cameras[0])
    image = await camera.capture()
```
PLC operations
```python
async with PLCManager() as plc_manager:
    await plc_manager.register_plc("PLC1", "AllenBradley", "192.168.1.100")
    await plc_manager.connect_plc("PLC1")
    values = await plc_manager.read_tag("PLC1", ["Tag1", "Tag2"])
```
Sensor operations (direct manager)
```python
async with SensorManager() as sensor_manager:
    await sensor_manager.connect_sensor("temp1", "mqtt", config, "sensors/temp")
    data = await sensor_manager.read_sensor_data("temp1")
```
Sensor operations (service with MCP endpoints)
```python
service = SensorManagerService()
response = await service.connect_sensor(connection_request)
```
Configuration
All hardware components use the unified configuration system:
- Environment variables with MINDTRACE_HW_ prefix
- JSON configuration files
- Programmatic configuration via dataclasses
- Hierarchical configuration inheritance
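The environment-variable override layer can be sketched as follows. The dataclass fields and the exact precedence shown here are illustrative assumptions, not the package's real configuration schema; only the MINDTRACE_HW_ prefix comes from the source.

```python
import os
from dataclasses import dataclass

@dataclass
class CameraDefaults:
    """Illustrative configuration dataclass; field names are assumptions."""
    retrieve_retry_count: int = 3
    timeout_ms: int = 5000

def from_env(prefix="MINDTRACE_HW_", env=os.environ):
    # Environment variables override dataclass defaults, mirroring the
    # hierarchy described above (env > file > programmatic defaults).
    cfg = CameraDefaults()
    for field in ("retrieve_retry_count", "timeout_ms"):
        raw = env.get(prefix + field.upper())
        if raw is not None:
            setattr(cfg, field, int(raw))
    return cfg
```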
Thread Safety
All hardware managers are thread-safe and can be used concurrently from multiple threads without interference.
cameras
Camera module for mindtrace hardware.
Provides unified camera management across different camera manufacturers with graceful SDK handling and comprehensive error management.
CameraBackend
CameraBackend(
camera_name: Optional[str] = None,
camera_config: Optional[str] = None,
img_quality_enhancement: Optional[bool] = None,
retrieve_retry_count: Optional[int] = None,
)
Bases: MindtraceABC
Abstract base class for all camera implementations.
This class defines the async interface that all camera backends must implement to ensure consistent behavior across different camera types and manufacturers.
Thread Model
Backends declare their threading requirements via the REQUIRES_THREAD_AFFINITY class attribute:

- When True, a dedicated single-thread executor is created per camera instance to ensure all SDK calls for that camera execute on the same OS thread. This is required by SDKs like Pypylon and Harvesters that bind camera objects to the thread that opened them.
- When False, blocking calls are dispatched via asyncio.to_thread() using the default shared thread pool. This is suitable for thread-safe SDKs like OpenCV.

All blocking SDK calls should use the _run_blocking() method, which automatically selects the appropriate execution strategy based on REQUIRES_THREAD_AFFINITY.
Subclass Requirements
- Set REQUIRES_THREAD_AFFINITY = True if the SDK requires thread affinity
- Use _run_blocking() for all SDK calls that may block
- Call await self._cleanup_executor() in close() to release thread resources
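The executor selection described above can be sketched as follows. `DemoBackend` is a simplified, self-contained stand-in for `CameraBackend`; its `_run_blocking` mirrors the documented behavior rather than reproducing the real implementation.

```python
import asyncio
from concurrent.futures import ThreadPoolExecutor

class DemoBackend:
    """Simplified stand-in for CameraBackend showing how
    REQUIRES_THREAD_AFFINITY selects the execution strategy."""
    REQUIRES_THREAD_AFFINITY = True

    def __init__(self):
        # One single-thread executor per camera keeps every SDK call on
        # the same OS thread, as Pypylon/Harvesters require.
        self._executor = (
            ThreadPoolExecutor(max_workers=1)
            if self.REQUIRES_THREAD_AFFINITY else None
        )

    async def _run_blocking(self, fn, *args):
        if self._executor is not None:
            # Thread-affine path: always the same dedicated thread.
            return await asyncio.get_running_loop().run_in_executor(
                self._executor, fn, *args
            )
        # Thread-safe SDK path: default shared thread pool.
        return await asyncio.to_thread(fn, *args)

    async def close(self):
        # Analogue of the documented _cleanup_executor() requirement.
        if self._executor is not None:
            self._executor.shutdown(wait=True)
```

A quick way to see the guarantee: two `_run_blocking` calls on the same instance report the same thread id.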
Attributes:
| Name | Type | Description |
|---|---|---|
| REQUIRES_THREAD_AFFINITY | bool | Class attribute indicating thread affinity requirement |
| camera_name | | Unique identifier for the camera |
| camera_config_file | | Path to camera configuration file |
| img_quality_enhancement | | Whether image quality enhancement is enabled |
| retrieve_retry_count | | Number of retries for image retrieval |
| camera | Optional[Any] | The initialized camera object (implementation-specific) |
| device_manager | Optional[Any] | Device manager object (implementation-specific) |
| initialized | bool | Camera initialization status |
Initialize base camera with configuration integration.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| camera_name | Optional[str] | Unique identifier for the camera (auto-generated if None) | None |
| camera_config | Optional[str] | Path to camera configuration file | None |
| img_quality_enhancement | Optional[bool] | Whether to apply image quality enhancement (uses config default if None) | None |
| retrieve_retry_count | Optional[int] | Number of retries for image retrieval (uses config default if None) | None |
setup_camera
async
Common setup method for camera initialization.
This method provides a standardized setup pattern that can be used by all camera backends. It calls the abstract initialize() method and handles common initialization patterns.
Raises:
| Type | Description |
|---|---|
| CameraNotFoundError | If camera cannot be found |
| CameraInitializationError | If camera initialization fails |
| CameraConnectionError | If camera connection fails |
set_bandwidth_limit
async
Set GigE camera bandwidth limit in Mbps.
set_inter_packet_delay
async
Set inter-packet delay for network traffic control.
set_capture_timeout
async
Set capture timeout in milliseconds.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| timeout_ms | int | Timeout value in milliseconds | required |
Note
This is a runtime-configurable parameter that can be changed without reinitializing the camera.
get_capture_timeout
async
Get current capture timeout in milliseconds.
Returns:
| Type | Description |
|---|---|
| int | Current timeout value in milliseconds |
get_lens_status
async
Get liquid lens hardware state.
Returns:
| Type | Description |
|---|---|
| Dict[str, Any] | Dict describing the liquid lens hardware state |
set_optical_power
async
Set lens optical power in diopters (manual focus).
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| diopters | float | Target optical power within the lens range. | required |
get_optical_power_range
async
Get optical power range [min, max] in diopters.
trigger_autofocus
async
Trigger one-shot autofocus.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| accuracy | str | Autofocus accuracy mode: "Fast", "Normal", or "Accurate". | 'Normal' |

Returns:
| Type | Description |
|---|---|
| bool | True when autofocus completes successfully. |
get_focus_config
async
Get current focus/autofocus configuration.
Returns:
| Type | Description |
|---|---|
| Dict[str, Any] | Dict with keys: accuracy, stepper, stepper_lower_limit, stepper_upper_limit, roi_size, focus_source, edge_detection, roi_offset_x, roi_offset_y. |
backends
Camera backends for different manufacturers and types.
This module provides camera backend implementations for the Mindtrace hardware system. Each backend implements the CameraBackend interface for consistent camera operations.
Available Backends
- CameraBackend: Abstract base class defining the camera interface
- BaslerCameraBackend: Industrial cameras from Basler (when available)
- OpenCVCameraBackend: USB cameras and webcams via OpenCV (when available)
- GenICamCameraBackend: GenICam-compliant cameras via Harvesters (when available)
Usage:
```python
from mindtrace.hardware.cameras.backends import CameraBackend
from mindtrace.hardware.cameras.backends.basler import BaslerCameraBackend
from mindtrace.hardware.cameras.backends.opencv import OpenCVCameraBackend
from mindtrace.hardware.cameras.backends.genicam import GenICamCameraBackend
```
Configuration
Camera backends integrate with the Mindtrace configuration system to provide consistent default values and settings across all camera types.
CameraBackend
Re-exported from the cameras module; see the full CameraBackend reference above.
basler
Basler Camera Backend
Provides support for Basler cameras via pypylon SDK with mock implementation for testing.
Components
- BaslerCameraBackend: Real Basler camera implementation (requires pypylon SDK)
- MockBaslerCameraBackend: Mock implementation for testing and development
Requirements
- Real cameras: pypylon SDK (Pylon SDK for Python)
- Mock cameras: No additional dependencies
Installation
- Install Pylon SDK from Basler
- pip install pypylon
- Configure camera permissions (Linux may require udev rules)
Usage
```python
from mindtrace.hardware.cameras.backends.basler import BaslerCameraBackend, MockBaslerCameraBackend
```
Real camera
```python
if BASLER_AVAILABLE:
    camera = BaslerCameraBackend("camera_name")
    success, cam_obj, remote_obj = await camera.initialize()  # Initialize first
    if success:
        image = await camera.capture()
        await camera.close()
```
Mock camera (always available)
```python
mock_camera = MockBaslerCameraBackend("mock_cam_0")
success, cam_obj, remote_obj = await mock_camera.initialize()  # Initialize first
if success:
    image = await mock_camera.capture()
    await mock_camera.close()
```
BaslerCameraBackend(
camera_name: str,
camera_config: Optional[str] = None,
img_quality_enhancement: Optional[bool] = None,
retrieve_retry_count: Optional[int] = None,
multicast_enabled: Optional[bool] = None,
target_ips: Optional[List[str]] = None,
multicast_group: Optional[str] = None,
multicast_port: Optional[int] = None,
**backend_kwargs
)
Bases: CameraBackend
Basler camera backend using the pypylon SDK.
This backend provides comprehensive support for Basler cameras including hardware triggers, exposure control, ROI settings, and image enhancement.
Thread Model
The pypylon SDK requires thread affinity - all SDK operations for a camera must execute on the same OS thread that opened it. This backend uses a dedicated single-thread executor per camera instance to satisfy this requirement, enabling reliable multi-camera concurrent operations.
Capture operations are atomic: the entire capture sequence (trigger, retrieve, convert) executes as a single blocking call on the dedicated thread, preventing thread-switching issues.
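The atomic capture sequence can be sketched like this; `run_blocking` and the `sdk` methods below are hypothetical stand-ins for the backend's dedicated-thread dispatcher and the underlying pypylon calls, not the real API.

```python
import asyncio

async def capture_atomic(run_blocking, sdk):
    """Sketch of the atomic capture pattern: the whole trigger ->
    retrieve -> convert sequence runs as ONE blocking call on the
    camera's dedicated thread, so no thread switch can occur mid-capture.
    `run_blocking` and `sdk` are hypothetical stand-ins."""
    def _capture():
        sdk.execute_software_trigger()          # trigger (hypothetical call)
        raw = sdk.retrieve_result(timeout_ms=5000)  # retrieve
        return sdk.convert_to_bgr(raw)          # convert
    # One dispatch = one thread hop; the three steps never interleave
    # with other SDK calls on that camera.
    return await run_blocking(_capture)
```

Bundling the three steps into a single callable is what prevents the event loop from interleaving another coroutine's SDK call between trigger and retrieve.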
Features
- Full pypylon SDK integration for USB3 and GigE cameras
- Hardware trigger and continuous capture modes
- Region of Interest (ROI) control
- Automatic and manual exposure/gain control
- CLAHE image quality enhancement
- Pylon Feature Stream (.pfs) configuration import/export
- Multicast streaming support for GigE cameras
Requirements
- Basler Pylon SDK installed on system
- pypylon package (pip install pypylon)
- OpenCV for image processing
Example:
```python
from mindtrace.hardware.cameras.backends.basler import BaslerCameraBackend

async with BaslerCameraBackend("cam1") as camera:
    await camera.set_exposure(20000)
    await camera.set_triggermode("continuous")
    image = await camera.capture()
```
Attributes:
| Name | Type | Description |
|---|---|---|
| camera | Optional[Any] | Underlying pypylon InstantCamera object |
| triggermode | | Current trigger mode ("continuous" or "trigger") |
| timeout_ms | | Capture timeout in milliseconds |
| buffer_count | | Number of frame buffers for streaming |
| converter | | Pypylon image format converter |
| grabbing_mode | | Pylon grabbing strategy |
| multicast_enabled | | Whether multicast streaming is enabled |
Initialize Basler camera with configurable parameters.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| camera_name | str | Camera identifier (serial number, IP, or user-defined name) | required |
| camera_config | Optional[str] | Path to Pylon Feature Stream (.pfs) file (optional) | None |
| img_quality_enhancement | Optional[bool] | Enable CLAHE image enhancement (uses config default if None) | None |
| retrieve_retry_count | Optional[int] | Number of capture retry attempts (uses config default if None) | None |
| multicast_enabled | Optional[bool] | Enable multicast streaming mode (uses config default if None) | None |
| target_ips | Optional[List[str]] | List of target IP addresses for multicast discovery (optional) | None |
| multicast_group | Optional[str] | Multicast group IP address (uses config default if None) | None |
| multicast_port | Optional[int] | Multicast port number (uses config default if None) | None |
| **backend_kwargs | | Backend-specific parameters: pixel_format (default pixel format), buffer_count (number of frame buffers), timeout_ms (capture timeout in milliseconds); each uses the config default if None | {} |
Raises:
| Type | Description |
|---|---|
| SDKNotAvailableError | If pypylon SDK is not available |
| CameraConfigurationError | If configuration is invalid |
| CameraInitializationError | If camera initialization fails |
staticmethod
get_available_cameras(
include_details: bool = False, target_ips: Optional[List[str]] = None
) -> Union[List[str], Dict[str, Dict[str, str]]]
Get available Basler cameras.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| include_details | bool | If True, return detailed information | False |
| target_ips | Optional[List[str]] | Optional list of IP addresses to specifically discover | None |

Returns:
| Type | Description |
|---|---|
| Union[List[str], Dict[str, Dict[str, str]]] | List of camera names (user-defined names preferred, serial numbers as fallback) or dict with details |

Raises:
| Type | Description |
|---|---|
| SDKNotAvailableError | If Basler SDK is not available |
| HardwareOperationError | If camera discovery fails |
async
classmethod
discover_async(
include_details: bool = False, target_ips: Optional[List[str]] = None
) -> Union[List[str], Dict[str, Dict[str, str]]]
Async wrapper for get_available_cameras() - runs discovery in threadpool.
Use this instead of get_available_cameras() when calling from async context to avoid blocking the event loop during camera enumeration.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| include_details | bool | If True, return a dict of details per camera. | False |
| target_ips | Optional[List[str]] | Optional list of specific IP addresses to target. | None |

Returns:
| Type | Description |
|---|---|
| Union[List[str], Dict[str, Dict[str, str]]] | List of camera names or dict of details. |
async
Initialize the camera connection.
This searches for the camera by name, serial number, or IP and establishes a connection if found. Uses multicast-aware discovery if enabled.
Returns:
| Type | Description |
|---|---|
| Tuple[bool, Any, Any] | Tuple of (success status, camera object, None) |

Raises:
| Type | Description |
|---|---|
| CameraNotFoundError | If no cameras found or specified camera not found |
| CameraInitializationError | If camera initialization fails |
| CameraConnectionError | If camera connection fails |
async
Configure multicast streaming settings for the camera.
This method sets up multicast parameters when multicast mode is enabled. It configures the camera using the StreamGrabber interface for multicast streaming.
Raises:
| Type | Description |
|---|---|
| CameraConnectionError | If camera is not initialized |
| CameraConfigurationError | If multicast configuration fails |
| HardwareOperationError | If streaming configuration fails |
Get image quality enhancement setting.
Set image quality enhancement setting.
async
Get the supported exposure time range in microseconds.
Returns:
| Type | Description |
|---|---|
| List[Union[int, float]] | List with [min_exposure, max_exposure] in microseconds |

Raises:
| Type | Description |
|---|---|
| CameraConnectionError | If camera is not initialized or accessible |
| HardwareOperationError | If exposure range retrieval fails |
async
Get current exposure time in microseconds.
Returns:
| Type | Description |
|---|---|
| float | Current exposure time |

Raises:
| Type | Description |
|---|---|
| CameraConnectionError | If camera is not initialized or accessible |
| HardwareOperationError | If exposure retrieval fails |
async
Set the camera exposure time in microseconds.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| exposure_value | | Exposure time in microseconds | required |

Raises:
| Type | Description |
|---|---|
| CameraConnectionError | If camera is not initialized or accessible |
| CameraConfigurationError | If exposure value is out of range |
| HardwareOperationError | If exposure setting fails |
async
Get current trigger mode.
Returns:
| Type | Description |
|---|---|
| str | "continuous" or "trigger" |

Raises:
| Type | Description |
|---|---|
| CameraConnectionError | If camera is not initialized or accessible |
| HardwareOperationError | If trigger mode retrieval fails |
async
Set the camera's trigger mode for image acquisition.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| triggermode | str | Trigger mode ("continuous" or "trigger") | 'continuous' |

Raises:
| Type | Description |
|---|---|
| CameraConnectionError | If camera is not initialized or accessible |
| CameraConfigurationError | If trigger mode is invalid |
| HardwareOperationError | If trigger mode setting fails |
async
Capture a single image from the camera.
In continuous mode, returns the latest available frame. In trigger mode, executes a software trigger and waits for the image.
This method runs the entire capture operation atomically on a dedicated thread to ensure thread affinity for pypylon SDK calls. This is critical for multi-camera concurrent operations.
Returns:
| Type | Description |
|---|---|
| ndarray | Image array in BGR format |

Raises:
| Type | Description |
|---|---|
| CameraConnectionError | If camera is not initialized or accessible |
| CameraCaptureError | If image capture fails |
| CameraTimeoutError | If capture times out |
async
Check if camera is connected and operational.
Returns:
| Type | Description |
|---|---|
| bool | True if connected and operational, False otherwise |
async
Import camera configuration from common JSON format.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| config_path | str | Path to configuration file | required |

Raises:
| Type | Description |
|---|---|
| CameraConnectionError | If camera is not initialized |
| CameraConfigurationError | If configuration import fails |
async
Export current camera configuration to common JSON format.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| config_path | str | Path to save the configuration file | required |

Raises:
| Type | Description |
|---|---|
| CameraConnectionError | If camera is not initialized |
| CameraConfigurationError | If configuration export fails |
async
Set the Region of Interest (ROI) for image acquisition.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| x | int | X offset from sensor top-left | required |
| y | int | Y offset from sensor top-left | required |
| width | int | ROI width | required |
| height | int | ROI height | required |

Raises:
| Type | Description |
|---|---|
| CameraConnectionError | If camera is not initialized |
| CameraConfigurationError | If ROI parameters are invalid |
| HardwareOperationError | If ROI setting fails |
async
Get current Region of Interest settings.
Returns:
| Type | Description |
|---|---|
| Dict[str, int] | Dictionary with x, y, width, height |

Raises:
| Type | Description |
|---|---|
| CameraConnectionError | If camera is not initialized |
| HardwareOperationError | If ROI retrieval fails |
async
Reset ROI to maximum sensor area.
Raises:
| Type | Description |
|---|---|
| CameraConnectionError | If camera is not initialized |
| HardwareOperationError | If ROI reset fails |
async
Set the camera's gain value.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| gain | float | Gain value (camera-specific range) | required |

Raises:
| Type | Description |
|---|---|
| CameraConnectionError | If camera is not initialized |
| CameraConfigurationError | If gain value is out of range |
| HardwareOperationError | If gain setting fails |
async
Get current camera gain.
Returns:
| Type | Description |
|---|---|
| float | Current gain value |

Raises:
| Type | Description |
|---|---|
| CameraConnectionError | If camera is not initialized |
| HardwareOperationError | If gain retrieval fails |
async
Get camera gain range.
Returns:
| Type | Description |
|---|---|
| List[Union[int, float]] | List containing [min_gain, max_gain] |

Raises:
| Type | Description |
|---|---|
| CameraConnectionError | If camera is not initialized |
| HardwareOperationError | If gain range retrieval fails |
async
Set GigE camera bandwidth limit in Mbps.
async
Set inter-packet delay for network traffic control.
async
Set capture timeout in milliseconds.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| timeout_ms | int | Timeout value in milliseconds | required |

Raises:
| Type | Description |
|---|---|
| ValueError | If timeout_ms is negative |
async
Get current capture timeout in milliseconds.
Returns:
| Type | Description |
|---|---|
| int | Current timeout value in milliseconds |
async
Get available white balance modes.
Returns:
| Type | Description |
|---|---|
| List[str] | List of available white balance modes (lowercase for API compatibility) |
async
Get camera width range.
Returns:
| Type | Description |
|---|---|
| List[int] | List containing [min_width, max_width] |

Raises:
| Type | Description |
|---|---|
| CameraConnectionError | If camera is not initialized |
| HardwareOperationError | If width range retrieval fails |
async
Get camera height range.
Returns:
| Type | Description |
|---|---|
| List[int] | List containing [min_height, max_height] |

Raises:
| Type | Description |
|---|---|
| CameraConnectionError | If camera is not initialized |
| HardwareOperationError | If height range retrieval fails |
async
Get available pixel formats.
Returns:
| Type | Description |
|---|---|
| List[str] | List of available pixel formats |

Raises:
| Type | Description |
|---|---|
| CameraConnectionError | If camera is not initialized |
| HardwareOperationError | If pixel format range retrieval fails |
async
Get current pixel format.
Returns:
| Type | Description |
|---|---|
| str | Current pixel format |

Raises:
| Type | Description |
|---|---|
| CameraConnectionError | If camera is not initialized |
| HardwareOperationError | If pixel format retrieval fails |
async
Set pixel format.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| pixel_format | str | Pixel format to set | required |

Raises:
| Type | Description |
|---|---|
| CameraConnectionError | If camera is not initialized |
| CameraConfigurationError | If pixel format is invalid |
| HardwareOperationError | If pixel format setting fails |
async
Get the current white balance auto setting.
Returns:
| Type | Description |
|---|---|
| str | White balance auto setting ("off", "once", "continuous") |

Raises:
| Type | Description |
|---|---|
| CameraConnectionError | If camera is not initialized |
| HardwareOperationError | If white balance retrieval fails |
async
Set the white balance auto mode.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| value | str | White balance mode ("off", "once", "continuous") | required |

Raises:
| Type | Description |
|---|---|
| CameraConnectionError | If camera is not initialized |
| CameraConfigurationError | If white balance mode is invalid |
| HardwareOperationError | If white balance setting fails |
async
Get available trigger modes for Basler cameras.
Returns:
| Type | Description |
|---|---|
| List[str] | List of available trigger modes based on GenICam TriggerMode and TriggerSource |
async
Get bandwidth limit range for GigE cameras.
Returns:
| Type | Description |
|---|---|
| List[float] | List containing [min_bandwidth, max_bandwidth] in Mbps |
async
Get packet size range for GigE cameras.
Returns:
| Type | Description |
|---|---|
| List[int] | List containing [min_packet_size, max_packet_size] in bytes |
async
Get inter-packet delay range for GigE cameras.
Returns:
| Type | Description |
|---|---|
| List[int] | List containing [min_delay, max_delay] in ticks |
async
Set lens optical power in diopters (manual focus).
async
Get optical power range [min, max] in diopters.
async
Trigger one-shot autofocus.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| accuracy | str | "Fast", "Normal", or "Accurate". | 'Normal' |

Returns:
| Type | Description |
|---|---|
| bool | True when autofocus completes successfully. |
async
Get current focus/autofocus configuration.
MockBaslerCameraBackend(
camera_name: str,
camera_config: Optional[str] = None,
img_quality_enhancement: Optional[bool] = None,
retrieve_retry_count: Optional[int] = None,
**backend_kwargs
)
Bases: CameraBackend
Mock Basler Camera Backend Implementation
This class provides a mock implementation of the Basler camera backend for testing and development. It simulates Basler camera functionality without requiring actual hardware, with configurable behavior and error simulation.
Features
- Complete simulation of Basler camera API
- Configurable image generation with realistic patterns
- Error simulation for testing error handling
- Configuration import/export simulation
- Camera control features (exposure, ROI, trigger modes, etc.)
- Realistic timing and behavior simulation
Usage:
```python
from mindtrace.hardware.cameras.backends.basler import MockBaslerCameraBackend

camera = MockBaslerCameraBackend("mock_camera_1")
await camera.set_exposure(20000)
image = await camera.capture()
await camera.close()
```
Error Simulation
Enable error simulation via environment variables:
- MOCK_BASLER_FAIL_INIT: Simulate initialization failure
- MOCK_BASLER_FAIL_CAPTURE: Simulate capture failure
- MOCK_BASLER_TIMEOUT: Simulate timeout errors
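A sketch of how a test suite might drive these switches. The `read_mock_flags` helper below is an assumption about how the mock could parse its environment (including which values count as truthy), not the backend's actual code; only the variable names come from the source.

```python
import os

def read_mock_flags(env=os.environ):
    """Hypothetical helper mirroring how the mock backend might parse its
    error-simulation switches; the truthiness rules are an assumption."""
    truthy = {"1", "true", "yes", "on"}
    return {
        "fail_init": env.get("MOCK_BASLER_FAIL_INIT", "").lower() in truthy,
        "fail_capture": env.get("MOCK_BASLER_FAIL_CAPTURE", "").lower() in truthy,
        "simulate_timeout": env.get("MOCK_BASLER_TIMEOUT", "").lower() in truthy,
    }
```

Note that the simulate_fail_init / simulate_fail_capture / simulate_timeout constructor kwargs documented below override these environment switches, so per-instance configuration wins in tests.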
Attributes:
| Name | Type | Description |
|---|---|---|
| initialized | | Whether camera was successfully initialized |
| camera_name | | Name/identifier of the mock camera |
| triggermode | | Current trigger mode ("continuous" or "trigger") |
| img_quality_enhancement | | Current image enhancement setting |
| timeout_ms | | Capture timeout in milliseconds |
| retrieve_retry_count | | Number of capture retry attempts |
| exposure_time | | Current exposure time in microseconds |
| gain | | Current gain value |
| roi | | Current region of interest settings |
| white_balance_mode | | Current white balance mode |
| image_counter | | Counter for generating unique images |
| fail_init | | Whether to simulate initialization failure |
| fail_capture | | Whether to simulate capture failure |
| simulate_timeout | | Whether to simulate timeout errors |
Initialize mock Basler camera.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| camera_name | str | Camera identifier | required |
| camera_config | Optional[str] | Path to configuration file (simulated) | None |
| img_quality_enhancement | Optional[bool] | Enable image enhancement simulation (uses config default if None) | None |
| retrieve_retry_count | Optional[int] | Number of capture retry attempts (uses config default if None) | None |
| **backend_kwargs | | Backend-specific parameters: pixel_format (simulated); buffer_count (simulated); timeout_ms (timeout in milliseconds); fast_mode (if True, skip all sleep delays for fast unit tests, default False); simulate_fail_init (if True, simulate initialization failure, overrides env); simulate_fail_capture (if True, simulate capture failure, overrides env); simulate_timeout (if True, simulate timeout on capture, overrides env); simulate_cancel (if True, simulate asyncio cancellation during capture); synthetic_width / synthetic_height (override synthetic image size); synthetic_pattern (one of "auto", "gradient", "checkerboard", "circular", "noise"); synthetic_checker_size (checker size used when pattern is checkerboard); synthetic_overlay_text (if False, disables text overlays in synthetic images) | {} |
Raises:
| Type | Description |
|---|---|
CameraConfigurationError
|
If configuration is invalid |
CameraInitializationError
|
If initialization fails (when simulated) |
staticmethod
get_available_cameras(
include_details: bool = False,
) -> Union[List[str], Dict[str, Dict[str, str]]]
Get available mock Basler cameras.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
include_details
|
bool
|
If True, return detailed information |
False
|
Returns:
| Type | Description |
|---|---|
Union[List[str], Dict[str, Dict[str, str]]]
|
List of mock camera names or dict with details |
async
Initialize the mock camera connection.

Returns:

| Type | Description |
|---|---|
| Tuple[bool, Any, Any] | Tuple of (success status, mock camera object, None) |

Raises:

| Type | Description |
|---|---|
| CameraNotFoundError | If no cameras found or specified camera not found |
| CameraInitializationError | If initialization fails (when simulated) |
| CameraConnectionError | If camera connection fails |

async
Capture a single image from the mock camera.

Returns:

| Type | Description |
|---|---|
| ndarray | Captured BGR image array |

Raises:

| Type | Description |
|---|---|
| CameraConnectionError | If camera is not initialized or accessible |
| CameraCaptureError | If image capture fails |
| CameraTimeoutError | If capture times out |

Enter grabbing state, optionally updating grabbing mode.

Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| grabbing_mode | Optional[str] | Optional grabbing mode string; if provided, updates current mode | None |

Get image quality enhancement setting.

Returns:

| Type | Description |
|---|---|
| bool | True if enhancement is enabled, otherwise False |

Set image quality enhancement setting.

Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| value | bool | True to enable enhancement, False to disable | required |

async
Get the supported exposure time range in microseconds.

Returns:

| Type | Description |
|---|---|
| List[Union[int, float]] | List with [min_exposure, max_exposure] in microseconds |

async
Get current exposure time in microseconds.

Returns:

| Type | Description |
|---|---|
| float | Current exposure time |

async
Set the camera exposure time in microseconds.

Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| exposure | Union[int, float] | Exposure time in microseconds | required |

Raises:

| Type | Description |
|---|---|
| CameraConfigurationError | If exposure value is out of range |

async
Get current trigger mode.

Returns:

| Type | Description |
|---|---|
| str | Current trigger mode |

async
Set trigger mode.

Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| triggermode | str | Trigger mode ("continuous" or "trigger") | 'continuous' |

Raises:

| Type | Description |
|---|---|
| CameraConfigurationError | If trigger mode is invalid |

async
Check if mock camera is connected and operational.

Returns:

| Type | Description |
|---|---|
| bool | True if connected and operational, False otherwise |
async
Import camera configuration from common JSON format.

Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| config_path | str | Path to configuration file | required |

Raises:

| Type | Description |
|---|---|
| CameraConfigurationError | If configuration file is not found or invalid |

async
Export camera configuration to common JSON format.

Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| config_path | str | Path to save configuration file | required |

async
Set Region of Interest (ROI).

Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| x | int | ROI x offset | required |
| y | int | ROI y offset | required |
| width | int | ROI width | required |
| height | int | ROI height | required |

Raises:

| Type | Description |
|---|---|
| CameraConfigurationError | If ROI parameters are invalid |

async
Get current Region of Interest (ROI).

Returns:

| Type | Description |
|---|---|
| Dict[str, int] | Dictionary with ROI parameters |

async
Set camera gain.

Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| gain | Union[int, float] | Gain value | required |

Raises:

| Type | Description |
|---|---|
| CameraConfigurationError | If gain value is out of range |

async
Get the supported gain range.

Returns:

| Type | Description |
|---|---|
| List[Union[int, float]] | List with [min_gain, max_gain] |

async
Get current camera gain.

Returns:

| Type | Description |
|---|---|
| float | Current gain value |

async
Get current white balance mode.

Returns:

| Type | Description |
|---|---|
| str | Current white balance mode |

async
Set white balance mode.

Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| value | str | White balance mode | required |

async
Get available white balance modes.

Returns:

| Type | Description |
|---|---|
| List[str] | List of available white balance modes (lowercase for API compatibility) |

async
Get available pixel formats.

Returns:

| Type | Description |
|---|---|
| List[str] | List of available pixel formats |

async
Get current pixel format.

Returns:

| Type | Description |
|---|---|
| str | Current pixel format |

async
Set pixel format.

Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| pixel_format | str | Pixel format to set | required |

Raises:

| Type | Description |
|---|---|
| CameraConfigurationError | If pixel format is not supported |

async
Get camera width range.

Returns:

| Type | Description |
|---|---|
| List[int] | List containing [min_width, max_width] |

async
Get camera height range.

Returns:

| Type | Description |
|---|---|
| List[int] | List containing [min_height, max_height] |

async
Set GigE camera bandwidth limit in Mbps (simulated).
async
Set GigE packet size for network optimization (simulated).
async
Set inter-packet delay for network traffic control (simulated).
async
Get current inter-packet delay (simulated).
async
Set capture timeout in milliseconds.

Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| timeout_ms | int | Timeout value in milliseconds | required |

Raises:

| Type | Description |
|---|---|
| ValueError | If timeout_ms is negative |

async
Get current capture timeout in milliseconds.

Returns:

| Type | Description |
|---|---|
| int | Current timeout value in milliseconds |

async
Get available trigger modes for mock Basler cameras.
async
Get bandwidth limit range for mock GigE cameras.
async
Get packet size range for mock GigE cameras.
async
Get inter-packet delay range for mock GigE cameras.
Basler Camera Backend Module
BaslerCameraBackend(
camera_name: str,
camera_config: Optional[str] = None,
img_quality_enhancement: Optional[bool] = None,
retrieve_retry_count: Optional[int] = None,
multicast_enabled: Optional[bool] = None,
target_ips: Optional[List[str]] = None,
multicast_group: Optional[str] = None,
multicast_port: Optional[int] = None,
**backend_kwargs
)
Bases: CameraBackend
Basler camera backend using the pypylon SDK.
This backend provides comprehensive support for Basler cameras including hardware triggers, exposure control, ROI settings, and image enhancement.
Thread Model
The pypylon SDK requires thread affinity - all SDK operations for a camera must execute on the same OS thread that opened it. This backend uses a dedicated single-thread executor per camera instance to satisfy this requirement, enabling reliable multi-camera concurrent operations.
Capture operations are atomic: the entire capture sequence (trigger, retrieve, convert) executes as a single blocking call on the dedicated thread, preventing thread-switching issues.
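The dedicated single-thread executor described above can be sketched with the standard library alone. This is an illustrative sketch, not the backend's actual code; `AffinityBoundDevice` and `run_blocking` are hypothetical names standing in for the per-camera executor mechanism:

```python
# Illustrative sketch: a one-worker executor guarantees that every
# blocking SDK call for a camera runs on the same OS thread, which is
# the thread-affinity property pypylon requires.
import asyncio
import threading
from concurrent.futures import ThreadPoolExecutor


class AffinityBoundDevice:
    """Runs all blocking operations on one dedicated OS thread."""

    def __init__(self) -> None:
        # max_workers=1 pins every submitted call to a single thread.
        self._executor = ThreadPoolExecutor(max_workers=1)

    async def run_blocking(self, fn, *args):
        loop = asyncio.get_running_loop()
        return await loop.run_in_executor(self._executor, fn, *args)

    def close(self) -> None:
        self._executor.shutdown(wait=True)


async def main():
    device = AffinityBoundDevice()
    # Each call reports the thread it executed on; all ids are equal.
    ids = [await device.run_blocking(threading.get_ident) for _ in range(3)]
    device.close()
    return ids


thread_ids = asyncio.run(main())
assert len(set(thread_ids)) == 1  # every call ran on the same thread
```

Because the whole capture sequence is submitted as one callable to this executor, trigger, retrieve, and convert cannot be interleaved across threads.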
Features
- Full pypylon SDK integration for USB3 and GigE cameras
- Hardware trigger and continuous capture modes
- Region of Interest (ROI) control
- Automatic and manual exposure/gain control
- CLAHE image quality enhancement
- Pylon Feature Stream (.pfs) configuration import/export
- Multicast streaming support for GigE cameras
Requirements
- Basler Pylon SDK installed on system
- pypylon package (pip install pypylon)
- OpenCV for image processing
Example::

    from mindtrace.hardware.cameras.backends.basler import BaslerCameraBackend

    async with BaslerCameraBackend("cam1") as camera:
        await camera.set_exposure(20000)
        await camera.set_triggermode("continuous")
        image = await camera.capture()
Attributes:

| Name | Type | Description |
|---|---|---|
| camera | Optional[Any] | Underlying pypylon InstantCamera object |
| triggermode | | Current trigger mode ("continuous" or "trigger") |
| timeout_ms | | Capture timeout in milliseconds |
| buffer_count | | Number of frame buffers for streaming |
| converter | | Pypylon image format converter |
| grabbing_mode | | Pylon grabbing strategy |
| multicast_enabled | | Whether multicast streaming is enabled |
Initialize Basler camera with configurable parameters.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| camera_name | str | Camera identifier (serial number, IP, or user-defined name) | required |
| camera_config | Optional[str] | Path to Pylon Feature Stream (.pfs) file (optional) | None |
| img_quality_enhancement | Optional[bool] | Enable CLAHE image enhancement (uses config default if None) | None |
| retrieve_retry_count | Optional[int] | Number of capture retry attempts (uses config default if None) | None |
| multicast_enabled | Optional[bool] | Enable multicast streaming mode (uses config default if None) | None |
| target_ips | Optional[List[str]] | List of target IP addresses for multicast discovery (optional) | None |
| multicast_group | Optional[str] | Multicast group IP address (uses config default if None) | None |
| multicast_port | Optional[int] | Multicast port number (uses config default if None) | None |
| **backend_kwargs | | Backend-specific parameters: pixel_format (uses config default if None), buffer_count (uses config default if None), timeout_ms (uses config default if None) | {} |

Raises:

| Type | Description |
|---|---|
| SDKNotAvailableError | If pypylon SDK is not available |
| CameraConfigurationError | If configuration is invalid |
| CameraInitializationError | If camera initialization fails |
staticmethod
get_available_cameras(
include_details: bool = False, target_ips: Optional[List[str]] = None
) -> Union[List[str], Dict[str, Dict[str, str]]]
Get available Basler cameras.

Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| include_details | bool | If True, return detailed information | False |
| target_ips | Optional[List[str]] | Optional list of IP addresses to specifically discover | None |

Returns:

| Type | Description |
|---|---|
| Union[List[str], Dict[str, Dict[str, str]]] | List of camera names (user-defined names preferred, serial numbers as fallback) or dict with details |

Raises:

| Type | Description |
|---|---|
| SDKNotAvailableError | If Basler SDK is not available |
| HardwareOperationError | If camera discovery fails |

async
classmethod
discover_async(
include_details: bool = False, target_ips: Optional[List[str]] = None
) -> Union[List[str], Dict[str, Dict[str, str]]]
Async wrapper for get_available_cameras() that runs discovery in a thread pool.
Use this instead of get_available_cameras() when calling from an async context to avoid blocking the event loop during camera enumeration.

Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| include_details | bool | If True, return a dict of details per camera | False |
| target_ips | Optional[List[str]] | Optional list of specific IP addresses to target | None |

Returns:

| Type | Description |
|---|---|
| Union[List[str], Dict[str, Dict[str, str]]] | List of camera names or dict of details |
async
Initialize the camera connection.
This searches for the camera by name, serial number, or IP and establishes a connection if found. Uses multicast-aware discovery if enabled.

Returns:

| Type | Description |
|---|---|
| Tuple[bool, Any, Any] | Tuple of (success status, camera object, None) |

Raises:

| Type | Description |
|---|---|
| CameraNotFoundError | If no cameras found or specified camera not found |
| CameraInitializationError | If camera initialization fails |
| CameraConnectionError | If camera connection fails |

async
Configure multicast streaming settings for the camera.
This method sets up multicast parameters when multicast mode is enabled. It configures the camera using the StreamGrabber interface for multicast streaming.

Raises:

| Type | Description |
|---|---|
| CameraConnectionError | If camera is not initialized |
| CameraConfigurationError | If multicast configuration fails |
| HardwareOperationError | If streaming configuration fails |
Get image quality enhancement setting.
Set image quality enhancement setting.
async
Get the supported exposure time range in microseconds.

Returns:

| Type | Description |
|---|---|
| List[Union[int, float]] | List with [min_exposure, max_exposure] in microseconds |

Raises:

| Type | Description |
|---|---|
| CameraConnectionError | If camera is not initialized or accessible |
| HardwareOperationError | If exposure range retrieval fails |

async
Get current exposure time in microseconds.

Returns:

| Type | Description |
|---|---|
| float | Current exposure time |

Raises:

| Type | Description |
|---|---|
| CameraConnectionError | If camera is not initialized or accessible |
| HardwareOperationError | If exposure retrieval fails |

async
Set the camera exposure time in microseconds.

Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| exposure_value | | Exposure time in microseconds | required |

Raises:

| Type | Description |
|---|---|
| CameraConnectionError | If camera is not initialized or accessible |
| CameraConfigurationError | If exposure value is out of range |
| HardwareOperationError | If exposure setting fails |

async
Get current trigger mode.

Returns:

| Type | Description |
|---|---|
| str | "continuous" or "trigger" |

Raises:

| Type | Description |
|---|---|
| CameraConnectionError | If camera is not initialized or accessible |
| HardwareOperationError | If trigger mode retrieval fails |

async
Set the camera's trigger mode for image acquisition.

Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| triggermode | str | Trigger mode ("continuous" or "trigger") | 'continuous' |

Raises:

| Type | Description |
|---|---|
| CameraConnectionError | If camera is not initialized or accessible |
| CameraConfigurationError | If trigger mode is invalid |
| HardwareOperationError | If trigger mode setting fails |
async
Capture a single image from the camera.
In continuous mode, returns the latest available frame. In trigger mode, executes a software trigger and waits for the image.
This method runs the entire capture operation atomically on a dedicated thread to ensure thread affinity for pypylon SDK calls. This is critical for multi-camera concurrent operations.
Returns:

| Type | Description |
|---|---|
| ndarray | Image array in BGR format |

Raises:

| Type | Description |
|---|---|
| CameraConnectionError | If camera is not initialized or accessible |
| CameraCaptureError | If image capture fails |
| CameraTimeoutError | If capture times out |
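The retry-then-timeout behavior implied by retrieve_retry_count and timeout_ms follows a common pattern that can be sketched with the standard library. This is a generic sketch under stated assumptions, not the backend's internal code; `capture_with_retries` and `CaptureTimeout` are hypothetical names:

```python
# Generic retry/timeout pattern mirroring the retrieve_retry_count /
# timeout_ms semantics documented above (illustrative only).
import asyncio


class CaptureTimeout(Exception):
    """Raised when every capture attempt exceeds the timeout."""


async def capture_with_retries(capture_once, retries, timeout_s):
    last_error = None
    for _ in range(retries):
        try:
            # Bound each attempt; retry on timeout up to `retries` times.
            return await asyncio.wait_for(capture_once(), timeout=timeout_s)
        except asyncio.TimeoutError as exc:
            last_error = exc
    raise CaptureTimeout("capture timed out after all retries") from last_error


async def fake_capture():
    # Stand-in for a single blocking capture attempt.
    return b"frame"


frame = asyncio.run(capture_with_retries(fake_capture, retries=3, timeout_s=0.5))
assert frame == b"frame"
```

In the real backend the per-attempt bound comes from timeout_ms and a failure surfaces as CameraTimeoutError rather than this sketch's hypothetical exception.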
async
Check if camera is connected and operational.

Returns:

| Type | Description |
|---|---|
| bool | True if connected and operational, False otherwise |

async
Import camera configuration from common JSON format.

Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| config_path | str | Path to configuration file | required |

Raises:

| Type | Description |
|---|---|
| CameraConnectionError | If camera is not initialized |
| CameraConfigurationError | If configuration import fails |

async
Export current camera configuration to common JSON format.

Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| config_path | str | Path where to save configuration file | required |

Raises:

| Type | Description |
|---|---|
| CameraConnectionError | If camera is not initialized |
| CameraConfigurationError | If configuration export fails |
async
Set the Region of Interest (ROI) for image acquisition.

Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| x | int | X offset from sensor top-left | required |
| y | int | Y offset from sensor top-left | required |
| width | int | ROI width | required |
| height | int | ROI height | required |

Raises:

| Type | Description |
|---|---|
| CameraConnectionError | If camera is not initialized |
| CameraConfigurationError | If ROI parameters are invalid |
| HardwareOperationError | If ROI setting fails |

async
Get current Region of Interest settings.

Returns:

| Type | Description |
|---|---|
| Dict[str, int] | Dictionary with x, y, width, height |

Raises:

| Type | Description |
|---|---|
| CameraConnectionError | If camera is not initialized |
| HardwareOperationError | If ROI retrieval fails |

async
Reset ROI to maximum sensor area.

Raises:

| Type | Description |
|---|---|
| CameraConnectionError | If camera is not initialized |
| HardwareOperationError | If ROI reset fails |

async
Set the camera's gain value.

Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| gain | float | Gain value (camera-specific range) | required |

Raises:

| Type | Description |
|---|---|
| CameraConnectionError | If camera is not initialized |
| CameraConfigurationError | If gain value is out of range |
| HardwareOperationError | If gain setting fails |

async
Get current camera gain.

Returns:

| Type | Description |
|---|---|
| float | Current gain value |

Raises:

| Type | Description |
|---|---|
| CameraConnectionError | If camera is not initialized |
| HardwareOperationError | If gain retrieval fails |

async
Get camera gain range.

Returns:

| Type | Description |
|---|---|
| List[Union[int, float]] | List containing [min_gain, max_gain] |

Raises:

| Type | Description |
|---|---|
| CameraConnectionError | If camera is not initialized |
| HardwareOperationError | If gain range retrieval fails |
async
Set GigE camera bandwidth limit in Mbps.
async
Set inter-packet delay for network traffic control.
async
Set capture timeout in milliseconds.

Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| timeout_ms | int | Timeout value in milliseconds | required |

Raises:

| Type | Description |
|---|---|
| ValueError | If timeout_ms is negative |

async
Get current capture timeout in milliseconds.

Returns:

| Type | Description |
|---|---|
| int | Current timeout value in milliseconds |

async
Get available white balance modes.

Returns:

| Type | Description |
|---|---|
| List[str] | List of available white balance modes (lowercase for API compatibility) |

async
Get camera width range.

Returns:

| Type | Description |
|---|---|
| List[int] | List containing [min_width, max_width] |

Raises:

| Type | Description |
|---|---|
| CameraConnectionError | If camera is not initialized |
| HardwareOperationError | If width range retrieval fails |

async
Get camera height range.

Returns:

| Type | Description |
|---|---|
| List[int] | List containing [min_height, max_height] |

Raises:

| Type | Description |
|---|---|
| CameraConnectionError | If camera is not initialized |
| HardwareOperationError | If height range retrieval fails |

async
Get available pixel formats.

Returns:

| Type | Description |
|---|---|
| List[str] | List of available pixel formats |

Raises:

| Type | Description |
|---|---|
| CameraConnectionError | If camera is not initialized |
| HardwareOperationError | If pixel format range retrieval fails |

async
Get current pixel format.

Returns:

| Type | Description |
|---|---|
| str | Current pixel format |

Raises:

| Type | Description |
|---|---|
| CameraConnectionError | If camera is not initialized |
| HardwareOperationError | If pixel format retrieval fails |
async
Set pixel format.

Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| pixel_format | str | Pixel format to set | required |

Raises:

| Type | Description |
|---|---|
| CameraConnectionError | If camera is not initialized |
| CameraConfigurationError | If pixel format is invalid |
| HardwareOperationError | If pixel format setting fails |

async
Get the current white balance auto setting.

Returns:

| Type | Description |
|---|---|
| str | White balance auto setting ("off", "once", "continuous") |

Raises:

| Type | Description |
|---|---|
| CameraConnectionError | If camera is not initialized |
| HardwareOperationError | If white balance retrieval fails |

async
Set the white balance auto mode.

Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| value | str | White balance mode ("off", "once", "continuous") | required |

Raises:

| Type | Description |
|---|---|
| CameraConnectionError | If camera is not initialized |
| CameraConfigurationError | If white balance mode is invalid |
| HardwareOperationError | If white balance setting fails |
async
Get available trigger modes for Basler cameras.

Returns:

| Type | Description |
|---|---|
| List[str] | List of available trigger modes based on GenICam TriggerMode and TriggerSource |

async
Get bandwidth limit range for GigE cameras.

Returns:

| Type | Description |
|---|---|
| List[float] | List containing [min_bandwidth, max_bandwidth] in Mbps |

async
Get packet size range for GigE cameras.

Returns:

| Type | Description |
|---|---|
| List[int] | List containing [min_packet_size, max_packet_size] in bytes |

async
Get inter-packet delay range for GigE cameras.

Returns:

| Type | Description |
|---|---|
| List[int] | List containing [min_delay, max_delay] in ticks |

async
Set lens optical power in diopters (manual focus).
async
Get optical power range [min, max] in diopters.
async
Trigger one-shot autofocus.

Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| accuracy | str | "Fast", "Normal", or "Accurate" | 'Normal' |

Returns:

| Type | Description |
|---|---|
| bool | True when autofocus completes successfully |
async
Get current focus/autofocus configuration.
camera_backend
CameraBackend(
camera_name: Optional[str] = None,
camera_config: Optional[str] = None,
img_quality_enhancement: Optional[bool] = None,
retrieve_retry_count: Optional[int] = None,
)
Bases: MindtraceABC
Abstract base class for all camera implementations.
This class defines the async interface that all camera backends must implement to ensure consistent behavior across different camera types and manufacturers.
Thread Model
Backends declare their threading requirements via the REQUIRES_THREAD_AFFINITY
class attribute:
- When True, a dedicated single-thread executor is created per camera instance to ensure all SDK calls for that camera execute on the same OS thread. This is required by SDKs like Pypylon and Harvesters that bind camera objects to the thread that opened them.
- When False, blocking calls are dispatched via asyncio.to_thread() using the default shared thread pool. This is suitable for thread-safe SDKs like OpenCV.
All blocking SDK calls should use the _run_blocking() method, which automatically
selects the appropriate execution strategy based on REQUIRES_THREAD_AFFINITY.
Subclass Requirements
- Set REQUIRES_THREAD_AFFINITY = True if the SDK requires thread affinity
- Use _run_blocking() for all SDK calls that may block
- Call await self._cleanup_executor() in close() to release thread resources
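The dispatch strategy described above can be sketched in a few lines. This is an illustrative stand-in, not the mindtrace implementation; the class and method names (`ThreadAffineRunner`, `demo`) are hypothetical, while `REQUIRES_THREAD_AFFINITY`, `_run_blocking`, and `_cleanup_executor` follow the documented interface:

```python
import asyncio
import threading
from concurrent.futures import ThreadPoolExecutor


class ThreadAffineRunner:
    """Sketch of the _run_blocking dispatch strategy (illustrative only)."""

    REQUIRES_THREAD_AFFINITY = True

    def __init__(self) -> None:
        # One dedicated thread so every SDK call runs on the same OS thread.
        self._executor = (
            ThreadPoolExecutor(max_workers=1) if self.REQUIRES_THREAD_AFFINITY else None
        )

    async def _run_blocking(self, fn, *args):
        if self._executor is not None:
            # Thread-affine SDKs: always use the dedicated executor.
            loop = asyncio.get_running_loop()
            return await loop.run_in_executor(self._executor, fn, *args)
        # Thread-safe SDKs: use the default shared pool.
        return await asyncio.to_thread(fn, *args)

    async def _cleanup_executor(self) -> None:
        if self._executor is not None:
            self._executor.shutdown(wait=True)


async def demo() -> bool:
    runner = ThreadAffineRunner()
    # Both calls should land on the same dedicated OS thread.
    t1 = await runner._run_blocking(threading.get_ident)
    t2 = await runner._run_blocking(threading.get_ident)
    await runner._cleanup_executor()
    return t1 == t2
```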
Attributes:
| Name | Type | Description |
|---|---|---|
| REQUIRES_THREAD_AFFINITY | bool | Class attribute indicating thread affinity requirement |
| camera_name | | Unique identifier for the camera |
| camera_config_file | | Path to camera configuration file |
| img_quality_enhancement | | Whether image quality enhancement is enabled |
| retrieve_retry_count | | Number of retries for image retrieval |
| camera | Optional[Any] | The initialized camera object (implementation-specific) |
| device_manager | Optional[Any] | Device manager object (implementation-specific) |
| initialized | bool | Camera initialization status |
Initialize base camera with configuration integration.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| camera_name | Optional[str] | Unique identifier for the camera (auto-generated if None) | None |
| camera_config | Optional[str] | Path to camera configuration file | None |
| img_quality_enhancement | Optional[bool] | Whether to apply image quality enhancement (uses config default if None) | None |
| retrieve_retry_count | Optional[int] | Number of retries for image retrieval (uses config default if None) | None |
async
Common setup method for camera initialization.
This method provides a standardized setup pattern that can be used by all camera backends. It calls the abstract initialize() method and handles common initialization patterns.
Raises:
| Type | Description |
|---|---|
| CameraNotFoundError | If camera cannot be found |
| CameraInitializationError | If camera initialization fails |
| CameraConnectionError | If camera connection fails |
async
Set GigE camera bandwidth limit in Mbps.
async
Set inter-packet delay for network traffic control.
async
Set capture timeout in milliseconds.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| timeout_ms | int | Timeout value in milliseconds | required |
Note
This is a runtime-configurable parameter that can be changed without reinitializing the camera.
async
Get current capture timeout in milliseconds.
Returns:
| Type | Description |
|---|---|
| int | Current timeout value in milliseconds |
async
Get liquid lens hardware state.
Returns:
| Type | Description |
|---|---|
| Dict[str, Any] | Dict with keys: |
async
Set lens optical power in diopters (manual focus).
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| diopters | float | Target optical power within the lens range. | required |
async
Get optical power range [min, max] in diopters.
async
Trigger one-shot autofocus.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| accuracy | str | Autofocus accuracy mode: "Fast", "Normal", or "Accurate". | 'Normal' |
Returns:
| Type | Description |
|---|---|
| bool | True when autofocus completes successfully. |
async
Get current focus/autofocus configuration.
Returns:
| Type | Description |
|---|---|
| Dict[str, Any] | Dict with keys: accuracy, stepper, stepper_lower_limit, stepper_upper_limit, roi_size, focus_source, edge_detection, roi_offset_x, roi_offset_y. |
genicam
GenICam Camera Backend Module
GenICamCameraBackend(
camera_name: str,
camera_config: Optional[str] = None,
img_quality_enhancement: Optional[bool] = None,
retrieve_retry_count: Optional[int] = None,
**backend_kwargs
)
Bases: CameraBackend
GenICam camera backend using the Harvesters library.
This backend provides support for any GenICam-compliant camera via a GenTL Producer (.cti file), including cameras from Keyence, Allied Vision, FLIR, and others.
Thread Model
The Harvesters library requires thread affinity - ImageAcquirer operations must execute on the same OS thread that created the acquirer. This backend uses a dedicated single-thread executor per camera instance.
The Harvester instance (which manages the GenTL Producer) is shared as a singleton across all GenICam camera instances to prevent device conflicts. Only the per-camera ImageAcquirer operations use the dedicated executor.
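The shared-instance pattern described above can be sketched as a class-level singleton. The names here are illustrative stand-ins; the real backend manages an actual `harvesters.Harvester` object:

```python
class SharedHarvester:
    """Illustrative stand-in for the shared, class-level Harvester singleton."""

    _instance = None  # class-level slot shared by all camera instances

    @classmethod
    def get(cls) -> "SharedHarvester":
        # Create on first use; every later caller receives the same object,
        # so the GenTL Producer is only loaded once per process.
        if cls._instance is None:
            cls._instance = cls()
        return cls._instance
```

Per-camera ImageAcquirer objects would then be created from this one shared instance, while their blocking calls stay on each camera's dedicated thread.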
Features
- GenICam-compliant camera support via Harvesters
- Matrix Vision GenTL Producer integration
- Hardware trigger and continuous capture modes
- Region of Interest (ROI) control
- Automatic and manual exposure/gain control
- CLAHE image quality enhancement
- Vendor-specific parameter handling (Keyence, Basler, etc.)
Requirements
- Matrix Vision mvIMPACT Acquire SDK (provides GenTL Producer)
- Harvesters package (pip install harvesters)
- OpenCV for image processing
Example:

```python
from mindtrace.hardware.cameras.backends.genicam import GenICamCameraBackend

async with GenICamCameraBackend("device_serial") as camera:
    await camera.set_exposure(50000)
    image = await camera.capture()
```
Attributes:
| Name | Type | Description |
|---|---|---|
| image_acquirer | Optional[Any] | Harvesters ImageAcquirer for this camera |
| harvester | Optional[Harvester] | Shared Harvester instance (class-level singleton) |
| triggermode | | Current trigger mode ("continuous" or "trigger") |
| timeout_ms | | Capture timeout in milliseconds |
| cti_path | | Path to the GenTL Producer file |
| vendor_quirks | Dict[str, bool] | Vendor-specific parameter handling flags |
Initialize GenICam camera with configurable parameters.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| camera_name | str | Camera identifier (serial number, device ID, or user-defined name) | required |
| camera_config | Optional[str] | Path to JSON configuration file (optional) | None |
| img_quality_enhancement | Optional[bool] | Enable CLAHE image enhancement (uses config default if None) | None |
| retrieve_retry_count | Optional[int] | Number of capture retry attempts (uses config default if None) | None |
| **backend_kwargs | | Backend-specific parameters: cti_path: Path to GenTL Producer file (auto-detected if None); timeout_ms: Capture timeout in milliseconds (uses config default if None); buffer_count: Number of frame buffers (uses config default if None) | {} |
Raises:
| Type | Description |
|---|---|
| SDKNotAvailableError | If Harvesters library is not available |
| CameraConfigurationError | If configuration is invalid |
| CameraInitializationError | If camera initialization fails |
staticmethod
get_available_cameras(
include_details: bool = False,
) -> Union[List[str], Dict[str, Dict[str, str]]]
Get available GenICam cameras.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| include_details | bool | If True, return detailed information | False |
Returns:
| Type | Description |
|---|---|
| Union[List[str], Dict[str, Dict[str, str]]] | List of camera names (serial numbers or device IDs) or dict with details |
Raises:
| Type | Description |
|---|---|
| SDKNotAvailableError | If Harvesters library is not available |
| HardwareOperationError | If camera discovery fails |
async
classmethod
Async wrapper for get_available_cameras() - runs discovery in threadpool.
Use this instead of get_available_cameras() when calling from async context to avoid blocking the event loop during camera discovery.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| include_details | bool | If True, return detailed camera information | False |
Returns:
| Type | Description |
|---|---|
| Union[List[str], Dict[str, Dict[str, str]]] | List of camera names or dict with details (same as get_available_cameras) |
async
Initialize the camera connection.
This searches for the camera by name, serial number, or device ID and establishes a connection if found.
Returns:
| Type | Description |
|---|---|
| Tuple[bool, Any, Any] | Tuple of (success status, image_acquirer object, device_info) |
Raises:
| Type | Description |
|---|---|
| CameraNotFoundError | If no cameras found or specified camera not found |
| CameraInitializationError | If camera initialization fails |
| CameraConnectionError | If camera connection fails |
async
Get the supported exposure time range in microseconds.
Returns:
| Type | Description |
|---|---|
| List[Union[int, float]] | List with [min_exposure, max_exposure] in microseconds |
Raises:
| Type | Description |
|---|---|
| CameraConnectionError | If camera is not initialized or accessible |
| HardwareOperationError | If exposure range retrieval fails |
async
Get current exposure time in microseconds.
Returns:
| Type | Description |
|---|---|
| float | Current exposure time |
Raises:
| Type | Description |
|---|---|
| CameraConnectionError | If camera is not initialized or accessible |
| HardwareOperationError | If exposure retrieval fails |
async
Set the camera exposure time in microseconds.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| exposure | Union[int, float] | Exposure time in microseconds | required |
Raises:
| Type | Description |
|---|---|
| CameraConnectionError | If camera is not initialized or accessible |
| CameraConfigurationError | If exposure value is out of range |
| HardwareOperationError | If exposure setting fails |
async
Get current pixel format.
Returns:
| Type | Description |
|---|---|
| str | Current pixel format string (e.g., "Mono8", "RGB8", "BayerRG8") |
Raises:
| Type | Description |
|---|---|
| CameraConnectionError | If camera is not initialized or accessible |
| HardwareOperationError | If pixel format retrieval fails |
async
Get camera width range.
Returns:
| Type | Description |
|---|---|
| List[int] | List containing [min_width, max_width] |
Raises:
| Type | Description |
|---|---|
| CameraConnectionError | If camera is not initialized |
| HardwareOperationError | If width range retrieval fails |
async
Get camera height range.
Returns:
| Type | Description |
|---|---|
| List[int] | List containing [min_height, max_height] |
Raises:
| Type | Description |
|---|---|
| CameraConnectionError | If camera is not initialized |
| HardwareOperationError | If height range retrieval fails |
async
Get list of supported pixel formats.
Returns:
| Type | Description |
|---|---|
| List[str] | List of supported pixel format strings |
Raises:
| Type | Description |
|---|---|
| CameraConnectionError | If camera is not initialized or accessible |
| HardwareOperationError | If pixel format list retrieval fails |
async
Set the camera pixel format.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| pixel_format | str | Pixel format string (e.g., "Mono8", "RGB8") | required |
Raises:
| Type | Description |
|---|---|
| CameraConnectionError | If camera is not initialized or accessible |
| CameraConfigurationError | If pixel format is not supported |
| HardwareOperationError | If pixel format setting fails |
async
Get current trigger mode.
Returns:
| Type | Description |
|---|---|
| str | "continuous" or "trigger" |
Raises:
| Type | Description |
|---|---|
| CameraConnectionError | If camera is not initialized or accessible |
| HardwareOperationError | If trigger mode retrieval fails |
async
Set the camera's trigger mode for image acquisition.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| triggermode | str | Trigger mode ("continuous" or "trigger") | 'continuous' |
Raises:
| Type | Description |
|---|---|
| CameraConnectionError | If camera is not initialized or accessible |
| CameraConfigurationError | If trigger mode is invalid |
| HardwareOperationError | If trigger mode setting fails |
async
Capture a single image from the camera.
In continuous mode, returns the latest available frame. In trigger mode, executes a software trigger and waits for the image.
Returns:
| Type | Description |
|---|---|
| ndarray | Image array in BGR format |
Raises:
| Type | Description |
|---|---|
| CameraConnectionError | If camera is not initialized or accessible |
| CameraCaptureError | If image capture fails |
| CameraTimeoutError | If capture times out |
async
Check if camera is connected and operational.
Returns:
| Type | Description |
|---|---|
| bool | True if connected and operational, False otherwise |
async
Close the camera and release resources.
Raises:
| Type | Description |
|---|---|
| CameraConnectionError | If camera closure fails |
async
Get current white balance mode using GenICam nodes.
Returns:
| Type | Description |
|---|---|
| str | Current white balance mode string |
Raises:
| Type | Description |
|---|---|
| CameraConnectionError | If camera is not initialized |
async
Execute automatic white balance once using GenICam nodes.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| value | str | White balance mode ("auto", "once", "manual", "off") | required |
Raises:
| Type | Description |
|---|---|
| CameraConnectionError | If camera is not initialized |
| HardwareOperationError | If white balance setting fails |
async
Get available white balance modes using GenICam nodes.
Returns:
| Type | Description |
|---|---|
| List[str] | List of available white balance mode strings |
Raises:
| Type | Description |
|---|---|
| CameraConnectionError | If camera is not initialized |
async
Import camera configuration from JSON file.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| config_path | str | Path to JSON configuration file | required |
Raises:
| Type | Description |
|---|---|
| CameraConnectionError | If camera is not initialized |
| CameraConfigurationError | If configuration file is invalid |
| HardwareOperationError | If configuration import fails |
async
Export camera configuration to JSON file.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| config_path | str | Path to save JSON configuration file | required |
Raises:
| Type | Description |
|---|---|
| CameraConnectionError | If camera is not initialized |
| HardwareOperationError | If configuration export fails |
async
Set Region of Interest using GenICam nodes.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| x | int | Left offset | required |
| y | int | Top offset | required |
| width | int | Width of ROI | required |
| height | int | Height of ROI | required |
Raises:
| Type | Description |
|---|---|
| CameraConnectionError | If camera is not initialized |
| CameraConfigurationError | If ROI parameters are invalid |
| HardwareOperationError | If ROI setting fails |
async
Get current ROI settings from GenICam nodes.
Returns:
| Type | Description |
|---|---|
| Dict[str, int] | Dictionary with 'x', 'y', 'width', 'height' keys |
Raises:
| Type | Description |
|---|---|
| CameraConnectionError | If camera is not initialized |
async
Reset ROI to maximum sensor area using GenICam nodes.
Raises:
| Type | Description |
|---|---|
| CameraConnectionError | If camera is not initialized |
| HardwareOperationError | If ROI reset fails |
async
Set capture timeout in milliseconds.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| timeout_ms | int | Timeout value in milliseconds | required |
Raises:
| Type | Description |
|---|---|
| ValueError | If timeout_ms is negative |
MockGenICamCameraBackend(
camera_name: str,
camera_config: Optional[str] = None,
img_quality_enhancement: Optional[bool] = None,
retrieve_retry_count: Optional[int] = None,
**backend_kwargs
)
Bases: CameraBackend
Mock GenICam Camera Backend Implementation
This class provides a mock implementation of the GenICam camera backend for testing and development. It simulates GenICam camera functionality without requiring actual hardware, Harvesters library, or GenTL Producer files.
Features
- Complete simulation of GenICam camera API
- Configurable image generation with realistic patterns
- Error simulation for testing error handling
- Configuration import/export simulation
- Camera control features (exposure, ROI, trigger modes, etc.)
- Vendor-specific quirks simulation (Keyence, Basler, etc.)
- Realistic timing and behavior simulation
Usage:

```python
from mindtrace.hardware.cameras.backends.genicam import MockGenICamCameraBackend

camera = MockGenICamCameraBackend("mock_keyence_001", vendor="KEYENCE")
await camera.set_exposure(50000)
image = await camera.capture()
await camera.close()
```
Error Simulation
Enable error simulation via environment variables:

- MOCK_GENICAM_FAIL_INIT: Simulate initialization failure
- MOCK_GENICAM_FAIL_CAPTURE: Simulate capture failure
- MOCK_GENICAM_TIMEOUT: Simulate timeout errors
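Such flags are typically toggled before constructing the mock. A sketch of one common way such env toggles are parsed (`mock_flag` is a hypothetical helper; the backend's exact parsing may differ):

```python
import os


def mock_flag(name: str) -> bool:
    """Hypothetical parser: treat "1"/"true"/"yes" (any case) as enabled."""
    return os.environ.get(name, "").strip().lower() in {"1", "true", "yes"}


# Enable capture-failure simulation before creating the mock backend.
os.environ["MOCK_GENICAM_FAIL_CAPTURE"] = "1"
```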
Attributes:
| Name | Type | Description |
|---|---|---|
| initialized | bool | Whether camera was successfully initialized |
| camera_name | | Name/identifier of the mock camera |
| triggermode | | Current trigger mode ("continuous" or "trigger") |
| img_quality_enhancement | | Current image enhancement setting |
| timeout_ms | | Capture timeout in milliseconds |
| retrieve_retry_count | | Number of capture retry attempts |
| exposure_time | | Current exposure time in microseconds |
| gain | | Current gain value |
| roi | | Current region of interest settings |
| vendor | | Simulated camera vendor |
| model | | Simulated camera model |
| serial_number | | Simulated serial number |
| vendor_quirks | | Vendor-specific parameter handling flags |
| image_counter | | Counter for generating unique images |
| fail_init | | Whether to simulate initialization failure |
| fail_capture | | Whether to simulate capture failure |
| simulate_timeout | | Whether to simulate timeout errors |
Initialize mock GenICam camera.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| camera_name | str | Camera identifier | required |
| camera_config | Optional[str] | Path to configuration file (simulated) | None |
| img_quality_enhancement | Optional[bool] | Enable image enhancement simulation (uses config default if None) | None |
| retrieve_retry_count | Optional[int] | Number of capture retry attempts (uses config default if None) | None |
| **backend_kwargs | | Backend-specific parameters: vendor: Simulated vendor ("KEYENCE", "BASLER", "FLIR", etc.); model: Simulated model name; serial_number: Simulated serial number; cti_path: Simulated CTI path (ignored in mock); timeout_ms: Timeout in milliseconds; buffer_count: Buffer count (simulated); simulate_fail_init: If True, simulate initialization failure; simulate_fail_capture: If True, simulate capture failure; simulate_timeout: If True, simulate timeout on capture; synthetic_width: Override synthetic image width (int); synthetic_height: Override synthetic image height (int); synthetic_pattern: One of {"auto","gradient","checkerboard","circular","noise"}; synthetic_overlay_text: If False, disables text overlays in synthetic images | {} |
Raises:
| Type | Description |
|---|---|
| CameraConfigurationError | If configuration is invalid |
| CameraInitializationError | If initialization fails (when simulated) |
staticmethod
get_available_cameras(
include_details: bool = False,
) -> Union[List[str], Dict[str, Dict[str, str]]]
Get available mock GenICam cameras.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| include_details | bool | If True, return detailed information | False |
Returns:
| Type | Description |
|---|---|
| Union[List[str], Dict[str, Dict[str, str]]] | List of camera names or dict with details |
async
Initialize the mock camera connection.
Returns:
| Type | Description |
|---|---|
| Tuple[bool, Any, Any] | Tuple of (success status, mock camera object, device_info) |
Raises:
| Type | Description |
|---|---|
| CameraNotFoundError | If camera not found (when simulated) |
| CameraInitializationError | If initialization fails (when simulated) |
| CameraConnectionError | If connection fails (when simulated) |
async
Get the simulated exposure time range in microseconds.
async
Set the simulated camera exposure time in microseconds.
async
Set the simulated camera's trigger mode.
async
Check if mock camera is connected and operational.
async
Set capture timeout in milliseconds.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| timeout_ms | int | Timeout value in milliseconds | required |
Raises:
| Type | Description |
|---|---|
| ValueError | If timeout_ms is negative |
async
Get current capture timeout in milliseconds.
Returns:
| Type | Description |
|---|---|
| int | Current timeout value in milliseconds |
async
Import simulated camera configuration from JSON file.
GenICam Camera Backend Module
GenICamCameraBackend(
camera_name: str,
camera_config: Optional[str] = None,
img_quality_enhancement: Optional[bool] = None,
retrieve_retry_count: Optional[int] = None,
**backend_kwargs
)
Bases: CameraBackend
GenICam camera backend using the Harvesters library.
This backend provides support for any GenICam-compliant camera via a GenTL Producer (.cti file), including cameras from Keyence, Allied Vision, FLIR, and others.
Thread Model
The Harvesters library requires thread affinity - ImageAcquirer operations must execute on the same OS thread that created the acquirer. This backend uses a dedicated single-thread executor per camera instance.
The Harvester instance (which manages the GenTL Producer) is shared as a singleton across all GenICam camera instances to prevent device conflicts. Only the per-camera ImageAcquirer operations use the dedicated executor.
Features
- GenICam-compliant camera support via Harvesters
- Matrix Vision GenTL Producer integration
- Hardware trigger and continuous capture modes
- Region of Interest (ROI) control
- Automatic and manual exposure/gain control
- CLAHE image quality enhancement
- Vendor-specific parameter handling (Keyence, Basler, etc.)
Requirements
- Matrix Vision mvIMPACT Acquire SDK (provides GenTL Producer)
- Harvesters package (pip install harvesters)
- OpenCV for image processing
Example::
from mindtrace.hardware.cameras.backends.genicam import GenICamCameraBackend
async with GenICamCameraBackend("device_serial") as camera:
await camera.set_exposure(50000)
image = await camera.capture()
Attributes:
| Name | Type | Description |
|---|---|---|
image_acquirer |
Optional[Any]
|
Harvesters ImageAcquirer for this camera |
harvester |
Optional[Harvester]
|
Shared Harvester instance (class-level singleton) |
triggermode |
Current trigger mode ("continuous" or "trigger") |
|
timeout_ms |
Capture timeout in milliseconds |
|
cti_path |
Path to the GenTL Producer file |
|
vendor_quirks |
Dict[str, bool]
|
Vendor-specific parameter handling flags |
Initialize GenICam camera with configurable parameters.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
camera_name
|
str
|
Camera identifier (serial number, device ID, or user-defined name) |
required |
camera_config
|
Optional[str]
|
Path to JSON configuration file (optional) |
None
|
img_quality_enhancement
|
Optional[bool]
|
Enable CLAHE image enhancement (uses config default if None) |
None
|
retrieve_retry_count
|
Optional[int]
|
Number of capture retry attempts (uses config default if None) |
None
|
**backend_kwargs
|
Backend-specific parameters: - cti_path: Path to GenTL Producer file (auto-detected if None) - timeout_ms: Capture timeout in milliseconds (uses config default if None) - buffer_count: Number of frame buffers (uses config default if None) |
{}
|
Raises:
| Type | Description |
|---|---|
SDKNotAvailableError
|
If Harvesters library is not available |
CameraConfigurationError
|
If configuration is invalid |
CameraInitializationError
|
If camera initialization fails |
staticmethod
get_available_cameras(
include_details: bool = False,
) -> Union[List[str], Dict[str, Dict[str, str]]]
Get available GenICam cameras.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
include_details
|
bool
|
If True, return detailed information |
False
|
Returns:
| Type | Description |
|---|---|
Union[List[str], Dict[str, Dict[str, str]]]
|
List of camera names (serial numbers or device IDs) or dict with details |
Raises:
| Type | Description |
|---|---|
SDKNotAvailableError
|
If Harvesters library is not available |
HardwareOperationError
|
If camera discovery fails |
async
classmethod
Async wrapper for get_available_cameras() - runs discovery in threadpool.
Use this instead of get_available_cameras() when calling from async context to avoid blocking the event loop during camera discovery.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
include_details
|
bool
|
If True, return detailed camera information |
False
|
Returns:
| Type | Description |
|---|---|
Union[List[str], Dict[str, Dict[str, str]]]
|
List of camera names or dict with details (same as get_available_cameras) |
async
Initialize the camera connection.
This searches for the camera by name, serial number, or device ID and establishes a connection if found.
Returns:
| Type | Description |
|---|---|
Tuple[bool, Any, Any]
|
Tuple of (success status, image_acquirer object, device_info) |
Raises:
| Type | Description |
|---|---|
CameraNotFoundError
|
If no cameras found or specified camera not found |
CameraInitializationError
|
If camera initialization fails |
CameraConnectionError
|
If camera connection fails |
async
Get the supported exposure time range in microseconds.
Returns:
| Type | Description |
|---|---|
List[Union[int, float]]
|
List with [min_exposure, max_exposure] in microseconds |
Raises:
| Type | Description |
|---|---|
CameraConnectionError
|
If camera is not initialized or accessible |
HardwareOperationError
|
If exposure range retrieval fails |
async
Get current exposure time in microseconds.
Returns:
| Type | Description |
|---|---|
float
|
Current exposure time |
Raises:
| Type | Description |
|---|---|
CameraConnectionError
|
If camera is not initialized or accessible |
HardwareOperationError
|
If exposure retrieval fails |
async
Set the camera exposure time in microseconds.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
exposure
|
Union[int, float]
|
Exposure time in microseconds |
required |
Raises:
| Type | Description |
|---|---|
CameraConnectionError
|
If camera is not initialized or accessible |
CameraConfigurationError
|
If exposure value is out of range |
HardwareOperationError
|
If exposure setting fails |
async
Get current pixel format.
Returns:
| Type | Description |
|---|---|
str
|
Current pixel format string (e.g., "Mono8", "RGB8", "BayerRG8") |
Raises:
| Type | Description |
|---|---|
CameraConnectionError
|
If camera is not initialized or accessible |
HardwareOperationError
|
If pixel format retrieval fails |
async
Get camera width range.
Returns:
| Type | Description |
|---|---|
List[int]
|
List containing [min_width, max_width] |
Raises:
| Type | Description |
|---|---|
CameraConnectionError
|
If camera is not initialized |
HardwareOperationError
|
If width range retrieval fails |
async
Get camera height range.
Returns:
| Type | Description |
|---|---|
List[int]
|
List containing [min_height, max_height] |
Raises:
| Type | Description |
|---|---|
CameraConnectionError
|
If camera is not initialized |
HardwareOperationError
|
If height range retrieval fails |
async
Get list of supported pixel formats.
Returns:
| Type | Description |
|---|---|
List[str]
|
List of supported pixel format strings |
Raises:
| Type | Description |
|---|---|
CameraConnectionError
|
If camera is not initialized or accessible |
HardwareOperationError
|
If pixel format list retrieval fails |
async
Set the camera pixel format.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
pixel_format
|
str
|
Pixel format string (e.g., "Mono8", "RGB8") |
required |
Raises:
| Type | Description |
|---|---|
CameraConnectionError
|
If camera is not initialized or accessible |
CameraConfigurationError
|
If pixel format is not supported |
HardwareOperationError
|
If pixel format setting fails |
async
Get current trigger mode.
Returns:
| Type | Description |
|---|---|
str
|
"continuous" or "trigger" |
Raises:
| Type | Description |
|---|---|
CameraConnectionError
|
If camera is not initialized or accessible |
HardwareOperationError
|
If trigger mode retrieval fails |
async
Set the camera's trigger mode for image acquisition.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
triggermode
|
str
|
Trigger mode ("continuous" or "trigger") |
'continuous'
|
Raises:
| Type | Description |
|---|---|
CameraConnectionError
|
If camera is not initialized or accessible |
CameraConfigurationError
|
If trigger mode is invalid |
HardwareOperationError
|
If trigger mode setting fails |
async
Capture a single image from the camera.
In continuous mode, returns the latest available frame. In trigger mode, executes a software trigger and waits for the image.
Returns:
| Type | Description |
|---|---|
ndarray
|
Image array in BGR format |
Raises:
| Type | Description |
|---|---|
CameraConnectionError
|
If camera is not initialized or accessible |
CameraCaptureError
|
If image capture fails |
CameraTimeoutError
|
If capture times out |
async
Check if camera is connected and operational.
Returns:
| Type | Description |
|---|---|
bool
|
True if connected and operational, False otherwise |
async
Close the camera and release resources.
Raises:
| Type | Description |
|---|---|
CameraConnectionError
|
If camera closure fails |
async
Get current white balance mode using GenICam nodes.
Returns:
| Type | Description |
|---|---|
str
|
Current white balance mode string |
Raises:
| Type | Description |
|---|---|
| CameraConnectionError | If camera is not initialized |
async
Execute automatic white balance once using GenICam nodes.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| value | str | White balance mode ("auto", "once", "manual", "off") | required |
Raises:
| Type | Description |
|---|---|
| CameraConnectionError | If camera is not initialized |
| HardwareOperationError | If white balance setting fails |
async
Get available white balance modes using GenICam nodes.
Returns:
| Type | Description |
|---|---|
| List[str] | List of available white balance mode strings |
Raises:
| Type | Description |
|---|---|
| CameraConnectionError | If camera is not initialized |
async
Import camera configuration from JSON file.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| config_path | str | Path to JSON configuration file | required |
Raises:
| Type | Description |
|---|---|
| CameraConnectionError | If camera is not initialized |
| CameraConfigurationError | If configuration file is invalid |
| HardwareOperationError | If configuration import fails |
async
Export camera configuration to JSON file.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| config_path | str | Path to save JSON configuration file | required |
Raises:
| Type | Description |
|---|---|
| CameraConnectionError | If camera is not initialized |
| HardwareOperationError | If configuration export fails |
async
Set Region of Interest using GenICam nodes.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| x | int | Left offset | required |
| y | int | Top offset | required |
| width | int | Width of ROI | required |
| height | int | Height of ROI | required |
Raises:
| Type | Description |
|---|---|
| CameraConnectionError | If camera is not initialized |
| CameraConfigurationError | If ROI parameters are invalid |
| HardwareOperationError | If ROI setting fails |
async
Get current ROI settings from GenICam nodes.
Returns:
| Type | Description |
|---|---|
| Dict[str, int] | Dictionary with 'x', 'y', 'width', 'height' keys |
Raises:
| Type | Description |
|---|---|
| CameraConnectionError | If camera is not initialized |
async
Reset ROI to maximum sensor area using GenICam nodes.
Raises:
| Type | Description |
|---|---|
| CameraConnectionError | If camera is not initialized |
| HardwareOperationError | If ROI reset fails |
async
Set capture timeout in milliseconds.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| timeout_ms | int | Timeout value in milliseconds | required |
Raises:
| Type | Description |
|---|---|
| ValueError | If timeout_ms is negative |
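The ROI setter above validates its arguments before touching the device, raising a configuration error for invalid regions. A minimal sketch of that validation logic — the sensor bounds, function name, and exception type here are illustrative stand-ins, not the backend's actual internals:

```python
# Hypothetical sketch of the checks a GenICam ROI setter performs.
# The sensor bounds below are invented for illustration.
SENSOR_MAX_WIDTH = 1920
SENSOR_MAX_HEIGHT = 1080

def validate_roi(x: int, y: int, width: int, height: int) -> dict:
    """Return the ROI as a dict, or raise ValueError if it is invalid."""
    if min(x, y) < 0 or min(width, height) <= 0:
        raise ValueError("ROI offsets must be >= 0 and dimensions > 0")
    if x + width > SENSOR_MAX_WIDTH or y + height > SENSOR_MAX_HEIGHT:
        raise ValueError("ROI exceeds sensor area")
    return {"x": x, "y": y, "width": width, "height": height}

roi = validate_roi(100, 50, 640, 480)
print(roi)  # {'x': 100, 'y': 50, 'width': 640, 'height': 480}
```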
Mock GenICam Camera Backend Module
MockGenICamCameraBackend(
camera_name: str,
camera_config: Optional[str] = None,
img_quality_enhancement: Optional[bool] = None,
retrieve_retry_count: Optional[int] = None,
**backend_kwargs
)
Bases: CameraBackend
Mock GenICam Camera Backend Implementation
This class provides a mock implementation of the GenICam camera backend for testing and development. It simulates GenICam camera functionality without requiring actual hardware, Harvesters library, or GenTL Producer files.
Features
- Complete simulation of GenICam camera API
- Configurable image generation with realistic patterns
- Error simulation for testing error handling
- Configuration import/export simulation
- Camera control features (exposure, ROI, trigger modes, etc.)
- Vendor-specific quirks simulation (Keyence, Basler, etc.)
- Realistic timing and behavior simulation
Usage::
    from mindtrace.hardware.cameras.backends.genicam import MockGenICamCameraBackend

    camera = MockGenICamCameraBackend("mock_keyence_001", vendor="KEYENCE")
    await camera.set_exposure(50000)
    image = await camera.capture()
    await camera.close()
Error Simulation
Enable error simulation via environment variables:
- MOCK_GENICAM_FAIL_INIT: Simulate initialization failure
- MOCK_GENICAM_FAIL_CAPTURE: Simulate capture failure
- MOCK_GENICAM_TIMEOUT: Simulate timeout errors
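The simulation switches can be pictured without the backend itself: this sketch parses the same environment variables into booleans the way a mock might. The truthy-string handling and helper name are assumptions, not the library's actual parsing code:

```python
import os

_TRUTHY = {"1", "true", "yes", "on"}

def _env_flag(name: str) -> bool:
    """Interpret an environment variable as a boolean on/off flag."""
    return os.environ.get(name, "").strip().lower() in _TRUTHY

# Clear one flag and set the other so the result is deterministic.
os.environ.pop("MOCK_GENICAM_FAIL_INIT", None)
os.environ["MOCK_GENICAM_FAIL_CAPTURE"] = "1"

fail_init = _env_flag("MOCK_GENICAM_FAIL_INIT")
fail_capture = _env_flag("MOCK_GENICAM_FAIL_CAPTURE")
print(fail_init, fail_capture)  # False True
```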
Attributes:
| Name | Type | Description |
|---|---|---|
| initialized | bool | Whether camera was successfully initialized |
| camera_name | | Name/identifier of the mock camera |
| triggermode | | Current trigger mode ("continuous" or "trigger") |
| img_quality_enhancement | | Current image enhancement setting |
| timeout_ms | | Capture timeout in milliseconds |
| retrieve_retry_count | | Number of capture retry attempts |
| exposure_time | | Current exposure time in microseconds |
| gain | | Current gain value |
| roi | | Current region of interest settings |
| vendor | | Simulated camera vendor |
| model | | Simulated camera model |
| serial_number | | Simulated serial number |
| vendor_quirks | | Vendor-specific parameter handling flags |
| image_counter | | Counter for generating unique images |
| fail_init | | Whether to simulate initialization failure |
| fail_capture | | Whether to simulate capture failure |
| simulate_timeout | | Whether to simulate timeout errors |
Initialize mock GenICam camera.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| camera_name | str | Camera identifier | required |
| camera_config | Optional[str] | Path to configuration file (simulated) | None |
| img_quality_enhancement | Optional[bool] | Enable image enhancement simulation (uses config default if None) | None |
| retrieve_retry_count | Optional[int] | Number of capture retry attempts (uses config default if None) | None |
| **backend_kwargs | | Backend-specific parameters: - vendor: Simulated vendor ("KEYENCE", "BASLER", "FLIR", etc.) - model: Simulated model name - serial_number: Simulated serial number - cti_path: Simulated CTI path (ignored in mock) - timeout_ms: Timeout in milliseconds - buffer_count: Buffer count (simulated) - simulate_fail_init: If True, simulate initialization failure - simulate_fail_capture: If True, simulate capture failure - simulate_timeout: If True, simulate timeout on capture - synthetic_width: Override synthetic image width (int) - synthetic_height: Override synthetic image height (int) - synthetic_pattern: One of {"auto", "gradient", "checkerboard", "circular", "noise"} - synthetic_overlay_text: If False, disables text overlays in synthetic images | {} |
Raises:
| Type | Description |
|---|---|
| CameraConfigurationError | If configuration is invalid |
| CameraInitializationError | If initialization fails (when simulated) |
staticmethod
get_available_cameras(
include_details: bool = False,
) -> Union[List[str], Dict[str, Dict[str, str]]]
Get available mock GenICam cameras.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| include_details | bool | If True, return detailed information | False |
Returns:
| Type | Description |
|---|---|
| Union[List[str], Dict[str, Dict[str, str]]] | List of camera names or dict with details |
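The dual return type of get_available_cameras is worth seeing concretely. This stand-in reproduces only the shape contract — the camera names, detail keys, and function name are invented for illustration, not the real mock inventory:

```python
from typing import Dict, List, Union

def get_available_cameras_stub(
    include_details: bool = False,
) -> Union[List[str], Dict[str, Dict[str, str]]]:
    """Hypothetical discovery stub mirroring the documented return shapes."""
    # Invented inventory; the real backend enumerates simulated devices.
    inventory = {
        "mock_cam_0": {"vendor": "MOCK", "model": "M-100", "serial": "0001"},
        "mock_cam_1": {"vendor": "MOCK", "model": "M-200", "serial": "0002"},
    }
    return inventory if include_details else list(inventory)

print(get_available_cameras_stub())  # ['mock_cam_0', 'mock_cam_1']
print(get_available_cameras_stub(True)["mock_cam_0"]["model"])  # M-100
```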
async
Initialize the mock camera connection.
Returns:
| Type | Description |
|---|---|
| Tuple[bool, Any, Any] | Tuple of (success status, mock camera object, device_info) |
Raises:
| Type | Description |
|---|---|
| CameraNotFoundError | If camera not found (when simulated) |
| CameraInitializationError | If initialization fails (when simulated) |
| CameraConnectionError | If connection fails (when simulated) |
async
Get the simulated exposure time range in microseconds.
async
Set the simulated camera exposure time in microseconds.
async
Set the simulated camera's trigger mode.
async
Check if mock camera is connected and operational.
async
Set capture timeout in milliseconds.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| timeout_ms | int | Timeout value in milliseconds | required |
Raises:
| Type | Description |
|---|---|
| ValueError | If timeout_ms is negative |
async
Get current capture timeout in milliseconds.
Returns:
| Type | Description |
|---|---|
| int | Current timeout value in milliseconds |
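The retrieve_retry_count setting above controls how many capture attempts are made before giving up. A self-contained sketch of that retry loop — the grab callable and exception type are placeholders, not the backend's real internals:

```python
def capture_with_retries(grab, retries: int):
    """Try `grab` up to `retries` times, re-raising the last failure.

    Mirrors the documented retrieve_retry_count behavior; `grab` and the
    RuntimeError used here are illustrative stand-ins.
    """
    last_exc = None
    for _attempt in range(retries):
        try:
            return grab()
        except RuntimeError as exc:
            last_exc = exc
    raise last_exc

# A grab that fails twice, then succeeds on the third attempt.
calls = {"n": 0}
def flaky_grab():
    calls["n"] += 1
    if calls["n"] < 3:
        raise RuntimeError("frame incomplete")
    return "frame"

frame = capture_with_retries(flaky_grab, retries=5)
print(frame, calls["n"])  # frame 3
```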
async
Import simulated camera configuration from JSON file.
opencv
OpenCV Camera Backend
Provides support for USB cameras and webcams via OpenCV with comprehensive error handling.
Components
- OpenCVCameraBackend: OpenCV camera implementation (requires opencv-python)
Requirements
- opencv-python: For camera access and image processing
- numpy: For image array operations
Installation
pip install opencv-python numpy
Usage
from mindtrace.hardware.cameras.backends.opencv import OpenCVCameraBackend
# USB camera (index 0)
if OPENCV_AVAILABLE:
    camera = OpenCVCameraBackend("0")
    success, cam_obj, remote_obj = await camera.initialize()  # Initialize first
    if success:
        image = await camera.capture()
        await camera.close()
OpenCVCameraBackend(
camera_name: str,
camera_config: Optional[str] = None,
img_quality_enhancement: Optional[bool] = None,
retrieve_retry_count: Optional[int] = None,
**backend_kwargs
)
Bases: CameraBackend
OpenCV camera implementation for USB cameras and webcams.
This backend provides a comprehensive interface to USB cameras, webcams, and other video capture devices using
OpenCV's VideoCapture with robust error handling and resource management. It works across Windows, Linux, and
macOS with platform-aware discovery.
Thread Model
OpenCV's VideoCapture is thread-safe and does not require thread affinity. This backend
uses the default shared thread pool via asyncio.to_thread() for blocking operations,
avoiding the overhead of per-camera dedicated executors. A per-instance asyncio.Lock
serializes mutating operations to prevent concurrent set/read races.
Features
- USB camera and webcam support across Windows, Linux, and macOS
- Automatic camera discovery and enumeration
- Configurable resolution, frame rate, and exposure settings
- Optional image quality enhancement (CLAHE)
- Robust error handling with retries and bounded timeouts
- BGR to RGB conversion for consistency
- Platform-specific optimizations
Configuration
All parameters are configurable via the hardware configuration system:
- MINDTRACE_CAMERA_OPENCV_DEFAULT_WIDTH: Default frame width (1280)
- MINDTRACE_CAMERA_OPENCV_DEFAULT_HEIGHT: Default frame height (720)
- MINDTRACE_CAMERA_OPENCV_DEFAULT_FPS: Default frame rate (30)
- MINDTRACE_CAMERA_OPENCV_DEFAULT_EXPOSURE: Default exposure (-1 for auto)
- MINDTRACE_CAMERA_OPENCV_MAX_CAMERA_INDEX: Maximum camera index to test (10)
- MINDTRACE_CAMERA_IMAGE_QUALITY_ENHANCEMENT: Enable CLAHE enhancement
- MINDTRACE_CAMERA_RETRIEVE_RETRY_COUNT: Number of capture retry attempts
- MINDTRACE_CAMERA_TIMEOUT_MS: Capture timeout in milliseconds
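These settings resolve through the environment. A minimal sketch of how a backend might read one override with a fallback default — the parsing helper is illustrative, not the actual hardware configuration system:

```python
import os

def _config_int(name: str, default: int) -> int:
    """Read an integer setting from the environment, falling back to default."""
    raw = os.environ.get(name)
    return int(raw) if raw is not None else default

# Unset -> the documented default applies.
os.environ.pop("MINDTRACE_CAMERA_OPENCV_DEFAULT_WIDTH", None)
width = _config_int("MINDTRACE_CAMERA_OPENCV_DEFAULT_WIDTH", 1280)

# Set -> the environment wins.
os.environ["MINDTRACE_CAMERA_OPENCV_DEFAULT_FPS"] = "60"
fps = _config_int("MINDTRACE_CAMERA_OPENCV_DEFAULT_FPS", 30)
print(width, fps)  # 1280 60
```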
Attributes:
| Name | Type | Description |
|---|---|---|
| camera_index | | Camera device index or path |
| cap | Optional[VideoCapture] | OpenCV VideoCapture object |
| initialized | bool | Camera initialization status |
| width | int | Current frame width |
| height | int | Current frame height |
| fps | int | Current frame rate |
| exposure | float | Current exposure setting |
| timeout_ms | | Capture timeout in milliseconds |
Example::
    from mindtrace.hardware.cameras.backends.opencv import OpenCVCameraBackend

    async def main():
        camera = OpenCVCameraBackend("0", width=1280, height=720)
        ok, cap, _ = await camera.initialize()
        if ok:
            image = await camera.capture()
        await camera.close()
Initialize OpenCV camera with configuration.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| camera_name | str | Camera identifier (index number or device path) | required |
| camera_config | Optional[str] | Path to camera config file (not used for OpenCV) | None |
| img_quality_enhancement | Optional[bool] | Whether to apply image quality enhancement (uses config default if None) | None |
| retrieve_retry_count | Optional[int] | Number of times to retry capture (uses config default if None) | None |
| **backend_kwargs | | Backend-specific parameters: - width: Frame width (uses config default if None) - height: Frame height (uses config default if None) - fps: Frame rate (uses config default if None) - exposure: Exposure value (uses config default if None) - timeout_ms: Capture timeout in milliseconds (uses config default if None) | {} |
Raises:
| Type | Description |
|---|---|
| SDKNotAvailableError | If OpenCV is not installed |
| CameraConfigurationError | If configuration is invalid |
| CameraInitializationError | If camera initialization fails |
async
Initialize the camera and establish connection.
Returns:
| Type | Description |
|---|---|
| Tuple[bool, Any, Any] | (success, camera_object, remote_control_object). For OpenCV cameras, both objects are the same |
Raises:
| Type | Description |
|---|---|
| CameraNotFoundError | If camera cannot be opened |
| CameraInitializationError | If camera initialization fails |
| CameraConnectionError | If camera connection fails |
staticmethod
get_available_cameras(
include_details: bool = False,
) -> Union[List[str], Dict[str, Dict[str, str]]]
Discover cameras with backend-aware probing.
- Linux: prefer CAP_V4L2 probing across indices
- Windows: try CAP_DSHOW then CAP_MSMF
- macOS: try CAP_AVFOUNDATION
- Fallback: default backend probing
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| include_details | bool | If True, return a dict of details per camera. | False |
Returns:
| Type | Description |
|---|---|
| Union[List[str], Dict[str, Dict[str, str]]] | List of camera names, or a dict of details per camera |
async
classmethod
Async wrapper for get_available_cameras() - runs discovery in threadpool.
Use this instead of get_available_cameras() when calling from async context to avoid blocking the event loop during camera enumeration.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| include_details | bool | If True, return a dict of details per camera. | False |
Returns:
| Type | Description |
|---|---|
| Union[List[str], Dict[str, Dict[str, str]]] | List of camera names or dict of details. |
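The async wrapper follows a standard pattern: push the blocking enumeration into a worker thread so the event loop stays responsive during discovery. A self-contained sketch of that pattern — the blocking function here is a stand-in for get_available_cameras, not the real probe:

```python
import asyncio
import time

def blocking_discovery() -> list:
    """Stand-in for the blocking OpenCV index-probing loop."""
    time.sleep(0.05)  # simulate slow device enumeration
    return ["0", "1"]

async def discover_async() -> list:
    # Same pattern as the documented classmethod:
    # run the blocking call in the default threadpool.
    return await asyncio.to_thread(blocking_discovery)

cameras = asyncio.run(discover_async())
print(cameras)  # ['0', '1']
```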
async
Capture an image from the camera.
Implements retry logic and proper error handling for robust image capture. Converts OpenCV's default BGR format to RGB for consistency.
Returns:
| Type | Description |
|---|---|
| np.ndarray | Captured image as an RGB numpy array. |
Raises:
| Type | Description |
|---|---|
| CameraConnectionError | If camera is not initialized or accessible |
| CameraCaptureError | If image capture fails |
| CameraTimeoutError | If capture times out |
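The BGR-to-RGB conversion mentioned above is a channel-order reversal; with numpy it is a single reversed slice. This is the standard idiom, shown here on a tiny synthetic frame:

```python
import numpy as np

# A 1x2 synthetic "BGR" image: one pure-blue and one pure-red pixel.
bgr = np.array([[[255, 0, 0], [0, 0, 255]]], dtype=np.uint8)

# Reverse the channel axis: BGR -> RGB.
rgb = bgr[..., ::-1]

print(rgb[0, 0].tolist())  # [0, 0, 255]  blue pixel in RGB order
print(rgb[0, 1].tolist())  # [255, 0, 0]  red pixel in RGB order
```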
async
Check if camera connection is active and healthy.
Returns:
| Type | Description |
|---|---|
| bool | True if camera is connected and responsive, False otherwise |
async
Close camera connection and cleanup resources.
Properly releases the VideoCapture object and resets camera state.
Raises:
| Type | Description |
|---|---|
| CameraConnectionError | If camera closure fails |
async
Check if exposure control is actually supported for this camera. Tests both reading and setting exposure to verify true support.
Returns: True if exposure control is supported, False otherwise
async
Set camera exposure time.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| exposure | Union[int, float] | Exposure value (OpenCV uses log scale, typically -13 to -1) | required |
Raises:
| Type | Description |
|---|---|
| CameraConnectionError | If camera is not initialized |
| CameraConfigurationError | If exposure value is invalid or unsupported |
| HardwareOperationError | If exposure setting fails |
async
Get current camera exposure time.
Returns:
| Type | Description |
|---|---|
| float | Current exposure time value |
Raises:
| Type | Description |
|---|---|
| CameraConnectionError | If camera is not initialized |
| HardwareOperationError | If exposure retrieval fails |
async
Get camera exposure time range.
Returns:
| Type | Description |
|---|---|
| Optional[List[Union[int, float]]] | List containing [min_exposure, max_exposure] in OpenCV log scale, or None if exposure control not supported |
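OpenCV's log-scale exposure values (roughly -13 to -1 on many backends) are commonly interpreted as log2 of the exposure time in seconds, but this varies by driver and platform — treat the mapping below as an assumption for intuition, not a guarantee of what this backend does:

```python
def exposure_value_to_seconds(value: float) -> float:
    """Assumed DirectShow-style mapping: exposure_seconds = 2 ** value."""
    return 2.0 ** value

# Under this interpretation, -5 is 1/32 s and -13 is about 0.12 ms.
print(exposure_value_to_seconds(-5))  # 0.03125
print(round(exposure_value_to_seconds(-13) * 1000, 3))  # 0.122
```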
async
Get supported width range.
Returns:
| Type | Description |
|---|---|
| List[int] | List containing [min_width, max_width] |
async
Get supported height range.
Returns:
| Type | Description |
|---|---|
| List[int] | List containing [min_height, max_height] |
async
Get the supported gain range.
Returns:
| Type | Description |
|---|---|
| List[Union[int, float]] | List with [min_gain, max_gain] |
async
Set camera gain.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| gain | Union[int, float] | Gain value | required |
Raises:
| Type | Description |
|---|---|
| CameraConnectionError | If camera is not initialized |
| CameraConfigurationError | If gain value is out of range or setting fails |
async
Get current camera gain.
Returns:
| Type | Description |
|---|---|
| float | Current gain value |
async
Set Region of Interest (ROI).
Note: OpenCV cameras typically don't support hardware ROI; implement in software if needed.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| x | int | ROI x offset | required |
| y | int | ROI y offset | required |
| width | int | ROI width | required |
| height | int | ROI height | required |
Raises:
| Type | Description |
|---|---|
| NotImplementedError | ROI is not supported by the OpenCV backend |
async
Get current Region of Interest (ROI).
Returns:
| Type | Description |
|---|---|
| Dict[str, int] | Dictionary with full frame dimensions (ROI not supported) |
async
Reset ROI to full sensor size.
Raises:
| Type | Description |
|---|---|
| NotImplementedError | ROI reset is not supported by the OpenCV backend |
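Since the OpenCV backend raises NotImplementedError for hardware ROI, cropping can be done in software after capture, as the note above suggests; a numpy slice is sufficient. The helper name and synthetic frame are illustrative:

```python
import numpy as np

def crop_roi(frame: np.ndarray, x: int, y: int, width: int, height: int) -> np.ndarray:
    """Software ROI: slice a captured frame to the requested region."""
    if x < 0 or y < 0 or x + width > frame.shape[1] or y + height > frame.shape[0]:
        raise ValueError("ROI outside frame bounds")
    return frame[y : y + height, x : x + width]

frame = np.zeros((720, 1280, 3), dtype=np.uint8)  # synthetic full frame
roi = crop_roi(frame, x=100, y=50, width=640, height=480)
print(roi.shape)  # (480, 640, 3)
```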
async
Get current white balance mode.
Returns:
| Type | Description |
|---|---|
| str | Current white balance mode ("auto" or "manual") |
async
Set white balance mode.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| value | str | White balance mode ("auto", "manual", "off") | required |
Raises:
| Type | Description |
|---|---|
| CameraConnectionError | If camera is not initialized |
| CameraConfigurationError | If value is invalid |
| HardwareOperationError | If the operation fails |
async
Get available white balance modes.
Returns:
| Type | Description |
|---|---|
| List[str] | List of available white balance modes |
async
Get available pixel formats.
Returns:
| Type | Description |
|---|---|
| List[str] | List of available pixel formats (OpenCV always uses BGR internally) |
async
Get current pixel format.
Returns:
| Type | Description |
|---|---|
| str | Current pixel format (always BGR8 for OpenCV, converted to RGB8 in capture) |
async
Set pixel format.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| pixel_format | str | Pixel format to set | required |
Raises:
| Type | Description |
|---|---|
| CameraConfigurationError | If pixel format is not supported |
async
Get trigger mode (always continuous for USB cameras).
Returns:
| Type | Description |
|---|---|
| str | "continuous" (USB cameras only support continuous mode) |
async
Set trigger mode.
USB cameras only support continuous mode.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| triggermode | str | Trigger mode ("continuous" only) | 'continuous' |
Raises:
| Type | Description |
|---|---|
| CameraConfigurationError | If trigger mode is not supported |
Get image quality enhancement status.
Set image quality enhancement.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| img_quality_enhancement | bool | Whether to enable image quality enhancement | required |
Raises:
| Type | Description |
|---|---|
| HardwareOperationError | If setting cannot be applied |
async
Export current camera configuration to common JSON format.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| config_path | str | Path to save configuration file | required |
Raises:
| Type | Description |
|---|---|
| CameraConnectionError | If camera is not connected |
| CameraConfigurationError | If configuration export fails |
async
Import camera configuration from common JSON format.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| config_path | str | Path to configuration file | required |
Raises:
| Type | Description |
|---|---|
| CameraConnectionError | If camera is not connected |
| CameraConfigurationError | If configuration import fails |
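The export/import pair above can be pictured as a plain dict round-trip through a JSON file. This sketch uses invented keys, not the backend's actual "common JSON format" schema:

```python
import json
import tempfile
from pathlib import Path

# Invented example settings; the real backend defines its own schema.
config = {"width": 1280, "height": 720, "fps": 30, "exposure": -6}

with tempfile.TemporaryDirectory() as tmp:
    config_path = Path(tmp) / "camera_config.json"
    config_path.write_text(json.dumps(config, indent=2))  # export
    restored = json.loads(config_path.read_text())        # import

print(restored == config)  # True
```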
async
Bandwidth limiting not applicable for OpenCV cameras.
async
Inter-packet delay not applicable for OpenCV cameras.
async
Set capture timeout in milliseconds.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| timeout_ms | int | Timeout value in milliseconds | required |
Raises:
| Type | Description |
|---|---|
| ValueError | If timeout_ms is negative |
async
Get current capture timeout in milliseconds.
Returns:
| Type | Description |
|---|---|
| int | Current timeout value in milliseconds |
homography
Homography-based planar measurement system.
This module provides homography calibration and measurement capabilities for converting pixel-space object detections to real-world metric dimensions on planar surfaces.
Features
- Automatic checkerboard calibration
- Manual point correspondence calibration
- RANSAC-based robust homography estimation
- Multi-unit measurement support (mm, cm, m, in, ft)
- Batch processing for multiple objects
- Framework-integrated logging and configuration
Typical Usage::
    from mindtrace.hardware import HomographyCalibrator, HomographyMeasurer

    # One-time calibration
    calibrator = HomographyCalibrator()
    calibration = calibrator.calibrate_checkerboard(
        image=checkerboard_image,
        board_size=(12, 12),
        square_size=25.0,
        world_unit="mm"
    )
    calibration.save("camera_calibration.json")

    # Repeated measurement
    measurer = HomographyMeasurer(calibration)
    detections = yolo.detect(frame)
    measurements = measurer.measure_bounding_boxes(detections, target_unit="cm")
    for measured in measurements:
        print(f"Size: {measured.width_world:.1f} × {measured.height_world:.1f} cm")
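Under the hood, a 3x3 homography H maps homogeneous planar world coordinates to pixels, and measurement applies H⁻¹ to pixel points. A pure-numpy sketch with a constructed H (2 px/mm scale plus a pixel offset — invented numbers, not a real calibration):

```python
import numpy as np

# Constructed example homography: world (mm) -> pixel,
# 2 px/mm with a (100, 50) px offset.
H = np.array([[2.0, 0.0, 100.0],
              [0.0, 2.0, 50.0],
              [0.0, 0.0, 1.0]])
H_inv = np.linalg.inv(H)

def pixel_to_world(u: float, v: float) -> tuple:
    """Apply H^-1 to a pixel point and dehomogenize."""
    X, Y, W = H_inv @ np.array([u, v, 1.0])
    return (X / W, Y / W)

# Measure a pixel-space bounding box from (100, 50) to (200, 150).
x0, y0 = pixel_to_world(100, 50)
x1, y1 = pixel_to_world(200, 150)
width_mm, height_mm = round(x1 - x0, 3), round(y1 - y0, 3)
print(width_mm, height_mm)  # 50.0 50.0
```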
HomographyCalibrator
Bases: Mindtrace
Calibrates planar homography for pixel-to-world coordinate mapping.
Establishes a homography matrix H that maps planar world coordinates (X, Y, Z=0) in metric units to image pixel coordinates (u, v). Supports both automatic checkerboard-based calibration and manual point correspondence calibration.
The homography enables real-world measurements from camera images for objects lying on a known planar surface (e.g., overhead cameras, objects on tables/floors).
Features
- Automatic checkerboard pattern detection and calibration
- Manual point correspondence calibration
- RANSAC-based robust homography estimation
- Sub-pixel corner refinement for improved accuracy
- Lens distortion correction support
- Camera intrinsics estimation from FOV
Typical Workflow
- Place calibration target (checkerboard) on measurement plane
- Capture image with known world coordinates
- Calibrate to obtain homography matrix
- Use calibration for repeated measurements
Usage::

    from mindtrace.hardware import HomographyCalibrator

    # Automatic checkerboard calibration
    calibrator = HomographyCalibrator()
    calibration = calibrator.calibrate_checkerboard(
        image=checkerboard_image,
        board_size=(12, 12),  # Inner corners
        square_size=25.0,     # mm per square
        world_unit="mm"
    )

    # Manual point correspondence calibration
    calibration = calibrator.calibrate_from_correspondences(
        world_points=[(0, 0), (300, 0), (300, 200), (0, 200)],  # mm
        image_points=[(100, 50), (500, 50), (500, 400), (100, 400)],  # pixels
        world_unit="mm"
    )

    # Save for later use
    calibration.save("camera_calibration.json")
Configuration
All parameters can be configured via hardware config:
- ransac_threshold: RANSAC reprojection error threshold (default: 3.0 pixels)
- refine_corners: Enable sub-pixel corner refinement (default: True)
- corner_refinement_window: Refinement window size (default: 11)
- min_correspondences: Minimum points needed (default: 4)
- default_world_unit: Default measurement unit (default: "mm")
Limitations
- Only works for planar surfaces (Z=0 assumption)
- Requires camera to remain fixed after calibration
- Accuracy degrades with severe viewing angles
Initialize homography calibrator.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| **kwargs | | Additional arguments passed to Mindtrace base class | {} |
estimate_intrinsics_from_fov(
image_size: Tuple[int, int],
fov_horizontal_deg: float,
fov_vertical_deg: float,
principal_point: Optional[Tuple[float, float]] = None,
) -> np.ndarray
Estimate camera intrinsics matrix from field-of-view parameters.
Computes a simple pinhole camera model intrinsics matrix (K) from the camera's horizontal and vertical field of view angles and image dimensions. Useful when full camera calibration is not available.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| image_size | Tuple[int, int] | Image dimensions as (width, height) in pixels | required |
| fov_horizontal_deg | float | Horizontal field of view in degrees | required |
| fov_vertical_deg | float | Vertical field of view in degrees | required |
| principal_point | Optional[Tuple[float, float]] | Optional (cx, cy) principal point in pixels. Defaults to image center if not provided. | None |
Returns:

| Type | Description |
|---|---|
| ndarray | 3x3 camera intrinsics matrix K |
Example::

    K = calibrator.estimate_intrinsics_from_fov(
        image_size=(1920, 1080),
        fov_horizontal_deg=70.0,
        fov_vertical_deg=45.0
    )
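The pinhole model behind this estimate can be sketched in a few lines of NumPy. This is a hypothetical standalone version, not the library's implementation: the focal lengths follow from fx = (W/2)/tan(FOVh/2) and fy = (H/2)/tan(FOVv/2), with the principal point defaulting to the image center.

```python
import numpy as np

def intrinsics_from_fov(width, height, fov_h_deg, fov_v_deg, principal_point=None):
    """Build a 3x3 pinhole intrinsics matrix K from image size and FOV angles."""
    fx = (width / 2.0) / np.tan(np.radians(fov_h_deg) / 2.0)
    fy = (height / 2.0) / np.tan(np.radians(fov_v_deg) / 2.0)
    cx, cy = principal_point if principal_point else (width / 2.0, height / 2.0)
    return np.array([[fx, 0.0, cx],
                     [0.0, fy, cy],
                     [0.0, 0.0, 1.0]])

K = intrinsics_from_fov(1920, 1080, 70.0, 45.0)
```

Note that a wider FOV yields a shorter focal length, which is why the two parameters trade off directly in the matrix.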
calibrate_from_correspondences(
world_points: ndarray,
image_points: ndarray,
world_unit: Optional[str] = None,
camera_matrix: Optional[ndarray] = None,
dist_coeffs: Optional[ndarray] = None,
) -> CalibrationData
Compute homography from known point correspondences.
Establishes the homography matrix H given known world coordinates (on Z=0 plane) and their corresponding image pixel coordinates. Uses RANSAC for robust estimation in the presence of outliers.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| world_points | ndarray | Nx2 array of world coordinates in metric units (X, Y on Z=0 plane) | required |
| image_points | ndarray | Nx2 array of corresponding image coordinates in pixels (u, v) | required |
| world_unit | Optional[str] | Unit of world coordinates (e.g., 'mm', 'cm', 'm'). Uses config default if None. | None |
| camera_matrix | Optional[ndarray] | Optional 3x3 camera intrinsics matrix for undistortion | None |
| dist_coeffs | Optional[ndarray] | Optional distortion coefficients for undistortion | None |
Returns:

| Type | Description |
|---|---|
| CalibrationData | CalibrationData containing homography matrix and metadata |

Raises:

| Type | Description |
|---|---|
| CameraConfigurationError | If inputs are invalid (wrong shape, too few points) |
| HardwareOperationError | If homography estimation fails |
Example::

    # Four corner correspondences (world in mm, image in pixels)
    world_pts = np.array([[0, 0], [300, 0], [300, 200], [0, 200]])
    image_pts = np.array([[100, 50], [500, 50], [500, 400], [100, 400]])
    calibration = calibrator.calibrate_from_correspondences(
        world_points=world_pts,
        image_points=image_pts,
        world_unit="mm"
    )
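The homography here is a 3x3 matrix H solving image ≈ H · world in homogeneous coordinates. The docs state RANSAC is used for robustness (as in OpenCV's cv2.findHomography), but for exactly four noise-free correspondences a direct linear transform (DLT) illustrates the estimation; this is a standalone sketch, not the library's code:

```python
import numpy as np

def homography_dlt(world_pts, image_pts):
    """Direct linear transform: solve for H with image ~ H @ world (homogeneous)."""
    A = []
    for (X, Y), (u, v) in zip(world_pts, image_pts):
        # Each correspondence contributes two rows of the linear system A h = 0
        A.append([-X, -Y, -1, 0, 0, 0, u * X, u * Y, u])
        A.append([0, 0, 0, -X, -Y, -1, v * X, v * Y, v])
    # The solution is the right singular vector for the smallest singular value
    _, _, Vt = np.linalg.svd(np.array(A, dtype=float))
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]  # normalize so H[2, 2] == 1

world_pts = [(0, 0), (300, 0), (300, 200), (0, 200)]        # mm
image_pts = [(100, 50), (500, 50), (500, 400), (100, 400)]  # pixels
H = homography_dlt(world_pts, image_pts)

# Sanity check: H maps a world point onto its image point
p = H @ np.array([300.0, 200.0, 1.0])
print(p[:2] / p[2])  # ~ [500, 400]
```

With more than four points, RANSAC repeats this estimation on random subsets and keeps the model with the most inliers under the reprojection threshold.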
calibrate_checkerboard(
image: Union[Image, ndarray],
board_size: Optional[Tuple[int, int]] = None,
square_size: Optional[float] = None,
world_unit: Optional[str] = None,
camera_matrix: Optional[ndarray] = None,
dist_coeffs: Optional[ndarray] = None,
refine_corners: Optional[bool] = None,
) -> CalibrationData
Automatic calibration from checkerboard pattern detection.
Detects a checkerboard calibration pattern in the image, extracts corner correspondences, and computes the homography matrix. The checkerboard is assumed to lie on the Z=0 plane with known square dimensions.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| image | Union[Image, ndarray] | PIL Image or BGR numpy array containing checkerboard pattern | required |
| board_size | Optional[Tuple[int, int]] | Number of inner corners as (columns, rows). Uses config default if None. For a standard 8x8 checkerboard, use (7, 7). | None |
| square_size | Optional[float] | Physical size of one checkerboard square in world units. Uses config default if None. | None |
| world_unit | Optional[str] | Unit of square_size (e.g., 'mm', 'cm', 'm'). Uses config default if None. | None |
| camera_matrix | Optional[ndarray] | Optional 3x3 camera intrinsics matrix for undistortion | None |
| dist_coeffs | Optional[ndarray] | Optional distortion coefficients for undistortion | None |
| refine_corners | Optional[bool] | Enable sub-pixel corner refinement. Uses config default if None. | None |
Returns:

| Type | Description |
|---|---|
| CalibrationData | CalibrationData containing homography matrix and metadata |

Raises:

| Type | Description |
|---|---|
| CameraConfigurationError | If image format is unsupported |
| HardwareOperationError | If checkerboard detection fails |
Example::

    # Use config defaults for standard calibration board
    calibration = calibrator.calibrate_checkerboard(image=checkerboard_image)

    # Or override specific parameters
    calibration = calibrator.calibrate_checkerboard(
        image=checkerboard_image,
        board_size=(9, 6),   # Custom board size
        square_size=30.0,    # Custom square size
        world_unit="mm"
    )
Notes
- board_size is the number of INNER corners, not squares
- A standard 8x8 checkerboard has 7x7 inner corners
- Ensure good lighting and focus for accurate detection
- The checkerboard should fill a significant portion of the image
- If using a standard calibration board, configure its dimensions via MINDTRACE_HW_HOMOGRAPHY_CHECKERBOARD_COLS, MINDTRACE_HW_HOMOGRAPHY_CHECKERBOARD_ROWS, and MINDTRACE_HW_HOMOGRAPHY_CHECKERBOARD_SQUARE
calibrate_checkerboard_multi_view(
images: list[Union[Image, ndarray]],
positions: list[Tuple[float, float]],
board_size: Optional[Tuple[int, int]] = None,
square_width: Optional[float] = None,
square_height: Optional[float] = None,
world_unit: Optional[str] = None,
camera_matrix: Optional[ndarray] = None,
dist_coeffs: Optional[ndarray] = None,
refine_corners: Optional[bool] = None,
) -> CalibrationData
Calibrate from multiple checkerboard positions on the same plane.
Combines corner detections from multiple images where the checkerboard is placed at different positions on the measurement plane. This provides better calibration coverage over large areas without requiring an oversized calibration target.
Ideal for calibrating long surfaces (e.g., metallic bars, conveyor belts) using a standard-sized checkerboard moved to multiple positions.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| images | list[Union[Image, ndarray]] | List of images, each containing the checkerboard at a different position | required |
| positions | list[Tuple[float, float]] | List of (x_offset, y_offset) tuples in world units specifying the checkerboard's origin position in each image. The first position is typically (0, 0), and subsequent positions indicate how far the checkerboard was moved. | required |
| board_size | Optional[Tuple[int, int]] | Number of inner corners as (columns, rows). Uses config default if None. | None |
| square_width | Optional[float] | Physical width of one checkerboard square in world units. Uses config default if None. | None |
| square_height | Optional[float] | Physical height of one checkerboard square in world units. Uses config default if None. | None |
| world_unit | Optional[str] | Unit of positions and square_width/height. Uses config default if None. | None |
| camera_matrix | Optional[ndarray] | Optional 3x3 camera intrinsics matrix for undistortion | None |
| dist_coeffs | Optional[ndarray] | Optional distortion coefficients for undistortion | None |
| refine_corners | Optional[bool] | Enable sub-pixel corner refinement. Uses config default if None. | None |
|
Returns:

| Type | Description |
|---|---|
| CalibrationData | CalibrationData containing homography matrix computed from all positions |

Raises:

| Type | Description |
|---|---|
| CameraConfigurationError | If inputs are invalid or inconsistent |
| HardwareOperationError | If checkerboard detection fails in any image |
Example::

    # Calibrate a 2-meter long bar using 3 checkerboard positions
    images = [image1, image2, image3]
    positions = [
        (0, 0),      # Start of bar
        (1000, 0),   # Middle (1000mm from start)
        (2000, 0)    # End (2000mm from start)
    ]
    calibration = calibrator.calibrate_checkerboard_multi_view(
        images=images,
        positions=positions,
        board_size=(12, 12),
        square_width=25.0,   # 25mm wide squares
        square_height=25.0,  # 25mm tall squares
        world_unit="mm"
    )
Notes
- All images must show the same plane (Z=0)
- Positions specify where the checkerboard origin (top-left corner) is located
- Use more positions for better coverage of large measurement areas
- Typical usage: 3-5 positions for long surfaces
- RANSAC automatically handles slight inaccuracies in position measurements
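The core bookkeeping in multi-view calibration is generating each placement's world-coordinate corner grid and shifting it by that placement's measured offset before stacking all correspondences into one estimation. A simplified standalone sketch of that step (not the library's code; function name is illustrative):

```python
import numpy as np

def stacked_world_grids(board_size, square_w, square_h, positions):
    """World coordinates of all inner corners across all board placements."""
    cols, rows = board_size
    # Grid of inner-corner coordinates for a board placed at the origin
    xs, ys = np.meshgrid(np.arange(cols) * square_w, np.arange(rows) * square_h)
    base = np.column_stack([xs.ravel(), ys.ravel()])  # (cols*rows, 2)
    # Shift the same grid by each placement's (x_offset, y_offset) and stack
    return np.vstack([base + np.asarray(offset, dtype=float) for offset in positions])

# Three placements of a (12, 12)-corner board along a 2 m bar, 25 mm squares
pts = stacked_world_grids((12, 12), 25.0, 25.0, [(0, 0), (1000, 0), (2000, 0)])
```

The stacked points, paired with the corners detected in each image, then feed a single RANSAC homography fit covering the whole surface.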
CalibrationData
dataclass
CalibrationData(
H: ndarray,
camera_matrix: Optional[ndarray] = None,
dist_coeffs: Optional[ndarray] = None,
world_unit: str = "mm",
plane_normal_camera: Optional[ndarray] = None,
)
Immutable container for homography calibration data.
Holds the homography matrix and optional camera intrinsics derived or provided during calibration. The homography maps world plane coordinates (Z=0) in metric units to image pixel coordinates.
Attributes:

| Name | Type | Description |
|---|---|---|
| H | ndarray | 3x3 homography matrix from world plane (Z=0) to image pixels |
| camera_matrix | Optional[ndarray] | 3x3 camera intrinsics matrix (K) if known or estimated |
| dist_coeffs | Optional[ndarray] | Lens distortion coefficients if available |
| world_unit | str | Unit used for world coordinates (e.g., 'mm', 'cm', 'm', 'in', 'ft') |
| plane_normal_camera | Optional[ndarray] | Optional 3D normal of the plane in camera frame if recovered |
Save calibration data to JSON file.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| filepath | str | Path to save the calibration data | required |
Note
NumPy arrays are converted to lists for JSON serialization.
classmethod
Load calibration data from JSON file.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| filepath | str | Path to the calibration data file | required |

Returns:

| Type | Description |
|---|---|
| CalibrationData | CalibrationData instance loaded from file |

Raises:

| Type | Description |
|---|---|
| FileNotFoundError | If the file doesn't exist |
| ValueError | If the file format is invalid |
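The ndarray-to-list conversion the Note mentions is the only subtlety in this JSON round-trip. A hypothetical standalone sketch (field names assumed from the dataclass, not taken from the library's serializer):

```python
import json
import os
import tempfile

import numpy as np

def save_calibration(filepath, H, world_unit="mm"):
    """Write calibration to JSON; ndarray fields are stored as nested lists."""
    with open(filepath, "w") as f:
        json.dump({"H": H.tolist(), "world_unit": world_unit}, f)

def load_calibration(filepath):
    """Read calibration back, restoring list fields to ndarrays."""
    with open(filepath) as f:
        data = json.load(f)
    return np.array(data["H"]), data["world_unit"]

H = np.array([[4 / 3, 0.0, 100.0], [0.0, 1.75, 50.0], [0.0, 0.0, 1.0]])
path = os.path.join(tempfile.gettempdir(), "camera_calibration.json")
save_calibration(path, H)
H2, unit = load_calibration(path)
```

Because JSON stores plain floats, the round-trip preserves the matrix to full double precision.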
MeasuredBox
dataclass
MeasuredBox(
corners_world: ndarray,
width_world: float,
height_world: float,
area_world: float,
unit: str,
)
Immutable container for metric-space measurement of a bounding box.
Stores the result of projecting a pixel-space bounding box to world coordinates on a planar surface using homography inversion. Contains the projected corner points and computed physical dimensions.
Attributes:

| Name | Type | Description |
|---|---|---|
| corners_world | ndarray | 4x2 array of corner coordinates in world units (top-left, top-right, bottom-right, bottom-left) |
| width_world | float | Width in world units (distance between top-left and top-right) |
| height_world | float | Height in world units (distance between top-left and bottom-left) |
| area_world | float | Area in square world units (computed via shoelace formula) |
| unit | str | Unit of measurement (e.g., 'mm', 'cm', 'm', 'in', 'ft') |
HomographyMeasurer
Bases: Mindtrace
Measures physical dimensions of objects using planar homography.
Projects pixel-space bounding boxes from object detection to real-world metric coordinates on a planar surface using a pre-calibrated homography matrix. Enables accurate physical size measurements from camera images.
The measurer uses the inverse homography (H⁻¹) to map image pixels back to world coordinates, then computes Euclidean distances and polygon areas for size measurements.
Features
- Pixel-to-world coordinate projection
- Bounding box dimension measurement (width, height, area)
- Multi-unit support with automatic conversion
- Batch processing for multiple detections
- Pre-computed inverse homography for performance
Typical Workflow
- Calibrate the camera view with HomographyCalibrator.calibrate_*() to obtain CalibrationData
- Create measurer with calibration data
- Detect objects with vision model (YOLO, etc.)
- Measure physical dimensions from bounding boxes
- Apply size-based filtering or quality control
Usage::

    from mindtrace.hardware import HomographyCalibrator, HomographyMeasurer
    from mindtrace.core.types.bounding_box import BoundingBox

    # One-time calibration
    calibrator = HomographyCalibrator()
    calibration = calibrator.calibrate_checkerboard(
        image=checkerboard_image,
        board_size=(12, 12),
        square_size=25.0,
        world_unit="mm"
    )

    # Create measurer (reuse for all measurements)
    measurer = HomographyMeasurer(calibration)

    # Measure objects from detection results
    detections = yolo.detect(frame)  # List[BoundingBox]
    measurements = measurer.measure_bounding_boxes(detections, target_unit="cm")
    for measured in measurements:
        print(f"Width: {measured.width_world:.2f} cm")
        print(f"Height: {measured.height_world:.2f} cm")
        print(f"Area: {measured.area_world:.2f} cm²")

        # Size-based filtering
        if measured.width_world > 10.0:
            reject_oversized_part(measured)
Configuration
- Supported units: mm, cm, m, in, ft (configurable via hardware config)
- Default world unit: Inherited from calibration data
Limitations
- Only works for planar surfaces (Z=0 assumption)
- Accuracy depends on calibration quality and viewing angle
- Assumes objects lie flat on the calibrated plane
- Camera must remain fixed after calibration
Initialize homography measurer with calibration data.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| calibration | CalibrationData | CalibrationData from HomographyCalibrator | required |
| **kwargs | | Additional arguments passed to Mindtrace base class | {} |

Raises:

| Type | Description |
|---|---|
| CameraConfigurationError | If homography matrix is invalid |
Project pixel coordinates to world plane coordinates.
Maps Nx2 pixel coordinates to world plane coordinates using the inverse homography matrix H⁻¹. This is the core projection operation for all measurement functionality.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| points_px | ndarray | Nx2 array of pixel coordinates (u, v) | required |

Returns:

| Type | Description |
|---|---|
| ndarray | Nx2 array of world coordinates (X, Y) in calibration world unit |

Raises:

| Type | Description |
|---|---|
| CameraConfigurationError | If input array has wrong shape |
Example::

    # Project single point
    world_point = measurer.pixels_to_world(np.array([[320, 240]]))

    # Project multiple points
    pixel_corners = np.array([[100, 50], [500, 50], [500, 400], [100, 400]])
    world_corners = measurer.pixels_to_world(pixel_corners)
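The projection itself is a standard homogeneous-coordinate operation: append 1 to each pixel, multiply by H⁻¹, and divide by the resulting w component. A minimal standalone NumPy sketch under that assumption, using a hypothetical H for illustration:

```python
import numpy as np

def pixels_to_world(points_px, H):
    """Map Nx2 pixel coords to world-plane coords via the inverse homography."""
    pts = np.asarray(points_px, dtype=float)
    homog = np.hstack([pts, np.ones((pts.shape[0], 1))])  # (N, 3) homogeneous
    world = homog @ np.linalg.inv(H).T                    # apply H^-1 to each row
    return world[:, :2] / world[:, 2:3]                   # divide out w

# Hypothetical H mapping world (mm) -> pixels: u = 100 + (4/3) X, v = 50 + 1.75 Y
H = np.array([[4 / 3, 0.0, 100.0], [0.0, 1.75, 50.0], [0.0, 0.0, 1.0]])
corners = pixels_to_world([[100, 50], [500, 50], [500, 400], [100, 400]], H)
```

Pre-computing H⁻¹ once, as the class description mentions, avoids repeating the matrix inversion on every call.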
Measure physical dimensions of a bounding box on the calibrated plane.
Projects the four corners of a pixel-space bounding box to world coordinates, then computes width, height, and area in the specified unit.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| box | BoundingBox | BoundingBox from object detection (x, y, width, height in pixels) | required |
| target_unit | Optional[str] | Unit for output measurements (e.g., 'cm', 'm'). Uses calibration unit if None. | None |

Returns:

| Type | Description |
|---|---|
| MeasuredBox | MeasuredBox with physical dimensions and corner coordinates |
Example::

    # From object detection
    detection = BoundingBox(x=100, y=50, width=400, height=350)
    measured = measurer.measure_bounding_box(detection, target_unit="cm")
    print(f"Object is {measured.width_world:.1f} × {measured.height_world:.1f} cm")
    print(f"Area: {measured.area_world:.1f} cm²")
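Width, height, and area follow directly from the projected corners: Euclidean distances between adjacent corners, and the shoelace formula for area (as the MeasuredBox attributes describe). A standalone sketch of that geometry step:

```python
import numpy as np

def box_dimensions(corners_world):
    """Width, height, and shoelace area from 4x2 corners (TL, TR, BR, BL)."""
    c = np.asarray(corners_world, dtype=float)
    width = np.linalg.norm(c[1] - c[0])   # top-left -> top-right
    height = np.linalg.norm(c[3] - c[0])  # top-left -> bottom-left
    x, y = c[:, 0], c[:, 1]
    # Shoelace formula: valid for any simple polygon, not just rectangles
    area = 0.5 * abs(np.dot(x, np.roll(y, -1)) - np.dot(y, np.roll(x, -1)))
    return width, height, area

w, h, a = box_dimensions([[0, 0], [300, 0], [300, 200], [0, 200]])
```

Using the shoelace formula matters because a pixel-space rectangle generally projects to a non-rectangular quadrilateral on the world plane.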
measure_bounding_boxes(
boxes: Sequence[BoundingBox], target_unit: Optional[str] = None
) -> List[MeasuredBox]
Measure physical dimensions of multiple bounding boxes.
Batch processing of multiple object detections. More efficient than calling measure_bounding_box() in a loop.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| boxes | Sequence[BoundingBox] | Sequence of BoundingBox objects from object detection | required |
| target_unit | Optional[str] | Unit for output measurements. Uses calibration unit if None. | None |

Returns:

| Type | Description |
|---|---|
| List[MeasuredBox] | List of MeasuredBox objects with physical dimensions |
Example::

    # Batch measurement from multiple detections
    detections = yolo.detect(frame)  # List[BoundingBox]
    measurements = measurer.measure_bounding_boxes(detections, target_unit="cm")

    # Size-based filtering
    large_objects = [m for m in measurements if m.width_world > 15.0]

    # Quality control
    for measured in measurements:
        if not (10.0 <= measured.width_world <= 20.0):
            reject_part(measured)
measure_distance(
point1: Union[Tuple[float, float], ndarray],
point2: Union[Tuple[float, float], ndarray],
target_unit: Optional[str] = None,
) -> Tuple[float, str]
Measure Euclidean distance between two points on the calibrated plane.
Converts pixel coordinates to world coordinates and computes the distance. Useful for measuring gaps, spacing, or verifying known distances.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| point1 | Union[Tuple[float, float], ndarray] | First point as (x, y) pixel coordinates | required |
| point2 | Union[Tuple[float, float], ndarray] | Second point as (x, y) pixel coordinates | required |
| target_unit | Optional[str] | Unit for output distance. Uses calibration unit if None. | None |

Returns:

| Type | Description |
|---|---|
| Tuple[float, str] | Tuple of (distance, unit) |

Raises:

| Type | Description |
|---|---|
| ValueError | If target_unit is not supported |
Example::

    # Measure distance between two detected points
    point1 = (150, 200)  # pixels
    point2 = (350, 200)  # pixels
    distance, unit = measurer.measure_distance(point1, point2, target_unit="mm")
    print(f"Distance: {distance:.2f} {unit}")

    # Verify calibration accuracy
    known_distance_mm = 100.0
    measured_distance, _ = measurer.measure_distance(ref_point1, ref_point2, "mm")
    error_percent = abs(measured_distance - known_distance_mm) / known_distance_mm * 100
    print(f"Calibration error: {error_percent:.2f}%")
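Converting between the supported units (mm, cm, m, in, ft) is a matter of scaling through a common base. The conversion factors below are standard, though the library's internal lookup table and function name are assumptions for illustration:

```python
# Millimetres per unit, for the units the docs list (mm, cm, m, in, ft)
MM_PER_UNIT = {"mm": 1.0, "cm": 10.0, "m": 1000.0, "in": 25.4, "ft": 304.8}

def convert(value, from_unit, to_unit):
    """Convert a length by passing through millimetres as the common base."""
    if from_unit not in MM_PER_UNIT or to_unit not in MM_PER_UNIT:
        raise ValueError(f"Unsupported unit: {from_unit} or {to_unit}")
    return value * MM_PER_UNIT[from_unit] / MM_PER_UNIT[to_unit]

print(convert(100.0, "mm", "cm"))  # 10.0
print(convert(1.0, "ft", "in"))
```

Raising ValueError for unknown units mirrors the behaviour measure_distance documents for an unsupported target_unit.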
calibrator
Homography calibration for planar surface measurement.
This module provides calibration methods for establishing the relationship between image pixel coordinates and real-world metric coordinates on a planar surface.
data
Data structures for homography calibration and measurement.
This module defines immutable data containers for homography-based measurement operations.
dataclass
CalibrationData(
H: ndarray,
camera_matrix: Optional[ndarray] = None,
dist_coeffs: Optional[ndarray] = None,
world_unit: str = "mm",
plane_normal_camera: Optional[ndarray] = None,
)
Immutable container for homography calibration data.
Holds the homography matrix and optional camera intrinsics derived or provided during calibration. The homography maps world plane coordinates (Z=0) in metric units to image pixel coordinates.
Attributes:
| Name | Type | Description |
|---|---|---|
H |
ndarray
|
3x3 homography matrix from world plane (Z=0) to image pixels |
camera_matrix |
Optional[ndarray]
|
3x3 camera intrinsics matrix (K) if known or estimated |
dist_coeffs |
Optional[ndarray]
|
Lens distortion coefficients if available |
world_unit |
str
|
Unit used for world coordinates (e.g., 'mm', 'cm', 'm', 'in', 'ft') |
plane_normal_camera |
Optional[ndarray]
|
Optional 3D normal of the plane in camera frame if recovered |
Save calibration data to JSON file.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| filepath | str | Path to save the calibration data | required |
Note
NumPy arrays are converted to lists for JSON serialization.
classmethod
Load calibration data from JSON file.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| filepath | str | Path to the calibration data file | required |
Returns:
| Type | Description |
|---|---|
| CalibrationData | CalibrationData instance loaded from file |
Raises:
| Type | Description |
|---|---|
| FileNotFoundError | If the file doesn't exist |
| ValueError | If the file format is invalid |
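The save/load note above can be sketched as a minimal JSON round-trip; the helper names `save_calibration`/`load_calibration` are illustrative, not the class's actual method names:

```python
import json
import numpy as np

def save_calibration(filepath, H, world_unit="mm"):
    # NumPy arrays are converted to lists for JSON serialization (per the note above)
    data = {"H": np.asarray(H).tolist(), "world_unit": world_unit}
    with open(filepath, "w") as f:
        json.dump(data, f)

def load_calibration(filepath):
    with open(filepath) as f:
        data = json.load(f)
    # Restore the homography as an ndarray
    return np.array(data["H"]), data["world_unit"]
```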
dataclass
MeasuredBox(
corners_world: ndarray,
width_world: float,
height_world: float,
area_world: float,
unit: str,
)
Immutable container for metric-space measurement of a bounding box.
Stores the result of projecting a pixel-space bounding box to world coordinates on a planar surface using homography inversion. Contains the projected corner points and computed physical dimensions.
Attributes:
| Name | Type | Description |
|---|---|---|
| corners_world | ndarray | 4x2 array of corner coordinates in world units (top-left, top-right, bottom-right, bottom-left) |
| width_world | float | Width in world units (distance between top-left and top-right) |
| height_world | float | Height in world units (distance between top-left and bottom-left) |
| area_world | float | Area in square world units (computed via shoelace formula) |
| unit | str | Unit of measurement (e.g., 'mm', 'cm', 'm', 'in', 'ft') |
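The shoelace formula mentioned for `area_world` can be sketched in a few lines (a standalone helper for illustration, not the library's internal code):

```python
import numpy as np

def shoelace_area(corners):
    # corners: Nx2 array of polygon vertices in order (e.g. the 4x2 corners_world)
    x, y = corners[:, 0], corners[:, 1]
    return 0.5 * abs(np.dot(x, np.roll(y, 1)) - np.dot(y, np.roll(x, 1)))

# A 100 mm x 50 mm rectangle has area 5000 mm^2
rect = np.array([[0, 0], [100, 0], [100, 50], [0, 50]], dtype=float)
```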
measurer
Homography-based measurement for planar objects.
This module provides measurement operations that project pixel-space bounding boxes to real-world metric coordinates using calibrated homography transformations.
Bases: Mindtrace
Measures physical dimensions of objects using planar homography.
Projects pixel-space bounding boxes from object detection to real-world metric coordinates on a planar surface using a pre-calibrated homography matrix. Enables accurate physical size measurements from camera images.
The measurer uses the inverse homography (H⁻¹) to map image pixels back to world coordinates, then computes Euclidean distances and polygon areas for size measurements.
Features
- Pixel-to-world coordinate projection
- Bounding box dimension measurement (width, height, area)
- Multi-unit support with automatic conversion
- Batch processing for multiple detections
- Pre-computed inverse homography for performance
Typical Workflow
- Calibrate the camera view with HomographyCalibrator.calibrate_*() to obtain calibration data
- Create measurer with calibration data
- Detect objects with vision model (YOLO, etc.)
- Measure physical dimensions from bounding boxes
- Apply size-based filtering or quality control
Usage::
from mindtrace.hardware import HomographyCalibrator, HomographyMeasurer
from mindtrace.core.types.bounding_box import BoundingBox
# One-time calibration
calibrator = HomographyCalibrator()
calibration = calibrator.calibrate_checkerboard(
image=checkerboard_image,
board_size=(12, 12),
square_size=25.0,
world_unit="mm"
)
# Create measurer (reuse for all measurements)
measurer = HomographyMeasurer(calibration)
# Measure objects from detection results
detections = yolo.detect(frame) # List[BoundingBox]
measurements = measurer.measure_bounding_boxes(detections, target_unit="cm")
for measured in measurements:
print(f"Width: {measured.width_world:.2f} cm")
print(f"Height: {measured.height_world:.2f} cm")
print(f"Area: {measured.area_world:.2f} cm²")
# Size-based filtering
if measured.width_world > 10.0:
reject_oversized_part(measured)
Configuration
- Supported units: mm, cm, m, in, ft (configurable via hardware config)
- Default world unit: Inherited from calibration data
Limitations
- Only works for planar surfaces (Z=0 assumption)
- Accuracy depends on calibration quality and viewing angle
- Assumes objects lie flat on the calibrated plane
- Camera must remain fixed after calibration
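The multi-unit support noted above can be sketched with standard millimeter conversion factors (the dictionary and function names here are illustrative; the real implementation may read factors from the hardware config):

```python
# Millimeters per unit, for the units listed in Configuration above
MM_PER_UNIT = {"mm": 1.0, "cm": 10.0, "m": 1000.0, "in": 25.4, "ft": 304.8}

def convert(value, from_unit, to_unit):
    # Convert via millimeters as the common base unit
    return value * MM_PER_UNIT[from_unit] / MM_PER_UNIT[to_unit]
```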
Initialize homography measurer with calibration data.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| calibration | CalibrationData | CalibrationData from HomographyCalibrator | required |
| **kwargs | | Additional arguments passed to Mindtrace base class | {} |
Raises:
| Type | Description |
|---|---|
| CameraConfigurationError | If homography matrix is invalid |
Project pixel coordinates to world plane coordinates.
Maps Nx2 pixel coordinates to world plane coordinates using the inverse homography matrix H⁻¹. This is the core projection operation for all measurement functionality.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| points_px | ndarray | Nx2 array of pixel coordinates (u, v) | required |
Returns:
| Type | Description |
|---|---|
| ndarray | Nx2 array of world coordinates (X, Y) in calibration world unit |
Raises:
| Type | Description |
|---|---|
| CameraConfigurationError | If input array has wrong shape |
Example::
# Project single point
world_point = measurer.pixels_to_world(np.array([[320, 240]]))
# Project multiple points
pixel_corners = np.array([[100, 50], [500, 50], [500, 400], [100, 400]])
world_corners = measurer.pixels_to_world(pixel_corners)
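The projection above amounts to a homogeneous multiply by H⁻¹ followed by dehomogenization; a self-contained sketch, assuming H maps world-plane coordinates to pixels:

```python
import numpy as np

def pixels_to_world(H, points_px):
    # Append homogeneous coordinate: (u, v) -> (u, v, 1)
    pts = np.hstack([points_px, np.ones((len(points_px), 1))])
    # Apply the inverse homography, then divide by the homogeneous w
    world_h = (np.linalg.inv(H) @ pts.T).T
    return world_h[:, :2] / world_h[:, 2:3]
```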
Measure physical dimensions of a bounding box on the calibrated plane.
Projects the four corners of a pixel-space bounding box to world coordinates, then computes width, height, and area in the specified unit.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| box | BoundingBox | BoundingBox from object detection (x, y, width, height in pixels) | required |
| target_unit | Optional[str] | Unit for output measurements (e.g., 'cm', 'm'). Uses calibration unit if None. | None |
Returns:
| Type | Description |
|---|---|
| MeasuredBox | MeasuredBox with physical dimensions and corner coordinates |
Example::
# From object detection
detection = BoundingBox(x=100, y=50, width=400, height=350)
measured = measurer.measure_bounding_box(detection, target_unit="cm")
print(f"Object is {measured.width_world:.1f} × {measured.height_world:.1f} cm")
print(f"Area: {measured.area_world:.1f} cm²")
measure_bounding_boxes(
boxes: Sequence[BoundingBox], target_unit: Optional[str] = None
) -> List[MeasuredBox]
Measure physical dimensions of multiple bounding boxes.
Batch processing of multiple object detections. More efficient than calling measure_bounding_box() in a loop.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| boxes | Sequence[BoundingBox] | Sequence of BoundingBox objects from object detection | required |
| target_unit | Optional[str] | Unit for output measurements. Uses calibration unit if None. | None |
Returns:
| Type | Description |
|---|---|
| List[MeasuredBox] | List of MeasuredBox objects with physical dimensions |
Example::
# Batch measurement from multiple detections
detections = yolo.detect(frame) # List[BoundingBox]
measurements = measurer.measure_bounding_boxes(detections, target_unit="cm")
# Size-based filtering
large_objects = [m for m in measurements if m.width_world > 15.0]
# Quality control
for measured in measurements:
if not (10.0 <= measured.width_world <= 20.0):
reject_part(measured)
measure_distance(
point1: Union[Tuple[float, float], ndarray],
point2: Union[Tuple[float, float], ndarray],
target_unit: Optional[str] = None,
) -> Tuple[float, str]
Measure Euclidean distance between two points on the calibrated plane.
Converts pixel coordinates to world coordinates and computes the distance. Useful for measuring gaps, spacing, or verifying known distances.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| point1 | Union[Tuple[float, float], ndarray] | First point as (x, y) pixel coordinates | required |
| point2 | Union[Tuple[float, float], ndarray] | Second point as (x, y) pixel coordinates | required |
| target_unit | Optional[str] | Unit for output distance. Uses calibration unit if None. | None |
Returns:
| Type | Description |
|---|---|
| Tuple[float, str] | Tuple of (distance, unit) |
Raises:
| Type | Description |
|---|---|
| ValueError | If target_unit is not supported |
Example::
# Measure distance between two detected points
point1 = (150, 200) # pixels
point2 = (350, 200) # pixels
distance, unit = measurer.measure_distance(point1, point2, target_unit="mm")
print(f"Distance: {distance:.2f} {unit}")
# Verify calibration accuracy
known_distance_mm = 100.0
measured_distance, _ = measurer.measure_distance(ref_point1, ref_point2, "mm")
error_percent = abs(measured_distance - known_distance_mm) / known_distance_mm * 100
print(f"Calibration error: {error_percent:.2f}%")
setup
Camera Setup Module
This module provides setup scripts for various camera SDKs and utilities for configuring camera hardware in the Mindtrace system.
Available CLI commands (after package installation):
- mindtrace-camera-setup install # Install all camera SDKs
- mindtrace-camera-setup uninstall # Uninstall all camera SDKs
- mindtrace-camera-basler install # Install Basler Pylon SDK
- mindtrace-camera-basler uninstall # Uninstall Basler Pylon SDK
- mindtrace-camera-genicam install # Install GenICam CTI files
- mindtrace-camera-genicam uninstall # Uninstall GenICam SDK
- mindtrace-camera-genicam verify # Verify GenICam installation
Each setup script uses Typer for CLI and can be run independently.
PylonSDKInstaller
Bases: Mindtrace
Basler Pylon SDK installer with guided wizard.
This class provides an interactive installation wizard that guides users through downloading and installing the Basler Pylon SDK from the official Basler website.
Initialize the Pylon SDK installer.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| package_path | Optional[str] | Optional path to pre-downloaded package file | None |
Install the Pylon SDK using interactive wizard or pre-downloaded package.
Returns:
| Type | Description |
|---|---|
| bool | True if installation successful, False otherwise |
CameraSystemSetup
Bases: Mindtrace
Unified camera system setup and configuration manager.
This class handles the installation and configuration of all camera SDKs and related network settings for the Mindtrace hardware system.
Initialize the camera system setup manager.
Install all camera SDKs.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| release_version | str | SDK release version to install | 'v1.0-stable' |
Returns:
| Type | Description |
|---|---|
| bool | True if all installations successful, False otherwise |
Uninstall all camera SDKs.
Returns:
| Type | Description |
|---|---|
| bool | True if all uninstallations successful, False otherwise |
Configure firewall rules to allow camera communication.
This method configures platform-specific firewall rules to allow communication with GigE Vision cameras on the specified IP range.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| ip_range | Optional[str] | IP range to allow (uses config default if None) | None |
Returns:
| Type | Description |
|---|---|
| bool | True if firewall configuration successful, False otherwise |
GenICamCTIInstaller
Bases: Mindtrace
Matrix Vision GenICam CTI installer and manager.
This class handles the download, installation, and uninstallation of the Matrix Vision Impact Acquire SDK and GenTL Producer across different platforms.
Initialize the GenICam CTI installer.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| release_version | str | SDK release version to download | 'latest' |
Get the expected CTI file path for the current platform.
Returns:
| Type | Description |
|---|---|
| str | Path to the CTI file for the current platform |
Verify that the CTI file is properly installed.
Returns:
| Type | Description |
|---|---|
| bool | True if CTI file exists and is accessible, False otherwise |
Install the Matrix Vision Impact Acquire SDK for the current platform.
Returns:
| Type | Description |
|---|---|
| bool | True if installation successful, False otherwise |
configure_firewall_helper
Configure firewall rules to allow camera communication.
This function provides a simple interface to configure firewall rules for camera network communication. It works on both Windows and Linux.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| ip_range | Optional[str] | IP range to allow (uses config default if None) | None |
Returns:
| Type | Description |
|---|---|
| bool | True if firewall configuration successful, False otherwise |
setup_basler
Basler Pylon SDK Setup Script
This script provides a guided installation wizard for the Basler Pylon SDK for both Linux and Windows systems. The Pylon SDK provides tools like Pylon Viewer and IP Configurator for camera management.
Note: pypylon (the Python package) is self-contained for camera operations. This SDK installation is only needed for the GUI tools.
Features:
- Interactive guided wizard with browser integration
- Platform-specific installation instructions
- Support for pre-downloaded packages (--package flag)
- Comprehensive logging and error handling
- Uninstallation support
Usage
python setup_basler.py # Interactive wizard
python setup_basler.py --package /path/to/file # Use pre-downloaded file
python setup_basler.py --uninstall # Uninstall SDK
mindtrace-camera-basler-install # Console script (install)
mindtrace-camera-basler-uninstall # Console script (uninstall)
Bases: Mindtrace
Basler Pylon SDK installer with guided wizard.
This class provides an interactive installation wizard that guides users through downloading and installing the Basler Pylon SDK from the official Basler website.
Initialize the Pylon SDK installer.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| package_path | Optional[str] | Optional path to pre-downloaded package file | None |
Install the Pylon SDK using interactive wizard or pre-downloaded package.
Returns:
| Type | Description |
|---|---|
| bool | True if installation successful, False otherwise |
install(
package: Optional[Path] = typer.Option(
None,
"--package",
"-p",
help="Path to pre-downloaded Pylon SDK package file",
exists=True,
dir_okay=False,
),
verbose: bool = typer.Option(
False, "--verbose", "-v", help="Enable verbose logging"
),
) -> None
Install the Basler Pylon SDK using an interactive wizard.
The wizard will guide you through downloading and installing the SDK from Basler's official website where you'll accept their EULA.
For CI/automation, use --package to provide a pre-downloaded file.
setup_cameras
Camera Setup and Configuration Script
This script provides a unified interface for installing and configuring all camera SDKs and related network settings for the Mindtrace hardware system. It combines Basler SDK installation with firewall configuration for camera network communication.
Features:
- Combined installation of all camera SDKs (Basler Pylon, Matrix Vision GenICam CTI)
- Firewall configuration for camera network communication
- Cross-platform support (Windows, Linux, and macOS)
- Individual SDK uninstallation support
- Comprehensive logging and error handling
- Configurable IP range and firewall settings
- Integration with Mindtrace configuration system
Configuration
The script uses the Mindtrace hardware configuration system for default values. Settings can be customized via:
- Environment Variables:
  - MINDTRACE_HW_NETWORK_CAMERA_IP_RANGE: IP range for firewall rules (default: 192.168.50.0/24)
  - MINDTRACE_HW_NETWORK_FIREWALL_RULE_NAME: Name for firewall rules (default: "Allow Camera Network")
- Configuration File (hardware_config.json): { "network": { "camera_ip_range": "192.168.50.0/24", "firewall_rule_name": "Allow Camera Network" } }
- Command Line Arguments (highest priority)
Usage
python setup_cameras.py # Install all SDKs
python setup_cameras.py --uninstall # Uninstall all SDKs
python setup_cameras.py --configure-firewall # Configure firewall only
python setup_cameras.py --ip-range 10.0.0.0/24 # Use custom IP range
mindtrace-setup-cameras # Console script
Network Configuration
The script configures firewall rules to allow camera communication on the specified IP range. This is essential for GigE Vision cameras that communicate over Ethernet. The default IP range (192.168.50.0/24) follows industrial camera networking standards.
Bases: Mindtrace
Unified camera system setup and configuration manager.
This class handles the installation and configuration of all camera SDKs and related network settings for the Mindtrace hardware system.
Initialize the camera system setup manager.
Install all camera SDKs.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| release_version | str | SDK release version to install | 'v1.0-stable' |
Returns:
| Type | Description |
|---|---|
| bool | True if all installations successful, False otherwise |
Uninstall all camera SDKs.
Returns:
| Type | Description |
|---|---|
| bool | True if all uninstallations successful, False otherwise |
Configure firewall rules to allow camera communication.
This method configures platform-specific firewall rules to allow communication with GigE Vision cameras on the specified IP range.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| ip_range | Optional[str] | IP range to allow (uses config default if None) | None |
Returns:
| Type | Description |
|---|---|
| bool | True if firewall configuration successful, False otherwise |
Configure firewall rules to allow camera communication.
This function provides a simple interface to configure firewall rules for camera network communication. It works on both Windows and Linux.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| ip_range | Optional[str] | IP range to allow (uses config default if None) | None |
Returns:
| Type | Description |
|---|---|
| bool | True if firewall configuration successful, False otherwise |
install(
version: str = typer.Option(
"v1.0-stable", "--version", help="SDK release version to install"
),
ip_range: Optional[str] = typer.Option(
None,
"--ip-range",
help="IP range to allow in firewall (uses config default if not specified)",
),
verbose: bool = typer.Option(
False, "--verbose", "-v", help="Enable verbose logging"
),
) -> None
Install all camera SDKs and configure firewall.
Installs Basler Pylon SDK and Matrix Vision GenICam CTI files, then configures firewall rules for GigE Vision camera communication.
uninstall(
verbose: bool = typer.Option(
False, "--verbose", "-v", help="Enable verbose logging"
)
) -> None
Uninstall all camera SDKs.
configure_firewall(
ip_range: Optional[str] = typer.Option(
None,
"--ip-range",
help="IP range to allow in firewall (uses config default if not specified)",
),
verbose: bool = typer.Option(
False, "--verbose", "-v", help="Enable verbose logging"
),
) -> None
Configure firewall rules for camera network communication.
Configures platform-specific firewall rules to allow GigE Vision camera communication on the specified IP range.
- Windows: uses netsh advfirewall commands
- Linux: uses UFW (Uncomplicated Firewall)
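For reference, a sketch of the kind of rules the platform-specific configuration issues; the exact rule names and flags shown are assumptions, not the script's literal commands:

```shell
# Windows (netsh advfirewall), rule name and IP range per the defaults above:
# netsh advfirewall firewall add rule name="Allow Camera Network" dir=in action=allow remoteip=192.168.50.0/24

# Linux (UFW):
# sudo ufw allow from 192.168.50.0/24
```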
setup_genicam
Matrix Vision GenICam CTI Setup Script
This script automates the download and installation of the Matrix Vision Impact Acquire SDK and GenTL Producer (CTI files) for Linux, Windows, and macOS. The CTI files are required for GenICam camera communication via Harvesters.
Features:
- Automatic SDK download from Matrix Vision or GitHub releases
- Platform-specific installation (Linux .deb/.tar.gz, Windows .exe, macOS .dmg/.pkg)
- CTI file detection and verification
- Administrative privilege handling
- Comprehensive logging and error handling
- Uninstallation support
- Harvesters CTI path configuration
CTI File Locations:
- Linux: /opt/ImpactAcquire/lib/x86_64/mvGenTLProducer.cti
- Windows: C:\Program Files\MATRIX VISION\mvIMPACT Acquire\bin\x64\mvGenTLProducer.cti
- macOS: /Applications/mvIMPACT_Acquire.app/Contents/Libraries/x86_64/mvGenTLProducer.cti
Usage
python setup_genicam.py # Install CTI files
python setup_genicam.py --uninstall # Uninstall SDK
python setup_genicam.py --verify # Verify CTI installation
mindtrace-camera-genicam-install # Console script (install)
mindtrace-camera-genicam-uninstall # Console script (uninstall)
mindtrace-camera-genicam-verify # Console script (verify)
Bases: Mindtrace
Matrix Vision GenICam CTI installer and manager.
This class handles the download, installation, and uninstallation of the Matrix Vision Impact Acquire SDK and GenTL Producer across different platforms.
Initialize the GenICam CTI installer.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| release_version | str | SDK release version to download | 'latest' |
Get the expected CTI file path for the current platform.
Returns:
| Type | Description |
|---|---|
| str | Path to the CTI file for the current platform |
Verify that the CTI file is properly installed.
Returns:
| Type | Description |
|---|---|
| bool | True if CTI file exists and is accessible, False otherwise |
Install the Matrix Vision Impact Acquire SDK for the current platform.
Returns:
| Type | Description |
|---|---|
| bool | True if installation successful, False otherwise |
Install the Matrix Vision GenICam CTI files.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| release_version | str | SDK release version to install | 'latest' |
Returns:
| Type | Description |
|---|---|
| bool | True if installation successful, False otherwise |
Uninstall the Matrix Vision Impact Acquire SDK.
Returns:
| Type | Description |
|---|---|
| bool | True if uninstallation successful, False otherwise |
Verify the Matrix Vision CTI installation.
Returns:
| Type | Description |
|---|---|
| bool | True if CTI files are properly installed, False otherwise |
install(
version: str = typer.Option(
"latest", "--version", help="SDK release version to install"
),
verbose: bool = typer.Option(
False, "--verbose", "-v", help="Enable verbose logging"
),
) -> None
Install the Matrix Vision Impact Acquire SDK and CTI files.
Downloads and installs the SDK from the official Balluff/Matrix Vision servers. The CTI files are required for GenICam camera communication via Harvesters.
uninstall(
verbose: bool = typer.Option(
False, "--verbose", "-v", help="Enable verbose logging"
)
) -> None
Uninstall the Matrix Vision Impact Acquire SDK.
cli
Mindtrace Hardware CLI - Command-line interface for hardware management.
commands
CLI command modules.
camera
Camera service commands.
start(
    api_host: Annotated[
        str,
        Option("--api-host", help="API service host", envvar="CAMERA_API_HOST"),
    ] = "localhost",
    api_port: Annotated[
        int,
        Option("--api-port", help="API service port", envvar="CAMERA_API_PORT"),
    ] = 8002,
    include_mocks: Annotated[
        bool, Option("--include-mocks", help="Include mock cameras")
    ] = False,
    open_docs: Annotated[
        bool, Option("--open-docs", help="Open API documentation in browser")
    ] = False,
)
Start camera API service (headless).
plc
scanner
3D Scanner service commands.
start(
    api_host: Annotated[
        str,
        Option(
            "--api-host",
            help="3D Scanner API service host",
            envvar="SCANNER_3D_API_HOST",
        ),
    ] = "localhost",
    api_port: Annotated[
        int,
        Option(
            "--api-port",
            help="3D Scanner API service port",
            envvar="SCANNER_3D_API_PORT",
        ),
    ] = 8005,
    open_docs: Annotated[
        bool, Option("--open-docs", help="Open API documentation in browser")
    ] = False,
)
status
stereo
Stereo camera service commands.
start(
    api_host: Annotated[
        str,
        Option(
            "--api-host",
            help="Stereo Camera API service host",
            envvar="STEREO_CAMERA_API_HOST",
        ),
    ] = "localhost",
    api_port: Annotated[
        int,
        Option(
            "--api-port",
            help="Stereo Camera API service port",
            envvar="STEREO_CAMERA_API_PORT",
        ),
    ] = 8004,
    open_docs: Annotated[
        bool, Option("--open-docs", help="Open API documentation in browser")
    ] = False,
)
core
Core CLI functionality.
ProcessManager
Manages hardware service processes.
Initialize process manager.
start_camera_api(
host: str = None, port: int = None, include_mocks: bool = False
) -> subprocess.Popen
Launch camera API service.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| host | str | Host to bind the service to (default: CAMERA_API_HOST env var or 'localhost') | None |
| port | int | Port to run the service on (default: CAMERA_API_PORT env var or 8002) | None |
| include_mocks | bool | Include mock cameras in discovery | False |
Returns:
| Type | Description |
|---|---|
| Popen | The subprocess handle |
Launch PLC API service.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| host | str | Host to bind the service to (default: PLC_API_HOST env var or 'localhost') | None |
| port | int | Port to run the service on (default: PLC_API_PORT env var or 8003) | None |
Returns:
| Type | Description |
|---|---|
| Popen | The subprocess handle |
Launch Stereo Camera API service.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| host | str | Host to bind the service to (default: STEREO_CAMERA_API_HOST env var or 'localhost') | None |
| port | int | Port to run the service on (default: STEREO_CAMERA_API_PORT env var or 8004) | None |
Returns:
| Type | Description |
|---|---|
| Popen | The subprocess handle |
Launch 3D Scanner API service.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| host | str | Host to bind the service to (default: SCANNER_3D_API_HOST env var or 'localhost') | None |
| port | int | Port to run the service on (default: SCANNER_3D_API_PORT env var or 8005) | None |
Returns:
| Type | Description |
|---|---|
| Popen | The subprocess handle |
Stop a service by name.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| service_name | str | Name of the service to stop | required |
Returns:
| Type | Description |
|---|---|
| bool | True if stopped successfully |
Get status of all services.
Returns:
| Type | Description |
|---|---|
| Dict[str, Any] | Dictionary with service status information |
setup_logger
setup_logger(
name: str = "mindtrace-hw-cli",
log_file: Optional[Path] = None,
verbose: bool = False,
) -> logging.Logger
Set up logger for the CLI.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| name | str | Logger name | 'mindtrace-hw-cli' |
| log_file | Optional[Path] | Optional log file path | None |
| verbose | bool | Enable verbose logging | False |
Returns:
| Type | Description |
|---|---|
| Logger | Configured logger instance |
logger
Logging configuration for the CLI using Rich.
Logger that uses Rich Console for professional output.
Initialize RichLogger.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| console | Optional[Console] | Optional Rich Console instance. Creates new one if not provided. | None |
Log info message.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| message | str | Message to log | required |
Log success message.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| message | str | Success message to log | required |
Log warning message.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| message | str | Warning message to log | required |
Log error message.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| message | str | Error message to log | required |
setup_logger(
name: str = "mindtrace-hw-cli",
log_file: Optional[Path] = None,
verbose: bool = False,
) -> logging.Logger
Set up logger for the CLI.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| name | str | Logger name | 'mindtrace-hw-cli' |
| log_file | Optional[Path] | Optional log file path | None |
| verbose | bool | Enable verbose logging | False |
Returns:
| Type | Description |
|---|---|
| Logger | Configured logger instance |
process_manager
Process management for hardware services.
Manages hardware service processes.
Initialize process manager.
start_camera_api(
host: str = None, port: int = None, include_mocks: bool = False
) -> subprocess.Popen
Launch camera API service.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| host | str | Host to bind the service to (default: CAMERA_API_HOST env var or 'localhost') | None |
| port | int | Port to run the service on (default: CAMERA_API_PORT env var or 8002) | None |
| include_mocks | bool | Include mock cameras in discovery | False |
Returns:
| Type | Description |
|---|---|
| Popen | The subprocess handle |
Launch PLC API service.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| host | str | Host to bind the service to (default: PLC_API_HOST env var or 'localhost') | None |
| port | int | Port to run the service on (default: PLC_API_PORT env var or 8003) | None |
Returns:
| Type | Description |
|---|---|
| Popen | The subprocess handle |
Launch Stereo Camera API service.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| host | str | Host to bind the service to (default: STEREO_CAMERA_API_HOST env var or 'localhost') | None |
| port | int | Port to run the service on (default: STEREO_CAMERA_API_PORT env var or 8004) | None |
Returns:
| Type | Description |
|---|---|
| Popen | The subprocess handle |
Launch 3D Scanner API service.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| host | str | Host to bind the service to (default: SCANNER_3D_API_HOST env var or 'localhost') | None |
| port | int | Port to run the service on (default: SCANNER_3D_API_PORT env var or 8005) | None |
Returns:
| Type | Description |
|---|---|
| Popen | The subprocess handle |
Stop a service by name.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| service_name | str | Name of the service to stop | required |
Returns:
| Type | Description |
|---|---|
| bool | True if stopped successfully |
Get status of all services.
Returns:
| Type | Description |
|---|---|
| Dict[str, Any] | Dictionary with service status information |
utils
CLI utility functions.
format_status
Format and display service status.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| status | Dict[str, Any] | Status dictionary from ProcessManager | required |
print_table
Print data as a formatted table.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| data | List[Dict[str, Any]] | List of dictionaries to display | required |
| headers | Optional[List[str]] | Optional header names | None |
display
Display utilities for CLI output using Rich.
Format and display service status.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| status | Dict[str, Any] | Status dictionary from ProcessManager | required |
Print data as a formatted table.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| data | List[Dict[str, Any]] | List of dictionaries to display | required |
| headers | Optional[List[str]] | Optional header names | None |
Print services in a nice panel format.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| title | str | Panel title | required |
| services | Dict[str, Dict[str, Any]] | Service information dictionary | required |
plcs
PLC module for Mindtrace hardware system.
Provides unified interface for managing PLCs from different manufacturers with support for discovery, registration, and batch operations.
PLCManager
Bases: Mindtrace
Unified PLC management system for industrial automation.
This manager provides a comprehensive interface for managing PLCs from different manufacturers with support for discovery, registration, connection management, and batch tag operations. It handles multiple PLC backends transparently and provides thread-safe operations with proper error handling.
The manager supports:
- Automatic PLC discovery across multiple backends
- Dynamic PLC registration and connection management
- Batch tag read/write operations for optimal performance
- Connection monitoring and automatic reconnection
- Comprehensive error handling and logging
- Thread-safe operations with proper resource management
Supported PLC Types:
- Allen-Bradley: ControlLogix, CompactLogix, MicroLogix PLCs
- Siemens: S7-300, S7-400, S7-1200, S7-1500 PLCs (Future)
- Modbus: Modbus TCP/RTU devices (Future)
- Mock PLCs: For testing and development
Attributes:
| Name | Type | Description |
|---|---|---|
| plcs | Dict[str, BasePLC] | Dictionary mapping PLC names to PLC instances |
| config | | Hardware configuration manager instance |
| logger | | Centralized logger for PLC operations |
Example
Basic usage:

```python
async with PLCManager() as manager:
    # Discover available PLCs
    discovered = await manager.discover_plcs()

    # Register and connect to a PLC
    await manager.register_plc("PLC1", "AllenBradley", "192.168.1.100")
    await manager.connect_plc("PLC1")

    # Read and write tags
    values = await manager.read_tag("PLC1", ["Temperature", "Pressure"])
    await manager.write_tag("PLC1", [("Setpoint", 75.0)])
```

Batch operations:

```python
async with PLCManager() as manager:
    # Register multiple PLCs
    await manager.register_plc("PLC1", "AllenBradley", "192.168.1.100")
    await manager.register_plc("PLC2", "AllenBradley", "192.168.1.101")

    # Batch read from multiple PLCs
    read_requests = [
        ("PLC1", ["Temperature", "Pressure"]),
        ("PLC2", ["Speed", "Position"]),
    ]
    results = await manager.read_tags_batch(read_requests)
```
Initialize the PLC manager.
discover_plcs
async
Discover available PLCs from all enabled backends.
Returns:
| Type | Description |
|---|---|
| Dict[str, List[str]] | Dictionary mapping backend names to lists of discovered PLCs |
register_plc
async
```python
register_plc(
    plc_name: str,
    backend: str,
    ip_address: str,
    plc_type: Optional[str] = None,
    **kwargs
) -> bool
```
Register a PLC with the manager.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| plc_name | str | Unique identifier for the PLC | required |
| backend | str | Backend type ("AllenBradley", "Siemens", "Modbus") | required |
| ip_address | str | IP address of the PLC | required |
| plc_type | Optional[str] | Specific PLC type (backend-dependent) | None |
| **kwargs | | Additional backend-specific parameters | {} |

Returns:

| Type | Description |
|---|---|
| bool | True if registration successful, False otherwise |
unregister_plc
async
Unregister a PLC from the manager.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| plc_name | str | Name of the PLC to unregister | required |

Returns:

| Type | Description |
|---|---|
| bool | True if unregistration successful, False otherwise |
connect_plc
async
Connect to a specific PLC.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| plc_name | str | Name of the PLC to connect | required |

Returns:

| Type | Description |
|---|---|
| bool | True if connection successful, False otherwise |
disconnect_plc
async
Disconnect from a specific PLC.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| plc_name | str | Name of the PLC to disconnect | required |

Returns:

| Type | Description |
|---|---|
| bool | True if disconnection successful, False otherwise |
connect_all_plcs
async
Connect to all registered PLCs.
Returns:
| Type | Description |
|---|---|
| Dict[str, bool] | Dictionary mapping PLC names to connection success status |
disconnect_all_plcs
async
Disconnect from all registered PLCs.
Returns:
| Type | Description |
|---|---|
| Dict[str, bool] | Dictionary mapping PLC names to disconnection success status |
read_tag
async
Read tags from a specific PLC.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| plc_name | str | Name of the PLC | required |
| tags | Union[str, List[str]] | Single tag name or list of tag names | required |

Returns:

| Type | Description |
|---|---|
| Dict[str, Any] | Dictionary mapping tag names to their values |
write_tag
async
Write tags to a specific PLC.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| plc_name | str | Name of the PLC | required |
| tags | Union[Tuple[str, Any], List[Tuple[str, Any]]] | Single (tag_name, value) tuple or list of tuples | required |

Returns:

| Type | Description |
|---|---|
| Dict[str, bool] | Dictionary mapping tag names to write success status |
read_tags_batch
async
Read tags from multiple PLCs in batch.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| requests | List[Tuple[str, Union[str, List[str]]]] | List of (plc_name, tags) tuples | required |

Returns:

| Type | Description |
|---|---|
| Dict[str, Dict[str, Any]] | Dictionary mapping PLC names to their tag read results |
write_tags_batch
async
```python
write_tags_batch(
    requests: List[Tuple[str, Union[Tuple[str, Any], List[Tuple[str, Any]]]]],
) -> Dict[str, Dict[str, bool]]
```
Write tags to multiple PLCs in batch.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| requests | List[Tuple[str, Union[Tuple[str, Any], List[Tuple[str, Any]]]]] | List of (plc_name, tags) tuples | required |

Returns:

| Type | Description |
|---|---|
| Dict[str, Dict[str, bool]] | Dictionary mapping PLC names to their tag write results |
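The batch methods fan requests out to several PLCs and merge the per-PLC results into a nested dictionary. The following is an illustrative sketch of that shape, not the manager's actual implementation; `_write_one` is a hypothetical stand-in for the per-backend write call:

```python
import asyncio
from typing import Any, Dict, List, Tuple, Union

async def _write_one(plc_name: str, tags: List[Tuple[str, Any]]) -> Dict[str, bool]:
    """Hypothetical per-PLC write; the real manager delegates to a backend."""
    return {name: True for name, _ in tags}  # pretend every write succeeds

async def write_tags_batch(
    requests: List[Tuple[str, Union[Tuple[str, Any], List[Tuple[str, Any]]]]],
) -> Dict[str, Dict[str, bool]]:
    """Fan out write requests to each PLC concurrently and merge the results."""
    # Accept either a single (tag, value) tuple or a list of tuples per PLC.
    normalized = [
        (plc, tags if isinstance(tags, list) else [tags]) for plc, tags in requests
    ]
    results = await asyncio.gather(*(_write_one(plc, tags) for plc, tags in normalized))
    return {plc: result for (plc, _), result in zip(normalized, results)}

results = asyncio.run(write_tags_batch([
    ("PLC1", [("Setpoint", 75.0), ("Enable", True)]),
    ("PLC2", ("Speed", 120)),
]))
print(results)
```

The result maps each PLC name to its own tag-to-status dictionary, matching the `Dict[str, Dict[str, bool]]` return type above.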
get_plc_status
async
Get status information for a specific PLC.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| plc_name | str | Name of the PLC | required |

Returns:

| Type | Description |
|---|---|
| Dict[str, Any] | Dictionary with PLC status information |
get_all_plc_status
async
Get status information for all registered PLCs.
Returns:
| Type | Description |
|---|---|
| Dict[str, Dict[str, Any]] | Dictionary mapping PLC names to their status information |
get_plc_tags
async
Get list of available tags for a specific PLC.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| plc_name | str | Name of the PLC | required |

Returns:

| Type | Description |
|---|---|
| List[str] | List of available tag names |
get_registered_plcs
Get list of registered PLC names.
Returns:
| Type | Description |
|---|---|
| List[str] | List of registered PLC names |
get_backend_info
Get information about available PLC backends.
Returns:
| Type | Description |
|---|---|
| Dict[str, Dict[str, Any]] | Dictionary mapping backend names to their information |
backends
PLC backends for different manufacturers and protocols.
This module contains implementations for various PLC types including Allen Bradley, Siemens, Modbus, and other industrial protocols.
allen_bradley
Allen Bradley PLC Backend.
Implements PLC communication for Allen Bradley PLCs using the pycomm3 library.
```python
AllenBradleyPLC(
    plc_name: str,
    ip_address: str,
    plc_type: Optional[str] = None,
    plc_config_file: Optional[str] = None,
    connection_timeout: Optional[float] = None,
    read_timeout: Optional[float] = None,
    write_timeout: Optional[float] = None,
    retry_count: Optional[int] = None,
    retry_delay: Optional[float] = None,
)
```
Bases: BasePLC
Allen Bradley PLC implementation using pycomm3.
Supports multiple PLC types and Ethernet/IP devices:

- ControlLogix, CompactLogix, Micro800 (LogixDriver)
- SLC500, MicroLogix (SLCDriver)
- Generic Ethernet/IP devices (CIPDriver)
Attributes:
| Name | Type | Description |
|---|---|---|
| plc | | pycomm3 driver instance (LogixDriver, SLCDriver, or CIPDriver) |
| driver_type | | Type of driver being used |
| plc_type | | Type of PLC (auto-detected or specified) |
| _tags_cache | Optional[List[str]] | Cached list of available tags |
| _cache_timestamp | float | Timestamp of last tag cache update |
| _cache_ttl | float | Time-to-live for tag cache in seconds |
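The `_tags_cache` / `_cache_timestamp` / `_cache_ttl` attributes describe a TTL-based cache of the PLC's tag list. As a minimal sketch of that pattern (the `TagCache` class here is illustrative, not the backend's actual code):

```python
import time
from typing import Callable, List, Optional

class TagCache:
    """Illustrative TTL cache mirroring _tags_cache / _cache_timestamp / _cache_ttl."""

    def __init__(self, fetch: Callable[[], List[str]], ttl: float = 30.0):
        self._fetch = fetch                     # e.g. a call that queries the PLC
        self._ttl = ttl
        self._tags: Optional[List[str]] = None
        self._timestamp: float = 0.0

    def get(self) -> List[str]:
        now = time.monotonic()
        if self._tags is None or now - self._timestamp > self._ttl:
            self._tags = self._fetch()          # refresh from the PLC
            self._timestamp = now
        return self._tags

calls = []
cache = TagCache(lambda: calls.append(1) or ["Temperature", "Pressure"], ttl=30.0)
cache.get()
cache.get()  # second call is served from the cache; fetch runs only once
print(len(calls))
```

Caching the tag list this way avoids re-enumerating tags on every read while still picking up program changes once the TTL expires.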
Initialize Allen Bradley PLC.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| plc_name | str | Unique identifier for the PLC | required |
| ip_address | str | IP address of the PLC | required |
| plc_type | Optional[str] | PLC type ('logix', 'slc', 'cip', or 'auto' for auto-detection) | None |
| plc_config_file | Optional[str] | Path to PLC configuration file | None |
| connection_timeout | Optional[float] | Connection timeout in seconds | None |
| read_timeout | Optional[float] | Tag read timeout in seconds | None |
| write_timeout | Optional[float] | Tag write timeout in seconds | None |
| retry_count | Optional[int] | Number of retry attempts | None |
| retry_delay | Optional[float] | Delay between retries in seconds | None |

Raises:

| Type | Description |
|---|---|
| SDKNotAvailableError | If pycomm3 is not installed |
async

Initialize the Allen Bradley PLC connection.

Returns:

| Type | Description |
|---|---|
| Tuple[bool, Any, Any] | Tuple of (success, plc_object, device_manager) |

async

Establish connection to the Allen Bradley PLC.

Returns:

| Type | Description |
|---|---|
| bool | True if connection successful, False otherwise |

async

Disconnect from the Allen Bradley PLC.

Returns:

| Type | Description |
|---|---|
| bool | True if disconnection successful, False otherwise |

async

Check if the Allen Bradley PLC is currently connected.

Returns:

| Type | Description |
|---|---|
| bool | True if connected, False otherwise |

async

Read values from Allen Bradley PLC tags.

Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| tags | Union[str, List[str]] | Single tag name or list of tag names | required |

Returns:

| Type | Description |
|---|---|
| Dict[str, Any] | Dictionary mapping tag names to their values |

Raises:

| Type | Description |
|---|---|
| PLCTagReadError | If tag reading fails |

async

Write values to Allen Bradley PLC tags.

Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| tags | Union[Tuple[str, Any], List[Tuple[str, Any]]] | Single (tag_name, value) tuple or list of tuples | required |

Returns:

| Type | Description |
|---|---|
| Dict[str, bool] | Dictionary mapping tag names to write success status |

Raises:

| Type | Description |
|---|---|
| PLCTagWriteError | If tag writing fails |

async

Get list of all available tags on the Allen Bradley PLC.

Returns:

| Type | Description |
|---|---|
| List[str] | List of tag names |

async

Get detailed information about a specific tag.

Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| tag_name | str | Name of the tag | required |

Returns:

| Type | Description |
|---|---|
| Dict[str, Any] | Dictionary with tag information (type, description, etc.) |

async

Get detailed information about the connected PLC using pycomm3's introspection methods.

Returns:

| Type | Description |
|---|---|
| Dict[str, Any] | Dictionary with PLC information |

staticmethod

Discover available Allen Bradley PLCs using pycomm3's discovery methods.

Returns:

| Type | Description |
|---|---|
| List[str] | List of PLC identifiers in format "AllenBradley:IP:Type" |

async classmethod

Async wrapper for get_available_plcs(); runs discovery in a threadpool.

Use this instead of get_available_plcs() when calling from an async context, to avoid blocking the event loop during PLC network discovery.

Returns:

| Type | Description |
|---|---|
| List[str] | List of discovered PLC IP addresses |
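The threadpool-offloading pattern behind this wrapper can be sketched with the standard library alone. Here `discover_blocking` is a hypothetical stand-in for a blocking network scan like `get_available_plcs()`; the real class method delegates differently:

```python
import asyncio
from typing import List

def discover_blocking() -> List[str]:
    """Hypothetical stand-in for a blocking network scan."""
    return ["192.168.1.100", "192.168.1.101"]

async def discover_async() -> List[str]:
    # Offload the blocking call to a worker thread so the event loop
    # stays responsive while discovery runs.
    return await asyncio.to_thread(discover_blocking)

found = asyncio.run(discover_async())
print(found)
```

`asyncio.to_thread` (Python 3.9+) is the idiomatic way to keep a slow, synchronous SDK call from stalling other coroutines.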
```python
MockAllenBradleyPLC(
    plc_name: str,
    ip_address: str,
    plc_type: Optional[str] = None,
    plc_config_file: Optional[str] = None,
    connection_timeout: Optional[float] = None,
    read_timeout: Optional[float] = None,
    write_timeout: Optional[float] = None,
    retry_count: Optional[int] = None,
    retry_delay: Optional[float] = None,
)
```
Bases: BasePLC
Mock implementation of Allen Bradley PLC for testing and development.
This class provides a complete simulation of the Allen Bradley PLC API without requiring actual hardware. It simulates all three driver types and provides realistic tag behavior for comprehensive testing.
Attributes:
| Name | Type | Description |
|---|---|---|
| plc_name | | User-defined PLC identifier |
| ip_address | | Simulated IP address |
| plc_type | | PLC type ("logix", "slc", "cip", or "auto") |
| driver_type | | Detected/simulated driver type |
| _is_connected | | Connection status simulation |
| _tag_values | Dict[str, Any] | Simulated tag values storage |
| _tag_types | Dict[str, str] | Tag type mapping for different driver types |
| _cache_ttl | | Tag cache time-to-live |
| _tags_cache | Optional[List[str]] | Cached list of available tags |
| _cache_timestamp | float | Timestamp of last cache update |
Initialize mock Allen Bradley PLC.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| plc_name | str | Unique identifier for the PLC | required |
| ip_address | str | Simulated IP address | required |
| plc_type | Optional[str] | PLC type ('logix', 'slc', 'cip', or 'auto' for auto-detection) | None |
| plc_config_file | Optional[str] | Path to configuration file (simulated) | None |
| connection_timeout | Optional[float] | Connection timeout in seconds | None |
| read_timeout | Optional[float] | Tag read timeout in seconds | None |
| write_timeout | Optional[float] | Tag write timeout in seconds | None |
| retry_count | Optional[int] | Number of retry attempts | None |
| retry_delay | Optional[float] | Delay between retries in seconds | None |
async

Initialize the mock Allen Bradley PLC connection.

Returns:

| Type | Description |
|---|---|
| Tuple[bool, Any, Any] | Tuple of (success, mock_plc_object, mock_device_manager) |

async

Simulate connection to the Allen Bradley PLC.

Returns:

| Type | Description |
|---|---|
| bool | True if connection successful, False otherwise |

async

Simulate disconnection from the Allen Bradley PLC.

Returns:

| Type | Description |
|---|---|
| bool | True if disconnection successful, False otherwise |

async

Check if the mock Allen Bradley PLC is currently connected.

Returns:

| Type | Description |
|---|---|
| bool | True if connected, False otherwise |

async

Simulate reading values from Allen Bradley PLC tags.

Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| tags | Union[str, List[str]] | Single tag name or list of tag names | required |

Returns:

| Type | Description |
|---|---|
| Dict[str, Any] | Dictionary mapping tag names to their values |

async

Simulate writing values to Allen Bradley PLC tags.

Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| tags | Union[Tuple[str, Any], List[Tuple[str, Any]]] | Single (tag_name, value) tuple or list of tuples | required |

Returns:

| Type | Description |
|---|---|
| Dict[str, bool] | Dictionary mapping tag names to write success status |

async

Get list of all available mock tags.

Returns:

| Type | Description |
|---|---|
| List[str] | List of tag names |

async

Get detailed information about a mock tag.

Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| tag_name | str | Name of the tag | required |

Returns:

| Type | Description |
|---|---|
| Dict[str, Any] | Dictionary with tag information |

async

Get detailed information about the mock PLC.

Returns:

| Type | Description |
|---|---|
| Dict[str, Any] | Dictionary with PLC information |

staticmethod

Discover available mock Allen Bradley PLCs.

Returns:

| Type | Description |
|---|---|
| List[str] | List of PLC identifiers in format "AllenBradley:IP:Type" |
Allen Bradley PLC implementation using pycomm3.
Provides a communication interface for Allen Bradley PLCs and other Ethernet/IP devices using CIPDriver, LogixDriver, and SLCDriver from the pycomm3 library.
Mock Allen Bradley PLC Implementation
This module provides a mock implementation of Allen Bradley PLCs for testing and development without requiring actual hardware or the pycomm3 SDK.
Features
- Complete simulation of all three driver types (Logix, SLC, CIP)
- Realistic tag data generation and management
- Configurable number of mock PLCs
- Error simulation capabilities for testing
- No hardware dependencies
Components
- MockAllenBradleyPLC: Mock PLC implementation
Usage
```python
from mindtrace.hardware.plcs.backends.allen_bradley import MockAllenBradleyPLC

# Initialize mock PLC
plc = MockAllenBradleyPLC("TestPLC", "192.168.1.100", plc_type="logix")

# Use exactly like a real PLC
await plc.connect()
tags = await plc.read_tag(["Motor1_Speed", "Conveyor_Status"])
await plc.write_tag([("Pump1_Command", True)])
await plc.disconnect()
```
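The core idea behind a mock backend is that tag state lives in an in-memory dictionary instead of on hardware. The `MiniMockPLC` class below is a deliberately tiny illustration of that idea, not the real `MockAllenBradleyPLC` (which additionally simulates driver types, connection state, and realistic tag data):

```python
from typing import Any, Dict, List, Tuple, Union

class MiniMockPLC:
    """Tiny illustration of the mock-backend idea: tags live in a dict."""

    def __init__(self) -> None:
        self._tags: Dict[str, Any] = {"Motor1_Speed": 0.0, "Pump1_Command": False}

    def read_tag(self, tags: Union[str, List[str]]) -> Dict[str, Any]:
        names = [tags] if isinstance(tags, str) else tags
        return {name: self._tags.get(name) for name in names}

    def write_tag(self, tags: List[Tuple[str, Any]]) -> Dict[str, bool]:
        results = {}
        for name, value in tags:
            ok = name in self._tags          # unknown tags report a failed write
            if ok:
                self._tags[name] = value
            results[name] = ok
        return results

plc = MiniMockPLC()
plc.write_tag([("Pump1_Command", True)])
print(plc.read_tag("Pump1_Command"))
```

Because the mock mirrors the read/write signatures, tests exercise the same call sites they would against real hardware.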
base
Abstract base classes for PLC implementations.
This module defines the interface that all PLC backends must implement, providing a consistent API for PLC operations across different manufacturers and communication protocols.
Features
- Abstract base class with comprehensive async PLC interface
- Consistent async pattern matching camera backends
- Type-safe method signatures with full type hints
- Configuration system integration
- Resource management and cleanup
- Default implementations for optional features
- Standardized constructor signature across all backends
- Retry logic with exponential backoff
- Connection management and monitoring
Usage
This is an abstract base class and cannot be instantiated directly. PLC backends should inherit from BasePLC and implement all abstract methods.
Example
```python
class MyPLCBackend(BasePLC):
    async def initialize(self) -> Tuple[bool, Any, Any]:
        # Implementation here
        pass

    async def connect(self) -> bool:
        # Implementation here
        pass

    async def read_tag(self, tags: Union[str, List[str]]) -> Dict[str, Any]:
        # Implementation here
        pass

    # ... implement other abstract methods
```
Backend Requirements
All PLC backends must implement the following abstract methods:

- initialize(): Establish initial connection and setup
- connect(): Connect to the PLC
- disconnect(): Disconnect from the PLC
- is_connected(): Check connection status
- read_tag(): Read tag values from the PLC
- write_tag(): Write tag values to the PLC
- get_all_tags(): List all available tags
- get_tag_info(): Get detailed tag information
- get_available_plcs(): Static method for PLC discovery
- get_backend_info(): Static method for backend information
Error Handling
Backends should raise appropriate exceptions from the PLC exception hierarchy:

- PLCError: Base exception for all PLC-related errors
- PLCNotFoundError: PLC not found during discovery
- PLCConnectionError: Connection establishment or maintenance failures
- PLCInitializationError: PLC initialization failures
- PLCCommunicationError: Communication protocol errors
- PLCTagError: Tag-related operation errors
- PLCTimeoutError: Operation timeout errors
- PLCConfigurationError: Configuration-related errors
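A hierarchy like this lets callers catch the base `PLCError` to cover every failure mode, or a specific subclass for targeted handling. The definitions below are an illustrative sketch using the names listed above; the real classes live in the mindtrace exception module and may carry extra context fields:

```python
# Illustrative hierarchy; names match the list above, bodies are assumptions.
class PLCError(Exception):
    """Base exception for all PLC-related errors."""

class PLCNotFoundError(PLCError): ...
class PLCConnectionError(PLCError): ...
class PLCTagError(PLCError): ...
class PLCTimeoutError(PLCError): ...

def read_or_raise(connected: bool) -> str:
    """Hypothetical helper showing where a backend would raise."""
    if not connected:
        raise PLCConnectionError("PLC1 is not connected")
    return "ok"

try:
    read_or_raise(False)
except PLCError as exc:  # catching the base class covers every subtype
    print(type(exc).__name__)
```

Catching `PLCError` at an application boundary while letting tighter code catch `PLCConnectionError` or `PLCTagError` keeps error handling precise without enumerating every subclass.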
```python
BasePLC(
    plc_name: str,
    ip_address: str,
    plc_config_file: Optional[str] = None,
    connection_timeout: Optional[float] = None,
    read_timeout: Optional[float] = None,
    write_timeout: Optional[float] = None,
    retry_count: Optional[int] = None,
    retry_delay: Optional[float] = None,
)
```
Bases: MindtraceABC
Abstract base class for PLC implementations.
This class defines the interface that all PLC backends must implement to ensure consistent behavior across different manufacturers and protocols.
Attributes:
| Name | Type | Description |
|---|---|---|
| plc_name | | Unique identifier for the PLC instance |
| plc_config_file | | Path to PLC-specific configuration file |
| ip_address | | IP address of the PLC |
| connection_timeout | | Connection timeout in seconds |
| read_timeout | | Tag read timeout in seconds |
| write_timeout | | Tag write timeout in seconds |
| retry_count | | Number of retry attempts for operations |
| retry_delay | | Delay between retry attempts in seconds |
| plc | | The underlying PLC connection object |
| device_manager | | Device-specific manager instance |
| initialized | | Whether the PLC has been initialized |
Initialize the PLC instance.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| plc_name | str | Unique identifier for the PLC | required |
| ip_address | str | IP address of the PLC | required |
| plc_config_file | Optional[str] | Path to PLC configuration file | None |
| connection_timeout | Optional[float] | Connection timeout in seconds | None |
| read_timeout | Optional[float] | Tag read timeout in seconds | None |
| write_timeout | Optional[float] | Tag write timeout in seconds | None |
| retry_count | Optional[int] | Number of retry attempts | None |
| retry_delay | Optional[float] | Delay between retries in seconds | None |
|
abstractmethod async

Initialize the PLC connection.

Returns:

| Type | Description |
|---|---|
| Tuple[bool, Any, Any] | Tuple of (success, plc_object, device_manager) |

abstractmethod async

Establish connection to the PLC.

Returns:

| Type | Description |
|---|---|
| bool | True if connection successful, False otherwise |

abstractmethod async

Disconnect from the PLC.

Returns:

| Type | Description |
|---|---|
| bool | True if disconnection successful, False otherwise |

abstractmethod async

Check if the PLC is currently connected.

Returns:

| Type | Description |
|---|---|
| bool | True if connected, False otherwise |

abstractmethod async

Read values from PLC tags.

Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| tags | Union[str, List[str]] | Single tag name or list of tag names | required |

Returns:

| Type | Description |
|---|---|
| Dict[str, Any] | Dictionary mapping tag names to their values |

abstractmethod async

Write values to PLC tags.

Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| tags | Union[Tuple[str, Any], List[Tuple[str, Any]]] | Single (tag_name, value) tuple or list of tuples | required |

Returns:

| Type | Description |
|---|---|
| Dict[str, bool] | Dictionary mapping tag names to write success status |

abstractmethod async

Get list of all available tags on the PLC.

Returns:

| Type | Description |
|---|---|
| List[str] | List of tag names |

abstractmethod async

Get detailed information about a specific tag.

Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| tag_name | str | Name of the tag | required |

Returns:

| Type | Description |
|---|---|
| Dict[str, Any] | Dictionary with tag information (type, description, etc.) |

abstractmethod staticmethod

Discover available PLCs for this backend.

Returns:

| Type | Description |
|---|---|
| List[str] | List of PLC identifiers in format "Backend:Identifier" |

abstractmethod staticmethod

Get information about this PLC backend.

Returns:

| Type | Description |
|---|---|
| Dict[str, Any] | Dictionary with backend information |

async

Attempt to reconnect to the PLC.

Returns:

| Type | Description |
|---|---|
| bool | True if reconnection successful, False otherwise |

async

Read tags with retry mechanism.

Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| tags | Union[str, List[str]] | Single tag name or list of tag names | required |

Returns:

| Type | Description |
|---|---|
| Dict[str, Any] | Dictionary mapping tag names to their values |

Raises:

| Type | Description |
|---|---|
| PLCTagError | If all retry attempts fail |

async

Write tags with retry mechanism.

Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| tags | Union[Tuple[str, Any], List[Tuple[str, Any]]] | Single (tag_name, value) tuple or list of tuples | required |

Returns:

| Type | Description |
|---|---|
| Dict[str, bool] | Dictionary mapping tag names to write success status |

Raises:

| Type | Description |
|---|---|
| PLCTagError | If all retry attempts fail |
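The base class's Features list mentions retry logic with exponential backoff. The retry wrappers above can be sketched generically with plain asyncio; `with_retry` and `flaky` below are hypothetical names for illustration, not the base class's actual helpers:

```python
import asyncio
from typing import Awaitable, Callable, Optional, TypeVar

T = TypeVar("T")

async def with_retry(
    op: Callable[[], Awaitable[T]],
    retry_count: int = 3,
    retry_delay: float = 0.01,
) -> T:
    """Retry an async operation, doubling the delay after each failed attempt."""
    last_exc: Optional[Exception] = None
    for attempt in range(retry_count):
        try:
            return await op()
        except Exception as exc:
            last_exc = exc
            await asyncio.sleep(retry_delay * (2 ** attempt))  # exponential backoff
    assert last_exc is not None
    raise last_exc

attempts = []

async def flaky():
    """Hypothetical tag read that fails twice before succeeding."""
    attempts.append(1)
    if len(attempts) < 3:
        raise ConnectionError("transient failure")
    return {"Temperature": 72.5}

value = asyncio.run(with_retry(flaky))
print(value, len(attempts))
```

If every attempt fails, the last exception is re-raised, which corresponds to the documented `PLCTagError` behavior of the retry methods.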
plc_manager
Modern PLC Manager for Mindtrace Hardware System
A comprehensive PLC management system that provides unified access to multiple PLC backends with async operations, proper resource management, and batch processing capabilities.
Key Features
- Automatic PLC discovery and registration
- Unified interface for different PLC manufacturers
- Async operations with proper error handling
- Batch tag read/write operations
- Connection management and monitoring
- Thread-safe operations with proper locking
- Comprehensive configuration management
- Integrated logging and status reporting
Supported Backends
- Allen-Bradley: ControlLogix, CompactLogix PLCs (pycomm3)
- Siemens: S7-300, S7-400, S7-1200, S7-1500 PLCs (python-snap7)
- Modbus: Modbus TCP/RTU devices (pymodbus)
- Mock backends for testing and development
Requirements
- pycomm3: Allen-Bradley PLC communication
- python-snap7: Siemens PLC communication
- pymodbus: Modbus device communication
- asyncio: Async operations support
Installation
pip install pycomm3 python-snap7 pymodbus
Usage
Simple usage with discovery
```python
async with PLCManager() as manager:
    plcs = await manager.discover_plcs()
    await manager.register_plc("PLC1", "AllenBradley", "192.168.1.100")
    await manager.connect_plc("PLC1")

    # Read tags
    values = await manager.read_tag("PLC1", ["Tag1", "Tag2"])

    # Write tags
    await manager.write_tag("PLC1", [("Tag1", 100), ("Tag2", 200)])
```
Batch operations
```python
async with PLCManager() as manager:
    # Register multiple PLCs
    await manager.register_plc("PLC1", "AllenBradley", "192.168.1.100")
    await manager.register_plc("PLC2", "Siemens", "192.168.1.101")

    # Connect all PLCs
    results = await manager.connect_all_plcs()

    # Batch read from multiple PLCs
    read_requests = [
        ("PLC1", ["Temperature", "Pressure"]),
        ("PLC2", ["Speed", "Position"])
    ]
    values = await manager.read_tags_batch(read_requests)
```
Configuration
All parameters are configurable via the hardware configuration system:
- MINDTRACE_HW_PLC_AUTO_DISCOVERY: Enable automatic PLC discovery
- MINDTRACE_HW_PLC_CONNECTION_TIMEOUT: Connection timeout in seconds
- MINDTRACE_HW_PLC_READ_TIMEOUT: Tag read timeout in seconds
- MINDTRACE_HW_PLC_WRITE_TIMEOUT: Tag write timeout in seconds
- MINDTRACE_HW_PLC_RETRY_COUNT: Number of retry attempts
- MINDTRACE_HW_PLC_MAX_CONCURRENT_CONNECTIONS: Maximum concurrent connections
- MINDTRACE_HW_PLC_ALLEN_BRADLEY_ENABLED: Enable Allen-Bradley backend
- MINDTRACE_HW_PLC_SIEMENS_ENABLED: Enable Siemens backend
- MINDTRACE_HW_PLC_MODBUS_ENABLED: Enable Modbus backend
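As a rough sketch of how such environment-based settings are typically consumed (the `env_int`/`env_bool` helpers and the fallback defaults below are illustrative, not part of the mindtrace API):

```python
import os

# Illustrative helpers -- not part of the mindtrace API.
def env_int(name: str, default: int) -> int:
    """Read an integer setting from the environment, falling back to a default."""
    raw = os.environ.get(name)
    return int(raw) if raw is not None else default

def env_bool(name: str, default: bool) -> bool:
    """Read a boolean setting ('1', 'true', 'yes', 'on' count as True)."""
    raw = os.environ.get(name)
    if raw is None:
        return default
    return raw.strip().lower() in ("1", "true", "yes", "on")

# Variable names taken from the list above; defaults here are placeholders.
retry_count = env_int("MINDTRACE_HW_PLC_RETRY_COUNT", 3)
auto_discovery = env_bool("MINDTRACE_HW_PLC_AUTO_DISCOVERY", False)
```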
Error Handling
The module uses a comprehensive exception hierarchy for precise error reporting:
- PLCError: Base exception for all PLC-related errors
- PLCNotFoundError: PLC not found during discovery or registration
- PLCConnectionError: Connection establishment or maintenance failures
- PLCInitializationError: PLC initialization failures
- PLCCommunicationError: Communication protocol errors
- PLCTagError: Tag-related operation errors
- PLCTagReadError: Tag read operation failures
- PLCTagWriteError: Tag write operation failures
- HardwareOperationError: General hardware operation failures
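Because the hierarchy is class-based, catching a base class also catches its subclasses. A minimal self-contained sketch (stub classes mirroring the names above stand in for the real imports from mindtrace.hardware):

```python
# Stub hierarchy mirroring the names above; in real code, import these
# exceptions from mindtrace.hardware instead of defining them.
class PLCError(Exception): ...
class PLCTagError(PLCError): ...
class PLCTagReadError(PLCTagError): ...

def read_tag_or_default(read, default=None):
    """Fall back to a default on tag-read failures; let other PLC errors propagate."""
    try:
        return read()
    except PLCTagError:  # also catches PLCTagReadError via the hierarchy
        return default

def failing_read():
    raise PLCTagReadError("Tag read failed after all retries")

value = read_tag_or_default(failing_read, default=0)
```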
Thread Safety
All PLC operations are thread-safe. Multiple PLCs can be operated simultaneously from different threads without interference.
Performance Notes
- PLC discovery may take several seconds depending on network size
- Batch operations are more efficient than individual tag operations
- Connection pooling is used for optimal performance
- Consider PLC-specific optimizations for production use
PLCManager
Bases: Mindtrace
Unified PLC management system for industrial automation.
This manager provides a comprehensive interface for managing PLCs from different manufacturers with support for discovery, registration, connection management, and batch tag operations. It handles multiple PLC backends transparently and provides thread-safe operations with proper error handling.
The manager supports:
- Automatic PLC discovery across multiple backends
- Dynamic PLC registration and connection management
- Batch tag read/write operations for optimal performance
- Connection monitoring and automatic reconnection
- Comprehensive error handling and logging
- Thread-safe operations with proper resource management

Supported PLC Types:
- Allen-Bradley: ControlLogix, CompactLogix, MicroLogix PLCs
- Siemens: S7-300, S7-400, S7-1200, S7-1500 PLCs (Future)
- Modbus: Modbus TCP/RTU devices (Future)
- Mock PLCs: For testing and development
Attributes:

| Name | Type | Description |
|---|---|---|
| plcs | Dict[str, BasePLC] | Dictionary mapping PLC names to PLC instances |
| config | | Hardware configuration manager instance |
| logger | | Centralized logger for PLC operations |
Example
Basic usage
```python
async with PLCManager() as manager:
    # Discover available PLCs
    discovered = await manager.discover_plcs()

    # Register and connect to a PLC
    await manager.register_plc("PLC1", "AllenBradley", "192.168.1.100")
    await manager.connect_plc("PLC1")

    # Read and write tags
    values = await manager.read_tag("PLC1", ["Temperature", "Pressure"])
    await manager.write_tag("PLC1", [("Setpoint", 75.0)])
```
Batch operations
```python
async with PLCManager() as manager:
    # Register multiple PLCs
    await manager.register_plc("PLC1", "AllenBradley", "192.168.1.100")
    await manager.register_plc("PLC2", "AllenBradley", "192.168.1.101")

    # Batch read from multiple PLCs
    read_requests = [
        ("PLC1", ["Temperature", "Pressure"]),
        ("PLC2", ["Speed", "Position"])
    ]
    results = await manager.read_tags_batch(read_requests)
```
Initialize the PLC manager.
async
Discover available PLCs from all enabled backends.
Returns:

| Type | Description |
|---|---|
| Dict[str, List[str]] | Dictionary mapping backend names to lists of discovered PLCs |
async
```python
register_plc(
    plc_name: str,
    backend: str,
    ip_address: str,
    plc_type: Optional[str] = None,
    **kwargs
) -> bool
```
Register a PLC with the manager.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| plc_name | str | Unique identifier for the PLC | required |
| backend | str | Backend type ("AllenBradley", "Siemens", "Modbus") | required |
| ip_address | str | IP address of the PLC | required |
| plc_type | Optional[str] | Specific PLC type (backend-dependent) | None |
| **kwargs | | Additional backend-specific parameters | {} |

Returns:

| Type | Description |
|---|---|
| bool | True if registration successful, False otherwise |
async
Unregister a PLC from the manager.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| plc_name | str | Name of the PLC to unregister | required |

Returns:

| Type | Description |
|---|---|
| bool | True if unregistration successful, False otherwise |
async
Connect to a specific PLC.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| plc_name | str | Name of the PLC to connect | required |

Returns:

| Type | Description |
|---|---|
| bool | True if connection successful, False otherwise |
async
Disconnect from a specific PLC.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| plc_name | str | Name of the PLC to disconnect | required |

Returns:

| Type | Description |
|---|---|
| bool | True if disconnection successful, False otherwise |
async
Connect to all registered PLCs.
Returns:

| Type | Description |
|---|---|
| Dict[str, bool] | Dictionary mapping PLC names to connection success status |
async
Disconnect from all registered PLCs.
Returns:

| Type | Description |
|---|---|
| Dict[str, bool] | Dictionary mapping PLC names to disconnection success status |
async
Read tags from a specific PLC.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| plc_name | str | Name of the PLC | required |
| tags | Union[str, List[str]] | Single tag name or list of tag names | required |

Returns:

| Type | Description |
|---|---|
| Dict[str, Any] | Dictionary mapping tag names to their values |
async
Write tags to a specific PLC.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
plc_name
|
str
|
Name of the PLC |
required |
tags
|
Union[Tuple[str, Any], List[Tuple[str, Any]]]
|
Single (tag_name, value) tuple or list of tuples |
required |
Returns:
| Type | Description |
|---|---|
Dict[str, bool]
|
Dictionary mapping tag names to write success status |
async
Read tags from multiple PLCs in batch.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| requests | List[Tuple[str, Union[str, List[str]]]] | List of (plc_name, tags) tuples | required |

Returns:

| Type | Description |
|---|---|
| Dict[str, Dict[str, Any]] | Dictionary mapping PLC names to their tag read results |
async
```python
write_tags_batch(
    requests: List[Tuple[str, Union[Tuple[str, Any], List[Tuple[str, Any]]]]],
) -> Dict[str, Dict[str, bool]]
```
Write tags to multiple PLCs in batch.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| requests | List[Tuple[str, Union[Tuple[str, Any], List[Tuple[str, Any]]]]] | List of (plc_name, tags) tuples | required |

Returns:

| Type | Description |
|---|---|
| Dict[str, Dict[str, bool]] | Dictionary mapping PLC names to their tag write results |
async
Get status information for a specific PLC.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| plc_name | str | Name of the PLC | required |

Returns:

| Type | Description |
|---|---|
| Dict[str, Any] | Dictionary with PLC status information |
async
Get status information for all registered PLCs.
Returns:

| Type | Description |
|---|---|
| Dict[str, Dict[str, Any]] | Dictionary mapping PLC names to their status information |
async
Get list of available tags for a specific PLC.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| plc_name | str | Name of the PLC | required |

Returns:

| Type | Description |
|---|---|
| List[str] | List of available tag names |
Get list of registered PLC names.
Returns:

| Type | Description |
|---|---|
| List[str] | List of registered PLC names |
Get information about available PLC backends.
Returns:

| Type | Description |
|---|---|
| Dict[str, Dict[str, Any]] | Dictionary mapping backend names to their information |
scanners_3d
3D Scanner module for structured light and other 3D scanning technologies.
This module provides support for 3D scanners including:
- Photoneo PhoXi structured light scanners
- Future: Time-of-Flight (ToF) cameras
- Future: LiDAR sensors
Usage
from mindtrace.hardware.scanners_3d import Scanner3D, AsyncScanner3D
Synchronous usage
```python
with Scanner3D() as scanner:
    result = scanner.capture()
    point_cloud = scanner.capture_point_cloud()
    point_cloud.save_ply("output.ply")
```
Async usage
```python
async with await AsyncScanner3D.open() as scanner:
    result = await scanner.capture()
    point_cloud = await scanner.capture_point_cloud()
```
PhotoneoBackend
```python
PhotoneoBackend(
    serial_number: Optional[str] = None,
    cti_path: Optional[str] = None,
    op_timeout_s: float = 30.0,
    buffer_count: int = 5,
)
```
Bases: Scanner3DBackend
Backend for Photoneo PhoXi 3D scanners using Harvesters.
Photoneo PhoXi scanners are structured light 3D sensors that output multiple data components: Range, Intensity, Confidence, Normal, and Color.
This backend uses GigE Vision protocol via Harvesters library with Matrix Vision GenTL Producer.
Extends Scanner3DBackend to provide consistent interface across different 3D scanner manufacturers.
Requirements
- Harvesters library: pip install harvesters
- Matrix Vision mvIMPACT Acquire SDK with GenTL Producer
- Photoneo PhoXi firmware version 1.13.0 or later
Usage
```python
backend = PhotoneoBackend(serial_number="ABC123")
await backend.initialize()
result = await backend.capture()
print(result.range_shape)
await backend.close()
```
Initialize Photoneo backend.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| serial_number | Optional[str] | Serial number of specific scanner. If None, opens first available Photoneo device. | None |
| cti_path | Optional[str] | Path to GenTL Producer (.cti file). Auto-detected if None. | None |
| op_timeout_s | float | Timeout in seconds for SDK operations (default 30s). | 30.0 |
| buffer_count | int | Number of frame buffers for acquisition. | 5 |

Raises:

| Type | Description |
|---|---|
| SDKNotAvailableError | If Harvesters is not available |
| CameraConfigurationError | If CTI file not found |
discover
staticmethod
Discover available Photoneo devices.
Returns:

| Type | Description |
|---|---|
| List[str] | List of serial numbers for available Photoneo devices |

Raises:

| Type | Description |
|---|---|
| SDKNotAvailableError | If Harvesters is not available |
discover_async
async
classmethod
Async wrapper for discover().
Returns:

| Type | Description |
|---|---|
| List[str] | List of serial numbers for available Photoneo devices |
discover_detailed
staticmethod
Discover Photoneo devices with detailed information.
Returns:

| Type | Description |
|---|---|
| List[Dict[str, str]] | List of dictionaries containing device information |
discover_detailed_async
async
classmethod
Async wrapper for discover_detailed().
Returns:

| Type | Description |
|---|---|
| List[Dict[str, str]] | List of dictionaries containing device information |
initialize
async
Initialize scanner connection.
Returns:

| Type | Description |
|---|---|
| bool | True if initialization successful |

Raises:

| Type | Description |
|---|---|
| CameraNotFoundError | If scanner not found |
| CameraConnectionError | If connection fails |
capture
async
```python
capture(
    timeout_ms: int = 10000,
    enable_range: bool = True,
    enable_intensity: bool = True,
    enable_confidence: bool = False,
    enable_normal: bool = False,
    enable_color: bool = False,
) -> ScanResult
```
Capture 3D scan data with multiple components.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| timeout_ms | int | Capture timeout in milliseconds | 10000 |
| enable_range | bool | Whether to capture range/depth data | True |
| enable_intensity | bool | Whether to capture intensity data | True |
| enable_confidence | bool | Whether to capture confidence data | False |
| enable_normal | bool | Whether to capture surface normals | False |
| enable_color | bool | Whether to capture color texture | False |

Returns:

| Type | Description |
|---|---|
| ScanResult | ScanResult containing captured data |

Raises:

| Type | Description |
|---|---|
| CameraConnectionError | If scanner not opened |
| CameraCaptureError | If capture fails |
| CameraTimeoutError | If capture times out |
capture_point_cloud
async
```python
capture_point_cloud(
    include_colors: bool = True,
    include_confidence: bool = False,
    timeout_ms: int = 10000,
) -> PointCloudData
```
Capture and generate 3D point cloud.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| include_colors | bool | Whether to include color/intensity | True |
| include_confidence | bool | Whether to include confidence values | False |
| timeout_ms | int | Capture timeout in milliseconds | 10000 |

Returns:

| Type | Description |
|---|---|
| PointCloudData | PointCloudData with 3D points |

Raises:

| Type | Description |
|---|---|
| CameraConnectionError | If scanner not opened |
| CameraCaptureError | If capture fails |
get_capabilities
async
Get scanner capabilities and available settings.
get_configuration
async
Get current scanner configuration.
set_configuration
async
Apply scanner configuration.
set_exposure_time
async
Set exposure time in milliseconds.
set_coding_strategy
async
Set structured light coding strategy.
set_output_topology
async
Set point cloud output topology.
set_normals_estimation_radius
async
Set radius for surface normal estimation (0-4).
get_normals_estimation_radius
async
Get current normals estimation radius.
set_max_inaccuracy
async
Set maximum allowed inaccuracy for point filtering (0-100).
set_hole_filling
async
Enable/disable hole filling in point cloud.
set_calibration_volume_only
async
Enable/disable filtering to calibration volume only.
get_calibration_volume_only
async
Get calibration volume filtering state.
set_trigger_mode
async
Set trigger mode ('Software', 'Hardware', 'Continuous').
set_hardware_trigger
async
Enable/disable hardware trigger.
set_shutter_multiplier
async
Set shutter multiplier (1-10).
AsyncScanner3D
Bases: Mindtrace
Async 3D scanner interface.
Provides high-level 3D scanning operations including multi-component capture and point cloud generation.
Usage
```python
scanner = await AsyncScanner3D.open()
result = await scanner.capture()
print(result.range_shape)
await scanner.close()
```
Initialize async 3D scanner.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| backend | | Backend instance (e.g., PhotoneoBackend) | required |
open
async
classmethod
Open and initialize a 3D scanner.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| name | Optional[str] | Scanner identifier. Format: "Backend:serial_number". Supported backends: "Photoneo", "MockPhotoneo". If None, opens first available Photoneo scanner. | None |

Returns:

| Type | Description |
|---|---|
| AsyncScanner3D | Initialized AsyncScanner3D instance |

Raises:

| Type | Description |
|---|---|
| CameraNotFoundError | If scanner not found |
| CameraConnectionError | If connection fails |
capture
async
```python
capture(
    timeout_ms: int = 10000,
    enable_range: bool = True,
    enable_intensity: bool = True,
    enable_confidence: bool = False,
    enable_normal: bool = False,
    enable_color: bool = False,
) -> ScanResult
```
Capture multi-component 3D scan data.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| timeout_ms | int | Capture timeout in milliseconds | 10000 |
| enable_range | bool | Whether to capture range/depth data | True |
| enable_intensity | bool | Whether to capture intensity data | True |
| enable_confidence | bool | Whether to capture confidence data | False |
| enable_normal | bool | Whether to capture surface normals | False |
| enable_color | bool | Whether to capture color texture | False |

Returns:

| Type | Description |
|---|---|
| ScanResult | ScanResult containing captured data |

Raises:

| Type | Description |
|---|---|
| CameraConnectionError | If scanner not opened |
| CameraCaptureError | If capture fails |
capture_point_cloud
async
```python
capture_point_cloud(
    include_colors: bool = True,
    include_confidence: bool = False,
    downsample_factor: int = 1,
    timeout_ms: int = 10000,
) -> PointCloudData
```
Capture and generate 3D point cloud.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| include_colors | bool | Whether to include color information | True |
| include_confidence | bool | Whether to include confidence values | False |
| downsample_factor | int | Downsampling factor (1 = no downsampling) | 1 |
| timeout_ms | int | Capture timeout in milliseconds | 10000 |

Returns:

| Type | Description |
|---|---|
| PointCloudData | PointCloudData with 3D points and optional attributes |

Raises:

| Type | Description |
|---|---|
| CameraConnectionError | If scanner not opened |
| CameraCaptureError | If capture fails |
get_capabilities
async
Get scanner capabilities and available settings.
Returns:

| Type | Description |
|---|---|
| ScannerCapabilities | ScannerCapabilities with available options and ranges |
get_configuration
async
Get current scanner configuration.
Returns:

| Type | Description |
|---|---|
| ScannerConfiguration | ScannerConfiguration with current settings |
set_configuration
async
Apply scanner configuration.
Only non-None values in the configuration will be applied.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| config | ScannerConfiguration | Configuration to apply | required |
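The "only non-None values are applied" behavior can be sketched with a plain dataclass (the field names below are illustrative, not the real ScannerConfiguration schema):

```python
from dataclasses import dataclass, fields
from typing import Optional

@dataclass
class ScannerConfigurationSketch:
    # Illustrative fields; the real ScannerConfiguration may differ.
    exposure_ms: Optional[float] = None
    trigger_mode: Optional[str] = None

def apply_configuration(current: dict, config: ScannerConfigurationSketch) -> dict:
    """Merge only the non-None fields of the configuration into current settings."""
    updated = dict(current)
    for f in fields(config):
        value = getattr(config, f.name)
        if value is not None:
            updated[f.name] = value
    return updated

settings = apply_configuration(
    {"exposure_ms": 10.24, "trigger_mode": "Software"},
    ScannerConfigurationSketch(exposure_ms=20.48),  # trigger_mode left as None
)
```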
set_exposure_time
async
get_exposure_time
async
set_trigger_mode
async
CoordinateMap
dataclass
```python
CoordinateMap(
    x_map: Optional[ndarray] = None,
    y_map: Optional[ndarray] = None,
    width: int = 0,
    height: int = 0,
    scale: float = 1.0,
    offset: float = 0.0,
    is_valid: bool = False,
)
```
Coordinate map for efficient point cloud generation.
Photoneo devices can provide pre-computed coordinate maps that allow efficient conversion from range-only data to full 3D point clouds. This enables faster transfers (only Z data) with local point cloud computation.
Attributes:

| Name | Type | Description |
|---|---|---|
| x_map | Optional[ndarray] | X coordinate map (H, W) - multiply by range to get X |
| y_map | Optional[ndarray] | Y coordinate map (H, W) - multiply by range to get Y |
| width | int | Map width in pixels |
| height | int | Map height in pixels |
| scale | float | Coordinate scale factor |
| offset | float | Coordinate offset |
| is_valid | bool | Whether the map has been initialized |
from_projected_c
classmethod
```python
from_projected_c(
    projected_c: ndarray,
    width: int,
    height: int,
    scale: float = 1.0,
    offset: float = 0.0,
) -> "CoordinateMap"
```
Create coordinate map from Photoneo ProjectedC component.
The ProjectedC component contains pre-computed X,Y coordinates that can be cached and reused for faster point cloud generation.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| projected_c | ndarray | ProjectedC data from Photoneo (H, W, 3) float32 | required |
| width | int | Image width | required |
| height | int | Image height | required |
| scale | float | Coordinate scale factor | 1.0 |
| offset | float | Coordinate offset | 0.0 |

Returns:

| Type | Description |
|---|---|
| CoordinateMap | CoordinateMap instance |
compute_point_cloud
Compute 3D point cloud from range map using cached coordinates.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| range_map | ndarray | Depth/range map (H, W) | required |
| valid_mask | Optional[ndarray] | Optional mask of valid pixels | None |

Returns:

| Type | Description |
|---|---|
| ndarray | Point cloud array (N, 3) with X, Y, Z coordinates |
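The per-pixel math follows from the attribute descriptions above: X = x_map * range and Y = y_map * range, with Z taken from the range map itself. A NumPy sketch of that computation (the zero-range invalid-pixel convention is an assumption here):

```python
import numpy as np

def compute_points(x_map, y_map, range_map, valid_mask=None):
    """Turn a range map into an (N, 3) point array using cached coordinate maps."""
    if valid_mask is None:
        valid_mask = range_map > 0  # assumption: zero range = no return at that pixel
    z = range_map[valid_mask].astype(np.float64)
    x = x_map[valid_mask] * z  # X = x_map * range
    y = y_map[valid_mask] * z  # Y = y_map * range
    return np.stack([x, y, z], axis=1)

# 2x2 toy maps: only the top-left pixel has a valid (non-zero) range.
x_map = np.array([[0.5, 0.1], [0.2, 0.3]])
y_map = np.array([[-0.5, 0.1], [0.2, 0.3]])
range_map = np.array([[2.0, 0.0], [0.0, 0.0]])
points = compute_points(x_map, y_map, range_map)  # -> [[1.0, -1.0, 2.0]]
```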
PointCloudData
dataclass
```python
PointCloudData(
    points: ndarray,
    colors: Optional[ndarray] = None,
    normals: Optional[ndarray] = None,
    confidence: Optional[ndarray] = None,
    num_points: int = 0,
    has_colors: bool = False,
)
```
3D point cloud data with optional attributes.
Attributes:

| Name | Type | Description |
|---|---|---|
| points | ndarray | Array of 3D points (N, 3) - (x, y, z) in meters |
| colors | Optional[ndarray] | Optional RGB colors (N, 3) - values in [0, 1] |
| normals | Optional[ndarray] | Optional surface normals (N, 3) - unit vectors |
| confidence | Optional[ndarray] | Optional per-point confidence (N,) |
| num_points | int | Number of valid points |
| has_colors | bool | Flag indicating if color information is present |
save_ply
Save point cloud as PLY file.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| path | str | Output file path | required |
| binary | bool | If True, save in binary format; otherwise ASCII | True |

Raises:

| Type | Description |
|---|---|
| ImportError | If plyfile is not installed |
downsample
Downsample point cloud by given factor.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| factor | int | Downsampling factor (e.g., 2 = keep every 2nd point) | required |

Returns:

| Type | Description |
|---|---|
| PointCloudData | New PointCloudData with downsampled data |
filter_by_confidence
Filter points by minimum confidence threshold.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| min_confidence | float | Minimum confidence value (0.0 to 1.0) | required |

Returns:

| Type | Description |
|---|---|
| PointCloudData | New PointCloudData with filtered points |

Raises:

| Type | Description |
|---|---|
| ValueError | If no confidence data available |
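Both operations reduce to NumPy slicing and boolean masking over the parallel (N, ...) arrays; a minimal sketch under the attribute shapes listed above (standalone functions standing in for the PointCloudData methods):

```python
import numpy as np

def downsample(points, factor):
    """Keep every `factor`-th point (factor=1 means no downsampling)."""
    return points[::factor]

def filter_by_confidence(points, confidence, min_confidence):
    """Keep points whose per-point confidence meets the threshold."""
    if confidence is None:
        raise ValueError("No confidence data available")
    return points[confidence >= min_confidence]

points = np.array([[0.0, 0.0, 1.0], [0.0, 0.0, 2.0],
                   [0.0, 0.0, 3.0], [0.0, 0.0, 4.0]])
confidence = np.array([0.9, 0.2, 0.8, 0.1])
kept = filter_by_confidence(points, confidence, 0.5)  # rows 0 and 2 survive
halved = downsample(points, 2)                        # rows 0 and 2
```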
ScanComponent
Bases: Enum
Available scan components from 3D scanners.
Scanner3D
```python
Scanner3D(
    async_scanner: Optional[AsyncScanner3D] = None,
    loop: Optional[AbstractEventLoop] = None,
    name: Optional[str] = None,
    **kwargs
)
```
Bases: Mindtrace
Synchronous wrapper around AsyncScanner3D.
All operations are executed on a background event loop. This provides a simple synchronous API for 3D scanner operations.
Usage
```python
scanner = Scanner3D()
result = scanner.capture()
print(result.range_shape)
scanner.close()
```
Or with context manager
```python
with Scanner3D() as scanner:
    result = scanner.capture()
    print(result.range_shape)
```
Create a synchronous 3D scanner wrapper.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| async_scanner | Optional[AsyncScanner3D] | Existing AsyncScanner3D instance | None |
| loop | Optional[AbstractEventLoop] | Event loop to use for async operations | None |
| name | Optional[str] | Scanner identifier. Format: "Photoneo:serial_number". If None, opens first available scanner. | None |
| **kwargs | | Additional arguments passed to Mindtrace | {} |
Examples:
```python
>>> # Use existing async scanner
>>> async_scan = await AsyncScanner3D.open()
>>> sync_scan = Scanner3D(async_scanner=async_scan, loop=loop)
```
name
property
Get scanner name.
Returns:

| Type | Description |
|---|---|
| str | Scanner name in format "Backend:serial_number" |
is_open
property
Check if scanner is open.
Returns:

| Type | Description |
|---|---|
| bool | True if scanner is open, False otherwise |
close
capture
```python
capture(
    timeout_ms: int = 10000,
    enable_range: bool = True,
    enable_intensity: bool = True,
    enable_confidence: bool = False,
    enable_normal: bool = False,
    enable_color: bool = False,
) -> ScanResult
```
Capture multi-component 3D scan data.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| timeout_ms | int | Capture timeout in milliseconds | 10000 |
| enable_range | bool | Whether to capture range/depth data | True |
| enable_intensity | bool | Whether to capture intensity data | True |
| enable_confidence | bool | Whether to capture confidence data | False |
| enable_normal | bool | Whether to capture surface normals | False |
| enable_color | bool | Whether to capture color texture | False |

Returns:

| Type | Description |
|---|---|
| ScanResult | ScanResult containing captured data |

Raises:

| Type | Description |
|---|---|
| CameraConnectionError | If scanner not opened |
| CameraCaptureError | If capture fails |
capture_point_cloud
```python
capture_point_cloud(
    include_colors: bool = True,
    include_confidence: bool = False,
    downsample_factor: int = 1,
    timeout_ms: int = 10000,
) -> PointCloudData
```
Capture and generate 3D point cloud.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| include_colors | bool | Whether to include color information | True |
| include_confidence | bool | Whether to include confidence values | False |
| downsample_factor | int | Downsampling factor (1 = no downsampling) | 1 |
| timeout_ms | int | Capture timeout in milliseconds | 10000 |

Returns:

| Type | Description |
|---|---|
| PointCloudData | PointCloudData with 3D points and optional attributes |

Raises:

| Type | Description |
|---|---|
| CameraConnectionError | If scanner not opened |
| CameraCaptureError | If capture fails |
set_exposure_time
Set exposure time in microseconds.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| microseconds | float | Exposure time in microseconds (e.g., 5000 = 5ms) | required |

Raises:

| Type | Description |
|---|---|
| CameraConfigurationError | If configuration fails |
get_exposure_time
Get current exposure time in microseconds.
Returns:

| Type | Description |
|---|---|
| float | Current exposure time in microseconds |

Raises:

| Type | Description |
|---|---|
| CameraConnectionError | If scanner not opened |
set_trigger_mode
get_trigger_mode
ScanResult
dataclass
```python
ScanResult(
    range_map: Optional[ndarray] = None,
    intensity: Optional[ndarray] = None,
    confidence: Optional[ndarray] = None,
    normal_map: Optional[ndarray] = None,
    color: Optional[ndarray] = None,
    timestamp: float = 0.0,
    frame_number: int = 0,
    components_enabled: Dict[ScanComponent, bool] = dict(),
    metadata: Dict[str, Union[str, int, float]] = dict(),
)
```
Result from 3D scanner capture containing multi-component data.
Attributes:

| Name | Type | Description |
|---|---|---|
| range_map | Optional[ndarray] | Depth/range map - typically uint16 or float32 (H, W) |
| intensity | Optional[ndarray] | Intensity image - uint8 or uint16 (H, W) or (H, W, 3) |
| confidence | Optional[ndarray] | Confidence map - uint8 or uint16 (H, W), values indicate quality |
| normal_map | Optional[ndarray] | Surface normals - float32 (H, W, 3), xyz components |
| color | Optional[ndarray] | Color texture - uint8 (H, W, 3) RGB |
| timestamp | float | Capture timestamp in seconds (from device or system) |
| frame_number | int | Sequential frame number |
| components_enabled | Dict[ScanComponent, bool] | Dict of which components were captured |
| metadata | Dict[str, Union[str, int, float]] | Additional scan metadata (exposure, gain, etc.) |
get_valid_mask
Get mask of valid pixels based on range and confidence.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| min_confidence | int | Minimum confidence threshold (0-255 typical) | 0 |

Returns:

| Type | Description |
|---|---|
| ndarray | Boolean mask (H, W) where True indicates valid pixel |
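Combining a range check with a confidence threshold is a two-term boolean mask; a NumPy sketch consistent with the description above (the exact invalid-range sentinel is an assumption here; zero is used):

```python
import numpy as np

def get_valid_mask(range_map, confidence=None, min_confidence=0):
    """True where the pixel has a usable range and enough confidence."""
    mask = range_map > 0  # assumption: zero marks an invalid/no-return pixel
    if confidence is not None:
        mask &= confidence >= min_confidence
    return mask

range_map = np.array([[1200, 0], [900, 1500]], dtype=np.uint16)
confidence = np.array([[255, 255], [10, 200]], dtype=np.uint8)
mask = get_valid_mask(range_map, confidence, min_confidence=128)
# pixel (0, 1) has no range; pixel (1, 0) fails the confidence check
```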
backends
3D scanner backend implementations.
This module provides the abstract base class and concrete implementations for 3D scanner backends.
MockPhotoneoBackend
```python
MockPhotoneoBackend(
    serial_number: Optional[str] = None,
    width: int = 2064,
    height: int = 1544,
    op_timeout_s: float = 30.0,
)
```
Bases: Scanner3DBackend
Mock backend for Photoneo scanners for testing.
Generates synthetic 3D data for testing scanner integration without physical hardware.
Extends Scanner3DBackend to provide consistent interface across different 3D scanner backends.
Usage
```python
backend = MockPhotoneoBackend(serial_number="MOCK001")
await backend.initialize()
result = await backend.capture()
print(result.range_shape)
await backend.close()
```
Initialize mock Photoneo backend.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| serial_number | Optional[str] | Serial number of mock device. If None, uses first available mock device. | None |
| width | int | Width of generated images | 2064 |
| height | int | Height of generated images | 1544 |
| op_timeout_s | float | Timeout for operations (simulated) | 30.0 |
|
staticmethod
Discover available mock devices.
Returns:

| Type | Description |
|---|---|
| List[str] | List of serial numbers for mock devices |
staticmethod
Discover mock devices with detailed information.
Returns:

| Type | Description |
|---|---|
| List[Dict[str, str]] | List of dictionaries containing device information |
async
classmethod
Async wrapper for discover_detailed().
async
Initialize mock scanner connection.
Returns:

| Type | Description |
|---|---|
| bool | True if initialization successful |

Raises:

| Type | Description |
|---|---|
| CameraNotFoundError | If mock device not found |
async
```python
capture(
    timeout_ms: int = 10000,
    enable_range: bool = True,
    enable_intensity: bool = True,
    enable_confidence: bool = False,
    enable_normal: bool = False,
    enable_color: bool = False,
) -> ScanResult
```
Capture synthetic 3D scan data.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| timeout_ms | int | Capture timeout (simulated) | 10000 |
| enable_range | bool | Whether to generate range data | True |
| enable_intensity | bool | Whether to generate intensity data | True |
| enable_confidence | bool | Whether to generate confidence data | False |
| enable_normal | bool | Whether to generate normal data | False |
| enable_color | bool | Whether to generate color data | False |

Returns:

| Type | Description |
|---|---|
| ScanResult | ScanResult with synthetic data |

Raises:

| Type | Description |
|---|---|
| CameraConnectionError | If not initialized |
async
```python
capture_point_cloud(
    include_colors: bool = True,
    include_confidence: bool = False,
    timeout_ms: int = 10000,
) -> PointCloudData
```
Capture and generate synthetic point cloud.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| include_colors | bool | Whether to include colors | True |
| include_confidence | bool | Whether to include confidence | False |
| timeout_ms | int | Capture timeout | 10000 |

Returns:

| Type | Description |
|---|---|
| PointCloudData | PointCloudData with synthetic points |
async
Get current mock scanner configuration.
async
Apply scanner configuration.
async
Set exposure time in milliseconds.
async
Set shutter multiplier (1-10).
async
Set structured light coding strategy.
async
Set point cloud output topology.
async
Set radius for surface normal estimation (0-4).
async
Get current normals estimation radius.
async
Set maximum allowed inaccuracy for point filtering (0-100).
async
Enable/disable hole filling in point cloud.
async
Enable/disable filtering to calibration volume only.
async
Get calibration volume filtering state.
async
Set trigger mode ('Software', 'Hardware', 'Continuous').
async
Enable/disable hardware trigger.
PhotoneoBackend
```python
PhotoneoBackend(
    serial_number: Optional[str] = None,
    cti_path: Optional[str] = None,
    op_timeout_s: float = 30.0,
    buffer_count: int = 5,
)
```
Bases: Scanner3DBackend
Backend for Photoneo PhoXi 3D scanners using Harvesters.
Photoneo PhoXi scanners are structured light 3D sensors that output multiple data components: Range, Intensity, Confidence, Normal, and Color.
This backend uses GigE Vision protocol via Harvesters library with Matrix Vision GenTL Producer.
Extends Scanner3DBackend to provide consistent interface across different 3D scanner manufacturers.
Requirements
- Harvesters library: pip install harvesters
- Matrix Vision mvIMPACT Acquire SDK with GenTL Producer
- Photoneo PhoXi firmware version 1.13.0 or later
Usage
```python
backend = PhotoneoBackend(serial_number="ABC123")
await backend.initialize()
result = await backend.capture()
print(result.range_shape)
await backend.close()
```
Initialize Photoneo backend.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| serial_number | Optional[str] | Serial number of specific scanner. If None, opens first available Photoneo device. | None |
| cti_path | Optional[str] | Path to GenTL Producer (.cti file). Auto-detected if None. | None |
| op_timeout_s | float | Timeout in seconds for SDK operations. | 30.0 |
| buffer_count | int | Number of frame buffers for acquisition. | 5 |

Raises:

| Type | Description |
|---|---|
| SDKNotAvailableError | If Harvesters is not available |
| CameraConfigurationError | If CTI file not found |
staticmethod
Discover available Photoneo devices.

Returns:

| Type | Description |
|---|---|
| List[str] | List of serial numbers for available Photoneo devices |

Raises:

| Type | Description |
|---|---|
| SDKNotAvailableError | If Harvesters is not available |

async classmethod
Async wrapper for discover().

Returns:

| Type | Description |
|---|---|
| List[str] | List of serial numbers for available Photoneo devices |

staticmethod
Discover Photoneo devices with detailed information.

Returns:

| Type | Description |
|---|---|
| List[Dict[str, str]] | List of dictionaries containing device information |

async classmethod
Async wrapper for discover_detailed().

Returns:

| Type | Description |
|---|---|
| List[Dict[str, str]] | List of dictionaries containing device information |
async
Initialize scanner connection.

Returns:

| Type | Description |
|---|---|
| bool | True if initialization successful |

Raises:

| Type | Description |
|---|---|
| CameraNotFoundError | If scanner not found |
| CameraConnectionError | If connection fails |
async
capture(
timeout_ms: int = 10000,
enable_range: bool = True,
enable_intensity: bool = True,
enable_confidence: bool = False,
enable_normal: bool = False,
enable_color: bool = False,
) -> ScanResult
Capture 3D scan data with multiple components.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| timeout_ms | int | Capture timeout in milliseconds | 10000 |
| enable_range | bool | Whether to capture range/depth data | True |
| enable_intensity | bool | Whether to capture intensity data | True |
| enable_confidence | bool | Whether to capture confidence data | False |
| enable_normal | bool | Whether to capture surface normals | False |
| enable_color | bool | Whether to capture color texture | False |

Returns:

| Type | Description |
|---|---|
| ScanResult | ScanResult containing captured data |

Raises:

| Type | Description |
|---|---|
| CameraConnectionError | If scanner not opened |
| CameraCaptureError | If capture fails |
| CameraTimeoutError | If capture times out |
async
capture_point_cloud(
include_colors: bool = True,
include_confidence: bool = False,
timeout_ms: int = 10000,
) -> PointCloudData
Capture and generate 3D point cloud.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| include_colors | bool | Whether to include color/intensity | True |
| include_confidence | bool | Whether to include confidence values | False |
| timeout_ms | int | Capture timeout in milliseconds | 10000 |

Returns:

| Type | Description |
|---|---|
| PointCloudData | PointCloudData with 3D points |

Raises:

| Type | Description |
|---|---|
| CameraConnectionError | If scanner not opened |
| CameraCaptureError | If capture fails |
Additional async methods:

- Get scanner capabilities and available settings.
- Get current scanner configuration.
- Apply scanner configuration.
- Set exposure time in milliseconds.
- Set structured light coding strategy.
- Set point cloud output topology.
- Set radius for surface normal estimation (0-4).
- Get current normals estimation radius.
- Set maximum allowed inaccuracy for point filtering (0-100).
- Enable/disable hole filling in point cloud.
- Enable/disable filtering to calibration volume only.
- Get calibration volume filtering state.
- Set trigger mode ('Software', 'Hardware', 'Continuous').
- Enable/disable hardware trigger.
- Set shutter multiplier (1-10).
Scanner3DBackend
Bases: MindtraceABC
Abstract base class for all 3D scanner implementations.
This class defines the async interface that all 3D scanner backends must implement to ensure consistent behavior across different scanner types and manufacturers. Supports structured light (Photoneo, Ensenso), time-of-flight, LiDAR, and other 3D scanning technologies.
Uses async-first design consistent with CameraBackend and StereoCameraBackend.
Attributes:

| Name | Type | Description |
|---|---|---|
| serial_number | | Unique identifier for the scanner |
| is_open | bool | Scanner connection status |
Implementation Guide

- Offload blocking SDK calls from async methods: Use `asyncio.to_thread` for simple cases or `loop.run_in_executor` with a per-instance single-thread executor when the SDK requires thread affinity.
- Thread affinity: Many vendor SDKs (e.g., Harvesters/GenTL) are safest when all calls originate from one OS thread. Prefer a dedicated single-thread executor created during `initialize()` and shut down in `close()` to serialize SDK access without blocking the event loop.
- Timeouts and cancellation: Prefer SDK-native timeouts where available. Otherwise, wrap awaited futures with `asyncio.wait_for` to bound runtime.
- Event loop hygiene: Never call blocking functions directly in async methods. Replace sleeps with `await asyncio.sleep` or run blocking work in the executor.
- Errors: Map SDK-specific exceptions to domain exceptions in `mindtrace.hardware.core.exceptions` with clear, contextual messages.
- Cleanup: Ensure resources (device handles, Harvester instances, buffers) are released in `close()`.
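The executor guidance above can be sketched as follows. `FakeSDK` and `SketchBackend` are illustrative stand-ins for a thread-affine vendor SDK and a backend, not part of this module:

```python
import asyncio
import threading
from concurrent.futures import ThreadPoolExecutor


class FakeSDK:
    """Stand-in for a thread-affine vendor SDK (e.g., Harvesters/GenTL)."""

    def __init__(self):
        self.thread_ids = set()

    def connect(self):
        self.thread_ids.add(threading.get_ident())

    def grab(self):
        self.thread_ids.add(threading.get_ident())
        return {"range": [[1.0]]}


class SketchBackend:
    def __init__(self, op_timeout_s: float = 30.0):
        self._sdk = FakeSDK()
        self._op_timeout_s = op_timeout_s
        self._executor = None

    async def initialize(self) -> bool:
        # Dedicated single-thread executor: every SDK call runs on one OS thread.
        self._executor = ThreadPoolExecutor(max_workers=1)
        await self._run(self._sdk.connect)
        return True

    async def capture(self):
        # Bound the blocking call with asyncio.wait_for when the SDK
        # has no native timeout.
        return await asyncio.wait_for(self._run(self._sdk.grab), self._op_timeout_s)

    async def close(self):
        # Shutting down an idle single-thread executor is near-instant.
        if self._executor is not None:
            self._executor.shutdown(wait=True)
            self._executor = None

    async def _run(self, fn):
        loop = asyncio.get_running_loop()
        return await loop.run_in_executor(self._executor, fn)


async def demo():
    backend = SketchBackend()
    await backend.initialize()
    result = await backend.capture()
    await backend.close()
    return result, backend._sdk.thread_ids
```

Because the executor has a single worker, all SDK calls are serialized onto one OS thread while the event loop stays free.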
Supported Scanner Types
- Structured light: Photoneo PhoXi, Ensenso, Zivid
- Time-of-flight: Various ToF sensors
- LiDAR: Point cloud scanners
- Other 3D sensing technologies
Example Implementation

```python
class MyScanner3DBackend(Scanner3DBackend):
    async def initialize(self) -> bool:
        # Connect to scanner
        return True

    async def capture(self, ...) -> ScanResult:
        # Capture 3D scan data
        return ScanResult(...)
```
Initialize base 3D scanner backend.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| serial_number | Optional[str] | Unique identifier for the scanner (auto-discovered if None) | None |
| op_timeout_s | float | Default timeout in seconds for SDK operations | 30.0 |
abstractmethod staticmethod
Discover available 3D scanners.

Returns:

| Type | Description |
|---|---|
| List[str] | List of serial numbers or identifiers for available scanners |

Raises:

| Type | Description |
|---|---|
| SDKNotAvailableError | If required SDK is not available |

async classmethod
Async wrapper for discover() - runs discovery in threadpool.
Default implementation runs discover() in a thread. Override if your SDK provides native async discovery.

Returns:

| Type | Description |
|---|---|
| List[str] | List of serial numbers for available scanners |
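The default wrapper described above can be sketched like this (class and device names are illustrative):

```python
import asyncio


class SketchScannerBackend:
    @staticmethod
    def discover():
        # A real backend would perform a blocking SDK query here.
        return ["SCANNER-001", "SCANNER-002"]

    @classmethod
    async def discover_async(cls):
        # asyncio.to_thread keeps the event loop free during discovery.
        return await asyncio.to_thread(cls.discover)
```

A backend whose SDK offers native async discovery would override `discover_async` instead of relying on the thread wrapper.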
staticmethod
Discover scanners with detailed information.

Returns:

| Type | Description |
|---|---|
| List[Dict[str, str]] | List of dictionaries containing scanner information |

async classmethod
Async wrapper for discover_detailed().
abstractmethod async
Initialize scanner connection.

This method should:
1. Connect to the scanner hardware
2. Apply default configuration
3. Prepare for acquisition

Returns:

| Type | Description |
|---|---|
| bool | True if initialization successful |

Raises:

| Type | Description |
|---|---|
| CameraNotFoundError | If scanner cannot be found |
| CameraConnectionError | If connection fails |
| SDKNotAvailableError | If required SDK is not available |
abstractmethod async
capture(
    timeout_ms: int = 10000,
    enable_range: bool = True,
    enable_intensity: bool = True,
    enable_confidence: bool = False,
    enable_normal: bool = False,
    enable_color: bool = False,
) -> ScanResult

Capture 3D scan data with multiple components.

3D scanners can output multiple data types in a single capture:
- Range/Depth: Z-distance from scanner to surface
- Intensity: Grayscale texture/reflectance image
- Confidence: Per-pixel quality/confidence values
- Normal: Surface normal vectors
- Color: RGB texture (if color camera available)

Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| timeout_ms | int | Capture timeout in milliseconds | 10000 |
| enable_range | bool | Whether to capture range/depth data | True |
| enable_intensity | bool | Whether to capture intensity/texture data | True |
| enable_confidence | bool | Whether to capture confidence/quality data | False |
| enable_normal | bool | Whether to capture surface normal vectors | False |
| enable_color | bool | Whether to capture color texture | False |

Returns:

| Type | Description |
|---|---|
| ScanResult | ScanResult containing captured multi-component data |

Raises:

| Type | Description |
|---|---|
| CameraConnectionError | If scanner not opened |
| CameraCaptureError | If capture fails |
| CameraTimeoutError | If capture times out |
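A multi-component result maps naturally onto a container with one optional field per enable_* flag. The sketch below is an illustrative assumption, not the module's real ScanResult:

```python
from dataclasses import dataclass
from typing import List, Optional


@dataclass
class SketchScanResult:
    # Components that were not requested stay None.
    range_map: Optional[List[List[float]]] = None
    intensity: Optional[List[List[float]]] = None
    confidence: Optional[List[List[float]]] = None

    @property
    def range_shape(self):
        if self.range_map is None:
            return None
        return (len(self.range_map), len(self.range_map[0]))


def sketch_capture(enable_range=True, enable_intensity=True,
                   enable_confidence=False) -> SketchScanResult:
    # Only requested components are populated, mirroring capture()'s flags.
    rows, cols = 2, 3
    zeros = [[0.0] * cols for _ in range(rows)]
    return SketchScanResult(
        range_map=zeros if enable_range else None,
        intensity=zeros if enable_intensity else None,
        confidence=zeros if enable_confidence else None,
    )
```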
abstractmethod async
Close scanner and release resources.

This method should:
1. Stop any ongoing acquisition
2. Release hardware handles
3. Clean up Harvester/GenTL resources
4. Clean up executors/threads if used
async
capture_point_cloud(
include_colors: bool = True,
include_confidence: bool = False,
timeout_ms: int = 10000,
) -> PointCloudData
Capture and generate 3D point cloud.
Default implementation captures scan data and generates point cloud. Override for backend-specific optimization.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| include_colors | bool | Whether to include color/intensity information | True |
| include_confidence | bool | Whether to include confidence values | False |
| timeout_ms | int | Capture timeout in milliseconds | 10000 |

Returns:

| Type | Description |
|---|---|
| PointCloudData | PointCloudData with 3D points and optional attributes |

Raises:

| Type | Description |
|---|---|
| CameraConnectionError | If scanner not opened |
| CameraCaptureError | If capture fails |
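One common way to turn per-pixel range data into 3D points is pinhole back-projection. This sketch is illustrative only: the intrinsics (fx, fy, cx, cy) and the exact conversion a real backend performs are assumptions:

```python
from typing import List, Optional, Tuple


def range_to_points(
    range_map: List[List[Optional[float]]],
    fx: float, fy: float, cx: float, cy: float,
) -> List[Tuple[float, float, float]]:
    """Back-project a depth map to 3D points with a pinhole model.

    Pixels with no valid range (None) are skipped, so the cloud may
    contain fewer points than the image has pixels.
    """
    points = []
    for v, row in enumerate(range_map):
        for u, z in enumerate(row):
            if z is None:
                continue
            x = (u - cx) * z / fx
            y = (v - cy) * z / fy
            points.append((x, y, z))
    return points
```

Skipping invalid pixels is why `capture_point_cloud` can return fewer points than the sensor resolution suggests.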
async
Get scanner capabilities and available settings.

Returns:

| Type | Description |
|---|---|
| ScannerCapabilities | ScannerCapabilities describing what features are available |

Raises:

| Type | Description |
|---|---|
| CameraConnectionError | If scanner not opened |

async
Get current scanner configuration.

Returns:

| Type | Description |
|---|---|
| ScannerConfiguration | ScannerConfiguration with current settings |

Raises:

| Type | Description |
|---|---|
| CameraConnectionError | If scanner not opened |

async
Apply scanner configuration.
Only non-None values in the configuration will be applied.

Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| config | ScannerConfiguration | Configuration to apply | required |

Raises:

| Type | Description |
|---|---|
| CameraConnectionError | If scanner not opened |
| CameraConfigurationError | If configuration fails |
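The "only non-None values are applied" contract can be implemented by walking the configuration's fields and skipping unset ones. A sketch, with illustrative field names rather than the real ScannerConfiguration:

```python
from dataclasses import dataclass, fields
from typing import Optional


@dataclass
class SketchScannerConfiguration:
    exposure_ms: Optional[float] = None
    shutter_multiplier: Optional[int] = None
    trigger_mode: Optional[str] = None


def apply_configuration(current: dict, config: SketchScannerConfiguration) -> dict:
    """Return settings with only the non-None fields of config applied."""
    updated = dict(current)
    for f in fields(config):
        value = getattr(config, f.name)
        if value is not None:  # None means "leave this setting unchanged"
            updated[f.name] = value
    return updated
```

This lets callers express partial updates without first reading back the full configuration.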
async
Set exposure time in milliseconds.

Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| milliseconds | float | Exposure time in milliseconds | required |

Raises:

| Type | Description |
|---|---|
| CameraConnectionError | If scanner not opened |
| CameraConfigurationError | If configuration fails |

async
Get current exposure time in milliseconds.

Returns:

| Type | Description |
|---|---|
| float | Current exposure time in milliseconds |

Raises:

| Type | Description |
|---|---|
| CameraConnectionError | If scanner not opened |
async
Set shutter multiplier (1-10).
Higher values increase exposure by capturing multiple patterns.

Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| multiplier | int | Shutter multiplier value (1-10) | required |

async
Set scanner operation mode.

Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| mode | str | Operation mode ('Camera', 'Scanner', 'Mode_2D') | required |

async
Set structured light coding strategy.

Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| strategy | str | Coding strategy ('Normal', 'Interreflections', 'HighFrequency') | required |

async
Set scan quality/speed tradeoff.

Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| quality | str | Quality preset ('Ultra', 'High', 'Fast') | required |

async
Set LED illumination power.

Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| power | int | LED power level (typically 0-4095) | required |

async
Set laser/projector power.

Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| power | int | Laser power level (typically 1-4095) | required |

async
Set texture/intensity data source.

Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| source | str | Texture source ('LED', 'Computed', 'Laser', 'Focus', 'Color') | required |

async
Set point cloud output topology.

Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| topology | str | Output topology ('Raw', 'RegularGrid', 'FullGrid') | required |
async
Set coordinate system reference camera.

Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| space | str | Camera space ('PrimaryCamera', 'ColorCamera') | required |

async
Set radius for surface normal estimation.

Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| radius | int | Estimation radius (typically 0-4) | required |

async
Get current normals estimation radius.

async
Set maximum allowed inaccuracy for point filtering.

Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| value | float | Maximum inaccuracy (typically 0-100) | required |

async
Enable/disable hole filling in point cloud.

Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| enabled | bool | Whether to enable hole filling | required |

async
Enable/disable filtering to calibration volume only.

Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| enabled | bool | Whether to filter to calibration volume | required |

async
Get calibration volume filtering state.

async
Set trigger mode.

Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| mode | str | Trigger mode ('Software', 'Hardware', 'Continuous') | required |

async
Enable/disable hardware trigger.

Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| enabled | bool | Whether to enable hardware trigger | required |

async
Set hardware trigger signal edge.

Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| signal | str | Trigger signal edge ('Falling', 'Rising', 'Both') | required |

async
Get current hardware trigger signal setting.

async
Set maximum frames per second.

Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| fps | float | Maximum FPS (typically 0-100) | required |
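Several of the setters above accept one of a small set of string values. A typical implementation validates the choice before touching the SDK; this sketch uses an illustrative exception type rather than the module's real CameraConfigurationError:

```python
# Allowed values taken from the trigger-mode setter documented above.
ALLOWED_TRIGGER_MODES = ("Software", "Hardware", "Continuous")


class SketchConfigurationError(ValueError):
    """Illustrative stand-in for a domain configuration exception."""


def validate_choice(name: str, value: str, allowed) -> str:
    # Raise a descriptive error listing the valid options.
    if value not in allowed:
        raise SketchConfigurationError(
            f"{name} must be one of {allowed}, got {value!r}"
        )
    return value
```

Validating up front yields clear, contextual error messages instead of opaque SDK failures, in line with the Implementation Guide's error-mapping advice.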
photoneo
Photoneo PhoXi 3D scanner backend.
Mock Photoneo backend for testing without hardware.
Photoneo PhoXi 3D scanner backend using Harvesters (GigE Vision).
This backend provides access to Photoneo structured light 3D scanners via the GigE Vision protocol using the Harvesters library.
scanner_3d_backend
Abstract base class for 3D scanner backends.
This module defines the async interface that all 3D scanner backends must implement to ensure consistent behavior across different scanner types and manufacturers (structured light, time-of-flight, LiDAR, etc.).
Following the same architectural pattern as CameraBackend and StereoCameraBackend for consistency across the hardware module.
Bases: MindtraceABC
Abstract base class for all 3D scanner implementations.
This class defines the async interface that all 3D scanner backends must implement to ensure consistent behavior across different scanner types and manufacturers. Supports structured light (Photoneo, Ensenso), time-of-flight, LiDAR, and other 3D scanning technologies.
Uses async-first design consistent with CameraBackend and StereoCameraBackend.
Attributes:
| Name | Type | Description |
|---|---|---|
| serial_number | | Unique identifier for the scanner |
| is_open | bool | Scanner connection status |
Implementation Guide
- Offload blocking SDK calls from async methods: Use asyncio.to_thread for simple cases, or loop.run_in_executor with a per-instance single-thread executor when the SDK requires thread affinity.
- Thread affinity: Many vendor SDKs (e.g., Harvesters/GenTL) are safest when all calls originate from one OS thread. Prefer a dedicated single-thread executor created during initialize() and shut down in close() to serialize SDK access without blocking the event loop.
- Timeouts and cancellation: Prefer SDK-native timeouts where available. Otherwise, wrap awaited futures with asyncio.wait_for to bound runtime.
- Event loop hygiene: Never call blocking functions directly in async methods. Replace sleeps with await asyncio.sleep, or run blocking work in the executor.
- Errors: Map SDK-specific exceptions to domain exceptions in mindtrace.hardware.core.exceptions with clear, contextual messages.
- Cleanup: Ensure resources (device handles, Harvester instances, buffers) are released in close().
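The guidance above can be sketched generically. The following is a minimal illustration of the single-thread executor pattern; `FakeSDK` and `MyBackend` are hypothetical stand-ins, not part of the actual module:

```python
import asyncio
import time
from concurrent.futures import ThreadPoolExecutor
from typing import Optional


class FakeSDK:
    """Hypothetical stand-in for a blocking, thread-affine vendor SDK."""

    def connect(self) -> bool:
        time.sleep(0.01)  # simulate a blocking call
        return True

    def grab(self) -> str:
        time.sleep(0.01)
        return "frame"


class MyBackend:
    """Sketch: serialize SDK access on one OS thread without blocking the loop."""

    def __init__(self) -> None:
        self._sdk = FakeSDK()
        self._executor: Optional[ThreadPoolExecutor] = None

    async def initialize(self) -> bool:
        # One worker thread so every SDK call runs on the same OS thread.
        self._executor = ThreadPoolExecutor(max_workers=1)
        loop = asyncio.get_running_loop()
        return await loop.run_in_executor(self._executor, self._sdk.connect)

    async def capture(self, timeout_s: float = 5.0) -> str:
        loop = asyncio.get_running_loop()
        # Bound runtime with asyncio.wait_for since FakeSDK has no native timeout.
        return await asyncio.wait_for(
            loop.run_in_executor(self._executor, self._sdk.grab), timeout_s
        )

    async def close(self) -> None:
        # Release the executor created in initialize().
        if self._executor is not None:
            self._executor.shutdown(wait=True)
            self._executor = None


async def main() -> str:
    backend = MyBackend()
    assert await backend.initialize()
    frame = await backend.capture()
    await backend.close()
    return frame
```

The same shape applies to any blocking call (configuration, discovery); only the function handed to the executor changes.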
Supported Scanner Types
- Structured light: Photoneo PhoXi, Ensenso, Zivid
- Time-of-flight: Various ToF sensors
- LiDAR: Point cloud scanners
- Other 3D sensing technologies
Example Implementation

    class MyScanner3DBackend(Scanner3DBackend):
        async def initialize(self) -> bool:
            # Connect to scanner
            return True

        async def capture(self, ...) -> ScanResult:
            # Capture 3D scan data
            return ScanResult(...)
Initialize base 3D scanner backend.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| serial_number | Optional[str] | Unique identifier for the scanner (auto-discovered if None) | None |
| op_timeout_s | float | Default timeout in seconds for SDK operations | 30.0 |
abstractmethod
staticmethod
Discover available 3D scanners.
Returns:
| Type | Description |
|---|---|
| List[str] | List of serial numbers or identifiers for available scanners |

Raises:
| Type | Description |
|---|---|
| SDKNotAvailableError | If required SDK is not available |
async
classmethod
Async wrapper for discover() - runs discovery in threadpool.
Default implementation runs discover() in a thread. Override if your SDK provides native async discovery.
Returns:
| Type | Description |
|---|---|
| List[str] | List of serial numbers for available scanners |
staticmethod
Discover scanners with detailed information.
Returns:
| Type | Description |
|---|---|
| List[Dict[str, str]] | List of dictionaries containing scanner information |
async
classmethod
Async wrapper for discover_detailed().
abstractmethod
async
Initialize scanner connection.
This method should:
1. Connect to the scanner hardware
2. Apply default configuration
3. Prepare for acquisition
Returns:
| Type | Description |
|---|---|
| bool | True if initialization successful |

Raises:
| Type | Description |
|---|---|
| CameraNotFoundError | If scanner cannot be found |
| CameraConnectionError | If connection fails |
| SDKNotAvailableError | If required SDK is not available |
abstractmethod
async
capture(
timeout_ms: int = 10000,
enable_range: bool = True,
enable_intensity: bool = True,
enable_confidence: bool = False,
enable_normal: bool = False,
enable_color: bool = False,
) -> ScanResult
Capture 3D scan data with multiple components.
3D scanners can output multiple data types in a single capture:
- Range/Depth: Z-distance from scanner to surface
- Intensity: Grayscale texture/reflectance image
- Confidence: Per-pixel quality/confidence values
- Normal: Surface normal vectors
- Color: RGB texture (if color camera available)
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| timeout_ms | int | Capture timeout in milliseconds | 10000 |
| enable_range | bool | Whether to capture range/depth data | True |
| enable_intensity | bool | Whether to capture intensity/texture data | True |
| enable_confidence | bool | Whether to capture confidence/quality data | False |
| enable_normal | bool | Whether to capture surface normal vectors | False |
| enable_color | bool | Whether to capture color texture | False |
Returns:
| Type | Description |
|---|---|
| ScanResult | ScanResult containing captured multi-component data |

Raises:
| Type | Description |
|---|---|
| CameraConnectionError | If scanner not opened |
| CameraCaptureError | If capture fails |
| CameraTimeoutError | If capture times out |
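The component flags above select which arrays a capture returns. The following sketch shows that mapping with simplified stand-ins; `MiniScanResult` and `MiniMockBackend` are illustrative names, not the module's actual classes:

```python
import asyncio
from dataclasses import dataclass, field
from typing import Dict, Optional


@dataclass
class MiniScanResult:
    """Simplified stand-in for ScanResult, for illustration only."""
    range_map: Optional[list] = None
    intensity: Optional[list] = None
    confidence: Optional[list] = None
    components_enabled: Dict[str, bool] = field(default_factory=dict)


class MiniMockBackend:
    """Hypothetical backend returning only the requested components."""

    async def capture(
        self,
        enable_range: bool = True,
        enable_intensity: bool = True,
        enable_confidence: bool = False,
    ) -> MiniScanResult:
        # Components left disabled stay None in the result.
        return MiniScanResult(
            range_map=[[1.0]] if enable_range else None,
            intensity=[[128]] if enable_intensity else None,
            confidence=[[255]] if enable_confidence else None,
            components_enabled={
                "range": enable_range,
                "intensity": enable_intensity,
                "confidence": enable_confidence,
            },
        )


result = asyncio.run(MiniMockBackend().capture(enable_confidence=True))
```

Disabling unneeded components keeps transfers small; the result's `components_enabled` dict records what was actually captured.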
abstractmethod
async
Close scanner and release resources.
This method should:
1. Stop any ongoing acquisition
2. Release hardware handles
3. Clean up Harvester/GenTL resources
4. Clean up executors/threads if used
async
capture_point_cloud(
include_colors: bool = True,
include_confidence: bool = False,
timeout_ms: int = 10000,
) -> PointCloudData
Capture and generate 3D point cloud.
Default implementation captures scan data and generates point cloud. Override for backend-specific optimization.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| include_colors | bool | Whether to include color/intensity information | True |
| include_confidence | bool | Whether to include confidence values | False |
| timeout_ms | int | Capture timeout in milliseconds | 10000 |
Returns:
| Type | Description |
|---|---|
| PointCloudData | PointCloudData with 3D points and optional attributes |

Raises:
| Type | Description |
|---|---|
| CameraConnectionError | If scanner not opened |
| CameraCaptureError | If capture fails |
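The default implementation generates a point cloud from captured scan data. A generic sketch of that conversion, assuming a simple pinhole model with known intrinsics (the `range_map_to_points` helper and the intrinsics are illustrative; real scanners typically ship calibrated coordinate maps instead):

```python
import numpy as np


def range_map_to_points(
    range_map: np.ndarray, fx: float, fy: float, cx: float, cy: float
) -> np.ndarray:
    """Back-project a (H, W) range map to (N, 3) points with a pinhole model."""
    h, w = range_map.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = range_map.astype(np.float64)
    valid = z > 0  # drop pixels with no measurement
    x = (u - cx) / fx * z
    y = (v - cy) / fy * z
    return np.stack([x[valid], y[valid], z[valid]], axis=1)


# A 2x2 range map with one invalid (zero) pixel -> 3 points.
cloud = range_map_to_points(
    np.array([[1.0, 0.0], [2.0, 1.5]]), fx=500.0, fy=500.0, cx=0.5, cy=0.5
)
```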
async
Get scanner capabilities and available settings.
Returns:
| Type | Description |
|---|---|
| ScannerCapabilities | ScannerCapabilities describing what features are available |

Raises:
| Type | Description |
|---|---|
| CameraConnectionError | If scanner not opened |
async
Get current scanner configuration.
Returns:
| Type | Description |
|---|---|
| ScannerConfiguration | ScannerConfiguration with current settings |

Raises:
| Type | Description |
|---|---|
| CameraConnectionError | If scanner not opened |
async
Apply scanner configuration.
Only non-None values in the configuration will be applied.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| config | ScannerConfiguration | Configuration to apply | required |

Raises:
| Type | Description |
|---|---|
| CameraConnectionError | If scanner not opened |
| CameraConfigurationError | If configuration fails |
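The "only non-None values are applied" rule can be illustrated with a generic dataclass merge; `MiniConfig` and `apply_overrides` are illustrative names, not the module's actual implementation:

```python
from dataclasses import dataclass, fields, replace
from typing import Optional


@dataclass
class MiniConfig:
    """Illustrative subset of a scanner configuration."""
    exposure_time: Optional[float] = None
    led_power: Optional[int] = None
    hole_filling: Optional[bool] = None


def apply_overrides(current: MiniConfig, overrides: MiniConfig) -> MiniConfig:
    """Return current config with only the non-None override fields applied."""
    changes = {
        f.name: getattr(overrides, f.name)
        for f in fields(overrides)
        if getattr(overrides, f.name) is not None
    }
    return replace(current, **changes)


current = MiniConfig(exposure_time=10.2, led_power=2000, hole_filling=False)
updated = apply_overrides(current, MiniConfig(exposure_time=20.4))
```

Note that an explicit False (like hole_filling here) survives the merge; only None means "leave unchanged".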
async
Set exposure time in milliseconds.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| milliseconds | float | Exposure time in milliseconds | required |

Raises:
| Type | Description |
|---|---|
| CameraConnectionError | If scanner not opened |
| CameraConfigurationError | If configuration fails |
async
Get current exposure time in milliseconds.
Returns:
| Type | Description |
|---|---|
| float | Current exposure time in milliseconds |

Raises:
| Type | Description |
|---|---|
| CameraConnectionError | If scanner not opened |
async
Set shutter multiplier (1-10).
Higher values increase exposure by capturing multiple patterns.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| multiplier | int | Shutter multiplier value (1-10) | required |
async
Set scanner operation mode.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| mode | str | Operation mode ('Camera', 'Scanner', 'Mode_2D') | required |
async
Set structured light coding strategy.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| strategy | str | Coding strategy ('Normal', 'Interreflections', 'HighFrequency') | required |
async
Set scan quality/speed tradeoff.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| quality | str | Quality preset ('Ultra', 'High', 'Fast') | required |
async
Set LED illumination power.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| power | int | LED power level (typically 0-4095) | required |
async
Set laser/projector power.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| power | int | Laser power level (typically 1-4095) | required |
async
Set texture/intensity data source.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| source | str | Texture source ('LED', 'Computed', 'Laser', 'Focus', 'Color') | required |
async
Set point cloud output topology.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| topology | str | Output topology ('Raw', 'RegularGrid', 'FullGrid') | required |
async
Set coordinate system reference camera.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| space | str | Camera space ('PrimaryCamera', 'ColorCamera') | required |
async
Set radius for surface normal estimation.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| radius | int | Estimation radius (typically 0-4) | required |
async
Get current normals estimation radius.
async
Set maximum allowed inaccuracy for point filtering.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| value | float | Maximum inaccuracy (typically 0-100) | required |
async
Enable/disable hole filling in point cloud.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| enabled | bool | Whether to enable hole filling | required |
async
Enable/disable filtering to calibration volume only.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| enabled | bool | Whether to filter to calibration volume | required |
async
Get calibration volume filtering state.
async
Set trigger mode.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| mode | str | Trigger mode ('Software', 'Hardware', 'Continuous') | required |
async
Enable/disable hardware trigger.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| enabled | bool | Whether to enable hardware trigger | required |
async
Set hardware trigger signal edge.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| signal | str | Trigger signal edge ('Falling', 'Rising', 'Both') | required |
async
Get current hardware trigger signal setting.
async
Set maximum frames per second.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| fps | float | Maximum FPS (typically 0-100) | required |
core
Core 3D scanner interfaces and models.
AsyncScanner3D
Bases: Mindtrace
Async 3D scanner interface.
Provides high-level 3D scanning operations including multi-component capture and point cloud generation.
Usage
    scanner = await AsyncScanner3D.open()
    result = await scanner.capture()
    print(result.range_shape)
    await scanner.close()
Initialize async 3D scanner.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| backend | | Backend instance (e.g., PhotoneoBackend) | required |
async
classmethod
Open and initialize a 3D scanner.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| name | Optional[str] | Scanner identifier in "Backend:serial_number" format. Supported backends: "Photoneo", "MockPhotoneo". If None, opens first available Photoneo scanner. | None |

Returns:
| Type | Description |
|---|---|
| 'AsyncScanner3D' | Initialized AsyncScanner3D instance |

Raises:
| Type | Description |
|---|---|
| CameraNotFoundError | If scanner not found |
| CameraConnectionError | If connection fails |
Examples:
async
capture(
timeout_ms: int = 10000,
enable_range: bool = True,
enable_intensity: bool = True,
enable_confidence: bool = False,
enable_normal: bool = False,
enable_color: bool = False,
) -> ScanResult
Capture multi-component 3D scan data.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| timeout_ms | int | Capture timeout in milliseconds | 10000 |
| enable_range | bool | Whether to capture range/depth data | True |
| enable_intensity | bool | Whether to capture intensity data | True |
| enable_confidence | bool | Whether to capture confidence data | False |
| enable_normal | bool | Whether to capture surface normals | False |
| enable_color | bool | Whether to capture color texture | False |

Returns:
| Type | Description |
|---|---|
| ScanResult | ScanResult containing captured data |

Raises:
| Type | Description |
|---|---|
| CameraConnectionError | If scanner not opened |
| CameraCaptureError | If capture fails |
Examples:
async
capture_point_cloud(
include_colors: bool = True,
include_confidence: bool = False,
downsample_factor: int = 1,
timeout_ms: int = 10000,
) -> PointCloudData
Capture and generate 3D point cloud.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| include_colors | bool | Whether to include color information | True |
| include_confidence | bool | Whether to include confidence values | False |
| downsample_factor | int | Downsampling factor (1 = no downsampling) | 1 |
| timeout_ms | int | Capture timeout in milliseconds | 10000 |

Returns:
| Type | Description |
|---|---|
| PointCloudData | PointCloudData with 3D points and optional attributes |

Raises:
| Type | Description |
|---|---|
| CameraConnectionError | If scanner not opened |
| CameraCaptureError | If capture fails |
Examples:
async
Get scanner capabilities and available settings.
Returns:
| Type | Description |
|---|---|
| ScannerCapabilities | ScannerCapabilities with available options and ranges |
Examples:
async
Get current scanner configuration.
Returns:
| Type | Description |
|---|---|
| ScannerConfiguration | ScannerConfiguration with current settings |
Examples:
async
Apply scanner configuration.
Only non-None values in the configuration will be applied.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| config | ScannerConfiguration | Configuration to apply | required |
Examples:
async
async
async
CameraSpace
Bases: Enum
Coordinate system reference camera.
CodingQuality
Bases: Enum
Scan quality/speed tradeoff.
CodingStrategy
Bases: Enum
Structured light coding strategy.
CoordinateMap
dataclass
CoordinateMap(
x_map: Optional[ndarray] = None,
y_map: Optional[ndarray] = None,
width: int = 0,
height: int = 0,
scale: float = 1.0,
offset: float = 0.0,
is_valid: bool = False,
)
Coordinate map for efficient point cloud generation.
Photoneo devices can provide pre-computed coordinate maps that allow efficient conversion from range-only data to full 3D point clouds. This enables faster transfers (only Z data) with local point cloud computation.
Attributes:
| Name | Type | Description |
|---|---|---|
| x_map | Optional[ndarray] | X coordinate map (H, W) - multiply by range to get X |
| y_map | Optional[ndarray] | Y coordinate map (H, W) - multiply by range to get Y |
| width | int | Map width in pixels |
| height | int | Map height in pixels |
| scale | float | Coordinate scale factor |
| offset | float | Coordinate offset |
| is_valid | bool | Whether the map has been initialized |
classmethod
from_projected_c(
projected_c: ndarray,
width: int,
height: int,
scale: float = 1.0,
offset: float = 0.0,
) -> "CoordinateMap"
Create coordinate map from Photoneo ProjectedC component.
The ProjectedC component contains pre-computed X,Y coordinates that can be cached and reused for faster point cloud generation.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| projected_c | ndarray | ProjectedC data from Photoneo (H, W, 3) float32 | required |
| width | int | Image width | required |
| height | int | Image height | required |
| scale | float | Coordinate scale factor | 1.0 |
| offset | float | Coordinate offset | 0.0 |

Returns:
| Type | Description |
|---|---|
| 'CoordinateMap' | CoordinateMap instance |
Compute 3D point cloud from range map using cached coordinates.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| range_map | ndarray | Depth/range map (H, W) | required |
| valid_mask | Optional[ndarray] | Optional mask of valid pixels | None |

Returns:
| Type | Description |
|---|---|
| ndarray | Point cloud array (N, 3) with X, Y, Z coordinates |
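Per the attribute docs above (X = x_map * range, Y = y_map * range), the cached-coordinate computation reduces to elementwise multiplication. A minimal numpy sketch of the idea; the `compute_points` helper is illustrative, not the class's actual method:

```python
import numpy as np


def compute_points(
    x_map: np.ndarray, y_map: np.ndarray, range_map: np.ndarray
) -> np.ndarray:
    """Multiply a (H, W) range map by cached X/Y maps to get (N, 3) points."""
    z = range_map.astype(np.float64)
    valid = z > 0
    # X = x_map * range, Y = y_map * range, per the CoordinateMap attribute docs.
    x = x_map * z
    y = y_map * z
    return np.stack([x[valid], y[valid], z[valid]], axis=1)


x_map = np.array([[0.1, 0.2]])
y_map = np.array([[0.3, 0.4]])
range_map = np.array([[2.0, 0.0]])  # second pixel has no measurement
points = compute_points(x_map, y_map, range_map)
```

This is why transferring only the Z component plus a cached map is cheap: the X/Y reconstruction is two multiplications per pixel on the client side.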
HardwareTriggerSignal
Bases: Enum
Hardware trigger signal edge.
OperationMode
Bases: Enum
Scanner operation mode.
OutputTopology
Bases: Enum
Point cloud output topology.
PointCloudData
dataclass
PointCloudData(
points: ndarray,
colors: Optional[ndarray] = None,
normals: Optional[ndarray] = None,
confidence: Optional[ndarray] = None,
num_points: int = 0,
has_colors: bool = False,
)
3D point cloud data with optional attributes.
Attributes:
| Name | Type | Description |
|---|---|---|
| points | ndarray | Array of 3D points (N, 3) - (x, y, z) in meters |
| colors | Optional[ndarray] | Optional RGB colors (N, 3) - values in [0, 1] |
| normals | Optional[ndarray] | Optional surface normals (N, 3) - unit vectors |
| confidence | Optional[ndarray] | Optional per-point confidence (N,) |
| num_points | int | Number of valid points |
| has_colors | bool | Flag indicating if color information is present |
Save point cloud as PLY file.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| path | str | Output file path | required |
| binary | bool | If True, save in binary format; otherwise ASCII | True |

Raises:
| Type | Description |
|---|---|
| ImportError | If plyfile is not installed |
Downsample point cloud by given factor.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| factor | int | Downsampling factor (e.g., 2 = keep every 2nd point) | required |

Returns:
| Type | Description |
|---|---|
| 'PointCloudData' | New PointCloudData with downsampled data |
Filter points by minimum confidence threshold.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| min_confidence | float | Minimum confidence value (0.0 to 1.0) | required |

Returns:
| Type | Description |
|---|---|
| 'PointCloudData' | New PointCloudData with filtered points |

Raises:
| Type | Description |
|---|---|
| ValueError | If no confidence data available |
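Both operations above reduce to simple array indexing over the (N, 3) points array. A generic numpy sketch under that assumption; the helper names are illustrative, not the dataclass's actual methods:

```python
import numpy as np


def downsample(points: np.ndarray, factor: int) -> np.ndarray:
    """Keep every factor-th point (factor=1 keeps everything)."""
    return points[::factor]


def filter_by_confidence(
    points: np.ndarray, confidence: np.ndarray, min_confidence: float
) -> np.ndarray:
    """Keep only points whose confidence meets the threshold."""
    return points[confidence >= min_confidence]


points = np.arange(12, dtype=float).reshape(4, 3)  # 4 points
confidence = np.array([0.9, 0.2, 0.8, 0.1])

halved = downsample(points, 2)
trusted = filter_by_confidence(points, confidence, 0.5)
```

In the real API both methods return a new PointCloudData (applying the same mask to colors, normals, and confidence) rather than a bare array.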
ScanComponent
Bases: Enum
Available scan components from 3D scanners.
ScannerCapabilities
dataclass
ScannerCapabilities(
has_range: bool = True,
has_intensity: bool = False,
has_confidence: bool = False,
has_normal: bool = False,
has_color: bool = False,
operation_modes: list = list(),
coding_strategies: list = list(),
coding_qualities: list = list(),
texture_sources: list = list(),
output_topologies: list = list(),
exposure_range: Optional[tuple] = None,
led_power_range: Optional[tuple] = None,
laser_power_range: Optional[tuple] = None,
fps_range: Optional[tuple] = None,
depth_resolution: Optional[tuple] = None,
color_resolution: Optional[tuple] = None,
model: str = "",
serial_number: str = "",
firmware_version: str = "",
)
Describes the capabilities of a 3D scanner.
Used to query what features and settings are available on a specific scanner.
ScannerConfiguration
dataclass
ScannerConfiguration(
operation_mode: Optional[OperationMode] = None,
coding_strategy: Optional[CodingStrategy] = None,
coding_quality: Optional[CodingQuality] = None,
maximum_fps: Optional[float] = None,
exposure_time: Optional[float] = None,
single_pattern_exposure: Optional[float] = None,
shutter_multiplier: Optional[int] = None,
scan_multiplier: Optional[int] = None,
color_exposure: Optional[float] = None,
led_power: Optional[int] = None,
laser_power: Optional[int] = None,
texture_source: Optional[TextureSource] = None,
camera_texture_source: Optional[TextureSource] = None,
output_topology: Optional[OutputTopology] = None,
camera_space: Optional[CameraSpace] = None,
normals_estimation_radius: Optional[int] = None,
max_inaccuracy: Optional[float] = None,
calibration_volume_only: Optional[bool] = None,
hole_filling: Optional[bool] = None,
trigger_mode: Optional[TriggerMode] = None,
hardware_trigger: Optional[bool] = None,
hardware_trigger_signal: Optional[HardwareTriggerSignal] = None,
)
Configuration settings for 3D scanners.
Groups all configurable parameters for structured light scanners. Not all parameters may be available on all scanner models.
ScanResult
dataclass
ScanResult(
range_map: Optional[ndarray] = None,
intensity: Optional[ndarray] = None,
confidence: Optional[ndarray] = None,
normal_map: Optional[ndarray] = None,
color: Optional[ndarray] = None,
timestamp: float = 0.0,
frame_number: int = 0,
components_enabled: Dict[ScanComponent, bool] = dict(),
metadata: Dict[str, Union[str, int, float]] = dict(),
)
Result from 3D scanner capture containing multi-component data.
Attributes:
| Name | Type | Description |
|---|---|---|
| range_map | Optional[ndarray] | Depth/range map - typically uint16 or float32 (H, W) |
| intensity | Optional[ndarray] | Intensity image - uint8 or uint16 (H, W) or (H, W, 3) |
| confidence | Optional[ndarray] | Confidence map - uint8 or uint16 (H, W), values indicate quality |
| normal_map | Optional[ndarray] | Surface normals - float32 (H, W, 3), xyz components |
| color | Optional[ndarray] | Color texture - uint8 (H, W, 3) RGB |
| timestamp | float | Capture timestamp in seconds (from device or system) |
| frame_number | int | Sequential frame number |
| components_enabled | Dict[ScanComponent, bool] | Dict of which components were captured |
| metadata | Dict[str, Union[str, int, float]] | Additional scan metadata (exposure, gain, etc.) |
Get mask of valid pixels based on range and confidence.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| min_confidence | int | Minimum confidence threshold (0-255 typical) | 0 |

Returns:
| Type | Description |
|---|---|
| ndarray | Boolean mask (H, W) where True indicates valid pixel |
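A plausible sketch of such a mask, assuming a pixel is valid when it has a nonzero range measurement and its confidence meets the threshold (an assumption about the implementation, not the actual code):

```python
import numpy as np


def valid_mask(
    range_map: np.ndarray, confidence: np.ndarray, min_confidence: int = 0
) -> np.ndarray:
    """Boolean (H, W) mask: pixel has a range measurement and enough confidence."""
    return (range_map > 0) & (confidence >= min_confidence)


mask = valid_mask(
    np.array([[0, 5], [3, 7]]),
    np.array([[200, 10], [50, 255]]),
    min_confidence=40,
)
```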
TextureSource
Bases: Enum
Source for texture/intensity data.
TriggerMode
Bases: Enum
Acquisition trigger mode.
Scanner3D
Scanner3D(
async_scanner: Optional[AsyncScanner3D] = None,
loop: Optional[AbstractEventLoop] = None,
name: Optional[str] = None,
**kwargs
)
Bases: Mindtrace
Synchronous wrapper around AsyncScanner3D.
All operations are executed on a background event loop. This provides a simple synchronous API for 3D scanner operations.
Usage
    scanner = Scanner3D()
    result = scanner.capture()
    print(result.range_shape)
    scanner.close()

Or with context manager:

    with Scanner3D() as scanner:
        result = scanner.capture()
        print(result.range_shape)
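The background-event-loop approach described above can be sketched generically: a daemon thread runs its own loop, and sync methods submit coroutines to it with run_coroutine_threadsafe. The `SyncWrapper` class here is illustrative, not the module's actual Scanner3D:

```python
import asyncio
import threading


class SyncWrapper:
    """Minimal sketch: run async work on a dedicated background event loop."""

    def __init__(self) -> None:
        self._loop = asyncio.new_event_loop()
        self._thread = threading.Thread(target=self._loop.run_forever, daemon=True)
        self._thread.start()

    def _run(self, coro):
        # Submit a coroutine to the background loop and block for the result.
        return asyncio.run_coroutine_threadsafe(coro, self._loop).result()

    def capture(self) -> str:
        return self._run(self._async_capture())

    async def _async_capture(self) -> str:
        await asyncio.sleep(0)  # stand-in for real async scanner work
        return "scan"

    def close(self) -> None:
        # Stop the loop from its own thread, then join the worker.
        self._loop.call_soon_threadsafe(self._loop.stop)
        self._thread.join()


wrapper = SyncWrapper()
result = wrapper.capture()
wrapper.close()
```

This design lets callers use plain blocking calls while all scanner I/O still runs through the async backend on one loop.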
Create a synchronous 3D scanner wrapper.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| async_scanner | Optional[AsyncScanner3D] | Existing AsyncScanner3D instance | None |
| loop | Optional[AbstractEventLoop] | Event loop to use for async operations | None |
| name | Optional[str] | Scanner identifier in "Photoneo:serial_number" format. If None, opens first available scanner. | None |
| **kwargs | | Additional arguments passed to Mindtrace | {} |
Examples:
>>> # Use existing async scanner
>>> async_scan = await AsyncScanner3D.open()
>>> sync_scan = Scanner3D(async_scanner=async_scan, loop=loop)
property
Get scanner name.
Returns:
| Type | Description |
|---|---|
| str | Scanner name in format "Backend:serial_number" |
property
Check if scanner is open.
Returns:
| Type | Description |
|---|---|
| bool | True if scanner is open, False otherwise |
capture(
timeout_ms: int = 10000,
enable_range: bool = True,
enable_intensity: bool = True,
enable_confidence: bool = False,
enable_normal: bool = False,
enable_color: bool = False,
) -> ScanResult
Capture multi-component 3D scan data.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| timeout_ms | int | Capture timeout in milliseconds | 10000 |
| enable_range | bool | Whether to capture range/depth data | True |
| enable_intensity | bool | Whether to capture intensity data | True |
| enable_confidence | bool | Whether to capture confidence data | False |
| enable_normal | bool | Whether to capture surface normals | False |
| enable_color | bool | Whether to capture color texture | False |

Returns:
| Type | Description |
|---|---|
| ScanResult | ScanResult containing captured data |

Raises:
| Type | Description |
|---|---|
| CameraConnectionError | If scanner not opened |
| CameraCaptureError | If capture fails |
Examples:
capture_point_cloud(
include_colors: bool = True,
include_confidence: bool = False,
downsample_factor: int = 1,
timeout_ms: int = 10000,
) -> PointCloudData
Capture and generate 3D point cloud.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| include_colors | bool | Whether to include color information | True |
| include_confidence | bool | Whether to include confidence values | False |
| downsample_factor | int | Downsampling factor (1 = no downsampling) | 1 |
| timeout_ms | int | Capture timeout in milliseconds | 10000 |

Returns:
| Type | Description |
|---|---|
| PointCloudData | PointCloudData with 3D points and optional attributes |

Raises:
| Type | Description |
|---|---|
| CameraConnectionError | If scanner not opened |
| CameraCaptureError | If capture fails |
Examples:
Set exposure time in microseconds.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| microseconds | float | Exposure time in microseconds (e.g., 5000 = 5ms) | required |

Raises:
| Type | Description |
|---|---|
| CameraConfigurationError | If configuration fails |
Examples:
Get current exposure time in microseconds.
Returns:
| Type | Description |
|---|---|
| float | Current exposure time in microseconds |

Raises:
| Type | Description |
|---|---|
| CameraConnectionError | If scanner not opened |
Examples:
async_scanner_3d
Async 3D scanner interface providing high-level scanning operations.
Bases: Mindtrace
Async 3D scanner interface.
Provides high-level 3D scanning operations including multi-component capture and point cloud generation.
Usage
    scanner = await AsyncScanner3D.open()
    result = await scanner.capture()
    print(result.range_shape)
    await scanner.close()
Initialize async 3D scanner.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| backend | | Backend instance (e.g., PhotoneoBackend) | required |
async
classmethod
Open and initialize a 3D scanner.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| name | Optional[str] | Scanner identifier in "Backend:serial_number" format. Supported backends: "Photoneo", "MockPhotoneo". If None, opens first available Photoneo scanner. | None |

Returns:
| Type | Description |
|---|---|
| 'AsyncScanner3D' | Initialized AsyncScanner3D instance |

Raises:
| Type | Description |
|---|---|
| CameraNotFoundError | If scanner not found |
| CameraConnectionError | If connection fails |
Examples:
async
capture(
timeout_ms: int = 10000,
enable_range: bool = True,
enable_intensity: bool = True,
enable_confidence: bool = False,
enable_normal: bool = False,
enable_color: bool = False,
) -> ScanResult
Capture multi-component 3D scan data.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| timeout_ms | int | Capture timeout in milliseconds | 10000 |
| enable_range | bool | Whether to capture range/depth data | True |
| enable_intensity | bool | Whether to capture intensity data | True |
| enable_confidence | bool | Whether to capture confidence data | False |
| enable_normal | bool | Whether to capture surface normals | False |
| enable_color | bool | Whether to capture color texture | False |

Returns:
| Type | Description |
|---|---|
| ScanResult | ScanResult containing captured data |

Raises:
| Type | Description |
|---|---|
| CameraConnectionError | If scanner not opened |
| CameraCaptureError | If capture fails |
Examples:
async
capture_point_cloud(
include_colors: bool = True,
include_confidence: bool = False,
downsample_factor: int = 1,
timeout_ms: int = 10000,
) -> PointCloudData
Capture and generate 3D point cloud.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| include_colors | bool | Whether to include color information | True |
| include_confidence | bool | Whether to include confidence values | False |
| downsample_factor | int | Downsampling factor (1 = no downsampling) | 1 |
| timeout_ms | int | Capture timeout in milliseconds | 10000 |

Returns:
| Type | Description |
|---|---|
| PointCloudData | PointCloudData with 3D points and optional attributes |

Raises:
| Type | Description |
|---|---|
| CameraConnectionError | If scanner not opened |
| CameraCaptureError | If capture fails |
Examples:
async
Get scanner capabilities and available settings.
Returns:
| Type | Description |
|---|---|
| ScannerCapabilities | ScannerCapabilities with available options and ranges |
Examples:
async
Get current scanner configuration.
Returns:
| Type | Description |
|---|---|
| ScannerConfiguration | ScannerConfiguration with current settings |
Examples:
async
Apply scanner configuration.
Only non-None values in the configuration will be applied.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| config | ScannerConfiguration | Configuration to apply | required |
Examples:
async
async
async
models
Data models for 3D scanner operations.
This module provides data structures for handling 3D scanner data including multi-component scan results, coordinate maps, and point clouds.
Designed for structured light scanners like Photoneo PhoXi, but extensible for other 3D scanning technologies (ToF, LiDAR, etc.).
Bases: Enum
Available scan components from 3D scanners.
Bases: Enum
Scanner operation mode.
Bases: Enum
Structured light coding strategy.
Bases: Enum
Scan quality/speed tradeoff.
Bases: Enum
Source for texture/intensity data.
Bases: Enum
Point cloud output topology.
Bases: Enum
Coordinate system reference camera.
Bases: Enum
Acquisition trigger mode.
Bases: Enum
Hardware trigger signal edge.
dataclass
ScannerConfiguration(
operation_mode: Optional[OperationMode] = None,
coding_strategy: Optional[CodingStrategy] = None,
coding_quality: Optional[CodingQuality] = None,
maximum_fps: Optional[float] = None,
exposure_time: Optional[float] = None,
single_pattern_exposure: Optional[float] = None,
shutter_multiplier: Optional[int] = None,
scan_multiplier: Optional[int] = None,
color_exposure: Optional[float] = None,
led_power: Optional[int] = None,
laser_power: Optional[int] = None,
texture_source: Optional[TextureSource] = None,
camera_texture_source: Optional[TextureSource] = None,
output_topology: Optional[OutputTopology] = None,
camera_space: Optional[CameraSpace] = None,
normals_estimation_radius: Optional[int] = None,
max_inaccuracy: Optional[float] = None,
calibration_volume_only: Optional[bool] = None,
hole_filling: Optional[bool] = None,
trigger_mode: Optional[TriggerMode] = None,
hardware_trigger: Optional[bool] = None,
hardware_trigger_signal: Optional[HardwareTriggerSignal] = None,
)
Configuration settings for 3D scanners.
Groups all configurable parameters for structured light scanners. Not all parameters may be available on all scanner models.
dataclass
ScannerCapabilities(
has_range: bool = True,
has_intensity: bool = False,
has_confidence: bool = False,
has_normal: bool = False,
has_color: bool = False,
operation_modes: list = list(),
coding_strategies: list = list(),
coding_qualities: list = list(),
texture_sources: list = list(),
output_topologies: list = list(),
exposure_range: Optional[tuple] = None,
led_power_range: Optional[tuple] = None,
laser_power_range: Optional[tuple] = None,
fps_range: Optional[tuple] = None,
depth_resolution: Optional[tuple] = None,
color_resolution: Optional[tuple] = None,
model: str = "",
serial_number: str = "",
firmware_version: str = "",
)
Describes the capabilities of a 3D scanner.
Used to query what features and settings are available on a specific scanner.
dataclass
ScanResult(
range_map: Optional[ndarray] = None,
intensity: Optional[ndarray] = None,
confidence: Optional[ndarray] = None,
normal_map: Optional[ndarray] = None,
color: Optional[ndarray] = None,
timestamp: float = 0.0,
frame_number: int = 0,
components_enabled: Dict[ScanComponent, bool] = dict(),
metadata: Dict[str, Union[str, int, float]] = dict(),
)
Result from 3D scanner capture containing multi-component data.
Attributes:

| Name | Type | Description |
|---|---|---|
| range_map | Optional[ndarray] | Depth/range map - typically uint16 or float32 (H, W) |
| intensity | Optional[ndarray] | Intensity image - uint8 or uint16 (H, W) or (H, W, 3) |
| confidence | Optional[ndarray] | Confidence map - uint8 or uint16 (H, W), values indicate quality |
| normal_map | Optional[ndarray] | Surface normals - float32 (H, W, 3), xyz components |
| color | Optional[ndarray] | Color texture - uint8 (H, W, 3) RGB |
| timestamp | float | Capture timestamp in seconds (from device or system) |
| frame_number | int | Sequential frame number |
| components_enabled | Dict[ScanComponent, bool] | Dict of which components were captured |
| metadata | Dict[str, Union[str, int, float]] | Additional scan metadata (exposure, gain, etc.) |
Get mask of valid pixels based on range and confidence.

Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| min_confidence | int | Minimum confidence threshold (0-255 typical) | 0 |

Returns:

| Type | Description |
|---|---|
| ndarray | Boolean mask (H, W) where True indicates valid pixel |
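Conceptually, the mask combines a nonzero-range check with the confidence threshold. A self-contained NumPy sketch of that logic (standalone function for illustration, not the actual method):

```python
from typing import Optional

import numpy as np


def valid_mask(range_map: np.ndarray,
               confidence: Optional[np.ndarray] = None,
               min_confidence: int = 0) -> np.ndarray:
    # Zero range typically marks an invalid pixel.
    mask = range_map > 0
    if confidence is not None:
        mask &= confidence >= min_confidence
    return mask


rng = np.array([[0, 100], [250, 300]], dtype=np.uint16)
conf = np.array([[255, 10], [200, 240]], dtype=np.uint8)
print(valid_mask(rng, conf, min_confidence=50))  # two of the four pixels pass both checks
```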
dataclass
CoordinateMap(
x_map: Optional[ndarray] = None,
y_map: Optional[ndarray] = None,
width: int = 0,
height: int = 0,
scale: float = 1.0,
offset: float = 0.0,
is_valid: bool = False,
)
Coordinate map for efficient point cloud generation.
Photoneo devices can provide pre-computed coordinate maps that allow efficient conversion from range-only data to full 3D point clouds. This enables faster transfers (only Z data) with local point cloud computation.
Attributes:

| Name | Type | Description |
|---|---|---|
| x_map | Optional[ndarray] | X coordinate map (H, W) - multiply by range to get X |
| y_map | Optional[ndarray] | Y coordinate map (H, W) - multiply by range to get Y |
| width | int | Map width in pixels |
| height | int | Map height in pixels |
| scale | float | Coordinate scale factor |
| offset | float | Coordinate offset |
| is_valid | bool | Whether the map has been initialized |
classmethod
from_projected_c(
projected_c: ndarray,
width: int,
height: int,
scale: float = 1.0,
offset: float = 0.0,
) -> "CoordinateMap"
Create coordinate map from Photoneo ProjectedC component.
The ProjectedC component contains pre-computed X,Y coordinates that can be cached and reused for faster point cloud generation.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| projected_c | ndarray | ProjectedC data from Photoneo (H, W, 3) float32 | required |
| width | int | Image width | required |
| height | int | Image height | required |
| scale | float | Coordinate scale factor | 1.0 |
| offset | float | Coordinate offset | 0.0 |

Returns:

| Type | Description |
|---|---|
| 'CoordinateMap' | CoordinateMap instance |
Compute 3D point cloud from range map using cached coordinates.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| range_map | ndarray | Depth/range map (H, W) | required |
| valid_mask | Optional[ndarray] | Optional mask of valid pixels | None |

Returns:

| Type | Description |
|---|---|
| ndarray | Point cloud array (N, 3) with X, Y, Z coordinates |
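The conversion described above multiplies the cached per-pixel factors by the range value (X = x_map · Z, Y = y_map · Z, Z = range). A standalone NumPy sketch of that computation, with illustrative names rather than the actual API:

```python
import numpy as np


def point_cloud_from_range(range_map, x_map, y_map, valid_mask=None):
    """Convert a range map to an (N, 3) point array using cached coordinate maps."""
    if valid_mask is None:
        valid_mask = range_map > 0  # treat zero range as invalid
    z = range_map[valid_mask].astype(np.float32)
    x = x_map[valid_mask] * z  # per-pixel X factor times range
    y = y_map[valid_mask] * z  # per-pixel Y factor times range
    return np.stack([x, y, z], axis=-1)


range_map = np.array([[0.0, 2.0], [1.0, 4.0]], dtype=np.float32)
x_map = np.full((2, 2), 0.5, dtype=np.float32)
y_map = np.full((2, 2), -0.5, dtype=np.float32)
cloud = point_cloud_from_range(range_map, x_map, y_map)
print(cloud.shape)  # (3, 3): three valid pixels, xyz columns
```

Only the Z values need to be transferred from the device; X and Y are recovered locally, which is exactly why the cached maps make transfers faster.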
dataclass
PointCloudData(
points: ndarray,
colors: Optional[ndarray] = None,
normals: Optional[ndarray] = None,
confidence: Optional[ndarray] = None,
num_points: int = 0,
has_colors: bool = False,
)
3D point cloud data with optional attributes.
Attributes:

| Name | Type | Description |
|---|---|---|
| points | ndarray | Array of 3D points (N, 3) - (x, y, z) in meters |
| colors | Optional[ndarray] | Optional RGB colors (N, 3) - values in [0, 1] |
| normals | Optional[ndarray] | Optional surface normals (N, 3) - unit vectors |
| confidence | Optional[ndarray] | Optional per-point confidence (N,) |
| num_points | int | Number of valid points |
| has_colors | bool | Flag indicating if color information is present |
Save point cloud as PLY file.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| path | str | Output file path | required |
| binary | bool | If True, save in binary format; otherwise ASCII | True |

Raises:

| Type | Description |
|---|---|
| ImportError | If plyfile is not installed |
Downsample point cloud by given factor.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| factor | int | Downsampling factor (e.g., 2 = keep every 2nd point) | required |

Returns:

| Type | Description |
|---|---|
| 'PointCloudData' | New PointCloudData with downsampled data |
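A stride-based downsample like the one described here can be sketched in NumPy (illustrative standalone function, not the actual method):

```python
import numpy as np


def downsample_points(points: np.ndarray, factor: int) -> np.ndarray:
    """Keep every `factor`-th point; factor=1 is a no-op."""
    if factor < 1:
        raise ValueError("factor must be >= 1")
    return points[::factor]


pts = np.arange(30, dtype=np.float32).reshape(10, 3)  # 10 points, xyz columns
print(downsample_points(pts, 2).shape)  # (5, 3)
```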
Filter points by minimum confidence threshold.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| min_confidence | float | Minimum confidence value (0.0 to 1.0) | required |

Returns:

| Type | Description |
|---|---|
| 'PointCloudData' | New PointCloudData with filtered points |

Raises:

| Type | Description |
|---|---|
| ValueError | If no confidence data available |
scanner_3d
Synchronous 3D scanner interface.
This module provides a synchronous wrapper around AsyncScanner3D, following the same pattern as the StereoCamera class.
Scanner3D(
async_scanner: Optional[AsyncScanner3D] = None,
loop: Optional[AbstractEventLoop] = None,
name: Optional[str] = None,
**kwargs
)
Bases: Mindtrace
Synchronous wrapper around AsyncScanner3D.
All operations are executed on a background event loop. This provides a simple synchronous API for 3D scanner operations.
Usage
scanner = Scanner3D()
result = scanner.capture()
print(result.range_shape)
scanner.close()

Or with context manager:
with Scanner3D() as scanner:
    result = scanner.capture()
    print(result.range_shape)
Create a synchronous 3D scanner wrapper.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| async_scanner | Optional[AsyncScanner3D] | Existing AsyncScanner3D instance | None |
| loop | Optional[AbstractEventLoop] | Event loop to use for async operations | None |
| name | Optional[str] | Scanner identifier. Format: "Photoneo:serial_number". If None, opens first available scanner. | None |
| **kwargs | | Additional arguments passed to Mindtrace | {} |
Examples:
>>> # Use existing async scanner
>>> async_scan = await AsyncScanner3D.open()
>>> sync_scan = Scanner3D(async_scanner=async_scan, loop=loop)
property
Get scanner name.
Returns:

| Type | Description |
|---|---|
| str | Scanner name in format "Backend:serial_number" |
property
Check if scanner is open.
Returns:

| Type | Description |
|---|---|
| bool | True if scanner is open, False otherwise |
capture(
timeout_ms: int = 10000,
enable_range: bool = True,
enable_intensity: bool = True,
enable_confidence: bool = False,
enable_normal: bool = False,
enable_color: bool = False,
) -> ScanResult
Capture multi-component 3D scan data.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| timeout_ms | int | Capture timeout in milliseconds | 10000 |
| enable_range | bool | Whether to capture range/depth data | True |
| enable_intensity | bool | Whether to capture intensity data | True |
| enable_confidence | bool | Whether to capture confidence data | False |
| enable_normal | bool | Whether to capture surface normals | False |
| enable_color | bool | Whether to capture color texture | False |

Returns:

| Type | Description |
|---|---|
| ScanResult | ScanResult containing captured data |

Raises:

| Type | Description |
|---|---|
| CameraConnectionError | If scanner not opened |
| CameraCaptureError | If capture fails |
Examples:
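A minimal capture sketch (requires a reachable Photoneo scanner; the serial number below is a placeholder, and the import path for Scanner3D depends on your installation):

```python
# Serial number "XXXXXX" is a placeholder for your device.
with Scanner3D(name="Photoneo:XXXXXX") as scanner:
    result = scanner.capture(
        timeout_ms=10000,
        enable_range=True,
        enable_intensity=True,
        enable_confidence=True,
    )
    # Components not enabled above come back as None in the ScanResult.
    print(result.frame_number, result.timestamp)
```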
capture_point_cloud(
include_colors: bool = True,
include_confidence: bool = False,
downsample_factor: int = 1,
timeout_ms: int = 10000,
) -> PointCloudData
Capture and generate 3D point cloud.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| include_colors | bool | Whether to include color information | True |
| include_confidence | bool | Whether to include confidence values | False |
| downsample_factor | int | Downsampling factor (1 = no downsampling) | 1 |
| timeout_ms | int | Capture timeout in milliseconds | 10000 |

Returns:

| Type | Description |
|---|---|
| PointCloudData | PointCloudData with 3D points and optional attributes |

Raises:

| Type | Description |
|---|---|
| CameraConnectionError | If scanner not opened |
| CameraCaptureError | If capture fails |
Examples:
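A point cloud capture sketch under the same assumptions (connected scanner, placeholder serial number):

```python
# Serial number "XXXXXX" is a placeholder for your device.
with Scanner3D(name="Photoneo:XXXXXX") as scanner:
    cloud = scanner.capture_point_cloud(
        include_colors=True,
        downsample_factor=2,  # keep every 2nd point
    )
    print(cloud.num_points, cloud.has_colors)
```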
Set exposure time in microseconds.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| microseconds | float | Exposure time in microseconds (e.g., 5000 = 5ms) | required |

Raises:

| Type | Description |
|---|---|
| CameraConfigurationError | If configuration fails |
Examples:
Get current exposure time in microseconds.
Returns:

| Type | Description |
|---|---|
| float | Current exposure time in microseconds |

Raises:

| Type | Description |
|---|---|
| CameraConnectionError | If scanner not opened |
Examples:
setup
Setup utilities for 3D scanners.
PhotoneoSetup
Bases: Mindtrace
Photoneo 3D scanner SDK setup and verification.
This class handles the installation of the Matrix Vision mvGenTL Producer required for Photoneo 3D scanner communication via GigE Vision.
Based on Photoneo's official recommendations: https://github.com/photoneo-3d/photoneo-python-examples
Initialize Photoneo setup.
Find the CTI file on this system.
Searches environment variable first, then platform-specific known paths.
Returns:

| Type | Description |
|---|---|
| str | Path to CTI file if found, empty string otherwise |

Verify that the CTI file is properly installed.

Returns:

| Type | Description |
|---|---|
| bool | True if CTI file exists and is accessible |

Verify GENICAM_GENTL64_PATH is set correctly.

Returns:

| Type | Description |
|---|---|
| bool | True if environment variable is properly configured |

Verify that Harvesters library is available.

Returns:

| Type | Description |
|---|---|
| bool | True if Harvesters is importable |

Discover Photoneo devices on the network.

Returns:

| Type | Description |
|---|---|
| List[dict] | List of discovered Photoneo devices with their info |

Install the Matrix Vision mvGenTL Producer.

Returns:

| Type | Description |
|---|---|
| bool | True if installation successful |

Uninstall the Matrix Vision mvGenTL Producer.

Returns:

| Type | Description |
|---|---|
| bool | True if uninstallation successful |
setup_photoneo
Photoneo 3D Scanner SDK Setup Script
This script automates the setup of the Photoneo 3D scanner environment. Photoneo scanners use GigE Vision protocol and require the Matrix Vision mvGenTL Producer for communication via Harvesters.
Based on: https://github.com/photoneo-3d/photoneo-python-examples
Supports: Linux (x86_64, aarch64), Windows (x64), macOS (ARM64, x86_64)
Requirements:
- Matrix Vision mvGenTL Producer (version 2.49.0 recommended)
- Harvesters library: pip install harvesters
- PhoXi firmware version 1.13.0 or later
Usage
mindtrace-scanner-photoneo install    # Install Matrix Vision SDK
mindtrace-scanner-photoneo verify     # Verify installation
mindtrace-scanner-photoneo discover   # Test device discovery
mindtrace-scanner-photoneo uninstall  # Uninstall SDK
install(
verbose: bool = typer.Option(
False, "--verbose", "-v", help="Enable verbose logging"
)
) -> None
Install the Matrix Vision mvGenTL Producer for Photoneo scanners.
Downloads and installs mvGenTL Producer v2.49.0 as recommended by Photoneo. See: https://github.com/photoneo-3d/photoneo-python-examples
uninstall(
verbose: bool = typer.Option(
False, "--verbose", "-v", help="Enable verbose logging"
)
) -> None
Uninstall the Matrix Vision mvGenTL Producer.
verify(
verbose: bool = typer.Option(
False, "--verbose", "-v", help="Enable verbose logging"
)
) -> None
Verify Photoneo SDK installation and configuration.
sensors
MindTrace Hardware Sensor System.
A unified sensor system that abstracts different communication backends (MQTT, HTTP, Serial, Modbus) behind a simple AsyncSensor interface.
SensorBackend
Bases: ABC
Abstract base class for all sensor backends.
This interface abstracts different communication patterns:
- MQTT: Push-based (subscribe to topics, cache messages)
- HTTP: Pull-based (make requests on-demand)
- Serial: Pull-based (send commands, read responses)
- Modbus: Pull-based (read registers)
connect
abstractmethod
async
Establish connection to the backend.
Raises:

| Type | Description |
|---|---|
| ConnectionError | If connection fails |
disconnect
abstractmethod
async
Close connection to the backend.
Should be safe to call multiple times.
read_data
abstractmethod
async
Read sensor data from the specified address.
For different backends, 'address' means:
- MQTT: topic name (returns cached message)
- HTTP: endpoint path (makes GET request)
- Serial: sensor command (send command, read response)
- Modbus: register address (read holding registers)
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| address | str | Backend-specific address/identifier | required |

Returns:

| Type | Description |
|---|---|
| Optional[Dict[str, Any]] | Dictionary with sensor data, or None if no data available |

Raises:

| Type | Description |
|---|---|
| ConnectionError | If backend not connected |
| TimeoutError | If read operation times out |
| ValueError | If address is invalid |
HTTPSensorBackend
HTTPSensorBackend(
base_url: str,
auth_token: Optional[str] = None,
timeout: float = 30.0,
**kwargs
)
Bases: SensorBackend
HTTP backend for sensor communication (placeholder).
This backend will connect to REST APIs and make HTTP GET requests to read sensor data. It implements a pull-based pattern where we request data on-demand.
Future implementation will:
- Make HTTP GET requests to base_url + endpoint
- Handle authentication headers
- Parse JSON responses
- Implement timeout and retry logic
Initialize HTTP backend.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| base_url | str | Base URL for HTTP requests (e.g., "http://api.sensors.com") | required |
| auth_token | Optional[str] | Optional authentication token | None |
| timeout | float | Request timeout in seconds | 30.0 |
| **kwargs | | Additional HTTP client parameters | {} |
connect
async
Establish HTTP client connection.
Raises:

| Type | Description |
|---|---|
| NotImplementedError | HTTP backend not yet implemented |
read_data
async
Read sensor data via HTTP GET request.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| address | str | Endpoint path (e.g., "/sensors/temperature/current") | required |

Returns:

| Type | Description |
|---|---|
| Optional[Dict[str, Any]] | JSON response data, or None if request fails |

Raises:

| Type | Description |
|---|---|
| NotImplementedError | HTTP backend not yet implemented |
MQTTSensorBackend
MQTTSensorBackend(
broker_url: str,
identifier: Optional[str] = None,
username: Optional[str] = None,
password: Optional[str] = None,
keepalive: int = 60,
**kwargs
)
Bases: SensorBackend
MQTT backend for sensor communication.
This backend connects to an MQTT broker and subscribes to topics. Messages are cached when received, and read_data() returns the latest cached message.
This implements a push-based pattern where data comes to us, unlike HTTP/Serial which are pull-based where we request data on-demand.
Initialize MQTT backend.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| broker_url | str | MQTT broker URL (e.g., "mqtt://localhost:1883") | required |
| identifier | Optional[str] | MQTT client identifier (auto-generated if None) | None |
| username | Optional[str] | MQTT username (optional) | None |
| password | Optional[str] | MQTT password (optional) | None |
| keepalive | int | MQTT keepalive interval in seconds | 60 |
| **kwargs | | Additional MQTT client parameters | {} |
connect
async
Connect to MQTT broker.
Raises:

| Type | Description |
|---|---|
| ConnectionError | If connection to broker fails |
read_data
async
Read cached data from MQTT topic.
For MQTT, the address is the topic name. If we haven't subscribed to this topic yet, we'll subscribe and wait briefly for a message.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| address | str | MQTT topic name | required |

Returns:

| Type | Description |
|---|---|
| Optional[Dict[str, Any]] | Latest cached message for the topic, or None if no data available |

Raises:

| Type | Description |
|---|---|
| ConnectionError | If not connected to broker |
| ValueError | If topic name is invalid |
SerialSensorBackend
Bases: SensorBackend
Serial backend for sensor communication (placeholder).
This backend will connect to sensors via serial/USB ports and send commands to read sensor data. It implements a pull-based pattern where we send commands and read responses on-demand.
Future implementation will:
- Connect to serial ports (e.g., /dev/ttyUSB0, COM3)
- Send sensor commands and read responses
- Parse sensor data (JSON, CSV, or custom formats)
- Handle timeouts and communication errors
Initialize Serial backend.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| port | str | Serial port path (e.g., "/dev/ttyUSB0" or "COM3") | required |
| baudrate | int | Serial communication baudrate | 9600 |
| timeout | float | Communication timeout in seconds | 5.0 |
| **kwargs | | Additional serial parameters (parity, stopbits, etc.) | {} |
connect
async
Open serial port connection.
Raises:

| Type | Description |
|---|---|
| NotImplementedError | Serial backend not yet implemented |
read_data
async
Send command to sensor and read response.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| address | str | Sensor command (e.g., "READ_TEMP", "GET_HUMIDITY") | required |

Returns:

| Type | Description |
|---|---|
| Optional[Dict[str, Any]] | Parsed sensor response data, or None if command fails |

Raises:

| Type | Description |
|---|---|
| NotImplementedError | Serial backend not yet implemented |
SensorManager
Simple manager for multiple sensors.
This manager provides basic functionality:
- Register sensors with different backends
- Remove sensors by ID
- Read from all sensors in parallel
The manager keeps sensors in a registry and delegates operations to them.
Initialize sensor manager.
register_sensor
register_sensor(
sensor_id: str,
backend_type: str,
connection_params: Dict[str, Any],
address: str,
) -> AsyncSensor
Register a new sensor with the manager.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| sensor_id | str | Unique identifier for the sensor | required |
| backend_type | str | Type of backend ("mqtt", "http", "serial") | required |
| connection_params | Dict[str, Any] | Backend-specific connection parameters | required |
| address | str | Backend-specific address (topic, endpoint, command) | required |

Returns:

| Type | Description |
|---|---|
| AsyncSensor | The created AsyncSensor instance |

Raises:

| Type | Description |
|---|---|
| ValueError | If sensor_id already exists or parameters are invalid |
Examples:
Register MQTT sensor:
sensor = manager.register_sensor(
    "temp001",
    "mqtt",
    {"broker_url": "mqtt://localhost:1883"},
    "sensors/temperature",
)

Register HTTP sensor:
sensor = manager.register_sensor(
    "temp002",
    "http",
    {"base_url": "http://api.sensors.com"},
    "/sensors/temperature",
)
remove_sensor
Remove a sensor from the manager.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| sensor_id | str | ID of sensor to remove | required |

Raises:

| Type | Description |
|---|---|
| ValueError | If sensor_id doesn't exist |
get_sensor
Get a sensor by ID.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| sensor_id | str | ID of sensor to get | required |

Returns:

| Type | Description |
|---|---|
| Optional[AsyncSensor] | AsyncSensor instance or None if not found |
list_sensors
Get list of all registered sensor IDs.
Returns:

| Type | Description |
|---|---|
| List[str] | List of sensor IDs |
connect_all
async
Connect all registered sensors.
Returns:

| Type | Description |
|---|---|
| Dict[str, bool] | Dictionary mapping sensor IDs to connection success (True/False) |
read_all
async
Read data from all registered sensors.
Returns:

| Type | Description |
|---|---|
| Dict[str, Dict[str, Any]] | Dictionary mapping sensor IDs to their data (or error info) |
Examples:
{
    "temp001": {"temperature": 23.5, "unit": "C"},
    "temp002": {"error": "Not connected"},
    "humid001": {"humidity": 65.2, "unit": "%"},
}
AsyncSensor
Unified async sensor interface.
This class provides a simple, consistent API for reading sensor data regardless of the underlying communication backend (MQTT, HTTP, Serial, etc.).
The sensor abstracts different communication patterns:
- MQTT: Push-based (messages are cached when received)
- HTTP: Pull-based (requests made on-demand)
- Serial: Pull-based (commands sent on-demand)
All backends are hidden behind the same connect/disconnect/read interface.
Initialize AsyncSensor with a backend.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| sensor_id | str | Unique identifier for this sensor | required |
| backend | SensorBackend | Backend implementation (MQTT, HTTP, Serial, etc.) | required |
| address | str | Backend-specific address (topic, endpoint, command, etc.) | required |

Raises:

| Type | Description |
|---|---|
| ValueError | If sensor_id or address is empty |
| TypeError | If backend is not a SensorBackend instance |
is_connected
property
Check if sensor backend is connected.
Returns:

| Type | Description |
|---|---|
| bool | True if backend is connected, False otherwise |
connect
async
Connect the sensor backend.
This establishes the connection to the underlying communication system (MQTT broker, HTTP server, serial port, etc.).
Raises:

| Type | Description |
|---|---|
| ConnectionError | If connection fails |
disconnect
async
Disconnect the sensor backend.
This closes the connection to the underlying communication system. Safe to call multiple times.
read
async
Read sensor data.
This method abstracts different communication patterns:
- MQTT: Returns cached message from topic
- HTTP: Makes GET request to endpoint
- Serial: Sends command and reads response
Returns:

| Type | Description |
|---|---|
| Optional[Dict[str, Any]] | Dictionary with sensor data, or None if no data available |

Raises:

| Type | Description |
|---|---|
| ConnectionError | If backend is not connected |
| TimeoutError | If read operation times out |
| ValueError | If address is invalid |
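Putting the pieces together, an end-to-end read sketch (assumes a local MQTT broker at the illustrative URL, and that create_backend and AsyncSensor are imported from this package):

```python
import asyncio


async def main() -> None:
    # Broker URL is illustrative; point this at your own broker.
    backend = create_backend("mqtt", broker_url="mqtt://localhost:1883")
    sensor = AsyncSensor("temp001", backend, "sensors/temperature")

    await sensor.connect()
    try:
        data = await sensor.read()  # latest cached message, or None
        print(data)
    finally:
        await sensor.disconnect()  # safe to call multiple times


asyncio.run(main())
```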
SensorSimulator
Unified sensor simulator interface.
This class provides a simple, consistent API for publishing sensor data regardless of the underlying communication backend (MQTT, HTTP, Serial, etc.).
The simulator abstracts different communication patterns:
- MQTT: Publish messages to topics
- HTTP: POST data to REST endpoints
- Serial: Send data/commands to serial devices
All backends are hidden behind the same connect/disconnect/publish interface. This is perfect for integration testing and sensor data simulation.
Initialize SensorSimulator with a backend.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| simulator_id | str | Unique identifier for this simulator | required |
| backend | SensorSimulatorBackend | Backend implementation (MQTT, HTTP, Serial, etc.) | required |
| address | str | Backend-specific address (topic, endpoint, command, etc.) | required |

Raises:

| Type | Description |
|---|---|
| ValueError | If simulator_id or address is empty |
| TypeError | If backend is not a SensorSimulatorBackend instance |
is_connected
property
Check if simulator backend is connected.
Returns:

| Type | Description |
|---|---|
| bool | True if backend is connected, False otherwise |
connect
async
Connect the simulator backend.
This establishes the connection to the underlying communication system (MQTT broker, HTTP server, serial port, etc.).
Raises:

| Type | Description |
|---|---|
| ConnectionError | If connection fails |
disconnect
async
Disconnect the simulator backend.
This closes the connection to the underlying communication system. Safe to call multiple times.
publish
async
Publish sensor data.
This method abstracts different communication patterns:
- MQTT: Publishes message to topic
- HTTP: Makes POST request to endpoint
- Serial: Sends data to serial port
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| data | Union[Dict[str, Any], Any] | Data to publish (dict, primitive, or complex object) | required |

Raises:

| Type | Description |
|---|---|
| ConnectionError | If backend is not connected |
| TimeoutError | If publish operation times out |
| ValueError | If address is invalid or data cannot be serialized |
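A matching simulator-side sketch, useful for integration tests that feed an AsyncSensor (same assumptions: local broker at an illustrative URL, imports from this package):

```python
import asyncio


async def main() -> None:
    # Broker URL is illustrative; point this at your own broker.
    backend = create_simulator_backend("mqtt", broker_url="mqtt://localhost:1883")
    sim = SensorSimulator("sim001", backend, "sensors/temperature")

    await sim.connect()
    try:
        # Dict payloads are JSON-encoded before publishing.
        await sim.publish({"temperature": 23.5, "unit": "C"})
    finally:
        await sim.disconnect()  # safe to call multiple times


asyncio.run(main())
```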
SensorSimulatorBackend
Bases: ABC
Abstract base class for all sensor simulator backends.
This interface abstracts different communication patterns for publishing:
- MQTT: Publish messages to topics
- HTTP: POST data to endpoints
- Serial: Send data/commands to serial ports
- Modbus: Write to registers
connect
abstractmethod
async
Establish connection to the backend.
Raises:

| Type | Description |
|---|---|
| ConnectionError | If connection fails |
disconnect
abstractmethod
async
Close connection to the backend.
Should be safe to call multiple times.
publish_data
abstractmethod
async
Publish sensor data to the specified address.
For different backends, 'address' means:
- MQTT: topic name to publish to
- HTTP: endpoint path to POST to
- Serial: sensor command or data format
- Modbus: register address to write to
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| address | str | Backend-specific address/identifier | required |
| data | Union[Dict[str, Any], Any] | Data to publish (dict, primitive, or complex object) | required |

Raises:

| Type | Description |
|---|---|
| ConnectionError | If backend not connected |
| TimeoutError | If publish operation times out |
| ValueError | If address is invalid or data cannot be serialized |
HTTPSensorSimulator
HTTPSensorSimulator(
base_url: str,
auth_token: Optional[str] = None,
timeout: float = 30.0,
**kwargs
)
Bases: SensorSimulatorBackend
HTTP backend for sensor simulation (placeholder).
This backend will connect to REST APIs and make HTTP POST requests to publish sensor data. It implements a push-based pattern where we send data to endpoints.
Future implementation will:
- Make HTTP POST requests to base_url + endpoint
- Handle authentication headers
- Send JSON payloads
- Implement timeout and retry logic
Initialize HTTP simulator backend.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| base_url | str | Base URL for HTTP requests (e.g., "http://api.sensors.com") | required |
| auth_token | Optional[str] | Optional authentication token | None |
| timeout | float | Request timeout in seconds | 30.0 |
| **kwargs | | Additional HTTP client parameters | {} |
connect
async
Establish HTTP client connection.
Raises:

| Type | Description |
|---|---|
| NotImplementedError | HTTP simulator not yet implemented |
publish_data
async
Publish sensor data via HTTP POST request.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| address | str | Endpoint path (e.g., "/sensors/temperature/data") | required |
| data | Union[Dict[str, Any], Any] | Data to publish (will be JSON-encoded) | required |

Raises:

| Type | Description |
|---|---|
| NotImplementedError | HTTP simulator not yet implemented |
MQTTSensorSimulator
MQTTSensorSimulator(
broker_url: str,
identifier: Optional[str] = None,
username: Optional[str] = None,
password: Optional[str] = None,
keepalive: int = 60,
**kwargs
)
Bases: SensorSimulatorBackend
MQTT backend for sensor simulation.
This backend connects to an MQTT broker and publishes sensor data to topics. It's designed for testing and integration scenarios where you need to simulate sensor data streams that can be consumed by AsyncSensor instances.
Initialize MQTT simulator backend.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| broker_url | str | MQTT broker URL (e.g., "mqtt://localhost:1883") | required |
| identifier | Optional[str] | MQTT client identifier (auto-generated if None) | None |
| username | Optional[str] | MQTT username (optional) | None |
| password | Optional[str] | MQTT password (optional) | None |
| keepalive | int | MQTT keepalive interval in seconds | 60 |
| **kwargs | | Additional MQTT client parameters | {} |
connect
async
Connect to MQTT broker.
Raises:

| Type | Description |
|---|---|
| ConnectionError | If connection to broker fails |
publish_data
async
Publish sensor data to MQTT topic.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| address | str | MQTT topic name to publish to | required |
| data | Union[Dict[str, Any], Any] | Data to publish (will be JSON-encoded if dict/list) | required |

Raises:

| Type | Description |
|---|---|
| ConnectionError | If not connected to broker |
| ValueError | If topic name is invalid |
| TimeoutError | If publish operation times out |
SerialSensorSimulator
Bases: SensorSimulatorBackend
Serial backend for sensor simulation (placeholder).
This backend will connect to serial/USB ports and send sensor data commands. It implements a push-based pattern where we send sensor data to simulate physical sensor devices.
Future implementation will:
- Connect to serial ports (e.g., /dev/ttyUSB0, COM3)
- Send sensor data in various formats (JSON, CSV, custom protocols)
- Simulate sensor response patterns and timing
- Handle communication protocols and handshaking
Initialize Serial simulator backend.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| port | str | Serial port path (e.g., "/dev/ttyUSB0" or "COM3") | required |
| baudrate | int | Serial communication baudrate | 9600 |
| timeout | float | Communication timeout in seconds | 5.0 |
| **kwargs | | Additional serial parameters (parity, stopbits, etc.) | {} |
|
connect
async
Open serial port connection.
Raises:

| Type | Description |
|---|---|
| NotImplementedError | Serial simulator not yet implemented |
publish_data
async
Send sensor data via serial port.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| address | str | Command type or data format identifier (e.g., "TEMP_DATA", "JSON_FORMAT") | required |
| data | Union[Dict[str, Any], Any] | Data to send (will be formatted according to address) | required |

Raises:

| Type | Description |
|---|---|
| NotImplementedError | Serial simulator not yet implemented |
create_backend
Create a sensor backend of the specified type.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| backend_type | str | Type of backend ("mqtt", "http", "serial") | required |
| **params | | Backend-specific parameters | {} |

Returns:

| Type | Description |
|---|---|
| SensorBackend | Instantiated backend |

Raises:

| Type | Description |
|---|---|
| ValueError | If backend_type is unknown |
| TypeError | If required parameters are missing |
Examples:
MQTT backend
mqtt_backend = create_backend("mqtt", broker_url="mqtt://localhost:1883")
HTTP backend
http_backend = create_backend("http", base_url="http://api.sensors.com")
Serial backend
serial_backend = create_backend("serial", port="/dev/ttyUSB0", baudrate=9600)
create_simulator_backend
Create a sensor simulator backend of the specified type.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `backend_type` | `str` | Type of backend ("mqtt", "http", "serial") | required |
| `**params` | | Backend-specific parameters | {} |

Returns:

| Type | Description |
|---|---|
| `SensorSimulatorBackend` | Instantiated simulator backend |

Raises:

| Type | Description |
|---|---|
| `ValueError` | If backend_type is unknown |
| `TypeError` | If required parameters are missing |

Examples:

    # MQTT simulator backend
    mqtt_sim = create_simulator_backend("mqtt", broker_url="mqtt://localhost:1883")

    # HTTP simulator backend
    http_sim = create_simulator_backend("http", base_url="http://api.sensors.com")

    # Serial simulator backend
    serial_sim = create_simulator_backend("serial", port="/dev/ttyUSB0", baudrate=9600)
backends
Sensor backends package.
base
Base sensor backend interface.
This module defines the abstract interface that all sensor backends must implement.
Bases: ABC
Abstract base class for all sensor backends.
This interface abstracts different communication patterns:

- MQTT: Push-based (subscribe to topics, cache messages)
- HTTP: Pull-based (make requests on-demand)
- Serial: Pull-based (send commands, read responses)
- Modbus: Pull-based (read registers)
abstractmethod
async
Establish connection to the backend.
Raises:

| Type | Description |
|---|---|
| `ConnectionError` | If connection fails |
abstractmethod
async
Close connection to the backend.
Should be safe to call multiple times.
abstractmethod
async
Read sensor data from the specified address.
For different backends, 'address' means:

- MQTT: topic name (returns cached message)
- HTTP: endpoint path (makes GET request)
- Serial: sensor command (send command, read response)
- Modbus: register address (read holding registers)

Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `address` | `str` | Backend-specific address/identifier | required |

Returns:

| Type | Description |
|---|---|
| `Optional[Dict[str, Any]]` | Dictionary with sensor data, or None if no data available |

Raises:

| Type | Description |
|---|---|
| `ConnectionError` | If backend not connected |
| `TimeoutError` | If read operation times out |
| `ValueError` | If address is invalid |
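The contract above can be exercised with a minimal in-memory backend. This is a hedged, stdlib-only sketch: the abstract method names (`connect`, `disconnect`, `read_data`) follow this interface description, while `InMemorySensorBackend` is a hypothetical stand-in and not part of the package.

```python
import asyncio
from abc import ABC, abstractmethod
from typing import Any, Dict, Optional


class SensorBackend(ABC):
    """Minimal restatement of the documented abstract interface."""

    @abstractmethod
    async def connect(self) -> None: ...

    @abstractmethod
    async def disconnect(self) -> None: ...

    @abstractmethod
    async def read_data(self, address: str) -> Optional[Dict[str, Any]]: ...


class InMemorySensorBackend(SensorBackend):
    """Hypothetical pull-based backend backed by a plain dict."""

    def __init__(self, data: Dict[str, Dict[str, Any]]):
        self._data = data
        self._connected = False

    async def connect(self) -> None:
        self._connected = True

    async def disconnect(self) -> None:
        self._connected = False  # safe to call multiple times

    async def read_data(self, address: str) -> Optional[Dict[str, Any]]:
        if not self._connected:
            raise ConnectionError("backend not connected")
        return self._data.get(address)  # None when no data is available


async def main() -> None:
    backend = InMemorySensorBackend({"sensors/temp": {"temperature": 23.5}})
    await backend.connect()
    print(await backend.read_data("sensors/temp"))     # {'temperature': 23.5}
    print(await backend.read_data("sensors/missing"))  # None
    await backend.disconnect()


asyncio.run(main())
```

Note how the error contract is honored: reading before `connect()` raises `ConnectionError`, and a missing address yields `None` rather than an exception.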
http
HTTP sensor backend implementation (placeholder).
This module will implement the SensorBackend interface for HTTP/REST API communication. Currently this is a placeholder that raises NotImplementedError.
HTTPSensorBackend(
base_url: str,
auth_token: Optional[str] = None,
timeout: float = 30.0,
**kwargs
)
Bases: SensorBackend
HTTP backend for sensor communication (placeholder).
This backend will connect to REST APIs and make HTTP GET requests to read sensor data. It implements a pull-based pattern where we request data on-demand.
Future implementation will:

- Make HTTP GET requests to base_url + endpoint
- Handle authentication headers
- Parse JSON responses
- Implement timeout and retry logic
Initialize HTTP backend.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `base_url` | `str` | Base URL for HTTP requests (e.g., "http://api.sensors.com") | required |
| `auth_token` | `Optional[str]` | Optional authentication token | None |
| `timeout` | `float` | Request timeout in seconds | 30.0 |
| `**kwargs` | | Additional HTTP client parameters | {} |
async
Establish HTTP client connection.
Raises:

| Type | Description |
|---|---|
| `NotImplementedError` | HTTP backend not yet implemented |
async
Read sensor data via HTTP GET request.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `address` | `str` | Endpoint path (e.g., "/sensors/temperature/current") | required |

Returns:

| Type | Description |
|---|---|
| `Optional[Dict[str, Any]]` | JSON response data, or None if request fails |

Raises:

| Type | Description |
|---|---|
| `NotImplementedError` | HTTP backend not yet implemented |
mqtt
MQTT sensor backend implementation.
This module implements the SensorBackend interface for MQTT communication. It uses a push-based model where messages are cached when received.
MQTTSensorBackend(
broker_url: str,
identifier: Optional[str] = None,
username: Optional[str] = None,
password: Optional[str] = None,
keepalive: int = 60,
**kwargs
)
Bases: SensorBackend
MQTT backend for sensor communication.
This backend connects to an MQTT broker and subscribes to topics. Messages are cached when received, and read_data() returns the latest cached message.
This implements a push-based pattern where data comes to us, unlike HTTP/Serial which are pull-based where we request data on-demand.
Initialize MQTT backend.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `broker_url` | `str` | MQTT broker URL (e.g., "mqtt://localhost:1883") | required |
| `identifier` | `Optional[str]` | MQTT client identifier (auto-generated if None) | None |
| `username` | `Optional[str]` | MQTT username (optional) | None |
| `password` | `Optional[str]` | MQTT password (optional) | None |
| `keepalive` | `int` | MQTT keepalive interval in seconds | 60 |
| `**kwargs` | | Additional MQTT client parameters | {} |
async
Connect to MQTT broker.
Raises:

| Type | Description |
|---|---|
| `ConnectionError` | If connection to broker fails |
async
Read cached data from MQTT topic.
For MQTT, the address is the topic name. If we haven't subscribed to this topic yet, we'll subscribe and wait briefly for a message.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `address` | `str` | MQTT topic name | required |

Returns:

| Type | Description |
|---|---|
| `Optional[Dict[str, Any]]` | Latest cached message for the topic, or None if no data available |

Raises:

| Type | Description |
|---|---|
| `ConnectionError` | If not connected to broker |
| `ValueError` | If topic name is invalid |
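The cached-read semantics described above (subscribe on first read, return the latest cached message, `None` if nothing has arrived) can be sketched without a broker. `TopicCache` below is a hypothetical, stdlib-only illustration of the pattern, not the backend's actual internals.

```python
import asyncio
from typing import Any, Dict, Optional, Set


class TopicCache:
    """Illustration of MQTT-style push-based reads: incoming messages are
    cached per topic, and reads return the latest cached message."""

    def __init__(self) -> None:
        self._cache: Dict[str, Dict[str, Any]] = {}
        self._subscribed: Set[str] = set()

    def on_message(self, topic: str, payload: Dict[str, Any]) -> None:
        # Called by the transport when a message arrives (push-based):
        # the newest payload replaces the previous one for that topic.
        self._cache[topic] = payload

    async def read_data(self, topic: str) -> Optional[Dict[str, Any]]:
        if topic not in self._subscribed:
            self._subscribed.add(topic)  # subscribe on first read...
            await asyncio.sleep(0)       # ...and yield briefly for a message
        return self._cache.get(topic)    # latest cached message, or None


async def main() -> None:
    cache = TopicCache()
    print(await cache.read_data("sensors/temp"))  # None: nothing cached yet
    cache.on_message("sensors/temp", {"temperature": 21.0})
    cache.on_message("sensors/temp", {"temperature": 22.5})
    print(await cache.read_data("sensors/temp"))  # latest message wins


asyncio.run(main())
```

This is the key contrast with the pull-based HTTP and Serial backends: data arrives asynchronously, so a read is a cache lookup rather than a round-trip.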
serial
Serial sensor backend implementation (placeholder).
This module will implement the SensorBackend interface for serial/USB communication. Currently this is a placeholder that raises NotImplementedError.
Bases: SensorBackend
Serial backend for sensor communication (placeholder).
This backend will connect to sensors via serial/USB ports and send commands to read sensor data. It implements a pull-based pattern where we send commands and read responses on-demand.
Future implementation will:

- Connect to serial ports (e.g., /dev/ttyUSB0, COM3)
- Send sensor commands and read responses
- Parse sensor data (JSON, CSV, or custom formats)
- Handle timeouts and communication errors
Initialize Serial backend.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `port` | `str` | Serial port path (e.g., "/dev/ttyUSB0" or "COM3") | required |
| `baudrate` | `int` | Serial communication baudrate | 9600 |
| `timeout` | `float` | Communication timeout in seconds | 5.0 |
| `**kwargs` | | Additional serial parameters (parity, stopbits, etc.) | {} |
async
Open serial port connection.
Raises:

| Type | Description |
|---|---|
| `NotImplementedError` | Serial backend not yet implemented |
async
Send command to sensor and read response.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `address` | `str` | Sensor command (e.g., "READ_TEMP", "GET_HUMIDITY") | required |

Returns:

| Type | Description |
|---|---|
| `Optional[Dict[str, Any]]` | Parsed sensor response data, or None if command fails |

Raises:

| Type | Description |
|---|---|
| `NotImplementedError` | Serial backend not yet implemented |
core
Sensor core package.
factory
Backend factory for creating sensor backends and simulators.
This module provides factory functions to create different types of sensor backends and simulator backends based on type strings and parameters.
Create a sensor backend of the specified type.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `backend_type` | `str` | Type of backend ("mqtt", "http", "serial") | required |
| `**params` | | Backend-specific parameters | {} |

Returns:

| Type | Description |
|---|---|
| `SensorBackend` | Instantiated backend |

Raises:

| Type | Description |
|---|---|
| `ValueError` | If backend_type is unknown |
| `TypeError` | If required parameters are missing |

Examples:

    # MQTT backend
    mqtt_backend = create_backend("mqtt", broker_url="mqtt://localhost:1883")

    # HTTP backend
    http_backend = create_backend("http", base_url="http://api.sensors.com")

    # Serial backend
    serial_backend = create_backend("serial", port="/dev/ttyUSB0", baudrate=9600)
Register a custom backend type.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `backend_type` | `str` | Name for the backend type | required |
| `backend_class` | `type` | Backend class that implements SensorBackend | required |

Raises:

| Type | Description |
|---|---|
| `TypeError` | If backend_class doesn't inherit from SensorBackend |
Get all available backend types.
Returns:

| Type | Description |
|---|---|
| `Dict[str, type]` | Dictionary mapping backend names to classes |
Create a sensor simulator backend of the specified type.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `backend_type` | `str` | Type of backend ("mqtt", "http", "serial") | required |
| `**params` | | Backend-specific parameters | {} |

Returns:

| Type | Description |
|---|---|
| `SensorSimulatorBackend` | Instantiated simulator backend |

Raises:

| Type | Description |
|---|---|
| `ValueError` | If backend_type is unknown |
| `TypeError` | If required parameters are missing |

Examples:

    # MQTT simulator backend
    mqtt_sim = create_simulator_backend("mqtt", broker_url="mqtt://localhost:1883")

    # HTTP simulator backend
    http_sim = create_simulator_backend("http", base_url="http://api.sensors.com")

    # Serial simulator backend
    serial_sim = create_simulator_backend("serial", port="/dev/ttyUSB0", baudrate=9600)
Register a custom simulator backend type.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `backend_type` | `str` | Name for the backend type | required |
| `backend_class` | `type` | Backend class that implements SensorSimulatorBackend | required |

Raises:

| Type | Description |
|---|---|
| `TypeError` | If backend_class doesn't inherit from SensorSimulatorBackend |
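The factory behavior documented here (unknown type raises `ValueError`, missing parameters raise `TypeError`, registration rejects non-subclasses) amounts to a type-string-to-class registry. A hedged sketch follows; the registry name `_BACKENDS`, the `register_backend` helper, and the stand-in classes are illustrative, not the package's actual internals.

```python
from typing import Any, Dict, Type


class SensorBackend:  # stand-in for the package's base class
    pass


class MQTTBackend(SensorBackend):
    def __init__(self, broker_url: str, **kwargs: Any) -> None:
        self.broker_url = broker_url


# Registry mapping backend type strings to backend classes.
_BACKENDS: Dict[str, Type[SensorBackend]] = {"mqtt": MQTTBackend}


def create_backend(backend_type: str, **params: Any) -> SensorBackend:
    try:
        cls = _BACKENDS[backend_type]
    except KeyError:
        raise ValueError(f"unknown backend type: {backend_type!r}")
    # Missing required parameters surface naturally as TypeError here.
    return cls(**params)


def register_backend(backend_type: str, backend_class: type) -> None:
    if not (isinstance(backend_class, type) and issubclass(backend_class, SensorBackend)):
        raise TypeError("backend_class must inherit from SensorBackend")
    _BACKENDS[backend_type] = backend_class


backend = create_backend("mqtt", broker_url="mqtt://localhost:1883")
print(type(backend).__name__)  # MQTTBackend
```

Delegating parameter validation to the class constructor keeps the factory thin: it never needs to know each backend's signature, yet callers still get a precise `TypeError` for missing arguments.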
manager
Simple sensor manager implementation.
This module implements a minimal SensorManager that can register/remove sensors and perform bulk read operations across multiple sensors.
Simple manager for multiple sensors.
This manager provides basic functionality:

- Register sensors with different backends
- Remove sensors by ID
- Read from all sensors in parallel
The manager keeps sensors in a registry and delegates operations to them.
Initialize sensor manager.
register_sensor(
sensor_id: str,
backend_type: str,
connection_params: Dict[str, Any],
address: str,
) -> AsyncSensor
Register a new sensor with the manager.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `sensor_id` | `str` | Unique identifier for the sensor | required |
| `backend_type` | `str` | Type of backend ("mqtt", "http", "serial") | required |
| `connection_params` | `Dict[str, Any]` | Backend-specific connection parameters | required |
| `address` | `str` | Backend-specific address (topic, endpoint, command) | required |

Returns:

| Type | Description |
|---|---|
| `AsyncSensor` | The created AsyncSensor instance |

Raises:

| Type | Description |
|---|---|
| `ValueError` | If sensor_id already exists or parameters are invalid |

Examples:

    # Register MQTT sensor
    sensor = manager.register_sensor(
        "temp001", "mqtt", {"broker_url": "mqtt://localhost:1883"}, "sensors/temperature"
    )

    # Register HTTP sensor
    sensor = manager.register_sensor(
        "temp002", "http", {"base_url": "http://api.sensors.com"}, "/sensors/temperature"
    )
Remove a sensor from the manager.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `sensor_id` | `str` | ID of sensor to remove | required |

Raises:

| Type | Description |
|---|---|
| `ValueError` | If sensor_id doesn't exist |
Get a sensor by ID.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `sensor_id` | `str` | ID of sensor to get | required |

Returns:

| Type | Description |
|---|---|
| `Optional[AsyncSensor]` | AsyncSensor instance or None if not found |
Get list of all registered sensor IDs.
Returns:

| Type | Description |
|---|---|
| `List[str]` | List of sensor IDs |
async
Connect all registered sensors.
Returns:

| Type | Description |
|---|---|
| `Dict[str, bool]` | Dictionary mapping sensor IDs to connection success (True/False) |
async
Read data from all registered sensors.
Returns:

| Type | Description |
|---|---|
| `Dict[str, Dict[str, Any]]` | Dictionary mapping sensor IDs to their data (or error info) |

Examples:

    {
        "temp001": {"temperature": 23.5, "unit": "C"},
        "temp002": {"error": "Not connected"},
        "humid001": {"humidity": 65.2, "unit": "%"},
    }
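The parallel read described above (each sensor read concurrently, failures mapped to error info rather than aborting the batch) can be sketched with `asyncio.gather`. This is a hedged, stdlib-only illustration: `FakeSensor` and the free-standing `read_all` function are hypothetical stand-ins for the manager's internals.

```python
import asyncio
from typing import Any, Dict, Optional, Tuple


class FakeSensor:
    """Illustrative stand-in for AsyncSensor."""

    def __init__(self, data: Optional[Dict[str, Any]], connected: bool = True):
        self._data = data
        self._connected = connected

    async def read(self) -> Optional[Dict[str, Any]]:
        if not self._connected:
            raise ConnectionError("Not connected")
        return self._data


async def read_all(sensors: Dict[str, FakeSensor]) -> Dict[str, Dict[str, Any]]:
    """Read every registered sensor concurrently; failures become error info."""

    async def safe_read(sensor_id: str, sensor: FakeSensor) -> Tuple[str, Dict[str, Any]]:
        try:
            return sensor_id, await sensor.read()
        except Exception as exc:  # one failing sensor must not sink the batch
            return sensor_id, {"error": str(exc)}

    pairs = await asyncio.gather(*(safe_read(sid, s) for sid, s in sensors.items()))
    return dict(pairs)


sensors = {
    "temp001": FakeSensor({"temperature": 23.5, "unit": "C"}),
    "temp002": FakeSensor(None, connected=False),
}
print(asyncio.run(read_all(sensors)))
```

Wrapping each read in its own try/except before gathering is what produces the mixed data-or-error dictionary shape shown in the example output above.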
sensor
Unified AsyncSensor class.
This module implements the main AsyncSensor class that provides a simple, unified interface for all sensor backends (MQTT, HTTP, Serial, etc.).
Unified async sensor interface.
This class provides a simple, consistent API for reading sensor data regardless of the underlying communication backend (MQTT, HTTP, Serial, etc.).
The sensor abstracts different communication patterns:

- MQTT: Push-based (messages are cached when received)
- HTTP: Pull-based (requests made on-demand)
- Serial: Pull-based (commands sent on-demand)
All backends are hidden behind the same connect/disconnect/read interface.
Initialize AsyncSensor with a backend.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `sensor_id` | `str` | Unique identifier for this sensor | required |
| `backend` | `SensorBackend` | Backend implementation (MQTT, HTTP, Serial, etc.) | required |
| `address` | `str` | Backend-specific address (topic, endpoint, command, etc.) | required |

Raises:

| Type | Description |
|---|---|
| `ValueError` | If sensor_id or address is empty |
| `TypeError` | If backend is not a SensorBackend instance |
property
Check if sensor backend is connected.
Returns:

| Type | Description |
|---|---|
| `bool` | True if backend is connected, False otherwise |
async
Connect the sensor backend.
This establishes the connection to the underlying communication system (MQTT broker, HTTP server, serial port, etc.).
Raises:

| Type | Description |
|---|---|
| `ConnectionError` | If connection fails |
async
Disconnect the sensor backend.
This closes the connection to the underlying communication system. Safe to call multiple times.
async
Read sensor data.
This method abstracts different communication patterns:

- MQTT: Returns cached message from topic
- HTTP: Makes GET request to endpoint
- Serial: Sends command and reads response

Returns:

| Type | Description |
|---|---|
| `Optional[Dict[str, Any]]` | Dictionary with sensor data, or None if no data available |

Raises:

| Type | Description |
|---|---|
| `ConnectionError` | If backend is not connected |
| `TimeoutError` | If read operation times out |
| `ValueError` | If address is invalid |
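The unified interface boils down to delegation: the sensor holds a fixed address and forwards every read to whatever backend it was given. A hedged sketch, assuming the wrapper exposes a `read()` method as described; `Sensor` and `EchoBackend` here are illustrative stand-ins, not the package's classes.

```python
import asyncio
from typing import Any, Dict, Optional


class EchoBackend:
    """Hypothetical backend that simply reports the address it was asked for."""

    async def connect(self) -> None: ...
    async def disconnect(self) -> None: ...

    async def read_data(self, address: str) -> Optional[Dict[str, Any]]:
        return {"address": address, "value": 42}


class Sensor:
    """Sketch of the unified wrapper: one read() regardless of backend."""

    def __init__(self, sensor_id: str, backend: Any, address: str):
        if not sensor_id or not address:
            raise ValueError("sensor_id and address must be non-empty")
        self.sensor_id = sensor_id
        self.backend = backend
        self.address = address

    async def read(self) -> Optional[Dict[str, Any]]:
        # The sensor never cares which transport is underneath;
        # it delegates to the backend with its fixed address.
        return await self.backend.read_data(self.address)


sensor = Sensor("temp001", EchoBackend(), "sensors/temperature")
print(asyncio.run(sensor.read()))  # {'address': 'sensors/temperature', 'value': 42}
```

Because the address is bound at construction time, swapping MQTT for HTTP or Serial changes only the backend argument, never the calling code.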
simulator
SensorSimulator class for publishing sensor data.
This module implements the main SensorSimulator class that provides a simple, unified interface for publishing sensor data to all simulator backends (MQTT, HTTP, Serial, etc.).
Unified sensor simulator interface.
This class provides a simple, consistent API for publishing sensor data regardless of the underlying communication backend (MQTT, HTTP, Serial, etc.).
The simulator abstracts different communication patterns:

- MQTT: Publish messages to topics
- HTTP: POST data to REST endpoints
- Serial: Send data/commands to serial devices
All backends are hidden behind the same connect/disconnect/publish interface, which makes the simulator well suited to integration testing and sensor data simulation.
Initialize SensorSimulator with a backend.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `simulator_id` | `str` | Unique identifier for this simulator | required |
| `backend` | `SensorSimulatorBackend` | Backend implementation (MQTT, HTTP, Serial, etc.) | required |
| `address` | `str` | Backend-specific address (topic, endpoint, command, etc.) | required |

Raises:

| Type | Description |
|---|---|
| `ValueError` | If simulator_id or address is empty |
| `TypeError` | If backend is not a SensorSimulatorBackend instance |
property
Check if simulator backend is connected.
Returns:

| Type | Description |
|---|---|
| `bool` | True if backend is connected, False otherwise |
async
Connect the simulator backend.
This establishes the connection to the underlying communication system (MQTT broker, HTTP server, serial port, etc.).
Raises:

| Type | Description |
|---|---|
| `ConnectionError` | If connection fails |
async
Disconnect the simulator backend.
This closes the connection to the underlying communication system. Safe to call multiple times.
async
Publish sensor data.
This method abstracts different communication patterns:

- MQTT: Publishes message to topic
- HTTP: Makes POST request to endpoint
- Serial: Sends data to serial port

Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `data` | `Union[Dict[str, Any], Any]` | Data to publish (dict, primitive, or complex object) | required |

Raises:

| Type | Description |
|---|---|
| `ConnectionError` | If backend is not connected |
| `TimeoutError` | If publish operation times out |
| `ValueError` | If address is invalid or data cannot be serialized |
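The integration-testing pairing described here — a simulator publishing data that an AsyncSensor then reads — can be sketched with an in-memory "broker" shared between a publish-side and a read-side backend. Everything below is a hypothetical stdlib-only illustration; `BROKER` and the two backend classes are not part of the package.

```python
import asyncio
from typing import Any, Dict, Optional

# A hypothetical in-memory "broker": topic -> latest payload.
BROKER: Dict[str, Dict[str, Any]] = {}


class InMemorySimulatorBackend:
    """Illustrative publish side: writes payloads into the shared broker."""

    async def publish_data(self, address: str, data: Dict[str, Any]) -> None:
        BROKER[address] = data


class InMemorySensorBackend:
    """Illustrative read side: returns the latest cached payload."""

    async def read_data(self, address: str) -> Optional[Dict[str, Any]]:
        return BROKER.get(address)


async def main() -> None:
    simulator = InMemorySimulatorBackend()
    sensor = InMemorySensorBackend()

    # The simulator pushes data exactly where the sensor will look for it,
    # mirroring the MQTT topic-based integration-testing pattern above.
    await simulator.publish_data("sensors/temperature", {"temperature": 24.1})
    print(await sensor.read_data("sensors/temperature"))  # {'temperature': 24.1}


asyncio.run(main())
```

With a real MQTT broker the shared dict is replaced by the broker's topic state, but the contract is the same: publish to an address, read the latest value from that address.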
simulators
Sensor simulators for testing and integration purposes.
This module provides SensorSimulator implementations that can publish data to various backends (MQTT, HTTP, Serial) for testing AsyncSensor functionality.
SensorSimulatorBackend
Bases: ABC
Abstract base class for all sensor simulator backends.
This interface abstracts different communication patterns for publishing:

- MQTT: Publish messages to topics
- HTTP: POST data to endpoints
- Serial: Send data/commands to serial ports
- Modbus: Write to registers
abstractmethod
async
Establish connection to the backend.
Raises:

| Type | Description |
|---|---|
| `ConnectionError` | If connection fails |
abstractmethod
async
Close connection to the backend.
Should be safe to call multiple times.
abstractmethod
async
Publish sensor data to the specified address.
For different backends, 'address' means:

- MQTT: topic name to publish to
- HTTP: endpoint path to POST to
- Serial: sensor command or data format
- Modbus: register address to write to

Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `address` | `str` | Backend-specific address/identifier | required |
| `data` | `Union[Dict[str, Any], Any]` | Data to publish (dict, primitive, or complex object) | required |

Raises:

| Type | Description |
|---|---|
| `ConnectionError` | If backend not connected |
| `TimeoutError` | If publish operation times out |
| `ValueError` | If address is invalid or data cannot be serialized |
HTTPSensorSimulator
HTTPSensorSimulator(
base_url: str,
auth_token: Optional[str] = None,
timeout: float = 30.0,
**kwargs
)
Bases: SensorSimulatorBackend
HTTP backend for sensor simulation (placeholder).
This backend will connect to REST APIs and make HTTP POST requests to publish sensor data. It implements a push-based pattern where we send data to endpoints.
Future implementation will:

- Make HTTP POST requests to base_url + endpoint
- Handle authentication headers
- Send JSON payloads
- Implement timeout and retry logic
Initialize HTTP simulator backend.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `base_url` | `str` | Base URL for HTTP requests (e.g., "http://api.sensors.com") | required |
| `auth_token` | `Optional[str]` | Optional authentication token | None |
| `timeout` | `float` | Request timeout in seconds | 30.0 |
| `**kwargs` | | Additional HTTP client parameters | {} |
|
async
Establish HTTP client connection.
Raises:

| Type | Description |
|---|---|
| `NotImplementedError` | HTTP simulator not yet implemented |
async
Publish sensor data via HTTP POST request.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `address` | `str` | Endpoint path (e.g., "/sensors/temperature/data") | required |
| `data` | `Union[Dict[str, Any], Any]` | Data to publish (will be JSON-encoded) | required |

Raises:

| Type | Description |
|---|---|
| `NotImplementedError` | HTTP simulator not yet implemented |
MQTTSensorSimulator
MQTTSensorSimulator(
broker_url: str,
identifier: Optional[str] = None,
username: Optional[str] = None,
password: Optional[str] = None,
keepalive: int = 60,
**kwargs
)
Bases: SensorSimulatorBackend
MQTT backend for sensor simulation.
This backend connects to an MQTT broker and publishes sensor data to topics. It's designed for testing and integration scenarios where you need to simulate sensor data streams that can be consumed by AsyncSensor instances.
Initialize MQTT simulator backend.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `broker_url` | `str` | MQTT broker URL (e.g., "mqtt://localhost:1883") | required |
| `identifier` | `Optional[str]` | MQTT client identifier (auto-generated if None) | None |
| `username` | `Optional[str]` | MQTT username (optional) | None |
| `password` | `Optional[str]` | MQTT password (optional) | None |
| `keepalive` | `int` | MQTT keepalive interval in seconds | 60 |
| `**kwargs` | | Additional MQTT client parameters | {} |
|
async
Connect to MQTT broker.
Raises:

| Type | Description |
|---|---|
| `ConnectionError` | If connection to broker fails |
async
Publish sensor data to MQTT topic.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `address` | `str` | MQTT topic name to publish to | required |
| `data` | `Union[Dict[str, Any], Any]` | Data to publish (will be JSON-encoded if dict/list) | required |

Raises:

| Type | Description |
|---|---|
| `ConnectionError` | If not connected to broker |
| `ValueError` | If topic name is invalid |
| `TimeoutError` | If publish operation times out |
SerialSensorSimulator
Bases: SensorSimulatorBackend
Serial backend for sensor simulation (placeholder).
This backend will connect to serial/USB ports and send sensor data commands. It implements a push-based pattern where we send sensor data to simulate physical sensor devices.
Future implementation will:

- Connect to serial ports (e.g., /dev/ttyUSB0, COM3)
- Send sensor data in various formats (JSON, CSV, custom protocols)
- Simulate sensor response patterns and timing
- Handle communication protocols and handshaking
Initialize Serial simulator backend.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `port` | `str` | Serial port path (e.g., "/dev/ttyUSB0" or "COM3") | required |
| `baudrate` | `int` | Serial communication baudrate | 9600 |
| `timeout` | `float` | Communication timeout in seconds | 5.0 |
| `**kwargs` | | Additional serial parameters (parity, stopbits, etc.) | {} |
|
async
Open serial port connection.
Raises:

| Type | Description |
|---|---|
| `NotImplementedError` | Serial simulator not yet implemented |
async
Send sensor data via serial port.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `address` | `str` | Command type or data format identifier (e.g., "TEMP_DATA", "JSON_FORMAT") | required |
| `data` | `Union[Dict[str, Any], Any]` | Data to send (will be formatted according to address) | required |

Raises:

| Type | Description |
|---|---|
| `NotImplementedError` | Serial simulator not yet implemented |
base
Base sensor simulator backend interface.
This module defines the abstract interface that all sensor simulator backends must implement.
Bases: ABC
Abstract base class for all sensor simulator backends.
This interface abstracts different communication patterns for publishing:

- MQTT: Publish messages to topics
- HTTP: POST data to endpoints
- Serial: Send data/commands to serial ports
- Modbus: Write to registers
abstractmethod
async
Establish connection to the backend.
Raises:

| Type | Description |
|---|---|
| `ConnectionError` | If connection fails |
abstractmethod
async
Close connection to the backend.
Should be safe to call multiple times.
abstractmethod
async
Publish sensor data to the specified address.
For different backends, 'address' means:

- MQTT: topic name to publish to
- HTTP: endpoint path to POST to
- Serial: sensor command or data format
- Modbus: register address to write to

Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `address` | `str` | Backend-specific address/identifier | required |
| `data` | `Union[Dict[str, Any], Any]` | Data to publish (dict, primitive, or complex object) | required |

Raises:

| Type | Description |
|---|---|
| `ConnectionError` | If backend not connected |
| `TimeoutError` | If publish operation times out |
| `ValueError` | If address is invalid or data cannot be serialized |
http
HTTP sensor simulator backend implementation (placeholder).
This module will implement the SensorSimulatorBackend interface for HTTP/REST API communication. Currently this is a placeholder that raises NotImplementedError.
HTTPSensorSimulator(
base_url: str,
auth_token: Optional[str] = None,
timeout: float = 30.0,
**kwargs
)
Bases: SensorSimulatorBackend
HTTP backend for sensor simulation (placeholder).
This backend will connect to REST APIs and make HTTP POST requests to publish sensor data. It implements a push-based pattern where we send data to endpoints.
Future implementation will:

- Make HTTP POST requests to base_url + endpoint
- Handle authentication headers
- Send JSON payloads
- Implement timeout and retry logic
Initialize HTTP simulator backend.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `base_url` | `str` | Base URL for HTTP requests (e.g., "http://api.sensors.com") | required |
| `auth_token` | `Optional[str]` | Optional authentication token | None |
| `timeout` | `float` | Request timeout in seconds | 30.0 |
| `**kwargs` | | Additional HTTP client parameters | {} |
|
async
Establish HTTP client connection.
Raises:

| Type | Description |
|---|---|
| `NotImplementedError` | HTTP simulator not yet implemented |
async
Publish sensor data via HTTP POST request.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `address` | `str` | Endpoint path (e.g., "/sensors/temperature/data") | required |
| `data` | `Union[Dict[str, Any], Any]` | Data to publish (will be JSON-encoded) | required |

Raises:

| Type | Description |
|---|---|
| `NotImplementedError` | HTTP simulator not yet implemented |
mqtt
MQTT sensor simulator backend implementation.
This module implements the SensorSimulatorBackend interface for MQTT communication. It publishes sensor data to MQTT topics for testing and integration purposes.
MQTTSensorSimulator(
broker_url: str,
identifier: Optional[str] = None,
username: Optional[str] = None,
password: Optional[str] = None,
keepalive: int = 60,
**kwargs
)
Bases: SensorSimulatorBackend
MQTT backend for sensor simulation.
This backend connects to an MQTT broker and publishes sensor data to topics. It's designed for testing and integration scenarios where you need to simulate sensor data streams that can be consumed by AsyncSensor instances.
Initialize MQTT simulator backend.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `broker_url` | `str` | MQTT broker URL (e.g., "mqtt://localhost:1883") | required |
| `identifier` | `Optional[str]` | MQTT client identifier (auto-generated if None) | None |
| `username` | `Optional[str]` | MQTT username (optional) | None |
| `password` | `Optional[str]` | MQTT password (optional) | None |
| `keepalive` | `int` | MQTT keepalive interval in seconds | 60 |
| `**kwargs` | | Additional MQTT client parameters | {} |
|
async
Connect to MQTT broker.
Raises:

| Type | Description |
|---|---|
| `ConnectionError` | If connection to broker fails |
async
Publish sensor data to MQTT topic.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `address` | `str` | MQTT topic name to publish to | required |
| `data` | `Union[Dict[str, Any], Any]` | Data to publish (will be JSON-encoded if dict/list) | required |

Raises:

| Type | Description |
|---|---|
| `ConnectionError` | If not connected to broker |
| `ValueError` | If topic name is invalid |
| `TimeoutError` | If publish operation times out |
serial
Serial sensor simulator backend implementation (placeholder).
This module will implement the SensorSimulatorBackend interface for serial/USB communication. Currently this is a placeholder that raises NotImplementedError.
Bases: SensorSimulatorBackend
Serial backend for sensor simulation (placeholder).
This backend will connect to serial/USB ports and send sensor data commands. It implements a push-based pattern where we send sensor data to simulate physical sensor devices.
Future implementation will:

- Connect to serial ports (e.g., /dev/ttyUSB0, COM3)
- Send sensor data in various formats (JSON, CSV, custom protocols)
- Simulate sensor response patterns and timing
- Handle communication protocols and handshaking
Initialize Serial simulator backend.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `port` | `str` | Serial port path (e.g., "/dev/ttyUSB0" or "COM3") | required |
| `baudrate` | `int` | Serial communication baudrate | 9600 |
| `timeout` | `float` | Communication timeout in seconds | 5.0 |
| `**kwargs` | | Additional serial parameters (parity, stopbits, etc.) | {} |
|
async
Open serial port connection.
Raises:

| Type | Description |
|---|---|
| `NotImplementedError` | Serial simulator not yet implemented |
async
Send sensor data via serial port.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `address` | `str` | Command type or data format identifier (e.g., "TEMP_DATA", "JSON_FORMAT") | required |
| `data` | `Union[Dict[str, Any], Any]` | Data to send (will be formatted according to address) | required |

Raises:

| Type | Description |
|---|---|
| `NotImplementedError` | Serial simulator not yet implemented |
services
Hardware API modules - lazy imports for independent service operation.
cameras
CameraManagerService - Service-based camera management API.
CameraManagerConnectionManager
CameraManagerConnectionManager(
url: Url | None = None,
server_id: UUID | None = None,
server_pid_file: str | None = None,
)
Bases: ConnectionManager
Connection Manager for CameraManagerService.
Provides strongly-typed methods for all camera management operations, making it easy to use the service programmatically from other applications.
async
Make GET request to service endpoint.
async
Make POST request to service endpoint.
async
Discover available camera backends.
Returns:

| Type | Description |
|---|---|
| `List[str]` | List of available backend names |
async
Get detailed information about all backends.
Returns:

| Type | Description |
|---|---|
| `Dict[str, Any]` | Dictionary mapping backend names to their information |
async
Discover available cameras from all or specific backends.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `backend` | `Optional[str]` | Optional backend name to filter by | None |

Returns:

| Type | Description |
|---|---|
| `List[str]` | List of camera names in format 'Backend:device_name' |
async
Open a single camera.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `camera` | `str` | Camera name in format 'Backend:device_name' | required |
| `test_connection` | `bool` | Test connection after opening | True |

Returns:

| Type | Description |
|---|---|
| `bool` | True if successful |
async
Open multiple cameras in batch.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `cameras` | `List[str]` | List of camera names | required |
| `test_connection` | `bool` | Test connection after opening | True |

Returns:

| Type | Description |
|---|---|
| `Dict[str, Any]` | Batch operation results |
async
Close a specific camera.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `camera` | `str` | Camera name to close | required |

Returns:

| Type | Description |
|---|---|
| `bool` | True if successful |
async
Close multiple cameras in batch.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `cameras` | `List[str]` | List of camera names to close | required |

Returns:

| Type | Description |
|---|---|
| `Dict[str, Any]` | Batch operation results |
async
Close all active cameras.
Returns:
| Type | Description |
|---|---|
| bool | True if successful |
async
Get list of currently active cameras.
Returns:
| Type | Description |
|---|---|
| List[str] | List of active camera names |
async
Get camera status information.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| camera | str | Camera name to query | required |
Returns:
| Type | Description |
|---|---|
| Dict[str, Any] | Camera status information |
async
Get detailed camera information.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| camera | str | Camera name to query | required |
Returns:
| Type | Description |
|---|---|
| Dict[str, Any] | Camera information |
async
Get camera capabilities information.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| camera | str | Camera name to query | required |
Returns:
| Type | Description |
|---|---|
| Dict[str, Any] | Camera capabilities |
async
Get system diagnostics information.
Returns:
| Type | Description |
|---|---|
| Dict[str, Any] | System diagnostics data |
async
Configure camera parameters.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| camera | str | Camera name to configure | required |
| properties | Dict[str, Any] | Configuration properties | required |
Returns:
| Type | Description |
|---|---|
| bool | True if successful |
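The request models listed later in this reference cover exposure, gain, ROI, trigger mode, pixel format, and white balance, so a configuration call plausibly takes a properties dict along these lines (the exact key names and units here are assumptions, not confirmed by this reference):

```python
# Hypothetical property keys; check the camera's capabilities before applying.
properties = {
    "exposure": 20000,  # assumed to be in microseconds
    "gain": 1.5,
    "roi": {"x": 0, "y": 0, "width": 1920, "height": 1080},
    "trigger_mode": "continuous",
    "pixel_format": "Mono8",
}
# await manager.configure_camera("Basler:cam0", properties)  # illustrative call
```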
async
Configure multiple cameras in batch.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| configurations | Dict[str, Dict[str, Any]] | Dictionary mapping camera names to their configurations | required |
Returns:
| Type | Description |
|---|---|
| Dict[str, Any] | Batch operation results |
async
Get current camera configuration.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| camera | str | Camera name to query | required |
Returns:
| Type | Description |
|---|---|
| Dict[str, Any] | Current camera configuration |
async
Import camera configuration from file.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| camera | str | Camera name | required |
| config_path | str | Path to configuration file | required |
Returns:
| Type | Description |
|---|---|
| Dict[str, Any] | Import operation result |
async
Export camera configuration to file.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| camera | str | Camera name | required |
| config_path | str | Path to save configuration file | required |
Returns:
| Type | Description |
|---|---|
| Dict[str, Any] | Export operation result |
async
capture_image(
camera: str, save_path: Optional[str] = None, output_format: str = "pil"
) -> Dict[str, Any]
Capture a single image.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| camera | str | Camera name | required |
| save_path | Optional[str] | Optional path to save image | None |
| output_format | str | Output format for returned image ("numpy" or "pil") | 'pil' |
Returns:
| Type | Description |
|---|---|
| Dict[str, Any] | Capture result |
async
Capture images from multiple cameras.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| cameras | List[str] | List of camera names | required |
| output_format | str | Output format for returned images ("numpy" or "pil") | 'pil' |
Returns:
| Type | Description |
|---|---|
| Dict[str, Any] | Batch capture results |
async
capture_hdr_image(
camera: str,
save_path_pattern: Optional[str] = None,
exposure_levels: int = 3,
exposure_multiplier: float = 2.0,
return_images: bool = True,
output_format: str = "pil",
) -> Dict[str, Any]
Capture HDR image sequence.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| camera | str | Camera name | required |
| save_path_pattern | Optional[str] | Path pattern with {exposure} placeholder | None |
| exposure_levels | int | Number of exposure levels | 3 |
| exposure_multiplier | float | Multiplier between exposures | 2.0 |
| return_images | bool | Return captured images | True |
| output_format | str | Output format for returned images ("numpy" or "pil") | 'pil' |
Returns:
| Type | Description |
|---|---|
| Dict[str, Any] | HDR capture result |
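To illustrate the HDR parameters: with exposure_levels=3 and exposure_multiplier=2.0, each capture uses double the previous exposure, and save_path_pattern fills an {exposure} placeholder per level. A sketch of that interpretation (the 10000 µs base exposure is an assumed starting point, and whether {exposure} receives the level index or the exposure value is not specified here; the sketch uses the index):

```python
def hdr_plan(base_exposure_us: float, levels: int, multiplier: float,
             pattern: str) -> list[tuple[float, str]]:
    """Expand HDR settings into (exposure, save path) pairs."""
    return [
        (base_exposure_us * multiplier**i, pattern.format(exposure=i))
        for i in range(levels)
    ]

plan = hdr_plan(10000, 3, 2.0, "/tmp/hdr_{exposure}.png")
# -> [(10000.0, '/tmp/hdr_0.png'), (20000.0, '/tmp/hdr_1.png'), (40000.0, '/tmp/hdr_2.png')]
```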
async
capture_hdr_images_batch(
cameras: List[str],
save_path_pattern: Optional[str] = None,
exposure_levels: int = 3,
exposure_multiplier: float = 2.0,
return_images: bool = True,
output_format: str = "pil",
) -> Dict[str, Any]
Capture HDR images from multiple cameras.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| cameras | List[str] | List of camera names | required |
| save_path_pattern | Optional[str] | Path pattern with {exposure} placeholder | None |
| exposure_levels | int | Number of exposure levels | 3 |
| exposure_multiplier | float | Multiplier between exposures | 2.0 |
| return_images | bool | Return captured images | True |
| output_format | str | Output format for returned images ("numpy" or "pil") | 'pil' |
Returns:
| Type | Description |
|---|---|
| Dict[str, Any] | Batch HDR capture results |
async
Get current bandwidth settings.
Returns:
| Type | Description |
|---|---|
| Dict[str, Any] | Bandwidth settings |
async
Set maximum concurrent capture limit.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| max_concurrent_captures | int | Maximum concurrent captures | required |
Returns:
| Type | Description |
|---|---|
| bool | True if successful |
async
Get network diagnostics information.
Returns:
| Type | Description |
|---|---|
| Dict[str, Any] | Network diagnostics data |
async
Configure stage+set capture groups with per-group concurrency semaphores.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| config | Dict[str, Dict[str, Dict[str, Any]]] | | required |
Returns:
| Type | Description |
|---|---|
| bool | True if successful |
async
Get current capture group configuration.
Returns:
| Type | Description |
|---|---|
| Dict[str, Any] | Dictionary of capture groups keyed by |
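The capture-group config is a three-level mapping, and per the model description later in this reference, each group gets a concurrency semaphore sized to its batch_size. A sketch of one plausible shape, stage → set → settings (the nesting key names and "cameras" key are assumptions; "batch_size" comes from the model documentation):

```python
import asyncio

# Hypothetical stage -> set -> settings mapping.
capture_groups = {
    "stage1": {
        "setA": {
            "batch_size": 2,
            "cameras": ["Basler:cam0", "Basler:cam1", "Basler:cam2"],
        },
    },
}

# One semaphore per group, sized to batch_size, so at most batch_size
# cameras within the group capture simultaneously.
semaphores = {
    (stage, group): asyncio.Semaphore(cfg["batch_size"])
    for stage, groups in capture_groups.items()
    for group, cfg in groups.items()
}
```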
CameraManagerService
Bases: Service
Camera Management Service.
Provides comprehensive camera management functionality through a Service-based architecture with MCP tool integration and async camera operations.
Initialize CameraManagerService.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| include_mocks | bool | Include mock cameras in discovery | False |
| **kwargs | | Additional Service initialization parameters | {} |
async
Get detailed information about all backends.
async
Discover available cameras from all or specific backends.
async
Open a single camera with exposure validation.
async
Open multiple cameras in batch.
async
Close a specific camera.
async
Close multiple cameras in batch.
async
Get list of currently active cameras.
async
Get camera status information.
async
Get detailed camera information.
async
Get camera capabilities information.
async
Configure camera parameters.
async
Configure multiple cameras in batch.
async
Get current camera configuration.
async
Import camera configuration from file.
async
Export camera configuration to file.
async
Capture a single image with timeout protection.
async
Capture images from multiple cameras.
async
Capture HDR image sequence.
async
Capture HDR images from multiple cameras.
async
Get network diagnostics information.
async
get_performance_settings(
request: CameraPerformanceSettingsRequest = None,
) -> CameraPerformanceSettingsResponse
Get current camera performance settings.
Returns global settings (timeout, retries, concurrent captures) and optionally per-camera GigE settings (packet_size, inter_packet_delay, bandwidth_limit) if camera is specified.
async
Update camera performance settings.
Updates global settings (timeout, retries, concurrent captures) and optionally per-camera GigE settings (packet_size, inter_packet_delay, bandwidth_limit) if camera is specified.
async
Configure stage+set capture groups with per-group concurrency semaphores.
async
Get current capture group configuration.
async
Remove all capture group configurations.
async
Start camera stream with resilient state management.
Stop camera stream with resilient state management.
async
Get camera stream status with resilient state management.
Get list of cameras with active streams.
async
Serve MJPEG video stream for a specific camera.
async
Get liquid lens hardware state for a camera.
async
Get current optical power (diopters) for a camera's liquid lens.
async
Set optical power (diopters) for a camera's liquid lens.
async
Trigger one-shot autofocus on a camera's liquid lens.
async
Get autofocus configuration for a camera's liquid lens.
async
Update autofocus configuration for a camera's liquid lens.
async
calibrate_homography_checkerboard(
request: HomographyCalibrateCheckerboardRequest,
) -> HomographyCalibrationResponse
Calibrate homography using checkerboard pattern detection.
calibrate_homography_correspondences(
request: HomographyCalibrateCorrespondencesRequest,
) -> HomographyCalibrationResponse
Calibrate homography from known point correspondences.
calibrate_homography_multi_view(
request: HomographyCalibrateMultiViewRequest,
) -> HomographyCalibrationResponse
Calibrate homography from multiple checkerboard positions on the same plane.
Ideal for calibrating long surfaces (metallic bars, conveyor belts) using a standard checkerboard moved to multiple positions.
measure_homography_box(
request: HomographyMeasureBoundingBoxRequest,
) -> HomographyMeasurementResponse
Measure bounding box dimensions using homography calibration.
measure_homography_batch(
request: HomographyMeasureBatchRequest,
) -> HomographyBatchMeasurementResponse
Unified batch measurement for bounding boxes and/or point-pair distances.
measure_homography_distance(
request: HomographyMeasureDistanceRequest,
) -> HomographyDistanceResponse
Measure distance between two points using homography calibration.
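To ground what these endpoints compute: a planar homography maps pixel coordinates to world coordinates on the calibrated plane, and measurements are taken between mapped points. A self-contained sketch using a toy calibration of 0.5 world units (e.g. mm) per pixel (the service's internal representation may differ):

```python
import math

def apply_homography(H, x, y):
    """Map pixel (x, y) to world coordinates through a 3x3 homography H."""
    w = H[2][0] * x + H[2][1] * y + H[2][2]
    return (
        (H[0][0] * x + H[0][1] * y + H[0][2]) / w,
        (H[1][0] * x + H[1][1] * y + H[1][2]) / w,
    )

# Toy calibration: pure scaling, 0.5 world units per pixel.
H = [[0.5, 0.0, 0.0], [0.0, 0.5, 0.0], [0.0, 0.0, 1.0]]
p1 = apply_homography(H, 100, 40)   # -> (50.0, 20.0)
p2 = apply_homography(H, 100, 240)  # -> (50.0, 120.0)
distance = math.dist(p1, p2)        # -> 100.0 world units
```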
async
Health check endpoint for container healthcheck.
connection_manager
Connection Manager for CameraManagerService.
Provides a strongly-typed client interface for programmatic access to camera management operations.
models
Models for CameraManagerService API.
Bases: BaseModel
Request model for backend filtering.
Bases: BaseModel
Request model for setting camera bandwidth limit.
Bases: BaseModel
Request model for setting bandwidth limit.
Bases: BaseModel
Request model for batch camera closing.
Bases: BaseModel
Request model for closing a camera.
Bases: BaseModel
Request model for camera configuration.
Bases: BaseModel
Request model for batch camera opening.
Bases: BaseModel
Request model for opening a camera.
Bases: BaseModel
Request model for updating camera performance settings.
Global settings (always applicable): - timeout_ms, retrieve_retry_count, max_concurrent_captures
Per-camera GigE settings (requires camera field, only for GigE cameras): - packet_size, inter_packet_delay, bandwidth_limit_mbps
Bases: BaseModel
Request model for camera query operations.
Bases: BaseModel
Request model for batch HDR image capture.
Bases: BaseModel
Request model for HDR image capture.
Bases: BaseModel
Request model for configuration file export.
Bases: BaseModel
Request model for configuration file import.
Bases: BaseModel
Request model for configuring stage+set capture groups.
Each group creates a concurrency semaphore sized to batch_size,
limiting how many cameras within the group can capture simultaneously.
Bases: BaseModel
Request model for exposure setting.
Bases: BaseModel
Request model for setting focus configuration.
Bases: BaseModel
Request model for gain setting.
Bases: BaseModel
Request model for checkerboard-based homography calibration.
Bases: BaseModel
Request model for multi-view checkerboard calibration.
Note: Checkerboard parameters (board_size, square_size, world_unit) are configured in HomographySettings (see config.py), not passed per-request.
Bases: BaseModel
Unified request model for batch measurements (bounding boxes and/or point-pair distances).
classmethod
Validate bounding boxes.
classmethod
Validate point pairs.
Bases: BaseModel
Request model for measuring a single bounding box.
Bases: BaseModel
Request model for measuring distance between two points.
Bases: BaseModel
Request model for image enhancement setting.
Bases: BaseModel
Request model for setting inter-packet delay.
Bases: BaseModel
Request model for setting optical power.
Bases: BaseModel
Request model for setting camera packet size.
Bases: BaseModel
Request model for pixel format setting.
Bases: BaseModel
Request model for ROI (Region of Interest) setting.
Bases: BaseModel
Request model for starting camera stream.
Bases: BaseModel
Request model for getting stream status.
Bases: BaseModel
Request model for stopping camera stream.
Bases: BaseModel
Request model for triggering one-shot autofocus.
Bases: BaseModel
Request model for trigger mode setting.
Bases: BaseModel
Request model for white balance setting.
Bases: BaseModel
Backend information model.
Bases: BaseModel
Bandwidth settings model.
Bases: BaseModel
Base response model for all API endpoints.
Bases: BaseModel
Batch operation result model.
Bases: BaseModel
Camera capabilities model.
Bases: BaseModel
Camera configuration model.
Bases: BaseModel
Camera information model.
Bases: BaseModel
Camera performance and retry settings model.
Global settings: - timeout_ms, retrieve_retry_count, max_concurrent_captures
Per-camera GigE settings (None if not applicable or not queried): - packet_size, inter_packet_delay, bandwidth_limit_mbps
Bases: BaseModel
Camera status model.
Bases: BaseModel
Capture group information model.
Bases: BaseModel
Capture result model.
Bases: BaseModel
Configuration file operation result.
Bases: BaseModel
Error detail model.
Bases: BaseModel
HDR capture result model.
Bases: BaseModel
Health check response model.
Bases: BaseModel
Batch measurement data containing both box and distance measurements.
Bases: BaseModel
Homography calibration result model.
Bases: BaseModel
Homography distance measurement result model.
Bases: BaseModel
Homography measurement result model.
Bases: BaseModel
Liquid lens hardware state.
Bases: BaseModel
Network diagnostics model.
Bases: BaseModel
Parameter range model.
Bases: BaseModel
Stream information model.
Bases: BaseModel
Stream status model.
Bases: BaseModel
System diagnostics model.
Request models for CameraManagerService.
Contains all Pydantic models for API requests, ensuring proper input validation and documentation for all camera operations.
Response models for CameraManagerService.
Contains all Pydantic models for API responses, ensuring consistent response formatting across all camera management endpoints.
schemas
TaskSchemas for CameraManagerService endpoints.
Backend and Discovery TaskSchemas.
Capture Group TaskSchemas for stage+set batching.
Image Capture TaskSchemas.
Camera Configuration TaskSchemas.
Liquid Lens and Focus Control TaskSchemas.
Health check TaskSchema.
Homography Calibration & Measurement TaskSchemas.
Camera Status and Information TaskSchemas.
Camera Lifecycle TaskSchemas.
Network and Performance TaskSchemas.
Streaming TaskSchemas.
service
CameraManagerService - Service-based API for camera management.
This service wraps AsyncCameraManager functionality in a Service-based architecture with comprehensive MCP tool integration and typed client access.
plcs
PLC API Service - REST API for PLC management and control.
PLCManagerConnectionManager
PLCManagerConnectionManager(
url: Url | None = None,
server_id: UUID | None = None,
server_pid_file: str | None = None,
)
Bases: ConnectionManager
Connection Manager for PLCManagerService.
Provides strongly-typed methods for all PLC management operations, making it easy to use the service programmatically from other applications.
async
Make GET request to service endpoint.
async
Make POST request to service endpoint.
async
Discover available PLC backends.
Returns:
| Type | Description |
|---|---|
| List[str] | List of available backend names |
async
Get detailed information about all backends.
Returns:
| Type | Description |
|---|---|
| Dict[str, Any] | Dictionary mapping backend names to their information |
async
Discover available PLCs from all or specific backends.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| backend | Optional[str] | Optional backend name to filter by | None |
Returns:
| Type | Description |
|---|---|
| List[str] | List of PLC identifiers |
async
connect_plc(
plc_name: str,
backend: str,
ip_address: str,
plc_type: Optional[str] = None,
connection_timeout: Optional[float] = None,
read_timeout: Optional[float] = None,
write_timeout: Optional[float] = None,
retry_count: Optional[int] = None,
retry_delay: Optional[float] = None,
) -> bool
Connect to a PLC.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| plc_name | str | Unique identifier for the PLC | required |
| backend | str | Backend type (AllenBradley, Siemens, Modbus) | required |
| ip_address | str | IP address of the PLC | required |
| plc_type | Optional[str] | Specific PLC type (logix, slc, cip, auto) | None |
| connection_timeout | Optional[float] | Connection timeout in seconds | None |
| read_timeout | Optional[float] | Tag read timeout in seconds | None |
| write_timeout | Optional[float] | Tag write timeout in seconds | None |
| retry_count | Optional[int] | Number of retry attempts | None |
| retry_delay | Optional[float] | Delay between retries in seconds | None |
Returns:
| Type | Description |
|---|---|
| bool | True if successful |
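As a usage sketch (the import path inside `demo()` is an assumption; the `connect_plc` name and parameters are as documented above), a small helper can assemble the call while dropping unset optionals:

```python
# Hypothetical sketch of calling connect_plc; the import path in demo() is an
# assumption, while the method name and parameters match the reference above.
from typing import Any, Dict, Optional


def connect_kwargs(
    plc_name: str,
    backend: str,
    ip_address: str,
    connection_timeout: Optional[float] = None,
) -> Dict[str, Any]:
    """Assemble keyword arguments for connect_plc, omitting unset optionals."""
    kwargs: Dict[str, Any] = {
        "plc_name": plc_name,
        "backend": backend,
        "ip_address": ip_address,
    }
    if connection_timeout is not None:
        kwargs["connection_timeout"] = connection_timeout
    return kwargs


async def demo() -> bool:
    # Deferred import keeps this sketch importable without the package installed.
    from mindtrace.hardware.plcs.api import PLCManagerConnectionManager  # assumed path

    manager = PLCManagerConnectionManager()
    return await manager.connect_plc(**connect_kwargs("PLC1", "AllenBradley", "192.168.1.100", 5.0))
# run with: asyncio.run(demo()) against a running PLCManagerService
```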
async
Connect to multiple PLCs in batch.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| plcs | List[PLCConnectRequest] | List of PLC connection requests | required |
Returns:
| Type | Description |
|---|---|
| Dict[str, Any] | Batch operation results |
async
Disconnect from a PLC.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| plc | str | PLC name to disconnect | required |
Returns:
| Type | Description |
|---|---|
| bool | True if successful |
async
Disconnect from multiple PLCs in batch.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| plcs | List[str] | List of PLC names to disconnect | required |
Returns:
| Type | Description |
|---|---|
| Dict[str, Any] | Batch operation results |
async
Disconnect from all active PLCs.
Returns:
| Type | Description |
|---|---|
| bool | True if successful |
async
Get list of currently active PLCs.
Returns:
| Type | Description |
|---|---|
| List[str] | List of active PLC names |
async
Read tag values from a PLC.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| plc | str | PLC name | required |
| tags | Union[str, List[str]] | Single tag name or list of tag names | required |
Returns:
| Type | Description |
|---|---|
| Dict[str, Any] | Dictionary mapping tag names to their values |
async
Write tag values to a PLC.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| plc | str | PLC name | required |
| tags | Union[Tuple[str, Any], List[Tuple[str, Any]]] | Single (tag_name, value) tuple or list of tuples | required |
Returns:
| Type | Description |
|---|---|
| Dict[str, bool] | Dictionary mapping tag names to write success status |
async
Read tags from multiple PLCs in batch.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| requests | List[Tuple[str, Union[str, List[str]]]] | List of (plc_name, tags) tuples | required |
Returns:
| Type | Description |
|---|---|
| Dict[str, Dict[str, Any]] | Dictionary mapping PLC names to their tag read results |
async
write_tags_batch(
requests: List[Tuple[str, Union[Tuple[str, Any], List[Tuple[str, Any]]]]],
) -> Dict[str, Dict[str, bool]]
Write tags to multiple PLCs in batch.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| requests | List[Tuple[str, Union[Tuple[str, Any], List[Tuple[str, Any]]]]] | List of (plc_name, tags) tuples | required |
Returns:
| Type | Description |
|---|---|
| Dict[str, Dict[str, bool]] | Dictionary mapping PLC names to their tag write results |
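The nested `requests` type is easier to build with a small shaping helper. This sketch (the helper name and sample tag names are illustrative) produces the `(plc_name, [(tag, value), ...])` list that `write_tags_batch` expects:

```python
# Sketch: shape a {plc_name: {tag: value}} plan into the requests argument of
# write_tags_batch (signature as documented above). Tag names are illustrative.
from typing import Any, Dict, List, Tuple

WriteBatch = List[Tuple[str, List[Tuple[str, Any]]]]


def build_write_batch(plan: Dict[str, Dict[str, Any]]) -> WriteBatch:
    """Convert a per-PLC dict of tag values into (plc_name, [(tag, value), ...]) tuples."""
    return [(plc, list(tags.items())) for plc, tags in plan.items()]


requests = build_write_batch({
    "PLC1": {"Motor1.Speed": 1200, "Motor1.Enable": True},
    "PLC2": {"Conveyor.Run": False},
})
# requests == [("PLC1", [("Motor1.Speed", 1200), ("Motor1.Enable", True)]),
#              ("PLC2", [("Conveyor.Run", False)])]
# Then: results = await manager.write_tags_batch(requests)
```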
async
List all available tags on a PLC.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| plc | str | PLC name | required |
Returns:
| Type | Description |
|---|---|
| List[str] | List of tag names |
async
Get detailed information about a specific tag.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| plc | str | PLC name | required |
| tag | str | Tag name | required |
Returns:
| Type | Description |
|---|---|
| Dict[str, Any] | Tag information |
async
Get PLC status information.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| plc | str | PLC name to query | required |
Returns:
| Type | Description |
|---|---|
| Dict[str, Any] | PLC status information |
async
Get detailed PLC information.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| plc | str | PLC name to query | required |
Returns:
| Type | Description |
|---|---|
| Dict[str, Any] | PLC information |
PLCManagerService
Bases: Service
PLC Management Service.
Provides comprehensive PLC management functionality through a Service-based architecture with MCP tool integration and async PLC operations.
Initialize PLCManagerService.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| **kwargs | | Additional Service initialization parameters | {} |
Get detailed information about all backends.
async
Discover available PLCs from all or specific backends.
async
Connect to multiple PLCs in batch.
async
Disconnect from a PLC.
async
Disconnect from multiple PLCs in batch.
async
Write tag values to a PLC.
async
Read tags from multiple PLCs in batch.
async
Write tags to multiple PLCs in batch.
async
List all available tags on a PLC.
async
Get detailed information about a specific tag.
async
Get PLC status information.
async
Get detailed PLC information.
async
Get system diagnostics information.
connection_manager
Connection Manager for PLCManagerService.
Provides a strongly-typed client interface for programmatic access to PLC management operations.
PLCManagerConnectionManager(
url: Url | None = None,
server_id: UUID | None = None,
server_pid_file: str | None = None,
)
Bases: ConnectionManager
Connection Manager for PLCManagerService.
Provides strongly-typed methods for all PLC management operations, making it easy to use the service programmatically from other applications.
async
Make GET request to service endpoint.
async
Make POST request to service endpoint.
async
Discover available PLC backends.
Returns:
| Type | Description |
|---|---|
| List[str] | List of available backend names |
async
Get detailed information about all backends.
Returns:
| Type | Description |
|---|---|
| Dict[str, Any] | Dictionary mapping backend names to their information |
async
Discover available PLCs from all or specific backends.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| backend | Optional[str] | Optional backend name to filter by | None |
Returns:
| Type | Description |
|---|---|
| List[str] | List of PLC identifiers |
async
connect_plc(
plc_name: str,
backend: str,
ip_address: str,
plc_type: Optional[str] = None,
connection_timeout: Optional[float] = None,
read_timeout: Optional[float] = None,
write_timeout: Optional[float] = None,
retry_count: Optional[int] = None,
retry_delay: Optional[float] = None,
) -> bool
Connect to a PLC.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| plc_name | str | Unique identifier for the PLC | required |
| backend | str | Backend type (AllenBradley, Siemens, Modbus) | required |
| ip_address | str | IP address of the PLC | required |
| plc_type | Optional[str] | Specific PLC type (logix, slc, cip, auto) | None |
| connection_timeout | Optional[float] | Connection timeout in seconds | None |
| read_timeout | Optional[float] | Tag read timeout in seconds | None |
| write_timeout | Optional[float] | Tag write timeout in seconds | None |
| retry_count | Optional[int] | Number of retry attempts | None |
| retry_delay | Optional[float] | Delay between retries in seconds | None |
Returns:
| Type | Description |
|---|---|
| bool | True if successful |
async
Connect to multiple PLCs in batch.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| plcs | List[PLCConnectRequest] | List of PLC connection requests | required |
Returns:
| Type | Description |
|---|---|
| Dict[str, Any] | Batch operation results |
async
Disconnect from a PLC.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| plc | str | PLC name to disconnect | required |
Returns:
| Type | Description |
|---|---|
| bool | True if successful |
async
Disconnect from multiple PLCs in batch.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| plcs | List[str] | List of PLC names to disconnect | required |
Returns:
| Type | Description |
|---|---|
| Dict[str, Any] | Batch operation results |
async
Disconnect from all active PLCs.
Returns:
| Type | Description |
|---|---|
| bool | True if successful |
async
Get list of currently active PLCs.
Returns:
| Type | Description |
|---|---|
| List[str] | List of active PLC names |
async
Read tag values from a PLC.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| plc | str | PLC name | required |
| tags | Union[str, List[str]] | Single tag name or list of tag names | required |
Returns:
| Type | Description |
|---|---|
| Dict[str, Any] | Dictionary mapping tag names to their values |
async
Write tag values to a PLC.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| plc | str | PLC name | required |
| tags | Union[Tuple[str, Any], List[Tuple[str, Any]]] | Single (tag_name, value) tuple or list of tuples | required |
Returns:
| Type | Description |
|---|---|
| Dict[str, bool] | Dictionary mapping tag names to write success status |
async
Read tags from multiple PLCs in batch.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| requests | List[Tuple[str, Union[str, List[str]]]] | List of (plc_name, tags) tuples | required |
Returns:
| Type | Description |
|---|---|
| Dict[str, Dict[str, Any]] | Dictionary mapping PLC names to their tag read results |
async
write_tags_batch(
requests: List[Tuple[str, Union[Tuple[str, Any], List[Tuple[str, Any]]]]],
) -> Dict[str, Dict[str, bool]]
Write tags to multiple PLCs in batch.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| requests | List[Tuple[str, Union[Tuple[str, Any], List[Tuple[str, Any]]]]] | List of (plc_name, tags) tuples | required |
Returns:
| Type | Description |
|---|---|
| Dict[str, Dict[str, bool]] | Dictionary mapping PLC names to their tag write results |
async
List all available tags on a PLC.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| plc | str | PLC name | required |
Returns:
| Type | Description |
|---|---|
| List[str] | List of tag names |
async
Get detailed information about a specific tag.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| plc | str | PLC name | required |
| tag | str | Tag name | required |
Returns:
| Type | Description |
|---|---|
| Dict[str, Any] | Tag information |
async
Get PLC status information.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| plc | str | PLC name to query | required |
Returns:
| Type | Description |
|---|---|
| Dict[str, Any] | PLC status information |
async
Get detailed PLC information.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| plc | str | PLC name to query | required |
Returns:
| Type | Description |
|---|---|
| Dict[str, Any] | PLC information |
models
PLC API models - Request and Response models.
Bases: BaseModel
Request model for backend filtering.
Bases: BaseModel
Request model for batch PLC connection.
Bases: BaseModel
Request model for connecting to a PLC.
Bases: BaseModel
Request model for batch PLC disconnection.
Bases: BaseModel
Request model for disconnecting from a PLC.
Bases: BaseModel
Request model for PLC query operations.
Bases: BaseModel
Request model for getting tag information.
Bases: BaseModel
Backend information model.
Bases: BaseModel
Base response model for all API endpoints.
Bases: BaseModel
Batch operation result model.
Bases: BaseModel
Health check response model.
Bases: BaseModel
PLC information model.
Bases: BaseModel
PLC status model.
Bases: BaseModel
System diagnostics model.
Bases: BaseModel
Tag information model.
Request models for PLCManagerService.
Contains all Pydantic models for API requests, ensuring proper input validation and documentation for all PLC operations.
Bases: BaseModel
Request model for backend filtering.
Bases: BaseModel
Request model for connecting to a PLC.
Bases: BaseModel
Request model for batch PLC connection.
Bases: BaseModel
Request model for disconnecting from a PLC.
Bases: BaseModel
Request model for batch PLC disconnection.
Bases: BaseModel
Request model for PLC query operations.
Bases: BaseModel
Request model for getting tag information.
Response models for PLCManagerService.
Contains all Pydantic models for API responses, ensuring consistent response formatting across all PLC management endpoints.
Bases: BaseModel
Base response model for all API endpoints.
Bases: BaseModel
Backend information model.
Bases: BaseModel
PLC information model.
Bases: BaseModel
PLC status model.
Bases: BaseModel
Tag information model.
Bases: BaseModel
Batch operation result model.
Bases: BaseModel
System diagnostics model.
Bases: BaseModel
Health check response model.
schemas
TaskSchemas for PLCManagerService endpoints.
Backend and Discovery TaskSchemas.
Health check TaskSchema.
PLC Lifecycle TaskSchemas.
Status & Information TaskSchemas.
Tag Operations TaskSchemas.
service
PLCManagerService - Service-based API for PLC management.
This service wraps PLCManager functionality in a Service-based architecture with comprehensive MCP tool integration and typed client access.
Bases: Service
PLC Management Service.
Provides comprehensive PLC management functionality through a Service-based architecture with MCP tool integration and async PLC operations.
Initialize PLCManagerService.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| **kwargs | | Additional Service initialization parameters | {} |
Get detailed information about all backends.
async
Discover available PLCs from all or specific backends.
async
Connect to multiple PLCs in batch.
async
Disconnect from a PLC.
async
Disconnect from multiple PLCs in batch.
async
Write tag values to a PLC.
async
Read tags from multiple PLCs in batch.
async
Write tags to multiple PLCs in batch.
async
List all available tags on a PLC.
async
Get detailed information about a specific tag.
async
Get PLC status information.
async
Get detailed PLC information.
async
Get system diagnostics information.
scanners_3d
Scanner3DService - Service-based 3D scanner management API.
Scanner3DConnectionManager
Scanner3DConnectionManager(
url: Url | None = None,
server_id: UUID | None = None,
server_pid_file: str | None = None,
)
Bases: ConnectionManager
Connection Manager for Scanner3DService.
Provides strongly-typed methods for all 3D scanner management operations, making it easy to use the service programmatically from other applications.
async
Make GET request to service endpoint.
async
Make POST request to service endpoint.
async
Get detailed information about all backends.
async
Discover available 3D scanners.
async
Open a 3D scanner.
async
Open multiple 3D scanners.
async
Close multiple 3D scanners.
async
Configure scanner parameters.
async
Configure multiple scanners.
async
Get scanner configuration.
async
capture_scan(
scanner: str,
save_range_path: Optional[str] = None,
save_intensity_path: Optional[str] = None,
enable_range: bool = True,
enable_intensity: bool = True,
enable_confidence: bool = False,
enable_normal: bool = False,
enable_color: bool = False,
timeout_ms: int = 10000,
output_format: str = "numpy",
) -> Dict[str, Any]
Capture 3D scan data.
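The `enable_*` flags map naturally onto a set of component names. This sketch (the helper name and scanner name are illustrative; the flag names match the signature above) builds the keyword arguments for `capture_scan`:

```python
# Sketch: translate a set of component names into the enable_* flags that
# capture_scan accepts (see the signature above). Helper name is illustrative.
from typing import Any, Dict, Iterable


def scan_kwargs(scanner: str, components: Iterable[str], timeout_ms: int = 10000) -> Dict[str, Any]:
    """Build capture_scan keyword arguments, enabling only the requested components."""
    known = {"range", "intensity", "confidence", "normal", "color"}
    wanted = set(components)
    unknown = wanted - known
    if unknown:
        raise ValueError(f"unknown components: {sorted(unknown)}")
    kwargs: Dict[str, Any] = {"scanner": scanner, "timeout_ms": timeout_ms}
    for name in known:
        kwargs[f"enable_{name}"] = name in wanted
    return kwargs

# scan_kwargs("phoxi-01", ["range", "normal"]) enables range and normal only;
# pass the result as: await manager.capture_scan(**scan_kwargs(...))
```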
async
capture_scan_batch(
captures: List[Dict[str, Any]], output_format: str = "numpy"
) -> Dict[str, Any]
Capture scans from multiple scanners.
async
capture_point_cloud(
scanner: str,
save_path: Optional[str] = None,
include_colors: bool = True,
include_confidence: bool = False,
downsample_factor: int = 1,
output_format: str = "numpy",
) -> Dict[str, Any]
Capture and generate 3D point cloud.
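As a usage sketch for point cloud capture (the import path inside `demo()`, the scanner name, and the output filename are assumptions; the parameters match the signature above):

```python
# Hypothetical sketch of calling capture_point_cloud; the import path, scanner
# name, and save path are assumptions. Parameters match the reference above.
from typing import Any, Dict, Optional


def point_cloud_kwargs(
    scanner: str,
    save_path: Optional[str] = None,
    downsample_factor: int = 1,
) -> Dict[str, Any]:
    """Build capture_point_cloud keyword arguments, validating the downsample factor."""
    if downsample_factor < 1:
        raise ValueError("downsample_factor must be >= 1")
    kwargs: Dict[str, Any] = {"scanner": scanner, "downsample_factor": downsample_factor}
    if save_path is not None:
        kwargs["save_path"] = save_path
    return kwargs


async def demo() -> Dict[str, Any]:
    from mindtrace.hardware.scanners_3d.api import Scanner3DConnectionManager  # assumed path

    manager = Scanner3DConnectionManager()
    return await manager.capture_point_cloud(**point_cloud_kwargs("phoxi-01", "scan.ply", 2))
```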
Scanner3DService
Bases: Service
3D Scanner Management Service.
Provides comprehensive REST API and MCP tools for managing 3D scanners with multi-component capture capabilities (range, intensity, confidence, normals, color, point clouds).
Supported Operations:
- Backend discovery and information
- Scanner lifecycle management (open, close, status)
- Multi-component capture (range, intensity, confidence, normals, color)
- Point cloud generation with optional color and confidence
- Scanner configuration (exposure, trigger mode)
- Batch operations for multiple scanners
- System diagnostics and monitoring
Initialize Scanner3DService.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| **kwargs | | Additional arguments passed to Service base class | {} |
async
Discover available 3D scanners.
async
Open a 3D scanner connection.
async
Open multiple scanners.
async
Close a 3D scanner connection.
async
Close multiple scanners.
async
Get list of active scanners.
async
Get scanner status.
async
Get scanner information.
async
Get system diagnostics.
async
Get scanner capabilities and available settings.
async
Configure scanner parameters.
async
Configure multiple scanners.
async
Get scanner configuration.
async
Capture 3D scan data.
async
Capture scans from multiple scanners.
async
Capture and generate 3D point cloud.
connection_manager
Connection Manager for Scanner3DService.
Provides a strongly-typed client interface for programmatic access to 3D scanner management operations.
Scanner3DConnectionManager(
url: Url | None = None,
server_id: UUID | None = None,
server_pid_file: str | None = None,
)
Bases: ConnectionManager
Connection Manager for Scanner3DService.
Provides strongly-typed methods for all 3D scanner management operations, making it easy to use the service programmatically from other applications.
async
Make GET request to service endpoint.
async
Make POST request to service endpoint.
async
Get detailed information about all backends.
async
Discover available 3D scanners.
async
Open a 3D scanner.
async
Open multiple 3D scanners.
async
Close multiple 3D scanners.
async
Configure scanner parameters.
async
Configure multiple scanners.
async
Get scanner configuration.
async
capture_scan(
scanner: str,
save_range_path: Optional[str] = None,
save_intensity_path: Optional[str] = None,
enable_range: bool = True,
enable_intensity: bool = True,
enable_confidence: bool = False,
enable_normal: bool = False,
enable_color: bool = False,
timeout_ms: int = 10000,
output_format: str = "numpy",
) -> Dict[str, Any]
Capture 3D scan data.
async
capture_scan_batch(
captures: List[Dict[str, Any]], output_format: str = "numpy"
) -> Dict[str, Any]
Capture scans from multiple scanners.
async
capture_point_cloud(
scanner: str,
save_path: Optional[str] = None,
include_colors: bool = True,
include_confidence: bool = False,
downsample_factor: int = 1,
output_format: str = "numpy",
) -> Dict[str, Any]
Capture and generate 3D point cloud.
models
Models for Scanner3DService.
Bases: BaseModel
Request model for backend filtering.
Bases: BaseModel
Request model for batch point cloud capture.
Bases: BaseModel
Request model for point cloud capture.
Bases: BaseModel
Request model for batch scan capture.
Bases: BaseModel
Request model for 3D scan capture.
Bases: BaseModel
Request model for batch scanner closing.
Bases: BaseModel
Request model for closing a 3D scanner.
Bases: BaseModel
Request model for batch scanner configuration.
Bases: BaseModel
Request model for scanner configuration.
All fields are optional - only provided fields will be applied.
Bases: BaseModel
Request model for batch scanner opening.
Bases: BaseModel
Request model for opening a 3D scanner.
Bases: BaseModel
Request model for scanner queries.
Bases: BaseModel
3D scanner backend information model.
Bases: BaseModel
Base response model for all API endpoints.
Bases: BaseModel
Individual batch operation result.
Bases: BaseModel
Health check response model.
Bases: BaseModel
Batch point cloud capture result model.
Bases: BaseModel
Point cloud capture result model.
Bases: BaseModel
Batch scan capture result model.
Bases: BaseModel
Scan capture result model.
Bases: BaseModel
Scanner capabilities model.
Bases: BaseModel
3D scanner configuration model.
Bases: BaseModel
3D scanner information model.
Bases: BaseModel
3D scanner status model.
Bases: BaseModel
System diagnostics model.
Request models for Scanner3DService.
Contains all Pydantic models for API requests, ensuring proper input validation and documentation for all 3D scanner operations.
Bases: BaseModel
Request model for backend filtering.
Bases: BaseModel
Request model for opening a 3D scanner.
Bases: BaseModel
Request model for batch scanner opening.
Bases: BaseModel
Request model for closing a 3D scanner.
Bases: BaseModel
Request model for batch scanner closing.
Bases: BaseModel
Request model for scanner queries.
Bases: BaseModel
Request model for scanner configuration.
All fields are optional - only provided fields will be applied.
Bases: BaseModel
Request model for batch scanner configuration.
Bases: BaseModel
Request model for 3D scan capture.
Bases: BaseModel
Request model for batch scan capture.
Bases: BaseModel
Request model for point cloud capture.
Bases: BaseModel
Request model for batch point cloud capture.
Response models for Scanner3DService.
Contains all Pydantic models for API responses, ensuring consistent response formatting across all 3D scanner management endpoints.
Bases: BaseModel
Base response model for all API endpoints.
Bases: BaseModel
3D scanner backend information model.
Bases: BaseModel
3D scanner status model.
Bases: BaseModel
3D scanner information model.
Bases: BaseModel
3D scanner configuration model.
Bases: BaseModel
Scanner capabilities model.
Bases: BaseModel
Scan capture result model.
Bases: BaseModel
Batch scan capture result model.
Bases: BaseModel
Point cloud capture result model.
Bases: BaseModel
Batch point cloud capture result model.
Bases: BaseModel
Individual batch operation result.
Bases: BaseModel
System diagnostics model.
Bases: BaseModel
Health check response model.
schemas
MCP TaskSchemas for Scanner3DService.
Scanner Capture TaskSchemas.
Scanner Configuration TaskSchemas.
Health Check TaskSchemas.
Scanner Information TaskSchemas.
Scanner Lifecycle TaskSchemas.
service
Scanner3DService - Service-based API for 3D scanner management.
This service provides comprehensive REST API and MCP tools for managing 3D scanners (Photoneo PhoXi, etc.) with multi-component capture capabilities.
Bases: Service
3D Scanner Management Service.
Provides comprehensive REST API and MCP tools for managing 3D scanners with multi-component capture capabilities (range, intensity, confidence, normals, color, point clouds).
Supported Operations:
- Backend discovery and information
- Scanner lifecycle management (open, close, status)
- Multi-component capture (range, intensity, confidence, normals, color)
- Point cloud generation with optional color and confidence
- Scanner configuration (exposure, trigger mode)
- Batch operations for multiple scanners
- System diagnostics and monitoring
Initialize Scanner3DService.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| **kwargs | | Additional arguments passed to Service base class | {} |
async
Discover available 3D scanners.
async
Open a 3D scanner connection.
async
Open multiple scanners.
async
Close a 3D scanner connection.
async
Close multiple scanners.
async
Get list of active scanners.
async
Get scanner status.
async
Get scanner information.
async
Get system diagnostics.
async
Get scanner capabilities and available settings.
async
Configure scanner parameters.
async
Configure multiple scanners.
async
Get scanner configuration.
async
Capture 3D scan data.
async
Capture scans from multiple scanners.
async
Capture and generate 3D point cloud.
sensors
Sensor API module providing service and connection management.
SensorConnectionManager
SensorConnectionManager(
url: Url | None = None,
server_id: UUID | None = None,
server_pid_file: str | None = None,
)
Bases: ConnectionManager
Strongly-typed connection manager for sensor service operations.
async
connect_sensor(
sensor_id: str, backend_type: str, config: Dict[str, Any], address: str
) -> SensorConnectionResponse
Connect to a sensor with specified configuration.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| sensor_id | str | Unique identifier for the sensor | required |
| backend_type | str | Backend type (mqtt, http, serial) | required |
| config | Dict[str, Any] | Backend-specific configuration | required |
| address | str | Sensor address (topic, endpoint, or port) | required |
Returns:
| Type | Description |
|---|---|
| SensorConnectionResponse | Response indicating success/failure of connection |
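As a usage sketch (the MQTT config key `broker_url` and the import path inside `demo()` are assumptions; the positional argument order matches the signature above):

```python
# Sketch of assembling arguments for connect_sensor. The "broker_url" config key
# and the import path are assumptions; the argument order matches the reference.
from typing import Any, Dict, Tuple


def mqtt_request(sensor_id: str, broker_url: str, topic: str) -> Tuple[str, str, Dict[str, Any], str]:
    """Return the (sensor_id, backend_type, config, address) arguments for an MQTT-backed sensor."""
    return sensor_id, "mqtt", {"broker_url": broker_url}, topic


async def demo():
    from mindtrace.hardware.sensors.api import SensorConnectionManager  # assumed path

    manager = SensorConnectionManager()
    return await manager.connect_sensor(*mqtt_request("temp1", "mqtt://localhost:1883", "sensors/temp"))
```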
async
Disconnect from a connected sensor.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| sensor_id | str | Unique identifier for the sensor to disconnect | required |
Returns:
| Type | Description |
|---|---|
| SensorConnectionResponse | Response indicating success/failure of disconnection |
async
Read data from a connected sensor.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| sensor_id | str | Unique identifier for the sensor | required |
| timeout | Optional[float] | Optional read timeout in seconds | None |
Returns:
| Type | Description |
|---|---|
| SensorDataResponse | Response containing sensor data or error information |
async
Get status information for a sensor.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| sensor_id | str | Unique identifier for the sensor | required |
Returns:
| Type | Description |
|---|---|
| SensorStatusResponse | Response containing sensor status information |
async
List all registered sensors.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| include_status | bool | Whether to include connection status for each sensor | False |
Returns:
| Type | Description |
|---|---|
| SensorListResponse | Response containing list of sensors |
async
connect_mqtt_sensor(
sensor_id: str, broker_url: str, identifier: str, address: str
) -> SensorConnectionResponse
Connect to an MQTT sensor with simplified parameters.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| sensor_id | str | Unique identifier for the sensor | required |
| broker_url | str | MQTT broker URL (e.g., "mqtt://localhost:1883") | required |
| identifier | str | Client identifier for MQTT connection | required |
| address | str | MQTT topic to subscribe to | required |
Returns:
| Type | Description |
|---|---|
| SensorConnectionResponse | Response indicating success/failure of connection |
async
connect_http_sensor(
sensor_id: str,
base_url: str,
address: str,
headers: Optional[Dict[str, str]] = None,
) -> SensorConnectionResponse
Connect to an HTTP sensor with simplified parameters.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| sensor_id | str | Unique identifier for the sensor | required |
| base_url | str | Base URL for HTTP requests | required |
| address | str | Endpoint path for sensor data | required |
| headers | Optional[Dict[str, str]] | Optional HTTP headers | None |
Returns:
| Type | Description |
|---|---|
| SensorConnectionResponse | Response indicating success/failure of connection |
async
connect_serial_sensor(
sensor_id: str, port: str, baudrate: int = 9600, timeout: float = 1.0
) -> SensorConnectionResponse
Connect to a serial sensor with simplified parameters.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| sensor_id | str | Unique identifier for the sensor | required |
| port | str | Serial port (e.g., "/dev/ttyUSB0" or "COM1") | required |
| baudrate | int | Serial communication baud rate | 9600 |
| timeout | float | Serial read timeout | 1.0 |
Returns:
| Type | Description |
|---|---|
| SensorConnectionResponse | Response indicating success/failure of connection |
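The three simplified helpers share a common shape, so a caller can dispatch on transport name. This sketch assumes only the helper names documented above; the dispatch function itself is illustrative:

```python
# Sketch: route to connect_mqtt_sensor / connect_http_sensor / connect_serial_sensor
# by transport name. Only the helper names are taken from the reference above;
# the dispatch function is illustrative.
from typing import Any

_HELPERS = {
    "mqtt": "connect_mqtt_sensor",
    "http": "connect_http_sensor",
    "serial": "connect_serial_sensor",
}


async def connect_by_transport(manager: Any, transport: str, sensor_id: str, **kwargs: Any) -> Any:
    """Dispatch to the matching connect_*_sensor helper, passing transport-specific kwargs through."""
    try:
        helper_name = _HELPERS[transport]
    except KeyError:
        raise ValueError(f"unsupported transport: {transport!r}") from None
    return await getattr(manager, helper_name)(sensor_id=sensor_id, **kwargs)
```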
SensorManagerService
Bases: Service
Service wrapper for SensorManager with MCP endpoint registration.
Initialize the sensor manager service.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| manager | Optional[SensorManager] | Optional SensorManager instance. If None, creates a new one. | None |
| **kwargs | | Additional arguments passed to the Service base class | {} |
async
Connect to a sensor with specified configuration.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| request | SensorConnectionRequest | Connection request with sensor configuration | required |
Returns:
| Type | Description |
|---|---|
| SensorConnectionResponse | Response indicating success/failure of connection |
async
Disconnect from a connected sensor.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| request | SensorStatusRequest | Request containing sensor_id to disconnect | required |
Returns:
| Type | Description |
|---|---|
| SensorConnectionResponse | Response indicating success/failure of disconnection |
async
Read data from a connected sensor.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| request | SensorDataRequest | Request specifying sensor and read parameters | required |
Returns:
| Type | Description |
|---|---|
| SensorDataResponse | Response containing sensor data or error information |
async
Get status information for a sensor.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| request | SensorStatusRequest | Request containing sensor_id | required |
Returns:
| Type | Description |
|---|---|
| SensorStatusResponse | Response containing sensor status information |
async
List all registered sensors.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| request | SensorListRequest | Request with listing options | required |
Returns:
| Type | Description |
|---|---|
| SensorListResponse | Response containing list of sensors |
connection_manager
Connection manager for typed sensor service client access.
SensorConnectionManager(
url: Url | None = None,
server_id: UUID | None = None,
server_pid_file: str | None = None,
)
Bases: ConnectionManager
Strongly-typed connection manager for sensor service operations.
async
connect_sensor(
sensor_id: str, backend_type: str, config: Dict[str, Any], address: str
) -> SensorConnectionResponse
Connect to a sensor with specified configuration.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| sensor_id | str | Unique identifier for the sensor | required |
| backend_type | str | Backend type (mqtt, http, serial) | required |
| config | Dict[str, Any] | Backend-specific configuration | required |
| address | str | Sensor address (topic, endpoint, or port) | required |
Returns:
| Type | Description |
|---|---|
| SensorConnectionResponse | Response indicating success/failure of connection |
async
Disconnect from a connected sensor.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| sensor_id | str | Unique identifier for the sensor to disconnect | required |
Returns:
| Type | Description |
|---|---|
| SensorConnectionResponse | Response indicating success/failure of disconnection |
async
Read data from a connected sensor.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| sensor_id | str | Unique identifier for the sensor | required |
| timeout | Optional[float] | Optional read timeout in seconds | None |
Returns:
| Type | Description |
|---|---|
| SensorDataResponse | Response containing sensor data or error information |
async
Get status information for a sensor.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `sensor_id` | `str` | Unique identifier for the sensor | required |

Returns:

| Type | Description |
|---|---|
| `SensorStatusResponse` | Response containing sensor status information |
async
List all registered sensors.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `include_status` | `bool` | Whether to include connection status for each sensor | `False` |

Returns:

| Type | Description |
|---|---|
| `SensorListResponse` | Response containing list of sensors |
async
connect_mqtt_sensor(
sensor_id: str, broker_url: str, identifier: str, address: str
) -> SensorConnectionResponse
Connect to an MQTT sensor with simplified parameters.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `sensor_id` | `str` | Unique identifier for the sensor | required |
| `broker_url` | `str` | MQTT broker URL (e.g., "mqtt://localhost:1883") | required |
| `identifier` | `str` | Client identifier for MQTT connection | required |
| `address` | `str` | MQTT topic to subscribe to | required |

Returns:

| Type | Description |
|---|---|
| `SensorConnectionResponse` | Response indicating success/failure of connection |
async
connect_http_sensor(
sensor_id: str,
base_url: str,
address: str,
headers: Optional[Dict[str, str]] = None,
) -> SensorConnectionResponse
Connect to an HTTP sensor with simplified parameters.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `sensor_id` | `str` | Unique identifier for the sensor | required |
| `base_url` | `str` | Base URL for HTTP requests | required |
| `address` | `str` | Endpoint path for sensor data | required |
| `headers` | `Optional[Dict[str, str]]` | Optional HTTP headers | `None` |

Returns:

| Type | Description |
|---|---|
| `SensorConnectionResponse` | Response indicating success/failure of connection |
async
connect_serial_sensor(
sensor_id: str, port: str, baudrate: int = 9600, timeout: float = 1.0
) -> SensorConnectionResponse
Connect to a serial sensor with simplified parameters.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `sensor_id` | `str` | Unique identifier for the sensor | required |
| `port` | `str` | Serial port (e.g., "/dev/ttyUSB0" or "COM1") | required |
| `baudrate` | `int` | Serial communication baud rate | `9600` |
| `timeout` | `float` | Serial read timeout | `1.0` |

Returns:

| Type | Description |
|---|---|
| `SensorConnectionResponse` | Response indicating success/failure of connection |
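The three convenience connectors above can be exercised together; in this hedged sketch `cm` is assumed to be a `SensorConnectionManager`, and every id, URL, port, and header value is illustrative:

```python
import asyncio

async def connect_example_sensors(cm) -> None:
    # MQTT sensor: subscribes to a topic on a hypothetical local broker
    await cm.connect_mqtt_sensor(
        sensor_id="temp1",
        broker_url="mqtt://localhost:1883",
        identifier="mindtrace-client",
        address="sensors/temp",
    )
    # HTTP sensor: polls an endpoint; headers are optional
    await cm.connect_http_sensor(
        sensor_id="humidity1",
        base_url="http://localhost:8080",
        address="/api/humidity",
    )
    # Serial sensor: defaults shown explicitly for clarity
    await cm.connect_serial_sensor(
        sensor_id="pressure1",
        port="/dev/ttyUSB0",
        baudrate=9600,
        timeout=1.0,
    )

# e.g. asyncio.run(connect_example_sensors(cm))
```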
models
Sensor API models for request/response data structures.
Bases: BaseModel
Request to connect to a sensor.
Bases: BaseModel
Request to read data from a connected sensor.
Bases: BaseModel
Request to list all sensors.
Bases: BaseModel
Request to get status of a sensor.
Bases: BaseModel
Health check response model.
Bases: BaseModel
Response from sensor connection operation.
Bases: str, Enum
Status of sensor connection.
Bases: BaseModel
Response containing sensor data.
Bases: BaseModel
Information about a sensor.
Bases: BaseModel
Response containing list of sensors.
Bases: BaseModel
Response containing sensor status information.
Request models for sensor operations.
Bases: BaseModel
Request to connect to a sensor.
Bases: BaseModel
Request to read data from a connected sensor.
Bases: BaseModel
Request to get status of a sensor.
Bases: BaseModel
Request to list all sensors.
Response models for sensor operations.
Bases: str, Enum
Status of sensor connection.
Bases: BaseModel
Information about a sensor.
Bases: BaseModel
Response from sensor connection operation.
Bases: BaseModel
Response containing sensor data.
Bases: BaseModel
Response containing sensor status information.
Bases: BaseModel
Response containing list of sensors.
Bases: BaseModel
Health check response model.
schemas
Sensor task schemas for service operations.
Task schemas for sensor data access.
Task schemas for sensor lifecycle management.
Task schemas for sensor data operations.
Task schemas for sensor data access.
Health check TaskSchema.
Task schemas for sensor lifecycle operations.
Task schemas for sensor lifecycle management.
service
Sensor Manager Service providing MCP endpoints for sensor operations.
Bases: Service
Service wrapper for SensorManager with MCP endpoint registration.
Initialize the sensor manager service.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `manager` | `Optional[SensorManager]` | Optional SensorManager instance. If None, creates a new one. | `None` |
| `**kwargs` | | Additional arguments passed to the Service base class | `{}` |
async
Connect to a sensor with specified configuration.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `request` | `SensorConnectionRequest` | Connection request with sensor configuration | required |

Returns:

| Type | Description |
|---|---|
| `SensorConnectionResponse` | Response indicating success/failure of connection |
async
Disconnect from a connected sensor.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `request` | `SensorStatusRequest` | Request containing sensor_id to disconnect | required |

Returns:

| Type | Description |
|---|---|
| `SensorConnectionResponse` | Response indicating success/failure of disconnection |
async
Read data from a connected sensor.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `request` | `SensorDataRequest` | Request specifying sensor and read parameters | required |

Returns:

| Type | Description |
|---|---|
| `SensorDataResponse` | Response containing sensor data or error information |
async
Get status information for a sensor.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `request` | `SensorStatusRequest` | Request containing sensor_id | required |

Returns:

| Type | Description |
|---|---|
| `SensorStatusResponse` | Response containing sensor status information |
async
List all registered sensors.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `request` | `SensorListRequest` | Request with listing options | required |

Returns:

| Type | Description |
|---|---|
| `SensorListResponse` | Response containing list of sensors |
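A hedged monitoring sketch tying the sensor operations together: `cm` is assumed to be a `SensorConnectionManager`, and the method names `list_sensors` and `read_sensor_data` are assumptions inferred from the operation descriptions above (only `connect_sensor` and the convenience connectors are shown with full signatures):

```python
import asyncio

async def poll_sensors(cm, interval_s: float = 5.0, cycles: int = 3) -> None:
    for _ in range(cycles):
        # Assumed method name: list registered sensors with their status
        listing = await cm.list_sensors(include_status=True)
        print(listing)
        # Assumed method name: read one sample with a short timeout
        data = await cm.read_sensor_data(sensor_id="temp1", timeout=2.0)
        print(data)
        await asyncio.sleep(interval_s)
```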
stereo_cameras
StereoCameraService - Service-based stereo camera management API.
StereoCameraConnectionManager
StereoCameraConnectionManager(
url: Url | None = None,
server_id: UUID | None = None,
server_pid_file: str | None = None,
)
Bases: ConnectionManager
Connection Manager for StereoCameraService.
Provides strongly-typed methods for all stereo camera management operations, making it easy to use the service programmatically from other applications.
async
Make GET request to service endpoint.
async
Make POST request to service endpoint.
async
Get detailed information about all backends.
async
Discover available stereo cameras.
async
Open a stereo camera.
async
Open multiple stereo cameras.
async
Close multiple stereo cameras.
async
Get list of currently active stereo cameras.
async
Get detailed stereo camera information.
async
Configure stereo camera parameters.
async
Configure multiple stereo cameras.
async
Get current stereo camera configuration.
async
capture_stereo_pair(
camera: str,
save_intensity_path: Optional[str] = None,
save_disparity_path: Optional[str] = None,
enable_intensity: bool = True,
enable_disparity: bool = True,
calibrate_disparity: bool = True,
timeout_ms: int = 20000,
output_format: str = "pil",
) -> Dict[str, Any]
Capture stereo data (intensity + disparity).
async
capture_stereo_batch(
captures: List[Dict[str, Any]], output_format: str = "pil"
) -> Dict[str, Any]
Capture stereo data from multiple cameras.
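As a usage sketch of `capture_stereo_pair`: `scm` is assumed to be a `StereoCameraConnectionManager` pointed at a running service, and the camera name is illustrative:

```python
import asyncio

async def grab_stereo_pair(scm) -> dict:
    # Hypothetical camera identifier; discover() lists the real ones
    result = await scm.capture_stereo_pair(
        camera="BaslerStereoAce:12345678",
        enable_intensity=True,
        enable_disparity=True,
        calibrate_disparity=True,
        timeout_ms=20000,
        output_format="pil",
    )
    return result

# e.g. asyncio.run(grab_stereo_pair(scm))
```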
StereoCameraService
Bases: Service
Stereo Camera Management Service.
Provides comprehensive REST API and MCP tools for managing stereo cameras with multi-component capture capabilities (intensity, disparity, point clouds).
Supported Operations:
- Backend discovery and information
- Camera lifecycle management (open, close, status)
- Multi-component capture (intensity + disparity)
- Point cloud generation with optional color
- Camera configuration (depth range, illumination, binning, quality, exposure, gain)
- Batch operations for multiple cameras
- System diagnostics and monitoring
Initialize StereoCameraService.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `**kwargs` | | Additional arguments passed to Service base class | `{}` |
Get detailed information about stereo camera backends.
Discover available stereo cameras.
async
Open a stereo camera connection.
async
Open multiple stereo cameras.
async
Close a stereo camera connection.
async
Close multiple stereo cameras.
Get list of active stereo cameras.
async
Get stereo camera status.
async
Get detailed stereo camera information.
Get full calibration data including Q matrix for 2D-to-3D projection.
Get system diagnostics and statistics.
async
Configure stereo camera parameters.
async
Configure multiple stereo cameras.
async
Get current stereo camera configuration.
async
Capture stereo data (intensity + disparity).
async
Capture stereo data from multiple cameras.
async
Capture and generate 3D point cloud.
async
Capture point clouds from multiple cameras.
Start stereo camera stream.
connection_manager
Connection Manager for StereoCameraService.
Provides a strongly-typed client interface for programmatic access to stereo camera management operations.
StereoCameraConnectionManager(
url: Url | None = None,
server_id: UUID | None = None,
server_pid_file: str | None = None,
)
Bases: ConnectionManager
Connection Manager for StereoCameraService.
Provides strongly-typed methods for all stereo camera management operations, making it easy to use the service programmatically from other applications.
async
Make GET request to service endpoint.
async
Make POST request to service endpoint.
async
Get detailed information about all backends.
async
Discover available stereo cameras.
async
Open a stereo camera.
async
Open multiple stereo cameras.
async
Close multiple stereo cameras.
async
Get list of currently active stereo cameras.
async
Get detailed stereo camera information.
async
Configure stereo camera parameters.
async
Configure multiple stereo cameras.
async
Get current stereo camera configuration.
async
capture_stereo_pair(
camera: str,
save_intensity_path: Optional[str] = None,
save_disparity_path: Optional[str] = None,
enable_intensity: bool = True,
enable_disparity: bool = True,
calibrate_disparity: bool = True,
timeout_ms: int = 20000,
output_format: str = "pil",
) -> Dict[str, Any]
Capture stereo data (intensity + disparity).
async
capture_stereo_batch(
captures: List[Dict[str, Any]], output_format: str = "pil"
) -> Dict[str, Any]
Capture stereo data from multiple cameras.
models
Models for StereoCameraService API.
Bases: BaseModel
Request model for backend filtering.
Bases: BaseModel
Request model for batch point cloud capture.
Bases: BaseModel
Request model for point cloud capture.
Bases: BaseModel
Request model for batch stereo camera closing.
Bases: BaseModel
Request model for closing a stereo camera.
Bases: BaseModel
Request model for batch stereo camera configuration.
Bases: BaseModel
Request model for stereo camera configuration.
Bases: BaseModel
Request model for batch stereo camera opening.
Bases: BaseModel
Request model for opening a stereo camera.
Bases: BaseModel
Request model for stereo camera queries.
Bases: BaseModel
Request model for batch stereo capture.
Bases: BaseModel
Request model for stereo capture.
Bases: BaseModel
Stereo camera backend information model.
Bases: BaseModel
Base response model for all API endpoints.
Bases: BaseModel
Individual batch operation result.
Bases: BaseModel
Health check response model.
Bases: BaseModel
Batch point cloud capture result model.
Bases: BaseModel
Point cloud capture result model.
Bases: BaseModel
Stereo camera configuration model.
Bases: BaseModel
Stereo camera information model.
Bases: BaseModel
Stereo camera status model.
Bases: BaseModel
Batch stereo capture result model.
Bases: BaseModel
Stereo capture result model.
Bases: BaseModel
System diagnostics model.
Request models for StereoCameraService.
Contains all Pydantic models for API requests, ensuring proper input validation and documentation for all stereo camera operations.
Bases: BaseModel
Request model for backend filtering.
Bases: BaseModel
Request model for opening a stereo camera.
Bases: BaseModel
Request model for batch stereo camera opening.
Bases: BaseModel
Request model for closing a stereo camera.
Bases: BaseModel
Request model for batch stereo camera closing.
Bases: BaseModel
Request model for stereo camera queries.
Bases: BaseModel
Request model for stereo camera configuration.
Bases: BaseModel
Request model for batch stereo camera configuration.
Bases: BaseModel
Request model for stereo capture.
Bases: BaseModel
Request model for batch stereo capture.
Bases: BaseModel
Request model for point cloud capture.
Bases: BaseModel
Request model for batch point cloud capture.
Response models for StereoCameraService.
Contains all Pydantic models for API responses, ensuring consistent response formatting across all stereo camera management endpoints.
Bases: BaseModel
Base response model for all API endpoints.
Bases: BaseModel
Stereo camera backend information model.
Bases: BaseModel
Stereo camera status model.
Bases: BaseModel
Stereo camera information model.
Bases: BaseModel
Stereo camera configuration model.
Bases: BaseModel
Stereo capture result model.
Bases: BaseModel
Batch stereo capture result model.
Bases: BaseModel
Point cloud capture result model.
Bases: BaseModel
Batch point cloud capture result model.
Bases: BaseModel
Individual batch operation result.
Bases: BaseModel
System diagnostics model.
Bases: BaseModel
Health check response model.
schemas
MCP TaskSchemas for StereoCameraService.
Stereo Camera Capture TaskSchemas.
Stereo Camera Configuration TaskSchemas.
Health check TaskSchema.
Stereo Camera Information TaskSchemas.
Stereo Camera Lifecycle TaskSchemas.
service
StereoCameraService - Service-based API for stereo camera management.
This service provides comprehensive REST API and MCP tools for managing Basler Stereo ace cameras with multi-component capture (intensity, disparity, depth).
Bases: Service
Stereo Camera Management Service.
Provides comprehensive REST API and MCP tools for managing stereo cameras with multi-component capture capabilities (intensity, disparity, point clouds).
Supported Operations:
- Backend discovery and information
- Camera lifecycle management (open, close, status)
- Multi-component capture (intensity + disparity)
- Point cloud generation with optional color
- Camera configuration (depth range, illumination, binning, quality, exposure, gain)
- Batch operations for multiple cameras
- System diagnostics and monitoring
Initialize StereoCameraService.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `**kwargs` | | Additional arguments passed to Service base class | `{}` |
Get detailed information about stereo camera backends.
Discover available stereo cameras.
async
Open a stereo camera connection.
async
Open multiple stereo cameras.
async
Close a stereo camera connection.
async
Close multiple stereo cameras.
Get list of active stereo cameras.
async
Get stereo camera status.
async
Get detailed stereo camera information.
Get full calibration data including Q matrix for 2D-to-3D projection.
Get system diagnostics and statistics.
async
Configure stereo camera parameters.
async
Configure multiple stereo cameras.
async
Get current stereo camera configuration.
async
Capture stereo data (intensity + disparity).
async
Capture stereo data from multiple cameras.
async
Capture and generate 3D point cloud.
async
Capture point clouds from multiple cameras.
Start stereo camera stream.
stereo_cameras
Stereo camera support for MindTrace hardware system.
This module provides backends and utilities for stereo camera systems that output multi-component data (intensity, disparity, depth, point clouds).
Available components
- backends: Stereo camera backend implementations
- core: Core stereo camera management classes
- setup: Installation scripts for stereo camera SDKs
Quick Start
from mindtrace.hardware.stereo_cameras import StereoCamera
Open first available stereo camera
camera = StereoCamera()
Capture multi-component data
result = camera.capture()
print(f"Intensity: {result.intensity.shape}")
print(f"Disparity: {result.disparity.shape}")
Generate point cloud
point_cloud = camera.capture_point_cloud()
point_cloud.save_ply("output.ply")
camera.close()
BaslerStereoAceBackend
Bases: StereoCameraBackend
Backend for Basler Stereo ace cameras using pypylon.
The Stereo ace camera is accessed through a unified device interface (DeviceClass: BaslerGTC/Basler/basler_xw) that presents the stereo pair as a single camera with multi-component output.
Extends StereoCameraBackend to provide consistent interface across different stereo camera manufacturers.
Initialize Basler Stereo ace backend.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `serial_number` | `Optional[str]` | Serial number or user-defined name of specific camera. If all digits, treated as serial number. Otherwise, treated as user-defined name. If None, opens first available Stereo ace camera. | `None` |
| `op_timeout_s` | `float` | Timeout in seconds for SDK operations (default 30s). | `30.0` |

Raises:

| Type | Description |
|---|---|
| `SDKNotAvailableError` | If pypylon is not available |
discover
staticmethod
Discover available Stereo ace cameras.
Returns:

| Type | Description |
|---|---|
| `List[str]` | List of serial numbers for available Stereo ace cameras |

Raises:

| Type | Description |
|---|---|
| `SDKNotAvailableError` | If pypylon is not available |
discover_async
async
classmethod
Async wrapper for discover() - runs discovery in threadpool.
Use this instead of discover() when calling from async context to avoid blocking the event loop during camera discovery.
Returns:

| Type | Description |
|---|---|
| `List[str]` | List of serial numbers for available Stereo ace cameras |
discover_detailed
staticmethod
Discover Stereo ace cameras with detailed information.
Returns:

| Type | Description |
|---|---|
| `List[Dict[str, str]]` | List of dictionaries containing camera information |

Raises:

| Type | Description |
|---|---|
| `SDKNotAvailableError` | If pypylon is not available |
discover_detailed_async
async
classmethod
Async wrapper for discover_detailed() - runs discovery in threadpool.
Returns:

| Type | Description |
|---|---|
| `List[Dict[str, str]]` | List of dictionaries containing camera information |
initialize
async
Initialize camera connection.
Returns:

| Type | Description |
|---|---|
| `bool` | True if initialization successful, False otherwise |
get_calibration
async
Get factory calibration parameters from camera.
Returns:

| Type | Description |
|---|---|
| `StereoCalibrationData` | StereoCalibrationData with factory calibration |

Raises:

| Type | Description |
|---|---|
| `CameraConnectionError` | If camera not opened |
| `CameraConfigurationError` | If calibration cannot be read |
capture
async
capture(
timeout_ms: int = 20000,
enable_intensity: bool = True,
enable_disparity: bool = True,
calibrate_disparity: bool = True,
) -> StereoGrabResult
Capture stereo data with multiple components.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `timeout_ms` | `int` | Capture timeout in milliseconds | `20000` |
| `enable_intensity` | `bool` | Whether to capture intensity data | `True` |
| `enable_disparity` | `bool` | Whether to capture disparity data | `True` |
| `calibrate_disparity` | `bool` | Whether to apply calibration to disparity | `True` |

Returns:

| Type | Description |
|---|---|
| `StereoGrabResult` | StereoGrabResult containing captured data |

Raises:

| Type | Description |
|---|---|
| `CameraConnectionError` | If camera not opened |
| `CameraCaptureError` | If capture fails |
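A hedged sketch of a depth-only capture with this method; `backend` is assumed to be an initialized `BaslerStereoAceBackend`:

```python
import asyncio

async def capture_disparity_only(backend):
    # Skip the intensity component when only depth information is needed
    result = await backend.capture(
        enable_intensity=False,
        enable_disparity=True,
        calibrate_disparity=True,
        timeout_ms=20000,
    )
    return result.disparity

# e.g. disparity = asyncio.run(capture_disparity_only(backend))
```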
configure
async
Configure camera parameters.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `**params` | | Parameter name-value pairs to configure. Special parameters: `trigger_mode` ("continuous" or "trigger"), `depth_range` (tuple of (min_depth, max_depth)), `illumination_mode` ("AlwaysActive" or "AlternateActive"), `binning` (tuple of (horizontal, vertical)), `depth_quality` ("Full", "High", "Normal", or "Low"). All other parameters are passed directly to the camera. | `{}` |

Raises:

| Type | Description |
|---|---|
| `CameraConnectionError` | If camera not opened |
| `CameraConfigurationError` | If configuration fails |
set_depth_range
async
Set depth measurement range.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `min_depth` | `float` | Minimum depth in meters (e.g., 0.3) | required |
| `max_depth` | `float` | Maximum depth in meters (e.g., 5.0) | required |

Raises:

| Type | Description |
|---|---|
| `CameraConfigurationError` | If configuration fails |
set_illumination_mode
async
Set illumination mode.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `mode` | `str` | 'AlwaysActive' (low latency) or 'AlternateActive' (clean intensity) | required |

Raises:

| Type | Description |
|---|---|
| `CameraConfigurationError` | If configuration fails |
set_binning
async
Enable binning for latency reduction.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `horizontal` | `int` | Horizontal binning factor (typically 2) | `2` |
| `vertical` | `int` | Vertical binning factor (typically 2) | `2` |
Note
When using binning for low latency, consider also setting depth quality to "Full" using set_depth_quality("Full").
Raises:

| Type | Description |
|---|---|
| `CameraConfigurationError` | If configuration fails |
set_depth_quality
async
Set depth quality level.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `quality` | `str` | Depth quality setting. Common values: "Full" (highest quality, recommended with binning), "Normal" (standard quality), "Low" (lower quality, faster processing). | required |
Note
Setting quality to "Full" with binning reduces latency while maintaining depth quality. This is recommended for low-latency applications.
Raises:

| Type | Description |
|---|---|
| `CameraConfigurationError` | If configuration fails |
Example
Low latency configuration:
await camera.set_binning(2, 2)
await camera.set_depth_quality("Full")
set_pixel_format
async
Set pixel format for intensity component.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `format` | `str` | Pixel format ("RGB8", "Mono8", etc.) | required |

Raises:

| Type | Description |
|---|---|
| `CameraConnectionError` | If camera not opened |
| `CameraConfigurationError` | If format not available or configuration fails |
Example
await camera.set_pixel_format("Mono8") # Force grayscale
set_exposure_time
async
Set exposure time in microseconds.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `microseconds` | `float` | Exposure time in microseconds (e.g., 5000 = 5ms) | required |

Raises:

| Type | Description |
|---|---|
| `CameraConfigurationError` | If configuration fails |
Example
await camera.set_exposure_time(5000) # 5ms exposure
set_gain
async
Set camera gain.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `gain` | `float` | Gain value (typically 0.0 to 24.0, camera-dependent) | required |

Raises:

| Type | Description |
|---|---|
| `CameraConfigurationError` | If configuration fails |
Example
await camera.set_gain(2.0)
get_exposure_time
async
Get current exposure time in microseconds.
Returns:

| Type | Description |
|---|---|
| `float` | Current exposure time in microseconds |

Raises:

| Type | Description |
|---|---|
| `CameraConnectionError` | If camera not opened |
Example
exposure = await camera.get_exposure_time()
print(f"Current exposure: {exposure}us")
get_gain
async
Get current camera gain.
Returns:

| Type | Description |
|---|---|
| `float` | Current gain value |

Raises:

| Type | Description |
|---|---|
| `CameraConnectionError` | If camera not opened |
Example
gain = await camera.get_gain()
print(f"Current gain: {gain}")
get_depth_quality
async
Get current depth quality setting.
Returns:

| Type | Description |
|---|---|
| `str` | Current depth quality level (e.g., "Full", "Normal", "Low") |

Raises:

| Type | Description |
|---|---|
| `CameraConnectionError` | If camera not opened |
Example
quality = await camera.get_depth_quality()
print(f"Depth quality: {quality}")
get_pixel_format
async
Get current pixel format.
Returns:

| Type | Description |
|---|---|
| `str` | Current pixel format (e.g., "RGB8", "Mono8", "Coord3D_C16") |

Raises:

| Type | Description |
|---|---|
| `CameraConnectionError` | If camera not opened |
Example
format = await camera.get_pixel_format()
print(f"Pixel format: {format}")
get_binning
async
Get current binning settings.
Returns:

| Type | Description |
|---|---|
| `tuple[int, int]` | Tuple of (horizontal_binning, vertical_binning) |

Raises:

| Type | Description |
|---|---|
| `CameraConnectionError` | If camera not opened |
Example
h_bin, v_bin = await camera.get_binning()
print(f"Binning: {h_bin}x{v_bin}")
get_illumination_mode
async
Get current illumination mode.
Returns:

| Type | Description |
|---|---|
| `str` | Current illumination mode ("AlwaysActive" or "AlternateActive") |

Raises:

| Type | Description |
|---|---|
| `CameraConnectionError` | If camera not opened |
Example
mode = await camera.get_illumination_mode()
print(f"Illumination: {mode}")
get_depth_range
async
Get current depth measurement range in meters.
Returns:

| Type | Description |
|---|---|
| `tuple[float, float]` | Tuple of (min_depth, max_depth) in meters |

Raises:

| Type | Description |
|---|---|
| `CameraConnectionError` | If camera not opened |
Example
min_d, max_d = await camera.get_depth_range()
print(f"Depth range: {min_d}m - {max_d}m")
set_trigger_mode
async
Set trigger mode (simplified interface).
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `mode` | `str` | Trigger mode ("continuous" or "trigger"). "continuous": free-running continuous acquisition (TriggerMode=Off); "trigger": software-triggered acquisition (TriggerMode=On, TriggerSource=Software). | required |

Raises:

| Type | Description |
|---|---|
| `CameraConnectionError` | If camera not opened |
| `CameraConfigurationError` | If invalid mode or configuration fails |
Examples:
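A hedged sketch of switching between the two modes; `camera` is assumed to be an opened backend instance:

```python
import asyncio

async def use_software_trigger(camera) -> None:
    # Switch to software-triggered acquisition (TriggerMode=On)
    await camera.set_trigger_mode("trigger")
    mode = await camera.get_trigger_mode()
    print(mode)
    # Restore free-running acquisition (TriggerMode=Off)
    await camera.set_trigger_mode("continuous")
```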
get_trigger_mode
async
Get current trigger mode (simplified interface).
Returns:

| Type | Description |
|---|---|
| `str` | "continuous" if TriggerMode is Off, "trigger" if TriggerMode is On with Software source |

Raises:

| Type | Description |
|---|---|
| `CameraConnectionError` | If camera not opened |
Examples:
get_trigger_modes
async
Get available trigger modes.
Returns:

| Type | Description |
|---|---|
| `List[str]` | List of supported trigger modes: ["continuous", "trigger"] |
Note
This provides a simplified interface. The underlying camera supports additional modes (SingleFrame, MultiFrame, hardware triggers) accessible via direct configure() calls if needed.
start_grabbing
async
Start grabbing frames.
This must be called before execute_trigger() in software trigger mode.
Raises:

| Type | Description |
|---|---|
| `CameraConnectionError` | If camera not opened |
execute_trigger
async
Execute software trigger.
Note: In software trigger mode, ensure start_grabbing() is called first, or call capture() once before the trigger loop to start grabbing.
Raises:

| Type | Description |
|---|---|
| `CameraConnectionError` | If camera not opened |
| `CameraConfigurationError` | If trigger execution fails |
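The software-trigger workflow described above (start grabbing, then trigger each frame) can be sketched as follows; `camera` is assumed to be an opened backend already set to "trigger" mode:

```python
import asyncio

async def triggered_captures(camera, n_frames: int = 5):
    # start_grabbing() must precede execute_trigger()
    await camera.start_grabbing()
    results = []
    for _ in range(n_frames):
        await camera.execute_trigger()      # fire one software trigger
        results.append(await camera.capture())
    return results

# e.g. frames = asyncio.run(triggered_captures(camera))
```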
capture_point_cloud
async
capture_point_cloud(
include_colors: bool = True,
include_confidence: bool = False,
downsample_factor: int = 1,
timeout_ms: int = 20000,
) -> PointCloudData
Capture and generate 3D point cloud.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `include_colors` | `bool` | Whether to include color information from intensity | `True` |
| `include_confidence` | `bool` | Whether to include confidence values (not supported) | `False` |
| `downsample_factor` | `int` | Downsampling factor (1 = no downsampling) | `1` |
| `timeout_ms` | `int` | Capture timeout in milliseconds | `20000` |

Returns:

| Type | Description |
|---|---|
| `PointCloudData` | PointCloudData with 3D points and optional colors |

Raises:

| Type | Description |
|---|---|
| `CameraConnectionError` | If camera not opened |
| `CameraCaptureError` | If capture fails |
| `CameraConfigurationError` | If calibration not available |
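A hedged point-cloud sketch; `backend` is assumed to be an opened backend with factory calibration available:

```python
import asyncio

async def grab_downsampled_cloud(backend):
    cloud = await backend.capture_point_cloud(
        include_colors=True,
        downsample_factor=2,   # keep every 2nd pixel to reduce point count
        timeout_ms=20000,
    )
    return cloud

# e.g. cloud = asyncio.run(grab_downsampled_cloud(backend))
```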
AsyncStereoCamera
Bases: Mindtrace
Async stereo camera interface.
Provides high-level stereo camera operations including multi-component capture and 3D point cloud generation.
Initialize async stereo camera.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `backend` | | Backend instance (e.g., BaslerStereoAceBackend) | required |
open
async
classmethod
Open and initialize a stereo camera.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `name` | `Optional[str]` | Camera identifier. Format: "BaslerStereoAce:serial_number". If None, opens first available Stereo ace camera. | `None` |

Returns:

| Type | Description |
|---|---|
| `'AsyncStereoCamera'` | Initialized AsyncStereoCamera instance |

Raises:

| Type | Description |
|---|---|
| `CameraNotFoundError` | If camera not found |
| `CameraConnectionError` | If connection fails |
Examples:
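A hedged sketch of opening a camera by identifier; the import path mirrors the Quick Start above (the exact location of `AsyncStereoCamera` is an assumption), and the serial number is illustrative:

```python
import asyncio
from typing import Optional

async def open_camera(name: Optional[str] = "BaslerStereoAce:12345678"):
    # Import deferred so the sketch is self-contained; path is assumed
    from mindtrace.hardware.stereo_cameras import AsyncStereoCamera
    camera = await AsyncStereoCamera.open(name)  # opens first camera if name is None
    return camera
```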
initialize
async
Initialize camera and load calibration.
Returns:

| Type | Description |
|---|---|
| `bool` | True if initialization successful |
Note
Usually not needed as open() handles initialization
capture
async
capture(
enable_intensity: bool = True,
enable_disparity: bool = True,
calibrate_disparity: bool = True,
timeout_ms: int = 20000,
) -> StereoGrabResult
Capture multi-component stereo data.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `enable_intensity` | `bool` | Whether to capture intensity image | `True` |
| `enable_disparity` | `bool` | Whether to capture disparity map | `True` |
| `calibrate_disparity` | `bool` | Whether to apply calibration to disparity | `True` |
| `timeout_ms` | `int` | Capture timeout in milliseconds | `20000` |

Returns:

| Type | Description |
|---|---|
| `StereoGrabResult` | StereoGrabResult containing captured data |

Raises:

| Type | Description |
|---|---|
| `CameraConnectionError` | If camera not opened |
| `CameraCaptureError` | If capture fails |
Examples:
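A hedged capture sketch; `camera` is assumed to be an opened `AsyncStereoCamera`:

```python
import asyncio

async def capture_both(camera):
    result = await camera.capture(
        enable_intensity=True,
        enable_disparity=True,
        calibrate_disparity=True,
        timeout_ms=20000,
    )
    # StereoGrabResult exposes the captured components
    return result.intensity, result.disparity
```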
capture_point_cloud
async
Capture and generate 3D point cloud.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `include_colors` | `bool` | Whether to include color information from intensity | `True` |
| `downsample_factor` | `int` | Downsampling factor (1 = no downsampling) | `1` |

Returns:

| Type | Description |
|---|---|
| `PointCloudData` | PointCloudData with 3D points and optional colors |

Raises:

| Type | Description |
|---|---|
| `CameraConnectionError` | If camera not opened |
| `CameraCaptureError` | If capture fails |
| `CameraConfigurationError` | If calibration not available |
Examples:
configure
async
set_depth_range
async
Set depth measurement range in meters.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `min_depth` | `float` | Minimum depth (e.g., 0.3 meters) | required |
| `max_depth` | `float` | Maximum depth (e.g., 5.0 meters) | required |

Raises:

| Type | Description |
|---|---|
| `CameraConfigurationError` | If configuration fails |
Examples:
set_illumination_mode
async
Set illumination mode.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `mode` | `str` | 'AlwaysActive' (low latency) or 'AlternateActive' (clean intensity) | required |

Raises:

| Type | Description |
|---|---|
| `CameraConfigurationError` | If invalid mode or configuration fails |
Examples:
set_binning
async
Enable binning for latency reduction.
Binning reduces network transfer and computation.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `horizontal` | `int` | Horizontal binning factor (typically 2) | `2` |
| `vertical` | `int` | Vertical binning factor (typically 2) | `2` |
Note
When using binning for low latency, consider also setting depth quality to "Full" using set_depth_quality("Full").
Raises:

| Type | Description |
|---|---|
| `CameraConfigurationError` | If configuration fails |
Examples:
set_depth_quality
async
Set depth quality level.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `quality` | `str` | Depth quality setting. Common values: "Full" (highest quality, recommended with binning), "Normal" (standard quality), "Low" (lower quality, faster processing). | required |

Raises:

| Type | Description |
|---|---|
| `CameraConfigurationError` | If configuration fails |
Examples:
set_pixel_format
async
Set pixel format for intensity component.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| format | str | Pixel format ("RGB8", "Mono8", etc.) | required |

Raises:

| Type | Description |
|---|---|
| CameraConnectionError | If camera not opened |
| CameraConfigurationError | If format not available or configuration fails |
Examples:
set_exposure_time
async
set_gain
async
get_exposure_time
async
get_gain
async
get_depth_quality
async
get_pixel_format
async
get_binning
async
get_illumination_mode
async
get_depth_range
async
Get current depth measurement range in meters.
Returns:

| Type | Description |
|---|---|
| tuple[float, float] | Tuple of (min_depth, max_depth) in meters |

Raises:

| Type | Description |
|---|---|
| CameraConnectionError | If camera not opened |
Examples:
set_trigger_mode
async
Set trigger mode (simplified interface).
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| mode | str | Trigger mode: "continuous" (free-running continuous acquisition) or "trigger" (software-triggered acquisition) | required |

Raises:

| Type | Description |
|---|---|
| CameraConnectionError | If camera not opened |
| CameraConfigurationError | If invalid mode or configuration fails |
Examples:
get_trigger_mode
async
get_trigger_modes
async
start_grabbing
async
Start grabbing frames.
Must be called after enable_software_trigger() and before execute_trigger().
Raises:

| Type | Description |
|---|---|
| CameraConnectionError | If camera not opened |
Examples:
execute_trigger
async
Execute software trigger.
Triggers a frame capture when in software trigger mode. Note: start_grabbing() must be called first after enabling software trigger.
Raises:

| Type | Description |
|---|---|
| CameraConnectionError | If camera not opened |
| CameraConfigurationError | If trigger execution fails |
Examples:
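Putting the trigger methods together, a hypothetical on-demand capture loop might look like the following. The AsyncStereoCamera import path is an assumption (the class name comes from the StereoCamera wrapper docs earlier in this reference), and a real Stereo ace camera must be connected for the coroutine to actually run.

```python
import asyncio

async def triggered_captures(n_frames: int = 3):
    # Deferred import so this sketch parses without the camera SDK installed.
    # The module path is an assumption; adjust to your installation.
    from mindtrace.hardware.cameras.stereo import AsyncStereoCamera

    camera = await AsyncStereoCamera.open()
    try:
        await camera.set_trigger_mode("trigger")
        await camera.start_grabbing()  # must precede execute_trigger()
        results = []
        for _ in range(n_frames):
            await camera.execute_trigger()
            results.append(await camera.capture())
        return results
    finally:
        await camera.close()
```

The key ordering constraint from the docs above: set_trigger_mode("trigger"), then start_grabbing(), then execute_trigger() for each frame.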
PointCloudData
dataclass
PointCloudData(
points: ndarray,
colors: Optional[ndarray] = None,
num_points: int = 0,
has_colors: bool = False,
)
3D point cloud data with optional color information.
Attributes:

| Name | Type | Description |
|---|---|---|
| points | ndarray | Array of 3D points (N, 3) - (x, y, z) in meters |
| colors | Optional[ndarray] | Optional array of RGB colors (N, 3) - values in [0, 1] |
| num_points | int | Number of valid points |
| has_colors | bool | Flag indicating if color information is present |
save_ply
Save point cloud as PLY file.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| path | str | Output file path | required |
| binary | bool | If True, save in binary format; otherwise ASCII | True |

Raises:

| Type | Description |
|---|---|
| ImportError | If plyfile is not installed |
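Since save_ply raises ImportError when the optional plyfile dependency is missing, a minimal hand-rolled ASCII PLY writer can serve as a fallback. This is a sketch, not part of the library; the PLY header layout follows the standard format.

```python
# Minimal ASCII PLY writer — a fallback sketch for when `plyfile`
# (required by PointCloudData.save_ply) is not installed.
import os
import tempfile

points = [(0.0, 0.0, 1.0), (0.1, 0.0, 1.2), (0.0, 0.1, 0.9)]

path = os.path.join(tempfile.gettempdir(), "cloud.ply")
with open(path, "w") as f:
    f.write("ply\nformat ascii 1.0\n")
    f.write(f"element vertex {len(points)}\n")
    f.write("property float x\nproperty float y\nproperty float z\n")
    f.write("end_header\n")
    for x, y, z in points:
        f.write(f"{x} {y} {z}\n")

with open(path) as f:
    first_line = f.readline().strip()
print(first_line)  # ply
```

Binary PLY and per-vertex colors need additional header properties, which is exactly what plyfile handles for you.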
StereoCalibrationData
dataclass
StereoCalibrationData(
baseline: float,
focal_length: float,
principal_point_u: float,
principal_point_v: float,
scale3d: float,
offset3d: float,
Q: ndarray,
)
Factory calibration parameters for stereo camera.
These parameters are provided by the camera manufacturer and used for 3D reconstruction from disparity maps.
Attributes:

| Name | Type | Description |
|---|---|---|
| baseline | float | Stereo baseline in meters (distance between camera pair) |
| focal_length | float | Focal length in pixels |
| principal_point_u | float | Principal point U coordinate in pixels |
| principal_point_v | float | Principal point V coordinate in pixels |
| scale3d | float | Scale factor for disparity conversion |
| offset3d | float | Offset for disparity conversion |
| Q | ndarray | 4x4 reprojection matrix for point cloud generation |
from_camera_params
classmethod
Create calibration data from camera parameter dictionary.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| params | dict | Dictionary containing calibration parameters: Scan3dBaseline (baseline in meters), Scan3dFocalLength (focal length in pixels), Scan3dPrincipalPointU and Scan3dPrincipalPointV (principal point in pixels), Scan3dCoordinateScale (scale factor), Scan3dCoordinateOffset (offset) | required |

Returns:

| Type | Description |
|---|---|
| 'StereoCalibrationData' | StereoCalibrationData instance |
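To make the calibration parameters concrete: the standard pinhole stereo relation depth = focal_length × baseline / disparity converts a calibrated disparity to metric depth. The parameter names below mirror the Scan3d* keys documented above; the numeric values are made up for illustration.

```python
# Depth from calibrated disparity via the standard pinhole stereo relation.
# Values are illustrative, not from a real camera.
params = {
    "Scan3dBaseline": 0.1,        # meters
    "Scan3dFocalLength": 2000.0,  # pixels
}

def depth_from_disparity(disparity_px: float) -> float:
    """Depth in meters for a calibrated disparity in pixels."""
    return params["Scan3dFocalLength"] * params["Scan3dBaseline"] / disparity_px

print(depth_from_disparity(50.0))  # 4.0 (meters)
```

This is why set_depth_range matters: nearer scenes produce larger disparities, and the configured range bounds the disparity search.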
StereoCamera
StereoCamera(
async_camera: Optional[AsyncStereoCamera] = None,
loop: Optional[AbstractEventLoop] = None,
name: Optional[str] = None,
**kwargs
)
Bases: Mindtrace
Synchronous wrapper around AsyncStereoCamera.
All operations are executed on a background event loop. This provides a simple synchronous API for stereo camera operations.
Create a synchronous stereo camera wrapper.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| async_camera | Optional[AsyncStereoCamera] | Existing AsyncStereoCamera instance | None |
| loop | Optional[AbstractEventLoop] | Event loop to use for async operations | None |
| name | Optional[str] | Camera identifier in format "BaslerStereoAce:serial_number". If None, opens first available Stereo ace camera. | None |
| **kwargs | | Additional arguments passed to Mindtrace | {} |
Examples:
>>> # Use existing async camera
>>> async_cam = await AsyncStereoCamera.open()
>>> sync_cam = StereoCamera(async_camera=async_cam, loop=loop)
name
property
Get camera name.
Returns:

| Type | Description |
|---|---|
| str | Camera name in format "Backend:serial_number" |
calibration
property
Get calibration data.
Returns:

| Type | Description |
|---|---|
| Optional[StereoCalibrationData] | StereoCalibrationData if available, None otherwise |
is_open
property
Check if camera is open.
Returns:

| Type | Description |
|---|---|
| bool | True if camera is open, False otherwise |
close
capture
capture(
enable_intensity: bool = True,
enable_disparity: bool = True,
calibrate_disparity: bool = True,
timeout_ms: int = 20000,
) -> StereoGrabResult
Capture multi-component stereo data.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| enable_intensity | bool | Whether to capture intensity image | True |
| enable_disparity | bool | Whether to capture disparity map | True |
| calibrate_disparity | bool | Whether to apply calibration to disparity | True |
| timeout_ms | int | Capture timeout in milliseconds | 20000 |

Returns:

| Type | Description |
|---|---|
| StereoGrabResult | StereoGrabResult containing captured data |

Raises:

| Type | Description |
|---|---|
| CameraConnectionError | If camera not opened |
| CameraCaptureError | If capture fails |
Examples:
capture_point_cloud
Capture and generate 3D point cloud.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| include_colors | bool | Whether to include color information from intensity | True |
| downsample_factor | int | Downsampling factor (1 = no downsampling) | 1 |

Returns:

| Type | Description |
|---|---|
| PointCloudData | PointCloudData with 3D points and optional colors |

Raises:

| Type | Description |
|---|---|
| CameraConnectionError | If camera not opened |
| CameraCaptureError | If capture fails |
| CameraConfigurationError | If calibration not available |
Examples:
configure
Configure camera parameters.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| **params | | Parameter name-value pairs | {} |

Raises:

| Type | Description |
|---|---|
| CameraConnectionError | If camera not opened |
| CameraConfigurationError | If configuration fails |
Examples:
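A hypothetical sketch of batching settings through the synchronous wrapper's configure(). The import path, the serial number, and the assumption that the wrapper forwards the special parameters documented for the backend configure() (binning, depth_quality, illumination_mode) are all assumptions, not confirmed by this reference.

```python
def configure_for_low_latency():
    # Deferred import so this sketch parses without the hardware stack installed.
    # Module path is an assumption; adjust to your installation.
    from mindtrace.hardware.cameras.stereo import StereoCamera

    camera = StereoCamera(name="BaslerStereoAce:40123456")  # hypothetical serial
    try:
        # Batch several settings through configure(); the individual setters
        # (set_binning, set_depth_quality, ...) would work equally well.
        camera.configure(
            binning=(2, 2),
            depth_quality="Full",
            illumination_mode="AlwaysActive",
        )
    finally:
        camera.close()
```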
set_depth_range
Set depth measurement range in meters.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| min_depth | float | Minimum depth (e.g., 0.3 meters) | required |
| max_depth | float | Maximum depth (e.g., 5.0 meters) | required |

Raises:

| Type | Description |
|---|---|
| CameraConfigurationError | If configuration fails |
Examples:
set_illumination_mode
Set illumination mode.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| mode | str | 'AlwaysActive' (low latency) or 'AlternateActive' (clean intensity) | required |

Raises:

| Type | Description |
|---|---|
| CameraConfigurationError | If invalid mode or configuration fails |
Examples:
set_binning
Enable binning for latency reduction.
Binning reduces network transfer and computation.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| horizontal | int | Horizontal binning factor (typically 2) | 2 |
| vertical | int | Vertical binning factor (typically 2) | 2 |

Note

When using binning for low latency, consider also setting depth quality to "Full" using set_depth_quality("Full").

Raises:

| Type | Description |
|---|---|
| CameraConfigurationError | If configuration fails |
Examples:
set_depth_quality
Set depth quality level.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| quality | str | Depth quality setting. Common values: "Full" (highest quality, recommended with binning), "Normal" (standard quality), "Low" (lower quality, faster processing) | required |

Raises:

| Type | Description |
|---|---|
| CameraConfigurationError | If configuration fails |
Examples:
set_pixel_format
Set pixel format for intensity component.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| format | str | Pixel format ("RGB8", "Mono8", etc.) | required |

Raises:

| Type | Description |
|---|---|
| CameraConnectionError | If camera not opened |
| CameraConfigurationError | If format not available or configuration fails |
Examples:
set_exposure_time
Set exposure time in microseconds.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| microseconds | float | Exposure time in microseconds (e.g., 5000 = 5ms) | required |

Raises:

| Type | Description |
|---|---|
| CameraConfigurationError | If configuration fails |
Examples:
set_gain
get_exposure_time
Get current exposure time in microseconds.
Returns:

| Type | Description |
|---|---|
| float | Current exposure time in microseconds |

Raises:

| Type | Description |
|---|---|
| CameraConnectionError | If camera not opened |
Examples:
get_gain
get_depth_quality
Get current depth quality setting.
Returns:

| Type | Description |
|---|---|
| str | Current depth quality level (e.g., "Full", "Normal", "Low") |

Raises:

| Type | Description |
|---|---|
| CameraConnectionError | If camera not opened |
Examples:
get_pixel_format
Get current pixel format.
Returns:

| Type | Description |
|---|---|
| str | Current pixel format (e.g., "RGB8", "Mono8", "Coord3D_C16") |

Raises:

| Type | Description |
|---|---|
| CameraConnectionError | If camera not opened |
Examples:
get_binning
Get current binning settings.
Returns:

| Type | Description |
|---|---|
| tuple[int, int] | Tuple of (horizontal_binning, vertical_binning) |

Raises:

| Type | Description |
|---|---|
| CameraConnectionError | If camera not opened |
Examples:
get_illumination_mode
Get current illumination mode.
Returns:

| Type | Description |
|---|---|
| str | Current illumination mode ("AlwaysActive" or "AlternateActive") |

Raises:

| Type | Description |
|---|---|
| CameraConnectionError | If camera not opened |
Examples:
get_depth_range
Get current depth measurement range in meters.
Returns:

| Type | Description |
|---|---|
| tuple[float, float] | Tuple of (min_depth, max_depth) in meters |

Raises:

| Type | Description |
|---|---|
| CameraConnectionError | If camera not opened |
Examples:
enable_software_trigger
Enable software triggering mode.
After enabling, use start_grabbing(), then execute_trigger() to capture frames on demand.
Raises:

| Type | Description |
|---|---|
| CameraConfigurationError | If configuration fails |
Examples:
start_grabbing
Start grabbing frames.
Must be called after enable_software_trigger() and before execute_trigger().
Raises:

| Type | Description |
|---|---|
| CameraConnectionError | If camera not opened |
Examples:
execute_trigger
Execute software trigger.
Triggers a frame capture when in software trigger mode. Note: start_grabbing() must be called first after enabling software trigger.
Raises:

| Type | Description |
|---|---|
| CameraConnectionError | If camera not opened |
| CameraConfigurationError | If trigger execution fails |
Examples:
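The synchronous trigger workflow mirrors the async one: enable_software_trigger(), start_grabbing(), then execute_trigger() per frame. A hypothetical sketch (import path assumed; a connected camera required to run):

```python
def sync_triggered_capture():
    # Deferred import so the sketch parses without the camera SDK installed.
    # Module path is an assumption; adjust to your installation.
    from mindtrace.hardware.cameras.stereo import StereoCamera

    camera = StereoCamera()  # opens first available Stereo ace camera
    try:
        camera.enable_software_trigger()
        camera.start_grabbing()   # required before execute_trigger()
        camera.execute_trigger()
        return camera.capture()
    finally:
        camera.close()
```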
StereoGrabResult
dataclass
StereoGrabResult(
intensity: Optional[ndarray],
disparity: Optional[ndarray],
timestamp: float,
frame_number: int,
disparity_calibrated: Optional[ndarray] = None,
has_intensity: bool = True,
has_disparity: bool = True,
)
Result from stereo camera capture containing multi-component data.
Attributes:

| Name | Type | Description |
|---|---|---|
| intensity | Optional[ndarray] | Intensity image - RGB8 (H, W, 3) or Mono8 (H, W) |
| disparity | Optional[ndarray] | Disparity map - uint16 (H, W) |
| timestamp | float | Capture timestamp in seconds |
| frame_number | int | Sequential frame number |
| disparity_calibrated | Optional[ndarray] | Calibrated disparity map - float32 (H, W), optional |
| has_intensity | bool | Flag indicating if intensity data is present |
| has_disparity | bool | Flag indicating if disparity data is present |
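The has_intensity / has_disparity flags let consumers handle partial captures (e.g., a disparity-only grab). A local stand-in dataclass mirroring the documented fields is used below so the guard pattern runs without camera hardware; it is not the library's class.

```python
# Guarding on StereoGrabResult-style flags before using component data.
# GrabResultStandIn mirrors the documented fields for illustration only.
from dataclasses import dataclass
from typing import Optional

@dataclass
class GrabResultStandIn:
    intensity: Optional[object]
    disparity: Optional[object]
    timestamp: float
    frame_number: int
    disparity_calibrated: Optional[object] = None
    has_intensity: bool = True
    has_disparity: bool = True

# Simulate a disparity-only capture (enable_intensity=False).
result = GrabResultStandIn(intensity=None, disparity=[[0]], timestamp=0.0,
                           frame_number=1, has_intensity=False)

available = []
if result.has_intensity and result.intensity is not None:
    available.append("intensity")
if result.has_disparity and result.disparity is not None:
    available.append("disparity")
print(available)  # ['disparity']
```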
backends
Stereo camera backends.
This module provides the abstract base class and concrete implementations for stereo camera backends.
BaslerStereoAceBackend
Bases: StereoCameraBackend
Backend for Basler Stereo ace cameras using pypylon.
The Stereo ace camera is accessed through a unified device interface (DeviceClass: BaslerGTC/Basler/basler_xw) that presents the stereo pair as a single camera with multi-component output.
Extends StereoCameraBackend to provide a consistent interface across different stereo camera manufacturers.
Initialize Basler Stereo ace backend.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| serial_number | Optional[str] | Serial number or user-defined name of specific camera. If all digits, treated as serial number; otherwise, treated as user-defined name. If None, opens first available Stereo ace camera. | None |
| op_timeout_s | float | Timeout in seconds for SDK operations (default 30s). | 30.0 |

Raises:

| Type | Description |
|---|---|
| SDKNotAvailableError | If pypylon is not available |
staticmethod
Discover available Stereo ace cameras.
Returns:

| Type | Description |
|---|---|
| List[str] | List of serial numbers for available Stereo ace cameras |

Raises:

| Type | Description |
|---|---|
| SDKNotAvailableError | If pypylon is not available |
async
classmethod
Async wrapper for discover() - runs discovery in threadpool.
Use this instead of discover() when calling from async context to avoid blocking the event loop during camera discovery.
Returns:

| Type | Description |
|---|---|
| List[str] | List of serial numbers for available Stereo ace cameras |
staticmethod
Discover Stereo ace cameras with detailed information.
Returns:

| Type | Description |
|---|---|
| List[Dict[str, str]] | List of dictionaries containing camera information |

Raises:

| Type | Description |
|---|---|
| SDKNotAvailableError | If pypylon is not available |
async
classmethod
Async wrapper for discover_detailed() - runs discovery in threadpool.
Returns:

| Type | Description |
|---|---|
| List[Dict[str, str]] | List of dictionaries containing camera information |
async
Initialize camera connection.
Returns:

| Type | Description |
|---|---|
| bool | True if initialization successful, False otherwise |
async
Get factory calibration parameters from camera.
Returns:

| Type | Description |
|---|---|
| StereoCalibrationData | StereoCalibrationData with factory calibration |

Raises:

| Type | Description |
|---|---|
| CameraConnectionError | If camera not opened |
| CameraConfigurationError | If calibration cannot be read |
async
capture(
timeout_ms: int = 20000,
enable_intensity: bool = True,
enable_disparity: bool = True,
calibrate_disparity: bool = True,
) -> StereoGrabResult
Capture stereo data with multiple components.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| timeout_ms | int | Capture timeout in milliseconds | 20000 |
| enable_intensity | bool | Whether to capture intensity data | True |
| enable_disparity | bool | Whether to capture disparity data | True |
| calibrate_disparity | bool | Whether to apply calibration to disparity | True |

Returns:

| Type | Description |
|---|---|
| StereoGrabResult | StereoGrabResult containing captured data |

Raises:

| Type | Description |
|---|---|
| CameraConnectionError | If camera not opened |
| CameraCaptureError | If capture fails |
async
Configure camera parameters.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| **params | | Parameter name-value pairs to configure. Special parameters: trigger_mode ("continuous" or "trigger"), depth_range (tuple of (min_depth, max_depth)), illumination_mode ("AlwaysActive" or "AlternateActive"), binning (tuple of (horizontal, vertical)), depth_quality ("Full", "High", "Normal", or "Low"). All other parameters are passed directly to the camera. | {} |

Raises:

| Type | Description |
|---|---|
| CameraConnectionError | If camera not opened |
| CameraConfigurationError | If configuration fails |
async
Set depth measurement range.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| min_depth | float | Minimum depth in meters (e.g., 0.3) | required |
| max_depth | float | Maximum depth in meters (e.g., 5.0) | required |

Raises:

| Type | Description |
|---|---|
| CameraConfigurationError | If configuration fails |
async
Set illumination mode.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| mode | str | 'AlwaysActive' (low latency) or 'AlternateActive' (clean intensity) | required |

Raises:

| Type | Description |
|---|---|
| CameraConfigurationError | If configuration fails |
async
Enable binning for latency reduction.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| horizontal | int | Horizontal binning factor (typically 2) | 2 |
| vertical | int | Vertical binning factor (typically 2) | 2 |

Note

When using binning for low latency, consider also setting depth quality to "Full" using set_depth_quality("Full").

Raises:

| Type | Description |
|---|---|
| CameraConfigurationError | If configuration fails |
async
Set depth quality level.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| quality | str | Depth quality setting. Common values: "Full" (highest quality, recommended with binning), "Normal" (standard quality), "Low" (lower quality, faster processing) | required |

Note

Setting quality to "Full" with binning reduces latency while maintaining depth quality. This is recommended for low-latency applications.

Raises:

| Type | Description |
|---|---|
| CameraConfigurationError | If configuration fails |

Example

# Low latency configuration
await camera.set_binning(2, 2)
await camera.set_depth_quality("Full")
async
Set pixel format for intensity component.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
format
|
str
|
Pixel format ("RGB8", "Mono8", etc.) |
required |
Raises:
| Type | Description |
|---|---|
CameraConnectionError
|
If camera not opened |
CameraConfigurationError
|
If format not available or configuration fails |
Example
await camera.set_pixel_format("Mono8") # Force grayscale
async
Set exposure time in microseconds.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| microseconds | float | Exposure time in microseconds (e.g., 5000 = 5ms) | required |

Raises:

| Type | Description |
|---|---|
| CameraConfigurationError | If configuration fails |

Example

await camera.set_exposure_time(5000)  # 5ms exposure
async
Set camera gain.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| gain | float | Gain value (typically 0.0 to 24.0, camera-dependent) | required |

Raises:

| Type | Description |
|---|---|
| CameraConfigurationError | If configuration fails |

Example

await camera.set_gain(2.0)
async
Get current exposure time in microseconds.
Returns:

| Type | Description |
|---|---|
| float | Current exposure time in microseconds |

Raises:

| Type | Description |
|---|---|
| CameraConnectionError | If camera not opened |

Example

exposure = await camera.get_exposure_time()
print(f"Current exposure: {exposure}us")
async
Get current camera gain.
Returns:

| Type | Description |
|---|---|
| float | Current gain value |

Raises:

| Type | Description |
|---|---|
| CameraConnectionError | If camera not opened |

Example

gain = await camera.get_gain()
print(f"Current gain: {gain}")
async
Get current depth quality setting.
Returns:

| Type | Description |
|---|---|
| str | Current depth quality level (e.g., "Full", "Normal", "Low") |

Raises:

| Type | Description |
|---|---|
| CameraConnectionError | If camera not opened |

Example

quality = await camera.get_depth_quality()
print(f"Depth quality: {quality}")
async
Get current pixel format.
Returns:

| Type | Description |
|---|---|
| str | Current pixel format (e.g., "RGB8", "Mono8", "Coord3D_C16") |

Raises:

| Type | Description |
|---|---|
| CameraConnectionError | If camera not opened |

Example

format = await camera.get_pixel_format()
print(f"Pixel format: {format}")
async
Get current binning settings.
Returns:

| Type | Description |
|---|---|
| tuple[int, int] | Tuple of (horizontal_binning, vertical_binning) |

Raises:

| Type | Description |
|---|---|
| CameraConnectionError | If camera not opened |

Example

h_bin, v_bin = await camera.get_binning()
print(f"Binning: {h_bin}x{v_bin}")
async
Get current illumination mode.
Returns:

| Type | Description |
|---|---|
| str | Current illumination mode ("AlwaysActive" or "AlternateActive") |

Raises:

| Type | Description |
|---|---|
| CameraConnectionError | If camera not opened |

Example

mode = await camera.get_illumination_mode()
print(f"Illumination: {mode}")
async
Get current depth measurement range in meters.
Returns:

| Type | Description |
|---|---|
| tuple[float, float] | Tuple of (min_depth, max_depth) in meters |

Raises:

| Type | Description |
|---|---|
| CameraConnectionError | If camera not opened |

Example

min_d, max_d = await camera.get_depth_range()
print(f"Depth range: {min_d}m - {max_d}m")
async
Set trigger mode (simplified interface).
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| mode | str | Trigger mode: "continuous" (free-running continuous acquisition, TriggerMode=Off) or "trigger" (software-triggered acquisition, TriggerMode=On, TriggerSource=Software) | required |

Raises:

| Type | Description |
|---|---|
| CameraConnectionError | If camera not opened |
| CameraConfigurationError | If invalid mode or configuration fails |
Examples:
async
Get current trigger mode (simplified interface).
Returns:

| Type | Description |
|---|---|
| str | "continuous" if TriggerMode is Off, "trigger" if TriggerMode is On with Software source |

Raises:

| Type | Description |
|---|---|
| CameraConnectionError | If camera not opened |
Examples:
async
Get available trigger modes.
Returns:

| Type | Description |
|---|---|
| List[str] | List of supported trigger modes: ["continuous", "trigger"] |
Note
This provides a simplified interface. The underlying camera supports additional modes (SingleFrame, MultiFrame, hardware triggers) accessible via direct configure() calls if needed.
async
Start grabbing frames.
This must be called before execute_trigger() in software trigger mode.
Raises:

| Type | Description |
|---|---|
| CameraConnectionError | If camera not opened |
async
Execute software trigger.
Note: In software trigger mode, ensure start_grabbing() is called first, or call capture() once before the trigger loop to start grabbing.
Raises:

| Type | Description |
|---|---|
| CameraConnectionError | If camera not opened |
| CameraConfigurationError | If trigger execution fails |
async
capture_point_cloud(
include_colors: bool = True,
include_confidence: bool = False,
downsample_factor: int = 1,
timeout_ms: int = 20000,
) -> PointCloudData
Capture and generate 3D point cloud.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| include_colors | bool | Whether to include color information from intensity | True |
| include_confidence | bool | Whether to include confidence values (not supported) | False |
| downsample_factor | int | Downsampling factor (1 = no downsampling) | 1 |
| timeout_ms | int | Capture timeout in milliseconds | 20000 |

Returns:

| Type | Description |
|---|---|
| PointCloudData | PointCloudData with 3D points and optional colors |

Raises:

| Type | Description |
|---|---|
| CameraConnectionError | If camera not opened |
| CameraCaptureError | If capture fails |
| CameraConfigurationError | If calibration not available |
StereoCameraBackend
Bases: MindtraceABC
Abstract base class for all stereo camera implementations.
This class defines the async interface that all stereo camera backends must implement to ensure consistent behavior across different stereo camera types and manufacturers. Uses async-first design consistent with CameraBackend and PLC backends.
Attributes:

| Name | Type | Description |
|---|---|---|
| serial_number | | Unique identifier for the camera |
| calibration | Optional[StereoCalibrationData] | Factory calibration parameters |
| is_open | bool | Camera connection status |
Implementation Guide

- Offload blocking SDK calls from async methods: Use `asyncio.to_thread` for simple cases, or `loop.run_in_executor` with a per-instance single-thread executor when the SDK requires thread affinity.
- Thread affinity: Many vendor SDKs are safest when all calls originate from one OS thread. Prefer a dedicated single-thread executor created during `initialize()` and shut down in `close()` to serialize SDK access without blocking the event loop.
- Timeouts and cancellation: Prefer SDK-native timeouts where available. Otherwise, wrap awaited futures with `asyncio.wait_for` to bound runtime. Note that cancelling an await does not stop the underlying thread function; design idempotent/short tasks when possible.
- Event loop hygiene: Never call blocking functions (e.g., long SDK calls, `time.sleep`) directly in async methods. Replace sleeps with `await asyncio.sleep` or run blocking work in the executor.
- Sync helpers: Lightweight getters/setters that do not touch hardware may remain synchronous. If a "getter" calls into the SDK, route it through the executor to avoid blocking.
- Errors: Map SDK-specific exceptions to the domain exceptions in `mindtrace.hardware.core.exceptions` with clear, contextual messages.
- Cleanup: Ensure resources (device handles, executors, buffers) are released in `close()`. `__aenter__`/`__aexit__` already call `initialize`/`close` for async contexts.
Example Implementation

class MyStereoCameraBackend(StereoCameraBackend):
    async def initialize(self) -> bool:
        # Connect to camera, load calibration
        return True

    async def capture(self, ...) -> StereoGrabResult:
        # Capture stereo data
        return StereoGrabResult(...)
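The executor guidance above can be sketched as a runnable, hardware-free demo: a fake backend creates a single-thread executor in initialize(), funnels a simulated blocking SDK call through it with a wait_for timeout, and shuts it down in close(). The class and the sleeping "SDK call" are stand-ins, not library code.

```python
# Per-instance single-thread executor pattern from the Implementation Guide:
# all (simulated) SDK calls run on one worker thread, keeping the event loop
# free and satisfying SDK thread-affinity requirements.
import asyncio
import threading
import time
from concurrent.futures import ThreadPoolExecutor

class ExecutorBackedBackend:
    def __init__(self):
        self._executor = None  # created in initialize(), shut down in close()

    async def initialize(self):
        self._executor = ThreadPoolExecutor(max_workers=1)

    async def close(self):
        self._executor.shutdown(wait=True)

    def _blocking_sdk_call(self):
        time.sleep(0.01)  # stands in for a slow vendor SDK call
        return threading.current_thread().name

    async def capture(self):
        loop = asyncio.get_running_loop()
        # Bound the runtime, since this fake SDK has no native timeout.
        return await asyncio.wait_for(
            loop.run_in_executor(self._executor, self._blocking_sdk_call),
            timeout=5.0,
        )

async def main():
    backend = ExecutorBackedBackend()
    await backend.initialize()
    try:
        return {await backend.capture() for _ in range(5)}
    finally:
        await backend.close()

thread_names = asyncio.run(main())
print(len(thread_names))  # 1 — every call ran on the same worker thread
```

With max_workers=1 the executor serializes SDK access for free; a shared default threadpool would not give that guarantee.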
Initialize base stereo camera backend.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| serial_number | Optional[str] | Unique identifier for the camera (auto-discovered if None) | None |
| op_timeout_s | float | Default timeout in seconds for SDK operations | 30.0 |
abstractmethod
staticmethod
Discover available stereo cameras.
Returns:

| Type | Description |
|---|---|
| List[str] | List of serial numbers or identifiers for available cameras |

Raises:

| Type | Description |
|---|---|
| SDKNotAvailableError | If required SDK is not available |
async
classmethod
Async wrapper for discover() - runs discovery in threadpool.
Default implementation runs discover() in a thread. Override if your SDK provides native async discovery.
Returns:

| Type | Description |
|---|---|
| List[str] | List of serial numbers for available cameras |
staticmethod
Discover cameras with detailed information.
Returns:

| Type | Description |
|---|---|
| List[Dict[str, str]] | List of dictionaries containing camera information (serial_number, model, etc.) |
async
classmethod
Async wrapper for discover_detailed().
abstractmethod
async
Initialize camera connection and load calibration.
This method should:

1. Connect to the camera hardware
2. Load factory calibration parameters
3. Apply default configuration
Returns:

| Type | Description |
|---|---|
| bool | True if initialization successful |

Raises:

| Type | Description |
|---|---|
| CameraNotFoundError | If camera cannot be found |
| CameraInitializationError | If camera initialization fails |
| CameraConnectionError | If camera connection fails |
abstractmethod
async
capture(
timeout_ms: int = 20000,
enable_intensity: bool = True,
enable_disparity: bool = True,
calibrate_disparity: bool = True,
) -> StereoGrabResult
Capture stereo data with multiple components.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| timeout_ms | int | Capture timeout in milliseconds | 20000 |
| enable_intensity | bool | Whether to capture intensity/texture image | True |
| enable_disparity | bool | Whether to capture disparity map | True |
| calibrate_disparity | bool | Whether to apply calibration to raw disparity | True |

Returns:

| Type | Description |
|---|---|
| StereoGrabResult | StereoGrabResult containing captured data |

Raises:

| Type | Description |
|---|---|
| CameraConnectionError | If camera not opened |
| CameraCaptureError | If capture fails |
| CameraTimeoutError | If capture times out |
abstractmethod
async
Close camera and release resources.
This method should:

1. Stop any ongoing acquisition
2. Release hardware handles
3. Clean up executors/threads if used
async
Get factory calibration parameters from camera.
Returns:

| Type | Description |
|---|---|
| StereoCalibrationData | StereoCalibrationData with factory calibration |

Raises:

| Type | Description |
|---|---|
| CameraConnectionError | If camera not opened |
| CameraConfigurationError | If calibration cannot be read |
async
Capture and generate 3D point cloud.
Default implementation captures stereo data and generates point cloud using calibration parameters. Override for backend-specific optimization.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| include_colors | bool | Whether to include color information | True |
| downsample_factor | int | Downsampling factor (1 = no downsampling) | 1 |

Returns:

| Type | Description |
|---|---|
| PointCloudData | PointCloudData with 3D points and optional attributes |

Raises:

| Type | Description |
|---|---|
| CameraConnectionError | If camera not opened |
| CameraCaptureError | If capture fails |
| CameraConfigurationError | If calibration not available |
async
Configure camera parameters.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| **params | | Parameter name-value pairs | {} |

Raises:

| Type | Description |
|---|---|
| CameraConnectionError | If camera not opened |
| CameraConfigurationError | If configuration fails |
async
Set exposure time in microseconds.
async
Set depth measurement range in meters.
async
Get current depth measurement range in meters.
async
Set depth quality level (e.g., 'Full', 'Normal', 'Low').
async
Set illumination mode ('AlwaysActive' or 'AlternateActive').
async
Enable binning for latency reduction.
async
Get current binning settings (horizontal, vertical).
async
Set pixel format for intensity component.
async
Set trigger mode ('continuous' or 'trigger').
async
Start grabbing frames (required before execute_trigger in trigger mode).
basler
Basler Stereo ace backend.
Bases: StereoCameraBackend
Backend for Basler Stereo ace cameras using pypylon.
The Stereo ace camera is accessed through a unified device interface (DeviceClass: BaslerGTC/Basler/basler_xw) that presents the stereo pair as a single camera with multi-component output.
Extends StereoCameraBackend to provide a consistent interface across different stereo camera manufacturers.
Initialize Basler Stereo ace backend.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| serial_number | Optional[str] | Serial number or user-defined name of specific camera. If all digits, treated as serial number; otherwise, treated as user-defined name. If None, opens first available Stereo ace camera. | None |
| op_timeout_s | float | Timeout in seconds for SDK operations (default 30s). | 30.0 |

Raises:

| Type | Description |
|---|---|
| SDKNotAvailableError | If pypylon is not available |
staticmethod
Discover available Stereo ace cameras.
Returns:

| Type | Description |
|---|---|
| List[str] | List of serial numbers for available Stereo ace cameras |

Raises:

| Type | Description |
|---|---|
| SDKNotAvailableError | If pypylon is not available |
async
classmethod
Async wrapper for discover() - runs discovery in threadpool.
Use this instead of discover() when calling from async context to avoid blocking the event loop during camera discovery.
Returns:

| Type | Description |
|---|---|
| List[str] | List of serial numbers for available Stereo ace cameras |
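The default async-wrapper pattern simply runs the blocking discovery in a worker thread. A minimal self-contained sketch of that pattern (the `discover` body here is a stand-in, not the pypylon-backed implementation):

```python
import asyncio

def discover():
    # Stand-in for the blocking, SDK-backed discover() staticmethod.
    return ["24123456", "24123457"]

async def discover_async():
    # Run blocking discovery in a thread so the event loop stays responsive.
    return await asyncio.to_thread(discover)

serials = asyncio.run(discover_async())
```

Calling the blocking `discover()` directly from a coroutine would stall every other task on the loop for the duration of the SDK scan; `asyncio.to_thread` avoids that.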
staticmethod
Discover Stereo ace cameras with detailed information.
Returns:

| Type | Description |
|---|---|
| List[Dict[str, str]] | List of dictionaries containing camera information |

Raises:

| Type | Description |
|---|---|
| SDKNotAvailableError | If pypylon is not available |
async
classmethod
Async wrapper for discover_detailed() - runs discovery in threadpool.
Returns:

| Type | Description |
|---|---|
| List[Dict[str, str]] | List of dictionaries containing camera information |
async
Initialize camera connection.
Returns:

| Type | Description |
|---|---|
| bool | True if initialization successful, False otherwise |
async
Get factory calibration parameters from camera.
Returns:

| Type | Description |
|---|---|
| StereoCalibrationData | StereoCalibrationData with factory calibration |

Raises:

| Type | Description |
|---|---|
| CameraConnectionError | If camera not opened |
| CameraConfigurationError | If calibration cannot be read |
async
```python
capture(
    timeout_ms: int = 20000,
    enable_intensity: bool = True,
    enable_disparity: bool = True,
    calibrate_disparity: bool = True,
) -> StereoGrabResult
```
Capture stereo data with multiple components.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| timeout_ms | int | Capture timeout in milliseconds | 20000 |
| enable_intensity | bool | Whether to capture intensity data | True |
| enable_disparity | bool | Whether to capture disparity data | True |
| calibrate_disparity | bool | Whether to apply calibration to disparity | True |

Returns:

| Type | Description |
|---|---|
| StereoGrabResult | StereoGrabResult containing captured data |

Raises:

| Type | Description |
|---|---|
| CameraConnectionError | If camera not opened |
| CameraCaptureError | If capture fails |
async
Configure camera parameters.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| **params | | Parameter name-value pairs to configure. Special parameters: trigger_mode ("continuous" or "trigger"), depth_range (tuple of (min_depth, max_depth)), illumination_mode ("AlwaysActive" or "AlternateActive"), binning (tuple of (horizontal, vertical)), depth_quality ("Full", "High", "Normal", or "Low"). All other parameters are passed directly to the camera. | {} |

Raises:

| Type | Description |
|---|---|
| CameraConnectionError | If camera not opened |
| CameraConfigurationError | If configuration fails |
async
Set depth measurement range.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| min_depth | float | Minimum depth in meters (e.g., 0.3) | required |
| max_depth | float | Maximum depth in meters (e.g., 5.0) | required |

Raises:

| Type | Description |
|---|---|
| CameraConfigurationError | If configuration fails |
async
Set illumination mode.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| mode | str | 'AlwaysActive' (low latency) or 'AlternateActive' (clean intensity) | required |

Raises:

| Type | Description |
|---|---|
| CameraConfigurationError | If configuration fails |
async
Enable binning for latency reduction.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| horizontal | int | Horizontal binning factor (typically 2) | 2 |
| vertical | int | Vertical binning factor (typically 2) | 2 |

Note
When using binning for low latency, consider also setting depth quality to "Full" using set_depth_quality("Full").

Raises:

| Type | Description |
|---|---|
| CameraConfigurationError | If configuration fails |
async
Set depth quality level.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| quality | str | Depth quality setting. Common values: "Full" (highest quality, recommended with binning), "Normal" (standard quality), "Low" (lower quality, faster processing). | required |

Note
Setting quality to "Full" with binning reduces latency while maintaining depth quality. This is recommended for low-latency applications.

Raises:

| Type | Description |
|---|---|
| CameraConfigurationError | If configuration fails |
Example
Low-latency configuration:

```python
await camera.set_binning(2, 2)
await camera.set_depth_quality("Full")
```
async
Set pixel format for intensity component.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| format | str | Pixel format ("RGB8", "Mono8", etc.) | required |

Raises:

| Type | Description |
|---|---|
| CameraConnectionError | If camera not opened |
| CameraConfigurationError | If format not available or configuration fails |
Example

```python
await camera.set_pixel_format("Mono8")  # Force grayscale
```
async
Set exposure time in microseconds.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| microseconds | float | Exposure time in microseconds (e.g., 5000 = 5 ms) | required |

Raises:

| Type | Description |
|---|---|
| CameraConfigurationError | If configuration fails |
Example

```python
await camera.set_exposure_time(5000)  # 5 ms exposure
```
async
Set camera gain.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| gain | float | Gain value (typically 0.0 to 24.0, camera-dependent) | required |

Raises:

| Type | Description |
|---|---|
| CameraConfigurationError | If configuration fails |

Example

```python
await camera.set_gain(2.0)
```
async
Get current exposure time in microseconds.
Returns:

| Type | Description |
|---|---|
| float | Current exposure time in microseconds |

Raises:

| Type | Description |
|---|---|
| CameraConnectionError | If camera not opened |

Example

```python
exposure = await camera.get_exposure_time()
print(f"Current exposure: {exposure}us")
```
async
Get current camera gain.
Returns:

| Type | Description |
|---|---|
| float | Current gain value |

Raises:

| Type | Description |
|---|---|
| CameraConnectionError | If camera not opened |

Example

```python
gain = await camera.get_gain()
print(f"Current gain: {gain}")
```
async
Get current depth quality setting.
Returns:

| Type | Description |
|---|---|
| str | Current depth quality level (e.g., "Full", "Normal", "Low") |

Raises:

| Type | Description |
|---|---|
| CameraConnectionError | If camera not opened |

Example

```python
quality = await camera.get_depth_quality()
print(f"Depth quality: {quality}")
```
async
Get current pixel format.
Returns:

| Type | Description |
|---|---|
| str | Current pixel format (e.g., "RGB8", "Mono8", "Coord3D_C16") |

Raises:

| Type | Description |
|---|---|
| CameraConnectionError | If camera not opened |

Example

```python
pixel_format = await camera.get_pixel_format()  # avoid shadowing builtin format()
print(f"Pixel format: {pixel_format}")
```
async
Get current binning settings.
Returns:

| Type | Description |
|---|---|
| tuple[int, int] | Tuple of (horizontal_binning, vertical_binning) |

Raises:

| Type | Description |
|---|---|
| CameraConnectionError | If camera not opened |

Example

```python
h_bin, v_bin = await camera.get_binning()
print(f"Binning: {h_bin}x{v_bin}")
```
async
Get current illumination mode.
Returns:

| Type | Description |
|---|---|
| str | Current illumination mode ("AlwaysActive" or "AlternateActive") |

Raises:

| Type | Description |
|---|---|
| CameraConnectionError | If camera not opened |

Example

```python
mode = await camera.get_illumination_mode()
print(f"Illumination: {mode}")
```
async
Get current depth measurement range in meters.
Returns:

| Type | Description |
|---|---|
| tuple[float, float] | Tuple of (min_depth, max_depth) in meters |

Raises:

| Type | Description |
|---|---|
| CameraConnectionError | If camera not opened |

Example

```python
min_d, max_d = await camera.get_depth_range()
print(f"Depth range: {min_d}m - {max_d}m")
```
async
Set trigger mode (simplified interface).
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| mode | str | Trigger mode: "continuous" (free-running continuous acquisition, TriggerMode=Off) or "trigger" (software-triggered acquisition, TriggerMode=On, TriggerSource=Software) | required |

Raises:

| Type | Description |
|---|---|
| CameraConnectionError | If camera not opened |
| CameraConfigurationError | If invalid mode or configuration fails |
Examples:
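The documented software-trigger workflow is set_trigger_mode → start_grabbing → execute_trigger → capture. A hedged sketch of that sequencing against a mock camera (MockStereoCamera is an illustration, not a library class; real calls require a connected Stereo ace):

```python
import asyncio

class MockStereoCamera:
    """Minimal stand-in that mirrors the documented trigger state machine."""

    def __init__(self):
        self.mode = "continuous"
        self.grabbing = False

    async def set_trigger_mode(self, mode: str):
        if mode not in ("continuous", "trigger"):
            raise ValueError(f"invalid trigger mode: {mode}")
        self.mode = mode

    async def start_grabbing(self):
        self.grabbing = True

    async def execute_trigger(self):
        # Per the docs, start_grabbing() must precede execute_trigger().
        if self.mode == "trigger" and not self.grabbing:
            raise RuntimeError("call start_grabbing() before execute_trigger()")

    async def capture(self):
        return {"intensity": "image", "disparity": "map"}

async def main():
    cam = MockStereoCamera()
    await cam.set_trigger_mode("trigger")
    await cam.start_grabbing()   # required before execute_trigger()
    await cam.execute_trigger()
    return await cam.capture()

result = asyncio.run(main())
```

Skipping `start_grabbing()` in trigger mode is the most common misuse; the mock raises for it the same way the real backend raises a connection/configuration error.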
async
Get current trigger mode (simplified interface).
Returns:

| Type | Description |
|---|---|
| str | "continuous" if TriggerMode is Off, "trigger" if TriggerMode is On with Software source |

Raises:

| Type | Description |
|---|---|
| CameraConnectionError | If camera not opened |
Examples:
async
Get available trigger modes.
Returns:

| Type | Description |
|---|---|
| List[str] | List of supported trigger modes: ["continuous", "trigger"] |
Note
This provides a simplified interface. The underlying camera supports additional modes (SingleFrame, MultiFrame, hardware triggers) accessible via direct configure() calls if needed.
async
Start grabbing frames.
This must be called before execute_trigger() in software trigger mode.
Raises:

| Type | Description |
|---|---|
| CameraConnectionError | If camera not opened |
async
Execute software trigger.
Note: In software trigger mode, ensure start_grabbing() is called first, or call capture() once before the trigger loop to start grabbing.
Raises:

| Type | Description |
|---|---|
| CameraConnectionError | If camera not opened |
| CameraConfigurationError | If trigger execution fails |
async
```python
capture_point_cloud(
    include_colors: bool = True,
    include_confidence: bool = False,
    downsample_factor: int = 1,
    timeout_ms: int = 20000,
) -> PointCloudData
```
Capture and generate 3D point cloud.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| include_colors | bool | Whether to include color information from intensity | True |
| include_confidence | bool | Whether to include confidence values (not supported) | False |
| downsample_factor | int | Downsampling factor (1 = no downsampling) | 1 |
| timeout_ms | int | Capture timeout in milliseconds | 20000 |

Returns:

| Type | Description |
|---|---|
| PointCloudData | PointCloudData with 3D points and optional colors |

Raises:

| Type | Description |
|---|---|
| CameraConnectionError | If camera not opened |
| CameraCaptureError | If capture fails |
| CameraConfigurationError | If calibration not available |
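For orientation, the conversion from a disparity map to 3D points is standard pinhole reprojection. The sketch below shows the math conceptually; it is an assumption about what point-cloud generation does internally (the backend's actual implementation uses the factory calibration, and `disparity_to_points` is not a library function):

```python
def disparity_to_points(disparity, fx, baseline, cx, cy, downsample_factor=1):
    """Reproject a disparity map (2D list of floats, pixels) to 3D points.

    fx: focal length in pixels; baseline: stereo baseline in meters;
    cx, cy: principal point in pixels. Returns a list of (x, y, z) in meters.
    """
    points = []
    for v in range(0, len(disparity), downsample_factor):
        row = disparity[v]
        for u in range(0, len(row), downsample_factor):
            d = row[u]
            if d <= 0:
                continue  # invalid/missing disparity
            z = fx * baseline / d            # depth from disparity
            points.append(((u - cx) * z / fx, (v - cy) * z / fx, z))
    return points

disparity = [[2.0] * 4 for _ in range(4)]    # constant 2 px disparity
points = disparity_to_points(disparity, fx=100.0, baseline=0.1, cx=2.0, cy=2.0)
# z = fx * baseline / d = 100 * 0.1 / 2 = 5.0 m for every valid pixel
```

This also clarifies why `downsample_factor` trades resolution for speed: it simply strides over the disparity grid before reprojection.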
Basler Stereo ace camera backend using pypylon.
This backend provides access to Basler Stereo ace cameras which combine two ace2 Pro cameras with a pattern projector into a unified stereo vision system.
stereo_camera_backend
Abstract base class for stereo camera backends.
This module defines the async interface that all stereo camera backends must implement to ensure consistent behavior across different stereo camera types and manufacturers.
Following the same architectural pattern as CameraBackend for consistency.
Bases: MindtraceABC
Abstract base class for all stereo camera implementations.
This class defines the async interface that all stereo camera backends must implement to ensure consistent behavior across different stereo camera types and manufacturers. Uses async-first design consistent with CameraBackend and PLC backends.
Attributes:

| Name | Type | Description |
|---|---|---|
| serial_number | | Unique identifier for the camera |
| calibration | Optional[StereoCalibrationData] | Factory calibration parameters |
| is_open | bool | Camera connection status |
Implementation Guide
- Offload blocking SDK calls from async methods: Use `asyncio.to_thread` for simple cases, or `loop.run_in_executor` with a per-instance single-thread executor when the SDK requires thread affinity.
- Thread affinity: Many vendor SDKs are safest when all calls originate from one OS thread. Prefer a dedicated single-thread executor created during `initialize()` and shut down in `close()` to serialize SDK access without blocking the event loop.
- Timeouts and cancellation: Prefer SDK-native timeouts where available. Otherwise, wrap awaited futures with `asyncio.wait_for` to bound runtime. Note that cancelling an await does not stop the underlying thread function; design idempotent/short tasks when possible.
- Event loop hygiene: Never call blocking functions (e.g., long SDK calls, `time.sleep`) directly in async methods. Replace sleeps with `await asyncio.sleep` or run blocking work in the executor.
- Sync helpers: Lightweight getters/setters that do not touch hardware may remain synchronous. If a "getter" calls into the SDK, route it through the executor to avoid blocking.
- Errors: Map SDK-specific exceptions to the domain exceptions in `mindtrace.hardware.core.exceptions` with clear, contextual messages.
- Cleanup: Ensure resources (device handles, executors, buffers) are released in `close()`. `__aenter__`/`__aexit__` already call `initialize`/`close` for async contexts.
Example Implementation

```python
class MyStereoCameraBackend(StereoCameraBackend):
    async def initialize(self) -> bool:
        # Connect to camera, load calibration
        return True

    async def capture(self, ...) -> StereoGrabResult:
        # Capture stereo data
        return StereoGrabResult(...)
```
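The per-instance single-thread executor pattern from the Implementation Guide can be sketched as follows. This is a minimal, self-contained illustration with stand-in SDK calls (`ExecutorBackedBackend`, `_sdk_connect`, `_sdk_disconnect` are hypothetical names, not library API):

```python
import asyncio
from concurrent.futures import ThreadPoolExecutor
from typing import Optional

class ExecutorBackedBackend:
    """Serialize all SDK access onto one OS thread without blocking the loop."""

    def __init__(self, op_timeout_s: float = 30.0):
        self._op_timeout_s = op_timeout_s
        self._executor: Optional[ThreadPoolExecutor] = None

    async def initialize(self) -> bool:
        # One worker thread => every SDK call shares a single OS thread,
        # satisfying thread-affinity requirements of many vendor SDKs.
        self._executor = ThreadPoolExecutor(max_workers=1)
        return await self._run(self._sdk_connect)

    async def close(self) -> None:
        if self._executor is not None:
            await self._run(self._sdk_disconnect)
            self._executor.shutdown(wait=True)
            self._executor = None

    async def _run(self, fn, *args):
        # Bound runtime with asyncio.wait_for; note that cancelling this
        # await does not stop the already-running thread function.
        loop = asyncio.get_running_loop()
        return await asyncio.wait_for(
            loop.run_in_executor(self._executor, fn, *args),
            timeout=self._op_timeout_s,
        )

    # --- blocking stand-ins for vendor SDK calls ---
    def _sdk_connect(self):
        return True

    def _sdk_disconnect(self):
        return True

backend = ExecutorBackedBackend(op_timeout_s=5.0)
ok = asyncio.run(backend.initialize())
asyncio.run(backend.close())
```

The executor is created in `initialize()` and shut down in `close()`, matching the cleanup guidance above.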
Initialize base stereo camera backend.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| serial_number | Optional[str] | Unique identifier for the camera (auto-discovered if None) | None |
| op_timeout_s | float | Default timeout in seconds for SDK operations | 30.0 |
abstractmethod
staticmethod
Discover available stereo cameras.
Returns:

| Type | Description |
|---|---|
| List[str] | List of serial numbers or identifiers for available cameras |

Raises:

| Type | Description |
|---|---|
| SDKNotAvailableError | If required SDK is not available |
async
classmethod
Async wrapper for discover() - runs discovery in threadpool.
Default implementation runs discover() in a thread. Override if your SDK provides native async discovery.
Returns:

| Type | Description |
|---|---|
| List[str] | List of serial numbers for available cameras |
staticmethod
Discover cameras with detailed information.
Returns:

| Type | Description |
|---|---|
| List[Dict[str, str]] | List of dictionaries containing camera information (serial_number, model, etc.) |
async
classmethod
Async wrapper for discover_detailed().
abstractmethod
async
Initialize camera connection and load calibration.
This method should:
1. Connect to the camera hardware
2. Load factory calibration parameters
3. Apply default configuration
Returns:

| Type | Description |
|---|---|
| bool | True if initialization successful |

Raises:

| Type | Description |
|---|---|
| CameraNotFoundError | If camera cannot be found |
| CameraInitializationError | If camera initialization fails |
| CameraConnectionError | If camera connection fails |
abstractmethod
async
```python
capture(
    timeout_ms: int = 20000,
    enable_intensity: bool = True,
    enable_disparity: bool = True,
    calibrate_disparity: bool = True,
) -> StereoGrabResult
```
Capture stereo data with multiple components.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| timeout_ms | int | Capture timeout in milliseconds | 20000 |
| enable_intensity | bool | Whether to capture intensity/texture image | True |
| enable_disparity | bool | Whether to capture disparity map | True |
| calibrate_disparity | bool | Whether to apply calibration to raw disparity | True |

Returns:

| Type | Description |
|---|---|
| StereoGrabResult | StereoGrabResult containing captured data |

Raises:

| Type | Description |
|---|---|
| CameraConnectionError | If camera not opened |
| CameraCaptureError | If capture fails |
| CameraTimeoutError | If capture times out |
abstractmethod
async
Close camera and release resources.
This method should:
1. Stop any ongoing acquisition
2. Release hardware handles
3. Clean up executors/threads if used
async
Get factory calibration parameters from camera.
Returns:

| Type | Description |
|---|---|
| StereoCalibrationData | StereoCalibrationData with factory calibration |

Raises:

| Type | Description |
|---|---|
| CameraConnectionError | If camera not opened |
| CameraConfigurationError | If calibration cannot be read |
async
Capture and generate 3D point cloud.
Default implementation captures stereo data and generates point cloud using calibration parameters. Override for backend-specific optimization.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| include_colors | bool | Whether to include color information | True |
| downsample_factor | int | Downsampling factor (1 = no downsampling) | 1 |

Returns:

| Type | Description |
|---|---|
| PointCloudData | PointCloudData with 3D points and optional attributes |

Raises:

| Type | Description |
|---|---|
| CameraConnectionError | If camera not opened |
| CameraCaptureError | If capture fails |
| CameraConfigurationError | If calibration not available |
async
Configure camera parameters.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| **params | | Parameter name-value pairs | {} |

Raises:

| Type | Description |
|---|---|
| CameraConnectionError | If camera not opened |
| CameraConfigurationError | If configuration fails |
async
Set exposure time in microseconds.
async
Set depth measurement range in meters.
async
Get current depth measurement range in meters.
async
Set depth quality level (e.g., 'Full', 'Normal', 'Low').
async
Set illumination mode ('AlwaysActive' or 'AlternateActive').
async
Enable binning for latency reduction.
async
Get current binning settings (horizontal, vertical).
async
Set pixel format for intensity component.
async
Set trigger mode ('continuous' or 'trigger').
async
Start grabbing frames (required before execute_trigger in trigger mode).
core
Core stereo camera interfaces and data models.
AsyncStereoCamera
Bases: Mindtrace
Async stereo camera interface.
Provides high-level stereo camera operations including multi-component capture and 3D point cloud generation.
Initialize async stereo camera.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| backend | | Backend instance (e.g., BaslerStereoAceBackend) | required |
async
classmethod
Open and initialize a stereo camera.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| name | Optional[str] | Camera identifier in the format "BaslerStereoAce:serial_number". If None, opens the first available Stereo ace camera. | None |

Returns:

| Type | Description |
|---|---|
| 'AsyncStereoCamera' | Initialized AsyncStereoCamera instance |

Raises:

| Type | Description |
|---|---|
| CameraNotFoundError | If camera not found |
| CameraConnectionError | If connection fails |
Examples:
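A short illustration of the documented "Backend:serial_number" identifier convention accepted by open(). The `parse_camera_name` helper is hypothetical, written only to make the format concrete; it is not part of the library API:

```python
def parse_camera_name(name):
    """Split a 'Backend:serial' identifier; None selects the first camera.

    Hypothetical helper mirroring the documented name format, e.g.
    "BaslerStereoAce:24123456" (serial number is an example value).
    """
    if name is None:
        return ("BaslerStereoAce", None)   # open first available camera
    backend, sep, serial = name.partition(":")
    if not sep:
        raise ValueError(f"expected 'Backend:serial', got {name!r}")
    return (backend, serial)

parsed = parse_camera_name("BaslerStereoAce:24123456")
```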
async
Initialize camera and load calibration.
Returns:

| Type | Description |
|---|---|
| bool | True if initialization successful |
Note
Usually not needed as open() handles initialization
async
```python
capture(
    enable_intensity: bool = True,
    enable_disparity: bool = True,
    calibrate_disparity: bool = True,
    timeout_ms: int = 20000,
) -> StereoGrabResult
```
Capture multi-component stereo data.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| enable_intensity | bool | Whether to capture intensity image | True |
| enable_disparity | bool | Whether to capture disparity map | True |
| calibrate_disparity | bool | Whether to apply calibration to disparity | True |
| timeout_ms | int | Capture timeout in milliseconds | 20000 |

Returns:

| Type | Description |
|---|---|
| StereoGrabResult | StereoGrabResult containing captured data |

Raises:

| Type | Description |
|---|---|
| CameraConnectionError | If camera not opened |
| CameraCaptureError | If capture fails |
Examples:
async
Capture and generate 3D point cloud.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| include_colors | bool | Whether to include color information from intensity | True |
| downsample_factor | int | Downsampling factor (1 = no downsampling) | 1 |

Returns:

| Type | Description |
|---|---|
| PointCloudData | PointCloudData with 3D points and optional colors |

Raises:

| Type | Description |
|---|---|
| CameraConnectionError | If camera not opened |
| CameraCaptureError | If capture fails |
| CameraConfigurationError | If calibration not available |
Examples:
async
async
Set depth measurement range in meters.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| min_depth | float | Minimum depth (e.g., 0.3 meters) | required |
| max_depth | float | Maximum depth (e.g., 5.0 meters) | required |

Raises:

| Type | Description |
|---|---|
| CameraConfigurationError | If configuration fails |
Examples:
async
Set illumination mode.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| mode | str | 'AlwaysActive' (low latency) or 'AlternateActive' (clean intensity) | required |

Raises:

| Type | Description |
|---|---|
| CameraConfigurationError | If invalid mode or configuration fails |
Examples:
async
Enable binning for latency reduction.
Binning reduces network transfer and computation.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
horizontal
|
int
|
Horizontal binning factor (typically 2) |
2
|
vertical
|
int
|
Vertical binning factor (typically 2) |
2
|
Note
When using binning for low latency, consider also setting depth quality to "Full" using set_depth_quality("Full").
Raises:
| Type | Description |
|---|---|
CameraConfigurationError
|
If configuration fails |
Examples:
async
Set depth quality level.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
quality
|
str
|
Depth quality setting. Common values: - "Full": Highest quality, recommended with binning - "Normal": Standard quality - "Low": Lower quality, faster processing |
required |
Raises:
| Type | Description |
|---|---|
CameraConfigurationError
|
If configuration fails |
Examples:
async
Set pixel format for intensity component.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
format
|
str
|
Pixel format ("RGB8", "Mono8", etc.) |
required |
Raises:
| Type | Description |
|---|---|
CameraConnectionError
|
If camera not opened |
CameraConfigurationError
|
If format not available or configuration fails |
Examples:
async
async
async
async
async
async
async
async
async
Get current depth measurement range in meters.
Returns:
| Type | Description |
|---|---|
tuple[float, float]
|
Tuple of (min_depth, max_depth) in meters |
Raises:
| Type | Description |
|---|---|
CameraConnectionError
|
If camera not opened |
Examples:
async
Set trigger mode (simplified interface).
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
mode
|
str
|
Trigger mode ("continuous" or "trigger") - "continuous": Free-running continuous acquisition - "trigger": Software-triggered acquisition |
required |
Raises:
| Type | Description |
|---|---|
CameraConnectionError
|
If camera not opened |
CameraConfigurationError
|
If invalid mode or configuration fails |
Examples:
async
async
async
Start grabbing frames.
Must be called after enable_software_trigger() and before execute_trigger().
Raises:
| Type | Description |
|---|---|
CameraConnectionError
|
If camera not opened |
Examples:
async
Execute software trigger.
Triggers a frame capture when in software trigger mode. Note: start_grabbing() must be called first after enabling software trigger.
Raises:
| Type | Description |
|---|---|
CameraConnectionError
|
If camera not opened |
CameraConfigurationError
|
If trigger execution fails |
Examples:
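The trigger workflow documented above (enable_software_trigger(), then start_grabbing(), then execute_trigger(), then capture()) can be sketched as one helper. The method names are taken from the sections above; the helper name triggered_capture is ours, and nothing runs against real hardware here.

```python
# Hedged sketch of the documented software-trigger sequence.

async def triggered_capture(camera):
    """Fire one software-triggered frame and return the captured result."""
    await camera.enable_software_trigger()
    await camera.start_grabbing()   # must precede execute_trigger()
    await camera.execute_trigger()  # fires one frame
    return await camera.capture()
```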
PointCloudData
dataclass
PointCloudData(
points: ndarray,
colors: Optional[ndarray] = None,
num_points: int = 0,
has_colors: bool = False,
)
3D point cloud data with optional color information.
Attributes:

| Name | Type | Description |
|---|---|---|
| points | ndarray | Array of 3D points (N, 3) - (x, y, z) in meters |
| colors | Optional[ndarray] | Optional array of RGB colors (N, 3) - values in [0, 1] |
| num_points | int | Number of valid points |
| has_colors | bool | Flag indicating if color information is present |

Save point cloud as PLY file.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| path | str | Output file path | required |
| binary | bool | If True, save in binary format; otherwise ASCII | True |

Raises:

| Type | Description |
|---|---|
| ImportError | If plyfile is not installed |
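The save method above depends on the third-party plyfile package. As a hedged fallback sketch showing what the ASCII variant of the format looks like, a minimal writer for bare (x, y, z) points; the function name write_ascii_ply is ours and is not part of the package API.

```python
# Minimal ASCII PLY writer for (x, y, z) points -- illustrative only.

def write_ascii_ply(path: str, points) -> None:
    """Write an iterable of (x, y, z) tuples as an ASCII PLY file."""
    points = list(points)
    with open(path, "w") as f:
        f.write("ply\nformat ascii 1.0\n")
        f.write(f"element vertex {len(points)}\n")
        f.write("property float x\nproperty float y\nproperty float z\n")
        f.write("end_header\n")
        for x, y, z in points:
            f.write(f"{x} {y} {z}\n")
```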
StereoCalibrationData
dataclass
StereoCalibrationData(
baseline: float,
focal_length: float,
principal_point_u: float,
principal_point_v: float,
scale3d: float,
offset3d: float,
Q: ndarray,
)
Factory calibration parameters for stereo camera.
These parameters are provided by the camera manufacturer and used for 3D reconstruction from disparity maps.
Attributes:

| Name | Type | Description |
|---|---|---|
| baseline | float | Stereo baseline in meters (distance between camera pair) |
| focal_length | float | Focal length in pixels |
| principal_point_u | float | Principal point U coordinate in pixels |
| principal_point_v | float | Principal point V coordinate in pixels |
| scale3d | float | Scale factor for disparity conversion |
| offset3d | float | Offset for disparity conversion |
| Q | ndarray | 4x4 reprojection matrix for point cloud generation |

classmethod
Create calibration data from camera parameter dictionary.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| params | dict | Dictionary containing calibration parameters: Scan3dBaseline (baseline in meters), Scan3dFocalLength (focal length in pixels), Scan3dPrincipalPointU (principal point U in pixels), Scan3dPrincipalPointV (principal point V in pixels), Scan3dCoordinateScale (scale factor), Scan3dCoordinateOffset (offset) | required |

Returns:

| Type | Description |
|---|---|
| 'StereoCalibrationData' | StereoCalibrationData instance |
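The calibration fields above are enough to recover metric depth from a single disparity sample: the raw value is first converted with the scale/offset pair (our assumption of the usual GenICam Scan3d convention, calibrated = raw * scale + offset), then Z = focal_length * baseline / disparity. A hedged numeric sketch with illustrative parameter values, not from any real camera:

```python
# Hedged sketch: metric depth from one raw disparity sample.

def depth_from_disparity(raw_disparity: float,
                         baseline: float, focal_length: float,
                         scale3d: float, offset3d: float) -> float:
    """Depth in meters from a raw disparity value and calibration fields."""
    d = raw_disparity * scale3d + offset3d  # calibrated disparity in pixels
    if d <= 0:
        raise ValueError("non-positive disparity has no valid depth")
    return focal_length * baseline / d

# e.g. baseline 0.1 m, focal length 1000 px, identity scale/offset:
print(depth_from_disparity(100.0, 0.1, 1000.0, 1.0, 0.0))  # -> 1.0 (meters)
```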
StereoGrabResult
dataclass
StereoGrabResult(
intensity: Optional[ndarray],
disparity: Optional[ndarray],
timestamp: float,
frame_number: int,
disparity_calibrated: Optional[ndarray] = None,
has_intensity: bool = True,
has_disparity: bool = True,
)
Result from stereo camera capture containing multi-component data.
Attributes:

| Name | Type | Description |
|---|---|---|
| intensity | Optional[ndarray] | Intensity image - RGB8 (H, W, 3) or Mono8 (H, W) |
| disparity | Optional[ndarray] | Disparity map - uint16 (H, W) |
| timestamp | float | Capture timestamp in seconds |
| frame_number | int | Sequential frame number |
| disparity_calibrated | Optional[ndarray] | Calibrated disparity map - float32 (H, W), optional |
| has_intensity | bool | Flag indicating if intensity data is present |
| has_disparity | bool | Flag indicating if disparity data is present |
StereoCamera
StereoCamera(
async_camera: Optional[AsyncStereoCamera] = None,
loop: Optional[AbstractEventLoop] = None,
name: Optional[str] = None,
**kwargs
)
Bases: Mindtrace
Synchronous wrapper around AsyncStereoCamera.
All operations are executed on a background event loop. This provides a simple synchronous API for stereo camera operations.
Create a synchronous stereo camera wrapper.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| async_camera | Optional[AsyncStereoCamera] | Existing AsyncStereoCamera instance | None |
| loop | Optional[AbstractEventLoop] | Event loop to use for async operations | None |
| name | Optional[str] | Camera identifier, format "BaslerStereoAce:serial_number". If None, opens the first available Stereo ace camera. | None |
| **kwargs | | Additional arguments passed to Mindtrace | {} |
Examples:
>>> # Use existing async camera
>>> async_cam = await AsyncStereoCamera.open()
>>> sync_cam = StereoCamera(async_camera=async_cam, loop=loop)
property
Get camera name.
Returns:

| Type | Description |
|---|---|
| str | Camera name in format "Backend:serial_number" |

property
Get calibration data.
Returns:

| Type | Description |
|---|---|
| Optional[StereoCalibrationData] | StereoCalibrationData if available, None otherwise |

property
Check if camera is open.
Returns:

| Type | Description |
|---|---|
| bool | True if camera is open, False otherwise |
capture(
enable_intensity: bool = True,
enable_disparity: bool = True,
calibrate_disparity: bool = True,
timeout_ms: int = 20000,
) -> StereoGrabResult
Capture multi-component stereo data.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| enable_intensity | bool | Whether to capture intensity image | True |
| enable_disparity | bool | Whether to capture disparity map | True |
| calibrate_disparity | bool | Whether to apply calibration to disparity | True |
| timeout_ms | int | Capture timeout in milliseconds | 20000 |

Returns:

| Type | Description |
|---|---|
| StereoGrabResult | StereoGrabResult containing captured data |

Raises:

| Type | Description |
|---|---|
| CameraConnectionError | If camera not opened |
| CameraCaptureError | If capture fails |
Examples:
Capture and generate 3D point cloud.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| include_colors | bool | Whether to include color information from intensity | True |
| downsample_factor | int | Downsampling factor (1 = no downsampling) | 1 |

Returns:

| Type | Description |
|---|---|
| PointCloudData | PointCloudData with 3D points and optional colors |

Raises:

| Type | Description |
|---|---|
| CameraConnectionError | If camera not opened |
| CameraCaptureError | If capture fails |
| CameraConfigurationError | If calibration not available |
Examples:
Configure camera parameters.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| **params | | Parameter name-value pairs | {} |

Raises:

| Type | Description |
|---|---|
| CameraConnectionError | If camera not opened |
| CameraConfigurationError | If configuration fails |

Examples:
Set depth measurement range in meters.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| min_depth | float | Minimum depth (e.g., 0.3 meters) | required |
| max_depth | float | Maximum depth (e.g., 5.0 meters) | required |

Raises:

| Type | Description |
|---|---|
| CameraConfigurationError | If configuration fails |

Examples:
Set illumination mode.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| mode | str | 'AlwaysActive' (low latency) or 'AlternateActive' (clean intensity) | required |

Raises:

| Type | Description |
|---|---|
| CameraConfigurationError | If invalid mode or configuration fails |

Examples:
Enable binning for latency reduction.
Binning reduces network transfer and computation.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| horizontal | int | Horizontal binning factor (typically 2) | 2 |
| vertical | int | Vertical binning factor (typically 2) | 2 |

Note
When using binning for low latency, consider also setting depth quality to "Full" using set_depth_quality("Full").
Raises:

| Type | Description |
|---|---|
| CameraConfigurationError | If configuration fails |

Examples:
Set depth quality level.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| quality | str | Depth quality setting. Common values: "Full" (highest quality, recommended with binning); "Normal" (standard quality); "Low" (lower quality, faster processing) | required |

Raises:

| Type | Description |
|---|---|
| CameraConfigurationError | If configuration fails |

Examples:
Set pixel format for intensity component.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| format | str | Pixel format ("RGB8", "Mono8", etc.) | required |

Raises:

| Type | Description |
|---|---|
| CameraConnectionError | If camera not opened |
| CameraConfigurationError | If format not available or configuration fails |
Examples:
Set exposure time in microseconds.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| microseconds | float | Exposure time in microseconds (e.g., 5000 = 5 ms) | required |

Raises:

| Type | Description |
|---|---|
| CameraConfigurationError | If configuration fails |

Examples:
Get current exposure time in microseconds.
Returns:

| Type | Description |
|---|---|
| float | Current exposure time in microseconds |

Raises:

| Type | Description |
|---|---|
| CameraConnectionError | If camera not opened |

Examples:
Get current depth quality setting.
Returns:

| Type | Description |
|---|---|
| str | Current depth quality level (e.g., "Full", "Normal", "Low") |

Raises:

| Type | Description |
|---|---|
| CameraConnectionError | If camera not opened |

Examples:
Get current pixel format.
Returns:

| Type | Description |
|---|---|
| str | Current pixel format (e.g., "RGB8", "Mono8", "Coord3D_C16") |

Raises:

| Type | Description |
|---|---|
| CameraConnectionError | If camera not opened |

Examples:
Get current binning settings.
Returns:

| Type | Description |
|---|---|
| tuple[int, int] | Tuple of (horizontal_binning, vertical_binning) |

Raises:

| Type | Description |
|---|---|
| CameraConnectionError | If camera not opened |

Examples:
Get current illumination mode.
Returns:

| Type | Description |
|---|---|
| str | Current illumination mode ("AlwaysActive" or "AlternateActive") |

Raises:

| Type | Description |
|---|---|
| CameraConnectionError | If camera not opened |

Examples:
Get current depth measurement range in meters.
Returns:

| Type | Description |
|---|---|
| tuple[float, float] | Tuple of (min_depth, max_depth) in meters |

Raises:

| Type | Description |
|---|---|
| CameraConnectionError | If camera not opened |
Examples:
Enable software triggering mode.
After enabling, use start_grabbing(), then execute_trigger() to capture frames on demand.
Raises:

| Type | Description |
|---|---|
| CameraConfigurationError | If configuration fails |

Examples:
Start grabbing frames.
Must be called after enable_software_trigger() and before execute_trigger().
Raises:

| Type | Description |
|---|---|
| CameraConnectionError | If camera not opened |

Examples:
Execute software trigger.
Triggers a frame capture when in software trigger mode. Note: start_grabbing() must be called first after enabling software trigger.
Raises:

| Type | Description |
|---|---|
| CameraConnectionError | If camera not opened |
| CameraConfigurationError | If trigger execution fails |
Examples:
async_stereo_camera
Async stereo camera interface providing high-level stereo capture operations.
Bases: Mindtrace
Async stereo camera interface.
Provides high-level stereo camera operations including multi-component capture and 3D point cloud generation.
Initialize async stereo camera.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| backend | | Backend instance (e.g., BaslerStereoAceBackend) | required |
async
classmethod
Open and initialize a stereo camera.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| name | Optional[str] | Camera identifier, format "BaslerStereoAce:serial_number". If None, opens the first available Stereo ace camera. | None |

Returns:

| Type | Description |
|---|---|
| 'AsyncStereoCamera' | Initialized AsyncStereoCamera instance |

Raises:

| Type | Description |
|---|---|
| CameraNotFoundError | If camera not found |
| CameraConnectionError | If connection fails |

Examples:
async
Initialize camera and load calibration.
Returns:

| Type | Description |
|---|---|
| bool | True if initialization successful |

Note
Usually not needed, as open() handles initialization.
async
capture(
enable_intensity: bool = True,
enable_disparity: bool = True,
calibrate_disparity: bool = True,
timeout_ms: int = 20000,
) -> StereoGrabResult
Capture multi-component stereo data.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| enable_intensity | bool | Whether to capture intensity image | True |
| enable_disparity | bool | Whether to capture disparity map | True |
| calibrate_disparity | bool | Whether to apply calibration to disparity | True |
| timeout_ms | int | Capture timeout in milliseconds | 20000 |

Returns:

| Type | Description |
|---|---|
| StereoGrabResult | StereoGrabResult containing captured data |

Raises:

| Type | Description |
|---|---|
| CameraConnectionError | If camera not opened |
| CameraCaptureError | If capture fails |
Examples:
async
Capture and generate 3D point cloud.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| include_colors | bool | Whether to include color information from intensity | True |
| downsample_factor | int | Downsampling factor (1 = no downsampling) | 1 |

Returns:

| Type | Description |
|---|---|
| PointCloudData | PointCloudData with 3D points and optional colors |

Raises:

| Type | Description |
|---|---|
| CameraConnectionError | If camera not opened |
| CameraCaptureError | If capture fails |
| CameraConfigurationError | If calibration not available |
Examples:
async
Set depth measurement range in meters.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| min_depth | float | Minimum depth (e.g., 0.3 meters) | required |
| max_depth | float | Maximum depth (e.g., 5.0 meters) | required |

Raises:

| Type | Description |
|---|---|
| CameraConfigurationError | If configuration fails |

Examples:
async
Set illumination mode.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| mode | str | 'AlwaysActive' (low latency) or 'AlternateActive' (clean intensity) | required |

Raises:

| Type | Description |
|---|---|
| CameraConfigurationError | If invalid mode or configuration fails |

Examples:
async
Enable binning for latency reduction.
Binning reduces network transfer and computation.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| horizontal | int | Horizontal binning factor (typically 2) | 2 |
| vertical | int | Vertical binning factor (typically 2) | 2 |

Note
When using binning for low latency, consider also setting depth quality to "Full" using set_depth_quality("Full").
Raises:

| Type | Description |
|---|---|
| CameraConfigurationError | If configuration fails |

Examples:
async
Set depth quality level.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| quality | str | Depth quality setting. Common values: "Full" (highest quality, recommended with binning); "Normal" (standard quality); "Low" (lower quality, faster processing) | required |

Raises:

| Type | Description |
|---|---|
| CameraConfigurationError | If configuration fails |

Examples:
async
Set pixel format for intensity component.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| format | str | Pixel format ("RGB8", "Mono8", etc.) | required |

Raises:

| Type | Description |
|---|---|
| CameraConnectionError | If camera not opened |
| CameraConfigurationError | If format not available or configuration fails |

Examples:
async
Get current depth measurement range in meters.
Returns:

| Type | Description |
|---|---|
| tuple[float, float] | Tuple of (min_depth, max_depth) in meters |

Raises:

| Type | Description |
|---|---|
| CameraConnectionError | If camera not opened |

Examples:
async
Set trigger mode (simplified interface).
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| mode | str | Trigger mode: "continuous" (free-running continuous acquisition) or "trigger" (software-triggered acquisition) | required |

Raises:

| Type | Description |
|---|---|
| CameraConnectionError | If camera not opened |
| CameraConfigurationError | If invalid mode or configuration fails |

Examples:
async
Start grabbing frames.
Must be called after enable_software_trigger() and before execute_trigger().
Raises:

| Type | Description |
|---|---|
| CameraConnectionError | If camera not opened |

Examples:
async
Execute software trigger.
Triggers a frame capture when in software trigger mode. Note: start_grabbing() must be called first after enabling software trigger.
Raises:

| Type | Description |
|---|---|
| CameraConnectionError | If camera not opened |
| CameraConfigurationError | If trigger execution fails |
Examples:
models
Data models for stereo camera operations.
This module provides data structures for handling stereo camera data including multi-component capture results, calibration parameters, and 3D point clouds.
dataclass
StereoGrabResult(
intensity: Optional[ndarray],
disparity: Optional[ndarray],
timestamp: float,
frame_number: int,
disparity_calibrated: Optional[ndarray] = None,
has_intensity: bool = True,
has_disparity: bool = True,
)
Result from stereo camera capture containing multi-component data.
Attributes:

| Name | Type | Description |
|---|---|---|
| intensity | Optional[ndarray] | Intensity image - RGB8 (H, W, 3) or Mono8 (H, W) |
| disparity | Optional[ndarray] | Disparity map - uint16 (H, W) |
| timestamp | float | Capture timestamp in seconds |
| frame_number | int | Sequential frame number |
| disparity_calibrated | Optional[ndarray] | Calibrated disparity map - float32 (H, W), optional |
| has_intensity | bool | Flag indicating if intensity data is present |
| has_disparity | bool | Flag indicating if disparity data is present |
dataclass
StereoCalibrationData(
baseline: float,
focal_length: float,
principal_point_u: float,
principal_point_v: float,
scale3d: float,
offset3d: float,
Q: ndarray,
)
Factory calibration parameters for stereo camera.
These parameters are provided by the camera manufacturer and used for 3D reconstruction from disparity maps.
Attributes:

| Name | Type | Description |
|---|---|---|
| baseline | float | Stereo baseline in meters (distance between camera pair) |
| focal_length | float | Focal length in pixels |
| principal_point_u | float | Principal point U coordinate in pixels |
| principal_point_v | float | Principal point V coordinate in pixels |
| scale3d | float | Scale factor for disparity conversion |
| offset3d | float | Offset for disparity conversion |
| Q | ndarray | 4x4 reprojection matrix for point cloud generation |

classmethod
Create calibration data from camera parameter dictionary.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| params | dict | Dictionary containing calibration parameters: Scan3dBaseline (baseline in meters), Scan3dFocalLength (focal length in pixels), Scan3dPrincipalPointU (principal point U in pixels), Scan3dPrincipalPointV (principal point V in pixels), Scan3dCoordinateScale (scale factor), Scan3dCoordinateOffset (offset) | required |

Returns:

| Type | Description |
|---|---|
| 'StereoCalibrationData' | StereoCalibrationData instance |
dataclass
PointCloudData(
points: ndarray,
colors: Optional[ndarray] = None,
num_points: int = 0,
has_colors: bool = False,
)
3D point cloud data with optional color information.
Attributes:

| Name | Type | Description |
|---|---|---|
| points | ndarray | Array of 3D points (N, 3) - (x, y, z) in meters |
| colors | Optional[ndarray] | Optional array of RGB colors (N, 3) - values in [0, 1] |
| num_points | int | Number of valid points |
| has_colors | bool | Flag indicating if color information is present |

Save point cloud as PLY file.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| path | str | Output file path | required |
| binary | bool | If True, save in binary format; otherwise ASCII | True |

Raises:

| Type | Description |
|---|---|
| ImportError | If plyfile is not installed |
stereo_camera
Synchronous stereo camera interface.
This module provides a synchronous wrapper around AsyncStereoCamera, following the same pattern as the regular Camera class.
StereoCamera(
async_camera: Optional[AsyncStereoCamera] = None,
loop: Optional[AbstractEventLoop] = None,
name: Optional[str] = None,
**kwargs
)
Bases: Mindtrace
Synchronous wrapper around AsyncStereoCamera.
All operations are executed on a background event loop. This provides a simple synchronous API for stereo camera operations.
Create a synchronous stereo camera wrapper.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| async_camera | Optional[AsyncStereoCamera] | Existing AsyncStereoCamera instance | None |
| loop | Optional[AbstractEventLoop] | Event loop to use for async operations | None |
| name | Optional[str] | Camera identifier, format "BaslerStereoAce:serial_number". If None, opens the first available Stereo ace camera. | None |
| **kwargs | | Additional arguments passed to Mindtrace | {} |
Examples:
>>> # Use existing async camera
>>> async_cam = await AsyncStereoCamera.open()
>>> sync_cam = StereoCamera(async_camera=async_cam, loop=loop)
property
Get camera name.
Returns:

| Type | Description |
|---|---|
| str | Camera name in format "Backend:serial_number" |

property
Get calibration data.
Returns:

| Type | Description |
|---|---|
| Optional[StereoCalibrationData] | StereoCalibrationData if available, None otherwise |

property
Check if camera is open.
Returns:

| Type | Description |
|---|---|
| bool | True if camera is open, False otherwise |
capture(
enable_intensity: bool = True,
enable_disparity: bool = True,
calibrate_disparity: bool = True,
timeout_ms: int = 20000,
) -> StereoGrabResult
Capture multi-component stereo data.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| enable_intensity | bool | Whether to capture intensity image | True |
| enable_disparity | bool | Whether to capture disparity map | True |
| calibrate_disparity | bool | Whether to apply calibration to disparity | True |
| timeout_ms | int | Capture timeout in milliseconds | 20000 |

Returns:

| Type | Description |
|---|---|
| StereoGrabResult | StereoGrabResult containing captured data |

Raises:

| Type | Description |
|---|---|
| CameraConnectionError | If camera not opened |
| CameraCaptureError | If capture fails |
Examples:
Capture and generate 3D point cloud.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| include_colors | bool | Whether to include color information from intensity | True |
| downsample_factor | int | Downsampling factor (1 = no downsampling) | 1 |

Returns:

| Type | Description |
|---|---|
| PointCloudData | PointCloudData with 3D points and optional colors |

Raises:

| Type | Description |
|---|---|
| CameraConnectionError | If camera not opened |
| CameraCaptureError | If capture fails |
| CameraConfigurationError | If calibration not available |
Examples:
Configure camera parameters.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| **params | | Parameter name-value pairs | {} |

Raises:

| Type | Description |
|---|---|
| CameraConnectionError | If camera not opened |
| CameraConfigurationError | If configuration fails |

Examples:
Set depth measurement range in meters.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| min_depth | float | Minimum depth (e.g., 0.3 meters) | required |
| max_depth | float | Maximum depth (e.g., 5.0 meters) | required |

Raises:

| Type | Description |
|---|---|
| CameraConfigurationError | If configuration fails |

Examples:
Set illumination mode.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| mode | str | 'AlwaysActive' (low latency) or 'AlternateActive' (clean intensity) | required |

Raises:

| Type | Description |
|---|---|
| CameraConfigurationError | If invalid mode or configuration fails |

Examples:
Enable binning for latency reduction.
Binning reduces network transfer and computation.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| horizontal | int | Horizontal binning factor (typically 2) | 2 |
| vertical | int | Vertical binning factor (typically 2) | 2 |

Note
When using binning for low latency, consider also setting depth quality to "Full" using set_depth_quality("Full").
Raises:

| Type | Description |
|---|---|
| CameraConfigurationError | If configuration fails |

Examples:
Set depth quality level.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| quality | str | Depth quality setting. Common values: "Full" (highest quality, recommended with binning); "Normal" (standard quality); "Low" (lower quality, faster processing) | required |

Raises:

| Type | Description |
|---|---|
| CameraConfigurationError | If configuration fails |

Examples:
Set pixel format for intensity component.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| format | str | Pixel format ("RGB8", "Mono8", etc.) | required |

Raises:

| Type | Description |
|---|---|
| CameraConnectionError | If camera not opened |
| CameraConfigurationError | If format not available or configuration fails |
Examples:
Set exposure time in microseconds.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| microseconds | float | Exposure time in microseconds (e.g., 5000 = 5 ms) | required |

Raises:

| Type | Description |
|---|---|
| CameraConfigurationError | If configuration fails |

Examples:
Get current exposure time in microseconds.
Returns:

| Type | Description |
|---|---|
| float | Current exposure time in microseconds |

Raises:

| Type | Description |
|---|---|
| CameraConnectionError | If camera not opened |

Examples:
Get current depth quality setting.
Returns:

| Type | Description |
|---|---|
| str | Current depth quality level (e.g., "Full", "Normal", "Low") |

Raises:

| Type | Description |
|---|---|
| CameraConnectionError | If camera not opened |

Examples:
Get current pixel format.
Returns:

| Type | Description |
|---|---|
| str | Current pixel format (e.g., "RGB8", "Mono8", "Coord3D_C16") |

Raises:

| Type | Description |
|---|---|
| CameraConnectionError | If camera not opened |

Examples:
Get current binning settings.
Returns:

| Type | Description |
|---|---|
| tuple[int, int] | Tuple of (horizontal_binning, vertical_binning) |

Raises:

| Type | Description |
|---|---|
| CameraConnectionError | If camera not opened |

Examples:
Get current illumination mode.
Returns:

| Type | Description |
|---|---|
| str | Current illumination mode ("AlwaysActive" or "AlternateActive") |

Raises:

| Type | Description |
|---|---|
| CameraConnectionError | If camera not opened |

Examples:
Get current depth measurement range in meters.
Returns:

| Type | Description |
|---|---|
| tuple[float, float] | Tuple of (min_depth, max_depth) in meters |

Raises:

| Type | Description |
|---|---|
| CameraConnectionError | If camera not opened |
Examples:
Enable software triggering mode.
After enabling, use start_grabbing(), then execute_trigger() to capture frames on demand.
Raises:

| Type | Description |
|---|---|
| CameraConfigurationError | If configuration fails |

Examples:
Start grabbing frames.
Must be called after enable_software_trigger() and before execute_trigger().
Raises:

| Type | Description |
|---|---|
| CameraConnectionError | If camera not opened |

Examples:
Execute software trigger.
Triggers a frame capture when in software trigger mode. Note: start_grabbing() must be called first after enabling software trigger.
Raises:

| Type | Description |
|---|---|
| CameraConnectionError | If camera not opened |
| CameraConfigurationError | If trigger execution fails |
Examples:
setup
Setup scripts for stereo camera SDKs.
This module provides installation scripts for stereo camera systems.
Available CLI commands (after package installation):

    mindtrace-stereo-basler install    # Install Stereo ace package
    mindtrace-stereo-basler uninstall  # Uninstall Stereo ace package
Each setup script uses Typer for CLI and can be run independently.
StereoAceInstaller
StereoAceInstaller(
installation_method: str = "tarball",
install_dir: Optional[str] = None,
package_path: Optional[str] = None,
)
Bases: Mindtrace
Basler Stereo ace Supplementary Package installer with guided wizard.
This class provides an interactive installation wizard that guides users through downloading and installing the Stereo ace package from the official Basler website.
Initialize the Stereo ace installer.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| installation_method | str | Installation method ("deb" or "tarball") | 'tarball' |
| install_dir | Optional[str] | Custom installation directory (for tarball method) | None |
| package_path | Optional[str] | Path to pre-downloaded package file (optional) | None |
Install the Stereo ace Supplementary Package.
Returns:

| Type | Description |
|---|---|
| bool | True if installation successful, False otherwise |
setup_stereo_ace
Basler Stereo ace Setup Script
This script provides a guided installation wizard for the Basler pylon Supplementary Package for Stereo ace cameras on Linux systems. The package provides the GenTL Producer needed to connect and use Stereo ace camera systems.
Features:
- Interactive guided wizard with browser integration
- Supports both Debian package (.deb) and tar.gz archive installation
- Custom installation path support (default: ~/.local/share/pylon_stereo)
- Environment variable setup for GenTL Producer
- Shell environment script generation
- Support for pre-downloaded packages (--package flag)
- Comprehensive logging and error handling
- Uninstallation support
Installation Methods
- Debian Package (Recommended - requires sudo):
  - Installs to `/opt/pylon`
  - Automatic environment configuration
  - System-wide availability
- tar.gz Archive (Portable - no sudo):
  - Installs to user-specified or default directory
  - Requires manual environment setup
  - Per-user installation
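The choice between the two methods above determines the target directory. The decision can be sketched in pure Python; `resolve_install_dir` is a hypothetical helper for illustration, not part of the package, but the defaults it encodes are the documented ones (`/opt/pylon` for the Debian package, `~/.local/share/pylon_stereo` for the tarball).

```python
from pathlib import Path
from typing import Optional


def resolve_install_dir(method: str, install_dir: Optional[str] = None) -> Path:
    """Pick the installation target from the documented defaults.

    'deb' always installs system-wide to /opt/pylon; 'tarball' uses the
    user-supplied directory or falls back to ~/.local/share/pylon_stereo.
    """
    if method == "deb":
        return Path("/opt/pylon")
    if install_dir:
        return Path(install_dir).expanduser()
    return Path("~/.local/share/pylon_stereo").expanduser()
```

Note that a custom `--install-dir` only takes effect with the tarball method; the Debian package location is fixed by the `.deb` itself.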
Usage
```shell
python setup_stereo_ace.py                        # Interactive wizard
python setup_stereo_ace.py --method deb           # Use Debian package
python setup_stereo_ace.py --method tarball       # Use tar.gz archive
python setup_stereo_ace.py --package /path/to/file  # Use pre-downloaded file
python setup_stereo_ace.py --install-dir ~/pylon  # Custom install location
python setup_stereo_ace.py --uninstall            # Uninstall
mindtrace-stereo-basler-install                   # Console script (install)
mindtrace-stereo-basler-uninstall                 # Console script (uninstall)
```
Environment Setup
After installation, you must set environment variables:
For Debian package:

```shell
source /opt/pylon/bin/pylon-setup-env.sh /opt/pylon
```
For tar.gz archive:
source
Or add to ~/.bashrc for persistence:
echo "source
```python
install(
    method: str = typer.Option(
        "tarball",
        "--method",
        "-m",
        help="Installation method: 'deb' (requires sudo) or 'tarball' (portable)",
    ),
    package: Optional[Path] = typer.Option(
        None,
        "--package",
        "-p",
        help="Path to pre-downloaded package file (.deb or .tar.gz)",
        exists=True,
        dir_okay=False,
    ),
    install_dir: Optional[Path] = typer.Option(
        None,
        "--install-dir",
        "-d",
        help="Custom installation directory (for tarball method)",
    ),
    verbose: bool = typer.Option(
        False, "--verbose", "-v", help="Enable verbose logging"
    ),
) -> None
```
Install the Basler Stereo ace Supplementary Package using an interactive wizard.
The wizard will guide you through downloading and installing the package from Basler's official website where you'll accept their EULA.
For CI/automation, use --package to provide a pre-downloaded file.
```python
uninstall(
    method: str = typer.Option(
        "tarball",
        "--method",
        "-m",
        help="Installation method used: 'deb' or 'tarball'",
    ),
    install_dir: Optional[Path] = typer.Option(
        None,
        "--install-dir",
        "-d",
        help="Custom installation directory (for tarball method)",
    ),
    verbose: bool = typer.Option(
        False, "--verbose", "-v", help="Enable verbose logging"
    ),
) -> None
```
Uninstall the Basler Stereo ace Supplementary Package.