
Hardware Package API Reference

Mindtrace Hardware Module

A comprehensive hardware abstraction layer providing unified access to cameras, PLCs, sensors, and other industrial hardware components, using lazy imports to prevent cross-contamination between different backends.

Key Features
  • Lazy import system to avoid loading all backends at startup
  • Unified interface for different hardware types
  • Async-first design for optimal performance
  • Thread-safe operations across all components
  • Comprehensive error handling and logging
  • Configuration management system
  • Mock backends for testing and development
Hardware Components
  • CameraManager: Unified camera management (Basler, OpenCV)
  • PLCManager: Unified PLC management (Allen-Bradley, Siemens, Modbus)
  • SensorManager: Sensor data acquisition and monitoring (MQTT, HTTP, Serial)
  • SensorManagerService: Service wrapper for SensorManager with MCP endpoints
  • ActuatorManager: Actuator control and positioning (Future)
Design Philosophy

This module uses lazy imports to prevent SWIG warnings from pycomm3 appearing in camera tests, and to avoid loading heavy SDKs unless they are actually needed. Each manager is only imported when accessed.
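The lazy-import behavior described above can be sketched with a module-level `__getattr__` (PEP 562). The sketch below builds a toy package in memory to keep it self-contained; the module and attribute names (`toy_hw`, `toy_backend`) are illustrative, not the package's actual layout.

```python
import importlib
import sys
import types

# Toy backend module standing in for a heavy SDK-backed submodule.
backend = types.ModuleType("toy_backend")
backend.CameraManager = type("CameraManager", (), {})
sys.modules["toy_backend"] = backend

# Toy top-level package that defers the import until first access.
pkg = types.ModuleType("toy_hw")
_LAZY = {"CameraManager": "toy_backend"}

def _pkg_getattr(name):
    # Import the backing submodule only on first attribute access,
    # so heavy SDKs are never loaded unless actually used.
    if name in _LAZY:
        module = importlib.import_module(_LAZY[name])
        value = getattr(module, name)
        setattr(pkg, name, value)  # cache for subsequent lookups
        return value
    raise AttributeError(name)

pkg.__getattr__ = _pkg_getattr  # PEP 562 module-level __getattr__
sys.modules["toy_hw"] = pkg

mgr = getattr(pkg, "CameraManager")  # triggers the lazy import
print(mgr.__name__)
```

With this pattern, `import toy_hw` loads nothing heavy; only touching `toy_hw.CameraManager` imports the backing module.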

Usage

Import managers only when needed

    from mindtrace.hardware import CameraManager, PLCManager, SensorManager
    from mindtrace.hardware.services.sensors import SensorManagerService

Camera operations

    async with CameraManager() as camera_manager:
        cameras = camera_manager.discover()
        camera = await camera_manager.open(cameras[0])
        image = await camera.capture()

PLC operations

    async with PLCManager() as plc_manager:
        await plc_manager.register_plc("PLC1", "AllenBradley", "192.168.1.100")
        await plc_manager.connect_plc("PLC1")
        values = await plc_manager.read_tag("PLC1", ["Tag1", "Tag2"])

Sensor operations (direct manager)

    async with SensorManager() as sensor_manager:
        await sensor_manager.connect_sensor("temp1", "mqtt", config, "sensors/temp")
        data = await sensor_manager.read_sensor_data("temp1")

Sensor operations (service with MCP endpoints)

    service = SensorManagerService()
    response = await service.connect_sensor(connection_request)

Configuration

All hardware components use the unified configuration system:
  • Environment variables with MINDTRACE_HW_ prefix
  • JSON configuration files
  • Programmatic configuration via dataclasses
  • Hierarchical configuration inheritance
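The environment-variable layer can be sketched as below. The field names, variable names (beyond the documented MINDTRACE_HW_ prefix), and defaults are assumptions for illustration, not the library's actual schema.

```python
import os
from dataclasses import dataclass

# Hedged sketch of prefix-based env-var configuration over dataclass defaults.
@dataclass
class CameraConfig:
    retrieve_retry_count: int = 3          # illustrative default
    img_quality_enhancement: bool = False  # illustrative default

def load_camera_config(env=None) -> CameraConfig:
    """Overlay MINDTRACE_HW_* environment variables onto dataclass defaults."""
    env = os.environ if env is None else env
    cfg = CameraConfig()
    if "MINDTRACE_HW_RETRIEVE_RETRY_COUNT" in env:
        cfg.retrieve_retry_count = int(env["MINDTRACE_HW_RETRIEVE_RETRY_COUNT"])
    if "MINDTRACE_HW_IMG_QUALITY_ENHANCEMENT" in env:
        raw = env["MINDTRACE_HW_IMG_QUALITY_ENHANCEMENT"].lower()
        cfg.img_quality_enhancement = raw in ("1", "true", "yes")
    return cfg

cfg = load_camera_config({"MINDTRACE_HW_RETRIEVE_RETRY_COUNT": "5"})
print(cfg.retrieve_retry_count)  # → 5
```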

Thread Safety

All hardware managers are thread-safe and can be used concurrently from multiple threads without interference.

cameras

Camera module for mindtrace hardware.

Provides unified camera management across different camera manufacturers with graceful SDK handling and comprehensive error management.

CameraBackend
CameraBackend(
    camera_name: Optional[str] = None,
    camera_config: Optional[str] = None,
    img_quality_enhancement: Optional[bool] = None,
    retrieve_retry_count: Optional[int] = None,
)

Bases: MindtraceABC

Abstract base class for all camera implementations.

This class defines the async interface that all camera backends must implement to ensure consistent behavior across different camera types and manufacturers.

Thread Model

Backends declare their threading requirements via the REQUIRES_THREAD_AFFINITY class attribute:

  • When True, a dedicated single-thread executor is created per camera instance to ensure all SDK calls for that camera execute on the same OS thread. This is required by SDKs like Pypylon and Harvesters that bind camera objects to the thread that opened them.

  • When False, blocking calls are dispatched via asyncio.to_thread() using the default shared thread pool. This is suitable for thread-safe SDKs like OpenCV.

All blocking SDK calls should use the _run_blocking() method, which automatically selects the appropriate execution strategy based on REQUIRES_THREAD_AFFINITY.

Subclass Requirements
  • Set REQUIRES_THREAD_AFFINITY = True if the SDK requires thread affinity
  • Use _run_blocking() for all SDK calls that may block
  • Call await self._cleanup_executor() in close() to release thread resources
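The dispatch contract above can be sketched as follows. The class body mirrors the documented `REQUIRES_THREAD_AFFINITY` / `_run_blocking()` / `_cleanup_executor()` names, but the implementation details are assumptions, not the library's actual code.

```python
import asyncio
import threading
from concurrent.futures import ThreadPoolExecutor

class SketchBackend:
    # True: SDK binds camera objects to the thread that opened them.
    REQUIRES_THREAD_AFFINITY = True

    def __init__(self):
        # One single-thread executor per camera instance keeps every
        # SDK call on the same OS thread.
        self._executor = (
            ThreadPoolExecutor(max_workers=1)
            if self.REQUIRES_THREAD_AFFINITY else None
        )

    async def _run_blocking(self, fn, *args):
        if self._executor is not None:
            loop = asyncio.get_running_loop()
            return await loop.run_in_executor(self._executor, fn, *args)
        # Thread-safe SDKs (e.g. OpenCV) can use the shared default pool.
        return await asyncio.to_thread(fn, *args)

    async def _cleanup_executor(self):
        if self._executor is not None:
            self._executor.shutdown(wait=True)

async def main():
    cam = SketchBackend()
    t1 = await cam._run_blocking(lambda: threading.get_ident())
    t2 = await cam._run_blocking(lambda: threading.get_ident())
    await cam._cleanup_executor()
    return t1 == t2  # both calls landed on the same OS thread

print(asyncio.run(main()))  # → True
```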

Attributes:

Name Type Description
REQUIRES_THREAD_AFFINITY bool

Class attribute indicating thread affinity requirement

camera_name

Unique identifier for the camera

camera_config_file

Path to camera configuration file

img_quality_enhancement

Whether image quality enhancement is enabled

retrieve_retry_count

Number of retries for image retrieval

camera Optional[Any]

The initialized camera object (implementation-specific)

device_manager Optional[Any]

Device manager object (implementation-specific)

initialized bool

Camera initialization status

Initialize base camera with configuration integration.

Parameters:

Name Type Description Default
camera_name Optional[str]

Unique identifier for the camera (auto-generated if None)

None
camera_config Optional[str]

Path to camera configuration file

None
img_quality_enhancement Optional[bool]

Whether to apply image quality enhancement (uses config default if None)

None
retrieve_retry_count Optional[int]

Number of retries for image retrieval (uses config default if None)

None
setup_camera async
setup_camera()

Common setup method for camera initialization.

This method provides a standardized setup pattern that can be used by all camera backends. It calls the abstract initialize() method and handles common initialization patterns.

Raises:

Type Description
CameraNotFoundError

If camera cannot be found

CameraInitializationError

If camera initialization fails

CameraConnectionError

If camera connection fails

set_bandwidth_limit async
set_bandwidth_limit(limit_mbps: Optional[float])

Set GigE camera bandwidth limit in Mbps.

get_bandwidth_limit async
get_bandwidth_limit() -> float

Get current bandwidth limit.

set_packet_size async
set_packet_size(size: int)

Set GigE packet size for network optimization.

get_packet_size async
get_packet_size() -> int

Get current packet size.

set_inter_packet_delay async
set_inter_packet_delay(delay_ticks: int)

Set inter-packet delay for network traffic control.

get_inter_packet_delay async
get_inter_packet_delay() -> int

Get current inter-packet delay.

set_capture_timeout async
set_capture_timeout(timeout_ms: int)

Set capture timeout in milliseconds.

Parameters:

Name Type Description Default
timeout_ms int

Timeout value in milliseconds

required
Note

This is a runtime-configurable parameter that can be changed without reinitializing the camera.

get_capture_timeout async
get_capture_timeout() -> int

Get current capture timeout in milliseconds.

Returns:

Type Description
int

Current timeout value in milliseconds

get_lens_status async
get_lens_status() -> Dict[str, Any]

Get liquid lens hardware state.

Returns:

Type Description
Dict[str, Any]

Dict with keys:
  • connected (bool): Whether a lens is physically connected
  • status (str): Lens status string (e.g., "Lens OK")
  • optical_power (float | None): Current optical power in diopters
get_optical_power async
get_optical_power() -> float

Get current lens optical power in diopters.

set_optical_power async
set_optical_power(diopters: float)

Set lens optical power in diopters (manual focus).

Parameters:

Name Type Description Default
diopters float

Target optical power within the lens range.

required
get_optical_power_range async
get_optical_power_range() -> List[float]

Get optical power range [min, max] in diopters.

trigger_autofocus async
trigger_autofocus(accuracy: str = 'Normal') -> bool

Trigger one-shot autofocus.

Parameters:

Name Type Description Default
accuracy str

Autofocus accuracy mode — "Fast", "Normal", or "Accurate".

'Normal'

Returns:

Type Description
bool

True when autofocus completes successfully.

get_focus_config async
get_focus_config() -> Dict[str, Any]

Get current focus/autofocus configuration.

Returns:

Type Description
Dict[str, Any]

Dict with keys: accuracy, stepper, stepper_lower_limit, stepper_upper_limit, roi_size, focus_source, edge_detection, roi_offset_x, roi_offset_y.

set_focus_config async
set_focus_config(**settings)

Set focus/autofocus parameters.

Parameters:

Name Type Description Default
**settings

Keys matching get_focus_config() return values.

{}
backends

Camera backends for different manufacturers and types.

This module provides camera backend implementations for the Mindtrace hardware system. Each backend implements the CameraBackend interface for consistent camera operations.

Available Backends
  • CameraBackend: Abstract base class defining the camera interface
  • BaslerCameraBackend: Industrial cameras from Basler (when available)
  • OpenCVCameraBackend: USB cameras and webcams via OpenCV (when available)
  • GenICamCameraBackend: GenICam-compliant cameras via Harvesters (when available)

Usage::

    from mindtrace.hardware.cameras.backends import CameraBackend
    from mindtrace.hardware.cameras.backends.basler import BaslerCameraBackend
    from mindtrace.hardware.cameras.backends.opencv import OpenCVCameraBackend
    from mindtrace.hardware.cameras.backends.genicam import GenICamCameraBackend

Configuration

Camera backends integrate with the Mindtrace configuration system to provide consistent default values and settings across all camera types.

CameraBackend

Re-exported as mindtrace.hardware.cameras.backends.CameraBackend. The interface is identical to the CameraBackend reference documented above.
basler

Basler Camera Backend

Provides support for Basler cameras via pypylon SDK with mock implementation for testing.

Components
  • BaslerCameraBackend: Real Basler camera implementation (requires pypylon SDK)
  • MockBaslerCameraBackend: Mock implementation for testing and development
Requirements
  • Real cameras: pypylon SDK (Pylon SDK for Python)
  • Mock cameras: No additional dependencies
Installation
  1. Install Pylon SDK from Basler
  2. pip install pypylon
  3. Configure camera permissions (Linux may require udev rules)
Usage

from mindtrace.hardware.cameras.backends.basler import BaslerCameraBackend, MockBaslerCameraBackend

Real camera

    if BASLER_AVAILABLE:
        camera = BaslerCameraBackend("camera_name")
        success, cam_obj, remote_obj = await camera.initialize()  # initialize first
        if success:
            image = await camera.capture()
        await camera.close()

Mock camera (always available)

    mock_camera = MockBaslerCameraBackend("mock_cam_0")
    success, cam_obj, remote_obj = await mock_camera.initialize()  # initialize first
    if success:
        image = await mock_camera.capture()
    await mock_camera.close()

BaslerCameraBackend
BaslerCameraBackend(
    camera_name: str,
    camera_config: Optional[str] = None,
    img_quality_enhancement: Optional[bool] = None,
    retrieve_retry_count: Optional[int] = None,
    multicast_enabled: Optional[bool] = None,
    target_ips: Optional[List[str]] = None,
    multicast_group: Optional[str] = None,
    multicast_port: Optional[int] = None,
    **backend_kwargs
)

Bases: CameraBackend

Basler camera backend using the pypylon SDK.

This backend provides comprehensive support for Basler cameras including hardware triggers, exposure control, ROI settings, and image enhancement.

Thread Model

The pypylon SDK requires thread affinity: all SDK operations for a camera must execute on the same OS thread that opened it. This backend uses a dedicated single-thread executor per camera instance to satisfy this requirement, enabling reliable multi-camera concurrent operations.

Capture operations are atomic: the entire capture sequence (trigger, retrieve, convert) executes as a single blocking call on the dedicated thread, preventing thread-switching issues.
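The atomic capture sequence can be sketched as one blocking function dispatched in a single `run_in_executor` call. The `trigger`/`retrieve`/`convert` steps below are stand-ins for the corresponding pypylon calls (noted in comments), not the library's actual capture code.

```python
import asyncio
from concurrent.futures import ThreadPoolExecutor

def _capture_atomic(steps):
    # Stand-ins for the real pypylon sequence (assumed, not actual API calls):
    steps.append("trigger")    # e.g. camera.ExecuteSoftwareTrigger()
    steps.append("retrieve")   # e.g. camera.RetrieveResult(timeout_ms)
    steps.append("convert")    # e.g. converter.Convert(grab_result)
    return steps

async def capture(executor):
    loop = asyncio.get_running_loop()
    # One run_in_executor call for the whole sequence keeps all three
    # operations back-to-back on the camera's dedicated thread, with no
    # chance of a thread switch between them.
    return await loop.run_in_executor(executor, _capture_atomic, [])

executor = ThreadPoolExecutor(max_workers=1)
result = asyncio.run(capture(executor))
executor.shutdown()
print(result)  # → ['trigger', 'retrieve', 'convert']
```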

Features
  • Full pypylon SDK integration for USB3 and GigE cameras
  • Hardware trigger and continuous capture modes
  • Region of Interest (ROI) control
  • Automatic and manual exposure/gain control
  • CLAHE image quality enhancement
  • Pylon Feature Stream (.pfs) configuration import/export
  • Multicast streaming support for GigE cameras
Requirements
  • Basler Pylon SDK installed on system
  • pypylon package (pip install pypylon)
  • OpenCV for image processing

Example::

from mindtrace.hardware.cameras.backends.basler import BaslerCameraBackend

async with BaslerCameraBackend("cam1") as camera:
    await camera.set_exposure(20000)
    await camera.set_triggermode("continuous")
    image = await camera.capture()

Attributes:

Name Type Description
camera Optional[Any]

Underlying pypylon InstantCamera object

triggermode

Current trigger mode ("continuous" or "trigger")

timeout_ms

Capture timeout in milliseconds

buffer_count

Number of frame buffers for streaming

converter

Pypylon image format converter

grabbing_mode

Pylon grabbing strategy

multicast_enabled

Whether multicast streaming is enabled

Initialize Basler camera with configurable parameters.

Parameters:

Name Type Description Default
camera_name str

Camera identifier (serial number, IP, or user-defined name)

required
camera_config Optional[str]

Path to Pylon Feature Stream (.pfs) file (optional)

None
img_quality_enhancement Optional[bool]

Enable CLAHE image enhancement (uses config default if None)

None
retrieve_retry_count Optional[int]

Number of capture retry attempts (uses config default if None)

None
multicast_enabled Optional[bool]

Enable multicast streaming mode (uses config default if None)

None
target_ips Optional[List[str]]

List of target IP addresses for multicast discovery (optional)

None
multicast_group Optional[str]

Multicast group IP address (uses config default if None)

None
multicast_port Optional[int]

Multicast port number (uses config default if None)

None
**backend_kwargs

Backend-specific parameters:
  • pixel_format: Default pixel format (uses config default if None)
  • buffer_count: Number of frame buffers (uses config default if None)
  • timeout_ms: Capture timeout in milliseconds (uses config default if None)

{}

Raises:

Type Description
SDKNotAvailableError

If pypylon SDK is not available

CameraConfigurationError

If configuration is invalid

CameraInitializationError

If camera initialization fails

get_available_cameras staticmethod
get_available_cameras(
    include_details: bool = False, target_ips: Optional[List[str]] = None
) -> Union[List[str], Dict[str, Dict[str, str]]]

Get available Basler cameras.

Parameters:

Name Type Description Default
include_details bool

If True, return detailed information

False
target_ips Optional[List[str]]

Optional list of IP addresses to specifically discover

None

Returns:

Type Description
Union[List[str], Dict[str, Dict[str, str]]]

List of camera names (user-defined names preferred, serial numbers as fallback) or dict with details

Raises:

Type Description
SDKNotAvailableError

If Basler SDK is not available

HardwareOperationError

If camera discovery fails

discover_async async classmethod
discover_async(
    include_details: bool = False, target_ips: Optional[List[str]] = None
) -> Union[List[str], Dict[str, Dict[str, str]]]

Async wrapper for get_available_cameras(); runs discovery in a thread pool.

Use this instead of get_available_cameras() when calling from async context to avoid blocking the event loop during camera enumeration.
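The wrapper pattern can be sketched with `asyncio.to_thread`, which offloads the blocking enumeration to a worker thread so the event loop stays responsive. The discovery body below is simulated; the real method enumerates Basler devices via pypylon.

```python
import asyncio
import time

def get_available_cameras_blocking():
    # Stands in for slow, blocking camera enumeration over USB/GigE.
    time.sleep(0.01)
    return ["cam_serial_001"]

async def discover_async():
    # Run the blocking call in the default thread pool instead of
    # blocking the event loop.
    return await asyncio.to_thread(get_available_cameras_blocking)

print(asyncio.run(discover_async()))  # → ['cam_serial_001']
```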

Parameters:

Name Type Description Default
include_details bool

If True, return a dict of details per camera.

False
target_ips Optional[List[str]]

Optional list of specific IP addresses to target.

None

Returns:

Type Description
Union[List[str], Dict[str, Dict[str, str]]]

List of camera names or dict of details.

initialize async
initialize() -> Tuple[bool, Any, Any]

Initialize the camera connection.

This searches for the camera by name, serial number, or IP and establishes a connection if found. Uses multicast-aware discovery if enabled.

Returns:

Type Description
Tuple[bool, Any, Any]

Tuple of (success status, camera object, None)

Raises:

Type Description
CameraNotFoundError

If no cameras found or specified camera not found

CameraInitializationError

If camera initialization fails

CameraConnectionError

If camera connection fails

configure_streaming async
configure_streaming()

Configure multicast streaming settings for the camera.

This method sets up multicast parameters when multicast mode is enabled. It configures the camera using the StreamGrabber interface for multicast streaming.

Raises:

Type Description
CameraConnectionError

If camera is not initialized

CameraConfigurationError

If multicast configuration fails

HardwareOperationError

If streaming configuration fails

get_image_quality_enhancement
get_image_quality_enhancement() -> bool

Get image quality enhancement setting.

set_image_quality_enhancement
set_image_quality_enhancement(value: bool)

Set image quality enhancement setting.

get_exposure_range async
get_exposure_range() -> List[Union[int, float]]

Get the supported exposure time range in microseconds.

Returns:

Type Description
List[Union[int, float]]

List with [min_exposure, max_exposure] in microseconds

Raises:

Type Description
CameraConnectionError

If camera is not initialized or accessible

HardwareOperationError

If exposure range retrieval fails

get_exposure async
get_exposure() -> float

Get current exposure time in microseconds.

Returns:

Type Description
float

Current exposure time

Raises:

Type Description
CameraConnectionError

If camera is not initialized or accessible

HardwareOperationError

If exposure retrieval fails

set_exposure async
set_exposure(exposure: Union[int, float])

Set the camera exposure time in microseconds.

Parameters:

Name Type Description Default
exposure Union[int, float]

Exposure time in microseconds

required

Raises:

Type Description
CameraConnectionError

If camera is not initialized or accessible

CameraConfigurationError

If exposure value is out of range

HardwareOperationError

If exposure setting fails

get_triggermode async
get_triggermode() -> str

Get current trigger mode.

Returns:

Type Description
str

"continuous" or "trigger"

Raises:

Type Description
CameraConnectionError

If camera is not initialized or accessible

HardwareOperationError

If trigger mode retrieval fails

set_triggermode async
set_triggermode(triggermode: str = 'continuous')

Set the camera's trigger mode for image acquisition.

Parameters:

Name Type Description Default
triggermode str

Trigger mode ("continuous" or "trigger")

'continuous'

Raises:

Type Description
CameraConnectionError

If camera is not initialized or accessible

CameraConfigurationError

If trigger mode is invalid

HardwareOperationError

If trigger mode setting fails

capture async
capture() -> np.ndarray

Capture a single image from the camera.

In continuous mode, returns the latest available frame. In trigger mode, executes a software trigger and waits for the image.

This method runs the entire capture operation atomically on a dedicated thread to ensure thread affinity for pypylon SDK calls. This is critical for multi-camera concurrent operations.

Returns:

Type Description
ndarray

Image array in BGR format

Raises:

Type Description
CameraConnectionError

If camera is not initialized or accessible

CameraCaptureError

If image capture fails

CameraTimeoutError

If capture times out

check_connection async
check_connection() -> bool

Check if camera is connected and operational.

Returns:

Type Description
bool

True if connected and operational, False otherwise

import_config async
import_config(config_path: str)

Import camera configuration from common JSON format.

Parameters:

Name Type Description Default
config_path str

Path to configuration file

required

Raises:

Type Description
CameraConnectionError

If camera is not initialized

CameraConfigurationError

If configuration import fails

export_config async
export_config(config_path: str)

Export current camera configuration to common JSON format.

Parameters:

Name Type Description Default
config_path str

Path where to save configuration file

required

Raises:

Type Description
CameraConnectionError

If camera is not initialized

CameraConfigurationError

If configuration export fails

set_ROI async
set_ROI(x: int, y: int, width: int, height: int)

Set the Region of Interest (ROI) for image acquisition.

Parameters:

Name Type Description Default
x int

X offset from sensor top-left

required
y int

Y offset from sensor top-left

required
width int

ROI width

required
height int

ROI height

required

Raises:

Type Description
CameraConnectionError

If camera is not initialized

CameraConfigurationError

If ROI parameters are invalid

HardwareOperationError

If ROI setting fails

get_ROI async
get_ROI() -> Dict[str, int]

Get current Region of Interest settings.

Returns:

Type Description
Dict[str, int]

Dictionary with x, y, width, height

Raises:

Type Description
CameraConnectionError

If camera is not initialized

HardwareOperationError

If ROI retrieval fails

reset_ROI async
reset_ROI()

Reset ROI to maximum sensor area.

Raises:

Type Description
CameraConnectionError

If camera is not initialized

HardwareOperationError

If ROI reset fails

set_gain async
set_gain(gain: float)

Set the camera's gain value.

Parameters:

Name Type Description Default
gain float

Gain value (camera-specific range)

required

Raises:

Type Description
CameraConnectionError

If camera is not initialized

CameraConfigurationError

If gain value is out of range

HardwareOperationError

If gain setting fails

get_gain async
get_gain() -> float

Get current camera gain.

Returns:

Type Description
float

Current gain value

Raises:

Type Description
CameraConnectionError

If camera is not initialized

HardwareOperationError

If gain retrieval fails

get_gain_range async
get_gain_range() -> List[Union[int, float]]

Get camera gain range.

Returns:

Type Description
List[Union[int, float]]

List containing [min_gain, max_gain]

Raises:

Type Description
CameraConnectionError

If camera is not initialized

HardwareOperationError

If gain range retrieval fails

set_bandwidth_limit async
set_bandwidth_limit(limit_mbps: Optional[float])

Set GigE camera bandwidth limit in Mbps.

get_bandwidth_limit async
get_bandwidth_limit() -> float

Get current bandwidth limit in Mbps.

set_packet_size async
set_packet_size(size: int)

Set GigE packet size for network optimization.

get_packet_size async
get_packet_size() -> int

Get current packet size.

set_inter_packet_delay async
set_inter_packet_delay(delay_ticks: int)

Set inter-packet delay for network traffic control.

get_inter_packet_delay async
get_inter_packet_delay() -> int

Get current inter-packet delay.

set_capture_timeout async
set_capture_timeout(timeout_ms: int)

Set capture timeout in milliseconds.

Parameters:

Name Type Description Default
timeout_ms int

Timeout value in milliseconds

required

Raises:

Type Description
ValueError

If timeout_ms is negative

get_capture_timeout async
get_capture_timeout() -> int

Get current capture timeout in milliseconds.

Returns:

Type Description
int

Current timeout value in milliseconds

get_wb_range async
get_wb_range() -> List[str]

Get available white balance modes.

Returns:

Type Description
List[str]

List of available white balance modes (lowercase for API compatibility)

get_width_range async
get_width_range() -> List[int]

Get camera width range.

Returns:

Type Description
List[int]

List containing [min_width, max_width]

Raises:

Type Description
CameraConnectionError

If camera is not initialized

HardwareOperationError

If width range retrieval fails

get_height_range async
get_height_range() -> List[int]

Get camera height range.

Returns:

Type Description
List[int]

List containing [min_height, max_height]

Raises:

Type Description
CameraConnectionError

If camera is not initialized

HardwareOperationError

If height range retrieval fails

get_pixel_format_range async
get_pixel_format_range() -> List[str]

Get available pixel formats.

Returns:

Type Description
List[str]

List of available pixel formats

Raises:

Type Description
CameraConnectionError

If camera is not initialized

HardwareOperationError

If pixel format range retrieval fails

get_current_pixel_format async
get_current_pixel_format() -> str

Get current pixel format.

Returns:

Type Description
str

Current pixel format

Raises:

Type Description
CameraConnectionError

If camera is not initialized

HardwareOperationError

If pixel format retrieval fails

set_pixel_format async
set_pixel_format(pixel_format: str)

Set pixel format.

Parameters:

Name Type Description Default
pixel_format str

Pixel format to set

required

Raises:

Type Description
CameraConnectionError

If camera is not initialized

CameraConfigurationError

If pixel format is invalid

HardwareOperationError

If pixel format setting fails

get_wb async
get_wb() -> str

Get the current white balance auto setting.

Returns:

Type Description
str

White balance auto setting ("off", "once", "continuous")

Raises:

Type Description
CameraConnectionError

If camera is not initialized

HardwareOperationError

If white balance retrieval fails

set_auto_wb_once async
set_auto_wb_once(value: str)

Set the white balance auto mode.

Parameters:

Name Type Description Default
value str

White balance mode ("off", "once", "continuous")

required

Raises:

Type Description
CameraConnectionError

If camera is not initialized

CameraConfigurationError

If white balance mode is invalid

HardwareOperationError

If white balance setting fails

get_trigger_modes async
get_trigger_modes() -> List[str]

Get available trigger modes for Basler cameras.

Returns:

Type Description
List[str]

List of available trigger modes based on GenICam TriggerMode and TriggerSource

get_bandwidth_limit_range async
get_bandwidth_limit_range() -> List[float]

Get bandwidth limit range for GigE cameras.

Returns:

Type Description
List[float]

List containing [min_bandwidth, max_bandwidth] in Mbps

get_packet_size_range async
get_packet_size_range() -> List[int]

Get packet size range for GigE cameras.

Returns:

Type Description
List[int]

List containing [min_packet_size, max_packet_size] in bytes

get_inter_packet_delay_range async
get_inter_packet_delay_range() -> List[int]

Get inter-packet delay range for GigE cameras.

Returns:

Type Description
List[int]

List containing [min_delay, max_delay] in ticks

get_lens_status async
get_lens_status() -> Dict[str, Any]

Get liquid lens hardware state.

get_optical_power async
get_optical_power() -> float

Get current lens optical power in diopters.

set_optical_power async
set_optical_power(diopters: float)

Set lens optical power in diopters (manual focus).

get_optical_power_range async
get_optical_power_range() -> List[float]

Get optical power range [min, max] in diopters.

trigger_autofocus async
trigger_autofocus(accuracy: str = 'Normal') -> bool

Trigger one-shot autofocus.

Parameters:

Name Type Description Default
accuracy str

"Fast", "Normal", or "Accurate".

'Normal'

Returns:

Type Description
bool

True when autofocus completes successfully.

get_focus_config async
get_focus_config() -> Dict[str, Any]

Get current focus/autofocus configuration.

set_focus_config async
set_focus_config(**settings)

Set focus/autofocus parameters.

close async
close()

Close the camera and release resources.

Raises:

Type Description
CameraConnectionError

If camera closure fails

MockBaslerCameraBackend
MockBaslerCameraBackend(
    camera_name: str,
    camera_config: Optional[str] = None,
    img_quality_enhancement: Optional[bool] = None,
    retrieve_retry_count: Optional[int] = None,
    **backend_kwargs
)

Bases: CameraBackend

Mock Basler Camera Backend Implementation

This class provides a mock implementation of the Basler camera backend for testing and development. It simulates Basler camera functionality without requiring actual hardware, with configurable behavior and error simulation.

Features
  • Complete simulation of Basler camera API
  • Configurable image generation with realistic patterns
  • Error simulation for testing error handling
  • Configuration import/export simulation
  • Camera control features (exposure, ROI, trigger modes, etc.)
  • Realistic timing and behavior simulation

Usage::

from mindtrace.hardware.cameras.backends.basler import MockBaslerCameraBackend

camera = MockBaslerCameraBackend("mock_camera_1")
await camera.set_exposure(20000)
image = await camera.capture()
await camera.close()

Error Simulation

Enable error simulation via environment variables:
  • MOCK_BASLER_FAIL_INIT: Simulate initialization failure
  • MOCK_BASLER_FAIL_CAPTURE: Simulate capture failure
  • MOCK_BASLER_TIMEOUT: Simulate timeout errors
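The environment-variable overrides above follow the usual truthy-flag convention. A minimal sketch of that pattern (the helper name `_env_flag` is illustrative, not the backend's actual internal API):

```python
import os

def _env_flag(name: str) -> bool:
    """Treat '1', 'true', or 'yes' (any case) as enabled."""
    return os.environ.get(name, "").strip().lower() in {"1", "true", "yes"}

# Enable capture-failure simulation for a test run.
os.environ["MOCK_BASLER_FAIL_CAPTURE"] = "1"

fail_capture = _env_flag("MOCK_BASLER_FAIL_CAPTURE")  # True
fail_init = _env_flag("MOCK_BASLER_FAIL_INIT")        # False (unset)
```

Constructor kwargs such as `simulate_fail_capture` take precedence over these environment variables when both are supplied.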

Attributes:

Name Type Description
initialized

Whether camera was successfully initialized

camera_name

Name/identifier of the mock camera

triggermode

Current trigger mode ("continuous" or "trigger")

img_quality_enhancement

Current image enhancement setting

timeout_ms

Capture timeout in milliseconds

retrieve_retry_count

Number of capture retry attempts

exposure_time

Current exposure time in microseconds

gain

Current gain value

roi

Current region of interest settings

white_balance_mode

Current white balance mode

image_counter

Counter for generating unique images

fail_init

Whether to simulate initialization failure

fail_capture

Whether to simulate capture failure

simulate_timeout

Whether to simulate timeout errors

Initialize mock Basler camera.

Parameters:

Name Type Description Default
camera_name str

Camera identifier

required
camera_config Optional[str]

Path to configuration file (simulated)

None
img_quality_enhancement Optional[bool]

Enable image enhancement simulation (uses config default if None)

None
retrieve_retry_count Optional[int]

Number of capture retry attempts (uses config default if None)

None
**backend_kwargs

Backend-specific parameters:
  • pixel_format: Pixel format (simulated)
  • buffer_count: Buffer count (simulated)
  • timeout_ms: Timeout in milliseconds
  • fast_mode: If True, skip all sleep delays for fast unit tests (default: False)
  • simulate_fail_init: If True, simulate initialization failure (overrides env)
  • simulate_fail_capture: If True, simulate capture failure (overrides env)
  • simulate_timeout: If True, simulate timeout on capture (overrides env)
  • simulate_cancel: If True, simulate asyncio cancellation during capture
  • synthetic_width: Override synthetic image width (int)
  • synthetic_height: Override synthetic image height (int)
  • synthetic_pattern: One of {"auto", "gradient", "checkerboard", "circular", "noise"}
  • synthetic_checker_size: Checker size (int) used when the pattern is checkerboard
  • synthetic_overlay_text: If False, disables text overlays in synthetic images
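To make the `synthetic_pattern="checkerboard"` and `synthetic_checker_size` options concrete, here is an illustrative pure-Python generator for the kind of frame they describe (the mock backend's real generator is internal and produces a NumPy BGR array; this sketch returns a 2D grayscale list):

```python
def checkerboard(width: int, height: int, checker_size: int) -> list[list[int]]:
    """Alternate 255/0 tiles of checker_size x checker_size pixels."""
    return [
        [255 if ((x // checker_size) + (y // checker_size)) % 2 == 0 else 0
         for x in range(width)]
        for y in range(height)
    ]

frame = checkerboard(8, 8, 4)  # four 4x4 tiles: white, black / black, white
```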

{}

Raises:

Type Description
CameraConfigurationError

If configuration is invalid

CameraInitializationError

If initialization fails (when simulated)

get_available_cameras staticmethod
get_available_cameras(
    include_details: bool = False,
) -> Union[List[str], Dict[str, Dict[str, str]]]

Get available mock Basler cameras.

Parameters:

Name Type Description Default
include_details bool

If True, return detailed information

False

Returns:

Type Description
Union[List[str], Dict[str, Dict[str, str]]]

List of mock camera names or dict with details

initialize async
initialize() -> Tuple[bool, Any, Any]

Initialize the mock camera connection.

Returns:

Type Description
Tuple[bool, Any, Any]

Tuple of (success status, mock camera object, None)

Raises:

Type Description
CameraNotFoundError

If no cameras found or specified camera not found

CameraInitializationError

If initialization fails (when simulated)

CameraConnectionError

If camera connection fails

capture async
capture() -> np.ndarray

Capture a single image from the mock camera.

Returns:

Type Description
ndarray

Captured BGR image array

Raises:

Type Description
CameraConnectionError

If camera is not initialized or accessible

CameraCaptureError

If image capture fails

CameraTimeoutError

If capture times out

IsGrabbing
IsGrabbing() -> bool

Return whether the mock camera is currently in a grabbing state.

StartGrabbing
StartGrabbing(grabbing_mode: Optional[str] = None) -> None

Enter grabbing state, optionally updating grabbing mode.

Parameters:

Name Type Description Default
grabbing_mode Optional[str]

Optional grabbing mode string; if provided, updates current mode.

None
StopGrabbing
StopGrabbing() -> None

Exit grabbing state.

get_image_quality_enhancement
get_image_quality_enhancement() -> bool

Get image quality enhancement setting.

Returns:

Type Description
bool

True if enhancement is enabled, otherwise False.

set_image_quality_enhancement
set_image_quality_enhancement(value: bool)

Set image quality enhancement setting.

Parameters:

Name Type Description Default
value bool

True to enable enhancement, False to disable.

required
get_exposure_range async
get_exposure_range() -> List[Union[int, float]]

Get the supported exposure time range in microseconds.

Returns:

Type Description
List[Union[int, float]]

List with [min_exposure, max_exposure] in microseconds

get_exposure async
get_exposure() -> float

Get current exposure time in microseconds.

Returns:

Type Description
float

Current exposure time

set_exposure async
set_exposure(exposure: Union[int, float])

Set the camera exposure time in microseconds.

Parameters:

Name Type Description Default
exposure Union[int, float]

Exposure time in microseconds

required

Raises:

Type Description
CameraConfigurationError

If exposure value is out of range

get_triggermode async
get_triggermode() -> str

Get current trigger mode.

Returns:

Type Description
str

Current trigger mode

set_triggermode async
set_triggermode(triggermode: str = 'continuous')

Set trigger mode.

Parameters:

Name Type Description Default
triggermode str

Trigger mode ("continuous" or "trigger")

'continuous'

Raises:

Type Description
CameraConfigurationError

If trigger mode is invalid

check_connection async
check_connection() -> bool

Check if mock camera is connected and operational.

Returns:

Type Description
bool

True if connected and operational, False otherwise

import_config async
import_config(config_path: str)

Import camera configuration from common JSON format.

Parameters:

Name Type Description Default
config_path str

Path to configuration file

required

Raises:

Type Description
CameraConfigurationError

If configuration file is not found or invalid

export_config async
export_config(config_path: str)

Export camera configuration to common JSON format.

Parameters:

Name Type Description Default
config_path str

Path to save configuration file

required
set_ROI async
set_ROI(x: int, y: int, width: int, height: int)

Set Region of Interest (ROI).

Parameters:

Name Type Description Default
x int

ROI x offset

required
y int

ROI y offset

required
width int

ROI width

required
height int

ROI height

required

Raises:

Type Description
CameraConfigurationError

If ROI parameters are invalid

get_ROI async
get_ROI() -> Dict[str, int]

Get current Region of Interest (ROI).

Returns:

Type Description
Dict[str, int]

Dictionary with ROI parameters

reset_ROI async
reset_ROI()

Reset ROI to full sensor size.

set_gain async
set_gain(gain: Union[int, float])

Set camera gain.

Parameters:

Name Type Description Default
gain Union[int, float]

Gain value

required

Raises:

Type Description
CameraConfigurationError

If gain value is out of range

get_gain_range async
get_gain_range() -> List[Union[int, float]]

Get the supported gain range.

Returns:

Type Description
List[Union[int, float]]

List with [min_gain, max_gain]

get_gain async
get_gain() -> float

Get current camera gain.

Returns:

Type Description
float

Current gain value

get_wb async
get_wb() -> str

Get current white balance mode.

Returns:

Type Description
str

Current white balance mode

set_auto_wb_once async
set_auto_wb_once(value: str)

Set white balance mode.

Parameters:

Name Type Description Default
value str

White balance mode

required
get_wb_range async
get_wb_range() -> List[str]

Get available white balance modes.

Returns:

Type Description
List[str]

List of available white balance modes (lowercase for API compatibility)

get_pixel_format_range async
get_pixel_format_range() -> List[str]

Get available pixel formats.

Returns:

Type Description
List[str]

List of available pixel formats

get_current_pixel_format async
get_current_pixel_format() -> str

Get current pixel format.

Returns:

Type Description
str

Current pixel format

set_pixel_format async
set_pixel_format(pixel_format: str)

Set pixel format.

Parameters:

Name Type Description Default
pixel_format str

Pixel format to set

required

Raises:

Type Description
CameraConfigurationError

If pixel format is not supported

get_width_range async
get_width_range() -> List[int]

Get camera width range.

Returns:

Type Description
List[int]

List containing [min_width, max_width]

get_height_range async
get_height_range() -> List[int]

Get camera height range.

Returns:

Type Description
List[int]

List containing [min_height, max_height]

set_bandwidth_limit async
set_bandwidth_limit(limit_mbps: Optional[float])

Set GigE camera bandwidth limit in Mbps (simulated).

get_bandwidth_limit async
get_bandwidth_limit() -> float

Get current bandwidth limit (simulated).

set_packet_size async
set_packet_size(size: int)

Set GigE packet size for network optimization (simulated).

get_packet_size async
get_packet_size() -> int

Get current packet size (simulated).

set_inter_packet_delay async
set_inter_packet_delay(delay_ticks: int)

Set inter-packet delay for network traffic control (simulated).

get_inter_packet_delay async
get_inter_packet_delay() -> int

Get current inter-packet delay (simulated).

set_capture_timeout async
set_capture_timeout(timeout_ms: int)

Set capture timeout in milliseconds.

Parameters:

Name Type Description Default
timeout_ms int

Timeout value in milliseconds

required

Raises:

Type Description
ValueError

If timeout_ms is negative

get_capture_timeout async
get_capture_timeout() -> int

Get current capture timeout in milliseconds.

Returns:

Type Description
int

Current timeout value in milliseconds

get_trigger_modes async
get_trigger_modes() -> List[str]

Get available trigger modes for mock Basler cameras.

get_bandwidth_limit_range async
get_bandwidth_limit_range() -> List[float]

Get bandwidth limit range for mock GigE cameras.

get_packet_size_range async
get_packet_size_range() -> List[int]

Get packet size range for mock GigE cameras.

get_inter_packet_delay_range async
get_inter_packet_delay_range() -> List[int]

Get inter-packet delay range for mock GigE cameras.

close async
close()

Close the mock camera and release resources.

basler_camera_backend

Basler Camera Backend Module

BaslerCameraBackend
BaslerCameraBackend(
    camera_name: str,
    camera_config: Optional[str] = None,
    img_quality_enhancement: Optional[bool] = None,
    retrieve_retry_count: Optional[int] = None,
    multicast_enabled: Optional[bool] = None,
    target_ips: Optional[List[str]] = None,
    multicast_group: Optional[str] = None,
    multicast_port: Optional[int] = None,
    **backend_kwargs
)

Bases: CameraBackend

Basler camera backend using the pypylon SDK.

This backend provides comprehensive support for Basler cameras including hardware triggers, exposure control, ROI settings, and image enhancement.

Thread Model

The pypylon SDK requires thread affinity - all SDK operations for a camera must execute on the same OS thread that opened it. This backend uses a dedicated single-thread executor per camera instance to satisfy this requirement, enabling reliable multi-camera concurrent operations.

Capture operations are atomic: the entire capture sequence (trigger, retrieve, convert) executes as a single blocking call on the dedicated thread, preventing thread-switching issues.
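The per-camera single-thread executor described above can be sketched with the standard library alone. `ThreadAffineCamera` and its methods are illustrative stand-ins, not the actual BaslerCameraBackend implementation; the point is that every "SDK" call is funneled through one dedicated worker thread:

```python
import asyncio
import threading
from concurrent.futures import ThreadPoolExecutor

class ThreadAffineCamera:
    def __init__(self) -> None:
        # One worker thread per camera instance; all SDK calls land here.
        self._executor = ThreadPoolExecutor(max_workers=1)

    async def _run(self, fn):
        loop = asyncio.get_running_loop()
        return await loop.run_in_executor(self._executor, fn)

    async def capture(self) -> int:
        # The whole capture sequence is one blocking call, so it cannot
        # migrate between threads mid-operation. Returns the worker's
        # thread id so we can verify affinity.
        def _capture_blocking() -> int:
            return threading.get_ident()
        return await self._run(_capture_blocking)

    def close(self) -> None:
        self._executor.shutdown(wait=True)

async def main() -> list:
    cam = ThreadAffineCamera()
    try:
        # Successive captures all execute on the same OS thread.
        return [await cam.capture(), await cam.capture()]
    finally:
        cam.close()

thread_ids = asyncio.run(main())
```

Because each camera owns its own executor, several cameras can capture concurrently while each one's SDK calls stay pinned to its own thread.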

Features
  • Full pypylon SDK integration for USB3 and GigE cameras
  • Hardware trigger and continuous capture modes
  • Region of Interest (ROI) control
  • Automatic and manual exposure/gain control
  • CLAHE image quality enhancement
  • Pylon Feature Stream (.pfs) configuration import/export
  • Multicast streaming support for GigE cameras
Requirements
  • Basler Pylon SDK installed on system
  • pypylon package (pip install pypylon)
  • OpenCV for image processing

Example::

from mindtrace.hardware.cameras.backends.basler import BaslerCameraBackend

async with BaslerCameraBackend("cam1") as camera:
    await camera.set_exposure(20000)
    await camera.set_triggermode("continuous")
    image = await camera.capture()

Attributes:

Name Type Description
camera Optional[Any]

Underlying pypylon InstantCamera object

triggermode

Current trigger mode ("continuous" or "trigger")

timeout_ms

Capture timeout in milliseconds

buffer_count

Number of frame buffers for streaming

converter

Pypylon image format converter

grabbing_mode

Pylon grabbing strategy

multicast_enabled

Whether multicast streaming is enabled

Initialize Basler camera with configurable parameters.

Parameters:

Name Type Description Default
camera_name str

Camera identifier (serial number, IP, or user-defined name)

required
camera_config Optional[str]

Path to Pylon Feature Stream (.pfs) file (optional)

None
img_quality_enhancement Optional[bool]

Enable CLAHE image enhancement (uses config default if None)

None
retrieve_retry_count Optional[int]

Number of capture retry attempts (uses config default if None)

None
multicast_enabled Optional[bool]

Enable multicast streaming mode (uses config default if None)

None
target_ips Optional[List[str]]

List of target IP addresses for multicast discovery (optional)

None
multicast_group Optional[str]

Multicast group IP address (uses config default if None)

None
multicast_port Optional[int]

Multicast port number (uses config default if None)

None
**backend_kwargs

Backend-specific parameters:
  • pixel_format: Default pixel format (uses config default if None)
  • buffer_count: Number of frame buffers (uses config default if None)
  • timeout_ms: Capture timeout in milliseconds (uses config default if None)

{}

Raises:

Type Description
SDKNotAvailableError

If pypylon SDK is not available

CameraConfigurationError

If configuration is invalid

CameraInitializationError

If camera initialization fails

get_available_cameras staticmethod
get_available_cameras(
    include_details: bool = False, target_ips: Optional[List[str]] = None
) -> Union[List[str], Dict[str, Dict[str, str]]]

Get available Basler cameras.

Parameters:

Name Type Description Default
include_details bool

If True, return detailed information

False
target_ips Optional[List[str]]

Optional list of IP addresses to specifically discover

None

Returns:

Type Description
Union[List[str], Dict[str, Dict[str, str]]]

List of camera names (user-defined names preferred, serial numbers as fallback) or dict with details

Raises:

Type Description
SDKNotAvailableError

If Basler SDK is not available

HardwareOperationError

If camera discovery fails

discover_async async classmethod
discover_async(
    include_details: bool = False, target_ips: Optional[List[str]] = None
) -> Union[List[str], Dict[str, Dict[str, str]]]

Async wrapper for get_available_cameras() - runs discovery in threadpool.

Use this instead of get_available_cameras() when calling from async context to avoid blocking the event loop during camera enumeration.

Parameters:

Name Type Description Default
include_details bool

If True, return a dict of details per camera.

False
target_ips Optional[List[str]]

Optional list of specific IP addresses to target.

None

Returns:

Type Description
Union[List[str], Dict[str, Dict[str, str]]]

List of camera names or dict of details.
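The "run discovery in a threadpool" idea behind discover_async can be sketched with stdlib asyncio; `slow_enumerate` here is a stand-in for the blocking get_available_cameras() call, and the camera names are made up:

```python
import asyncio
import time

def slow_enumerate() -> list[str]:
    # Stand-in for a slow, blocking SDK enumeration call.
    time.sleep(0.05)
    return ["cam_serial_001", "cam_serial_002"]

async def discover_async() -> list[str]:
    # Push the blocking call onto a worker thread so the event loop
    # stays responsive while enumeration runs.
    return await asyncio.to_thread(slow_enumerate)

cameras = asyncio.run(discover_async())
```

Calling the blocking variant directly from a coroutine would stall every other task on the loop for the duration of the enumeration, which is why the async wrapper is preferred in async contexts.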

initialize async
initialize() -> Tuple[bool, Any, Any]

Initialize the camera connection.

This searches for the camera by name, serial number, or IP and establishes a connection if found. Uses multicast-aware discovery if enabled.

Returns:

Type Description
Tuple[bool, Any, Any]

Tuple of (success status, camera object, None)

Raises:

Type Description
CameraNotFoundError

If no cameras found or specified camera not found

CameraInitializationError

If camera initialization fails

CameraConnectionError

If camera connection fails

configure_streaming async
configure_streaming()

Configure multicast streaming settings for the camera.

This method sets up multicast parameters when multicast mode is enabled. It configures the camera using the StreamGrabber interface for multicast streaming.

Raises:

Type Description
CameraConnectionError

If camera is not initialized

CameraConfigurationError

If multicast configuration fails

HardwareOperationError

If streaming configuration fails

get_image_quality_enhancement
get_image_quality_enhancement() -> bool

Get image quality enhancement setting.

set_image_quality_enhancement
set_image_quality_enhancement(value: bool)

Set image quality enhancement setting.

get_exposure_range async
get_exposure_range() -> List[Union[int, float]]

Get the supported exposure time range in microseconds.

Returns:

Type Description
List[Union[int, float]]

List with [min_exposure, max_exposure] in microseconds

Raises:

Type Description
CameraConnectionError

If camera is not initialized or accessible

HardwareOperationError

If exposure range retrieval fails

get_exposure async
get_exposure() -> float

Get current exposure time in microseconds.

Returns:

Type Description
float

Current exposure time

Raises:

Type Description
CameraConnectionError

If camera is not initialized or accessible

HardwareOperationError

If exposure retrieval fails

set_exposure async
set_exposure(exposure: Union[int, float])

Set the camera exposure time in microseconds.

Parameters:

Name Type Description Default
exposure Union[int, float]

Exposure time in microseconds

required

Raises:

Type Description
CameraConnectionError

If camera is not initialized or accessible

CameraConfigurationError

If exposure value is out of range

HardwareOperationError

If exposure setting fails

get_triggermode async
get_triggermode() -> str

Get current trigger mode.

Returns:

Type Description
str

"continuous" or "trigger"

Raises:

Type Description
CameraConnectionError

If camera is not initialized or accessible

HardwareOperationError

If trigger mode retrieval fails

set_triggermode async
set_triggermode(triggermode: str = 'continuous')

Set the camera's trigger mode for image acquisition.

Parameters:

Name Type Description Default
triggermode str

Trigger mode ("continuous" or "trigger")

'continuous'

Raises:

Type Description
CameraConnectionError

If camera is not initialized or accessible

CameraConfigurationError

If trigger mode is invalid

HardwareOperationError

If trigger mode setting fails

capture async
capture() -> np.ndarray

Capture a single image from the camera.

In continuous mode, returns the latest available frame. In trigger mode, executes a software trigger and waits for the image.

This method runs the entire capture operation atomically on a dedicated thread to ensure thread affinity for pypylon SDK calls. This is critical for multi-camera concurrent operations.

Returns:

Type Description
ndarray

Image array in BGR format

Raises:

Type Description
CameraConnectionError

If camera is not initialized or accessible

CameraCaptureError

If image capture fails

CameraTimeoutError

If capture times out
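The trigger-mode flow (software trigger, then wait for the frame, then time out) can be sketched as below. All functions are stand-ins: the real backend issues a pypylon software trigger and retrieves the result inside one atomic blocking call on its dedicated thread, rather than as separate awaits:

```python
import asyncio

async def software_trigger() -> None:
    await asyncio.sleep(0)  # stand-in for the SDK's software-trigger call

async def retrieve_frame() -> str:
    await asyncio.sleep(0.01)  # stand-in for frame retrieval + conversion
    return "frame"

async def capture(timeout_ms: int = 100) -> str:
    await software_trigger()
    try:
        return await asyncio.wait_for(retrieve_frame(), timeout=timeout_ms / 1000)
    except asyncio.TimeoutError:
        # Corresponds to the CameraTimeoutError case documented above.
        raise TimeoutError(f"capture timed out after {timeout_ms} ms")

frame = asyncio.run(capture())
```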

check_connection async
check_connection() -> bool

Check if camera is connected and operational.

Returns:

Type Description
bool

True if connected and operational, False otherwise

import_config async
import_config(config_path: str)

Import camera configuration from common JSON format.

Parameters:

Name Type Description Default
config_path str

Path to configuration file

required

Raises:

Type Description
CameraConnectionError

If camera is not initialized

CameraConfigurationError

If configuration import fails

export_config async
export_config(config_path: str)

Export current camera configuration to common JSON format.

Parameters:

Name Type Description Default
config_path str

Path where to save configuration file

required

Raises:

Type Description
CameraConnectionError

If camera is not initialized

CameraConfigurationError

If configuration export fails

set_ROI async
set_ROI(x: int, y: int, width: int, height: int)

Set the Region of Interest (ROI) for image acquisition.

Parameters:

Name Type Description Default
x int

X offset from sensor top-left

required
y int

Y offset from sensor top-left

required
width int

ROI width

required
height int

ROI height

required

Raises:

Type Description
CameraConnectionError

If camera is not initialized

CameraConfigurationError

If ROI parameters are invalid

HardwareOperationError

If ROI setting fails

get_ROI async
get_ROI() -> Dict[str, int]

Get current Region of Interest settings.

Returns:

Type Description
Dict[str, int]

Dictionary with x, y, width, height

Raises:

Type Description
CameraConnectionError

If camera is not initialized

HardwareOperationError

If ROI retrieval fails

reset_ROI async
reset_ROI()

Reset ROI to maximum sensor area.

Raises:

Type Description
CameraConnectionError

If camera is not initialized

HardwareOperationError

If ROI reset fails

set_gain async
set_gain(gain: float)

Set the camera's gain value.

Parameters:

Name Type Description Default
gain float

Gain value (camera-specific range)

required

Raises:

Type Description
CameraConnectionError

If camera is not initialized

CameraConfigurationError

If gain value is out of range

HardwareOperationError

If gain setting fails

get_gain async
get_gain() -> float

Get current camera gain.

Returns:

Type Description
float

Current gain value

Raises:

Type Description
CameraConnectionError

If camera is not initialized

HardwareOperationError

If gain retrieval fails

get_gain_range async
get_gain_range() -> List[Union[int, float]]

Get camera gain range.

Returns:

Type Description
List[Union[int, float]]

List containing [min_gain, max_gain]

Raises:

Type Description
CameraConnectionError

If camera is not initialized

HardwareOperationError

If gain range retrieval fails

set_bandwidth_limit async
set_bandwidth_limit(limit_mbps: Optional[float])

Set GigE camera bandwidth limit in Mbps.

get_bandwidth_limit async
get_bandwidth_limit() -> float

Get current bandwidth limit in Mbps.

set_packet_size async
set_packet_size(size: int)

Set GigE packet size for network optimization.

get_packet_size async
get_packet_size() -> int

Get current packet size.

set_inter_packet_delay async
set_inter_packet_delay(delay_ticks: int)

Set inter-packet delay for network traffic control.

get_inter_packet_delay async
get_inter_packet_delay() -> int

Get current inter-packet delay.

set_capture_timeout async
set_capture_timeout(timeout_ms: int)

Set capture timeout in milliseconds.

Parameters:

Name Type Description Default
timeout_ms int

Timeout value in milliseconds

required

Raises:

Type Description
ValueError

If timeout_ms is negative

get_capture_timeout async
get_capture_timeout() -> int

Get current capture timeout in milliseconds.

Returns:

Type Description
int

Current timeout value in milliseconds

get_wb_range async
get_wb_range() -> List[str]

Get available white balance modes.

Returns:

Type Description
List[str]

List of available white balance modes (lowercase for API compatibility)

get_width_range async
get_width_range() -> List[int]

Get camera width range.

Returns:

Type Description
List[int]

List containing [min_width, max_width]

Raises:

Type Description
CameraConnectionError

If camera is not initialized

HardwareOperationError

If width range retrieval fails

get_height_range async
get_height_range() -> List[int]

Get camera height range.

Returns:

Type Description
List[int]

List containing [min_height, max_height]

Raises:

Type Description
CameraConnectionError

If camera is not initialized

HardwareOperationError

If height range retrieval fails

get_pixel_format_range async
get_pixel_format_range() -> List[str]

Get available pixel formats.

Returns:

Type Description
List[str]

List of available pixel formats

Raises:

Type Description
CameraConnectionError

If camera is not initialized

HardwareOperationError

If pixel format range retrieval fails

get_current_pixel_format async
get_current_pixel_format() -> str

Get current pixel format.

Returns:

Type Description
str

Current pixel format

Raises:

Type Description
CameraConnectionError

If camera is not initialized

HardwareOperationError

If pixel format retrieval fails

set_pixel_format async
set_pixel_format(pixel_format: str)

Set pixel format.

Parameters:

Name Type Description Default
pixel_format str

Pixel format to set

required

Raises:

Type Description
CameraConnectionError

If camera is not initialized

CameraConfigurationError

If pixel format is invalid

HardwareOperationError

If pixel format setting fails

get_wb async
get_wb() -> str

Get the current white balance auto setting.

Returns:

Type Description
str

White balance auto setting ("off", "once", "continuous")

Raises:

Type Description
CameraConnectionError

If camera is not initialized

HardwareOperationError

If white balance retrieval fails

set_auto_wb_once async
set_auto_wb_once(value: str)

Set the white balance auto mode.

Parameters:

Name Type Description Default
value str

White balance mode ("off", "once", "continuous")

required

Raises:

Type Description
CameraConnectionError

If camera is not initialized

CameraConfigurationError

If white balance mode is invalid

HardwareOperationError

If white balance setting fails

get_trigger_modes async
get_trigger_modes() -> List[str]

Get available trigger modes for Basler cameras.

Returns:

Type Description
List[str]

List of available trigger modes based on GenICam TriggerMode and TriggerSource

get_bandwidth_limit_range async
get_bandwidth_limit_range() -> List[float]

Get bandwidth limit range for GigE cameras.

Returns:

Type Description
List[float]

List containing [min_bandwidth, max_bandwidth] in Mbps

get_packet_size_range async
get_packet_size_range() -> List[int]

Get packet size range for GigE cameras.

Returns:

Type Description
List[int]

List containing [min_packet_size, max_packet_size] in bytes

get_inter_packet_delay_range async
get_inter_packet_delay_range() -> List[int]

Get inter-packet delay range for GigE cameras.

Returns:

Type Description
List[int]

List containing [min_delay, max_delay] in ticks

get_lens_status async
get_lens_status() -> Dict[str, Any]

Get liquid lens hardware state.

get_optical_power async
get_optical_power() -> float

Get current lens optical power in diopters.

set_optical_power async
set_optical_power(diopters: float)

Set lens optical power in diopters (manual focus).

get_optical_power_range async
get_optical_power_range() -> List[float]

Get optical power range [min, max] in diopters.

trigger_autofocus async
trigger_autofocus(accuracy: str = 'Normal') -> bool

Trigger one-shot autofocus.

Parameters:

Name Type Description Default
accuracy str

"Fast", "Normal", or "Accurate".

'Normal'

Returns:

Type Description
bool

True when autofocus completes successfully.

get_focus_config async
get_focus_config() -> Dict[str, Any]

Get current focus/autofocus configuration.

set_focus_config async
set_focus_config(**settings)

Set focus/autofocus parameters.

close async
close()

Close the camera and release resources.

Raises:

Type Description
CameraConnectionError

If camera closure fails

mock_basler_camera_backend

Mock Basler Camera Backend Module

MockBaslerCameraBackend
MockBaslerCameraBackend(
    camera_name: str,
    camera_config: Optional[str] = None,
    img_quality_enhancement: Optional[bool] = None,
    retrieve_retry_count: Optional[int] = None,
    **backend_kwargs
)

Bases: CameraBackend

Mock Basler Camera Backend Implementation

This class provides a mock implementation of the Basler camera backend for testing and development. It simulates Basler camera functionality without requiring actual hardware, with configurable behavior and error simulation.

Features
  • Complete simulation of Basler camera API
  • Configurable image generation with realistic patterns
  • Error simulation for testing error handling
  • Configuration import/export simulation
  • Camera control features (exposure, ROI, trigger modes, etc.)
  • Realistic timing and behavior simulation

Usage::

from mindtrace.hardware.cameras.backends.basler import MockBaslerCameraBackend

camera = MockBaslerCameraBackend("mock_camera_1")
await camera.set_exposure(20000)
image = await camera.capture()
await camera.close()

Error Simulation

Enable error simulation via environment variables:
  • MOCK_BASLER_FAIL_INIT: Simulate initialization failure
  • MOCK_BASLER_FAIL_CAPTURE: Simulate capture failure
  • MOCK_BASLER_TIMEOUT: Simulate timeout errors

Attributes:

Name Type Description
initialized

Whether camera was successfully initialized

camera_name

Name/identifier of the mock camera

triggermode

Current trigger mode ("continuous" or "trigger")

img_quality_enhancement

Current image enhancement setting

timeout_ms

Capture timeout in milliseconds

retrieve_retry_count

Number of capture retry attempts

exposure_time

Current exposure time in microseconds

gain

Current gain value

roi

Current region of interest settings

white_balance_mode

Current white balance mode

image_counter

Counter for generating unique images

fail_init

Whether to simulate initialization failure

fail_capture

Whether to simulate capture failure

simulate_timeout

Whether to simulate timeout errors

Initialize mock Basler camera.

Parameters:

Name Type Description Default
camera_name str

Camera identifier

required
camera_config Optional[str]

Path to configuration file (simulated)

None
img_quality_enhancement Optional[bool]

Enable image enhancement simulation (uses config default if None)

None
retrieve_retry_count Optional[int]

Number of capture retry attempts (uses config default if None)

None
**backend_kwargs

Backend-specific parameters:
  • pixel_format: Pixel format (simulated)
  • buffer_count: Buffer count (simulated)
  • timeout_ms: Timeout in milliseconds
  • fast_mode: If True, skip all sleep delays for fast unit tests (default: False)
  • simulate_fail_init: If True, simulate initialization failure (overrides env)
  • simulate_fail_capture: If True, simulate capture failure (overrides env)
  • simulate_timeout: If True, simulate timeout on capture (overrides env)
  • simulate_cancel: If True, simulate asyncio cancellation during capture
  • synthetic_width: Override synthetic image width (int)
  • synthetic_height: Override synthetic image height (int)
  • synthetic_pattern: One of {"auto", "gradient", "checkerboard", "circular", "noise"}
  • synthetic_checker_size: Checker size (int) used when the pattern is checkerboard
  • synthetic_overlay_text: If False, disables text overlays in synthetic images

{}

Raises:

Type Description
CameraConfigurationError

If configuration is invalid

CameraInitializationError

If initialization fails (when simulated)

get_available_cameras staticmethod
get_available_cameras(
    include_details: bool = False,
) -> Union[List[str], Dict[str, Dict[str, str]]]

Get available mock Basler cameras.

Parameters:

Name Type Description Default
include_details bool

If True, return detailed information

False

Returns:

Type Description
Union[List[str], Dict[str, Dict[str, str]]]

List of mock camera names or dict with details

initialize async
initialize() -> Tuple[bool, Any, Any]

Initialize the mock camera connection.

Returns:

Type Description
Tuple[bool, Any, Any]

Tuple of (success status, mock camera object, None)

Raises:

Type Description
CameraNotFoundError

If no cameras found or specified camera not found

CameraInitializationError

If initialization fails (when simulated)

CameraConnectionError

If camera connection fails

capture async
capture() -> np.ndarray

Capture a single image from the mock camera.

Returns:

Type Description
ndarray

Captured BGR image array

Raises:

Type Description
CameraConnectionError

If camera is not initialized or accessible

CameraCaptureError

If image capture fails

CameraTimeoutError

If capture times out
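The retry behavior driven by retrieve_retry_count can be sketched with a minimal retry loop. This is a stand-in, not the real mindtrace implementation: CameraCaptureError and flaky_capture here are local stubs.

```python
import asyncio

# Hypothetical sketch of a retrieve_retry_count-style retry loop around
# capture(); the exception class and flaky_capture are stand-ins for the
# real mindtrace API.
class CameraCaptureError(Exception):
    pass

async def capture_with_retries(capture, retries: int = 3):
    last_exc = None
    for attempt in range(retries):
        try:
            return await capture()
        except CameraCaptureError as exc:
            last_exc = exc
            await asyncio.sleep(0)  # yield to the event loop before retrying
    raise last_exc

calls = {"n": 0}

async def flaky_capture():
    # Stub capture that fails twice, then returns a placeholder "image"
    calls["n"] += 1
    if calls["n"] < 3:
        raise CameraCaptureError("simulated transient failure")
    return "image"

print(asyncio.run(capture_with_retries(flaky_capture)))  # image
```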

IsGrabbing
IsGrabbing() -> bool

Return whether the mock camera is currently in a grabbing state.

StartGrabbing
StartGrabbing(grabbing_mode: Optional[str] = None) -> None

Enter grabbing state, optionally updating grabbing mode.

Parameters:

Name Type Description Default
grabbing_mode Optional[str]

Optional grabbing mode string; if provided, updates current mode.

None
StopGrabbing
StopGrabbing() -> None

Exit grabbing state.

get_image_quality_enhancement
get_image_quality_enhancement() -> bool

Get image quality enhancement setting.

Returns:

Type Description
bool

True if enhancement is enabled, otherwise False.

set_image_quality_enhancement
set_image_quality_enhancement(value: bool)

Set image quality enhancement setting.

Parameters:

Name Type Description Default
value bool

True to enable enhancement, False to disable.

required
get_exposure_range async
get_exposure_range() -> List[Union[int, float]]

Get the supported exposure time range in microseconds.

Returns:

Type Description
List[Union[int, float]]

List with [min_exposure, max_exposure] in microseconds

get_exposure async
get_exposure() -> float

Get current exposure time in microseconds.

Returns:

Type Description
float

Current exposure time

set_exposure async
set_exposure(exposure: Union[int, float])

Set the camera exposure time in microseconds.

Parameters:

Name Type Description Default
exposure Union[int, float]

Exposure time in microseconds

required

Raises:

Type Description
CameraConfigurationError

If exposure value is out of range
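The out-of-range check that triggers CameraConfigurationError can be illustrated with a minimal range guard. The numeric range below is an arbitrary placeholder, not the mock camera's actual limits.

```python
# Hypothetical range check mirroring the documented out-of-range behavior;
# the (20, 1_000_000) microsecond range is a placeholder, not the real limits.
def check_exposure(exposure, exposure_range=(20.0, 1_000_000.0)):
    lo, hi = exposure_range
    if not (lo <= exposure <= hi):
        raise ValueError(f"Exposure {exposure} us outside [{lo}, {hi}] us")
    return float(exposure)

print(check_exposure(50_000))  # 50000.0
```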

get_triggermode async
get_triggermode() -> str

Get current trigger mode.

Returns:

Type Description
str

Current trigger mode

set_triggermode async
set_triggermode(triggermode: str = 'continuous')

Set trigger mode.

Parameters:

Name Type Description Default
triggermode str

Trigger mode ("continuous" or "trigger")

'continuous'

Raises:

Type Description
CameraConfigurationError

If trigger mode is invalid

check_connection async
check_connection() -> bool

Check if mock camera is connected and operational.

Returns:

Type Description
bool

True if connected and operational, False otherwise

import_config async
import_config(config_path: str)

Import camera configuration from common JSON format.

Parameters:

Name Type Description Default
config_path str

Path to configuration file

required

Raises:

Type Description
CameraConfigurationError

If configuration file is not found or invalid

export_config async
export_config(config_path: str)

Export camera configuration to common JSON format.

Parameters:

Name Type Description Default
config_path str

Path to save configuration file

required
set_ROI async
set_ROI(x: int, y: int, width: int, height: int)

Set Region of Interest (ROI).

Parameters:

Name Type Description Default
x int

ROI x offset

required
y int

ROI y offset

required
width int

ROI width

required
height int

ROI height

required

Raises:

Type Description
CameraConfigurationError

If ROI parameters are invalid
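The kind of validation implied by CameraConfigurationError can be sketched as a bounds check: offsets must be non-negative and the ROI must fit the sensor. The 1920x1080 sensor size is a placeholder, not the mock camera's actual resolution.

```python
# Hypothetical sketch of ROI validation; the sensor dimensions are placeholders.
def validate_roi(x, y, width, height, sensor_w=1920, sensor_h=1080):
    if x < 0 or y < 0 or width <= 0 or height <= 0:
        raise ValueError("ROI offsets must be >= 0 and dimensions > 0")
    if x + width > sensor_w or y + height > sensor_h:
        raise ValueError("ROI exceeds sensor bounds")
    return {"x": x, "y": y, "width": width, "height": height}

print(validate_roi(100, 100, 640, 480))
```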

get_ROI async
get_ROI() -> Dict[str, int]

Get current Region of Interest (ROI).

Returns:

Type Description
Dict[str, int]

Dictionary with ROI parameters

reset_ROI async
reset_ROI()

Reset ROI to full sensor size.

set_gain async
set_gain(gain: Union[int, float])

Set camera gain.

Parameters:

Name Type Description Default
gain Union[int, float]

Gain value

required

Raises:

Type Description
CameraConfigurationError

If gain value is out of range

get_gain_range async
get_gain_range() -> List[Union[int, float]]

Get the supported gain range.

Returns:

Type Description
List[Union[int, float]]

List with [min_gain, max_gain]

get_gain async
get_gain() -> float

Get current camera gain.

Returns:

Type Description
float

Current gain value

get_wb async
get_wb() -> str

Get current white balance mode.

Returns:

Type Description
str

Current white balance mode

set_auto_wb_once async
set_auto_wb_once(value: str)

Set white balance mode.

Parameters:

Name Type Description Default
value str

White balance mode

required
get_wb_range async
get_wb_range() -> List[str]

Get available white balance modes.

Returns:

Type Description
List[str]

List of available white balance modes (lowercase for API compatibility)

get_pixel_format_range async
get_pixel_format_range() -> List[str]

Get available pixel formats.

Returns:

Type Description
List[str]

List of available pixel formats

get_current_pixel_format async
get_current_pixel_format() -> str

Get current pixel format.

Returns:

Type Description
str

Current pixel format

set_pixel_format async
set_pixel_format(pixel_format: str)

Set pixel format.

Parameters:

Name Type Description Default
pixel_format str

Pixel format to set

required

Raises:

Type Description
CameraConfigurationError

If pixel format is not supported

get_width_range async
get_width_range() -> List[int]

Get camera width range.

Returns:

Type Description
List[int]

List containing [min_width, max_width]

get_height_range async
get_height_range() -> List[int]

Get camera height range.

Returns:

Type Description
List[int]

List containing [min_height, max_height]

set_bandwidth_limit async
set_bandwidth_limit(limit_mbps: Optional[float])

Set GigE camera bandwidth limit in Mbps (simulated).

get_bandwidth_limit async
get_bandwidth_limit() -> float

Get current bandwidth limit (simulated).

set_packet_size async
set_packet_size(size: int)

Set GigE packet size for network optimization (simulated).

get_packet_size async
get_packet_size() -> int

Get current packet size (simulated).

set_inter_packet_delay async
set_inter_packet_delay(delay_ticks: int)

Set inter-packet delay for network traffic control (simulated).

get_inter_packet_delay async
get_inter_packet_delay() -> int

Get current inter-packet delay (simulated).

set_capture_timeout async
set_capture_timeout(timeout_ms: int)

Set capture timeout in milliseconds.

Parameters:

Name Type Description Default
timeout_ms int

Timeout value in milliseconds

required

Raises:

Type Description
ValueError

If timeout_ms is negative

get_capture_timeout async
get_capture_timeout() -> int

Get current capture timeout in milliseconds.

Returns:

Type Description
int

Current timeout value in milliseconds

get_trigger_modes async
get_trigger_modes() -> List[str]

Get available trigger modes for mock Basler cameras.

get_bandwidth_limit_range async
get_bandwidth_limit_range() -> List[float]

Get bandwidth limit range for mock GigE cameras.

get_packet_size_range async
get_packet_size_range() -> List[int]

Get packet size range for mock GigE cameras.

get_inter_packet_delay_range async
get_inter_packet_delay_range() -> List[int]

Get inter-packet delay range for mock GigE cameras.

close async
close()

Close the mock camera and release resources.

camera_backend
CameraBackend
CameraBackend(
    camera_name: Optional[str] = None,
    camera_config: Optional[str] = None,
    img_quality_enhancement: Optional[bool] = None,
    retrieve_retry_count: Optional[int] = None,
)

Bases: MindtraceABC

Abstract base class for all camera implementations.

This class defines the async interface that all camera backends must implement to ensure consistent behavior across different camera types and manufacturers.

Thread Model

Backends declare their threading requirements via the REQUIRES_THREAD_AFFINITY class attribute:

  • When True, a dedicated single-thread executor is created per camera instance to ensure all SDK calls for that camera execute on the same OS thread. This is required by SDKs like Pypylon and Harvesters that bind camera objects to the thread that opened them.

  • When False, blocking calls are dispatched via asyncio.to_thread() using the default shared thread pool. This is suitable for thread-safe SDKs like OpenCV.

All blocking SDK calls should use the _run_blocking() method, which automatically selects the appropriate execution strategy based on REQUIRES_THREAD_AFFINITY.

Subclass Requirements
  • Set REQUIRES_THREAD_AFFINITY = True if the SDK requires thread affinity
  • Use _run_blocking() for all SDK calls that may block
  • Call await self._cleanup_executor() in close() to release thread resources
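The REQUIRES_THREAD_AFFINITY dispatch described above can be sketched with stdlib primitives. AffinityDemo is a stand-in, not the real CameraBackend base class; it shows how a single-worker executor pins every blocking call to one OS thread, while the False branch falls back to asyncio.to_thread().

```python
import asyncio
import threading
from concurrent.futures import ThreadPoolExecutor

# Minimal sketch of the thread-affinity dispatch; a stand-in, not the real
# CameraBackend implementation.
class AffinityDemo:
    REQUIRES_THREAD_AFFINITY = True

    def __init__(self):
        # One dedicated thread per instance when affinity is required
        self._executor = (
            ThreadPoolExecutor(max_workers=1)
            if self.REQUIRES_THREAD_AFFINITY
            else None
        )

    async def _run_blocking(self, fn, *args):
        if self._executor is not None:
            loop = asyncio.get_running_loop()
            return await loop.run_in_executor(self._executor, fn, *args)
        return await asyncio.to_thread(fn, *args)

    async def _cleanup_executor(self):
        if self._executor is not None:
            self._executor.shutdown(wait=True)

async def main():
    demo = AffinityDemo()
    # Every blocking call lands on the same OS thread
    t1 = await demo._run_blocking(threading.get_ident)
    t2 = await demo._run_blocking(threading.get_ident)
    await demo._cleanup_executor()
    return t1 == t2

print(asyncio.run(main()))  # True
```

A single-worker ThreadPoolExecutor is the simplest way to guarantee affinity: it owns exactly one thread, so every dispatched call runs there.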

Attributes:

Name Type Description
REQUIRES_THREAD_AFFINITY bool

Class attribute indicating thread affinity requirement

camera_name

Unique identifier for the camera

camera_config_file

Path to camera configuration file

img_quality_enhancement

Whether image quality enhancement is enabled

retrieve_retry_count

Number of retries for image retrieval

camera Optional[Any]

The initialized camera object (implementation-specific)

device_manager Optional[Any]

Device manager object (implementation-specific)

initialized bool

Camera initialization status

Initialize base camera with configuration integration.

Parameters:

Name Type Description Default
camera_name Optional[str]

Unique identifier for the camera (auto-generated if None)

None
camera_config Optional[str]

Path to camera configuration file

None
img_quality_enhancement Optional[bool]

Whether to apply image quality enhancement (uses config default if None)

None
retrieve_retry_count Optional[int]

Number of retries for image retrieval (uses config default if None)

None
setup_camera async
setup_camera()

Common setup method for camera initialization.

This method provides a standardized setup pattern that can be used by all camera backends. It calls the abstract initialize() method and handles common initialization patterns.

Raises:

Type Description
CameraNotFoundError

If camera cannot be found

CameraInitializationError

If camera initialization fails

CameraConnectionError

If camera connection fails

set_bandwidth_limit async
set_bandwidth_limit(limit_mbps: Optional[float])

Set GigE camera bandwidth limit in Mbps.

get_bandwidth_limit async
get_bandwidth_limit() -> float

Get current bandwidth limit.

set_packet_size async
set_packet_size(size: int)

Set GigE packet size for network optimization.

get_packet_size async
get_packet_size() -> int

Get current packet size.

set_inter_packet_delay async
set_inter_packet_delay(delay_ticks: int)

Set inter-packet delay for network traffic control.

get_inter_packet_delay async
get_inter_packet_delay() -> int

Get current inter-packet delay.

set_capture_timeout async
set_capture_timeout(timeout_ms: int)

Set capture timeout in milliseconds.

Parameters:

Name Type Description Default
timeout_ms int

Timeout value in milliseconds

required
Note

This is a runtime-configurable parameter that can be changed without reinitializing the camera.

get_capture_timeout async
get_capture_timeout() -> int

Get current capture timeout in milliseconds.

Returns:

Type Description
int

Current timeout value in milliseconds

get_lens_status async
get_lens_status() -> Dict[str, Any]

Get liquid lens hardware state.

Returns:

Type Description
Dict[str, Any]

Dict with keys:
  • connected (bool): Whether a lens is physically connected
  • status (str): Lens status string (e.g., "Lens OK")
  • optical_power (float | None): Current optical power in diopters
get_optical_power async
get_optical_power() -> float

Get current lens optical power in diopters.

set_optical_power async
set_optical_power(diopters: float)

Set lens optical power in diopters (manual focus).

Parameters:

Name Type Description Default
diopters float

Target optical power within the lens range.

required
get_optical_power_range async
get_optical_power_range() -> List[float]

Get optical power range [min, max] in diopters.

trigger_autofocus async
trigger_autofocus(accuracy: str = 'Normal') -> bool

Trigger one-shot autofocus.

Parameters:

Name Type Description Default
accuracy str

Autofocus accuracy mode: "Fast", "Normal", or "Accurate".

'Normal'

Returns:

Type Description
bool

True when autofocus completes successfully.

get_focus_config async
get_focus_config() -> Dict[str, Any]

Get current focus/autofocus configuration.

Returns:

Type Description
Dict[str, Any]

Dict with keys: accuracy, stepper, stepper_lower_limit, stepper_upper_limit, roi_size, focus_source, edge_detection, roi_offset_x, roi_offset_y.

set_focus_config async
set_focus_config(**settings)

Set focus/autofocus parameters.

Parameters:

Name Type Description Default
**settings

Keys matching get_focus_config() return values.

{}
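The contract of set_focus_config(**settings) — accept only keys that get_focus_config() reports — can be sketched as a dict merge with key validation. The FOCUS_DEFAULTS values below are illustrative placeholders, not the backend's real defaults.

```python
# Hypothetical sketch of set_focus_config(**settings) semantics; the defaults
# are placeholders, not the backend's actual values.
FOCUS_DEFAULTS = {
    "accuracy": "Normal",
    "stepper": 0,
    "stepper_lower_limit": 0,
    "stepper_upper_limit": 100,
    "roi_size": 128,
    "focus_source": "image",
    "edge_detection": False,
    "roi_offset_x": 0,
    "roi_offset_y": 0,
}

def set_focus_config(config, **settings):
    # Reject keys that get_focus_config() would not report
    unknown = set(settings) - set(config)
    if unknown:
        raise KeyError(f"Unsupported focus keys: {sorted(unknown)}")
    config.update(settings)
    return config

cfg = dict(FOCUS_DEFAULTS)
print(set_focus_config(cfg, accuracy="Fast", roi_size=256)["accuracy"])  # Fast
```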
genicam

GenICam Camera Backend Module

GenICamCameraBackend
GenICamCameraBackend(
    camera_name: str,
    camera_config: Optional[str] = None,
    img_quality_enhancement: Optional[bool] = None,
    retrieve_retry_count: Optional[int] = None,
    **backend_kwargs
)

Bases: CameraBackend

GenICam camera backend using the Harvesters library.

This backend provides support for any GenICam-compliant camera via a GenTL Producer (.cti file), including cameras from Keyence, Allied Vision, FLIR, and others.

Thread Model

The Harvesters library requires thread affinity: ImageAcquirer operations must execute on the same OS thread that created the acquirer. This backend uses a dedicated single-thread executor per camera instance.

The Harvester instance (which manages the GenTL Producer) is shared as a singleton across all GenICam camera instances to prevent device conflicts. Only the per-camera ImageAcquirer operations use the dedicated executor.

Features
  • GenICam-compliant camera support via Harvesters
  • Matrix Vision GenTL Producer integration
  • Hardware trigger and continuous capture modes
  • Region of Interest (ROI) control
  • Automatic and manual exposure/gain control
  • CLAHE image quality enhancement
  • Vendor-specific parameter handling (Keyence, Basler, etc.)
Requirements
  • Matrix Vision mvIMPACT Acquire SDK (provides GenTL Producer)
  • Harvesters package (pip install harvesters)
  • OpenCV for image processing

Example::

from mindtrace.hardware.cameras.backends.genicam import GenICamCameraBackend

async with GenICamCameraBackend("device_serial") as camera:
    await camera.set_exposure(50000)
    image = await camera.capture()

Attributes:

Name Type Description
image_acquirer Optional[Any]

Harvesters ImageAcquirer for this camera

harvester Optional[Harvester]

Shared Harvester instance (class-level singleton)

triggermode

Current trigger mode ("continuous" or "trigger")

timeout_ms

Capture timeout in milliseconds

cti_path

Path to the GenTL Producer file

vendor_quirks Dict[str, bool]

Vendor-specific parameter handling flags

Initialize GenICam camera with configurable parameters.

Parameters:

Name Type Description Default
camera_name str

Camera identifier (serial number, device ID, or user-defined name)

required
camera_config Optional[str]

Path to JSON configuration file (optional)

None
img_quality_enhancement Optional[bool]

Enable CLAHE image enhancement (uses config default if None)

None
retrieve_retry_count Optional[int]

Number of capture retry attempts (uses config default if None)

None
**backend_kwargs

Backend-specific parameters:
  • cti_path: Path to GenTL Producer file (auto-detected if None)
  • timeout_ms: Capture timeout in milliseconds (uses config default if None)
  • buffer_count: Number of frame buffers (uses config default if None)

{}

Raises:

Type Description
SDKNotAvailableError

If Harvesters library is not available

CameraConfigurationError

If configuration is invalid

CameraInitializationError

If camera initialization fails

get_available_cameras staticmethod
get_available_cameras(
    include_details: bool = False,
) -> Union[List[str], Dict[str, Dict[str, str]]]

Get available GenICam cameras.

Parameters:

Name Type Description Default
include_details bool

If True, return detailed information

False

Returns:

Type Description
Union[List[str], Dict[str, Dict[str, str]]]

List of camera names (serial numbers or device IDs) or dict with details

Raises:

Type Description
SDKNotAvailableError

If Harvesters library is not available

HardwareOperationError

If camera discovery fails

discover_async async classmethod
discover_async(
    include_details: bool = False,
) -> Union[List[str], Dict[str, Dict[str, str]]]

Async wrapper for get_available_cameras() - runs discovery in threadpool.

Use this instead of get_available_cameras() when calling from an async context to avoid blocking the event loop during camera discovery.

Parameters:

Name Type Description Default
include_details bool

If True, return detailed camera information

False

Returns:

Type Description
Union[List[str], Dict[str, Dict[str, str]]]

List of camera names or dict with details (same as get_available_cameras)
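The discover_async pattern amounts to pushing a blocking SDK enumeration onto a worker thread so the event loop stays responsive. In this sketch, discover_blocking is a stub standing in for get_available_cameras():

```python
import asyncio

# Sketch of the discover_async pattern; discover_blocking is a stub standing
# in for the blocking get_available_cameras() call.
def discover_blocking():
    return ["serial_001", "serial_002"]  # stand-in for SDK device enumeration

async def discover_async():
    # Run the blocking call in the default thread pool
    return await asyncio.to_thread(discover_blocking)

print(asyncio.run(discover_async()))  # ['serial_001', 'serial_002']
```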

initialize async
initialize() -> Tuple[bool, Any, Any]

Initialize the camera connection.

This searches for the camera by name, serial number, or device ID and establishes a connection if found.

Returns:

Type Description
Tuple[bool, Any, Any]

Tuple of (success status, image_acquirer object, device_info)

Raises:

Type Description
CameraNotFoundError

If no cameras found or specified camera not found

CameraInitializationError

If camera initialization fails

CameraConnectionError

If camera connection fails

get_exposure_range async
get_exposure_range() -> List[Union[int, float]]

Get the supported exposure time range in microseconds.

Returns:

Type Description
List[Union[int, float]]

List with [min_exposure, max_exposure] in microseconds

Raises:

Type Description
CameraConnectionError

If camera is not initialized or accessible

HardwareOperationError

If exposure range retrieval fails

get_exposure async
get_exposure() -> float

Get current exposure time in microseconds.

Returns:

Type Description
float

Current exposure time

Raises:

Type Description
CameraConnectionError

If camera is not initialized or accessible

HardwareOperationError

If exposure retrieval fails

set_exposure async
set_exposure(exposure: Union[int, float])

Set the camera exposure time in microseconds.

Parameters:

Name Type Description Default
exposure Union[int, float]

Exposure time in microseconds

required

Raises:

Type Description
CameraConnectionError

If camera is not initialized or accessible

CameraConfigurationError

If exposure value is out of range

HardwareOperationError

If exposure setting fails

get_current_pixel_format async
get_current_pixel_format() -> str

Get current pixel format.

Returns:

Type Description
str

Current pixel format string (e.g., "Mono8", "RGB8", "BayerRG8")

Raises:

Type Description
CameraConnectionError

If camera is not initialized or accessible

HardwareOperationError

If pixel format retrieval fails

get_width_range async
get_width_range() -> List[int]

Get camera width range.

Returns:

Type Description
List[int]

List containing [min_width, max_width]

Raises:

Type Description
CameraConnectionError

If camera is not initialized

HardwareOperationError

If width range retrieval fails

get_height_range async
get_height_range() -> List[int]

Get camera height range.

Returns:

Type Description
List[int]

List containing [min_height, max_height]

Raises:

Type Description
CameraConnectionError

If camera is not initialized

HardwareOperationError

If height range retrieval fails

get_pixel_format_range async
get_pixel_format_range() -> List[str]

Get list of supported pixel formats.

Returns:

Type Description
List[str]

List of supported pixel format strings

Raises:

Type Description
CameraConnectionError

If camera is not initialized or accessible

HardwareOperationError

If pixel format list retrieval fails

set_pixel_format async
set_pixel_format(pixel_format: str)

Set the camera pixel format.

Parameters:

Name Type Description Default
pixel_format str

Pixel format string (e.g., "Mono8", "RGB8")

required

Raises:

Type Description
CameraConnectionError

If camera is not initialized or accessible

CameraConfigurationError

If pixel format is not supported

HardwareOperationError

If pixel format setting fails

get_triggermode async
get_triggermode() -> str

Get current trigger mode.

Returns:

Type Description
str

"continuous" or "trigger"

Raises:

Type Description
CameraConnectionError

If camera is not initialized or accessible

HardwareOperationError

If trigger mode retrieval fails

set_triggermode async
set_triggermode(triggermode: str = 'continuous')

Set the camera's trigger mode for image acquisition.

Parameters:

Name Type Description Default
triggermode str

Trigger mode ("continuous" or "trigger")

'continuous'

Raises:

Type Description
CameraConnectionError

If camera is not initialized or accessible

CameraConfigurationError

If trigger mode is invalid

HardwareOperationError

If trigger mode setting fails

capture async
capture() -> np.ndarray

Capture a single image from the camera.

In continuous mode, returns the latest available frame. In trigger mode, executes a software trigger and waits for the image.

Returns:

Type Description
ndarray

Image array in BGR format

Raises:

Type Description
CameraConnectionError

If camera is not initialized or accessible

CameraCaptureError

If image capture fails

CameraTimeoutError

If capture times out

check_connection async
check_connection() -> bool

Check if camera is connected and operational.

Returns:

Type Description
bool

True if connected and operational, False otherwise

close async
close()

Close the camera and release resources.

Raises:

Type Description
CameraConnectionError

If camera closure fails

get_gain_range async
get_gain_range() -> List[Union[int, float]]

Get camera gain range.

get_gain async
get_gain() -> float

Get current camera gain.

set_gain async
set_gain(gain: Union[int, float])

Set camera gain.

get_wb async
get_wb() -> str

Get current white balance mode using GenICam nodes.

Returns:

Type Description
str

Current white balance mode string

Raises:

Type Description
CameraConnectionError

If camera is not initialized

set_auto_wb_once async
set_auto_wb_once(value: str)

Execute automatic white balance once using GenICam nodes.

Parameters:

Name Type Description Default
value str

White balance mode ("auto", "once", "manual", "off")

required

Raises:

Type Description
CameraConnectionError

If camera is not initialized

HardwareOperationError

If white balance setting fails

get_wb_range async
get_wb_range() -> List[str]

Get available white balance modes using GenICam nodes.

Returns:

Type Description
List[str]

List of available white balance mode strings

Raises:

Type Description
CameraConnectionError

If camera is not initialized

import_config async
import_config(config_path: str)

Import camera configuration from JSON file.

Parameters:

Name Type Description Default
config_path str

Path to JSON configuration file

required

Raises:

Type Description
CameraConnectionError

If camera is not initialized

CameraConfigurationError

If configuration file is invalid

HardwareOperationError

If configuration import fails

export_config async
export_config(config_path: str)

Export camera configuration to JSON file.

Parameters:

Name Type Description Default
config_path str

Path to save JSON configuration file

required

Raises:

Type Description
CameraConnectionError

If camera is not initialized

HardwareOperationError

If configuration export fails

set_ROI async
set_ROI(x: int, y: int, width: int, height: int)

Set Region of Interest using GenICam nodes.

Parameters:

Name Type Description Default
x int

Left offset

required
y int

Top offset

required
width int

Width of ROI

required
height int

Height of ROI

required

Raises:

Type Description
CameraConnectionError

If camera is not initialized

CameraConfigurationError

If ROI parameters are invalid

HardwareOperationError

If ROI setting fails

get_ROI async
get_ROI() -> Dict[str, int]

Get current ROI settings from GenICam nodes.

Returns:

Type Description
Dict[str, int]

Dictionary with 'x', 'y', 'width', 'height' keys

Raises:

Type Description
CameraConnectionError

If camera is not initialized

reset_ROI async
reset_ROI()

Reset ROI to maximum sensor area using GenICam nodes.

Raises:

Type Description
CameraConnectionError

If camera is not initialized

HardwareOperationError

If ROI reset fails

set_capture_timeout async
set_capture_timeout(timeout_ms: int)

Set capture timeout in milliseconds.

Parameters:

Name Type Description Default
timeout_ms int

Timeout value in milliseconds

required

Raises:

Type Description
ValueError

If timeout_ms is negative

get_capture_timeout async
get_capture_timeout() -> int

Get current capture timeout in milliseconds.

Returns:

Type Description
int

Current timeout value in milliseconds

MockGenICamCameraBackend
MockGenICamCameraBackend(
    camera_name: str,
    camera_config: Optional[str] = None,
    img_quality_enhancement: Optional[bool] = None,
    retrieve_retry_count: Optional[int] = None,
    **backend_kwargs
)

Bases: CameraBackend

Mock GenICam Camera Backend Implementation

This class provides a mock implementation of the GenICam camera backend for testing and development. It simulates GenICam camera functionality without requiring actual hardware, Harvesters library, or GenTL Producer files.

Features
  • Complete simulation of GenICam camera API
  • Configurable image generation with realistic patterns
  • Error simulation for testing error handling
  • Configuration import/export simulation
  • Camera control features (exposure, ROI, trigger modes, etc.)
  • Vendor-specific quirks simulation (Keyence, Basler, etc.)
  • Realistic timing and behavior simulation

Usage::

from mindtrace.hardware.cameras.backends.genicam import MockGenICamCameraBackend

camera = MockGenICamCameraBackend("mock_keyence_001", vendor="KEYENCE")
await camera.set_exposure(50000)
image = await camera.capture()
await camera.close()
Error Simulation

Enable error simulation via environment variables:
  • MOCK_GENICAM_FAIL_INIT: Simulate initialization failure
  • MOCK_GENICAM_FAIL_CAPTURE: Simulate capture failure
  • MOCK_GENICAM_TIMEOUT: Simulate timeout errors
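How such flags might be read can be sketched with a small stdlib helper. The truthy-string set below is an assumption for illustration, not the library's exact parsing rules.

```python
import os

# Hypothetical sketch of reading the documented environment flags; the set of
# truthy strings is an assumption, not the library's actual parsing.
def env_flag(name: str) -> bool:
    return os.environ.get(name, "").strip().lower() in {"1", "true", "yes", "on"}

os.environ["MOCK_GENICAM_FAIL_CAPTURE"] = "1"
print(env_flag("MOCK_GENICAM_FAIL_CAPTURE"))  # True
print(env_flag("MOCK_GENICAM_FAIL_INIT"))     # False unless set in the env
```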

Attributes:

Name Type Description
initialized bool

Whether camera was successfully initialized

camera_name

Name/identifier of the mock camera

triggermode

Current trigger mode ("continuous" or "trigger")

img_quality_enhancement

Current image enhancement setting

timeout_ms

Capture timeout in milliseconds

retrieve_retry_count

Number of capture retry attempts

exposure_time

Current exposure time in microseconds

gain

Current gain value

roi

Current region of interest settings

vendor

Simulated camera vendor

model

Simulated camera model

serial_number

Simulated serial number

vendor_quirks

Vendor-specific parameter handling flags

image_counter

Counter for generating unique images

fail_init

Whether to simulate initialization failure

fail_capture

Whether to simulate capture failure

simulate_timeout

Whether to simulate timeout errors

Initialize mock GenICam camera.

Parameters:

Name Type Description Default
camera_name str

Camera identifier

required
camera_config Optional[str]

Path to configuration file (simulated)

None
img_quality_enhancement Optional[bool]

Enable image enhancement simulation (uses config default if None)

None
retrieve_retry_count Optional[int]

Number of capture retry attempts (uses config default if None)

None
**backend_kwargs

Backend-specific parameters:
  • vendor: Simulated vendor ("KEYENCE", "BASLER", "FLIR", etc.)
  • model: Simulated model name
  • serial_number: Simulated serial number
  • cti_path: Simulated CTI path (ignored in mock)
  • timeout_ms: Timeout in milliseconds
  • buffer_count: Buffer count (simulated)
  • simulate_fail_init: If True, simulate initialization failure
  • simulate_fail_capture: If True, simulate capture failure
  • simulate_timeout: If True, simulate timeout on capture
  • synthetic_width: Override synthetic image width (int)
  • synthetic_height: Override synthetic image height (int)
  • synthetic_pattern: One of {"auto", "gradient", "checkerboard", "circular", "noise"}
  • synthetic_overlay_text: If False, disables text overlays in synthetic images

{}

Raises:

Type Description
CameraConfigurationError

If configuration is invalid

CameraInitializationError

If initialization fails (when simulated)

get_available_cameras staticmethod
get_available_cameras(
    include_details: bool = False,
) -> Union[List[str], Dict[str, Dict[str, str]]]

Get available mock GenICam cameras.

Parameters:

Name Type Description Default
include_details bool

If True, return detailed information

False

Returns:

Type Description
Union[List[str], Dict[str, Dict[str, str]]]

List of camera names or dict with details

initialize async
initialize() -> Tuple[bool, Any, Any]

Initialize the mock camera connection.

Returns:

Type Description
Tuple[bool, Any, Any]

Tuple of (success status, mock camera object, device_info)

Raises:

Type Description
CameraNotFoundError

If camera not found (when simulated)

CameraInitializationError

If initialization fails (when simulated)

CameraConnectionError

If connection fails (when simulated)

get_exposure_range async
get_exposure_range() -> List[Union[int, float]]

Get the simulated exposure time range in microseconds.

get_exposure async
get_exposure() -> float

Get current simulated exposure time in microseconds.

set_exposure async
set_exposure(exposure: Union[int, float])

Set the simulated camera exposure time in microseconds.

get_triggermode async
get_triggermode() -> str

Get current simulated trigger mode.

set_triggermode async
set_triggermode(triggermode: str = 'continuous')

Set the simulated camera's trigger mode.

capture async
capture() -> np.ndarray

Capture a simulated image from the mock camera.

check_connection async
check_connection() -> bool

Check if mock camera is connected and operational.

close async
close()

Close the mock camera and release simulated resources.

get_gain_range async
get_gain_range() -> List[Union[int, float]]

Get simulated camera gain range.

get_gain async
get_gain() -> float

Get current simulated camera gain.

set_gain async
set_gain(gain: Union[int, float])

Set simulated camera gain.

set_ROI async
set_ROI(x: int, y: int, width: int, height: int)

Set simulated Region of Interest.

get_ROI async
get_ROI() -> Dict[str, int]

Get current simulated ROI settings.

reset_ROI async
reset_ROI()

Reset simulated ROI to maximum sensor area.

set_capture_timeout async
set_capture_timeout(timeout_ms: int)

Set capture timeout in milliseconds.

Parameters:

Name Type Description Default
timeout_ms int

Timeout value in milliseconds

required

Raises:

Type Description
ValueError

If timeout_ms is negative

get_capture_timeout async
get_capture_timeout() -> int

Get current capture timeout in milliseconds.

Returns:

Type Description
int

Current timeout value in milliseconds

import_config async
import_config(config_path: str)

Import simulated camera configuration from JSON file.

export_config async
export_config(config_path: str)

Export current simulated camera configuration to JSON file.

genicam_camera_backend

GenICam Camera Backend Module

GenICamCameraBackend
GenICamCameraBackend(
    camera_name: str,
    camera_config: Optional[str] = None,
    img_quality_enhancement: Optional[bool] = None,
    retrieve_retry_count: Optional[int] = None,
    **backend_kwargs
)

Bases: CameraBackend

GenICam camera backend using the Harvesters library.

This backend provides support for any GenICam-compliant camera via a GenTL Producer (.cti file), including cameras from Keyence, Allied Vision, FLIR, and others.

Thread Model

The Harvesters library requires thread affinity - ImageAcquirer operations must execute on the same OS thread that created the acquirer. This backend uses a dedicated single-thread executor per camera instance.

The Harvester instance (which manages the GenTL Producer) is shared as a singleton across all GenICam camera instances to prevent device conflicts. Only the per-camera ImageAcquirer operations use the dedicated executor.
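The per-camera single-thread executor described above can be sketched with the standard library. This is a generic illustration of the thread-affinity pattern, not the backend's actual implementation; `ThreadAffineDevice` is a hypothetical name:

```python
import asyncio
import threading
from concurrent.futures import ThreadPoolExecutor

class ThreadAffineDevice:
    """Runs every operation on one dedicated OS thread (thread affinity)."""

    def __init__(self):
        # max_workers=1 guarantees all submitted work shares one thread,
        # mirroring the per-camera dedicated executor described above.
        self._executor = ThreadPoolExecutor(max_workers=1)

    async def call(self, fn, *args):
        loop = asyncio.get_running_loop()
        return await loop.run_in_executor(self._executor, fn, *args)

    def close(self):
        self._executor.shutdown()

async def main():
    dev = ThreadAffineDevice()
    t1 = await dev.call(threading.get_ident)
    t2 = await dev.call(threading.get_ident)
    assert t1 == t2  # both operations ran on the same OS thread
    dev.close()

asyncio.run(main())
```

Because the executor has a single worker, every `call()` executes on the same thread, which is exactly what libraries with thread affinity require.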

Features
  • GenICam-compliant camera support via Harvesters
  • Matrix Vision GenTL Producer integration
  • Hardware trigger and continuous capture modes
  • Region of Interest (ROI) control
  • Automatic and manual exposure/gain control
  • CLAHE image quality enhancement
  • Vendor-specific parameter handling (Keyence, Basler, etc.)
Requirements
  • Matrix Vision mvIMPACT Acquire SDK (provides GenTL Producer)
  • Harvesters package (pip install harvesters)
  • OpenCV for image processing

Example::

from mindtrace.hardware.cameras.backends.genicam import GenICamCameraBackend

async with GenICamCameraBackend("device_serial") as camera:
    await camera.set_exposure(50000)
    image = await camera.capture()

Attributes:

Name Type Description
image_acquirer Optional[Any]

Harvesters ImageAcquirer for this camera

harvester Optional[Harvester]

Shared Harvester instance (class-level singleton)

triggermode

Current trigger mode ("continuous" or "trigger")

timeout_ms

Capture timeout in milliseconds

cti_path

Path to the GenTL Producer file

vendor_quirks Dict[str, bool]

Vendor-specific parameter handling flags

Initialize GenICam camera with configurable parameters.

Parameters:

Name Type Description Default
camera_name str

Camera identifier (serial number, device ID, or user-defined name)

required
camera_config Optional[str]

Path to JSON configuration file (optional)

None
img_quality_enhancement Optional[bool]

Enable CLAHE image enhancement (uses config default if None)

None
retrieve_retry_count Optional[int]

Number of capture retry attempts (uses config default if None)

None
**backend_kwargs

Backend-specific parameters:
  • cti_path: Path to GenTL Producer file (auto-detected if None)
  • timeout_ms: Capture timeout in milliseconds (uses config default if None)
  • buffer_count: Number of frame buffers (uses config default if None)

{}

Raises:

Type Description
SDKNotAvailableError

If Harvesters library is not available

CameraConfigurationError

If configuration is invalid

CameraInitializationError

If camera initialization fails

get_available_cameras staticmethod
get_available_cameras(
    include_details: bool = False,
) -> Union[List[str], Dict[str, Dict[str, str]]]

Get available GenICam cameras.

Parameters:

Name Type Description Default
include_details bool

If True, return detailed information

False

Returns:

Type Description
Union[List[str], Dict[str, Dict[str, str]]]

List of camera names (serial numbers or device IDs) or dict with details

Raises:

Type Description
SDKNotAvailableError

If Harvesters library is not available

HardwareOperationError

If camera discovery fails

discover_async async classmethod
discover_async(
    include_details: bool = False,
) -> Union[List[str], Dict[str, Dict[str, str]]]

Async wrapper for get_available_cameras() - runs discovery in threadpool.

Use this instead of get_available_cameras() when calling from async context to avoid blocking the event loop during camera discovery.

Parameters:

Name Type Description Default
include_details bool

If True, return detailed camera information

False

Returns:

Type Description
Union[List[str], Dict[str, Dict[str, str]]]

List of camera names or dict with details (same as get_available_cameras)
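The wrapper pattern behind `discover_async` (offload a blocking discovery scan to a thread pool so the event loop stays responsive) can be sketched generically; `blocking_discover` here is a stand-in for the SDK call and returns placeholder names:

```python
import asyncio

def blocking_discover():
    # Stand-in for the blocking get_available_cameras() scan.
    return ["cam_serial_001", "cam_serial_002"]

async def discover_async():
    # Offload the blocking scan to the default thread pool so the
    # event loop is not blocked during discovery.
    return await asyncio.to_thread(blocking_discover)

names = asyncio.run(discover_async())
```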

initialize async
initialize() -> Tuple[bool, Any, Any]

Initialize the camera connection.

This searches for the camera by name, serial number, or device ID and establishes a connection if found.

Returns:

Type Description
Tuple[bool, Any, Any]

Tuple of (success status, image_acquirer object, device_info)

Raises:

Type Description
CameraNotFoundError

If no cameras found or specified camera not found

CameraInitializationError

If camera initialization fails

CameraConnectionError

If camera connection fails

get_exposure_range async
get_exposure_range() -> List[Union[int, float]]

Get the supported exposure time range in microseconds.

Returns:

Type Description
List[Union[int, float]]

List with [min_exposure, max_exposure] in microseconds

Raises:

Type Description
CameraConnectionError

If camera is not initialized or accessible

HardwareOperationError

If exposure range retrieval fails

get_exposure async
get_exposure() -> float

Get current exposure time in microseconds.

Returns:

Type Description
float

Current exposure time

Raises:

Type Description
CameraConnectionError

If camera is not initialized or accessible

HardwareOperationError

If exposure retrieval fails

set_exposure async
set_exposure(exposure: Union[int, float])

Set the camera exposure time in microseconds.

Parameters:

Name Type Description Default
exposure Union[int, float]

Exposure time in microseconds

required

Raises:

Type Description
CameraConnectionError

If camera is not initialized or accessible

CameraConfigurationError

If exposure value is out of range

HardwareOperationError

If exposure setting fails

get_current_pixel_format async
get_current_pixel_format() -> str

Get current pixel format.

Returns:

Type Description
str

Current pixel format string (e.g., "Mono8", "RGB8", "BayerRG8")

Raises:

Type Description
CameraConnectionError

If camera is not initialized or accessible

HardwareOperationError

If pixel format retrieval fails

get_width_range async
get_width_range() -> List[int]

Get camera width range.

Returns:

Type Description
List[int]

List containing [min_width, max_width]

Raises:

Type Description
CameraConnectionError

If camera is not initialized

HardwareOperationError

If width range retrieval fails

get_height_range async
get_height_range() -> List[int]

Get camera height range.

Returns:

Type Description
List[int]

List containing [min_height, max_height]

Raises:

Type Description
CameraConnectionError

If camera is not initialized

HardwareOperationError

If height range retrieval fails

get_pixel_format_range async
get_pixel_format_range() -> List[str]

Get list of supported pixel formats.

Returns:

Type Description
List[str]

List of supported pixel format strings

Raises:

Type Description
CameraConnectionError

If camera is not initialized or accessible

HardwareOperationError

If pixel format list retrieval fails

set_pixel_format async
set_pixel_format(pixel_format: str)

Set the camera pixel format.

Parameters:

Name Type Description Default
pixel_format str

Pixel format string (e.g., "Mono8", "RGB8")

required

Raises:

Type Description
CameraConnectionError

If camera is not initialized or accessible

CameraConfigurationError

If pixel format is not supported

HardwareOperationError

If pixel format setting fails

get_triggermode async
get_triggermode() -> str

Get current trigger mode.

Returns:

Type Description
str

"continuous" or "trigger"

Raises:

Type Description
CameraConnectionError

If camera is not initialized or accessible

HardwareOperationError

If trigger mode retrieval fails

set_triggermode async
set_triggermode(triggermode: str = 'continuous')

Set the camera's trigger mode for image acquisition.

Parameters:

Name Type Description Default
triggermode str

Trigger mode ("continuous" or "trigger")

'continuous'

Raises:

Type Description
CameraConnectionError

If camera is not initialized or accessible

CameraConfigurationError

If trigger mode is invalid

HardwareOperationError

If trigger mode setting fails

capture async
capture() -> np.ndarray

Capture a single image from the camera.

In continuous mode, returns the latest available frame. In trigger mode, executes a software trigger and waits for the image.

Returns:

Type Description
ndarray

Image array in BGR format

Raises:

Type Description
CameraConnectionError

If camera is not initialized or accessible

CameraCaptureError

If image capture fails

CameraTimeoutError

If capture times out

check_connection async
check_connection() -> bool

Check if camera is connected and operational.

Returns:

Type Description
bool

True if connected and operational, False otherwise

close async
close()

Close the camera and release resources.

Raises:

Type Description
CameraConnectionError

If camera closure fails

get_gain_range async
get_gain_range() -> List[Union[int, float]]

Get camera gain range.

get_gain async
get_gain() -> float

Get current camera gain.

set_gain async
set_gain(gain: Union[int, float])

Set camera gain.

get_wb async
get_wb() -> str

Get current white balance mode using GenICam nodes.

Returns:

Type Description
str

Current white balance mode string

Raises:

Type Description
CameraConnectionError

If camera is not initialized

set_auto_wb_once async
set_auto_wb_once(value: str)

Execute automatic white balance once using GenICam nodes.

Parameters:

Name Type Description Default
value str

White balance mode ("auto", "once", "manual", "off")

required

Raises:

Type Description
CameraConnectionError

If camera is not initialized

HardwareOperationError

If white balance setting fails

get_wb_range async
get_wb_range() -> List[str]

Get available white balance modes using GenICam nodes.

Returns:

Type Description
List[str]

List of available white balance mode strings

Raises:

Type Description
CameraConnectionError

If camera is not initialized

import_config async
import_config(config_path: str)

Import camera configuration from JSON file.

Parameters:

Name Type Description Default
config_path str

Path to JSON configuration file

required

Raises:

Type Description
CameraConnectionError

If camera is not initialized

CameraConfigurationError

If configuration file is invalid

HardwareOperationError

If configuration import fails

export_config async
export_config(config_path: str)

Export camera configuration to JSON file.

Parameters:

Name Type Description Default
config_path str

Path to save JSON configuration file

required

Raises:

Type Description
CameraConnectionError

If camera is not initialized

HardwareOperationError

If configuration export fails

set_ROI async
set_ROI(x: int, y: int, width: int, height: int)

Set Region of Interest using GenICam nodes.

Parameters:

Name Type Description Default
x int

Left offset

required
y int

Top offset

required
width int

Width of ROI

required
height int

Height of ROI

required

Raises:

Type Description
CameraConnectionError

If camera is not initialized

CameraConfigurationError

If ROI parameters are invalid

HardwareOperationError

If ROI setting fails

get_ROI async
get_ROI() -> Dict[str, int]

Get current ROI settings from GenICam nodes.

Returns:

Type Description
Dict[str, int]

Dictionary with 'x', 'y', 'width', 'height' keys

Raises:

Type Description
CameraConnectionError

If camera is not initialized

reset_ROI async
reset_ROI()

Reset ROI to maximum sensor area using GenICam nodes.

Raises:

Type Description
CameraConnectionError

If camera is not initialized

HardwareOperationError

If ROI reset fails

set_capture_timeout async
set_capture_timeout(timeout_ms: int)

Set capture timeout in milliseconds.

Parameters:

Name Type Description Default
timeout_ms int

Timeout value in milliseconds

required

Raises:

Type Description
ValueError

If timeout_ms is negative

get_capture_timeout async
get_capture_timeout() -> int

Get current capture timeout in milliseconds.

Returns:

Type Description
int

Current timeout value in milliseconds

mock_genicam_camera_backend

Mock GenICam Camera Backend Module

MockGenICamCameraBackend
MockGenICamCameraBackend(
    camera_name: str,
    camera_config: Optional[str] = None,
    img_quality_enhancement: Optional[bool] = None,
    retrieve_retry_count: Optional[int] = None,
    **backend_kwargs
)

Bases: CameraBackend

Mock GenICam Camera Backend Implementation

This class provides a mock implementation of the GenICam camera backend for testing and development. It simulates GenICam camera functionality without requiring actual hardware, Harvesters library, or GenTL Producer files.

Features
  • Complete simulation of GenICam camera API
  • Configurable image generation with realistic patterns
  • Error simulation for testing error handling
  • Configuration import/export simulation
  • Camera control features (exposure, ROI, trigger modes, etc.)
  • Vendor-specific quirks simulation (Keyence, Basler, etc.)
  • Realistic timing and behavior simulation

Usage::

from mindtrace.hardware.cameras.backends.genicam import MockGenICamCameraBackend

camera = MockGenICamCameraBackend("mock_keyence_001", vendor="KEYENCE")
await camera.set_exposure(50000)
image = await camera.capture()
await camera.close()
Error Simulation

Enable error simulation via environment variables:
  • MOCK_GENICAM_FAIL_INIT: Simulate initialization failure
  • MOCK_GENICAM_FAIL_CAPTURE: Simulate capture failure
  • MOCK_GENICAM_TIMEOUT: Simulate timeout errors
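In tests it helps to set and restore these flags around a single block. The helper below is a small sketch, not part of the package; `genicam_failure` is a hypothetical name:

```python
import os
from contextlib import contextmanager

@contextmanager
def genicam_failure(flag: str):
    """Toggle one of the documented MOCK_GENICAM_* flags, then restore it."""
    key = f"MOCK_GENICAM_{flag}"
    previous = os.environ.get(key)
    os.environ[key] = "1"
    try:
        yield
    finally:
        if previous is None:
            os.environ.pop(key, None)
        else:
            os.environ[key] = previous

with genicam_failure("FAIL_CAPTURE"):
    # Inside this block a MockGenICamCameraBackend capture() call
    # would raise a simulated capture failure.
    assert os.environ["MOCK_GENICAM_FAIL_CAPTURE"] == "1"
assert "MOCK_GENICAM_FAIL_CAPTURE" not in os.environ
```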

Attributes:

Name Type Description
initialized bool

Whether camera was successfully initialized

camera_name

Name/identifier of the mock camera

triggermode

Current trigger mode ("continuous" or "trigger")

img_quality_enhancement

Current image enhancement setting

timeout_ms

Capture timeout in milliseconds

retrieve_retry_count

Number of capture retry attempts

exposure_time

Current exposure time in microseconds

gain

Current gain value

roi

Current region of interest settings

vendor

Simulated camera vendor

model

Simulated camera model

serial_number

Simulated serial number

vendor_quirks

Vendor-specific parameter handling flags

image_counter

Counter for generating unique images

fail_init

Whether to simulate initialization failure

fail_capture

Whether to simulate capture failure

simulate_timeout

Whether to simulate timeout errors

Initialize mock GenICam camera.

Parameters:

Name Type Description Default
camera_name str

Camera identifier

required
camera_config Optional[str]

Path to configuration file (simulated)

None
img_quality_enhancement Optional[bool]

Enable image enhancement simulation (uses config default if None)

None
retrieve_retry_count Optional[int]

Number of capture retry attempts (uses config default if None)

None
**backend_kwargs

Backend-specific parameters:
  • vendor: Simulated vendor ("KEYENCE", "BASLER", "FLIR", etc.)
  • model: Simulated model name
  • serial_number: Simulated serial number
  • cti_path: Simulated CTI path (ignored in mock)
  • timeout_ms: Timeout in milliseconds
  • buffer_count: Buffer count (simulated)
  • simulate_fail_init: If True, simulate initialization failure
  • simulate_fail_capture: If True, simulate capture failure
  • simulate_timeout: If True, simulate timeout on capture
  • synthetic_width: Override synthetic image width (int)
  • synthetic_height: Override synthetic image height (int)
  • synthetic_pattern: One of {"auto", "gradient", "checkerboard", "circular", "noise"}
  • synthetic_overlay_text: If False, disables text overlays in synthetic images

{}

Raises:

Type Description
CameraConfigurationError

If configuration is invalid

CameraInitializationError

If initialization fails (when simulated)

get_available_cameras staticmethod
get_available_cameras(
    include_details: bool = False,
) -> Union[List[str], Dict[str, Dict[str, str]]]

Get available mock GenICam cameras.

Parameters:

Name Type Description Default
include_details bool

If True, return detailed information

False

Returns:

Type Description
Union[List[str], Dict[str, Dict[str, str]]]

List of camera names or dict with details

initialize async
initialize() -> Tuple[bool, Any, Any]

Initialize the mock camera connection.

Returns:

Type Description
Tuple[bool, Any, Any]

Tuple of (success status, mock camera object, device_info)

Raises:

Type Description
CameraNotFoundError

If camera not found (when simulated)

CameraInitializationError

If initialization fails (when simulated)

CameraConnectionError

If connection fails (when simulated)

get_exposure_range async
get_exposure_range() -> List[Union[int, float]]

Get the simulated exposure time range in microseconds.

get_exposure async
get_exposure() -> float

Get current simulated exposure time in microseconds.

set_exposure async
set_exposure(exposure: Union[int, float])

Set the simulated camera exposure time in microseconds.

get_triggermode async
get_triggermode() -> str

Get current simulated trigger mode.

set_triggermode async
set_triggermode(triggermode: str = 'continuous')

Set the simulated camera's trigger mode.

capture async
capture() -> np.ndarray

Capture a simulated image from the mock camera.

check_connection async
check_connection() -> bool

Check if mock camera is connected and operational.

close async
close()

Close the mock camera and release simulated resources.

get_gain_range async
get_gain_range() -> List[Union[int, float]]

Get simulated camera gain range.

get_gain async
get_gain() -> float

Get current simulated camera gain.

set_gain async
set_gain(gain: Union[int, float])

Set simulated camera gain.

set_ROI async
set_ROI(x: int, y: int, width: int, height: int)

Set simulated Region of Interest.

get_ROI async
get_ROI() -> Dict[str, int]

Get current simulated ROI settings.

reset_ROI async
reset_ROI()

Reset simulated ROI to maximum sensor area.

set_capture_timeout async
set_capture_timeout(timeout_ms: int)

Set capture timeout in milliseconds.

Parameters:

Name Type Description Default
timeout_ms int

Timeout value in milliseconds

required

Raises:

Type Description
ValueError

If timeout_ms is negative

get_capture_timeout async
get_capture_timeout() -> int

Get current capture timeout in milliseconds.

Returns:

Type Description
int

Current timeout value in milliseconds

import_config async
import_config(config_path: str)

Import simulated camera configuration from JSON file.

export_config async
export_config(config_path: str)

Export current simulated camera configuration to JSON file.

opencv

OpenCV Camera Backend

Provides support for USB cameras and webcams via OpenCV with comprehensive error handling.

Components
  • OpenCVCameraBackend: OpenCV camera implementation (requires opencv-python)
Requirements
  • opencv-python: For camera access and image processing
  • numpy: For image array operations
Installation

pip install opencv-python numpy

Usage

from mindtrace.hardware.cameras.backends.opencv import OpenCVCameraBackend

USB camera (index 0)

if OPENCV_AVAILABLE:
    camera = OpenCVCameraBackend("0")
    success, cam_obj, remote_obj = await camera.initialize()  # Initialize first
    if success:
        image = await camera.capture()
        await camera.close()

OpenCVCameraBackend
OpenCVCameraBackend(
    camera_name: str,
    camera_config: Optional[str] = None,
    img_quality_enhancement: Optional[bool] = None,
    retrieve_retry_count: Optional[int] = None,
    **backend_kwargs
)

Bases: CameraBackend

OpenCV camera implementation for USB cameras and webcams.

This backend provides a comprehensive interface to USB cameras, webcams, and other video capture devices using OpenCV's VideoCapture with robust error handling and resource management. It works across Windows, Linux, and macOS with platform-aware discovery.

Thread Model

OpenCV's VideoCapture is thread-safe and does not require thread affinity. This backend uses the default shared thread pool via asyncio.to_thread() for blocking operations, avoiding the overhead of per-camera dedicated executors. A per-instance asyncio.Lock serializes mutating operations to prevent concurrent set/read races.
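The per-instance lock that serializes mutating operations can be sketched as follows; `SerializedSettings` is an illustrative stand-in, not the backend's actual class:

```python
import asyncio

class SerializedSettings:
    """Sketch of a per-instance asyncio.Lock serializing mutating operations."""

    def __init__(self):
        self._lock = asyncio.Lock()
        self.exposure = -6.0

    async def set_exposure(self, value: float):
        async with self._lock:  # one mutating operation at a time
            self.exposure = value

    async def get_exposure(self) -> float:
        async with self._lock:
            return self.exposure

async def main():
    cam = SerializedSettings()
    # Concurrent setters are serialized by the lock; no torn reads/writes.
    await asyncio.gather(*(cam.set_exposure(float(v)) for v in range(5)))
    return await cam.get_exposure()

result = asyncio.run(main())
```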

Features
  • USB camera and webcam support across Windows, Linux, and macOS
  • Automatic camera discovery and enumeration
  • Configurable resolution, frame rate, and exposure settings
  • Optional image quality enhancement (CLAHE)
  • Robust error handling with retries and bounded timeouts
  • BGR to RGB conversion for consistency
  • Platform-specific optimizations
Configuration

All parameters are configurable via the hardware configuration system:
  • MINDTRACE_CAMERA_OPENCV_DEFAULT_WIDTH: Default frame width (1280)
  • MINDTRACE_CAMERA_OPENCV_DEFAULT_HEIGHT: Default frame height (720)
  • MINDTRACE_CAMERA_OPENCV_DEFAULT_FPS: Default frame rate (30)
  • MINDTRACE_CAMERA_OPENCV_DEFAULT_EXPOSURE: Default exposure (-1 for auto)
  • MINDTRACE_CAMERA_OPENCV_MAX_CAMERA_INDEX: Maximum camera index to test (10)
  • MINDTRACE_CAMERA_IMAGE_QUALITY_ENHANCEMENT: Enable CLAHE enhancement
  • MINDTRACE_CAMERA_RETRIEVE_RETRY_COUNT: Number of capture retry attempts
  • MINDTRACE_CAMERA_TIMEOUT_MS: Capture timeout in milliseconds
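A minimal sketch of overriding these defaults, assuming the configuration system reads the environment when a backend is constructed (variable names are taken from the list above):

```python
import os

# Set configuration overrides before constructing OpenCVCameraBackend.
os.environ["MINDTRACE_CAMERA_OPENCV_DEFAULT_WIDTH"] = "1920"
os.environ["MINDTRACE_CAMERA_OPENCV_DEFAULT_HEIGHT"] = "1080"
os.environ["MINDTRACE_CAMERA_RETRIEVE_RETRY_COUNT"] = "3"
```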

Attributes:

Name Type Description
camera_index

Camera device index or path

cap Optional[VideoCapture]

OpenCV VideoCapture object

initialized bool

Camera initialization status

width bool

Current frame width

height bool

Current frame height

fps bool

Current frame rate

exposure bool

Current exposure setting

timeout_ms

Capture timeout in milliseconds

Example::

from mindtrace.hardware.cameras.backends.opencv import OpenCVCameraBackend

async def main():
    camera = OpenCVCameraBackend("0", width=1280, height=720)
    ok, cap, _ = await camera.initialize()
    if ok:
        image = await camera.capture()
        await camera.close()

Initialize OpenCV camera with configuration.

Parameters:

Name Type Description Default
camera_name str

Camera identifier (index number or device path)

required
camera_config Optional[str]

Path to camera config file (not used for OpenCV)

None
img_quality_enhancement Optional[bool]

Whether to apply image quality enhancement (uses config default if None)

None
retrieve_retry_count Optional[int]

Number of times to retry capture (uses config default if None)

None
**backend_kwargs

Backend-specific parameters:
  • width: Frame width (uses config default if None)
  • height: Frame height (uses config default if None)
  • fps: Frame rate (uses config default if None)
  • exposure: Exposure value (uses config default if None)
  • timeout_ms: Capture timeout in milliseconds (uses config default if None)

{}

Raises:

Type Description
SDKNotAvailableError

If OpenCV is not installed

CameraConfigurationError

If configuration is invalid

CameraInitializationError

If camera initialization fails

initialize async
initialize() -> Tuple[bool, Any, Any]

Initialize the camera and establish connection.

Returns:

Type Description
Tuple[bool, Any, Any]

Tuple of (success, camera_object, remote_control_object). For OpenCV cameras, both objects are the same VideoCapture instance.

Raises:

Type Description
CameraNotFoundError

If camera cannot be opened

CameraInitializationError

If camera initialization fails

CameraConnectionError

If camera connection fails

get_available_cameras staticmethod
get_available_cameras(
    include_details: bool = False,
) -> Union[List[str], Dict[str, Dict[str, str]]]

Discover cameras with backend-aware probing.

  • Linux: prefer CAP_V4L2 probing across indices
  • Windows: try CAP_DSHOW then CAP_MSMF
  • macOS: try CAP_AVFOUNDATION
  • Fallback: default backend probing

Parameters:

Name Type Description Default
include_details bool

If True, return a dict of details per camera.

False

Returns:

Type Description
Union[List[str], Dict[str, Dict[str, str]]]

List of camera names (e.g., ["opencv_camera_0"]) or a dict of details when include_details=True

discover_async async classmethod
discover_async(
    include_details: bool = False,
) -> Union[List[str], Dict[str, Dict[str, str]]]

Async wrapper for get_available_cameras() - runs discovery in threadpool.

Use this instead of get_available_cameras() when calling from async context to avoid blocking the event loop during camera enumeration.

Parameters:

Name Type Description Default
include_details bool

If True, return a dict of details per camera.

False

Returns:

Type Description
Union[List[str], Dict[str, Dict[str, str]]]

List of camera names or dict of details

capture async
capture() -> np.ndarray

Capture an image from the camera.

Implements retry logic and proper error handling for robust image capture. Converts OpenCV's default BGR format to RGB for consistency.

Returns:

Type Description
ndarray

Captured image as an RGB numpy array

Raises:

Type Description
CameraConnectionError

If camera is not initialized or accessible

CameraCaptureError

If image capture fails

CameraTimeoutError

If capture times out
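The documented retry-with-timeout behavior can be sketched generically. `CameraCaptureError` below is a local stand-in for the package's exception, and `capture_with_retry` is a hypothetical helper mirroring the retrieve_retry_count / timeout_ms contract, not the backend's actual code:

```python
import asyncio

class CameraCaptureError(Exception):
    """Local stand-in for the package's CameraCaptureError (illustrative)."""

async def capture_with_retry(capture, retries: int = 3, timeout_s: float = 1.0):
    # Bound each attempt with a timeout and retry on failure,
    # mirroring the documented retrieve_retry_count / timeout_ms behavior.
    last_exc = None
    for _ in range(retries):
        try:
            return await asyncio.wait_for(capture(), timeout=timeout_s)
        except (CameraCaptureError, asyncio.TimeoutError) as exc:
            last_exc = exc
    raise CameraCaptureError("capture failed after retries") from last_exc

async def main():
    attempts = 0

    async def flaky_capture():
        nonlocal attempts
        attempts += 1
        if attempts < 3:
            raise CameraCaptureError("transient failure")
        return "frame"

    frame = await capture_with_retry(flaky_capture)
    assert frame == "frame" and attempts == 3

asyncio.run(main())
```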

check_connection async
check_connection() -> bool

Check if camera connection is active and healthy.

Returns:

Type Description
bool

True if camera is connected and responsive, False otherwise

close async
close()

Close camera connection and cleanup resources.

Properly releases the VideoCapture object and resets camera state.

Raises:

Type Description
CameraConnectionError

If camera closure fails

is_exposure_control_supported async
is_exposure_control_supported() -> bool

Check if exposure control is actually supported for this camera.

Tests both reading and setting exposure to verify true support.

Returns:

Type Description
bool

True if exposure control is supported, False otherwise

set_exposure async
set_exposure(exposure: Union[int, float])

Set camera exposure time.

Parameters:

Name Type Description Default
exposure Union[int, float]

Exposure value (OpenCV uses log scale, typically -13 to -1)

required

Raises:

Type Description
CameraConnectionError

If camera is not initialized

CameraConfigurationError

If exposure value is invalid or unsupported

HardwareOperationError

If exposure setting fails

get_exposure async
get_exposure() -> float

Get current camera exposure time.

Returns:

Type Description
float

Current exposure time value

Raises:

Type Description
CameraConnectionError

If camera is not initialized

HardwareOperationError

If exposure retrieval fails

get_exposure_range async
get_exposure_range() -> Optional[List[Union[int, float]]]

Get camera exposure time range.

Returns:

Type Description
Optional[List[Union[int, float]]]

List containing [min_exposure, max_exposure] in OpenCV log scale, or None if exposure control not supported

get_width_range async
get_width_range() -> List[int]

Get supported width range.

Returns:

Type Description
List[int]

List containing [min_width, max_width]

get_height_range async
get_height_range() -> List[int]

Get supported height range.

Returns:

Type Description
List[int]

List containing [min_height, max_height]

get_gain_range async
get_gain_range() -> List[Union[int, float]]

Get the supported gain range.

Returns:

Type Description
List[Union[int, float]]

List with [min_gain, max_gain]

set_gain async
set_gain(gain: Union[int, float])

Set camera gain.

Parameters:

Name Type Description Default
gain Union[int, float]

Gain value

required

Raises:

Type Description
CameraConnectionError

If camera is not initialized

CameraConfigurationError

If gain value is out of range or setting fails

get_gain async
get_gain() -> float

Get current camera gain.

Returns:

Type Description
float

Current gain value

set_ROI async
set_ROI(x: int, y: int, width: int, height: int)

Set Region of Interest (ROI).

Note: OpenCV cameras typically don't support hardware ROI; implement in software if needed.

Parameters:

Name Type Description Default
x int

ROI x offset

required
y int

ROI y offset

required
width int

ROI width

required
height int

ROI height

required

Raises:

Type Description
NotImplementedError

ROI is not supported by the OpenCV backend

get_ROI async
get_ROI() -> Dict[str, int]

Get current Region of Interest (ROI).

Returns:

Type Description
Dict[str, int]

Dictionary with full frame dimensions (ROI not supported)

reset_ROI async
reset_ROI()

Reset ROI to full sensor size.

Raises:

Type Description
NotImplementedError

ROI reset is not supported by the OpenCV backend

get_wb async
get_wb() -> str

Get current white balance mode.

Returns:

Type Description
str

Current white balance mode ("auto" or "manual")

set_auto_wb_once async
set_auto_wb_once(value: str)

Set white balance mode.

Parameters:

Name Type Description Default
value str

White balance mode ("auto", "manual", "off")

required

Raises:

Type Description
CameraConnectionError

If camera is not initialized

CameraConfigurationError

If value is invalid

HardwareOperationError

If the operation fails

get_wb_range async
get_wb_range() -> List[str]

Get available white balance modes.

Returns:

Type Description
List[str]

List of available white balance modes

get_pixel_format_range async
get_pixel_format_range() -> List[str]

Get available pixel formats.

Returns:

Type Description
List[str]

List of available pixel formats (OpenCV always uses BGR internally)

get_current_pixel_format async
get_current_pixel_format() -> str

Get current pixel format.

Returns:

Type Description
str

Current pixel format (always BGR8 for OpenCV, converted to RGB8 in capture)

set_pixel_format async
set_pixel_format(pixel_format: str)

Set pixel format.

Parameters:

Name Type Description Default
pixel_format str

Pixel format to set

required

Raises:

Type Description
CameraConfigurationError

If pixel format is not supported

get_triggermode async
get_triggermode() -> str

Get trigger mode (always continuous for USB cameras).

Returns:

Type Description
str

"continuous" (USB cameras only support continuous mode)

set_triggermode async
set_triggermode(triggermode: str = 'continuous')

Set trigger mode.

USB cameras only support continuous mode.

Parameters:

Name Type Description Default
triggermode str

Trigger mode ("continuous" only)

'continuous'

Raises:

Type Description
CameraConfigurationError

If trigger mode is not supported

get_image_quality_enhancement
get_image_quality_enhancement() -> bool

Get image quality enhancement status.

set_image_quality_enhancement
set_image_quality_enhancement(img_quality_enhancement: bool)

Set image quality enhancement.

Parameters:

Name Type Description Default
img_quality_enhancement bool

Whether to enable image quality enhancement

required

Raises:

Type Description
HardwareOperationError

If setting cannot be applied

export_config async
export_config(config_path: str)

Export current camera configuration to common JSON format.

Parameters:

Name Type Description Default
config_path str

Path to save configuration file

required

Raises:

Type Description
CameraConnectionError

If camera is not connected

CameraConfigurationError

If configuration export fails

import_config async
import_config(config_path: str)

Import camera configuration from common JSON format.

Parameters:

Name Type Description Default
config_path str

Path to configuration file

required

Raises:

Type Description
CameraConnectionError

If camera is not connected

CameraConfigurationError

If configuration import fails
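The export/import round-trip above reads and writes a JSON file. The sketch below illustrates the round-trip shape with the standard library only; the keys are hypothetical, since the actual "common JSON format" schema is backend-defined:

```python
import json
import os
import tempfile

# Hypothetical configuration values; the real schema is backend-defined.
config = {"width": 1280, "height": 720, "fps": 30, "exposure": -6}

path = os.path.join(tempfile.mkdtemp(), "camera_config.json")
with open(path, "w") as f:
    json.dump(config, f, indent=2)  # what export_config writes, in spirit

with open(path) as f:
    restored = json.load(f)  # what import_config reads back

assert restored == config
```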

get_bandwidth_limit async
get_bandwidth_limit() -> float

Bandwidth limiting not applicable for OpenCV cameras.

get_packet_size async
get_packet_size() -> int

Packet size not applicable for OpenCV cameras.

get_inter_packet_delay async
get_inter_packet_delay() -> int

Inter-packet delay not applicable for OpenCV cameras.

set_capture_timeout async
set_capture_timeout(timeout_ms: int)

Set capture timeout in milliseconds.

Parameters:

Name Type Description Default
timeout_ms int

Timeout value in milliseconds

required

Raises:

Type Description
ValueError

If timeout_ms is negative

get_capture_timeout async
get_capture_timeout() -> int

Get current capture timeout in milliseconds.

Returns:

Type Description
int

Current timeout value in milliseconds

get_trigger_modes async
get_trigger_modes() -> List[str]

Get available trigger modes for OpenCV cameras.

Returns:

Type Description
List[str]

List of available trigger modes (OpenCV only supports continuous)

opencv_camera_backend

OpenCV camera backend module.

OpenCVCameraBackend
OpenCVCameraBackend(
    camera_name: str,
    camera_config: Optional[str] = None,
    img_quality_enhancement: Optional[bool] = None,
    retrieve_retry_count: Optional[int] = None,
    **backend_kwargs
)

Bases: CameraBackend

OpenCV camera implementation for USB cameras and webcams.

This backend provides a comprehensive interface to USB cameras, webcams, and other video capture devices using OpenCV's VideoCapture with robust error handling and resource management. It works across Windows, Linux, and macOS with platform-aware discovery.

Thread Model

OpenCV's VideoCapture is thread-safe and does not require thread affinity. This backend uses the default shared thread pool via asyncio.to_thread() for blocking operations, avoiding the overhead of per-camera dedicated executors. A per-instance asyncio.Lock serializes mutating operations to prevent concurrent set/read races.

Features
  • USB camera and webcam support across Windows, Linux, and macOS
  • Automatic camera discovery and enumeration
  • Configurable resolution, frame rate, and exposure settings
  • Optional image quality enhancement (CLAHE)
  • Robust error handling with retries and bounded timeouts
  • BGR to RGB conversion for consistency
  • Platform-specific optimizations
Configuration

All parameters are configurable via the hardware configuration system:
  • MINDTRACE_CAMERA_OPENCV_DEFAULT_WIDTH: Default frame width (1280)
  • MINDTRACE_CAMERA_OPENCV_DEFAULT_HEIGHT: Default frame height (720)
  • MINDTRACE_CAMERA_OPENCV_DEFAULT_FPS: Default frame rate (30)
  • MINDTRACE_CAMERA_OPENCV_DEFAULT_EXPOSURE: Default exposure (-1 for auto)
  • MINDTRACE_CAMERA_OPENCV_MAX_CAMERA_INDEX: Maximum camera index to test (10)
  • MINDTRACE_CAMERA_IMAGE_QUALITY_ENHANCEMENT: Enable CLAHE enhancement
  • MINDTRACE_CAMERA_RETRIEVE_RETRY_COUNT: Number of capture retry attempts
  • MINDTRACE_CAMERA_TIMEOUT_MS: Capture timeout in milliseconds

Attributes:

Name Type Description
camera_index

Camera device index or path

cap Optional[VideoCapture]

OpenCV VideoCapture object

initialized bool

Camera initialization status

width int

Current frame width

height int

Current frame height

fps float

Current frame rate

exposure float

Current exposure setting

timeout_ms

Capture timeout in milliseconds

Example::

from mindtrace.hardware.cameras.backends.opencv import OpenCVCameraBackend

async def main():
    camera = OpenCVCameraBackend("0", width=1280, height=720)
    ok, cap, _ = await camera.initialize()
    if ok:
        image = await camera.capture()
        await camera.close()

Initialize OpenCV camera with configuration.

Parameters:

Name Type Description Default
camera_name str

Camera identifier (index number or device path)

required
camera_config Optional[str]

Path to camera config file (not used for OpenCV)

None
img_quality_enhancement Optional[bool]

Whether to apply image quality enhancement (uses config default if None)

None
retrieve_retry_count Optional[int]

Number of times to retry capture (uses config default if None)

None
**backend_kwargs

Backend-specific parameters:
  • width: Frame width (uses config default if None)
  • height: Frame height (uses config default if None)
  • fps: Frame rate (uses config default if None)
  • exposure: Exposure value (uses config default if None)
  • timeout_ms: Capture timeout in milliseconds (uses config default if None)

{}

Raises:

Type Description
SDKNotAvailableError

If OpenCV is not installed

CameraConfigurationError

If configuration is invalid

CameraInitializationError

If camera initialization fails

initialize async
initialize() -> Tuple[bool, Any, Any]

Initialize the camera and establish connection.

Returns:

Type Description
Tuple[bool, Any, Any]

(success, camera_object, remote_control_object). For OpenCV cameras, both objects are the same VideoCapture instance.

Raises:

Type Description
CameraNotFoundError

If camera cannot be opened

CameraInitializationError

If camera initialization fails

CameraConnectionError

If camera connection fails

get_available_cameras staticmethod
get_available_cameras(
    include_details: bool = False,
) -> Union[List[str], Dict[str, Dict[str, str]]]

Discover cameras with backend-aware probing.

  • Linux: prefer CAP_V4L2 probing across indices
  • Windows: try CAP_DSHOW then CAP_MSMF
  • macOS: try CAP_AVFOUNDATION
  • Fallback: default backend probing

Parameters:

Name Type Description Default
include_details bool

If True, return a dict of details per camera.

False

Returns:

Type Description
Union[List[str], Dict[str, Dict[str, str]]]

List of camera names (e.g., ["opencv_camera_0"]) or a dict of details per camera when include_details=True.

discover_async async classmethod
discover_async(
    include_details: bool = False,
) -> Union[List[str], Dict[str, Dict[str, str]]]

Async wrapper for get_available_cameras() - runs discovery in threadpool.

Use this instead of get_available_cameras() when calling from async context to avoid blocking the event loop during camera enumeration.

Parameters:

Name Type Description Default
include_details bool

If True, return a dict of details per camera.

False

Returns:

Type Description
Union[List[str], Dict[str, Dict[str, str]]]

Union[List[str], Dict[str, Dict[str, str]]]: List of camera names or dict of details.
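The thread-pool wrapping that discover_async() describes can be sketched in plain asyncio. The discovery function below is a stand-in, not the real backend probe:

```python
import asyncio

def _blocking_discovery(max_index: int = 10) -> list:
    """Stand-in for get_available_cameras(): in reality this opens
    cv2.VideoCapture(i) per index, which blocks; here we fake two hits."""
    return [f"opencv_camera_{i}" for i in range(2)]

async def discover_async(max_index: int = 10) -> list:
    # Run the blocking probe in the default executor so the event
    # loop stays responsive during enumeration.
    return await asyncio.to_thread(_blocking_discovery, max_index)

cameras = asyncio.run(discover_async())
print(cameras)  # ['opencv_camera_0', 'opencv_camera_1']
```

The same `asyncio.to_thread()` pattern is what the Thread Model section above describes for all blocking VideoCapture calls.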

capture async
capture() -> np.ndarray

Capture an image from the camera.

Implements retry logic and proper error handling for robust image capture. Converts OpenCV's default BGR format to RGB for consistency.

Returns:

Type Description
ndarray

np.ndarray: Captured image as an RGB numpy array.

Raises:

Type Description
CameraConnectionError

If camera is not initialized or accessible

CameraCaptureError

If image capture fails

CameraTimeoutError

If capture times out
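The retry logic that capture() describes can be sketched generically. The grab callable and exception class here are local stand-ins, not the backend's actual internals:

```python
import time

class CameraCaptureError(RuntimeError):
    """Local stand-in for the package's capture exception."""

def capture_with_retries(grab, retry_count: int = 3, delay_s: float = 0.0):
    """Retry a grab() callable up to retry_count times before failing."""
    last_exc = None
    for attempt in range(retry_count):
        try:
            frame = grab()
            if frame is not None:
                return frame  # success: return the captured frame
        except Exception as exc:
            last_exc = exc  # remember the failure, keep retrying
        time.sleep(delay_s)
    raise CameraCaptureError(f"capture failed after {retry_count} attempts") from last_exc

# A flaky source that succeeds on the third attempt:
attempts = {"n": 0}
def flaky_grab():
    attempts["n"] += 1
    return "frame" if attempts["n"] >= 3 else None

print(capture_with_retries(flaky_grab))  # 'frame'
```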

check_connection async
check_connection() -> bool

Check if camera connection is active and healthy.

Returns:

Type Description
bool

True if camera is connected and responsive, False otherwise

close async
close()

Close camera connection and cleanup resources.

Properly releases the VideoCapture object and resets camera state.

Raises:

Type Description
CameraConnectionError

If camera closure fails

is_exposure_control_supported async
is_exposure_control_supported() -> bool

Check if exposure control is actually supported for this camera.

Tests both reading and setting exposure to verify true support.

Returns:

Type Description
bool

True if exposure control is supported, False otherwise

set_exposure async
set_exposure(exposure: Union[int, float])

Set camera exposure time.

Parameters:

Name Type Description Default
exposure Union[int, float]

Exposure value (OpenCV uses log scale, typically -13 to -1)

required

Raises:

Type Description
CameraConnectionError

If camera is not initialized

CameraConfigurationError

If exposure value is invalid or unsupported

HardwareOperationError

If exposure setting fails
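The "-13 to -1" range mentioned above reflects OpenCV's log-scale exposure on some backends (e.g. DirectShow), where a value v corresponds to roughly 2^v seconds. That mapping is driver-dependent, so treat this sketch as an assumption:

```python
import math

def exposure_value_to_seconds(v: float) -> float:
    """Approximate exposure time for a log2-scaled OpenCV value (driver-dependent)."""
    return 2.0 ** v

def seconds_to_exposure_value(seconds: float) -> float:
    """Inverse mapping: seconds back to the log2-scaled property value."""
    return math.log2(seconds)

# On such backends, -13 is roughly 1/8192 s and -1 is roughly 0.5 s.
print(exposure_value_to_seconds(-13))           # ~0.000122
print(round(seconds_to_exposure_value(0.5)))    # -1
```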

get_exposure async
get_exposure() -> float

Get current camera exposure time.

Returns:

Type Description
float

Current exposure time value

Raises:

Type Description
CameraConnectionError

If camera is not initialized

HardwareOperationError

If exposure retrieval fails

get_exposure_range async
get_exposure_range() -> Optional[List[Union[int, float]]]

Get camera exposure time range.

Returns:

Type Description
Optional[List[Union[int, float]]]

List containing [min_exposure, max_exposure] in OpenCV log scale, or None if exposure control not supported

get_width_range async
get_width_range() -> List[int]

Get supported width range.

Returns:

Type Description
List[int]

List containing [min_width, max_width]

get_height_range async
get_height_range() -> List[int]

Get supported height range.

Returns:

Type Description
List[int]

List containing [min_height, max_height]

get_gain_range async
get_gain_range() -> List[Union[int, float]]

Get the supported gain range.

Returns:

Type Description
List[Union[int, float]]

List with [min_gain, max_gain]

set_gain async
set_gain(gain: Union[int, float])

Set camera gain.

Parameters:

Name Type Description Default
gain Union[int, float]

Gain value

required

Raises:

Type Description
CameraConnectionError

If camera is not initialized

CameraConfigurationError

If gain value is out of range or setting fails

get_gain async
get_gain() -> float

Get current camera gain.

Returns:

Type Description
float

Current gain value

set_ROI async
set_ROI(x: int, y: int, width: int, height: int)

Set Region of Interest (ROI).

Note: OpenCV cameras typically don't support hardware ROI; implement in software if needed.

Parameters:

Name Type Description Default
x int

ROI x offset

required
y int

ROI y offset

required
width int

ROI width

required
height int

ROI height

required

Raises:

Type Description
NotImplementedError

ROI is not supported by the OpenCV backend

get_ROI async
get_ROI() -> Dict[str, int]

Get current Region of Interest (ROI).

Returns:

Type Description
Dict[str, int]

Dictionary with full frame dimensions (ROI not supported)

reset_ROI async
reset_ROI()

Reset ROI to full sensor size.

Raises:

Type Description
NotImplementedError

ROI reset is not supported by the OpenCV backend

get_wb async
get_wb() -> str

Get current white balance mode.

Returns:

Type Description
str

Current white balance mode ("auto" or "manual")

set_auto_wb_once async
set_auto_wb_once(value: str)

Set white balance mode.

Parameters:

Name Type Description Default
value str

White balance mode ("auto", "manual", "off")

required

Raises:

Type Description
CameraConnectionError

If camera is not initialized

CameraConfigurationError

If value is invalid

HardwareOperationError

If the operation fails

get_wb_range async
get_wb_range() -> List[str]

Get available white balance modes.

Returns:

Type Description
List[str]

List of available white balance modes

get_pixel_format_range async
get_pixel_format_range() -> List[str]

Get available pixel formats.

Returns:

Type Description
List[str]

List of available pixel formats (OpenCV always uses BGR internally)

get_current_pixel_format async
get_current_pixel_format() -> str

Get current pixel format.

Returns:

Type Description
str

Current pixel format (always BGR8 for OpenCV, converted to RGB8 in capture)

set_pixel_format async
set_pixel_format(pixel_format: str)

Set pixel format.

Parameters:

Name Type Description Default
pixel_format str

Pixel format to set

required

Raises:

Type Description
CameraConfigurationError

If pixel format is not supported

get_triggermode async
get_triggermode() -> str

Get trigger mode (always continuous for USB cameras).

Returns:

Type Description
str

"continuous" (USB cameras only support continuous mode)

set_triggermode async
set_triggermode(triggermode: str = 'continuous')

Set trigger mode.

USB cameras only support continuous mode.

Parameters:

Name Type Description Default
triggermode str

Trigger mode ("continuous" only)

'continuous'

Raises:

Type Description
CameraConfigurationError

If trigger mode is not supported

get_image_quality_enhancement
get_image_quality_enhancement() -> bool

Get image quality enhancement status.

set_image_quality_enhancement
set_image_quality_enhancement(img_quality_enhancement: bool)

Set image quality enhancement.

Parameters:

Name Type Description Default
img_quality_enhancement bool

Whether to enable image quality enhancement

required

Raises:

Type Description
HardwareOperationError

If setting cannot be applied

export_config async
export_config(config_path: str)

Export current camera configuration to common JSON format.

Parameters:

Name Type Description Default
config_path str

Path to save configuration file

required

Raises:

Type Description
CameraConnectionError

If camera is not connected

CameraConfigurationError

If configuration export fails

import_config async
import_config(config_path: str)

Import camera configuration from common JSON format.

Parameters:

Name Type Description Default
config_path str

Path to configuration file

required

Raises:

Type Description
CameraConnectionError

If camera is not connected

CameraConfigurationError

If configuration import fails
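The export/import round-trip above can be sketched with a plain dict and the json module. The field names here are illustrative; the actual "common JSON format" schema is defined by the backend:

```python
import json
import os
import tempfile

# Hypothetical payload mirroring the backend's configurable settings.
config = {"width": 1280, "height": 720, "fps": 30, "exposure": -6,
          "img_quality_enhancement": False, "timeout_ms": 5000}

path = os.path.join(tempfile.mkdtemp(), "camera_config.json")

# export_config: write the current settings to disk as JSON.
with open(path, "w") as f:
    json.dump(config, f, indent=2)

# import_config: read the file back; the backend would then apply
# each setting to the live camera.
with open(path) as f:
    restored = json.load(f)

print(restored == config)  # True
```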

get_bandwidth_limit async
get_bandwidth_limit() -> float

Bandwidth limiting not applicable for OpenCV cameras.

get_packet_size async
get_packet_size() -> int

Packet size not applicable for OpenCV cameras.

get_inter_packet_delay async
get_inter_packet_delay() -> int

Inter-packet delay not applicable for OpenCV cameras.

set_capture_timeout async
set_capture_timeout(timeout_ms: int)

Set capture timeout in milliseconds.

Parameters:

Name Type Description Default
timeout_ms int

Timeout value in milliseconds

required

Raises:

Type Description
ValueError

If timeout_ms is negative

get_capture_timeout async
get_capture_timeout() -> int

Get current capture timeout in milliseconds.

Returns:

Type Description
int

Current timeout value in milliseconds

get_trigger_modes async
get_trigger_modes() -> List[str]

Get available trigger modes for OpenCV cameras.

Returns:

Type Description
List[str]

List of available trigger modes (OpenCV only supports continuous)

homography

Homography-based planar measurement system.

This module provides homography calibration and measurement capabilities for converting pixel-space object detections to real-world metric dimensions on planar surfaces.

Features
  • Automatic checkerboard calibration
  • Manual point correspondence calibration
  • RANSAC-based robust homography estimation
  • Multi-unit measurement support (mm, cm, m, in, ft)
  • Batch processing for multiple objects
  • Framework-integrated logging and configuration

Typical Usage::

from mindtrace.hardware import HomographyCalibrator, HomographyMeasurer

# One-time calibration
calibrator = HomographyCalibrator()
calibration = calibrator.calibrate_checkerboard(
    image=checkerboard_image,
    board_size=(12, 12),
    square_size=25.0,
    world_unit="mm"
)
calibration.save("camera_calibration.json")

# Repeated measurement
measurer = HomographyMeasurer(calibration)
detections = yolo.detect(frame)
measurements = measurer.measure_bounding_boxes(detections, target_unit="cm")

for measured in measurements:
    print(f"Size: {measured.width_world:.1f} × {measured.height_world:.1f} cm")
HomographyCalibrator
HomographyCalibrator(**kwargs)

Bases: Mindtrace

Calibrates planar homography for pixel-to-world coordinate mapping.

Establishes a homography matrix H that maps planar world coordinates (X, Y, Z=0) in metric units to image pixel coordinates (u, v). Supports both automatic checkerboard-based calibration and manual point correspondence calibration.

The homography enables real-world measurements from camera images for objects lying on a known planar surface (e.g., overhead cameras, objects on tables/floors).

Features
  • Automatic checkerboard pattern detection and calibration
  • Manual point correspondence calibration
  • RANSAC-based robust homography estimation
  • Sub-pixel corner refinement for improved accuracy
  • Lens distortion correction support
  • Camera intrinsics estimation from FOV
Typical Workflow
  1. Place calibration target (checkerboard) on measurement plane
  2. Capture image with known world coordinates
  3. Calibrate to obtain homography matrix
  4. Use calibration for repeated measurements

Usage::

from mindtrace.hardware import HomographyCalibrator

# Automatic checkerboard calibration
calibrator = HomographyCalibrator()
calibration = calibrator.calibrate_checkerboard(
    image=checkerboard_image,
    board_size=(12, 12),     # Inner corners
    square_size=25.0,        # mm per square
    world_unit="mm"
)

# Manual point correspondence calibration
calibration = calibrator.calibrate_from_correspondences(
    world_points=[(0, 0), (300, 0), (300, 200), (0, 200)],  # mm
    image_points=[(100, 50), (500, 50), (500, 400), (100, 400)],  # pixels
    world_unit="mm"
)

# Save for later use
calibration.save("camera_calibration.json")
Configuration

All parameters can be configured via hardware config:
  • ransac_threshold: RANSAC reprojection error threshold (default: 3.0 pixels)
  • refine_corners: Enable sub-pixel corner refinement (default: True)
  • corner_refinement_window: Refinement window size (default: 11)
  • min_correspondences: Minimum points needed (default: 4)
  • default_world_unit: Default measurement unit (default: "mm")

Limitations
  • Only works for planar surfaces (Z=0 assumption)
  • Requires camera to remain fixed after calibration
  • Accuracy degrades with severe viewing angles

Initialize homography calibrator.

Parameters:

Name Type Description Default
**kwargs

Additional arguments passed to Mindtrace base class

{}
estimate_intrinsics_from_fov
estimate_intrinsics_from_fov(
    image_size: Tuple[int, int],
    fov_horizontal_deg: float,
    fov_vertical_deg: float,
    principal_point: Optional[Tuple[float, float]] = None,
) -> np.ndarray

Estimate camera intrinsics matrix from field-of-view parameters.

Computes a simple pinhole camera model intrinsics matrix (K) from the camera's horizontal and vertical field of view angles and image dimensions. Useful when full camera calibration is not available.

Parameters:

Name Type Description Default
image_size Tuple[int, int]

Image dimensions as (width, height) in pixels

required
fov_horizontal_deg float

Horizontal field of view in degrees

required
fov_vertical_deg float

Vertical field of view in degrees

required
principal_point Optional[Tuple[float, float]]

Optional (cx, cy) principal point in pixels. Defaults to image center if not provided.

None

Returns:

Type Description
ndarray

3x3 camera intrinsics matrix K

Example::

K = calibrator.estimate_intrinsics_from_fov(
    image_size=(1920, 1080),
    fov_horizontal_deg=70.0,
    fov_vertical_deg=45.0
)
calibrate_from_correspondences
calibrate_from_correspondences(
    world_points: ndarray,
    image_points: ndarray,
    world_unit: Optional[str] = None,
    camera_matrix: Optional[ndarray] = None,
    dist_coeffs: Optional[ndarray] = None,
) -> CalibrationData

Compute homography from known point correspondences.

Establishes the homography matrix H given known world coordinates (on Z=0 plane) and their corresponding image pixel coordinates. Uses RANSAC for robust estimation in the presence of outliers.

Parameters:

Name Type Description Default
world_points ndarray

Nx2 array of world coordinates in metric units (X, Y on Z=0 plane)

required
image_points ndarray

Nx2 array of corresponding image coordinates in pixels (u, v)

required
world_unit Optional[str]

Unit of world coordinates (e.g., 'mm', 'cm', 'm'). Uses config default if None.

None
camera_matrix Optional[ndarray]

Optional 3x3 camera intrinsics matrix for undistortion

None
dist_coeffs Optional[ndarray]

Optional distortion coefficients for undistortion

None

Returns:

Type Description
CalibrationData

CalibrationData containing homography matrix and metadata

Raises:

Type Description
CameraConfigurationError

If inputs are invalid (wrong shape, too few points)

HardwareOperationError

If homography estimation fails

Example::

# Four corner correspondences (world in mm, image in pixels)
world_pts = np.array([[0, 0], [300, 0], [300, 200], [0, 200]])
image_pts = np.array([[100, 50], [500, 50], [500, 400], [100, 400]])

calibration = calibrator.calibrate_from_correspondences(
    world_points=world_pts,
    image_points=image_pts,
    world_unit="mm"
)
calibrate_checkerboard
calibrate_checkerboard(
    image: Union[Image, ndarray],
    board_size: Optional[Tuple[int, int]] = None,
    square_size: Optional[float] = None,
    world_unit: Optional[str] = None,
    camera_matrix: Optional[ndarray] = None,
    dist_coeffs: Optional[ndarray] = None,
    refine_corners: Optional[bool] = None,
) -> CalibrationData

Automatic calibration from checkerboard pattern detection.

Detects a checkerboard calibration pattern in the image, extracts corner correspondences, and computes the homography matrix. The checkerboard is assumed to lie on the Z=0 plane with known square dimensions.

Parameters:

Name Type Description Default
image Union[Image, ndarray]

PIL Image or BGR numpy array containing checkerboard pattern

required
board_size Optional[Tuple[int, int]]

Number of inner corners as (columns, rows). Uses config default if None. For a standard 8x8 checkerboard, use (7, 7).

None
square_size Optional[float]

Physical size of one checkerboard square in world units. Uses config default if None.

None
world_unit Optional[str]

Unit of square_size (e.g., 'mm', 'cm', 'm'). Uses config default if None.

None
camera_matrix Optional[ndarray]

Optional 3x3 camera intrinsics matrix for undistortion

None
dist_coeffs Optional[ndarray]

Optional distortion coefficients for undistortion

None
refine_corners Optional[bool]

Enable sub-pixel corner refinement. Uses config default if None.

None

Returns:

Type Description
CalibrationData

CalibrationData containing homography matrix and metadata

Raises:

Type Description
CameraConfigurationError

If image format is unsupported

HardwareOperationError

If checkerboard detection fails

Example::

# Use config defaults for standard calibration board
calibration = calibrator.calibrate_checkerboard(image=checkerboard_image)

# Or override specific parameters
calibration = calibrator.calibrate_checkerboard(
    image=checkerboard_image,
    board_size=(9, 6),         # Custom board size
    square_size=30.0,          # Custom square size
    world_unit="mm"
)
Notes
  • board_size is the number of INNER corners, not squares
  • A standard 8x8 checkerboard has 7x7 inner corners
  • Ensure good lighting and focus for accurate detection
  • Checkerboard should fill significant portion of image
  • If using a standard calibration board, configure dimensions via MINDTRACE_HW_HOMOGRAPHY_CHECKERBOARD_COLS, MINDTRACE_HW_HOMOGRAPHY_CHECKERBOARD_ROWS, and MINDTRACE_HW_HOMOGRAPHY_CHECKERBOARD_SQUARE
calibrate_checkerboard_multi_view
calibrate_checkerboard_multi_view(
    images: list[Union[Image, ndarray]],
    positions: list[Tuple[float, float]],
    board_size: Optional[Tuple[int, int]] = None,
    square_width: Optional[float] = None,
    square_height: Optional[float] = None,
    world_unit: Optional[str] = None,
    camera_matrix: Optional[ndarray] = None,
    dist_coeffs: Optional[ndarray] = None,
    refine_corners: Optional[bool] = None,
) -> CalibrationData

Calibrate from multiple checkerboard positions on the same plane.

Combines corner detections from multiple images where the checkerboard is placed at different positions on the measurement plane. This provides better calibration coverage over large areas without requiring an oversized calibration target.

Ideal for calibrating long surfaces (e.g., metallic bars, conveyor belts) using a standard-sized checkerboard moved to multiple positions.

Parameters:

Name Type Description Default
images list[Union[Image, ndarray]]

List of images, each containing the checkerboard at a different position

required
positions list[Tuple[float, float]]

List of (x_offset, y_offset) tuples in world units specifying the checkerboard's origin position in each image. The first position is typically (0, 0), and subsequent positions indicate how far the checkerboard was moved.

required
board_size Optional[Tuple[int, int]]

Number of inner corners as (columns, rows). Uses config default if None.

None
square_width Optional[float]

Physical width of one checkerboard square in world units. Uses config default if None.

None
square_height Optional[float]

Physical height of one checkerboard square in world units. Uses config default if None.

None
world_unit Optional[str]

Unit of positions and square_width/height. Uses config default if None.

None
camera_matrix Optional[ndarray]

Optional 3x3 camera intrinsics matrix for undistortion

None
dist_coeffs Optional[ndarray]

Optional distortion coefficients for undistortion

None
refine_corners Optional[bool]

Enable sub-pixel corner refinement. Uses config default if None.

None

Returns:

Type Description
CalibrationData

CalibrationData containing homography matrix computed from all positions

Raises:

Type Description
CameraConfigurationError

If inputs are invalid or inconsistent

HardwareOperationError

If checkerboard detection fails in any image

Example::

# Calibrate a 2-meter long bar using 3 checkerboard positions
images = [image1, image2, image3]
positions = [
    (0, 0),      # Start of bar
    (1000, 0),   # Middle (1000mm from start)
    (2000, 0)    # End (2000mm from start)
]

calibration = calibrator.calibrate_checkerboard_multi_view(
    images=images,
    positions=positions,
    board_size=(12, 12),
    square_width=25.0,    # 25mm wide squares
    square_height=25.0,   # 25mm tall squares
    world_unit="mm"
)
Notes
  • All images must show the same plane (Z=0)
  • Positions specify where the checkerboard origin (top-left corner) is located
  • Use more positions for better coverage of large measurement areas
  • Typical usage: 3-5 positions for long surfaces
  • RANSAC automatically handles slight inaccuracies in position measurements
CalibrationData dataclass
CalibrationData(
    H: ndarray,
    camera_matrix: Optional[ndarray] = None,
    dist_coeffs: Optional[ndarray] = None,
    world_unit: str = "mm",
    plane_normal_camera: Optional[ndarray] = None,
)

Immutable container for homography calibration data.

Holds the homography matrix and optional camera intrinsics derived or provided during calibration. The homography maps world plane coordinates (Z=0) in metric units to image pixel coordinates.

Attributes:

Name Type Description
H ndarray

3x3 homography matrix from world plane (Z=0) to image pixels

camera_matrix Optional[ndarray]

3x3 camera intrinsics matrix (K) if known or estimated

dist_coeffs Optional[ndarray]

Lens distortion coefficients if available

world_unit str

Unit used for world coordinates (e.g., 'mm', 'cm', 'm', 'in', 'ft')

plane_normal_camera Optional[ndarray]

Optional 3D normal of the plane in camera frame if recovered

save
save(filepath: str) -> None

Save calibration data to JSON file.

Parameters:

Name Type Description Default
filepath str

Path to save the calibration data

required
Note

NumPy arrays are converted to lists for JSON serialization.

load classmethod
load(filepath: str) -> CalibrationData

Load calibration data from JSON file.

Parameters:

Name Type Description Default
filepath str

Path to the calibration data file

required

Returns:

Type Description
CalibrationData

CalibrationData instance loaded from file

Raises:

Type Description
FileNotFoundError

If the file doesn't exist

ValueError

If the file format is invalid
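The save/load round-trip, including the array-to-list conversion the Note above describes, can be sketched with a minimal stand-in for CalibrationData (the stored field names here are illustrative):

```python
import json
import os
import tempfile
import numpy as np

# A homography plus its world unit, as CalibrationData would hold.
H = np.array([[2.0, 0.0, 10.0],
              [0.0, 2.0, 20.0],
              [0.0, 0.0, 1.0]])

path = os.path.join(tempfile.mkdtemp(), "camera_calibration.json")

# save(): NumPy arrays are converted to lists for JSON serialization.
with open(path, "w") as f:
    json.dump({"H": H.tolist(), "world_unit": "mm"}, f)

# load(): rebuild the array from the stored lists.
with open(path) as f:
    data = json.load(f)
H_loaded = np.array(data["H"])

print(np.allclose(H, H_loaded), data["world_unit"])  # True mm
```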

MeasuredBox dataclass
MeasuredBox(
    corners_world: ndarray,
    width_world: float,
    height_world: float,
    area_world: float,
    unit: str,
)

Immutable container for metric-space measurement of a bounding box.

Stores the result of projecting a pixel-space bounding box to world coordinates on a planar surface using homography inversion. Contains the projected corner points and computed physical dimensions.

Attributes:

Name Type Description
corners_world ndarray

4x2 array of corner coordinates in world units (top-left, top-right, bottom-right, bottom-left)

width_world float

Width in world units (distance between top-left and top-right)

height_world float

Height in world units (distance between top-left and bottom-left)

area_world float

Area in square world units (computed via shoelace formula)

unit str

Unit of measurement (e.g., 'mm', 'cm', 'm', 'in', 'ft')

to_dict
to_dict() -> dict

Convert measurement to dictionary.

Returns:

Type Description
dict

Dictionary representation with corners as list

HomographyMeasurer
HomographyMeasurer(calibration: CalibrationData, **kwargs)

Bases: Mindtrace

Measures physical dimensions of objects using planar homography.

Projects pixel-space bounding boxes from object detection to real-world metric coordinates on a planar surface using a pre-calibrated homography matrix. Enables accurate physical size measurements from camera images.

The measurer uses the inverse homography (H⁻¹) to map image pixels back to world coordinates, then computes Euclidean distances and polygon areas for size measurements.

Features
  • Pixel-to-world coordinate projection
  • Bounding box dimension measurement (width, height, area)
  • Multi-unit support with automatic conversion
  • Batch processing for multiple detections
  • Pre-computed inverse homography for performance
Typical Workflow
  1. Calibrate the camera view with HomographyCalibrator.calibrate_*() to obtain calibration data
  2. Create measurer with calibration data
  3. Detect objects with vision model (YOLO, etc.)
  4. Measure physical dimensions from bounding boxes
  5. Apply size-based filtering or quality control

Usage::

from mindtrace.hardware import HomographyCalibrator, HomographyMeasurer
from mindtrace.core.types.bounding_box import BoundingBox

# One-time calibration
calibrator = HomographyCalibrator()
calibration = calibrator.calibrate_checkerboard(
    image=checkerboard_image,
    board_size=(12, 12),
    square_size=25.0,
    world_unit="mm"
)

# Create measurer (reuse for all measurements)
measurer = HomographyMeasurer(calibration)

# Measure objects from detection results
detections = yolo.detect(frame)  # List[BoundingBox]
measurements = measurer.measure_bounding_boxes(detections, target_unit="cm")

for measured in measurements:
    print(f"Width: {measured.width_world:.2f} cm")
    print(f"Height: {measured.height_world:.2f} cm")
    print(f"Area: {measured.area_world:.2f} cm²")

    # Size-based filtering
    if measured.width_world > 10.0:
        reject_oversized_part(measured)
Configuration
  • Supported units: mm, cm, m, in, ft (configurable via hardware config)
  • Default world unit: Inherited from calibration data
Limitations
  • Only works for planar surfaces (Z=0 assumption)
  • Accuracy depends on calibration quality and viewing angle
  • Assumes objects lie flat on the calibrated plane
  • Camera must remain fixed after calibration

Initialize homography measurer with calibration data.

Parameters:

Name Type Description Default
calibration CalibrationData

CalibrationData from HomographyCalibrator

required
**kwargs

Additional arguments passed to Mindtrace base class

{}

Raises:

Type Description
CameraConfigurationError

If homography matrix is invalid

pixels_to_world
pixels_to_world(points_px: ndarray) -> np.ndarray

Project pixel coordinates to world plane coordinates.

Maps Nx2 pixel coordinates to world plane coordinates using the inverse homography matrix H⁻¹. This is the core projection operation for all measurement functionality.

Parameters:

Name Type Description Default
points_px ndarray

Nx2 array of pixel coordinates (u, v)

required

Returns:

Type Description
ndarray

Nx2 array of world coordinates (X, Y) in calibration world unit

Raises:

Type Description
CameraConfigurationError

If input array has wrong shape

Example::

# Project single point
world_point = measurer.pixels_to_world(np.array([[320, 240]]))

# Project multiple points
pixel_corners = np.array([[100, 50], [500, 50], [500, 400], [100, 400]])
world_corners = measurer.pixels_to_world(pixel_corners)
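The projection itself is the standard homogeneous-coordinate mapping (X, Y, w)ᵀ = H⁻¹ · (u, v, 1)ᵀ followed by division by w; a minimal sketch of that math (illustrative, not the library's internal code):

```python
import numpy as np

def project_pixels(H: np.ndarray, points_px: np.ndarray) -> np.ndarray:
    """Map Nx2 pixel coordinates to world-plane coordinates via H^-1."""
    H_inv = np.linalg.inv(H)
    homog = np.hstack([points_px.astype(float), np.ones((len(points_px), 1))])
    world = homog @ H_inv.T              # apply the inverse homography
    return world[:, :2] / world[:, 2:3]  # divide by the homogeneous scale w

# Sanity check: the identity homography maps pixels to themselves
pts = np.array([[320.0, 240.0]])
print(project_pixels(np.eye(3), pts))  # [[320. 240.]]
```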
measure_bounding_box
measure_bounding_box(
    box: BoundingBox, target_unit: Optional[str] = None
) -> MeasuredBox

Measure physical dimensions of a bounding box on the calibrated plane.

Projects the four corners of a pixel-space bounding box to world coordinates, then computes width, height, and area in the specified unit.

Parameters:

Name Type Description Default
box BoundingBox

BoundingBox from object detection (x, y, width, height in pixels)

required
target_unit Optional[str]

Unit for output measurements (e.g., 'cm', 'm'). Uses calibration unit if None.

None

Returns:

Type Description
MeasuredBox

MeasuredBox with physical dimensions and corner coordinates

Example::

# From object detection
detection = BoundingBox(x=100, y=50, width=400, height=350)
measured = measurer.measure_bounding_box(detection, target_unit="cm")

print(f"Object is {measured.width_world:.1f} × {measured.height_world:.1f} cm")
print(f"Area: {measured.area_world:.1f} cm²")
measure_bounding_boxes
measure_bounding_boxes(
    boxes: Sequence[BoundingBox], target_unit: Optional[str] = None
) -> List[MeasuredBox]

Measure physical dimensions of multiple bounding boxes.

Batch processing of multiple object detections. More efficient than calling measure_bounding_box() in a loop.

Parameters:

Name Type Description Default
boxes Sequence[BoundingBox]

Sequence of BoundingBox objects from object detection

required
target_unit Optional[str]

Unit for output measurements. Uses calibration unit if None.

None

Returns:

Type Description
List[MeasuredBox]

List of MeasuredBox objects with physical dimensions

Example::

# Batch measurement from multiple detections
detections = yolo.detect(frame)  # List[BoundingBox]
measurements = measurer.measure_bounding_boxes(detections, target_unit="cm")

# Size-based filtering
large_objects = [m for m in measurements if m.width_world > 15.0]

# Quality control
for measured in measurements:
    if not (10.0 <= measured.width_world <= 20.0):
        reject_part(measured)
measure_distance
measure_distance(
    point1: Union[Tuple[float, float], ndarray],
    point2: Union[Tuple[float, float], ndarray],
    target_unit: Optional[str] = None,
) -> Tuple[float, str]

Measure Euclidean distance between two points on the calibrated plane.

Converts pixel coordinates to world coordinates and computes the distance. Useful for measuring gaps, spacing, or verifying known distances.

Parameters:

Name Type Description Default
point1 Union[Tuple[float, float], ndarray]

First point as (x, y) pixel coordinates

required
point2 Union[Tuple[float, float], ndarray]

Second point as (x, y) pixel coordinates

required
target_unit Optional[str]

Unit for output distance. Uses calibration unit if None.

None

Returns:

Type Description
Tuple[float, str]

Tuple of (distance, unit)

Raises:

Type Description
ValueError

If target_unit is not supported

Example::

# Measure distance between two detected points
point1 = (150, 200)  # pixels
point2 = (350, 200)  # pixels
distance, unit = measurer.measure_distance(point1, point2, target_unit="mm")
print(f"Distance: {distance:.2f} {unit}")

# Verify calibration accuracy
known_distance_mm = 100.0
measured_distance, _ = measurer.measure_distance(ref_point1, ref_point2, "mm")
error_percent = abs(measured_distance - known_distance_mm) / known_distance_mm * 100
print(f"Calibration error: {error_percent:.2f}%")
calibrator

Homography calibration for planar surface measurement.

This module provides calibration methods for establishing the relationship between image pixel coordinates and real-world metric coordinates on a planar surface.

HomographyCalibrator
HomographyCalibrator(**kwargs)

Bases: Mindtrace

Calibrates planar homography for pixel-to-world coordinate mapping.

Establishes a homography matrix H that maps planar world coordinates (X, Y, Z=0) in metric units to image pixel coordinates (u, v). Supports both automatic checkerboard-based calibration and manual point correspondence calibration.

The homography enables real-world measurements from camera images for objects lying on a known planar surface (e.g., overhead cameras, objects on tables/floors).

Features
  • Automatic checkerboard pattern detection and calibration
  • Manual point correspondence calibration
  • RANSAC-based robust homography estimation
  • Sub-pixel corner refinement for improved accuracy
  • Lens distortion correction support
  • Camera intrinsics estimation from FOV
Typical Workflow
  1. Place calibration target (checkerboard) on measurement plane
  2. Capture image with known world coordinates
  3. Calibrate to obtain homography matrix
  4. Use calibration for repeated measurements

Usage::

from mindtrace.hardware import HomographyCalibrator

# Automatic checkerboard calibration
calibrator = HomographyCalibrator()
calibration = calibrator.calibrate_checkerboard(
    image=checkerboard_image,
    board_size=(12, 12),     # Inner corners
    square_size=25.0,        # mm per square
    world_unit="mm"
)

# Manual point correspondence calibration
calibration = calibrator.calibrate_from_correspondences(
    world_points=[(0, 0), (300, 0), (300, 200), (0, 200)],  # mm
    image_points=[(100, 50), (500, 50), (500, 400), (100, 400)],  # pixels
    world_unit="mm"
)

# Save for later use
calibration.save("camera_calibration.json")
Configuration

All parameters can be configured via hardware config:
  • ransac_threshold: RANSAC reprojection error threshold (default: 3.0 pixels)
  • refine_corners: Enable sub-pixel corner refinement (default: True)
  • corner_refinement_window: Refinement window size (default: 11)
  • min_correspondences: Minimum points needed (default: 4)
  • default_world_unit: Default measurement unit (default: "mm")

Limitations
  • Only works for planar surfaces (Z=0 assumption)
  • Requires camera to remain fixed after calibration
  • Accuracy degrades with severe viewing angles

Initialize homography calibrator.

Parameters:

Name Type Description Default
**kwargs

Additional arguments passed to Mindtrace base class

{}
estimate_intrinsics_from_fov
estimate_intrinsics_from_fov(
    image_size: Tuple[int, int],
    fov_horizontal_deg: float,
    fov_vertical_deg: float,
    principal_point: Optional[Tuple[float, float]] = None,
) -> np.ndarray

Estimate camera intrinsics matrix from field-of-view parameters.

Computes a simple pinhole camera model intrinsics matrix (K) from the camera's horizontal and vertical field of view angles and image dimensions. Useful when full camera calibration is not available.

Parameters:

Name Type Description Default
image_size Tuple[int, int]

Image dimensions as (width, height) in pixels

required
fov_horizontal_deg float

Horizontal field of view in degrees

required
fov_vertical_deg float

Vertical field of view in degrees

required
principal_point Optional[Tuple[float, float]]

Optional (cx, cy) principal point in pixels. Defaults to image center if not provided.

None

Returns:

Type Description
ndarray

3x3 camera intrinsics matrix K

Example::

K = calibrator.estimate_intrinsics_from_fov(
    image_size=(1920, 1080),
    fov_horizontal_deg=70.0,
    fov_vertical_deg=45.0
)
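The focal lengths follow from the standard pinhole relation f = (size / 2) / tan(fov / 2), applied per axis; a minimal sketch of that computation (illustrative, not the library's internal code):

```python
import math

def focal_from_fov(size_px: int, fov_deg: float) -> float:
    """Pinhole model: focal length in pixels from image size and field of view."""
    return (size_px / 2.0) / math.tan(math.radians(fov_deg) / 2.0)

fx = focal_from_fov(1920, 70.0)  # ~1371 px
fy = focal_from_fov(1080, 45.0)  # ~1304 px
```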
calibrate_from_correspondences
calibrate_from_correspondences(
    world_points: ndarray,
    image_points: ndarray,
    world_unit: Optional[str] = None,
    camera_matrix: Optional[ndarray] = None,
    dist_coeffs: Optional[ndarray] = None,
) -> CalibrationData

Compute homography from known point correspondences.

Establishes the homography matrix H given known world coordinates (on Z=0 plane) and their corresponding image pixel coordinates. Uses RANSAC for robust estimation in the presence of outliers.

Parameters:

Name Type Description Default
world_points ndarray

Nx2 array of world coordinates in metric units (X, Y on Z=0 plane)

required
image_points ndarray

Nx2 array of corresponding image coordinates in pixels (u, v)

required
world_unit Optional[str]

Unit of world coordinates (e.g., 'mm', 'cm', 'm'). Uses config default if None.

None
camera_matrix Optional[ndarray]

Optional 3x3 camera intrinsics matrix for undistortion

None
dist_coeffs Optional[ndarray]

Optional distortion coefficients for undistortion

None

Returns:

Type Description
CalibrationData

CalibrationData containing homography matrix and metadata

Raises:

Type Description
CameraConfigurationError

If inputs are invalid (wrong shape, too few points)

HardwareOperationError

If homography estimation fails

Example::

# Four corner correspondences (world in mm, image in pixels)
world_pts = np.array([[0, 0], [300, 0], [300, 200], [0, 200]])
image_pts = np.array([[100, 50], [500, 50], [500, 400], [100, 400]])

calibration = calibrator.calibrate_from_correspondences(
    world_points=world_pts,
    image_points=image_pts,
    world_unit="mm"
)
calibrate_checkerboard
calibrate_checkerboard(
    image: Union[Image, ndarray],
    board_size: Optional[Tuple[int, int]] = None,
    square_size: Optional[float] = None,
    world_unit: Optional[str] = None,
    camera_matrix: Optional[ndarray] = None,
    dist_coeffs: Optional[ndarray] = None,
    refine_corners: Optional[bool] = None,
) -> CalibrationData

Automatic calibration from checkerboard pattern detection.

Detects a checkerboard calibration pattern in the image, extracts corner correspondences, and computes the homography matrix. The checkerboard is assumed to lie on the Z=0 plane with known square dimensions.

Parameters:

Name Type Description Default
image Union[Image, ndarray]

PIL Image or BGR numpy array containing checkerboard pattern

required
board_size Optional[Tuple[int, int]]

Number of inner corners as (columns, rows). Uses config default if None. For a standard 8x8 checkerboard, use (7, 7).

None
square_size Optional[float]

Physical size of one checkerboard square in world units. Uses config default if None.

None
world_unit Optional[str]

Unit of square_size (e.g., 'mm', 'cm', 'm'). Uses config default if None.

None
camera_matrix Optional[ndarray]

Optional 3x3 camera intrinsics matrix for undistortion

None
dist_coeffs Optional[ndarray]

Optional distortion coefficients for undistortion

None
refine_corners Optional[bool]

Enable sub-pixel corner refinement. Uses config default if None.

None

Returns:

Type Description
CalibrationData

CalibrationData containing homography matrix and metadata

Raises:

Type Description
CameraConfigurationError

If image format is unsupported

HardwareOperationError

If checkerboard detection fails

Example::

# Use config defaults for standard calibration board
calibration = calibrator.calibrate_checkerboard(image=checkerboard_image)

# Or override specific parameters
calibration = calibrator.calibrate_checkerboard(
    image=checkerboard_image,
    board_size=(9, 6),         # Custom board size
    square_size=30.0,          # Custom square size
    world_unit="mm"
)
Notes
  • board_size is the number of INNER corners, not squares
  • A standard 8x8 checkerboard has 7x7 inner corners
  • Ensure good lighting and focus for accurate detection
  • Checkerboard should fill significant portion of image
  • If using a standard calibration board, configure dimensions via:
    MINDTRACE_HW_HOMOGRAPHY_CHECKERBOARD_COLS
    MINDTRACE_HW_HOMOGRAPHY_CHECKERBOARD_ROWS
    MINDTRACE_HW_HOMOGRAPHY_CHECKERBOARD_SQUARE
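With the environment variables listed above, a standard board can be configured once so that calibrate_checkerboard(image=...) needs no per-call overrides (the values here are illustrative):

```shell
export MINDTRACE_HW_HOMOGRAPHY_CHECKERBOARD_COLS=12      # inner corners per row
export MINDTRACE_HW_HOMOGRAPHY_CHECKERBOARD_ROWS=12      # inner corners per column
export MINDTRACE_HW_HOMOGRAPHY_CHECKERBOARD_SQUARE=25.0  # square size in world units
```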
calibrate_checkerboard_multi_view
calibrate_checkerboard_multi_view(
    images: list[Union[Image, ndarray]],
    positions: list[Tuple[float, float]],
    board_size: Optional[Tuple[int, int]] = None,
    square_width: Optional[float] = None,
    square_height: Optional[float] = None,
    world_unit: Optional[str] = None,
    camera_matrix: Optional[ndarray] = None,
    dist_coeffs: Optional[ndarray] = None,
    refine_corners: Optional[bool] = None,
) -> CalibrationData

Calibrate from multiple checkerboard positions on the same plane.

Combines corner detections from multiple images where the checkerboard is placed at different positions on the measurement plane. This provides better calibration coverage over large areas without requiring an oversized calibration target.

Ideal for calibrating long surfaces (e.g., metallic bars, conveyor belts) using a standard-sized checkerboard moved to multiple positions.

Parameters:

Name Type Description Default
images list[Union[Image, ndarray]]

List of images, each containing the checkerboard at a different position

required
positions list[Tuple[float, float]]

List of (x_offset, y_offset) tuples in world units specifying the checkerboard's origin position in each image. The first position is typically (0, 0), and subsequent positions indicate how far the checkerboard was moved.

required
board_size Optional[Tuple[int, int]]

Number of inner corners as (columns, rows). Uses config default if None.

None
square_width Optional[float]

Physical width of one checkerboard square in world units. Uses config default if None.

None
square_height Optional[float]

Physical height of one checkerboard square in world units. Uses config default if None.

None
world_unit Optional[str]

Unit of positions and square_width/height. Uses config default if None.

None
camera_matrix Optional[ndarray]

Optional 3x3 camera intrinsics matrix for undistortion

None
dist_coeffs Optional[ndarray]

Optional distortion coefficients for undistortion

None
refine_corners Optional[bool]

Enable sub-pixel corner refinement. Uses config default if None.

None

Returns:

Type Description
CalibrationData

CalibrationData containing homography matrix computed from all positions

Raises:

Type Description
CameraConfigurationError

If inputs are invalid or inconsistent

HardwareOperationError

If checkerboard detection fails in any image

Example::

# Calibrate a 2-meter long bar using 3 checkerboard positions
images = [image1, image2, image3]
positions = [
    (0, 0),      # Start of bar
    (1000, 0),   # Middle (1000mm from start)
    (2000, 0)    # End (2000mm from start)
]

calibration = calibrator.calibrate_checkerboard_multi_view(
    images=images,
    positions=positions,
    board_size=(12, 12),
    square_width=25.0,    # 25mm wide squares
    square_height=25.0,   # 25mm tall squares
    world_unit="mm"
)
Notes
  • All images must show the same plane (Z=0)
  • Positions specify where the checkerboard origin (top-left corner) is located
  • Use more positions for better coverage of large measurement areas
  • Typical usage: 3-5 positions for long surfaces
  • RANSAC automatically handles slight inaccuracies in position measurements
data

Data structures for homography calibration and measurement.

This module defines immutable data containers for homography-based measurement operations.

CalibrationData dataclass
CalibrationData(
    H: ndarray,
    camera_matrix: Optional[ndarray] = None,
    dist_coeffs: Optional[ndarray] = None,
    world_unit: str = "mm",
    plane_normal_camera: Optional[ndarray] = None,
)

Immutable container for homography calibration data.

Holds the homography matrix and optional camera intrinsics derived or provided during calibration. The homography maps world plane coordinates (Z=0) in metric units to image pixel coordinates.

Attributes:

Name Type Description
H ndarray

3x3 homography matrix from world plane (Z=0) to image pixels

camera_matrix Optional[ndarray]

3x3 camera intrinsics matrix (K) if known or estimated

dist_coeffs Optional[ndarray]

Lens distortion coefficients if available

world_unit str

Unit used for world coordinates (e.g., 'mm', 'cm', 'm', 'in', 'ft')

plane_normal_camera Optional[ndarray]

Optional 3D normal of the plane in camera frame if recovered

save
save(filepath: str) -> None

Save calibration data to JSON file.

Parameters:

Name Type Description Default
filepath str

Path to save the calibration data

required
Note

NumPy arrays are converted to lists for JSON serialization.
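This is the usual NumPy-to-JSON pattern: arrays are listified on save and rebuilt on load. A self-contained sketch of that pattern (not the library's internal code):

```python
import json

import numpy as np

H = np.eye(3)

# Save: arrays become nested lists so the payload is valid JSON
payload = json.dumps({"H": H.tolist(), "world_unit": "mm"})

# Load: rebuild the array from the stored lists
data = json.loads(payload)
H_loaded = np.asarray(data["H"])
print(np.array_equal(H, H_loaded))  # True
```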

load classmethod
load(filepath: str) -> CalibrationData

Load calibration data from JSON file.

Parameters:

Name Type Description Default
filepath str

Path to the calibration data file

required

Returns:

Type Description
CalibrationData

CalibrationData instance loaded from file

Raises:

Type Description
FileNotFoundError

If the file doesn't exist

ValueError

If the file format is invalid

MeasuredBox dataclass
MeasuredBox(
    corners_world: ndarray,
    width_world: float,
    height_world: float,
    area_world: float,
    unit: str,
)

Immutable container for metric-space measurement of a bounding box.

Stores the result of projecting a pixel-space bounding box to world coordinates on a planar surface using homography inversion. Contains the projected corner points and computed physical dimensions.

Attributes:

Name Type Description
corners_world ndarray

4x2 array of corner coordinates in world units (top-left, top-right, bottom-right, bottom-left)

width_world float

Width in world units (distance between top-left and top-right)

height_world float

Height in world units (distance between top-left and bottom-left)

area_world float

Area in square world units (computed via shoelace formula)

unit str

Unit of measurement (e.g., 'mm', 'cm', 'm', 'in', 'ft')
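The width, height, and area definitions above can be sketched directly from the projected corners: edge lengths via Euclidean norms, area via the shoelace formula (illustrative, not the library's internal code):

```python
import numpy as np

def box_metrics(corners: np.ndarray) -> tuple[float, float, float]:
    """corners: 4x2 array ordered TL, TR, BR, BL in world units."""
    tl, tr, br, bl = corners
    width = float(np.linalg.norm(tr - tl))   # top-left to top-right
    height = float(np.linalg.norm(bl - tl))  # top-left to bottom-left
    x, y = corners[:, 0], corners[:, 1]      # shoelace formula for polygon area
    area = 0.5 * abs(np.dot(x, np.roll(y, -1)) - np.dot(y, np.roll(x, -1)))
    return width, height, float(area)

rect = np.array([[0, 0], [30, 0], [30, 20], [0, 20]], dtype=float)
print(box_metrics(rect))  # (30.0, 20.0, 600.0)
```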

to_dict
to_dict() -> dict

Convert measurement to dictionary.

Returns:

Type Description
dict

Dictionary representation with corners as list

measurer

Homography-based measurement for planar objects.

This module provides measurement operations that project pixel-space bounding boxes to real-world metric coordinates using calibrated homography transformations.

HomographyMeasurer
HomographyMeasurer(calibration: CalibrationData, **kwargs)

Bases: Mindtrace

Measures physical dimensions of objects using planar homography.

Projects pixel-space bounding boxes from object detection to real-world metric coordinates on a planar surface using a pre-calibrated homography matrix. Enables accurate physical size measurements from camera images.

The measurer uses the inverse homography (H⁻¹) to map image pixels back to world coordinates, then computes Euclidean distances and polygon areas for size measurements.

Features
  • Pixel-to-world coordinate projection
  • Bounding box dimension measurement (width, height, area)
  • Multi-unit support with automatic conversion
  • Batch processing for multiple detections
  • Pre-computed inverse homography for performance
Typical Workflow
  1. Calibrate the camera view with HomographyCalibrator.calibrate_*() to obtain calibration data
  2. Create measurer with calibration data
  3. Detect objects with vision model (YOLO, etc.)
  4. Measure physical dimensions from bounding boxes
  5. Apply size-based filtering or quality control

Usage::

from mindtrace.hardware import HomographyCalibrator, HomographyMeasurer
from mindtrace.core.types.bounding_box import BoundingBox

# One-time calibration
calibrator = HomographyCalibrator()
calibration = calibrator.calibrate_checkerboard(
    image=checkerboard_image,
    board_size=(12, 12),
    square_size=25.0,
    world_unit="mm"
)

# Create measurer (reuse for all measurements)
measurer = HomographyMeasurer(calibration)

# Measure objects from detection results
detections = yolo.detect(frame)  # List[BoundingBox]
measurements = measurer.measure_bounding_boxes(detections, target_unit="cm")

for measured in measurements:
    print(f"Width: {measured.width_world:.2f} cm")
    print(f"Height: {measured.height_world:.2f} cm")
    print(f"Area: {measured.area_world:.2f} cm²")

    # Size-based filtering
    if measured.width_world > 10.0:
        reject_oversized_part(measured)
Configuration
  • Supported units: mm, cm, m, in, ft (configurable via hardware config)
  • Default world unit: Inherited from calibration data
Limitations
  • Only works for planar surfaces (Z=0 assumption)
  • Accuracy depends on calibration quality and viewing angle
  • Assumes objects lie flat on the calibrated plane
  • Camera must remain fixed after calibration

Initialize homography measurer with calibration data.

Parameters:

Name Type Description Default
calibration CalibrationData

CalibrationData from HomographyCalibrator

required
**kwargs

Additional arguments passed to Mindtrace base class

{}

Raises:

Type Description
CameraConfigurationError

If homography matrix is invalid

pixels_to_world
pixels_to_world(points_px: ndarray) -> np.ndarray

Project pixel coordinates to world plane coordinates.

Maps Nx2 pixel coordinates to world plane coordinates using the inverse homography matrix H⁻¹. This is the core projection operation for all measurement functionality.

Parameters:

Name Type Description Default
points_px ndarray

Nx2 array of pixel coordinates (u, v)

required

Returns:

Type Description
ndarray

Nx2 array of world coordinates (X, Y) in calibration world unit

Raises:

Type Description
CameraConfigurationError

If input array has wrong shape

Example::

# Project single point
world_point = measurer.pixels_to_world(np.array([[320, 240]]))

# Project multiple points
pixel_corners = np.array([[100, 50], [500, 50], [500, 400], [100, 400]])
world_corners = measurer.pixels_to_world(pixel_corners)
measure_bounding_box
measure_bounding_box(
    box: BoundingBox, target_unit: Optional[str] = None
) -> MeasuredBox

Measure physical dimensions of a bounding box on the calibrated plane.

Projects the four corners of a pixel-space bounding box to world coordinates, then computes width, height, and area in the specified unit.

Parameters:

Name Type Description Default
box BoundingBox

BoundingBox from object detection (x, y, width, height in pixels)

required
target_unit Optional[str]

Unit for output measurements (e.g., 'cm', 'm'). Uses calibration unit if None.

None

Returns:

Type Description
MeasuredBox

MeasuredBox with physical dimensions and corner coordinates

Example::

# From object detection
detection = BoundingBox(x=100, y=50, width=400, height=350)
measured = measurer.measure_bounding_box(detection, target_unit="cm")

print(f"Object is {measured.width_world:.1f} × {measured.height_world:.1f} cm")
print(f"Area: {measured.area_world:.1f} cm²")
measure_bounding_boxes
measure_bounding_boxes(
    boxes: Sequence[BoundingBox], target_unit: Optional[str] = None
) -> List[MeasuredBox]

Measure physical dimensions of multiple bounding boxes.

Batch processing of multiple object detections. More efficient than calling measure_bounding_box() in a loop.

Parameters:

Name Type Description Default
boxes Sequence[BoundingBox]

Sequence of BoundingBox objects from object detection

required
target_unit Optional[str]

Unit for output measurements. Uses calibration unit if None.

None

Returns:

Type Description
List[MeasuredBox]

List of MeasuredBox objects with physical dimensions

Example::

# Batch measurement from multiple detections
detections = yolo.detect(frame)  # List[BoundingBox]
measurements = measurer.measure_bounding_boxes(detections, target_unit="cm")

# Size-based filtering
large_objects = [m for m in measurements if m.width_world > 15.0]

# Quality control
for measured in measurements:
    if not (10.0 <= measured.width_world <= 20.0):
        reject_part(measured)
measure_distance
measure_distance(
    point1: Union[Tuple[float, float], ndarray],
    point2: Union[Tuple[float, float], ndarray],
    target_unit: Optional[str] = None,
) -> Tuple[float, str]

Measure Euclidean distance between two points on the calibrated plane.

Converts pixel coordinates to world coordinates and computes the distance. Useful for measuring gaps, spacing, or verifying known distances.

Parameters:

Name Type Description Default
point1 Union[Tuple[float, float], ndarray]

First point as (x, y) pixel coordinates

required
point2 Union[Tuple[float, float], ndarray]

Second point as (x, y) pixel coordinates

required
target_unit Optional[str]

Unit for output distance. Uses calibration unit if None.

None

Returns:

Type Description
Tuple[float, str]

Tuple of (distance, unit)

Raises:

Type Description
ValueError

If target_unit is not supported

Example::

# Measure distance between two detected points
point1 = (150, 200)  # pixels
point2 = (350, 200)  # pixels
distance, unit = measurer.measure_distance(point1, point2, target_unit="mm")
print(f"Distance: {distance:.2f} {unit}")

# Verify calibration accuracy
known_distance_mm = 100.0
measured_distance, _ = measurer.measure_distance(ref_point1, ref_point2, "mm")
error_percent = abs(measured_distance - known_distance_mm) / known_distance_mm * 100
print(f"Calibration error: {error_percent:.2f}%")
setup

Camera Setup Module

This module provides setup scripts for various camera SDKs and utilities for configuring camera hardware in the Mindtrace system.

Available CLI commands (after package installation):

mindtrace-camera-setup install       # Install all camera SDKs
mindtrace-camera-setup uninstall     # Uninstall all camera SDKs
mindtrace-camera-basler install      # Install Basler Pylon SDK
mindtrace-camera-basler uninstall    # Uninstall Basler Pylon SDK
mindtrace-camera-genicam install     # Install GenICam CTI files
mindtrace-camera-genicam uninstall   # Uninstall GenICam SDK
mindtrace-camera-genicam verify      # Verify GenICam installation

Each setup script uses Typer for CLI and can be run independently.

PylonSDKInstaller
PylonSDKInstaller(package_path: Optional[str] = None)

Bases: Mindtrace

Basler Pylon SDK installer with guided wizard.

This class provides an interactive installation wizard that guides users through downloading and installing the Basler Pylon SDK from the official Basler website.

Initialize the Pylon SDK installer.

Parameters:

Name Type Description Default
package_path Optional[str]

Optional path to pre-downloaded package file

None
install
install() -> bool

Install the Pylon SDK using interactive wizard or pre-downloaded package.

Returns:

Type Description
bool

True if installation successful, False otherwise

uninstall
uninstall() -> bool

Uninstall the Pylon SDK.

Returns:

Type Description
bool

True if uninstallation successful, False otherwise

CameraSystemSetup
CameraSystemSetup()

Bases: Mindtrace

Unified camera system setup and configuration manager.

This class handles the installation and configuration of all camera SDKs and related network settings for the Mindtrace hardware system.

Initialize the camera system setup manager.

install_all_sdks
install_all_sdks(release_version: str = 'v1.0-stable') -> bool

Install all camera SDKs.

Parameters:

Name Type Description Default
release_version str

SDK release version to install

'v1.0-stable'

Returns:

Type Description
bool

True if all installations successful, False otherwise

uninstall_all_sdks
uninstall_all_sdks() -> bool

Uninstall all camera SDKs.

Returns:

Type Description
bool

True if all uninstallations successful, False otherwise

configure_firewall
configure_firewall(ip_range: Optional[str] = None) -> bool

Configure firewall rules to allow camera communication.

This method configures platform-specific firewall rules to allow communication with GigE Vision cameras on the specified IP range.

Parameters:

Name Type Description Default
ip_range Optional[str]

IP range to allow (uses config default if None)

None

Returns:

Type Description
bool

True if firewall configuration successful, False otherwise

GenICamCTIInstaller
GenICamCTIInstaller(release_version: str = 'latest')

Bases: Mindtrace

Matrix Vision GenICam CTI installer and manager.

This class handles the download, installation, and uninstallation of the Matrix Vision Impact Acquire SDK and GenTL Producer across different platforms.

Initialize the GenICam CTI installer.

Parameters:

Name Type Description Default
release_version str

SDK release version to download

'latest'
get_cti_path
get_cti_path() -> str

Get the expected CTI file path for the current platform.

Returns:

Type Description
str

Path to the CTI file for the current platform

verify_installation
verify_installation() -> bool

Verify that the CTI file is properly installed.

Returns:

Type Description
bool

True if CTI file exists and is accessible, False otherwise

install
install() -> bool

Install the Matrix Vision Impact Acquire SDK for the current platform.

Returns:

Type Description
bool

True if installation successful, False otherwise

uninstall
uninstall() -> bool

Uninstall the Impact Acquire SDK.

Returns:

Type Description
bool

True if uninstallation successful, False otherwise

configure_firewall_helper
configure_firewall_helper(ip_range: Optional[str] = None) -> bool

Configure firewall rules to allow camera communication.

This function provides a simple interface to configure firewall rules for camera network communication. It works on both Windows and Linux.

Parameters:

Name Type Description Default
ip_range Optional[str]

IP range to allow (uses config default if None)

None

Returns:

Type Description
bool

True if firewall configuration successful, False otherwise

setup_basler

Basler Pylon SDK Setup Script

This script provides a guided installation wizard for the Basler Pylon SDK for both Linux and Windows systems. The Pylon SDK provides tools like Pylon Viewer and IP Configurator for camera management.

Note: pypylon (the Python package) is self-contained for camera operations. This SDK installation is only needed for the GUI tools.

Features
  • Interactive guided wizard with browser integration
  • Platform-specific installation instructions
  • Support for pre-downloaded packages (--package flag)
  • Comprehensive logging and error handling
  • Uninstallation support

Usage

python setup_basler.py                          # Interactive wizard
python setup_basler.py --package /path/to/file  # Use pre-downloaded file
python setup_basler.py --uninstall              # Uninstall SDK
mindtrace-camera-basler-install                 # Console script (install)
mindtrace-camera-basler-uninstall               # Console script (uninstall)

PylonSDKInstaller
PylonSDKInstaller(package_path: Optional[str] = None)

Bases: Mindtrace

Basler Pylon SDK installer with guided wizard.

This class provides an interactive installation wizard that guides users through downloading and installing the Basler Pylon SDK from the official Basler website.

Initialize the Pylon SDK installer.

Parameters:

Name Type Description Default
package_path Optional[str]

Optional path to pre-downloaded package file

None
install
install() -> bool

Install the Pylon SDK using interactive wizard or pre-downloaded package.

Returns:

Type Description
bool

True if installation successful, False otherwise

uninstall
uninstall() -> bool

Uninstall the Pylon SDK.

Returns:

Type Description
bool

True if uninstallation successful, False otherwise

install
install(
    package: Optional[Path] = typer.Option(
        None,
        "--package",
        "-p",
        help="Path to pre-downloaded Pylon SDK package file",
        exists=True,
        dir_okay=False,
    ),
    verbose: bool = typer.Option(
        False, "--verbose", "-v", help="Enable verbose logging"
    ),
) -> None

Install the Basler Pylon SDK using an interactive wizard.

The wizard will guide you through downloading and installing the SDK from Basler's official website where you'll accept their EULA.

For CI/automation, use --package to provide a pre-downloaded file.

uninstall
uninstall(
    verbose: bool = typer.Option(
        False, "--verbose", "-v", help="Enable verbose logging"
    )
) -> None

Uninstall the Basler Pylon SDK.

main
main() -> None

Main entry point for the script.

setup_cameras

Camera Setup and Configuration Script

This script provides a unified interface for installing and configuring all camera SDKs and related network settings for the Mindtrace hardware system. It combines Basler SDK installation with firewall configuration for camera network communication.

Features:
  - Combined installation of all camera SDKs (Basler Pylon, Matrix Vision GenICam CTI)
  - Firewall configuration for camera network communication
  - Cross-platform support (Windows, Linux, and macOS)
  - Individual SDK uninstallation support
  - Comprehensive logging and error handling
  - Configurable IP range and firewall settings
  - Integration with Mindtrace configuration system

Configuration

The script uses the Mindtrace hardware configuration system for default values. Settings can be customized via:

  1. Environment Variables:
     - MINDTRACE_HW_NETWORK_CAMERA_IP_RANGE: IP range for firewall rules (default: 192.168.50.0/24)
     - MINDTRACE_HW_NETWORK_FIREWALL_RULE_NAME: Name for firewall rules (default: "Allow Camera Network")

  2. Configuration File (hardware_config.json):
     {
       "network": {
         "camera_ip_range": "192.168.50.0/24",
         "firewall_rule_name": "Allow Camera Network"
       }
     }

  3. Command Line Arguments (highest priority)
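The precedence described above can be sketched as a small resolver. `resolve_setting` and its defaults below are illustrative, not part of the package API; the relative priority of the configuration file versus environment variables is an assumption (the docs only state that command-line arguments win).

```python
import json
import os
from pathlib import Path
from typing import Optional

# Built-in defaults from the hardware configuration system
DEFAULTS = {"camera_ip_range": "192.168.50.0/24",
            "firewall_rule_name": "Allow Camera Network"}

# Setting name -> environment variable name
ENV_MAP = {"camera_ip_range": "MINDTRACE_HW_NETWORK_CAMERA_IP_RANGE",
           "firewall_rule_name": "MINDTRACE_HW_NETWORK_FIREWALL_RULE_NAME"}

def resolve_setting(key: str, cli_value: Optional[str] = None,
                    config_file: Optional[Path] = None) -> str:
    """Resolve one network setting: CLI arg > config file > env var > default."""
    if cli_value is not None:                    # command line (highest priority)
        return cli_value
    if config_file and config_file.exists():     # hardware_config.json
        network = json.loads(config_file.read_text()).get("network", {})
        if key in network:
            return network[key]
    env_value = os.environ.get(ENV_MAP[key])     # environment variable
    if env_value:
        return env_value
    return DEFAULTS[key]                         # built-in default
```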

Usage

python setup_cameras.py                         # Install all SDKs
python setup_cameras.py --uninstall             # Uninstall all SDKs
python setup_cameras.py --configure-firewall    # Configure firewall only
python setup_cameras.py --ip-range 10.0.0.0/24  # Use custom IP range
mindtrace-setup-cameras                         # Console script

Network Configuration

The script configures firewall rules to allow camera communication on the specified IP range. This is essential for GigE Vision cameras that communicate over Ethernet. The default IP range (192.168.50.0/24) follows industrial camera networking standards.

CameraSystemSetup
CameraSystemSetup()

Bases: Mindtrace

Unified camera system setup and configuration manager.

This class handles the installation and configuration of all camera SDKs and related network settings for the Mindtrace hardware system.

Initialize the camera system setup manager.

install_all_sdks
install_all_sdks(release_version: str = 'v1.0-stable') -> bool

Install all camera SDKs.

Parameters:

Name Type Description Default
release_version str

SDK release version to install

'v1.0-stable'

Returns:

Type Description
bool

True if all installations successful, False otherwise

uninstall_all_sdks
uninstall_all_sdks() -> bool

Uninstall all camera SDKs.

Returns:

Type Description
bool

True if all uninstallations successful, False otherwise

configure_firewall
configure_firewall(ip_range: Optional[str] = None) -> bool

Configure firewall rules to allow camera communication.

This method configures platform-specific firewall rules to allow communication with GigE Vision cameras on the specified IP range.

Parameters:

Name Type Description Default
ip_range Optional[str]

IP range to allow (uses config default if None)

None

Returns:

Type Description
bool

True if firewall configuration successful, False otherwise

configure_firewall_helper
configure_firewall_helper(ip_range: Optional[str] = None) -> bool

Configure firewall rules to allow camera communication.

This function provides a simple interface to configure firewall rules for camera network communication. It works on both Windows and Linux.

Parameters:

Name Type Description Default
ip_range Optional[str]

IP range to allow (uses config default if None)

None

Returns:

Type Description
bool

True if firewall configuration successful, False otherwise

install
install(
    version: str = typer.Option(
        "v1.0-stable", "--version", help="SDK release version to install"
    ),
    ip_range: Optional[str] = typer.Option(
        None,
        "--ip-range",
        help="IP range to allow in firewall (uses config default if not specified)",
    ),
    verbose: bool = typer.Option(
        False, "--verbose", "-v", help="Enable verbose logging"
    ),
) -> None

Install all camera SDKs and configure firewall.

Installs Basler Pylon SDK and Matrix Vision GenICam CTI files, then configures firewall rules for GigE Vision camera communication.

uninstall
uninstall(
    verbose: bool = typer.Option(
        False, "--verbose", "-v", help="Enable verbose logging"
    )
) -> None

Uninstall all camera SDKs.

configure_firewall
configure_firewall(
    ip_range: Optional[str] = typer.Option(
        None,
        "--ip-range",
        help="IP range to allow in firewall (uses config default if not specified)",
    ),
    verbose: bool = typer.Option(
        False, "--verbose", "-v", help="Enable verbose logging"
    ),
) -> None

Configure firewall rules for camera network communication.

Configures platform-specific firewall rules to allow GigE Vision camera communication on the specified IP range.

Windows: Uses netsh advfirewall commands
Linux: Uses UFW (Uncomplicated Firewall)
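As a rough sketch of what those platform branches amount to: the function below only builds the command line and does not execute it, and the exact rule arguments the script passes are not documented here, so these are illustrative.

```python
import platform
from typing import List

def firewall_command(ip_range: str,
                     rule_name: str = "Allow Camera Network") -> List[str]:
    """Build a platform-specific firewall command (not executed here)."""
    if platform.system() == "Windows":
        # netsh advfirewall: allow inbound traffic from the camera subnet
        return ["netsh", "advfirewall", "firewall", "add", "rule",
                f"name={rule_name}", "dir=in", "action=allow",
                f"remoteip={ip_range}"]
    # Linux: UFW (Uncomplicated Firewall)
    return ["ufw", "allow", "from", ip_range]
```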

main
main() -> None

Main entry point for the camera setup script.

setup_genicam

Matrix Vision GenICam CTI Setup Script

This script automates the download and installation of the Matrix Vision Impact Acquire SDK and GenTL Producer (CTI files) for Linux, Windows, and macOS. The CTI files are required for GenICam camera communication via Harvesters.

Features:
  - Automatic SDK download from Matrix Vision or GitHub releases
  - Platform-specific installation (Linux .deb/.tar.gz, Windows .exe, macOS .dmg/.pkg)
  - CTI file detection and verification
  - Administrative privilege handling
  - Comprehensive logging and error handling
  - Uninstallation support
  - Harvesters CTI path configuration

CTI File Locations:
  - Linux: /opt/ImpactAcquire/lib/x86_64/mvGenTLProducer.cti
  - Windows: C:\Program Files\MATRIX VISION\mvIMPACT Acquire\bin\x64\mvGenTLProducer.cti
  - macOS: /Applications/mvIMPACT_Acquire.app/Contents/Libraries/x86_64/mvGenTLProducer.cti

Usage

python setup_genicam.py              # Install CTI files
python setup_genicam.py --uninstall  # Uninstall SDK
python setup_genicam.py --verify     # Verify CTI installation
mindtrace-camera-genicam-install     # Console script (install)
mindtrace-camera-genicam-uninstall   # Console script (uninstall)
mindtrace-camera-genicam-verify      # Console script (verify)

GenICamCTIInstaller
GenICamCTIInstaller(release_version: str = 'latest')

Bases: Mindtrace

Matrix Vision GenICam CTI installer and manager.

This class handles the download, installation, and uninstallation of the Matrix Vision Impact Acquire SDK and GenTL Producer across different platforms.

Initialize the GenICam CTI installer.

Parameters:

Name Type Description Default
release_version str

SDK release version to download

'latest'
get_cti_path
get_cti_path() -> str

Get the expected CTI file path for the current platform.

Returns:

Type Description
str

Path to the CTI file for the current platform

verify_installation
verify_installation() -> bool

Verify that the CTI file is properly installed.

Returns:

Type Description
bool

True if CTI file exists and is accessible, False otherwise

install
install() -> bool

Install the Matrix Vision Impact Acquire SDK for the current platform.

Returns:

Type Description
bool

True if installation successful, False otherwise

uninstall
uninstall() -> bool

Uninstall the Impact Acquire SDK.

Returns:

Type Description
bool

True if uninstallation successful, False otherwise

install_genicam_cti
install_genicam_cti(release_version: str = 'latest') -> bool

Install the Matrix Vision GenICam CTI files.

Parameters:

Name Type Description Default
release_version str

SDK release version to install

'latest'

Returns:

Type Description
bool

True if installation successful, False otherwise

uninstall_genicam_cti
uninstall_genicam_cti() -> bool

Uninstall the Matrix Vision Impact Acquire SDK.

Returns:

Type Description
bool

True if uninstallation successful, False otherwise

verify_genicam_cti
verify_genicam_cti() -> bool

Verify the Matrix Vision CTI installation.

Returns:

Type Description
bool

True if CTI files are properly installed, False otherwise
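The platform-to-path mapping listed earlier can be expressed directly. This is a minimal, self-contained sketch of the checks get_cti_path() and verify_installation() imply (the paths come from the CTI File Locations list above; the function names are illustrative):

```python
import os
import platform

# Expected CTI file per platform, as documented above
CTI_PATHS = {
    "Linux": "/opt/ImpactAcquire/lib/x86_64/mvGenTLProducer.cti",
    "Windows": r"C:\Program Files\MATRIX VISION\mvIMPACT Acquire\bin\x64\mvGenTLProducer.cti",
    "Darwin": "/Applications/mvIMPACT_Acquire.app/Contents/Libraries/x86_64/mvGenTLProducer.cti",
}

def expected_cti_path() -> str:
    """Return the expected CTI file location for the current platform."""
    return CTI_PATHS[platform.system()]

def cti_installed() -> bool:
    """True if the CTI file exists and is readable."""
    path = expected_cti_path()
    return os.path.isfile(path) and os.access(path, os.R_OK)
```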

install
install(
    version: str = typer.Option(
        "latest", "--version", help="SDK release version to install"
    ),
    verbose: bool = typer.Option(
        False, "--verbose", "-v", help="Enable verbose logging"
    ),
) -> None

Install the Matrix Vision Impact Acquire SDK and CTI files.

Downloads and installs the SDK from the official Balluff/Matrix Vision servers. The CTI files are required for GenICam camera communication via Harvesters.

uninstall
uninstall(
    verbose: bool = typer.Option(
        False, "--verbose", "-v", help="Enable verbose logging"
    )
) -> None

Uninstall the Matrix Vision Impact Acquire SDK.

verify
verify(
    verbose: bool = typer.Option(
        False, "--verbose", "-v", help="Enable verbose logging"
    )
) -> None

Verify that CTI files are properly installed.

main
main() -> None

Main entry point for the script.

cli

Mindtrace Hardware CLI - Command-line interface for hardware management.

commands

CLI command modules.

status_command
status_command()

Show status of all hardware services.

camera

Camera service commands.

start
start(
    api_host: Annotated[
        str,
        Option("--api-host", help="API service host", envvar=CAMERA_API_HOST),
    ] = "localhost",
    api_port: Annotated[
        int,
        Option("--api-port", help="API service port", envvar=CAMERA_API_PORT),
    ] = 8002,
    include_mocks: Annotated[
        bool, Option("--include-mocks", help="Include mock cameras")
    ] = False,
    open_docs: Annotated[
        bool, Option("--open-docs", help="Open API documentation in browser")
    ] = False,
)

Start camera API service (headless).

stop
stop()

Stop camera API service.

status
status()

Show camera API service status.

logs
logs()

View camera API service logs.

plc

PLC service commands.

start
start(
    api_host: Annotated[
        str, Option("--api-host", help="API service host", envvar=PLC_API_HOST)
    ] = "localhost",
    api_port: Annotated[
        int, Option("--api-port", help="API service port", envvar=PLC_API_PORT)
    ] = 8003,
)

Start PLC API service.

stop
stop()

Stop PLC API service.

status
status()

Show PLC service status.

logs
logs()

View PLC service logs.

scanner

3D Scanner service commands.

start
start(
    api_host: Annotated[
        str,
        Option(
            "--api-host",
            help="3D Scanner API service host",
            envvar=SCANNER_3D_API_HOST,
        ),
    ] = "localhost",
    api_port: Annotated[
        int,
        Option(
            "--api-port",
            help="3D Scanner API service port",
            envvar=SCANNER_3D_API_PORT,
        ),
    ] = 8005,
    open_docs: Annotated[
        bool, Option("--open-docs", help="Open API documentation in browser")
    ] = False,
)

Start 3D scanner API service.

stop
stop()

Stop 3D scanner API service.

status
status()

Show 3D scanner service status.

logs
logs()

View 3D scanner service logs.

status

Status command for all hardware services.

status_command
status_command()

Show status of all hardware services.

stereo

Stereo camera service commands.

start
start(
    api_host: Annotated[
        str,
        Option(
            "--api-host",
            help="Stereo Camera API service host",
            envvar=STEREO_CAMERA_API_HOST,
        ),
    ] = "localhost",
    api_port: Annotated[
        int,
        Option(
            "--api-port",
            help="Stereo Camera API service port",
            envvar=STEREO_CAMERA_API_PORT,
        ),
    ] = 8004,
    open_docs: Annotated[
        bool, Option("--open-docs", help="Open API documentation in browser")
    ] = False,
)

Start stereo camera API service.

stop
stop()

Stop stereo camera API service.

status
status()

Show stereo camera service status.

logs
logs()

View stereo camera service logs.

core

Core CLI functionality.

ProcessManager
ProcessManager()

Manages hardware service processes.

Initialize process manager.

load_pids
load_pids()

Load saved PIDs from file.

save_pids
save_pids()

Save PIDs to file.

start_camera_api
start_camera_api(
    host: str = None, port: int = None, include_mocks: bool = False
) -> subprocess.Popen

Launch camera API service.

Parameters:

Name Type Description Default
host str

Host to bind the service to (default: CAMERA_API_HOST env var or 'localhost')

None
port int

Port to run the service on (default: CAMERA_API_PORT env var or 8002)

None
include_mocks bool

Include mock cameras in discovery

False

Returns:

Type Description
Popen

The subprocess handle

start_plc_api
start_plc_api(host: str = None, port: int = None) -> subprocess.Popen

Launch PLC API service.

Parameters:

Name Type Description Default
host str

Host to bind the service to (default: PLC_API_HOST env var or 'localhost')

None
port int

Port to run the service on (default: PLC_API_PORT env var or 8003)

None

Returns:

Type Description
Popen

The subprocess handle

start_stereo_camera_api
start_stereo_camera_api(host: str = None, port: int = None) -> subprocess.Popen

Launch Stereo Camera API service.

Parameters:

Name Type Description Default
host str

Host to bind the service to (default: STEREO_CAMERA_API_HOST env var or 'localhost')

None
port int

Port to run the service on (default: STEREO_CAMERA_API_PORT env var or 8004)

None

Returns:

Type Description
Popen

The subprocess handle

start_scanner_3d_api
start_scanner_3d_api(host: str = None, port: int = None) -> subprocess.Popen

Launch 3D Scanner API service.

Parameters:

Name Type Description Default
host str

Host to bind the service to (default: SCANNER_3D_API_HOST env var or 'localhost')

None
port int

Port to run the service on (default: SCANNER_3D_API_PORT env var or 8005)

None

Returns:

Type Description
Popen

The subprocess handle

stop_service
stop_service(service_name: str) -> bool

Stop a service by name.

Parameters:

Name Type Description Default
service_name str

Name of the service to stop

required

Returns:

Type Description
bool

True if stopped successfully

stop_all
stop_all()

Stop all running services.

get_status
get_status() -> Dict[str, Any]

Get status of all services.

Returns:

Type Description
Dict[str, Any]

Dictionary with service status information

is_service_running
is_service_running(service_name: str) -> bool

Check if a specific service is running.

Parameters:

Name Type Description Default
service_name str

Name of the service

required

Returns:

Type Description
bool

True if the service is running
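load_pids()/save_pids() suggest a simple JSON PID file. Below is a self-contained sketch of that pattern; the file location, layout, and the liveness check are assumptions, not the manager's actual implementation:

```python
import json
import os
from pathlib import Path
from typing import Dict

# Assumed location for the saved PID mapping
PID_FILE = Path.home() / ".mindtrace" / "hw_services.json"

def save_pids(pids: Dict[str, int], pid_file: Path = PID_FILE) -> None:
    """Persist a service-name -> PID mapping to disk."""
    pid_file.parent.mkdir(parents=True, exist_ok=True)
    pid_file.write_text(json.dumps(pids))

def load_pids(pid_file: Path = PID_FILE) -> Dict[str, int]:
    """Load the saved mapping, dropping entries whose process is gone."""
    if not pid_file.exists():
        return {}
    pids = json.loads(pid_file.read_text())
    alive = {}
    for name, pid in pids.items():
        try:
            os.kill(pid, 0)      # signal 0: POSIX existence check, no signal sent
            alive[name] = pid
        except (OSError, ProcessLookupError):
            pass                 # stale entry, skip it
    return alive
```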

setup_logger
setup_logger(
    name: str = "mindtrace-hw-cli",
    log_file: Optional[Path] = None,
    verbose: bool = False,
) -> logging.Logger

Set up logger for the CLI.

Parameters:

Name Type Description Default
name str

Logger name

'mindtrace-hw-cli'
log_file Optional[Path]

Optional log file path

None
verbose bool

Enable verbose logging

False

Returns:

Type Description
Logger

Configured logger instance

logger

Logging configuration for the CLI using Rich.

RichLogger
RichLogger(console: Optional[Console] = None)

Logger that uses Rich Console for professional output.

Initialize RichLogger.

Parameters:

Name Type Description Default
console Optional[Console]

Optional Rich Console instance. Creates new one if not provided.

None
info
info(message: str)

Log info message.

Parameters:

Name Type Description Default
message str

Message to log

required
success
success(message: str)

Log success message.

Parameters:

Name Type Description Default
message str

Success message to log

required
warning
warning(message: str)

Log warning message.

Parameters:

Name Type Description Default
message str

Warning message to log

required
error
error(message: str)

Log error message.

Parameters:

Name Type Description Default
message str

Error message to log

required
progress
progress(message: str)

Log progress message.

Parameters:

Name Type Description Default
message str

Progress message to log

required
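RichLogger wraps a Rich Console; the same five-method interface can be sketched over plain print where Rich is unavailable. The prefixes below are illustrative, not the CLI's actual output:

```python
class PlainLogger:
    """Minimal stand-in mirroring RichLogger's interface, without Rich."""

    _prefix = {"info": "[i]", "success": "[+]", "warning": "[!]",
               "error": "[x]", "progress": "[>]"}

    def _log(self, level: str, message: str) -> None:
        # Rich would style these; here we just tag each line with a prefix
        print(f"{self._prefix[level]} {message}")

    def info(self, message: str) -> None: self._log("info", message)
    def success(self, message: str) -> None: self._log("success", message)
    def warning(self, message: str) -> None: self._log("warning", message)
    def error(self, message: str) -> None: self._log("error", message)
    def progress(self, message: str) -> None: self._log("progress", message)
```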
setup_logger
setup_logger(
    name: str = "mindtrace-hw-cli",
    log_file: Optional[Path] = None,
    verbose: bool = False,
) -> logging.Logger

Set up logger for the CLI.

Parameters:

Name Type Description Default
name str

Logger name

'mindtrace-hw-cli'
log_file Optional[Path]

Optional log file path

None
verbose bool

Enable verbose logging

False

Returns:

Type Description
Logger

Configured logger instance

process_manager

Process management for hardware services.

ProcessManager
ProcessManager()

Manages hardware service processes.

Initialize process manager.

load_pids
load_pids()

Load saved PIDs from file.

save_pids
save_pids()

Save PIDs to file.

start_camera_api
start_camera_api(
    host: str = None, port: int = None, include_mocks: bool = False
) -> subprocess.Popen

Launch camera API service.

Parameters:

Name Type Description Default
host str

Host to bind the service to (default: CAMERA_API_HOST env var or 'localhost')

None
port int

Port to run the service on (default: CAMERA_API_PORT env var or 8002)

None
include_mocks bool

Include mock cameras in discovery

False

Returns:

Type Description
Popen

The subprocess handle

start_plc_api
start_plc_api(host: str = None, port: int = None) -> subprocess.Popen

Launch PLC API service.

Parameters:

Name Type Description Default
host str

Host to bind the service to (default: PLC_API_HOST env var or 'localhost')

None
port int

Port to run the service on (default: PLC_API_PORT env var or 8003)

None

Returns:

Type Description
Popen

The subprocess handle

start_stereo_camera_api
start_stereo_camera_api(host: str = None, port: int = None) -> subprocess.Popen

Launch Stereo Camera API service.

Parameters:

Name Type Description Default
host str

Host to bind the service to (default: STEREO_CAMERA_API_HOST env var or 'localhost')

None
port int

Port to run the service on (default: STEREO_CAMERA_API_PORT env var or 8004)

None

Returns:

Type Description
Popen

The subprocess handle

start_scanner_3d_api
start_scanner_3d_api(host: str = None, port: int = None) -> subprocess.Popen

Launch 3D Scanner API service.

Parameters:

Name Type Description Default
host str

Host to bind the service to (default: SCANNER_3D_API_HOST env var or 'localhost')

None
port int

Port to run the service on (default: SCANNER_3D_API_PORT env var or 8005)

None

Returns:

Type Description
Popen

The subprocess handle

stop_service
stop_service(service_name: str) -> bool

Stop a service by name.

Parameters:

Name Type Description Default
service_name str

Name of the service to stop

required

Returns:

Type Description
bool

True if stopped successfully

stop_all
stop_all()

Stop all running services.

get_status
get_status() -> Dict[str, Any]

Get status of all services.

Returns:

Type Description
Dict[str, Any]

Dictionary with service status information

is_service_running
is_service_running(service_name: str) -> bool

Check if a specific service is running.

Parameters:

Name Type Description Default
service_name str

Name of the service

required

Returns:

Type Description
bool

True if the service is running

utils

CLI utility functions.

format_status
format_status(status: Dict[str, Any]) -> None

Format and display service status.

Parameters:

Name Type Description Default
status Dict[str, Any]

Status dictionary from ProcessManager

required
print_table
print_table(data: List[Dict[str, Any]], headers: Optional[List[str]] = None)

Print data as a formatted table.

Parameters:

Name Type Description Default
data List[Dict[str, Any]]

List of dictionaries to display

required
headers Optional[List[str]]

Optional header names

None
display

Display utilities for CLI output using Rich.

format_status
format_status(status: Dict[str, Any]) -> None

Format and display service status.

Parameters:

Name Type Description Default
status Dict[str, Any]

Status dictionary from ProcessManager

required
print_table
print_table(data: List[Dict[str, Any]], headers: Optional[List[str]] = None)

Print data as a formatted table.

Parameters:

Name Type Description Default
data List[Dict[str, Any]]

List of dictionaries to display

required
headers Optional[List[str]]

Optional header names

None
print_service_box
print_service_box(title: str, services: Dict[str, Dict[str, Any]])

Print services in a nice panel format.

Parameters:

Name Type Description Default
title str

Panel title

required
services Dict[str, Dict[str, Any]]

Service information dictionary

required
show_banner
show_banner()

Display the CLI banner.

print_list
print_list(items: List[str], title: Optional[str] = None, style: str = 'white')

Print a list of items.

Parameters:

Name Type Description Default
items List[str]

List of strings to display

required
title Optional[str]

Optional title for the list

None
style str

Rich style for items

'white'
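print_table's row/header handling can be sketched without Rich. This plain-text version assumes headers default to the dict keys of the first row, which matches the signature above but is otherwise illustrative:

```python
from typing import Any, Dict, List, Optional

def format_table(data: List[Dict[str, Any]],
                 headers: Optional[List[str]] = None) -> str:
    """Render a list of dicts as an aligned plain-text table."""
    if not data:
        return ""
    headers = headers or list(data[0].keys())
    rows = [[str(row.get(h, "")) for h in headers] for row in data]
    # Column width = widest cell (or header) in that column
    widths = [max(len(h), *(len(r[i]) for r in rows))
              for i, h in enumerate(headers)]

    def line(cells: List[str]) -> str:
        return "  ".join(c.ljust(w) for c, w in zip(cells, widths))

    return "\n".join([line(headers),
                      line(["-" * w for w in widths]),
                      *[line(r) for r in rows]])
```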

plcs

PLC module for Mindtrace hardware system.

Provides unified interface for managing PLCs from different manufacturers with support for discovery, registration, and batch operations.

PLCManager
PLCManager()

Bases: Mindtrace

Unified PLC management system for industrial automation.

This manager provides a comprehensive interface for managing PLCs from different manufacturers with support for discovery, registration, connection management, and batch tag operations. It handles multiple PLC backends transparently and provides thread-safe operations with proper error handling.

The manager supports:
  - Automatic PLC discovery across multiple backends
  - Dynamic PLC registration and connection management
  - Batch tag read/write operations for optimal performance
  - Connection monitoring and automatic reconnection
  - Comprehensive error handling and logging
  - Thread-safe operations with proper resource management

Supported PLC Types:
  - Allen-Bradley: ControlLogix, CompactLogix, MicroLogix PLCs
  - Siemens: S7-300, S7-400, S7-1200, S7-1500 PLCs (Future)
  - Modbus: Modbus TCP/RTU devices (Future)
  - Mock PLCs: For testing and development

Attributes:

Name Type Description
plcs Dict[str, BasePLC]

Dictionary mapping PLC names to PLC instances

config

Hardware configuration manager instance

logger

Centralized logger for PLC operations

Example
Basic usage

async with PLCManager() as manager:
    # Discover available PLCs
    discovered = await manager.discover_plcs()

    # Register and connect to a PLC
    await manager.register_plc("PLC1", "AllenBradley", "192.168.1.100")
    await manager.connect_plc("PLC1")

    # Read and write tags
    values = await manager.read_tag("PLC1", ["Temperature", "Pressure"])
    await manager.write_tag("PLC1", [("Setpoint", 75.0)])

Batch operations

async with PLCManager() as manager:
    # Register multiple PLCs
    await manager.register_plc("PLC1", "AllenBradley", "192.168.1.100")
    await manager.register_plc("PLC2", "AllenBradley", "192.168.1.101")

    # Batch read from multiple PLCs
    read_requests = [
        ("PLC1", ["Temperature", "Pressure"]),
        ("PLC2", ["Speed", "Position"])
    ]
    results = await manager.read_tags_batch(read_requests)

Initialize the PLC manager.

discover_plcs async
discover_plcs() -> Dict[str, List[str]]

Discover available PLCs from all enabled backends.

Returns:

Type Description
Dict[str, List[str]]

Dictionary mapping backend names to lists of discovered PLCs

register_plc async
register_plc(
    plc_name: str,
    backend: str,
    ip_address: str,
    plc_type: Optional[str] = None,
    **kwargs
) -> bool

Register a PLC with the manager.

Parameters:

Name Type Description Default
plc_name str

Unique identifier for the PLC

required
backend str

Backend type ("AllenBradley", "Siemens", "Modbus")

required
ip_address str

IP address of the PLC

required
plc_type Optional[str]

Specific PLC type (backend-dependent)

None
**kwargs

Additional backend-specific parameters

{}

Returns:

Type Description
bool

True if registration successful, False otherwise

unregister_plc async
unregister_plc(plc_name: str) -> bool

Unregister a PLC from the manager.

Parameters:

Name Type Description Default
plc_name str

Name of the PLC to unregister

required

Returns:

Type Description
bool

True if unregistration successful, False otherwise

connect_plc async
connect_plc(plc_name: str) -> bool

Connect to a specific PLC.

Parameters:

Name Type Description Default
plc_name str

Name of the PLC to connect

required

Returns:

Type Description
bool

True if connection successful, False otherwise

disconnect_plc async
disconnect_plc(plc_name: str) -> bool

Disconnect from a specific PLC.

Parameters:

Name Type Description Default
plc_name str

Name of the PLC to disconnect

required

Returns:

Type Description
bool

True if disconnection successful, False otherwise

connect_all_plcs async
connect_all_plcs() -> Dict[str, bool]

Connect to all registered PLCs.

Returns:

Type Description
Dict[str, bool]

Dictionary mapping PLC names to connection success status

disconnect_all_plcs async
disconnect_all_plcs() -> Dict[str, bool]

Disconnect from all registered PLCs.

Returns:

Type Description
Dict[str, bool]

Dictionary mapping PLC names to disconnection success status

read_tag async
read_tag(plc_name: str, tags: Union[str, List[str]]) -> Dict[str, Any]

Read tags from a specific PLC.

Parameters:

Name Type Description Default
plc_name str

Name of the PLC

required
tags Union[str, List[str]]

Single tag name or list of tag names

required

Returns:

Type Description
Dict[str, Any]

Dictionary mapping tag names to their values

write_tag async
write_tag(
    plc_name: str, tags: Union[Tuple[str, Any], List[Tuple[str, Any]]]
) -> Dict[str, bool]

Write tags to a specific PLC.

Parameters:

Name Type Description Default
plc_name str

Name of the PLC

required
tags Union[Tuple[str, Any], List[Tuple[str, Any]]]

Single (tag_name, value) tuple or list of tuples

required

Returns:

Type Description
Dict[str, bool]

Dictionary mapping tag names to write success status

read_tags_batch async
read_tags_batch(
    requests: List[Tuple[str, Union[str, List[str]]]],
) -> Dict[str, Dict[str, Any]]

Read tags from multiple PLCs in batch.

Parameters:

Name Type Description Default
requests List[Tuple[str, Union[str, List[str]]]]

List of (plc_name, tags) tuples

required

Returns:

Type Description
Dict[str, Dict[str, Any]]

Dictionary mapping PLC names to their tag read results

write_tags_batch async
write_tags_batch(
    requests: List[Tuple[str, Union[Tuple[str, Any], List[Tuple[str, Any]]]]],
) -> Dict[str, Dict[str, bool]]

Write tags to multiple PLCs in batch.

Parameters:

Name Type Description Default
requests List[Tuple[str, Union[Tuple[str, Any], List[Tuple[str, Any]]]]]

List of (plc_name, tags) tuples

required

Returns:

Type Description
Dict[str, Dict[str, bool]]

Dictionary mapping PLC names to their tag write results
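The batch request and result shapes can be seen with a stub standing in for PLCManager. Real tag I/O needs hardware; StubPLCManager below is purely illustrative and only echoes the documented structures:

```python
import asyncio
from typing import Any, Dict, List, Tuple, Union

TagWrites = Union[Tuple[str, Any], List[Tuple[str, Any]]]

class StubPLCManager:
    """Echoes write_tags_batch's result structure without real PLCs."""

    async def write_tag(self, plc_name: str, tags: TagWrites) -> Dict[str, bool]:
        if isinstance(tags, tuple):       # normalize single tuple to a list
            tags = [tags]
        return {name: True for name, _value in tags}  # pretend every write works

    async def write_tags_batch(
        self, requests: List[Tuple[str, TagWrites]]
    ) -> Dict[str, Dict[str, bool]]:
        # One write_tag call per PLC, issued concurrently
        results = await asyncio.gather(
            *(self.write_tag(name, tags) for name, tags in requests))
        return {name: res for (name, _), res in zip(requests, results)}

async def demo() -> Dict[str, Dict[str, bool]]:
    manager = StubPLCManager()
    return await manager.write_tags_batch([
        ("PLC1", [("Setpoint", 75.0), ("Mode", 2)]),
        ("PLC2", ("Reset", True)),       # a single (tag, value) tuple also works
    ])
```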

get_plc_status async
get_plc_status(plc_name: str) -> Dict[str, Any]

Get status information for a specific PLC.

Parameters:

Name Type Description Default
plc_name str

Name of the PLC

required

Returns:

Type Description
Dict[str, Any]

Dictionary with PLC status information

get_all_plc_status async
get_all_plc_status() -> Dict[str, Dict[str, Any]]

Get status information for all registered PLCs.

Returns:

Type Description
Dict[str, Dict[str, Any]]

Dictionary mapping PLC names to their status information

get_plc_tags async
get_plc_tags(plc_name: str) -> List[str]

Get list of available tags for a specific PLC.

Parameters:

Name Type Description Default
plc_name str

Name of the PLC

required

Returns:

Type Description
List[str]

List of available tag names

get_registered_plcs
get_registered_plcs() -> List[str]

Get list of registered PLC names.

Returns:

Type Description
List[str]

List of registered PLC names

get_backend_info
get_backend_info() -> Dict[str, Dict[str, Any]]

Get information about available PLC backends.

Returns:

Type Description
Dict[str, Dict[str, Any]]

Dictionary mapping backend names to their information

cleanup async
cleanup()

Clean up all PLC connections and resources.

backends

PLC backends for different manufacturers and protocols.

This module contains implementations for various PLC types including Allen Bradley, Siemens, Modbus, and other industrial protocols.

allen_bradley

Allen Bradley PLC Backend.

Implements PLC communication for Allen Bradley PLCs using the pycomm3 library.

AllenBradleyPLC
AllenBradleyPLC(
    plc_name: str,
    ip_address: str,
    plc_type: Optional[str] = None,
    plc_config_file: Optional[str] = None,
    connection_timeout: Optional[float] = None,
    read_timeout: Optional[float] = None,
    write_timeout: Optional[float] = None,
    retry_count: Optional[int] = None,
    retry_delay: Optional[float] = None,
)

Bases: BasePLC

Allen Bradley PLC implementation using pycomm3.

Supports multiple PLC types and Ethernet/IP devices:
  - ControlLogix, CompactLogix, Micro800 (LogixDriver)
  - SLC500, MicroLogix (SLCDriver)
  - Generic Ethernet/IP devices (CIPDriver)

Attributes:

Name Type Description
plc

pycomm3 driver instance (LogixDriver, SLCDriver, or CIPDriver)

driver_type

Type of driver being used

plc_type

Type of PLC (auto-detected or specified)

_tags_cache Optional[List[str]]

Cached list of available tags

_cache_timestamp float

Timestamp of last tag cache update

_cache_ttl float

Time-to-live for tag cache in seconds

Initialize Allen Bradley PLC.

Parameters:

Name Type Description Default
plc_name str

Unique identifier for the PLC

required
ip_address str

IP address of the PLC

required
plc_type Optional[str]

PLC type ('logix', 'slc', 'cip', or 'auto' for auto-detection)

None
plc_config_file Optional[str]

Path to PLC configuration file

None
connection_timeout Optional[float]

Connection timeout in seconds

None
read_timeout Optional[float]

Tag read timeout in seconds

None
write_timeout Optional[float]

Tag write timeout in seconds

None
retry_count Optional[int]

Number of retry attempts

None
retry_delay Optional[float]

Delay between retries in seconds

None

Raises:

Type Description
SDKNotAvailableError

If pycomm3 is not installed

initialize async
initialize() -> Tuple[bool, Any, Any]

Initialize the Allen Bradley PLC connection.

Returns:

Type Description
Tuple[bool, Any, Any]

Tuple of (success, plc_object, device_manager)

connect async
connect() -> bool

Establish connection to the Allen Bradley PLC.

Returns:

Type Description
bool

True if connection successful, False otherwise

disconnect async
disconnect() -> bool

Disconnect from the Allen Bradley PLC.

Returns:

Type Description
bool

True if disconnection successful, False otherwise

is_connected async
is_connected() -> bool

Check if Allen Bradley PLC is currently connected.

Returns:

Type Description
bool

True if connected, False otherwise

read_tag async
read_tag(tags: Union[str, List[str]]) -> Dict[str, Any]

Read values from Allen Bradley PLC tags.

Parameters:

Name Type Description Default
tags Union[str, List[str]]

Single tag name or list of tag names

required

Returns:

Type Description
Dict[str, Any]

Dictionary mapping tag names to their values

Raises:

Type Description
PLCTagReadError

If tag reading fails

write_tag async
write_tag(
    tags: Union[Tuple[str, Any], List[Tuple[str, Any]]],
) -> Dict[str, bool]

Write values to Allen Bradley PLC tags.

Parameters:

Name Type Description Default
tags Union[Tuple[str, Any], List[Tuple[str, Any]]]

Single (tag_name, value) tuple or list of tuples

required

Returns:

Type Description
Dict[str, bool]

Dictionary mapping tag names to write success status

Raises:

Type Description
PLCTagWriteError

If tag writing fails
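The retry_count and retry_delay parameters imply a retry loop around each read/write. A generic sketch of that pattern follows; the helper name, defaults, and blanket exception handling are illustrative, not the backend's actual code (which would catch its PLC-specific errors):

```python
import asyncio
from typing import Any, Awaitable, Callable, Optional

async def with_retries(op: Callable[[], Awaitable[Any]],
                       retry_count: int = 3,
                       retry_delay: float = 0.5) -> Any:
    """Run an async operation, retrying on failure with a fixed delay."""
    last_exc: Optional[BaseException] = None
    for attempt in range(retry_count + 1):
        try:
            return await op()
        except Exception as exc:          # real code would catch PLC errors only
            last_exc = exc
            if attempt < retry_count:
                await asyncio.sleep(retry_delay)
    raise last_exc
```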

get_all_tags async
get_all_tags() -> List[str]

Get list of all available tags on the Allen Bradley PLC.

Returns:

Type Description
List[str]

List of tag names

get_tag_info async
get_tag_info(tag_name: str) -> Dict[str, Any]

Get detailed information about a specific tag.

Parameters:

Name Type Description Default
tag_name str

Name of the tag

required

Returns:

Type Description
Dict[str, Any]

Dictionary with tag information (type, description, etc.)

get_plc_info async
get_plc_info() -> Dict[str, Any]

Get detailed information about the connected PLC using proper pycomm3 methods.

Returns:

Type Description
Dict[str, Any]

Dictionary with PLC information

get_available_plcs staticmethod
get_available_plcs() -> List[str]

Discover available Allen Bradley PLCs using proper pycomm3 discovery methods.

Returns:

Type Description
List[str]

List of PLC identifiers in format "AllenBradley:IP:Type"

discover_async async classmethod
discover_async() -> List[str]

Async wrapper for get_available_plcs() - runs discovery in threadpool.

Use this instead of get_available_plcs() when calling from async context to avoid blocking the event loop during PLC network discovery.

Returns:

Type Description
List[str]

List of discovered PLC identifiers in format "AllenBradley:IP:Type"
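The wrapper pattern described above — running a blocking network scan in a thread pool so the event loop is not stalled — can be sketched as follows; `get_available_plcs_blocking` is a hypothetical stand-in for the real discovery call:

```python
import asyncio
from typing import List

def get_available_plcs_blocking() -> List[str]:
    # Hypothetical stand-in for a slow, blocking network scan.
    return ["AllenBradley:192.168.1.100:logix"]

async def discover_async() -> List[str]:
    # Run the blocking scan in the default thread pool so other
    # coroutines keep running during discovery.
    return await asyncio.to_thread(get_available_plcs_blocking)

print(asyncio.run(discover_async()))  # ['AllenBradley:192.168.1.100:logix']
```

`asyncio.to_thread` is the standard-library way to offload blocking work; calling the blocking function directly inside a coroutine would freeze every other task until the scan finishes.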

get_backend_info staticmethod
get_backend_info() -> Dict[str, Any]

Get information about the Allen Bradley PLC backend.

Returns:

Type Description
Dict[str, Any]

Dictionary with backend information

MockAllenBradleyPLC
MockAllenBradleyPLC(
    plc_name: str,
    ip_address: str,
    plc_type: Optional[str] = None,
    plc_config_file: Optional[str] = None,
    connection_timeout: Optional[float] = None,
    read_timeout: Optional[float] = None,
    write_timeout: Optional[float] = None,
    retry_count: Optional[int] = None,
    retry_delay: Optional[float] = None,
)

Bases: BasePLC

Mock implementation of Allen Bradley PLC for testing and development.

This class provides a complete simulation of the Allen Bradley PLC API without requiring actual hardware. It simulates all three driver types and provides realistic tag behavior for comprehensive testing.

Attributes:

Name Type Description
plc_name

User-defined PLC identifier

ip_address

Simulated IP address

plc_type

PLC type ("logix", "slc", "cip", or "auto")

driver_type

Detected/simulated driver type

_is_connected

Connection status simulation

_tag_values Dict[str, Any]

Simulated tag values storage

_tag_types Dict[str, str]

Tag type mapping for different driver types

_cache_ttl

Tag cache time-to-live

_tags_cache Optional[List[str]]

Cached list of available tags

_cache_timestamp float

Timestamp of last cache update

Initialize mock Allen Bradley PLC.

Parameters:

Name Type Description Default
plc_name str

Unique identifier for the PLC

required
ip_address str

Simulated IP address

required
plc_type Optional[str]

PLC type ('logix', 'slc', 'cip', or 'auto' for auto-detection)

None
plc_config_file Optional[str]

Path to configuration file (simulated)

None
connection_timeout Optional[float]

Connection timeout in seconds

None
read_timeout Optional[float]

Tag read timeout in seconds

None
write_timeout Optional[float]

Tag write timeout in seconds

None
retry_count Optional[int]

Number of retry attempts

None
retry_delay Optional[float]

Delay between retries in seconds

None
initialize async
initialize() -> Tuple[bool, Any, Any]

Initialize the mock Allen Bradley PLC connection.

Returns:

Type Description
Tuple[bool, Any, Any]

Tuple of (success, mock_plc_object, mock_device_manager)

connect async
connect() -> bool

Simulate connection to the Allen Bradley PLC.

Returns:

Type Description
bool

True if connection successful, False otherwise

disconnect async
disconnect() -> bool

Simulate disconnection from the Allen Bradley PLC.

Returns:

Type Description
bool

True if disconnection successful, False otherwise

is_connected async
is_connected() -> bool

Check if mock Allen Bradley PLC is currently connected.

Returns:

Type Description
bool

True if connected, False otherwise

read_tag async
read_tag(tags: Union[str, List[str]]) -> Dict[str, Any]

Simulate reading values from Allen Bradley PLC tags.

Parameters:

Name Type Description Default
tags Union[str, List[str]]

Single tag name or list of tag names

required

Returns:

Type Description
Dict[str, Any]

Dictionary mapping tag names to their values

write_tag async
write_tag(
    tags: Union[Tuple[str, Any], List[Tuple[str, Any]]],
) -> Dict[str, bool]

Simulate writing values to Allen Bradley PLC tags.

Parameters:

Name Type Description Default
tags Union[Tuple[str, Any], List[Tuple[str, Any]]]

Single (tag_name, value) tuple or list of tuples

required

Returns:

Type Description
Dict[str, bool]

Dictionary mapping tag names to write success status

get_all_tags async
get_all_tags() -> List[str]

Get list of all available mock tags.

Returns:

Type Description
List[str]

List of tag names

get_tag_info async
get_tag_info(tag_name: str) -> Dict[str, Any]

Get detailed information about a mock tag.

Parameters:

Name Type Description Default
tag_name str

Name of the tag

required

Returns:

Type Description
Dict[str, Any]

Dictionary with tag information

get_plc_info async
get_plc_info() -> Dict[str, Any]

Get detailed information about the mock PLC.

Returns:

Type Description
Dict[str, Any]

Dictionary with PLC information

get_available_plcs staticmethod
get_available_plcs() -> List[str]

Discover available mock Allen Bradley PLCs.

Returns:

Type Description
List[str]

List of PLC identifiers in format "AllenBradley:IP:Type"

get_backend_info staticmethod
get_backend_info() -> Dict[str, Any]

Get information about the mock Allen Bradley PLC backend.

Returns:

Type Description
Dict[str, Any]

Dictionary with backend information

allen_bradley_plc

Allen Bradley PLC implementation using pycomm3.

Provides communication interface for Allen Bradley PLCs and other Ethernet/IP devices using CIPDriver, LogixDriver, and SLCDriver from pycomm3 library.

AllenBradleyPLC
AllenBradleyPLC(
    plc_name: str,
    ip_address: str,
    plc_type: Optional[str] = None,
    plc_config_file: Optional[str] = None,
    connection_timeout: Optional[float] = None,
    read_timeout: Optional[float] = None,
    write_timeout: Optional[float] = None,
    retry_count: Optional[int] = None,
    retry_delay: Optional[float] = None,
)

Bases: BasePLC

Allen Bradley PLC implementation using pycomm3.

Supports multiple PLC types and Ethernet/IP devices:
  • ControlLogix, CompactLogix, Micro800 (LogixDriver)
  • SLC500, MicroLogix (SLCDriver)
  • Generic Ethernet/IP devices (CIPDriver)

Attributes:

Name Type Description
plc

pycomm3 driver instance (LogixDriver, SLCDriver, or CIPDriver)

driver_type

Type of driver being used

plc_type

Type of PLC (auto-detected or specified)

_tags_cache Optional[List[str]]

Cached list of available tags

_cache_timestamp float

Timestamp of last tag cache update

_cache_ttl float

Time-to-live for tag cache in seconds

Initialize Allen Bradley PLC.

Parameters:

Name Type Description Default
plc_name str

Unique identifier for the PLC

required
ip_address str

IP address of the PLC

required
plc_type Optional[str]

PLC type ('logix', 'slc', 'cip', or 'auto' for auto-detection)

None
plc_config_file Optional[str]

Path to PLC configuration file

None
connection_timeout Optional[float]

Connection timeout in seconds

None
read_timeout Optional[float]

Tag read timeout in seconds

None
write_timeout Optional[float]

Tag write timeout in seconds

None
retry_count Optional[int]

Number of retry attempts

None
retry_delay Optional[float]

Delay between retries in seconds

None

Raises:

Type Description
SDKNotAvailableError

If pycomm3 is not installed

initialize async
initialize() -> Tuple[bool, Any, Any]

Initialize the Allen Bradley PLC connection.

Returns:

Type Description
Tuple[bool, Any, Any]

Tuple of (success, plc_object, device_manager)

connect async
connect() -> bool

Establish connection to the Allen Bradley PLC.

Returns:

Type Description
bool

True if connection successful, False otherwise

disconnect async
disconnect() -> bool

Disconnect from the Allen Bradley PLC.

Returns:

Type Description
bool

True if disconnection successful, False otherwise

is_connected async
is_connected() -> bool

Check if Allen Bradley PLC is currently connected.

Returns:

Type Description
bool

True if connected, False otherwise

read_tag async
read_tag(tags: Union[str, List[str]]) -> Dict[str, Any]

Read values from Allen Bradley PLC tags.

Parameters:

Name Type Description Default
tags Union[str, List[str]]

Single tag name or list of tag names

required

Returns:

Type Description
Dict[str, Any]

Dictionary mapping tag names to their values

Raises:

Type Description
PLCTagReadError

If tag reading fails

write_tag async
write_tag(
    tags: Union[Tuple[str, Any], List[Tuple[str, Any]]],
) -> Dict[str, bool]

Write values to Allen Bradley PLC tags.

Parameters:

Name Type Description Default
tags Union[Tuple[str, Any], List[Tuple[str, Any]]]

Single (tag_name, value) tuple or list of tuples

required

Returns:

Type Description
Dict[str, bool]

Dictionary mapping tag names to write success status

Raises:

Type Description
PLCTagWriteError

If tag writing fails

get_all_tags async
get_all_tags() -> List[str]

Get list of all available tags on the Allen Bradley PLC.

Returns:

Type Description
List[str]

List of tag names

get_tag_info async
get_tag_info(tag_name: str) -> Dict[str, Any]

Get detailed information about a specific tag.

Parameters:

Name Type Description Default
tag_name str

Name of the tag

required

Returns:

Type Description
Dict[str, Any]

Dictionary with tag information (type, description, etc.)

get_plc_info async
get_plc_info() -> Dict[str, Any]

Get detailed information about the connected PLC using proper pycomm3 methods.

Returns:

Type Description
Dict[str, Any]

Dictionary with PLC information

get_available_plcs staticmethod
get_available_plcs() -> List[str]

Discover available Allen Bradley PLCs using proper pycomm3 discovery methods.

Returns:

Type Description
List[str]

List of PLC identifiers in format "AllenBradley:IP:Type"

discover_async async classmethod
discover_async() -> List[str]

Async wrapper for get_available_plcs() - runs discovery in threadpool.

Use this instead of get_available_plcs() when calling from async context to avoid blocking the event loop during PLC network discovery.

Returns:

Type Description
List[str]

List of discovered PLC identifiers in format "AllenBradley:IP:Type"

get_backend_info staticmethod
get_backend_info() -> Dict[str, Any]

Get information about the Allen Bradley PLC backend.

Returns:

Type Description
Dict[str, Any]

Dictionary with backend information

mock_allen_bradley

Mock Allen Bradley PLC Implementation

This module provides a mock implementation of Allen Bradley PLCs for testing and development without requiring actual hardware or the pycomm3 SDK.

Features
  • Complete simulation of all three driver types (Logix, SLC, CIP)
  • Realistic tag data generation and management
  • Configurable number of mock PLCs
  • Error simulation capabilities for testing
  • No hardware dependencies
Components
  • MockAllenBradleyPLC: Mock PLC implementation
Usage

from mindtrace.hardware.plcs.backends.allen_bradley import MockAllenBradleyPLC

Initialize mock PLC

plc = MockAllenBradleyPLC("TestPLC", "192.168.1.100", plc_type="logix")

Use exactly like real PLC

await plc.connect()
tags = await plc.read_tag(["Motor1_Speed", "Conveyor_Status"])
await plc.write_tag([("Pump1_Command", True)])
await plc.disconnect()

MockAllenBradleyPLC
MockAllenBradleyPLC(
    plc_name: str,
    ip_address: str,
    plc_type: Optional[str] = None,
    plc_config_file: Optional[str] = None,
    connection_timeout: Optional[float] = None,
    read_timeout: Optional[float] = None,
    write_timeout: Optional[float] = None,
    retry_count: Optional[int] = None,
    retry_delay: Optional[float] = None,
)

Bases: BasePLC

Mock implementation of Allen Bradley PLC for testing and development.

This class provides a complete simulation of the Allen Bradley PLC API without requiring actual hardware. It simulates all three driver types and provides realistic tag behavior for comprehensive testing.

Attributes:

Name Type Description
plc_name

User-defined PLC identifier

ip_address

Simulated IP address

plc_type

PLC type ("logix", "slc", "cip", or "auto")

driver_type

Detected/simulated driver type

_is_connected

Connection status simulation

_tag_values Dict[str, Any]

Simulated tag values storage

_tag_types Dict[str, str]

Tag type mapping for different driver types

_cache_ttl

Tag cache time-to-live

_tags_cache Optional[List[str]]

Cached list of available tags

_cache_timestamp float

Timestamp of last cache update

Initialize mock Allen Bradley PLC.

Parameters:

Name Type Description Default
plc_name str

Unique identifier for the PLC

required
ip_address str

Simulated IP address

required
plc_type Optional[str]

PLC type ('logix', 'slc', 'cip', or 'auto' for auto-detection)

None
plc_config_file Optional[str]

Path to configuration file (simulated)

None
connection_timeout Optional[float]

Connection timeout in seconds

None
read_timeout Optional[float]

Tag read timeout in seconds

None
write_timeout Optional[float]

Tag write timeout in seconds

None
retry_count Optional[int]

Number of retry attempts

None
retry_delay Optional[float]

Delay between retries in seconds

None
initialize async
initialize() -> Tuple[bool, Any, Any]

Initialize the mock Allen Bradley PLC connection.

Returns:

Type Description
Tuple[bool, Any, Any]

Tuple of (success, mock_plc_object, mock_device_manager)

connect async
connect() -> bool

Simulate connection to the Allen Bradley PLC.

Returns:

Type Description
bool

True if connection successful, False otherwise

disconnect async
disconnect() -> bool

Simulate disconnection from the Allen Bradley PLC.

Returns:

Type Description
bool

True if disconnection successful, False otherwise

is_connected async
is_connected() -> bool

Check if mock Allen Bradley PLC is currently connected.

Returns:

Type Description
bool

True if connected, False otherwise

read_tag async
read_tag(tags: Union[str, List[str]]) -> Dict[str, Any]

Simulate reading values from Allen Bradley PLC tags.

Parameters:

Name Type Description Default
tags Union[str, List[str]]

Single tag name or list of tag names

required

Returns:

Type Description
Dict[str, Any]

Dictionary mapping tag names to their values

write_tag async
write_tag(
    tags: Union[Tuple[str, Any], List[Tuple[str, Any]]],
) -> Dict[str, bool]

Simulate writing values to Allen Bradley PLC tags.

Parameters:

Name Type Description Default
tags Union[Tuple[str, Any], List[Tuple[str, Any]]]

Single (tag_name, value) tuple or list of tuples

required

Returns:

Type Description
Dict[str, bool]

Dictionary mapping tag names to write success status

get_all_tags async
get_all_tags() -> List[str]

Get list of all available mock tags.

Returns:

Type Description
List[str]

List of tag names

get_tag_info async
get_tag_info(tag_name: str) -> Dict[str, Any]

Get detailed information about a mock tag.

Parameters:

Name Type Description Default
tag_name str

Name of the tag

required

Returns:

Type Description
Dict[str, Any]

Dictionary with tag information

get_plc_info async
get_plc_info() -> Dict[str, Any]

Get detailed information about the mock PLC.

Returns:

Type Description
Dict[str, Any]

Dictionary with PLC information

get_available_plcs staticmethod
get_available_plcs() -> List[str]

Discover available mock Allen Bradley PLCs.

Returns:

Type Description
List[str]

List of PLC identifiers in format "AllenBradley:IP:Type"

get_backend_info staticmethod
get_backend_info() -> Dict[str, Any]

Get information about the mock Allen Bradley PLC backend.

Returns:

Type Description
Dict[str, Any]

Dictionary with backend information

base

Abstract base classes for PLC implementations.

This module defines the interface that all PLC backends must implement, providing a consistent API for PLC operations across different manufacturers and communication protocols.

Features
  • Abstract base class with comprehensive async PLC interface
  • Consistent async pattern matching camera backends
  • Type-safe method signatures with full type hints
  • Configuration system integration
  • Resource management and cleanup
  • Default implementations for optional features
  • Standardized constructor signature across all backends
  • Retry logic with exponential backoff
  • Connection management and monitoring
Usage

This is an abstract base class and cannot be instantiated directly. PLC backends should inherit from BasePLC and implement all abstract methods.

Example

class MyPLCBackend(BasePLC):
    async def initialize(self) -> Tuple[bool, Any, Any]:
        # Implementation here
        pass

    async def connect(self) -> bool:
        # Implementation here
        pass

    async def read_tag(self, tags: Union[str, List[str]]) -> Dict[str, Any]:
        # Implementation here
        pass

    # ... implement other abstract methods
Backend Requirements

All PLC backends must implement the following abstract methods:
  • initialize(): Establish initial connection and setup
  • connect(): Connect to the PLC
  • disconnect(): Disconnect from the PLC
  • is_connected(): Check connection status
  • read_tag(): Read tag values from PLC
  • write_tag(): Write tag values to PLC
  • get_all_tags(): List all available tags
  • get_tag_info(): Get detailed tag information
  • get_available_plcs(): Static method for PLC discovery
  • get_backend_info(): Static method for backend information
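The abstract-base pattern these requirements describe can be shown with a self-contained sketch; `BasePLCSketch` and `MemoryPLC` below are hypothetical minimal stand-ins (the real BasePLC also carries configuration, retry logic, and the full method set):

```python
import abc
import asyncio
from typing import Any, Dict, List, Union

class BasePLCSketch(abc.ABC):
    """Hypothetical minimal stand-in for BasePLC, for illustration only."""

    @abc.abstractmethod
    async def connect(self) -> bool: ...

    @abc.abstractmethod
    async def read_tag(self, tags: Union[str, List[str]]) -> Dict[str, Any]: ...

class MemoryPLC(BasePLCSketch):
    # A concrete backend must implement every abstract method,
    # otherwise instantiation raises TypeError.
    async def connect(self) -> bool:
        return True

    async def read_tag(self, tags: Union[str, List[str]]) -> Dict[str, Any]:
        names = [tags] if isinstance(tags, str) else tags
        return {name: 0 for name in names}

async def main() -> bool:
    plc = MemoryPLC()  # BasePLCSketch() itself would raise TypeError
    return await plc.connect()

print(asyncio.run(main()))  # True
```

Leaving any abstract method unimplemented keeps the subclass abstract as well, which is how the base class enforces a consistent API across backends.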

Error Handling

Backends should raise appropriate exceptions from the PLC exception hierarchy:
  • PLCError: Base exception for all PLC-related errors
  • PLCNotFoundError: PLC not found during discovery
  • PLCConnectionError: Connection establishment or maintenance failures
  • PLCInitializationError: PLC initialization failures
  • PLCCommunicationError: Communication protocol errors
  • PLCTagError: Tag-related operation errors
  • PLCTimeoutError: Operation timeout errors
  • PLCConfigurationError: Configuration-related errors
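Because the exceptions form a hierarchy, callers can catch the base class to handle any PLC-related error. A minimal sketch with local stand-in classes (real code would import these from the hardware package):

```python
# Hypothetical stand-ins mirroring the documented hierarchy.
class PLCError(Exception): ...
class PLCTagError(PLCError): ...
class PLCTagReadError(PLCTagError): ...

def handle() -> str:
    try:
        raise PLCTagReadError("tag 'Motor1_Speed' not found")
    except PLCError as exc:
        # Catching the base class also catches every subclass.
        return type(exc).__name__

print(handle())  # PLCTagReadError
```

Code that needs fine-grained recovery (e.g. retry only on timeouts) catches the specific subclass first and lets the broader handler deal with the rest.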

BasePLC
BasePLC(
    plc_name: str,
    ip_address: str,
    plc_config_file: Optional[str] = None,
    connection_timeout: Optional[float] = None,
    read_timeout: Optional[float] = None,
    write_timeout: Optional[float] = None,
    retry_count: Optional[int] = None,
    retry_delay: Optional[float] = None,
)

Bases: MindtraceABC

Abstract base class for PLC implementations.

This class defines the interface that all PLC backends must implement to ensure consistent behavior across different manufacturers and protocols.

Attributes:

Name Type Description
plc_name

Unique identifier for the PLC instance

plc_config_file

Path to PLC-specific configuration file

ip_address

IP address of the PLC

connection_timeout

Connection timeout in seconds

read_timeout

Tag read timeout in seconds

write_timeout

Tag write timeout in seconds

retry_count

Number of retry attempts for operations

retry_delay

Delay between retry attempts in seconds

plc

The underlying PLC connection object

device_manager

Device-specific manager instance

initialized

Whether the PLC has been initialized

Initialize the PLC instance.

Parameters:

Name Type Description Default
plc_name str

Unique identifier for the PLC

required
ip_address str

IP address of the PLC

required
plc_config_file Optional[str]

Path to PLC configuration file

None
connection_timeout Optional[float]

Connection timeout in seconds

None
read_timeout Optional[float]

Tag read timeout in seconds

None
write_timeout Optional[float]

Tag write timeout in seconds

None
retry_count Optional[int]

Number of retry attempts

None
retry_delay Optional[float]

Delay between retries in seconds

None
initialize abstractmethod async
initialize() -> Tuple[bool, Any, Any]

Initialize the PLC connection.

Returns:

Type Description
Tuple[bool, Any, Any]

Tuple of (success, plc_object, device_manager)

connect abstractmethod async
connect() -> bool

Establish connection to the PLC.

Returns:

Type Description
bool

True if connection successful, False otherwise

disconnect abstractmethod async
disconnect() -> bool

Disconnect from the PLC.

Returns:

Type Description
bool

True if disconnection successful, False otherwise

is_connected abstractmethod async
is_connected() -> bool

Check if PLC is currently connected.

Returns:

Type Description
bool

True if connected, False otherwise

read_tag abstractmethod async
read_tag(tags: Union[str, List[str]]) -> Dict[str, Any]

Read values from PLC tags.

Parameters:

Name Type Description Default
tags Union[str, List[str]]

Single tag name or list of tag names

required

Returns:

Type Description
Dict[str, Any]

Dictionary mapping tag names to their values

write_tag abstractmethod async
write_tag(
    tags: Union[Tuple[str, Any], List[Tuple[str, Any]]],
) -> Dict[str, bool]

Write values to PLC tags.

Parameters:

Name Type Description Default
tags Union[Tuple[str, Any], List[Tuple[str, Any]]]

Single (tag_name, value) tuple or list of tuples

required

Returns:

Type Description
Dict[str, bool]

Dictionary mapping tag names to write success status

get_all_tags abstractmethod async
get_all_tags() -> List[str]

Get list of all available tags on the PLC.

Returns:

Type Description
List[str]

List of tag names

get_tag_info abstractmethod async
get_tag_info(tag_name: str) -> Dict[str, Any]

Get detailed information about a specific tag.

Parameters:

Name Type Description Default
tag_name str

Name of the tag

required

Returns:

Type Description
Dict[str, Any]

Dictionary with tag information (type, description, etc.)

get_available_plcs abstractmethod staticmethod
get_available_plcs() -> List[str]

Discover available PLCs for this backend.

Returns:

Type Description
List[str]

List of PLC identifiers in format "Backend:Identifier"

get_backend_info abstractmethod staticmethod
get_backend_info() -> Dict[str, Any]

Get information about this PLC backend.

Returns:

Type Description
Dict[str, Any]

Dictionary with backend information

reconnect async
reconnect() -> bool

Attempt to reconnect to the PLC.

Returns:

Type Description
bool

True if reconnection successful, False otherwise

read_tag_with_retry async
read_tag_with_retry(tags: Union[str, List[str]]) -> Dict[str, Any]

Read tags with retry mechanism.

Parameters:

Name Type Description Default
tags Union[str, List[str]]

Single tag name or list of tag names

required

Returns:

Type Description
Dict[str, Any]

Dictionary mapping tag names to their values

Raises:

Type Description
PLCTagError

If all retry attempts fail

write_tag_with_retry async
write_tag_with_retry(
    tags: Union[Tuple[str, Any], List[Tuple[str, Any]]],
) -> Dict[str, bool]

Write tags with retry mechanism.

Parameters:

Name Type Description Default
tags Union[Tuple[str, Any], List[Tuple[str, Any]]]

Single (tag_name, value) tuple or list of tuples

required

Returns:

Type Description
Dict[str, bool]

Dictionary mapping tag names to write success status

Raises:

Type Description
PLCTagError

If all retry attempts fail
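The retry behaviour documented for read_tag_with_retry/write_tag_with_retry (repeat on failure, back off exponentially, raise PLCTagError when attempts are exhausted) can be sketched as follows; the `PLCTagError` class and the flaky reader here are illustrative stand-ins, not the real implementation:

```python
import asyncio
from typing import Any, Dict, List, Optional

class PLCTagError(Exception):
    """Local stand-in for the real PLCTagError, for illustration only."""

async def read_with_retry(read_once, tags: List[str], retry_count: int = 3,
                          retry_delay: float = 0.01) -> Dict[str, Any]:
    # Retry loop with exponential backoff.
    delay = retry_delay
    last_error: Optional[Exception] = None
    for _ in range(retry_count):
        try:
            return await read_once(tags)
        except Exception as exc:
            last_error = exc
            await asyncio.sleep(delay)
            delay *= 2  # back off: 0.01s, 0.02s, 0.04s, ...
    raise PLCTagError(f"all {retry_count} attempts failed: {last_error}")

attempts = {"n": 0}

async def flaky_read(tags: List[str]) -> Dict[str, Any]:
    # Simulated transient fault: fails twice, then succeeds.
    attempts["n"] += 1
    if attempts["n"] < 3:
        raise ConnectionError("transient fault")
    return {tag: 42 for tag in tags}

print(asyncio.run(read_with_retry(flaky_read, ["Temperature"])))  # {'Temperature': 42}
```

The backoff keeps a briefly unreachable PLC from being hammered with immediate retries while still recovering quickly from transient network faults.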

plc_manager

Modern PLC Manager for Mindtrace Hardware System

A comprehensive PLC management system that provides unified access to multiple PLC backends with async operations, proper resource management, and batch processing capabilities.

Key Features
  • Automatic PLC discovery and registration
  • Unified interface for different PLC manufacturers
  • Async operations with proper error handling
  • Batch tag read/write operations
  • Connection management and monitoring
  • Thread-safe operations with proper locking
  • Comprehensive configuration management
  • Integrated logging and status reporting
Supported Backends
  • Allen-Bradley: ControlLogix, CompactLogix PLCs (pycomm3)
  • Siemens: S7-300, S7-400, S7-1200, S7-1500 PLCs (python-snap7)
  • Modbus: Modbus TCP/RTU devices (pymodbus)
  • Mock backends for testing and development
Requirements
  • pycomm3: Allen-Bradley PLC communication
  • python-snap7: Siemens PLC communication
  • pymodbus: Modbus device communication
  • asyncio: Async operations support
Installation

pip install pycomm3 python-snap7 pymodbus

Usage
Simple usage with discovery

async with PLCManager() as manager:
    plcs = await manager.discover_plcs()
    await manager.register_plc("PLC1", "AllenBradley", "192.168.1.100")
    await manager.connect_plc("PLC1")

    # Read tags
    values = await manager.read_tag("PLC1", ["Tag1", "Tag2"])

    # Write tags
    await manager.write_tag("PLC1", [("Tag1", 100), ("Tag2", 200)])
Batch operations

async with PLCManager() as manager:
    # Register multiple PLCs
    await manager.register_plc("PLC1", "AllenBradley", "192.168.1.100")
    await manager.register_plc("PLC2", "Siemens", "192.168.1.101")

    # Connect all PLCs
    results = await manager.connect_all_plcs()

    # Batch read from multiple PLCs
    read_requests = [
        ("PLC1", ["Temperature", "Pressure"]),
        ("PLC2", ["Speed", "Position"])
    ]
    values = await manager.read_tags_batch(read_requests)
Configuration

All parameters are configurable via the hardware configuration system:
  • MINDTRACE_HW_PLC_AUTO_DISCOVERY: Enable automatic PLC discovery
  • MINDTRACE_HW_PLC_CONNECTION_TIMEOUT: Connection timeout in seconds
  • MINDTRACE_HW_PLC_READ_TIMEOUT: Tag read timeout in seconds
  • MINDTRACE_HW_PLC_WRITE_TIMEOUT: Tag write timeout in seconds
  • MINDTRACE_HW_PLC_RETRY_COUNT: Number of retry attempts
  • MINDTRACE_HW_PLC_MAX_CONCURRENT_CONNECTIONS: Maximum concurrent connections
  • MINDTRACE_HW_PLC_ALLEN_BRADLEY_ENABLED: Enable Allen-Bradley backend
  • MINDTRACE_HW_PLC_SIEMENS_ENABLED: Enable Siemens backend
  • MINDTRACE_HW_PLC_MODBUS_ENABLED: Enable Modbus backend

Error Handling

The module uses a comprehensive exception hierarchy for precise error reporting:
  • PLCError: Base exception for all PLC-related errors
  • PLCNotFoundError: PLC not found during discovery or registration
  • PLCConnectionError: Connection establishment or maintenance failures
  • PLCInitializationError: PLC initialization failures
  • PLCCommunicationError: Communication protocol errors
  • PLCTagError: Tag-related operation errors
  • PLCTagReadError: Tag read operation failures
  • PLCTagWriteError: Tag write operation failures
  • HardwareOperationError: General hardware operation failures

Thread Safety

All PLC operations are thread-safe. Multiple PLCs can be operated simultaneously from different threads without interference.

Performance Notes
  • PLC discovery may take several seconds depending on network size
  • Batch operations are more efficient than individual tag operations
  • Connection pooling is used for optimal performance
  • Consider PLC-specific optimizations for production use
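The efficiency of batch operations comes from fanning reads out concurrently rather than waiting on each PLC in turn. A sketch of that fan-out with hypothetical stub PLCs, producing the same result shape as read_tags_batch (PLC name mapped to its tag results):

```python
import asyncio
from typing import Any, Dict, List, Tuple

async def read_from_plc(plc_name: str, tags: List[str]) -> Dict[str, Any]:
    # Hypothetical stub: each "PLC" answers after a simulated round-trip.
    await asyncio.sleep(0.01)
    return {tag: f"{plc_name}:{tag}" for tag in tags}

async def read_tags_batch(
    requests: List[Tuple[str, List[str]]],
) -> Dict[str, Dict[str, Any]]:
    # One coroutine per PLC, gathered concurrently: total latency is
    # roughly one round-trip instead of one round-trip per PLC.
    results = await asyncio.gather(
        *(read_from_plc(name, tags) for name, tags in requests)
    )
    return {name: result for (name, _), result in zip(requests, results)}

requests = [("PLC1", ["Temperature"]), ("PLC2", ["Speed"])]
print(asyncio.run(read_tags_batch(requests)))
```

`asyncio.gather` preserves request order, so zipping the results back onto the request list is safe.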
PLCManager
PLCManager()

Bases: Mindtrace

Unified PLC management system for industrial automation.

This manager provides a comprehensive interface for managing PLCs from different manufacturers with support for discovery, registration, connection management, and batch tag operations. It handles multiple PLC backends transparently and provides thread-safe operations with proper error handling.

The manager supports:
  • Automatic PLC discovery across multiple backends
  • Dynamic PLC registration and connection management
  • Batch tag read/write operations for optimal performance
  • Connection monitoring and automatic reconnection
  • Comprehensive error handling and logging
  • Thread-safe operations with proper resource management

Supported PLC Types:
  • Allen-Bradley: ControlLogix, CompactLogix, MicroLogix PLCs
  • Siemens: S7-300, S7-400, S7-1200, S7-1500 PLCs (Future)
  • Modbus: Modbus TCP/RTU devices (Future)
  • Mock PLCs: For testing and development

Attributes:

Name Type Description
plcs Dict[str, BasePLC]

Dictionary mapping PLC names to PLC instances

config

Hardware configuration manager instance

logger

Centralized logger for PLC operations

Example
Basic usage

async with PLCManager() as manager:
    # Discover available PLCs
    discovered = await manager.discover_plcs()

    # Register and connect to a PLC
    await manager.register_plc("PLC1", "AllenBradley", "192.168.1.100")
    await manager.connect_plc("PLC1")

    # Read and write tags
    values = await manager.read_tag("PLC1", ["Temperature", "Pressure"])
    await manager.write_tag("PLC1", [("Setpoint", 75.0)])
Batch operations

async with PLCManager() as manager:
    # Register multiple PLCs
    await manager.register_plc("PLC1", "AllenBradley", "192.168.1.100")
    await manager.register_plc("PLC2", "AllenBradley", "192.168.1.101")

    # Batch read from multiple PLCs
    read_requests = [
        ("PLC1", ["Temperature", "Pressure"]),
        ("PLC2", ["Speed", "Position"])
    ]
    results = await manager.read_tags_batch(read_requests)

Initialize the PLC manager.

discover_plcs async
discover_plcs() -> Dict[str, List[str]]

Discover available PLCs from all enabled backends.

Returns:

Type Description
Dict[str, List[str]]

Dictionary mapping backend names to lists of discovered PLCs

register_plc async
register_plc(
    plc_name: str,
    backend: str,
    ip_address: str,
    plc_type: Optional[str] = None,
    **kwargs
) -> bool

Register a PLC with the manager.

Parameters:

Name Type Description Default
plc_name str

Unique identifier for the PLC

required
backend str

Backend type ("AllenBradley", "Siemens", "Modbus")

required
ip_address str

IP address of the PLC

required
plc_type Optional[str]

Specific PLC type (backend-dependent)

None
**kwargs

Additional backend-specific parameters

{}

Returns:

Type Description
bool

True if registration successful, False otherwise

unregister_plc async
unregister_plc(plc_name: str) -> bool

Unregister a PLC from the manager.

Parameters:

Name Type Description Default
plc_name str

Name of the PLC to unregister

required

Returns:

Type Description
bool

True if unregistration successful, False otherwise

connect_plc async
connect_plc(plc_name: str) -> bool

Connect to a specific PLC.

Parameters:

Name Type Description Default
plc_name str

Name of the PLC to connect

required

Returns:

Type Description
bool

True if connection successful, False otherwise

disconnect_plc async
disconnect_plc(plc_name: str) -> bool

Disconnect from a specific PLC.

Parameters:

Name Type Description Default
plc_name str

Name of the PLC to disconnect

required

Returns:

Type Description
bool

True if disconnection successful, False otherwise

connect_all_plcs async
connect_all_plcs() -> Dict[str, bool]

Connect to all registered PLCs.

Returns:

Type Description
Dict[str, bool]

Dictionary mapping PLC names to connection success status

disconnect_all_plcs async
disconnect_all_plcs() -> Dict[str, bool]

Disconnect from all registered PLCs.

Returns:

Type Description
Dict[str, bool]

Dictionary mapping PLC names to disconnection success status

read_tag async
read_tag(plc_name: str, tags: Union[str, List[str]]) -> Dict[str, Any]

Read tags from a specific PLC.

Parameters:

Name Type Description Default
plc_name str

Name of the PLC

required
tags Union[str, List[str]]

Single tag name or list of tag names

required

Returns:

Type Description
Dict[str, Any]

Dictionary mapping tag names to their values

write_tag async
write_tag(
    plc_name: str, tags: Union[Tuple[str, Any], List[Tuple[str, Any]]]
) -> Dict[str, bool]

Write tags to a specific PLC.

Parameters:

Name Type Description Default
plc_name str

Name of the PLC

required
tags Union[Tuple[str, Any], List[Tuple[str, Any]]]

Single (tag_name, value) tuple or list of tuples

required

Returns:

Type Description
Dict[str, bool]

Dictionary mapping tag names to write success status

read_tags_batch async
read_tags_batch(
    requests: List[Tuple[str, Union[str, List[str]]]],
) -> Dict[str, Dict[str, Any]]

Read tags from multiple PLCs in batch.

Parameters:

Name Type Description Default
requests List[Tuple[str, Union[str, List[str]]]]

List of (plc_name, tags) tuples

required

Returns:

Type Description
Dict[str, Dict[str, Any]]

Dictionary mapping PLC names to their tag read results

write_tags_batch async
write_tags_batch(
    requests: List[Tuple[str, Union[Tuple[str, Any], List[Tuple[str, Any]]]]],
) -> Dict[str, Dict[str, bool]]

Write tags to multiple PLCs in batch.

Parameters:

Name Type Description Default
requests List[Tuple[str, Union[Tuple[str, Any], List[Tuple[str, Any]]]]]

List of (plc_name, tags) tuples

required

Returns:

Type Description
Dict[str, Dict[str, bool]]

Dictionary mapping PLC names to their tag write results

get_plc_status async
get_plc_status(plc_name: str) -> Dict[str, Any]

Get status information for a specific PLC.

Parameters:

Name Type Description Default
plc_name str

Name of the PLC

required

Returns:

Type Description
Dict[str, Any]

Dictionary with PLC status information

get_all_plc_status async
get_all_plc_status() -> Dict[str, Dict[str, Any]]

Get status information for all registered PLCs.

Returns:

Type Description
Dict[str, Dict[str, Any]]

Dictionary mapping PLC names to their status information

get_plc_tags async
get_plc_tags(plc_name: str) -> List[str]

Get list of available tags for a specific PLC.

Parameters:

Name Type Description Default
plc_name str

Name of the PLC

required

Returns:

Type Description
List[str]

List of available tag names

get_registered_plcs
get_registered_plcs() -> List[str]

Get list of registered PLC names.

Returns:

Type Description
List[str]

List of registered PLC names

get_backend_info
get_backend_info() -> Dict[str, Dict[str, Any]]

Get information about available PLC backends.

Returns:

Type Description
Dict[str, Dict[str, Any]]

Dictionary mapping backend names to their information

cleanup async
cleanup()

Clean up all PLC connections and resources.

scanners_3d

3D Scanner module for structured light and other 3D scanning technologies.

This module provides support for 3D scanners including:
  • Photoneo PhoXi structured light scanners
  • Future: Time-of-Flight (ToF) cameras
  • Future: LiDAR sensors

Usage

from mindtrace.hardware.scanners_3d import Scanner3D, AsyncScanner3D

Synchronous usage

with Scanner3D() as scanner:
    result = scanner.capture()
    point_cloud = scanner.capture_point_cloud()
    point_cloud.save_ply("output.ply")

Async usage

async with await AsyncScanner3D.open() as scanner:
    result = await scanner.capture()
    point_cloud = await scanner.capture_point_cloud()

PhotoneoBackend
PhotoneoBackend(
    serial_number: Optional[str] = None,
    cti_path: Optional[str] = None,
    op_timeout_s: float = 30.0,
    buffer_count: int = 5,
)

Bases: Scanner3DBackend

Backend for Photoneo PhoXi 3D scanners using Harvesters.

Photoneo PhoXi scanners are structured light 3D sensors that output multiple data components: Range, Intensity, Confidence, Normal, and Color.

This backend uses GigE Vision protocol via Harvesters library with Matrix Vision GenTL Producer.

Extends Scanner3DBackend to provide consistent interface across different 3D scanner manufacturers.

Requirements
  • Harvesters library: pip install harvesters
  • Matrix Vision mvIMPACT Acquire SDK with GenTL Producer
  • Photoneo PhoXi firmware version 1.13.0 or later
Usage

backend = PhotoneoBackend(serial_number="ABC123")
await backend.initialize()
result = await backend.capture()
print(result.range_shape)
await backend.close()

Initialize Photoneo backend.

Parameters:

Name Type Description Default
serial_number Optional[str]

Serial number of specific scanner. If None, opens first available Photoneo device.

None
cti_path Optional[str]

Path to GenTL Producer (.cti file). Auto-detected if None.

None
op_timeout_s float

Timeout in seconds for SDK operations (default 30s).

30.0
buffer_count int

Number of frame buffers for acquisition.

5

Raises:

Type Description
SDKNotAvailableError

If Harvesters is not available

CameraConfigurationError

If CTI file not found

name property
name: str

Get scanner name.

is_open property
is_open: bool

Check if scanner is open.

device_info property
device_info: Optional[Dict[str, Any]]

Get device information.

discover staticmethod
discover() -> List[str]

Discover available Photoneo devices.

Returns:

Type Description
List[str]

List of serial numbers for available Photoneo devices

Raises:

Type Description
SDKNotAvailableError

If Harvesters is not available

discover_async async classmethod
discover_async() -> List[str]

Async wrapper for discover().

Returns:

Type Description
List[str]

List of serial numbers for available Photoneo devices

discover_detailed staticmethod
discover_detailed() -> List[Dict[str, str]]

Discover Photoneo devices with detailed information.

Returns:

Type Description
List[Dict[str, str]]

List of dictionaries containing device information

discover_detailed_async async classmethod
discover_detailed_async() -> List[Dict[str, str]]

Async wrapper for discover_detailed().

Returns:

Type Description
List[Dict[str, str]]

List of dictionaries containing device information

initialize async
initialize() -> bool

Initialize scanner connection.

Returns:

Type Description
bool

True if initialization successful

Raises:

Type Description
CameraNotFoundError

If scanner not found

CameraConnectionError

If connection fails

capture async
capture(
    timeout_ms: int = 10000,
    enable_range: bool = True,
    enable_intensity: bool = True,
    enable_confidence: bool = False,
    enable_normal: bool = False,
    enable_color: bool = False,
) -> ScanResult

Capture 3D scan data with multiple components.

Parameters:

Name Type Description Default
timeout_ms int

Capture timeout in milliseconds

10000
enable_range bool

Whether to capture range/depth data

True
enable_intensity bool

Whether to capture intensity data

True
enable_confidence bool

Whether to capture confidence data

False
enable_normal bool

Whether to capture surface normals

False
enable_color bool

Whether to capture color texture

False

Returns:

Type Description
ScanResult

ScanResult containing captured data

Raises:

Type Description
CameraConnectionError

If scanner not opened

CameraCaptureError

If capture fails

CameraTimeoutError

If capture times out

capture_point_cloud async
capture_point_cloud(
    include_colors: bool = True,
    include_confidence: bool = False,
    timeout_ms: int = 10000,
) -> PointCloudData

Capture and generate 3D point cloud.

Parameters:

Name Type Description Default
include_colors bool

Whether to include color/intensity

True
include_confidence bool

Whether to include confidence values

False
timeout_ms int

Capture timeout in milliseconds

10000

Returns:

Type Description
PointCloudData

PointCloudData with 3D points

Raises:

Type Description
CameraConnectionError

If scanner not opened

CameraCaptureError

If capture fails

close async
close() -> None

Close scanner and release resources.

get_capabilities async
get_capabilities() -> ScannerCapabilities

Get scanner capabilities and available settings.

get_configuration async
get_configuration() -> ScannerConfiguration

Get current scanner configuration.

set_configuration async
set_configuration(config: ScannerConfiguration) -> None

Apply scanner configuration.

set_exposure_time async
set_exposure_time(milliseconds: float) -> None

Set exposure time in milliseconds.

get_exposure_time async
get_exposure_time() -> float

Get current exposure time in milliseconds.

set_operation_mode async
set_operation_mode(mode: str) -> None

Set scanner operation mode.

get_operation_mode async
get_operation_mode() -> str

Get current operation mode.

set_coding_strategy async
set_coding_strategy(strategy: str) -> None

Set structured light coding strategy.

get_coding_strategy async
get_coding_strategy() -> str

Get current coding strategy.

set_coding_quality async
set_coding_quality(quality: str) -> None

Set scan quality/speed tradeoff.

get_coding_quality async
get_coding_quality() -> str

Get current coding quality.

set_led_power async
set_led_power(power: int) -> None

Set LED illumination power (0-4095).

get_led_power async
get_led_power() -> int

Get current LED power level.

set_laser_power async
set_laser_power(power: int) -> None

Set laser/projector power (1-4095).

get_laser_power async
get_laser_power() -> int

Get current laser power level.

set_texture_source async
set_texture_source(source: str) -> None

Set texture/intensity data source.

get_texture_source async
get_texture_source() -> str

Get current texture source.

set_output_topology async
set_output_topology(topology: str) -> None

Set point cloud output topology.

get_output_topology async
get_output_topology() -> str

Get current output topology.

set_camera_space async
set_camera_space(space: str) -> None

Set coordinate system reference camera.

get_camera_space async
get_camera_space() -> str

Get current camera space.

set_normals_estimation_radius async
set_normals_estimation_radius(radius: int) -> None

Set radius for surface normal estimation (0-4).

get_normals_estimation_radius async
get_normals_estimation_radius() -> int

Get current normals estimation radius.

set_max_inaccuracy async
set_max_inaccuracy(value: float) -> None

Set maximum allowed inaccuracy for point filtering (0-100).

get_max_inaccuracy async
get_max_inaccuracy() -> float

Get current max inaccuracy setting.

set_hole_filling async
set_hole_filling(enabled: bool) -> None

Enable/disable hole filling in point cloud.

get_hole_filling async
get_hole_filling() -> bool

Get hole filling state.

set_calibration_volume_only async
set_calibration_volume_only(enabled: bool) -> None

Enable/disable filtering to calibration volume only.

get_calibration_volume_only async
get_calibration_volume_only() -> bool

Get calibration volume filtering state.

set_trigger_mode async
set_trigger_mode(mode: str) -> None

Set trigger mode ('Software', 'Hardware', 'Continuous').

get_trigger_mode async
get_trigger_mode() -> str

Get current trigger mode.

set_hardware_trigger async
set_hardware_trigger(enabled: bool) -> None

Enable/disable hardware trigger.

get_hardware_trigger async
get_hardware_trigger() -> bool

Get hardware trigger state.

set_maximum_fps async
set_maximum_fps(fps: float) -> None

Set maximum frames per second (0-100).

get_maximum_fps async
get_maximum_fps() -> float

Get current maximum FPS setting.

set_shutter_multiplier async
set_shutter_multiplier(multiplier: int) -> None

Set shutter multiplier (1-10).

get_shutter_multiplier async
get_shutter_multiplier() -> int

Get current shutter multiplier.
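The setters above document numeric ranges (LED power 0-4095, laser power 1-4095, normals radius 0-4, max inaccuracy 0-100, max FPS 0-100, shutter multiplier 1-10). A hedged sketch of validating values client-side before calling the setters; the `validate_setting` helper is illustrative and not part of the API:

```python
# Documented ranges, taken from the setter descriptions above
SETTING_RANGES = {
    "led_power": (0, 4095),
    "laser_power": (1, 4095),
    "normals_estimation_radius": (0, 4),
    "max_inaccuracy": (0, 100),
    "maximum_fps": (0, 100),
    "shutter_multiplier": (1, 10),
}

def validate_setting(name: str, value: float) -> float:
    """Raise ValueError if value is outside the documented range."""
    lo, hi = SETTING_RANGES[name]
    if not lo <= value <= hi:
        raise ValueError(f"{name}={value} outside [{lo}, {hi}]")
    return value

validate_setting("led_power", 2048)  # within 0-4095, accepted
```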

AsyncScanner3D
AsyncScanner3D(backend)

Bases: Mindtrace

Async 3D scanner interface.

Provides high-level 3D scanning operations including multi-component capture and point cloud generation.

Usage

scanner = await AsyncScanner3D.open()
result = await scanner.capture()
print(result.range_shape)
await scanner.close()

Initialize async 3D scanner.

Parameters:

Name Type Description Default
backend

Backend instance (e.g., PhotoneoBackend)

required
name property
name: str

Get scanner name.

is_open property
is_open: bool

Check if scanner is open.

open async classmethod
open(name: Optional[str] = None) -> 'AsyncScanner3D'

Open and initialize a 3D scanner.

Parameters:

Name Type Description Default
name Optional[str]

Scanner identifier, in the format "Backend:serial_number". Supported backends: "Photoneo", "MockPhotoneo". If None, opens the first available Photoneo scanner.

None

Returns:

Type Description
'AsyncScanner3D'

Initialized AsyncScanner3D instance

Raises:

Type Description
CameraNotFoundError

If scanner not found

CameraConnectionError

If connection fails

Examples:

>>> scanner = await AsyncScanner3D.open()
>>> scanner = await AsyncScanner3D.open("Photoneo:ABC123")
>>> scanner = await AsyncScanner3D.open("MockPhotoneo:MOCK-001")
close async
close() -> None

Close scanner and release resources.

capture async
capture(
    timeout_ms: int = 10000,
    enable_range: bool = True,
    enable_intensity: bool = True,
    enable_confidence: bool = False,
    enable_normal: bool = False,
    enable_color: bool = False,
) -> ScanResult

Capture multi-component 3D scan data.

Parameters:

Name Type Description Default
timeout_ms int

Capture timeout in milliseconds

10000
enable_range bool

Whether to capture range/depth data

True
enable_intensity bool

Whether to capture intensity data

True
enable_confidence bool

Whether to capture confidence data

False
enable_normal bool

Whether to capture surface normals

False
enable_color bool

Whether to capture color texture

False

Returns:

Type Description
ScanResult

ScanResult containing captured data

Raises:

Type Description
CameraConnectionError

If scanner not opened

CameraCaptureError

If capture fails

Examples:

>>> result = await scanner.capture()
>>> print(f"Range: {result.range_shape}")
>>> print(f"Intensity: {result.intensity_shape}")
capture_point_cloud async
capture_point_cloud(
    include_colors: bool = True,
    include_confidence: bool = False,
    downsample_factor: int = 1,
    timeout_ms: int = 10000,
) -> PointCloudData

Capture and generate 3D point cloud.

Parameters:

Name Type Description Default
include_colors bool

Whether to include color information

True
include_confidence bool

Whether to include confidence values

False
downsample_factor int

Downsampling factor (1 = no downsampling)

1
timeout_ms int

Capture timeout in milliseconds

10000

Returns:

Type Description
PointCloudData

PointCloudData with 3D points and optional attributes

Raises:

Type Description
CameraConnectionError

If scanner not opened

CameraCaptureError

If capture fails

Examples:

>>> point_cloud = await scanner.capture_point_cloud()
>>> print(f"Points: {point_cloud.num_points}")
>>> point_cloud.save_ply("output.ply")
get_capabilities async
get_capabilities() -> ScannerCapabilities

Get scanner capabilities and available settings.

Returns:

Type Description
ScannerCapabilities

ScannerCapabilities with available options and ranges

Examples:

>>> caps = await scanner.get_capabilities()
>>> print(f"Coding qualities: {caps.coding_qualities}")
get_configuration async
get_configuration() -> ScannerConfiguration

Get current scanner configuration.

Returns:

Type Description
ScannerConfiguration

ScannerConfiguration with current settings

Examples:

>>> config = await scanner.get_configuration()
>>> print(f"Exposure: {config.exposure_time}ms")
set_configuration async
set_configuration(config: ScannerConfiguration) -> None

Apply scanner configuration.

Only non-None values in the configuration will be applied.

Parameters:

Name Type Description Default
config ScannerConfiguration

Configuration to apply

required

Examples:

>>> config = ScannerConfiguration(exposure_time=15.0, coding_quality=CodingQuality.HIGH)
>>> await scanner.set_configuration(config)
set_exposure_time async
set_exposure_time(milliseconds: float) -> None

Set exposure time in milliseconds.

Parameters:

Name Type Description Default
milliseconds float

Exposure time in milliseconds

required

Raises:

Type Description
CameraConfigurationError

If configuration fails

Examples:

>>> await scanner.set_exposure_time(10.24)  # 10.24ms exposure
get_exposure_time async
get_exposure_time() -> float

Get current exposure time in milliseconds.

Returns:

Type Description
float

Current exposure time in milliseconds

Raises:

Type Description
CameraConnectionError

If scanner not opened

Examples:

>>> exposure = await scanner.get_exposure_time()
>>> print(f"Exposure: {exposure}ms")
set_trigger_mode async
set_trigger_mode(mode: str) -> None

Set trigger mode.

Parameters:

Name Type Description Default
mode str

Trigger mode ("Continuous", "Software", or "Hardware")

required

Raises:

Type Description
CameraConfigurationError

If configuration fails

Examples:

>>> await scanner.set_trigger_mode("Software")
get_trigger_mode async
get_trigger_mode() -> str

Get current trigger mode.

Returns:

Type Description
str

"Continuous", "Software", or "Hardware"

Raises:

Type Description
CameraConnectionError

If scanner not opened

Examples:

>>> mode = await scanner.get_trigger_mode()
>>> print(f"Mode: {mode}")
CoordinateMap dataclass
CoordinateMap(
    x_map: Optional[ndarray] = None,
    y_map: Optional[ndarray] = None,
    width: int = 0,
    height: int = 0,
    scale: float = 1.0,
    offset: float = 0.0,
    is_valid: bool = False,
)

Coordinate map for efficient point cloud generation.

Photoneo devices can provide pre-computed coordinate maps that allow efficient conversion from range-only data to full 3D point clouds. This enables faster transfers (only Z data) with local point cloud computation.

Attributes:

Name Type Description
x_map Optional[ndarray]

X coordinate map (H, W) - multiply by range to get X

y_map Optional[ndarray]

Y coordinate map (H, W) - multiply by range to get Y

width int

Map width in pixels

height int

Map height in pixels

scale float

Coordinate scale factor

offset float

Coordinate offset

is_valid bool

Whether the map has been initialized

from_projected_c classmethod
from_projected_c(
    projected_c: ndarray,
    width: int,
    height: int,
    scale: float = 1.0,
    offset: float = 0.0,
) -> "CoordinateMap"

Create coordinate map from Photoneo ProjectedC component.

The ProjectedC component contains pre-computed X,Y coordinates that can be cached and reused for faster point cloud generation.

Parameters:

Name Type Description Default
projected_c ndarray

ProjectedC data from Photoneo (H, W, 3) float32

required
width int

Image width

required
height int

Image height

required
scale float

Coordinate scale factor

1.0
offset float

Coordinate offset

0.0

Returns:

Type Description
'CoordinateMap'

CoordinateMap instance

compute_point_cloud
compute_point_cloud(
    range_map: ndarray, valid_mask: Optional[ndarray] = None
) -> np.ndarray

Compute 3D point cloud from range map using cached coordinates.

Parameters:

Name Type Description Default
range_map ndarray

Depth/range map (H, W)

required
valid_mask Optional[ndarray]

Optional mask of valid pixels

None

Returns:

Type Description
ndarray

Point cloud array (N, 3) with X, Y, Z coordinates
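Per the attribute descriptions above, each coordinate map is multiplied by the range value to recover X and Y. Under that reading, the core of `compute_point_cloud` can be sketched with NumPy (a simplification that ignores the scale/offset fields; the helper name is illustrative):

```python
import numpy as np

def compute_points(x_map, y_map, range_map, valid_mask=None):
    """Sketch: range map (H, W) -> point cloud (N, 3) via cached coordinate maps."""
    if valid_mask is None:
        valid_mask = range_map > 0          # treat zero range as invalid
    z = range_map[valid_mask]
    x = x_map[valid_mask] * z               # per docs: x_map * range -> X
    y = y_map[valid_mask] * z               # per docs: y_map * range -> Y
    return np.stack([x, y, z], axis=1)      # (N, 3)

# Tiny 2x2 example with one invalid (zero-range) pixel
r = np.array([[1.0, 2.0], [0.0, 4.0]])
xm = np.full((2, 2), 0.5)
ym = np.full((2, 2), -0.5)
pts = compute_points(xm, ym, r)
print(pts.shape)  # (3, 3)
```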

PointCloudData dataclass
PointCloudData(
    points: ndarray,
    colors: Optional[ndarray] = None,
    normals: Optional[ndarray] = None,
    confidence: Optional[ndarray] = None,
    num_points: int = 0,
    has_colors: bool = False,
)

3D point cloud data with optional attributes.

Attributes:

Name Type Description
points ndarray

Array of 3D points (N, 3) - (x, y, z) in meters

colors Optional[ndarray]

Optional RGB colors (N, 3) - values in [0, 1]

normals Optional[ndarray]

Optional surface normals (N, 3) - unit vectors

confidence Optional[ndarray]

Optional per-point confidence (N,)

num_points int

Number of valid points

has_colors bool

Flag indicating if color information is present

has_normals property
has_normals: bool

Check if normal information is present.

has_confidence property
has_confidence: bool

Check if confidence information is present.

save_ply
save_ply(path: str, binary: bool = True) -> None

Save point cloud as PLY file.

Parameters:

Name Type Description Default
path str

Output file path

required
binary bool

If True, save in binary format; otherwise ASCII

True

Raises:

Type Description
ImportError

If plyfile is not installed
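`save_ply` depends on the optional plyfile package. As a fallback illustration, the ASCII PLY layout it targets can be written by hand; this is a minimal sketch for xyz vertices only, not the library's implementation:

```python
import io

def write_ascii_ply(out, points):
    """Write an iterable of (x, y, z) points as a minimal ASCII PLY file."""
    pts = list(points)
    out.write("ply\nformat ascii 1.0\n")
    out.write(f"element vertex {len(pts)}\n")
    out.write("property float x\nproperty float y\nproperty float z\n")
    out.write("end_header\n")
    for x, y, z in pts:
        out.write(f"{x} {y} {z}\n")

buf = io.StringIO()
write_ascii_ply(buf, [(0.0, 0.0, 0.0), (1.0, 2.0, 3.0)])
print(buf.getvalue().splitlines()[0])  # ply
```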

downsample
downsample(factor: int) -> 'PointCloudData'

Downsample point cloud by given factor.

Parameters:

Name Type Description Default
factor int

Downsampling factor (e.g., 2 = keep every 2nd point)

required

Returns:

Type Description
'PointCloudData'

New PointCloudData with downsampled data

filter_by_confidence
filter_by_confidence(min_confidence: float) -> 'PointCloudData'

Filter points by minimum confidence threshold.

Parameters:

Name Type Description Default
min_confidence float

Minimum confidence value (0.0 to 1.0)

required

Returns:

Type Description
'PointCloudData'

New PointCloudData with filtered points

Raises:

Type Description
ValueError

If no confidence data available
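The semantics documented above (keep every Nth point; keep points meeting a confidence threshold) amount to simple array slicing and masking. A NumPy sketch of both operations on stand-in data; the real methods return new `PointCloudData` instances rather than bare arrays:

```python
import numpy as np

points = np.random.rand(100, 3).astype(np.float32)
confidence = np.linspace(0.0, 1.0, 100, dtype=np.float32)

# downsample(factor=2): keep every 2nd point, per the docs above
down = points[::2]

# filter_by_confidence(0.5): keep points meeting the threshold
mask = confidence >= 0.5
filtered = points[mask]

print(len(down), len(filtered))  # 50 50
```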

ScanComponent

Bases: Enum

Available scan components from 3D scanners.

Scanner3D
Scanner3D(
    async_scanner: Optional[AsyncScanner3D] = None,
    loop: Optional[AbstractEventLoop] = None,
    name: Optional[str] = None,
    **kwargs
)

Bases: Mindtrace

Synchronous wrapper around AsyncScanner3D.

All operations are executed on a background event loop. This provides a simple synchronous API for 3D scanner operations.

Usage

scanner = Scanner3D()
result = scanner.capture()
print(result.range_shape)
scanner.close()

Or with context manager

with Scanner3D() as scanner:
    result = scanner.capture()
    print(result.range_shape)

Create a synchronous 3D scanner wrapper.

Parameters:

Name Type Description Default
async_scanner Optional[AsyncScanner3D]

Existing AsyncScanner3D instance

None
loop Optional[AbstractEventLoop]

Event loop to use for async operations

None
name Optional[str]

Scanner identifier, in the format "Photoneo:serial_number". If None, opens the first available scanner.

None
**kwargs

Additional arguments passed to Mindtrace

{}

Examples:

>>> # Simple usage - opens first available
>>> scanner = Scanner3D()
>>> # Open specific scanner
>>> scanner = Scanner3D(name="Photoneo:ABC123")
>>> # Use existing async scanner
>>> async_scan = await AsyncScanner3D.open()
>>> sync_scan = Scanner3D(async_scanner=async_scan, loop=loop)
name property
name: str

Get scanner name.

Returns:

Type Description
str

Scanner name in format "Backend:serial_number"

is_open property
is_open: bool

Check if scanner is open.

Returns:

Type Description
bool

True if scanner is open, False otherwise

close
close() -> None

Close scanner and release resources.

Examples:

>>> scanner = Scanner3D()
>>> # ... use scanner ...
>>> scanner.close()
capture
capture(
    timeout_ms: int = 10000,
    enable_range: bool = True,
    enable_intensity: bool = True,
    enable_confidence: bool = False,
    enable_normal: bool = False,
    enable_color: bool = False,
) -> ScanResult

Capture multi-component 3D scan data.

Parameters:

Name Type Description Default
timeout_ms int

Capture timeout in milliseconds

10000
enable_range bool

Whether to capture range/depth data

True
enable_intensity bool

Whether to capture intensity data

True
enable_confidence bool

Whether to capture confidence data

False
enable_normal bool

Whether to capture surface normals

False
enable_color bool

Whether to capture color texture

False

Returns:

Type Description
ScanResult

ScanResult containing captured data

Raises:

Type Description
CameraConnectionError

If scanner not opened

CameraCaptureError

If capture fails

Examples:

>>> scanner = Scanner3D()
>>> result = scanner.capture()
>>> print(f"Range: {result.range_shape}")
>>> scanner.close()
capture_point_cloud
capture_point_cloud(
    include_colors: bool = True,
    include_confidence: bool = False,
    downsample_factor: int = 1,
    timeout_ms: int = 10000,
) -> PointCloudData

Capture and generate 3D point cloud.

Parameters:

Name Type Description Default
include_colors bool

Whether to include color information

True
include_confidence bool

Whether to include confidence values

False
downsample_factor int

Downsampling factor (1 = no downsampling)

1
timeout_ms int

Capture timeout in milliseconds

10000

Returns:

Type Description
PointCloudData

PointCloudData with 3D points and optional attributes

Raises:

Type Description
CameraConnectionError

If scanner not opened

CameraCaptureError

If capture fails

Examples:

>>> scanner = Scanner3D()
>>> point_cloud = scanner.capture_point_cloud()
>>> print(f"Points: {point_cloud.num_points}")
>>> point_cloud.save_ply("output.ply")
>>> scanner.close()
set_exposure_time
set_exposure_time(microseconds: float) -> None

Set exposure time in microseconds.

Parameters:

Name Type Description Default
microseconds float

Exposure time in microseconds (e.g., 5000 = 5ms)

required

Raises:

Type Description
CameraConfigurationError

If configuration fails

Examples:

>>> scanner = Scanner3D()
>>> scanner.set_exposure_time(5000)
>>> scanner.close()
get_exposure_time
get_exposure_time() -> float

Get current exposure time in microseconds.

Returns:

Type Description
float

Current exposure time in microseconds

Raises:

Type Description
CameraConnectionError

If scanner not opened

Examples:

>>> scanner = Scanner3D()
>>> exposure = scanner.get_exposure_time()
>>> print(f"Exposure: {exposure}μs")
>>> scanner.close()
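Note that, per these docs, the synchronous `Scanner3D` expresses exposure in microseconds while `AsyncScanner3D` and the backends use milliseconds. Converting when moving between the two interfaces is plain arithmetic:

```python
def us_to_ms(microseconds: float) -> float:
    """Scanner3D units (us) -> AsyncScanner3D units (ms)."""
    return microseconds / 1000.0

def ms_to_us(milliseconds: float) -> float:
    """AsyncScanner3D units (ms) -> Scanner3D units (us)."""
    return milliseconds * 1000.0

# 5000 us passed to Scanner3D.set_exposure_time corresponds to 5.0 ms
print(us_to_ms(5000))  # 5.0
```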
set_trigger_mode
set_trigger_mode(mode: str) -> None

Set trigger mode.

Parameters:

Name Type Description Default
mode str

Trigger mode ("continuous" or "software")

required

Raises:

Type Description
CameraConfigurationError

If configuration fails

Examples:

>>> scanner = Scanner3D()
>>> scanner.set_trigger_mode("software")
>>> scanner.close()
get_trigger_mode
get_trigger_mode() -> str

Get current trigger mode.

Returns:

Type Description
str

"continuous" or "software"

Raises:

Type Description
CameraConnectionError

If scanner not opened

Examples:

>>> scanner = Scanner3D()
>>> mode = scanner.get_trigger_mode()
>>> print(f"Mode: {mode}")
>>> scanner.close()
ScanResult dataclass
ScanResult(
    range_map: Optional[ndarray] = None,
    intensity: Optional[ndarray] = None,
    confidence: Optional[ndarray] = None,
    normal_map: Optional[ndarray] = None,
    color: Optional[ndarray] = None,
    timestamp: float = 0.0,
    frame_number: int = 0,
    components_enabled: Dict[ScanComponent, bool] = dict(),
    metadata: Dict[str, Union[str, int, float]] = dict(),
)

Result from 3D scanner capture containing multi-component data.

Attributes:

Name Type Description
range_map Optional[ndarray]

Depth/range map - typically uint16 or float32 (H, W)

intensity Optional[ndarray]

Intensity image - uint8 or uint16 (H, W) or (H, W, 3)

confidence Optional[ndarray]

Confidence map - uint8 or uint16 (H, W), values indicate quality

normal_map Optional[ndarray]

Surface normals - float32 (H, W, 3), xyz components

color Optional[ndarray]

Color texture - uint8 (H, W, 3) RGB

timestamp float

Capture timestamp in seconds (from device or system)

frame_number int

Sequential frame number

components_enabled Dict[ScanComponent, bool]

Dict of which components were captured

metadata Dict[str, Union[str, int, float]]

Additional scan metadata (exposure, gain, etc.)

has_range property
has_range: bool

Check if range data is present.

has_intensity property
has_intensity: bool

Check if intensity data is present.

has_confidence property
has_confidence: bool

Check if confidence data is present.

has_normals property
has_normals: bool

Check if normal map is present.

has_color property
has_color: bool

Check if color data is present.

range_shape property
range_shape: tuple

Get shape of range map.

intensity_shape property
intensity_shape: tuple

Get shape of intensity image.

get_valid_mask
get_valid_mask(min_confidence: int = 0) -> np.ndarray

Get mask of valid pixels based on range and confidence.

Parameters:

Name Type Description Default
min_confidence int

Minimum confidence threshold (0-255 typical)

0

Returns:

Type Description
ndarray

Boolean mask (H, W) where True indicates valid pixel
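A hedged NumPy sketch of what `get_valid_mask` computes according to its description (a valid range value combined with a confidence threshold); the actual implementation may differ in detail:

```python
import numpy as np

range_map = np.array([[0.0, 1.2], [0.8, 0.0]], dtype=np.float32)
confidence = np.array([[200, 50], [180, 255]], dtype=np.uint8)

# Valid pixels: positive range AND confidence at or above the threshold
min_confidence = 100
mask = (range_map > 0) & (confidence >= min_confidence)
print(mask)
```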

backends

3D scanner backend implementations.

This module provides the abstract base class and concrete implementations for 3D scanner backends.

MockPhotoneoBackend
MockPhotoneoBackend(
    serial_number: Optional[str] = None,
    width: int = 2064,
    height: int = 1544,
    op_timeout_s: float = 30.0,
)

Bases: Scanner3DBackend

Mock backend for Photoneo scanners for testing.

Generates synthetic 3D data for testing scanner integration without physical hardware.

Extends Scanner3DBackend to provide consistent interface across different 3D scanner backends.

Usage

backend = MockPhotoneoBackend(serial_number="MOCK001")
await backend.initialize()
result = await backend.capture()
print(result.range_shape)
await backend.close()

Initialize mock Photoneo backend.

Parameters:

Name Type Description Default
serial_number Optional[str]

Serial number of mock device. If None, uses first available mock device.

None
width int

Width of generated images

2064
height int

Height of generated images

1544
op_timeout_s float

Timeout for operations (simulated)

30.0
name property
name: str

Get scanner name.

is_open property
is_open: bool

Check if scanner is open.

device_info property
device_info: Optional[Dict[str, Any]]

Get device information.

discover staticmethod
discover() -> List[str]

Discover available mock devices.

Returns:

Type Description
List[str]

List of serial numbers for mock devices

discover_async async classmethod
discover_async() -> List[str]

Async wrapper for discover().

discover_detailed staticmethod
discover_detailed() -> List[Dict[str, str]]

Discover mock devices with detailed information.

Returns:

Type Description
List[Dict[str, str]]

List of dictionaries containing device information

discover_detailed_async async classmethod
discover_detailed_async() -> List[Dict[str, str]]

Async wrapper for discover_detailed().

initialize async
initialize() -> bool

Initialize mock scanner connection.

Returns:

Type Description
bool

True if initialization successful

Raises:

Type Description
CameraNotFoundError

If mock device not found

close async
close() -> None

Close mock scanner.

capture async
capture(
    timeout_ms: int = 10000,
    enable_range: bool = True,
    enable_intensity: bool = True,
    enable_confidence: bool = False,
    enable_normal: bool = False,
    enable_color: bool = False,
) -> ScanResult

Capture synthetic 3D scan data.

Parameters:

Name Type Description Default
timeout_ms int

Capture timeout (simulated)

10000
enable_range bool

Whether to generate range data

True
enable_intensity bool

Whether to generate intensity data

True
enable_confidence bool

Whether to generate confidence data

False
enable_normal bool

Whether to generate normal data

False
enable_color bool

Whether to generate color data

False

Returns:

Type Description
ScanResult

ScanResult with synthetic data

Raises:

Type Description
CameraConnectionError

If not initialized

capture_point_cloud async
capture_point_cloud(
    include_colors: bool = True,
    include_confidence: bool = False,
    timeout_ms: int = 10000,
) -> PointCloudData

Capture and generate synthetic point cloud.

Parameters:

Name Type Description Default
include_colors bool

Whether to include colors

True
include_confidence bool

Whether to include confidence

False
timeout_ms int

Capture timeout

10000

Returns:

Type Description
PointCloudData

PointCloudData with synthetic points

get_capabilities async
get_capabilities() -> ScannerCapabilities

Get mock scanner capabilities.

get_configuration async
get_configuration() -> ScannerConfiguration

Get current mock scanner configuration.

set_configuration async
set_configuration(config: ScannerConfiguration) -> None

Apply scanner configuration.

set_exposure_time async
set_exposure_time(milliseconds: float) -> None

Set exposure time in milliseconds.

get_exposure_time async
get_exposure_time() -> float

Get current exposure time in milliseconds.

set_shutter_multiplier async
set_shutter_multiplier(multiplier: int) -> None

Set shutter multiplier (1-10).

get_shutter_multiplier async
get_shutter_multiplier() -> int

Get current shutter multiplier.

set_operation_mode async
set_operation_mode(mode: str) -> None

Set scanner operation mode.

get_operation_mode async
get_operation_mode() -> str

Get current operation mode.

set_coding_strategy async
set_coding_strategy(strategy: str) -> None

Set structured light coding strategy.

get_coding_strategy async
get_coding_strategy() -> str

Get current coding strategy.

set_coding_quality async
set_coding_quality(quality: str) -> None

Set scan quality/speed tradeoff.

get_coding_quality async
get_coding_quality() -> str

Get current coding quality.

set_led_power async
set_led_power(power: int) -> None

Set LED illumination power (0-4095).

get_led_power async
get_led_power() -> int

Get current LED power level.

set_laser_power async
set_laser_power(power: int) -> None

Set laser/projector power (1-4095).

get_laser_power async
get_laser_power() -> int

Get current laser power level.

set_texture_source async
set_texture_source(source: str) -> None

Set texture/intensity data source.

get_texture_source async
get_texture_source() -> str

Get current texture source.

set_output_topology async
set_output_topology(topology: str) -> None

Set point cloud output topology.

get_output_topology async
get_output_topology() -> str

Get current output topology.

set_camera_space async
set_camera_space(space: str) -> None

Set coordinate system reference camera.

get_camera_space async
get_camera_space() -> str

Get current camera space.

set_normals_estimation_radius async
set_normals_estimation_radius(radius: int) -> None

Set radius for surface normal estimation (0-4).

get_normals_estimation_radius async
get_normals_estimation_radius() -> int

Get current normals estimation radius.

set_max_inaccuracy async
set_max_inaccuracy(value: float) -> None

Set maximum allowed inaccuracy for point filtering (0-100).

get_max_inaccuracy async
get_max_inaccuracy() -> float

Get current max inaccuracy setting.

set_hole_filling async
set_hole_filling(enabled: bool) -> None

Enable/disable hole filling in point cloud.

get_hole_filling async
get_hole_filling() -> bool

Get hole filling state.

set_calibration_volume_only async
set_calibration_volume_only(enabled: bool) -> None

Enable/disable filtering to calibration volume only.

get_calibration_volume_only async
get_calibration_volume_only() -> bool

Get calibration volume filtering state.

set_trigger_mode async
set_trigger_mode(mode: str) -> None

Set trigger mode ('Software', 'Hardware', 'Continuous').

get_trigger_mode async
get_trigger_mode() -> str

Get current trigger mode.

set_hardware_trigger async
set_hardware_trigger(enabled: bool) -> None

Enable/disable hardware trigger.

get_hardware_trigger async
get_hardware_trigger() -> bool

Get hardware trigger state.

set_maximum_fps async
set_maximum_fps(fps: float) -> None

Set maximum frames per second (0-100).

get_maximum_fps async
get_maximum_fps() -> float

Get current maximum FPS setting.

PhotoneoBackend
PhotoneoBackend(
    serial_number: Optional[str] = None,
    cti_path: Optional[str] = None,
    op_timeout_s: float = 30.0,
    buffer_count: int = 5,
)

Bases: Scanner3DBackend

Backend for Photoneo PhoXi 3D scanners using Harvesters.

Photoneo PhoXi scanners are structured light 3D sensors that output multiple data components: Range, Intensity, Confidence, Normal, and Color.

This backend uses the GigE Vision protocol via the Harvesters library with the Matrix Vision GenTL Producer.

Extends Scanner3DBackend to provide a consistent interface across different 3D scanner manufacturers.

Requirements
  • Harvesters library: pip install harvesters
  • Matrix Vision mvIMPACT Acquire SDK with GenTL Producer
  • Photoneo PhoXi firmware version 1.13.0 or later
Usage

backend = PhotoneoBackend(serial_number="ABC123")
await backend.initialize()
result = await backend.capture()
print(result.range_shape)
await backend.close()

Initialize Photoneo backend.

Parameters:

Name Type Description Default
serial_number Optional[str]

Serial number of specific scanner. If None, opens first available Photoneo device.

None
cti_path Optional[str]

Path to GenTL Producer (.cti file). Auto-detected if None.

None
op_timeout_s float

Timeout in seconds for SDK operations (default 30s).

30.0
buffer_count int

Number of frame buffers for acquisition.

5

Raises:

Type Description
SDKNotAvailableError

If Harvesters is not available

CameraConfigurationError

If CTI file not found

name property
name: str

Get scanner name.

is_open property
is_open: bool

Check if scanner is open.

device_info property
device_info: Optional[Dict[str, Any]]

Get device information.

discover staticmethod
discover() -> List[str]

Discover available Photoneo devices.

Returns:

Type Description
List[str]

List of serial numbers for available Photoneo devices

Raises:

Type Description
SDKNotAvailableError

If Harvesters is not available

discover_async async classmethod
discover_async() -> List[str]

Async wrapper for discover().

Returns:

Type Description
List[str]

List of serial numbers for available Photoneo devices

discover_detailed staticmethod
discover_detailed() -> List[Dict[str, str]]

Discover Photoneo devices with detailed information.

Returns:

Type Description
List[Dict[str, str]]

List of dictionaries containing device information

discover_detailed_async async classmethod
discover_detailed_async() -> List[Dict[str, str]]

Async wrapper for discover_detailed().

Returns:

Type Description
List[Dict[str, str]]

List of dictionaries containing device information

initialize async
initialize() -> bool

Initialize scanner connection.

Returns:

Type Description
bool

True if initialization successful

Raises:

Type Description
CameraNotFoundError

If scanner not found

CameraConnectionError

If connection fails

capture async
capture(
    timeout_ms: int = 10000,
    enable_range: bool = True,
    enable_intensity: bool = True,
    enable_confidence: bool = False,
    enable_normal: bool = False,
    enable_color: bool = False,
) -> ScanResult

Capture 3D scan data with multiple components.

Parameters:

Name Type Description Default
timeout_ms int

Capture timeout in milliseconds

10000
enable_range bool

Whether to capture range/depth data

True
enable_intensity bool

Whether to capture intensity data

True
enable_confidence bool

Whether to capture confidence data

False
enable_normal bool

Whether to capture surface normals

False
enable_color bool

Whether to capture color texture

False

Returns:

Type Description
ScanResult

ScanResult containing captured data

Raises:

Type Description
CameraConnectionError

If scanner not opened

CameraCaptureError

If capture fails

CameraTimeoutError

If capture times out

capture_point_cloud async
capture_point_cloud(
    include_colors: bool = True,
    include_confidence: bool = False,
    timeout_ms: int = 10000,
) -> PointCloudData

Capture and generate 3D point cloud.

Parameters:

Name Type Description Default
include_colors bool

Whether to include color/intensity

True
include_confidence bool

Whether to include confidence values

False
timeout_ms int

Capture timeout in milliseconds

10000

Returns:

Type Description
PointCloudData

PointCloudData with 3D points

Raises:

Type Description
CameraConnectionError

If scanner not opened

CameraCaptureError

If capture fails

close async
close() -> None

Close scanner and release resources.

get_capabilities async
get_capabilities() -> ScannerCapabilities

Get scanner capabilities and available settings.

get_configuration async
get_configuration() -> ScannerConfiguration

Get current scanner configuration.

set_configuration async
set_configuration(config: ScannerConfiguration) -> None

Apply scanner configuration.

set_exposure_time async
set_exposure_time(milliseconds: float) -> None

Set exposure time in milliseconds.

get_exposure_time async
get_exposure_time() -> float

Get current exposure time in milliseconds.

set_operation_mode async
set_operation_mode(mode: str) -> None

Set scanner operation mode.

get_operation_mode async
get_operation_mode() -> str

Get current operation mode.

set_coding_strategy async
set_coding_strategy(strategy: str) -> None

Set structured light coding strategy.

get_coding_strategy async
get_coding_strategy() -> str

Get current coding strategy.

set_coding_quality async
set_coding_quality(quality: str) -> None

Set scan quality/speed tradeoff.

get_coding_quality async
get_coding_quality() -> str

Get current coding quality.

set_led_power async
set_led_power(power: int) -> None

Set LED illumination power (0-4095).

get_led_power async
get_led_power() -> int

Get current LED power level.

set_laser_power async
set_laser_power(power: int) -> None

Set laser/projector power (1-4095).

get_laser_power async
get_laser_power() -> int

Get current laser power level.

set_texture_source async
set_texture_source(source: str) -> None

Set texture/intensity data source.

get_texture_source async
get_texture_source() -> str

Get current texture source.

set_output_topology async
set_output_topology(topology: str) -> None

Set point cloud output topology.

get_output_topology async
get_output_topology() -> str

Get current output topology.

set_camera_space async
set_camera_space(space: str) -> None

Set coordinate system reference camera.

get_camera_space async
get_camera_space() -> str

Get current camera space.

set_normals_estimation_radius async
set_normals_estimation_radius(radius: int) -> None

Set radius for surface normal estimation (0-4).

get_normals_estimation_radius async
get_normals_estimation_radius() -> int

Get current normals estimation radius.

set_max_inaccuracy async
set_max_inaccuracy(value: float) -> None

Set maximum allowed inaccuracy for point filtering (0-100).

get_max_inaccuracy async
get_max_inaccuracy() -> float

Get current max inaccuracy setting.

set_hole_filling async
set_hole_filling(enabled: bool) -> None

Enable/disable hole filling in point cloud.

get_hole_filling async
get_hole_filling() -> bool

Get hole filling state.

set_calibration_volume_only async
set_calibration_volume_only(enabled: bool) -> None

Enable/disable filtering to calibration volume only.

get_calibration_volume_only async
get_calibration_volume_only() -> bool

Get calibration volume filtering state.

set_trigger_mode async
set_trigger_mode(mode: str) -> None

Set trigger mode ('Software', 'Hardware', 'Continuous').

get_trigger_mode async
get_trigger_mode() -> str

Get current trigger mode.

set_hardware_trigger async
set_hardware_trigger(enabled: bool) -> None

Enable/disable hardware trigger.

get_hardware_trigger async
get_hardware_trigger() -> bool

Get hardware trigger state.

set_maximum_fps async
set_maximum_fps(fps: float) -> None

Set maximum frames per second (0-100).

get_maximum_fps async
get_maximum_fps() -> float

Get current maximum FPS setting.

set_shutter_multiplier async
set_shutter_multiplier(multiplier: int) -> None

Set shutter multiplier (1-10).

get_shutter_multiplier async
get_shutter_multiplier() -> int

Get current shutter multiplier.

Scanner3DBackend
Scanner3DBackend(
    serial_number: Optional[str] = None, op_timeout_s: float = 30.0
)

Bases: MindtraceABC

Abstract base class for all 3D scanner implementations.

This class defines the async interface that all 3D scanner backends must implement to ensure consistent behavior across different scanner types and manufacturers. Supports structured light (Photoneo, Ensenso), time-of-flight, LiDAR, and other 3D scanning technologies.

Uses an async-first design consistent with CameraBackend and StereoCameraBackend.

Attributes:

Name Type Description
serial_number

Unique identifier for the scanner

is_open bool

Scanner connection status

Implementation Guide
  • Offload blocking SDK calls from async methods: Use asyncio.to_thread for simple cases or loop.run_in_executor with a per-instance single-thread executor when the SDK requires thread affinity.
  • Thread affinity: Many vendor SDKs (e.g., Harvesters/GenTL) are safest when all calls originate from one OS thread. Prefer a dedicated single-thread executor created during initialize() and shut down in close() to serialize SDK access without blocking the event loop.
  • Timeouts and cancellation: Prefer SDK-native timeouts where available. Otherwise, wrap awaited futures with asyncio.wait_for to bound runtime.
  • Event loop hygiene: Never call blocking functions directly in async methods. Replace sleeps with await asyncio.sleep or run blocking work in the executor.
  • Errors: Map SDK-specific exceptions to domain exceptions in mindtrace.hardware.core.exceptions with clear, contextual messages.
  • Cleanup: Ensure resources (device handles, Harvester instances, buffers) are released in close().
Supported Scanner Types
  • Structured light: Photoneo PhoXi, Ensenso, Zivid
  • Time-of-flight: Various ToF sensors
  • LiDAR: Point cloud scanners
  • Other 3D sensing technologies
Example Implementation

class MyScanner3DBackend(Scanner3DBackend):
    async def initialize(self) -> bool:
        # Connect to scanner
        return True

    async def capture(self, ...) -> ScanResult:
        # Capture 3D scan data
        return ScanResult(...)

Initialize base 3D scanner backend.

Parameters:

Name Type Description Default
serial_number Optional[str]

Unique identifier for the scanner (auto-discovered if None)

None
op_timeout_s float

Default timeout in seconds for SDK operations

30.0
name abstractmethod property
name: str

Get the scanner name in the format 'BackendType:serial_number'.

is_open property
is_open: bool

Check if scanner is open.

device_info property
device_info: Optional[Dict[str, Any]]

Get device information dictionary.

discover abstractmethod staticmethod
discover() -> List[str]

Discover available 3D scanners.

Returns:

Type Description
List[str]

List of serial numbers or identifiers for available scanners

Raises:

Type Description
SDKNotAvailableError

If required SDK is not available

discover_async async classmethod
discover_async() -> List[str]

Async wrapper for discover() - runs discovery in a thread pool.

Default implementation runs discover() in a thread. Override if your SDK provides native async discovery.

Returns:

Type Description
List[str]

List of serial numbers for available scanners
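The default wrapper described above can be sketched in a few lines. The class and the serial numbers here are illustrative stand-ins, not the real backend:

```python
import asyncio

class Backend:
    @staticmethod
    def discover():
        # Stand-in for a blocking SDK enumeration call
        return ["SCAN-001", "SCAN-002"]

    @classmethod
    async def discover_async(cls):
        # Run the blocking staticmethod in a worker thread (Python 3.9+)
        return await asyncio.to_thread(cls.discover)

serials = asyncio.run(Backend.discover_async())  # → ["SCAN-001", "SCAN-002"]
```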

discover_detailed staticmethod
discover_detailed() -> List[Dict[str, str]]

Discover scanners with detailed information.

Returns:

Type Description
List[Dict[str, str]]

List of dictionaries containing scanner information:
  • serial_number: Device serial number
  • model: Model name
  • vendor: Manufacturer name
  • Additional device-specific fields

discover_detailed_async async classmethod
discover_detailed_async() -> List[Dict[str, str]]

Async wrapper for discover_detailed().

initialize abstractmethod async
initialize() -> bool

Initialize scanner connection.

This method should:

  1. Connect to the scanner hardware
  2. Apply default configuration
  3. Prepare for acquisition

Returns:

Type Description
bool

True if initialization successful

Raises:

Type Description
CameraNotFoundError

If scanner cannot be found

CameraConnectionError

If connection fails

SDKNotAvailableError

If required SDK is not available

capture abstractmethod async
capture(
    timeout_ms: int = 10000,
    enable_range: bool = True,
    enable_intensity: bool = True,
    enable_confidence: bool = False,
    enable_normal: bool = False,
    enable_color: bool = False,
) -> ScanResult

Capture 3D scan data with multiple components.

3D scanners can output multiple data types in a single capture:

  • Range/Depth: Z-distance from scanner to surface
  • Intensity: Grayscale texture/reflectance image
  • Confidence: Per-pixel quality/confidence values
  • Normal: Surface normal vectors
  • Color: RGB texture (if a color camera is available)

Parameters:

Name Type Description Default
timeout_ms int

Capture timeout in milliseconds

10000
enable_range bool

Whether to capture range/depth data

True
enable_intensity bool

Whether to capture intensity/texture data

True
enable_confidence bool

Whether to capture confidence/quality data

False
enable_normal bool

Whether to capture surface normal vectors

False
enable_color bool

Whether to capture color texture

False

Returns:

Type Description
ScanResult

ScanResult containing captured multi-component data

Raises:

Type Description
CameraConnectionError

If scanner not opened

CameraCaptureError

If capture fails

CameraTimeoutError

If capture times out

close abstractmethod async
close() -> None

Close scanner and release resources.

This method should:

  1. Stop any ongoing acquisition
  2. Release hardware handles
  3. Clean up Harvester/GenTL resources
  4. Clean up executors/threads if used

capture_point_cloud async
capture_point_cloud(
    include_colors: bool = True,
    include_confidence: bool = False,
    timeout_ms: int = 10000,
) -> PointCloudData

Capture and generate 3D point cloud.

Default implementation captures scan data and generates point cloud. Override for backend-specific optimization.

Parameters:

Name Type Description Default
include_colors bool

Whether to include color/intensity information

True
include_confidence bool

Whether to include confidence values

False
timeout_ms int

Capture timeout in milliseconds

10000

Returns:

Type Description
PointCloudData

PointCloudData with 3D points and optional attributes

Raises:

Type Description
CameraConnectionError

If scanner not opened

CameraCaptureError

If capture fails
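The default implementation captures a range image and converts it to points. One common way such a conversion works, shown here as a minimal sketch under assumed pinhole intrinsics (fx, fy, cx, cy are illustrative, the real backend uses calibrated device values):

```python
# Hypothetical back-projection of a range/depth map into 3D points.
def depth_to_points(depth, fx, fy, cx, cy):
    """depth: 2D list of Z distances; 0 marks an invalid/filtered pixel."""
    points = []
    for v, row in enumerate(depth):
        for u, z in enumerate(row):
            if z <= 0:  # skip invalid pixels
                continue
            # Pinhole model: pixel (u, v) at depth z maps to (x, y, z)
            x = (u - cx) * z / fx
            y = (v - cy) * z / fy
            points.append((x, y, z))
    return points

# A 2x2 depth map with one invalid pixel yields three points
pts = depth_to_points([[1.0, 0.0], [1.0, 2.0]], fx=1.0, fy=1.0, cx=0.5, cy=0.5)
```

Backends may override this with SDK-native point-cloud output to avoid the per-pixel Python loop.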

get_capabilities async
get_capabilities() -> ScannerCapabilities

Get scanner capabilities and available settings.

Returns:

Type Description
ScannerCapabilities

ScannerCapabilities describing what features are available

Raises:

Type Description
CameraConnectionError

If scanner not opened

get_configuration async
get_configuration() -> ScannerConfiguration

Get current scanner configuration.

Returns:

Type Description
ScannerConfiguration

ScannerConfiguration with current settings

Raises:

Type Description
CameraConnectionError

If scanner not opened

set_configuration async
set_configuration(config: ScannerConfiguration) -> None

Apply scanner configuration.

Only non-None values in the configuration will be applied.

Parameters:

Name Type Description Default
config ScannerConfiguration

Configuration to apply

required

Raises:

Type Description
CameraConnectionError

If scanner not opened

CameraConfigurationError

If configuration fails
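The "only non-None values are applied" rule can be sketched as follows. The field names here mirror a plausible ScannerConfiguration but are illustrative, not the exact dataclass:

```python
from dataclasses import dataclass, fields
from typing import Optional

@dataclass
class ScannerConfig:
    exposure_ms: Optional[float] = None
    led_power: Optional[int] = None
    trigger_mode: Optional[str] = None

def apply_config(current: dict, config: ScannerConfig) -> dict:
    """Return updated settings; None fields mean 'leave this setting unchanged'."""
    updated = dict(current)
    for f in fields(config):
        value = getattr(config, f.name)
        if value is not None:
            updated[f.name] = value
    return updated

state = {"exposure_ms": 10.0, "led_power": 2048, "trigger_mode": "Software"}
# Only exposure_ms changes; the other settings keep their current values
state = apply_config(state, ScannerConfig(exposure_ms=25.0))
```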

set_exposure_time async
set_exposure_time(milliseconds: float) -> None

Set exposure time in milliseconds.

Parameters:

Name Type Description Default
milliseconds float

Exposure time in milliseconds

required

Raises:

Type Description
CameraConnectionError

If scanner not opened

CameraConfigurationError

If configuration fails

get_exposure_time async
get_exposure_time() -> float

Get current exposure time in milliseconds.

Returns:

Type Description
float

Current exposure time in milliseconds

Raises:

Type Description
CameraConnectionError

If scanner not opened

set_shutter_multiplier async
set_shutter_multiplier(multiplier: int) -> None

Set shutter multiplier (1-10).

Higher values increase exposure by capturing multiple patterns.

Parameters:

Name Type Description Default
multiplier int

Shutter multiplier value (1-10)

required
get_shutter_multiplier async
get_shutter_multiplier() -> int

Get current shutter multiplier.

set_operation_mode async
set_operation_mode(mode: str) -> None

Set scanner operation mode.

Parameters:

Name Type Description Default
mode str

Operation mode ('Camera', 'Scanner', 'Mode_2D')

required
get_operation_mode async
get_operation_mode() -> str

Get current operation mode.

set_coding_strategy async
set_coding_strategy(strategy: str) -> None

Set structured light coding strategy.

Parameters:

Name Type Description Default
strategy str

Coding strategy ('Normal', 'Interreflections', 'HighFrequency')

required
get_coding_strategy async
get_coding_strategy() -> str

Get current coding strategy.

set_coding_quality async
set_coding_quality(quality: str) -> None

Set scan quality/speed tradeoff.

Parameters:

Name Type Description Default
quality str

Quality preset ('Ultra', 'High', 'Fast')

required
get_coding_quality async
get_coding_quality() -> str

Get current coding quality.

set_led_power async
set_led_power(power: int) -> None

Set LED illumination power.

Parameters:

Name Type Description Default
power int

LED power level (typically 0-4095)

required
get_led_power async
get_led_power() -> int

Get current LED power level.

set_laser_power async
set_laser_power(power: int) -> None

Set laser/projector power.

Parameters:

Name Type Description Default
power int

Laser power level (typically 1-4095)

required
get_laser_power async
get_laser_power() -> int

Get current laser power level.

set_texture_source async
set_texture_source(source: str) -> None

Set texture/intensity data source.

Parameters:

Name Type Description Default
source str

Texture source ('LED', 'Computed', 'Laser', 'Focus', 'Color')

required
get_texture_source async
get_texture_source() -> str

Get current texture source.

set_output_topology async
set_output_topology(topology: str) -> None

Set point cloud output topology.

Parameters:

Name Type Description Default
topology str

Output topology ('Raw', 'RegularGrid', 'FullGrid')

required
get_output_topology async
get_output_topology() -> str

Get current output topology.

set_camera_space async
set_camera_space(space: str) -> None

Set coordinate system reference camera.

Parameters:

Name Type Description Default
space str

Camera space ('PrimaryCamera', 'ColorCamera')

required
get_camera_space async
get_camera_space() -> str

Get current camera space.

set_normals_estimation_radius async
set_normals_estimation_radius(radius: int) -> None

Set radius for surface normal estimation.

Parameters:

Name Type Description Default
radius int

Estimation radius (typically 0-4)

required
get_normals_estimation_radius async
get_normals_estimation_radius() -> int

Get current normals estimation radius.

set_max_inaccuracy async
set_max_inaccuracy(value: float) -> None

Set maximum allowed inaccuracy for point filtering.

Parameters:

Name Type Description Default
value float

Maximum inaccuracy (typically 0-100)

required
get_max_inaccuracy async
get_max_inaccuracy() -> float

Get current max inaccuracy setting.

set_hole_filling async
set_hole_filling(enabled: bool) -> None

Enable/disable hole filling in point cloud.

Parameters:

Name Type Description Default
enabled bool

Whether to enable hole filling

required
get_hole_filling async
get_hole_filling() -> bool

Get hole filling state.

set_calibration_volume_only async
set_calibration_volume_only(enabled: bool) -> None

Enable/disable filtering to calibration volume only.

Parameters:

Name Type Description Default
enabled bool

Whether to filter to calibration volume

required
get_calibration_volume_only async
get_calibration_volume_only() -> bool

Get calibration volume filtering state.

set_trigger_mode async
set_trigger_mode(mode: str) -> None

Set trigger mode.

Parameters:

Name Type Description Default
mode str

Trigger mode ('Software', 'Hardware', 'Continuous')

required
get_trigger_mode async
get_trigger_mode() -> str

Get current trigger mode.

set_hardware_trigger async
set_hardware_trigger(enabled: bool) -> None

Enable/disable hardware trigger.

Parameters:

Name Type Description Default
enabled bool

Whether to enable hardware trigger

required
get_hardware_trigger async
get_hardware_trigger() -> bool

Get hardware trigger state.

set_hardware_trigger_signal async
set_hardware_trigger_signal(signal: str) -> None

Set hardware trigger signal edge.

Parameters:

Name Type Description Default
signal str

Trigger signal edge ('Falling', 'Rising', 'Both')

required
get_hardware_trigger_signal async
get_hardware_trigger_signal() -> str

Get current hardware trigger signal setting.

set_maximum_fps async
set_maximum_fps(fps: float) -> None

Set maximum frames per second.

Parameters:

Name Type Description Default
fps float

Maximum FPS (typically 0-100)

required
get_maximum_fps async
get_maximum_fps() -> float

Get current maximum FPS setting.

photoneo

Photoneo PhoXi 3D scanner backend.

MockPhotoneoBackend
MockPhotoneoBackend(
    serial_number: Optional[str] = None,
    width: int = 2064,
    height: int = 1544,
    op_timeout_s: float = 30.0,
)

Bases: Scanner3DBackend

Mock backend for Photoneo scanners for testing.

Generates synthetic 3D data for testing scanner integration without physical hardware.

Extends Scanner3DBackend to provide a consistent interface across different 3D scanner backends.

Usage

backend = MockPhotoneoBackend(serial_number="MOCK001")
await backend.initialize()
result = await backend.capture()
print(result.range_shape)
await backend.close()

Initialize mock Photoneo backend.

Parameters:

Name Type Description Default
serial_number Optional[str]

Serial number of mock device. If None, uses first available mock device.

None
width int

Width of generated images

2064
height int

Height of generated images

1544
op_timeout_s float

Timeout for operations (simulated)

30.0
name property
name: str

Get scanner name.

is_open property
is_open: bool

Check if scanner is open.

device_info property
device_info: Optional[Dict[str, Any]]

Get device information.

discover staticmethod
discover() -> List[str]

Discover available mock devices.

Returns:

Type Description
List[str]

List of serial numbers for mock devices

discover_async async classmethod
discover_async() -> List[str]

Async wrapper for discover().

discover_detailed staticmethod
discover_detailed() -> List[Dict[str, str]]

Discover mock devices with detailed information.

Returns:

Type Description
List[Dict[str, str]]

List of dictionaries containing device information

discover_detailed_async async classmethod
discover_detailed_async() -> List[Dict[str, str]]

Async wrapper for discover_detailed().

initialize async
initialize() -> bool

Initialize mock scanner connection.

Returns:

Type Description
bool

True if initialization successful

Raises:

Type Description
CameraNotFoundError

If mock device not found

close async
close() -> None

Close mock scanner.

capture async
capture(
    timeout_ms: int = 10000,
    enable_range: bool = True,
    enable_intensity: bool = True,
    enable_confidence: bool = False,
    enable_normal: bool = False,
    enable_color: bool = False,
) -> ScanResult

Capture synthetic 3D scan data.

Parameters:

Name Type Description Default
timeout_ms int

Capture timeout (simulated)

10000
enable_range bool

Whether to generate range data

True
enable_intensity bool

Whether to generate intensity data

True
enable_confidence bool

Whether to generate confidence data

False
enable_normal bool

Whether to generate normal data

False
enable_color bool

Whether to generate color data

False

Returns:

Type Description
ScanResult

ScanResult with synthetic data

Raises:

Type Description
CameraConnectionError

If not initialized

capture_point_cloud async
capture_point_cloud(
    include_colors: bool = True,
    include_confidence: bool = False,
    timeout_ms: int = 10000,
) -> PointCloudData

Capture and generate synthetic point cloud.

Parameters:

Name Type Description Default
include_colors bool

Whether to include colors

True
include_confidence bool

Whether to include confidence

False
timeout_ms int

Capture timeout

10000

Returns:

Type Description
PointCloudData

PointCloudData with synthetic points

get_capabilities async
get_capabilities() -> ScannerCapabilities

Get mock scanner capabilities.

get_configuration async
get_configuration() -> ScannerConfiguration

Get current mock scanner configuration.

set_configuration async
set_configuration(config: ScannerConfiguration) -> None

Apply scanner configuration.

set_exposure_time async
set_exposure_time(milliseconds: float) -> None

Set exposure time in milliseconds.

get_exposure_time async
get_exposure_time() -> float

Get current exposure time in milliseconds.

set_shutter_multiplier async
set_shutter_multiplier(multiplier: int) -> None

Set shutter multiplier (1-10).

get_shutter_multiplier async
get_shutter_multiplier() -> int

Get current shutter multiplier.

set_operation_mode async
set_operation_mode(mode: str) -> None

Set scanner operation mode.

get_operation_mode async
get_operation_mode() -> str

Get current operation mode.

set_coding_strategy async
set_coding_strategy(strategy: str) -> None

Set structured light coding strategy.

get_coding_strategy async
get_coding_strategy() -> str

Get current coding strategy.

set_coding_quality async
set_coding_quality(quality: str) -> None

Set scan quality/speed tradeoff.

get_coding_quality async
get_coding_quality() -> str

Get current coding quality.

set_led_power async
set_led_power(power: int) -> None

Set LED illumination power (0-4095).

get_led_power async
get_led_power() -> int

Get current LED power level.

set_laser_power async
set_laser_power(power: int) -> None

Set laser/projector power (1-4095).

get_laser_power async
get_laser_power() -> int

Get current laser power level.

set_texture_source async
set_texture_source(source: str) -> None

Set texture/intensity data source.

get_texture_source async
get_texture_source() -> str

Get current texture source.

set_output_topology async
set_output_topology(topology: str) -> None

Set point cloud output topology.

get_output_topology async
get_output_topology() -> str

Get current output topology.

set_camera_space async
set_camera_space(space: str) -> None

Set coordinate system reference camera.

get_camera_space async
get_camera_space() -> str

Get current camera space.

set_normals_estimation_radius async
set_normals_estimation_radius(radius: int) -> None

Set radius for surface normal estimation (0-4).

get_normals_estimation_radius async
get_normals_estimation_radius() -> int

Get current normals estimation radius.

set_max_inaccuracy async
set_max_inaccuracy(value: float) -> None

Set maximum allowed inaccuracy for point filtering (0-100).

get_max_inaccuracy async
get_max_inaccuracy() -> float

Get current max inaccuracy setting.

set_hole_filling async
set_hole_filling(enabled: bool) -> None

Enable/disable hole filling in point cloud.

get_hole_filling async
get_hole_filling() -> bool

Get hole filling state.

set_calibration_volume_only async
set_calibration_volume_only(enabled: bool) -> None

Enable/disable filtering to calibration volume only.

get_calibration_volume_only async
get_calibration_volume_only() -> bool

Get calibration volume filtering state.

set_trigger_mode async
set_trigger_mode(mode: str) -> None

Set trigger mode ('Software', 'Hardware', 'Continuous').

get_trigger_mode async
get_trigger_mode() -> str

Get current trigger mode.

set_hardware_trigger async
set_hardware_trigger(enabled: bool) -> None

Enable/disable hardware trigger.

get_hardware_trigger async
get_hardware_trigger() -> bool

Get hardware trigger state.

set_maximum_fps async
set_maximum_fps(fps: float) -> None

Set maximum frames per second (0-100).

get_maximum_fps async
get_maximum_fps() -> float

Get current maximum FPS setting.

PhotoneoBackend
PhotoneoBackend(
    serial_number: Optional[str] = None,
    cti_path: Optional[str] = None,
    op_timeout_s: float = 30.0,
    buffer_count: int = 5,
)

Bases: Scanner3DBackend

Backend for Photoneo PhoXi 3D scanners using Harvesters.

Photoneo PhoXi scanners are structured light 3D sensors that output multiple data components: Range, Intensity, Confidence, Normal, and Color.

This backend uses the GigE Vision protocol via the Harvesters library with the Matrix Vision GenTL Producer.

Extends Scanner3DBackend to provide a consistent interface across different 3D scanner manufacturers.

Requirements
  • Harvesters library: pip install harvesters
  • Matrix Vision mvIMPACT Acquire SDK with GenTL Producer
  • Photoneo PhoXi firmware version 1.13.0 or later
Usage

backend = PhotoneoBackend(serial_number="ABC123")
await backend.initialize()
result = await backend.capture()
print(result.range_shape)
await backend.close()

Initialize Photoneo backend.

Parameters:

Name Type Description Default
serial_number Optional[str]

Serial number of specific scanner. If None, opens first available Photoneo device.

None
cti_path Optional[str]

Path to GenTL Producer (.cti file). Auto-detected if None.

None
op_timeout_s float

Timeout in seconds for SDK operations (default 30s).

30.0
buffer_count int

Number of frame buffers for acquisition.

5

Raises:

Type Description
SDKNotAvailableError

If Harvesters is not available

CameraConfigurationError

If CTI file not found

name property
name: str

Get scanner name.

is_open property
is_open: bool

Check if scanner is open.

device_info property
device_info: Optional[Dict[str, Any]]

Get device information.

discover staticmethod
discover() -> List[str]

Discover available Photoneo devices.

Returns:

Type Description
List[str]

List of serial numbers for available Photoneo devices

Raises:

Type Description
SDKNotAvailableError

If Harvesters is not available

discover_async async classmethod
discover_async() -> List[str]

Async wrapper for discover().

Returns:

Type Description
List[str]

List of serial numbers for available Photoneo devices

discover_detailed staticmethod
discover_detailed() -> List[Dict[str, str]]

Discover Photoneo devices with detailed information.

Returns:

Type Description
List[Dict[str, str]]

List of dictionaries containing device information

discover_detailed_async async classmethod
discover_detailed_async() -> List[Dict[str, str]]

Async wrapper for discover_detailed().

Returns:

Type Description
List[Dict[str, str]]

List of dictionaries containing device information

initialize async
initialize() -> bool

Initialize scanner connection.

Returns:

Type Description
bool

True if initialization successful

Raises:

Type Description
CameraNotFoundError

If scanner not found

CameraConnectionError

If connection fails

capture async
capture(
    timeout_ms: int = 10000,
    enable_range: bool = True,
    enable_intensity: bool = True,
    enable_confidence: bool = False,
    enable_normal: bool = False,
    enable_color: bool = False,
) -> ScanResult

Capture 3D scan data with multiple components.

Parameters:

Name Type Description Default
timeout_ms int

Capture timeout in milliseconds

10000
enable_range bool

Whether to capture range/depth data

True
enable_intensity bool

Whether to capture intensity data

True
enable_confidence bool

Whether to capture confidence data

False
enable_normal bool

Whether to capture surface normals

False
enable_color bool

Whether to capture color texture

False

Returns:

Type Description
ScanResult

ScanResult containing captured data

Raises:

Type Description
CameraConnectionError

If scanner not opened

CameraCaptureError

If capture fails

CameraTimeoutError

If capture times out

capture_point_cloud async
capture_point_cloud(
    include_colors: bool = True,
    include_confidence: bool = False,
    timeout_ms: int = 10000,
) -> PointCloudData

Capture and generate 3D point cloud.

Parameters:

Name Type Description Default
include_colors bool

Whether to include color/intensity

True
include_confidence bool

Whether to include confidence values

False
timeout_ms int

Capture timeout in milliseconds

10000

Returns:

Type Description
PointCloudData

PointCloudData with 3D points

Raises:

Type Description
CameraConnectionError

If scanner not opened

CameraCaptureError

If capture fails

close async
close() -> None

Close scanner and release resources.

get_capabilities async
get_capabilities() -> ScannerCapabilities

Get scanner capabilities and available settings.

get_configuration async
get_configuration() -> ScannerConfiguration

Get current scanner configuration.

set_configuration async
set_configuration(config: ScannerConfiguration) -> None

Apply scanner configuration.

set_exposure_time async
set_exposure_time(milliseconds: float) -> None

Set exposure time in milliseconds.

get_exposure_time async
get_exposure_time() -> float

Get current exposure time in milliseconds.

set_operation_mode async
set_operation_mode(mode: str) -> None

Set scanner operation mode.

get_operation_mode async
get_operation_mode() -> str

Get current operation mode.

set_coding_strategy async
set_coding_strategy(strategy: str) -> None

Set structured light coding strategy.

get_coding_strategy async
get_coding_strategy() -> str

Get current coding strategy.

set_coding_quality async
set_coding_quality(quality: str) -> None

Set scan quality/speed tradeoff.

get_coding_quality async
get_coding_quality() -> str

Get current coding quality.

set_led_power async
set_led_power(power: int) -> None

Set LED illumination power (0-4095).

get_led_power async
get_led_power() -> int

Get current LED power level.

set_laser_power async
set_laser_power(power: int) -> None

Set laser/projector power (1-4095).

get_laser_power async
get_laser_power() -> int

Get current laser power level.

set_texture_source async
set_texture_source(source: str) -> None

Set texture/intensity data source.

get_texture_source async
get_texture_source() -> str

Get current texture source.

set_output_topology async
set_output_topology(topology: str) -> None

Set point cloud output topology.

get_output_topology async
get_output_topology() -> str

Get current output topology.

set_camera_space async
set_camera_space(space: str) -> None

Set coordinate system reference camera.

get_camera_space async
get_camera_space() -> str

Get current camera space.

set_normals_estimation_radius async
set_normals_estimation_radius(radius: int) -> None

Set radius for surface normal estimation (0-4).

get_normals_estimation_radius async
get_normals_estimation_radius() -> int

Get current normals estimation radius.

set_max_inaccuracy async
set_max_inaccuracy(value: float) -> None

Set maximum allowed inaccuracy for point filtering (0-100).

get_max_inaccuracy async
get_max_inaccuracy() -> float

Get current max inaccuracy setting.

set_hole_filling async
set_hole_filling(enabled: bool) -> None

Enable/disable hole filling in point cloud.

get_hole_filling async
get_hole_filling() -> bool

Get hole filling state.

set_calibration_volume_only async
set_calibration_volume_only(enabled: bool) -> None

Enable/disable filtering to calibration volume only.

get_calibration_volume_only async
get_calibration_volume_only() -> bool

Get calibration volume filtering state.

set_trigger_mode async
set_trigger_mode(mode: str) -> None

Set trigger mode ('Software', 'Hardware', 'Continuous').

get_trigger_mode async
get_trigger_mode() -> str

Get current trigger mode.

set_hardware_trigger async
set_hardware_trigger(enabled: bool) -> None

Enable/disable hardware trigger.

get_hardware_trigger async
get_hardware_trigger() -> bool

Get hardware trigger state.

set_maximum_fps async
set_maximum_fps(fps: float) -> None

Set maximum frames per second (0-100).

get_maximum_fps async
get_maximum_fps() -> float

Get current maximum FPS setting.

set_shutter_multiplier async
set_shutter_multiplier(multiplier: int) -> None

Set shutter multiplier (1-10).

get_shutter_multiplier async
get_shutter_multiplier() -> int

Get current shutter multiplier.

mock_photoneo_backend

Mock Photoneo backend for testing without hardware.

MockPhotoneoBackend
MockPhotoneoBackend(
    serial_number: Optional[str] = None,
    width: int = 2064,
    height: int = 1544,
    op_timeout_s: float = 30.0,
)

Bases: Scanner3DBackend

Mock backend for Photoneo scanners for testing.

Generates synthetic 3D data for testing scanner integration without physical hardware.

Extends Scanner3DBackend to provide a consistent interface across different 3D scanner backends.
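As an illustration of what "synthetic 3D data" can look like, the following self-contained sketch builds a range image of a flat plane with a spherical bump. The image size, depth values, and shape are arbitrary choices for this example and are not MockPhotoneoBackend's actual output.

```python
import numpy as np

# Illustrative synthetic range image: a flat plane at base_mm with a spherical
# bump protruding toward the scanner. Not MockPhotoneoBackend's real data.
def synthetic_range(width: int, height: int, base_mm: float = 1000.0) -> np.ndarray:
    v, u = np.mgrid[0:height, 0:width].astype(np.float32)
    cx, cy, r = width / 2, height / 2, min(width, height) / 4
    d2 = (u - cx) ** 2 + (v - cy) ** 2
    # Height of a sphere of radius r inside the circle d2 < r^2, zero outside.
    bump = np.where(d2 < r**2, np.sqrt(np.maximum(r**2 - d2, 0.0)), 0.0)
    return base_mm - bump  # smaller range = closer to the scanner

depth = synthetic_range(64, 48)
print(depth.shape)
```

A mock backend built this way lets integration tests exercise capture and point-cloud code paths without any hardware attached.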

Usage

backend = MockPhotoneoBackend(serial_number="MOCK001")
await backend.initialize()
result = await backend.capture()
print(result.range_shape)
await backend.close()

Initialize mock Photoneo backend.

Parameters:

Name Type Description Default
serial_number Optional[str]

Serial number of mock device. If None, uses first available mock device.

None
width int

Width of generated images

2064
height int

Height of generated images

1544
op_timeout_s float

Timeout for operations (simulated)

30.0
name property
name: str

Get scanner name.

is_open property
is_open: bool

Check if scanner is open.

device_info property
device_info: Optional[Dict[str, Any]]

Get device information.

discover staticmethod
discover() -> List[str]

Discover available mock devices.

Returns:

Type Description
List[str]

List of serial numbers for mock devices

discover_async async classmethod
discover_async() -> List[str]

Async wrapper for discover().

discover_detailed staticmethod
discover_detailed() -> List[Dict[str, str]]

Discover mock devices with detailed information.

Returns:

Type Description
List[Dict[str, str]]

List of dictionaries containing device information

discover_detailed_async async classmethod
discover_detailed_async() -> List[Dict[str, str]]

Async wrapper for discover_detailed().

initialize async
initialize() -> bool

Initialize mock scanner connection.

Returns:

Type Description
bool

True if initialization successful

Raises:

Type Description
CameraNotFoundError

If mock device not found

close async
close() -> None

Close mock scanner.

capture async
capture(
    timeout_ms: int = 10000,
    enable_range: bool = True,
    enable_intensity: bool = True,
    enable_confidence: bool = False,
    enable_normal: bool = False,
    enable_color: bool = False,
) -> ScanResult

Capture synthetic 3D scan data.

Parameters:

Name Type Description Default
timeout_ms int

Capture timeout (simulated)

10000
enable_range bool

Whether to generate range data

True
enable_intensity bool

Whether to generate intensity data

True
enable_confidence bool

Whether to generate confidence data

False
enable_normal bool

Whether to generate normal data

False
enable_color bool

Whether to generate color data

False

Returns:

Type Description
ScanResult

ScanResult with synthetic data

Raises:

Type Description
CameraConnectionError

If not initialized

capture_point_cloud async
capture_point_cloud(
    include_colors: bool = True,
    include_confidence: bool = False,
    timeout_ms: int = 10000,
) -> PointCloudData

Capture and generate synthetic point cloud.

Parameters:

Name Type Description Default
include_colors bool

Whether to include colors

True
include_confidence bool

Whether to include confidence

False
timeout_ms int

Capture timeout

10000

Returns:

Type Description
PointCloudData

PointCloudData with synthetic points

get_capabilities async
get_capabilities() -> ScannerCapabilities

Get mock scanner capabilities.

get_configuration async
get_configuration() -> ScannerConfiguration

Get current mock scanner configuration.

set_configuration async
set_configuration(config: ScannerConfiguration) -> None

Apply scanner configuration.

set_exposure_time async
set_exposure_time(milliseconds: float) -> None

Set exposure time in milliseconds.

get_exposure_time async
get_exposure_time() -> float

Get current exposure time in milliseconds.

set_shutter_multiplier async
set_shutter_multiplier(multiplier: int) -> None

Set shutter multiplier (1-10).

get_shutter_multiplier async
get_shutter_multiplier() -> int

Get current shutter multiplier.

set_operation_mode async
set_operation_mode(mode: str) -> None

Set scanner operation mode.

get_operation_mode async
get_operation_mode() -> str

Get current operation mode.

set_coding_strategy async
set_coding_strategy(strategy: str) -> None

Set structured light coding strategy.

get_coding_strategy async
get_coding_strategy() -> str

Get current coding strategy.

set_coding_quality async
set_coding_quality(quality: str) -> None

Set scan quality/speed tradeoff.

get_coding_quality async
get_coding_quality() -> str

Get current coding quality.

set_led_power async
set_led_power(power: int) -> None

Set LED illumination power (0-4095).

get_led_power async
get_led_power() -> int

Get current LED power level.

set_laser_power async
set_laser_power(power: int) -> None

Set laser/projector power (1-4095).

get_laser_power async
get_laser_power() -> int

Get current laser power level.

set_texture_source async
set_texture_source(source: str) -> None

Set texture/intensity data source.

get_texture_source async
get_texture_source() -> str

Get current texture source.

set_output_topology async
set_output_topology(topology: str) -> None

Set point cloud output topology.

get_output_topology async
get_output_topology() -> str

Get current output topology.

set_camera_space async
set_camera_space(space: str) -> None

Set coordinate system reference camera.

get_camera_space async
get_camera_space() -> str

Get current camera space.

set_normals_estimation_radius async
set_normals_estimation_radius(radius: int) -> None

Set radius for surface normal estimation (0-4).

get_normals_estimation_radius async
get_normals_estimation_radius() -> int

Get current normals estimation radius.

set_max_inaccuracy async
set_max_inaccuracy(value: float) -> None

Set maximum allowed inaccuracy for point filtering (0-100).

get_max_inaccuracy async
get_max_inaccuracy() -> float

Get current max inaccuracy setting.

set_hole_filling async
set_hole_filling(enabled: bool) -> None

Enable/disable hole filling in point cloud.

get_hole_filling async
get_hole_filling() -> bool

Get hole filling state.

set_calibration_volume_only async
set_calibration_volume_only(enabled: bool) -> None

Enable/disable filtering to calibration volume only.

get_calibration_volume_only async
get_calibration_volume_only() -> bool

Get calibration volume filtering state.

set_trigger_mode async
set_trigger_mode(mode: str) -> None

Set trigger mode ('Software', 'Hardware', 'Continuous').

get_trigger_mode async
get_trigger_mode() -> str

Get current trigger mode.

set_hardware_trigger async
set_hardware_trigger(enabled: bool) -> None

Enable/disable hardware trigger.

get_hardware_trigger async
get_hardware_trigger() -> bool

Get hardware trigger state.

set_maximum_fps async
set_maximum_fps(fps: float) -> None

Set maximum frames per second (0-100).

get_maximum_fps async
get_maximum_fps() -> float

Get current maximum FPS setting.

photoneo_backend

Photoneo PhoXi 3D scanner backend using Harvesters (GigE Vision).

This backend provides access to Photoneo structured light 3D scanners via the GigE Vision protocol using the Harvesters library.

PhotoneoBackend
PhotoneoBackend(
    serial_number: Optional[str] = None,
    cti_path: Optional[str] = None,
    op_timeout_s: float = 30.0,
    buffer_count: int = 5,
)

Bases: Scanner3DBackend

Backend for Photoneo PhoXi 3D scanners using Harvesters.

Photoneo PhoXi scanners are structured light 3D sensors that output multiple data components: Range, Intensity, Confidence, Normal, and Color.

This backend uses the GigE Vision protocol via the Harvesters library with the Matrix Vision GenTL Producer.

Extends Scanner3DBackend to provide a consistent interface across different 3D scanner manufacturers.

Requirements
  • Harvesters library: pip install harvesters
  • Matrix Vision mvIMPACT Acquire SDK with GenTL Producer
  • Photoneo PhoXi firmware version 1.13.0 or later
Usage

backend = PhotoneoBackend(serial_number="ABC123")
await backend.initialize()
result = await backend.capture()
print(result.range_shape)
await backend.close()

Initialize Photoneo backend.

Parameters:

Name Type Description Default
serial_number Optional[str]

Serial number of specific scanner. If None, opens first available Photoneo device.

None
cti_path Optional[str]

Path to GenTL Producer (.cti file). Auto-detected if None.

None
op_timeout_s float

Timeout in seconds for SDK operations (default 30s).

30.0
buffer_count int

Number of frame buffers for acquisition.

5

Raises:

Type Description
SDKNotAvailableError

If Harvesters is not available

CameraConfigurationError

If CTI file not found

name property
name: str

Get scanner name.

is_open property
is_open: bool

Check if scanner is open.

device_info property
device_info: Optional[Dict[str, Any]]

Get device information.

discover staticmethod
discover() -> List[str]

Discover available Photoneo devices.

Returns:

Type Description
List[str]

List of serial numbers for available Photoneo devices

Raises:

Type Description
SDKNotAvailableError

If Harvesters is not available

discover_async async classmethod
discover_async() -> List[str]

Async wrapper for discover().

Returns:

Type Description
List[str]

List of serial numbers for available Photoneo devices

discover_detailed staticmethod
discover_detailed() -> List[Dict[str, str]]

Discover Photoneo devices with detailed information.

Returns:

Type Description
List[Dict[str, str]]

List of dictionaries containing device information

discover_detailed_async async classmethod
discover_detailed_async() -> List[Dict[str, str]]

Async wrapper for discover_detailed().

Returns:

Type Description
List[Dict[str, str]]

List of dictionaries containing device information

initialize async
initialize() -> bool

Initialize scanner connection.

Returns:

Type Description
bool

True if initialization successful

Raises:

Type Description
CameraNotFoundError

If scanner not found

CameraConnectionError

If connection fails

capture async
capture(
    timeout_ms: int = 10000,
    enable_range: bool = True,
    enable_intensity: bool = True,
    enable_confidence: bool = False,
    enable_normal: bool = False,
    enable_color: bool = False,
) -> ScanResult

Capture 3D scan data with multiple components.

Parameters:

Name Type Description Default
timeout_ms int

Capture timeout in milliseconds

10000
enable_range bool

Whether to capture range/depth data

True
enable_intensity bool

Whether to capture intensity data

True
enable_confidence bool

Whether to capture confidence data

False
enable_normal bool

Whether to capture surface normals

False
enable_color bool

Whether to capture color texture

False

Returns:

Type Description
ScanResult

ScanResult containing captured data

Raises:

Type Description
CameraConnectionError

If scanner not opened

CameraCaptureError

If capture fails

CameraTimeoutError

If capture times out

capture_point_cloud async
capture_point_cloud(
    include_colors: bool = True,
    include_confidence: bool = False,
    timeout_ms: int = 10000,
) -> PointCloudData

Capture and generate 3D point cloud.

Parameters:

Name Type Description Default
include_colors bool

Whether to include color/intensity

True
include_confidence bool

Whether to include confidence values

False
timeout_ms int

Capture timeout in milliseconds

10000

Returns:

Type Description
PointCloudData

PointCloudData with 3D points

Raises:

Type Description
CameraConnectionError

If scanner not opened

CameraCaptureError

If capture fails

close async
close() -> None

Close scanner and release resources.

get_capabilities async
get_capabilities() -> ScannerCapabilities

Get scanner capabilities and available settings.

get_configuration async
get_configuration() -> ScannerConfiguration

Get current scanner configuration.

set_configuration async
set_configuration(config: ScannerConfiguration) -> None

Apply scanner configuration.

set_exposure_time async
set_exposure_time(milliseconds: float) -> None

Set exposure time in milliseconds.

get_exposure_time async
get_exposure_time() -> float

Get current exposure time in milliseconds.

set_operation_mode async
set_operation_mode(mode: str) -> None

Set scanner operation mode.

get_operation_mode async
get_operation_mode() -> str

Get current operation mode.

set_coding_strategy async
set_coding_strategy(strategy: str) -> None

Set structured light coding strategy.

get_coding_strategy async
get_coding_strategy() -> str

Get current coding strategy.

set_coding_quality async
set_coding_quality(quality: str) -> None

Set scan quality/speed tradeoff.

get_coding_quality async
get_coding_quality() -> str

Get current coding quality.

set_led_power async
set_led_power(power: int) -> None

Set LED illumination power (0-4095).

get_led_power async
get_led_power() -> int

Get current LED power level.

set_laser_power async
set_laser_power(power: int) -> None

Set laser/projector power (1-4095).

get_laser_power async
get_laser_power() -> int

Get current laser power level.

set_texture_source async
set_texture_source(source: str) -> None

Set texture/intensity data source.

get_texture_source async
get_texture_source() -> str

Get current texture source.

set_output_topology async
set_output_topology(topology: str) -> None

Set point cloud output topology.

get_output_topology async
get_output_topology() -> str

Get current output topology.

set_camera_space async
set_camera_space(space: str) -> None

Set coordinate system reference camera.

get_camera_space async
get_camera_space() -> str

Get current camera space.

set_normals_estimation_radius async
set_normals_estimation_radius(radius: int) -> None

Set radius for surface normal estimation (0-4).

get_normals_estimation_radius async
get_normals_estimation_radius() -> int

Get current normals estimation radius.

set_max_inaccuracy async
set_max_inaccuracy(value: float) -> None

Set maximum allowed inaccuracy for point filtering (0-100).

get_max_inaccuracy async
get_max_inaccuracy() -> float

Get current max inaccuracy setting.

set_hole_filling async
set_hole_filling(enabled: bool) -> None

Enable/disable hole filling in point cloud.

get_hole_filling async
get_hole_filling() -> bool

Get hole filling state.

set_calibration_volume_only async
set_calibration_volume_only(enabled: bool) -> None

Enable/disable filtering to calibration volume only.

get_calibration_volume_only async
get_calibration_volume_only() -> bool

Get calibration volume filtering state.

set_trigger_mode async
set_trigger_mode(mode: str) -> None

Set trigger mode ('Software', 'Hardware', 'Continuous').

get_trigger_mode async
get_trigger_mode() -> str

Get current trigger mode.

set_hardware_trigger async
set_hardware_trigger(enabled: bool) -> None

Enable/disable hardware trigger.

get_hardware_trigger async
get_hardware_trigger() -> bool

Get hardware trigger state.

set_maximum_fps async
set_maximum_fps(fps: float) -> None

Set maximum frames per second (0-100).

get_maximum_fps async
get_maximum_fps() -> float

Get current maximum FPS setting.

set_shutter_multiplier async
set_shutter_multiplier(multiplier: int) -> None

Set shutter multiplier (1-10).

get_shutter_multiplier async
get_shutter_multiplier() -> int

Get current shutter multiplier.

scanner_3d_backend

Abstract base class for 3D scanner backends.

This module defines the async interface that all 3D scanner backends must implement to ensure consistent behavior across different scanner types and manufacturers (structured light, time-of-flight, LiDAR, etc.).

Following the same architectural pattern as CameraBackend and StereoCameraBackend for consistency across the hardware module.

Scanner3DBackend
Scanner3DBackend(
    serial_number: Optional[str] = None, op_timeout_s: float = 30.0
)

Bases: MindtraceABC

Abstract base class for all 3D scanner implementations.

This class defines the async interface that all 3D scanner backends must implement to ensure consistent behavior across different scanner types and manufacturers. Supports structured light (Photoneo, Ensenso), time-of-flight, LiDAR, and other 3D scanning technologies.

Uses async-first design consistent with CameraBackend and StereoCameraBackend.

Attributes:

Name Type Description
serial_number

Unique identifier for the scanner

is_open bool

Scanner connection status

Implementation Guide
  • Offload blocking SDK calls from async methods: Use asyncio.to_thread for simple cases or loop.run_in_executor with a per-instance single-thread executor when the SDK requires thread affinity.
  • Thread affinity: Many vendor SDKs (e.g., Harvesters/GenTL) are safest when all calls originate from one OS thread. Prefer a dedicated single-thread executor created during initialize() and shut down in close() to serialize SDK access without blocking the event loop.
  • Timeouts and cancellation: Prefer SDK-native timeouts where available. Otherwise, wrap awaited futures with asyncio.wait_for to bound runtime.
  • Event loop hygiene: Never call blocking functions directly in async methods. Replace sleeps with await asyncio.sleep or run blocking work in the executor.
  • Errors: Map SDK-specific exceptions to domain exceptions in mindtrace.hardware.core.exceptions with clear, contextual messages.
  • Cleanup: Ensure resources (device handles, Harvester instances, buffers) are released in close().
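The executor guidance above can be sketched as follows. `ExecutorBackedBackend` and `_run_sdk` are illustrative names, not part of the mindtrace API, and the lambda stands in for a blocking SDK call; the point is the pattern: one dedicated worker thread for thread affinity, `asyncio.wait_for` to bound runtime, and executor shutdown in close().

```python
import asyncio
from concurrent.futures import ThreadPoolExecutor
from typing import Callable, Optional

class ExecutorBackedBackend:
    def __init__(self, op_timeout_s: float = 30.0) -> None:
        self._op_timeout_s = op_timeout_s
        self._executor: Optional[ThreadPoolExecutor] = None

    async def initialize(self) -> None:
        # A single worker thread serializes all SDK calls on one OS thread.
        self._executor = ThreadPoolExecutor(max_workers=1)

    async def _run_sdk(self, fn: Callable, *args):
        loop = asyncio.get_running_loop()
        # Run the blocking call off the event loop, bounded by the op timeout.
        return await asyncio.wait_for(
            loop.run_in_executor(self._executor, fn, *args),
            timeout=self._op_timeout_s,
        )

    async def close(self) -> None:
        # Shut the executor down during close(), per the cleanup guidance.
        if self._executor is not None:
            self._executor.shutdown(wait=True)
            self._executor = None

async def demo() -> int:
    backend = ExecutorBackedBackend(op_timeout_s=5.0)
    await backend.initialize()
    try:
        # Stand-in for a blocking SDK call (e.g. a fetch/grab operation).
        return await backend._run_sdk(lambda: 21 * 2)
    finally:
        await backend.close()

print(asyncio.run(demo()))
```

Because the executor has exactly one worker, concurrent callers queue up rather than entering the SDK from multiple threads, which is the safe default for GenTL-style libraries.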
Supported Scanner Types
  • Structured light: Photoneo PhoXi, Ensenso, Zivid
  • Time-of-flight: Various ToF sensors
  • LiDAR: Point cloud scanners
  • Other 3D sensing technologies
Example Implementation

class MyScanner3DBackend(Scanner3DBackend):
    async def initialize(self) -> bool:
        # Connect to scanner
        return True

    async def capture(self, ...) -> ScanResult:
        # Capture 3D scan data
        return ScanResult(...)

Initialize base 3D scanner backend.

Parameters:

Name Type Description Default
serial_number Optional[str]

Unique identifier for the scanner (auto-discovered if None)

None
op_timeout_s float

Default timeout in seconds for SDK operations

30.0
name abstractmethod property
name: str

Get scanner name in format 'BackendType:serial_number'.

is_open property
is_open: bool

Check if scanner is open.

device_info property
device_info: Optional[Dict[str, Any]]

Get device information dictionary.

discover abstractmethod staticmethod
discover() -> List[str]

Discover available 3D scanners.

Returns:

Type Description
List[str]

List of serial numbers or identifiers for available scanners

Raises:

Type Description
SDKNotAvailableError

If required SDK is not available

discover_async async classmethod
discover_async() -> List[str]

Async wrapper for discover() - runs discovery in threadpool.

Default implementation runs discover() in a thread. Override if your SDK provides native async discovery.

Returns:

Type Description
List[str]

List of serial numbers for available scanners

discover_detailed staticmethod
discover_detailed() -> List[Dict[str, str]]

Discover scanners with detailed information.

Returns:

Type Description
List[Dict[str, str]]

List of dictionaries containing scanner information:
  • serial_number: Device serial number
  • model: Model name
  • vendor: Manufacturer name
  • Additional device-specific fields
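For illustration, here is how a caller might narrow a detailed discovery list down to one vendor. The device dictionaries below are made-up examples shaped like the fields described above; real results come from the hardware SDK.

```python
# Hypothetical discover_detailed()-style output (illustrative values only).
devices = [
    {"serial_number": "PHX-001", "model": "PhoXi 3D Scanner", "vendor": "Photoneo"},
    {"serial_number": "CAM-042", "model": "GenericCam", "vendor": "Other"},
]

# Keep only serial numbers of devices from the vendor we care about.
photoneo_serials = [d["serial_number"] for d in devices if d["vendor"] == "Photoneo"]
print(photoneo_serials)
```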
discover_detailed_async async classmethod
discover_detailed_async() -> List[Dict[str, str]]

Async wrapper for discover_detailed().

initialize abstractmethod async
initialize() -> bool

Initialize scanner connection.

This method should:
  1. Connect to the scanner hardware
  2. Apply default configuration
  3. Prepare for acquisition

Returns:

Type Description
bool

True if initialization successful

Raises:

Type Description
CameraNotFoundError

If scanner cannot be found

CameraConnectionError

If connection fails

SDKNotAvailableError

If required SDK is not available

capture abstractmethod async
capture(
    timeout_ms: int = 10000,
    enable_range: bool = True,
    enable_intensity: bool = True,
    enable_confidence: bool = False,
    enable_normal: bool = False,
    enable_color: bool = False,
) -> ScanResult

Capture 3D scan data with multiple components.

3D scanners can output multiple data types in a single capture:
  • Range/Depth: Z-distance from scanner to surface
  • Intensity: Grayscale texture/reflectance image
  • Confidence: Per-pixel quality/confidence values
  • Normal: Surface normal vectors
  • Color: RGB texture (if color camera available)

Parameters:

Name Type Description Default
timeout_ms int

Capture timeout in milliseconds

10000
enable_range bool

Whether to capture range/depth data

True
enable_intensity bool

Whether to capture intensity/texture data

True
enable_confidence bool

Whether to capture confidence/quality data

False
enable_normal bool

Whether to capture surface normal vectors

False
enable_color bool

Whether to capture color texture

False

Returns:

Type Description
ScanResult

ScanResult containing captured multi-component data

Raises:

Type Description
CameraConnectionError

If scanner not opened

CameraCaptureError

If capture fails

CameraTimeoutError

If capture times out

close abstractmethod async
close() -> None

Close scanner and release resources.

This method should:
  1. Stop any ongoing acquisition
  2. Release hardware handles
  3. Clean up Harvester/GenTL resources
  4. Clean up executors/threads if used

capture_point_cloud async
capture_point_cloud(
    include_colors: bool = True,
    include_confidence: bool = False,
    timeout_ms: int = 10000,
) -> PointCloudData

Capture and generate 3D point cloud.

Default implementation captures scan data and generates point cloud. Override for backend-specific optimization.

Parameters:

Name Type Description Default
include_colors bool

Whether to include color/intensity information

True
include_confidence bool

Whether to include confidence values

False
timeout_ms int

Capture timeout in milliseconds

10000

Returns:

Type Description
PointCloudData

PointCloudData with 3D points and optional attributes

Raises:

Type Description
CameraConnectionError

If scanner not opened

CameraCaptureError

If capture fails
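The range-to-point-cloud step that a default implementation performs can be sketched as a pinhole back-projection of the range image. The intrinsics below (fx, fy, cx, cy) are made-up values for illustration; a real backend obtains them from the device calibration.

```python
import numpy as np

# Back-project a range/depth image into 3D points with a pinhole camera model.
# Assumes depth holds Z in the camera frame; zero depth marks invalid pixels.
def range_to_points(depth: np.ndarray, fx: float, fy: float,
                    cx: float, cy: float) -> np.ndarray:
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    points = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    return points[points[:, 2] > 0]  # drop invalid (zero-depth) pixels

depth = np.zeros((4, 4), dtype=np.float32)
depth[1, 2] = 1000.0  # one valid pixel at 1000 mm
pts = range_to_points(depth, fx=2300.0, fy=2300.0, cx=2.0, cy=2.0)
print(pts.shape)
```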

get_capabilities async
get_capabilities() -> ScannerCapabilities

Get scanner capabilities and available settings.

Returns:

Type Description
ScannerCapabilities

ScannerCapabilities describing what features are available

Raises:

Type Description
CameraConnectionError

If scanner not opened

get_configuration async
get_configuration() -> ScannerConfiguration

Get current scanner configuration.

Returns:

Type Description
ScannerConfiguration

ScannerConfiguration with current settings

Raises:

Type Description
CameraConnectionError

If scanner not opened

set_configuration async
set_configuration(config: ScannerConfiguration) -> None

Apply scanner configuration.

Only non-None values in the configuration will be applied.

Parameters:

Name Type Description Default
config ScannerConfiguration

Configuration to apply

required

Raises:

Type Description
CameraConnectionError

If scanner not opened

CameraConfigurationError

If configuration fails
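The "only non-None values will be applied" rule can be sketched like this. `SketchConfiguration` and `apply_configuration` are stand-ins for illustration; the real ScannerConfiguration fields and apply logic differ.

```python
from dataclasses import dataclass, asdict
from typing import Optional

@dataclass
class SketchConfiguration:
    exposure_time_ms: Optional[float] = None
    led_power: Optional[int] = None
    trigger_mode: Optional[str] = None

def apply_configuration(current: dict, config: SketchConfiguration) -> dict:
    # Copy the current settings, overwriting only fields the caller actually set.
    updated = dict(current)
    for key, value in asdict(config).items():
        if value is not None:
            updated[key] = value
    return updated

current = {"exposure_time_ms": 10.24, "led_power": 2048, "trigger_mode": "Software"}
new = apply_configuration(current, SketchConfiguration(led_power=4095))
print(new)
```

This partial-update style lets callers change one setting without having to read back and restate the full configuration.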

set_exposure_time async
set_exposure_time(milliseconds: float) -> None

Set exposure time in milliseconds.

Parameters:

Name Type Description Default
milliseconds float

Exposure time in milliseconds

required

Raises:

Type Description
CameraConnectionError

If scanner not opened

CameraConfigurationError

If configuration fails

get_exposure_time async
get_exposure_time() -> float

Get current exposure time in milliseconds.

Returns:

Type Description
float

Current exposure time in milliseconds

Raises:

Type Description
CameraConnectionError

If scanner not opened

set_shutter_multiplier async
set_shutter_multiplier(multiplier: int) -> None

Set shutter multiplier (1-10).

Higher values increase exposure by capturing multiple patterns.

Parameters:

Name Type Description Default
multiplier int

Shutter multiplier value (1-10)

required
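As a rough illustration of the note above: with a shutter multiplier, total exposure scales approximately with the multiplier because each pattern is captured multiple times. The arithmetic below is a simplified model; actual timing is device-specific.

```python
# Simplified model: effective exposure ~= base exposure x shutter multiplier.
def effective_exposure_ms(base_ms: float, multiplier: int) -> float:
    if not 1 <= multiplier <= 10:
        raise ValueError("multiplier must be in 1-10")
    return base_ms * multiplier

print(effective_exposure_ms(10.24, 3))
```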
get_shutter_multiplier async
get_shutter_multiplier() -> int

Get current shutter multiplier.

set_operation_mode async
set_operation_mode(mode: str) -> None

Set scanner operation mode.

Parameters:

Name Type Description Default
mode str

Operation mode ('Camera', 'Scanner', 'Mode_2D')

required
get_operation_mode async
get_operation_mode() -> str

Get current operation mode.

set_coding_strategy async
set_coding_strategy(strategy: str) -> None

Set structured light coding strategy.

Parameters:

Name Type Description Default
strategy str

Coding strategy ('Normal', 'Interreflections', 'HighFrequency')

required
get_coding_strategy async
get_coding_strategy() -> str

Get current coding strategy.

set_coding_quality async
set_coding_quality(quality: str) -> None

Set scan quality/speed tradeoff.

Parameters:

Name Type Description Default
quality str

Quality preset ('Ultra', 'High', 'Fast')

required
get_coding_quality async
get_coding_quality() -> str

Get current coding quality.

set_led_power async
set_led_power(power: int) -> None

Set LED illumination power.

Parameters:

Name Type Description Default
power int

LED power level (typically 0-4095)

required
get_led_power async
get_led_power() -> int

Get current LED power level.

set_laser_power async
set_laser_power(power: int) -> None

Set laser/projector power.

Parameters:

Name Type Description Default
power int

Laser power level (typically 1-4095)

required
get_laser_power async
get_laser_power() -> int

Get current laser power level.

set_texture_source async
set_texture_source(source: str) -> None

Set texture/intensity data source.

Parameters:

Name Type Description Default
source str

Texture source ('LED', 'Computed', 'Laser', 'Focus', 'Color')

required
get_texture_source async
get_texture_source() -> str

Get current texture source.

set_output_topology async
set_output_topology(topology: str) -> None

Set point cloud output topology.

Parameters:

Name Type Description Default
topology str

Output topology ('Raw', 'RegularGrid', 'FullGrid')

required
get_output_topology async
get_output_topology() -> str

Get current output topology.

set_camera_space async
set_camera_space(space: str) -> None

Set coordinate system reference camera.

Parameters:

Name Type Description Default
space str

Camera space ('PrimaryCamera', 'ColorCamera')

required
get_camera_space async
get_camera_space() -> str

Get current camera space.

set_normals_estimation_radius async
set_normals_estimation_radius(radius: int) -> None

Set radius for surface normal estimation.

Parameters:

Name Type Description Default
radius int

Estimation radius (typically 0-4)

required
get_normals_estimation_radius async
get_normals_estimation_radius() -> int

Get current normals estimation radius.

set_max_inaccuracy async
set_max_inaccuracy(value: float) -> None

Set maximum allowed inaccuracy for point filtering.

Parameters:

Name Type Description Default
value float

Maximum inaccuracy (typically 0-100)

required
get_max_inaccuracy async
get_max_inaccuracy() -> float

Get current max inaccuracy setting.

set_hole_filling async
set_hole_filling(enabled: bool) -> None

Enable/disable hole filling in point cloud.

Parameters:

Name Type Description Default
enabled bool

Whether to enable hole filling

required
get_hole_filling async
get_hole_filling() -> bool

Get hole filling state.

set_calibration_volume_only async
set_calibration_volume_only(enabled: bool) -> None

Enable/disable filtering to calibration volume only.

Parameters:

Name Type Description Default
enabled bool

Whether to filter to calibration volume

required
get_calibration_volume_only async
get_calibration_volume_only() -> bool

Get calibration volume filtering state.

set_trigger_mode async
set_trigger_mode(mode: str) -> None

Set trigger mode.

Parameters:

Name Type Description Default
mode str

Trigger mode ('Software', 'Hardware', 'Continuous')

required
get_trigger_mode async
get_trigger_mode() -> str

Get current trigger mode.

set_hardware_trigger async
set_hardware_trigger(enabled: bool) -> None

Enable/disable hardware trigger.

Parameters:

Name Type Description Default
enabled bool

Whether to enable hardware trigger

required
get_hardware_trigger async
get_hardware_trigger() -> bool

Get hardware trigger state.

set_hardware_trigger_signal async
set_hardware_trigger_signal(signal: str) -> None

Set hardware trigger signal edge.

Parameters:

Name Type Description Default
signal str

Trigger signal edge ('Falling', 'Rising', 'Both')

required
get_hardware_trigger_signal async
get_hardware_trigger_signal() -> str

Get current hardware trigger signal setting.

set_maximum_fps async
set_maximum_fps(fps: float) -> None

Set maximum frames per second.

Parameters:

Name Type Description Default
fps float

Maximum FPS (typically 0-100)

required
get_maximum_fps async
get_maximum_fps() -> float

Get current maximum FPS setting.
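
The setters above are typically applied in sequence before triggering a capture. The sketch below illustrates the call pattern against a hypothetical `FakeScanner` stub (not part of the package — real code would use an opened scanner backend):

```python
import asyncio

class FakeScanner:
    """Hypothetical stand-in that records settings; illustrates the async setter API only."""
    def __init__(self):
        self.settings = {}

    async def set_operation_mode(self, mode: str) -> None:
        self.settings["operation_mode"] = mode

    async def set_coding_quality(self, quality: str) -> None:
        self.settings["coding_quality"] = quality

    async def set_led_power(self, power: int) -> None:
        self.settings["led_power"] = power

    async def set_maximum_fps(self, fps: float) -> None:
        self.settings["maximum_fps"] = fps

async def configure(scanner) -> None:
    # Apply scan settings before triggering a capture.
    await scanner.set_operation_mode("Camera")
    await scanner.set_coding_quality("High")
    await scanner.set_led_power(2048)
    await scanner.set_maximum_fps(10.0)

scanner = FakeScanner()
asyncio.run(configure(scanner))
print(scanner.settings["coding_quality"])  # High
```

Against a real backend the same sequence would be awaited on the opened scanner instance.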

core

Core 3D scanner interfaces and models.

AsyncScanner3D
AsyncScanner3D(backend)

Bases: Mindtrace

Async 3D scanner interface.

Provides high-level 3D scanning operations including multi-component capture and point cloud generation.

Usage

scanner = await AsyncScanner3D.open()
result = await scanner.capture()
print(result.range_shape)
await scanner.close()


Initialize async 3D scanner.

Parameters:

Name Type Description Default
backend

Backend instance (e.g., PhotoneoBackend)

required
name property
name: str

Get scanner name.

is_open property
is_open: bool

Check if scanner is open.

open async classmethod
open(name: Optional[str] = None) -> 'AsyncScanner3D'

Open and initialize a 3D scanner.

Parameters:

Name Type Description Default
name Optional[str]

Scanner identifier. Format: "Backend:serial_number" Supported backends: "Photoneo", "MockPhotoneo". If None, opens first available Photoneo scanner.

None

Returns:

Type Description
'AsyncScanner3D'

Initialized AsyncScanner3D instance

Raises:

Type Description
CameraNotFoundError

If scanner not found

CameraConnectionError

If connection fails

Examples:

>>> scanner = await AsyncScanner3D.open()
>>> scanner = await AsyncScanner3D.open("Photoneo:ABC123")
>>> scanner = await AsyncScanner3D.open("MockPhotoneo:MOCK-001")
close async
close() -> None

Close scanner and release resources.

capture async
capture(
    timeout_ms: int = 10000,
    enable_range: bool = True,
    enable_intensity: bool = True,
    enable_confidence: bool = False,
    enable_normal: bool = False,
    enable_color: bool = False,
) -> ScanResult

Capture multi-component 3D scan data.

Parameters:

Name Type Description Default
timeout_ms int

Capture timeout in milliseconds

10000
enable_range bool

Whether to capture range/depth data

True
enable_intensity bool

Whether to capture intensity data

True
enable_confidence bool

Whether to capture confidence data

False
enable_normal bool

Whether to capture surface normals

False
enable_color bool

Whether to capture color texture

False

Returns:

Type Description
ScanResult

ScanResult containing captured data

Raises:

Type Description
CameraConnectionError

If scanner not opened

CameraCaptureError

If capture fails

Examples:

>>> result = await scanner.capture()
>>> print(f"Range: {result.range_shape}")
>>> print(f"Intensity: {result.intensity_shape}")
capture_point_cloud async
capture_point_cloud(
    include_colors: bool = True,
    include_confidence: bool = False,
    downsample_factor: int = 1,
    timeout_ms: int = 10000,
) -> PointCloudData

Capture and generate 3D point cloud.

Parameters:

Name Type Description Default
include_colors bool

Whether to include color information

True
include_confidence bool

Whether to include confidence values

False
downsample_factor int

Downsampling factor (1 = no downsampling)

1
timeout_ms int

Capture timeout in milliseconds

10000

Returns:

Type Description
PointCloudData

PointCloudData with 3D points and optional attributes

Raises:

Type Description
CameraConnectionError

If scanner not opened

CameraCaptureError

If capture fails

Examples:

>>> point_cloud = await scanner.capture_point_cloud()
>>> print(f"Points: {point_cloud.num_points}")
>>> point_cloud.save_ply("output.ply")
get_capabilities async
get_capabilities() -> ScannerCapabilities

Get scanner capabilities and available settings.

Returns:

Type Description
ScannerCapabilities

ScannerCapabilities with available options and ranges

Examples:

>>> caps = await scanner.get_capabilities()
>>> print(f"Coding qualities: {caps.coding_qualities}")
get_configuration async
get_configuration() -> ScannerConfiguration

Get current scanner configuration.

Returns:

Type Description
ScannerConfiguration

ScannerConfiguration with current settings

Examples:

>>> config = await scanner.get_configuration()
>>> print(f"Exposure: {config.exposure_time}ms")
set_configuration async
set_configuration(config: ScannerConfiguration) -> None

Apply scanner configuration.

Only non-None values in the configuration will be applied.

Parameters:

Name Type Description Default
config ScannerConfiguration

Configuration to apply

required

Examples:

>>> config = ScannerConfiguration(exposure_time=15.0, coding_quality=CodingQuality.HIGH)
>>> await scanner.set_configuration(config)
set_exposure_time async
set_exposure_time(milliseconds: float) -> None

Set exposure time in milliseconds.

Parameters:

Name Type Description Default
milliseconds float

Exposure time in milliseconds

required

Raises:

Type Description
CameraConfigurationError

If configuration fails

Examples:

>>> await scanner.set_exposure_time(10.24)  # 10.24ms exposure
get_exposure_time async
get_exposure_time() -> float

Get current exposure time in milliseconds.

Returns:

Type Description
float

Current exposure time in milliseconds

Raises:

Type Description
CameraConnectionError

If scanner not opened

Examples:

>>> exposure = await scanner.get_exposure_time()
>>> print(f"Exposure: {exposure}ms")
set_trigger_mode async
set_trigger_mode(mode: str) -> None

Set trigger mode.

Parameters:

Name Type Description Default
mode str

Trigger mode ("Continuous", "Software", or "Hardware")

required

Raises:

Type Description
CameraConfigurationError

If configuration fails

Examples:

>>> await scanner.set_trigger_mode("Software")
get_trigger_mode async
get_trigger_mode() -> str

Get current trigger mode.

Returns:

Type Description
str

"Continuous", "Software", or "Hardware"

Raises:

Type Description
CameraConnectionError

If scanner not opened

Examples:

>>> mode = await scanner.get_trigger_mode()
>>> print(f"Mode: {mode}")
CameraSpace

Bases: Enum

Coordinate system reference camera.

CodingQuality

Bases: Enum

Scan quality/speed tradeoff.

CodingStrategy

Bases: Enum

Structured light coding strategy.

CoordinateMap dataclass
CoordinateMap(
    x_map: Optional[ndarray] = None,
    y_map: Optional[ndarray] = None,
    width: int = 0,
    height: int = 0,
    scale: float = 1.0,
    offset: float = 0.0,
    is_valid: bool = False,
)

Coordinate map for efficient point cloud generation.

Photoneo devices can provide pre-computed coordinate maps that allow efficient conversion from range-only data to full 3D point clouds. This enables faster transfers (only Z data) with local point cloud computation.

Attributes:

Name Type Description
x_map Optional[ndarray]

X coordinate map (H, W) - multiply by range to get X

y_map Optional[ndarray]

Y coordinate map (H, W) - multiply by range to get Y

width int

Map width in pixels

height int

Map height in pixels

scale float

Coordinate scale factor

offset float

Coordinate offset

is_valid bool

Whether the map has been initialized

from_projected_c classmethod
from_projected_c(
    projected_c: ndarray,
    width: int,
    height: int,
    scale: float = 1.0,
    offset: float = 0.0,
) -> "CoordinateMap"

Create coordinate map from Photoneo ProjectedC component.

The ProjectedC component contains pre-computed X,Y coordinates that can be cached and reused for faster point cloud generation.

Parameters:

Name Type Description Default
projected_c ndarray

ProjectedC data from Photoneo (H, W, 3) float32

required
width int

Image width

required
height int

Image height

required
scale float

Coordinate scale factor

1.0
offset float

Coordinate offset

0.0

Returns:

Type Description
'CoordinateMap'

CoordinateMap instance

compute_point_cloud
compute_point_cloud(
    range_map: ndarray, valid_mask: Optional[ndarray] = None
) -> np.ndarray

Compute 3D point cloud from range map using cached coordinates.

Parameters:

Name Type Description Default
range_map ndarray

Depth/range map (H, W)

required
valid_mask Optional[ndarray]

Optional mask of valid pixels

None

Returns:

Type Description
ndarray

Point cloud array (N, 3) with X, Y, Z coordinates
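
The x_map/y_map relationship described above (multiply each map by the range value to recover X and Y) can be sketched with NumPy. This is an illustration of the documented math with hypothetical 2x2 data, not the package's internal implementation:

```python
import numpy as np

# Hypothetical coordinate maps and a range map; a range of 0 marks an invalid pixel.
x_map = np.array([[0.0, 0.1], [0.0, 0.1]])
y_map = np.array([[0.0, 0.0], [0.1, 0.1]])
range_map = np.array([[2.0, 2.0], [0.0, 4.0]])

valid = range_map > 0
points = np.stack(
    [x_map[valid] * range_map[valid],   # X = x_map * range
     y_map[valid] * range_map[valid],   # Y = y_map * range
     range_map[valid]],                 # Z = range
    axis=1,
)
print(points.shape)  # (3, 3): three valid pixels, xyz each
```

Because the maps depend only on the calibration, they can be cached once and reused across frames, which is what makes range-only transfers efficient.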

HardwareTriggerSignal

Bases: Enum

Hardware trigger signal edge.

OperationMode

Bases: Enum

Scanner operation mode.

OutputTopology

Bases: Enum

Point cloud output topology.

PointCloudData dataclass
PointCloudData(
    points: ndarray,
    colors: Optional[ndarray] = None,
    normals: Optional[ndarray] = None,
    confidence: Optional[ndarray] = None,
    num_points: int = 0,
    has_colors: bool = False,
)

3D point cloud data with optional attributes.

Attributes:

Name Type Description
points ndarray

Array of 3D points (N, 3) - (x, y, z) in meters

colors Optional[ndarray]

Optional RGB colors (N, 3) - values in [0, 1]

normals Optional[ndarray]

Optional surface normals (N, 3) - unit vectors

confidence Optional[ndarray]

Optional per-point confidence (N,)

num_points int

Number of valid points

has_colors bool

Flag indicating if color information is present

has_normals property
has_normals: bool

Check if normal information is present.

has_confidence property
has_confidence: bool

Check if confidence information is present.

save_ply
save_ply(path: str, binary: bool = True) -> None

Save point cloud as PLY file.

Parameters:

Name Type Description Default
path str

Output file path

required
binary bool

If True, save in binary format; otherwise ASCII

True

Raises:

Type Description
ImportError

If plyfile is not installed
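
save_ply delegates to the plyfile package, but the ASCII PLY layout it targets is simple enough to write by hand. The sketch below is illustrative only — the file produced by save_ply may include additional properties such as colors or normals:

```python
import numpy as np

def write_ascii_ply(path: str, points: np.ndarray) -> None:
    """Write an (N, 3) float array as a minimal ASCII PLY file."""
    header = "\n".join([
        "ply",
        "format ascii 1.0",
        f"element vertex {len(points)}",
        "property float x",
        "property float y",
        "property float z",
        "end_header",
    ])
    with open(path, "w") as f:
        f.write(header + "\n")
        # One vertex per line, x y z separated by spaces.
        for x, y, z in points:
            f.write(f"{x} {y} {z}\n")

write_ascii_ply("cloud.ply", np.zeros((2, 3)))
```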

downsample
downsample(factor: int) -> 'PointCloudData'

Downsample point cloud by given factor.

Parameters:

Name Type Description Default
factor int

Downsampling factor (e.g., 2 = keep every 2nd point)

required

Returns:

Type Description
'PointCloudData'

New PointCloudData with downsampled data

filter_by_confidence
filter_by_confidence(min_confidence: float) -> 'PointCloudData'

Filter points by minimum confidence threshold.

Parameters:

Name Type Description Default
min_confidence float

Minimum confidence value (0.0 to 1.0)

required

Returns:

Type Description
'PointCloudData'

New PointCloudData with filtered points

Raises:

Type Description
ValueError

If no confidence data available
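
The downsample and confidence-filter semantics can be sketched with plain NumPy slicing and masking. This illustrates the documented behavior on hypothetical data, not the library's own code:

```python
import numpy as np

points = np.arange(30, dtype=float).reshape(10, 3)  # 10 hypothetical points
confidence = np.arange(10) / 10                     # per-point confidence 0.0..0.9

# downsample(factor=2): keep every 2nd point
downsampled = points[::2]
print(len(downsampled))  # 5

# filter_by_confidence(0.5): keep points meeting the threshold
kept = points[confidence >= 0.5]
print(len(kept))  # 5
```

Both methods return a new PointCloudData rather than mutating in place, so the original cloud remains available.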

ScanComponent

Bases: Enum

Available scan components from 3D scanners.

ScannerCapabilities dataclass
ScannerCapabilities(
    has_range: bool = True,
    has_intensity: bool = False,
    has_confidence: bool = False,
    has_normal: bool = False,
    has_color: bool = False,
    operation_modes: list = list(),
    coding_strategies: list = list(),
    coding_qualities: list = list(),
    texture_sources: list = list(),
    output_topologies: list = list(),
    exposure_range: Optional[tuple] = None,
    led_power_range: Optional[tuple] = None,
    laser_power_range: Optional[tuple] = None,
    fps_range: Optional[tuple] = None,
    depth_resolution: Optional[tuple] = None,
    color_resolution: Optional[tuple] = None,
    model: str = "",
    serial_number: str = "",
    firmware_version: str = "",
)

Describes the capabilities of a 3D scanner.

Used to query what features and settings are available on a specific scanner.

ScannerConfiguration dataclass
ScannerConfiguration(
    operation_mode: Optional[OperationMode] = None,
    coding_strategy: Optional[CodingStrategy] = None,
    coding_quality: Optional[CodingQuality] = None,
    maximum_fps: Optional[float] = None,
    exposure_time: Optional[float] = None,
    single_pattern_exposure: Optional[float] = None,
    shutter_multiplier: Optional[int] = None,
    scan_multiplier: Optional[int] = None,
    color_exposure: Optional[float] = None,
    led_power: Optional[int] = None,
    laser_power: Optional[int] = None,
    texture_source: Optional[TextureSource] = None,
    camera_texture_source: Optional[TextureSource] = None,
    output_topology: Optional[OutputTopology] = None,
    camera_space: Optional[CameraSpace] = None,
    normals_estimation_radius: Optional[int] = None,
    max_inaccuracy: Optional[float] = None,
    calibration_volume_only: Optional[bool] = None,
    hole_filling: Optional[bool] = None,
    trigger_mode: Optional[TriggerMode] = None,
    hardware_trigger: Optional[bool] = None,
    hardware_trigger_signal: Optional[HardwareTriggerSignal] = None,
)

Configuration settings for 3D scanners.

Groups all configurable parameters for structured light scanners. Not all parameters may be available on all scanner models.

to_dict
to_dict() -> Dict[str, Any]

Convert to dictionary, excluding None values.

from_dict classmethod
from_dict(data: Dict[str, Any]) -> 'ScannerConfiguration'

Create configuration from dictionary.
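
The to_dict/from_dict contract — only non-None fields are serialized, and the dictionary round-trips back to an equal configuration — can be sketched with a hypothetical cut-down dataclass (MiniConfig is not part of the package):

```python
from dataclasses import dataclass, asdict
from typing import Optional

@dataclass
class MiniConfig:
    exposure_time: Optional[float] = None
    led_power: Optional[int] = None

    def to_dict(self) -> dict:
        # Exclude None so only explicitly set parameters are serialized.
        return {k: v for k, v in asdict(self).items() if v is not None}

    @classmethod
    def from_dict(cls, data: dict) -> "MiniConfig":
        return cls(**data)

cfg = MiniConfig(exposure_time=10.24)
d = cfg.to_dict()
print(d)  # {'exposure_time': 10.24}
print(MiniConfig.from_dict(d) == cfg)  # True
```

This pairs naturally with set_configuration, which likewise applies only the non-None values.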

ScanResult dataclass
ScanResult(
    range_map: Optional[ndarray] = None,
    intensity: Optional[ndarray] = None,
    confidence: Optional[ndarray] = None,
    normal_map: Optional[ndarray] = None,
    color: Optional[ndarray] = None,
    timestamp: float = 0.0,
    frame_number: int = 0,
    components_enabled: Dict[ScanComponent, bool] = dict(),
    metadata: Dict[str, Union[str, int, float]] = dict(),
)

Result from 3D scanner capture containing multi-component data.

Attributes:

Name Type Description
range_map Optional[ndarray]

Depth/range map - typically uint16 or float32 (H, W)

intensity Optional[ndarray]

Intensity image - uint8 or uint16 (H, W) or (H, W, 3)

confidence Optional[ndarray]

Confidence map - uint8 or uint16 (H, W), values indicate quality

normal_map Optional[ndarray]

Surface normals - float32 (H, W, 3), xyz components

color Optional[ndarray]

Color texture - uint8 (H, W, 3) RGB

timestamp float

Capture timestamp in seconds (from device or system)

frame_number int

Sequential frame number

components_enabled Dict[ScanComponent, bool]

Dict of which components were captured

metadata Dict[str, Union[str, int, float]]

Additional scan metadata (exposure, gain, etc.)

has_range property
has_range: bool

Check if range data is present.

has_intensity property
has_intensity: bool

Check if intensity data is present.

has_confidence property
has_confidence: bool

Check if confidence data is present.

has_normals property
has_normals: bool

Check if normal map is present.

has_color property
has_color: bool

Check if color data is present.

range_shape property
range_shape: tuple

Get shape of range map.

intensity_shape property
intensity_shape: tuple

Get shape of intensity image.

get_valid_mask
get_valid_mask(min_confidence: int = 0) -> np.ndarray

Get mask of valid pixels based on range and confidence.

Parameters:

Name Type Description Default
min_confidence int

Minimum confidence threshold (0-255 typical)

0

Returns:

Type Description
ndarray

Boolean mask (H, W) where True indicates valid pixel
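
The validity rule described above — a pixel is valid when it has nonzero range and its confidence meets the threshold — can be sketched with NumPy on hypothetical 2x2 data (an illustration of the documented rule, not the library's implementation):

```python
import numpy as np

range_map = np.array([[0, 120], [300, 450]], dtype=np.uint16)
confidence = np.array([[255, 10], [200, 90]], dtype=np.uint8)

min_confidence = 50
# Valid where range is nonzero AND confidence meets the threshold.
mask = (range_map > 0) & (confidence >= min_confidence)
print(mask.tolist())  # [[False, False], [True, True]]
```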

TextureSource

Bases: Enum

Source for texture/intensity data.

TriggerMode

Bases: Enum

Acquisition trigger mode.

Scanner3D
Scanner3D(
    async_scanner: Optional[AsyncScanner3D] = None,
    loop: Optional[AbstractEventLoop] = None,
    name: Optional[str] = None,
    **kwargs
)

Bases: Mindtrace

Synchronous wrapper around AsyncScanner3D.

All operations are executed on a background event loop. This provides a simple synchronous API for 3D scanner operations.

Usage

scanner = Scanner3D()
result = scanner.capture()
print(result.range_shape)
scanner.close()

Or with context manager

with Scanner3D() as scanner:
    result = scanner.capture()
    print(result.range_shape)

Create a synchronous 3D scanner wrapper.

Parameters:

Name Type Description Default
async_scanner Optional[AsyncScanner3D]

Existing AsyncScanner3D instance

None
loop Optional[AbstractEventLoop]

Event loop to use for async operations

None
name Optional[str]

Scanner identifier. Format: "Photoneo:serial_number" If None, opens first available scanner.

None
**kwargs

Additional arguments passed to Mindtrace

{}

Examples:

>>> # Simple usage - opens first available
>>> scanner = Scanner3D()
>>> # Open specific scanner
>>> scanner = Scanner3D(name="Photoneo:ABC123")
>>> # Use existing async scanner
>>> async_scan = await AsyncScanner3D.open()
>>> sync_scan = Scanner3D(async_scanner=async_scan, loop=loop)
name property
name: str

Get scanner name.

Returns:

Type Description
str

Scanner name in format "Backend:serial_number"

is_open property
is_open: bool

Check if scanner is open.

Returns:

Type Description
bool

True if scanner is open, False otherwise

close
close() -> None

Close scanner and release resources.

Examples:

>>> scanner = Scanner3D()
>>> # ... use scanner ...
>>> scanner.close()
capture
capture(
    timeout_ms: int = 10000,
    enable_range: bool = True,
    enable_intensity: bool = True,
    enable_confidence: bool = False,
    enable_normal: bool = False,
    enable_color: bool = False,
) -> ScanResult

Capture multi-component 3D scan data.

Parameters:

Name Type Description Default
timeout_ms int

Capture timeout in milliseconds

10000
enable_range bool

Whether to capture range/depth data

True
enable_intensity bool

Whether to capture intensity data

True
enable_confidence bool

Whether to capture confidence data

False
enable_normal bool

Whether to capture surface normals

False
enable_color bool

Whether to capture color texture

False

Returns:

Type Description
ScanResult

ScanResult containing captured data

Raises:

Type Description
CameraConnectionError

If scanner not opened

CameraCaptureError

If capture fails

Examples:

>>> scanner = Scanner3D()
>>> result = scanner.capture()
>>> print(f"Range: {result.range_shape}")
>>> scanner.close()
capture_point_cloud
capture_point_cloud(
    include_colors: bool = True,
    include_confidence: bool = False,
    downsample_factor: int = 1,
    timeout_ms: int = 10000,
) -> PointCloudData

Capture and generate 3D point cloud.

Parameters:

Name Type Description Default
include_colors bool

Whether to include color information

True
include_confidence bool

Whether to include confidence values

False
downsample_factor int

Downsampling factor (1 = no downsampling)

1
timeout_ms int

Capture timeout in milliseconds

10000

Returns:

Type Description
PointCloudData

PointCloudData with 3D points and optional attributes

Raises:

Type Description
CameraConnectionError

If scanner not opened

CameraCaptureError

If capture fails

Examples:

>>> scanner = Scanner3D()
>>> point_cloud = scanner.capture_point_cloud()
>>> print(f"Points: {point_cloud.num_points}")
>>> point_cloud.save_ply("output.ply")
>>> scanner.close()
set_exposure_time
set_exposure_time(microseconds: float) -> None

Set exposure time in microseconds.

Parameters:

Name Type Description Default
microseconds float

Exposure time in microseconds (e.g., 5000 = 5ms)

required

Raises:

Type Description
CameraConfigurationError

If configuration fails

Examples:

>>> scanner = Scanner3D()
>>> scanner.set_exposure_time(5000)
>>> scanner.close()
get_exposure_time
get_exposure_time() -> float

Get current exposure time in microseconds.

Returns:

Type Description
float

Current exposure time in microseconds

Raises:

Type Description
CameraConnectionError

If scanner not opened

Examples:

>>> scanner = Scanner3D()
>>> exposure = scanner.get_exposure_time()
>>> print(f"Exposure: {exposure}μs")
>>> scanner.close()
set_trigger_mode
set_trigger_mode(mode: str) -> None

Set trigger mode.

Parameters:

Name Type Description Default
mode str

Trigger mode ("continuous" or "software")

required

Raises:

Type Description
CameraConfigurationError

If configuration fails

Examples:

>>> scanner = Scanner3D()
>>> scanner.set_trigger_mode("software")
>>> scanner.close()
get_trigger_mode
get_trigger_mode() -> str

Get current trigger mode.

Returns:

Type Description
str

"continuous" or "software"

Raises:

Type Description
CameraConnectionError

If scanner not opened

Examples:

>>> scanner = Scanner3D()
>>> mode = scanner.get_trigger_mode()
>>> print(f"Mode: {mode}")
>>> scanner.close()
async_scanner_3d

Async 3D scanner interface providing high-level scanning operations.

AsyncScanner3D
AsyncScanner3D(backend)

Bases: Mindtrace

Async 3D scanner interface.

Provides high-level 3D scanning operations including multi-component capture and point cloud generation.

Usage

scanner = await AsyncScanner3D.open()
result = await scanner.capture()
print(result.range_shape)
await scanner.close()

Initialize async 3D scanner.

Parameters:

Name Type Description Default
backend

Backend instance (e.g., PhotoneoBackend)

required
name property
name: str

Get scanner name.

is_open property
is_open: bool

Check if scanner is open.

open async classmethod
open(name: Optional[str] = None) -> 'AsyncScanner3D'

Open and initialize a 3D scanner.

Parameters:

Name Type Description Default
name Optional[str]

Scanner identifier. Format: "Backend:serial_number" Supported backends: "Photoneo", "MockPhotoneo". If None, opens first available Photoneo scanner.

None

Returns:

Type Description
'AsyncScanner3D'

Initialized AsyncScanner3D instance

Raises:

Type Description
CameraNotFoundError

If scanner not found

CameraConnectionError

If connection fails

Examples:

>>> scanner = await AsyncScanner3D.open()
>>> scanner = await AsyncScanner3D.open("Photoneo:ABC123")
>>> scanner = await AsyncScanner3D.open("MockPhotoneo:MOCK-001")
close async
close() -> None

Close scanner and release resources.

capture async
capture(
    timeout_ms: int = 10000,
    enable_range: bool = True,
    enable_intensity: bool = True,
    enable_confidence: bool = False,
    enable_normal: bool = False,
    enable_color: bool = False,
) -> ScanResult

Capture multi-component 3D scan data.

Parameters:

Name Type Description Default
timeout_ms int

Capture timeout in milliseconds

10000
enable_range bool

Whether to capture range/depth data

True
enable_intensity bool

Whether to capture intensity data

True
enable_confidence bool

Whether to capture confidence data

False
enable_normal bool

Whether to capture surface normals

False
enable_color bool

Whether to capture color texture

False

Returns:

Type Description
ScanResult

ScanResult containing captured data

Raises:

Type Description
CameraConnectionError

If scanner not opened

CameraCaptureError

If capture fails

Examples:

>>> result = await scanner.capture()
>>> print(f"Range: {result.range_shape}")
>>> print(f"Intensity: {result.intensity_shape}")
capture_point_cloud async
capture_point_cloud(
    include_colors: bool = True,
    include_confidence: bool = False,
    downsample_factor: int = 1,
    timeout_ms: int = 10000,
) -> PointCloudData

Capture and generate 3D point cloud.

Parameters:

Name Type Description Default
include_colors bool

Whether to include color information

True
include_confidence bool

Whether to include confidence values

False
downsample_factor int

Downsampling factor (1 = no downsampling)

1
timeout_ms int

Capture timeout in milliseconds

10000

Returns:

Type Description
PointCloudData

PointCloudData with 3D points and optional attributes

Raises:

Type Description
CameraConnectionError

If scanner not opened

CameraCaptureError

If capture fails

Examples:

>>> point_cloud = await scanner.capture_point_cloud()
>>> print(f"Points: {point_cloud.num_points}")
>>> point_cloud.save_ply("output.ply")
get_capabilities async
get_capabilities() -> ScannerCapabilities

Get scanner capabilities and available settings.

Returns:

Type Description
ScannerCapabilities

ScannerCapabilities with available options and ranges

Examples:

>>> caps = await scanner.get_capabilities()
>>> print(f"Coding qualities: {caps.coding_qualities}")
get_configuration async
get_configuration() -> ScannerConfiguration

Get current scanner configuration.

Returns:

Type Description
ScannerConfiguration

ScannerConfiguration with current settings

Examples:

>>> config = await scanner.get_configuration()
>>> print(f"Exposure: {config.exposure_time}ms")
set_configuration async
set_configuration(config: ScannerConfiguration) -> None

Apply scanner configuration.

Only non-None values in the configuration will be applied.

Parameters:

Name Type Description Default
config ScannerConfiguration

Configuration to apply

required

Examples:

>>> config = ScannerConfiguration(exposure_time=15.0, coding_quality=CodingQuality.HIGH)
>>> await scanner.set_configuration(config)
set_exposure_time async
set_exposure_time(milliseconds: float) -> None

Set exposure time in milliseconds.

Parameters:

Name Type Description Default
milliseconds float

Exposure time in milliseconds

required

Raises:

Type Description
CameraConfigurationError

If configuration fails

Examples:

>>> await scanner.set_exposure_time(10.24)  # 10.24ms exposure
get_exposure_time async
get_exposure_time() -> float

Get current exposure time in milliseconds.

Returns:

Type Description
float

Current exposure time in milliseconds

Raises:

Type Description
CameraConnectionError

If scanner not opened

Examples:

>>> exposure = await scanner.get_exposure_time()
>>> print(f"Exposure: {exposure}ms")
set_trigger_mode async
set_trigger_mode(mode: str) -> None

Set trigger mode.

Parameters:

Name Type Description Default
mode str

Trigger mode ("Continuous", "Software", or "Hardware")

required

Raises:

Type Description
CameraConfigurationError

If configuration fails

Examples:

>>> await scanner.set_trigger_mode("Software")
get_trigger_mode async
get_trigger_mode() -> str

Get current trigger mode.

Returns:

Type Description
str

"Continuous", "Software", or "Hardware"

Raises:

Type Description
CameraConnectionError

If scanner not opened

Examples:

>>> mode = await scanner.get_trigger_mode()
>>> print(f"Mode: {mode}")
models

Data models for 3D scanner operations.

This module provides data structures for handling 3D scanner data including multi-component scan results, coordinate maps, and point clouds.

Designed for structured light scanners like Photoneo PhoXi, but extensible for other 3D scanning technologies (ToF, LiDAR, etc.).

ScanComponent

Bases: Enum

Available scan components from 3D scanners.

OperationMode

Bases: Enum

Scanner operation mode.

CodingStrategy

Bases: Enum

Structured light coding strategy.

CodingQuality

Bases: Enum

Scan quality/speed tradeoff.

TextureSource

Bases: Enum

Source for texture/intensity data.

OutputTopology

Bases: Enum

Point cloud output topology.

CameraSpace

Bases: Enum

Coordinate system reference camera.

TriggerMode

Bases: Enum

Acquisition trigger mode.

HardwareTriggerSignal

Bases: Enum

Hardware trigger signal edge.

ScannerConfiguration dataclass
ScannerConfiguration(
    operation_mode: Optional[OperationMode] = None,
    coding_strategy: Optional[CodingStrategy] = None,
    coding_quality: Optional[CodingQuality] = None,
    maximum_fps: Optional[float] = None,
    exposure_time: Optional[float] = None,
    single_pattern_exposure: Optional[float] = None,
    shutter_multiplier: Optional[int] = None,
    scan_multiplier: Optional[int] = None,
    color_exposure: Optional[float] = None,
    led_power: Optional[int] = None,
    laser_power: Optional[int] = None,
    texture_source: Optional[TextureSource] = None,
    camera_texture_source: Optional[TextureSource] = None,
    output_topology: Optional[OutputTopology] = None,
    camera_space: Optional[CameraSpace] = None,
    normals_estimation_radius: Optional[int] = None,
    max_inaccuracy: Optional[float] = None,
    calibration_volume_only: Optional[bool] = None,
    hole_filling: Optional[bool] = None,
    trigger_mode: Optional[TriggerMode] = None,
    hardware_trigger: Optional[bool] = None,
    hardware_trigger_signal: Optional[HardwareTriggerSignal] = None,
)

Configuration settings for 3D scanners.

Groups all configurable parameters for structured light scanners. Not all parameters may be available on all scanner models.

to_dict
to_dict() -> Dict[str, Any]

Convert to dictionary, excluding None values.

from_dict classmethod
from_dict(data: Dict[str, Any]) -> 'ScannerConfiguration'

Create configuration from dictionary.

ScannerCapabilities dataclass
ScannerCapabilities(
    has_range: bool = True,
    has_intensity: bool = False,
    has_confidence: bool = False,
    has_normal: bool = False,
    has_color: bool = False,
    operation_modes: list = list(),
    coding_strategies: list = list(),
    coding_qualities: list = list(),
    texture_sources: list = list(),
    output_topologies: list = list(),
    exposure_range: Optional[tuple] = None,
    led_power_range: Optional[tuple] = None,
    laser_power_range: Optional[tuple] = None,
    fps_range: Optional[tuple] = None,
    depth_resolution: Optional[tuple] = None,
    color_resolution: Optional[tuple] = None,
    model: str = "",
    serial_number: str = "",
    firmware_version: str = "",
)

Describes the capabilities of a 3D scanner.

Used to query what features and settings are available on a specific scanner.

ScanResult dataclass
ScanResult(
    range_map: Optional[ndarray] = None,
    intensity: Optional[ndarray] = None,
    confidence: Optional[ndarray] = None,
    normal_map: Optional[ndarray] = None,
    color: Optional[ndarray] = None,
    timestamp: float = 0.0,
    frame_number: int = 0,
    components_enabled: Dict[ScanComponent, bool] = dict(),
    metadata: Dict[str, Union[str, int, float]] = dict(),
)

Result from 3D scanner capture containing multi-component data.

Attributes:

Name Type Description
range_map Optional[ndarray]

Depth/range map - typically uint16 or float32 (H, W)

intensity Optional[ndarray]

Intensity image - uint8 or uint16 (H, W) or (H, W, 3)

confidence Optional[ndarray]

Confidence map - uint8 or uint16 (H, W), values indicate quality

normal_map Optional[ndarray]

Surface normals - float32 (H, W, 3), xyz components

color Optional[ndarray]

Color texture - uint8 (H, W, 3) RGB

timestamp float

Capture timestamp in seconds (from device or system)

frame_number int

Sequential frame number

components_enabled Dict[ScanComponent, bool]

Dict of which components were captured

metadata Dict[str, Union[str, int, float]]

Additional scan metadata (exposure, gain, etc.)

has_range property
has_range: bool

Check if range data is present.

has_intensity property
has_intensity: bool

Check if intensity data is present.

has_confidence property
has_confidence: bool

Check if confidence data is present.

has_normals property
has_normals: bool

Check if normal map is present.

has_color property
has_color: bool

Check if color data is present.

range_shape property
range_shape: tuple

Get shape of range map.

intensity_shape property
intensity_shape: tuple

Get shape of intensity image.

get_valid_mask
get_valid_mask(min_confidence: int = 0) -> np.ndarray

Get mask of valid pixels based on range and confidence.

Parameters:

Name Type Description Default
min_confidence int

Minimum confidence threshold (0-255 typical)

0

Returns:

Type Description
ndarray

Boolean mask (H, W) where True indicates valid pixel

CoordinateMap dataclass
CoordinateMap(
    x_map: Optional[ndarray] = None,
    y_map: Optional[ndarray] = None,
    width: int = 0,
    height: int = 0,
    scale: float = 1.0,
    offset: float = 0.0,
    is_valid: bool = False,
)

Coordinate map for efficient point cloud generation.

Photoneo devices can provide pre-computed coordinate maps that allow efficient conversion from range-only data to full 3D point clouds. This enables faster transfers (only Z data) with local point cloud computation.

Attributes:

Name Type Description
x_map Optional[ndarray]

X coordinate map (H, W) - multiply by range to get X

y_map Optional[ndarray]

Y coordinate map (H, W) - multiply by range to get Y

width int

Map width in pixels

height int

Map height in pixels

scale float

Coordinate scale factor

offset float

Coordinate offset

is_valid bool

Whether the map has been initialized

from_projected_c classmethod
from_projected_c(
    projected_c: ndarray,
    width: int,
    height: int,
    scale: float = 1.0,
    offset: float = 0.0,
) -> "CoordinateMap"

Create coordinate map from Photoneo ProjectedC component.

The ProjectedC component contains pre-computed X,Y coordinates that can be cached and reused for faster point cloud generation.

Parameters:

Name Type Description Default
projected_c ndarray

ProjectedC data from Photoneo (H, W, 3) float32

required
width int

Image width

required
height int

Image height

required
scale float

Coordinate scale factor

1.0
offset float

Coordinate offset

0.0

Returns:

Type Description
'CoordinateMap'

CoordinateMap instance

compute_point_cloud
compute_point_cloud(
    range_map: ndarray, valid_mask: Optional[ndarray] = None
) -> np.ndarray

Compute 3D point cloud from range map using cached coordinates.

Parameters:

Name Type Description Default
range_map ndarray

Depth/range map (H, W)

required
valid_mask Optional[ndarray]

Optional mask of valid pixels

None

Returns:

Type Description
ndarray

Point cloud array (N, 3) with X, Y, Z coordinates
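Following the x_map/y_map attribute descriptions above ("multiply by range to get X/Y"), the computation can be sketched as below; this is an assumption about the approach, not the shipped implementation:

```python
import numpy as np
from typing import Optional

def point_cloud_from_range(x_map: np.ndarray,
                           y_map: np.ndarray,
                           range_map: np.ndarray,
                           valid_mask: Optional[np.ndarray] = None) -> np.ndarray:
    # X = x_map * range, Y = y_map * range, Z = range itself;
    # returns an (N, 3) array containing only the valid pixels.
    if valid_mask is None:
        valid_mask = np.isfinite(range_map) & (range_map > 0)
    z = range_map[valid_mask]
    x = x_map[valid_mask] * z
    y = y_map[valid_mask] * z
    return np.stack([x, y, z], axis=-1)

h, w = 4, 5
x_map = np.random.rand(h, w).astype(np.float32)
y_map = np.random.rand(h, w).astype(np.float32)
rng = np.ones((h, w), dtype=np.float32)
pts = point_cloud_from_range(x_map, y_map, rng)
print(pts.shape)  # (20, 3)
```

Caching x_map/y_map and transferring only Z data is what makes this faster than receiving full XYZ frames from the device.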

PointCloudData dataclass
PointCloudData(
    points: ndarray,
    colors: Optional[ndarray] = None,
    normals: Optional[ndarray] = None,
    confidence: Optional[ndarray] = None,
    num_points: int = 0,
    has_colors: bool = False,
)

3D point cloud data with optional attributes.

Attributes:

Name Type Description
points ndarray

Array of 3D points (N, 3) - (x, y, z) in meters

colors Optional[ndarray]

Optional RGB colors (N, 3) - values in [0, 1]

normals Optional[ndarray]

Optional surface normals (N, 3) - unit vectors

confidence Optional[ndarray]

Optional per-point confidence (N,)

num_points int

Number of valid points

has_colors bool

Flag indicating if color information is present

has_normals property
has_normals: bool

Check if normal information is present.

has_confidence property
has_confidence: bool

Check if confidence information is present.

save_ply
save_ply(path: str, binary: bool = True) -> None

Save point cloud as PLY file.

Parameters:

Name Type Description Default
path str

Output file path

required
binary bool

If True, save in binary format; otherwise ASCII

True

Raises:

Type Description
ImportError

If plyfile is not installed
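For reference, the ASCII variant of the PLY format is simple enough to write by hand. This dependency-free sketch shows roughly what save_ply(binary=False) produces; the real method uses the plyfile package (hence the ImportError above):

```python
import os
import tempfile
import numpy as np

def write_ascii_ply(path: str, points: np.ndarray) -> None:
    # Minimal ASCII PLY writer for an (N, 3) float point array.
    header = "\n".join([
        "ply",
        "format ascii 1.0",
        f"element vertex {len(points)}",
        "property float x",
        "property float y",
        "property float z",
        "end_header",
    ])
    with open(path, "w") as f:
        f.write(header + "\n")
        np.savetxt(f, points, fmt="%.6f")

pts = np.array([[0.0, 0.0, 1.0], [0.1, 0.2, 1.5]], dtype=np.float32)
out = os.path.join(tempfile.mkdtemp(), "cloud.ply")
write_ascii_ply(out, pts)
print(open(out).read().splitlines()[0])  # "ply"
```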

downsample
downsample(factor: int) -> 'PointCloudData'

Downsample point cloud by given factor.

Parameters:

Name Type Description Default
factor int

Downsampling factor (e.g., 2 = keep every 2nd point)

required

Returns:

Type Description
'PointCloudData'

New PointCloudData with downsampled data

filter_by_confidence
filter_by_confidence(min_confidence: float) -> 'PointCloudData'

Filter points by minimum confidence threshold.

Parameters:

Name Type Description Default
min_confidence float

Minimum confidence value (0.0 to 1.0)

required

Returns:

Type Description
'PointCloudData'

New PointCloudData with filtered points

Raises:

Type Description
ValueError

If no confidence data available
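Both operations reduce to simple NumPy indexing. A hedged sketch of the documented behaviour (strided downsampling, boolean-mask filtering, ValueError without confidence data):

```python
import numpy as np

def downsample_points(points: np.ndarray, factor: int) -> np.ndarray:
    # Keep every factor-th point, as documented (factor=2 keeps every 2nd).
    return points[::factor]

def filter_points(points: np.ndarray, confidence, min_confidence: float) -> np.ndarray:
    # Boolean-mask filtering; raising without confidence data mirrors the
    # documented ValueError.
    if confidence is None:
        raise ValueError("no confidence data available")
    return points[confidence >= min_confidence]

pts = np.arange(30, dtype=np.float32).reshape(10, 3)
conf = np.array([0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1.0])
print(downsample_points(pts, 2).shape)      # (5, 3)
print(filter_points(pts, conf, 0.5).shape)  # (6, 3)
```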

scanner_3d

Synchronous 3D scanner interface.

This module provides a synchronous wrapper around AsyncScanner3D, following the same pattern as the StereoCamera class.

Scanner3D
Scanner3D(
    async_scanner: Optional[AsyncScanner3D] = None,
    loop: Optional[AbstractEventLoop] = None,
    name: Optional[str] = None,
    **kwargs
)

Bases: Mindtrace

Synchronous wrapper around AsyncScanner3D.

All operations are executed on a background event loop. This provides a simple synchronous API for 3D scanner operations.
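A common way to implement this sync-over-async pattern is a dedicated event loop on a daemon thread, with each method submitting a coroutine via asyncio.run_coroutine_threadsafe and blocking on the result. The sketch below illustrates the technique with a stand-in coroutine; it is an assumption about the strategy, not Scanner3D's actual source:

```python
import asyncio
import threading

class SyncWrapper:
    """Sync facade over async work: an event loop runs on a background
    thread and each call blocks on a Future from run_coroutine_threadsafe."""

    def __init__(self):
        self._loop = asyncio.new_event_loop()
        self._thread = threading.Thread(target=self._loop.run_forever, daemon=True)
        self._thread.start()

    def _run(self, coro):
        # Submit the coroutine to the background loop and wait for it.
        return asyncio.run_coroutine_threadsafe(coro, self._loop).result()

    def capture(self) -> str:
        async def _capture():  # stand-in for AsyncScanner3D.capture()
            await asyncio.sleep(0.01)
            return "scan"
        return self._run(_capture())

    def close(self) -> None:
        self._loop.call_soon_threadsafe(self._loop.stop)
        self._thread.join()

w = SyncWrapper()
print(w.capture())  # the caller never needs await
w.close()
```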

Usage

scanner = Scanner3D()
result = scanner.capture()
print(result.range_shape)
scanner.close()

Or with context manager

with Scanner3D() as scanner:
    result = scanner.capture()
    print(result.range_shape)

Create a synchronous 3D scanner wrapper.

Parameters:

Name Type Description Default
async_scanner Optional[AsyncScanner3D]

Existing AsyncScanner3D instance

None
loop Optional[AbstractEventLoop]

Event loop to use for async operations

None
name Optional[str]

Scanner identifier, in the format "Photoneo:serial_number". If None, the first available scanner is opened.

None
**kwargs

Additional arguments passed to Mindtrace

{}

Examples:

>>> # Simple usage - opens first available
>>> scanner = Scanner3D()
>>> # Open specific scanner
>>> scanner = Scanner3D(name="Photoneo:ABC123")
>>> # Use existing async scanner
>>> async_scan = await AsyncScanner3D.open()
>>> sync_scan = Scanner3D(async_scanner=async_scan, loop=loop)
name property
name: str

Get scanner name.

Returns:

Type Description
str

Scanner name in format "Backend:serial_number"

is_open property
is_open: bool

Check if scanner is open.

Returns:

Type Description
bool

True if scanner is open, False otherwise

close
close() -> None

Close scanner and release resources.

Examples:

>>> scanner = Scanner3D()
>>> # ... use scanner ...
>>> scanner.close()
capture
capture(
    timeout_ms: int = 10000,
    enable_range: bool = True,
    enable_intensity: bool = True,
    enable_confidence: bool = False,
    enable_normal: bool = False,
    enable_color: bool = False,
) -> ScanResult

Capture multi-component 3D scan data.

Parameters:

Name Type Description Default
timeout_ms int

Capture timeout in milliseconds

10000
enable_range bool

Whether to capture range/depth data

True
enable_intensity bool

Whether to capture intensity data

True
enable_confidence bool

Whether to capture confidence data

False
enable_normal bool

Whether to capture surface normals

False
enable_color bool

Whether to capture color texture

False

Returns:

Type Description
ScanResult

ScanResult containing captured data

Raises:

Type Description
CameraConnectionError

If scanner not opened

CameraCaptureError

If capture fails

Examples:

>>> scanner = Scanner3D()
>>> result = scanner.capture()
>>> print(f"Range: {result.range_shape}")
>>> scanner.close()
capture_point_cloud
capture_point_cloud(
    include_colors: bool = True,
    include_confidence: bool = False,
    downsample_factor: int = 1,
    timeout_ms: int = 10000,
) -> PointCloudData

Capture and generate 3D point cloud.

Parameters:

Name Type Description Default
include_colors bool

Whether to include color information

True
include_confidence bool

Whether to include confidence values

False
downsample_factor int

Downsampling factor (1 = no downsampling)

1
timeout_ms int

Capture timeout in milliseconds

10000

Returns:

Type Description
PointCloudData

PointCloudData with 3D points and optional attributes

Raises:

Type Description
CameraConnectionError

If scanner not opened

CameraCaptureError

If capture fails

Examples:

>>> scanner = Scanner3D()
>>> point_cloud = scanner.capture_point_cloud()
>>> print(f"Points: {point_cloud.num_points}")
>>> point_cloud.save_ply("output.ply")
>>> scanner.close()
set_exposure_time
set_exposure_time(microseconds: float) -> None

Set exposure time in microseconds.

Parameters:

Name Type Description Default
microseconds float

Exposure time in microseconds (e.g., 5000 = 5ms)

required

Raises:

Type Description
CameraConfigurationError

If configuration fails

Examples:

>>> scanner = Scanner3D()
>>> scanner.set_exposure_time(5000)
>>> scanner.close()
get_exposure_time
get_exposure_time() -> float

Get current exposure time in microseconds.

Returns:

Type Description
float

Current exposure time in microseconds

Raises:

Type Description
CameraConnectionError

If scanner not opened

Examples:

>>> scanner = Scanner3D()
>>> exposure = scanner.get_exposure_time()
>>> print(f"Exposure: {exposure}μs")
>>> scanner.close()
set_trigger_mode
set_trigger_mode(mode: str) -> None

Set trigger mode.

Parameters:

Name Type Description Default
mode str

Trigger mode ("continuous" or "software")

required

Raises:

Type Description
CameraConfigurationError

If configuration fails

Examples:

>>> scanner = Scanner3D()
>>> scanner.set_trigger_mode("software")
>>> scanner.close()
get_trigger_mode
get_trigger_mode() -> str

Get current trigger mode.

Returns:

Type Description
str

"continuous" or "software"

Raises:

Type Description
CameraConnectionError

If scanner not opened

Examples:

>>> scanner = Scanner3D()
>>> mode = scanner.get_trigger_mode()
>>> print(f"Mode: {mode}")
>>> scanner.close()
setup

Setup utilities for 3D scanners.

PhotoneoSetup
PhotoneoSetup()

Bases: Mindtrace

Photoneo 3D scanner SDK setup and verification.

This class handles the installation of the Matrix Vision mvGenTL Producer required for Photoneo 3D scanner communication via GigE Vision.

Based on Photoneo's official recommendations: https://github.com/photoneo-3d/photoneo-python-examples

Initialize Photoneo setup.

get_cti_path
get_cti_path() -> str

Find the CTI file on this system.

Searches environment variable first, then platform-specific known paths.

Returns:

Type Description
str

Path to CTI file if found, empty string otherwise
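The env-var-first, known-paths-second search can be sketched as follows. The fallback directories below are illustrative examples, not the exact list the setup class uses; returning "" when nothing is found matches the documented behaviour:

```python
import os
from pathlib import Path

def find_cti(env_var: str = "GENICAM_GENTL64_PATH") -> str:
    # 1. Check the GenICam environment variable (may hold several dirs).
    env = os.environ.get(env_var, "")
    for directory in env.split(os.pathsep):
        if directory:
            for cti in Path(directory).glob("*.cti"):
                return str(cti)
    # 2. Fall back to platform-specific install locations (illustrative).
    known_dirs = [
        Path("/opt/mvIMPACT_Acquire/lib/x86_64"),
        Path("C:/Program Files/MATRIX VISION/mvIMPACT Acquire/bin/x64"),
    ]
    for directory in known_dirs:
        if directory.is_dir():
            for cti in directory.glob("*.cti"):
                return str(cti)
    return ""
```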

verify_cti_installation
verify_cti_installation() -> bool

Verify that the CTI file is properly installed.

Returns:

Type Description
bool

True if CTI file exists and is accessible

verify_env_variable
verify_env_variable() -> bool

Verify GENICAM_GENTL64_PATH is set correctly.

Returns:

Type Description
bool

True if environment variable is properly configured

verify_harvesters
verify_harvesters() -> bool

Verify that Harvesters library is available.

Returns:

Type Description
bool

True if Harvesters is importable

discover_devices
discover_devices() -> List[dict]

Discover Photoneo devices on the network.

Returns:

Type Description
List[dict]

List of discovered Photoneo devices with their info

install
install() -> bool

Install the Matrix Vision mvGenTL Producer.

Returns:

Type Description
bool

True if installation successful

uninstall
uninstall() -> bool

Uninstall the Matrix Vision mvGenTL Producer.

Returns:

Type Description
bool

True if uninstallation successful

verify
verify() -> bool

Verify complete Photoneo setup.

Returns:

Type Description
bool

True if all components are properly configured

install_photoneo_sdk
install_photoneo_sdk() -> bool

Install the SDK required for Photoneo scanners.

uninstall_photoneo_sdk
uninstall_photoneo_sdk() -> bool

Uninstall the Photoneo SDK.

verify_photoneo_sdk
verify_photoneo_sdk() -> bool

Verify Photoneo SDK installation.

setup_photoneo

Photoneo 3D Scanner SDK Setup Script

This script automates the setup of the Photoneo 3D scanner environment. Photoneo scanners use GigE Vision protocol and require the Matrix Vision mvGenTL Producer for communication via Harvesters.

Based on: https://github.com/photoneo-3d/photoneo-python-examples

Supports: Linux (x86_64, aarch64), Windows (x64), macOS (ARM64, x86_64)

Requirements:

- Matrix Vision mvGenTL Producer (version 2.49.0 recommended)
- Harvesters library: pip install harvesters
- PhoXi firmware version 1.13.0 or later

Usage

mindtrace-scanner-photoneo install    # Install Matrix Vision SDK
mindtrace-scanner-photoneo verify     # Verify installation
mindtrace-scanner-photoneo discover   # Test device discovery
mindtrace-scanner-photoneo uninstall  # Uninstall SDK


install
install(
    verbose: bool = typer.Option(
        False, "--verbose", "-v", help="Enable verbose logging"
    )
) -> None

Install the Matrix Vision mvGenTL Producer for Photoneo scanners.

Downloads and installs mvGenTL Producer v2.49.0 as recommended by Photoneo. See: https://github.com/photoneo-3d/photoneo-python-examples

uninstall
uninstall(
    verbose: bool = typer.Option(
        False, "--verbose", "-v", help="Enable verbose logging"
    )
) -> None

Uninstall the Matrix Vision mvGenTL Producer.

verify
verify(
    verbose: bool = typer.Option(
        False, "--verbose", "-v", help="Enable verbose logging"
    )
) -> None

Verify Photoneo SDK installation and configuration.

discover
discover(
    verbose: bool = typer.Option(
        False, "--verbose", "-v", help="Enable verbose logging"
    ),
    all_devices: bool = typer.Option(
        False,
        "--all",
        "-a",
        help="Show all GigE Vision devices, not just Photoneo",
    ),
) -> None

Discover Photoneo scanners on the network.

main
main() -> None

Main entry point for the script.

sensors

MindTrace Hardware Sensor System.

A unified sensor system that abstracts different communication backends (MQTT, HTTP, Serial, Modbus) behind a simple AsyncSensor interface.

SensorBackend

Bases: ABC

Abstract base class for all sensor backends.

This interface abstracts different communication patterns:

- MQTT: Push-based (subscribe to topics, cache messages)
- HTTP: Pull-based (make requests on-demand)
- Serial: Pull-based (send commands, read responses)
- Modbus: Pull-based (read registers)

connect abstractmethod async
connect() -> None

Establish connection to the backend.

Raises:

Type Description
ConnectionError

If connection fails

disconnect abstractmethod async
disconnect() -> None

Close connection to the backend.

Should be safe to call multiple times.

read_data abstractmethod async
read_data(address: str) -> Optional[Dict[str, Any]]

Read sensor data from the specified address.

For different backends, 'address' means:

- MQTT: topic name (returns cached message)
- HTTP: endpoint path (makes GET request)
- Serial: sensor command (send command, read response)
- Modbus: register address (read holding registers)

Parameters:

Name Type Description Default
address str

Backend-specific address/identifier

required

Returns:

Type Description
Optional[Dict[str, Any]]

Dictionary with sensor data, or None if no data available

Raises:

Type Description
ConnectionError

If backend not connected

TimeoutError

If read operation times out

ValueError

If address is invalid

is_connected abstractmethod
is_connected() -> bool

Check if backend is currently connected.

Returns:

Type Description
bool

True if connected, False otherwise
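Implementing this interface takes only a few lines. The in-memory mock below is an illustrative sketch in the spirit of the module's "mock backends for testing" feature, not the shipped implementation; the abstract base is restated locally so the example is self-contained:

```python
import asyncio
from abc import ABC, abstractmethod
from typing import Any, Dict, Optional

class SensorBackend(ABC):
    """Local stand-in mirroring the documented interface."""
    @abstractmethod
    async def connect(self) -> None: ...
    @abstractmethod
    async def disconnect(self) -> None: ...
    @abstractmethod
    async def read_data(self, address: str) -> Optional[Dict[str, Any]]: ...
    @abstractmethod
    def is_connected(self) -> bool: ...

class MockBackend(SensorBackend):
    """Serves canned readings from a dict keyed by address."""
    def __init__(self, readings: Dict[str, Dict[str, Any]]):
        self._readings = readings
        self._connected = False

    async def connect(self) -> None:
        self._connected = True

    async def disconnect(self) -> None:
        self._connected = False  # safe to call repeatedly

    async def read_data(self, address: str) -> Optional[Dict[str, Any]]:
        if not self._connected:
            raise ConnectionError("backend not connected")
        return self._readings.get(address)  # None when no data, as documented

    def is_connected(self) -> bool:
        return self._connected

async def demo() -> Optional[Dict[str, Any]]:
    backend = MockBackend({"sensors/temp": {"temperature": 23.5, "unit": "C"}})
    await backend.connect()
    try:
        return await backend.read_data("sensors/temp")
    finally:
        await backend.disconnect()

print(asyncio.run(demo()))
```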

HTTPSensorBackend
HTTPSensorBackend(
    base_url: str,
    auth_token: Optional[str] = None,
    timeout: float = 30.0,
    **kwargs
)

Bases: SensorBackend

HTTP backend for sensor communication (placeholder).

This backend will connect to REST APIs and make HTTP GET requests to read sensor data. It implements a pull-based pattern where we request data on-demand.

Future implementation will:

- Make HTTP GET requests to base_url + endpoint
- Handle authentication headers
- Parse JSON responses
- Implement timeout and retry logic

Initialize HTTP backend.

Parameters:

Name Type Description Default
base_url str

Base URL for HTTP requests (e.g., "http://api.sensors.com")

required
auth_token Optional[str]

Optional authentication token

None
timeout float

Request timeout in seconds

30.0
**kwargs

Additional HTTP client parameters

{}
connect async
connect() -> None

Establish HTTP client connection.

Raises:

Type Description
NotImplementedError

HTTP backend not yet implemented

disconnect async
disconnect() -> None

Close HTTP client connection.

read_data async
read_data(address: str) -> Optional[Dict[str, Any]]

Read sensor data via HTTP GET request.

Parameters:

Name Type Description Default
address str

Endpoint path (e.g., "/sensors/temperature/current")

required

Returns:

Type Description
Optional[Dict[str, Any]]

JSON response data, or None if request fails

Raises:

Type Description
NotImplementedError

HTTP backend not yet implemented

is_connected
is_connected() -> bool

Check if HTTP client is ready.

Returns:

Type Description
bool

Always False until implementation is complete

MQTTSensorBackend
MQTTSensorBackend(
    broker_url: str,
    identifier: Optional[str] = None,
    username: Optional[str] = None,
    password: Optional[str] = None,
    keepalive: int = 60,
    **kwargs
)

Bases: SensorBackend

MQTT backend for sensor communication.

This backend connects to an MQTT broker and subscribes to topics. Messages are cached when received, and read_data() returns the latest cached message.

This implements a push-based pattern where data comes to us, unlike HTTP/Serial which are pull-based where we request data on-demand.
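The core of that push-based pattern is a per-topic cache: the broker callback stores the latest payload, and reads are served from the cache rather than the network. A minimal sketch without any MQTT library (the callback signature here is illustrative):

```python
import json
from typing import Any, Dict, Optional

class TopicCache:
    """Latest-message cache keyed by topic, as the MQTT backend describes."""
    def __init__(self):
        self._latest: Dict[str, Dict[str, Any]] = {}

    def on_message(self, topic: str, payload: bytes) -> None:
        # Invoked whenever a subscribed topic delivers; overwrite older data.
        self._latest[topic] = json.loads(payload)

    def read(self, topic: str) -> Optional[Dict[str, Any]]:
        # Serve the cached message; None when nothing has arrived yet.
        return self._latest.get(topic)

cache = TopicCache()
cache.on_message("sensors/temp", b'{"temperature": 23.5}')
print(cache.read("sensors/temp"))   # {'temperature': 23.5}
print(cache.read("sensors/unseen"))  # None
```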

Initialize MQTT backend.

Parameters:

Name Type Description Default
broker_url str

MQTT broker URL (e.g., "mqtt://localhost:1883")

required
identifier Optional[str]

MQTT client identifier (auto-generated if None)

None
username Optional[str]

MQTT username (optional)

None
password Optional[str]

MQTT password (optional)

None
keepalive int

MQTT keepalive interval in seconds

60
**kwargs

Additional MQTT client parameters

{}
connect async
connect() -> None

Connect to MQTT broker.

Raises:

Type Description
ConnectionError

If connection to broker fails

disconnect async
disconnect() -> None

Disconnect from MQTT broker.

read_data async
read_data(address: str) -> Optional[Dict[str, Any]]

Read cached data from MQTT topic.

For MQTT, the address is the topic name. If we haven't subscribed to this topic yet, we'll subscribe and wait briefly for a message.

Parameters:

Name Type Description Default
address str

MQTT topic name

required

Returns:

Type Description
Optional[Dict[str, Any]]

Latest cached message for the topic, or None if no data available

Raises:

Type Description
ConnectionError

If not connected to broker

ValueError

If topic name is invalid

is_connected
is_connected() -> bool

Check if connected to MQTT broker.

Returns:

Type Description
bool

True if connected, False otherwise

SerialSensorBackend
SerialSensorBackend(
    port: str, baudrate: int = 9600, timeout: float = 5.0, **kwargs
)

Bases: SensorBackend

Serial backend for sensor communication (placeholder).

This backend will connect to sensors via serial/USB ports and send commands to read sensor data. It implements a pull-based pattern where we send commands and read responses on-demand.

Future implementation will:

- Connect to serial ports (e.g., /dev/ttyUSB0, COM3)
- Send sensor commands and read responses
- Parse sensor data (JSON, CSV, or custom formats)
- Handle timeouts and communication errors

Initialize Serial backend.

Parameters:

Name Type Description Default
port str

Serial port path (e.g., "/dev/ttyUSB0" or "COM3")

required
baudrate int

Serial communication baudrate

9600
timeout float

Communication timeout in seconds

5.0
**kwargs

Additional serial parameters (parity, stopbits, etc.)

{}
connect async
connect() -> None

Open serial port connection.

Raises:

Type Description
NotImplementedError

Serial backend not yet implemented

disconnect async
disconnect() -> None

Close serial port connection.

read_data async
read_data(address: str) -> Optional[Dict[str, Any]]

Send command to sensor and read response.

Parameters:

Name Type Description Default
address str

Sensor command (e.g., "READ_TEMP", "GET_HUMIDITY")

required

Returns:

Type Description
Optional[Dict[str, Any]]

Parsed sensor response data, or None if command fails

Raises:

Type Description
NotImplementedError

Serial backend not yet implemented

is_connected
is_connected() -> bool

Check if serial port is open.

Returns:

Type Description
bool

Always False until implementation is complete

SensorManager
SensorManager()

Simple manager for multiple sensors.

This manager provides basic functionality:

- Register sensors with different backends
- Remove sensors by ID
- Read from all sensors in parallel

The manager keeps sensors in a registry and delegates operations to them.

Initialize sensor manager.

sensor_count property
sensor_count: int

Get number of registered sensors.

register_sensor
register_sensor(
    sensor_id: str,
    backend_type: str,
    connection_params: Dict[str, Any],
    address: str,
) -> AsyncSensor

Register a new sensor with the manager.

Parameters:

Name Type Description Default
sensor_id str

Unique identifier for the sensor

required
backend_type str

Type of backend ("mqtt", "http", "serial")

required
connection_params Dict[str, Any]

Backend-specific connection parameters

required
address str

Backend-specific address (topic, endpoint, command)

required

Returns:

Type Description
AsyncSensor

The created AsyncSensor instance

Raises:

Type Description
ValueError

If sensor_id already exists or parameters are invalid

Examples:

Register MQTT sensor

sensor = manager.register_sensor(
    "temp001",
    "mqtt",
    {"broker_url": "mqtt://localhost:1883"},
    "sensors/temperature",
)

Register HTTP sensor

sensor = manager.register_sensor(
    "temp002",
    "http",
    {"base_url": "http://api.sensors.com"},
    "/sensors/temperature",
)

remove_sensor
remove_sensor(sensor_id: str) -> None

Remove a sensor from the manager.

Parameters:

Name Type Description Default
sensor_id str

ID of sensor to remove

required

Raises:

Type Description
ValueError

If sensor_id doesn't exist

get_sensor
get_sensor(sensor_id: str) -> Optional[AsyncSensor]

Get a sensor by ID.

Parameters:

Name Type Description Default
sensor_id str

ID of sensor to get

required

Returns:

Type Description
Optional[AsyncSensor]

AsyncSensor instance or None if not found

list_sensors
list_sensors() -> List[str]

Get list of all registered sensor IDs.

Returns:

Type Description
List[str]

List of sensor IDs

connect_all async
connect_all() -> Dict[str, bool]

Connect all registered sensors.

Returns:

Type Description
Dict[str, bool]

Dictionary mapping sensor IDs to connection success (True/False)

disconnect_all async
disconnect_all() -> None

Disconnect all registered sensors.

read_all async
read_all() -> Dict[str, Dict[str, Any]]

Read data from all registered sensors.

Returns:

Type Description
Dict[str, Dict[str, Any]]

Dictionary mapping sensor IDs to their data (or error info)

Examples:

{ "temp001": {"temperature": 23.5, "unit": "C"}, "temp002": {"error": "Not connected"}, "humid001": {"humidity": 65.2, "unit": "%"} }

AsyncSensor
AsyncSensor(sensor_id: str, backend: SensorBackend, address: str)

Unified async sensor interface.

This class provides a simple, consistent API for reading sensor data regardless of the underlying communication backend (MQTT, HTTP, Serial, etc.).

The sensor abstracts different communication patterns:

- MQTT: Push-based (messages are cached when received)
- HTTP: Pull-based (requests made on-demand)
- Serial: Pull-based (commands sent on-demand)

All backends are hidden behind the same connect/disconnect/read interface.

Initialize AsyncSensor with a backend.

Parameters:

Name Type Description Default
sensor_id str

Unique identifier for this sensor

required
backend SensorBackend

Backend implementation (MQTT, HTTP, Serial, etc.)

required
address str

Backend-specific address (topic, endpoint, command, etc.)

required

Raises:

Type Description
ValueError

If sensor_id or address is empty

TypeError

If backend is not a SensorBackend instance

sensor_id property
sensor_id: str

Get the sensor ID.

is_connected property
is_connected: bool

Check if sensor backend is connected.

Returns:

Type Description
bool

True if backend is connected, False otherwise

connect async
connect() -> None

Connect the sensor backend.

This establishes the connection to the underlying communication system (MQTT broker, HTTP server, serial port, etc.).

Raises:

Type Description
ConnectionError

If connection fails

disconnect async
disconnect() -> None

Disconnect the sensor backend.

This closes the connection to the underlying communication system. Safe to call multiple times.

read async
read() -> Optional[Dict[str, Any]]

Read sensor data.

This method abstracts different communication patterns:

- MQTT: Returns cached message from topic
- HTTP: Makes GET request to endpoint
- Serial: Sends command and reads response

Returns:

Type Description
Optional[Dict[str, Any]]

Dictionary with sensor data, or None if no data available

Raises:

Type Description
ConnectionError

If backend is not connected

TimeoutError

If read operation times out

ValueError

If address is invalid

SensorSimulator
SensorSimulator(
    simulator_id: str, backend: SensorSimulatorBackend, address: str
)

Unified sensor simulator interface.

This class provides a simple, consistent API for publishing sensor data regardless of the underlying communication backend (MQTT, HTTP, Serial, etc.).

The simulator abstracts different communication patterns:

- MQTT: Publish messages to topics
- HTTP: POST data to REST endpoints
- Serial: Send data/commands to serial devices

All backends are hidden behind the same connect/disconnect/publish interface, which makes the simulator well suited for integration testing and sensor data simulation.

Initialize SensorSimulator with a backend.

Parameters:

Name Type Description Default
simulator_id str

Unique identifier for this simulator

required
backend SensorSimulatorBackend

Backend implementation (MQTT, HTTP, Serial, etc.)

required
address str

Backend-specific address (topic, endpoint, command, etc.)

required

Raises:

Type Description
ValueError

If simulator_id or address is empty

TypeError

If backend is not a SensorSimulatorBackend instance

simulator_id property
simulator_id: str

Get the simulator ID.

is_connected property
is_connected: bool

Check if simulator backend is connected.

Returns:

Type Description
bool

True if backend is connected, False otherwise

connect async
connect() -> None

Connect the simulator backend.

This establishes the connection to the underlying communication system (MQTT broker, HTTP server, serial port, etc.).

Raises:

Type Description
ConnectionError

If connection fails

disconnect async
disconnect() -> None

Disconnect the simulator backend.

This closes the connection to the underlying communication system. Safe to call multiple times.

publish async
publish(data: Union[Dict[str, Any], Any]) -> None

Publish sensor data.

This method abstracts different communication patterns:

- MQTT: Publishes message to topic
- HTTP: Makes POST request to endpoint
- Serial: Sends data to serial port

Parameters:

Name Type Description Default
data Union[Dict[str, Any], Any]

Data to publish (dict, primitive, or complex object)

required

Raises:

Type Description
ConnectionError

If backend is not connected

TimeoutError

If publish operation times out

ValueError

If address is invalid or data cannot be serialized

SensorSimulatorBackend

Bases: ABC

Abstract base class for all sensor simulator backends.

This interface abstracts different communication patterns for publishing:

- MQTT: Publish messages to topics
- HTTP: POST data to endpoints
- Serial: Send data/commands to serial ports
- Modbus: Write to registers

connect abstractmethod async
connect() -> None

Establish connection to the backend.

Raises:

Type Description
ConnectionError

If connection fails

disconnect abstractmethod async
disconnect() -> None

Close connection to the backend.

Should be safe to call multiple times.

publish_data abstractmethod async
publish_data(address: str, data: Union[Dict[str, Any], Any]) -> None

Publish sensor data to the specified address.

For different backends, 'address' means:

- MQTT: topic name to publish to
- HTTP: endpoint path to POST to
- Serial: sensor command or data format
- Modbus: register address to write to

Parameters:

Name Type Description Default
address str

Backend-specific address/identifier

required
data Union[Dict[str, Any], Any]

Data to publish (dict, primitive, or complex object)

required

Raises:

Type Description
ConnectionError

If backend is not connected

TimeoutError

If publish operation times out

ValueError

If address is invalid or data cannot be serialized

is_connected abstractmethod
is_connected() -> bool

Check if backend is currently connected.

Returns:

Type Description
bool

True if connected, False otherwise

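To make the interface above concrete, here is a minimal, self-contained sketch of a custom simulator backend. It re-declares the ABC with the documented method names (a stand-in, since it does not import mindtrace.hardware) and implements a hypothetical InMemorySimulatorBackend that "publishes" into a local dict, JSON-encoding dicts and lists as the MQTT backend documentation describes:

```python
import asyncio
import json
from abc import ABC, abstractmethod
from typing import Any, Dict, Union

# Stand-in for the documented ABC (same method names and signatures).
class SensorSimulatorBackend(ABC):
    @abstractmethod
    async def connect(self) -> None: ...
    @abstractmethod
    async def disconnect(self) -> None: ...
    @abstractmethod
    async def publish_data(self, address: str, data: Union[Dict[str, Any], Any]) -> None: ...
    @abstractmethod
    def is_connected(self) -> bool: ...

class InMemorySimulatorBackend(SensorSimulatorBackend):
    """Hypothetical backend that 'publishes' into a local dict -- handy in tests."""

    def __init__(self) -> None:
        self._connected = False
        self.published: Dict[str, str] = {}

    async def connect(self) -> None:
        self._connected = True

    async def disconnect(self) -> None:
        self._connected = False  # safe to call multiple times

    async def publish_data(self, address: str, data: Union[Dict[str, Any], Any]) -> None:
        if not self._connected:
            raise ConnectionError("backend not connected")
        if not address:
            raise ValueError("address must be non-empty")
        # JSON-encode dicts/lists, mirroring the documented MQTT behaviour.
        payload = json.dumps(data) if isinstance(data, (dict, list)) else str(data)
        self.published[address] = payload

    def is_connected(self) -> bool:
        return self._connected

async def demo() -> Dict[str, str]:
    backend = InMemorySimulatorBackend()
    await backend.connect()
    await backend.publish_data("sensors/temp", {"temperature": 23.5})
    await backend.disconnect()
    return backend.published

print(asyncio.run(demo()))
```

A real backend would replace the dict with an MQTT publish, HTTP POST, or serial write, but the connect/publish/disconnect contract stays the same.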
HTTPSensorSimulator
HTTPSensorSimulator(
    base_url: str,
    auth_token: Optional[str] = None,
    timeout: float = 30.0,
    **kwargs
)

Bases: SensorSimulatorBackend

HTTP backend for sensor simulation (placeholder).

This backend will connect to REST APIs and make HTTP POST requests to publish sensor data. It implements a push-based pattern where we send data to endpoints.

Future implementation will:

- Make HTTP POST requests to base_url + endpoint
- Handle authentication headers
- Send JSON payloads
- Implement timeout and retry logic

Initialize HTTP simulator backend.

Parameters:

Name Type Description Default
base_url str

Base URL for HTTP requests (e.g., "http://api.sensors.com")

required
auth_token Optional[str]

Optional authentication token

None
timeout float

Request timeout in seconds

30.0
**kwargs

Additional HTTP client parameters

{}
connect async
connect() -> None

Establish HTTP client connection.

Raises:

Type Description
NotImplementedError

HTTP simulator not yet implemented

disconnect async
disconnect() -> None

Close HTTP client connection.

publish_data async
publish_data(address: str, data: Union[Dict[str, Any], Any]) -> None

Publish sensor data via HTTP POST request.

Parameters:

Name Type Description Default
address str

Endpoint path (e.g., "/sensors/temperature/data")

required
data Union[Dict[str, Any], Any]

Data to publish (will be JSON-encoded)

required

Raises:

Type Description
NotImplementedError

HTTP simulator not yet implemented

is_connected
is_connected() -> bool

Check if HTTP client is ready.

Returns:

Type Description
bool

Always False until implementation is complete

MQTTSensorSimulator
MQTTSensorSimulator(
    broker_url: str,
    identifier: Optional[str] = None,
    username: Optional[str] = None,
    password: Optional[str] = None,
    keepalive: int = 60,
    **kwargs
)

Bases: SensorSimulatorBackend

MQTT backend for sensor simulation.

This backend connects to an MQTT broker and publishes sensor data to topics. It's designed for testing and integration scenarios where you need to simulate sensor data streams that can be consumed by AsyncSensor instances.

Initialize MQTT simulator backend.

Parameters:

Name Type Description Default
broker_url str

MQTT broker URL (e.g., "mqtt://localhost:1883")

required
identifier Optional[str]

MQTT client identifier (auto-generated if None)

None
username Optional[str]

MQTT username (optional)

None
password Optional[str]

MQTT password (optional)

None
keepalive int

MQTT keepalive interval in seconds

60
**kwargs

Additional MQTT client parameters

{}
connect async
connect() -> None

Connect to MQTT broker.

Raises:

Type Description
ConnectionError

If connection to broker fails

disconnect async
disconnect() -> None

Disconnect from MQTT broker.

publish_data async
publish_data(address: str, data: Union[Dict[str, Any], Any]) -> None

Publish sensor data to MQTT topic.

Parameters:

Name Type Description Default
address str

MQTT topic name to publish to

required
data Union[Dict[str, Any], Any]

Data to publish (will be JSON-encoded if dict/list)

required

Raises:

Type Description
ConnectionError

If not connected to broker

ValueError

If topic name is invalid

TimeoutError

If publish operation times out

is_connected
is_connected() -> bool

Check if connected to MQTT broker.

Returns:

Type Description
bool

True if connected, False otherwise

SerialSensorSimulator
SerialSensorSimulator(
    port: str, baudrate: int = 9600, timeout: float = 5.0, **kwargs
)

Bases: SensorSimulatorBackend

Serial backend for sensor simulation (placeholder).

This backend will connect to serial/USB ports and send sensor data commands. It implements a push-based pattern where we send sensor data to simulate physical sensor devices.

Future implementation will:

- Connect to serial ports (e.g., /dev/ttyUSB0, COM3)
- Send sensor data in various formats (JSON, CSV, custom protocols)
- Simulate sensor response patterns and timing
- Handle communication protocols and handshaking

Initialize Serial simulator backend.

Parameters:

Name Type Description Default
port str

Serial port path (e.g., "/dev/ttyUSB0" or "COM3")

required
baudrate int

Serial communication baudrate

9600
timeout float

Communication timeout in seconds

5.0
**kwargs

Additional serial parameters (parity, stopbits, etc.)

{}
connect async
connect() -> None

Open serial port connection.

Raises:

Type Description
NotImplementedError

Serial simulator not yet implemented

disconnect async
disconnect() -> None

Close serial port connection.

publish_data async
publish_data(address: str, data: Union[Dict[str, Any], Any]) -> None

Send sensor data via serial port.

Parameters:

Name Type Description Default
address str

Command type or data format identifier (e.g., "TEMP_DATA", "JSON_FORMAT")

required
data Union[Dict[str, Any], Any]

Data to send (will be formatted according to address)

required

Raises:

Type Description
NotImplementedError

Serial simulator not yet implemented

is_connected
is_connected() -> bool

Check if serial port is open.

Returns:

Type Description
bool

Always False until implementation is complete

create_backend
create_backend(backend_type: str, **params) -> SensorBackend

Create a sensor backend of the specified type.

Parameters:

Name Type Description Default
backend_type str

Type of backend ("mqtt", "http", "serial")

required
**params

Backend-specific parameters

{}

Returns:

Type Description
SensorBackend

Instantiated backend

Raises:

Type Description
ValueError

If backend_type is unknown

TypeError

If required parameters are missing

Examples:

MQTT backend

mqtt_backend = create_backend("mqtt", broker_url="mqtt://localhost:1883")

HTTP backend

http_backend = create_backend("http", base_url="http://api.sensors.com")

Serial backend

serial_backend = create_backend("serial", port="/dev/ttyUSB0", baudrate=9600)

create_simulator_backend
create_simulator_backend(backend_type: str, **params) -> SensorSimulatorBackend

Create a sensor simulator backend of the specified type.

Parameters:

Name Type Description Default
backend_type str

Type of backend ("mqtt", "http", "serial")

required
**params

Backend-specific parameters

{}

Returns:

Type Description
SensorSimulatorBackend

Instantiated simulator backend

Raises:

Type Description
ValueError

If backend_type is unknown

TypeError

If required parameters are missing

Examples:

MQTT simulator backend

mqtt_sim = create_simulator_backend("mqtt", broker_url="mqtt://localhost:1883")

HTTP simulator backend

http_sim = create_simulator_backend("http", base_url="http://api.sensors.com")

Serial simulator backend

serial_sim = create_simulator_backend("serial", port="/dev/ttyUSB0", baudrate=9600)

backends

Sensor backends package.

base

Base sensor backend interface.

This module defines the abstract interface that all sensor backends must implement.

SensorBackend

Bases: ABC

Abstract base class for all sensor backends.

This interface abstracts different communication patterns:

- MQTT: Push-based (subscribe to topics, cache messages)
- HTTP: Pull-based (make requests on-demand)
- Serial: Pull-based (send commands, read responses)
- Modbus: Pull-based (read registers)

connect abstractmethod async
connect() -> None

Establish connection to the backend.

Raises:

Type Description
ConnectionError

If connection fails

disconnect abstractmethod async
disconnect() -> None

Close connection to the backend.

Should be safe to call multiple times.

read_data abstractmethod async
read_data(address: str) -> Optional[Dict[str, Any]]

Read sensor data from the specified address.

For different backends, 'address' means:

- MQTT: topic name (returns cached message)
- HTTP: endpoint path (makes GET request)
- Serial: sensor command (send command, read response)
- Modbus: register address (read holding registers)

Parameters:

Name Type Description Default
address str

Backend-specific address/identifier

required

Returns:

Type Description
Optional[Dict[str, Any]]

Dictionary with sensor data, or None if no data available

Raises:

Type Description
ConnectionError

If backend is not connected

TimeoutError

If read operation times out

ValueError

If address is invalid

is_connected abstractmethod
is_connected() -> bool

Check if backend is currently connected.

Returns:

Type Description
bool

True if connected, False otherwise

http

HTTP sensor backend implementation (placeholder).

This module will implement the SensorBackend interface for HTTP/REST API communication. Currently this is a placeholder that raises NotImplementedError.

HTTPSensorBackend
HTTPSensorBackend(
    base_url: str,
    auth_token: Optional[str] = None,
    timeout: float = 30.0,
    **kwargs
)

Bases: SensorBackend

HTTP backend for sensor communication (placeholder).

This backend will connect to REST APIs and make HTTP GET requests to read sensor data. It implements a pull-based pattern where we request data on-demand.

Future implementation will:

- Make HTTP GET requests to base_url + endpoint
- Handle authentication headers
- Parse JSON responses
- Implement timeout and retry logic

Initialize HTTP backend.

Parameters:

Name Type Description Default
base_url str

Base URL for HTTP requests (e.g., "http://api.sensors.com")

required
auth_token Optional[str]

Optional authentication token

None
timeout float

Request timeout in seconds

30.0
**kwargs

Additional HTTP client parameters

{}
connect async
connect() -> None

Establish HTTP client connection.

Raises:

Type Description
NotImplementedError

HTTP backend not yet implemented

disconnect async
disconnect() -> None

Close HTTP client connection.

read_data async
read_data(address: str) -> Optional[Dict[str, Any]]

Read sensor data via HTTP GET request.

Parameters:

Name Type Description Default
address str

Endpoint path (e.g., "/sensors/temperature/current")

required

Returns:

Type Description
Optional[Dict[str, Any]]

JSON response data, or None if request fails

Raises:

Type Description
NotImplementedError

HTTP backend not yet implemented

is_connected
is_connected() -> bool

Check if HTTP client is ready.

Returns:

Type Description
bool

Always False until implementation is complete

mqtt

MQTT sensor backend implementation.

This module implements the SensorBackend interface for MQTT communication. It uses a push-based model where messages are cached when received.

MQTTSensorBackend
MQTTSensorBackend(
    broker_url: str,
    identifier: Optional[str] = None,
    username: Optional[str] = None,
    password: Optional[str] = None,
    keepalive: int = 60,
    **kwargs
)

Bases: SensorBackend

MQTT backend for sensor communication.

This backend connects to an MQTT broker and subscribes to topics. Messages are cached when received, and read_data() returns the latest cached message.

This implements a push-based pattern where data comes to us, unlike the pull-based HTTP/Serial backends, which request data on demand.

Initialize MQTT backend.

Parameters:

Name Type Description Default
broker_url str

MQTT broker URL (e.g., "mqtt://localhost:1883")

required
identifier Optional[str]

MQTT client identifier (auto-generated if None)

None
username Optional[str]

MQTT username (optional)

None
password Optional[str]

MQTT password (optional)

None
keepalive int

MQTT keepalive interval in seconds

60
**kwargs

Additional MQTT client parameters

{}
connect async
connect() -> None

Connect to MQTT broker.

Raises:

Type Description
ConnectionError

If connection to broker fails

disconnect async
disconnect() -> None

Disconnect from MQTT broker.

read_data async
read_data(address: str) -> Optional[Dict[str, Any]]

Read cached data from MQTT topic.

For MQTT, the address is the topic name. If we haven't subscribed to this topic yet, we'll subscribe and wait briefly for a message.

Parameters:

Name Type Description Default
address str

MQTT topic name

required

Returns:

Type Description
Optional[Dict[str, Any]]

Latest cached message for the topic, or None if no data available

Raises:

Type Description
ConnectionError

If not connected to broker

ValueError

If topic name is invalid

is_connected
is_connected() -> bool

Check if connected to MQTT broker.

Returns:

Type Description
bool

True if connected, False otherwise

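The push-based caching that read_data() relies on can be sketched in a few lines. The TopicCache below is a hypothetical stand-in (not the actual implementation): an MQTT client callback stores the latest JSON payload per topic, and reads return the cached message or None:

```python
import asyncio
import json
from typing import Any, Dict, Optional

class TopicCache:
    """Sketch of the push-based MQTT pattern: incoming messages are
    cached per topic, and reads return the latest one."""

    def __init__(self) -> None:
        self._latest: Dict[str, Dict[str, Any]] = {}

    def on_message(self, topic: str, payload: bytes) -> None:
        # Called by the MQTT client callback when a message arrives.
        self._latest[topic] = json.loads(payload)

    async def read_data(self, address: str) -> Optional[Dict[str, Any]]:
        if not address:
            raise ValueError("topic name is invalid")
        # Latest cached message for the topic, or None if nothing arrived yet.
        return self._latest.get(address)

cache = TopicCache()
cache.on_message("sensors/temperature", b'{"temperature": 23.5, "unit": "C"}')
print(asyncio.run(cache.read_data("sensors/temperature")))
```

The real backend additionally subscribes on first read and waits briefly for a message, but the cache lookup is the core of the pattern.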
serial

Serial sensor backend implementation (placeholder).

This module will implement the SensorBackend interface for serial/USB communication. Currently this is a placeholder that raises NotImplementedError.

SerialSensorBackend
SerialSensorBackend(
    port: str, baudrate: int = 9600, timeout: float = 5.0, **kwargs
)

Bases: SensorBackend

Serial backend for sensor communication (placeholder).

This backend will connect to sensors via serial/USB ports and send commands to read sensor data. It implements a pull-based pattern where we send commands and read responses on-demand.

Future implementation will:

- Connect to serial ports (e.g., /dev/ttyUSB0, COM3)
- Send sensor commands and read responses
- Parse sensor data (JSON, CSV, or custom formats)
- Handle timeouts and communication errors

Initialize Serial backend.

Parameters:

Name Type Description Default
port str

Serial port path (e.g., "/dev/ttyUSB0" or "COM3")

required
baudrate int

Serial communication baudrate

9600
timeout float

Communication timeout in seconds

5.0
**kwargs

Additional serial parameters (parity, stopbits, etc.)

{}
connect async
connect() -> None

Open serial port connection.

Raises:

Type Description
NotImplementedError

Serial backend not yet implemented

disconnect async
disconnect() -> None

Close serial port connection.

read_data async
read_data(address: str) -> Optional[Dict[str, Any]]

Send command to sensor and read response.

Parameters:

Name Type Description Default
address str

Sensor command (e.g., "READ_TEMP", "GET_HUMIDITY")

required

Returns:

Type Description
Optional[Dict[str, Any]]

Parsed sensor response data, or None if command fails

Raises:

Type Description
NotImplementedError

Serial backend not yet implemented

is_connected
is_connected() -> bool

Check if serial port is open.

Returns:

Type Description
bool

Always False until implementation is complete

core

Sensor core package.

factory

Backend factory for creating sensor backends and simulators.

This module provides factory functions to create different types of sensor backends and simulator backends based on type strings and parameters.

create_backend
create_backend(backend_type: str, **params) -> SensorBackend

Create a sensor backend of the specified type.

Parameters:

Name Type Description Default
backend_type str

Type of backend ("mqtt", "http", "serial")

required
**params

Backend-specific parameters

{}

Returns:

Type Description
SensorBackend

Instantiated backend

Raises:

Type Description
ValueError

If backend_type is unknown

TypeError

If required parameters are missing

Examples:

MQTT backend

mqtt_backend = create_backend("mqtt", broker_url="mqtt://localhost:1883")

HTTP backend

http_backend = create_backend("http", base_url="http://api.sensors.com")

Serial backend

serial_backend = create_backend("serial", port="/dev/ttyUSB0", baudrate=9600)

register_backend
register_backend(backend_type: str, backend_class: type) -> None

Register a custom backend type.

Parameters:

Name Type Description Default
backend_type str

Name for the backend type

required
backend_class type

Backend class that implements SensorBackend

required

Raises:

Type Description
TypeError

If backend_class doesn't inherit from SensorBackend

get_available_backends
get_available_backends() -> Dict[str, type]

Get all available backend types.

Returns:

Type Description
Dict[str, type]

Dictionary mapping backend names to classes

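The factory trio above (create / register / list) follows a plain registry pattern. A self-contained sketch under assumed names (it does not import the real module, and MockBackend is purely illustrative):

```python
from typing import Dict

class SensorBackend:  # stand-in for the real ABC
    pass

class MockBackend(SensorBackend):
    def __init__(self, broker_url: str) -> None:
        self.broker_url = broker_url

_BACKENDS: Dict[str, type] = {}

def register_backend(backend_type: str, backend_class: type) -> None:
    if not issubclass(backend_class, SensorBackend):
        raise TypeError(f"{backend_class!r} must inherit from SensorBackend")
    _BACKENDS[backend_type] = backend_class

def get_available_backends() -> Dict[str, type]:
    return dict(_BACKENDS)  # copy, so callers cannot mutate the registry

def create_backend(backend_type: str, **params) -> SensorBackend:
    try:
        cls = _BACKENDS[backend_type]
    except KeyError:
        raise ValueError(f"unknown backend type: {backend_type!r}") from None
    # Missing required parameters surface naturally as a TypeError here.
    return cls(**params)

register_backend("mock", MockBackend)
backend = create_backend("mock", broker_url="mqtt://localhost:1883")
print(type(backend).__name__, backend.broker_url)
```

The simulator factory functions mirror this with SensorSimulatorBackend in place of SensorBackend.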
create_simulator_backend
create_simulator_backend(backend_type: str, **params) -> SensorSimulatorBackend

Create a sensor simulator backend of the specified type.

Parameters:

Name Type Description Default
backend_type str

Type of backend ("mqtt", "http", "serial")

required
**params

Backend-specific parameters

{}

Returns:

Type Description
SensorSimulatorBackend

Instantiated simulator backend

Raises:

Type Description
ValueError

If backend_type is unknown

TypeError

If required parameters are missing

Examples:

MQTT simulator backend

mqtt_sim = create_simulator_backend("mqtt", broker_url="mqtt://localhost:1883")

HTTP simulator backend

http_sim = create_simulator_backend("http", base_url="http://api.sensors.com")

Serial simulator backend

serial_sim = create_simulator_backend("serial", port="/dev/ttyUSB0", baudrate=9600)

register_simulator_backend
register_simulator_backend(backend_type: str, backend_class: type) -> None

Register a custom simulator backend type.

Parameters:

Name Type Description Default
backend_type str

Name for the backend type

required
backend_class type

Backend class that implements SensorSimulatorBackend

required

Raises:

Type Description
TypeError

If backend_class doesn't inherit from SensorSimulatorBackend

get_available_simulator_backends
get_available_simulator_backends() -> Dict[str, type]

Get all available simulator backend types.

Returns:

Type Description
Dict[str, type]

Dictionary mapping simulator backend names to classes

manager

Simple sensor manager implementation.

This module implements a minimal SensorManager that can register/remove sensors and perform bulk read operations across multiple sensors.

SensorManager
SensorManager()

Simple manager for multiple sensors.

This manager provides basic functionality:

- Register sensors with different backends
- Remove sensors by ID
- Read from all sensors in parallel

The manager keeps sensors in a registry and delegates operations to them.

Initialize sensor manager.

sensor_count property
sensor_count: int

Get number of registered sensors.

register_sensor
register_sensor(
    sensor_id: str,
    backend_type: str,
    connection_params: Dict[str, Any],
    address: str,
) -> AsyncSensor

Register a new sensor with the manager.

Parameters:

Name Type Description Default
sensor_id str

Unique identifier for the sensor

required
backend_type str

Type of backend ("mqtt", "http", "serial")

required
connection_params Dict[str, Any]

Backend-specific connection parameters

required
address str

Backend-specific address (topic, endpoint, command)

required

Returns:

Type Description
AsyncSensor

The created AsyncSensor instance

Raises:

Type Description
ValueError

If sensor_id already exists or parameters are invalid

Examples:

Register MQTT sensor

sensor = manager.register_sensor(
    "temp001",
    "mqtt",
    {"broker_url": "mqtt://localhost:1883"},
    "sensors/temperature",
)

Register HTTP sensor

sensor = manager.register_sensor(
    "temp002",
    "http",
    {"base_url": "http://api.sensors.com"},
    "/sensors/temperature",
)

remove_sensor
remove_sensor(sensor_id: str) -> None

Remove a sensor from the manager.

Parameters:

Name Type Description Default
sensor_id str

ID of sensor to remove

required

Raises:

Type Description
ValueError

If sensor_id doesn't exist

get_sensor
get_sensor(sensor_id: str) -> Optional[AsyncSensor]

Get a sensor by ID.

Parameters:

Name Type Description Default
sensor_id str

ID of sensor to get

required

Returns:

Type Description
Optional[AsyncSensor]

AsyncSensor instance or None if not found

list_sensors
list_sensors() -> List[str]

Get list of all registered sensor IDs.

Returns:

Type Description
List[str]

List of sensor IDs

connect_all async
connect_all() -> Dict[str, bool]

Connect all registered sensors.

Returns:

Type Description
Dict[str, bool]

Dictionary mapping sensor IDs to connection success (True/False)

disconnect_all async
disconnect_all() -> None

Disconnect all registered sensors.

read_all async
read_all() -> Dict[str, Dict[str, Any]]

Read data from all registered sensors.

Returns:

Type Description
Dict[str, Dict[str, Any]]

Dictionary mapping sensor IDs to their data (or error info)

Examples:

{
    "temp001": {"temperature": 23.5, "unit": "C"},
    "temp002": {"error": "Not connected"},
    "humid001": {"humidity": 65.2, "unit": "%"},
}

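The fan-out behind read_all() can be sketched with asyncio.gather: every sensor is read concurrently, and a failure becomes an {"error": ...} entry instead of aborting the whole batch. StubSensor is a hypothetical stand-in for AsyncSensor, used only to show the pattern:

```python
import asyncio
from typing import Any, Dict

class StubSensor:
    """Hypothetical stand-in for AsyncSensor."""

    def __init__(self, data: Dict[str, Any], connected: bool = True) -> None:
        self._data = data
        self._connected = connected

    async def read(self) -> Dict[str, Any]:
        if not self._connected:
            raise ConnectionError("Not connected")
        return self._data

async def read_all(sensors: Dict[str, StubSensor]) -> Dict[str, Dict[str, Any]]:
    async def safe_read(sensor: StubSensor) -> Dict[str, Any]:
        try:
            return await sensor.read()
        except Exception as exc:  # failures become {"error": ...} entries
            return {"error": str(exc)}

    ids = list(sensors)
    # gather() runs all reads concurrently and preserves order.
    results = await asyncio.gather(*(safe_read(sensors[i]) for i in ids))
    return dict(zip(ids, results))

sensors = {
    "temp001": StubSensor({"temperature": 23.5, "unit": "C"}),
    "temp002": StubSensor({}, connected=False),
}
print(asyncio.run(read_all(sensors)))
```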
sensor

Unified AsyncSensor class.

This module implements the main AsyncSensor class that provides a simple, unified interface for all sensor backends (MQTT, HTTP, Serial, etc.).

AsyncSensor
AsyncSensor(sensor_id: str, backend: SensorBackend, address: str)

Unified async sensor interface.

This class provides a simple, consistent API for reading sensor data regardless of the underlying communication backend (MQTT, HTTP, Serial, etc.).

The sensor abstracts different communication patterns:

- MQTT: Push-based (messages are cached when received)
- HTTP: Pull-based (requests made on-demand)
- Serial: Pull-based (commands sent on-demand)

All backends are hidden behind the same connect/disconnect/read interface.

Initialize AsyncSensor with a backend.

Parameters:

Name Type Description Default
sensor_id str

Unique identifier for this sensor

required
backend SensorBackend

Backend implementation (MQTT, HTTP, Serial, etc.)

required
address str

Backend-specific address (topic, endpoint, command, etc.)

required

Raises:

Type Description
ValueError

If sensor_id or address is empty

TypeError

If backend is not a SensorBackend instance

sensor_id property
sensor_id: str

Get the sensor ID.

is_connected property
is_connected: bool

Check if sensor backend is connected.

Returns:

Type Description
bool

True if backend is connected, False otherwise

connect async
connect() -> None

Connect the sensor backend.

This establishes the connection to the underlying communication system (MQTT broker, HTTP server, serial port, etc.).

Raises:

Type Description
ConnectionError

If connection fails

disconnect async
disconnect() -> None

Disconnect the sensor backend.

This closes the connection to the underlying communication system. Safe to call multiple times.

read async
read() -> Optional[Dict[str, Any]]

Read sensor data.

This method abstracts different communication patterns:

- MQTT: Returns cached message from topic
- HTTP: Makes GET request to endpoint
- Serial: Sends command and reads response

Returns:

Type Description
Optional[Dict[str, Any]]

Dictionary with sensor data, or None if no data available

Raises:

Type Description
ConnectionError

If backend is not connected

TimeoutError

If read operation times out

ValueError

If address is invalid

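Structurally, AsyncSensor is a thin wrapper: it validates its constructor arguments (the ValueError/TypeError cases documented above) and delegates connect/read to the backend with its bound address. A sketch with stand-in types, not the actual implementation:

```python
from typing import Any, Dict, Optional

class SensorBackend:  # stand-in for the real ABC
    async def connect(self) -> None: ...
    async def disconnect(self) -> None: ...
    async def read_data(self, address: str) -> Optional[Dict[str, Any]]: ...
    def is_connected(self) -> bool:
        return False

class AsyncSensor:
    """Sketch of the documented constructor checks and delegation."""

    def __init__(self, sensor_id: str, backend: SensorBackend, address: str) -> None:
        if not sensor_id or not address:
            raise ValueError("sensor_id and address must be non-empty")
        if not isinstance(backend, SensorBackend):
            raise TypeError("backend must be a SensorBackend instance")
        self._sensor_id = sensor_id
        self._backend = backend
        self._address = address

    @property
    def sensor_id(self) -> str:
        return self._sensor_id

    @property
    def is_connected(self) -> bool:
        return self._backend.is_connected()

    async def connect(self) -> None:
        await self._backend.connect()

    async def read(self) -> Optional[Dict[str, Any]]:
        # The backend decides what 'address' means (topic, endpoint, command).
        return await self._backend.read_data(self._address)

sensor = AsyncSensor("temp001", SensorBackend(), "sensors/temperature")
print(sensor.sensor_id, sensor.is_connected)
```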
simulator

SensorSimulator class for publishing sensor data.

This module implements the main SensorSimulator class that provides a simple, unified interface for publishing sensor data to all simulator backends (MQTT, HTTP, Serial, etc.).

SensorSimulator
SensorSimulator(
    simulator_id: str, backend: SensorSimulatorBackend, address: str
)

Unified sensor simulator interface.

This class provides a simple, consistent API for publishing sensor data regardless of the underlying communication backend (MQTT, HTTP, Serial, etc.).

The simulator abstracts different communication patterns:

- MQTT: Publish messages to topics
- HTTP: POST data to REST endpoints
- Serial: Send data/commands to serial devices

All backends are hidden behind the same connect/disconnect/publish interface, which makes the simulator well suited for integration testing and sensor data simulation.

Initialize SensorSimulator with a backend.

Parameters:

Name Type Description Default
simulator_id str

Unique identifier for this simulator

required
backend SensorSimulatorBackend

Backend implementation (MQTT, HTTP, Serial, etc.)

required
address str

Backend-specific address (topic, endpoint, command, etc.)

required

Raises:

Type Description
ValueError

If simulator_id or address is empty

TypeError

If backend is not a SensorSimulatorBackend instance

simulator_id property
simulator_id: str

Get the simulator ID.

is_connected property
is_connected: bool

Check if simulator backend is connected.

Returns:

Type Description
bool

True if backend is connected, False otherwise

connect async
connect() -> None

Connect the simulator backend.

This establishes the connection to the underlying communication system (MQTT broker, HTTP server, serial port, etc.).

Raises:

Type Description
ConnectionError

If connection fails

disconnect async
disconnect() -> None

Disconnect the simulator backend.

This closes the connection to the underlying communication system. Safe to call multiple times.

publish async
publish(data: Union[Dict[str, Any], Any]) -> None

Publish sensor data.

This method abstracts different communication patterns:

- MQTT: Publishes message to topic
- HTTP: Makes POST request to endpoint
- Serial: Sends data to serial port

Parameters:

Name Type Description Default
data Union[Dict[str, Any], Any]

Data to publish (dict, primitive, or complex object)

required

Raises:

Type Description
ConnectionError

If backend is not connected

TimeoutError

If publish operation times out

ValueError

If address is invalid or data cannot be serialized

simulators

Sensor simulators for testing and integration purposes.

This module provides SensorSimulator implementations that can publish data to various backends (MQTT, HTTP, Serial) for testing AsyncSensor functionality.

SensorSimulatorBackend

Bases: ABC

Abstract base class for all sensor simulator backends.

This interface abstracts different communication patterns for publishing:

- MQTT: Publish messages to topics
- HTTP: POST data to endpoints
- Serial: Send data/commands to serial ports
- Modbus: Write to registers

connect abstractmethod async
connect() -> None

Establish connection to the backend.

Raises:

Type Description
ConnectionError

If connection fails

disconnect abstractmethod async
disconnect() -> None

Close connection to the backend.

Should be safe to call multiple times.

publish_data abstractmethod async
publish_data(address: str, data: Union[Dict[str, Any], Any]) -> None

Publish sensor data to the specified address.

For different backends, 'address' means:

- MQTT: topic name to publish to
- HTTP: endpoint path to POST to
- Serial: sensor command or data format
- Modbus: register address to write to

Parameters:

Name Type Description Default
address str

Backend-specific address/identifier

required
data Union[Dict[str, Any], Any]

Data to publish (dict, primitive, or complex object)

required

Raises:

Type Description
ConnectionError

If backend is not connected

TimeoutError

If publish operation times out

ValueError

If address is invalid or data cannot be serialized

is_connected abstractmethod
is_connected() -> bool

Check if backend is currently connected.

Returns:

Type Description
bool

True if connected, False otherwise

HTTPSensorSimulator
HTTPSensorSimulator(
    base_url: str,
    auth_token: Optional[str] = None,
    timeout: float = 30.0,
    **kwargs
)

Bases: SensorSimulatorBackend

HTTP backend for sensor simulation (placeholder).

This backend will connect to REST APIs and make HTTP POST requests to publish sensor data. It implements a push-based pattern where we send data to endpoints.

Future implementation will:

- Make HTTP POST requests to base_url + endpoint
- Handle authentication headers
- Send JSON payloads
- Implement timeout and retry logic

Initialize HTTP simulator backend.

Parameters:

Name Type Description Default
base_url str

Base URL for HTTP requests (e.g., "http://api.sensors.com")

required
auth_token Optional[str]

Optional authentication token

None
timeout float

Request timeout in seconds

30.0
**kwargs

Additional HTTP client parameters

{}
connect async
connect() -> None

Establish HTTP client connection.

Raises:

Type Description
NotImplementedError

HTTP simulator not yet implemented

disconnect async
disconnect() -> None

Close HTTP client connection.

publish_data async
publish_data(address: str, data: Union[Dict[str, Any], Any]) -> None

Publish sensor data via HTTP POST request.

Parameters:

Name Type Description Default
address str

Endpoint path (e.g., "/sensors/temperature/data")

required
data Union[Dict[str, Any], Any]

Data to publish (will be JSON-encoded)

required

Raises:

Type Description
NotImplementedError

HTTP simulator not yet implemented

is_connected
is_connected() -> bool

Check if HTTP client is ready.

Returns:

Type Description
bool

Always False until implementation is complete

MQTTSensorSimulator
MQTTSensorSimulator(
    broker_url: str,
    identifier: Optional[str] = None,
    username: Optional[str] = None,
    password: Optional[str] = None,
    keepalive: int = 60,
    **kwargs
)

Bases: SensorSimulatorBackend

MQTT backend for sensor simulation.

This backend connects to an MQTT broker and publishes sensor data to topics. It's designed for testing and integration scenarios where you need to simulate sensor data streams that can be consumed by AsyncSensor instances.

Initialize MQTT simulator backend.

Parameters:

Name Type Description Default
broker_url str

MQTT broker URL (e.g., "mqtt://localhost:1883")

required
identifier Optional[str]

MQTT client identifier (auto-generated if None)

None
username Optional[str]

MQTT username (optional)

None
password Optional[str]

MQTT password (optional)

None
keepalive int

MQTT keepalive interval in seconds

60
**kwargs

Additional MQTT client parameters

{}
connect async
connect() -> None

Connect to MQTT broker.

Raises:

Type Description
ConnectionError

If connection to broker fails

disconnect async
disconnect() -> None

Disconnect from MQTT broker.

publish_data async
publish_data(address: str, data: Union[Dict[str, Any], Any]) -> None

Publish sensor data to MQTT topic.

Parameters:

Name Type Description Default
address str

MQTT topic name to publish to

required
data Union[Dict[str, Any], Any]

Data to publish (will be JSON-encoded if dict/list)

required

Raises:

Type Description
ConnectionError

If not connected to broker

ValueError

If topic name is invalid

TimeoutError

If publish operation times out

is_connected
is_connected() -> bool

Check if connected to MQTT broker.

Returns:

Type Description
bool

True if connected, False otherwise

SerialSensorSimulator
SerialSensorSimulator(
    port: str, baudrate: int = 9600, timeout: float = 5.0, **kwargs
)

Bases: SensorSimulatorBackend

Serial backend for sensor simulation (placeholder).

This backend will connect to serial/USB ports and send sensor data commands. It implements a push-based pattern where we send sensor data to simulate physical sensor devices.

Future implementation will:

- Connect to serial ports (e.g., /dev/ttyUSB0, COM3)
- Send sensor data in various formats (JSON, CSV, custom protocols)
- Simulate sensor response patterns and timing
- Handle communication protocols and handshaking

Initialize Serial simulator backend.

Parameters:

Name Type Description Default
port str

Serial port path (e.g., "/dev/ttyUSB0" or "COM3")

required
baudrate int

Serial communication baudrate

9600
timeout float

Communication timeout in seconds

5.0
**kwargs

Additional serial parameters (parity, stopbits, etc.)

{}
connect async
connect() -> None

Open serial port connection.

Raises:

Type Description
NotImplementedError

Serial simulator not yet implemented

disconnect async
disconnect() -> None

Close serial port connection.

publish_data async
publish_data(address: str, data: Union[Dict[str, Any], Any]) -> None

Send sensor data via serial port.

Parameters:

Name Type Description Default
address str

Command type or data format identifier (e.g., "TEMP_DATA", "JSON_FORMAT")

required
data Union[Dict[str, Any], Any]

Data to send (will be formatted according to address)

required

Raises:

Type Description
NotImplementedError

Serial simulator not yet implemented

is_connected
is_connected() -> bool

Check if serial port is open.

Returns:

Type Description
bool

Always False until implementation is complete

base

Base sensor simulator backend interface.

This module defines the abstract interface that all sensor simulator backends must implement.

SensorSimulatorBackend

Bases: ABC

Abstract base class for all sensor simulator backends.

This interface abstracts different communication patterns for publishing:

- MQTT: Publish messages to topics
- HTTP: POST data to endpoints
- Serial: Send data/commands to serial ports
- Modbus: Write to registers

connect abstractmethod async
connect() -> None

Establish connection to the backend.

Raises:

Type Description
ConnectionError

If connection fails

disconnect abstractmethod async
disconnect() -> None

Close connection to the backend.

Should be safe to call multiple times.

publish_data abstractmethod async
publish_data(address: str, data: Union[Dict[str, Any], Any]) -> None

Publish sensor data to the specified address.

For different backends, 'address' means:

- MQTT: topic name to publish to
- HTTP: endpoint path to POST to
- Serial: sensor command or data format
- Modbus: register address to write to

Parameters:

Name Type Description Default
address str

Backend-specific address/identifier

required
data Union[Dict[str, Any], Any]

Data to publish (dict, primitive, or complex object)

required

Raises:

Type Description
ConnectionError

If backend not connected

TimeoutError

If publish operation times out

ValueError

If address is invalid or data cannot be serialized

is_connected abstractmethod
is_connected() -> bool

Check if backend is currently connected.

Returns:

Type Description
bool

True if connected, False otherwise
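The abstract interface above is straightforward to implement as a test double. A minimal sketch that restates the contract locally (the real `SensorSimulatorBackend` lives in the package; the `InMemorySimulator` is hypothetical) and records published messages instead of sending them:

```python
import asyncio
from abc import ABC, abstractmethod
from typing import Any, Dict, List, Tuple, Union


# Local restatement of the documented contract, for a self-contained example.
class SensorSimulatorBackend(ABC):
    @abstractmethod
    async def connect(self) -> None: ...

    @abstractmethod
    async def disconnect(self) -> None: ...

    @abstractmethod
    async def publish_data(self, address: str, data: Union[Dict[str, Any], Any]) -> None: ...

    @abstractmethod
    def is_connected(self) -> bool: ...


class InMemorySimulator(SensorSimulatorBackend):
    """Test double that records published messages instead of sending them."""

    def __init__(self) -> None:
        self._connected = False
        self.messages: List[Tuple[str, Any]] = []

    async def connect(self) -> None:
        self._connected = True

    async def disconnect(self) -> None:
        self._connected = False  # safe to call multiple times, per the contract

    async def publish_data(self, address: str, data: Union[Dict[str, Any], Any]) -> None:
        if not self._connected:
            raise ConnectionError("backend not connected")
        self.messages.append((address, data))

    def is_connected(self) -> bool:
        return self._connected


async def demo() -> List[Tuple[str, Any]]:
    sim = InMemorySimulator()
    await sim.connect()
    await sim.publish_data("sensors/temp", {"celsius": 20.0})
    await sim.disconnect()
    return sim.messages


messages = asyncio.run(demo())
```

Raising ConnectionError from `publish_data` when disconnected matches the error contract documented above.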

http

HTTP sensor simulator backend implementation (placeholder).

This module will implement the SensorSimulatorBackend interface for HTTP/REST API communication. Currently this is a placeholder that raises NotImplementedError.

HTTPSensorSimulator
HTTPSensorSimulator(
    base_url: str,
    auth_token: Optional[str] = None,
    timeout: float = 30.0,
    **kwargs
)

Bases: SensorSimulatorBackend

HTTP backend for sensor simulation (placeholder).

This backend will connect to REST APIs and make HTTP POST requests to publish sensor data. It implements a push-based pattern in which data is sent to endpoints.

Future implementation will:

- Make HTTP POST requests to base_url + endpoint
- Handle authentication headers
- Send JSON payloads
- Implement timeout and retry logic

Initialize HTTP simulator backend.

Parameters:

Name Type Description Default
base_url str

Base URL for HTTP requests (e.g., "http://api.sensors.com")

required
auth_token Optional[str]

Optional authentication token

None
timeout float

Request timeout in seconds

30.0
**kwargs

Additional HTTP client parameters

{}
connect async
connect() -> None

Establish HTTP client connection.

Raises:

Type Description
NotImplementedError

HTTP simulator not yet implemented

disconnect async
disconnect() -> None

Close HTTP client connection.

publish_data async
publish_data(address: str, data: Union[Dict[str, Any], Any]) -> None

Publish sensor data via HTTP POST request.

Parameters:

Name Type Description Default
address str

Endpoint path (e.g., "/sensors/temperature/data")

required
data Union[Dict[str, Any], Any]

Data to publish (will be JSON-encoded)

required

Raises:

Type Description
NotImplementedError

HTTP simulator not yet implemented

is_connected
is_connected() -> bool

Check if HTTP client is ready.

Returns:

Type Description
bool

Always False until implementation is complete

mqtt

MQTT sensor simulator backend implementation.

This module implements the SensorSimulatorBackend interface for MQTT communication. It publishes sensor data to MQTT topics for testing and integration purposes.

MQTTSensorSimulator
MQTTSensorSimulator(
    broker_url: str,
    identifier: Optional[str] = None,
    username: Optional[str] = None,
    password: Optional[str] = None,
    keepalive: int = 60,
    **kwargs
)

Bases: SensorSimulatorBackend

MQTT backend for sensor simulation.

This backend connects to an MQTT broker and publishes sensor data to topics. It's designed for testing and integration scenarios where you need to simulate sensor data streams that can be consumed by AsyncSensor instances.

Initialize MQTT simulator backend.

Parameters:

Name Type Description Default
broker_url str

MQTT broker URL (e.g., "mqtt://localhost:1883")

required
identifier Optional[str]

MQTT client identifier (auto-generated if None)

None
username Optional[str]

MQTT username (optional)

None
password Optional[str]

MQTT password (optional)

None
keepalive int

MQTT keepalive interval in seconds

60
**kwargs

Additional MQTT client parameters

{}
connect async
connect() -> None

Connect to MQTT broker.

Raises:

Type Description
ConnectionError

If connection to broker fails

disconnect async
disconnect() -> None

Disconnect from MQTT broker.

publish_data async
publish_data(address: str, data: Union[Dict[str, Any], Any]) -> None

Publish sensor data to MQTT topic.

Parameters:

Name Type Description Default
address str

MQTT topic name to publish to

required
data Union[Dict[str, Any], Any]

Data to publish (will be JSON-encoded if dict/list)

required

Raises:

Type Description
ConnectionError

If not connected to broker

ValueError

If topic name is invalid

TimeoutError

If publish operation times out

is_connected
is_connected() -> bool

Check if connected to MQTT broker.

Returns:

Type Description
bool

True if connected, False otherwise

serial

Serial sensor simulator backend implementation (placeholder).

This module will implement the SensorSimulatorBackend interface for serial/USB communication. Currently this is a placeholder that raises NotImplementedError.

SerialSensorSimulator
SerialSensorSimulator(
    port: str, baudrate: int = 9600, timeout: float = 5.0, **kwargs
)

Bases: SensorSimulatorBackend

Serial backend for sensor simulation (placeholder).

This backend will connect to serial/USB ports and send sensor data commands. It implements a push-based pattern in which sensor data is sent to simulate physical sensor devices.

Future implementation will:

- Connect to serial ports (e.g., /dev/ttyUSB0, COM3)
- Send sensor data in various formats (JSON, CSV, custom protocols)
- Simulate sensor response patterns and timing
- Handle communication protocols and handshaking

Initialize Serial simulator backend.

Parameters:

Name Type Description Default
port str

Serial port path (e.g., "/dev/ttyUSB0" or "COM3")

required
baudrate int

Serial communication baudrate

9600
timeout float

Communication timeout in seconds

5.0
**kwargs

Additional serial parameters (parity, stopbits, etc.)

{}
connect async
connect() -> None

Open serial port connection.

Raises:

Type Description
NotImplementedError

Serial simulator not yet implemented

disconnect async
disconnect() -> None

Close serial port connection.

publish_data async
publish_data(address: str, data: Union[Dict[str, Any], Any]) -> None

Send sensor data via serial port.

Parameters:

Name Type Description Default
address str

Command type or data format identifier (e.g., "TEMP_DATA", "JSON_FORMAT")

required
data Union[Dict[str, Any], Any]

Data to send (will be formatted according to address)

required

Raises:

Type Description
NotImplementedError

Serial simulator not yet implemented

is_connected
is_connected() -> bool

Check if serial port is open.

Returns:

Type Description
bool

Always False until implementation is complete

services

Hardware API modules - lazy imports for independent service operation.

cameras

CameraManagerService - Service-based camera management API.

CameraManagerConnectionManager
CameraManagerConnectionManager(
    url: Url | None = None,
    server_id: UUID | None = None,
    server_pid_file: str | None = None,
)

Bases: ConnectionManager

Connection Manager for CameraManagerService.

Provides strongly-typed methods for all camera management operations, making it easy to use the service programmatically from other applications.

get async
get(endpoint: str, http_timeout: float = 60.0) -> Dict[str, Any]

Make GET request to service endpoint.

post async
post(
    endpoint: str, data: Dict[str, Any] = None, http_timeout: float = 60.0
) -> Dict[str, Any]

Make POST request to service endpoint.

discover_backends async
discover_backends() -> List[str]

Discover available camera backends.

Returns:

Type Description
List[str]

List of available backend names

get_backend_info async
get_backend_info() -> Dict[str, Any]

Get detailed information about all backends.

Returns:

Type Description
Dict[str, Any]

Dictionary mapping backend names to their information

discover_cameras async
discover_cameras(backend: Optional[str] = None) -> List[str]

Discover available cameras from all or specific backends.

Parameters:

Name Type Description Default
backend Optional[str]

Optional backend name to filter by

None

Returns:

Type Description
List[str]

List of camera names in format 'Backend:device_name'
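Since discovery returns names in the 'Backend:device_name' format, a small helper for splitting and grouping them is often handy. This sketch is not part of the API; `split_camera_name` and `group_by_backend` are hypothetical names:

```python
from collections import defaultdict
from typing import Dict, List, Tuple


def split_camera_name(name: str) -> Tuple[str, str]:
    """Split 'Backend:device_name' into (backend, device), tolerating ':' in device names."""
    backend, _, device = name.partition(":")
    if not device:
        raise ValueError(f"expected 'Backend:device_name', got {name!r}")
    return backend, device


def group_by_backend(names: List[str]) -> Dict[str, List[str]]:
    """Group discovered camera names by their backend prefix."""
    grouped: Dict[str, List[str]] = defaultdict(list)
    for name in names:
        backend, device = split_camera_name(name)
        grouped[backend].append(device)
    return dict(grouped)
```

For example, `group_by_backend(["Basler:cam0", "OpenCV:0"])` yields `{"Basler": ["cam0"], "OpenCV": ["0"]}`.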

open_camera async
open_camera(camera: str, test_connection: bool = True) -> bool

Open a single camera.

Parameters:

Name Type Description Default
camera str

Camera name in format 'Backend:device_name'

required
test_connection bool

Test connection after opening

True

Returns:

Type Description
bool

True if successful

open_cameras_batch async
open_cameras_batch(
    cameras: List[str], test_connection: bool = True
) -> Dict[str, Any]

Open multiple cameras in batch.

Parameters:

Name Type Description Default
cameras List[str]

List of camera names

required
test_connection bool

Test connection after opening

True

Returns:

Type Description
Dict[str, Any]

Batch operation results

close_camera async
close_camera(camera: str) -> bool

Close a specific camera.

Parameters:

Name Type Description Default
camera str

Camera name to close

required

Returns:

Type Description
bool

True if successful

close_cameras_batch async
close_cameras_batch(cameras: List[str]) -> Dict[str, Any]

Close multiple cameras in batch.

Parameters:

Name Type Description Default
cameras List[str]

List of camera names to close

required

Returns:

Type Description
Dict[str, Any]

Batch operation results

close_all_cameras async
close_all_cameras() -> bool

Close all active cameras.

Returns:

Type Description
bool

True if successful

get_active_cameras async
get_active_cameras() -> List[str]

Get list of currently active cameras.

Returns:

Type Description
List[str]

List of active camera names

get_camera_status async
get_camera_status(camera: str) -> Dict[str, Any]

Get camera status information.

Parameters:

Name Type Description Default
camera str

Camera name to query

required

Returns:

Type Description
Dict[str, Any]

Camera status information

get_camera_info async
get_camera_info(camera: str) -> Dict[str, Any]

Get detailed camera information.

Parameters:

Name Type Description Default
camera str

Camera name to query

required

Returns:

Type Description
Dict[str, Any]

Camera information

get_camera_capabilities async
get_camera_capabilities(camera: str) -> Dict[str, Any]

Get camera capabilities information.

Parameters:

Name Type Description Default
camera str

Camera name to query

required

Returns:

Type Description
Dict[str, Any]

Camera capabilities

get_system_diagnostics async
get_system_diagnostics() -> Dict[str, Any]

Get system diagnostics information.

Returns:

Type Description
Dict[str, Any]

System diagnostics data

configure_camera async
configure_camera(camera: str, properties: Dict[str, Any]) -> bool

Configure camera parameters.

Parameters:

Name Type Description Default
camera str

Camera name to configure

required
properties Dict[str, Any]

Configuration properties

required

Returns:

Type Description
bool

True if successful

configure_cameras_batch async
configure_cameras_batch(
    configurations: Dict[str, Dict[str, Any]],
) -> Dict[str, Any]

Configure multiple cameras in batch.

Parameters:

Name Type Description Default
configurations Dict[str, Dict[str, Any]]

Dictionary mapping camera names to their configurations

required

Returns:

Type Description
Dict[str, Any]

Batch operation results

get_camera_configuration async
get_camera_configuration(camera: str) -> Dict[str, Any]

Get current camera configuration.

Parameters:

Name Type Description Default
camera str

Camera name to query

required

Returns:

Type Description
Dict[str, Any]

Current camera configuration

import_camera_config async
import_camera_config(camera: str, config_path: str) -> Dict[str, Any]

Import camera configuration from file.

Parameters:

Name Type Description Default
camera str

Camera name

required
config_path str

Path to configuration file

required

Returns:

Type Description
Dict[str, Any]

Import operation result

export_camera_config async
export_camera_config(camera: str, config_path: str) -> Dict[str, Any]

Export camera configuration to file.

Parameters:

Name Type Description Default
camera str

Camera name

required
config_path str

Path to save configuration file

required

Returns:

Type Description
Dict[str, Any]

Export operation result

capture_image async
capture_image(
    camera: str, save_path: Optional[str] = None, output_format: str = "pil"
) -> Dict[str, Any]

Capture a single image.

Parameters:

Name Type Description Default
camera str

Camera name

required
save_path Optional[str]

Optional path to save image

None
output_format str

Output format for returned image ("numpy" or "pil")

'pil'

Returns:

Type Description
Dict[str, Any]

Capture result

capture_images_batch async
capture_images_batch(
    cameras: List[str], output_format: str = "pil"
) -> Dict[str, Any]

Capture images from multiple cameras.

Parameters:

Name Type Description Default
cameras List[str]

List of camera names

required
output_format str

Output format for returned images ("numpy" or "pil")

'pil'

Returns:

Type Description
Dict[str, Any]

Batch capture results

capture_hdr_image async
capture_hdr_image(
    camera: str,
    save_path_pattern: Optional[str] = None,
    exposure_levels: int = 3,
    exposure_multiplier: float = 2.0,
    return_images: bool = True,
    output_format: str = "pil",
) -> Dict[str, Any]

Capture HDR image sequence.

Parameters:

Name Type Description Default
camera str

Camera name

required
save_path_pattern Optional[str]

Path pattern with {exposure} placeholder

None
exposure_levels int

Number of exposure levels

3
exposure_multiplier float

Multiplier between exposures

2.0
return_images bool

Return captured images

True
output_format str

Output format for returned images ("numpy" or "pil")

'pil'

Returns:

Type Description
Dict[str, Any]

HDR capture result

capture_hdr_images_batch async
capture_hdr_images_batch(
    cameras: List[str],
    save_path_pattern: Optional[str] = None,
    exposure_levels: int = 3,
    exposure_multiplier: float = 2.0,
    return_images: bool = True,
    output_format: str = "pil",
) -> Dict[str, Any]

Capture HDR images from multiple cameras.

Parameters:

Name Type Description Default
cameras List[str]

List of camera names

required
save_path_pattern Optional[str]

Path pattern with {exposure} placeholder

None
exposure_levels int

Number of exposure levels

3
exposure_multiplier float

Multiplier between exposures

2.0
return_images bool

Return captured images

True
output_format str

Output format for returned images ("numpy" or "pil")

'pil'

Returns:

Type Description
Dict[str, Any]

Batch HDR capture results
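The HDR endpoints above are driven by `exposure_levels` and `exposure_multiplier`. Assuming the levels form a geometric ladder from a base exposure (the parameter names suggest this, but the docs do not state the exact scheme), the exposure sequence and the `{exposure}` path expansion can be sketched as:

```python
from typing import List


def hdr_exposures(base_us: float, levels: int = 3, multiplier: float = 2.0) -> List[float]:
    """Geometric exposure ladder: base, base*m, base*m**2, ... (assumed scheme)."""
    if levels < 1:
        raise ValueError("levels must be >= 1")
    return [base_us * multiplier ** i for i in range(levels)]


def hdr_save_paths(pattern: str, exposures: List[float]) -> List[str]:
    """Expand a save_path_pattern containing an {exposure} placeholder."""
    return [pattern.format(exposure=int(e)) for e in exposures]
```

With the documented defaults (`exposure_levels=3`, `exposure_multiplier=2.0`), `hdr_exposures(1000.0)` yields `[1000.0, 2000.0, 4000.0]`.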

get_bandwidth_settings async
get_bandwidth_settings() -> Dict[str, Any]

Get current bandwidth settings.

Returns:

Type Description
Dict[str, Any]

Bandwidth settings

set_bandwidth_limit async
set_bandwidth_limit(max_concurrent_captures: int) -> bool

Set maximum concurrent capture limit.

Parameters:

Name Type Description Default
max_concurrent_captures int

Maximum concurrent captures

required

Returns:

Type Description
bool

True if successful

get_network_diagnostics async
get_network_diagnostics() -> Dict[str, Any]

Get network diagnostics information.

Returns:

Type Description
Dict[str, Any]

Network diagnostics data

configure_capture_groups async
configure_capture_groups(config: Dict[str, Dict[str, Dict[str, Any]]]) -> bool

Configure stage+set capture groups with per-group concurrency semaphores.

Parameters:

Name Type Description Default
config Dict[str, Dict[str, Dict[str, Any]]]

{stage: {set: {"batch_size": int, "cameras": [str]}}}

required

Returns:

Type Description
bool

True if successful
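`configure_capture_groups` expects the nested `{stage: {set: {"batch_size": int, "cameras": [str]}}}` shape. A sketch for building and sanity-checking such a config before sending it; the two helpers are illustrative, not part of the service:

```python
from typing import Any, Dict, List, Tuple


def make_capture_groups(
    groups: List[Tuple[str, str, int, List[str]]],  # (stage, set_name, batch_size, cameras)
) -> Dict[str, Dict[str, Dict[str, Any]]]:
    """Build the nested {stage: {set: {"batch_size": ..., "cameras": [...]}}} mapping."""
    config: Dict[str, Dict[str, Dict[str, Any]]] = {}
    for stage, set_name, batch_size, cameras in groups:
        config.setdefault(stage, {})[set_name] = {
            "batch_size": batch_size,
            "cameras": list(cameras),
        }
    return config


def validate_capture_groups(config: Dict[str, Dict[str, Dict[str, Any]]]) -> None:
    """Check batch sizes are positive ints and camera names use 'Backend:device' form."""
    for stage, sets in config.items():
        for set_name, group in sets.items():
            assert isinstance(group["batch_size"], int) and group["batch_size"] >= 1
            assert all(":" in cam for cam in group["cameras"]), "expected 'Backend:device' names"


cfg = make_capture_groups([
    ("inspection", "top", 2, ["Basler:cam0", "Basler:cam1"]),
    ("inspection", "side", 1, ["Basler:cam2"]),
])
validate_capture_groups(cfg)
```

The resulting `cfg` could then be passed to `configure_capture_groups(cfg)`; per the docs, each group's `batch_size` bounds its concurrency semaphore.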

get_capture_groups async
get_capture_groups() -> Dict[str, Any]

Get current capture group configuration.

Returns:

Type Description
Dict[str, Any]

Dictionary of capture groups keyed by "stage:set_name"

remove_capture_groups async
remove_capture_groups() -> bool

Remove all capture group configurations.

Returns:

Type Description
bool

True if successful

CameraManagerService
CameraManagerService(include_mocks: bool = False, **kwargs)

Bases: Service

Camera Management Service.

Provides comprehensive camera management functionality through a Service-based architecture with MCP tool integration and async camera operations.

Initialize CameraManagerService.

Parameters:

Name Type Description Default
include_mocks bool

Include mock cameras in discovery

False
**kwargs

Additional Service initialization parameters

{}
shutdown_cleanup async
shutdown_cleanup()

Cleanup camera manager on shutdown.

discover_backends async
discover_backends() -> BackendsResponse

Discover available camera backends.

get_backend_info async
get_backend_info() -> BackendInfoResponse

Get detailed information about all backends.

discover_cameras async
discover_cameras(request: BackendFilterRequest) -> ListResponse

Discover available cameras from all or specific backends.

open_camera async
open_camera(request: CameraOpenRequest) -> BoolResponse

Open a single camera with exposure validation.

open_cameras_batch async
open_cameras_batch(request: CameraOpenBatchRequest) -> BatchOperationResponse

Open multiple cameras in batch.

close_camera async
close_camera(request: CameraCloseRequest) -> BoolResponse

Close a specific camera.

close_cameras_batch async
close_cameras_batch(request: CameraCloseBatchRequest) -> BatchOperationResponse

Close multiple cameras in batch.

close_all_cameras async
close_all_cameras() -> BoolResponse

Close all active cameras.

get_active_cameras async
get_active_cameras() -> ActiveCamerasResponse

Get list of currently active cameras.

get_camera_status async
get_camera_status(request: CameraQueryRequest) -> CameraStatusResponse

Get camera status information.

get_camera_info async
get_camera_info(request: CameraQueryRequest) -> CameraInfoResponse

Get detailed camera information.

get_camera_capabilities async
get_camera_capabilities(
    request: CameraQueryRequest,
) -> CameraCapabilitiesResponse

Get camera capabilities information.

configure_camera async
configure_camera(request: CameraConfigureRequest) -> BoolResponse

Configure camera parameters.

configure_cameras_batch async
configure_cameras_batch(
    request: CameraConfigureBatchRequest,
) -> BatchOperationResponse

Configure multiple cameras in batch.

get_camera_configuration async
get_camera_configuration(
    request: CameraQueryRequest,
) -> CameraConfigurationResponse

Get current camera configuration.

import_camera_config async
import_camera_config(request: ConfigFileImportRequest) -> ConfigFileResponse

Import camera configuration from file.

export_camera_config async
export_camera_config(request: ConfigFileExportRequest) -> ConfigFileResponse

Export camera configuration to file.

capture_image async
capture_image(request: CaptureImageRequest) -> CaptureResponse

Capture a single image with timeout protection.

capture_images_batch async
capture_images_batch(request: CaptureBatchRequest) -> BatchCaptureResponse

Capture images from multiple cameras.

capture_hdr_image async
capture_hdr_image(request: CaptureHDRRequest) -> HDRCaptureResponse

Capture HDR image sequence.

capture_hdr_images_batch async
capture_hdr_images_batch(
    request: CaptureHDRBatchRequest,
) -> BatchHDRCaptureResponse

Capture HDR images from multiple cameras.

get_network_diagnostics async
get_network_diagnostics() -> NetworkDiagnosticsResponse

Get network diagnostics information.

get_performance_settings async
get_performance_settings(
    request: CameraPerformanceSettingsRequest = None,
) -> CameraPerformanceSettingsResponse

Get current camera performance settings.

Returns global settings (timeout, retries, concurrent captures) and optionally per-camera GigE settings (packet_size, inter_packet_delay, bandwidth_limit) if a camera is specified.

set_performance_settings async
set_performance_settings(
    request: CameraPerformanceSettingsRequest,
) -> BoolResponse

Update camera performance settings.

Updates global settings (timeout, retries, concurrent captures) and optionally per-camera GigE settings (packet_size, inter_packet_delay, bandwidth_limit) if a camera is specified.

configure_capture_groups async
configure_capture_groups(
    request: ConfigureCaptureGroupsRequest,
) -> BoolResponse

Configure stage+set capture groups with per-group concurrency semaphores.

get_capture_groups async
get_capture_groups() -> CaptureGroupsResponse

Get current capture group configuration.

remove_capture_groups async
remove_capture_groups() -> BoolResponse

Remove all capture group configurations.

start_stream async
start_stream(request: StreamStartRequest) -> StreamInfoResponse

Start camera stream with resilient state management.

stop_stream
stop_stream(request: StreamStopRequest) -> BoolResponse

Stop camera stream with resilient state management.

get_stream_status async
get_stream_status(request: StreamStatusRequest) -> StreamStatusResponse

Get camera stream status with resilient state management.

get_active_streams
get_active_streams() -> ActiveStreamsResponse

Get list of cameras with active streams.

stop_all_streams
stop_all_streams() -> BoolResponse

Stop all active camera streams.

serve_camera_stream async
serve_camera_stream(camera_name: str)

Serve MJPEG video stream for a specific camera.

get_lens_status async
get_lens_status(request: CameraQueryRequest) -> LensStatusResponse

Get liquid lens hardware state for a camera.

get_optical_power async
get_optical_power(request: CameraQueryRequest) -> DictResponse

Get current optical power (diopters) for a camera's liquid lens.

set_optical_power async
set_optical_power(request: OpticalPowerRequest) -> BoolResponse

Set optical power (diopters) for a camera's liquid lens.

trigger_autofocus async
trigger_autofocus(request: TriggerAutofocusRequest) -> BoolResponse

Trigger one-shot autofocus on a camera's liquid lens.

get_focus_config async
get_focus_config(request: CameraQueryRequest) -> DictResponse

Get autofocus configuration for a camera's liquid lens.

set_focus_config async
set_focus_config(request: FocusConfigRequest) -> BoolResponse

Update autofocus configuration for a camera's liquid lens.

calibrate_homography_checkerboard async
calibrate_homography_checkerboard(
    request: HomographyCalibrateCheckerboardRequest,
) -> HomographyCalibrationResponse

Calibrate homography using checkerboard pattern detection.

calibrate_homography_correspondences
calibrate_homography_correspondences(
    request: HomographyCalibrateCorrespondencesRequest,
) -> HomographyCalibrationResponse

Calibrate homography from known point correspondences.

calibrate_homography_multi_view
calibrate_homography_multi_view(
    request: HomographyCalibrateMultiViewRequest,
) -> HomographyCalibrationResponse

Calibrate homography from multiple checkerboard positions on the same plane.

Ideal for calibrating long surfaces (metallic bars, conveyor belts) using a standard checkerboard moved to multiple positions.

measure_homography_box
measure_homography_box(
    request: HomographyMeasureBoundingBoxRequest,
) -> HomographyMeasurementResponse

Measure bounding box dimensions using homography calibration.

measure_homography_batch
measure_homography_batch(
    request: HomographyMeasureBatchRequest,
) -> HomographyBatchMeasurementResponse

Unified batch measurement for bounding boxes and/or point-pair distances.

measure_homography_distance
measure_homography_distance(
    request: HomographyMeasureDistanceRequest,
) -> HomographyDistanceResponse

Measure distance between two points using homography calibration.
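The homography measurement endpoints map pixel coordinates through a calibrated 3x3 homography into real-world units. The underlying math can be sketched without the service; the matrix values below are illustrative (a pure 0.5 mm/pixel scaling), not a real calibration:

```python
import math
from typing import Sequence, Tuple


def apply_homography(H: Sequence[Sequence[float]], x: float, y: float) -> Tuple[float, float]:
    """Map a pixel (x, y) through 3x3 homography H, with perspective division."""
    xw = H[0][0] * x + H[0][1] * y + H[0][2]
    yw = H[1][0] * x + H[1][1] * y + H[1][2]
    w = H[2][0] * x + H[2][1] * y + H[2][2]
    return xw / w, yw / w


def homography_distance(
    H: Sequence[Sequence[float]],
    p1: Tuple[float, float],
    p2: Tuple[float, float],
) -> float:
    """Euclidean distance between two pixel points after mapping to world units."""
    (x1, y1), (x2, y2) = apply_homography(H, *p1), apply_homography(H, *p2)
    return math.hypot(x2 - x1, y2 - y1)


# Illustrative calibration: pure scaling of 0.5 mm per pixel.
H = [[0.5, 0.0, 0.0], [0.0, 0.5, 0.0], [0.0, 0.0, 1.0]]
```

With this H, two points 100 pixels apart measure 50.0 mm apart; a real calibration (e.g., from the checkerboard endpoints above) would also encode rotation and perspective in the off-diagonal terms.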

health_check async
health_check() -> HealthCheckResponse

Health check endpoint for container healthcheck.

get_system_diagnostics async
get_system_diagnostics() -> SystemDiagnosticsResponse

Get system diagnostics information.

connection_manager

Connection Manager for CameraManagerService.

Provides a strongly-typed client interface for programmatic access to camera management operations.

CameraManagerConnectionManager
CameraManagerConnectionManager(
    url: Url | None = None,
    server_id: UUID | None = None,
    server_pid_file: str | None = None,
)

Bases: ConnectionManager

Connection Manager for CameraManagerService.

Provides strongly-typed methods for all camera management operations, making it easy to use the service programmatically from other applications.

get async
get(endpoint: str, http_timeout: float = 60.0) -> Dict[str, Any]

Make GET request to service endpoint.

post async
post(
    endpoint: str, data: Dict[str, Any] = None, http_timeout: float = 60.0
) -> Dict[str, Any]

Make POST request to service endpoint.

discover_backends async
discover_backends() -> List[str]

Discover available camera backends.

Returns:

Type Description
List[str]

List of available backend names

get_backend_info async
get_backend_info() -> Dict[str, Any]

Get detailed information about all backends.

Returns:

Type Description
Dict[str, Any]

Dictionary mapping backend names to their information

discover_cameras async
discover_cameras(backend: Optional[str] = None) -> List[str]

Discover available cameras from all or specific backends.

Parameters:

Name Type Description Default
backend Optional[str]

Optional backend name to filter by

None

Returns:

Type Description
List[str]

List of camera names in format 'Backend:device_name'

open_camera async
open_camera(camera: str, test_connection: bool = True) -> bool

Open a single camera.

Parameters:

Name Type Description Default
camera str

Camera name in format 'Backend:device_name'

required
test_connection bool

Test connection after opening

True

Returns:

Type Description
bool

True if successful

open_cameras_batch async
open_cameras_batch(
    cameras: List[str], test_connection: bool = True
) -> Dict[str, Any]

Open multiple cameras in batch.

Parameters:

Name Type Description Default
cameras List[str]

List of camera names

required
test_connection bool

Test connection after opening

True

Returns:

Type Description
Dict[str, Any]

Batch operation results

close_camera async
close_camera(camera: str) -> bool

Close a specific camera.

Parameters:

Name Type Description Default
camera str

Camera name to close

required

Returns:

Type Description
bool

True if successful

close_cameras_batch async
close_cameras_batch(cameras: List[str]) -> Dict[str, Any]

Close multiple cameras in batch.

Parameters:

Name Type Description Default
cameras List[str]

List of camera names to close

required

Returns:

Type Description
Dict[str, Any]

Batch operation results

close_all_cameras async
close_all_cameras() -> bool

Close all active cameras.

Returns:

Type Description
bool

True if successful

get_active_cameras async
get_active_cameras() -> List[str]

Get list of currently active cameras.

Returns:

Type Description
List[str]

List of active camera names

get_camera_status async
get_camera_status(camera: str) -> Dict[str, Any]

Get camera status information.

Parameters:

Name Type Description Default
camera str

Camera name to query

required

Returns:

Type Description
Dict[str, Any]

Camera status information

get_camera_info async
get_camera_info(camera: str) -> Dict[str, Any]

Get detailed camera information.

Parameters:

Name Type Description Default
camera str

Camera name to query

required

Returns:

Type Description
Dict[str, Any]

Camera information

get_camera_capabilities async
get_camera_capabilities(camera: str) -> Dict[str, Any]

Get camera capabilities information.

Parameters:

Name Type Description Default
camera str

Camera name to query

required

Returns:

Type Description
Dict[str, Any]

Camera capabilities

get_system_diagnostics async
get_system_diagnostics() -> Dict[str, Any]

Get system diagnostics information.

Returns:

Type Description
Dict[str, Any]

System diagnostics data

configure_camera async
configure_camera(camera: str, properties: Dict[str, Any]) -> bool

Configure camera parameters.

Parameters:

Name Type Description Default
camera str

Camera name to configure

required
properties Dict[str, Any]

Configuration properties

required

Returns:

Type Description
bool

True if successful

configure_cameras_batch async
configure_cameras_batch(
    configurations: Dict[str, Dict[str, Any]],
) -> Dict[str, Any]

Configure multiple cameras in batch.

Parameters:

Name Type Description Default
configurations Dict[str, Dict[str, Any]]

Dictionary mapping camera names to their configurations

required

Returns:

Type Description
Dict[str, Any]

Batch operation results

get_camera_configuration async
get_camera_configuration(camera: str) -> Dict[str, Any]

Get current camera configuration.

Parameters:

Name Type Description Default
camera str

Camera name to query

required

Returns:

Type Description
Dict[str, Any]

Current camera configuration

import_camera_config async
import_camera_config(camera: str, config_path: str) -> Dict[str, Any]

Import camera configuration from file.

Parameters:

Name Type Description Default
camera str

Camera name

required
config_path str

Path to configuration file

required

Returns:

Type Description
Dict[str, Any]

Import operation result

export_camera_config async
export_camera_config(camera: str, config_path: str) -> Dict[str, Any]

Export camera configuration to file.

Parameters:

Name Type Description Default
camera str

Camera name

required
config_path str

Path to save configuration file

required

Returns:

Type Description
Dict[str, Any]

Export operation result

capture_image async
capture_image(
    camera: str, save_path: Optional[str] = None, output_format: str = "pil"
) -> Dict[str, Any]

Capture a single image.

Parameters:

Name Type Description Default
camera str

Camera name

required
save_path Optional[str]

Optional path to save image

None
output_format str

Output format for returned image ("numpy" or "pil")

'pil'

Returns:

Type Description
Dict[str, Any]

Capture result

capture_images_batch async
capture_images_batch(
    cameras: List[str], output_format: str = "pil"
) -> Dict[str, Any]

Capture images from multiple cameras.

Parameters:

Name Type Description Default
cameras List[str]

List of camera names

required
output_format str

Output format for returned images ("numpy" or "pil")

'pil'

Returns:

Type Description
Dict[str, Any]

Batch capture results

capture_hdr_image async
capture_hdr_image(
    camera: str,
    save_path_pattern: Optional[str] = None,
    exposure_levels: int = 3,
    exposure_multiplier: float = 2.0,
    return_images: bool = True,
    output_format: str = "pil",
) -> Dict[str, Any]

Capture HDR image sequence.

Parameters:

- camera (str, required): Camera name
- save_path_pattern (Optional[str], default None): Path pattern with {exposure} placeholder
- exposure_levels (int, default 3): Number of exposure levels
- exposure_multiplier (float, default 2.0): Multiplier between exposures
- return_images (bool, default True): Return captured images
- output_format (str, default "pil"): Output format for returned images ("numpy" or "pil")

Returns:

- Dict[str, Any]: HDR capture result
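The exposure schedule implied by `exposure_levels` and `exposure_multiplier`, and the expansion of a `save_path_pattern` containing an `{exposure}` placeholder, can be sketched as follows. The base exposure value and the helper names are assumptions for illustration; the service derives the base exposure from the camera itself.

```python
# Sketch of the HDR exposure schedule: base, base*m, base*m^2, ...
# base_exposure_us and the helper names are illustrative assumptions.
from typing import List, Optional


def hdr_exposure_schedule(
    base_exposure_us: float,
    exposure_levels: int = 3,
    exposure_multiplier: float = 2.0,
) -> List[float]:
    """Return one exposure time per HDR frame."""
    return [base_exposure_us * exposure_multiplier**i for i in range(exposure_levels)]


def hdr_save_paths(pattern: Optional[str], schedule: List[float]) -> List[Optional[str]]:
    """Expand a save_path_pattern containing an {exposure} placeholder."""
    if pattern is None:
        return [None] * len(schedule)
    return [pattern.format(exposure=int(exp)) for exp in schedule]
```

For example, a base of 1000 µs with the defaults yields three frames at 1000, 2000, and 4000 µs.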

capture_hdr_images_batch async
capture_hdr_images_batch(
    cameras: List[str],
    save_path_pattern: Optional[str] = None,
    exposure_levels: int = 3,
    exposure_multiplier: float = 2.0,
    return_images: bool = True,
    output_format: str = "pil",
) -> Dict[str, Any]

Capture HDR images from multiple cameras.

Parameters:

- cameras (List[str], required): List of camera names
- save_path_pattern (Optional[str], default None): Path pattern with {exposure} placeholder
- exposure_levels (int, default 3): Number of exposure levels
- exposure_multiplier (float, default 2.0): Multiplier between exposures
- return_images (bool, default True): Return captured images
- output_format (str, default "pil"): Output format for returned images ("numpy" or "pil")

Returns:

- Dict[str, Any]: Batch HDR capture results

get_bandwidth_settings async
get_bandwidth_settings() -> Dict[str, Any]

Get current bandwidth settings.

Returns:

- Dict[str, Any]: Bandwidth settings

set_bandwidth_limit async
set_bandwidth_limit(max_concurrent_captures: int) -> bool

Set maximum concurrent capture limit.

Parameters:

- max_concurrent_captures (int, required): Maximum concurrent captures

Returns:

- bool: True if successful
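A concurrent-capture limit of this kind is typically enforced with a semaphore. The sketch below is illustrative only (not the service's internals): it shows how an `asyncio.Semaphore` caps how many captures run at once.

```python
# Illustrative sketch (not the service's internals): capping concurrent
# captures with an asyncio.Semaphore, as set_bandwidth_limit suggests.
import asyncio
from typing import List


async def capture_all(camera_names: List[str], max_concurrent_captures: int) -> List[str]:
    sem = asyncio.Semaphore(max_concurrent_captures)
    in_flight = 0
    peak = 0

    async def capture(name: str) -> str:
        nonlocal in_flight, peak
        async with sem:  # blocks once the limit is reached
            in_flight += 1
            peak = max(peak, in_flight)
            await asyncio.sleep(0.01)  # stand-in for the real capture
            in_flight -= 1
        return name

    results = await asyncio.gather(*(capture(n) for n in camera_names))
    assert peak <= max_concurrent_captures  # the limit was respected
    return results
```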

get_network_diagnostics async
get_network_diagnostics() -> Dict[str, Any]

Get network diagnostics information.

Returns:

- Dict[str, Any]: Network diagnostics data

configure_capture_groups async
configure_capture_groups(config: Dict[str, Dict[str, Dict[str, Any]]]) -> bool

Configure stage+set capture groups with per-group concurrency semaphores.

Parameters:

- config (Dict[str, Dict[str, Dict[str, Any]]], required): {stage: {set: {"batch_size": int, "cameras": [str]}}}

Returns:

- bool: True if successful
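The nested config shape and the "stage:set_name" keys documented here can be sketched as below. Only the `{stage: {set: {"batch_size": int, "cameras": [str]}}}` shape and the key format come from the docs; the `CaptureGroup` class is a hedged stand-in for whatever structure the service keeps internally.

```python
# Hedged sketch: flatten capture-group config into "stage:set_name" keys,
# each with a per-group asyncio.Semaphore sized to batch_size. CaptureGroup
# is an assumed stand-in, not the service's internal type.
import asyncio
from dataclasses import dataclass, field
from typing import Any, Dict, List


@dataclass
class CaptureGroup:
    cameras: List[str]
    batch_size: int
    semaphore: asyncio.Semaphore = field(init=False)

    def __post_init__(self) -> None:
        # At most batch_size cameras in this group capture simultaneously.
        self.semaphore = asyncio.Semaphore(self.batch_size)


def build_capture_groups(config: Dict[str, Dict[str, Dict[str, Any]]]) -> Dict[str, CaptureGroup]:
    """Flatten {stage: {set: {...}}} into groups keyed by "stage:set_name"."""
    groups: Dict[str, CaptureGroup] = {}
    for stage, sets in config.items():
        for set_name, spec in sets.items():
            groups[f"{stage}:{set_name}"] = CaptureGroup(
                cameras=list(spec["cameras"]), batch_size=int(spec["batch_size"])
            )
    return groups
```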

get_capture_groups async
get_capture_groups() -> Dict[str, Any]

Get current capture group configuration.

Returns:

- Dict[str, Any]: Dictionary of capture groups keyed by "stage:set_name"

remove_capture_groups async
remove_capture_groups() -> bool

Remove all capture group configurations.

Returns:

- bool: True if successful

launcher

Camera API service launcher.

main
main()

Main launcher function.

models

Models for CameraManagerService API.

BackendFilterRequest

Bases: BaseModel

Request model for backend filtering.

BandwidthLimitCameraRequest

Bases: BaseModel

Request model for setting camera bandwidth limit.

BandwidthLimitRequest

Bases: BaseModel

Request model for setting bandwidth limit.

CameraCloseBatchRequest

Bases: BaseModel

Request model for batch camera closing.

CameraCloseRequest

Bases: BaseModel

Request model for closing a camera.

CameraConfigureBatchRequest

Bases: BaseModel

Request model for batch camera configuration.

validate_configurations classmethod
validate_configurations(
    v: Union[Dict[str, Dict[str, Any]], List[Dict[str, Any]]],
) -> Dict[str, Dict[str, Any]]

Convert list format to dict format.
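The list-to-dict normalization this validator performs might look like the sketch below. The `"camera"` key used to name each entry in the list form is an assumption; the actual field name is defined by the model.

```python
# Hypothetical sketch of the list-to-dict normalization done by
# validate_configurations; the "camera" key name is an assumption.
from typing import Any, Dict, List, Union


def normalize_configurations(
    v: Union[Dict[str, Dict[str, Any]], List[Dict[str, Any]]],
) -> Dict[str, Dict[str, Any]]:
    """Accept either {camera: settings} or [{"camera": name, **settings}]."""
    if isinstance(v, dict):
        return v
    out: Dict[str, Dict[str, Any]] = {}
    for item in v:
        settings = dict(item)
        name = settings.pop("camera")  # assumed key naming each camera
        out[name] = settings
    return out
```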

CameraConfigureRequest

Bases: BaseModel

Request model for camera configuration.

CameraOpenBatchRequest

Bases: BaseModel

Request model for batch camera opening.

CameraOpenRequest

Bases: BaseModel

Request model for opening a camera.

CameraPerformanceSettingsRequest

Bases: BaseModel

Request model for updating camera performance settings.

- Global settings (always applicable): timeout_ms, retrieve_retry_count, max_concurrent_captures
- Per-camera GigE settings (requires the camera field; GigE cameras only): packet_size, inter_packet_delay, bandwidth_limit_mbps
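The split between global and per-camera GigE keys can be made concrete as below. The key sets come straight from the docstring; the splitting helper itself is an assumption for illustration.

```python
# Hedged sketch: separate the documented global settings from per-camera
# GigE settings. Key sets are from the docs; the helper is an assumption.
from typing import Any, Dict, Tuple

GLOBAL_KEYS = {"timeout_ms", "retrieve_retry_count", "max_concurrent_captures"}
GIGE_KEYS = {"packet_size", "inter_packet_delay", "bandwidth_limit_mbps"}


def split_performance_settings(
    settings: Dict[str, Any],
) -> Tuple[Dict[str, Any], Dict[str, Any]]:
    """Return (global_settings, gige_settings); GigE keys require a camera."""
    global_part = {k: v for k, v in settings.items() if k in GLOBAL_KEYS}
    gige_part = {k: v for k, v in settings.items() if k in GIGE_KEYS}
    return global_part, gige_part
```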

CameraQueryRequest

Bases: BaseModel

Request model for camera query operations.

CaptureBatchRequest

Bases: BaseModel

Request model for batch image capture.

validate_output_format classmethod
validate_output_format(v: str) -> str

Validate output format is supported.
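The `validate_output_format` validators that appear on the capture request models presumably reduce to a check like the one sketched here, restricting the value to the documented `"numpy"` / `"pil"` choices. Whether case is normalized is an assumption.

```python
# Sketch of an output_format validator mirroring the documented
# "numpy" / "pil" choices; case normalization is an assumption.
def validate_output_format(v: str) -> str:
    allowed = {"numpy", "pil"}
    normalized = v.lower()
    if normalized not in allowed:
        raise ValueError(f"output_format must be one of {sorted(allowed)}, got {v!r}")
    return normalized
```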

CaptureHDRBatchRequest

Bases: BaseModel

Request model for batch HDR image capture.

validate_exposure_levels classmethod
validate_exposure_levels(v: Union[int, List[float]]) -> Union[int, List[float]]

Validate exposure levels.

validate_output_format classmethod
validate_output_format(v: str) -> str

Validate output format is supported.

CaptureHDRRequest

Bases: BaseModel

Request model for HDR image capture.

validate_exposure_levels classmethod
validate_exposure_levels(v: Union[int, List[float]]) -> Union[int, List[float]]

Validate exposure levels.

validate_output_format classmethod
validate_output_format(v: str) -> str

Validate output format is supported.

CaptureImageRequest

Bases: BaseModel

Request model for single image capture.

validate_output_format classmethod
validate_output_format(v: str) -> str

Validate output format is supported.

ConfigFileExportRequest

Bases: BaseModel

Request model for configuration file export.

ConfigFileImportRequest

Bases: BaseModel

Request model for configuration file import.

ConfigureCaptureGroupsRequest

Bases: BaseModel

Request model for configuring stage+set capture groups.

Each group creates a concurrency semaphore sized to batch_size, limiting how many cameras within the group can capture simultaneously.

ExposureRequest

Bases: BaseModel

Request model for exposure setting.

FocusConfigRequest

Bases: BaseModel

Request model for setting focus configuration.

GainRequest

Bases: BaseModel

Request model for gain setting.

HomographyCalibrateCheckerboardRequest

Bases: BaseModel

Request model for checkerboard-based homography calibration.

HomographyCalibrateCorrespondencesRequest

Bases: BaseModel

Request model for manual point correspondence calibration.

validate_points classmethod
validate_points(v: List[List[float]]) -> List[List[float]]

Validate point arrays.

HomographyCalibrateMultiViewRequest

Bases: BaseModel

Request model for multi-view checkerboard calibration.

Note: Checkerboard parameters (board_size, square_size, world_unit) are configured in HomographySettings (see config.py), not passed per-request.

validate_positions classmethod
validate_positions(v: List[Dict[str, float]]) -> List[Dict[str, float]]

Validate position format.

validate_lengths_match
validate_lengths_match()

Ensure number of images matches number of positions.

HomographyMeasureBatchRequest

Bases: BaseModel

Unified request model for batch measurements (bounding boxes and/or point-pair distances).

validate_boxes classmethod
validate_boxes(
    v: Optional[List[Dict[str, int]]],
) -> Optional[List[Dict[str, int]]]

Validate bounding boxes.

validate_point_pairs classmethod
validate_point_pairs(
    v: Optional[List[List[List[float]]]],
) -> Optional[List[List[List[float]]]]

Validate point pairs.

validate_at_least_one
validate_at_least_one()

Ensure at least one measurement type is provided.
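The model-level check behind `validate_at_least_one` amounts to the sketch below: a batch request must carry bounding boxes, point pairs, or both. Field names follow the companion validators; the helper itself is illustrative.

```python
# Sketch of the "at least one measurement type" check; field names follow
# the documented validators, the standalone helper is illustrative.
from typing import Any, List, Optional


def validate_at_least_one(
    boxes: Optional[List[dict]], point_pairs: Optional[List[Any]]
) -> None:
    if not boxes and not point_pairs:
        raise ValueError("Provide at least one of 'boxes' or 'point_pairs'.")
```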

HomographyMeasureBoundingBoxRequest

Bases: BaseModel

Request model for measuring a single bounding box.

HomographyMeasureDistanceRequest

Bases: BaseModel

Request model for measuring distance between two points.
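The math behind a homography-based distance measurement can be shown in pure Python: map both pixel points through a 3x3 homography H (here row-major nested lists, assumed to map pixels to world units) and take the Euclidean distance. This illustrates the computation, not the service's implementation.

```python
# Pure-Python sketch of point-pair measurement via a 3x3 homography H
# (assumed pixel-to-world); illustrates the math, not the service code.
import math
from typing import List, Sequence, Tuple

Matrix3 = List[List[float]]


def apply_homography(H: Matrix3, point: Sequence[float]) -> Tuple[float, float]:
    x, y = point
    w = H[2][0] * x + H[2][1] * y + H[2][2]  # projective scale
    return (
        (H[0][0] * x + H[0][1] * y + H[0][2]) / w,
        (H[1][0] * x + H[1][1] * y + H[1][2]) / w,
    )


def measure_distance(H: Matrix3, p1: Sequence[float], p2: Sequence[float]) -> float:
    """Euclidean distance between two pixel points in world units."""
    (x1, y1), (x2, y2) = apply_homography(H, p1), apply_homography(H, p2)
    return math.hypot(x2 - x1, y2 - y1)
```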

ImageEnhancementRequest

Bases: BaseModel

Request model for image enhancement setting.

InterPacketDelayRequest

Bases: BaseModel

Request model for setting inter-packet delay.

OpticalPowerRequest

Bases: BaseModel

Request model for setting optical power.

PacketSizeRequest

Bases: BaseModel

Request model for setting camera packet size.

PixelFormatRequest

Bases: BaseModel

Request model for pixel format setting.

ROIRequest

Bases: BaseModel

Request model for ROI (Region of Interest) setting.

StreamStartRequest

Bases: BaseModel

Request model for starting camera stream.

StreamStatusRequest

Bases: BaseModel

Request model for getting stream status.

StreamStopRequest

Bases: BaseModel

Request model for stopping camera stream.

TriggerAutofocusRequest

Bases: BaseModel

Request model for triggering one-shot autofocus.

TriggerModeRequest

Bases: BaseModel

Request model for trigger mode setting.

WhiteBalanceRequest

Bases: BaseModel

Request model for white balance setting.

ActiveCamerasResponse

Bases: BaseResponse

Response model for active cameras list.

ActiveStreamsResponse

Bases: BaseResponse

Response model for active streams list.

BackendInfo

Bases: BaseModel

Backend information model.

BackendInfoResponse

Bases: BaseResponse

Response model for detailed backend information.

BackendsResponse

Bases: BaseResponse

Response model for backend listing.

BandwidthSettings

Bases: BaseModel

Bandwidth settings model.

BandwidthSettingsResponse

Bases: BaseResponse

Response model for bandwidth settings.

BaseResponse

Bases: BaseModel

Base response model for all API endpoints.

BatchCaptureResponse

Bases: BaseResponse

Response model for batch capture operations.

BatchHDRCaptureResponse

Bases: BaseResponse

Response model for batch HDR capture.

BatchOperationResponse

Bases: BaseResponse

Response model for batch operations.

BatchOperationResult

Bases: BaseModel

Batch operation result model.

BoolResponse

Bases: BaseResponse

Response model for boolean operations.

CameraCapabilities

Bases: BaseModel

Camera capabilities model.

CameraCapabilitiesResponse

Bases: BaseResponse

Response model for camera capabilities.

CameraConfiguration

Bases: BaseModel

Camera configuration model.

CameraConfigurationResponse

Bases: BaseResponse

Response model for camera configuration.

CameraInfo

Bases: BaseModel

Camera information model.

CameraInfoResponse

Bases: BaseResponse

Response model for camera information.

CameraPerformanceSettings

Bases: BaseModel

Camera performance and retry settings model.

- Global settings: timeout_ms, retrieve_retry_count, max_concurrent_captures
- Per-camera GigE settings (None if not applicable or not queried): packet_size, inter_packet_delay, bandwidth_limit_mbps

CameraPerformanceSettingsResponse

Bases: BaseResponse

Response model for camera performance settings.

CameraStatus

Bases: BaseModel

Camera status model.

CameraStatusResponse

Bases: BaseResponse

Response model for camera status.

CaptureGroupInfo

Bases: BaseModel

Capture group information model.

CaptureGroupsResponse

Bases: BaseResponse

Response model for capture groups.

CaptureResponse

Bases: BaseResponse

Response model for single image capture.

CaptureResult

Bases: BaseModel

Capture result model.

ConfigFileOperationResult

Bases: BaseModel

Configuration file operation result.

ConfigFileResponse

Bases: BaseResponse

Response model for configuration file operations.

DictResponse

Bases: BaseResponse

Response model for dictionary data.

ErrorDetail

Bases: BaseModel

Error detail model.

ErrorResponse

Bases: BaseResponse

Response model for error conditions.

FloatResponse

Bases: BaseResponse

Response model for float values.

HDRCaptureResponse

Bases: BaseResponse

Response model for HDR capture.

HDRCaptureResult

Bases: BaseModel

HDR capture result model.

HealthCheckResponse

Bases: BaseModel

Health check response model.

HomographyBatchMeasurementData

Bases: BaseModel

Batch measurement data containing both box and distance measurements.

HomographyBatchMeasurementResponse

Bases: BaseResponse

Response model for unified batch homography measurements.

HomographyCalibrationResponse

Bases: BaseResponse

Response model for homography calibration.

HomographyCalibrationResult

Bases: BaseModel

Homography calibration result model.

HomographyDistanceResponse

Bases: BaseResponse

Response model for distance measurement.

HomographyDistanceResult

Bases: BaseModel

Homography distance measurement result model.

HomographyMeasurementResponse

Bases: BaseResponse

Response model for single homography measurement.

HomographyMeasurementResult

Bases: BaseModel

Homography measurement result model.

IntResponse

Bases: BaseResponse

Response model for integer values.

LensStatus

Bases: BaseModel

Liquid lens hardware state.

LensStatusResponse

Bases: BaseResponse

Response model for lens status.

ListResponse

Bases: BaseResponse

Response model for list data.

NetworkDiagnostics

Bases: BaseModel

Network diagnostics model.

NetworkDiagnosticsResponse

Bases: BaseResponse

Response model for network diagnostics.

ParameterRange

Bases: BaseModel

Parameter range model.

RangeResponse

Bases: BaseResponse

Response model for parameter ranges.

StreamInfo

Bases: BaseModel

Stream information model.

StreamInfoResponse

Bases: BaseResponse

Response model for stream information.

StreamStatus

Bases: BaseModel

Stream status model.

StreamStatusResponse

Bases: BaseResponse

Response model for stream status.

StringResponse

Bases: BaseResponse

Response model for string values.

SystemDiagnostics

Bases: BaseModel

System diagnostics model.

SystemDiagnosticsResponse

Bases: BaseResponse

Response model for system diagnostics.

requests

Request models for CameraManagerService.

Contains all Pydantic models for API requests, ensuring proper input validation and documentation for all camera operations.

BackendFilterRequest

Bases: BaseModel

Request model for backend filtering.

CameraOpenRequest

Bases: BaseModel

Request model for opening a camera.

CameraOpenBatchRequest

Bases: BaseModel

Request model for batch camera opening.

CameraCloseRequest

Bases: BaseModel

Request model for closing a camera.

CameraCloseBatchRequest

Bases: BaseModel

Request model for batch camera closing.

CameraConfigureRequest

Bases: BaseModel

Request model for camera configuration.

CameraConfigureBatchRequest

Bases: BaseModel

Request model for batch camera configuration.

validate_configurations classmethod
validate_configurations(
    v: Union[Dict[str, Dict[str, Any]], List[Dict[str, Any]]],
) -> Dict[str, Dict[str, Any]]

Convert list format to dict format.

CameraQueryRequest

Bases: BaseModel

Request model for camera query operations.

ConfigFileImportRequest

Bases: BaseModel

Request model for configuration file import.

ConfigFileExportRequest

Bases: BaseModel

Request model for configuration file export.

CaptureImageRequest

Bases: BaseModel

Request model for single image capture.

validate_output_format classmethod
validate_output_format(v: str) -> str

Validate output format is supported.

CaptureBatchRequest

Bases: BaseModel

Request model for batch image capture.

validate_output_format classmethod
validate_output_format(v: str) -> str

Validate output format is supported.

CaptureHDRRequest

Bases: BaseModel

Request model for HDR image capture.

validate_exposure_levels classmethod
validate_exposure_levels(v: Union[int, List[float]]) -> Union[int, List[float]]

Validate exposure levels.

validate_output_format classmethod
validate_output_format(v: str) -> str

Validate output format is supported.

CaptureHDRBatchRequest

Bases: BaseModel

Request model for batch HDR image capture.

validate_exposure_levels classmethod
validate_exposure_levels(v: Union[int, List[float]]) -> Union[int, List[float]]

Validate exposure levels.

validate_output_format classmethod
validate_output_format(v: str) -> str

Validate output format is supported.

BandwidthLimitRequest

Bases: BaseModel

Request model for setting bandwidth limit.

CameraPerformanceSettingsRequest

Bases: BaseModel

Request model for updating camera performance settings.

- Global settings (always applicable): timeout_ms, retrieve_retry_count, max_concurrent_captures
- Per-camera GigE settings (requires the camera field; GigE cameras only): packet_size, inter_packet_delay, bandwidth_limit_mbps

ExposureRequest

Bases: BaseModel

Request model for exposure setting.

GainRequest

Bases: BaseModel

Request model for gain setting.

ROIRequest

Bases: BaseModel

Request model for ROI (Region of Interest) setting.

TriggerModeRequest

Bases: BaseModel

Request model for trigger mode setting.

PixelFormatRequest

Bases: BaseModel

Request model for pixel format setting.

WhiteBalanceRequest

Bases: BaseModel

Request model for white balance setting.

ImageEnhancementRequest

Bases: BaseModel

Request model for image enhancement setting.

BandwidthLimitCameraRequest

Bases: BaseModel

Request model for setting camera bandwidth limit.

PacketSizeRequest

Bases: BaseModel

Request model for setting camera packet size.

InterPacketDelayRequest

Bases: BaseModel

Request model for setting inter-packet delay.

StreamStartRequest

Bases: BaseModel

Request model for starting camera stream.

StreamStopRequest

Bases: BaseModel

Request model for stopping camera stream.

StreamStatusRequest

Bases: BaseModel

Request model for getting stream status.

HomographyCalibrateCheckerboardRequest

Bases: BaseModel

Request model for checkerboard-based homography calibration.

HomographyCalibrateCorrespondencesRequest

Bases: BaseModel

Request model for manual point correspondence calibration.

validate_points classmethod
validate_points(v: List[List[float]]) -> List[List[float]]

Validate point arrays.

HomographyMeasureBoundingBoxRequest

Bases: BaseModel

Request model for measuring a single bounding box.

HomographyMeasureBatchRequest

Bases: BaseModel

Unified request model for batch measurements (bounding boxes and/or point-pair distances).

validate_boxes classmethod
validate_boxes(
    v: Optional[List[Dict[str, int]]],
) -> Optional[List[Dict[str, int]]]

Validate bounding boxes.

validate_point_pairs classmethod
validate_point_pairs(
    v: Optional[List[List[List[float]]]],
) -> Optional[List[List[List[float]]]]

Validate point pairs.

validate_at_least_one
validate_at_least_one()

Ensure at least one measurement type is provided.

HomographyMeasureDistanceRequest

Bases: BaseModel

Request model for measuring distance between two points.

OpticalPowerRequest

Bases: BaseModel

Request model for setting optical power.

TriggerAutofocusRequest

Bases: BaseModel

Request model for triggering one-shot autofocus.

FocusConfigRequest

Bases: BaseModel

Request model for setting focus configuration.

HomographyCalibrateMultiViewRequest

Bases: BaseModel

Request model for multi-view checkerboard calibration.

Note: Checkerboard parameters (board_size, square_size, world_unit) are configured in HomographySettings (see config.py), not passed per-request.

validate_positions classmethod
validate_positions(v: List[Dict[str, float]]) -> List[Dict[str, float]]

Validate position format.

validate_lengths_match
validate_lengths_match()

Ensure number of images matches number of positions.

ConfigureCaptureGroupsRequest

Bases: BaseModel

Request model for configuring stage+set capture groups.

Each group creates a concurrency semaphore sized to batch_size, limiting how many cameras within the group can capture simultaneously.

responses

Response models for CameraManagerService.

Contains all Pydantic models for API responses, ensuring consistent response formatting across all camera management endpoints.

BaseResponse

Bases: BaseModel

Base response model for all API endpoints.

BoolResponse

Bases: BaseResponse

Response model for boolean operations.

StringResponse

Bases: BaseResponse

Response model for string values.

IntResponse

Bases: BaseResponse

Response model for integer values.

FloatResponse

Bases: BaseResponse

Response model for float values.

ListResponse

Bases: BaseResponse

Response model for list data.

DictResponse

Bases: BaseResponse

Response model for dictionary data.

BackendInfo

Bases: BaseModel

Backend information model.

BackendsResponse

Bases: BaseResponse

Response model for backend listing.

BackendInfoResponse

Bases: BaseResponse

Response model for detailed backend information.

CameraInfo

Bases: BaseModel

Camera information model.

CameraStatus

Bases: BaseModel

Camera status model.

CameraCapabilities

Bases: BaseModel

Camera capabilities model.

CameraConfiguration

Bases: BaseModel

Camera configuration model.

LensStatus

Bases: BaseModel

Liquid lens hardware state.

LensStatusResponse

Bases: BaseResponse

Response model for lens status.

CameraInfoResponse

Bases: BaseResponse

Response model for camera information.

CameraStatusResponse

Bases: BaseResponse

Response model for camera status.

CameraCapabilitiesResponse

Bases: BaseResponse

Response model for camera capabilities.

CameraConfigurationResponse

Bases: BaseResponse

Response model for camera configuration.

ActiveCamerasResponse

Bases: BaseResponse

Response model for active cameras list.

CaptureResult

Bases: BaseModel

Capture result model.

CaptureResponse

Bases: BaseResponse

Response model for single image capture.

BatchCaptureResponse

Bases: BaseResponse

Response model for batch capture operations.

HDRCaptureResult

Bases: BaseModel

HDR capture result model.

HDRCaptureResponse

Bases: BaseResponse

Response model for HDR capture.

BatchHDRCaptureResponse

Bases: BaseResponse

Response model for batch HDR capture.

SystemDiagnostics

Bases: BaseModel

System diagnostics model.

SystemDiagnosticsResponse

Bases: BaseResponse

Response model for system diagnostics.

BandwidthSettings

Bases: BaseModel

Bandwidth settings model.

BandwidthSettingsResponse

Bases: BaseResponse

Response model for bandwidth settings.

CameraPerformanceSettings

Bases: BaseModel

Camera performance and retry settings model.

- Global settings: timeout_ms, retrieve_retry_count, max_concurrent_captures
- Per-camera GigE settings (None if not applicable or not queried): packet_size, inter_packet_delay, bandwidth_limit_mbps

CameraPerformanceSettingsResponse

Bases: BaseResponse

Response model for camera performance settings.

NetworkDiagnostics

Bases: BaseModel

Network diagnostics model.

NetworkDiagnosticsResponse

Bases: BaseResponse

Response model for network diagnostics.

BatchOperationResult

Bases: BaseModel

Batch operation result model.

BatchOperationResponse

Bases: BaseResponse

Response model for batch operations.

ErrorDetail

Bases: BaseModel

Error detail model.

ErrorResponse

Bases: BaseResponse

Response model for error conditions.

ParameterRange

Bases: BaseModel

Parameter range model.

RangeResponse

Bases: BaseResponse

Response model for parameter ranges.

ConfigFileOperationResult

Bases: BaseModel

Configuration file operation result.

ConfigFileResponse

Bases: BaseResponse

Response model for configuration file operations.

StreamInfo

Bases: BaseModel

Stream information model.

StreamStatus

Bases: BaseModel

Stream status model.

StreamInfoResponse

Bases: BaseResponse

Response model for stream information.

StreamStatusResponse

Bases: BaseResponse

Response model for stream status.

ActiveStreamsResponse

Bases: BaseResponse

Response model for active streams list.

HomographyCalibrationResult

Bases: BaseModel

Homography calibration result model.

HomographyCalibrationResponse

Bases: BaseResponse

Response model for homography calibration.

HomographyMeasurementResult

Bases: BaseModel

Homography measurement result model.

HomographyMeasurementResponse

Bases: BaseResponse

Response model for single homography measurement.

HomographyDistanceResult

Bases: BaseModel

Homography distance measurement result model.

HomographyDistanceResponse

Bases: BaseResponse

Response model for distance measurement.

HomographyBatchMeasurementData

Bases: BaseModel

Batch measurement data containing both box and distance measurements.

HomographyBatchMeasurementResponse

Bases: BaseResponse

Response model for unified batch homography measurements.

CaptureGroupInfo

Bases: BaseModel

Capture group information model.

CaptureGroupsResponse

Bases: BaseResponse

Response model for capture groups.

HealthCheckResponse

Bases: BaseModel

Health check response model.

schemas

TaskSchemas for CameraManagerService endpoints.

backend_schemas

Backend and Discovery TaskSchemas.

capture_group_schemas

Capture Group TaskSchemas for stage+set batching.

capture_schemas

Image Capture TaskSchemas.

config_schemas

Camera Configuration TaskSchemas.

focus_schemas

Liquid Lens and Focus Control TaskSchemas.

health_schemas

Health check TaskSchema.

homography_schemas

Homography Calibration & Measurement TaskSchemas.

info_schemas

Camera Status and Information TaskSchemas.

lifecycle_schemas

Camera Lifecycle TaskSchemas.

network_schemas

Network and Performance TaskSchemas.

stream_schemas

Streaming TaskSchemas.

service

CameraManagerService - Service-based API for camera management.

This service wraps AsyncCameraManager functionality in a Service-based architecture with comprehensive MCP tool integration and typed client access.

CameraManagerService
CameraManagerService(include_mocks: bool = False, **kwargs)

Bases: Service

Camera Management Service.

Provides comprehensive camera management functionality through a Service-based architecture with MCP tool integration and async camera operations.

Initialize CameraManagerService.

Parameters:

- include_mocks (bool, default False): Include mock cameras in discovery
- **kwargs (default {}): Additional Service initialization parameters
shutdown_cleanup async
shutdown_cleanup()

Cleanup camera manager on shutdown.

discover_backends async
discover_backends() -> BackendsResponse

Discover available camera backends.

get_backend_info async
get_backend_info() -> BackendInfoResponse

Get detailed information about all backends.

discover_cameras async
discover_cameras(request: BackendFilterRequest) -> ListResponse

Discover available cameras from all or specific backends.

open_camera async
open_camera(request: CameraOpenRequest) -> BoolResponse

Open a single camera with exposure validation.

open_cameras_batch async
open_cameras_batch(request: CameraOpenBatchRequest) -> BatchOperationResponse

Open multiple cameras in batch.

close_camera async
close_camera(request: CameraCloseRequest) -> BoolResponse

Close a specific camera.

close_cameras_batch async
close_cameras_batch(request: CameraCloseBatchRequest) -> BatchOperationResponse

Close multiple cameras in batch.

close_all_cameras async
close_all_cameras() -> BoolResponse

Close all active cameras.

get_active_cameras async
get_active_cameras() -> ActiveCamerasResponse

Get list of currently active cameras.

get_camera_status async
get_camera_status(request: CameraQueryRequest) -> CameraStatusResponse

Get camera status information.

get_camera_info async
get_camera_info(request: CameraQueryRequest) -> CameraInfoResponse

Get detailed camera information.

get_camera_capabilities async
get_camera_capabilities(
    request: CameraQueryRequest,
) -> CameraCapabilitiesResponse

Get camera capabilities information.

configure_camera async
configure_camera(request: CameraConfigureRequest) -> BoolResponse

Configure camera parameters.

configure_cameras_batch async
configure_cameras_batch(
    request: CameraConfigureBatchRequest,
) -> BatchOperationResponse

Configure multiple cameras in batch.

get_camera_configuration async
get_camera_configuration(
    request: CameraQueryRequest,
) -> CameraConfigurationResponse

Get current camera configuration.

import_camera_config async
import_camera_config(request: ConfigFileImportRequest) -> ConfigFileResponse

Import camera configuration from file.

export_camera_config async
export_camera_config(request: ConfigFileExportRequest) -> ConfigFileResponse

Export camera configuration to file.

capture_image async
capture_image(request: CaptureImageRequest) -> CaptureResponse

Capture a single image with timeout protection.

capture_images_batch async
capture_images_batch(request: CaptureBatchRequest) -> BatchCaptureResponse

Capture images from multiple cameras.

capture_hdr_image async
capture_hdr_image(request: CaptureHDRRequest) -> HDRCaptureResponse

Capture HDR image sequence.

capture_hdr_images_batch async
capture_hdr_images_batch(
    request: CaptureHDRBatchRequest,
) -> BatchHDRCaptureResponse

Capture HDR images from multiple cameras.

get_network_diagnostics async
get_network_diagnostics() -> NetworkDiagnosticsResponse

Get network diagnostics information.

get_performance_settings async
get_performance_settings(
    request: CameraPerformanceSettingsRequest = None,
) -> CameraPerformanceSettingsResponse

Get current camera performance settings.

Returns global settings (timeout, retries, concurrent captures) and optionally per-camera GigE settings (packet_size, inter_packet_delay, bandwidth_limit) if camera is specified.

set_performance_settings async
set_performance_settings(
    request: CameraPerformanceSettingsRequest,
) -> BoolResponse

Update camera performance settings.

Updates global settings (timeout, retries, concurrent captures) and optionally per-camera GigE settings (packet_size, inter_packet_delay, bandwidth_limit) if camera is specified.

configure_capture_groups async
configure_capture_groups(
    request: ConfigureCaptureGroupsRequest,
) -> BoolResponse

Configure stage+set capture groups with per-group concurrency semaphores.

get_capture_groups async
get_capture_groups() -> CaptureGroupsResponse

Get current capture group configuration.

remove_capture_groups async
remove_capture_groups() -> BoolResponse

Remove all capture group configurations.

start_stream async
start_stream(request: StreamStartRequest) -> StreamInfoResponse

Start camera stream with resilient state management.

stop_stream
stop_stream(request: StreamStopRequest) -> BoolResponse

Stop camera stream with resilient state management.

get_stream_status async
get_stream_status(request: StreamStatusRequest) -> StreamStatusResponse

Get camera stream status with resilient state management.

get_active_streams
get_active_streams() -> ActiveStreamsResponse

Get list of cameras with active streams.

stop_all_streams
stop_all_streams() -> BoolResponse

Stop all active camera streams.

serve_camera_stream async
serve_camera_stream(camera_name: str)

Serve MJPEG video stream for a specific camera.

get_lens_status async
get_lens_status(request: CameraQueryRequest) -> LensStatusResponse

Get liquid lens hardware state for a camera.

get_optical_power async
get_optical_power(request: CameraQueryRequest) -> DictResponse

Get current optical power (diopters) for a camera's liquid lens.

set_optical_power async
set_optical_power(request: OpticalPowerRequest) -> BoolResponse

Set optical power (diopters) for a camera's liquid lens.

trigger_autofocus async
trigger_autofocus(request: TriggerAutofocusRequest) -> BoolResponse

Trigger one-shot autofocus on a camera's liquid lens.

get_focus_config async
get_focus_config(request: CameraQueryRequest) -> DictResponse

Get autofocus configuration for a camera's liquid lens.

set_focus_config async
set_focus_config(request: FocusConfigRequest) -> BoolResponse

Update autofocus configuration for a camera's liquid lens.

calibrate_homography_checkerboard async
calibrate_homography_checkerboard(
    request: HomographyCalibrateCheckerboardRequest,
) -> HomographyCalibrationResponse

Calibrate homography using checkerboard pattern detection.

calibrate_homography_correspondences
calibrate_homography_correspondences(
    request: HomographyCalibrateCorrespondencesRequest,
) -> HomographyCalibrationResponse

Calibrate homography from known point correspondences.

calibrate_homography_multi_view
calibrate_homography_multi_view(
    request: HomographyCalibrateMultiViewRequest,
) -> HomographyCalibrationResponse

Calibrate homography from multiple checkerboard positions on the same plane.

Ideal for calibrating long surfaces (metallic bars, conveyor belts) using a standard checkerboard moved to multiple positions.

measure_homography_box
measure_homography_box(
    request: HomographyMeasureBoundingBoxRequest,
) -> HomographyMeasurementResponse

Measure bounding box dimensions using homography calibration.

measure_homography_batch
measure_homography_batch(
    request: HomographyMeasureBatchRequest,
) -> HomographyBatchMeasurementResponse

Unified batch measurement for bounding boxes and/or point-pair distances.

measure_homography_distance
measure_homography_distance(
    request: HomographyMeasureDistanceRequest,
) -> HomographyDistanceResponse

Measure distance between two points using homography calibration.
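The service performs the calibration and measurement internally; the underlying math can be illustrated with a direct linear transform. Given four pixel-to-world correspondences, solve for a planar homography (fixing h33 = 1), map pixels to the world plane, and take the Euclidean distance. The correspondences below are invented (a uniform 0.5 mm/px view):

```python
import math


def solve(A, b):
    """Gaussian elimination with partial pivoting for a small dense system."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x


def homography_from_points(pixel_pts, world_pts):
    """Direct linear transform from 4 pixel->world correspondences (h33 = 1)."""
    A, b = [], []
    for (u, v), (x, y) in zip(pixel_pts, world_pts):
        A.append([u, v, 1, 0, 0, 0, -u * x, -v * x]); b.append(x)
        A.append([0, 0, 0, u, v, 1, -u * y, -v * y]); b.append(y)
    h = solve(A, b) + [1.0]
    return [h[0:3], h[3:6], h[6:9]]


def pixel_to_world(H, u, v):
    w = H[2][0] * u + H[2][1] * v + H[2][2]
    return ((H[0][0] * u + H[0][1] * v + H[0][2]) / w,
            (H[1][0] * u + H[1][1] * v + H[1][2]) / w)


def measure_distance(H, p1, p2):
    (x1, y1), (x2, y2) = pixel_to_world(H, *p1), pixel_to_world(H, *p2)
    return math.hypot(x2 - x1, y2 - y1)


# Checkerboard corners seen at these pixels map to known world (mm) positions.
H = homography_from_points(
    [(0, 0), (100, 0), (100, 100), (0, 100)],
    [(0, 0), (50, 0), (50, 50), (0, 50)],  # 0.5 mm per pixel
)
print(round(measure_distance(H, (0, 0), (100, 0)), 3))  # → 50.0
```

Real calibrations detect the checkerboard corners automatically and typically fit the homography over many more than four points.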

health_check async
health_check() -> HealthCheckResponse

Health check endpoint for container healthcheck.

get_system_diagnostics async
get_system_diagnostics() -> SystemDiagnosticsResponse

Get system diagnostics information.

plcs

PLC API Service - REST API for PLC management and control.

PLCManagerConnectionManager
PLCManagerConnectionManager(
    url: Url | None = None,
    server_id: UUID | None = None,
    server_pid_file: str | None = None,
)

Bases: ConnectionManager

Connection Manager for PLCManagerService.

Provides strongly-typed methods for all PLC management operations, making it easy to use the service programmatically from other applications.

get async
get(endpoint: str, http_timeout: float = 60.0) -> Dict[str, Any]

Make GET request to service endpoint.

post async
post(
    endpoint: str, data: Dict[str, Any] = None, http_timeout: float = 60.0
) -> Dict[str, Any]

Make POST request to service endpoint.

discover_backends async
discover_backends() -> List[str]

Discover available PLC backends.

Returns:

Type Description
List[str]

List of available backend names

get_backend_info async
get_backend_info() -> Dict[str, Any]

Get detailed information about all backends.

Returns:

Type Description
Dict[str, Any]

Dictionary mapping backend names to their information

discover_plcs async
discover_plcs(backend: Optional[str] = None) -> List[str]

Discover available PLCs from all or specific backends.

Parameters:

Name Type Description Default
backend Optional[str]

Optional backend name to filter by

None

Returns:

Type Description
List[str]

List of PLC identifiers

connect_plc async
connect_plc(
    plc_name: str,
    backend: str,
    ip_address: str,
    plc_type: Optional[str] = None,
    connection_timeout: Optional[float] = None,
    read_timeout: Optional[float] = None,
    write_timeout: Optional[float] = None,
    retry_count: Optional[int] = None,
    retry_delay: Optional[float] = None,
) -> bool

Connect to a PLC.

Parameters:

Name Type Description Default
plc_name str

Unique identifier for the PLC

required
backend str

Backend type (AllenBradley, Siemens, Modbus)

required
ip_address str

IP address of the PLC

required
plc_type Optional[str]

Specific PLC type (logix, slc, cip, auto)

None
connection_timeout Optional[float]

Connection timeout in seconds

None
read_timeout Optional[float]

Tag read timeout in seconds

None
write_timeout Optional[float]

Tag write timeout in seconds

None
retry_count Optional[int]

Number of retry attempts

None
retry_delay Optional[float]

Delay between retries in seconds

None

Returns:

Type Description
bool

True if successful
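The `retry_count` / `retry_delay` parameters describe a retry-with-delay policy around the connect attempt. A runnable sketch of that semantics, where `flaky` is a stand-in for the real backend connect call:

```python
import asyncio

# Sketch of connect_plc's retry_count / retry_delay semantics: retry a flaky
# connect attempt with a fixed delay. attempt_connect is a stand-in callable,
# not the real backend connection logic.


async def connect_with_retry(attempt_connect, retry_count=3, retry_delay=0.01):
    last_error = None
    for attempt in range(retry_count + 1):  # initial try plus retries
        try:
            return await attempt_connect()
        except ConnectionError as exc:
            last_error = exc
            if attempt < retry_count:
                await asyncio.sleep(retry_delay)
    raise last_error


failures = {"left": 2}  # fail the first two attempts, then succeed


async def flaky():
    if failures["left"] > 0:
        failures["left"] -= 1
        raise ConnectionError("PLC unreachable")
    return True


print(asyncio.run(connect_with_retry(flaky)))  # → True
```

Whether the service retries on timeouts as well as refused connections is backend-specific; the defaults used when these parameters are `None` come from the configuration system.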

connect_plcs_batch async
connect_plcs_batch(plcs: List[PLCConnectRequest]) -> Dict[str, Any]

Connect to multiple PLCs in batch.

Parameters:

Name Type Description Default
plcs List[PLCConnectRequest]

List of PLC connection requests

required

Returns:

Type Description
Dict[str, Any]

Batch operation results

disconnect_plc async
disconnect_plc(plc: str) -> bool

Disconnect from a PLC.

Parameters:

Name Type Description Default
plc str

PLC name to disconnect

required

Returns:

Type Description
bool

True if successful

disconnect_plcs_batch async
disconnect_plcs_batch(plcs: List[str]) -> Dict[str, Any]

Disconnect from multiple PLCs in batch.

Parameters:

Name Type Description Default
plcs List[str]

List of PLC names to disconnect

required

Returns:

Type Description
Dict[str, Any]

Batch operation results

disconnect_all_plcs async
disconnect_all_plcs() -> bool

Disconnect from all active PLCs.

Returns:

Type Description
bool

True if successful

get_active_plcs async
get_active_plcs() -> List[str]

Get list of currently active PLCs.

Returns:

Type Description
List[str]

List of active PLC names

read_tags async
read_tags(plc: str, tags: Union[str, List[str]]) -> Dict[str, Any]

Read tag values from a PLC.

Parameters:

Name Type Description Default
plc str

PLC name

required
tags Union[str, List[str]]

Single tag name or list of tag names

required

Returns:

Type Description
Dict[str, Any]

Dictionary mapping tag names to their values

write_tags async
write_tags(
    plc: str, tags: Union[Tuple[str, Any], List[Tuple[str, Any]]]
) -> Dict[str, bool]

Write tag values to a PLC.

Parameters:

Name Type Description Default
plc str

PLC name

required
tags Union[Tuple[str, Any], List[Tuple[str, Any]]]

Single (tag_name, value) tuple or list of tuples

required

Returns:

Type Description
Dict[str, bool]

Dictionary mapping tag names to write success status
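The read/write call pattern can be shown without a running PLCManagerService by using an in-memory stand-in that mirrors the documented `read_tags` / `write_tags` signatures (single name or list for reads; single `(tag, value)` tuple or list of tuples for writes). `FakePLCClient` and its tag values are invented for illustration:

```python
import asyncio

# Sketch against a hypothetical in-memory stand-in for
# PLCManagerConnectionManager; only the documented call shapes are mirrored.


class FakePLCClient:
    def __init__(self):
        self._tags = {"PLC1": {"MotorSpeed": 1200, "RunFlag": True}}

    async def read_tags(self, plc, tags):
        names = [tags] if isinstance(tags, str) else tags
        return {t: self._tags[plc].get(t) for t in names}

    async def write_tags(self, plc, tags):
        pairs = [tags] if isinstance(tags, tuple) else tags
        results = {}
        for name, value in pairs:
            self._tags[plc][name] = value
            results[name] = True  # per-tag success status
        return results


async def main():
    client = FakePLCClient()
    values = await client.read_tags("PLC1", ["MotorSpeed", "RunFlag"])
    ok = await client.write_tags("PLC1", ("MotorSpeed", 1500))
    after = await client.read_tags("PLC1", "MotorSpeed")
    print(values, ok, after)


asyncio.run(main())
```

Against the real connection manager the calls are identical; only construction differs (pass the service `url` to `PLCManagerConnectionManager`).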

read_tags_batch async
read_tags_batch(
    requests: List[Tuple[str, Union[str, List[str]]]],
) -> Dict[str, Dict[str, Any]]

Read tags from multiple PLCs in batch.

Parameters:

Name Type Description Default
requests List[Tuple[str, Union[str, List[str]]]]

List of (plc_name, tags) tuples

required

Returns:

Type Description
Dict[str, Dict[str, Any]]

Dictionary mapping PLC names to their tag read results

write_tags_batch async
write_tags_batch(
    requests: List[Tuple[str, Union[Tuple[str, Any], List[Tuple[str, Any]]]]],
) -> Dict[str, Dict[str, bool]]

Write tags to multiple PLCs in batch.

Parameters:

Name Type Description Default
requests List[Tuple[str, Union[Tuple[str, Any], List[Tuple[str, Any]]]]]

List of (plc_name, tags) tuples

required

Returns:

Type Description
Dict[str, Dict[str, bool]]

Dictionary mapping PLC names to their tag write results

list_tags async
list_tags(plc: str) -> List[str]

List all available tags on a PLC.

Parameters:

Name Type Description Default
plc str

PLC name

required

Returns:

Type Description
List[str]

List of tag names

get_tag_info async
get_tag_info(plc: str, tag: str) -> Dict[str, Any]

Get detailed information about a specific tag.

Parameters:

Name Type Description Default
plc str

PLC name

required
tag str

Tag name

required

Returns:

Type Description
Dict[str, Any]

Tag information

get_plc_status async
get_plc_status(plc: str) -> Dict[str, Any]

Get PLC status information.

Parameters:

Name Type Description Default
plc str

PLC name to query

required

Returns:

Type Description
Dict[str, Any]

PLC status information

get_plc_info async
get_plc_info(plc: str) -> Dict[str, Any]

Get detailed PLC information.

Parameters:

Name Type Description Default
plc str

PLC name to query

required

Returns:

Type Description
Dict[str, Any]

PLC information

get_system_diagnostics async
get_system_diagnostics() -> Dict[str, Any]

Get system diagnostics information.

Returns:

Type Description
Dict[str, Any]

System diagnostics data

PLCManagerService
PLCManagerService(**kwargs)

Bases: Service

PLC Management Service.

Provides comprehensive PLC management functionality through a Service-based architecture with MCP tool integration and async PLC operations.

Initialize PLCManagerService.

Parameters:

Name Type Description Default
**kwargs

Additional Service initialization parameters

{}
shutdown_cleanup async
shutdown_cleanup()

Cleanup PLC manager on shutdown.

discover_backends
discover_backends() -> BackendsResponse

Discover available PLC backends.

get_backend_info
get_backend_info() -> BackendInfoResponse

Get detailed information about all backends.

discover_plcs async
discover_plcs(request: BackendFilterRequest) -> ListResponse

Discover available PLCs from all or specific backends.

connect_plc async
connect_plc(request: PLCConnectRequest) -> BoolResponse

Connect to a PLC.

connect_plcs_batch async
connect_plcs_batch(request: PLCConnectBatchRequest) -> BatchOperationResponse

Connect to multiple PLCs in batch.

disconnect_plc async
disconnect_plc(request: PLCDisconnectRequest) -> BoolResponse

Disconnect from a PLC.

disconnect_plcs_batch async
disconnect_plcs_batch(
    request: PLCDisconnectBatchRequest,
) -> BatchOperationResponse

Disconnect from multiple PLCs in batch.

disconnect_all_plcs async
disconnect_all_plcs() -> BoolResponse

Disconnect from all active PLCs.

get_active_plcs
get_active_plcs() -> ActivePLCsResponse

Get list of currently active PLCs.

read_tags async
read_tags(request: TagReadRequest) -> TagReadResponse

Read tag values from a PLC.

write_tags async
write_tags(request: TagWriteRequest) -> TagWriteResponse

Write tag values to a PLC.

read_tags_batch async
read_tags_batch(request: TagBatchReadRequest) -> BatchTagReadResponse

Read tags from multiple PLCs in batch.

write_tags_batch async
write_tags_batch(request: TagBatchWriteRequest) -> BatchTagWriteResponse

Write tags to multiple PLCs in batch.

list_tags async
list_tags(request: PLCQueryRequest) -> TagListResponse

List all available tags on a PLC.

get_tag_info async
get_tag_info(request: TagInfoRequest) -> TagInfoResponse

Get detailed information about a specific tag.

get_plc_status async
get_plc_status(request: PLCQueryRequest) -> PLCStatusResponse

Get PLC status information.

get_plc_info async
get_plc_info(request: PLCQueryRequest) -> PLCInfoResponse

Get detailed PLC information.

get_system_diagnostics async
get_system_diagnostics() -> SystemDiagnosticsResponse

Get system diagnostics information.

health_check
health_check() -> HealthCheckResponse

Health check endpoint for container healthcheck.

connection_manager

Connection Manager for PLCManagerService.

Provides a strongly-typed client interface for programmatic access to PLC management operations.

PLCManagerConnectionManager
PLCManagerConnectionManager(
    url: Url | None = None,
    server_id: UUID | None = None,
    server_pid_file: str | None = None,
)

Bases: ConnectionManager

Connection Manager for PLCManagerService.

Provides strongly-typed methods for all PLC management operations, making it easy to use the service programmatically from other applications.

get async
get(endpoint: str, http_timeout: float = 60.0) -> Dict[str, Any]

Make GET request to service endpoint.

post async
post(
    endpoint: str, data: Dict[str, Any] = None, http_timeout: float = 60.0
) -> Dict[str, Any]

Make POST request to service endpoint.

discover_backends async
discover_backends() -> List[str]

Discover available PLC backends.

Returns:

Type Description
List[str]

List of available backend names

get_backend_info async
get_backend_info() -> Dict[str, Any]

Get detailed information about all backends.

Returns:

Type Description
Dict[str, Any]

Dictionary mapping backend names to their information

discover_plcs async
discover_plcs(backend: Optional[str] = None) -> List[str]

Discover available PLCs from all or specific backends.

Parameters:

Name Type Description Default
backend Optional[str]

Optional backend name to filter by

None

Returns:

Type Description
List[str]

List of PLC identifiers

connect_plc async
connect_plc(
    plc_name: str,
    backend: str,
    ip_address: str,
    plc_type: Optional[str] = None,
    connection_timeout: Optional[float] = None,
    read_timeout: Optional[float] = None,
    write_timeout: Optional[float] = None,
    retry_count: Optional[int] = None,
    retry_delay: Optional[float] = None,
) -> bool

Connect to a PLC.

Parameters:

Name Type Description Default
plc_name str

Unique identifier for the PLC

required
backend str

Backend type (AllenBradley, Siemens, Modbus)

required
ip_address str

IP address of the PLC

required
plc_type Optional[str]

Specific PLC type (logix, slc, cip, auto)

None
connection_timeout Optional[float]

Connection timeout in seconds

None
read_timeout Optional[float]

Tag read timeout in seconds

None
write_timeout Optional[float]

Tag write timeout in seconds

None
retry_count Optional[int]

Number of retry attempts

None
retry_delay Optional[float]

Delay between retries in seconds

None

Returns:

Type Description
bool

True if successful

connect_plcs_batch async
connect_plcs_batch(plcs: List[PLCConnectRequest]) -> Dict[str, Any]

Connect to multiple PLCs in batch.

Parameters:

Name Type Description Default
plcs List[PLCConnectRequest]

List of PLC connection requests

required

Returns:

Type Description
Dict[str, Any]

Batch operation results

disconnect_plc async
disconnect_plc(plc: str) -> bool

Disconnect from a PLC.

Parameters:

Name Type Description Default
plc str

PLC name to disconnect

required

Returns:

Type Description
bool

True if successful

disconnect_plcs_batch async
disconnect_plcs_batch(plcs: List[str]) -> Dict[str, Any]

Disconnect from multiple PLCs in batch.

Parameters:

Name Type Description Default
plcs List[str]

List of PLC names to disconnect

required

Returns:

Type Description
Dict[str, Any]

Batch operation results

disconnect_all_plcs async
disconnect_all_plcs() -> bool

Disconnect from all active PLCs.

Returns:

Type Description
bool

True if successful

get_active_plcs async
get_active_plcs() -> List[str]

Get list of currently active PLCs.

Returns:

Type Description
List[str]

List of active PLC names

read_tags async
read_tags(plc: str, tags: Union[str, List[str]]) -> Dict[str, Any]

Read tag values from a PLC.

Parameters:

Name Type Description Default
plc str

PLC name

required
tags Union[str, List[str]]

Single tag name or list of tag names

required

Returns:

Type Description
Dict[str, Any]

Dictionary mapping tag names to their values

write_tags async
write_tags(
    plc: str, tags: Union[Tuple[str, Any], List[Tuple[str, Any]]]
) -> Dict[str, bool]

Write tag values to a PLC.

Parameters:

Name Type Description Default
plc str

PLC name

required
tags Union[Tuple[str, Any], List[Tuple[str, Any]]]

Single (tag_name, value) tuple or list of tuples

required

Returns:

Type Description
Dict[str, bool]

Dictionary mapping tag names to write success status

read_tags_batch async
read_tags_batch(
    requests: List[Tuple[str, Union[str, List[str]]]],
) -> Dict[str, Dict[str, Any]]

Read tags from multiple PLCs in batch.

Parameters:

Name Type Description Default
requests List[Tuple[str, Union[str, List[str]]]]

List of (plc_name, tags) tuples

required

Returns:

Type Description
Dict[str, Dict[str, Any]]

Dictionary mapping PLC names to their tag read results

write_tags_batch async
write_tags_batch(
    requests: List[Tuple[str, Union[Tuple[str, Any], List[Tuple[str, Any]]]]],
) -> Dict[str, Dict[str, bool]]

Write tags to multiple PLCs in batch.

Parameters:

Name Type Description Default
requests List[Tuple[str, Union[Tuple[str, Any], List[Tuple[str, Any]]]]]

List of (plc_name, tags) tuples

required

Returns:

Type Description
Dict[str, Dict[str, bool]]

Dictionary mapping PLC names to their tag write results

list_tags async
list_tags(plc: str) -> List[str]

List all available tags on a PLC.

Parameters:

Name Type Description Default
plc str

PLC name

required

Returns:

Type Description
List[str]

List of tag names

get_tag_info async
get_tag_info(plc: str, tag: str) -> Dict[str, Any]

Get detailed information about a specific tag.

Parameters:

Name Type Description Default
plc str

PLC name

required
tag str

Tag name

required

Returns:

Type Description
Dict[str, Any]

Tag information

get_plc_status async
get_plc_status(plc: str) -> Dict[str, Any]

Get PLC status information.

Parameters:

Name Type Description Default
plc str

PLC name to query

required

Returns:

Type Description
Dict[str, Any]

PLC status information

get_plc_info async
get_plc_info(plc: str) -> Dict[str, Any]

Get detailed PLC information.

Parameters:

Name Type Description Default
plc str

PLC name to query

required

Returns:

Type Description
Dict[str, Any]

PLC information

get_system_diagnostics async
get_system_diagnostics() -> Dict[str, Any]

Get system diagnostics information.

Returns:

Type Description
Dict[str, Any]

System diagnostics data

launcher

PLC API service launcher.

main
main()

Main launcher function.

models

PLC API models - Request and Response models.

BackendFilterRequest

Bases: BaseModel

Request model for backend filtering.

PLCConnectBatchRequest

Bases: BaseModel

Request model for batch PLC connection.

PLCConnectRequest

Bases: BaseModel

Request model for connecting to a PLC.

PLCDisconnectBatchRequest

Bases: BaseModel

Request model for batch PLC disconnection.

PLCDisconnectRequest

Bases: BaseModel

Request model for disconnecting from a PLC.

PLCQueryRequest

Bases: BaseModel

Request model for PLC query operations.

TagBatchReadRequest

Bases: BaseModel

Request model for batch tag reading from multiple PLCs.

validate_requests classmethod
validate_requests(
    v: List[Tuple[str, Union[str, List[str]]]],
) -> List[Tuple[str, Union[str, List[str]]]]

Validate batch read requests.

TagBatchWriteRequest

Bases: BaseModel

Request model for batch tag writing to multiple PLCs.

validate_requests classmethod
validate_requests(
    v: List[Tuple[str, Union[Tuple[str, Any], List[Tuple[str, Any]]]]],
) -> List[Tuple[str, Union[Tuple[str, Any], List[Tuple[str, Any]]]]]

Validate batch write requests.

TagInfoRequest

Bases: BaseModel

Request model for getting tag information.

TagReadRequest

Bases: BaseModel

Request model for reading tags from a PLC.

validate_tags classmethod
validate_tags(v: Union[str, List[str]]) -> Union[str, List[str]]

Ensure tags is not empty.

TagWriteRequest

Bases: BaseModel

Request model for writing tags to a PLC.

validate_tags classmethod
validate_tags(
    v: Union[Tuple[str, Any], List[Tuple[str, Any]]],
) -> Union[Tuple[str, Any], List[Tuple[str, Any]]]

Ensure tags is properly formatted.
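`validate_tags` on `TagWriteRequest` is documented only as "ensure tags is properly formatted"; a plausible sketch of that normalization, accepting either a single `(tag, value)` tuple or a list of them (the exact rules are an assumption; the real validator lives in the service models):

```python
# Sketch of the validation TagWriteRequest.validate_tags is documented to
# perform. The exact rejection rules here are assumed, not taken from source.


def validate_write_tags(v):
    pairs = [v] if isinstance(v, tuple) else v
    if not pairs:
        raise ValueError("tags must not be empty")
    for item in pairs:
        if not (isinstance(item, tuple) and len(item) == 2 and isinstance(item[0], str)):
            raise ValueError(f"expected (tag_name, value) tuple, got {item!r}")
    return v


print(validate_write_tags(("Speed", 1500)))
print(validate_write_tags([("Speed", 1500), ("Mode", "auto")]))
```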

ActivePLCsResponse

Bases: BaseResponse

Response model for active PLCs listing.

BackendInfo

Bases: BaseModel

Backend information model.

BackendInfoResponse

Bases: BaseResponse

Response model for detailed backend information.

BackendsResponse

Bases: BaseResponse

Response model for backend listing.

BaseResponse

Bases: BaseModel

Base response model for all API endpoints.

BatchOperationResponse

Bases: BaseResponse

Response model for batch operations.

BatchOperationResult

Bases: BaseModel

Batch operation result model.

BatchTagReadResponse

Bases: BaseResponse

Response model for batch tag read operations.

BatchTagWriteResponse

Bases: BaseResponse

Response model for batch tag write operations.

BoolResponse

Bases: BaseResponse

Response model for boolean operations.

DictResponse

Bases: BaseResponse

Response model for dictionary data.

FloatResponse

Bases: BaseResponse

Response model for float values.

HealthCheckResponse

Bases: BaseModel

Health check response model.

IntResponse

Bases: BaseResponse

Response model for integer values.

ListResponse

Bases: BaseResponse

Response model for list data.

PLCInfo

Bases: BaseModel

PLC information model.

PLCInfoResponse

Bases: BaseResponse

Response model for PLC information.

PLCStatus

Bases: BaseModel

PLC status model.

PLCStatusResponse

Bases: BaseResponse

Response model for PLC status.

StringResponse

Bases: BaseResponse

Response model for string values.

SystemDiagnostics

Bases: BaseModel

System diagnostics model.

SystemDiagnosticsResponse

Bases: BaseResponse

Response model for system diagnostics.

TagInfo

Bases: BaseModel

Tag information model.

TagInfoResponse

Bases: BaseResponse

Response model for tag information.

TagListResponse

Bases: BaseResponse

Response model for tag list operations.

TagReadResponse

Bases: BaseResponse

Response model for tag read operations.

TagWriteResponse

Bases: BaseResponse

Response model for tag write operations.

requests

Request models for PLCManagerService.

Contains all Pydantic models for API requests, ensuring proper input validation and documentation for all PLC operations.

BackendFilterRequest

Bases: BaseModel

Request model for backend filtering.

PLCConnectRequest

Bases: BaseModel

Request model for connecting to a PLC.

PLCConnectBatchRequest

Bases: BaseModel

Request model for batch PLC connection.

PLCDisconnectRequest

Bases: BaseModel

Request model for disconnecting from a PLC.

PLCDisconnectBatchRequest

Bases: BaseModel

Request model for batch PLC disconnection.

PLCQueryRequest

Bases: BaseModel

Request model for PLC query operations.

TagReadRequest

Bases: BaseModel

Request model for reading tags from a PLC.

validate_tags classmethod
validate_tags(v: Union[str, List[str]]) -> Union[str, List[str]]

Ensure tags is not empty.

TagWriteRequest

Bases: BaseModel

Request model for writing tags to a PLC.

validate_tags classmethod
validate_tags(
    v: Union[Tuple[str, Any], List[Tuple[str, Any]]],
) -> Union[Tuple[str, Any], List[Tuple[str, Any]]]

Ensure tags is properly formatted.

TagBatchReadRequest

Bases: BaseModel

Request model for batch tag reading from multiple PLCs.

validate_requests classmethod
validate_requests(
    v: List[Tuple[str, Union[str, List[str]]]],
) -> List[Tuple[str, Union[str, List[str]]]]

Validate batch read requests.

TagBatchWriteRequest

Bases: BaseModel

Request model for batch tag writing to multiple PLCs.

validate_requests classmethod
validate_requests(
    v: List[Tuple[str, Union[Tuple[str, Any], List[Tuple[str, Any]]]]],
) -> List[Tuple[str, Union[Tuple[str, Any], List[Tuple[str, Any]]]]]

Validate batch write requests.

TagInfoRequest

Bases: BaseModel

Request model for getting tag information.

responses

Response models for PLCManagerService.

Contains all Pydantic models for API responses, ensuring consistent response formatting across all PLC management endpoints.

BaseResponse

Bases: BaseModel

Base response model for all API endpoints.

BoolResponse

Bases: BaseResponse

Response model for boolean operations.

StringResponse

Bases: BaseResponse

Response model for string values.

IntResponse

Bases: BaseResponse

Response model for integer values.

FloatResponse

Bases: BaseResponse

Response model for float values.

ListResponse

Bases: BaseResponse

Response model for list data.

DictResponse

Bases: BaseResponse

Response model for dictionary data.

BackendInfo

Bases: BaseModel

Backend information model.

BackendsResponse

Bases: BaseResponse

Response model for backend listing.

BackendInfoResponse

Bases: BaseResponse

Response model for detailed backend information.

PLCInfo

Bases: BaseModel

PLC information model.

PLCStatus

Bases: BaseModel

PLC status model.

PLCInfoResponse

Bases: BaseResponse

Response model for PLC information.

PLCStatusResponse

Bases: BaseResponse

Response model for PLC status.

ActivePLCsResponse

Bases: BaseResponse

Response model for active PLCs listing.

TagReadResponse

Bases: BaseResponse

Response model for tag read operations.

TagWriteResponse

Bases: BaseResponse

Response model for tag write operations.

TagListResponse

Bases: BaseResponse

Response model for tag list operations.

TagInfo

Bases: BaseModel

Tag information model.

TagInfoResponse

Bases: BaseResponse

Response model for tag information.

BatchOperationResult

Bases: BaseModel

Batch operation result model.

BatchOperationResponse

Bases: BaseResponse

Response model for batch operations.

BatchTagReadResponse

Bases: BaseResponse

Response model for batch tag read operations.

BatchTagWriteResponse

Bases: BaseResponse

Response model for batch tag write operations.

SystemDiagnostics

Bases: BaseModel

System diagnostics model.

SystemDiagnosticsResponse

Bases: BaseResponse

Response model for system diagnostics.

HealthCheckResponse

Bases: BaseModel

Health check response model.

schemas

TaskSchemas for PLCManagerService endpoints.

backend_schemas

Backend and Discovery TaskSchemas.

health_schemas

Health check TaskSchema.

lifecycle_schemas

PLC Lifecycle TaskSchemas.

status_schemas

Status & Information TaskSchemas.

tag_schemas

Tag Operations TaskSchemas.

service

PLCManagerService - Service-based API for PLC management.

This service wraps PLCManager functionality in a Service-based architecture with comprehensive MCP tool integration and typed client access.

PLCManagerService
PLCManagerService(**kwargs)

Bases: Service

PLC Management Service.

Provides comprehensive PLC management functionality through a Service-based architecture with MCP tool integration and async PLC operations.

Initialize PLCManagerService.

Parameters:

Name Type Description Default
**kwargs

Additional Service initialization parameters

{}
shutdown_cleanup async
shutdown_cleanup()

Cleanup PLC manager on shutdown.

discover_backends
discover_backends() -> BackendsResponse

Discover available PLC backends.

get_backend_info
get_backend_info() -> BackendInfoResponse

Get detailed information about all backends.

discover_plcs async
discover_plcs(request: BackendFilterRequest) -> ListResponse

Discover available PLCs from all or specific backends.

connect_plc async
connect_plc(request: PLCConnectRequest) -> BoolResponse

Connect to a PLC.

connect_plcs_batch async
connect_plcs_batch(request: PLCConnectBatchRequest) -> BatchOperationResponse

Connect to multiple PLCs in batch.

disconnect_plc async
disconnect_plc(request: PLCDisconnectRequest) -> BoolResponse

Disconnect from a PLC.

disconnect_plcs_batch async
disconnect_plcs_batch(
    request: PLCDisconnectBatchRequest,
) -> BatchOperationResponse

Disconnect from multiple PLCs in batch.

disconnect_all_plcs async
disconnect_all_plcs() -> BoolResponse

Disconnect from all active PLCs.

get_active_plcs
get_active_plcs() -> ActivePLCsResponse

Get list of currently active PLCs.

read_tags async
read_tags(request: TagReadRequest) -> TagReadResponse

Read tag values from a PLC.

write_tags async
write_tags(request: TagWriteRequest) -> TagWriteResponse

Write tag values to a PLC.

read_tags_batch async
read_tags_batch(request: TagBatchReadRequest) -> BatchTagReadResponse

Read tags from multiple PLCs in batch.

write_tags_batch async
write_tags_batch(request: TagBatchWriteRequest) -> BatchTagWriteResponse

Write tags to multiple PLCs in batch.

list_tags async
list_tags(request: PLCQueryRequest) -> TagListResponse

List all available tags on a PLC.

get_tag_info async
get_tag_info(request: TagInfoRequest) -> TagInfoResponse

Get detailed information about a specific tag.

get_plc_status async
get_plc_status(request: PLCQueryRequest) -> PLCStatusResponse

Get PLC status information.

get_plc_info async
get_plc_info(request: PLCQueryRequest) -> PLCInfoResponse

Get detailed PLC information.

get_system_diagnostics async
get_system_diagnostics() -> SystemDiagnosticsResponse

Get system diagnostics information.

health_check
health_check() -> HealthCheckResponse

Health check endpoint for container healthcheck.

scanners_3d

Scanner3DService - Service-based 3D scanner management API.

Scanner3DConnectionManager
Scanner3DConnectionManager(
    url: Url | None = None,
    server_id: UUID | None = None,
    server_pid_file: str | None = None,
)

Bases: ConnectionManager

Connection Manager for Scanner3DService.

Provides strongly-typed methods for all 3D scanner management operations, making it easy to use the service programmatically from other applications.

get async
get(endpoint: str, http_timeout: float = 60.0) -> Dict[str, Any]

Make GET request to service endpoint.

post async
post(
    endpoint: str, data: Dict[str, Any] = None, http_timeout: float = 60.0
) -> Dict[str, Any]

Make POST request to service endpoint.

discover_backends async
discover_backends() -> List[str]

Discover available 3D scanner backends.

get_backend_info async
get_backend_info() -> Dict[str, Any]

Get detailed information about all backends.

discover_scanners async
discover_scanners(backend: Optional[str] = None) -> List[str]

Discover available 3D scanners.

open_scanner async
open_scanner(scanner: str, test_connection: bool = True) -> bool

Open a 3D scanner.

open_scanners_batch async
open_scanners_batch(
    scanners: List[str], test_connection: bool = True
) -> Dict[str, Any]

Open multiple 3D scanners.

close_scanner async
close_scanner(scanner: str) -> bool

Close a 3D scanner.

close_scanners_batch async
close_scanners_batch(scanners: List[str]) -> Dict[str, Any]

Close multiple 3D scanners.

close_all_scanners async
close_all_scanners() -> bool

Close all active 3D scanners.

get_active_scanners async
get_active_scanners() -> List[str]

Get list of active scanners.

get_scanner_status async
get_scanner_status(scanner: str) -> Dict[str, Any]

Get scanner status.

get_scanner_info async
get_scanner_info(scanner: str) -> Dict[str, Any]

Get scanner information.

configure_scanner async
configure_scanner(scanner: str, properties: Dict[str, Any]) -> bool

Configure scanner parameters.

configure_scanners_batch async
configure_scanners_batch(
    configurations: Dict[str, Dict[str, Any]],
) -> Dict[str, Any]

Configure multiple scanners.

get_scanner_configuration async
get_scanner_configuration(scanner: str) -> Dict[str, Any]

Get scanner configuration.

capture_scan async
capture_scan(
    scanner: str,
    save_range_path: Optional[str] = None,
    save_intensity_path: Optional[str] = None,
    enable_range: bool = True,
    enable_intensity: bool = True,
    enable_confidence: bool = False,
    enable_normal: bool = False,
    enable_color: bool = False,
    timeout_ms: int = 10000,
    output_format: str = "numpy",
) -> Dict[str, Any]

Capture 3D scan data.

capture_scan_batch async
capture_scan_batch(
    captures: List[Dict[str, Any]], output_format: str = "numpy"
) -> Dict[str, Any]

Capture scans from multiple scanners.

capture_point_cloud async
capture_point_cloud(
    scanner: str,
    save_path: Optional[str] = None,
    include_colors: bool = True,
    include_confidence: bool = False,
    downsample_factor: int = 1,
    output_format: str = "numpy",
) -> Dict[str, Any]

Capture and generate 3D point cloud.

capture_point_cloud_batch async
capture_point_cloud_batch(
    captures: List[Dict[str, Any]], output_format: str = "numpy"
) -> Dict[str, Any]

Capture point clouds from multiple scanners.

get_system_diagnostics async
get_system_diagnostics() -> Dict[str, Any]

Get system diagnostics.

health_check async
health_check() -> Dict[str, Any]

Check service health.

Scanner3DService
Scanner3DService(**kwargs)

Bases: Service

3D Scanner Management Service.

Provides comprehensive REST API and MCP tools for managing 3D scanners with multi-component capture capabilities (range, intensity, confidence, normals, color, point clouds).

Supported Operations:

  - Backend discovery and information
  - Scanner lifecycle management (open, close, status)
  - Multi-component capture (range, intensity, confidence, normals, color)
  - Point cloud generation with optional color and confidence
  - Scanner configuration (exposure, trigger mode)
  - Batch operations for multiple scanners
  - System diagnostics and monitoring
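Point cloud generation from a range image can be sketched with a pinhole back-projection, roughly what a point-cloud capture with `downsample_factor` does. The intrinsics (`fx`, `fy`, `cx`, `cy`) and the tiny depth map below are made-up values, not taken from any real scanner profile:

```python
# Sketch: back-projecting a range image to XYZ points. Intrinsics and depth
# values are invented; real scanners supply calibrated parameters.


def range_to_points(range_map, fx=500.0, fy=500.0, cx=1.0, cy=1.0,
                    downsample_factor=1):
    points = []
    for v in range(0, len(range_map), downsample_factor):
        for u in range(0, len(range_map[0]), downsample_factor):
            z = range_map[v][u]
            if z <= 0:  # skip invalid / missing range samples
                continue
            points.append(((u - cx) * z / fx, (v - cy) * z / fy, z))
    return points


depth = [[0.0, 1.0, 1.0],
         [1.0, 2.0, 1.0],
         [1.0, 1.0, 0.0]]
cloud = range_to_points(depth)
print(len(cloud))  # → 7  (two samples are invalid)
```

Raising `downsample_factor` trades density for speed by skipping rows and columns, which is why batch captures over several scanners often use it.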

Initialize Scanner3DService.

Parameters:

Name Type Description Default
**kwargs

Additional arguments passed to Service base class

{}
health_check async
health_check() -> HealthCheckResponse

Health check endpoint.

get_backends async
get_backends() -> BackendsResponse

Get available scanner backends.

get_backend_info async
get_backend_info() -> BackendInfoResponse

Get detailed backend information.

discover_scanners async
discover_scanners(request: BackendFilterRequest) -> ListResponse

Discover available 3D scanners.

open_scanner async
open_scanner(request: ScannerOpenRequest) -> BoolResponse

Open a 3D scanner connection.

open_scanners_batch async
open_scanners_batch(request: ScannerOpenBatchRequest) -> BatchOperationResponse

Open multiple scanners.

close_scanner async
close_scanner(request: ScannerCloseRequest) -> BoolResponse

Close a 3D scanner connection.

close_scanners_batch async
close_scanners_batch(
    request: ScannerCloseBatchRequest,
) -> BatchOperationResponse

Close multiple scanners.

close_all_scanners async
close_all_scanners() -> BatchOperationResponse

Close all active scanners.

get_active_scanners async
get_active_scanners() -> ActiveScannersResponse

Get list of active scanners.

get_scanner_status async
get_scanner_status(request: ScannerQueryRequest) -> ScannerStatusResponse

Get scanner status.

get_scanner_info async
get_scanner_info(request: ScannerQueryRequest) -> ScannerInfoResponse

Get scanner information.

get_system_diagnostics async
get_system_diagnostics() -> SystemDiagnosticsResponse

Get system diagnostics.

get_scanner_capabilities async
get_scanner_capabilities(
    request: ScannerQueryRequest,
) -> ScannerCapabilitiesResponse

Get scanner capabilities and available settings.

configure_scanner async
configure_scanner(request: ScannerConfigureRequest) -> BoolResponse

Configure scanner parameters.

configure_scanners_batch async
configure_scanners_batch(
    request: ScannerConfigureBatchRequest,
) -> BatchOperationResponse

Configure multiple scanners.

get_scanner_configuration async
get_scanner_configuration(
    request: ScannerQueryRequest,
) -> ScannerConfigurationResponse

Get scanner configuration.

capture_scan async
capture_scan(request: ScanCaptureRequest) -> ScanCaptureResponse

Capture 3D scan data.

capture_scan_batch async
capture_scan_batch(
    request: ScanCaptureBatchRequest,
) -> ScanCaptureBatchResponse

Capture scans from multiple scanners.

capture_point_cloud async
capture_point_cloud(request: PointCloudCaptureRequest) -> PointCloudResponse

Capture and generate 3D point cloud.

capture_point_cloud_batch async
capture_point_cloud_batch(
    request: PointCloudCaptureBatchRequest,
) -> PointCloudBatchResponse

Capture point clouds from multiple scanners.

connection_manager

Connection Manager for Scanner3DService.

Provides a strongly-typed client interface for programmatic access to 3D scanner management operations.

Scanner3DConnectionManager
Scanner3DConnectionManager(
    url: Url | None = None,
    server_id: UUID | None = None,
    server_pid_file: str | None = None,
)

Bases: ConnectionManager

Connection Manager for Scanner3DService.

Provides strongly-typed methods for all 3D scanner management operations, making it easy to use the service programmatically from other applications.
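As a usage sketch of the open/capture/close lifecycle: the stub class below stands in for a live Scanner3DConnectionManager (only the method names and signatures follow the reference above; the return payload and the scanner name "PhoXi-001" are illustrative assumptions):

```python
import asyncio
from typing import Any, Dict


class _StubScannerManager:
    """Stand-in for Scanner3DConnectionManager; replace with the real client."""

    async def open_scanner(self, scanner: str, test_connection: bool = True) -> bool:
        return True

    async def capture_scan(self, scanner: str, **options: Any) -> Dict[str, Any]:
        # Illustrative payload only; the real response shape is service-defined.
        return {"success": True, "scanner": scanner, "components": ["range", "intensity"]}

    async def close_scanner(self, scanner: str) -> bool:
        return True


async def capture_with_cleanup(manager, scanner: str, **options: Any) -> Dict[str, Any]:
    """Open a scanner, capture once, and always close it afterwards."""
    if not await manager.open_scanner(scanner):
        raise RuntimeError(f"could not open {scanner!r}")
    try:
        return await manager.capture_scan(scanner, **options)
    finally:
        await manager.close_scanner(scanner)


result = asyncio.run(capture_with_cleanup(_StubScannerManager(), "PhoXi-001"))
```

The try/finally guarantees close_scanner runs even when capture_scan raises, which matters when the service tracks active scanners.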

get async
get(endpoint: str, http_timeout: float = 60.0) -> Dict[str, Any]

Make GET request to service endpoint.

post async
post(
    endpoint: str, data: Dict[str, Any] = None, http_timeout: float = 60.0
) -> Dict[str, Any]

Make POST request to service endpoint.

discover_backends async
discover_backends() -> List[str]

Discover available 3D scanner backends.

get_backend_info async
get_backend_info() -> Dict[str, Any]

Get detailed information about all backends.

discover_scanners async
discover_scanners(backend: Optional[str] = None) -> List[str]

Discover available 3D scanners.

open_scanner async
open_scanner(scanner: str, test_connection: bool = True) -> bool

Open a 3D scanner.

open_scanners_batch async
open_scanners_batch(
    scanners: List[str], test_connection: bool = True
) -> Dict[str, Any]

Open multiple 3D scanners.

close_scanner async
close_scanner(scanner: str) -> bool

Close a 3D scanner.

close_scanners_batch async
close_scanners_batch(scanners: List[str]) -> Dict[str, Any]

Close multiple 3D scanners.

close_all_scanners async
close_all_scanners() -> bool

Close all active 3D scanners.

get_active_scanners async
get_active_scanners() -> List[str]

Get list of active scanners.

get_scanner_status async
get_scanner_status(scanner: str) -> Dict[str, Any]

Get scanner status.

get_scanner_info async
get_scanner_info(scanner: str) -> Dict[str, Any]

Get scanner information.

configure_scanner async
configure_scanner(scanner: str, properties: Dict[str, Any]) -> bool

Configure scanner parameters.

configure_scanners_batch async
configure_scanners_batch(
    configurations: Dict[str, Dict[str, Any]],
) -> Dict[str, Any]

Configure multiple scanners.

get_scanner_configuration async
get_scanner_configuration(scanner: str) -> Dict[str, Any]

Get scanner configuration.

capture_scan async
capture_scan(
    scanner: str,
    save_range_path: Optional[str] = None,
    save_intensity_path: Optional[str] = None,
    enable_range: bool = True,
    enable_intensity: bool = True,
    enable_confidence: bool = False,
    enable_normal: bool = False,
    enable_color: bool = False,
    timeout_ms: int = 10000,
    output_format: str = "numpy",
) -> Dict[str, Any]

Capture 3D scan data.

capture_scan_batch async
capture_scan_batch(
    captures: List[Dict[str, Any]], output_format: str = "numpy"
) -> Dict[str, Any]

Capture scans from multiple scanners.
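The per-entry keys of the `captures` list are not spelled out in this reference; assuming each dict mirrors capture_scan's keyword parameters, a batch payload might be assembled like this (a sketch, not the service's canonical schema):

```python
from typing import Any, Dict, List


def build_capture(scanner: str, **overrides: Any) -> Dict[str, Any]:
    """One entry for capture_scan_batch; defaults mirror capture_scan's defaults."""
    entry: Dict[str, Any] = {
        "scanner": scanner,
        "enable_range": True,
        "enable_intensity": True,
        "enable_confidence": False,
        "timeout_ms": 10000,
    }
    entry.update(overrides)
    return entry


captures: List[Dict[str, Any]] = [
    build_capture("PhoXi-001"),
    build_capture("PhoXi-002", enable_confidence=True, timeout_ms=20000),
]
# captures would then be passed to capture_scan_batch(captures, output_format="numpy")
```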

capture_point_cloud async
capture_point_cloud(
    scanner: str,
    save_path: Optional[str] = None,
    include_colors: bool = True,
    include_confidence: bool = False,
    downsample_factor: int = 1,
    output_format: str = "numpy",
) -> Dict[str, Any]

Capture and generate 3D point cloud.

capture_point_cloud_batch async
capture_point_cloud_batch(
    captures: List[Dict[str, Any]], output_format: str = "numpy"
) -> Dict[str, Any]

Capture point clouds from multiple scanners.

get_system_diagnostics async
get_system_diagnostics() -> Dict[str, Any]

Get system diagnostics.

health_check async
health_check() -> Dict[str, Any]

Check service health.

launcher

3D Scanner API service launcher.

main
main()

Main launcher function.

models

Models for Scanner3DService.

BackendFilterRequest

Bases: BaseModel

Request model for backend filtering.

PointCloudCaptureBatchRequest

Bases: BaseModel

Request model for batch point cloud capture.

PointCloudCaptureRequest

Bases: BaseModel

Request model for point cloud capture.

ScanCaptureBatchRequest

Bases: BaseModel

Request model for batch scan capture.

ScanCaptureRequest

Bases: BaseModel

Request model for 3D scan capture.

ScannerCloseBatchRequest

Bases: BaseModel

Request model for batch scanner closing.

ScannerCloseRequest

Bases: BaseModel

Request model for closing a 3D scanner.

ScannerConfigureBatchRequest

Bases: BaseModel

Request model for batch scanner configuration.

ScannerConfigureRequest

Bases: BaseModel

Request model for scanner configuration.

All fields are optional; only provided fields will be applied.
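Because only provided fields are applied, a client can send a sparse configuration update. A minimal sketch of that filtering (illustrative only, not the service's implementation; the field names come from the "exposure, trigger mode" settings mentioned above):

```python
from typing import Any, Dict, Optional


def applied_settings(requested: Dict[str, Optional[Any]]) -> Dict[str, Any]:
    """Keep only the fields the caller actually provided (non-None)."""
    return {key: value for key, value in requested.items() if value is not None}


# Only exposure is updated; trigger_mode is left untouched on the scanner.
update = applied_settings({"exposure": 15.0, "trigger_mode": None})
```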

ScannerOpenBatchRequest

Bases: BaseModel

Request model for batch scanner opening.

ScannerOpenRequest

Bases: BaseModel

Request model for opening a 3D scanner.

ScannerQueryRequest

Bases: BaseModel

Request model for scanner queries.

ActiveScannersResponse

Bases: BaseResponse

Response model for listing active scanners.

BackendInfo

Bases: BaseModel

3D scanner backend information model.

BackendInfoResponse

Bases: BaseResponse

Response model for detailed backend information.

BackendsResponse

Bases: BaseResponse

Response model for backend listing.

BaseResponse

Bases: BaseModel

Base response model for all API endpoints.

BatchOperationResponse

Bases: BaseResponse

Response model for batch operations.

BatchOperationResult

Bases: BaseModel

Individual batch operation result.

BoolResponse

Bases: BaseResponse

Response model for boolean operations.

DictResponse

Bases: BaseResponse

Response model for dictionary data.

HealthCheckResponse

Bases: BaseModel

Health check response model.

ListResponse

Bases: BaseResponse

Response model for list data.

PointCloudBatchResponse

Bases: BaseResponse

Response model for batch point cloud capture.

PointCloudBatchResult

Bases: BaseModel

Batch point cloud capture result model.

PointCloudResponse

Bases: BaseResponse

Response model for point cloud capture.

PointCloudResult

Bases: BaseModel

Point cloud capture result model.

ScanCaptureBatchResponse

Bases: BaseResponse

Response model for batch scan capture.

ScanCaptureBatchResult

Bases: BaseModel

Batch scan capture result model.

ScanCaptureResponse

Bases: BaseResponse

Response model for scan capture.

ScanCaptureResult

Bases: BaseModel

Scan capture result model.

ScannerCapabilities

Bases: BaseModel

Scanner capabilities model.

ScannerCapabilitiesResponse

Bases: BaseResponse

Response model for scanner capabilities.

ScannerConfiguration

Bases: BaseModel

3D scanner configuration model.

ScannerConfigurationResponse

Bases: BaseResponse

Response model for scanner configuration.

ScannerInfo

Bases: BaseModel

3D scanner information model.

ScannerInfoResponse

Bases: BaseResponse

Response model for scanner information.

ScannerStatus

Bases: BaseModel

3D scanner status model.

ScannerStatusResponse

Bases: BaseResponse

Response model for scanner status.

StringResponse

Bases: BaseResponse

Response model for string values.

SystemDiagnostics

Bases: BaseModel

System diagnostics model.

SystemDiagnosticsResponse

Bases: BaseResponse

Response model for system diagnostics.

requests

Request models for Scanner3DService.

Contains all Pydantic models for API requests, ensuring proper input validation and documentation for all 3D scanner operations.

BackendFilterRequest

Bases: BaseModel

Request model for backend filtering.

ScannerOpenRequest

Bases: BaseModel

Request model for opening a 3D scanner.

ScannerOpenBatchRequest

Bases: BaseModel

Request model for batch scanner opening.

ScannerCloseRequest

Bases: BaseModel

Request model for closing a 3D scanner.

ScannerCloseBatchRequest

Bases: BaseModel

Request model for batch scanner closing.

ScannerQueryRequest

Bases: BaseModel

Request model for scanner queries.

ScannerConfigureRequest

Bases: BaseModel

Request model for scanner configuration.

All fields are optional; only provided fields will be applied.

ScannerConfigureBatchRequest

Bases: BaseModel

Request model for batch scanner configuration.

ScanCaptureRequest

Bases: BaseModel

Request model for 3D scan capture.

ScanCaptureBatchRequest

Bases: BaseModel

Request model for batch scan capture.

PointCloudCaptureRequest

Bases: BaseModel

Request model for point cloud capture.

PointCloudCaptureBatchRequest

Bases: BaseModel

Request model for batch point cloud capture.

responses

Response models for Scanner3DService.

Contains all Pydantic models for API responses, ensuring consistent response formatting across all 3D scanner management endpoints.

BaseResponse

Bases: BaseModel

Base response model for all API endpoints.

BoolResponse

Bases: BaseResponse

Response model for boolean operations.

StringResponse

Bases: BaseResponse

Response model for string values.

ListResponse

Bases: BaseResponse

Response model for list data.

DictResponse

Bases: BaseResponse

Response model for dictionary data.

BackendInfo

Bases: BaseModel

3D scanner backend information model.

BackendsResponse

Bases: BaseResponse

Response model for backend listing.

BackendInfoResponse

Bases: BaseResponse

Response model for detailed backend information.

ScannerStatus

Bases: BaseModel

3D scanner status model.

ScannerStatusResponse

Bases: BaseResponse

Response model for scanner status.

ScannerInfo

Bases: BaseModel

3D scanner information model.

ScannerInfoResponse

Bases: BaseResponse

Response model for scanner information.

ScannerConfiguration

Bases: BaseModel

3D scanner configuration model.

ScannerConfigurationResponse

Bases: BaseResponse

Response model for scanner configuration.

ScannerCapabilities

Bases: BaseModel

Scanner capabilities model.

ScannerCapabilitiesResponse

Bases: BaseResponse

Response model for scanner capabilities.

ScanCaptureResult

Bases: BaseModel

Scan capture result model.

ScanCaptureResponse

Bases: BaseResponse

Response model for scan capture.

ScanCaptureBatchResult

Bases: BaseModel

Batch scan capture result model.

ScanCaptureBatchResponse

Bases: BaseResponse

Response model for batch scan capture.

PointCloudResult

Bases: BaseModel

Point cloud capture result model.

PointCloudResponse

Bases: BaseResponse

Response model for point cloud capture.

PointCloudBatchResult

Bases: BaseModel

Batch point cloud capture result model.

PointCloudBatchResponse

Bases: BaseResponse

Response model for batch point cloud capture.

BatchOperationResult

Bases: BaseModel

Individual batch operation result.

BatchOperationResponse

Bases: BaseResponse

Response model for batch operations.

ActiveScannersResponse

Bases: BaseResponse

Response model for listing active scanners.

SystemDiagnostics

Bases: BaseModel

System diagnostics model.

SystemDiagnosticsResponse

Bases: BaseResponse

Response model for system diagnostics.

HealthCheckResponse

Bases: BaseModel

Health check response model.

schemas

MCP TaskSchemas for Scanner3DService.

capture_schemas

Scanner Capture TaskSchemas.

config_schemas

Scanner Configuration TaskSchemas.

health_schemas

Health Check TaskSchemas.

info_schemas

Scanner Information TaskSchemas.

lifecycle_schemas

Scanner Lifecycle TaskSchemas.

service

Scanner3DService - Service-based API for 3D scanner management.

This service provides a comprehensive REST API and MCP tools for managing 3D scanners (Photoneo PhoXi, etc.) with multi-component capture capabilities.

Scanner3DService
Scanner3DService(**kwargs)

Bases: Service

3D Scanner Management Service.

Provides a comprehensive REST API and MCP tools for managing 3D scanners with multi-component capture capabilities (range, intensity, confidence, normals, color, point clouds).

Supported Operations:

- Backend discovery and information
- Scanner lifecycle management (open, close, status)
- Multi-component capture (range, intensity, confidence, normals, color)
- Point cloud generation with optional color and confidence
- Scanner configuration (exposure, trigger mode)
- Batch operations for multiple scanners
- System diagnostics and monitoring

Initialize Scanner3DService.

Parameters:

Name Type Description Default
**kwargs

Additional arguments passed to Service base class

{}
health_check async
health_check() -> HealthCheckResponse

Health check endpoint.

get_backends async
get_backends() -> BackendsResponse

Get available scanner backends.

get_backend_info async
get_backend_info() -> BackendInfoResponse

Get detailed backend information.

discover_scanners async
discover_scanners(request: BackendFilterRequest) -> ListResponse

Discover available 3D scanners.

open_scanner async
open_scanner(request: ScannerOpenRequest) -> BoolResponse

Open a 3D scanner connection.

open_scanners_batch async
open_scanners_batch(request: ScannerOpenBatchRequest) -> BatchOperationResponse

Open multiple scanners.

close_scanner async
close_scanner(request: ScannerCloseRequest) -> BoolResponse

Close a 3D scanner connection.

close_scanners_batch async
close_scanners_batch(
    request: ScannerCloseBatchRequest,
) -> BatchOperationResponse

Close multiple scanners.

close_all_scanners async
close_all_scanners() -> BatchOperationResponse

Close all active scanners.

get_active_scanners async
get_active_scanners() -> ActiveScannersResponse

Get list of active scanners.

get_scanner_status async
get_scanner_status(request: ScannerQueryRequest) -> ScannerStatusResponse

Get scanner status.

get_scanner_info async
get_scanner_info(request: ScannerQueryRequest) -> ScannerInfoResponse

Get scanner information.

get_system_diagnostics async
get_system_diagnostics() -> SystemDiagnosticsResponse

Get system diagnostics.

get_scanner_capabilities async
get_scanner_capabilities(
    request: ScannerQueryRequest,
) -> ScannerCapabilitiesResponse

Get scanner capabilities and available settings.

configure_scanner async
configure_scanner(request: ScannerConfigureRequest) -> BoolResponse

Configure scanner parameters.

configure_scanners_batch async
configure_scanners_batch(
    request: ScannerConfigureBatchRequest,
) -> BatchOperationResponse

Configure multiple scanners.

get_scanner_configuration async
get_scanner_configuration(
    request: ScannerQueryRequest,
) -> ScannerConfigurationResponse

Get scanner configuration.

capture_scan async
capture_scan(request: ScanCaptureRequest) -> ScanCaptureResponse

Capture 3D scan data.

capture_scan_batch async
capture_scan_batch(
    request: ScanCaptureBatchRequest,
) -> ScanCaptureBatchResponse

Capture scans from multiple scanners.

capture_point_cloud async
capture_point_cloud(request: PointCloudCaptureRequest) -> PointCloudResponse

Capture and generate 3D point cloud.

capture_point_cloud_batch async
capture_point_cloud_batch(
    request: PointCloudCaptureBatchRequest,
) -> PointCloudBatchResponse

Capture point clouds from multiple scanners.

sensors

Sensor API module providing service and connection management.

SensorConnectionManager
SensorConnectionManager(
    url: Url | None = None,
    server_id: UUID | None = None,
    server_pid_file: str | None = None,
)

Bases: ConnectionManager

Strongly-typed connection manager for sensor service operations.

connect_sensor async
connect_sensor(
    sensor_id: str, backend_type: str, config: Dict[str, Any], address: str
) -> SensorConnectionResponse

Connect to a sensor with specified configuration.

Parameters:

Name Type Description Default
sensor_id str

Unique identifier for the sensor

required
backend_type str

Backend type (mqtt, http, serial)

required
config Dict[str, Any]

Backend-specific configuration

required
address str

Sensor address (topic, endpoint, or port)

required

Returns:

Type Description
SensorConnectionResponse

Response indicating success/failure of connection

disconnect_sensor async
disconnect_sensor(sensor_id: str) -> SensorConnectionResponse

Disconnect from a connected sensor.

Parameters:

Name Type Description Default
sensor_id str

Unique identifier for the sensor to disconnect

required

Returns:

Type Description
SensorConnectionResponse

Response indicating success/failure of disconnection

read_sensor_data async
read_sensor_data(
    sensor_id: str, timeout: Optional[float] = None
) -> SensorDataResponse

Read data from a connected sensor.

Parameters:

Name Type Description Default
sensor_id str

Unique identifier for the sensor

required
timeout Optional[float]

Optional read timeout in seconds

None

Returns:

Type Description
SensorDataResponse

Response containing sensor data or error information

get_sensor_status async
get_sensor_status(sensor_id: str) -> SensorStatusResponse

Get status information for a sensor.

Parameters:

Name Type Description Default
sensor_id str

Unique identifier for the sensor

required

Returns:

Type Description
SensorStatusResponse

Response containing sensor status information

list_sensors async
list_sensors(include_status: bool = False) -> SensorListResponse

List all registered sensors.

Parameters:

Name Type Description Default
include_status bool

Whether to include connection status for each sensor

False

Returns:

Type Description
SensorListResponse

Response containing list of sensors

connect_mqtt_sensor async
connect_mqtt_sensor(
    sensor_id: str, broker_url: str, identifier: str, address: str
) -> SensorConnectionResponse

Connect to an MQTT sensor with simplified parameters.

Parameters:

Name Type Description Default
sensor_id str

Unique identifier for the sensor

required
broker_url str

MQTT broker URL (e.g., "mqtt://localhost:1883")

required
identifier str

Client identifier for MQTT connection

required
address str

MQTT topic to subscribe to

required

Returns:

Type Description
SensorConnectionResponse

Response indicating success/failure of connection

connect_http_sensor async
connect_http_sensor(
    sensor_id: str,
    base_url: str,
    address: str,
    headers: Optional[Dict[str, str]] = None,
) -> SensorConnectionResponse

Connect to an HTTP sensor with simplified parameters.

Parameters:

Name Type Description Default
sensor_id str

Unique identifier for the sensor

required
base_url str

Base URL for HTTP requests

required
address str

Endpoint path for sensor data

required
headers Optional[Dict[str, str]]

Optional HTTP headers

None

Returns:

Type Description
SensorConnectionResponse

Response indicating success/failure of connection

connect_serial_sensor async
connect_serial_sensor(
    sensor_id: str, port: str, baudrate: int = 9600, timeout: float = 1.0
) -> SensorConnectionResponse

Connect to a serial sensor with simplified parameters.

Parameters:

Name Type Description Default
sensor_id str

Unique identifier for the sensor

required
port str

Serial port (e.g., "/dev/ttyUSB0" or "COM1")

required
baudrate int

Serial communication baud rate

9600
timeout float

Serial read timeout

1.0

Returns:

Type Description
SensorConnectionResponse

Response indicating success/failure of connection

SensorManagerService
SensorManagerService(manager: Optional[SensorManager] = None, **kwargs)

Bases: Service

Service wrapper for SensorManager with MCP endpoint registration.

Initialize the sensor manager service.

Parameters:

Name Type Description Default
manager Optional[SensorManager]

Optional SensorManager instance. If None, creates a new one.

None
**kwargs

Additional arguments passed to the Service base class

{}
manager property
manager: SensorManager

Get the underlying SensorManager instance.

connect_sensor async
connect_sensor(request: SensorConnectionRequest) -> SensorConnectionResponse

Connect to a sensor with specified configuration.

Parameters:

Name Type Description Default
request SensorConnectionRequest

Connection request with sensor configuration

required

Returns:

Type Description
SensorConnectionResponse

Response indicating success/failure of connection

disconnect_sensor async
disconnect_sensor(request: SensorStatusRequest) -> SensorConnectionResponse

Disconnect from a connected sensor.

Parameters:

Name Type Description Default
request SensorStatusRequest

Request containing sensor_id to disconnect

required

Returns:

Type Description
SensorConnectionResponse

Response indicating success/failure of disconnection

read_sensor_data async
read_sensor_data(request: SensorDataRequest) -> SensorDataResponse

Read data from a connected sensor.

Parameters:

Name Type Description Default
request SensorDataRequest

Request specifying sensor and read parameters

required

Returns:

Type Description
SensorDataResponse

Response containing sensor data or error information

get_sensor_status async
get_sensor_status(request: SensorStatusRequest) -> SensorStatusResponse

Get status information for a sensor.

Parameters:

Name Type Description Default
request SensorStatusRequest

Request containing sensor_id

required

Returns:

Type Description
SensorStatusResponse

Response containing sensor status information

list_sensors async
list_sensors(request: SensorListRequest) -> SensorListResponse

List all registered sensors.

Parameters:

Name Type Description Default
request SensorListRequest

Request with listing options

required

Returns:

Type Description
SensorListResponse

Response containing list of sensors

shutdown_cleanup async
shutdown_cleanup()

Clean up sensors on shutdown.

health_check
health_check() -> HealthCheckResponse

Health check endpoint for container healthcheck.
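Since health_check is synchronous and aimed at container healthchecks, a probe can treat any exception or non-healthy status as failure. A minimal sketch — note the dict access and the status values here are assumptions; the real HealthCheckResponse is a Pydantic model, so attribute access or model_dump() would be used instead:

```python
from typing import Any, Callable, Dict


def is_healthy(check: Callable[[], Dict[str, Any]]) -> bool:
    """Return True only if the check runs without error and reports a healthy status."""
    try:
        response = check()
    except Exception:
        return False
    # "ok"/"healthy" status values are assumptions, not the documented schema.
    return str(response.get("status", "")).lower() in {"ok", "healthy"}


def _failing_check() -> Dict[str, Any]:
    raise RuntimeError("service down")


# With a live service this would wrap service.health_check(); stubbed here:
healthy = is_healthy(lambda: {"status": "healthy"})
unhealthy = is_healthy(_failing_check)
```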

connection_manager

Connection manager for typed sensor service client access.

SensorConnectionManager
SensorConnectionManager(
    url: Url | None = None,
    server_id: UUID | None = None,
    server_pid_file: str | None = None,
)

Bases: ConnectionManager

Strongly-typed connection manager for sensor service operations.

connect_sensor async
connect_sensor(
    sensor_id: str, backend_type: str, config: Dict[str, Any], address: str
) -> SensorConnectionResponse

Connect to a sensor with specified configuration.

Parameters:

Name Type Description Default
sensor_id str

Unique identifier for the sensor

required
backend_type str

Backend type (mqtt, http, serial)

required
config Dict[str, Any]

Backend-specific configuration

required
address str

Sensor address (topic, endpoint, or port)

required

Returns:

Type Description
SensorConnectionResponse

Response indicating success/failure of connection

disconnect_sensor async
disconnect_sensor(sensor_id: str) -> SensorConnectionResponse

Disconnect from a connected sensor.

Parameters:

Name Type Description Default
sensor_id str

Unique identifier for the sensor to disconnect

required

Returns:

Type Description
SensorConnectionResponse

Response indicating success/failure of disconnection

read_sensor_data async
read_sensor_data(
    sensor_id: str, timeout: Optional[float] = None
) -> SensorDataResponse

Read data from a connected sensor.

Parameters:

Name Type Description Default
sensor_id str

Unique identifier for the sensor

required
timeout Optional[float]

Optional read timeout in seconds

None

Returns:

Type Description
SensorDataResponse

Response containing sensor data or error information

get_sensor_status async
get_sensor_status(sensor_id: str) -> SensorStatusResponse

Get status information for a sensor.

Parameters:

Name Type Description Default
sensor_id str

Unique identifier for the sensor

required

Returns:

Type Description
SensorStatusResponse

Response containing sensor status information

list_sensors async
list_sensors(include_status: bool = False) -> SensorListResponse

List all registered sensors.

Parameters:

Name Type Description Default
include_status bool

Whether to include connection status for each sensor

False

Returns:

Type Description
SensorListResponse

Response containing list of sensors

connect_mqtt_sensor async
connect_mqtt_sensor(
    sensor_id: str, broker_url: str, identifier: str, address: str
) -> SensorConnectionResponse

Connect to an MQTT sensor with simplified parameters.

Parameters:

Name Type Description Default
sensor_id str

Unique identifier for the sensor

required
broker_url str

MQTT broker URL (e.g., "mqtt://localhost:1883")

required
identifier str

Client identifier for MQTT connection

required
address str

MQTT topic to subscribe to

required

Returns:

Type Description
SensorConnectionResponse

Response indicating success/failure of connection

connect_http_sensor async
connect_http_sensor(
    sensor_id: str,
    base_url: str,
    address: str,
    headers: Optional[Dict[str, str]] = None,
) -> SensorConnectionResponse

Connect to an HTTP sensor with simplified parameters.

Parameters:

Name Type Description Default
sensor_id str

Unique identifier for the sensor

required
base_url str

Base URL for HTTP requests

required
address str

Endpoint path for sensor data

required
headers Optional[Dict[str, str]]

Optional HTTP headers

None

Returns:

Type Description
SensorConnectionResponse

Response indicating success/failure of connection

connect_serial_sensor async
connect_serial_sensor(
    sensor_id: str, port: str, baudrate: int = 9600, timeout: float = 1.0
) -> SensorConnectionResponse

Connect to a serial sensor with simplified parameters.

Parameters:

Name Type Description Default
sensor_id str

Unique identifier for the sensor

required
port str

Serial port (e.g., "/dev/ttyUSB0" or "COM1")

required
baudrate int

Serial communication baud rate

9600
timeout float

Serial read timeout

1.0

Returns:

Type Description
SensorConnectionResponse

Response indicating success/failure of connection

launcher

Sensor API service launcher.

main
main()

Main launcher function.

models

Sensor API models for request/response data structures.

SensorConnectionRequest

Bases: BaseModel

Request to connect to a sensor.

SensorDataRequest

Bases: BaseModel

Request to read data from a connected sensor.

SensorListRequest

Bases: BaseModel

Request to list all sensors.

SensorStatusRequest

Bases: BaseModel

Request to get status of a sensor.

HealthCheckResponse

Bases: BaseModel

Health check response model.

SensorConnectionResponse

Bases: BaseModel

Response from sensor connection operation.

SensorConnectionStatus

Bases: str, Enum

Status of sensor connection.

SensorDataResponse

Bases: BaseModel

Response containing sensor data.

SensorInfo

Bases: BaseModel

Information about a sensor.

SensorListResponse

Bases: BaseModel

Response containing list of sensors.

SensorStatusResponse

Bases: BaseModel

Response containing sensor status information.

requests

Request models for sensor operations.

SensorConnectionRequest

Bases: BaseModel

Request to connect to a sensor.

SensorDataRequest

Bases: BaseModel

Request to read data from a connected sensor.

SensorStatusRequest

Bases: BaseModel

Request to get status of a sensor.

SensorListRequest

Bases: BaseModel

Request to list all sensors.

responses

Response models for sensor operations.

SensorConnectionStatus

Bases: str, Enum

Status of sensor connection.

SensorInfo

Bases: BaseModel

Information about a sensor.

SensorConnectionResponse

Bases: BaseModel

Response from sensor connection operation.

SensorDataResponse

Bases: BaseModel

Response containing sensor data.

SensorStatusResponse

Bases: BaseModel

Response containing sensor status information.

SensorListResponse

Bases: BaseModel

Response containing list of sensors.

HealthCheckResponse

Bases: BaseModel

Health check response model.

schemas

Sensor task schemas for service operations.

SensorDataSchemas

Task schemas for sensor data access.

SensorLifecycleSchemas

Task schemas for sensor lifecycle management.

data

Task schemas for sensor data operations.

SensorDataSchemas

Task schemas for sensor data access.

health

Health check TaskSchema.

lifecycle

Task schemas for sensor lifecycle operations.

SensorLifecycleSchemas

Task schemas for sensor lifecycle management.

service

Sensor Manager Service providing MCP endpoints for sensor operations.

SensorManagerService
SensorManagerService(manager: Optional[SensorManager] = None, **kwargs)

Bases: Service

Service wrapper for SensorManager with MCP endpoint registration.

Initialize the sensor manager service.

Parameters:

Name Type Description Default
manager Optional[SensorManager]

Optional SensorManager instance. If None, creates a new one.

None
**kwargs

Additional arguments passed to the Service base class

{}
manager property
manager: SensorManager

Get the underlying SensorManager instance.

connect_sensor async
connect_sensor(request: SensorConnectionRequest) -> SensorConnectionResponse

Connect to a sensor with specified configuration.

Parameters:

Name Type Description Default
request SensorConnectionRequest

Connection request with sensor configuration

required

Returns:

Type Description
SensorConnectionResponse

Response indicating success/failure of connection

disconnect_sensor async
disconnect_sensor(request: SensorStatusRequest) -> SensorConnectionResponse

Disconnect from a connected sensor.

Parameters:

Name Type Description Default
request SensorStatusRequest

Request containing sensor_id to disconnect

required

Returns:

Type Description
SensorConnectionResponse

Response indicating success/failure of disconnection

read_sensor_data async
read_sensor_data(request: SensorDataRequest) -> SensorDataResponse

Read data from a connected sensor.

Parameters:

Name Type Description Default
request SensorDataRequest

Request specifying sensor and read parameters

required

Returns:

Type Description
SensorDataResponse

Response containing sensor data or error information

get_sensor_status async
get_sensor_status(request: SensorStatusRequest) -> SensorStatusResponse

Get status information for a sensor.

Parameters:

Name Type Description Default
request SensorStatusRequest

Request containing sensor_id

required

Returns:

Type Description
SensorStatusResponse

Response containing sensor status information

list_sensors async
list_sensors(request: SensorListRequest) -> SensorListResponse

List all registered sensors.

Parameters:

Name Type Description Default
request SensorListRequest

Request with listing options

required

Returns:

Type Description
SensorListResponse

Response containing list of sensors

shutdown_cleanup async
shutdown_cleanup()

Clean up sensors on shutdown.

health_check
health_check() -> HealthCheckResponse

Health check endpoint for container healthcheck.

stereo_cameras

StereoCameraService - Service-based stereo camera management API.

StereoCameraConnectionManager
StereoCameraConnectionManager(
    url: Url | None = None,
    server_id: UUID | None = None,
    server_pid_file: str | None = None,
)

Bases: ConnectionManager

Connection Manager for StereoCameraService.

Provides strongly-typed methods for all stereo camera management operations, making it easy to use the service programmatically from other applications.
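The pattern the connection manager follows can be sketched as a thin typed client: each high-level method wraps the generic post() call with a fixed endpoint and unpacks the response. The transport below is a stub that echoes the routed call; the real manager performs HTTP requests against the running service, and the endpoint names here are assumptions.

```python
# Sketch of the strongly-typed client pattern, with a stubbed transport.
import asyncio
from typing import Any, Dict, List, Optional


class StubStereoClient:
    async def post(self, endpoint: str, data: Optional[Dict[str, Any]] = None) -> Dict[str, Any]:
        # Stub transport: echo the routed call instead of hitting a server.
        return {"endpoint": endpoint, "data": data or {}, "success": True}

    async def open_camera(self, camera: str, test_connection: bool = True) -> bool:
        resp = await self.post("open_camera", {"camera": camera, "test_connection": test_connection})
        return bool(resp["success"])

    async def discover_cameras(self, backend: Optional[str] = None) -> List[str]:
        resp = await self.post("discover_cameras", {"backend": backend})
        # The real service would return discovered camera names here.
        return resp["data"].get("cameras", [])


async def main() -> None:
    client = StubStereoClient()
    assert await client.open_camera("stereo0")
    print(await client.discover_cameras())


asyncio.run(main())
```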

get async
get(endpoint: str, http_timeout: float = 60.0) -> Dict[str, Any]

Make GET request to service endpoint.

post async
post(
    endpoint: str, data: Dict[str, Any] = None, http_timeout: float = 60.0
) -> Dict[str, Any]

Make POST request to service endpoint.

discover_backends async
discover_backends() -> List[str]

Discover available stereo camera backends.

get_backend_info async
get_backend_info() -> Dict[str, Any]

Get detailed information about all backends.

discover_cameras async
discover_cameras(backend: Optional[str] = None) -> List[str]

Discover available stereo cameras.

open_camera async
open_camera(camera: str, test_connection: bool = True) -> bool

Open a stereo camera.

open_cameras_batch async
open_cameras_batch(
    cameras: List[str], test_connection: bool = True
) -> Dict[str, Any]

Open multiple stereo cameras.

close_camera async
close_camera(camera: str) -> bool

Close a stereo camera.

close_cameras_batch async
close_cameras_batch(cameras: List[str]) -> Dict[str, Any]

Close multiple stereo cameras.

close_all_cameras async
close_all_cameras() -> bool

Close all active stereo cameras.

get_active_cameras async
get_active_cameras() -> List[str]

Get list of currently active stereo cameras.

get_camera_status async
get_camera_status(camera: str) -> Dict[str, Any]

Get stereo camera status.

get_camera_info async
get_camera_info(camera: str) -> Dict[str, Any]

Get detailed stereo camera information.

get_system_diagnostics async
get_system_diagnostics() -> Dict[str, Any]

Get system diagnostics.

configure_camera async
configure_camera(camera: str, properties: Dict[str, Any]) -> bool

Configure stereo camera parameters.

configure_cameras_batch async
configure_cameras_batch(
    configurations: Dict[str, Dict[str, Any]],
) -> Dict[str, Any]

Configure multiple stereo cameras.

get_camera_configuration async
get_camera_configuration(camera: str) -> Dict[str, Any]

Get current stereo camera configuration.

capture_stereo_pair async
capture_stereo_pair(
    camera: str,
    save_intensity_path: Optional[str] = None,
    save_disparity_path: Optional[str] = None,
    enable_intensity: bool = True,
    enable_disparity: bool = True,
    calibrate_disparity: bool = True,
    timeout_ms: int = 20000,
    output_format: str = "pil",
) -> Dict[str, Any]

Capture stereo data (intensity + disparity).

capture_stereo_batch async
capture_stereo_batch(
    captures: List[Dict[str, Any]], output_format: str = "pil"
) -> Dict[str, Any]

Capture stereo data from multiple cameras.

capture_point_cloud async
capture_point_cloud(
    camera: str,
    save_path: Optional[str] = None,
    include_colors: bool = True,
    downsample_factor: int = 1,
    output_format: str = "numpy",
) -> Dict[str, Any]

Capture and generate 3D point cloud.

capture_point_cloud_batch async
capture_point_cloud_batch(
    captures: List[Dict[str, Any]], output_format: str = "numpy"
) -> Dict[str, Any]

Capture point clouds from multiple cameras.
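The batch capture methods take a list of per-camera dictionaries. As a sketch, each entry plausibly mirrors the single-camera capture_point_cloud parameters above; the exact keys the service accepts are assumptions here.

```python
# Hypothetical per-camera entry builder for a batch point cloud capture.
# Keys mirror the single-camera capture_point_cloud parameters (an assumption).
from typing import Any, Dict, Optional


def make_capture_entry(
    camera: str,
    save_path: Optional[str] = None,
    include_colors: bool = True,
    downsample_factor: int = 1,
) -> Dict[str, Any]:
    if downsample_factor < 1:
        raise ValueError("downsample_factor must be >= 1")
    return {
        "camera": camera,
        "save_path": save_path,
        "include_colors": include_colors,
        "downsample_factor": downsample_factor,
    }


captures = [make_capture_entry("stereo0"), make_capture_entry("stereo1", downsample_factor=2)]
print(len(captures))
```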

StereoCameraService
StereoCameraService(**kwargs)

Bases: Service

Stereo Camera Management Service.

Provides comprehensive REST API and MCP tools for managing stereo cameras with multi-component capture capabilities (intensity, disparity, point clouds).

Supported Operations:

- Backend discovery and information
- Camera lifecycle management (open, close, status)
- Multi-component capture (intensity + disparity)
- Point cloud generation with optional color
- Camera configuration (depth range, illumination, binning, quality, exposure, gain)
- Batch operations for multiple cameras
- System diagnostics and monitoring

Initialize StereoCameraService.

Parameters:

Name Type Description Default
**kwargs

Additional arguments passed to Service base class

{}
health_check
health_check() -> HealthCheckResponse

Service health check for container healthcheck.

shutdown_cleanup async
shutdown_cleanup()

Clean up cameras on shutdown.

get_backends
get_backends() -> BackendsResponse

Get list of available stereo camera backends.

get_backend_info
get_backend_info() -> BackendInfoResponse

Get detailed information about stereo camera backends.

discover_cameras
discover_cameras(request: BackendFilterRequest) -> ListResponse

Discover available stereo cameras.

open_camera async
open_camera(request: StereoCameraOpenRequest) -> BoolResponse

Open a stereo camera connection.

open_cameras_batch async
open_cameras_batch(
    request: StereoCameraOpenBatchRequest,
) -> BatchOperationResponse

Open multiple stereo cameras.

close_camera async
close_camera(request: StereoCameraCloseRequest) -> BoolResponse

Close a stereo camera connection.

close_cameras_batch async
close_cameras_batch(
    request: StereoCameraCloseBatchRequest,
) -> BatchOperationResponse

Close multiple stereo cameras.

close_all_cameras async
close_all_cameras() -> BoolResponse

Close all active stereo cameras.

get_active_cameras
get_active_cameras() -> ActiveStereoCamerasResponse

Get list of active stereo cameras.

get_camera_status async
get_camera_status(
    request: StereoCameraQueryRequest,
) -> StereoCameraStatusResponse

Get stereo camera status.

get_camera_info async
get_camera_info(request: StereoCameraQueryRequest) -> StereoCameraInfoResponse

Get detailed stereo camera information.

get_calibration
get_calibration(camera_name: str)

Get full calibration data including Q matrix for 2D-to-3D projection.
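The Q matrix returned here maps a pixel and its disparity to a 3D point via the homogeneous product [X, Y, Z, W] = Q @ [x, y, d, 1]. A minimal sketch, using an illustrative Q with identical principal points for both sensors (sign conventions vary between SDKs, so treat the exact layout as an assumption):

```python
# Sketch of 2D-to-3D reprojection with a stereo Q matrix.
# This illustrative Q assumes identical principal points and +1/B in the last row.
from typing import List, Tuple


def reproject(q: List[List[float]], x: float, y: float, d: float) -> Tuple[float, float, float]:
    """Map pixel (x, y) with disparity d to 3D via [X, Y, Z, W] = Q @ [x, y, d, 1]."""
    vec = (x, y, d, 1.0)
    X, Y, Z, W = (sum(q[r][c] * vec[c] for c in range(4)) for r in range(4))
    return (X / W, Y / W, Z / W)


f, B, cx, cy = 1000.0, 0.1, 320.0, 240.0  # focal length (px), baseline (m), principal point
Q = [
    [1.0, 0.0, 0.0, -cx],
    [0.0, 1.0, 0.0, -cy],
    [0.0, 0.0, 0.0, f],
    [0.0, 0.0, 1.0 / B, 0.0],
]
point = reproject(Q, 320.0, 240.0, 50.0)
print(point)  # depth Z = f * B / d = 2.0 m at the principal point
```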

get_system_diagnostics
get_system_diagnostics() -> SystemDiagnosticsResponse

Get system diagnostics and statistics.

configure_camera async
configure_camera(request: StereoCameraConfigureRequest) -> BoolResponse

Configure stereo camera parameters.

configure_cameras_batch async
configure_cameras_batch(
    request: StereoCameraConfigureBatchRequest,
) -> BatchOperationResponse

Configure multiple stereo cameras.

get_camera_configuration async
get_camera_configuration(
    request: StereoCameraQueryRequest,
) -> StereoCameraConfigurationResponse

Get current stereo camera configuration.

capture_stereo_pair async
capture_stereo_pair(request: StereoCaptureRequest) -> StereoCaptureResponse

Capture stereo data (intensity + disparity).

capture_stereo_batch async
capture_stereo_batch(
    request: StereoCaptureBatchRequest,
) -> StereoCaptureBatchResponse

Capture stereo data from multiple cameras.

capture_point_cloud async
capture_point_cloud(request: PointCloudCaptureRequest) -> PointCloudResponse

Capture and generate 3D point cloud.

capture_point_cloud_batch async
capture_point_cloud_batch(
    request: PointCloudCaptureBatchRequest,
) -> PointCloudBatchResponse

Capture point clouds from multiple cameras.

start_stream
start_stream(camera: str, quality: int = 85, fps: int = 10)

Start stereo camera stream.

stop_stream
stop_stream(camera: str)

Stop stereo camera stream.

get_active_streams
get_active_streams()

Get list of active streams.

serve_stereo_stream async
serve_stereo_stream(camera_name: str)

Serve MJPEG video stream for stereo camera (intensity image).

connection_manager

Connection Manager for StereoCameraService.

Provides a strongly-typed client interface for programmatic access to stereo camera management operations.

StereoCameraConnectionManager
StereoCameraConnectionManager(
    url: Url | None = None,
    server_id: UUID | None = None,
    server_pid_file: str | None = None,
)

Bases: ConnectionManager

Connection Manager for StereoCameraService.

Provides strongly-typed methods for all stereo camera management operations, making it easy to use the service programmatically from other applications.

get async
get(endpoint: str, http_timeout: float = 60.0) -> Dict[str, Any]

Make GET request to service endpoint.

post async
post(
    endpoint: str, data: Dict[str, Any] = None, http_timeout: float = 60.0
) -> Dict[str, Any]

Make POST request to service endpoint.

discover_backends async
discover_backends() -> List[str]

Discover available stereo camera backends.

get_backend_info async
get_backend_info() -> Dict[str, Any]

Get detailed information about all backends.

discover_cameras async
discover_cameras(backend: Optional[str] = None) -> List[str]

Discover available stereo cameras.

open_camera async
open_camera(camera: str, test_connection: bool = True) -> bool

Open a stereo camera.

open_cameras_batch async
open_cameras_batch(
    cameras: List[str], test_connection: bool = True
) -> Dict[str, Any]

Open multiple stereo cameras.

close_camera async
close_camera(camera: str) -> bool

Close a stereo camera.

close_cameras_batch async
close_cameras_batch(cameras: List[str]) -> Dict[str, Any]

Close multiple stereo cameras.

close_all_cameras async
close_all_cameras() -> bool

Close all active stereo cameras.

get_active_cameras async
get_active_cameras() -> List[str]

Get list of currently active stereo cameras.

get_camera_status async
get_camera_status(camera: str) -> Dict[str, Any]

Get stereo camera status.

get_camera_info async
get_camera_info(camera: str) -> Dict[str, Any]

Get detailed stereo camera information.

get_system_diagnostics async
get_system_diagnostics() -> Dict[str, Any]

Get system diagnostics.

configure_camera async
configure_camera(camera: str, properties: Dict[str, Any]) -> bool

Configure stereo camera parameters.

configure_cameras_batch async
configure_cameras_batch(
    configurations: Dict[str, Dict[str, Any]],
) -> Dict[str, Any]

Configure multiple stereo cameras.

get_camera_configuration async
get_camera_configuration(camera: str) -> Dict[str, Any]

Get current stereo camera configuration.

capture_stereo_pair async
capture_stereo_pair(
    camera: str,
    save_intensity_path: Optional[str] = None,
    save_disparity_path: Optional[str] = None,
    enable_intensity: bool = True,
    enable_disparity: bool = True,
    calibrate_disparity: bool = True,
    timeout_ms: int = 20000,
    output_format: str = "pil",
) -> Dict[str, Any]

Capture stereo data (intensity + disparity).

capture_stereo_batch async
capture_stereo_batch(
    captures: List[Dict[str, Any]], output_format: str = "pil"
) -> Dict[str, Any]

Capture stereo data from multiple cameras.

capture_point_cloud async
capture_point_cloud(
    camera: str,
    save_path: Optional[str] = None,
    include_colors: bool = True,
    downsample_factor: int = 1,
    output_format: str = "numpy",
) -> Dict[str, Any]

Capture and generate 3D point cloud.

capture_point_cloud_batch async
capture_point_cloud_batch(
    captures: List[Dict[str, Any]], output_format: str = "numpy"
) -> Dict[str, Any]

Capture point clouds from multiple cameras.

launcher

Stereo Camera API service launcher.

main
main()

Main launcher function.

models

Models for StereoCameraService API.

BackendFilterRequest

Bases: BaseModel

Request model for backend filtering.

PointCloudCaptureBatchRequest

Bases: BaseModel

Request model for batch point cloud capture.

PointCloudCaptureRequest

Bases: BaseModel

Request model for point cloud capture.

StereoCameraCloseBatchRequest

Bases: BaseModel

Request model for batch stereo camera closing.

StereoCameraCloseRequest

Bases: BaseModel

Request model for closing a stereo camera.

StereoCameraConfigureBatchRequest

Bases: BaseModel

Request model for batch stereo camera configuration.

StereoCameraConfigureRequest

Bases: BaseModel

Request model for stereo camera configuration.

StereoCameraOpenBatchRequest

Bases: BaseModel

Request model for batch stereo camera opening.

StereoCameraOpenRequest

Bases: BaseModel

Request model for opening a stereo camera.

StereoCameraQueryRequest

Bases: BaseModel

Request model for stereo camera queries.

StereoCaptureBatchRequest

Bases: BaseModel

Request model for batch stereo capture.

StereoCaptureRequest

Bases: BaseModel

Request model for stereo capture.

ActiveStereoCamerasResponse

Bases: BaseResponse

Response model for listing active stereo cameras.

BackendInfo

Bases: BaseModel

Stereo camera backend information model.

BackendInfoResponse

Bases: BaseResponse

Response model for detailed backend information.

BackendsResponse

Bases: BaseResponse

Response model for backend listing.

BaseResponse

Bases: BaseModel

Base response model for all API endpoints.

BatchOperationResponse

Bases: BaseResponse

Response model for batch operations.

BatchOperationResult

Bases: BaseModel

Individual batch operation result.

BoolResponse

Bases: BaseResponse

Response model for boolean operations.

DictResponse

Bases: BaseResponse

Response model for dictionary data.

HealthCheckResponse

Bases: BaseModel

Health check response model.

ListResponse

Bases: BaseResponse

Response model for list data.

PointCloudBatchResponse

Bases: BaseResponse

Response model for batch point cloud capture.

PointCloudBatchResult

Bases: BaseModel

Batch point cloud capture result model.

PointCloudResponse

Bases: BaseResponse

Response model for point cloud capture.

PointCloudResult

Bases: BaseModel

Point cloud capture result model.

StereoCameraConfiguration

Bases: BaseModel

Stereo camera configuration model.

StereoCameraConfigurationResponse

Bases: BaseResponse

Response model for stereo camera configuration.

StereoCameraInfo

Bases: BaseModel

Stereo camera information model.

StereoCameraInfoResponse

Bases: BaseResponse

Response model for stereo camera information.

StereoCameraStatus

Bases: BaseModel

Stereo camera status model.

StereoCameraStatusResponse

Bases: BaseResponse

Response model for stereo camera status.

StereoCaptureBatchResponse

Bases: BaseResponse

Response model for batch stereo capture.

StereoCaptureBatchResult

Bases: BaseModel

Batch stereo capture result model.

StereoCaptureResponse

Bases: BaseResponse

Response model for stereo capture.

StereoCaptureResult

Bases: BaseModel

Stereo capture result model.

StringResponse

Bases: BaseResponse

Response model for string values.

SystemDiagnostics

Bases: BaseModel

System diagnostics model.

SystemDiagnosticsResponse

Bases: BaseResponse

Response model for system diagnostics.

requests

Request models for StereoCameraService.

Contains all Pydantic models for API requests, ensuring proper input validation and documentation for all stereo camera operations.

BackendFilterRequest

Bases: BaseModel

Request model for backend filtering.

StereoCameraOpenRequest

Bases: BaseModel

Request model for opening a stereo camera.

StereoCameraOpenBatchRequest

Bases: BaseModel

Request model for batch stereo camera opening.

StereoCameraCloseRequest

Bases: BaseModel

Request model for closing a stereo camera.

StereoCameraCloseBatchRequest

Bases: BaseModel

Request model for batch stereo camera closing.

StereoCameraQueryRequest

Bases: BaseModel

Request model for stereo camera queries.

StereoCameraConfigureRequest

Bases: BaseModel

Request model for stereo camera configuration.

StereoCameraConfigureBatchRequest

Bases: BaseModel

Request model for batch stereo camera configuration.

StereoCaptureRequest

Bases: BaseModel

Request model for stereo capture.

StereoCaptureBatchRequest

Bases: BaseModel

Request model for batch stereo capture.

PointCloudCaptureRequest

Bases: BaseModel

Request model for point cloud capture.

PointCloudCaptureBatchRequest

Bases: BaseModel

Request model for batch point cloud capture.

responses

Response models for StereoCameraService.

Contains all Pydantic models for API responses, ensuring consistent response formatting across all stereo camera management endpoints.

BaseResponse

Bases: BaseModel

Base response model for all API endpoints.

BoolResponse

Bases: BaseResponse

Response model for boolean operations.

StringResponse

Bases: BaseResponse

Response model for string values.

ListResponse

Bases: BaseResponse

Response model for list data.

DictResponse

Bases: BaseResponse

Response model for dictionary data.

BackendInfo

Bases: BaseModel

Stereo camera backend information model.

BackendsResponse

Bases: BaseResponse

Response model for backend listing.

BackendInfoResponse

Bases: BaseResponse

Response model for detailed backend information.

StereoCameraStatus

Bases: BaseModel

Stereo camera status model.

StereoCameraStatusResponse

Bases: BaseResponse

Response model for stereo camera status.

StereoCameraInfo

Bases: BaseModel

Stereo camera information model.

StereoCameraInfoResponse

Bases: BaseResponse

Response model for stereo camera information.

StereoCameraConfiguration

Bases: BaseModel

Stereo camera configuration model.

StereoCameraConfigurationResponse

Bases: BaseResponse

Response model for stereo camera configuration.

StereoCaptureResult

Bases: BaseModel

Stereo capture result model.

StereoCaptureResponse

Bases: BaseResponse

Response model for stereo capture.

StereoCaptureBatchResult

Bases: BaseModel

Batch stereo capture result model.

StereoCaptureBatchResponse

Bases: BaseResponse

Response model for batch stereo capture.

PointCloudResult

Bases: BaseModel

Point cloud capture result model.

PointCloudResponse

Bases: BaseResponse

Response model for point cloud capture.

PointCloudBatchResult

Bases: BaseModel

Batch point cloud capture result model.

PointCloudBatchResponse

Bases: BaseResponse

Response model for batch point cloud capture.

BatchOperationResult

Bases: BaseModel

Individual batch operation result.

BatchOperationResponse

Bases: BaseResponse

Response model for batch operations.

ActiveStereoCamerasResponse

Bases: BaseResponse

Response model for listing active stereo cameras.

SystemDiagnostics

Bases: BaseModel

System diagnostics model.

SystemDiagnosticsResponse

Bases: BaseResponse

Response model for system diagnostics.

HealthCheckResponse

Bases: BaseModel

Health check response model.

schemas

MCP TaskSchemas for StereoCameraService.

capture_schemas

Stereo Camera Capture TaskSchemas.

config_schemas

Stereo Camera Configuration TaskSchemas.

health_schemas

Health check TaskSchema.

info_schemas

Stereo Camera Information TaskSchemas.

lifecycle_schemas

Stereo Camera Lifecycle TaskSchemas.

service

StereoCameraService - Service-based API for stereo camera management.

This service provides comprehensive REST API and MCP tools for managing Basler Stereo ace cameras with multi-component capture (intensity, disparity, depth).

StereoCameraService
StereoCameraService(**kwargs)

Bases: Service

Stereo Camera Management Service.

Provides comprehensive REST API and MCP tools for managing stereo cameras with multi-component capture capabilities (intensity, disparity, point clouds).

Supported Operations:

- Backend discovery and information
- Camera lifecycle management (open, close, status)
- Multi-component capture (intensity + disparity)
- Point cloud generation with optional color
- Camera configuration (depth range, illumination, binning, quality, exposure, gain)
- Batch operations for multiple cameras
- System diagnostics and monitoring

Initialize StereoCameraService.

Parameters:

Name Type Description Default
**kwargs

Additional arguments passed to Service base class

{}
health_check
health_check() -> HealthCheckResponse

Service health check for container healthcheck.

shutdown_cleanup async
shutdown_cleanup()

Clean up cameras on shutdown.

get_backends
get_backends() -> BackendsResponse

Get list of available stereo camera backends.

get_backend_info
get_backend_info() -> BackendInfoResponse

Get detailed information about stereo camera backends.

discover_cameras
discover_cameras(request: BackendFilterRequest) -> ListResponse

Discover available stereo cameras.

open_camera async
open_camera(request: StereoCameraOpenRequest) -> BoolResponse

Open a stereo camera connection.

open_cameras_batch async
open_cameras_batch(
    request: StereoCameraOpenBatchRequest,
) -> BatchOperationResponse

Open multiple stereo cameras.

close_camera async
close_camera(request: StereoCameraCloseRequest) -> BoolResponse

Close a stereo camera connection.

close_cameras_batch async
close_cameras_batch(
    request: StereoCameraCloseBatchRequest,
) -> BatchOperationResponse

Close multiple stereo cameras.

close_all_cameras async
close_all_cameras() -> BoolResponse

Close all active stereo cameras.

get_active_cameras
get_active_cameras() -> ActiveStereoCamerasResponse

Get list of active stereo cameras.

get_camera_status async
get_camera_status(
    request: StereoCameraQueryRequest,
) -> StereoCameraStatusResponse

Get stereo camera status.

get_camera_info async
get_camera_info(request: StereoCameraQueryRequest) -> StereoCameraInfoResponse

Get detailed stereo camera information.

get_calibration
get_calibration(camera_name: str)

Get full calibration data including Q matrix for 2D-to-3D projection.

get_system_diagnostics
get_system_diagnostics() -> SystemDiagnosticsResponse

Get system diagnostics and statistics.

configure_camera async
configure_camera(request: StereoCameraConfigureRequest) -> BoolResponse

Configure stereo camera parameters.

configure_cameras_batch async
configure_cameras_batch(
    request: StereoCameraConfigureBatchRequest,
) -> BatchOperationResponse

Configure multiple stereo cameras.

get_camera_configuration async
get_camera_configuration(
    request: StereoCameraQueryRequest,
) -> StereoCameraConfigurationResponse

Get current stereo camera configuration.

capture_stereo_pair async
capture_stereo_pair(request: StereoCaptureRequest) -> StereoCaptureResponse

Capture stereo data (intensity + disparity).

capture_stereo_batch async
capture_stereo_batch(
    request: StereoCaptureBatchRequest,
) -> StereoCaptureBatchResponse

Capture stereo data from multiple cameras.

capture_point_cloud async
capture_point_cloud(request: PointCloudCaptureRequest) -> PointCloudResponse

Capture and generate 3D point cloud.

capture_point_cloud_batch async
capture_point_cloud_batch(
    request: PointCloudCaptureBatchRequest,
) -> PointCloudBatchResponse

Capture point clouds from multiple cameras.

start_stream
start_stream(camera: str, quality: int = 85, fps: int = 10)

Start stereo camera stream.

stop_stream
stop_stream(camera: str)

Stop stereo camera stream.

get_active_streams
get_active_streams()

Get list of active streams.

serve_stereo_stream async
serve_stereo_stream(camera_name: str)

Serve MJPEG video stream for stereo camera (intensity image).

stereo_cameras

Stereo camera support for MindTrace hardware system.

This module provides backends and utilities for stereo camera systems that output multi-component data (intensity, disparity, depth, point clouds).

Available components
  • backends: Stereo camera backend implementations
  • core: Core stereo camera management classes
  • setup: Installation scripts for stereo camera SDKs
Quick Start

from mindtrace.hardware.stereo_cameras import StereoCamera

Open first available stereo camera

camera = StereoCamera()

Capture multi-component data

result = camera.capture()
print(f"Intensity: {result.intensity.shape}")
print(f"Disparity: {result.disparity.shape}")

Generate point cloud

point_cloud = camera.capture_point_cloud()
point_cloud.save_ply("output.ply")

camera.close()

BaslerStereoAceBackend
BaslerStereoAceBackend(
    serial_number: Optional[str] = None, op_timeout_s: float = 30.0
)

Bases: StereoCameraBackend

Backend for Basler Stereo ace cameras using pypylon.

The Stereo ace camera is accessed through a unified device interface (DeviceClass: BaslerGTC/Basler/basler_xw) that presents the stereo pair as a single camera with multi-component output.

Extends StereoCameraBackend to provide consistent interface across different stereo camera manufacturers.

Initialize Basler Stereo ace backend.

Parameters:

Name Type Description Default
serial_number Optional[str]

Serial number or user-defined name of specific camera. If all digits, treated as serial number. Otherwise, treated as user-defined name. If None, opens first available Stereo ace camera.

None
op_timeout_s float

Timeout in seconds for SDK operations (default 30s).

30.0

Raises:

Type Description
SDKNotAvailableError

If pypylon is not available
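The constructor's identifier rule — an all-digit string is treated as a serial number, anything else as a user-defined name, and None opens the first available camera — can be sketched as:

```python
# Sketch of the serial-number vs user-defined-name rule described above.
from typing import Optional


def classify_identifier(identifier: Optional[str]) -> str:
    if identifier is None:
        return "first_available"
    return "serial_number" if identifier.isdigit() else "user_defined_name"


print(classify_identifier("40123456"), classify_identifier("line3-left"), classify_identifier(None))
```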

name property
name: str

Get camera name.

is_open property
is_open: bool

Check if camera is open.

calibration property
calibration: Optional[StereoCalibrationData]

Get calibration data.

discover staticmethod
discover() -> List[str]

Discover available Stereo ace cameras.

Returns:

Type Description
List[str]

List of serial numbers for available Stereo ace cameras

Raises:

Type Description
SDKNotAvailableError

If pypylon is not available

discover_async async classmethod
discover_async() -> List[str]

Async wrapper for discover() - runs discovery in threadpool.

Use this instead of discover() when calling from async context to avoid blocking the event loop during camera discovery.
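The pattern behind discover_async — offloading a blocking SDK call to a threadpool so the event loop stays responsive — can be sketched with asyncio.to_thread; discover_blocking below stands in for the real pypylon enumeration.

```python
# Sketch of the blocking-call-in-threadpool pattern used by discover_async.
import asyncio
import time


def discover_blocking() -> list:
    time.sleep(0.01)  # simulate a slow SDK device enumeration
    return ["40123456", "40123457"]


async def discover_async() -> list:
    # Offload to a worker thread; the event loop can service other tasks meanwhile.
    return await asyncio.to_thread(discover_blocking)


print(asyncio.run(discover_async()))
```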

Returns:

Type Description
List[str]

List of serial numbers for available Stereo ace cameras

discover_detailed staticmethod
discover_detailed() -> List[Dict[str, str]]

Discover Stereo ace cameras with detailed information.

Returns:

Type Description
List[Dict[str, str]]

List of dictionaries containing camera information

Raises:

Type Description
SDKNotAvailableError

If pypylon is not available

discover_detailed_async async classmethod
discover_detailed_async() -> List[Dict[str, str]]

Async wrapper for discover_detailed() - runs discovery in threadpool.

Returns:

Type Description
List[Dict[str, str]]

List of dictionaries containing camera information

initialize async
initialize() -> bool

Initialize camera connection.

Returns:

Type Description
bool

True if initialization successful, False otherwise

get_calibration async
get_calibration() -> StereoCalibrationData

Get factory calibration parameters from camera.

Returns:

Type Description
StereoCalibrationData

StereoCalibrationData with factory calibration

Raises:

Type Description
CameraConnectionError

If camera not opened

CameraConfigurationError

If calibration cannot be read

capture async
capture(
    timeout_ms: int = 20000,
    enable_intensity: bool = True,
    enable_disparity: bool = True,
    calibrate_disparity: bool = True,
) -> StereoGrabResult

Capture stereo data with multiple components.

Parameters:

Name Type Description Default
timeout_ms int

Capture timeout in milliseconds

20000
enable_intensity bool

Whether to capture intensity data

True
enable_disparity bool

Whether to capture disparity data

True
calibrate_disparity bool

Whether to apply calibration to disparity

True

Returns:

Type Description
StereoGrabResult

StereoGrabResult containing captured data

Raises:

Type Description
CameraConnectionError

If camera not opened

CameraCaptureError

If capture fails

configure async
configure(**params) -> None

Configure camera parameters.

Parameters:

Name Type Description Default
**params

Parameter name-value pairs to configure. Special parameters:

- trigger_mode: "continuous" or "trigger"
- depth_range: tuple of (min_depth, max_depth)
- illumination_mode: "AlwaysActive" or "AlternateActive"
- binning: tuple of (horizontal, vertical)
- depth_quality: "Full", "High", "Normal", or "Low"

All other parameters are passed directly to the camera.

{}

Raises:

Type Description
CameraConnectionError

If camera not opened

CameraConfigurationError

If configuration fails
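The dispatch configure() describes — a handful of well-known keys routed to dedicated handlers, everything else passed straight to the camera — can be sketched as a parameter splitter (the handler set follows the docstring; the function itself is illustrative, not the backend's private API):

```python
# Sketch of configure()'s special-parameter dispatch.
from typing import Any, Dict, Tuple


def split_configure_params(params: Dict[str, Any]) -> Tuple[Dict[str, Any], Dict[str, Any]]:
    special = {"trigger_mode", "depth_range", "illumination_mode", "binning", "depth_quality"}
    handled = {k: v for k, v in params.items() if k in special}      # routed to helpers
    passthrough = {k: v for k, v in params.items() if k not in special}  # set on the camera directly
    return handled, passthrough


handled, passthrough = split_configure_params(
    {"depth_range": (0.3, 5.0), "binning": (2, 2), "ExposureTime": 5000}
)
print(sorted(handled), sorted(passthrough))
```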

set_depth_range async
set_depth_range(min_depth: float, max_depth: float) -> None

Set depth measurement range.

Parameters:

Name Type Description Default
min_depth float

Minimum depth in meters (e.g., 0.3)

required
max_depth float

Maximum depth in meters (e.g., 5.0)

required

Raises:

Type Description
CameraConfigurationError

If configuration fails

set_illumination_mode async
set_illumination_mode(mode: str) -> None

Set illumination mode.

Parameters:

Name Type Description Default
mode str

'AlwaysActive' (low latency) or 'AlternateActive' (clean intensity)

required

Raises:

Type Description
CameraConfigurationError

If configuration fails

set_binning async
set_binning(horizontal: int = 2, vertical: int = 2) -> None

Enable binning for latency reduction.

Parameters:

Name Type Description Default
horizontal int

Horizontal binning factor (typically 2)

2
vertical int

Vertical binning factor (typically 2)

2
Note

When using binning for low latency, consider also setting depth quality to "Full" using set_depth_quality("Full").

Raises:

Type Description
CameraConfigurationError

If configuration fails
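Binning trades resolution for latency: each output pixel aggregates a horizontal x vertical block of sensor pixels. A sketch of the effective image size after binning:

```python
# Sketch of binning's effect on effective resolution.
from typing import Tuple


def binned_resolution(width: int, height: int, horizontal: int = 2, vertical: int = 2) -> Tuple[int, int]:
    if horizontal < 1 or vertical < 1:
        raise ValueError("binning factors must be >= 1")
    return width // horizontal, height // vertical


print(binned_resolution(1920, 1080))  # (960, 540) with the default 2x2 binning
```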

set_depth_quality async
set_depth_quality(quality: str) -> None

Set depth quality level.

Parameters:

Name Type Description Default
quality str

Depth quality setting. Common values:

- "Full": Highest quality, recommended with binning
- "Normal": Standard quality
- "Low": Lower quality, faster processing

required
Note

Setting quality to "Full" with binning reduces latency while maintaining depth quality. This is recommended for low-latency applications.

Raises:

Type Description
CameraConfigurationError

If configuration fails

Example
Low latency configuration

await camera.set_binning(2, 2)
await camera.set_depth_quality("Full")

set_pixel_format async
set_pixel_format(format: str) -> None

Set pixel format for intensity component.

Parameters:

Name Type Description Default
format str

Pixel format ("RGB8", "Mono8", etc.)

required

Raises:

Type Description
CameraConnectionError

If camera not opened

CameraConfigurationError

If format not available or configuration fails

Example

await camera.set_pixel_format("Mono8") # Force grayscale

set_exposure_time async
set_exposure_time(microseconds: float) -> None

Set exposure time in microseconds.

Parameters:

Name Type Description Default
microseconds float

Exposure time in microseconds (e.g., 5000 = 5ms)

required

Raises:

Type Description
CameraConfigurationError

If configuration fails

Example

await camera.set_exposure_time(5000) # 5ms exposure

set_gain async
set_gain(gain: float) -> None

Set camera gain.

Parameters:

Name Type Description Default
gain float

Gain value (typically 0.0 to 24.0, camera-dependent)

required

Raises:

Type Description
CameraConfigurationError

If configuration fails

Example

await camera.set_gain(2.0)

get_exposure_time async
get_exposure_time() -> float

Get current exposure time in microseconds.

Returns:

Type Description
float

Current exposure time in microseconds

Raises:

Type Description
CameraConnectionError

If camera not opened

Example

exposure = await camera.get_exposure_time()
print(f"Current exposure: {exposure}us")

get_gain async
get_gain() -> float

Get current camera gain.

Returns:

Type Description
float

Current gain value

Raises:

Type Description
CameraConnectionError

If camera not opened

Example

gain = await camera.get_gain()
print(f"Current gain: {gain}")

get_depth_quality async
get_depth_quality() -> str

Get current depth quality setting.

Returns:

Type Description
str

Current depth quality level (e.g., "Full", "Normal", "Low")

Raises:

Type Description
CameraConnectionError

If camera not opened

Example

quality = await camera.get_depth_quality()
print(f"Depth quality: {quality}")

get_pixel_format async
get_pixel_format() -> str

Get current pixel format.

Returns:

Type Description
str

Current pixel format (e.g., "RGB8", "Mono8", "Coord3D_C16")

Raises:

Type Description
CameraConnectionError

If camera not opened

Example

format = await camera.get_pixel_format()
print(f"Pixel format: {format}")

get_binning async
get_binning() -> tuple[int, int]

Get current binning settings.

Returns:

Type Description
tuple[int, int]

Tuple of (horizontal_binning, vertical_binning)

Raises:

Type Description
CameraConnectionError

If camera not opened

Example

h_bin, v_bin = await camera.get_binning()
print(f"Binning: {h_bin}x{v_bin}")

get_illumination_mode async
get_illumination_mode() -> str

Get current illumination mode.

Returns:

Type Description
str

Current illumination mode ("AlwaysActive" or "AlternateActive")

Raises:

Type Description
CameraConnectionError

If camera not opened

Example

mode = await camera.get_illumination_mode()
print(f"Illumination: {mode}")

get_depth_range async
get_depth_range() -> tuple[float, float]

Get current depth measurement range in meters.

Returns:

Type Description
tuple[float, float]

Tuple of (min_depth, max_depth) in meters

Raises:

Type Description
CameraConnectionError

If camera not opened

Example

min_d, max_d = await camera.get_depth_range()
print(f"Depth range: {min_d}m - {max_d}m")

set_trigger_mode async
set_trigger_mode(mode: str) -> None

Set trigger mode (simplified interface).

Parameters:

Name Type Description Default
mode str

Trigger mode ("continuous" or "trigger"):

- "continuous": Free-running continuous acquisition (TriggerMode=Off)
- "trigger": Software-triggered acquisition (TriggerMode=On, TriggerSource=Software)

required

Raises:

Type Description
CameraConnectionError

If camera not opened

CameraConfigurationError

If invalid mode or configuration fails

Examples:

>>> await backend.set_trigger_mode("continuous")  # Free running
>>> await backend.set_trigger_mode("trigger")     # Software triggered
get_trigger_mode async
get_trigger_mode() -> str

Get current trigger mode (simplified interface).

Returns:

Type Description
str

"continuous" if TriggerMode is Off, "trigger" if TriggerMode is On with Software source

Raises:

Type Description
CameraConnectionError

If camera not opened

Examples:

>>> mode = await backend.get_trigger_mode()
>>> print(f"Current mode: {mode}")
get_trigger_modes async
get_trigger_modes() -> List[str]

Get available trigger modes.

Returns:

Type Description
List[str]

List of supported trigger modes: ["continuous", "trigger"]

Note

This provides a simplified interface. The underlying camera supports additional modes (SingleFrame, MultiFrame, hardware triggers) accessible via direct configure() calls if needed.

start_grabbing async
start_grabbing() -> None

Start grabbing frames.

This must be called before execute_trigger() in software trigger mode.

Raises:

Type Description
CameraConnectionError

If camera not opened

execute_trigger async
execute_trigger() -> None

Execute software trigger.

Note: In software trigger mode, ensure start_grabbing() is called first, or call capture() once before the trigger loop to start grabbing.

Raises:

Type Description
CameraConnectionError

If camera not opened

CameraConfigurationError

If trigger execution fails
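The required ordering — start_grabbing() before execute_trigger() — can be sketched with a stand-in backend. The MockBackend below is purely illustrative (it is not part of this module) and only records call order to show why triggering before grabbing fails:

```python
import asyncio

class MockBackend:
    """Illustrative stand-in that records the order of trigger-related calls."""
    def __init__(self):
        self.calls = []
        self.grabbing = False

    async def start_grabbing(self):
        self.grabbing = True
        self.calls.append("start_grabbing")

    async def execute_trigger(self):
        if not self.grabbing:
            raise RuntimeError("start_grabbing() must be called before execute_trigger()")
        self.calls.append("execute_trigger")

async def main():
    backend = MockBackend()
    await backend.start_grabbing()   # must come first in software trigger mode
    for _ in range(3):
        await backend.execute_trigger()
    return backend.calls

calls = asyncio.run(main())
print(calls)
```

Against a real backend the same ordering applies, with capture() retrieving the frame produced by each trigger.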

capture_point_cloud async
capture_point_cloud(
    include_colors: bool = True,
    include_confidence: bool = False,
    downsample_factor: int = 1,
    timeout_ms: int = 20000,
) -> PointCloudData

Capture and generate 3D point cloud.

Parameters:

Name Type Description Default
include_colors bool

Whether to include color information from intensity

True
include_confidence bool

Whether to include confidence values (not supported)

False
downsample_factor int

Downsampling factor (1 = no downsampling)

1
timeout_ms int

Capture timeout in milliseconds

20000

Returns:

Type Description
PointCloudData

PointCloudData with 3D points and optional colors

Raises:

Type Description
CameraConnectionError

If camera not opened

CameraCaptureError

If capture fails

CameraConfigurationError

If calibration not available

close async
close() -> None

Close camera and release resources.

AsyncStereoCamera
AsyncStereoCamera(backend)

Bases: Mindtrace

Async stereo camera interface.

Provides high-level stereo camera operations including multi-component capture and 3D point cloud generation.

Initialize async stereo camera.

Parameters:

Name Type Description Default
backend

Backend instance (e.g., BaslerStereoAceBackend)

required
name property
name: str

Get camera name.

calibration property
calibration: Optional[StereoCalibrationData]

Get calibration data.

is_open property
is_open: bool

Check if camera is open.

open async classmethod
open(name: Optional[str] = None) -> 'AsyncStereoCamera'

Open and initialize a stereo camera.

Parameters:

Name Type Description Default
name Optional[str]

Camera identifier. Format: "BaslerStereoAce:serial_number" If None, opens first available Stereo ace camera.

None

Returns:

Type Description
'AsyncStereoCamera'

Initialized AsyncStereoCamera instance

Raises:

Type Description
CameraNotFoundError

If camera not found

CameraConnectionError

If connection fails

Examples:

>>> camera = await AsyncStereoCamera.open()
>>> camera = await AsyncStereoCamera.open("BaslerStereoAce:40644640")
initialize async
initialize() -> bool

Initialize camera and load calibration.

Returns:

Type Description
bool

True if initialization successful

Note

Usually not needed as open() handles initialization

close async
close() -> None

Close camera and release resources.

capture async
capture(
    enable_intensity: bool = True,
    enable_disparity: bool = True,
    calibrate_disparity: bool = True,
    timeout_ms: int = 20000,
) -> StereoGrabResult

Capture multi-component stereo data.

Parameters:

Name Type Description Default
enable_intensity bool

Whether to capture intensity image

True
enable_disparity bool

Whether to capture disparity map

True
calibrate_disparity bool

Whether to apply calibration to disparity

True
timeout_ms int

Capture timeout in milliseconds

20000

Returns:

Type Description
StereoGrabResult

StereoGrabResult containing captured data

Raises:

Type Description
CameraConnectionError

If camera not opened

CameraCaptureError

If capture fails

Examples:

>>> result = await camera.capture()
>>> print(f"Intensity: {result.intensity.shape}")
>>> print(f"Disparity: {result.disparity.shape}")
capture_point_cloud async
capture_point_cloud(
    include_colors: bool = True, downsample_factor: int = 1
) -> PointCloudData

Capture and generate 3D point cloud.

Parameters:

Name Type Description Default
include_colors bool

Whether to include color information from intensity

True
downsample_factor int

Downsampling factor (1 = no downsampling)

1

Returns:

Type Description
PointCloudData

PointCloudData with 3D points and optional colors

Raises:

Type Description
CameraConnectionError

If camera not opened

CameraCaptureError

If capture fails

CameraConfigurationError

If calibration not available

Examples:

>>> point_cloud = await camera.capture_point_cloud()
>>> point_cloud.save_ply("output.ply")
configure async
configure(**params) -> None

Configure camera parameters.

Parameters:

Name Type Description Default
**params

Parameter name-value pairs

{}

Raises:

Type Description
CameraConnectionError

If camera not opened

CameraConfigurationError

If configuration fails

Examples:

>>> await camera.configure(ExposureTime=15000, Gain=2.0)
set_depth_range async
set_depth_range(min_depth: float, max_depth: float) -> None

Set depth measurement range in meters.

Parameters:

Name Type Description Default
min_depth float

Minimum depth (e.g., 0.3 meters)

required
max_depth float

Maximum depth (e.g., 5.0 meters)

required

Raises:

Type Description
CameraConfigurationError

If configuration fails

Examples:

>>> await camera.set_depth_range(0.5, 3.0)
set_illumination_mode async
set_illumination_mode(mode: str) -> None

Set illumination mode.

Parameters:

Name Type Description Default
mode str

'AlwaysActive' (low latency) or 'AlternateActive' (clean intensity)

required

Raises:

Type Description
CameraConfigurationError

If invalid mode or configuration fails

Examples:

>>> await camera.set_illumination_mode("AlternateActive")
set_binning async
set_binning(horizontal: int = 2, vertical: int = 2) -> None

Enable binning for latency reduction.

Binning reduces network transfer and computation.

Parameters:

Name Type Description Default
horizontal int

Horizontal binning factor (typically 2)

2
vertical int

Vertical binning factor (typically 2)

2
Note

When using binning for low latency, consider also setting depth quality to "Full" using set_depth_quality("Full").

Raises:

Type Description
CameraConfigurationError

If configuration fails

Examples:

>>> await camera.set_binning(2, 2)
>>> await camera.set_depth_quality("Full")  # Recommended for low latency
set_depth_quality async
set_depth_quality(quality: str) -> None

Set depth quality level.

Parameters:

Name Type Description Default
quality str

Depth quality setting. Common values:

- "Full": Highest quality, recommended with binning
- "Normal": Standard quality
- "Low": Lower quality, faster processing

required

Raises:

Type Description
CameraConfigurationError

If configuration fails

Examples:

>>> # Low latency configuration
>>> await camera.set_binning(2, 2)
>>> await camera.set_depth_quality("Full")
set_pixel_format async
set_pixel_format(format: str) -> None

Set pixel format for intensity component.

Parameters:

Name Type Description Default
format str

Pixel format ("RGB8", "Mono8", etc.)

required

Raises:

Type Description
CameraConnectionError

If camera not opened

CameraConfigurationError

If format not available or configuration fails

Examples:

>>> await camera.set_pixel_format("Mono8")  # Force grayscale
set_exposure_time async
set_exposure_time(microseconds: float) -> None

Set exposure time in microseconds.

Parameters:

Name Type Description Default
microseconds float

Exposure time in microseconds (e.g., 5000 = 5ms)

required

Raises:

Type Description
CameraConfigurationError

If configuration fails

Examples:

>>> await camera.set_exposure_time(5000)  # 5ms exposure
set_gain async
set_gain(gain: float) -> None

Set camera gain.

Parameters:

Name Type Description Default
gain float

Gain value (typically 0.0 to 24.0, camera-dependent)

required

Raises:

Type Description
CameraConfigurationError

If configuration fails

Examples:

>>> await camera.set_gain(2.0)
get_exposure_time async
get_exposure_time() -> float

Get current exposure time in microseconds.

Returns:

Type Description
float

Current exposure time in microseconds

Raises:

Type Description
CameraConnectionError

If camera not opened

Examples:

>>> exposure = await camera.get_exposure_time()
>>> print(f"Exposure: {exposure}μs")
get_gain async
get_gain() -> float

Get current camera gain.

Returns:

Type Description
float

Current gain value

Raises:

Type Description
CameraConnectionError

If camera not opened

Examples:

>>> gain = await camera.get_gain()
>>> print(f"Gain: {gain}")
get_depth_quality async
get_depth_quality() -> str

Get current depth quality setting.

Returns:

Type Description
str

Current depth quality level (e.g., "Full", "Normal", "Low")

Raises:

Type Description
CameraConnectionError

If camera not opened

Examples:

>>> quality = await camera.get_depth_quality()
>>> print(f"Quality: {quality}")
get_pixel_format async
get_pixel_format() -> str

Get current pixel format.

Returns:

Type Description
str

Current pixel format (e.g., "RGB8", "Mono8", "Coord3D_C16")

Raises:

Type Description
CameraConnectionError

If camera not opened

Examples:

>>> format = await camera.get_pixel_format()
>>> print(f"Format: {format}")
get_binning async
get_binning() -> tuple[int, int]

Get current binning settings.

Returns:

Type Description
tuple[int, int]

Tuple of (horizontal_binning, vertical_binning)

Raises:

Type Description
CameraConnectionError

If camera not opened

Examples:

>>> h_bin, v_bin = await camera.get_binning()
>>> print(f"Binning: {h_bin}x{v_bin}")
get_illumination_mode async
get_illumination_mode() -> str

Get current illumination mode.

Returns:

Type Description
str

Current illumination mode ("AlwaysActive" or "AlternateActive")

Raises:

Type Description
CameraConnectionError

If camera not opened

Examples:

>>> mode = await camera.get_illumination_mode()
>>> print(f"Illumination: {mode}")
get_depth_range async
get_depth_range() -> tuple[float, float]

Get current depth measurement range in meters.

Returns:

Type Description
tuple[float, float]

Tuple of (min_depth, max_depth) in meters

Raises:

Type Description
CameraConnectionError

If camera not opened

Examples:

>>> min_d, max_d = await camera.get_depth_range()
>>> print(f"Range: {min_d}m - {max_d}m")
set_trigger_mode async
set_trigger_mode(mode: str) -> None

Set trigger mode (simplified interface).

Parameters:

Name Type Description Default
mode str

Trigger mode ("continuous" or "trigger"):

- "continuous": Free-running continuous acquisition
- "trigger": Software-triggered acquisition

required

Raises:

Type Description
CameraConnectionError

If camera not opened

CameraConfigurationError

If invalid mode or configuration fails

Examples:

>>> await camera.set_trigger_mode("continuous")  # Free running
>>> await camera.set_trigger_mode("trigger")     # Software triggered
get_trigger_mode async
get_trigger_mode() -> str

Get current trigger mode (simplified interface).

Returns:

Type Description
str

"continuous" or "trigger"

Raises:

Type Description
CameraConnectionError

If camera not opened

Examples:

>>> mode = await camera.get_trigger_mode()
>>> print(f"Current mode: {mode}")
get_trigger_modes async
get_trigger_modes() -> list[str]

Get available trigger modes.

Returns:

Type Description
list[str]

List of supported trigger modes: ["continuous", "trigger"]

Examples:

>>> modes = await camera.get_trigger_modes()
>>> print(f"Available modes: {modes}")
start_grabbing async
start_grabbing() -> None

Start grabbing frames.

Must be called after enable_software_trigger() and before execute_trigger().

Raises:

Type Description
CameraConnectionError

If camera not opened

Examples:

>>> await camera.enable_software_trigger()
>>> await camera.start_grabbing()
>>> for i in range(10):
...     await camera.execute_trigger()
...     result = await camera.capture()
execute_trigger async
execute_trigger() -> None

Execute software trigger.

Triggers a frame capture when in software trigger mode. Note: after enabling software trigger, start_grabbing() must be called before triggering.

Raises:

Type Description
CameraConnectionError

If camera not opened

CameraConfigurationError

If trigger execution fails

Examples:

>>> await camera.enable_software_trigger()
>>> await camera.start_grabbing()
>>> for i in range(10):
...     await camera.execute_trigger()
...     result = await camera.capture()
PointCloudData dataclass
PointCloudData(
    points: ndarray,
    colors: Optional[ndarray] = None,
    num_points: int = 0,
    has_colors: bool = False,
)

3D point cloud data with optional color information.

Attributes:

Name Type Description
points ndarray

Array of 3D points (N, 3) - (x, y, z) in meters

colors Optional[ndarray]

Optional array of RGB colors (N, 3) - values in [0, 1]

num_points int

Number of valid points

has_colors bool

Flag indicating if color information is present

save_ply
save_ply(path: str, binary: bool = True) -> None

Save point cloud as PLY file.

Parameters:

Name Type Description Default
path str

Output file path

required
binary bool

If True, save in binary format; otherwise ASCII

True

Raises:

Type Description
ImportError

If plyfile is not installed

downsample
downsample(factor: int) -> 'PointCloudData'

Downsample point cloud by given factor.

Parameters:

Name Type Description Default
factor int

Downsampling factor (e.g., 2 = keep every 2nd point)

required

Returns:

Type Description
'PointCloudData'

New PointCloudData with downsampled points
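The "keep every Nth point" semantics described above can be sketched with plain stride slicing on NumPy arrays. This is an illustrative reading of the documented behavior, not the module's actual implementation; downsample_points is a hypothetical helper:

```python
import numpy as np

def downsample_points(points, colors, factor):
    """Keep every `factor`-th point (and its matching color), as described above."""
    kept = points[::factor]
    kept_colors = colors[::factor] if colors is not None else None
    return kept, kept_colors

points = np.arange(30, dtype=np.float64).reshape(10, 3)   # 10 points, (N, 3)
colors = np.linspace(0.0, 1.0, 30).reshape(10, 3)          # RGB in [0, 1]
p2, c2 = downsample_points(points, colors, 2)
print(p2.shape)  # (5, 3)
```

Note that stride slicing preserves point/color correspondence, since both arrays are sliced with the same factor.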

StereoCalibrationData dataclass
StereoCalibrationData(
    baseline: float,
    focal_length: float,
    principal_point_u: float,
    principal_point_v: float,
    scale3d: float,
    offset3d: float,
    Q: ndarray,
)

Factory calibration parameters for stereo camera.

These parameters are provided by the camera manufacturer and used for 3D reconstruction from disparity maps.

Attributes:

Name Type Description
baseline float

Stereo baseline in meters (distance between camera pair)

focal_length float

Focal length in pixels

principal_point_u float

Principal point U coordinate in pixels

principal_point_v float

Principal point V coordinate in pixels

scale3d float

Scale factor for disparity conversion

offset3d float

Offset for disparity conversion

Q ndarray

4x4 reprojection matrix for point cloud generation

from_camera_params classmethod
from_camera_params(params: dict) -> 'StereoCalibrationData'

Create calibration data from camera parameter dictionary.

Parameters:

Name Type Description Default
params dict

Dictionary containing calibration parameters:

- Scan3dBaseline: Baseline in meters
- Scan3dFocalLength: Focal length in pixels
- Scan3dPrincipalPointU: Principal point U in pixels
- Scan3dPrincipalPointV: Principal point V in pixels
- Scan3dCoordinateScale: Scale factor
- Scan3dCoordinateOffset: Offset

required

Returns:

Type Description
'StereoCalibrationData'

StereoCalibrationData instance

calibrate_disparity
calibrate_disparity(disparity: ndarray) -> np.ndarray

Apply calibration to raw disparity map.

Parameters:

Name Type Description Default
disparity ndarray

Raw disparity map (uint16)

required

Returns:

Type Description
ndarray

Calibrated disparity map (float32)
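The calibration fields above feed the standard pinhole stereo relation depth = baseline × focal_length / disparity. A self-contained sketch with made-up calibration values, assuming scale3d/offset3d act as a linear conversion from raw uint16 disparity to calibrated disparity in pixels (one plausible reading of "disparity conversion"):

```python
import numpy as np

baseline = 0.1        # meters (made-up value)
focal_length = 800.0  # pixels (made-up value)
scale3d, offset3d = 0.0625, 0.0  # made-up linear conversion factors

raw = np.array([[1600, 3200]], dtype=np.uint16)            # raw disparity (uint16)
disparity = raw.astype(np.float32) * scale3d + offset3d     # calibrated disparity (pixels)

# Standard pinhole stereo: Z = f * B / d (valid where d > 0)
depth = baseline * focal_length / disparity
print(depth)
```

With these numbers the calibrated disparities are 100 and 200 pixels, giving depths of 0.8 m and 0.4 m — larger disparity means a closer point.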

StereoCamera
StereoCamera(
    async_camera: Optional[AsyncStereoCamera] = None,
    loop: Optional[AbstractEventLoop] = None,
    name: Optional[str] = None,
    **kwargs
)

Bases: Mindtrace

Synchronous wrapper around AsyncStereoCamera.

All operations are executed on a background event loop. This provides a simple synchronous API for stereo camera operations.

Create a synchronous stereo camera wrapper.

Parameters:

Name Type Description Default
async_camera Optional[AsyncStereoCamera]

Existing AsyncStereoCamera instance

None
loop Optional[AbstractEventLoop]

Event loop to use for async operations

None
name Optional[str]

Camera identifier. Format: "BaslerStereoAce:serial_number" If None, opens first available Stereo ace camera.

None
**kwargs

Additional arguments passed to Mindtrace

{}

Examples:

>>> # Simple usage - opens first available
>>> camera = StereoCamera()
>>> # Open specific camera
>>> camera = StereoCamera(name="BaslerStereoAce:40644640")
>>> # Use existing async camera
>>> async_cam = await AsyncStereoCamera.open()
>>> sync_cam = StereoCamera(async_camera=async_cam, loop=loop)
name property
name: str

Get camera name.

Returns:

Type Description
str

Camera name in format "Backend:serial_number"

calibration property
calibration: Optional[StereoCalibrationData]

Get calibration data.

Returns:

Type Description
Optional[StereoCalibrationData]

StereoCalibrationData if available, None otherwise

is_open property
is_open: bool

Check if camera is open.

Returns:

Type Description
bool

True if camera is open, False otherwise

close
close() -> None

Close camera and release resources.

Examples:

>>> camera = StereoCamera()
>>> # ... use camera ...
>>> camera.close()
capture
capture(
    enable_intensity: bool = True,
    enable_disparity: bool = True,
    calibrate_disparity: bool = True,
    timeout_ms: int = 20000,
) -> StereoGrabResult

Capture multi-component stereo data.

Parameters:

Name Type Description Default
enable_intensity bool

Whether to capture intensity image

True
enable_disparity bool

Whether to capture disparity map

True
calibrate_disparity bool

Whether to apply calibration to disparity

True
timeout_ms int

Capture timeout in milliseconds

20000

Returns:

Type Description
StereoGrabResult

StereoGrabResult containing captured data

Raises:

Type Description
CameraConnectionError

If camera not opened

CameraCaptureError

If capture fails

Examples:

>>> camera = StereoCamera()
>>> result = camera.capture()
>>> print(f"Intensity: {result.intensity.shape}")
>>> print(f"Disparity: {result.disparity.shape}")
>>> camera.close()
capture_point_cloud
capture_point_cloud(
    include_colors: bool = True, downsample_factor: int = 1
) -> PointCloudData

Capture and generate 3D point cloud.

Parameters:

Name Type Description Default
include_colors bool

Whether to include color information from intensity

True
downsample_factor int

Downsampling factor (1 = no downsampling)

1

Returns:

Type Description
PointCloudData

PointCloudData with 3D points and optional colors

Raises:

Type Description
CameraConnectionError

If camera not opened

CameraCaptureError

If capture fails

CameraConfigurationError

If calibration not available

Examples:

>>> camera = StereoCamera()
>>> point_cloud = camera.capture_point_cloud()
>>> print(f"Points: {point_cloud.num_points}")
>>> point_cloud.save_ply("output.ply")
>>> camera.close()
configure
configure(**params) -> None

Configure camera parameters.

Parameters:

Name Type Description Default
**params

Parameter name-value pairs

{}

Raises:

Type Description
CameraConnectionError

If camera not opened

CameraConfigurationError

If configuration fails

Examples:

>>> camera = StereoCamera()
>>> camera.configure(ExposureTime=15000, Gain=2.0)
>>> camera.close()
set_depth_range
set_depth_range(min_depth: float, max_depth: float) -> None

Set depth measurement range in meters.

Parameters:

Name Type Description Default
min_depth float

Minimum depth (e.g., 0.3 meters)

required
max_depth float

Maximum depth (e.g., 5.0 meters)

required

Raises:

Type Description
CameraConfigurationError

If configuration fails

Examples:

>>> camera = StereoCamera()
>>> camera.set_depth_range(0.5, 3.0)
>>> camera.close()
set_illumination_mode
set_illumination_mode(mode: str) -> None

Set illumination mode.

Parameters:

Name Type Description Default
mode str

'AlwaysActive' (low latency) or 'AlternateActive' (clean intensity)

required

Raises:

Type Description
CameraConfigurationError

If invalid mode or configuration fails

Examples:

>>> camera = StereoCamera()
>>> camera.set_illumination_mode("AlternateActive")
>>> camera.close()
set_binning
set_binning(horizontal: int = 2, vertical: int = 2) -> None

Enable binning for latency reduction.

Binning reduces network transfer and computation.

Parameters:

Name Type Description Default
horizontal int

Horizontal binning factor (typically 2)

2
vertical int

Vertical binning factor (typically 2)

2
Note

When using binning for low latency, consider also setting depth quality to "Full" using set_depth_quality("Full").

Raises:

Type Description
CameraConfigurationError

If configuration fails

Examples:

>>> camera = StereoCamera()
>>> camera.set_binning(2, 2)
>>> camera.set_depth_quality("Full")  # Recommended for low latency
>>> camera.close()
set_depth_quality
set_depth_quality(quality: str) -> None

Set depth quality level.

Parameters:

Name Type Description Default
quality str

Depth quality setting. Common values:

- "Full": Highest quality, recommended with binning
- "Normal": Standard quality
- "Low": Lower quality, faster processing

required

Raises:

Type Description
CameraConfigurationError

If configuration fails

Examples:

>>> camera = StereoCamera()
>>> # Low latency configuration
>>> camera.set_binning(2, 2)
>>> camera.set_depth_quality("Full")
>>> camera.close()
set_pixel_format
set_pixel_format(format: str) -> None

Set pixel format for intensity component.

Parameters:

Name Type Description Default
format str

Pixel format ("RGB8", "Mono8", etc.)

required

Raises:

Type Description
CameraConnectionError

If camera not opened

CameraConfigurationError

If format not available or configuration fails

Examples:

>>> camera = StereoCamera()
>>> camera.set_pixel_format("Mono8")  # Force grayscale
>>> camera.close()
set_exposure_time
set_exposure_time(microseconds: float) -> None

Set exposure time in microseconds.

Parameters:

Name Type Description Default
microseconds float

Exposure time in microseconds (e.g., 5000 = 5ms)

required

Raises:

Type Description
CameraConfigurationError

If configuration fails

Examples:

>>> camera = StereoCamera()
>>> camera.set_exposure_time(5000)  # 5ms exposure
>>> camera.close()
set_gain
set_gain(gain: float) -> None

Set camera gain.

Parameters:

Name Type Description Default
gain float

Gain value (typically 0.0 to 24.0, camera-dependent)

required

Raises:

Type Description
CameraConfigurationError

If configuration fails

Examples:

>>> camera = StereoCamera()
>>> camera.set_gain(2.0)
>>> camera.close()
get_exposure_time
get_exposure_time() -> float

Get current exposure time in microseconds.

Returns:

Type Description
float

Current exposure time in microseconds

Raises:

Type Description
CameraConnectionError

If camera not opened

Examples:

>>> camera = StereoCamera()
>>> exposure = camera.get_exposure_time()
>>> print(f"Exposure: {exposure}μs")
>>> camera.close()
get_gain
get_gain() -> float

Get current camera gain.

Returns:

Type Description
float

Current gain value

Raises:

Type Description
CameraConnectionError

If camera not opened

Examples:

>>> camera = StereoCamera()
>>> gain = camera.get_gain()
>>> print(f"Gain: {gain}")
>>> camera.close()
get_depth_quality
get_depth_quality() -> str

Get current depth quality setting.

Returns:

Type Description
str

Current depth quality level (e.g., "Full", "Normal", "Low")

Raises:

Type Description
CameraConnectionError

If camera not opened

Examples:

>>> camera = StereoCamera()
>>> quality = camera.get_depth_quality()
>>> print(f"Quality: {quality}")
>>> camera.close()
get_pixel_format
get_pixel_format() -> str

Get current pixel format.

Returns:

Type Description
str

Current pixel format (e.g., "RGB8", "Mono8", "Coord3D_C16")

Raises:

Type Description
CameraConnectionError

If camera not opened

Examples:

>>> camera = StereoCamera()
>>> format = camera.get_pixel_format()
>>> print(f"Format: {format}")
>>> camera.close()
get_binning
get_binning() -> tuple[int, int]

Get current binning settings.

Returns:

Type Description
tuple[int, int]

Tuple of (horizontal_binning, vertical_binning)

Raises:

Type Description
CameraConnectionError

If camera not opened

Examples:

>>> camera = StereoCamera()
>>> h_bin, v_bin = camera.get_binning()
>>> print(f"Binning: {h_bin}x{v_bin}")
>>> camera.close()
get_illumination_mode
get_illumination_mode() -> str

Get current illumination mode.

Returns:

Type Description
str

Current illumination mode ("AlwaysActive" or "AlternateActive")

Raises:

Type Description
CameraConnectionError

If camera not opened

Examples:

>>> camera = StereoCamera()
>>> mode = camera.get_illumination_mode()
>>> print(f"Illumination: {mode}")
>>> camera.close()
get_depth_range
get_depth_range() -> tuple[float, float]

Get current depth measurement range in meters.

Returns:

Type Description
tuple[float, float]

Tuple of (min_depth, max_depth) in meters

Raises:

Type Description
CameraConnectionError

If camera not opened

Examples:

>>> camera = StereoCamera()
>>> min_d, max_d = camera.get_depth_range()
>>> print(f"Range: {min_d}m - {max_d}m")
>>> camera.close()
enable_software_trigger
enable_software_trigger() -> None

Enable software triggering mode.

After enabling, use start_grabbing(), then execute_trigger() to capture frames on demand.

Raises:

Type Description
CameraConfigurationError

If configuration fails

Examples:

>>> camera = StereoCamera()
>>> camera.enable_software_trigger()
>>> camera.start_grabbing()  # Start grabbing first!
>>> for i in range(10):
...     camera.execute_trigger()
...     result = camera.capture()
>>> camera.close()
start_grabbing
start_grabbing() -> None

Start grabbing frames.

Must be called after enable_software_trigger() and before execute_trigger().

Raises:

Type Description
CameraConnectionError

If camera not opened

Examples:

>>> camera = StereoCamera()
>>> camera.enable_software_trigger()
>>> camera.start_grabbing()
>>> for i in range(10):
...     camera.execute_trigger()
...     result = camera.capture()
>>> camera.close()
execute_trigger
execute_trigger() -> None

Execute software trigger.

Triggers a frame capture when in software trigger mode. Note: after enabling software trigger, start_grabbing() must be called before triggering.

Raises:

Type Description
CameraConnectionError

If camera not opened

CameraConfigurationError

If trigger execution fails

Examples:

>>> camera = StereoCamera()
>>> camera.enable_software_trigger()
>>> camera.start_grabbing()
>>> camera.execute_trigger()
>>> result = camera.capture()
>>> camera.close()
StereoGrabResult dataclass
StereoGrabResult(
    intensity: Optional[ndarray],
    disparity: Optional[ndarray],
    timestamp: float,
    frame_number: int,
    disparity_calibrated: Optional[ndarray] = None,
    has_intensity: bool = True,
    has_disparity: bool = True,
)

Result from stereo camera capture containing multi-component data.

Attributes:

Name Type Description
intensity Optional[ndarray]

Intensity image - RGB8 (H, W, 3) or Mono8 (H, W)

disparity Optional[ndarray]

Disparity map - uint16 (H, W)

timestamp float

Capture timestamp in seconds

frame_number int

Sequential frame number

disparity_calibrated Optional[ndarray]

Calibrated disparity map - float32 (H, W), optional

has_intensity bool

Flag indicating if intensity data is present

has_disparity bool

Flag indicating if disparity data is present

intensity_shape property
intensity_shape: tuple

Get shape of intensity image.

disparity_shape property
disparity_shape: tuple

Get shape of disparity map.

is_color_intensity property
is_color_intensity: bool

Check if intensity is color (RGB) vs grayscale.
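The color-vs-grayscale distinction these properties encode can be checked directly on array shapes — per the attribute docs, RGB8 intensity is (H, W, 3) and Mono8 is (H, W). The is_color helper below is an illustrative sketch of that logic, not the module's implementation:

```python
import numpy as np

rgb = np.zeros((480, 640, 3), dtype=np.uint8)   # RGB8 intensity image
mono = np.zeros((480, 640), dtype=np.uint8)     # Mono8 intensity image

def is_color(intensity):
    """True for (H, W, 3) color arrays, False for (H, W) grayscale."""
    return intensity.ndim == 3 and intensity.shape[-1] == 3

print(is_color(rgb), is_color(mono))
```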

backends

Stereo camera backends.

This module provides the abstract base class and concrete implementations for stereo camera backends.

BaslerStereoAceBackend
BaslerStereoAceBackend(
    serial_number: Optional[str] = None, op_timeout_s: float = 30.0
)

Bases: StereoCameraBackend

Backend for Basler Stereo ace cameras using pypylon.

The Stereo ace camera is accessed through a unified device interface (DeviceClass: BaslerGTC/Basler/basler_xw) that presents the stereo pair as a single camera with multi-component output.

Extends StereoCameraBackend to provide consistent interface across different stereo camera manufacturers.

Initialize Basler Stereo ace backend.

Parameters:

Name Type Description Default
serial_number Optional[str]

Serial number or user-defined name of a specific camera. If the value is all digits, it is treated as a serial number; otherwise it is treated as a user-defined name. If None, opens the first available Stereo ace camera.

None
op_timeout_s float

Timeout in seconds for SDK operations (default 30s).

30.0

Raises:

Type Description
SDKNotAvailableError

If pypylon is not available

name property
name: str

Get camera name.

is_open property
is_open: bool

Check if camera is open.

calibration property
calibration: Optional[StereoCalibrationData]

Get calibration data.

discover staticmethod
discover() -> List[str]

Discover available Stereo ace cameras.

Returns:

Type Description
List[str]

List of serial numbers for available Stereo ace cameras

Raises:

Type Description
SDKNotAvailableError

If pypylon is not available

discover_async async classmethod
discover_async() -> List[str]

Async wrapper for discover() - runs discovery in threadpool.

Use this instead of discover() when calling from async context to avoid blocking the event loop during camera discovery.

Returns:

Type Description
List[str]

List of serial numbers for available Stereo ace cameras
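The pattern discover_async() describes — pushing a blocking SDK call into a threadpool so the event loop stays responsive — can be sketched with asyncio.to_thread. Here slow_discover is a stand-in for the blocking enumeration call, and the serial numbers are illustrative:

```python
import asyncio
import time

def slow_discover():
    """Stand-in for a blocking SDK device-enumeration call."""
    time.sleep(0.05)          # simulate enumeration latency
    return ["40644640", "40644641"]

async def discover_async():
    # Run the blocking call in the default threadpool; other coroutines
    # keep running while discovery is in flight.
    return await asyncio.to_thread(slow_discover)

serials = asyncio.run(discover_async())
print(serials)
```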

discover_detailed staticmethod
discover_detailed() -> List[Dict[str, str]]

Discover Stereo ace cameras with detailed information.

Returns:

Type Description
List[Dict[str, str]]

List of dictionaries containing camera information

Raises:

Type Description
SDKNotAvailableError

If pypylon is not available

discover_detailed_async async classmethod
discover_detailed_async() -> List[Dict[str, str]]

Async wrapper for discover_detailed() - runs discovery in threadpool.

Returns:

Type Description
List[Dict[str, str]]

List of dictionaries containing camera information

initialize async
initialize() -> bool

Initialize camera connection.

Returns:

Type Description
bool

True if initialization successful, False otherwise

get_calibration async
get_calibration() -> StereoCalibrationData

Get factory calibration parameters from camera.

Returns:

Type Description
StereoCalibrationData

StereoCalibrationData with factory calibration

Raises:

Type Description
CameraConnectionError

If camera not opened

CameraConfigurationError

If calibration cannot be read

capture async
capture(
    timeout_ms: int = 20000,
    enable_intensity: bool = True,
    enable_disparity: bool = True,
    calibrate_disparity: bool = True,
) -> StereoGrabResult

Capture stereo data with multiple components.

Parameters:

Name Type Description Default
timeout_ms int

Capture timeout in milliseconds

20000
enable_intensity bool

Whether to capture intensity data

True
enable_disparity bool

Whether to capture disparity data

True
calibrate_disparity bool

Whether to apply calibration to disparity

True

Returns:

Type Description
StereoGrabResult

StereoGrabResult containing captured data

Raises:

Type Description
CameraConnectionError

If camera not opened

CameraCaptureError

If capture fails
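A minimal, hardware-free sketch of the capture() call shape above. MockStereoAce and the illustrative StereoGrabResult fields are stand-ins for the real backend and result type, so this runs without a camera or pypylon:

```python
import asyncio
from dataclasses import dataclass

@dataclass
class StereoGrabResult:
    """Stand-in for the real StereoGrabResult; field names are illustrative."""
    intensity: object = None
    disparity: object = None

class MockStereoAce:
    """Stand-in mirroring the documented capture() signature."""
    async def capture(self, timeout_ms=20000, enable_intensity=True,
                      enable_disparity=True, calibrate_disparity=True):
        # The real backend grabs a multi-component frame here.
        return StereoGrabResult(
            intensity="image" if enable_intensity else None,
            disparity="map" if enable_disparity else None,
        )

async def main():
    camera = MockStereoAce()
    # Disparity-only grab, e.g. when only depth is needed.
    return await camera.capture(enable_intensity=False)

result = asyncio.run(main())
print(result.intensity, result.disparity)
```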

configure async
configure(**params) -> None

Configure camera parameters.

Parameters:

Name Type Description Default
**params

Parameter name-value pairs to configure. Special parameters:
- trigger_mode: "continuous" or "trigger"
- depth_range: tuple of (min_depth, max_depth)
- illumination_mode: "AlwaysActive" or "AlternateActive"
- binning: tuple of (horizontal, vertical)
- depth_quality: "Full", "High", "Normal", or "Low"
All other parameters are passed directly to the camera.

{}

Raises:

Type Description
CameraConnectionError

If camera not opened

CameraConfigurationError

If configuration fails
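The special-parameter handling described above can be sketched with a stand-in; MockStereoAce is hypothetical and only records the call, but the keyword names match the documented configure() interface:

```python
import asyncio

class MockStereoAce:
    """Stand-in mirroring the documented configure() call shape."""
    def __init__(self):
        self.applied = {}

    async def configure(self, **params) -> None:
        # The real backend translates the special keys documented above
        # (trigger_mode, depth_range, illumination_mode, binning,
        # depth_quality) and passes everything else to the camera directly.
        self.applied.update(params)

async def main():
    camera = MockStereoAce()
    await camera.configure(
        trigger_mode="continuous",
        depth_range=(0.3, 5.0),
        illumination_mode="AlwaysActive",
        binning=(2, 2),
        depth_quality="Full",
    )
    return camera.applied

applied = asyncio.run(main())
print(applied["depth_range"])
```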

set_depth_range async
set_depth_range(min_depth: float, max_depth: float) -> None

Set depth measurement range.

Parameters:

Name Type Description Default
min_depth float

Minimum depth in meters (e.g., 0.3)

required
max_depth float

Maximum depth in meters (e.g., 5.0)

required

Raises:

Type Description
CameraConfigurationError

If configuration fails

set_illumination_mode async
set_illumination_mode(mode: str) -> None

Set illumination mode.

Parameters:

Name Type Description Default
mode str

'AlwaysActive' (low latency) or 'AlternateActive' (clean intensity)

required

Raises:

Type Description
CameraConfigurationError

If configuration fails

set_binning async
set_binning(horizontal: int = 2, vertical: int = 2) -> None

Enable binning for latency reduction.

Parameters:

Name Type Description Default
horizontal int

Horizontal binning factor (typically 2)

2
vertical int

Vertical binning factor (typically 2)

2
Note

When using binning for low latency, consider also setting depth quality to "Full" using set_depth_quality("Full").

Raises:

Type Description
CameraConfigurationError

If configuration fails

set_depth_quality async
set_depth_quality(quality: str) -> None

Set depth quality level.

Parameters:

Name Type Description Default
quality str

Depth quality setting. Common values:
- "Full": Highest quality, recommended with binning
- "Normal": Standard quality
- "Low": Lower quality, faster processing

required
Note

Setting quality to "Full" with binning reduces latency while maintaining depth quality. This is recommended for low-latency applications.

Raises:

Type Description
CameraConfigurationError

If configuration fails

Example
Low latency configuration

await camera.set_binning(2, 2)
await camera.set_depth_quality("Full")

set_pixel_format async
set_pixel_format(format: str) -> None

Set pixel format for intensity component.

Parameters:

Name Type Description Default
format str

Pixel format ("RGB8", "Mono8", etc.)

required

Raises:

Type Description
CameraConnectionError

If camera not opened

CameraConfigurationError

If format not available or configuration fails

Example

await camera.set_pixel_format("Mono8") # Force grayscale

set_exposure_time async
set_exposure_time(microseconds: float) -> None

Set exposure time in microseconds.

Parameters:

Name Type Description Default
microseconds float

Exposure time in microseconds (e.g., 5000 = 5ms)

required

Raises:

Type Description
CameraConfigurationError

If configuration fails

Example

await camera.set_exposure_time(5000) # 5ms exposure

set_gain async
set_gain(gain: float) -> None

Set camera gain.

Parameters:

Name Type Description Default
gain float

Gain value (typically 0.0 to 24.0, camera-dependent)

required

Raises:

Type Description
CameraConfigurationError

If configuration fails

Example

await camera.set_gain(2.0)

get_exposure_time async
get_exposure_time() -> float

Get current exposure time in microseconds.

Returns:

Type Description
float

Current exposure time in microseconds

Raises:

Type Description
CameraConnectionError

If camera not opened

Example

exposure = await camera.get_exposure_time()
print(f"Current exposure: {exposure}us")

get_gain async
get_gain() -> float

Get current camera gain.

Returns:

Type Description
float

Current gain value

Raises:

Type Description
CameraConnectionError

If camera not opened

Example

gain = await camera.get_gain()
print(f"Current gain: {gain}")

get_depth_quality async
get_depth_quality() -> str

Get current depth quality setting.

Returns:

Type Description
str

Current depth quality level (e.g., "Full", "Normal", "Low")

Raises:

Type Description
CameraConnectionError

If camera not opened

Example

quality = await camera.get_depth_quality()
print(f"Depth quality: {quality}")

get_pixel_format async
get_pixel_format() -> str

Get current pixel format.

Returns:

Type Description
str

Current pixel format (e.g., "RGB8", "Mono8", "Coord3D_C16")

Raises:

Type Description
CameraConnectionError

If camera not opened

Example

format = await camera.get_pixel_format()
print(f"Pixel format: {format}")

get_binning async
get_binning() -> tuple[int, int]

Get current binning settings.

Returns:

Type Description
tuple[int, int]

Tuple of (horizontal_binning, vertical_binning)

Raises:

Type Description
CameraConnectionError

If camera not opened

Example

h_bin, v_bin = await camera.get_binning()
print(f"Binning: {h_bin}x{v_bin}")

get_illumination_mode async
get_illumination_mode() -> str

Get current illumination mode.

Returns:

Type Description
str

Current illumination mode ("AlwaysActive" or "AlternateActive")

Raises:

Type Description
CameraConnectionError

If camera not opened

Example

mode = await camera.get_illumination_mode()
print(f"Illumination: {mode}")

get_depth_range async
get_depth_range() -> tuple[float, float]

Get current depth measurement range in meters.

Returns:

Type Description
tuple[float, float]

Tuple of (min_depth, max_depth) in meters

Raises:

Type Description
CameraConnectionError

If camera not opened

Example

min_d, max_d = await camera.get_depth_range()
print(f"Depth range: {min_d}m - {max_d}m")

set_trigger_mode async
set_trigger_mode(mode: str) -> None

Set trigger mode (simplified interface).

Parameters:

Name Type Description Default
mode str

Trigger mode ("continuous" or "trigger"):
- "continuous": Free-running continuous acquisition (TriggerMode=Off)
- "trigger": Software-triggered acquisition (TriggerMode=On, TriggerSource=Software)

required

Raises:

Type Description
CameraConnectionError

If camera not opened

CameraConfigurationError

If invalid mode or configuration fails

Examples:

>>> await backend.set_trigger_mode("continuous")  # Free running
>>> await backend.set_trigger_mode("trigger")     # Software triggered
get_trigger_mode async
get_trigger_mode() -> str

Get current trigger mode (simplified interface).

Returns:

Type Description
str

"continuous" if TriggerMode is Off, "trigger" if TriggerMode is On with Software source

Raises:

Type Description
CameraConnectionError

If camera not opened

Examples:

>>> mode = await backend.get_trigger_mode()
>>> print(f"Current mode: {mode}")
get_trigger_modes async
get_trigger_modes() -> List[str]

Get available trigger modes.

Returns:

Type Description
List[str]

List of supported trigger modes: ["continuous", "trigger"]

Note

This provides a simplified interface. The underlying camera supports additional modes (SingleFrame, MultiFrame, hardware triggers) accessible via direct configure() calls if needed.

start_grabbing async
start_grabbing() -> None

Start grabbing frames.

This must be called before execute_trigger() in software trigger mode.

Raises:

Type Description
CameraConnectionError

If camera not opened

execute_trigger async
execute_trigger() -> None

Execute software trigger.

Note: In software trigger mode, ensure start_grabbing() is called first, or call capture() once before the trigger loop to start grabbing.

Raises:

Type Description
CameraConnectionError

If camera not opened

CameraConfigurationError

If trigger execution fails
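The documented sequence for software-triggered acquisition (set trigger mode, start grabbing, then trigger and capture per frame) can be sketched as follows. MockStereoAce is a hardware-free stand-in, but the method names and call order match the API above:

```python
import asyncio

class MockStereoAce:
    """Stand-in for BaslerStereoAceBackend; mirrors the documented
    trigger-mode API so the call sequence can run without hardware."""
    def __init__(self):
        self._mode = "continuous"
        self._grabbing = False

    async def set_trigger_mode(self, mode: str) -> None:
        if mode not in ("continuous", "trigger"):
            raise ValueError(f"invalid mode: {mode}")
        self._mode = mode

    async def start_grabbing(self) -> None:
        self._grabbing = True

    async def execute_trigger(self) -> None:
        # Mirrors the documented requirement that grabbing is started first.
        if not self._grabbing:
            raise RuntimeError("call start_grabbing() before execute_trigger()")

    async def capture(self, timeout_ms: int = 20000):
        return {"intensity": "image", "disparity": "map"}

async def triggered_acquisition(camera, n_frames: int = 3):
    # Documented sequence: trigger mode -> start grabbing -> trigger -> capture.
    await camera.set_trigger_mode("trigger")
    await camera.start_grabbing()
    results = []
    for _ in range(n_frames):
        await camera.execute_trigger()
        results.append(await camera.capture(timeout_ms=20000))
    return results

frames = asyncio.run(triggered_acquisition(MockStereoAce()))
print(len(frames))
```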

capture_point_cloud async
capture_point_cloud(
    include_colors: bool = True,
    include_confidence: bool = False,
    downsample_factor: int = 1,
    timeout_ms: int = 20000,
) -> PointCloudData

Capture and generate 3D point cloud.

Parameters:

Name Type Description Default
include_colors bool

Whether to include color information from intensity

True
include_confidence bool

Whether to include confidence values (not supported)

False
downsample_factor int

Downsampling factor (1 = no downsampling)

1
timeout_ms int

Capture timeout in milliseconds

20000

Returns:

Type Description
PointCloudData

PointCloudData with 3D points and optional colors

Raises:

Type Description
CameraConnectionError

If camera not opened

CameraCaptureError

If capture fails

CameraConfigurationError

If calibration not available
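Point cloud generation from calibrated disparity follows the standard pinhole stereo relation. A minimal sketch, assuming a focal length f and principal point (cx, cy) in pixels and a baseline in meters; the numbers below are illustrative, not real Stereo ace calibration values:

```python
def disparity_to_point(u, v, disparity, f, baseline, cx, cy):
    """Standard pinhole stereo back-projection:
    Z = f * baseline / d, X = (u - cx) * Z / f, Y = (v - cy) * Z / f."""
    if disparity <= 0:
        return None  # unmatched/invalid pixel
    z = f * baseline / disparity
    return ((u - cx) * z / f, (v - cy) * z / f, z)

# Illustrative values: f=1000 px, baseline=0.1 m, principal point (640, 480).
pt = disparity_to_point(740, 480, 50.0, f=1000.0, baseline=0.1, cx=640.0, cy=480.0)
print(pt)
```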

close async
close() -> None

Close camera and release resources.

StereoCameraBackend
StereoCameraBackend(
    serial_number: Optional[str] = None, op_timeout_s: float = 30.0
)

Bases: MindtraceABC

Abstract base class for all stereo camera implementations.

This class defines the async interface that all stereo camera backends must implement to ensure consistent behavior across different stereo camera types and manufacturers. Uses async-first design consistent with CameraBackend and PLC backends.

Attributes:

Name Type Description
serial_number

Unique identifier for the camera

calibration Optional[StereoCalibrationData]

Factory calibration parameters

is_open bool

Camera connection status

Implementation Guide
  • Offload blocking SDK calls from async methods: Use asyncio.to_thread for simple cases or loop.run_in_executor with a per-instance single-thread executor when the SDK requires thread affinity.
  • Thread affinity: Many vendor SDKs are safest when all calls originate from one OS thread. Prefer a dedicated single-thread executor created during initialize() and shut down in close() to serialize SDK access without blocking the event loop.
  • Timeouts and cancellation: Prefer SDK-native timeouts where available. Otherwise, wrap awaited futures with asyncio.wait_for to bound runtime. Note that cancelling an await does not stop the underlying thread function; design idempotent/short tasks when possible.
  • Event loop hygiene: Never call blocking functions (e.g., long SDK calls, time.sleep) directly in async methods. Replace sleeps with await asyncio.sleep or run blocking work in the executor.
  • Sync helpers: Lightweight getters/setters that do not touch hardware may remain synchronous. If a "getter" calls into the SDK, route it through the executor to avoid blocking.
  • Errors: Map SDK-specific exceptions to the domain exceptions in mindtrace.hardware.core.exceptions with clear, contextual messages.
  • Cleanup: Ensure resources (device handles, executors, buffers) are released in close(). __aenter__/__aexit__ already call initialize/close for async contexts.
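The executor guidance above can be sketched like this. BlockingSDK is a hypothetical stand-in for a vendor SDK with blocking calls and thread affinity; the pattern (per-instance single-thread executor created in initialize(), used for all SDK access, shut down in close()) is the one the guide describes:

```python
import asyncio
import threading
import time
from concurrent.futures import ThreadPoolExecutor

class BlockingSDK:
    """Hypothetical vendor SDK: blocking calls, safest from one thread."""
    def grab(self) -> str:
        time.sleep(0.05)  # simulate a blocking SDK call
        return f"frame@{threading.current_thread().name}"

class Backend:
    async def initialize(self) -> bool:
        # Dedicated single-thread executor serializes all SDK access on
        # one OS thread, satisfying thread-affinity requirements without
        # blocking the event loop.
        self._executor = ThreadPoolExecutor(max_workers=1, thread_name_prefix="sdk")
        self._sdk = BlockingSDK()
        return True

    async def capture(self, timeout_s: float = 5.0) -> str:
        loop = asyncio.get_running_loop()
        # Bound runtime with asyncio.wait_for; note that cancelling the
        # await does not stop the function already running in the thread.
        return await asyncio.wait_for(
            loop.run_in_executor(self._executor, self._sdk.grab), timeout_s
        )

    async def close(self) -> None:
        self._executor.shutdown(wait=True)

async def main():
    backend = Backend()
    await backend.initialize()
    frame = await backend.capture()
    await backend.close()
    return frame

frame = asyncio.run(main())
print(frame)
```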
Example Implementation

class MyStereoCameraBackend(StereoCameraBackend):
    async def initialize(self) -> bool:
        # Connect to camera, load calibration
        return True

    async def capture(self, ...) -> StereoGrabResult:
        # Capture stereo data
        return StereoGrabResult(...)

Initialize base stereo camera backend.

Parameters:

Name Type Description Default
serial_number Optional[str]

Unique identifier for the camera (auto-discovered if None)

None
op_timeout_s float

Default timeout in seconds for SDK operations

30.0
name abstractmethod property
name: str

Get camera name in format 'BackendType:serial_number'.

is_open property
is_open: bool

Check if camera is open.

calibration property
calibration: Optional[StereoCalibrationData]

Get calibration data.

discover abstractmethod staticmethod
discover() -> List[str]

Discover available stereo cameras.

Returns:

Type Description
List[str]

List of serial numbers or identifiers for available cameras

Raises:

Type Description
SDKNotAvailableError

If required SDK is not available

discover_async async classmethod
discover_async() -> List[str]

Async wrapper for discover() - runs discovery in threadpool.

Default implementation runs discover() in a thread. Override if your SDK provides native async discovery.

Returns:

Type Description
List[str]

List of serial numbers for available cameras
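The default wrapper described above can be sketched as follows. ExampleBackend is a hypothetical backend, and asyncio.to_thread is one way to run the blocking discover() off the event loop:

```python
import asyncio

class ExampleBackend:
    """Sketch of the discover/discover_async split described above."""
    @staticmethod
    def discover():
        # Blocking SDK enumeration would go here; serials are illustrative.
        return ["24123456", "24123457"]

    @classmethod
    async def discover_async(cls):
        # Default async wrapper: offload the blocking discover() to a
        # thread so the event loop is not blocked during enumeration.
        return await asyncio.to_thread(cls.discover)

serials = asyncio.run(ExampleBackend.discover_async())
print(serials)
```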

discover_detailed staticmethod
discover_detailed() -> List[Dict[str, str]]

Discover cameras with detailed information.

Returns:

Type Description
List[Dict[str, str]]

List of dictionaries containing camera information (serial_number, model, etc.)

discover_detailed_async async classmethod
discover_detailed_async() -> List[Dict[str, str]]

Async wrapper for discover_detailed().

initialize abstractmethod async
initialize() -> bool

Initialize camera connection and load calibration.

This method should:
1. Connect to the camera hardware
2. Load factory calibration parameters
3. Apply default configuration

Returns:

Type Description
bool

True if initialization successful

Raises:

Type Description
CameraNotFoundError

If camera cannot be found

CameraInitializationError

If camera initialization fails

CameraConnectionError

If camera connection fails

capture abstractmethod async
capture(
    timeout_ms: int = 20000,
    enable_intensity: bool = True,
    enable_disparity: bool = True,
    calibrate_disparity: bool = True,
) -> StereoGrabResult

Capture stereo data with multiple components.

Parameters:

Name Type Description Default
timeout_ms int

Capture timeout in milliseconds

20000
enable_intensity bool

Whether to capture intensity/texture image

True
enable_disparity bool

Whether to capture disparity map

True
calibrate_disparity bool

Whether to apply calibration to raw disparity

True

Returns:

Type Description
StereoGrabResult

StereoGrabResult containing captured data

Raises:

Type Description
CameraConnectionError

If camera not opened

CameraCaptureError

If capture fails

CameraTimeoutError

If capture times out

close abstractmethod async
close() -> None

Close camera and release resources.

This method should:
1. Stop any ongoing acquisition
2. Release hardware handles
3. Clean up executors/threads if used

get_calibration async
get_calibration() -> StereoCalibrationData

Get factory calibration parameters from camera.

Returns:

Type Description
StereoCalibrationData

StereoCalibrationData with factory calibration

Raises:

Type Description
CameraConnectionError

If camera not opened

CameraConfigurationError

If calibration cannot be read

capture_point_cloud async
capture_point_cloud(
    include_colors: bool = True, downsample_factor: int = 1
) -> PointCloudData

Capture and generate 3D point cloud.

Default implementation captures stereo data and generates point cloud using calibration parameters. Override for backend-specific optimization.

Parameters:

Name Type Description Default
include_colors bool

Whether to include color information

True
downsample_factor int

Downsampling factor (1 = no downsampling)

1

Returns:

Type Description
PointCloudData

PointCloudData with 3D points and optional attributes

Raises:

Type Description
CameraConnectionError

If camera not opened

CameraCaptureError

If capture fails

CameraConfigurationError

If calibration not available

configure async
configure(**params) -> None

Configure camera parameters.

Parameters:

Name Type Description Default
**params

Parameter name-value pairs

{}

Raises:

Type Description
CameraConnectionError

If camera not opened

CameraConfigurationError

If configuration fails

set_exposure_time async
set_exposure_time(microseconds: float) -> None

Set exposure time in microseconds.

get_exposure_time async
get_exposure_time() -> float

Get current exposure time in microseconds.

set_gain async
set_gain(gain: float) -> None

Set camera gain.

get_gain async
get_gain() -> float

Get current camera gain.

set_depth_range async
set_depth_range(min_depth: float, max_depth: float) -> None

Set depth measurement range in meters.

get_depth_range async
get_depth_range() -> Tuple[float, float]

Get current depth measurement range in meters.

set_depth_quality async
set_depth_quality(quality: str) -> None

Set depth quality level (e.g., 'Full', 'Normal', 'Low').

get_depth_quality async
get_depth_quality() -> str

Get current depth quality level.

set_illumination_mode async
set_illumination_mode(mode: str) -> None

Set illumination mode ('AlwaysActive' or 'AlternateActive').

get_illumination_mode async
get_illumination_mode() -> str

Get current illumination mode.

set_binning async
set_binning(horizontal: int = 2, vertical: int = 2) -> None

Enable binning for latency reduction.

get_binning async
get_binning() -> Tuple[int, int]

Get current binning settings (horizontal, vertical).

set_pixel_format async
set_pixel_format(format: str) -> None

Set pixel format for intensity component.

get_pixel_format async
get_pixel_format() -> str

Get current pixel format.

set_trigger_mode async
set_trigger_mode(mode: str) -> None

Set trigger mode ('continuous' or 'trigger').

get_trigger_mode async
get_trigger_mode() -> str

Get current trigger mode.

get_trigger_modes async
get_trigger_modes() -> List[str]

Get available trigger modes.

start_grabbing async
start_grabbing() -> None

Start grabbing frames (required before execute_trigger in trigger mode).

execute_trigger async
execute_trigger() -> None

Execute software trigger.

basler

Basler Stereo ace backend.

BaslerStereoAceBackend
BaslerStereoAceBackend(
    serial_number: Optional[str] = None, op_timeout_s: float = 30.0
)

Bases: StereoCameraBackend

Backend for Basler Stereo ace cameras using pypylon.

The Stereo ace camera is accessed through a unified device interface (DeviceClass: BaslerGTC/Basler/basler_xw) that presents the stereo pair as a single camera with multi-component output.

Extends StereoCameraBackend to provide a consistent interface across different stereo camera manufacturers.

Initialize Basler Stereo ace backend.

Parameters:

Name Type Description Default
serial_number Optional[str]

Serial number or user-defined name of specific camera. If all digits, treated as serial number. Otherwise, treated as user-defined name. If None, opens first available Stereo ace camera.

None
op_timeout_s float

Timeout in seconds for SDK operations (default 30s).

30.0

Raises:

Type Description
SDKNotAvailableError

If pypylon is not available

name property
name: str

Get camera name.

is_open property
is_open: bool

Check if camera is open.

calibration property
calibration: Optional[StereoCalibrationData]

Get calibration data.

discover staticmethod
discover() -> List[str]

Discover available Stereo ace cameras.

Returns:

Type Description
List[str]

List of serial numbers for available Stereo ace cameras

Raises:

Type Description
SDKNotAvailableError

If pypylon is not available

discover_async async classmethod
discover_async() -> List[str]

Async wrapper for discover() - runs discovery in threadpool.

Use this instead of discover() when calling from async context to avoid blocking the event loop during camera discovery.

Returns:

Type Description
List[str]

List of serial numbers for available Stereo ace cameras

discover_detailed staticmethod
discover_detailed() -> List[Dict[str, str]]

Discover Stereo ace cameras with detailed information.

Returns:

Type Description
List[Dict[str, str]]

List of dictionaries containing camera information

Raises:

Type Description
SDKNotAvailableError

If pypylon is not available

discover_detailed_async async classmethod
discover_detailed_async() -> List[Dict[str, str]]

Async wrapper for discover_detailed() - runs discovery in threadpool.

Returns:

Type Description
List[Dict[str, str]]

List of dictionaries containing camera information

initialize async
initialize() -> bool

Initialize camera connection.

Returns:

Type Description
bool

True if initialization successful, False otherwise

get_calibration async
get_calibration() -> StereoCalibrationData

Get factory calibration parameters from camera.

Returns:

Type Description
StereoCalibrationData

StereoCalibrationData with factory calibration

Raises:

Type Description
CameraConnectionError

If camera not opened

CameraConfigurationError

If calibration cannot be read

capture async
capture(
    timeout_ms: int = 20000,
    enable_intensity: bool = True,
    enable_disparity: bool = True,
    calibrate_disparity: bool = True,
) -> StereoGrabResult

Capture stereo data with multiple components.

Parameters:

Name Type Description Default
timeout_ms int

Capture timeout in milliseconds

20000
enable_intensity bool

Whether to capture intensity data

True
enable_disparity bool

Whether to capture disparity data

True
calibrate_disparity bool

Whether to apply calibration to disparity

True

Returns:

Type Description
StereoGrabResult

StereoGrabResult containing captured data

Raises:

Type Description
CameraConnectionError

If camera not opened

CameraCaptureError

If capture fails

configure async
configure(**params) -> None

Configure camera parameters.

Parameters:

Name Type Description Default
**params

Parameter name-value pairs to configure. Special parameters:
- trigger_mode: "continuous" or "trigger"
- depth_range: tuple of (min_depth, max_depth)
- illumination_mode: "AlwaysActive" or "AlternateActive"
- binning: tuple of (horizontal, vertical)
- depth_quality: "Full", "High", "Normal", or "Low"
All other parameters are passed directly to the camera.

{}

Raises:

Type Description
CameraConnectionError

If camera not opened

CameraConfigurationError

If configuration fails

set_depth_range async
set_depth_range(min_depth: float, max_depth: float) -> None

Set depth measurement range.

Parameters:

Name Type Description Default
min_depth float

Minimum depth in meters (e.g., 0.3)

required
max_depth float

Maximum depth in meters (e.g., 5.0)

required

Raises:

Type Description
CameraConfigurationError

If configuration fails

set_illumination_mode async
set_illumination_mode(mode: str) -> None

Set illumination mode.

Parameters:

Name Type Description Default
mode str

'AlwaysActive' (low latency) or 'AlternateActive' (clean intensity)

required

Raises:

Type Description
CameraConfigurationError

If configuration fails

set_binning async
set_binning(horizontal: int = 2, vertical: int = 2) -> None

Enable binning for latency reduction.

Parameters:

Name Type Description Default
horizontal int

Horizontal binning factor (typically 2)

2
vertical int

Vertical binning factor (typically 2)

2
Note

When using binning for low latency, consider also setting depth quality to "Full" using set_depth_quality("Full").

Raises:

Type Description
CameraConfigurationError

If configuration fails

set_depth_quality async
set_depth_quality(quality: str) -> None

Set depth quality level.

Parameters:

Name Type Description Default
quality str

Depth quality setting. Common values:
- "Full": Highest quality, recommended with binning
- "Normal": Standard quality
- "Low": Lower quality, faster processing

required
Note

Setting quality to "Full" with binning reduces latency while maintaining depth quality. This is recommended for low-latency applications.

Raises:

Type Description
CameraConfigurationError

If configuration fails

Example
Low latency configuration

await camera.set_binning(2, 2)
await camera.set_depth_quality("Full")

set_pixel_format async
set_pixel_format(format: str) -> None

Set pixel format for intensity component.

Parameters:

Name Type Description Default
format str

Pixel format ("RGB8", "Mono8", etc.)

required

Raises:

Type Description
CameraConnectionError

If camera not opened

CameraConfigurationError

If format not available or configuration fails

Example

await camera.set_pixel_format("Mono8") # Force grayscale

set_exposure_time async
set_exposure_time(microseconds: float) -> None

Set exposure time in microseconds.

Parameters:

Name Type Description Default
microseconds float

Exposure time in microseconds (e.g., 5000 = 5ms)

required

Raises:

Type Description
CameraConfigurationError

If configuration fails

Example

await camera.set_exposure_time(5000) # 5ms exposure

set_gain async
set_gain(gain: float) -> None

Set camera gain.

Parameters:

Name Type Description Default
gain float

Gain value (typically 0.0 to 24.0, camera-dependent)

required

Raises:

Type Description
CameraConfigurationError

If configuration fails

Example

await camera.set_gain(2.0)

get_exposure_time async
get_exposure_time() -> float

Get current exposure time in microseconds.

Returns:

Type Description
float

Current exposure time in microseconds

Raises:

Type Description
CameraConnectionError

If camera not opened

Example

exposure = await camera.get_exposure_time()
print(f"Current exposure: {exposure}us")

get_gain async
get_gain() -> float

Get current camera gain.

Returns:

Type Description
float

Current gain value

Raises:

Type Description
CameraConnectionError

If camera not opened

Example

gain = await camera.get_gain()
print(f"Current gain: {gain}")

get_depth_quality async
get_depth_quality() -> str

Get current depth quality setting.

Returns:

Type Description
str

Current depth quality level (e.g., "Full", "Normal", "Low")

Raises:

Type Description
CameraConnectionError

If camera not opened

Example

quality = await camera.get_depth_quality()
print(f"Depth quality: {quality}")

get_pixel_format async
get_pixel_format() -> str

Get current pixel format.

Returns:

Type Description
str

Current pixel format (e.g., "RGB8", "Mono8", "Coord3D_C16")

Raises:

Type Description
CameraConnectionError

If camera not opened

Example

format = await camera.get_pixel_format()
print(f"Pixel format: {format}")

get_binning async
get_binning() -> tuple[int, int]

Get current binning settings.

Returns:

Type Description
tuple[int, int]

Tuple of (horizontal_binning, vertical_binning)

Raises:

Type Description
CameraConnectionError

If camera not opened

Example

h_bin, v_bin = await camera.get_binning()
print(f"Binning: {h_bin}x{v_bin}")

get_illumination_mode async
get_illumination_mode() -> str

Get current illumination mode.

Returns:

Type Description
str

Current illumination mode ("AlwaysActive" or "AlternateActive")

Raises:

Type Description
CameraConnectionError

If camera not opened

Example

mode = await camera.get_illumination_mode()
print(f"Illumination: {mode}")

get_depth_range async
get_depth_range() -> tuple[float, float]

Get current depth measurement range in meters.

Returns:

Type Description
tuple[float, float]

Tuple of (min_depth, max_depth) in meters

Raises:

Type Description
CameraConnectionError

If camera not opened

Example

min_d, max_d = await camera.get_depth_range()
print(f"Depth range: {min_d}m - {max_d}m")

set_trigger_mode async
set_trigger_mode(mode: str) -> None

Set trigger mode (simplified interface).

Parameters:

Name Type Description Default
mode str

Trigger mode ("continuous" or "trigger"):
- "continuous": Free-running continuous acquisition (TriggerMode=Off)
- "trigger": Software-triggered acquisition (TriggerMode=On, TriggerSource=Software)

required

Raises:

Type Description
CameraConnectionError

If camera not opened

CameraConfigurationError

If invalid mode or configuration fails

Examples:

>>> await backend.set_trigger_mode("continuous")  # Free running
>>> await backend.set_trigger_mode("trigger")     # Software triggered
get_trigger_mode async
get_trigger_mode() -> str

Get current trigger mode (simplified interface).

Returns:

Type Description
str

"continuous" if TriggerMode is Off, "trigger" if TriggerMode is On with Software source

Raises:

Type Description
CameraConnectionError

If camera not opened

Examples:

>>> mode = await backend.get_trigger_mode()
>>> print(f"Current mode: {mode}")
get_trigger_modes async
get_trigger_modes() -> List[str]

Get available trigger modes.

Returns:

Type Description
List[str]

List of supported trigger modes: ["continuous", "trigger"]

Note

This provides a simplified interface. The underlying camera supports additional modes (SingleFrame, MultiFrame, hardware triggers) accessible via direct configure() calls if needed.

start_grabbing async
start_grabbing() -> None

Start grabbing frames.

This must be called before execute_trigger() in software trigger mode.

Raises:

Type Description
CameraConnectionError

If camera not opened

execute_trigger async
execute_trigger() -> None

Execute software trigger.

Note: In software trigger mode, ensure start_grabbing() is called first, or call capture() once before the trigger loop to start grabbing.

Raises:

Type Description
CameraConnectionError

If camera not opened

CameraConfigurationError

If trigger execution fails

capture_point_cloud async
capture_point_cloud(
    include_colors: bool = True,
    include_confidence: bool = False,
    downsample_factor: int = 1,
    timeout_ms: int = 20000,
) -> PointCloudData

Capture and generate 3D point cloud.

Parameters:

Name Type Description Default
include_colors bool

Whether to include color information from intensity

True
include_confidence bool

Whether to include confidence values (not supported)

False
downsample_factor int

Downsampling factor (1 = no downsampling)

1
timeout_ms int

Capture timeout in milliseconds

20000

Returns:

Type Description
PointCloudData

PointCloudData with 3D points and optional colors

Raises:

Type Description
CameraConnectionError

If camera not opened

CameraCaptureError

If capture fails

CameraConfigurationError

If calibration not available

close async
close() -> None

Close camera and release resources.

basler_stereo_ace

Basler Stereo ace camera backend using pypylon.

This backend provides access to Basler Stereo ace cameras which combine two ace2 Pro cameras with a pattern projector into a unified stereo vision system.

BaslerStereoAceBackend
BaslerStereoAceBackend(
    serial_number: Optional[str] = None, op_timeout_s: float = 30.0
)

Bases: StereoCameraBackend

Backend for Basler Stereo ace cameras using pypylon.

The Stereo ace camera is accessed through a unified device interface (DeviceClass: BaslerGTC/Basler/basler_xw) that presents the stereo pair as a single camera with multi-component output.

Extends StereoCameraBackend to provide a consistent interface across different stereo camera manufacturers.

Initialize Basler Stereo ace backend.

Parameters:

Name Type Description Default
serial_number Optional[str]

Serial number or user-defined name of specific camera. If all digits, treated as serial number. Otherwise, treated as user-defined name. If None, opens first available Stereo ace camera.

None
op_timeout_s float

Timeout in seconds for SDK operations (default 30s).

30.0

Raises:

Type Description
SDKNotAvailableError

If pypylon is not available

name property
name: str

Get camera name.

is_open property
is_open: bool

Check if camera is open.

calibration property
calibration: Optional[StereoCalibrationData]

Get calibration data.

discover staticmethod
discover() -> List[str]

Discover available Stereo ace cameras.

Returns:

Type Description
List[str]

List of serial numbers for available Stereo ace cameras

Raises:

Type Description
SDKNotAvailableError

If pypylon is not available

discover_async async classmethod
discover_async() -> List[str]

Async wrapper for discover() - runs discovery in threadpool.

Use this instead of discover() when calling from async context to avoid blocking the event loop during camera discovery.

Returns:

Type Description
List[str]

List of serial numbers for available Stereo ace cameras

discover_detailed staticmethod
discover_detailed() -> List[Dict[str, str]]

Discover Stereo ace cameras with detailed information.

Returns:

Type Description
List[Dict[str, str]]

List of dictionaries containing camera information

Raises:

Type Description
SDKNotAvailableError

If pypylon is not available

discover_detailed_async async classmethod
discover_detailed_async() -> List[Dict[str, str]]

Async wrapper for discover_detailed() - runs discovery in threadpool.

Returns:

Type Description
List[Dict[str, str]]

List of dictionaries containing camera information

initialize async
initialize() -> bool

Initialize camera connection.

Returns:

Type Description
bool

True if initialization successful, False otherwise

get_calibration async
get_calibration() -> StereoCalibrationData

Get factory calibration parameters from camera.

Returns:

Type Description
StereoCalibrationData

StereoCalibrationData with factory calibration

Raises:

Type Description
CameraConnectionError

If camera not opened

CameraConfigurationError

If calibration cannot be read

capture async
capture(
    timeout_ms: int = 20000,
    enable_intensity: bool = True,
    enable_disparity: bool = True,
    calibrate_disparity: bool = True,
) -> StereoGrabResult

Capture stereo data with multiple components.

Parameters:

Name Type Description Default
timeout_ms int

Capture timeout in milliseconds

20000
enable_intensity bool

Whether to capture intensity data

True
enable_disparity bool

Whether to capture disparity data

True
calibrate_disparity bool

Whether to apply calibration to disparity

True

Returns:

Type Description
StereoGrabResult

StereoGrabResult containing captured data

Raises:

Type Description
CameraConnectionError

If camera not opened

CameraCaptureError

If capture fails

configure async
configure(**params) -> None

Configure camera parameters.

Parameters:

Name Type Description Default
**params

Parameter name-value pairs to configure. Special parameters:
- trigger_mode: "continuous" or "trigger"
- depth_range: tuple of (min_depth, max_depth)
- illumination_mode: "AlwaysActive" or "AlternateActive"
- binning: tuple of (horizontal, vertical)
- depth_quality: "Full", "High", "Normal", or "Low"
All other parameters are passed directly to the camera.

{}

Raises:

Type Description
CameraConnectionError

If camera not opened

CameraConfigurationError

If configuration fails

set_depth_range async
set_depth_range(min_depth: float, max_depth: float) -> None

Set depth measurement range.

Parameters:

Name Type Description Default
min_depth float

Minimum depth in meters (e.g., 0.3)

required
max_depth float

Maximum depth in meters (e.g., 5.0)

required

Raises:

Type Description
CameraConfigurationError

If configuration fails

set_illumination_mode async
set_illumination_mode(mode: str) -> None

Set illumination mode.

Parameters:

Name Type Description Default
mode str

'AlwaysActive' (low latency) or 'AlternateActive' (clean intensity)

required

Raises:

Type Description
CameraConfigurationError

If configuration fails

set_binning async
set_binning(horizontal: int = 2, vertical: int = 2) -> None

Enable binning for latency reduction.

Parameters:

Name Type Description Default
horizontal int

Horizontal binning factor (typically 2)

2
vertical int

Vertical binning factor (typically 2)

2
Note

When using binning for low latency, consider also setting depth quality to "Full" using set_depth_quality("Full").

Raises:

Type Description
CameraConfigurationError

If configuration fails

set_depth_quality async
set_depth_quality(quality: str) -> None

Set depth quality level.

Parameters:

Name Type Description Default
quality str

Depth quality setting. Common values:
- "Full": Highest quality, recommended with binning
- "Normal": Standard quality
- "Low": Lower quality, faster processing

required
Note

Setting quality to "Full" with binning reduces latency while maintaining depth quality. This is recommended for low-latency applications.

Raises:

Type Description
CameraConfigurationError

If configuration fails

Example
Low latency configuration

await camera.set_binning(2, 2)
await camera.set_depth_quality("Full")

set_pixel_format async
set_pixel_format(format: str) -> None

Set pixel format for intensity component.

Parameters:

Name Type Description Default
format str

Pixel format ("RGB8", "Mono8", etc.)

required

Raises:

Type Description
CameraConnectionError

If camera not opened

CameraConfigurationError

If format not available or configuration fails

Example

await camera.set_pixel_format("Mono8") # Force grayscale

set_exposure_time async
set_exposure_time(microseconds: float) -> None

Set exposure time in microseconds.

Parameters:

Name Type Description Default
microseconds float

Exposure time in microseconds (e.g., 5000 = 5ms)

required

Raises:

Type Description
CameraConfigurationError

If configuration fails

Example

await camera.set_exposure_time(5000) # 5ms exposure

set_gain async
set_gain(gain: float) -> None

Set camera gain.

Parameters:

Name Type Description Default
gain float

Gain value (typically 0.0 to 24.0, camera-dependent)

required

Raises:

Type Description
CameraConfigurationError

If configuration fails

Example

await camera.set_gain(2.0)

get_exposure_time async
get_exposure_time() -> float

Get current exposure time in microseconds.

Returns:

Type Description
float

Current exposure time in microseconds

Raises:

Type Description
CameraConnectionError

If camera not opened

Example

exposure = await camera.get_exposure_time()
print(f"Current exposure: {exposure}us")

get_gain async
get_gain() -> float

Get current camera gain.

Returns:

Type Description
float

Current gain value

Raises:

Type Description
CameraConnectionError

If camera not opened

Example

gain = await camera.get_gain()
print(f"Current gain: {gain}")

get_depth_quality async
get_depth_quality() -> str

Get current depth quality setting.

Returns:

Type Description
str

Current depth quality level (e.g., "Full", "Normal", "Low")

Raises:

Type Description
CameraConnectionError

If camera not opened

Example

quality = await camera.get_depth_quality()
print(f"Depth quality: {quality}")

get_pixel_format async
get_pixel_format() -> str

Get current pixel format.

Returns:

Type Description
str

Current pixel format (e.g., "RGB8", "Mono8", "Coord3D_C16")

Raises:

Type Description
CameraConnectionError

If camera not opened

Example

format = await camera.get_pixel_format()
print(f"Pixel format: {format}")

get_binning async
get_binning() -> tuple[int, int]

Get current binning settings.

Returns:

Type Description
tuple[int, int]

Tuple of (horizontal_binning, vertical_binning)

Raises:

Type Description
CameraConnectionError

If camera not opened

Example

h_bin, v_bin = await camera.get_binning()
print(f"Binning: {h_bin}x{v_bin}")

get_illumination_mode async
get_illumination_mode() -> str

Get current illumination mode.

Returns:

Type Description
str

Current illumination mode ("AlwaysActive" or "AlternateActive")

Raises:

Type Description
CameraConnectionError

If camera not opened

Example

mode = await camera.get_illumination_mode()
print(f"Illumination: {mode}")

get_depth_range async
get_depth_range() -> tuple[float, float]

Get current depth measurement range in meters.

Returns:

Type Description
tuple[float, float]

Tuple of (min_depth, max_depth) in meters

Raises:

Type Description
CameraConnectionError

If camera not opened

Example

min_d, max_d = await camera.get_depth_range()
print(f"Depth range: {min_d}m - {max_d}m")

set_trigger_mode async
set_trigger_mode(mode: str) -> None

Set trigger mode (simplified interface).

Parameters:

Name Type Description Default
mode str

Trigger mode ("continuous" or "trigger"):
- "continuous": Free-running continuous acquisition (TriggerMode=Off)
- "trigger": Software-triggered acquisition (TriggerMode=On, TriggerSource=Software)

required

Raises:

Type Description
CameraConnectionError

If camera not opened

CameraConfigurationError

If invalid mode or configuration fails

Examples:

>>> await backend.set_trigger_mode("continuous")  # Free running
>>> await backend.set_trigger_mode("trigger")     # Software triggered
get_trigger_mode async
get_trigger_mode() -> str

Get current trigger mode (simplified interface).

Returns:

Type Description
str

"continuous" if TriggerMode is Off, "trigger" if TriggerMode is On with Software source

Raises:

Type Description
CameraConnectionError

If camera not opened

Examples:

>>> mode = await backend.get_trigger_mode()
>>> print(f"Current mode: {mode}")
get_trigger_modes async
get_trigger_modes() -> List[str]

Get available trigger modes.

Returns:

Type Description
List[str]

List of supported trigger modes: ["continuous", "trigger"]

Note

This provides a simplified interface. The underlying camera supports additional modes (SingleFrame, MultiFrame, hardware triggers) accessible via direct configure() calls if needed.

start_grabbing async
start_grabbing() -> None

Start grabbing frames.

This must be called before execute_trigger() in software trigger mode.

Raises:

Type Description
CameraConnectionError

If camera not opened

execute_trigger async
execute_trigger() -> None

Execute software trigger.

Note: In software trigger mode, ensure start_grabbing() is called first, or call capture() once before the trigger loop to start grabbing.

Raises:

Type Description
CameraConnectionError

If camera not opened

CameraConfigurationError

If trigger execution fails

capture_point_cloud async
capture_point_cloud(
    include_colors: bool = True,
    include_confidence: bool = False,
    downsample_factor: int = 1,
    timeout_ms: int = 20000,
) -> PointCloudData

Capture and generate 3D point cloud.

Parameters:

Name Type Description Default
include_colors bool

Whether to include color information from intensity

True
include_confidence bool

Whether to include confidence values (not supported)

False
downsample_factor int

Downsampling factor (1 = no downsampling)

1
timeout_ms int

Capture timeout in milliseconds

20000

Returns:

Type Description
PointCloudData

PointCloudData with 3D points and optional colors

Raises:

Type Description
CameraConnectionError

If camera not opened

CameraCaptureError

If capture fails

CameraConfigurationError

If calibration not available

close async
close() -> None

Close camera and release resources.

stereo_camera_backend

Abstract base class for stereo camera backends.

This module defines the async interface that all stereo camera backends must implement to ensure consistent behavior across different stereo camera types and manufacturers.

Following the same architectural pattern as CameraBackend for consistency.

StereoCameraBackend
StereoCameraBackend(
    serial_number: Optional[str] = None, op_timeout_s: float = 30.0
)

Bases: MindtraceABC

Abstract base class for all stereo camera implementations.

This class defines the async interface that all stereo camera backends must implement to ensure consistent behavior across different stereo camera types and manufacturers. Uses async-first design consistent with CameraBackend and PLC backends.

Attributes:

Name Type Description
serial_number

Unique identifier for the camera

calibration Optional[StereoCalibrationData]

Factory calibration parameters

is_open bool

Camera connection status

Implementation Guide
  • Offload blocking SDK calls from async methods: Use asyncio.to_thread for simple cases or loop.run_in_executor with a per-instance single-thread executor when the SDK requires thread affinity.
  • Thread affinity: Many vendor SDKs are safest when all calls originate from one OS thread. Prefer a dedicated single-thread executor created during initialize() and shut down in close() to serialize SDK access without blocking the event loop.
  • Timeouts and cancellation: Prefer SDK-native timeouts where available. Otherwise, wrap awaited futures with asyncio.wait_for to bound runtime. Note that cancelling an await does not stop the underlying thread function; design idempotent/short tasks when possible.
  • Event loop hygiene: Never call blocking functions (e.g., long SDK calls, time.sleep) directly in async methods. Replace sleeps with await asyncio.sleep or run blocking work in the executor.
  • Sync helpers: Lightweight getters/setters that do not touch hardware may remain synchronous. If a "getter" calls into the SDK, route it through the executor to avoid blocking.
  • Errors: Map SDK-specific exceptions to the domain exceptions in mindtrace.hardware.core.exceptions with clear, contextual messages.
  • Cleanup: Ensure resources (device handles, executors, buffers) are released in close(). __aenter__/__aexit__ already call initialize/close for async contexts.
Example Implementation

class MyStereoCameraBackend(StereoCameraBackend):
    async def initialize(self) -> bool:
        # Connect to camera, load calibration
        return True

    async def capture(self, ...) -> StereoGrabResult:
        # Capture stereo data
        return StereoGrabResult(...)
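The executor and timeout bullets in the Implementation Guide can be sketched as a minimal self-contained backend. Here `sdk_grab` and `SketchBackend` are hypothetical stand-ins for a blocking vendor SDK call and a backend class; they are not part of the mindtrace API:

```python
import asyncio
import time
from concurrent.futures import ThreadPoolExecutor
from typing import Optional


def sdk_grab(timeout_ms: int) -> bytes:
    """Stand-in for a blocking vendor SDK call (illustrative only)."""
    time.sleep(0.01)  # simulate blocking SDK work off the event loop
    return b"frame"


class SketchBackend:
    def __init__(self, op_timeout_s: float = 30.0) -> None:
        self._op_timeout_s = op_timeout_s
        self._executor: Optional[ThreadPoolExecutor] = None

    async def initialize(self) -> bool:
        # Single-thread executor serializes all SDK calls on one OS thread
        # (thread affinity) without blocking the event loop.
        self._executor = ThreadPoolExecutor(max_workers=1)
        return True

    async def capture(self, timeout_ms: int = 20000) -> bytes:
        loop = asyncio.get_running_loop()
        future = loop.run_in_executor(self._executor, sdk_grab, timeout_ms)
        # Bound runtime; note the worker thread is not interrupted on timeout.
        return await asyncio.wait_for(future, timeout=self._op_timeout_s)

    async def close(self) -> None:
        # Release the executor during cleanup, as the guide recommends.
        if self._executor is not None:
            self._executor.shutdown(wait=True)
            self._executor = None


async def main() -> bytes:
    backend = SketchBackend()
    await backend.initialize()
    try:
        return await backend.capture()
    finally:
        await backend.close()
```

A real backend would map SDK exceptions to the domain exceptions in mindtrace.hardware.core.exceptions inside `capture`.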

Initialize base stereo camera backend.

Parameters:

Name Type Description Default
serial_number Optional[str]

Unique identifier for the camera (auto-discovered if None)

None
op_timeout_s float

Default timeout in seconds for SDK operations

30.0
name abstractmethod property
name: str

Get camera name in format 'BackendType:serial_number'.

is_open property
is_open: bool

Check if camera is open.

calibration property
calibration: Optional[StereoCalibrationData]

Get calibration data.

discover abstractmethod staticmethod
discover() -> List[str]

Discover available stereo cameras.

Returns:

Type Description
List[str]

List of serial numbers or identifiers for available cameras

Raises:

Type Description
SDKNotAvailableError

If required SDK is not available

discover_async async classmethod
discover_async() -> List[str]

Async wrapper for discover() - runs discovery in threadpool.

Default implementation runs discover() in a thread. Override if your SDK provides native async discovery.

Returns:

Type Description
List[str]

List of serial numbers for available cameras

discover_detailed staticmethod
discover_detailed() -> List[Dict[str, str]]

Discover cameras with detailed information.

Returns:

Type Description
List[Dict[str, str]]

List of dictionaries containing camera information (serial_number, model, etc.)

discover_detailed_async async classmethod
discover_detailed_async() -> List[Dict[str, str]]

Async wrapper for discover_detailed().

initialize abstractmethod async
initialize() -> bool

Initialize camera connection and load calibration.

This method should:
1. Connect to the camera hardware
2. Load factory calibration parameters
3. Apply default configuration

Returns:

Type Description
bool

True if initialization successful

Raises:

Type Description
CameraNotFoundError

If camera cannot be found

CameraInitializationError

If camera initialization fails

CameraConnectionError

If camera connection fails

capture abstractmethod async
capture(
    timeout_ms: int = 20000,
    enable_intensity: bool = True,
    enable_disparity: bool = True,
    calibrate_disparity: bool = True,
) -> StereoGrabResult

Capture stereo data with multiple components.

Parameters:

Name Type Description Default
timeout_ms int

Capture timeout in milliseconds

20000
enable_intensity bool

Whether to capture intensity/texture image

True
enable_disparity bool

Whether to capture disparity map

True
calibrate_disparity bool

Whether to apply calibration to raw disparity

True

Returns:

Type Description
StereoGrabResult

StereoGrabResult containing captured data

Raises:

Type Description
CameraConnectionError

If camera not opened

CameraCaptureError

If capture fails

CameraTimeoutError

If capture times out

close abstractmethod async
close() -> None

Close camera and release resources.

This method should:
1. Stop any ongoing acquisition
2. Release hardware handles
3. Clean up executors/threads if used

get_calibration async
get_calibration() -> StereoCalibrationData

Get factory calibration parameters from camera.

Returns:

Type Description
StereoCalibrationData

StereoCalibrationData with factory calibration

Raises:

Type Description
CameraConnectionError

If camera not opened

CameraConfigurationError

If calibration cannot be read

capture_point_cloud async
capture_point_cloud(
    include_colors: bool = True, downsample_factor: int = 1
) -> PointCloudData

Capture and generate 3D point cloud.

Default implementation captures stereo data and generates point cloud using calibration parameters. Override for backend-specific optimization.

Parameters:

Name Type Description Default
include_colors bool

Whether to include color information

True
downsample_factor int

Downsampling factor (1 = no downsampling)

1

Returns:

Type Description
PointCloudData

PointCloudData with 3D points and optional attributes

Raises:

Type Description
CameraConnectionError

If camera not opened

CameraCaptureError

If capture fails

CameraConfigurationError

If calibration not available
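The default implementation described above (generating a point cloud from disparity using the calibration's Q matrix) can be sketched with NumPy, assuming the common homogeneous reprojection [X, Y, Z, W]^T = Q @ [u, v, d, 1]^T followed by a perspective divide. `reproject_disparity` is illustrative; sign conventions of Q vary by calibration:

```python
import numpy as np


def reproject_disparity(disparity: np.ndarray, Q: np.ndarray) -> np.ndarray:
    """Convert a calibrated disparity map to an (N, 3) point array in meters."""
    h, w = disparity.shape
    v, u = np.mgrid[0:h, 0:w]
    valid = disparity > 0  # keep only pixels with a measured disparity
    pix = np.stack(
        [u[valid], v[valid], disparity[valid], np.ones(valid.sum())], axis=0
    )
    homog = Q @ pix                    # (4, N) homogeneous coordinates
    points = (homog[:3] / homog[3]).T  # perspective divide -> (N, 3)
    return points
```

Downsampling, as exposed by the `downsample_factor` parameter, would simply stride over the valid pixels before reprojection.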

configure async
configure(**params) -> None

Configure camera parameters.

Parameters:

Name Type Description Default
**params

Parameter name-value pairs

{}

Raises:

Type Description
CameraConnectionError

If camera not opened

CameraConfigurationError

If configuration fails

set_exposure_time async
set_exposure_time(microseconds: float) -> None

Set exposure time in microseconds.

get_exposure_time async
get_exposure_time() -> float

Get current exposure time in microseconds.

set_gain async
set_gain(gain: float) -> None

Set camera gain.

get_gain async
get_gain() -> float

Get current camera gain.

set_depth_range async
set_depth_range(min_depth: float, max_depth: float) -> None

Set depth measurement range in meters.

get_depth_range async
get_depth_range() -> Tuple[float, float]

Get current depth measurement range in meters.

set_depth_quality async
set_depth_quality(quality: str) -> None

Set depth quality level (e.g., 'Full', 'Normal', 'Low').

get_depth_quality async
get_depth_quality() -> str

Get current depth quality level.

set_illumination_mode async
set_illumination_mode(mode: str) -> None

Set illumination mode ('AlwaysActive' or 'AlternateActive').

get_illumination_mode async
get_illumination_mode() -> str

Get current illumination mode.

set_binning async
set_binning(horizontal: int = 2, vertical: int = 2) -> None

Enable binning for latency reduction.

get_binning async
get_binning() -> Tuple[int, int]

Get current binning settings (horizontal, vertical).

set_pixel_format async
set_pixel_format(format: str) -> None

Set pixel format for intensity component.

get_pixel_format async
get_pixel_format() -> str

Get current pixel format.

set_trigger_mode async
set_trigger_mode(mode: str) -> None

Set trigger mode ('continuous' or 'trigger').

get_trigger_mode async
get_trigger_mode() -> str

Get current trigger mode.

get_trigger_modes async
get_trigger_modes() -> List[str]

Get available trigger modes.

start_grabbing async
start_grabbing() -> None

Start grabbing frames (required before execute_trigger in trigger mode).

execute_trigger async
execute_trigger() -> None

Execute software trigger.

core

Core stereo camera interfaces and data models.

AsyncStereoCamera
AsyncStereoCamera(backend)

Bases: Mindtrace

Async stereo camera interface.

Provides high-level stereo camera operations including multi-component capture and 3D point cloud generation.

Initialize async stereo camera.

Parameters:

Name Type Description Default
backend

Backend instance (e.g., BaslerStereoAceBackend)

required
name property
name: str

Get camera name.

calibration property
calibration: Optional[StereoCalibrationData]

Get calibration data.

is_open property
is_open: bool

Check if camera is open.

open async classmethod
open(name: Optional[str] = None) -> 'AsyncStereoCamera'

Open and initialize a stereo camera.

Parameters:

Name Type Description Default
name Optional[str]

Camera identifier. Format: "BaslerStereoAce:serial_number" If None, opens first available Stereo ace camera.

None

Returns:

Type Description
'AsyncStereoCamera'

Initialized AsyncStereoCamera instance

Raises:

Type Description
CameraNotFoundError

If camera not found

CameraConnectionError

If connection fails

Examples:

>>> camera = await AsyncStereoCamera.open()
>>> camera = await AsyncStereoCamera.open("BaslerStereoAce:40644640")
initialize async
initialize() -> bool

Initialize camera and load calibration.

Returns:

Type Description
bool

True if initialization successful

Note

Usually not needed as open() handles initialization

close async
close() -> None

Close camera and release resources.

capture async
capture(
    enable_intensity: bool = True,
    enable_disparity: bool = True,
    calibrate_disparity: bool = True,
    timeout_ms: int = 20000,
) -> StereoGrabResult

Capture multi-component stereo data.

Parameters:

Name Type Description Default
enable_intensity bool

Whether to capture intensity image

True
enable_disparity bool

Whether to capture disparity map

True
calibrate_disparity bool

Whether to apply calibration to disparity

True
timeout_ms int

Capture timeout in milliseconds

20000

Returns:

Type Description
StereoGrabResult

StereoGrabResult containing captured data

Raises:

Type Description
CameraConnectionError

If camera not opened

CameraCaptureError

If capture fails

Examples:

>>> result = await camera.capture()
>>> print(f"Intensity: {result.intensity.shape}")
>>> print(f"Disparity: {result.disparity.shape}")
capture_point_cloud async
capture_point_cloud(
    include_colors: bool = True, downsample_factor: int = 1
) -> PointCloudData

Capture and generate 3D point cloud.

Parameters:

Name Type Description Default
include_colors bool

Whether to include color information from intensity

True
downsample_factor int

Downsampling factor (1 = no downsampling)

1

Returns:

Type Description
PointCloudData

PointCloudData with 3D points and optional colors

Raises:

Type Description
CameraConnectionError

If camera not opened

CameraCaptureError

If capture fails

CameraConfigurationError

If calibration not available

Examples:

>>> point_cloud = await camera.capture_point_cloud()
>>> point_cloud.save_ply("output.ply")
configure async
configure(**params) -> None

Configure camera parameters.

Parameters:

Name Type Description Default
**params

Parameter name-value pairs

{}

Raises:

Type Description
CameraConnectionError

If camera not opened

CameraConfigurationError

If configuration fails

Examples:

>>> await camera.configure(ExposureTime=15000, Gain=2.0)
set_depth_range async
set_depth_range(min_depth: float, max_depth: float) -> None

Set depth measurement range in meters.

Parameters:

Name Type Description Default
min_depth float

Minimum depth (e.g., 0.3 meters)

required
max_depth float

Maximum depth (e.g., 5.0 meters)

required

Raises:

Type Description
CameraConfigurationError

If configuration fails

Examples:

>>> await camera.set_depth_range(0.5, 3.0)
set_illumination_mode async
set_illumination_mode(mode: str) -> None

Set illumination mode.

Parameters:

Name Type Description Default
mode str

'AlwaysActive' (low latency) or 'AlternateActive' (clean intensity)

required

Raises:

Type Description
CameraConfigurationError

If invalid mode or configuration fails

Examples:

>>> await camera.set_illumination_mode("AlternateActive")
set_binning async
set_binning(horizontal: int = 2, vertical: int = 2) -> None

Enable binning for latency reduction.

Binning reduces network transfer and computation.

Parameters:

Name Type Description Default
horizontal int

Horizontal binning factor (typically 2)

2
vertical int

Vertical binning factor (typically 2)

2
Note

When using binning for low latency, consider also setting depth quality to "Full" using set_depth_quality("Full").

Raises:

Type Description
CameraConfigurationError

If configuration fails

Examples:

>>> await camera.set_binning(2, 2)
>>> await camera.set_depth_quality("Full")  # Recommended for low latency
set_depth_quality async
set_depth_quality(quality: str) -> None

Set depth quality level.

Parameters:

Name Type Description Default
quality str

Depth quality setting. Common values:
- "Full": Highest quality, recommended with binning
- "Normal": Standard quality
- "Low": Lower quality, faster processing

required

Raises:

Type Description
CameraConfigurationError

If configuration fails

Examples:

>>> # Low latency configuration
>>> await camera.set_binning(2, 2)
>>> await camera.set_depth_quality("Full")
set_pixel_format async
set_pixel_format(format: str) -> None

Set pixel format for intensity component.

Parameters:

Name Type Description Default
format str

Pixel format ("RGB8", "Mono8", etc.)

required

Raises:

Type Description
CameraConnectionError

If camera not opened

CameraConfigurationError

If format not available or configuration fails

Examples:

>>> await camera.set_pixel_format("Mono8")  # Force grayscale
set_exposure_time async
set_exposure_time(microseconds: float) -> None

Set exposure time in microseconds.

Parameters:

Name Type Description Default
microseconds float

Exposure time in microseconds (e.g., 5000 = 5ms)

required

Raises:

Type Description
CameraConfigurationError

If configuration fails

Examples:

>>> await camera.set_exposure_time(5000)  # 5ms exposure
set_gain async
set_gain(gain: float) -> None

Set camera gain.

Parameters:

Name Type Description Default
gain float

Gain value (typically 0.0 to 24.0, camera-dependent)

required

Raises:

Type Description
CameraConfigurationError

If configuration fails

Examples:

>>> await camera.set_gain(2.0)
get_exposure_time async
get_exposure_time() -> float

Get current exposure time in microseconds.

Returns:

Type Description
float

Current exposure time in microseconds

Raises:

Type Description
CameraConnectionError

If camera not opened

Examples:

>>> exposure = await camera.get_exposure_time()
>>> print(f"Exposure: {exposure}μs")
get_gain async
get_gain() -> float

Get current camera gain.

Returns:

Type Description
float

Current gain value

Raises:

Type Description
CameraConnectionError

If camera not opened

Examples:

>>> gain = await camera.get_gain()
>>> print(f"Gain: {gain}")
get_depth_quality async
get_depth_quality() -> str

Get current depth quality setting.

Returns:

Type Description
str

Current depth quality level (e.g., "Full", "Normal", "Low")

Raises:

Type Description
CameraConnectionError

If camera not opened

Examples:

>>> quality = await camera.get_depth_quality()
>>> print(f"Quality: {quality}")
get_pixel_format async
get_pixel_format() -> str

Get current pixel format.

Returns:

Type Description
str

Current pixel format (e.g., "RGB8", "Mono8", "Coord3D_C16")

Raises:

Type Description
CameraConnectionError

If camera not opened

Examples:

>>> format = await camera.get_pixel_format()
>>> print(f"Format: {format}")
get_binning async
get_binning() -> tuple[int, int]

Get current binning settings.

Returns:

Type Description
tuple[int, int]

Tuple of (horizontal_binning, vertical_binning)

Raises:

Type Description
CameraConnectionError

If camera not opened

Examples:

>>> h_bin, v_bin = await camera.get_binning()
>>> print(f"Binning: {h_bin}x{v_bin}")
get_illumination_mode async
get_illumination_mode() -> str

Get current illumination mode.

Returns:

Type Description
str

Current illumination mode ("AlwaysActive" or "AlternateActive")

Raises:

Type Description
CameraConnectionError

If camera not opened

Examples:

>>> mode = await camera.get_illumination_mode()
>>> print(f"Illumination: {mode}")
get_depth_range async
get_depth_range() -> tuple[float, float]

Get current depth measurement range in meters.

Returns:

Type Description
tuple[float, float]

Tuple of (min_depth, max_depth) in meters

Raises:

Type Description
CameraConnectionError

If camera not opened

Examples:

>>> min_d, max_d = await camera.get_depth_range()
>>> print(f"Range: {min_d}m - {max_d}m")
set_trigger_mode async
set_trigger_mode(mode: str) -> None

Set trigger mode (simplified interface).

Parameters:

Name Type Description Default
mode str

Trigger mode ("continuous" or "trigger"):
- "continuous": Free-running continuous acquisition
- "trigger": Software-triggered acquisition

required

Raises:

Type Description
CameraConnectionError

If camera not opened

CameraConfigurationError

If invalid mode or configuration fails

Examples:

>>> await camera.set_trigger_mode("continuous")  # Free running
>>> await camera.set_trigger_mode("trigger")     # Software triggered
get_trigger_mode async
get_trigger_mode() -> str

Get current trigger mode (simplified interface).

Returns:

Type Description
str

"continuous" or "trigger"

Raises:

Type Description
CameraConnectionError

If camera not opened

Examples:

>>> mode = await camera.get_trigger_mode()
>>> print(f"Current mode: {mode}")
get_trigger_modes async
get_trigger_modes() -> list[str]

Get available trigger modes.

Returns:

Type Description
list[str]

List of supported trigger modes: ["continuous", "trigger"]

Examples:

>>> modes = await camera.get_trigger_modes()
>>> print(f"Available modes: {modes}")
start_grabbing async
start_grabbing() -> None

Start grabbing frames.

Must be called after enable_software_trigger() and before execute_trigger().

Raises:

Type Description
CameraConnectionError

If camera not opened

Examples:

>>> await camera.enable_software_trigger()
>>> await camera.start_grabbing()
>>> for i in range(10):
...     await camera.execute_trigger()
...     result = await camera.capture()
execute_trigger async
execute_trigger() -> None

Execute software trigger.

Triggers a frame capture when in software trigger mode. Note: start_grabbing() must be called first after enabling software trigger.

Raises:

Type Description
CameraConnectionError

If camera not opened

CameraConfigurationError

If trigger execution fails

Examples:

>>> await camera.enable_software_trigger()
>>> await camera.start_grabbing()
>>> for i in range(10):
...     await camera.execute_trigger()
...     result = await camera.capture()
PointCloudData dataclass
PointCloudData(
    points: ndarray,
    colors: Optional[ndarray] = None,
    num_points: int = 0,
    has_colors: bool = False,
)

3D point cloud data with optional color information.

Attributes:

Name Type Description
points ndarray

Array of 3D points (N, 3) - (x, y, z) in meters

colors Optional[ndarray]

Optional array of RGB colors (N, 3) - values in [0, 1]

num_points int

Number of valid points

has_colors bool

Flag indicating if color information is present

save_ply
save_ply(path: str, binary: bool = True) -> None

Save point cloud as PLY file.

Parameters:

Name Type Description Default
path str

Output file path

required
binary bool

If True, save in binary format; otherwise ASCII

True

Raises:

Type Description
ImportError

If plyfile is not installed

downsample
downsample(factor: int) -> 'PointCloudData'

Downsample point cloud by given factor.

Parameters:

Name Type Description Default
factor int

Downsampling factor (e.g., 2 = keep every 2nd point)

required

Returns:

Type Description
'PointCloudData'

New PointCloudData with downsampled points
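The strided downsampling described above can be sketched with NumPy; `downsample_points` is an illustrative free function over the same (N, 3) arrays, not the dataclass method itself:

```python
import numpy as np


def downsample_points(points: np.ndarray, colors, factor: int):
    """Keep every `factor`-th point; colors follow the same indices."""
    if factor <= 1:
        return points, colors
    idx = np.arange(0, len(points), factor)
    return points[idx], None if colors is None else colors[idx]
```

Strided selection preserves the original point order; a voxel-grid filter would give more uniform spatial density at higher cost.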

StereoCalibrationData dataclass
StereoCalibrationData(
    baseline: float,
    focal_length: float,
    principal_point_u: float,
    principal_point_v: float,
    scale3d: float,
    offset3d: float,
    Q: ndarray,
)

Factory calibration parameters for stereo camera.

These parameters are provided by the camera manufacturer and used for 3D reconstruction from disparity maps.

Attributes:

Name Type Description
baseline float

Stereo baseline in meters (distance between camera pair)

focal_length float

Focal length in pixels

principal_point_u float

Principal point U coordinate in pixels

principal_point_v float

Principal point V coordinate in pixels

scale3d float

Scale factor for disparity conversion

offset3d float

Offset for disparity conversion

Q ndarray

4x4 reprojection matrix for point cloud generation

from_camera_params classmethod
from_camera_params(params: dict) -> 'StereoCalibrationData'

Create calibration data from camera parameter dictionary.

Parameters:

Name Type Description Default
params dict

Dictionary containing calibration parameters: - Scan3dBaseline: Baseline in meters - Scan3dFocalLength: Focal length in pixels - Scan3dPrincipalPointU: Principal point U in pixels - Scan3dPrincipalPointV: Principal point V in pixels - Scan3dCoordinateScale: Scale factor - Scan3dCoordinateOffset: Offset

required

Returns:

Type Description
'StereoCalibrationData'

StereoCalibrationData instance

calibrate_disparity
calibrate_disparity(disparity: ndarray) -> np.ndarray

Apply calibration to raw disparity map.

Parameters:

Name Type Description Default
disparity ndarray

Raw disparity map (uint16)

required

Returns:

Type Description
ndarray

Calibrated disparity map (float32)
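
The calibration parameters connect disparity to depth through standard stereo geometry: depth = focal_length × baseline / disparity. The sketch below shows that relation with illustrative numbers; the raw-to-calibrated mapping shown (raw × scale3d + offset3d) is an assumption about the backend's convention, so check calibrate_disparity for the exact formula:

```python
import numpy as np

# Hypothetical calibration values, not from a real camera.
baseline = 0.05        # meters
focal_length = 1100.0  # pixels
scale3d, offset3d = 0.0625, 0.0

raw = np.array([800, 1600], dtype=np.uint16)              # raw disparity (uint16)
calibrated = raw.astype(np.float32) * scale3d + offset3d  # assumed mapping -> [50., 100.]

# Standard stereo relation: larger disparity -> closer point.
depth = np.where(calibrated > 0, focal_length * baseline / calibrated, np.inf)
print(depth)  # [1.1  0.55] meters
```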

StereoGrabResult dataclass
StereoGrabResult(
    intensity: Optional[ndarray],
    disparity: Optional[ndarray],
    timestamp: float,
    frame_number: int,
    disparity_calibrated: Optional[ndarray] = None,
    has_intensity: bool = True,
    has_disparity: bool = True,
)

Result from stereo camera capture containing multi-component data.

Attributes:

Name Type Description
intensity Optional[ndarray]

Intensity image - RGB8 (H, W, 3) or Mono8 (H, W)

disparity Optional[ndarray]

Disparity map - uint16 (H, W)

timestamp float

Capture timestamp in seconds

frame_number int

Sequential frame number

disparity_calibrated Optional[ndarray]

Calibrated disparity map - float32 (H, W), optional

has_intensity bool

Flag indicating if intensity data is present

has_disparity bool

Flag indicating if disparity data is present

intensity_shape property
intensity_shape: tuple

Get shape of intensity image.

disparity_shape property
disparity_shape: tuple

Get shape of disparity map.

is_color_intensity property
is_color_intensity: bool

Check if intensity is color (RGB) vs grayscale.
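
Because components can be disabled at capture time, consumers should check the has_* flags before touching the arrays. A minimal defensive sketch, using SimpleNamespace as a stand-in for a real StereoGrabResult:

```python
from types import SimpleNamespace

import numpy as np

def summarize(result) -> str:
    # Only touch a component after its has_* flag confirms it is present.
    parts = []
    if result.has_intensity and result.intensity is not None:
        kind = "RGB" if result.intensity.ndim == 3 else "Mono"
        parts.append(f"intensity {kind} {result.intensity.shape}")
    if result.has_disparity and result.disparity is not None:
        parts.append(f"disparity {result.disparity.shape}")
    return f"frame {result.frame_number}: " + ", ".join(parts)

fake = SimpleNamespace(
    intensity=np.zeros((480, 640, 3), dtype=np.uint8),
    disparity=np.zeros((480, 640), dtype=np.uint16),
    frame_number=7,
    has_intensity=True,
    has_disparity=True,
)
print(summarize(fake))
```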

StereoCamera
StereoCamera(
    async_camera: Optional[AsyncStereoCamera] = None,
    loop: Optional[AbstractEventLoop] = None,
    name: Optional[str] = None,
    **kwargs
)

Bases: Mindtrace

Synchronous wrapper around AsyncStereoCamera.

All operations are executed on a background event loop. This provides a simple synchronous API for stereo camera operations.

Create a synchronous stereo camera wrapper.

Parameters:

Name Type Description Default
async_camera Optional[AsyncStereoCamera]

Existing AsyncStereoCamera instance

None
loop Optional[AbstractEventLoop]

Event loop to use for async operations

None
name Optional[str]

Camera identifier. Format: "BaslerStereoAce:serial_number" If None, opens first available Stereo ace camera.

None
**kwargs

Additional arguments passed to Mindtrace

{}

Examples:

>>> # Simple usage - opens first available
>>> camera = StereoCamera()
>>> # Open specific camera
>>> camera = StereoCamera(name="BaslerStereoAce:40644640")
>>> # Use existing async camera
>>> async_cam = await AsyncStereoCamera.open()
>>> sync_cam = StereoCamera(async_camera=async_cam, loop=loop)
name property
name: str

Get camera name.

Returns:

Type Description
str

Camera name in format "Backend:serial_number"

calibration property
calibration: Optional[StereoCalibrationData]

Get calibration data.

Returns:

Type Description
Optional[StereoCalibrationData]

StereoCalibrationData if available, None otherwise

is_open property
is_open: bool

Check if camera is open.

Returns:

Type Description
bool

True if camera is open, False otherwise

close
close() -> None

Close camera and release resources.

Examples:

>>> camera = StereoCamera()
>>> # ... use camera ...
>>> camera.close()
capture
capture(
    enable_intensity: bool = True,
    enable_disparity: bool = True,
    calibrate_disparity: bool = True,
    timeout_ms: int = 20000,
) -> StereoGrabResult

Capture multi-component stereo data.

Parameters:

Name Type Description Default
enable_intensity bool

Whether to capture intensity image

True
enable_disparity bool

Whether to capture disparity map

True
calibrate_disparity bool

Whether to apply calibration to disparity

True
timeout_ms int

Capture timeout in milliseconds

20000

Returns:

Type Description
StereoGrabResult

StereoGrabResult containing captured data

Raises:

Type Description
CameraConnectionError

If camera not opened

CameraCaptureError

If capture fails

Examples:

>>> camera = StereoCamera()
>>> result = camera.capture()
>>> print(f"Intensity: {result.intensity.shape}")
>>> print(f"Disparity: {result.disparity.shape}")
>>> camera.close()
capture_point_cloud
capture_point_cloud(
    include_colors: bool = True, downsample_factor: int = 1
) -> PointCloudData

Capture and generate 3D point cloud.

Parameters:

Name Type Description Default
include_colors bool

Whether to include color information from intensity

True
downsample_factor int

Downsampling factor (1 = no downsampling)

1

Returns:

Type Description
PointCloudData

PointCloudData with 3D points and optional colors

Raises:

Type Description
CameraConnectionError

If camera not opened

CameraCaptureError

If capture fails

CameraConfigurationError

If calibration not available

Examples:

>>> camera = StereoCamera()
>>> point_cloud = camera.capture_point_cloud()
>>> print(f"Points: {point_cloud.num_points}")
>>> point_cloud.save_ply("output.ply")
>>> camera.close()
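
Under the hood, point-cloud generation reprojects (u, v, disparity) pixels through the 4×4 Q matrix from the calibration data. The sketch below shows the standard OpenCV-style homogeneous reprojection, [X Y Z W]ᵀ = Q·[u v d 1]ᵀ followed by division by W; the Q values are illustrative, not from a real camera:

```python
import numpy as np

f, B = 1100.0, 0.05            # focal length (px), baseline (m) - hypothetical
cx, cy = 320.0, 240.0          # principal point (px) - hypothetical

# Standard reprojection matrix layout for rectified stereo.
Q = np.array([
    [1, 0, 0,     -cx],
    [0, 1, 0,     -cy],
    [0, 0, 0,       f],
    [0, 0, 1 / B,   0],
])

u, v, d = 320.0, 240.0, 50.0   # a pixel at the principal point
X, Y, Z, W = Q @ np.array([u, v, d, 1.0])
point = np.array([X, Y, Z]) / W
print(point)  # [0. 0. 1.1]: z = f * B / d on the optical axis
```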
configure
configure(**params) -> None

Configure camera parameters.

Parameters:

Name Type Description Default
**params

Parameter name-value pairs

{}

Raises:

Type Description
CameraConnectionError

If camera not opened

CameraConfigurationError

If configuration fails

Examples:

>>> camera = StereoCamera()
>>> camera.configure(ExposureTime=15000, Gain=2.0)
>>> camera.close()
set_depth_range
set_depth_range(min_depth: float, max_depth: float) -> None

Set depth measurement range in meters.

Parameters:

Name Type Description Default
min_depth float

Minimum depth (e.g., 0.3 meters)

required
max_depth float

Maximum depth (e.g., 5.0 meters)

required

Raises:

Type Description
CameraConfigurationError

If configuration fails

Examples:

>>> camera = StereoCamera()
>>> camera.set_depth_range(0.5, 3.0)
>>> camera.close()
set_illumination_mode
set_illumination_mode(mode: str) -> None

Set illumination mode.

Parameters:

Name Type Description Default
mode str

'AlwaysActive' (low latency) or 'AlternateActive' (clean intensity)

required

Raises:

Type Description
CameraConfigurationError

If invalid mode or configuration fails

Examples:

>>> camera = StereoCamera()
>>> camera.set_illumination_mode("AlternateActive")
>>> camera.close()
set_binning
set_binning(horizontal: int = 2, vertical: int = 2) -> None

Enable binning for latency reduction.

Binning reduces network transfer and computation.

Parameters:

Name Type Description Default
horizontal int

Horizontal binning factor (typically 2)

2
vertical int

Vertical binning factor (typically 2)

2
Note

When using binning for low latency, consider also setting depth quality to "Full" using set_depth_quality("Full").

Raises:

Type Description
CameraConfigurationError

If configuration fails

Examples:

>>> camera = StereoCamera()
>>> camera.set_binning(2, 2)
>>> camera.set_depth_quality("Full")  # Recommended for low latency
>>> camera.close()
set_depth_quality
set_depth_quality(quality: str) -> None

Set depth quality level.

Parameters:

Name Type Description Default
quality str

Depth quality setting. Common values: - "Full": Highest quality, recommended with binning - "Normal": Standard quality - "Low": Lower quality, faster processing

required

Raises:

Type Description
CameraConfigurationError

If configuration fails

Examples:

>>> camera = StereoCamera()
>>> # Low latency configuration
>>> camera.set_binning(2, 2)
>>> camera.set_depth_quality("Full")
>>> camera.close()
set_pixel_format
set_pixel_format(format: str) -> None

Set pixel format for intensity component.

Parameters:

Name Type Description Default
format str

Pixel format ("RGB8", "Mono8", etc.)

required

Raises:

Type Description
CameraConnectionError

If camera not opened

CameraConfigurationError

If format not available or configuration fails

Examples:

>>> camera = StereoCamera()
>>> camera.set_pixel_format("Mono8")  # Force grayscale
>>> camera.close()
set_exposure_time
set_exposure_time(microseconds: float) -> None

Set exposure time in microseconds.

Parameters:

Name Type Description Default
microseconds float

Exposure time in microseconds (e.g., 5000 = 5ms)

required

Raises:

Type Description
CameraConfigurationError

If configuration fails

Examples:

>>> camera = StereoCamera()
>>> camera.set_exposure_time(5000)  # 5ms exposure
>>> camera.close()
set_gain
set_gain(gain: float) -> None

Set camera gain.

Parameters:

Name Type Description Default
gain float

Gain value (typically 0.0 to 24.0, camera-dependent)

required

Raises:

Type Description
CameraConfigurationError

If configuration fails

Examples:

>>> camera = StereoCamera()
>>> camera.set_gain(2.0)
>>> camera.close()
get_exposure_time
get_exposure_time() -> float

Get current exposure time in microseconds.

Returns:

Type Description
float

Current exposure time in microseconds

Raises:

Type Description
CameraConnectionError

If camera not opened

Examples:

>>> camera = StereoCamera()
>>> exposure = camera.get_exposure_time()
>>> print(f"Exposure: {exposure}μs")
>>> camera.close()
get_gain
get_gain() -> float

Get current camera gain.

Returns:

Type Description
float

Current gain value

Raises:

Type Description
CameraConnectionError

If camera not opened

Examples:

>>> camera = StereoCamera()
>>> gain = camera.get_gain()
>>> print(f"Gain: {gain}")
>>> camera.close()
get_depth_quality
get_depth_quality() -> str

Get current depth quality setting.

Returns:

Type Description
str

Current depth quality level (e.g., "Full", "Normal", "Low")

Raises:

Type Description
CameraConnectionError

If camera not opened

Examples:

>>> camera = StereoCamera()
>>> quality = camera.get_depth_quality()
>>> print(f"Quality: {quality}")
>>> camera.close()
get_pixel_format
get_pixel_format() -> str

Get current pixel format.

Returns:

Type Description
str

Current pixel format (e.g., "RGB8", "Mono8", "Coord3D_C16")

Raises:

Type Description
CameraConnectionError

If camera not opened

Examples:

>>> camera = StereoCamera()
>>> format = camera.get_pixel_format()
>>> print(f"Format: {format}")
>>> camera.close()
get_binning
get_binning() -> tuple[int, int]

Get current binning settings.

Returns:

Type Description
tuple[int, int]

Tuple of (horizontal_binning, vertical_binning)

Raises:

Type Description
CameraConnectionError

If camera not opened

Examples:

>>> camera = StereoCamera()
>>> h_bin, v_bin = camera.get_binning()
>>> print(f"Binning: {h_bin}x{v_bin}")
>>> camera.close()
get_illumination_mode
get_illumination_mode() -> str

Get current illumination mode.

Returns:

Type Description
str

Current illumination mode ("AlwaysActive" or "AlternateActive")

Raises:

Type Description
CameraConnectionError

If camera not opened

Examples:

>>> camera = StereoCamera()
>>> mode = camera.get_illumination_mode()
>>> print(f"Illumination: {mode}")
>>> camera.close()
get_depth_range
get_depth_range() -> tuple[float, float]

Get current depth measurement range in meters.

Returns:

Type Description
tuple[float, float]

Tuple of (min_depth, max_depth) in meters

Raises:

Type Description
CameraConnectionError

If camera not opened

Examples:

>>> camera = StereoCamera()
>>> min_d, max_d = camera.get_depth_range()
>>> print(f"Range: {min_d}m - {max_d}m")
>>> camera.close()
enable_software_trigger
enable_software_trigger() -> None

Enable software triggering mode.

After enabling, use start_grabbing(), then execute_trigger() to capture frames on demand.

Raises:

Type Description
CameraConfigurationError

If configuration fails

Examples:

>>> camera = StereoCamera()
>>> camera.enable_software_trigger()
>>> camera.start_grabbing()  # Start grabbing first!
>>> for i in range(10):
...     camera.execute_trigger()
...     result = camera.capture()
>>> camera.close()
start_grabbing
start_grabbing() -> None

Start grabbing frames.

Must be called after enable_software_trigger() and before execute_trigger().

Raises:

Type Description
CameraConnectionError

If camera not opened

Examples:

>>> camera = StereoCamera()
>>> camera.enable_software_trigger()
>>> camera.start_grabbing()
>>> for i in range(10):
...     camera.execute_trigger()
...     result = camera.capture()
>>> camera.close()
execute_trigger
execute_trigger() -> None

Execute software trigger.

Triggers a frame capture when in software trigger mode. Note: after enabling the software trigger, start_grabbing() must be called before triggers can be executed.

Raises:

Type Description
CameraConnectionError

If camera not opened

CameraConfigurationError

If trigger execution fails

Examples:

>>> camera = StereoCamera()
>>> camera.enable_software_trigger()
>>> camera.start_grabbing()
>>> camera.execute_trigger()
>>> result = camera.capture()
>>> camera.close()
async_stereo_camera

Async stereo camera interface providing high-level stereo capture operations.

AsyncStereoCamera
AsyncStereoCamera(backend)

Bases: Mindtrace

Async stereo camera interface.

Provides high-level stereo camera operations including multi-component capture and 3D point cloud generation.

Initialize async stereo camera.

Parameters:

Name Type Description Default
backend

Backend instance (e.g., BaslerStereoAceBackend)

required
name property
name: str

Get camera name.

calibration property
calibration: Optional[StereoCalibrationData]

Get calibration data.

is_open property
is_open: bool

Check if camera is open.

open async classmethod
open(name: Optional[str] = None) -> 'AsyncStereoCamera'

Open and initialize a stereo camera.

Parameters:

Name Type Description Default
name Optional[str]

Camera identifier. Format: "BaslerStereoAce:serial_number" If None, opens first available Stereo ace camera.

None

Returns:

Type Description
'AsyncStereoCamera'

Initialized AsyncStereoCamera instance

Raises:

Type Description
CameraNotFoundError

If camera not found

CameraConnectionError

If connection fails

Examples:

>>> camera = await AsyncStereoCamera.open()
>>> camera = await AsyncStereoCamera.open("BaslerStereoAce:40644640")
initialize async
initialize() -> bool

Initialize camera and load calibration.

Returns:

Type Description
bool

True if initialization successful

Note

Usually not needed as open() handles initialization

close async
close() -> None

Close camera and release resources.

capture async
capture(
    enable_intensity: bool = True,
    enable_disparity: bool = True,
    calibrate_disparity: bool = True,
    timeout_ms: int = 20000,
) -> StereoGrabResult

Capture multi-component stereo data.

Parameters:

Name Type Description Default
enable_intensity bool

Whether to capture intensity image

True
enable_disparity bool

Whether to capture disparity map

True
calibrate_disparity bool

Whether to apply calibration to disparity

True
timeout_ms int

Capture timeout in milliseconds

20000

Returns:

Type Description
StereoGrabResult

StereoGrabResult containing captured data

Raises:

Type Description
CameraConnectionError

If camera not opened

CameraCaptureError

If capture fails

Examples:

>>> result = await camera.capture()
>>> print(f"Intensity: {result.intensity.shape}")
>>> print(f"Disparity: {result.disparity.shape}")
capture_point_cloud async
capture_point_cloud(
    include_colors: bool = True, downsample_factor: int = 1
) -> PointCloudData

Capture and generate 3D point cloud.

Parameters:

Name Type Description Default
include_colors bool

Whether to include color information from intensity

True
downsample_factor int

Downsampling factor (1 = no downsampling)

1

Returns:

Type Description
PointCloudData

PointCloudData with 3D points and optional colors

Raises:

Type Description
CameraConnectionError

If camera not opened

CameraCaptureError

If capture fails

CameraConfigurationError

If calibration not available

Examples:

>>> point_cloud = await camera.capture_point_cloud()
>>> point_cloud.save_ply("output.ply")
configure async
configure(**params) -> None

Configure camera parameters.

Parameters:

Name Type Description Default
**params

Parameter name-value pairs

{}

Raises:

Type Description
CameraConnectionError

If camera not opened

CameraConfigurationError

If configuration fails

Examples:

>>> await camera.configure(ExposureTime=15000, Gain=2.0)
set_depth_range async
set_depth_range(min_depth: float, max_depth: float) -> None

Set depth measurement range in meters.

Parameters:

Name Type Description Default
min_depth float

Minimum depth (e.g., 0.3 meters)

required
max_depth float

Maximum depth (e.g., 5.0 meters)

required

Raises:

Type Description
CameraConfigurationError

If configuration fails

Examples:

>>> await camera.set_depth_range(0.5, 3.0)
set_illumination_mode async
set_illumination_mode(mode: str) -> None

Set illumination mode.

Parameters:

Name Type Description Default
mode str

'AlwaysActive' (low latency) or 'AlternateActive' (clean intensity)

required

Raises:

Type Description
CameraConfigurationError

If invalid mode or configuration fails

Examples:

>>> await camera.set_illumination_mode("AlternateActive")
set_binning async
set_binning(horizontal: int = 2, vertical: int = 2) -> None

Enable binning for latency reduction.

Binning reduces network transfer and computation.

Parameters:

Name Type Description Default
horizontal int

Horizontal binning factor (typically 2)

2
vertical int

Vertical binning factor (typically 2)

2
Note

When using binning for low latency, consider also setting depth quality to "Full" using set_depth_quality("Full").

Raises:

Type Description
CameraConfigurationError

If configuration fails

Examples:

>>> await camera.set_binning(2, 2)
>>> await camera.set_depth_quality("Full")  # Recommended for low latency
set_depth_quality async
set_depth_quality(quality: str) -> None

Set depth quality level.

Parameters:

Name Type Description Default
quality str

Depth quality setting. Common values: - "Full": Highest quality, recommended with binning - "Normal": Standard quality - "Low": Lower quality, faster processing

required

Raises:

Type Description
CameraConfigurationError

If configuration fails

Examples:

>>> # Low latency configuration
>>> await camera.set_binning(2, 2)
>>> await camera.set_depth_quality("Full")
set_pixel_format async
set_pixel_format(format: str) -> None

Set pixel format for intensity component.

Parameters:

Name Type Description Default
format str

Pixel format ("RGB8", "Mono8", etc.)

required

Raises:

Type Description
CameraConnectionError

If camera not opened

CameraConfigurationError

If format not available or configuration fails

Examples:

>>> await camera.set_pixel_format("Mono8")  # Force grayscale
set_exposure_time async
set_exposure_time(microseconds: float) -> None

Set exposure time in microseconds.

Parameters:

Name Type Description Default
microseconds float

Exposure time in microseconds (e.g., 5000 = 5ms)

required

Raises:

Type Description
CameraConfigurationError

If configuration fails

Examples:

>>> await camera.set_exposure_time(5000)  # 5ms exposure
set_gain async
set_gain(gain: float) -> None

Set camera gain.

Parameters:

Name Type Description Default
gain float

Gain value (typically 0.0 to 24.0, camera-dependent)

required

Raises:

Type Description
CameraConfigurationError

If configuration fails

Examples:

>>> await camera.set_gain(2.0)
get_exposure_time async
get_exposure_time() -> float

Get current exposure time in microseconds.

Returns:

Type Description
float

Current exposure time in microseconds

Raises:

Type Description
CameraConnectionError

If camera not opened

Examples:

>>> exposure = await camera.get_exposure_time()
>>> print(f"Exposure: {exposure}μs")
get_gain async
get_gain() -> float

Get current camera gain.

Returns:

Type Description
float

Current gain value

Raises:

Type Description
CameraConnectionError

If camera not opened

Examples:

>>> gain = await camera.get_gain()
>>> print(f"Gain: {gain}")
get_depth_quality async
get_depth_quality() -> str

Get current depth quality setting.

Returns:

Type Description
str

Current depth quality level (e.g., "Full", "Normal", "Low")

Raises:

Type Description
CameraConnectionError

If camera not opened

Examples:

>>> quality = await camera.get_depth_quality()
>>> print(f"Quality: {quality}")
get_pixel_format async
get_pixel_format() -> str

Get current pixel format.

Returns:

Type Description
str

Current pixel format (e.g., "RGB8", "Mono8", "Coord3D_C16")

Raises:

Type Description
CameraConnectionError

If camera not opened

Examples:

>>> format = await camera.get_pixel_format()
>>> print(f"Format: {format}")
get_binning async
get_binning() -> tuple[int, int]

Get current binning settings.

Returns:

Type Description
tuple[int, int]

Tuple of (horizontal_binning, vertical_binning)

Raises:

Type Description
CameraConnectionError

If camera not opened

Examples:

>>> h_bin, v_bin = await camera.get_binning()
>>> print(f"Binning: {h_bin}x{v_bin}")
get_illumination_mode async
get_illumination_mode() -> str

Get current illumination mode.

Returns:

Type Description
str

Current illumination mode ("AlwaysActive" or "AlternateActive")

Raises:

Type Description
CameraConnectionError

If camera not opened

Examples:

>>> mode = await camera.get_illumination_mode()
>>> print(f"Illumination: {mode}")
get_depth_range async
get_depth_range() -> tuple[float, float]

Get current depth measurement range in meters.

Returns:

Type Description
tuple[float, float]

Tuple of (min_depth, max_depth) in meters

Raises:

Type Description
CameraConnectionError

If camera not opened

Examples:

>>> min_d, max_d = await camera.get_depth_range()
>>> print(f"Range: {min_d}m - {max_d}m")
set_trigger_mode async
set_trigger_mode(mode: str) -> None

Set trigger mode (simplified interface).

Parameters:

Name Type Description Default
mode str

Trigger mode ("continuous" or "trigger") - "continuous": Free-running continuous acquisition - "trigger": Software-triggered acquisition

required

Raises:

Type Description
CameraConnectionError

If camera not opened

CameraConfigurationError

If invalid mode or configuration fails

Examples:

>>> await camera.set_trigger_mode("continuous")  # Free running
>>> await camera.set_trigger_mode("trigger")     # Software triggered
get_trigger_mode async
get_trigger_mode() -> str

Get current trigger mode (simplified interface).

Returns:

Type Description
str

"continuous" or "trigger"

Raises:

Type Description
CameraConnectionError

If camera not opened

Examples:

>>> mode = await camera.get_trigger_mode()
>>> print(f"Current mode: {mode}")
get_trigger_modes async
get_trigger_modes() -> list[str]

Get available trigger modes.

Returns:

Type Description
list[str]

List of supported trigger modes: ["continuous", "trigger"]

Examples:

>>> modes = await camera.get_trigger_modes()
>>> print(f"Available modes: {modes}")
start_grabbing async
start_grabbing() -> None

Start grabbing frames.

Must be called after enable_software_trigger() and before execute_trigger().

Raises:

Type Description
CameraConnectionError

If camera not opened

Examples:

>>> await camera.enable_software_trigger()
>>> await camera.start_grabbing()
>>> for i in range(10):
...     await camera.execute_trigger()
...     result = await camera.capture()
execute_trigger async
execute_trigger() -> None

Execute software trigger.

Triggers a frame capture when in software trigger mode. Note: after enabling the software trigger, start_grabbing() must be called before triggers can be executed.

Raises:

Type Description
CameraConnectionError

If camera not opened

CameraConfigurationError

If trigger execution fails

Examples:

>>> await camera.enable_software_trigger()
>>> await camera.start_grabbing()
>>> for i in range(10):
...     await camera.execute_trigger()
...     result = await camera.capture()
models

Data models for stereo camera operations.

This module provides data structures for handling stereo camera data including multi-component capture results, calibration parameters, and 3D point clouds.

StereoGrabResult dataclass
StereoGrabResult(
    intensity: Optional[ndarray],
    disparity: Optional[ndarray],
    timestamp: float,
    frame_number: int,
    disparity_calibrated: Optional[ndarray] = None,
    has_intensity: bool = True,
    has_disparity: bool = True,
)

Result from stereo camera capture containing multi-component data.

Attributes:

Name Type Description
intensity Optional[ndarray]

Intensity image - RGB8 (H, W, 3) or Mono8 (H, W)

disparity Optional[ndarray]

Disparity map - uint16 (H, W)

timestamp float

Capture timestamp in seconds

frame_number int

Sequential frame number

disparity_calibrated Optional[ndarray]

Calibrated disparity map - float32 (H, W), optional

has_intensity bool

Flag indicating if intensity data is present

has_disparity bool

Flag indicating if disparity data is present

intensity_shape property
intensity_shape: tuple

Get shape of intensity image.

disparity_shape property
disparity_shape: tuple

Get shape of disparity map.

is_color_intensity property
is_color_intensity: bool

Check if intensity is color (RGB) vs grayscale.

StereoCalibrationData dataclass
StereoCalibrationData(
    baseline: float,
    focal_length: float,
    principal_point_u: float,
    principal_point_v: float,
    scale3d: float,
    offset3d: float,
    Q: ndarray,
)

Factory calibration parameters for stereo camera.

These parameters are provided by the camera manufacturer and used for 3D reconstruction from disparity maps.

Attributes:

Name Type Description
baseline float

Stereo baseline in meters (distance between camera pair)

focal_length float

Focal length in pixels

principal_point_u float

Principal point U coordinate in pixels

principal_point_v float

Principal point V coordinate in pixels

scale3d float

Scale factor for disparity conversion

offset3d float

Offset for disparity conversion

Q ndarray

4x4 reprojection matrix for point cloud generation

from_camera_params classmethod
from_camera_params(params: dict) -> 'StereoCalibrationData'

Create calibration data from camera parameter dictionary.

Parameters:

Name Type Description Default
params dict

Dictionary containing calibration parameters: - Scan3dBaseline: Baseline in meters - Scan3dFocalLength: Focal length in pixels - Scan3dPrincipalPointU: Principal point U in pixels - Scan3dPrincipalPointV: Principal point V in pixels - Scan3dCoordinateScale: Scale factor - Scan3dCoordinateOffset: Offset

required

Returns:

Type Description
'StereoCalibrationData'

StereoCalibrationData instance

calibrate_disparity
calibrate_disparity(disparity: ndarray) -> np.ndarray

Apply calibration to raw disparity map.

Parameters:

Name Type Description Default
disparity ndarray

Raw disparity map (uint16)

required

Returns:

Type Description
ndarray

Calibrated disparity map (float32)

PointCloudData dataclass
PointCloudData(
    points: ndarray,
    colors: Optional[ndarray] = None,
    num_points: int = 0,
    has_colors: bool = False,
)

3D point cloud data with optional color information.

Attributes:

Name Type Description
points ndarray

Array of 3D points (N, 3) - (x, y, z) in meters

colors Optional[ndarray]

Optional array of RGB colors (N, 3) - values in [0, 1]

num_points int

Number of valid points

has_colors bool

Flag indicating if color information is present

save_ply
save_ply(path: str, binary: bool = True) -> None

Save point cloud as PLY file.

Parameters:

Name Type Description Default
path str

Output file path

required
binary bool

If True, save in binary format; otherwise ASCII

True

Raises:

Type Description
ImportError

If plyfile is not installed

downsample
downsample(factor: int) -> 'PointCloudData'

Downsample point cloud by given factor.

Parameters:

Name Type Description Default
factor int

Downsampling factor (e.g., 2 = keep every 2nd point)

required

Returns:

Type Description
'PointCloudData'

New PointCloudData with downsampled points

stereo_camera

Synchronous stereo camera interface.

This module provides a synchronous wrapper around AsyncStereoCamera, following the same pattern as the regular Camera class.

StereoCamera
StereoCamera(
    async_camera: Optional[AsyncStereoCamera] = None,
    loop: Optional[AbstractEventLoop] = None,
    name: Optional[str] = None,
    **kwargs
)

Bases: Mindtrace

Synchronous wrapper around AsyncStereoCamera.

All operations are executed on a background event loop. This provides a simple synchronous API for stereo camera operations.

Create a synchronous stereo camera wrapper.

Parameters:

Name Type Description Default
async_camera Optional[AsyncStereoCamera]

Existing AsyncStereoCamera instance

None
loop Optional[AbstractEventLoop]

Event loop to use for async operations

None
name Optional[str]

Camera identifier. Format: "BaslerStereoAce:serial_number" If None, opens first available Stereo ace camera.

None
**kwargs

Additional arguments passed to Mindtrace

{}

Examples:

>>> # Simple usage - opens first available
>>> camera = StereoCamera()
>>> # Open specific camera
>>> camera = StereoCamera(name="BaslerStereoAce:40644640")
>>> # Use existing async camera
>>> async_cam = await AsyncStereoCamera.open()
>>> sync_cam = StereoCamera(async_camera=async_cam, loop=loop)
name property
name: str

Get camera name.

Returns:

Type Description
str

Camera name in format "Backend:serial_number"

calibration property
calibration: Optional[StereoCalibrationData]

Get calibration data.

Returns:

Type Description
Optional[StereoCalibrationData]

StereoCalibrationData if available, None otherwise

is_open property
is_open: bool

Check if camera is open.

Returns:

Type Description
bool

True if camera is open, False otherwise

close
close() -> None

Close camera and release resources.

Examples:

>>> camera = StereoCamera()
>>> # ... use camera ...
>>> camera.close()
capture
capture(
    enable_intensity: bool = True,
    enable_disparity: bool = True,
    calibrate_disparity: bool = True,
    timeout_ms: int = 20000,
) -> StereoGrabResult

Capture multi-component stereo data.

Parameters:

Name Type Description Default
enable_intensity bool

Whether to capture intensity image

True
enable_disparity bool

Whether to capture disparity map

True
calibrate_disparity bool

Whether to apply calibration to disparity

True
timeout_ms int

Capture timeout in milliseconds

20000

Returns:

Type Description
StereoGrabResult

StereoGrabResult containing captured data

Raises:

Type Description
CameraConnectionError

If camera not opened

CameraCaptureError

If capture fails

Examples:

>>> camera = StereoCamera()
>>> result = camera.capture()
>>> print(f"Intensity: {result.intensity.shape}")
>>> print(f"Disparity: {result.disparity.shape}")
>>> camera.close()
capture_point_cloud
capture_point_cloud(
    include_colors: bool = True, downsample_factor: int = 1
) -> PointCloudData

Capture and generate 3D point cloud.

Parameters:

Name Type Description Default
include_colors bool

Whether to include color information from intensity

True
downsample_factor int

Downsampling factor (1 = no downsampling)

1

Returns:

Type Description
PointCloudData

PointCloudData with 3D points and optional colors

Raises:

Type Description
CameraConnectionError

If camera not opened

CameraCaptureError

If capture fails

CameraConfigurationError

If calibration not available

Examples:

>>> camera = StereoCamera()
>>> point_cloud = camera.capture_point_cloud()
>>> print(f"Points: {point_cloud.num_points}")
>>> point_cloud.save_ply("output.ply")
>>> camera.close()
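The downsample_factor trades point density for speed. As a rough illustration — assuming the cloud is generated from a dense W×H disparity map, which is an assumption about the backend — a factor of f keeps at most one point per f×f pixel block:

```python
def max_points_after_downsampling(width: int, height: int, factor: int = 1) -> int:
    """Upper bound on generated points for a width x height disparity map.

    With downsample_factor = f the generator samples roughly every f-th
    pixel along each axis, so at most ceil(W/f) * ceil(H/f) points
    survive; invalid-disparity pixels reduce the count further.
    """
    if factor < 1:
        raise ValueError("downsample factor must be >= 1")
    return -(-width // factor) * -(-height // factor)  # ceiling division
```

For a 1920x1080 map, factor 2 bounds the cloud at 960 * 540 = 518400 points, a 4x reduction.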
configure
configure(**params) -> None

Configure camera parameters.

Parameters:

Name Type Description Default
**params

Parameter name-value pairs

{}

Raises:

Type Description
CameraConnectionError

If camera not opened

CameraConfigurationError

If configuration fails

Examples:

>>> camera = StereoCamera()
>>> camera.configure(ExposureTime=15000, Gain=2.0)
>>> camera.close()
set_depth_range
set_depth_range(min_depth: float, max_depth: float) -> None

Set depth measurement range in meters.

Parameters:

Name Type Description Default
min_depth float

Minimum depth (e.g., 0.3 meters)

required
max_depth float

Maximum depth (e.g., 5.0 meters)

required

Raises:

Type Description
CameraConfigurationError

If configuration fails

Examples:

>>> camera = StereoCamera()
>>> camera.set_depth_range(0.5, 3.0)
>>> camera.close()
set_illumination_mode
set_illumination_mode(mode: str) -> None

Set illumination mode.

Parameters:

Name Type Description Default
mode str

'AlwaysActive' (low latency) or 'AlternateActive' (clean intensity)

required

Raises:

Type Description
CameraConfigurationError

If invalid mode or configuration fails

Examples:

>>> camera = StereoCamera()
>>> camera.set_illumination_mode("AlternateActive")
>>> camera.close()
set_binning
set_binning(horizontal: int = 2, vertical: int = 2) -> None

Enable binning for latency reduction.

Binning reduces network transfer and computation.

Parameters:

Name Type Description Default
horizontal int

Horizontal binning factor (typically 2)

2
vertical int

Vertical binning factor (typically 2)

2
Note

When using binning for low latency, consider also setting depth quality to "Full" using set_depth_quality("Full").

Raises:

Type Description
CameraConfigurationError

If configuration fails

Examples:

>>> camera = StereoCamera()
>>> camera.set_binning(2, 2)
>>> camera.set_depth_quality("Full")  # Recommended for low latency
>>> camera.close()

set_depth_quality
set_depth_quality(quality: str) -> None

Set depth quality level.

Parameters:

Name Type Description Default
quality str

Depth quality setting. Common values:
- "Full": Highest quality, recommended with binning
- "Normal": Standard quality
- "Low": Lower quality, faster processing

required

Raises:

Type Description
CameraConfigurationError

If configuration fails

Examples:

>>> camera = StereoCamera()
>>> # Low latency configuration
>>> camera.set_binning(2, 2)
>>> camera.set_depth_quality("Full")
>>> camera.close()
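The binning, depth-quality, and illumination settings above are typically applied together for a low-latency profile; a hedged sketch (the helper name is illustrative, and the "AlwaysActive" choice follows the low-latency note under set_illumination_mode — the right combination is application-dependent):

```python
def apply_low_latency_profile(camera, binning: int = 2) -> None:
    """Apply a low-latency setting combination:

    - binning to cut network transfer and computation,
    - "Full" depth quality to compensate for the reduced resolution,
    - "AlwaysActive" illumination for minimal capture delay.
    """
    camera.set_binning(binning, binning)
    camera.set_depth_quality("Full")
    camera.set_illumination_mode("AlwaysActive")
```

Called as `apply_low_latency_profile(camera)` on an opened StereoCamera, before the capture loop starts.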
set_pixel_format
set_pixel_format(format: str) -> None

Set pixel format for intensity component.

Parameters:

Name Type Description Default
format str

Pixel format ("RGB8", "Mono8", etc.)

required

Raises:

Type Description
CameraConnectionError

If camera not opened

CameraConfigurationError

If format not available or configuration fails

Examples:

>>> camera = StereoCamera()
>>> camera.set_pixel_format("Mono8")  # Force grayscale
>>> camera.close()
set_exposure_time
set_exposure_time(microseconds: float) -> None

Set exposure time in microseconds.

Parameters:

Name Type Description Default
microseconds float

Exposure time in microseconds (e.g., 5000 = 5ms)

required

Raises:

Type Description
CameraConfigurationError

If configuration fails

Examples:

>>> camera = StereoCamera()
>>> camera.set_exposure_time(5000)  # 5ms exposure
>>> camera.close()
set_gain
set_gain(gain: float) -> None

Set camera gain.

Parameters:

Name Type Description Default
gain float

Gain value (typically 0.0 to 24.0, camera-dependent)

required

Raises:

Type Description
CameraConfigurationError

If configuration fails

Examples:

>>> camera = StereoCamera()
>>> camera.set_gain(2.0)
>>> camera.close()
get_exposure_time
get_exposure_time() -> float

Get current exposure time in microseconds.

Returns:

Type Description
float

Current exposure time in microseconds

Raises:

Type Description
CameraConnectionError

If camera not opened

Examples:

>>> camera = StereoCamera()
>>> exposure = camera.get_exposure_time()
>>> print(f"Exposure: {exposure}μs")
>>> camera.close()
get_gain
get_gain() -> float

Get current camera gain.

Returns:

Type Description
float

Current gain value

Raises:

Type Description
CameraConnectionError

If camera not opened

Examples:

>>> camera = StereoCamera()
>>> gain = camera.get_gain()
>>> print(f"Gain: {gain}")
>>> camera.close()
get_depth_quality
get_depth_quality() -> str

Get current depth quality setting.

Returns:

Type Description
str

Current depth quality level (e.g., "Full", "Normal", "Low")

Raises:

Type Description
CameraConnectionError

If camera not opened

Examples:

>>> camera = StereoCamera()
>>> quality = camera.get_depth_quality()
>>> print(f"Quality: {quality}")
>>> camera.close()
get_pixel_format
get_pixel_format() -> str

Get current pixel format.

Returns:

Type Description
str

Current pixel format (e.g., "RGB8", "Mono8", "Coord3D_C16")

Raises:

Type Description
CameraConnectionError

If camera not opened

Examples:

>>> camera = StereoCamera()
>>> fmt = camera.get_pixel_format()
>>> print(f"Format: {fmt}")
>>> camera.close()
get_binning
get_binning() -> tuple[int, int]

Get current binning settings.

Returns:

Type Description
tuple[int, int]

Tuple of (horizontal_binning, vertical_binning)

Raises:

Type Description
CameraConnectionError

If camera not opened

Examples:

>>> camera = StereoCamera()
>>> h_bin, v_bin = camera.get_binning()
>>> print(f"Binning: {h_bin}x{v_bin}")
>>> camera.close()
get_illumination_mode
get_illumination_mode() -> str

Get current illumination mode.

Returns:

Type Description
str

Current illumination mode ("AlwaysActive" or "AlternateActive")

Raises:

Type Description
CameraConnectionError

If camera not opened

Examples:

>>> camera = StereoCamera()
>>> mode = camera.get_illumination_mode()
>>> print(f"Illumination: {mode}")
>>> camera.close()
get_depth_range
get_depth_range() -> tuple[float, float]

Get current depth measurement range in meters.

Returns:

Type Description
tuple[float, float]

Tuple of (min_depth, max_depth) in meters

Raises:

Type Description
CameraConnectionError

If camera not opened

Examples:

>>> camera = StereoCamera()
>>> min_d, max_d = camera.get_depth_range()
>>> print(f"Range: {min_d}m - {max_d}m")
>>> camera.close()
enable_software_trigger
enable_software_trigger() -> None

Enable software triggering mode.

After enabling, use start_grabbing(), then execute_trigger() to capture frames on demand.

Raises:

Type Description
CameraConfigurationError

If configuration fails

Examples:

>>> camera = StereoCamera()
>>> camera.enable_software_trigger()
>>> camera.start_grabbing()  # Start grabbing first!
>>> for i in range(10):
...     camera.execute_trigger()
...     result = camera.capture()
>>> camera.close()
start_grabbing
start_grabbing() -> None

Start grabbing frames.

Must be called after enable_software_trigger() and before execute_trigger().

Raises:

Type Description
CameraConnectionError

If camera not opened

Examples:

>>> camera = StereoCamera()
>>> camera.enable_software_trigger()
>>> camera.start_grabbing()
>>> for i in range(10):
...     camera.execute_trigger()
...     result = camera.capture()
>>> camera.close()
execute_trigger
execute_trigger() -> None

Execute software trigger.

Triggers a frame capture when in software trigger mode. Note: start_grabbing() must be called first after enabling software trigger.

Raises:

Type Description
CameraConnectionError

If camera not opened

CameraConfigurationError

If trigger execution fails

Examples:

>>> camera = StereoCamera()
>>> camera.enable_software_trigger()
>>> camera.start_grabbing()
>>> camera.execute_trigger()
>>> result = camera.capture()
>>> camera.close()
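The required ordering above — enable_software_trigger(), then start_grabbing(), then one execute_trigger() per frame — can be wrapped in a single loop; a minimal sketch (the triggered_burst helper is illustrative, not part of the API):

```python
def triggered_burst(camera, n: int) -> list:
    """Capture n frames via software trigger, honoring the required
    call order: enable_software_trigger, start_grabbing, then one
    execute_trigger before each capture."""
    camera.enable_software_trigger()
    camera.start_grabbing()
    results = []
    for _ in range(n):
        camera.execute_trigger()
        results.append(camera.capture())
    return results
```

Usage: `frames = triggered_burst(camera, 10)` on an opened StereoCamera, followed by `camera.close()`.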
setup

Setup scripts for stereo camera SDKs.

This module provides installation scripts for stereo camera systems.

Available CLI commands (after package installation):

    mindtrace-stereo-basler install    # Install Stereo ace package
    mindtrace-stereo-basler uninstall  # Uninstall Stereo ace package

Each setup script uses Typer for CLI and can be run independently.

StereoAceInstaller
StereoAceInstaller(
    installation_method: str = "tarball",
    install_dir: Optional[str] = None,
    package_path: Optional[str] = None,
)

Bases: Mindtrace

Basler Stereo ace Supplementary Package installer with guided wizard.

This class provides an interactive installation wizard that guides users through downloading and installing the Stereo ace package from the official Basler website.

Initialize the Stereo ace installer.

Parameters:

Name Type Description Default
installation_method str

Installation method ("deb" or "tarball")

'tarball'
install_dir Optional[str]

Custom installation directory (for tarball method)

None
package_path Optional[str]

Path to pre-downloaded package file (optional)

None
install
install() -> bool

Install the Stereo ace Supplementary Package.

Returns:

Type Description
bool

True if installation successful, False otherwise

uninstall
uninstall() -> bool

Uninstall the Stereo ace Supplementary Package.

Returns:

Type Description
bool

True if uninstallation successful, False otherwise

setup_stereo_ace

Basler Stereo ace Setup Script

This script provides a guided installation wizard for the Basler pylon Supplementary Package for Stereo ace cameras on Linux systems. The package provides the GenTL Producer needed to connect and use Stereo ace camera systems.

Features:
- Interactive guided wizard with browser integration
- Supports both Debian package (.deb) and tar.gz archive installation
- Custom installation path support (default: ~/.local/share/pylon_stereo)
- Environment variable setup for GenTL Producer
- Shell environment script generation
- Support for pre-downloaded packages (--package flag)
- Comprehensive logging and error handling
- Uninstallation support

Installation Methods
  1. Debian Package (Recommended - requires sudo):
     - Installs to /opt/pylon
     - Automatic environment configuration
     - System-wide availability

  2. tar.gz Archive (Portable - no sudo):
     - Installs to user-specified or default directory
     - Requires manual environment setup
     - Per-user installation
Usage

python setup_stereo_ace.py                          # Interactive wizard
python setup_stereo_ace.py --method deb             # Use Debian package
python setup_stereo_ace.py --method tarball         # Use tar.gz archive
python setup_stereo_ace.py --package /path/to/file  # Use pre-downloaded file
python setup_stereo_ace.py --install-dir ~/pylon    # Custom install location
python setup_stereo_ace.py --uninstall              # Uninstall
mindtrace-stereo-basler-install                     # Console script (install)
mindtrace-stereo-basler-uninstall                   # Console script (uninstall)

Environment Setup

After installation, you must set environment variables:

For Debian package: source /opt/pylon/bin/pylon-setup-env.sh /opt/pylon

For tar.gz archive: source /setup_stereo_env.sh

Or add to ~/.bashrc for persistence: echo "source /setup_stereo_env.sh" >> ~/.bashrc

install
install(
    method: str = typer.Option(
        "tarball",
        "--method",
        "-m",
        help="Installation method: 'deb' (requires sudo) or 'tarball' (portable)",
    ),
    package: Optional[Path] = typer.Option(
        None,
        "--package",
        "-p",
        help="Path to pre-downloaded package file (.deb or .tar.gz)",
        exists=True,
        dir_okay=False,
    ),
    install_dir: Optional[Path] = typer.Option(
        None,
        "--install-dir",
        "-d",
        help="Custom installation directory (for tarball method)",
    ),
    verbose: bool = typer.Option(
        False, "--verbose", "-v", help="Enable verbose logging"
    ),
) -> None

Install the Basler Stereo ace Supplementary Package using an interactive wizard.

The wizard will guide you through downloading and installing the package from Basler's official website where you'll accept their EULA.

For CI/automation, use --package to provide a pre-downloaded file.

uninstall
uninstall(
    method: str = typer.Option(
        "tarball",
        "--method",
        "-m",
        help="Installation method used: 'deb' or 'tarball'",
    ),
    install_dir: Optional[Path] = typer.Option(
        None,
        "--install-dir",
        "-d",
        help="Custom installation directory (for tarball method)",
    ),
    verbose: bool = typer.Option(
        False, "--verbose", "-v", help="Enable verbose logging"
    ),
) -> None

Uninstall the Basler Stereo ace Supplementary Package.

main
main() -> None

Main entry point for the script.