What is a compatible driver?

A compatible driver creates the connection between a hardware device’s own API and the Cyberwave platform. It is responsible for translating the device’s native protocol into the twin model that the rest of the platform understands. Drivers can run anywhere — on a dedicated edge device, directly on the robot hardware, in the cloud, or on a developer laptop. In production, they typically run on edge hardware co-located with the device they control. Each driver is packaged as a Docker container image. Edge Core pulls and runs that image, injecting the environment variables described below. This means you can develop and test your driver locally using the same image that will run in production.

Quickstart: scaffold with the Claude skill

The fastest way to get started is the Cyberwave Driver skill for Claude Code. It asks you a few questions about your hardware and scaffolds a complete, production-ready driver project — including the Dockerfile, local dev setup, and a working twin connection. Install the skill:
git clone https://github.com/cyberwave-os/driver-skill ~/.claude/skills/cyberwave-driver
Then in any Claude Code session:
/cyberwave-driver
Claude will generate the full project tree and walk you through connecting it to a real twin locally. The skill source is open source at cyberwave-os/driver-skill.

Quickstart: use the SDK

Alternatively, you can build your driver on one of the official SDKs. The SDKs handle twin synchronization, file I/O, reconnection logic, and more, so you can focus on the hardware integration.

Environment variables

When Edge Core starts a driver container it injects the following environment variables. You can develop your driver assuming these are always set to valid values — no need to handle the case where they are absent.
| Variable | Description |
| --- | --- |
| CYBERWAVE_TWIN_UUID | UUID of the twin instance this driver manages |
| CYBERWAVE_API_KEY | API key scoped to this driver for authenticating platform calls |
| CYBERWAVE_TWIN_JSON_FILE | Absolute path to a writable JSON file containing the twin’s current state (see Twin JSON file) |
| CYBERWAVE_CHILD_TWIN_UUIDS | (optional) Comma-separated UUIDs of child twins (e.g. cameras) attached to this driver |

CYBERWAVE_CHILD_TWIN_UUIDS is set when child twins are attached to the driver twin. Drivers can use this to coordinate child devices (for example, multiple cameras) without additional configuration.
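For example, a driver's startup code can read these variables directly. A minimal sketch (the helper name `load_driver_env` is illustrative, not part of any SDK):

```python
import os

def load_driver_env(environ=os.environ):
    """Read the variables Edge Core injects into the driver container."""
    # Required -- Edge Core always sets these to valid values,
    # so no fallback handling is needed.
    cfg = {
        "twin_uuid": environ["CYBERWAVE_TWIN_UUID"],
        "api_key": environ["CYBERWAVE_API_KEY"],
        "twin_json_file": environ["CYBERWAVE_TWIN_JSON_FILE"],
    }
    # Optional -- only present when child twins are attached.
    raw = environ.get("CYBERWAVE_CHILD_TWIN_UUIDS", "")
    cfg["child_twin_uuids"] = [u.strip() for u in raw.split(",") if u.strip()]
    return cfg
```

Passing a mapping instead of reading `os.environ` at module level keeps the function easy to unit-test locally.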

Restart behavior tuning

The following optional variables let you override Edge Core’s restart defaults:
| Variable | Default | Description |
| --- | --- | --- |
| CYBERWAVE_DRIVER_RESTART_LOOP_THRESHOLD | 4 | Number of restarts before the driver is marked as flapping |
| CYBERWAVE_DRIVER_RESTART_LOOP_WINDOW_SECONDS | 60 | Time window (seconds) used to count restarts |
| CYBERWAVE_DRIVER_TROUBLESHOOTING_URL | https://docs.cyberwave.com | URL surfaced in platform alerts for operator guidance |

Driver failure handling

Drivers must exit with a non-zero code when they cannot access required hardware (for example, a missing /dev/video* device or a disconnected peripheral). This allows Edge Core to detect startup failures and trigger restart logic. Edge Core raises the following alerts:
  • driver_start_failure — raised when a driver container cannot reach a stable running state.
  • driver_restart_loop — raised when a driver exceeds the restart threshold within the window. The container is stopped and marked as flapping.
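A minimal sketch of this exit contract for a camera driver (the function name and error message are illustrative):

```python
import glob
import sys

def require_camera():
    """Exit non-zero when no video device is present, so Edge Core
    detects the startup failure and applies its restart policy."""
    devices = glob.glob("/dev/video*")
    if not devices:
        print("fatal: no /dev/video* device found", file=sys.stderr)
        sys.exit(1)  # non-zero exit -> feeds the driver_start_failure path
    return devices[0]
```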

Twin JSON file

CYBERWAVE_TWIN_JSON_FILE points to a JSON file on disk that contains the digital twin instance (including its metadata) and the associated catalog twin data, matching the TwinSchema and AssetSchema API schemas. Drivers may read and modify this file. Edge Core syncs any changes back to the backend when connectivity is available.

Runtime configuration

Drivers should treat metadata["edge_configs"] as the source of truth for per-device runtime configuration, and metadata["edge_fingerprint"] as the edge identity (not duplicated inside edge_configs). Read edge_configs from CYBERWAVE_TWIN_JSON_FILE at startup to obtain per-device settings without hardcoding them in the image.
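A minimal sketch of reading these settings at startup, assuming the twin document carries a top-level `metadata` object as described above (adjust the keys to your TwinSchema version):

```python
import json
import os

def load_edge_configs(path=None):
    """Read per-device runtime configuration from the twin JSON file."""
    path = path or os.environ["CYBERWAVE_TWIN_JSON_FILE"]
    with open(path) as f:
        twin = json.load(f)
    metadata = twin.get("metadata") or {}
    edge_configs = metadata.get("edge_configs") or {}    # runtime settings
    edge_fingerprint = metadata.get("edge_fingerprint")  # edge identity
    return edge_configs, edge_fingerprint
```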

Sensor data output

If your driver produces sensor data (video frames, depth maps, audio, joint states, etc.), it should write that data to the sensor data convention path so that edge scripts and ML models can consume it locally — with zero network overhead.

The convention

Write sensor data to a subfolder of the config directory that Edge Core already mounts into your container:
$CYBERWAVE_EDGE_CONFIG_DIR/data/{twin_uuid}/{channel}/{sensor_name}/
CYBERWAVE_EDGE_CONFIG_DIR is always set by Edge Core (defaults to /app/.cyberwave). CYBERWAVE_TWIN_UUID gives you the twin UUID.

Ring buffer (for stream data)

Use this pattern for high-frequency data like video frames, depth maps, audio chunks, or point clouds.
data/{twin_uuid}/frames/default/
├── ring/
│   ├── 000000.npy
│   ├── 000001.npy
│   └── ...         # numbered slots, wraps around
└── meta.json       # write pointer + format info
Rules:
  • Write .npy files (numpy format) to numbered slots: {slot:06d}.npy
  • Slot index = write_count % buffer_size (default buffer size: 120)
  • Atomic writes: write to {slot}.npy.tmp, then rename() to {slot}.npy
  • Update meta.json after each write with at least: write_idx, size, shape, dtype
Python example (camera driver):
import os, json, numpy as np
from pathlib import Path

config_dir = os.getenv("CYBERWAVE_EDGE_CONFIG_DIR", "/app/.cyberwave")
twin_uuid = os.getenv("CYBERWAVE_TWIN_UUID")
data_dir = Path(config_dir) / "data" / twin_uuid / "frames" / "default"
ring_dir = data_dir / "ring"
ring_dir.mkdir(parents=True, exist_ok=True)

BUFFER_SIZE = 120
write_idx = 0

def on_frame(frame: np.ndarray):
    global write_idx
    slot = write_idx % BUFFER_SIZE
    tmp = ring_dir / f"{slot:06d}.npy.tmp"
    target = ring_dir / f"{slot:06d}.npy"
    # np.save appends ".npy" to bare paths, so write through an open
    # file handle to keep the ".npy.tmp" name for the atomic rename.
    with open(tmp, "wb") as f:
        np.save(f, frame)
    tmp.rename(target)
    meta = {"write_idx": write_idx, "size": BUFFER_SIZE,
            "shape": list(frame.shape), "dtype": str(frame.dtype)}
    (data_dir / "meta.json.tmp").write_text(json.dumps(meta))
    (data_dir / "meta.json.tmp").rename(data_dir / "meta.json")
    write_idx += 1

Latest value (for state data)

Use this pattern for low-frequency state data like joint positions, robot pose, or telemetry.
data/{twin_uuid}/joint_states/
└── latest.json     # overwritten each update
Rules:
  • Write a single JSON file: latest.json
  • Atomic writes: write to latest.json.tmp, then rename()
  • Include a timestamp field
Python example (robot driver):
import os, json, time
from pathlib import Path

config_dir = os.getenv("CYBERWAVE_EDGE_CONFIG_DIR", "/app/.cyberwave")
twin_uuid = os.getenv("CYBERWAVE_TWIN_UUID")
data_dir = Path(config_dir) / "data" / twin_uuid / "joint_states"
data_dir.mkdir(parents=True, exist_ok=True)

def publish_joints(positions, velocities):
    payload = {"positions": positions, "velocities": velocities,
               "timestamp": time.time()}
    tmp = data_dir / "latest.json.tmp"
    tmp.write_text(json.dumps(payload))
    tmp.rename(data_dir / "latest.json")
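On the consumer side, an edge script can read the latest state and reject stale data. A sketch under the convention above (the function name and `max_age_s` threshold are illustrative):

```python
import json
import time
from pathlib import Path

def read_latest_joints(data_dir, max_age_s=1.0):
    """Return the most recent joint state, or None when the file is
    missing or its timestamp (producer time.time()) is too old."""
    path = Path(data_dir) / "latest.json"
    if not path.exists():
        return None
    payload = json.loads(path.read_text())
    if time.time() - payload["timestamp"] > max_age_s:
        return None  # stale -- the producer has stopped or is lagging
    return payload
```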

Supported channels

| Channel | Data type | Pattern | Recommended buffer size |
| --- | --- | --- | --- |
| frames | Video frames (numpy BGR uint8) | Ring buffer | 120 (~4s at 30fps) |
| depth | Depth maps (numpy uint16) | Ring buffer | 120 |
| audio | Audio chunks (numpy float32) | Ring buffer | 300 |
| pointcloud | Point clouds (numpy Nx3 float32) | Ring buffer | 30 |
| joint_states | Joint positions/velocities | Latest value | n/a |
| position | 3D pose | Latest value | n/a |
| telemetry | Device metrics | Latest value | n/a |
You can define custom channels by picking any channel name and following either the ring buffer or latest value pattern.

Why this matters

Edge scripts and ML models read from these exact paths to do local inference — detecting objects in camera frames, fusing depth + RGB data, monitoring joint states, etc. By following this convention your driver automatically becomes compatible with the edge scripting system, without any coupling to specific ML frameworks or script logic.
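For instance, a consumer can locate the most recently written ring slot through `meta.json`. A sketch mirroring the producer convention above (the helper name is illustrative):

```python
import json
from pathlib import Path

import numpy as np

def read_latest_frame(data_dir):
    """Load the most recently written ring slot for a channel directory.

    meta.json's write_idx is the index of the frame just written, so
    write_idx % size points at the newest slot.
    """
    data_dir = Path(data_dir)
    meta = json.loads((data_dir / "meta.json").read_text())
    slot = meta["write_idx"] % meta["size"]
    return np.load(data_dir / "ring" / f"{slot:06d}.npy")
```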

Language-agnostic

This is a filesystem convention, not a Python API. C++, Rust, or any other language can write .npy files and JSON to the same paths. The Python SDK provides optional helpers (SharedVolumeWriter, LatestValueWriter) but they are not required.

Licensing your driver

You own your driver code. There are two common paths:
  • Open source — publish your driver as a public repository on GitHub under the Apache 2.0 license. This is our recommended default and makes it easier for the community to contribute and reuse your work.
  • Closed source — keep your driver proprietary. In this case, we recommend obfuscating your code before distributing the image and including a clear license file that reflects your distribution terms. Interested in writing a closed-source driver? Reach out to us.

Example drivers

The following open-source drivers are good starting points and reference implementations:

Advanced topics

Once you have a working driver, these guides cover the platform features your driver can leverage:

Edge Workers

Hook-based worker modules for on-device ML inference and event-driven processing.

Data Wire Format

SDK header encoding, key expressions, and the on-wire contract for edge data channels.

Data Fusion Primitives

Time-aware sensor fusion: interpolated point reads and time-window queries.

Synchronized Multi-Channel Hooks

Approximate time synchronizer that fires when samples from all listed channels arrive within tolerance.

Record & Replay

Capture live edge data to disk and replay it for deterministic debugging.