Robotics teams work with diverse sensor configurations — depth cameras, stereo rigs, LiDAR, and multi-camera setups that change between robot platforms. Avala handles this variety with native MCAP support for playing back recorded sensor data and a full annotation toolkit for labeling the perception training data that manipulation, navigation, and scene understanding models require.

Visualization for Robotics Data

Robotics sensor data often arrives as recorded bags or MCAP files from test runs, field deployments, or simulation. Avala’s multi-sensor viewer lets you play back these recordings and inspect them before committing to annotation.

MCAP Playback

Upload MCAP recordings from your robot and play back camera, depth, LiDAR, and IMU streams in a synchronized viewer.

Point Cloud Visualization

Render point clouds from depth cameras and LiDAR with GPU acceleration. Switch between 6 visualization modes to inspect density, intensity, and spatial structure.
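One of the simpler rendering modes, height-based coloring, can be sketched as a mapping from each point's z value to a normalized intensity. This is a pure-Python illustration of the idea, not Avala's renderer:

```python
def color_by_height(points):
    """Map each (x, y, z) point to a 0..1 intensity based on its
    height, so vertical structure is visible at a glance."""
    zs = [z for _, _, z in points]
    lo, hi = min(zs), max(zs)
    span = (hi - lo) or 1.0  # avoid divide-by-zero for flat clouds
    return [(z - lo) / span for _, _, z in points]

cloud = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.5), (0.0, 1.0, 1.0)]
intensities = color_by_height(cloud)  # [0.0, 0.5, 1.0]
```

The same pattern applies to intensity or density modes: pick a scalar per point, normalize it, and feed it to a colormap.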

Multi-Camera Views

View multiple camera streams (RGB, depth, stereo) side by side, synchronized to the same timestamps in the recording.

Timeline Navigation

Step frame-by-frame through robot operations to find key moments — grasp attempts, navigation decisions, collision events.
If your team is currently using Foxglove or Rerun to review robot recordings, Avala replaces the visualization step and adds annotation, review, and export — all in one platform.

Data Types

| Application | Avala Data Type | Typical Annotation |
| --- | --- | --- |
| Indoor navigation | Image, Point Cloud | 2D/3D bounding boxes, segmentation |
| Pick-and-place | Image | Bounding boxes, keypoints, segmentation masks |
| Outdoor mobile robots | MCAP, Point Cloud | 3D cuboids, polylines |
| Manipulation | Image, Video | Keypoints, bounding boxes |
| Warehouse robots | Image, MCAP | Bounding boxes, segmentation, classification |

Common Tasks

Object Detection and Grasping

Label objects on shelves, tables, and conveyor belts with bounding boxes and instance segmentation masks for grasp planning models. For bin-picking tasks, combine bounding boxes with keypoint annotations to mark grasp points on each object.
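Combining boxes with grasp keypoints maps cleanly onto the standard COCO keypoint format: the category declares the keypoint names and skeleton, and each annotation carries a bbox plus an (x, y, visibility) triple per keypoint. The `grasp_left`/`grasp_right` names below are illustrative:

```python
# COCO-style category: declares which keypoints exist and how they connect.
category = {
    "id": 1,
    "name": "cup",
    "keypoints": ["grasp_left", "grasp_right"],
    "skeleton": [[1, 2]],  # 1-indexed keypoint connection
}

# One labeled instance: bounding box plus grasp-point keypoints.
annotation = {
    "id": 1,
    "image_id": 42,
    "category_id": 1,
    "bbox": [120.0, 80.0, 60.0, 90.0],  # [x, y, width, height]
    "keypoints": [130.0, 100.0, 2,      # x, y, visibility (2 = visible)
                  170.0, 100.0, 2],
    "num_keypoints": 2,
}
```

Because this is plain COCO, exports in this shape drop straight into existing keypoint-detection training pipelines.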

Scene Segmentation

Create pixel-level segmentation masks for floors, walls, obstacles, free space, and other surface types. Segmentation data trains navigation models to understand which areas the robot can traverse and which are blocked.

Keypoint Annotation

Mark joint positions, tool tips, grasping points, and pose landmarks. Keypoint skeletons are configurable — define the number of points and their connections to match your model’s expected input.
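A configurable skeleton is just a list of named points plus the edges that connect them, and it is worth validating the edges before annotation starts. A minimal sketch with a hypothetical 4-point arm skeleton:

```python
def validate_skeleton(points, edges):
    """Check that every edge in a keypoint skeleton connects two
    distinct, defined points."""
    names = set(points)
    for a, b in edges:
        if a == b or a not in names or b not in names:
            return False
    return True

# Hypothetical arm skeleton: shoulder -> elbow -> wrist -> tool_tip.
points = ["shoulder", "elbow", "wrist", "tool_tip"]
edges = [("shoulder", "elbow"), ("elbow", "wrist"), ("wrist", "tool_tip")]
assert validate_skeleton(points, edges)
```

Matching the point count and ordering to your model's expected input avoids silent index mismatches at training time.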

Terrain Classification

For outdoor mobile robots, classify traversable vs. non-traversable surfaces. Combine image-level classification (terrain type, slope) with segmentation masks that delineate safe zones from obstacles.
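A downstream use of such masks is computing how much of a frame is safe to drive on. The sketch below treats a mask as a grid of terrain-class labels; the class names are illustrative:

```python
TRAVERSABLE = {"grass", "gravel", "pavement"}  # illustrative terrain classes

def traversable_fraction(mask):
    """Fraction of cells in a per-pixel terrain-class mask that
    belong to a traversable class."""
    cells = [c for row in mask for c in row]
    return sum(c in TRAVERSABLE for c in cells) / len(cells)

mask = [
    ["grass", "grass", "rock"],
    ["gravel", "rock", "rock"],
]
frac = traversable_fraction(mask)  # 3 of 6 cells -> 0.5
```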

Activity and Event Detection

Annotate video recordings of robot operations to label specific events: successful grasp, failed grasp, collision, recovery. Use classification labels on sequences or frame ranges for temporal event annotation.
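Frame-range event labels can be represented as (start, end, label) triples, which makes it easy to query which events cover a given frame. The event names below are illustrative:

```python
def events_at(events, frame):
    """Return labels of all events whose inclusive [start, end]
    frame range covers the given frame."""
    return [label for start, end, label in events if start <= frame <= end]

# Illustrative event log for one recording (frame ranges are inclusive
# and may overlap, e.g. a failure label inside a grasp attempt).
events = [
    (0, 120, "approach"),
    (121, 180, "grasp_attempt"),
    (150, 180, "failed_grasp"),
]
assert events_at(events, 160) == ["grasp_attempt", "failed_grasp"]
```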

Avala Features Used

| Feature | Purpose | Learn More |
| --- | --- | --- |
| MCAP / ROS integration | Ingest robot sensor recordings directly | MCAP & ROS |
| Multi-sensor viewer | Synchronized playback of robot sensor streams | Multi-Sensor Viewer |
| Point cloud visualization | Inspect depth camera and LiDAR data with 6 rendering modes | Visualization Overview |
| Bounding box annotation | Label objects for detection models | Bounding Box Tool |
| Keypoint annotation | Mark joint positions and grasp points | Keypoint Tool |
| Segmentation annotation | Pixel-level masks for scene understanding | Segmentation Tool |
| Polygon annotation | Precise boundaries for irregular objects | Polygon Tool |
| Quality control | Multi-stage review for precision-critical labels | Quality Control |
| Slices | Organize data by environment, scenario, or robot platform | Slices API |

Example Pipeline

Robot sensor recordings (MCAP from test runs)
  -> Upload to Avala dataset
  -> Review recordings in multi-sensor viewer
  -> Identify frames with relevant scenarios (grasps, navigation events)
  -> Create annotation project (bounding boxes + keypoints)
  -> Annotators label objects and grasp points
  -> QC review with spot-checking and targeted review
  -> Export in COCO or custom format
  -> Train manipulation / navigation model

Getting Started

1. Upload robot recordings. Create a dataset and upload your MCAP files. The viewer automatically detects camera, depth, LiDAR, and IMU topics.
2. Explore the data. Play back recordings to understand sensor coverage and data quality. Use frame-by-frame navigation to find key moments.
3. Define your annotation task. Choose the annotation type that matches your model's input: bounding boxes for detection, keypoints for pose estimation, segmentation for scene understanding.
4. Set up label taxonomy. Define object classes and attributes relevant to your robot's task environment (e.g., cup, plate, obstacle, free_space).
5. Annotate, review, and export. Your team labels the data, reviewers check quality, and you export in the format your training pipeline expects.

Fleet Management

For teams operating robot fleets, Avala provides fleet-scale recording management and observability:
  • Device registry — Track all robots in your fleet with metadata, firmware versions, and health status.
  • Recording browser — Filter and sort recordings across devices by date, status, and tags.
  • Timeline events — Mark errors, anomalies, and state changes on recordings for fleet-wide analysis.
  • Recording rules — Auto-flag recordings matching conditions (e.g., high latency, error frequency).
  • Alerts — Route notifications to Slack, email, or webhooks when fleet conditions change.
See Fleet Dashboard to get started.
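The recording-rules idea above boils down to matching each recording's metrics against a set of thresholds. A minimal sketch; the rule keys and metric names are illustrative, not Avala's actual schema:

```python
def matches(recording, rule):
    """True if a recording's metrics meet or exceed every
    threshold in a rule (all conditions must hold)."""
    return all(recording.get(key, 0) >= threshold
               for key, threshold in rule.items())

# Flag recordings with high latency AND repeated errors.
rule = {"max_latency_ms": 500, "error_count": 3}
recordings = [
    {"id": "rec-001", "max_latency_ms": 620, "error_count": 5},
    {"id": "rec-002", "max_latency_ms": 120, "error_count": 0},
]
flagged = [r["id"] for r in recordings if matches(r, rule)]  # ["rec-001"]
```

In practice the flagged set would feed the alert routing (Slack, email, webhooks) described above.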

Next Steps