Autonomous vehicle teams produce the largest and most complex sensor datasets in the industry. Avala provides a single platform to visualize raw multi-sensor recordings, annotate perception training data, and run quality control — without switching between separate visualization and labeling tools.

Visualization First

Before annotating, AV teams need to explore and understand their data. Avala’s multi-sensor viewer handles the full AV sensor stack:

MCAP Playback

Upload MCAP recordings from your vehicle fleet and play back all sensor streams in a synchronized multi-panel viewer with 8 panel types.

Surround Camera + LiDAR

View all surround cameras alongside LiDAR point clouds with automatic calibration-aware projection for cross-sensor verification.

GPU-Accelerated 3D

Render LiDAR point clouds with WebGPU acceleration and 6 visualization modes: Neutral, Intensity, Rainbow, Label, Panoptic, and Image Projection.
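As an illustration of what an intensity-based mode does, the sketch below maps a normalized LiDAR return intensity onto a rainbow-style color ramp (blue for weak returns, red for strong ones). This is an illustrative example, not Avala's actual colormap implementation:

```python
import colorsys

def intensity_to_rgb(intensity, i_min=0.0, i_max=255.0):
    """Map a LiDAR return intensity to an RGB color on a
    rainbow-style ramp (blue = low, red = high)."""
    t = (intensity - i_min) / (i_max - i_min)
    t = min(max(t, 0.0), 1.0)
    # Hue sweeps from 240 degrees (blue) down to 0 degrees (red).
    hue = (1.0 - t) * 240.0 / 360.0
    r, g, b = colorsys.hsv_to_rgb(hue, 1.0, 1.0)
    return (round(r * 255), round(g * 255), round(b * 255))
```

The same point cloud can then be recolored per mode by swapping out the per-point color function.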

Timeline Navigation

Scrub through drive logs, step frame-by-frame, and jump to specific timestamps. All panels stay synchronized across different sensor rates.
This means your engineers can use Avala for data review and debugging (replacing Foxglove or Rerun), and your annotation team can label the same data in the same interface.
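Keeping panels synchronized across different sensor rates comes down to nearest-timestamp lookup: for each scrub position, every stream shows the frame whose timestamp is closest. A minimal sketch of that lookup (assuming sorted nanosecond timestamps, as in MCAP recordings):

```python
from bisect import bisect_left

def nearest_frame(timestamps, t):
    """Return the index of the frame whose timestamp is closest to t.
    timestamps must be sorted ascending (e.g. nanoseconds)."""
    i = bisect_left(timestamps, t)
    if i == 0:
        return 0
    if i == len(timestamps):
        return len(timestamps) - 1
    # Pick whichever neighbor is closer to the scrub position.
    return i if timestamps[i] - t < t - timestamps[i - 1] else i - 1

# Example: a 10 Hz LiDAR stream scrubbed to an arbitrary time.
lidar_ts = [t * 100_000_000 for t in range(10)]   # 0.0 s .. 0.9 s
idx = nearest_frame(lidar_ts, 340_000_000)        # scrub to 0.34 s
```

A 30 Hz camera and a 10 Hz LiDAR stream resolve to different indices for the same scrub time, which is why frame-accurate stepping works per stream rather than per global frame counter.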

Data Types

| Sensor | Avala Data Type | Typical Annotation |
| --- | --- | --- |
| Front/surround cameras | Image, Video | 2D bounding boxes, lane polylines, segmentation masks |
| LiDAR | Point Cloud | 3D cuboids with heading, dimensions, and tracking IDs |
| Radar | MCAP (via Point Cloud panels) | 3D cuboids, detection markers |
| Multi-sensor fusion | MCAP | Synchronized camera + LiDAR annotation with 3D-to-2D projection |

Common Tasks

3D Object Detection

Label vehicles, pedestrians, cyclists, and static objects with 3D cuboids in LiDAR point clouds. The 3D annotation editor provides bird’s-eye, perspective, and side views for precise cuboid placement. Cuboids include full position (x, y, z), dimensions (length, width, height), and heading (yaw) parameters.

Multi-Camera Projection

Annotate 3D cuboids in the LiDAR view and automatically project them onto surround camera images for visual verification. The viewer supports both pinhole and double-sphere camera models, so projection works with standard and fisheye lenses.
Multi-camera projection is one of the most effective ways to verify 3D annotation quality. Depth and heading errors that are hard to spot in a top-down view become obvious when the cuboid is overlaid on the camera image.
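The pinhole half of this projection is simple to sketch: once a cuboid corner has been transformed into camera coordinates, it maps to a pixel through the intrinsics. A minimal illustration (assuming the common x-right, y-down, z-forward camera convention; the double-sphere model adds extra distortion parameters on top of this):

```python
def project_pinhole(point_cam, fx, fy, cx, cy):
    """Project a 3D point in camera coordinates (x right, y down,
    z forward) onto the image plane with a pinhole model."""
    x, y, z = point_cam
    if z <= 0:
        return None  # behind the camera, not visible
    return (fx * x / z + cx, fy * y / z + cy)
```

Projecting all eight corners of a cuboid this way and drawing the hull over the camera image is what makes depth and heading errors visually obvious.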

Lane and Road Boundary Annotation

Use polyline tools to trace lane markings, curbs, and road edges in camera views. Polylines support connected segments with vertex-level editing, making them suitable for curved lanes and complex intersections.
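A polyline in this sense is just an ordered list of vertices with straight segments between consecutive pairs. A small sketch of the representation and a derived property (total arc length), purely for illustration:

```python
import math

def polyline_length(vertices):
    """Total arc length of a connected polyline given (x, y) vertices."""
    return sum(math.dist(a, b) for a, b in zip(vertices, vertices[1:]))

# A lane traced with three vertices: two straight segments.
lane = [(0.0, 0.0), (3.0, 4.0), (3.0, 10.0)]
```

Vertex-level editing means operations like inserting, moving, or deleting a single vertex leave the rest of the trace untouched.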

Temporal Object Tracking

Track objects across frames with consistent IDs for motion prediction and trajectory forecasting models. Object IDs persist across the sequence timeline, and the viewer’s frame-by-frame navigation makes it straightforward to verify tracking continuity.
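Verifying tracking continuity amounts to checking that no ID vanishes for some frames and then reappears (a likely ID switch or missed annotation). A minimal sketch of that check over a sequence, with each frame represented as a set of object IDs:

```python
def tracking_gaps(frames):
    """Find track IDs that disappear and later reappear in a sequence.
    frames: list of sets of object IDs, one set per frame, in order."""
    last_seen, gaps = {}, set()
    for idx, ids in enumerate(frames):
        for obj_id in ids:
            if obj_id in last_seen and idx - last_seen[obj_id] > 1:
                gaps.add(obj_id)
            last_seen[obj_id] = idx
    return gaps
```

IDs that legitimately leave the scene for good are not flagged; only reappearances after a gap are.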

Scene Classification

Classify driving conditions at the scene level — weather (clear, rainy, foggy), time of day (daytime, dusk, nighttime), road type (highway, urban, rural), and traffic density. Classification labels apply to the entire frame and can be combined with object-level annotations.

Avala Features Used

| Feature | Purpose | Learn More |
| --- | --- | --- |
| MCAP / ROS integration | Ingest multi-sensor recordings from your vehicle fleet | MCAP & ROS |
| Multi-sensor viewer | Synchronized playback of cameras, LiDAR, radar, and IMU | Multi-Sensor Viewer |
| GPU-accelerated point clouds | Inspect LiDAR data with 6 visualization modes | Visualization Overview |
| 3D cuboid annotation | Label objects in 3D with bird’s-eye, perspective, and side views | 3D Cuboid Tool |
| Object tracking | Consistent IDs across frame sequences | Video Annotation |
| Polyline annotation | Trace lanes, curbs, and road boundaries | Polyline Tool |
| Multi-camera projection | Project 3D annotations onto camera images | Multi-Camera Setup |
| Batch auto-labeling | Bootstrap annotations with model predictions | Batch Auto-Labeling |
| Quality control | Multi-stage review workflows | Quality Control |
| Cloud storage | Connect S3 buckets for large driving datasets | Cloud Storage |

Example Pipeline

Raw sensor data (MCAP recordings from vehicle fleet)
  -> Upload to Avala via cloud storage integration (S3)
  -> Explore recordings in multi-sensor viewer
  -> Verify calibration with LiDAR-to-camera projection
  -> Create annotation project with 3D cuboid + tracking task type
  -> Annotators label 3D cuboids with tracking IDs
  -> Auto-label next batch with model predictions (batch auto-labeling)
  -> QC review with multi-stage workflow
  -> Export in KITTI, COCO, or custom format
  -> Train perception model
  -> Use model predictions for next round of auto-labeling
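The export step can be illustrated with the KITTI label text format, which stores one object per line. The sketch below assumes the standard KITTI 3D detection field order (type, truncation, occlusion, alpha, 2D bbox, dimensions h/w/l, location, rotation_y); it is an illustration of the format, not Avala's exporter:

```python
def kitti_label_line(obj_type, bbox, dims_hwl, loc_xyz, rotation_y,
                     truncated=0.0, occluded=0, alpha=0.0):
    """Format one object as a line in the KITTI label text format:
    type truncated occluded alpha bbox(4) dims(h w l) loc(x y z) ry."""
    fields = [obj_type, f"{truncated:.2f}", str(occluded), f"{alpha:.2f}",
              *(f"{v:.2f}" for v in bbox),
              *(f"{v:.2f}" for v in dims_hwl),
              *(f"{v:.2f}" for v in loc_xyz),
              f"{rotation_y:.2f}"]
    return " ".join(fields)
```

One such file per frame, named by frame index, is the layout KITTI-style training pipelines expect.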

Getting Started

1. Upload your drive data

Create a dataset with the mcap data type and upload MCAP recordings from your fleet. For large datasets, use the cloud storage integration to connect your S3 bucket directly.

2. Explore in the viewer

Open a recording in the multi-sensor viewer. Verify that camera, LiDAR, and transform data are present, and check calibration by enabling LiDAR-to-camera projection.

3. Set up your annotation project

Create a project with 3D cuboid annotation, define your label taxonomy (vehicle, pedestrian, cyclist, etc.), and configure quality control settings.

4. Annotate and review

Your team annotates 3D cuboids with tracking IDs. Reviewers verify annotations using multi-camera projection to catch depth and heading errors.

5. Export and train

Export labeled data in your preferred format, then use the Python or TypeScript SDK to integrate exports into your training pipeline.

Next Steps