Physical AI systems — embodied agents, digital twins, spatial computing applications, and sim-to-real transfer pipelines — require training data that captures the 3D structure of real environments. Avala provides the visualization and annotation tools these teams need: Gaussian Splat scene rendering, GPU-accelerated point cloud visualization, multi-sensor MCAP playback, and 3D annotation tools that work directly on spatial data.

Why Avala for Physical AI

Traditional annotation platforms are built for 2D images. Physical AI teams need to work with 3D scenes, point clouds, multi-sensor recordings, and spatial reconstructions. Avala handles all of these natively.

Gaussian Splat Viewer

Load 3D Gaussian Splat scene reconstructions into a WebGPU-rendered viewer with scene hierarchy, properties inspector, and real-time statistics. Navigate and annotate directly in the reconstructed 3D environment.

Point Cloud Visualization

GPU-accelerated point cloud rendering with 6 visualization modes. Inspect spatial structure, density, and sensor coverage with Neutral, Intensity, Rainbow, Label, Panoptic, and Image Projection views.
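A mode like Rainbow maps a scalar channel (intensity, height, or range) to color. As a rough illustration — the exact colormap Avala's renderer uses is not specified here — a scalar-to-rainbow mapping can be sketched with a hue sweep:

```python
import colorsys

def rainbow_color(value: float, vmin: float = 0.0, vmax: float = 1.0) -> tuple[int, int, int]:
    """Map a scalar (e.g. LiDAR intensity) to an 8-bit RGB rainbow color.

    Low values map toward blue, high values toward red, by sweeping
    the HSV hue from 240 degrees down to 0 degrees.
    """
    t = (value - vmin) / (vmax - vmin) if vmax > vmin else 0.0
    t = min(max(t, 0.0), 1.0)                    # clamp to [0, 1]
    hue = (1.0 - t) * (240.0 / 360.0)            # 240 deg (blue) -> 0 deg (red)
    r, g, b = colorsys.hsv_to_rgb(hue, 1.0, 1.0)
    return round(r * 255), round(g * 255), round(b * 255)

# Color every point in a cloud by its intensity channel:
points = [(1.2, 0.4, 0.1, 5.0), (0.8, 1.1, 0.2, 200.0)]   # (x, y, z, intensity)
colors = [rainbow_color(p[3], vmin=0.0, vmax=255.0) for p in points]
```

On a GPU this lookup would run per-point in a shader; the mapping itself is the same.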

Multi-Sensor MCAP Playback

Play back recorded sensor data from embodied AI systems — cameras, depth sensors, LiDAR, IMU — in a synchronized multi-panel viewer.
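Synchronized playback boils down to merging independently recorded, per-channel message streams into one global timeline. A minimal sketch of that merge (the sensor names and payloads below are illustrative, not an Avala API):

```python
import heapq

# Hypothetical per-sensor streams of (timestamp_ns, sensor, payload) tuples,
# each already sorted by timestamp, as they would be after decoding an MCAP
# recording channel by channel.
camera = [(100, "camera", "frame0"), (200, "camera", "frame1")]
lidar  = [(150, "lidar", "sweep0"), (250, "lidar", "sweep1")]
imu    = [(110, "imu", "accel0"), (160, "imu", "accel1"), (210, "imu", "accel2")]

def synchronized_playback(*streams):
    """Merge sorted per-sensor streams into one global timeline.

    heapq.merge yields messages in timestamp order without loading every
    stream into memory, which is how a multi-panel viewer can step all
    sensors forward together.
    """
    yield from heapq.merge(*streams, key=lambda msg: msg[0])

timeline = list(synchronized_playback(camera, lidar, imu))
# Messages now interleave across sensors in strict time order.
```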

3D Annotation Tools

Annotate 3D cuboids, classification labels, and object attributes directly on point clouds and Gaussian Splat scenes without switching tools.
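A 3D cuboid label is typically parametrized by a center, dimensions, and a yaw heading. A common downstream operation is testing which points fall inside a labeled cuboid — sketched here with stdlib math (the exact field names in Avala's exports may differ):

```python
import math

def point_in_cuboid(point, center, dims, heading):
    """Test whether a 3D point lies inside a yaw-rotated cuboid.

    point, center: (x, y, z); dims: (length, width, height);
    heading: rotation about the z-axis in radians.
    """
    dx, dy, dz = (p - c for p, c in zip(point, center))
    # Rotate the offset into the cuboid's local frame (inverse yaw).
    cos_h, sin_h = math.cos(-heading), math.sin(-heading)
    local_x = dx * cos_h - dy * sin_h
    local_y = dx * sin_h + dy * cos_h
    l, w, h = dims
    return abs(local_x) <= l / 2 and abs(local_y) <= w / 2 and abs(dz) <= h / 2

# A point 1 m "ahead" of a 4 m-long box rotated 90 degrees is inside:
inside = point_in_cuboid((0.0, 1.0, 0.0), (0.0, 0.0, 0.0),
                         (4.0, 2.0, 1.5), math.pi / 2)
```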

Data Types

| Application                | Avala Data Type        | Typical Annotation                   |
| -------------------------- | ---------------------- | ------------------------------------ |
| Scene reconstruction       | Splat (Gaussian Splat) | 3D cuboids, classification           |
| Spatial mapping            | Point Cloud            | 3D cuboids, segmentation             |
| Embodied agent recordings  | MCAP                   | Multi-sensor annotation with tracking |
| Navigation training        | Point Cloud, MCAP      | 3D cuboids, polylines                |
| Object recognition         | Image, Point Cloud     | Bounding boxes, 3D cuboids           |
| Digital twin generation    | Splat, Point Cloud     | Classification, object attributes    |

Use Cases

Scene Understanding for Embodied AI

Embodied AI agents need to understand the 3D structure of their environment: where objects are, what surfaces are traversable, and how the space is organized. Avala’s point cloud and Gaussian Splat viewers let you visualize captured environments, then annotate objects, regions, and spatial relationships that train scene understanding models.

Workflow:
Capture environment with LiDAR or depth cameras
  -> Upload point cloud or Gaussian Splat reconstruction
  -> Visualize and inspect in 3D viewer
  -> Annotate objects with 3D cuboids and classification labels
  -> Export for model training

3D Object Recognition

Train models to recognize objects in 3D space using annotated point clouds and scene reconstructions. The 3D cuboid tool lets annotators place precise bounding volumes around objects with full position, dimension, and heading control. Classification attributes add category, material, and state metadata to each object.
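From the (center, dimensions, heading) parametrization above, the eight corner coordinates of a cuboid follow directly — a computation most 3D training pipelines perform when converting box labels to geometry. A stdlib sketch:

```python
import math

def cuboid_corners(center, dims, heading):
    """Return the 8 corner coordinates of a yaw-rotated 3D cuboid.

    dims = (length, width, height); heading = yaw about the z-axis
    in radians. Length is aligned with heading 0 along the x-axis.
    """
    cx, cy, cz = center
    l, w, h = dims
    cos_h, sin_h = math.cos(heading), math.sin(heading)
    corners = []
    for sx in (-0.5, 0.5):
        for sy in (-0.5, 0.5):
            for sz in (-0.5, 0.5):
                lx, ly, lz = sx * l, sy * w, sz * h    # local-frame offset
                corners.append((cx + lx * cos_h - ly * sin_h,
                                cy + lx * sin_h + ly * cos_h,
                                cz + lz))
    return corners

corners = cuboid_corners((10.0, 5.0, 0.75), (4.0, 2.0, 1.5), 0.0)
```

The axis conventions (length along x at zero heading, z up) are an assumption for illustration; match them to your export format.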

Sim-to-Real Transfer

Teams building simulation environments need labeled real-world data to validate and calibrate their simulations. Avala handles the real-world side of the pipeline:
1. Capture real-world data
   Record multi-sensor data from the target environment using your robot or sensor rig. Use MCAP format for multi-sensor recordings, or PCD/PLY for standalone point clouds.

2. Reconstruct and visualize
   Upload Gaussian Splat reconstructions or raw point clouds. Explore the data in Avala’s 3D viewers to understand scene structure.

3. Annotate ground truth
   Label objects, regions, and spatial relationships that your simulation needs to replicate accurately.

4. Export for simulation alignment
   Export annotations with precise 3D coordinates. Use these as ground truth to validate and tune your simulation parameters.
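The PCD format mentioned for standalone point clouds is simple to emit in its ASCII variant. A minimal writer covering only the unorganized x/y/z case (real capture pipelines usually emit binary PCD or PLY, which is more compact):

```python
def write_ascii_pcd(path, points):
    """Write (x, y, z) tuples as a minimal ASCII PCD v0.7 file."""
    header = "\n".join([
        "# .PCD v0.7 - Point Cloud Data file format",
        "VERSION 0.7",
        "FIELDS x y z",
        "SIZE 4 4 4",          # bytes per field
        "TYPE F F F",          # float fields
        "COUNT 1 1 1",
        f"WIDTH {len(points)}",
        "HEIGHT 1",            # 1 = unorganized cloud
        "VIEWPOINT 0 0 0 1 0 0 0",
        f"POINTS {len(points)}",
        "DATA ascii",
    ])
    with open(path, "w") as f:
        f.write(header + "\n")
        for x, y, z in points:
            f.write(f"{x} {y} {z}\n")

write_ascii_pcd("scan.pcd", [(0.0, 0.0, 0.0), (1.0, 2.0, 3.0)])
```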

Digital Twin Data Annotation

Digital twin applications need annotated data that maps the physical world to its virtual representation. Avala’s Gaussian Splat viewer is particularly useful here — it renders photorealistic 3D scene reconstructions that annotators can navigate and label as if they were in the real environment. The viewer provides:
  • Scene hierarchy panel — Browse and select objects in the scene tree
  • Properties inspector — View and edit object attributes
  • Real-time statistics — Monitor rendering performance
  • Undo/redo — Full edit history for annotation corrections
Navigation and Path Planning

For robots and autonomous systems that need to navigate physical spaces, annotate traversable regions, obstacles, and waypoints in point cloud data. Use polylines to define paths and boundaries, and 3D cuboids to mark obstacles with size and orientation.
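Downstream, polyline path annotations reduce to ordered waypoint lists. Two common checks — path length and a coarse obstacle clearance — can be sketched with stdlib math (a real clearance check would measure distance to the obstacle cuboid's surface and sample along each segment, not just at waypoints):

```python
import math

def polyline_length(waypoints):
    """Total length of a 3D polyline (e.g. an annotated navigation path)."""
    return sum(math.dist(a, b) for a, b in zip(waypoints, waypoints[1:]))

def min_clearance(waypoints, obstacle_center):
    """Smallest distance from any waypoint to an obstacle center."""
    return min(math.dist(p, obstacle_center) for p in waypoints)

path = [(0.0, 0.0, 0.0), (3.0, 0.0, 0.0), (3.0, 4.0, 0.0)]
length = polyline_length(path)                    # 3 + 4 = 7 meters
clearance = min_clearance(path, (3.0, 1.0, 0.0))  # closest waypoint is 1 m away
```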

Avala Features Used

| Feature                   | Purpose                                                       | Learn More             |
| ------------------------- | ------------------------------------------------------------- | ---------------------- |
| Gaussian Splat viewer     | Visualize and annotate 3D scene reconstructions               | Visualization Overview |
| Point cloud visualization | Inspect spatial data with 6 rendering modes                   | Visualization Overview |
| MCAP / ROS integration    | Ingest multi-sensor recordings from embodied AI systems       | MCAP & ROS             |
| 3D cuboid annotation      | Label objects in 3D space with precise position and dimensions | 3D Cuboid Tool         |
| Classification            | Scene-level and object-level categorical labels               | Classification Tool    |
| Python SDK                | Programmatic dataset management and export                    | Python SDK             |
| TypeScript SDK            | Integrate with Node.js pipelines                              | TypeScript SDK         |
| Cloud storage             | Connect S3 or GCS for large 3D datasets                       | Cloud Storage          |

Example Pipeline

Real-world environment capture (LiDAR, cameras, depth sensors)
  -> Generate 3D reconstruction (Gaussian Splat or point cloud)
  -> Upload to Avala dataset
  -> Visualize in 3D viewer -- inspect scene structure and quality
  -> Create annotation project with 3D cuboid + classification task
  -> Annotators label objects, regions, and spatial relationships
  -> QC review in the same 3D viewer
  -> Export with 3D coordinates and metadata
  -> Feed into embodied AI training / simulation calibration pipeline

Getting Started

1. Choose your data format
   Use Gaussian Splat format for scene reconstructions, PCD/PLY for raw point clouds, or MCAP for multi-sensor recordings from embodied systems.

2. Upload and visualize
   Create a dataset with the appropriate data type and upload your files. Open them in the 3D viewer to explore the spatial data.

3. Define your annotation schema
   Set up object classes and attributes that match your model’s requirements — object categories, materials, states, spatial relationships.

4. Annotate in 3D
   Your team places 3D cuboids and classification labels directly in the point cloud or Gaussian Splat scene.

5. Export and integrate
   Export annotations with 3D coordinates via the API or SDK. Integrate into your training pipeline, simulation, or digital twin system.
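As a sketch of the integration step: exported cuboid annotations typically arrive as structured records with 3D coordinates, which a training pipeline flattens into per-object targets. The field names below (`frame`, `annotations`, `center`, `dimensions`, `heading`) are hypothetical — map them to your project's actual export schema:

```python
import json

# Hypothetical export record; illustrates consuming 3D coordinates downstream.
exported = json.loads("""
{
  "frame": "scan_0042",
  "annotations": [
    {"label": "pallet", "center": [10.0, 5.0, 0.75],
     "dimensions": [4.0, 2.0, 1.5], "heading": 0.0}
  ]
}
""")

def to_training_targets(record):
    """Flatten exported cuboids into (class, 7-DoF box) training targets."""
    return [(a["label"],
             a["center"] + a["dimensions"] + [a["heading"]])
            for a in record["annotations"]]

targets = to_training_targets(exported)
# Each target pairs a class label with [x, y, z, l, w, h, yaw].
```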

Next Steps