# Uploading Multi-Camera Recordings
## Upload your MCAP file

Navigate to your dataset in Avala and upload the MCAP file containing your camera topics. The platform parses the recording and extracts all available topics.
## Select camera topics

After parsing, Avala presents a topic selection screen listing every detected topic. Choose the camera streams you want to visualize; you can also select LiDAR, IMU, and other topics at this stage.
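Conceptually, topic selection is a filter over the recording's topic-to-schema map. A minimal sketch (the topic names and schema strings below are illustrative, following common ROS conventions):

```python
# Image message types typically used for camera streams in ROS recordings.
IMAGE_SCHEMAS = {"sensor_msgs/Image", "sensor_msgs/CompressedImage"}

def camera_topics(topic_schemas: dict[str, str]) -> list[str]:
    """Return the topics whose schema is a known image message type."""
    return sorted(t for t, s in topic_schemas.items() if s in IMAGE_SCHEMAS)

# Hypothetical topic map as it might be parsed from an MCAP file:
topics = {
    "/camera_front/image_raw/compressed": "sensor_msgs/CompressedImage",
    "/camera_rear/image_raw": "sensor_msgs/Image",
    "/lidar_top/points": "sensor_msgs/PointCloud2",
    "/tf": "tf2_msgs/TFMessage",
}
print(camera_topics(topics))
# ['/camera_front/image_raw/compressed', '/camera_rear/image_raw']
```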
## Smart Grid Layout

The viewer arranges camera panels based on how many image topics are active:

| Camera Count | Layout | Description |
|---|---|---|
| 1 | Full width | Single camera fills the center area |
| 2 | Side-by-side | Two cameras at equal width |
| 3 | 2 + 1 | Two panels on top, one spanning the bottom row |
| 4+ | Grid | 2-column grid, rows added as needed |
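The layout rules in the table reduce to a small function that maps the active camera count to panels per row. A sketch of that mapping (not the viewer's actual implementation):

```python
def grid_layout(n: int) -> list[int]:
    """Panels per row for n active camera topics, per the layout table."""
    if n <= 0:
        return []
    if n == 1:
        return [1]           # full width: single camera fills the center
    if n == 2:
        return [2]           # side-by-side at equal width
    if n == 3:
        return [2, 1]        # two panels on top, one spanning the bottom
    rows, rem = divmod(n, 2) # 4+: 2-column grid, rows added as needed
    return [2] * rows + ([1] if rem else [])

print(grid_layout(5))  # [2, 2, 1]
```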
## Calibration and Transforms

For LiDAR-to-camera projection to work, the viewer needs two pieces of information: where each camera is in 3D space (extrinsics) and how each camera forms images (intrinsics).

### Extrinsics from TF Messages

Avala reads coordinate frame transforms from `tf2_msgs/TFMessage` and `foxglove.FrameTransform` messages in your recording. These provide the rigid-body transforms (rotation + translation) between sensor frames.
The viewer resolves the full transform chain from the LiDAR frame to each camera frame automatically. For example, if your recording contains the chain `lidar_top -> base_link -> camera_front`, the viewer composes the two transforms to project LiDAR points into the front camera.
Include both `/tf` (dynamic transforms) and `/tf_static` (fixed transforms) topics in your recording. Static transforms are typically published once at the start of the recording and define the fixed mounting positions of sensors on the vehicle.

### Camera Intrinsics
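Resolving a transform chain amounts to composing rigid-body transforms, which is matrix multiplication in homogeneous coordinates. A toy sketch of the `lidar_top -> base_link -> camera_front` example (the rotations and translations are made up for illustration):

```python
import numpy as np

def rigid_transform(R: np.ndarray, t: np.ndarray) -> np.ndarray:
    """Build a 4x4 homogeneous transform from rotation R and translation t."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

# Illustrative transforms (identity rotations, translations in metres):
T_base_from_lidar = rigid_transform(np.eye(3), np.array([0.0, 0.0, -1.8]))
T_cam_from_base = rigid_transform(np.eye(3), np.array([0.0, 0.0, -0.5]))

# Composing the chain lidar_top -> base_link -> camera_front:
T_cam_from_lidar = T_cam_from_base @ T_base_from_lidar

point_lidar = np.array([5.0, 0.0, 0.0, 1.0])  # homogeneous LiDAR point
point_cam = T_cam_from_lidar @ point_lidar
# In this toy setup the point ends up 2.3 m below the camera origin.
```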
Avala supports two camera models for projection:

| Model | Parameters | Use Case |
|---|---|---|
| Pinhole | fx, fy, cx, cy + distortion coefficients k1-k4, p1, p2 | Standard cameras with rectilinear lenses |
| Double-sphere | fx, fy, cx, cy, xi, alpha | Wide-angle and fisheye lenses |
Intrinsics can come from two sources:

- **CameraInfo messages** — `sensor_msgs/CameraInfo` topics published alongside image topics
- **Embedded calibration** — Calibration data stored in the MCAP file metadata
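For the pinhole model, projection scales the point's normalized image coordinates by the focal lengths and shifts by the principal point. A distortion-free sketch (with distortion, the coefficients k1-k4, p1, p2 would be applied to the normalized coordinates before the focal-length scaling):

```python
def project_pinhole(x, y, z, fx, fy, cx, cy):
    """Project a 3D point in the camera frame to pixel coordinates (u, v).

    Assumes no lens distortion. Returns None for points behind the camera.
    """
    if z <= 0:
        return None  # behind the camera: no valid projection
    u = fx * (x / z) + cx
    v = fy * (y / z) + cy
    return u, v

# Illustrative intrinsics for a 1280x720 camera:
print(project_pinhole(1.0, 0.5, 10.0, fx=800.0, fy=800.0, cx=640.0, cy=360.0))
# (720.0, 400.0)
```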
## Projection Behavior

When calibration data is present, the viewer can project LiDAR points onto camera images. Each projected point is colored by its depth, intensity, or label, matching the active visualization mode in the 3D panel. This cross-view projection is useful for:

- Verifying that 3D cuboid annotations align with objects in camera views
- Checking sensor calibration accuracy
- Understanding the spatial relationship between LiDAR returns and visual features
## Visualization Modes in Camera Panels

When LiDAR projection is active, the projected points inherit the color scheme from the current point cloud visualization mode:

| Mode | Projected Point Color |
|---|---|
| Neutral | Uniform color |
| Intensity | LiDAR return intensity (blue→green→yellow→red gradient) |
| Rainbow | Cycling hue per frame |
| Label | Semantic label color from the annotation class |
| Panoptic | Instance-level color per annotated object |
| Image Projection | Camera pixel color back-projected onto LiDAR (3D panel only) |
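An intensity gradient like the one in the table can be built by linearly interpolating between the gradient's color stops. A sketch of one plausible blue→green→yellow→red mapping (the viewer's exact colormap may differ):

```python
def intensity_to_rgb(t: float) -> tuple[int, int, int]:
    """Map a normalized intensity t in [0, 1] onto a
    blue -> green -> yellow -> red gradient."""
    t = min(max(t, 0.0), 1.0)  # clamp out-of-range intensities
    if t < 1 / 3:              # blue -> green
        f = t * 3
        return (0, int(255 * f), int(255 * (1 - f)))
    if t < 2 / 3:              # green -> yellow
        f = (t - 1 / 3) * 3
        return (int(255 * f), 255, 0)
    f = (t - 2 / 3) * 3        # yellow -> red
    return (255, int(255 * (1 - f)), 0)

print(intensity_to_rgb(0.0), intensity_to_rgb(1.0))
# (0, 0, 255) (255, 0, 0)
```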
## Independent Panel Controls

Each camera panel supports independent interaction while maintaining timeline synchronization:

- **Zoom** — Scroll to zoom into a specific region of the camera image
- **Pan** — Click and drag to pan across zoomed images
- **Reset** — Press `0` to reset zoom and pan to the default view
## Best Practices for Multi-Camera Recordings

### Include TF topics

Always record the `/tf` and `/tf_static` topics. Without transforms, the viewer cannot resolve coordinate frames between sensors.

### Publish CameraInfo

Publish `sensor_msgs/CameraInfo` alongside each image topic. This provides the intrinsics needed for accurate projection.

### Use consistent frame IDs

Ensure each sensor topic references the correct frame ID in its message header. Mismatched frame IDs break the transform chain.
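A frame-ID consistency check is easy to script before upload: compare the `frame_id` seen in each topic's message headers against the set of frames named by your TF topics. A hypothetical sketch:

```python
def check_frame_ids(topic_frames: dict[str, str], tf_frames: set[str]) -> list[str]:
    """Return topics whose header frame_id is absent from the TF tree.

    topic_frames maps topic name -> the frame_id seen in its headers;
    tf_frames is the set of frames named by /tf and /tf_static.
    """
    return sorted(t for t, f in topic_frames.items() if f not in tf_frames)

# Illustrative data: one topic has a typo in its frame_id.
tf_frames = {"base_link", "lidar_top", "camera_front"}
topic_frames = {
    "/lidar_top/points": "lidar_top",
    "/camera_front/image_raw": "camera_frnt",  # typo breaks the chain
}
print(check_frame_ids(topic_frames, tf_frames))
# ['/camera_front/image_raw']
```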
### Compress images

Use `sensor_msgs/CompressedImage` with JPEG compression to reduce file sizes. Uncompressed images dramatically increase MCAP file size.

## Next Steps
- **Recording Best Practices** — Tips for recording data that works well in Avala, including format, compression, and naming conventions.
- **Rendering Modes** — A deep dive into the six point cloud visualization modes and when to use each one.
- **Timeline Navigation** — Playback controls, frame stepping, and timestamp seeking across all panels.
- **MCAP & ROS Overview** — Supported formats, message types, and the upload workflow for multi-sensor recordings.