Visualization Modes
The viewer provides six rendering modes, each designed for a different inspection task. Switch between modes at any time without reloading the data.
Neutral
Single-color rendering that displays all points in a uniform hue (default white). Strips away all per-point metadata to focus purely on the 3D geometry of the scan. Best for: Geometry inspection, verifying scan coverage, checking for missing regions or alignment issues.
Intensity
Colors points by their return intensity value using a three-range mapping system. The intensity range is divided into three bands, each mapped to a distinct color gradient:
| Range | Intensity Values | Color Gradient |
|---|---|---|
| Low | 0–8 | Blue to Green |
| Mid | 8–34 | Green to Yellow |
| High | 34–255 | Yellow to Red |
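The banded mapping above can be sketched as a small lookup function. This is an illustration only: the gradient endpoint colors and the linear interpolation are assumptions, and the viewer's exact shader math may differ.

```typescript
type RGB = [number, number, number];

// Linear interpolation between two colors, t in [0, 1].
function lerp(a: RGB, b: RGB, t: number): RGB {
  return [
    a[0] + (b[0] - a[0]) * t,
    a[1] + (b[1] - a[1]) * t,
    a[2] + (b[2] - a[2]) * t,
  ];
}

const BLUE: RGB = [0, 0, 1];
const GREEN: RGB = [0, 1, 0];
const YELLOW: RGB = [1, 1, 0];
const RED: RGB = [1, 0, 0];

// Map a raw intensity (0-255) through the three bands from the table.
function intensityToColor(intensity: number): RGB {
  if (intensity <= 8) return lerp(BLUE, GREEN, intensity / 8);
  if (intensity <= 34) return lerp(GREEN, YELLOW, (intensity - 8) / (34 - 8));
  return lerp(YELLOW, RED, (intensity - 34) / (255 - 34));
}
```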
Rainbow
Assigns each frame a distinct hue from 6 evenly spaced colors cycling across the color wheel (0°, 60°, 120°, 180°, 240°, 300°). Frames cycle through these hues in order, making it easy to distinguish temporal boundaries in accumulated point clouds. Best for: Multi-frame visualization, verifying frame alignment in sequences, identifying temporal drift or registration errors.
Label
Colors points by their semantic class assignment. The viewer auto-generates 50 deterministic colors using golden-ratio hue distribution, ensuring that every class in your label set receives a visually distinct color. Best for: Semantic segmentation verification, checking label consistency across frames, identifying mislabeled regions.
Panoptic
Colors points by instance identity, combining semantic class and instance ID. Points with no instance assignment (instanceId = 0) fall back to their category color. Points with an assigned instance ID receive a unique color generated by hashing the ID with the golden-ratio algorithm.
Best for: Instance segmentation verification, checking that individual objects are correctly separated, verifying tracking consistency across frames.
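The golden-ratio coloring used by both the Label and Panoptic modes, including the instance-ID fallback described above, could look roughly like the following sketch. The HSV-style hue conversion and the exact use of the golden-ratio conjugate as the step size are assumptions.

```typescript
type RGB = [number, number, number];
const GOLDEN_RATIO_CONJUGATE = 0.618033988749895;

// Fully saturated hue (HSV with s = v = 1) to RGB; hue in [0, 1).
function hueToRgb(h: number): RGB {
  const i = Math.floor(h * 6) % 6;
  const f = h * 6 - Math.floor(h * 6);
  switch (i) {
    case 0: return [1, f, 0];
    case 1: return [1 - f, 1, 0];
    case 2: return [0, 1, f];
    case 3: return [0, 1 - f, 1];
    case 4: return [f, 0, 1];
    default: return [1, 0, 1 - f];
  }
}

// Deterministic per-class color: successive golden-ratio steps around the wheel.
function classColor(classId: number): RGB {
  return hueToRgb((classId * GOLDEN_RATIO_CONJUGATE) % 1);
}

// Panoptic coloring: instanceId 0 falls back to the category color,
// otherwise the instance ID is hashed onto the hue wheel.
function panopticColor(classId: number, instanceId: number): RGB {
  if (instanceId === 0) return classColor(classId);
  return hueToRgb((instanceId * GOLDEN_RATIO_CONJUGATE) % 1);
}
```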
Image Projection
Projects camera images directly onto the point cloud, texturing each 3D point with the corresponding pixel color from the best available camera. When multiple cameras are available, the viewer selects the camera where the point projects closest to the image center (principal point), favoring the sharpest, least-distorted view. Best for: Sensor fusion verification, checking LiDAR-to-camera alignment, verifying calibration accuracy, visual context for 3D annotations.
Image projection requires camera calibration data (intrinsics and extrinsics) to be present in the recording. See Camera Models for supported calibration formats.
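The camera-selection rule described above can be sketched as follows. The `CameraProjection` shape and the `inImage` validity flag are hypothetical names for illustration, not the viewer's actual API.

```typescript
// One candidate projection of a 3D point into one camera's image plane.
interface CameraProjection {
  u: number; v: number;   // projected pixel of the 3D point
  cx: number; cy: number; // that camera's principal point
  inImage: boolean;       // projection falls inside the image bounds
}

// Pick the camera whose projected pixel is closest to its principal point.
function selectBestCamera(projections: CameraProjection[]): number {
  let best = -1;
  let bestDist = Infinity;
  projections.forEach((p, i) => {
    if (!p.inImage) return; // skip cameras that do not see the point
    const dist = Math.hypot(p.u - p.cx, p.v - p.cy);
    if (dist < bestDist) {
      bestDist = dist;
      best = i;
    }
  });
  return best; // index of the winning camera, or -1 if none sees the point
}
```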
Camera Models
Image projection supports two camera models for mapping 3D points to 2D image coordinates.
Pinhole
The standard perspective projection model used by most cameras:
- Intrinsics: Focal lengths (fx, fy) and principal point (cx, cy)
- Distortion: Up to four radial coefficients (k1–k4) and two tangential coefficients (p1, p2)
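A plausible reading of this model, sketched in TypeScript. The Brown–Conrady-style distortion step, in particular how k1–k4 expand the radial polynomial, is an assumption about the viewer's implementation rather than a confirmed detail.

```typescript
interface PinholeIntrinsics {
  fx: number; fy: number; // focal lengths in pixels
  cx: number; cy: number; // principal point
  k: [number, number, number, number]; // radial coefficients k1..k4
  p1: number; p2: number;              // tangential coefficients
}

// Project a 3D point in the camera frame to pixel coordinates,
// or null when the point is behind the camera.
function projectPinhole(
  pt: [number, number, number],
  cam: PinholeIntrinsics
): [number, number] | null {
  const [X, Y, Z] = pt;
  if (Z <= 0) return null;
  const x = X / Z, y = Y / Z; // normalized image coordinates
  const r2 = x * x + y * y;
  // Assumed radial polynomial: 1 + k1*r^2 + k2*r^4 + k3*r^6 + k4*r^8.
  const radial = 1 + cam.k[0] * r2 + cam.k[1] * r2 ** 2 + cam.k[2] * r2 ** 3 + cam.k[3] * r2 ** 4;
  const xd = x * radial + 2 * cam.p1 * x * y + cam.p2 * (r2 + 2 * x * x);
  const yd = y * radial + cam.p1 * (r2 + 2 * y * y) + 2 * cam.p2 * x * y;
  return [cam.fx * xd + cam.cx, cam.fy * yd + cam.cy];
}
```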
Double-Sphere
A fisheye projection model designed for wide-angle and omnidirectional cameras. In addition to the standard intrinsics (fx, fy, cx, cy), this model uses two additional parameters:
- xi — Controls the curvature of the first sphere, determining how much the projection deviates from a standard pinhole model
- alpha — Blending parameter between the two spheres, controlling the field-of-view characteristics
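Based on the published double-sphere model (Usenko et al., 2018), the forward projection can be sketched as below. Treat this as an illustration of how xi and alpha enter the formula, not as the viewer's exact implementation.

```typescript
// Double-sphere forward projection: the point is first referenced to a
// sphere shifted by xi along the optical axis, then projected through a
// pinhole whose denominator blends (via alpha) the second sphere's depth.
function projectDoubleSphere(
  pt: [number, number, number],
  fx: number, fy: number, cx: number, cy: number,
  xi: number, alpha: number
): [number, number] | null {
  const [x, y, z] = pt;
  const d1 = Math.sqrt(x * x + y * y + z * z);
  const zShift = xi * d1 + z; // depth relative to the second sphere center
  const d2 = Math.sqrt(x * x + y * y + zShift * zShift);
  const denom = alpha * d2 + (1 - alpha) * zShift; // blended denominator
  if (denom <= 0) return null; // point not projectable
  return [fx * x / denom + cx, fy * y / denom + cy];
}
```

With xi = 0 and alpha = 0 the denominator reduces to z, recovering the standard pinhole projection.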
View Modes
The viewer provides multiple viewpoints for inspecting point cloud data:
| View | Description |
|---|---|
| Perspective | Standard 3D perspective projection with depth foreshortening. Default view. |
| Bird’s-eye | Top-down orthographic view looking straight down the Z-axis. Best for spatial layout. |
| Front | Front-facing view along the Y-axis. Useful for checking object heights. |
| Side | Side-facing view along the X-axis. Useful for verifying depth placement. |
GPU Acceleration
The viewer uses GPU compute for real-time rendering of large point clouds.
WebGPU Pipeline
When WebGPU is available, the viewer enables hardware-accelerated features:
- Compute shaders — GPU-based frustum culling removes off-screen points before rendering
- Level of detail (LOD) — Dynamically adjusts point density based on camera distance
- Render bundles — Pre-recorded GPU command sequences reduce per-frame overhead
- Buffer pooling — Reuses GPU memory allocations to minimize allocation stalls
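At startup, a viewer like this has to decide which backend to initialize. A minimal, hypothetical detection sketch; the pure decision function is separated out so the logic is testable outside a browser, and the real initialization would also request an adapter and device.

```typescript
type Backend = "webgpu" | "webgl";

// Prefer WebGPU when the browser exposes it; otherwise fall back.
function selectBackend(hasWebGPU: boolean): Backend {
  return hasWebGPU ? "webgpu" : "webgl";
}

// In a browser context (illustrative only):
//   const backend = selectBackend(typeof navigator !== "undefined" && "gpu" in navigator);
//   if (backend === "webgpu") {
//     const adapter = await navigator.gpu.requestAdapter();
//     // ... create the device, compute pipelines, and render bundles
//   }
```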
WebGL Fallback
On browsers without WebGPU support, the viewer falls back to WebGL rendering. All visualization modes and view controls remain available, though large point clouds may render at reduced frame rates compared to the WebGPU path.
Browser Support
| Browser | Version | WebGPU Status |
|---|---|---|
| Chrome | 113+ | Enabled by default |
| Edge | 113+ | Enabled by default |
| Firefox | Nightly | Requires dom.webgpu.enabled flag |
| Safari | Technology Preview | Requires feature flag |
Next Steps
3D Cuboid Tool
Place and adjust 3D bounding boxes in the point cloud viewer.
Multi-Sensor Viewer
Synchronized playback of camera, LiDAR, radar, and IMU data.
Supported Formats
Point cloud, MCAP, and ROS message formats supported by the platform.
Gaussian Splat Viewer
Explore and annotate 3D Gaussian Splat scene reconstructions.