The 3D point cloud viewer renders LiDAR and depth sensor data with GPU acceleration. Choose from six visualization modes to inspect your data from different perspectives — whether you need to verify raw geometry, check intensity distributions, validate semantic labels, or project camera images onto the point cloud.

Visualization Modes

The viewer provides six rendering modes, each designed for a different inspection task. Switch between modes at any time without reloading the data.

Neutral

Single-color rendering that displays all points in a uniform hue (default white). Strips away all per-point metadata to focus purely on the 3D geometry of the scan. Best for: Geometry inspection, verifying scan coverage, checking for missing regions or alignment issues.

Intensity

Colors points by their return intensity value using a three-range mapping system. The intensity range is divided into three bands, each mapped to a distinct color gradient:
Range   Intensity Values   Color Gradient
Low     0–8                Blue to Green
Mid     8–34               Green to Yellow
High    34–255             Yellow to Red
Color is interpolated within each range, producing a continuous gradient from blue (lowest intensity) through green and yellow to red (highest intensity). Best for: Surface material analysis, distinguishing road markings from asphalt, identifying highly reflective objects (signs, license plates).
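The three-range mapping can be sketched as a small function. This is an illustrative implementation of the banding described above, not the viewer's source; the linear interpolation per band and the RGB endpoint colors are assumptions.

```typescript
type RGB = [number, number, number];

// Linear interpolation between two scalars, then between two colors.
function lerp(a: number, b: number, t: number): number {
  return a + (b - a) * t;
}
function lerpColor(from: RGB, to: RGB, t: number): RGB {
  return [lerp(from[0], to[0], t), lerp(from[1], to[1], t), lerp(from[2], to[2], t)];
}

const BLUE: RGB = [0, 0, 255];
const GREEN: RGB = [0, 255, 0];
const YELLOW: RGB = [255, 255, 0];
const RED: RGB = [255, 0, 0];

// Map a raw intensity (0–255) into the three-band gradient:
// low (0–8) blue→green, mid (8–34) green→yellow, high (34–255) yellow→red.
function intensityToColor(intensity: number): RGB {
  const i = Math.min(255, Math.max(0, intensity));
  if (i <= 8) return lerpColor(BLUE, GREEN, i / 8);
  if (i <= 34) return lerpColor(GREEN, YELLOW, (i - 8) / 26);
  return lerpColor(YELLOW, RED, (i - 34) / 221);
}
```

The narrow low and mid bands give most of the color resolution to weak returns, where material differences (e.g. paint vs. asphalt) show up.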

Rainbow

Assigns each frame one of six evenly spaced hues on the color wheel (0°, 60°, 120°, 180°, 240°, 300°). Frames cycle through these hues in order, making it easy to distinguish temporal boundaries in accumulated point clouds. Best for: Multi-frame visualization, verifying frame alignment in sequences, identifying temporal drift or registration errors.
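The frame-to-hue assignment is a simple modulo cycle over the six hues. A minimal sketch, with the function name as an assumption:

```typescript
// Six evenly spaced hues, in degrees on the color wheel.
const FRAME_HUES = [0, 60, 120, 180, 240, 300];

// Frame N gets hue N mod 6, so consecutive frames always differ
// and the palette repeats every six frames.
function frameHue(frameIndex: number): number {
  return FRAME_HUES[frameIndex % FRAME_HUES.length];
}
```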

Label

Colors points by their semantic class assignment. The viewer auto-generates 50 deterministic colors using golden-ratio hue distribution, ensuring that every class in your label set receives a visually distinct color. Best for: Semantic segmentation verification, checking label consistency across frames, identifying mislabeled regions.
The 50-color palette is deterministic — the same class ID always maps to the same color, making it easy to compare labels across different frames or recordings.
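Golden-ratio hue distribution can be sketched as follows. The constant and the hue formula are the standard technique; treating class ID as the multiplier is an assumption about how the viewer derives its 50 colors.

```typescript
// Fractional part of the golden ratio; multiplying by it spreads
// successive hues maximally apart around the color wheel.
const GOLDEN_RATIO_CONJUGATE = 0.618033988749895;

// Deterministic: the same classId always yields the same hue in [0, 1).
function labelHue(classId: number): number {
  return (classId * GOLDEN_RATIO_CONJUGATE) % 1;
}

// Pre-generate the 50-entry palette.
const labelPalette: number[] = Array.from({ length: 50 }, (_, id) => labelHue(id));
```

Because the golden ratio is irrational, no two class IDs in a modest-sized label set land on visually adjacent hues.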

Panoptic

Colors points by instance identity, combining semantic class and instance ID. Points with no instance assignment (instanceId = 0) fall back to their category color. Points with an assigned instance ID receive a unique color generated by hashing the ID with the golden-ratio algorithm. Best for: Instance segmentation verification, checking that individual objects are correctly separated, verifying tracking consistency across frames.
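The fallback logic reads as a two-way branch: instance ID 0 uses the category color, anything else is hashed into its own hue. A sketch under the same golden-ratio assumption as the label palette; names are illustrative, not the viewer's API.

```typescript
const GOLDEN = 0.618033988749895;

// instanceId === 0 means "no instance": fall back to the semantic
// category hue. Otherwise hash the instance ID into a unique hue.
function panopticHue(categoryId: number, instanceId: number): number {
  const key = instanceId === 0 ? categoryId : instanceId;
  return (key * GOLDEN) % 1;
}
```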

Image Projection

Projects camera images directly onto the point cloud, texturing each 3D point with the corresponding pixel color from the best available camera. When multiple cameras are available, the viewer selects the camera where the point projects closest to the image center (principal point), favoring the sharpest, least-distorted view. Best for: Sensor fusion verification, checking LiDAR-to-camera alignment, verifying calibration accuracy, visual context for 3D annotations.
Image projection requires camera calibration data (intrinsics and extrinsics) to be present in the recording. See Camera Models for supported calibration formats.
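The camera-selection heuristic described above (pick the camera where the point lands closest to the principal point) can be sketched like this. The `Camera` interface and `project` signature are assumptions for illustration, not the viewer's actual types.

```typescript
interface Camera {
  cx: number; // principal point x, in pixels
  cy: number; // principal point y, in pixels
  // Projects a world-space point to pixel coordinates,
  // or null if the point is behind the camera / outside the image.
  project(p: [number, number, number]): [number, number] | null;
}

// Return the camera whose projection of `point` is nearest its
// principal point, favoring the sharpest, least-distorted view.
function bestCamera(point: [number, number, number], cameras: Camera[]): Camera | null {
  let best: Camera | null = null;
  let bestDist = Infinity;
  for (const cam of cameras) {
    const uv = cam.project(point);
    if (!uv) continue;
    const d = Math.hypot(uv[0] - cam.cx, uv[1] - cam.cy);
    if (d < bestDist) {
      bestDist = d;
      best = cam;
    }
  }
  return best;
}
```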

Camera Models

Image projection supports two camera models for mapping 3D points to 2D image coordinates.

Pinhole

The standard perspective projection model used by most cameras:
  • Intrinsics: Focal lengths (fx, fy) and principal point (cx, cy)
  • Distortion: Up to four radial coefficients (k1–k4) and two tangential coefficients (p1, p2)
This is the default model for standard automotive and robotics cameras.
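A minimal sketch of pinhole projection with distortion, using the parameters listed above. The even-powered radial series (k1·r² through k4·r⁸) follows the common OpenCV-style convention and is an assumption about how the four coefficients are applied here.

```typescript
interface Pinhole {
  fx: number; fy: number;              // focal lengths, in pixels
  cx: number; cy: number;              // principal point
  k: [number, number, number, number]; // radial coefficients k1..k4
  p: [number, number];                 // tangential coefficients p1, p2
}

// Project a camera-space point (Z > 0) to pixel coordinates.
function projectPinhole(m: Pinhole, [X, Y, Z]: [number, number, number]): [number, number] {
  const x = X / Z, y = Y / Z; // normalized image coordinates
  const r2 = x * x + y * y;
  const radial = 1 + m.k[0] * r2 + m.k[1] * r2 ** 2 + m.k[2] * r2 ** 3 + m.k[3] * r2 ** 4;
  // Apply radial then tangential distortion.
  const xd = x * radial + 2 * m.p[0] * x * y + m.p[1] * (r2 + 2 * x * x);
  const yd = y * radial + m.p[0] * (r2 + 2 * y * y) + 2 * m.p[1] * x * y;
  return [m.fx * xd + m.cx, m.fy * yd + m.cy];
}
```

With all distortion coefficients zero, this reduces to the plain perspective projection u = fx·X/Z + cx, v = fy·Y/Z + cy.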

Double-Sphere

A fisheye projection model designed for wide-angle and omnidirectional cameras. In addition to the standard intrinsics (fx, fy, cx, cy), this model uses two additional parameters:
  • xi — Controls the curvature of the first sphere, determining how much the projection deviates from a standard pinhole model
  • alpha — Blending parameter between the two spheres, controlling the field-of-view characteristics
The projection works by mapping each 3D point through two concentric spheres, producing accurate results even at extreme viewing angles where the pinhole model breaks down. Use the double-sphere model when: Your cameras have a field of view greater than 120 degrees, or when you see distortion artifacts at the image edges with the pinhole model.
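The two-sphere mapping can be written compactly. This sketch follows the published double-sphere formulation (Usenko et al.) with the parameters named above; it is an illustration, not the viewer's source.

```typescript
interface DoubleSphere {
  fx: number; fy: number; // focal lengths, in pixels
  cx: number; cy: number; // principal point
  xi: number;             // offset of the second sphere along the optical axis
  alpha: number;          // blend between the two sphere projections
}

// Project a camera-space point to pixel coordinates.
function projectDoubleSphere(m: DoubleSphere, [x, y, z]: [number, number, number]): [number, number] {
  const d1 = Math.sqrt(x * x + y * y + z * z); // distance from the first sphere center
  const zs = m.xi * d1 + z;                    // z shifted by the first sphere
  const d2 = Math.sqrt(x * x + y * y + zs * zs); // distance from the second sphere center
  const denom = m.alpha * d2 + (1 - m.alpha) * zs; // blended projective divisor
  return [m.fx * x / denom + m.cx, m.fy * y / denom + m.cy];
}
```

Setting xi = 0 and alpha = 0 collapses the divisor to z, recovering the undistorted pinhole projection.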

View Modes

The viewer provides multiple viewpoints for inspecting point cloud data:
View          Description
Perspective   Standard 3D perspective projection with depth foreshortening. Default view.
Bird’s-eye    Top-down orthographic view looking straight down the Z-axis. Best for spatial layout.
Front         Front-facing view along the Y-axis. Useful for checking object heights.
Side          Side-facing view along the X-axis. Useful for verifying depth placement.
Switch between perspective and orthographic projection at any time. Orthographic projection removes depth foreshortening, making it easier to judge relative sizes and distances.

GPU Acceleration

The viewer uses GPU compute for real-time rendering of large point clouds.

WebGPU Pipeline

When WebGPU is available, the viewer enables hardware-accelerated features:
  • Compute shaders — GPU-based frustum culling removes off-screen points before rendering
  • Level of detail (LOD) — Dynamically adjusts point density based on camera distance
  • Render bundles — Pre-recorded GPU command sequences reduce per-frame overhead
  • Buffer pooling — Reuses GPU memory allocations to minimize allocation stalls

WebGL Fallback

On browsers without WebGPU support, the viewer falls back to WebGL rendering. All visualization modes and view controls remain available, though large point clouds may render at reduced frame rates compared to the WebGPU path.
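The detect-and-fall-back pattern looks roughly like this. `navigator.gpu` and `requestAdapter()` are the standard WebGPU entry points; the function name and the string backend labels are illustrative.

```typescript
// Probe for WebGPU; fall back to WebGL when it is unavailable.
async function pickBackend(): Promise<"webgpu" | "webgl"> {
  const gpu = (globalThis as any).navigator?.gpu;
  if (gpu) {
    // An adapter can still be denied (e.g. blocklisted GPU), so check it too.
    const adapter = await gpu.requestAdapter();
    if (adapter) return "webgpu"; // compute culling, GPU LOD, render bundles
  }
  return "webgl"; // all modes still work, at reduced throughput for large clouds
}
```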

Browser Support

Browser   Version               WebGPU Status
Chrome    113+                  Enabled by default
Edge      113+                  Enabled by default
Firefox   Nightly               Requires dom.webgpu.enabled flag
Safari    Technology Preview    Requires feature flag
WebGPU is required for compute shader features (GPU frustum culling, GPU LOD). These features are silently disabled on browsers that only support WebGL — the viewer remains functional but may have lower performance with very large point clouds.

Next Steps