Modes Overview
| Mode | Purpose | Best For |
|---|---|---|
| Neutral | Uniform single color | Structural overview, shape inspection |
| Intensity | Color by return intensity | Surface material analysis, reflectivity |
| Rainbow | Cycling hue per frame | Distinguishing temporal frames in sequences |
| Label | Color by semantic class | Reviewing semantic segmentation annotations |
| Panoptic | Color by instance identity | Reviewing instance segmentation annotations |
| Image Projection | Camera RGB projected onto points | Fusing camera and LiDAR data visually |
Neutral
Neutral mode renders all points with a single HSL color. The default is white. This strips away all attribute-based coloring so you can focus on the geometry of the point cloud: the shape of objects, the distribution of points, and the overall scene structure. When to use: Initial scene inspection, verifying point cloud alignment, checking for gaps or artifacts in the data.
Intensity
Intensity mode maps each point’s return intensity value to a color gradient. The intensity range is divided into three bands, each using an RGB color interpolation:
| Band | Intensity Range | Color Gradient | Description |
|---|---|---|---|
| Low | 0 — 8 | Blue to Green | Weak returns (dark surfaces, distant objects) |
| Mid | 8 — 34 | Green to Yellow | Moderate returns (road surfaces, vegetation) |
| High | 34 — 255 | Yellow to Red | Strong returns (retroreflectors, lane markings, signs) |
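The banded gradient above can be sketched as follows. The band edges (8 and 34) and the blue→green→yellow→red endpoints come from the table; linear interpolation within each band is an assumption for illustration:

```python
def intensity_to_rgb(intensity: int) -> tuple[int, int, int]:
    # Each band maps a sub-range of 0-255 intensity onto a start→end color.
    bands = [
        (0, 8, (0, 0, 255), (0, 255, 0)),      # low: blue → green
        (8, 34, (0, 255, 0), (255, 255, 0)),   # mid: green → yellow
        (34, 255, (255, 255, 0), (255, 0, 0)), # high: yellow → red
    ]
    for lo, hi, start, end in bands:
        if intensity <= hi:
            t = (intensity - lo) / (hi - lo)  # position within the band, 0..1
            return tuple(round(s + (e - s) * t) for s, e in zip(start, end))
    return (255, 0, 0)  # clamp anything above the top band to red
```

A point at intensity 0 comes out pure blue, a retroreflector near 255 pure red, with smooth transitions at the band boundaries.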
Rainbow
Rainbow mode assigns a hue to each frame in a sequence using 6 evenly-spaced hues cycling across the color wheel (0°, 60°, 120°, 180°, 240°, 300°). Frames cycle through these hues in order, then repeat. Within each frame, point lightness varies by intensity: low-intensity points appear lighter and high-intensity points appear darker, using per-band lightness ranges (85%→74%, 74%→62%, 62%→50%). The 6-color palette ensures that adjacent frames are always visually distinct while keeping the overall scene readable. When to use: Visualizing motion over time, identifying frame boundaries in accumulated point clouds, verifying temporal alignment across sensors.
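A minimal sketch of this scheme, assuming the three lightness ranges correspond to the same intensity bands as Intensity mode, full saturation, and linear interpolation within each band:

```python
import colorsys

FRAME_HUES = [0, 60, 120, 180, 240, 300]  # degrees; 6 evenly spaced hues
# Per-band lightness ranges from the text: lighter at low intensity, darker at high
LIGHTNESS_BANDS = [(0, 8, 0.85, 0.74), (8, 34, 0.74, 0.62), (34, 255, 0.62, 0.50)]

def rainbow_color(frame_index: int, intensity: int) -> tuple[float, float, float]:
    # Hue cycles with the frame index; lightness interpolates within the
    # point's intensity band. Returns RGB channels in 0..1.
    hue = FRAME_HUES[frame_index % len(FRAME_HUES)] / 360.0
    lightness = 0.5
    for lo, hi, l_start, l_end in LIGHTNESS_BANDS:
        if intensity <= hi:
            t = (intensity - lo) / (hi - lo)
            lightness = l_start + (l_end - l_start) * t
            break
    return colorsys.hls_to_rgb(hue, lightness, 1.0)
```

Because the palette repeats every 6 frames, frame 0 and frame 6 share a hue, but any two adjacent frames always differ.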
Label
Label mode colors points by their semantic class assignment using a palette of 50 deterministic colors. Colors are generated using golden-ratio hue spreading, which distributes them evenly across the color wheel so that even classes with adjacent label indices are easy to distinguish. Each label index always maps to the same color, ensuring consistency across sessions and datasets. The 50-color palette covers most annotation taxonomies while maintaining visual distinctness between classes. When to use: Reviewing semantic segmentation ground truth, verifying label consistency across frames, comparing model predictions against annotations.
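Golden-ratio hue spreading can be sketched as below; the hue stepping is the standard technique, while the fixed lightness and saturation values are illustrative assumptions:

```python
import colorsys

GOLDEN_RATIO_CONJUGATE = 0.618033988749895

def label_palette(size: int = 50) -> list[tuple[int, int, int]]:
    # Stepping the hue by the golden-ratio conjugate makes consecutive
    # indices land far apart on the color wheel, so neighboring label
    # indices get visually distinct colors.
    colors = []
    for index in range(size):
        hue = (index * GOLDEN_RATIO_CONJUGATE) % 1.0
        r, g, b = colorsys.hls_to_rgb(hue, 0.5, 0.9)
        colors.append((round(r * 255), round(g * 255), round(b * 255)))
    return colors
```

The palette is a pure function of the index, which is what makes colors stable across sessions and datasets.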
Panoptic
Panoptic mode combines semantic and instance information. Each unique instance receives its own color, computed by hashing the instance ID through a golden-ratio function. This produces a deterministic but visually varied palette where each object instance is clearly distinguishable from its neighbors. Points that belong to a semantic category but have no assigned instance ID (unassigned points) fall back to the category’s base color from the label palette. When to use: Reviewing instance segmentation annotations, verifying that individual objects are correctly separated, checking panoptic segmentation quality.
Panoptic mode requires both semantic labels and instance IDs to be present in the annotation data. If only semantic labels are available, use Label mode instead.
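The instance hashing and the fallback for unassigned points might look like the following sketch; the function names and the fixed lightness/saturation are assumptions, not the tool's actual API:

```python
import colorsys

GOLDEN_RATIO_CONJUGATE = 0.618033988749895

def panoptic_hue(instance_id: int) -> float:
    # Golden-ratio hash: deterministic, so an instance keeps its color
    # across frames, while neighboring IDs map to well-separated hues.
    return (instance_id * GOLDEN_RATIO_CONJUGATE) % 1.0

def panoptic_color(semantic_label: int, instance_id, base_palette):
    # Points without an instance ID fall back to the class base color
    # from the label palette, as described above.
    if instance_id is None:
        return base_palette[semantic_label]
    r, g, b = colorsys.hls_to_rgb(panoptic_hue(instance_id), 0.5, 0.9)
    return (round(r * 255), round(g * 255), round(b * 255))
```

Note that the instance color depends only on the instance ID, so the same object keeps one color even if its semantic label is reviewed or corrected.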
Image Projection
Image projection mode maps camera RGB values onto LiDAR points using calibration data. For each point in the cloud, the system:
- Projects the 3D point into every available camera’s image plane using the appropriate camera model (pinhole or double-sphere)
- Discards cameras where the point falls outside the image bounds or depth range
- Among the remaining cameras, selects the one where the projected pixel is closest to the image center (principal point)
- Samples the pixel color at the projected coordinates and applies it to the point
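The camera-selection steps above can be sketched as follows. This is a minimal pinhole-only illustration: the class fields, depth limits, and `project` signature are assumptions, and the point is taken to be already in each camera's frame (the real pipeline would first apply per-camera extrinsics from calibration):

```python
import math
from dataclasses import dataclass

@dataclass
class PinholeCamera:
    # Minimal pinhole intrinsics; field names and depth limits are assumptions.
    fx: float
    fy: float
    cx: float
    cy: float
    width: int
    height: int
    min_depth: float = 0.1
    max_depth: float = 100.0

    def project(self, point):
        # Perspective projection of a camera-frame point; None if behind camera.
        x, y, z = point
        if z <= 0:
            return None
        return (self.fx * x / z + self.cx, self.fy * y / z + self.cy, z)

def pick_camera(point, cameras):
    # Mirror the steps above: project into every camera, discard projections
    # that are out of bounds or outside the depth range, then keep the camera
    # whose pixel lies closest to the principal point (cx, cy).
    best, best_dist = None, math.inf
    for cam in cameras:
        proj = cam.project(point)
        if proj is None:
            continue
        u, v, depth = proj
        if not (cam.min_depth <= depth <= cam.max_depth):
            continue
        if not (0 <= u < cam.width and 0 <= v < cam.height):
            continue
        dist = math.hypot(u - cam.cx, v - cam.cy)
        if dist < best_dist:
            best, best_dist = cam, dist
    return best
```

Choosing the camera closest to the principal point favors the view with the least lens distortion at that pixel, which tends to give the most accurate sampled color.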
Choosing a Mode
Inspecting geometry
Start with Neutral to see the raw point cloud shape, then switch to Intensity to understand surface properties.
Reviewing annotations
Use Label for semantic segmentation review, Panoptic for instance-level review.
Temporal analysis
Use Rainbow to see how frames overlap and verify temporal alignment in sequences.
Camera-LiDAR fusion
Use Image Projection to verify calibration and see the scene with camera colors mapped onto 3D points.
Related
- Camera Projection — How pinhole and double-sphere camera models map 3D points to pixel coordinates
- Point Cloud Rendering — Octree spatial indexing and chunked loading pipeline
- Point Cloud Panel — Panel configuration and interaction controls