Avala provides six visualization modes that control how point cloud data is colored in the 3D viewer. Each mode maps point attributes to colors differently, optimized for specific analysis tasks. Modes can be switched in real-time without reloading the data.

Modes Overview

| Mode | Purpose | Best For |
| --- | --- | --- |
| Neutral | Uniform single color | Structural overview, shape inspection |
| Intensity | Color by return intensity | Surface material analysis, reflectivity |
| Rainbow | Cycling hue per frame | Distinguishing temporal frames in sequences |
| Label | Color by semantic class | Reviewing semantic segmentation annotations |
| Panoptic | Color by instance identity | Reviewing instance segmentation annotations |
| Image Projection | Camera RGB projected onto points | Fusing camera and LiDAR data visually |

Neutral

Neutral mode renders all points with a single HSL color (white by default). This strips away all attribute-based coloring so you can focus on the geometry of the point cloud — the shape of objects, the distribution of points, and the overall scene structure. When to use: Initial scene inspection, verifying point cloud alignment, checking for gaps or artifacts in the data.

Intensity

Intensity mode maps each point’s return intensity value to a color gradient. The intensity range is divided into three bands, each with its own RGB color interpolation:
| Band | Intensity Range | Color Gradient | Description |
| --- | --- | --- | --- |
| Low | 0–8 | Blue to Green | Weak returns (dark surfaces, distant objects) |
| Mid | 8–34 | Green to Yellow | Moderate returns (road surfaces, vegetation) |
| High | 34–255 | Yellow to Red | Strong returns (retroreflectors, lane markings, signs) |
Colors interpolate smoothly within each band and meet at the band boundaries, producing a continuous gradient from blue (lowest intensity) through green and yellow to red (highest intensity). When to use: Identifying surface materials, spotting lane markings and road signs, distinguishing asphalt from painted lines, detecting retroreflective surfaces.
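The banded interpolation above can be sketched in a few lines of Python. The band edges come from the table; the exact RGB endpoint colors (pure blue, green, yellow, red) and the function name are illustrative assumptions, not the viewer's actual implementation.

```python
def intensity_to_rgb(intensity: float) -> tuple[int, int, int]:
    """Map a return intensity (0-255) to an RGB color via three bands.

    Band edges follow the Intensity table; the endpoint colors are
    assumed pure blue/green/yellow/red for illustration.
    """
    # (band_start, band_end, start_color, end_color)
    bands = [
        (0,  8,   (0, 0, 255),   (0, 255, 0)),    # Low:  blue -> green
        (8,  34,  (0, 255, 0),   (255, 255, 0)),  # Mid:  green -> yellow
        (34, 255, (255, 255, 0), (255, 0, 0)),    # High: yellow -> red
    ]
    i = max(0.0, min(255.0, intensity))           # clamp to valid range
    for lo, hi, c0, c1 in bands:
        if i <= hi:
            t = (i - lo) / (hi - lo)              # position within band, 0..1
            return tuple(round(a + t * (b - a)) for a, b in zip(c0, c1))
    return bands[-1][3]
```

Because each band's end color equals the next band's start color, the gradient is continuous across the boundaries at 8 and 34.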

Rainbow

Rainbow mode assigns a hue to each frame in a sequence using six evenly spaced hues cycling around the color wheel (0°, 60°, 120°, 180°, 240°, 300°). Frames cycle through these hues in order, then repeat. Within each frame, point lightness varies by intensity — low-intensity points appear lighter and high-intensity points appear darker, using per-band lightness ranges (85%→74%, 74%→62%, 62%→50%). The six-color palette ensures that adjacent frames are always visually distinct while keeping the overall scene readable. When to use: Visualizing motion over time, identifying frame boundaries in accumulated point clouds, verifying temporal alignment across sensors.
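The frame-hue cycling and intensity-driven lightness can be sketched as follows. The hue list and lightness ranges come from the text; the assumption that the lightness bands share the Intensity mode's band edges (8 and 34), along with the saturation value and function name, are illustrative.

```python
import colorsys

HUES = [0, 60, 120, 180, 240, 300]  # six hues, one per frame, cycling

# (intensity_lo, intensity_hi, lightness_start, lightness_end) per band;
# the band edges mirror the Intensity table (an assumption of this sketch)
LIGHTNESS_BANDS = [
    (0, 8, 0.85, 0.74),
    (8, 34, 0.74, 0.62),
    (34, 255, 0.62, 0.50),
]

def rainbow_color(frame_index: int, intensity: float, saturation: float = 1.0):
    """RGB color for a point: hue from the frame, lightness from intensity."""
    hue = HUES[frame_index % len(HUES)] / 360.0
    i = max(0.0, min(255.0, intensity))
    lightness = LIGHTNESS_BANDS[-1][3]
    for lo, hi, l0, l1 in LIGHTNESS_BANDS:
        if i <= hi:
            t = (i - lo) / (hi - lo)              # position within band
            lightness = l0 + t * (l1 - l0)
            break
    # colorsys uses HLS ordering (hue, lightness, saturation)
    return colorsys.hls_to_rgb(hue, lightness, saturation)
```

Frame 6 wraps back to the same hue as frame 0, which is why adjacent frames stay distinct but long sequences reuse colors.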

Label

Label mode colors points by their semantic class assignment using a palette of 50 deterministic colors. Colors are generated using golden-ratio hue spreading, which distributes them evenly across the color wheel so that colors for adjacent label indices remain easy to tell apart. Each label index always maps to the same color, ensuring consistency across sessions and datasets. The 50-color palette covers most annotation taxonomies while maintaining visual distinctness between classes. When to use: Reviewing semantic segmentation ground truth, verifying label consistency across frames, comparing model predictions against annotations.
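Golden-ratio hue spreading is a standard trick for generating deterministic, well-separated palettes: advancing the hue by the golden-ratio conjugate each step distributes hues quasi-evenly around the wheel. A minimal sketch, in which the saturation and lightness constants and the function name are assumptions:

```python
import colorsys

GOLDEN_RATIO_CONJUGATE = 0.618033988749895  # step between successive hues

def label_palette(n: int = 50, saturation: float = 0.65, lightness: float = 0.55):
    """Deterministic n-color palette via golden-ratio hue spreading.

    The saturation/lightness constants here are illustrative; the
    viewer's exact values are not documented.
    """
    colors = []
    hue = 0.0
    for _ in range(n):
        colors.append(colorsys.hls_to_rgb(hue, lightness, saturation))
        hue = (hue + GOLDEN_RATIO_CONJUGATE) % 1.0  # wrap around the wheel
    return colors
```

Because the sequence is computed rather than random, label index 7 gets the same color in every session, which is what makes cross-dataset review consistent.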

Panoptic

Panoptic mode combines semantic and instance information. Each unique instance receives its own color, computed by hashing the instance ID through a golden-ratio function. This produces a deterministic but visually varied palette where each object instance is clearly distinguishable from its neighbors. Points that belong to a semantic category but have no assigned instance ID (unassigned points) fall back to the category’s base color from the label palette. When to use: Reviewing instance segmentation annotations, verifying that individual objects are correctly separated, checking panoptic segmentation quality.
Panoptic mode requires both semantic labels and instance IDs to be present in the annotation data. If only semantic labels are available, use Label mode instead.
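The instance-coloring rule described above can be sketched like this. The golden-ratio hashing of the instance ID follows the text; the function name, the saturation/lightness constants, and the use of a precomputed label palette for the fallback are assumptions of this sketch.

```python
import colorsys

GOLDEN_RATIO_CONJUGATE = 0.618033988749895

def panoptic_color(semantic_label: int, instance_id, base_palette):
    """Color for a point in Panoptic mode (illustrative sketch).

    Instances get a hue derived from the instance ID via golden-ratio
    hashing; points with no instance ID fall back to the semantic
    label's base color from the label palette.
    """
    if instance_id is None:
        # Unassigned points: use the category's base color
        return base_palette[semantic_label % len(base_palette)]
    # Deterministic, well-spread hue for this instance
    hue = (instance_id * GOLDEN_RATIO_CONJUGATE) % 1.0
    return colorsys.hls_to_rgb(hue, 0.55, 0.8)  # lightness/saturation assumed
```

The hash is deterministic, so the same object keeps its color across frames, while neighboring instance IDs land far apart on the color wheel.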

Image Projection

Image projection mode maps camera RGB values onto LiDAR points using calibration data. For each point in the cloud, the system:
  1. Projects the 3D point into every available camera’s image plane using the appropriate camera model (pinhole or double-sphere)
  2. Discards cameras where the point falls outside the image bounds or depth range
  3. Among the remaining cameras, selects the one where the projected pixel is closest to the image center (principal point)
  4. Samples the pixel color at the projected coordinates and applies it to the point
When multiple cameras cover the scene, this principal-point-proximity selection produces seamless color mapping across the full field of view, favoring each camera’s sharpest region. When to use: Visual fusion of camera and LiDAR data, verifying camera-LiDAR calibration quality, presenting point clouds with photorealistic coloring for reports and presentations.
Image projection requires calibration data (camera intrinsics and extrinsics) to be present in the recording. See Camera Projection for details on supported camera models.
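The four-step selection procedure can be sketched for the pinhole case. This is a simplified stand-in, not the viewer's implementation: the function name and the `cameras` dictionary layout (`K` intrinsics, `T` world-to-camera extrinsics, `size`) are assumptions, only a behind-the-camera depth test is shown (a maximum-depth cutoff is omitted), and the double-sphere model is not covered.

```python
import numpy as np

def pick_camera_color(point_xyz, cameras, images):
    """Color one LiDAR point from the best-covering camera (sketch).

    `cameras` is a list of dicts with assumed keys: 'K' (3x3 intrinsics),
    'T' (4x4 world-to-camera extrinsics), 'size' (width, height).
    `images` holds the matching RGB arrays, indexed [row, col].
    """
    best = None  # (distance_to_principal_point, camera_index, pixel)
    p_world = np.append(np.asarray(point_xyz, dtype=float), 1.0)
    for idx, cam in enumerate(cameras):
        p_cam = cam["T"] @ p_world                 # 1. into the camera frame
        depth = p_cam[2]
        if depth <= 0:                             # 2a. behind the camera
            continue
        uvw = cam["K"] @ p_cam[:3]
        u, v = uvw[0] / depth, uvw[1] / depth      # perspective divide
        w, h = cam["size"]
        if not (0 <= u < w and 0 <= v < h):        # 2b. outside image bounds
            continue
        cx, cy = cam["K"][0, 2], cam["K"][1, 2]    # principal point
        dist = np.hypot(u - cx, v - cy)
        if best is None or dist < best[0]:         # 3. closest to center wins
            best = (dist, idx, (int(u), int(v)))
    if best is None:
        return None                                # no camera sees the point
    _, idx, (u, v) = best
    return images[idx][v, u]                       # 4. sample the pixel color
```

Preferring the camera whose principal point is nearest to the projected pixel favors each camera's least-distorted region, which is what keeps the seams between overlapping cameras unobtrusive.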

Choosing a Mode

Inspecting geometry

Start with Neutral to see the raw point cloud shape, then switch to Intensity to understand surface properties.

Reviewing annotations

Use Label for semantic segmentation review, Panoptic for instance-level review.

Temporal analysis

Use Rainbow to see how frames overlap and verify temporal alignment in sequences.

Camera-LiDAR fusion

Use Image Projection to verify calibration and see the scene with camera colors mapped onto 3D points.