## Supported Formats

| Schema | Format | Notes |
|---|---|---|
| `sensor_msgs/PointCloud2` | ROS point cloud | Standard LiDAR format with configurable fields |
| `foxglove.PointCloud` | Foxglove point cloud | Indexed points with typed fields |
| `sensor_msgs/LaserScan` | ROS laser scan | 2D LiDAR rendered as a 3D ring (requires keyword in topic name) |
## Auto-Detection

A topic is assigned to the Point Cloud panel when:

- Its schema is `sensor_msgs/PointCloud2` or `foxglove.PointCloud` (schema-matched), or
- Its topic name contains `lidar`, `pointcloud`, `velodyne`, or `points`.

`sensor_msgs/LaserScan` and `radar_msgs/RadarScan` are not schema-matched; they are routed to this panel only when the topic name contains one of the keywords above.
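The two-step rule above can be sketched in a few lines. This is an illustrative restatement of the detection logic, not the panel's actual source; the function name is hypothetical, while the schema names and keywords come from this page.

```typescript
// Schemas that match regardless of topic name.
const SCHEMA_MATCHED = new Set(["sensor_msgs/PointCloud2", "foxglove.PointCloud"]);
// Keywords that route a topic here by name alone (e.g. LaserScan/RadarScan topics).
const NAME_KEYWORDS = ["lidar", "pointcloud", "velodyne", "points"];

function routesToPointCloudPanel(topicName: string, schemaName: string): boolean {
  if (SCHEMA_MATCHED.has(schemaName)) {
    return true;
  }
  const lower = topicName.toLowerCase();
  return NAME_KEYWORDS.some((kw) => lower.includes(kw));
}
```

For example, a `sensor_msgs/LaserScan` topic named `/front_lidar/scan` matches via the `lidar` keyword, while the same schema on a topic named `/scan` would not be routed to this panel.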
## Visualization Modes

The panel supports six modes for coloring points, each designed for a different analysis task. Modes can be switched in real time without reloading data.

| Mode | Coloring | Best For |
|---|---|---|
| Neutral | Uniform single color | Structural overview |
| Intensity | Three-band gradient by return intensity | Surface material analysis |
| Rainbow | 6 evenly-spaced hues cycling per frame | Temporal frame distinction |
| Label | Deterministic color per semantic class | Segmentation review |
| Panoptic | Hashed color per instance ID | Instance segmentation review |
| Image Projection | Camera RGB projected onto points | Camera-LiDAR fusion |
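The Label and Panoptic modes depend on the mapping from ID to color being deterministic, so the same class or instance keeps its color across frames. A minimal sketch of one way to build such a mapping; the hash constant, saturation/lightness values, and function names are assumptions for illustration, not the panel's actual implementation:

```typescript
// Map an integer ID to a stable RGB color (components in [0, 1]).
function hashColor(id: number): [number, number, number] {
  // Knuth multiplicative hash scatters nearby IDs across the hue circle.
  const h = (id * 2654435761) >>> 0;
  const hue = (h % 360) / 360;
  return hslToRgb(hue, 0.7, 0.5);
}

// Standard HSL-to-RGB conversion (h, s, l all in [0, 1]).
function hslToRgb(h: number, s: number, l: number): [number, number, number] {
  const f = (n: number): number => {
    const k = (n + h * 12) % 12;
    const a = s * Math.min(l, 1 - l);
    return l - a * Math.max(-1, Math.min(k - 3, 9 - k, 1));
  };
  return [f(0), f(8), f(4)];
}
```

Because the color is a pure function of the ID, no per-frame color table needs to be stored or transmitted.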
## View Perspectives

The panel provides preset camera perspectives for common analysis tasks:

| View | Description | Use Case |
|---|---|---|
| Perspective | Free-orbit 3D camera | General exploration and annotation |
| Bird’s-eye | Top-down orthographic view | Overhead scene layout, object spacing |
| Side view | Lateral orthographic view | Height verification, ground plane alignment |
## Camera Controls
| Action | Control |
|---|---|
| Orbit | Left-click + drag |
| Pan | Right-click + drag (or middle-click + drag) |
| Zoom | Scroll wheel |
| Reset view | Press `R` |
## Camera Projection Overlay
When camera calibration data is available, the panel can overlay camera imagery onto the point cloud using the Image Projection visualization mode. The system projects each point into all available cameras and selects the one where the point falls closest to the image center (principal point), then maps that camera’s pixel color onto the point. This produces a photorealistic colored point cloud that combines the spatial accuracy of LiDAR with the visual detail of cameras. See Camera Projection for technical details on how the projection works.

## Rendering Pipeline
The Point Cloud panel uses the full rendering pipeline described in Point Cloud Rendering:

- Octree spatial indexing for hierarchical culling and level-of-detail
- Chunk-based loading for progressive streaming of large datasets
- Background workers for off-main-thread octree construction and point processing
- GPU acceleration via WebGPU when available (compute shaders, frustum culling, LOD selection)
The rendering pipeline adapts to your hardware automatically. WebGPU-capable browsers get GPU-accelerated compute and rendering. Other browsers fall back to WebGL with CPU-based processing.
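The WebGPU-or-WebGL decision described above can be sketched as follows. `navigator.gpu` and `requestAdapter()` are the standard WebGPU entry points; the backend names, fallback order, and function signature are illustrative assumptions, not the panel's actual code.

```typescript
type RenderBackend = "webgpu" | "webgl";

// Minimal structural type for the WebGPU entry point, so the sketch
// can be exercised outside a browser.
interface GpuEntryPoint {
  gpu?: { requestAdapter(): Promise<unknown | null> };
}

async function pickRenderBackend(nav: GpuEntryPoint): Promise<RenderBackend> {
  if (nav.gpu) {
    // WebGPU may be exposed but still unusable (e.g. blocklisted GPU),
    // so confirm an adapter is actually available.
    const adapter = await nav.gpu.requestAdapter();
    if (adapter) return "webgpu";
  }
  // Fall back to WebGL with CPU-side point processing.
  return "webgl";
}
```

In a browser this would be called with `navigator` itself, e.g. `await pickRenderBackend(navigator)`.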
## Interaction
### Point Selection
Click on individual points to inspect their attributes (position, intensity, label, instance ID). The viewer uses the octree spatial index for fast nearest-neighbor queries, enabling responsive point picking even in dense clouds.

### Synchronized Timeline
The Point Cloud panel is synchronized with all other panels through the shared timeline. When you seek, step, or play the recording, the point cloud updates to show the LiDAR scan at the current timestamp.

### Annotation Overlay
When viewing annotated datasets, 3D annotations (cuboids, segmentation masks) are rendered as overlays on the point cloud. The annotations are interactive: click to select, drag to reposition, and use handles to resize.

## Common Use Cases
### Scene exploration

Navigate large point clouds to understand the 3D environment, identify objects, and assess data quality.

### Annotation review

Verify 3D cuboid placement, segmentation coverage, and label accuracy across frames.

### Calibration verification

Use Image Projection mode to check that camera-LiDAR calibration produces accurate alignment.

### Multi-sensor analysis

View point clouds alongside camera, IMU, and GPS panels for full scene understanding.
## Related
- Visualization Modes — Detailed documentation of all six coloring modes
- Camera Projection — Pinhole and double-sphere projection models
- Point Cloud Rendering — Octree, chunking, and GPU acceleration details
- WebGPU Acceleration — Feature flags and performance components
- Image Panel — Camera image display with LiDAR overlay