Browsing Datasets and Sequences
Dataset List
The Datasets page shows all datasets in your organization. Each entry displays the dataset name, data type, item count, and last-modified date. Navigate the list with:

| Shortcut | Action |
|---|---|
| Up Arrow / Down Arrow | Navigate between datasets |
| Enter | Open the selected dataset |
| Cmd + Enter | Open in a new tab |
Sequence View
Datasets that contain temporal data (video, MCAP, point cloud sequences) group items into sequences. Opening a dataset shows its sequences, each representing a continuous recording or collection run. Click a sequence to open it in the multi-sensor viewer, where all items in the sequence are laid out on the timeline.

Navigating Recordings
Once inside a recording, the multi-sensor viewer provides full playback and inspection controls.

Timeline Navigation
The timeline bar at the bottom of the viewer spans the full recording duration. Use it to:

- Play/pause continuous playback at adjustable speeds (0.25x to 4x)
- Step frame-by-frame with arrow keys for precise inspection
- Scrub by clicking and dragging on the timeline
- Jump to timestamps by clicking the timestamp display and entering a specific time
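Frame stepping and timestamp jumps both amount to locating the frame nearest a requested time. A minimal sketch of that lookup (illustrative only, not Avala's implementation; the frame timestamps are hypothetical):

```python
import bisect

def nearest_frame(timestamps, t):
    """Return the index of the frame whose timestamp is closest to t.

    timestamps: sorted list of per-frame capture times in seconds.
    """
    i = bisect.bisect_left(timestamps, t)
    if i == 0:
        return 0
    if i == len(timestamps):
        return len(timestamps) - 1
    # Pick whichever neighbor is nearer to the requested time
    return i if timestamps[i] - t < t - timestamps[i - 1] else i - 1

# Example: a 10 Hz stream; jumping to t = 0.52 s lands on the 0.5 s frame
frames = [0.0, 0.1, 0.2, 0.3, 0.4, 0.5, 0.6]
print(nearest_frame(frames, 0.52))  # prints 5
```

The same lookup applies whether the playhead moves by scrubbing, stepping, or an explicit timestamp entry: the viewer resolves the target time to the nearest captured frame.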
Panel-Level Inspection
Each panel supports independent zoom, pan, and interaction while remaining locked to the shared timeline:

| Panel Type | Inspection Actions |
|---|---|
| Image | Zoom, pan, inspect pixel values |
| 3D / Point Cloud | Rotate, pan, zoom, switch visualization modes |
| Plot | Hover for data values, zoom into time ranges |
| Map | Pan, zoom, follow vehicle position |
| Raw Messages | Expand nested fields, copy values |
| Log | Scroll through timestamped entries |
| Gauge | View current reading and range |
| State | View transition history |
Switching Visualization Modes
The 3D / Point Cloud panel supports six visualization modes, each revealing different information about the same data:

| Mode | What It Shows | When to Use |
|---|---|---|
| Neutral | Uniform color for all points | Inspecting point cloud density and coverage |
| Intensity | LiDAR return signal strength | Distinguishing materials (metal vs. fabric vs. pavement) |
| Rainbow | Cycling hue per frame | Distinguishing temporal frames and verifying alignment |
| Label | Semantic label color per annotation class | Reviewing labeled data and checking class assignments |
| Panoptic | Unique color per annotated instance | Verifying instance separation and tracking IDs |
| Image Projection | Camera pixel colors projected onto LiDAR points | Correlating 3D geometry with visual appearance |
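Under the hood, Image Projection mode is the standard pinhole camera model: transform each LiDAR point into the camera frame, then divide by depth and apply the intrinsics. A minimal sketch of that math (not Avala's implementation; the function name, intrinsic matrix `K`, and extrinsic transform `T_cam_lidar` are illustrative placeholders):

```python
import numpy as np

def project_lidar_to_image(points, K, T_cam_lidar):
    """Project Nx3 LiDAR points into pixel coordinates.

    points: (N, 3) array in the LiDAR frame.
    K: (3, 3) camera intrinsic matrix.
    T_cam_lidar: (4, 4) transform from the LiDAR frame to the camera frame.
    Returns (M, 2) pixel coordinates for points in front of the camera.
    """
    # Homogeneous coordinates, transformed into the camera frame
    pts_h = np.hstack([points, np.ones((points.shape[0], 1))])
    pts_cam = (T_cam_lidar @ pts_h.T)[:3]  # shape (3, N)

    # Keep only points with positive depth (in front of the camera)
    pts_cam = pts_cam[:, pts_cam[2] > 0]

    # Perspective projection: apply intrinsics, divide by depth
    uv = K @ pts_cam
    return (uv[:2] / uv[2]).T
```

Each surviving point then samples the camera pixel at its projected (u, v) location, which is what colors the point cloud in this mode.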
Keyboard shortcuts switch between the most common modes:

| Shortcut | Mode |
|---|---|
| 1 | Label color view |
| 2 | Intensity view |
| 3 | Image projection view |
Cross-Referencing Sensor Streams
One of the most effective exploration techniques is cross-referencing data across panels. The synchronized viewer makes this straightforward:

- LiDAR + Camera: Open the 3D panel alongside camera panels. As you step through frames, observe how 3D structures in the point cloud correspond to objects in the camera images. Enable LiDAR-to-camera projection for a direct overlay.
- LiDAR + Plot: Add a plot panel for IMU or velocity data. Correlate vehicle dynamics (acceleration, yaw rate) with what you see in the point cloud or camera views.
- Camera + Map: Pair camera views with the map panel to understand the geographic context of what the camera is seeing. Useful for fleet data where location matters.
- Camera + Log: View diagnostic logs alongside camera feeds to correlate software events with sensor observations.

Filtering and Searching
Query Language
Avala provides a structured query language for filtering dataset items. Use the search bar to write filter expressions, such as filtering items by a specific annotation label.

Using Slices
Slices are saved subsets of a dataset. Use them to organize data for exploration:

| Slice Strategy | Example |
|---|---|
| By scenario | highway, intersection, parking-lot |
| By condition | rainy, nighttime, heavy-traffic |
| By quality | needs-review, edge-cases, golden-set |
| By split | training, validation, test |
AutoTag
AutoTag automatically groups visually similar items using embedding-based similarity. This is useful for discovering patterns in your data without manual tagging:

- Find clusters of similar scenes (all highway on-ramps, all parking lots)
- Identify near-duplicates that may skew model training
- Discover underrepresented scenarios that need more data collection
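Embedding-based similarity of this kind typically reduces to comparing feature vectors. As a generic illustration of the idea behind near-duplicate detection (a cosine-similarity sketch over hypothetical embeddings, not Avala's internals):

```python
import numpy as np

def near_duplicates(embeddings, threshold=0.98):
    """Return index pairs whose cosine similarity exceeds threshold.

    embeddings: (N, D) array, one embedding vector per dataset item.
    """
    # Normalize rows so the dot product equals cosine similarity
    norms = np.linalg.norm(embeddings, axis=1, keepdims=True)
    unit = embeddings / norms
    sim = unit @ unit.T

    pairs = []
    for i in range(len(embeddings)):
        for j in range(i + 1, len(embeddings)):
            if sim[i, j] > threshold:
                pairs.append((i, j))
    return pairs
```

Items flagged by a check like this are candidates for deduplication before training, since near-identical frames can overweight a scenario in the training set.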
Exploration Workflow
A typical data exploration workflow before starting annotation:

1. Browse the dataset: Open the dataset and scan through sequences to understand the scope and variety of the data.
2. Play back representative recordings: Open a few sequences in the viewer and play them at 1x or 2x speed to get an overall sense of the data.
3. Switch visualization modes: Toggle between Neutral, Intensity, and Rainbow modes to understand point cloud quality and coverage.
4. Cross-reference sensors: Open camera and LiDAR panels side by side. Enable LiDAR projection to verify calibration accuracy.
5. Filter for specific scenarios: Use the query language or slices to find items matching conditions relevant to your annotation task (e.g., nighttime scenes, crowded intersections).