What Traceability Means in Avala
Every annotation in Avala carries full lineage metadata:

| Entity | What is tracked |
|---|---|
| Dataset Item | Source file URL, upload timestamp, sequence membership, sensor metadata |
| Task | Assigned annotator, creation time, completion time, state transitions |
| Result | Annotation data, tool used, annotator ID, submission timestamp |
| QA Review | Reviewer ID, review decision (accept/reject/fix), review comments |
| Export | Export format, included datasets/projects/slices, creation timestamp, version |
Walkthrough: Debugging a Model Failure
Here is a concrete example of how traceability helps you debug a production model issue.
1. Model fails on an edge case
Your perception model misclassifies a partially occluded pedestrian in a LiDAR scan. You identify the prediction and want to understand why the model learned this behavior.
2. Find the training data
Use the SDK to search your exports for the dataset items that contributed to the model's training set.
3. Inspect individual results
Each result in the export includes the source dataset item, annotator information, and QA status.
4. Trace back to the source
Once you identify the problematic label, you can look up the original dataset item to see its source file, sensor metadata, and full annotation history.
5. Fix and retrain
With the root cause identified, for example an annotation error on the occluded pedestrian, you fix the label in Avala, create a new export, and retrain your model with corrected data.
Benefits
Reproducibility
Every export is versioned. You can recreate the exact training set used for any model version by referencing the export UID. No guessing which labels were included or excluded.
Faster debugging
Instead of manually searching through thousands of annotations to find an error, you trace directly from the model's failure to the specific label that caused it. What used to take days takes minutes.
Compliance and audit trails
For regulated industries (automotive, medical, defense), traceability provides the documentation trail that auditors require. Every annotation decision is attributed, timestamped, and linked to its QA review.
Continuous improvement
Track annotation quality over time by correlating model performance with specific annotators, review stages, and dataset versions. Identify systematic labeling issues before they propagate through your training pipeline.
Traceability via the API
All traceability data is available through the REST API and SDKs. Key endpoints:

| Endpoint | What it returns |
|---|---|
| GET /api/v1/exports/{uid}/ | Export metadata including datasets, projects, and creation timestamp |
| GET /api/v1/tasks/ | Task list with status, annotator, and dataset item references |
| GET /api/v1/datasets/{uid}/items/ | Dataset items with source URLs and sequence membership |
| GET /api/v1/datasets/{uid}/sequences/ | Sequences with frame count and item references |
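As a rough sketch, the endpoints above can be called with nothing more than the Python standard library. The base URL and token-style Authorization header here are assumptions; substitute the values for your own deployment:

```python
import json
import urllib.request

# Assumed values -- replace with your deployment's base URL and API token.
BASE_URL = "https://app.avala.ai"
API_TOKEN = "<your-api-token>"

def url_for(path_template: str, uid: str = "") -> str:
    """Fill the {uid} placeholder in one of the endpoint paths above."""
    return BASE_URL + path_template.format(uid=uid)

def get_json(url: str) -> dict:
    """GET a traceability endpoint and decode the JSON body."""
    req = urllib.request.Request(url, headers={"Authorization": f"Token {API_TOKEN}"})
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

# e.g. export lineage (datasets, projects, creation timestamp):
# export_meta = get_json(url_for("/api/v1/exports/{uid}/", "your-export-uid"))
# e.g. dataset items with source URLs and sequence membership:
# items = get_json(url_for("/api/v1/datasets/{uid}/items/", "your-dataset-uid"))
```

The same lookups are available through the SDKs if you prefer not to build requests by hand.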
Next Steps
Quality Control
Learn how Avala’s multi-stage QA workflows catch annotation errors before they reach your model.
Exports
Create versioned exports of your annotated data with full lineage metadata.
Quality SLAs
Understand Avala’s quality guarantees, accuracy targets, and turnaround times.
Python SDK
Install the SDK and start querying your data programmatically.