Quality Process
Every annotation passes through a 3-layer quality assurance pipeline before it reaches your export.

Layer 1: Automated Checks
Before a human reviewer sees the result, automated validation catches structural errors.

| Check | What it catches |
|---|---|
| Schema validation | Missing required attributes, invalid label values, out-of-range coordinates |
| Geometric validation | Zero-area bounding boxes, self-intersecting polygons, cuboids outside the point cloud bounds |
| Consistency checks | Duplicate object IDs, broken tracking links across frames, label/attribute mismatches |
| Coverage checks | Unannotated regions that should have labels based on the project ontology |
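The schema and geometric checks can be thought of as simple predicates evaluated over each annotation. Below is a minimal sketch of that style of check; the bounding-box field names and image dimensions are assumptions made for illustration, not Avala's actual export schema or validator.

```python
# Illustrative Layer-1 style checks. The annotation layout (x, y, width,
# height, label) is an assumption for this example.

def validate_bbox(bbox: dict, image_w: int, image_h: int) -> list[str]:
    """Return a list of validation errors for a single 2D bounding box."""
    errors = []

    # Schema validation: required attributes must be present.
    for key in ("x", "y", "width", "height", "label"):
        if key not in bbox:
            errors.append(f"missing required attribute: {key}")
            return errors

    # Geometric validation: zero-area boxes are rejected.
    if bbox["width"] <= 0 or bbox["height"] <= 0:
        errors.append("zero-area bounding box")

    # Geometric validation: coordinates must stay inside the image.
    if bbox["x"] < 0 or bbox["y"] < 0:
        errors.append("negative coordinates")
    if bbox["x"] + bbox["width"] > image_w or bbox["y"] + bbox["height"] > image_h:
        errors.append("bounding box extends outside the image")

    return errors


print(validate_bbox({"x": 10, "y": 20, "width": 0, "height": 50, "label": "car"}, 1920, 1080))
# ['zero-area bounding box']
```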
Layer 2: Human Review
A dedicated reviewer — a senior annotator with deep knowledge of your ontology — inspects each result for accuracy, completeness, and adherence to your labeling guidelines. Reviewers check for:

- Correct object classification and attribute values
- Tight bounding box / polygon / cuboid fit
- Consistent object tracking across frames
- Edge cases handled per your project-specific instructions
Layer 3: Expert Audit
A random sample of reviewed results is escalated to domain experts for a final audit. This layer calibrates reviewer accuracy and catches systematic issues before they affect your training data. Audit findings feed back into annotator training and guideline refinements, creating a continuous improvement loop.

Accuracy Targets
| Metric | Target |
|---|---|
| First-pass yield | > 99% of annotations accepted without rework |
| Classification accuracy | > 99% correct label assignment |
| Localization accuracy | Bounding box IoU > 0.90 with ground truth |
| Tracking consistency | > 99% correct object ID continuity across frames |
| Attribute accuracy | > 99% correct attribute values (occlusion, truncation, etc.) |
Accuracy targets apply to Avala’s managed labeling service. Self-service annotation accuracy depends on your team’s annotators and QA configuration.
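The localization target is stated as intersection-over-union (IoU) against ground truth. For reference, here is a minimal sketch of how IoU is computed for axis-aligned boxes; the `(x1, y1, x2, y2)` corner format is an assumption made for illustration.

```python
def iou(box_a: tuple, box_b: tuple) -> float:
    """Intersection-over-union for axis-aligned boxes in (x1, y1, x2, y2) format."""
    # Intersection rectangle.
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)

    # Union = sum of areas minus intersection.
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0


# An annotation meets the localization target when IoU with ground truth exceeds 0.90.
print(iou((100, 100, 300, 300), (102, 101, 302, 299)) > 0.90)  # True
```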
Turnaround Times
Turnaround depends on annotation complexity and volume. The table below shows typical timelines for common annotation types when using Avala’s managed labeling service.

| Annotation Type | Typical Turnaround | Notes |
|---|---|---|
| 2D bounding boxes (images) | 1-3 business days | Standard object detection |
| 2D polygons (images) | 2-5 business days | Instance segmentation |
| Semantic segmentation (images) | 3-7 business days | Pixel-level classification |
| 3D cuboids (LiDAR) | 3-7 business days | Point cloud annotation with BEV + perspective views |
| Multi-sensor 3D (LiDAR + camera) | 5-10 business days | Synchronized sensor annotation |
| Video object tracking | 3-7 business days | Per-sequence, depends on frame count and object density |
| Keypoint annotation | 2-5 business days | Pose estimation and landmark labeling |
Workforce Quality
Domain Specialization
Avala’s annotators are career professionals, not gig workers. Each annotator specializes in a specific domain (autonomous driving, robotics, medical imaging) for 12 months or more.

| Attribute | Details |
|---|---|
| Specialization period | 12+ months on a single customer domain |
| Training | Project-specific onboarding with your ontology, edge case library, and labeling guidelines |
| Retention rate | > 90% annual retention — annotators build deep institutional knowledge |
| Team size | 15,000+ annotators across all domains |
Why Retention Matters
High annotator retention directly impacts data quality:

- Institutional knowledge — Annotators learn your edge cases, naming conventions, and domain-specific nuances over time. A new annotator takes weeks to reach the same level.
- Fewer rework cycles — Experienced annotators produce fewer errors on the first pass, reducing review overhead and turnaround time.
- Ontology evolution — When you update your label taxonomy, experienced annotators adapt faster because they understand the reasoning behind the changes.
Quality Metrics via API
Quality metrics for your projects are available programmatically through the API and SDKs.

Project-Level Metrics
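A hedged sketch of pulling project-level quality metrics over HTTP with Python. The endpoint path, field names, and response shape below are illustrative assumptions, not the documented API surface; consult the API reference for the actual routes.

```python
import requests

# Assumed base URL, route, and response fields, shown only to illustrate the flow.
API_BASE = "https://api.avala.ai/v1"
HEADERS = {"Authorization": "Bearer YOUR_API_KEY"}

resp = requests.get(f"{API_BASE}/projects/PROJECT_ID/quality-metrics", headers=HEADERS)
resp.raise_for_status()
metrics = resp.json()

print(metrics["first_pass_yield"])         # e.g. 0.993
print(metrics["classification_accuracy"])  # e.g. 0.995
```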
Task-Level Quality Data
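Per-task quality data can be retrieved in the same style. Again, the route and the `review_status` filter parameter are assumptions made for this sketch.

```python
import requests

API_BASE = "https://api.avala.ai/v1"   # assumed base URL, as in the sketch above
HEADERS = {"Authorization": "Bearer YOUR_API_KEY"}

# Hypothetical query: list tasks that were rejected during review.
resp = requests.get(
    f"{API_BASE}/projects/PROJECT_ID/tasks",
    headers=HEADERS,
    params={"review_status": "rejected"},
)
resp.raise_for_status()

for task in resp.json()["tasks"]:
    print(task["id"], task["review_status"], task.get("rejection_reason"))
```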
Export with Quality Metadata
When you create an export, each annotation result includes its QA review status, allowing you to filter by quality level in your training pipeline.
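For example, a training pipeline might drop anything that did not pass review. The JSON layout below (a top-level `results` list with a `review_status` field) is an assumption for this sketch, not the documented export schema.

```python
import json

# Keep only results that passed QA review (field names assumed for illustration).
with open("export.json") as f:
    export = json.load(f)

accepted = [r for r in export["results"] if r.get("review_status") == "accepted"]
print(f"{len(accepted)} / {len(export['results'])} results passed QA review")
```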
Quality Control Configuration
For self-service annotation, Avala provides configurable QA workflows.

| Feature | Description |
|---|---|
| Multi-stage review | Route annotations through one or more review stages before acceptance |
| Consensus workflows | Require multiple annotators to agree on the same label |
| Acceptance criteria | Set minimum quality thresholds for task acceptance |
| Issue tracking | Flag and track annotation issues with comments and resolution status |
| Inter-annotator agreement | Measure consistency across annotators on the same data |
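Inter-annotator agreement is commonly reported as a chance-corrected statistic such as Cohen's kappa. The snippet below is a self-contained sketch of that statistic for two annotators; it illustrates the metric itself, not Avala's implementation of it.

```python
from collections import Counter

def cohens_kappa(labels_a: list, labels_b: list) -> float:
    """Cohen's kappa for two annotators labeling the same items."""
    n = len(labels_a)
    observed = sum(a == b for a, b in zip(labels_a, labels_b)) / n

    # Expected chance agreement, from each annotator's label frequencies.
    freq_a, freq_b = Counter(labels_a), Counter(labels_b)
    expected = sum((freq_a[c] / n) * (freq_b[c] / n) for c in freq_a)
    return (observed - expected) / (1 - expected) if expected < 1 else 1.0

a = ["car", "car", "pedestrian", "cyclist", "car"]
b = ["car", "truck", "pedestrian", "cyclist", "car"]
print(round(cohens_kappa(a, b), 3))  # 0.706
```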
Next Steps
Quality Control Guide
Configure multi-stage review, consensus, and acceptance workflows for your projects.
Traceability
Trace any annotation back to its source data, annotator, and QA review.
Why Avala
See what makes Avala different from Scale AI, Labelbox, and Label Studio.
Talk to Sales
Discuss managed labeling, custom SLAs, and enterprise deployment.