Avala’s managed labeling service delivers production-quality annotations backed by a structured QA process, career domain experts, and measurable quality metrics. This page documents what you can expect when you use Avala’s workforce for annotation.

Quality Process

Every annotation passes through a three-layer quality assurance pipeline before it reaches your export.

Layer 1: Automated Checks

Before a human reviewer sees the result, automated validation catches structural errors.
| Check | What it catches |
| --- | --- |
| Schema validation | Missing required attributes, invalid label values, out-of-range coordinates |
| Geometric validation | Zero-area bounding boxes, self-intersecting polygons, cuboids outside the point cloud bounds |
| Consistency checks | Duplicate object IDs, broken tracking links across frames, label/attribute mismatches |
| Coverage checks | Unannotated regions that should have labels based on the project ontology |
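To make the first two check types concrete, here is a minimal sketch of structural validation for 2D bounding-box annotations. The field names (`label`, `bbox`) and the label set are illustrative assumptions, not Avala's actual schema:

```python
# Illustrative schema and geometric checks for a 2D bounding-box annotation.
# Field names and allowed labels are hypothetical, not Avala's real schema.

ALLOWED_LABELS = {"car", "pedestrian", "cyclist"}

def validate_annotation(ann, image_width, image_height):
    """Return a list of validation errors (empty list means the annotation passes)."""
    errors = []

    # Schema validation: required attributes and allowed label values
    for field in ("label", "bbox"):
        if field not in ann:
            errors.append(f"missing required field: {field}")
            return errors
    if ann["label"] not in ALLOWED_LABELS:
        errors.append(f"invalid label: {ann['label']}")

    # Geometric validation: coordinates in range and non-zero area
    x_min, y_min, x_max, y_max = ann["bbox"]
    if not (0 <= x_min < x_max <= image_width and 0 <= y_min < y_max <= image_height):
        errors.append("bbox out of range or zero-area")

    return errors

print(validate_annotation({"label": "car", "bbox": [10, 10, 50, 40]}, 640, 480))  # []
```

In practice these checks run on every annotation before human review, so reviewers only spend time on results that are already structurally sound.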

Layer 2: Human Review

A dedicated reviewer — a senior annotator with deep knowledge of your ontology — inspects each result for accuracy, completeness, and adherence to your labeling guidelines. Reviewers check for:
  • Correct object classification and attribute values
  • Tight bounding box / polygon / cuboid fit
  • Consistent object tracking across frames
  • Edge cases handled per your project-specific instructions

Layer 3: Expert Audit

A random sample of reviewed results is escalated to domain experts for a final audit. This layer calibrates reviewer accuracy and catches systematic issues before they affect your training data. Audit findings feed back into annotator training and guideline refinements, creating a continuous improvement loop.

Accuracy Targets

| Metric | Target |
| --- | --- |
| First-pass yield | > 99% of annotations accepted without rework |
| Classification accuracy | > 99% correct label assignment |
| Localization accuracy | Bounding box IoU > 0.90 with ground truth |
| Tracking consistency | > 99% correct object ID continuity across frames |
| Attribute accuracy | > 99% correct attribute values (occlusion, truncation, etc.) |
Accuracy targets apply to Avala’s managed labeling service. Self-service annotation accuracy depends on your team’s annotators and QA configuration.
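The localization target is stated as intersection-over-union (IoU) against ground truth. For reference, IoU for axis-aligned boxes can be computed as follows; this is a standalone sketch, not part of the Avala SDK:

```python
def iou(box_a, box_b):
    """Intersection-over-union of two axis-aligned boxes (x_min, y_min, x_max, y_max)."""
    ix_min = max(box_a[0], box_b[0])
    iy_min = max(box_a[1], box_b[1])
    ix_max = min(box_a[2], box_b[2])
    iy_max = min(box_a[3], box_b[3])
    inter = max(0, ix_max - ix_min) * max(0, iy_max - iy_min)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

# A 100x100 box shifted by 2 px still clears the 0.90 bar (IoU ≈ 0.96);
# larger misalignments quickly fall below it.
print(iou((0, 0, 100, 100), (2, 2, 100, 100)))
```

An IoU threshold of 0.90 is a tight fit requirement: even small amounts of slack or misalignment around an object will fail it.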

Turnaround Times

Turnaround depends on annotation complexity and volume. The table below shows typical timelines for common annotation types when using Avala’s managed labeling service.
| Annotation Type | Typical Turnaround | Notes |
| --- | --- | --- |
| 2D bounding boxes (images) | 1-3 business days | Standard object detection |
| 2D polygons (images) | 2-5 business days | Instance segmentation |
| Semantic segmentation (images) | 3-7 business days | Pixel-level classification |
| 3D cuboids (LiDAR) | 3-7 business days | Point cloud annotation with BEV + perspective views |
| Multi-sensor 3D (LiDAR + camera) | 5-10 business days | Synchronized sensor annotation |
| Video object tracking | 3-7 business days | Per-sequence; depends on frame count and object density |
| Keypoint annotation | 2-5 business days | Pose estimation and landmark labeling |
Turnaround begins when data is uploaded and the project ontology is finalized. Pilot datasets (< 1,000 items) can often be completed faster.
Need a specific SLA for your project? Contact sales@avala.ai to discuss guaranteed turnaround commitments.

Workforce Quality

Domain Specialization

Avala’s annotators are career professionals, not gig workers. Each annotator specializes in a specific domain (autonomous driving, robotics, medical imaging) for 12 months or more.
| Attribute | Details |
| --- | --- |
| Specialization period | 12+ months on a single customer domain |
| Training | Project-specific onboarding with your ontology, edge case library, and labeling guidelines |
| Retention rate | > 90% annual retention — annotators build deep institutional knowledge |
| Team size | 15,000+ annotators across all domains |

Why Retention Matters

High annotator retention directly impacts data quality:
  • Institutional knowledge — Annotators learn your edge cases, naming conventions, and domain-specific nuances over time. A new annotator takes weeks to reach the same level.
  • Fewer rework cycles — Experienced annotators produce fewer errors on the first pass, reducing review overhead and turnaround time.
  • Ontology evolution — When you update your label taxonomy, experienced annotators adapt faster because they understand the reasoning behind the changes.

Quality Metrics via API

Quality metrics for your projects are available programmatically through the API and SDKs.

Project-Level Metrics

```python
from avala import Client

client = Client()

# Get project details including quality metrics
projects = client.projects.list()

for project in projects:
    print(f"Project: {project.name}")
    print(f"  Tasks completed: {project.task_count}")
```
Task-Level Quality Data

```python
# List tasks with their review status
tasks = client.tasks.list(project="project_uid")

for task in tasks:
    print(f"Task: {task.uid}")
    print(f"  Status: {task.status}")
```

Export with Quality Metadata

When you create an export, each annotation result includes its QA review status, allowing you to filter by quality level in your training pipeline.
```python
export = client.exports.create(
    name="Training data - QA passed only",
    format="avala-json-external",
    projects=["project_uid"]
)
```
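Once downloaded, the per-annotation review status can be used to keep only QA-passed results in a training pipeline. The sketch below assumes a JSON list of results with a `review_status` field; the actual field names and values in Avala's export format may differ, so check a real export file for the exact keys:

```python
import json

def accepted_only(export_path):
    """Load an exported result file and keep only QA-accepted annotations.

    The "review_status" field name and "accepted" value are assumptions
    about the export schema, used here for illustration.
    """
    with open(export_path) as f:
        results = json.load(f)
    return [r for r in results if r.get("review_status") == "accepted"]
```

Filtering at ingestion time like this keeps rejected or still-in-review annotations out of your training data without any changes upstream.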

Quality Control Configuration

For self-service annotation, Avala provides configurable QA workflows.
| Feature | Description |
| --- | --- |
| Multi-stage review | Route annotations through one or more review stages before acceptance |
| Consensus workflows | Require multiple annotators to agree on the same label |
| Acceptance criteria | Set minimum quality thresholds for task acceptance |
| Issue tracking | Flag and track annotation issues with comments and resolution status |
| Inter-annotator agreement | Measure consistency across annotators on the same data |
See Quality Control for setup instructions.
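As an illustration of what an inter-annotator agreement measurement does, the simplest form is mean pairwise agreement on classification labels. This is a generic sketch of the concept, not Avala's implementation (which may use a chance-corrected statistic such as Cohen's kappa):

```python
from itertools import combinations

def pairwise_agreement(labels_by_annotator):
    """Mean fraction of items on which each pair of annotators assigns the same label.

    labels_by_annotator: list of equal-length label lists, one per annotator.
    """
    scores = []
    for a, b in combinations(labels_by_annotator, 2):
        matches = sum(x == y for x, y in zip(a, b))
        scores.append(matches / len(a))
    return sum(scores) / len(scores)

# Two annotators agreeing on 3 of 4 items score 0.75.
print(pairwise_agreement([
    ["car", "car", "pedestrian", "cyclist"],
    ["car", "truck", "pedestrian", "cyclist"],
]))  # 0.75
```

Low agreement on a batch usually signals ambiguous guidelines rather than careless annotators, which is why agreement metrics feed back into guideline refinement.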

Next Steps