This guide covers the quality control features in Mission Control, including review workflows, issue tracking, and quality metrics.

Overview

Quality control in Avala ensures that annotations meet your accuracy and consistency standards before they are used for model training or analysis. The QC system provides:
  • Review workflows: Multi-stage review pipelines with configurable reviewers
  • Annotation issues: A structured way to flag, track, and resolve problems
  • Quality metrics: Quantitative measures of annotation quality
  • Consensus scoring: Multi-annotator agreement for validation

Review Workflows

How Reviews Work

When an annotator completes a sequence, it enters a review stage. Reviewers examine the annotations and either approve the work or send it back for correction.
Annotator submits → Review stage → Approved → Complete
                                 ↘ Rejected → Rework → Re-submit
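The flow above can be sketched as a small transition map. This is an illustrative model only; the state and event names are assumptions, not Avala's actual API.

```python
# Hypothetical transition map for the review flow shown above.
REVIEW_TRANSITIONS = {
    ("submitted", "approve"): "complete",   # Review stage → Approved → Complete
    ("submitted", "reject"): "rework",      # Review stage → Rejected → Rework
    ("rework", "resubmit"): "submitted",    # Rework → Re-submit → Review stage
}

def next_state(state: str, event: str) -> str:
    """Return the next review state, or raise on an invalid transition."""
    try:
        return REVIEW_TRANSITIONS[(state, event)]
    except KeyError:
        raise ValueError(f"invalid transition: {event!r} from {state!r}")
```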

Reviewer Assignment

Assign reviewers at the project level:
  1. Go to Projects → select your project → Settings
  2. Under Quality Control, click Configure Review
  3. Add reviewers:
    • Specific users: Assign named reviewers
    • Team: Any member of a team can review
    • Auto-assign: Distribute review work evenly across designated reviewers
  4. Save the configuration
Reviewers should not review their own annotations. Avala automatically prevents self-review when auto-assignment is enabled.

Review Stages

Avala supports multi-stage review for projects that require multiple levels of quality assurance.
Stage | Purpose | Who Reviews
First review | Check annotation accuracy and completeness | Team annotators or QA specialists
Final review | Verify overall quality before approval | Senior reviewers or project managers

Configuring Multiple Stages

  1. In your project, go to Settings → Quality Control
  2. Enable Multi-Stage Review
  3. Define each stage:
    • Stage name
    • Assigned reviewers
    • Pass criteria (e.g., minimum accuracy threshold)
  4. Sequences must pass all stages to reach completed status
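The "pass all stages" rule can be sketched as follows. The stage schema and field names here are illustrative assumptions, not Avala's actual configuration format.

```python
# Hypothetical multi-stage configuration: ordered stages, each with a
# pass criterion (a minimum accuracy threshold, as in step 3 above).
STAGES = [
    {"name": "First review", "min_accuracy": 0.95},
    {"name": "Final review", "min_accuracy": 0.98},
]

def passes_all_stages(stage_scores: dict) -> bool:
    """A sequence reaches completed status only if it meets the
    pass criteria of every configured stage."""
    return all(
        stage_scores.get(stage["name"], 0.0) >= stage["min_accuracy"]
        for stage in STAGES
    )
```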

Performing a Review

  1. Navigate to your project → Review tab
  2. Select a sequence pending review
  3. Open the annotation viewer
  4. For each annotation:
    • Verify label correctness
    • Check boundary accuracy (tight fit, no missing regions)
    • Confirm attributes are set correctly
  5. Mark the sequence as:
    • Approved: Annotations meet quality standards
    • Rejected: Annotations need correction (add specific feedback)

Rejection Feedback

When rejecting, provide actionable feedback:
  • Describe what needs to change
  • Reference specific frames or objects
  • Use annotation issues (see below) for persistent tracking

Annotation Issues

What Are Annotation Issues?

Annotation issues are structured flags attached to specific annotations, frames, or sequences. They create a trackable record of problems and their resolution.

Creating an Issue

  1. In the annotation viewer, right-click an annotation or frame
  2. Select Create Issue
  3. Fill in:
    • Type: Select an issue category
    • Description: Explain the problem
    • Severity: Low, Medium, or High
  4. Click Submit

Issue Types

Type | Description
Incorrect label | Object is labeled with the wrong class
Missing annotation | An object in the scene is not annotated
Poor boundary | Annotation shape does not closely fit the object
Incorrect attributes | Attributes (occlusion, truncation, etc.) are wrong
Duplicate annotation | Same object is annotated more than once
Tracking error | Object ID changes or is inconsistent across frames

Issue Lifecycle

Open → In Progress → Resolved → Verified
                   ↘ Won't Fix
  1. Open: Issue is created and assigned
  2. In Progress: Annotator is working on the correction
  3. Resolved: Annotator marks the issue as fixed
  4. Verified: Reviewer confirms the fix is correct
  5. Won't Fix: Issue is acknowledged but intentionally left as-is (with a reason)
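The lifecycle above amounts to a set of allowed status transitions. A minimal sketch, with transition rules assumed from the diagram rather than taken from Avala's implementation:

```python
# Allowed issue-status transitions, inferred from the lifecycle above.
# Resolved → In Progress is an assumption: a reviewer rejecting a fix
# would send the issue back to the annotator.
ISSUE_TRANSITIONS = {
    "Open": {"In Progress", "Won't Fix"},
    "In Progress": {"Resolved", "Won't Fix"},
    "Resolved": {"Verified", "In Progress"},
    "Verified": set(),      # terminal state
    "Won't Fix": set(),     # terminal state
}

def can_transition(current: str, target: str) -> bool:
    """Check whether an issue may move from one status to another."""
    return target in ISSUE_TRANSITIONS.get(current, set())
```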

Viewing and Filtering Issues

  • Project level: Go to Projects → your project → Issues tab to see all issues
  • Sequence level: Issues for the current sequence appear in the viewer sidebar
  • Filter by: Status, type, severity, assignee, date range

Quality Metrics

Mission Control tracks quality metrics at the annotator, project, and sequence levels.

Key Metrics

Metric | Description | Measured At
Acceptance rate | Percentage of sequences approved on first review | Per annotator, per project
Rejection rate | Percentage of sequences rejected during review | Per annotator, per project
Issue density | Number of issues per sequence or per annotation | Per annotator, per project
Completion rate | Percentage of assigned work that is completed | Per annotator, per project
Annotation time | Average time spent per sequence or per annotation | Per annotator, per project
Rework rate | Percentage of sequences requiring rework after rejection | Per annotator, per project
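To make the definitions concrete, here is how three of these metrics could be derived from per-sequence review records. The record fields are assumptions for illustration, not Avala's export format.

```python
# Hypothetical per-annotator metric computation. Each record describes
# one reviewed sequence:
#   'first_review_approved' (bool), 'rejected' (bool), 'issues' (int)
def quality_metrics(records: list) -> dict:
    """Compute acceptance rate, rejection rate, and issue density."""
    n = len(records)
    if n == 0:
        return {"acceptance_rate": 0.0, "rejection_rate": 0.0, "issue_density": 0.0}
    return {
        "acceptance_rate": sum(r["first_review_approved"] for r in records) / n,
        "rejection_rate": sum(r["rejected"] for r in records) / n,
        "issue_density": sum(r["issues"] for r in records) / n,
    }
```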

Viewing Metrics

  1. Navigate to Projects → your project → Analytics
  2. Select the metric and time range
  3. View breakdowns by annotator, label, or sequence
  4. Export metrics as CSV for external analysis

Setting Quality Targets

Define quality targets in project settings:
  1. Go to Settings → Quality Control → Targets
  2. Set thresholds:
    • Minimum acceptance rate (e.g., 95%)
    • Maximum issue density (e.g., < 2 issues per sequence)
  3. Annotators and reviewers see these targets in their dashboards
  4. Alerts trigger when performance drops below targets
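A target check like the one in step 4 can be sketched as below, using the example thresholds from step 2. This is an illustration of the logic, not Avala's alerting mechanism.

```python
# Example thresholds from this section: minimum acceptance rate 95%,
# maximum issue density 2 issues per sequence.
TARGETS = {"min_acceptance_rate": 0.95, "max_issue_density": 2.0}

def check_targets(acceptance_rate: float, issue_density: float) -> list:
    """Return an alert message for each target that is missed."""
    alerts = []
    if acceptance_rate < TARGETS["min_acceptance_rate"]:
        alerts.append(f"acceptance rate {acceptance_rate:.0%} is below target")
    if issue_density > TARGETS["max_issue_density"]:
        alerts.append(f"issue density {issue_density:.1f} is above target")
    return alerts
```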

Consensus Scoring

Consensus scoring measures agreement between multiple annotators who label the same data.

How It Works

  1. Configure consensus tasks in project settings
  2. Avala assigns the same sequences to multiple annotators (typically 2-3)
  3. Each annotator works independently
  4. Avala compares annotations and computes agreement scores

Agreement Metrics

Metric | What It Measures
IoU (Intersection over Union) | Overlap between bounding boxes or masks from different annotators
Label agreement | Percentage of objects where annotators assigned the same label
Count agreement | Whether annotators found the same number of objects
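For reference, IoU and label agreement can be computed as follows for axis-aligned bounding boxes. This is a simplified sketch of the standard formulas, not Avala's implementation; in particular, it assumes objects from the two annotators have already been matched up.

```python
# IoU for boxes given as (x1, y1, x2, y2).
def iou(a: tuple, b: tuple) -> float:
    """Intersection over Union of two axis-aligned boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0

def label_agreement(labels_a: list, labels_b: list) -> float:
    """Fraction of matched objects given the same label by both annotators."""
    matched = sum(x == y for x, y in zip(labels_a, labels_b))
    return matched / len(labels_a) if labels_a else 0.0
```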

Using Consensus Data

  • Identify ambiguous classes: Low agreement on specific labels may indicate unclear label definitions
  • Calibrate annotators: Use consensus to identify annotators who need additional training
  • Build golden sets: Use high-agreement annotations as ground truth benchmarks
Consensus scoring requires additional annotation effort (each item is labeled multiple times). Use it strategically on a representative sample rather than the entire dataset.

Best Practices

  1. Define clear guidelines: Write detailed annotation guidelines with visual examples before starting a project
  2. Start with a pilot: Run a small batch through the full QC pipeline to identify issues early
  3. Use multi-stage review for critical projects: Two review stages catch more errors than one
  4. Monitor metrics continuously: Do not wait until the project is complete to check quality
  5. Provide constructive feedback: Specific, actionable rejection comments help annotators improve faster
  6. Calibrate regularly: Use consensus tasks periodically to maintain consistency as the team grows
  7. Iterate on guidelines: Update annotation guidelines when recurring issues indicate ambiguity

Next Steps