This guide covers the quality control features in Mission Control, including review workflows, issue tracking, and quality metrics.
Overview
Quality control in Avala ensures that annotations meet your accuracy and consistency standards before they are used for model training or analysis. The QC system provides:
- Review workflows: Multi-stage review pipelines with configurable reviewers
- Annotation issues: A structured way to flag, track, and resolve problems
- Quality metrics: Quantitative measures of annotation quality
- Consensus scoring: Multi-annotator agreement for validation
Review Workflows
How Reviews Work
When an annotator completes a sequence, it enters a review stage. Reviewers examine the annotations and either approve the work or send it back for correction.
Annotator submits → Review stage → Approved → Complete
↘ Rejected → Rework → Re-submit
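The flow above can be thought of as a small state machine. The sketch below models it in Python; the state names and transition map are illustrative, not part of any Avala API.

```python
# Conceptual model of the review flow (illustrative; not an Avala API).
REVIEW_TRANSITIONS = {
    "submitted":   {"approved", "rejected"},
    "rejected":    {"resubmitted"},              # annotator reworks the sequence
    "resubmitted": {"approved", "rejected"},
    "approved":    {"complete"},
    "complete":    set(),
}

def can_transition(current: str, target: str) -> bool:
    """Return True if the review flow allows moving from current to target."""
    return target in REVIEW_TRANSITIONS.get(current, set())

print(can_transition("submitted", "approved"))  # True
print(can_transition("approved", "rejected"))   # False: approved work is final
```

Modeling the flow this way makes the key property explicit: a rejected sequence can only re-enter review through rework, and approved work cannot regress.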
Reviewer Assignment
Assign reviewers at the project level:
- Go to Projects → select your project → Settings
- Under Quality Control, click Configure Review
- Add reviewers:
- Specific users: Assign named reviewers
- Team: Any member of a team can review
- Auto-assign: Distribute review work evenly across designated reviewers
- Save the configuration
Reviewers should not review their own annotations. Avala automatically prevents self-review when auto-assignment is enabled.
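Conceptually, auto-assignment with a self-review guard behaves like a round-robin that skips the sequence's own annotator. The sketch below is illustrative only; Avala's actual distribution logic is not documented here, and the function and data shapes are assumptions.

```python
from itertools import cycle

def auto_assign(sequences, reviewers):
    """Round-robin review assignment that skips self-review.

    sequences: list of (sequence_id, annotator) pairs.
    reviewers: list of reviewer names (assumed to have 2+ members).
    Illustrative sketch only -- not Avala's actual algorithm.
    """
    assignments = {}
    pool = cycle(reviewers)
    for seq_id, annotator in sequences:
        reviewer = next(pool)
        if reviewer == annotator:   # never assign a reviewer their own work
            reviewer = next(pool)
        assignments[seq_id] = reviewer
    return assignments

work = [("seq-1", "alice"), ("seq-2", "bob"), ("seq-3", "carol")]
print(auto_assign(work, ["alice", "bob"]))
```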
Review Stages
Avala supports multi-stage review for projects that require multiple levels of quality assurance.
| Stage | Purpose | Who Reviews |
|---|---|---|
| First review | Check annotation accuracy and completeness | Team annotators or QA specialists |
| Final review | Verify overall quality before approval | Senior reviewers or project managers |
Configuring Multiple Stages
- In project Settings → Quality Control
- Enable Multi-Stage Review
- Define each stage:
- Stage name
- Assigned reviewers
- Pass criteria (e.g., minimum accuracy threshold)
- Sequences must pass all stages to reach completed status
Performing a Review
- Navigate to your project → Review tab
- Select a sequence pending review
- Open the annotation viewer
- For each annotation:
- Verify label correctness
- Check boundary accuracy (tight fit, no missing regions)
- Confirm attributes are set correctly
- Mark the sequence as:
- Approved: Annotations meet quality standards
- Rejected: Annotations need correction (add specific feedback)
Rejection Feedback
When rejecting, provide actionable feedback:
- Describe what needs to change
- Reference specific frames or objects
- Use annotation issues (see below) for persistent tracking
Annotation Issues
What Are Annotation Issues?
Annotation issues are structured flags attached to specific annotations, frames, or sequences. They create a trackable record of problems and their resolution.
Creating an Issue
- In the annotation viewer, right-click an annotation or frame
- Select Create Issue
- Fill in:
- Type: Select an issue category
- Description: Explain the problem
- Severity: Low, Medium, or High
- Click Submit
Issue Types
| Type | Description |
|---|---|
| Incorrect label | Object is labeled with the wrong class |
| Missing annotation | An object in the scene is not annotated |
| Poor boundary | Annotation shape does not closely fit the object |
| Incorrect attributes | Attributes (occlusion, truncation, etc.) are wrong |
| Duplicate annotation | Same object is annotated more than once |
| Tracking error | Object ID changes or is inconsistent across frames |
Issue Lifecycle
Open → In Progress → Resolved → Verified
↘ Won't Fix
- Open: Issue is created and assigned
- In Progress: Annotator is working on the correction
- Resolved: Annotator marks the issue as fixed
- Verified: Reviewer confirms the fix is correct
- Won't Fix: Issue is acknowledged but intentionally left as-is (with a reason)
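The lifecycle above can be expressed as an allowed-transition map. The sketch below is conceptual: the status names mirror this page, but the exact branch points (for example, which states can move to Won't Fix) are assumptions, not a documented Avala behavior.

```python
# Conceptual map of the issue lifecycle (illustrative; not an Avala API).
ISSUE_TRANSITIONS = {
    "Open":        {"In Progress", "Won't Fix"},
    "In Progress": {"Resolved", "Won't Fix"},
    "Resolved":    {"Verified", "In Progress"},  # a bad fix may be reopened
    "Verified":    set(),
    "Won't Fix":   set(),
}

def advance(status: str, new_status: str) -> str:
    """Move an issue to new_status, rejecting transitions the lifecycle forbids."""
    if new_status not in ISSUE_TRANSITIONS[status]:
        raise ValueError(f"cannot move issue from {status!r} to {new_status!r}")
    return new_status

print(advance("Open", "In Progress"))  # In Progress
```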
Viewing and Filtering Issues
- Project level: Go to Projects → your project → Issues tab to see all issues
- Sequence level: Issues for the current sequence appear in the viewer sidebar
- Filter by: Status, type, severity, assignee, date range
Quality Metrics
Mission Control tracks quality metrics at the annotator, project, and sequence levels.
Key Metrics
| Metric | Description | Measured At |
|---|---|---|
| Acceptance rate | Percentage of sequences approved on first review | Per annotator, per project |
| Rejection rate | Percentage of sequences rejected during review | Per annotator, per project |
| Issue density | Number of issues per sequence or per annotation | Per annotator, per project |
| Completion rate | Percentage of assigned work that is completed | Per annotator, per project |
| Annotation time | Average time spent per sequence or per annotation | Per annotator, per project |
| Rework rate | Percentage of sequences requiring rework after rejection | Per annotator, per project |
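The rate metrics above are straightforward ratios over review counts. The formulas below are an illustrative reading of the table; Avala's exact definitions (for example, how resubmissions are counted) may differ.

```python
def quality_metrics(total_reviewed, first_pass_approved, rejected,
                    issues, sequences):
    """Compute basic review metrics from raw counts.

    Illustrative formulas only -- Avala's exact definitions may differ.
    """
    return {
        "acceptance_rate": first_pass_approved / total_reviewed,
        "rejection_rate": rejected / total_reviewed,
        "issue_density": issues / sequences,
    }

m = quality_metrics(total_reviewed=40, first_pass_approved=36,
                    rejected=4, issues=50, sequences=40)
print(m)  # acceptance_rate 0.9, rejection_rate 0.1, issue_density 1.25
```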
Viewing Metrics
- Navigate to Projects → your project → Analytics
- Select the metric and time range
- View breakdowns by annotator, label, or sequence
- Export metrics as CSV for external analysis
Setting Quality Targets
Define quality targets in project settings:
- Go to Settings → Quality Control → Targets
- Set thresholds:
- Minimum acceptance rate (e.g., 95%)
- Maximum issue density (e.g., < 2 issues per sequence)
- Annotators and reviewers see these targets in their dashboards
- Alerts trigger when performance drops below targets
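The threshold check behind those alerts amounts to comparing current metrics against the configured targets. The sketch below illustrates the idea with the example thresholds above; the metric keys and message formats are assumptions, not Avala's actual alerting implementation.

```python
def check_targets(metrics, targets):
    """Return alert messages for metrics outside their targets.

    Illustrative only -- Avala surfaces these alerts in the dashboard.
    """
    alerts = []
    if metrics["acceptance_rate"] < targets["min_acceptance_rate"]:
        alerts.append(f"acceptance rate {metrics['acceptance_rate']:.0%} "
                      f"is below target {targets['min_acceptance_rate']:.0%}")
    if metrics["issue_density"] > targets["max_issue_density"]:
        alerts.append(f"issue density {metrics['issue_density']:.2f} "
                      f"is above target {targets['max_issue_density']:.2f}")
    return alerts

alerts = check_targets({"acceptance_rate": 0.92, "issue_density": 2.4},
                       {"min_acceptance_rate": 0.95, "max_issue_density": 2.0})
print(alerts)  # two alerts: acceptance too low, issue density too high
```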
Consensus Scoring
Consensus scoring measures agreement between multiple annotators who label the same data.
How It Works
- Configure consensus tasks in project settings
- Avala assigns the same sequences to multiple annotators (typically 2-3)
- Each annotator works independently
- Avala compares annotations and computes agreement scores
Agreement Metrics
| Metric | What It Measures |
|---|---|
| IoU (Intersection over Union) | Overlap between bounding boxes or masks from different annotators |
| Label agreement | Percentage of objects where annotators assigned the same label |
| Count agreement | Whether annotators found the same number of objects |
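To make the first two metrics concrete, the sketch below computes IoU for axis-aligned bounding boxes and label agreement over matched object pairs. These are standard formulas; how Avala matches objects between annotators before scoring is not specified here.

```python
def iou(box_a, box_b):
    """Intersection over Union for axis-aligned boxes given as (x1, y1, x2, y2)."""
    x1 = max(box_a[0], box_b[0])
    y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2])
    y2 = min(box_a[3], box_b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

def label_agreement(labels_a, labels_b):
    """Fraction of matched objects given the same label by both annotators."""
    matches = sum(a == b for a, b in zip(labels_a, labels_b))
    return matches / len(labels_a)

print(iou((0, 0, 10, 10), (5, 0, 15, 10)))              # ≈ 0.333
print(label_agreement(["car", "pedestrian"], ["car", "cyclist"]))  # 0.5
```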
Using Consensus Data
- Identify ambiguous classes: Low agreement on specific labels may indicate unclear label definitions
- Calibrate annotators: Use consensus to identify annotators who need additional training
- Build golden sets: Use high-agreement annotations as ground truth benchmarks
Consensus scoring requires additional annotation effort (each item is labeled multiple times). Use it strategically on a representative sample rather than the entire dataset.
Best Practices
- Define clear guidelines: Write detailed annotation guidelines with visual examples before starting a project
- Start with a pilot: Run a small batch through the full QC pipeline to identify issues early
- Use multi-stage review for critical projects: Two review stages catch more errors than one
- Monitor metrics continuously: Do not wait until the project is complete to check quality
- Provide constructive feedback: Specific, actionable rejection comments help annotators improve faster
- Calibrate regularly: Use consensus tasks periodically to maintain consistency as the team grows
- Iterate on guidelines: Update annotation guidelines when recurring issues indicate ambiguity
Next Steps