This tutorial covers using AI-powered tools in Mission Control to speed up your annotation workflow, including SAM segmentation and model-based auto-labeling.

What You’ll Learn

  • How to use SAM (Segment Anything Model) for click-to-segment annotation
  • How to set up auto-labeling with connected models
  • How to review and refine AI predictions

What AI-Assisted Annotation Offers

Avala integrates AI models directly into the annotation workflow to reduce manual effort:
  • SAM Segmentation: Click on an object to instantly generate a precise segmentation mask
  • Auto-Labeling: Run connected ML models to pre-annotate entire datasets
  • Smart Suggestions: Get label recommendations based on object appearance
These tools do not replace human annotators. They accelerate the process by providing a strong starting point that annotators refine and verify.
AI predictions always require human review. Automated outputs should be treated as drafts that need verification before they are accepted as ground truth.

SAM Segmentation

What Is SAM?

SAM (Segment Anything Model) is an interactive segmentation model built into Mission Control. It generates pixel-precise object masks from simple click or box prompts.

Using SAM in the Viewer

  1. Open an image or video frame in the annotation viewer
  2. Select the SAM tool from the toolbar (or press S)
  3. Click on the object you want to segment
  4. SAM generates a mask around the object
  5. Refine the mask if needed:
    • Add region: Click on areas that should be included
    • Remove region: Hold Alt and click on areas to exclude
    • Box prompt: Draw a bounding box around the object for a more targeted prediction
  6. When satisfied, press Enter or click Accept to convert the mask into an annotation
  7. Assign a label from the dropdown
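Under the hood, SAM consumes these interactions as labeled point prompts (1 = include, 0 = exclude) plus an optional box. A minimal sketch of that mapping, assuming a hypothetical `build_sam_prompt` helper; the dict keys follow the common `segment_anything`-style prompt format, not a documented Avala interface:

```python
import numpy as np

def build_sam_prompt(positive_clicks, negative_clicks, box=None):
    """Assemble viewer clicks into a SAM-style prompt.

    SAM expects point coordinates as an (N, 2) array of (x, y) pixels
    plus a parallel label array: 1 for points to include, 0 for points
    to exclude (the Alt-click behavior described above).
    """
    points = np.array(positive_clicks + negative_clicks, dtype=np.float32)
    labels = np.array([1] * len(positive_clicks) + [0] * len(negative_clicks))
    prompt = {"point_coords": points, "point_labels": labels}
    if box is not None:
        # Box prompts are conventionally [x_min, y_min, x_max, y_max]
        prompt["box"] = np.array(box, dtype=np.float32)
    return prompt

# One positive click on the object, one Alt-click to exclude a region:
prompt = build_sam_prompt([(320, 240)], [(350, 260)])
```

Each refinement click in step 5 simply appends another labeled point to this prompt before the model is re-run.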

SAM Tips

  • Click near the center of the object for the best initial prediction
  • Use box prompts for objects that are close together or have ambiguous boundaries
  • Combine positive and negative clicks to refine edges around complex shapes
  • Works best on distinct objects: SAM performs best when objects stand out clearly from the background
  • Frame-by-frame: In video mode, run SAM on individual frames and use tracking to propagate
SAM runs in the browser and requires a stable connection. Large images may take a moment to process on the first click as the model loads.

Auto-Labeling with Connected Models

Prerequisites

Before using auto-labeling, you need:
  • A trained model or a model endpoint connected to your Avala organization
  • An inference integration configured in Settings → Integrations → Inference
  • A project with labels that match the model’s output classes
See the Inference Integration guide for setup instructions.

Triggering Auto-Label Predictions

  1. Navigate to your project in Mission Control
  2. Go to Sequences or Items
  3. Select the items you want to auto-label:
    • Single item: Click the item, then click Auto-Label in the toolbar
    • Batch: Select multiple items with checkboxes, then click Auto-Label
    • Full dataset: Use Auto-Label All from the project actions menu
  4. Choose the model from the dropdown
  5. Set a confidence threshold (predictions below this threshold are discarded)
  6. Click Run
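The confidence threshold in step 5 acts as a simple filter over raw model output. A sketch of that behavior, using hypothetical prediction dicts (the field names are illustrative, not the Avala API):

```python
def filter_predictions(predictions, threshold):
    """Keep only predictions at or above the confidence threshold.

    Each prediction is assumed to carry a 'confidence' score in [0, 1];
    anything below the threshold is discarded rather than queued for review.
    """
    return [p for p in predictions if p["confidence"] >= threshold]

raw = [
    {"label": "car",    "confidence": 0.92},
    {"label": "person", "confidence": 0.41},
    {"label": "car",    "confidence": 0.78},
]

kept = filter_predictions(raw, threshold=0.5)  # drops the 0.41 prediction
```

Raising the threshold trades missed objects for fewer false positives, which is why reviewing a sample batch first helps you pick a sensible value.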

Monitoring Auto-Label Progress

  • A progress bar shows the status of the auto-labeling job
  • Results appear on each item as they complete
  • Check the Activity panel for job status and any errors

Reviewing AI Predictions

Regardless of whether predictions come from SAM or auto-labeling, they must be reviewed.

Review Workflow

  1. Open an item that has AI-generated predictions
  2. Predictions are displayed with a visual indicator (dashed outline or distinct color) to distinguish them from human annotations
  3. For each prediction:
    • Accept: Click the prediction and press Enter or click Accept to confirm it
    • Modify: Adjust the position, size, or label before accepting
    • Reject: Press Delete or click Reject to remove the prediction
  4. Save your reviewed annotations

Bulk Review

For large batches of auto-labeled data:
  1. Go to the project Review tab
  2. Filter by Source: Auto-Label to see only AI-generated annotations
  3. Use the review controls to accept or reject predictions per item
  4. Track review progress in the project dashboard
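Conceptually, the Source: Auto-Label filter in step 2 just partitions annotations by origin and review state. A sketch under that assumption (the `source` and `reviewed` field names are hypothetical):

```python
def pending_auto_labels(annotations):
    """Return AI-generated annotations that still await human review."""
    return [a for a in annotations
            if a["source"] == "auto-label" and not a["reviewed"]]

annotations = [
    {"id": 1, "source": "auto-label", "reviewed": False},
    {"id": 2, "source": "human",      "reviewed": True},
    {"id": 3, "source": "auto-label", "reviewed": True},
]

queue = pending_auto_labels(annotations)  # only id 1 remains to review
```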

Quality Checks

After reviewing AI predictions:
  • Verify label accuracy, especially for ambiguous objects
  • Check boundary precision on segmentation masks
  • Ensure no objects were missed (false negatives)
  • Confirm that no background was incorrectly labeled (false positives)
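The last two checks amount to counting unmatched boxes: a ground-truth object no prediction overlaps is a false negative, and a prediction overlapping nothing is a false positive. A sketch using intersection-over-union (IoU) as the overlap measure; the 0.5 threshold is a common detection convention, not an Avala setting:

```python
def iou(a, b):
    """Intersection-over-union of two [x_min, y_min, x_max, y_max] boxes."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def count_errors(predicted, ground_truth, thresh=0.5):
    """Count false positives and false negatives at an IoU threshold."""
    fp = sum(all(iou(p, g) < thresh for g in ground_truth) for p in predicted)
    fn = sum(all(iou(p, g) < thresh for p in predicted) for g in ground_truth)
    return fp, fn

preds = [[0, 0, 10, 10], [50, 50, 60, 60]]   # second box matches nothing
truth = [[1, 1, 11, 11], [80, 80, 90, 90]]   # second object was missed
fp, fn = count_errors(preds, truth)
```

Spot-checking a reviewed batch this way gives a quick read on whether the model is under- or over-predicting before you auto-label the rest of the dataset.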

Tips for Best Results

  • Use high-quality training data for auto-label models: better model input produces better predictions
  • Set confidence thresholds appropriately: higher thresholds reduce false positives; lower thresholds reduce missed objects
  • Review a sample first: check a small batch before auto-labeling the full dataset to gauge model quality
  • Combine SAM with auto-labeling: use auto-labeling for detection, then SAM to refine boundaries
  • Iterate on your model: export reviewed annotations and retrain for improved auto-labeling over time

Next Steps