Autotag uses machine learning models to automatically generate suggested tags and labels for your data. These auto-generated tags help accelerate annotation by pre-labeling items that can then be reviewed and corrected by annotators.

Concept

When autotag is enabled for a project, ML models analyze your data and produce confidence-scored predictions. These predictions appear as suggested annotations that annotators can accept, modify, or reject. Autotag supports both image-level and object-level predictions.

Image Prefix Queries

Filter items by image-level autotag predictions using the image_prefix: syntax:
image_prefix:weather_clear
image_prefix:scene_highway
image_prefix:time_of_day_night
Image prefix queries match items where the autotag model has predicted a specific image-level classification.
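If you build these filters programmatically, a small helper keeps the syntax consistent. This is a hypothetical sketch; the helper name is invented, and combining several image_prefix: terms with AND is an assumption modeled on the AND usage shown for score filters below:

```python
def image_prefix_query(*tags: str) -> str:
    """Build an image-level autotag filter from one or more tag names.

    Hypothetical helper: the documented syntax is the literal
    image_prefix:<tag> form; ANDing multiple terms is assumed to
    work the same way as the score filters shown later.
    """
    if not tags:
        raise ValueError("at least one tag is required")
    return " AND ".join(f"image_prefix:{tag}" for tag in tags)

query = image_prefix_query("weather_clear")  # → image_prefix:weather_clear
```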

Object Prefix Queries

Filter items by object-level autotag predictions using the object_prefix: syntax:
object_prefix:car
object_prefix:pedestrian
object_prefix:traffic_sign
Object prefix queries match items that contain at least one predicted object of the specified type.
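The same pattern applies to object-level filters. A hypothetical sketch that also sanity-checks the tag name, assuming object types follow the lowercase-with-underscores convention of the examples above:

```python
def object_prefix_query(obj_type: str) -> str:
    """Build an object-level autotag filter.

    Hypothetical helper; the documented syntax is the literal
    object_prefix:<type> form. The lowercase/underscore check is an
    assumption based on the example type names (car, traffic_sign).
    """
    if not obj_type or not obj_type.replace("_", "").isalnum():
        raise ValueError(f"unexpected object type: {obj_type!r}")
    return f"object_prefix:{obj_type}"

query = object_prefix_query("traffic_sign")  # → object_prefix:traffic_sign
```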

Score Ranges

Autotag predictions include a confidence score ranging from -1 to 1:
Score Range     Meaning
0.8 to 1.0      High confidence — model is very certain
0.5 to 0.8      Medium confidence — likely correct but should be verified
0.0 to 0.5      Low confidence — uncertain prediction
-1.0 to 0.0     Negative confidence — model predicts the tag does not apply
Filter by confidence score to focus on predictions that need review:
image_prefix:weather_clear AND score >= 0.8
object_prefix:car AND score < 0.5
Items with low confidence scores are good candidates for manual review, as they represent cases where the model is uncertain.
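The score bands in the table above can be expressed as a small classifier when triaging predictions client-side. A minimal sketch; because the table's ranges share their endpoints, assigning a boundary score (exactly 0.8, 0.5, or 0.0) to the higher band is an assumption:

```python
def confidence_band(score: float) -> str:
    """Map an autotag confidence score to the band names from the table.

    Boundary scores are assigned to the higher band; the table's
    overlapping endpoints leave this choice unspecified.
    """
    if not -1.0 <= score <= 1.0:
        raise ValueError("autotag scores range from -1 to 1")
    if score >= 0.8:
        return "high"
    if score >= 0.5:
        return "medium"
    if score >= 0.0:
        return "low"
    return "negative"
```

Predictions in the "low" band are the ones worth routing to manual review first.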

Training Set Queries

Filter items by the training set used to generate autotag predictions:
training_set = "model_v2"
training_set = "detector_2025_01"
This is useful when multiple autotag models have been run on the same dataset and you want to compare or filter by a specific model version.
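To compare model versions, you can issue the same prediction filter once per training set and compare the results. A sketch assuming training_set filters can be combined with prefix filters using AND; only standalone training_set queries are shown above, so that combination is an assumption:

```python
def per_model_queries(base_filter: str, models: list[str]) -> dict[str, str]:
    """Map each model version to the base filter scoped to that training set.

    Hypothetical helper; assumes AND composition works for
    training_set the way it does for score filters.
    """
    return {m: f'{base_filter} AND training_set = "{m}"' for m in models}

queries = per_model_queries("object_prefix:car", ["model_v2", "detector_2025_01"])
```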

Usage in the Annotation Workflow

Enabling Autotag

  1. Navigate to your project in Mission Control.
  2. Go to Settings → Autotag.
  3. Select the ML model to use for predictions.
  4. Choose the data to run autotag on (full dataset or specific slices).
  5. Click Run Autotag to start the prediction job.

Reviewing Autotag Predictions

  1. Open the annotation editor for an item with autotag predictions.
  2. Suggested annotations appear with a distinct visual indicator.
  3. For each suggestion:
    • Accept — Confirm the prediction as correct.
    • Modify — Adjust the annotation (resize, relabel, etc.).
    • Reject — Remove the incorrect prediction.
  4. Save your review to finalize the annotations.

Filtering by Autotag Status

Use the query language to find items based on their autotag review status:
autotag_status = "pending"
autotag_status = "reviewed"
autotag_status = "accepted"
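Status and confidence naturally combine into a review queue. A sketch of client-side triage over items you have already fetched; the per-item fields (autotag_status, score) are assumptions about the returned shape, which this document does not specify:

```python
def review_queue(items: list[dict]) -> list[dict]:
    """Return pending items, highest-confidence first.

    Assumes each item dict carries 'autotag_status' and 'score'
    fields; the actual response shape is not documented here.
    """
    pending = [it for it in items if it.get("autotag_status") == "pending"]
    return sorted(pending, key=lambda it: it["score"], reverse=True)
```

Reviewing high-confidence pending items first matches the best-practice guidance below.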

Best Practices

  • Start with high-confidence predictions — Review items with scores above 0.8 first for quick wins.
  • Use low-confidence items for model improvement — These edge cases are valuable for retraining.
  • Run autotag on new data incrementally — Process new uploads as they arrive rather than waiting for large batches.
  • Compare model versions — Use training set queries to evaluate whether a newer model performs better.