Build custom agents that automatically process annotations at every stage of your workflow — approve high-quality results, reject obvious errors, flag edge cases for human review, or run custom ML pipelines.
Install the avala-agents Python SDK with pip install avala-agents (available on PyPI and GitHub). The server-side Agent Framework API is also available: you can register agents and submit actions directly via the REST API.

Overview

[Diagram: Agent Event Flow]

The Avala Agent Framework lets you write Python code that reacts to annotation workflow events. When an annotator submits a result, your agent receives the data and decides what to do: approve, reject, or flag it.

Quickstart

Install

pip install avala-agents

Write Your First Agent

from avala_agents import TaskAgent

agent = TaskAgent(
    api_key="avk_...",
    name="quality-checker",
    project="proj_abc123",  # optional: scope to one project
)

@agent.on("result.submitted")
def check_quality(context):
    annotations = context.result_data

    if len(annotations) == 0:
        context.reject("No annotations found")
    elif any(a.get("confidence", 1.0) < 0.3 for a in annotations):
        context.flag("Low confidence annotation detected")
    else:
        context.approve()

# Start processing (blocks)
agent.run()

Run It

python my_agent.py
Your agent registers with Avala, then polls for events. When a result is submitted in your project, your handler runs and submits the decision.
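Conceptually, the register-then-poll loop works like the sketch below. This is a simplified illustration of the dispatch pattern, not the SDK's actual internals; the `on` and `dispatch` names here are hypothetical stand-ins.

```python
# A simplified model of how @agent.on(...) plus agent.run() behave:
# handlers are registered per event type, and each polled event is
# routed to its handler. These names are illustrative, not SDK code.

handlers = {}  # event_type -> handler function

def on(event_type):
    """Register a handler for an event type, mirroring @agent.on(...)."""
    def decorator(fn):
        handlers[event_type] = fn
        return fn
    return decorator

def dispatch(events):
    """Route each polled event to its registered handler; return count."""
    processed = 0
    for event in events:
        handler = handlers.get(event["event_type"])
        if handler is not None:
            handler(event)
            processed += 1
    return processed
```

The real `agent.run()` repeats this dispatch step every `poll_interval` seconds.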

Events

Agents can listen to these events:
| Event | Trigger | Context Type |
| --- | --- | --- |
| result.submitted | Annotator submits a new result | ResultContext |
| result.accepted | A result is approved (manual or agent) | ResultContext |
| result.rejected | A result is rejected (manual or agent) | ResultContext |
| task.completed | A task is marked complete | TaskContext |
| dataset.created | A new dataset is created | EventContext |
| dataset.updated | A dataset is modified | EventContext |
| dataset.deleted | A dataset is deleted | EventContext |
| export.completed | An export finishes successfully | EventContext |
| export.failed | An export fails | EventContext |
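The event-to-context mapping above can be expressed as a lookup, which is handy for validating event names before registering handlers. The table contents are from this page; the helper itself is a hypothetical sketch, not part of the SDK.

```python
# Event name -> context type, per the events table above.
EVENT_CONTEXTS = {
    "result.submitted": "ResultContext",
    "result.accepted": "ResultContext",
    "result.rejected": "ResultContext",
    "task.completed": "TaskContext",
    "dataset.created": "EventContext",
    "dataset.updated": "EventContext",
    "dataset.deleted": "EventContext",
    "export.completed": "EventContext",
    "export.failed": "EventContext",
}

def context_type_for(event_name):
    """Return the context type for an event, or raise on unknown names."""
    try:
        return EVENT_CONTEXTS[event_name]
    except KeyError:
        raise ValueError(f"Unknown event: {event_name!r}")
```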

Context Objects

ResultContext

Passed to result.* event handlers:
@agent.on("result.submitted")
def handle(context):
    context.execution_uid   # Unique execution ID
    context.event_type      # "result.submitted"
    context.task_uid        # Task UID
    context.result_uid      # Result UID
    context.result_data     # List of annotation objects
    context.result_metadata # Result metadata dict
    context.task_name       # Task name (e.g., "box")
    context.task_type       # Task type
    context.project_uid     # Project UID

    # Actions
    context.approve("Looks good")
    context.reject("Missing labels")
    context.flag("Needs expert review")
    context.skip()  # No action

TaskContext

Passed to task.* event handlers:
@agent.on("task.completed")
def handle(context):
    context.execution_uid
    context.event_type      # "task.completed"
    context.task_uid
    context.task_name
    context.task_type
    context.task_status
    context.project_uid

    # Same actions available
    context.approve()
    context.reject("QC check failed")
    context.flag("Review needed")
    context.skip()

Configuration

TaskAgent Options

agent = TaskAgent(
    api_key="avk_...",          # or set AVALA_API_KEY env var
    base_url="https://...",     # or set AVALA_BASE_URL env var
    name="my-agent",            # Agent name (unique per org)
    project="proj_uid",         # Scope to specific project (optional)
    task_types=["box", "cuboid"],  # Filter by task type (optional)
    poll_interval=5.0,          # Seconds between polls (default: 5)
)

Multiple Event Handlers

@agent.on("result.submitted")
def auto_review(context):
    # Custom QA logic
    if passes_quality_check(context.result_data):
        context.approve()
    else:
        context.reject("Failed QC")

@agent.on("result.rejected")
def notify_on_rejection(context):
    # Send Slack notification, log to analytics, etc.
    send_slack_message(f"Result {context.result_uid} rejected")
    context.skip()  # Don't take further action

Non-Blocking Mode

For integration with existing services (e.g., a Flask/FastAPI app):
# Process pending events once, then return
count = agent.run_once()
print(f"Processed {count} events")

Agent Registration API

Agents are managed via the REST API:
# Register an agent
curl -X POST https://api.avala.ai/api/v1/agents/ \
  -H "X-Avala-Api-Key: avk_..." \
  -H "Content-Type: application/json" \
  -d '{
    "name": "quality-checker",
    "events": ["result.submitted"],
    "project": "proj_abc123"
  }'

# List agents
curl https://api.avala.ai/api/v1/agents/ \
  -H "X-Avala-Api-Key: avk_..."

# View execution log
curl https://api.avala.ai/api/v1/agents/{uid}/executions/ \
  -H "X-Avala-Api-Key: avk_..."
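The registration call can also be made from Python. The sketch below builds the request without sending it, using only the endpoint, headers, and payload shown in the curl examples above; the `build_register_request` helper is hypothetical.

```python
import json
import urllib.request

def build_register_request(api_key, name, events, project=None):
    """Build (but do not send) the POST /api/v1/agents/ request."""
    payload = {"name": name, "events": events}
    if project:
        payload["project"] = project
    return urllib.request.Request(
        "https://api.avala.ai/api/v1/agents/",
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "X-Avala-Api-Key": api_key,
            "Content-Type": "application/json",
        },
        method="POST",
    )

# To actually register:
# urllib.request.urlopen(build_register_request("avk_...", "quality-checker",
#                                               ["result.submitted"]))
```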

Patterns

Confidence-Based Routing

@agent.on("result.submitted")
def route_by_confidence(context):
    confidences = [
        a.get("confidence", 0)
        for a in context.result_data
    ]
    avg_confidence = sum(confidences) / len(confidences) if confidences else 0

    if avg_confidence >= 0.9:
        context.approve("High confidence")
    elif avg_confidence >= 0.5:
        context.flag("Medium confidence — needs review")
    else:
        context.reject("Low confidence")

LLM-Powered Review

from openai import OpenAI

llm = OpenAI()

@agent.on("result.submitted")
def llm_review(context):
    prompt = f"""Review this annotation:
    Task: {context.task_name}
    Annotations: {context.result_data}

    Is this annotation correct? Reply APPROVE, REJECT, or FLAG with a reason."""

    response = llm.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": prompt}],
    )
    decision = response.choices[0].message.content

    if "APPROVE" in decision:
        context.approve(decision)
    elif "REJECT" in decision:
        context.reject(decision)
    else:
        context.flag(decision)

Annotation Count Validation

@agent.on("result.submitted")
def validate_count(context):
    annotations = context.result_data
    task_type = context.task_type

    min_annotations = {
        "box": 1,
        "polygon": 1,
        "classification": 1,
    }

    required = min_annotations.get(task_type, 1)
    if len(annotations) < required:
        context.reject(f"Expected at least {required} annotations, got {len(annotations)}")
    else:
        context.approve()

Webhook Mode (Coming Soon)

Webhook-based (push) delivery is planned for a future release. Currently, agents use polling to receive events. The callback_url field on agent registration is reserved for this upcoming feature.

Comparison with MCP Server

| Feature | Agent Framework | MCP Server |
| --- | --- | --- |
| Purpose | Automated workflow processing | AI assistant integration |
| Trigger | Annotation events (result submitted, etc.) | User requests via AI chat |
| Language | Python | Any (via MCP protocol) |
| Actions | Approve, reject, flag annotations | Read data, create exports |
| Use Case | QA automation, custom ML pipelines | AI assistants querying data |
Both can be used together: MCP for interactive exploration, agents for automated processing.

Next Steps