Build custom agents that automatically process annotations at every stage of your workflow — approve high-quality results, reject obvious errors, flag edge cases for human review, or run custom ML pipelines.
Install the avala-agents Python SDK with `pip install avala-agents` (available on PyPI and GitHub). The server-side Agent Framework API is also available: you can register agents and submit actions directly via the REST API.
Overview
The Avala Agent Framework lets you write Python code that reacts to annotation workflow events. When an annotator submits a result, your agent receives the data and decides what to do: approve, reject, or flag it.
Quickstart
Install

Install the SDK from PyPI: `pip install avala-agents`
Write Your First Agent
```python
from avala_agents import TaskAgent

agent = TaskAgent(
    api_key="avk_...",
    name="quality-checker",
    project="proj_abc123",  # optional: scope to one project
)

@agent.on("result.submitted")
def check_quality(context):
    annotations = context.result_data
    if len(annotations) == 0:
        context.reject("No annotations found")
    elif any(a.get("confidence", 1.0) < 0.3 for a in annotations):
        context.flag("Low confidence annotation detected")
    else:
        context.approve()

# Start processing (blocks)
agent.run()
```
Run It
Your agent registers with Avala, then polls for events. When a result is submitted in your project, your handler runs and submits the decision.
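Under the hood, `run()` is a poll-and-dispatch loop. The following self-contained sketch shows the shape of that loop; it is not the SDK's actual implementation (the real client fetches events over HTTPS, and `MiniAgent` with its dict-based events is purely illustrative):

```python
# Simplified sketch of the poll-and-dispatch loop inside agent.run().
# The real SDK pulls events from the Avala API; here we read from a list.

class MiniAgent:
    def __init__(self):
        self.handlers = {}  # event name -> handler function

    def on(self, event):
        def decorator(fn):
            self.handlers[event] = fn
            return fn
        return decorator

    def dispatch(self, events):
        """Route each event to its registered handler; return how many ran."""
        handled = 0
        for event in events:
            handler = self.handlers.get(event["type"])
            if handler is not None:
                handler(event)
                handled += 1
        return handled

agent = MiniAgent()
seen = []

@agent.on("result.submitted")
def on_submit(event):
    seen.append(event["result_uid"])

count = agent.dispatch([
    {"type": "result.submitted", "result_uid": "res_1"},
    {"type": "task.completed", "task_uid": "task_9"},  # no handler registered
])
```

Events with no registered handler are simply skipped, which is why an agent subscribed only to `result.submitted` never sees dataset or export events.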
Events
Agents can listen to these events:
| Event | Trigger | Context Type |
|---|---|---|
| `result.submitted` | Annotator submits a new result | `ResultContext` |
| `result.accepted` | A result is approved (manual or agent) | `ResultContext` |
| `result.rejected` | A result is rejected (manual or agent) | `ResultContext` |
| `task.completed` | A task is marked complete | `TaskContext` |
| `dataset.created` | A new dataset is created | `EventContext` |
| `dataset.updated` | A dataset is modified | `EventContext` |
| `dataset.deleted` | A dataset is deleted | `EventContext` |
| `export.completed` | An export finishes successfully | `EventContext` |
| `export.failed` | An export fails | `EventContext` |
Context Objects
ResultContext
Passed to result.* event handlers:
```python
@agent.on("result.submitted")
def handle(context):
    context.execution_uid    # Unique execution ID
    context.event_type       # "result.submitted"
    context.task_uid         # Task UID
    context.result_uid       # Result UID
    context.result_data      # List of annotation objects
    context.result_metadata  # Result metadata dict
    context.task_name        # Task name (e.g., "box")
    context.task_type        # Task type
    context.project_uid      # Project UID

    # Actions
    context.approve("Looks good")
    context.reject("Missing labels")
    context.flag("Needs expert review")
    context.skip()  # No action
```
TaskContext
Passed to task.* event handlers:
```python
@agent.on("task.completed")
def handle(context):
    context.execution_uid
    context.event_type   # "task.completed"
    context.task_uid
    context.task_name
    context.task_type
    context.task_status
    context.project_uid

    # Same actions available
    context.approve()
    context.reject("QC check failed")
    context.flag("Review needed")
    context.skip()
```
Configuration
TaskAgent Options
```python
agent = TaskAgent(
    api_key="avk_...",             # or set AVALA_API_KEY env var
    base_url="https://...",        # or set AVALA_BASE_URL env var
    name="my-agent",               # Agent name (unique per org)
    project="proj_uid",            # Scope to a specific project (optional)
    task_types=["box", "cuboid"],  # Filter by task type (optional)
    poll_interval=5.0,             # Seconds between polls (default: 5)
)
```
Multiple Event Handlers
```python
@agent.on("result.submitted")
def auto_review(context):
    # Custom QA logic
    if passes_quality_check(context.result_data):
        context.approve()
    else:
        context.reject("Failed QC")

@agent.on("result.rejected")
def notify_on_rejection(context):
    # Send Slack notification, log to analytics, etc.
    send_slack_message(f"Result {context.result_uid} rejected")
    context.skip()  # Don't take further action
```
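`passes_quality_check` and `send_slack_message` above are placeholders for your own code. A minimal, hypothetical `passes_quality_check` might require every annotation to carry a label and a plausible confidence (the `label` and `confidence` field names are assumptions about your annotation schema):

```python
def passes_quality_check(annotations):
    """Hypothetical QA gate: every annotation needs a non-empty label
    and a confidence in [0, 1]. Field names depend on your schema."""
    if not annotations:
        return False
    for a in annotations:
        if not a.get("label"):
            return False
        confidence = a.get("confidence", 1.0)
        if not 0.0 <= confidence <= 1.0:
            return False
    return True
```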
Non-Blocking Mode
For integration with existing services (e.g., a Flask/FastAPI app):
```python
# Process pending events once, then return
count = agent.run_once()
print(f"Processed {count} events")
```
Agent Registration API
Agents are managed via the REST API:
```shell
# Register an agent
curl -X POST https://api.avala.ai/api/v1/agents/ \
  -H "X-Avala-Api-Key: avk_..." \
  -H "Content-Type: application/json" \
  -d '{
    "name": "quality-checker",
    "events": ["result.submitted"],
    "project": "proj_abc123"
  }'

# List agents
curl https://api.avala.ai/api/v1/agents/ \
  -H "X-Avala-Api-Key: avk_..."

# View execution log
curl https://api.avala.ai/api/v1/agents/{uid}/executions/ \
  -H "X-Avala-Api-Key: avk_..."
```
Patterns
Confidence-Based Routing
```python
@agent.on("result.submitted")
def route_by_confidence(context):
    confidences = [
        a.get("confidence", 0)
        for a in context.result_data
    ]
    avg_confidence = sum(confidences) / len(confidences) if confidences else 0

    if avg_confidence >= 0.9:
        context.approve("High confidence")
    elif avg_confidence >= 0.5:
        context.flag("Medium confidence — needs review")
    else:
        context.reject("Low confidence")
```
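The averaging step is easy to unit-test if you factor it out; this helper mirrors the inline logic in the handler above (annotations missing a confidence field count as 0, and an empty result averages to the default):

```python
def average_confidence(annotations, default=0.0):
    """Mean of per-annotation confidence scores."""
    confidences = [a.get("confidence", 0) for a in annotations]
    if not confidences:
        return default
    return sum(confidences) / len(confidences)
```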
LLM-Powered Review
```python
from openai import OpenAI

llm = OpenAI()

@agent.on("result.submitted")
def llm_review(context):
    prompt = f"""Review this annotation:
Task: {context.task_name}
Annotations: {context.result_data}

Is this annotation correct? Reply APPROVE, REJECT, or FLAG with a reason."""

    response = llm.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": prompt}],
    )
    decision = response.choices[0].message.content

    if "APPROVE" in decision:
        context.approve(decision)
    elif "REJECT" in decision:
        context.reject(decision)
    else:
        context.flag(decision)
```
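Note that the substring check above approves any reply that merely mentions APPROVE, even inside a refusal. A slightly more defensive parser (a sketch, not part of the SDK) keys off the first recognized keyword and falls back to flagging for human review:

```python
def parse_llm_decision(reply, default="FLAG"):
    """Return the first recognized verdict keyword in the model's reply.

    Scans tokens in order, so "REJECT: do not APPROVE" parses as REJECT;
    replies with no recognized keyword fall back to `default`.
    """
    verdicts = {"APPROVE", "REJECT", "FLAG"}
    for token in reply.upper().replace(":", " ").replace(".", " ").split():
        if token in verdicts:
            return token
    return default
```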
Annotation Count Validation
```python
@agent.on("result.submitted")
def validate_count(context):
    annotations = context.result_data
    task_type = context.task_type

    min_annotations = {
        "box": 1,
        "polygon": 1,
        "classification": 1,
    }
    required = min_annotations.get(task_type, 1)

    if len(annotations) < required:
        context.reject(f"Expected at least {required} annotations, got {len(annotations)}")
    else:
        context.approve()
```
Webhook Mode (Coming Soon)
Webhook-based (push) delivery is planned for a future release. Currently, agents use polling to receive events. The `callback_url` field on agent registration is reserved for this upcoming feature.
Comparison with MCP Server
| Feature | Agent Framework | MCP Server |
|---|---|---|
| Purpose | Automated workflow processing | AI assistant integration |
| Trigger | Annotation events (result submitted, etc.) | User requests via AI chat |
| Language | Python | Any (via MCP protocol) |
| Actions | Approve, reject, flag annotations | Read data, create exports |
| Use case | QA automation, custom ML pipelines | AI assistants querying data |
Both can be used together: MCP for interactive exploration, agents for automated processing.
Next Steps