Prerequisites
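A typical environment setup for the examples in this guide, assuming the SDK and LangChain are installed from PyPI (the package and environment-variable names below are assumptions — check the Avala docs for the published names):

```shell
# Assumed package names -- verify against the Avala documentation.
pip install avala langchain langchain-openai

# Assumed environment variables for authentication.
export AVALA_API_KEY="your-avala-key"
export OPENAI_API_KEY="your-openai-key"
```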
Query Your Datasets with an LLM
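A minimal sketch of the fetch-then-reason pattern. The `AvalaClient` class, `list_datasets` method, and record fields are assumptions, not the documented API — consult the Python SDK reference for the real names:

```python
# Sketch only: Avala names below are assumptions.

def datasets_to_context(datasets):
    """Flatten dataset records into plain text an LLM can read."""
    return "\n".join(
        f"- {d['name']}: {d.get('item_count', '?')} items" for d in datasets
    )

def ask_about_datasets(datasets, question, llm=None):
    """Ground an LLM answer in the fetched dataset records."""
    if llm is None:
        from langchain_openai import ChatOpenAI  # assumed provider choice
        llm = ChatOpenAI(model="gpt-4o-mini")
    prompt = (
        "You are looking at these Avala datasets:\n"
        f"{datasets_to_context(datasets)}\n\n"
        f"Question: {question}"
    )
    return llm.invoke(prompt).content

# In your own environment:
# from avala import AvalaClient  # assumed import path
# client = AvalaClient(api_key="YOUR_API_KEY")
# print(ask_about_datasets(client.list_datasets(), "Which dataset is largest?"))
```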
Use the Avala SDK to fetch project data, then let an LLM reason over it.

Auto-Tag Exports with LLM Classification
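One way this tagging workflow could look, assuming a hypothetical `get_export` method and simple dict records (the tag vocabulary here is illustrative):

```python
# Sketch only: Avala method and field names are assumptions.

TAGS = ["vehicle", "pedestrian", "signage", "other"]

def tag_prompt(record_text):
    """Ask the LLM to pick exactly one tag from a fixed vocabulary."""
    return (
        f"Classify this annotation into one of {TAGS}.\n"
        f"Annotation: {record_text}\n"
        "Answer with the tag only."
    )

def auto_tag(records, llm):
    """Attach an LLM-chosen `tag` field to each exported record."""
    for rec in records:
        rec["tag"] = llm.invoke(tag_prompt(rec["text"])).content.strip()
    return records

# In your own environment:
# from avala import AvalaClient            # assumed import path
# from langchain_openai import ChatOpenAI  # assumed provider choice
# records = AvalaClient(api_key="YOUR_API_KEY").get_export(project_id=123)
# auto_tag(records, ChatOpenAI(model="gpt-4o-mini"))
```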
After exporting annotations, use an LLM to add metadata tags.

Build a QA Chain Over Annotation Data
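A minimal stuff-everything-into-the-prompt sketch; a production QA chain would add a retriever (e.g. LangChain's FAISS integration) rather than inlining every annotation, and `get_annotations` is an assumed method name:

```python
# Sketch only: Avala method and field names are assumptions.

def annotations_to_context(annotations):
    """Render annotation records as numbered lines for the prompt."""
    return "\n".join(
        f"{i}. [{a['label']}] {a['text']}" for i, a in enumerate(annotations, 1)
    )

def answer(question, annotations, llm):
    """Answer a natural-language question using only the annotation data."""
    prompt = (
        "Answer using only these annotations:\n"
        f"{annotations_to_context(annotations)}\n\n"
        f"Q: {question}\nA:"
    )
    return llm.invoke(prompt).content

# In your own environment:
# from avala import AvalaClient            # assumed import path
# from langchain_openai import ChatOpenAI  # assumed provider choice
# anns = AvalaClient(api_key="YOUR_API_KEY").get_annotations(project_id=123)
# print(answer("Which labels co-occur most often?", anns, ChatOpenAI()))
```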
Create a question-answering system that lets you query your annotation data naturally.

Use with the MCP Server
For the most natural AI integration, use the Avala MCP Server with LangChain’s MCP tool support. This lets LLMs call Avala tools directly without writing SDK code.

The SDK is currently read-only for datasets. LangChain workflows can read and analyze data, but dataset creation and uploads require the REST API. See File Uploads.
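A sketch of that wiring using the `langchain-mcp-adapters` package and a LangGraph ReAct agent. The server command (`avala-mcp`), the model id, and the adapter API details are assumptions and may differ across versions:

```python
# Sketch only: server command and adapter API details are assumptions.
import asyncio

SERVER_CONFIG = {
    "avala": {
        "command": "avala-mcp",  # assumed server executable
        "args": [],
        "transport": "stdio",
    }
}

async def run_agent(question: str) -> str:
    """Load Avala's MCP tools and let a ReAct agent call them."""
    from langchain_mcp_adapters.client import MultiServerMCPClient
    from langgraph.prebuilt import create_react_agent

    client = MultiServerMCPClient(SERVER_CONFIG)
    tools = await client.get_tools()  # MCP tools become LangChain tools
    agent = create_react_agent("openai:gpt-4o-mini", tools)
    result = await agent.ainvoke(
        {"messages": [{"role": "user", "content": question}]}
    )
    return result["messages"][-1].content

# In your own environment:
# print(asyncio.run(run_agent("List my Avala projects.")))
```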
Next Steps
- MCP Server setup for AI assistant integration
- Python SDK reference for all available methods
- Webhooks to trigger LangChain pipelines on Avala events