Develop AI agents on Apify
The Apify platform provides everything you need to build, test, and deploy AI agents. This page walks you through the full stack: templates, sandbox code execution, LLM access through OpenRouter, pay-per-event monetization, and deployment to Apify Store.
Looking to use AI coding assistants (Claude Code, Cursor, GitHub Copilot) to help you develop Actors? See Build Actors with AI.
Start from a template
The fastest way to start building your AI agent is to use one of the AI framework templates. Each template comes pre-configured with the right file structure, dependencies, and Apify SDK integration.
Available AI framework templates include:
- LangChain - LLM pipelines with chain-of-thought and tool use
- Mastra - TypeScript-native AI agent framework
- CrewAI - multi-agent orchestration for complex tasks
- LlamaIndex - retrieval-augmented generation (RAG) workflows
- PydanticAI - Python agents with structured, validated outputs
- Smolagents - lightweight agents from Hugging Face
- MCP - expose your Actor as an MCP server
Initialize a template with the Apify CLI:
apify create my-agent
The command guides you through template selection. Browse all available templates at apify.com/templates.
If you don't have the Apify CLI installed, see the installation guide.
Use AI Sandbox for code execution
AI Sandbox is an isolated, containerized environment where your AI agent can execute code and system commands at runtime. Your agent Actor starts the sandbox and communicates with it through a REST API or MCP interface.
Key capabilities
- Code execution - run JavaScript, TypeScript, Python, and bash via POST /exec with captured stdout/stderr and exit codes
- Filesystem access - read, write, list, and delete files through /fs/{path} endpoints
- Dynamic reverse proxy - start a web server inside the sandbox and expose it externally
- Dependency installation - install npm and pip packages at startup through Actor input
- Idle timeout - the sandbox automatically stops after a period of inactivity
- MCP interface - connect directly from Claude Code or other MCP clients for live debugging
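As a minimal sketch of the REST interface, the snippet below builds a request for the POST /exec endpoint mentioned above. The payload field names (code, language) and the sandbox URL are assumptions for illustration; check the AI Sandbox README for the actual request schema.

```javascript
// Hedged sketch: build a code-execution request for a running AI Sandbox.
// The endpoint path (POST /exec) comes from the capabilities list above;
// the payload shape is an assumption, not the documented schema.
function buildExecRequest(sandboxUrl, code, language = 'python') {
  return {
    url: `${sandboxUrl}/exec`,
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ code, language }), // assumed field names
  };
}

// Hypothetical sandbox URL; a real one comes from the started sub-Actor.
const req = buildExecRequest('https://my-sandbox.apify.actor', 'print(1 + 1)');
```

You would pass an object like this to fetch() and read stdout/stderr and the exit code from the response.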
Example workflow
- Your agent Actor starts AI Sandbox as a sub-Actor
- The agent sends code to execute via the REST API (POST /exec)
- AI Sandbox runs the code in isolation and returns results
- The agent processes results and iterates
AI Sandbox runs on a Debian image with Node.js version 24 and Python 3 pre-installed. You can install additional dependencies through the Actor input configuration.
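The iterate step in the workflow above can be sketched as a simple loop. The real transport would be fetch() calls against the sandbox REST API; here the executor function is injected so the control flow is clear without a running sandbox, and the result shape (exitCode, stdout) is an assumption.

```javascript
// Hedged sketch of the agent's execute-and-iterate loop. executeCode
// stands in for a real POST /exec call; its result shape is assumed.
async function runAgentLoop(executeCode, maxIterations = 3) {
  let code = 'print("start")'; // first code the agent wants to run
  const results = [];
  for (let i = 0; i < maxIterations; i++) {
    const result = await executeCode(code); // e.g. POST /exec to the sandbox
    results.push(result);
    if (result.exitCode !== 0) break; // stop iterating on failure
    code = `print("iteration ${i + 1}")`; // agent decides the next step
  }
  return results;
}
```

In a real agent, the "decides the next step" line is where the LLM inspects stdout/stderr and produces new code.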
Access LLMs with OpenRouter
The OpenRouter Actor provides access to 100+ LLMs through your Apify account. Supported providers include OpenAI, Anthropic, Google, Mistral, Meta, and more. No separate API keys or billing setup required - all costs are billed as platform usage.
OpenRouter exposes an OpenAI-compatible API, so you can use it with any SDK that supports the OpenAI API format.
Connect to OpenRouter
Use the Apify OpenRouter proxy endpoint with your Apify token:
import { createOpenRouter } from '@openrouter/ai-sdk-provider';
const openrouter = createOpenRouter({
baseURL: 'https://openrouter.apify.actor/api/v1',
apiKey: 'api-key-not-required',
headers: {
Authorization: `Bearer ${process.env.APIFY_TOKEN}`,
},
});
The proxy supports chat completions, streaming, text embeddings, and image generation through vision-capable models.
With pay-per-event pricing, you can charge users per token. To do this, extract token counts from OpenRouter responses. Check the OpenRouter Actor README for the latest guidance on this workflow.
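Because the proxy is OpenAI-compatible, token counts arrive in the standard usage object of each chat completion response. A small helper to pull them out might look like this:

```javascript
// Extract token counts from an OpenAI-compatible chat completion
// response, as returned through the OpenRouter proxy. The usage object
// (prompt_tokens, completion_tokens, total_tokens) is part of the
// standard OpenAI response format.
function extractTokenUsage(response) {
  const usage = response.usage ?? {};
  return {
    promptTokens: usage.prompt_tokens ?? 0,
    completionTokens: usage.completion_tokens ?? 0,
    totalTokens: usage.total_tokens ?? 0,
  };
}
```

The totals can then drive per-token pay-per-event charges.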
Monetize with pay-per-event pricing
Pay-per-event (PPE) pricing lets you charge users for specific actions your agent performs. Use Actor.charge() from the JavaScript SDK or Python SDK to bill users for events like API calls, generated results, or token usage.
PPE for AI agents
For AI agents that use OpenRouter, consider these pricing strategies:
- Fixed pricing - charge a flat fee per task or request, regardless of the underlying LLM costs
- Usage-based pricing - charge per token or per LLM call, passing costs through to users with a markup
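The usage-based strategy can be sketched as a small pricing helper: convert a token count into a dollar amount with a markup. The rate and markup values here are hypothetical; in a real Actor you would pass the resulting amount to Actor.charge().

```javascript
// Hedged sketch of usage-based PPE pricing: turn token usage into a
// charge amount with a markup. Rates and markup are illustrative only.
function computeTokenCharge(totalTokens, costPerMillionUsd, markup = 1.2) {
  const rawCost = (totalTokens / 1_000_000) * costPerMillionUsd;
  // Round to micro-dollars to avoid floating-point drift in billing.
  return Math.round(rawCost * markup * 1e6) / 1e6;
}
```

With a hypothetical model rate of $10 per million tokens and a 1.2x markup, a one-million-token run would be charged at $12.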
Your profit is calculated as:
profit = (0.8 × revenue) - platform costs
If an Actor's net profit goes negative (for example, from free-tier users consuming LLM resources), the negative amount resets to $0 for aggregation purposes. Negative profit on one Actor doesn't affect earnings from your other Actors.
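The two rules above, the 80% revenue share and the reset of negative profit to $0, combine into a simple per-Actor calculation:

```javascript
// Worked example of the profit formula above: you keep 80% of revenue
// minus platform costs, and a negative result resets to $0 per Actor.
function actorProfit(revenueUsd, platformCostsUsd) {
  return Math.max(0, 0.8 * revenueUsd - platformCostsUsd);
}
```

For example, $100 of revenue with $30 of platform costs yields $50 of profit, while an Actor whose costs exceed 80% of its revenue contributes $0 rather than a negative amount.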
For detailed pricing guidance, see the pay-per-event documentation.
Deploy to Apify
When your agent is ready, deploy it to the Apify platform:
apify push
This builds and deploys your Actor. Once deployed, you can:
- Publish to Apify Store - make your agent available to other users and start earning with PPE pricing. See the publishing documentation.
- Run via API - trigger your agent programmatically through the Apify API.
- Set up schedules - run your agent on a recurring schedule.
For more deployment options, see the deployment documentation.