AI FRAMEWORKS

Mastra

The TypeScript framework for building production AI agents and workflows — with built-in tool calling, RAG pipelines, workflow orchestration, and 50+ third-party integrations that let developers ship intelligent applications in days, not months.

Why It Matters

Teams building AI agents typically glue together five or six libraries — model, workflows, memory, evaluation, tool calling, observability. The result is fragile and expensive. Mastra replaces that entire stack with one cohesive framework: model routing, visual workflow debugging, persistent memory, MCP support, and built-in evaluations — all in a single package.

What It Actually Does

Every capability explained in plain English — so you know exactly how Mastra translates into features your users see and value your business gains.

Autonomous AI Agents

Build agents with the Agent class — give them instructions, tools, sub-agents, and workflows. Agents support generate/stream, structured output with Zod, image analysis, multi-step reasoning with maxSteps, dynamic instructions, and a supervisor pattern for coordinating teams of specialised agents.
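
In code, an agent is a small declarative object. The sketch below is a configuration example assuming the @mastra/core and zod packages are installed; the option names follow Mastra's documented Agent class and createTool helper, but treat exact signatures as assumptions to verify against the current docs:

```typescript
import { Agent } from '@mastra/core/agent';
import { createTool } from '@mastra/core/tools';
import { z } from 'zod';

// A toy tool the agent may call; the lookup itself is stubbed.
const lookupCustomer = createTool({
  id: 'lookup-customer',
  description: 'Fetch a customer record by email',
  inputSchema: z.object({ email: z.string().email() }),
  outputSchema: z.object({ name: z.string(), plan: z.string() }),
  execute: async () => ({ name: 'Ada', plan: 'pro' }), // stubbed result
});

export const supportAgent = new Agent({
  name: 'support-agent',
  instructions: 'Answer billing questions. Look up the customer first.',
  model: 'openai/gpt-5.1', // model router string (see Model Router below)
  tools: { lookupCustomer },
});

// const reply = await supportAgent.generate('What plan is ada@example.com on?');
```

The agent decides on its own when to call lookupCustomer; you only declare what the tool does and what it needs.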

What This Means For Your Business

An agent is like a smart virtual employee you can give instructions to. Tell it what tools it can use — search the web, look up a customer record, send an email — and it figures out the steps on its own. You can even create a team of agents: one that researches, one that writes, and a supervisor that coordinates them. The agents keep working until the job is done.

Graph-Based Workflow Engine

Orchestrate complex multi-step processes with createWorkflow — using an intuitive .then(), .branch(), and .parallel() syntax. Workflows support typed steps with Zod schemas, shared state, suspend/resume for human-in-the-loop, nested workflows-as-steps, cloning, and real-time streaming of step progress.
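
A workflow is declared as typed steps chained with .then(). The sketch below is a configuration example assuming @mastra/core and zod; step option names and the .commit() call follow Mastra's documented workflow API, but verify the exact shapes against the current docs:

```typescript
import { createWorkflow, createStep } from '@mastra/core/workflows';
import { z } from 'zod';

const validate = createStep({
  id: 'validate',
  inputSchema: z.object({ email: z.string() }),
  outputSchema: z.object({ email: z.string().email() }),
  execute: async ({ inputData }) => ({ email: inputData.email.trim() }),
});

const notify = createStep({
  id: 'notify',
  inputSchema: z.object({ email: z.string().email() }),
  outputSchema: z.object({ sent: z.boolean() }),
  execute: async () => {
    // send the email here; stubbed for the sketch
    return { sent: true };
  },
});

export const onboarding = createWorkflow({
  id: 'onboarding',
  inputSchema: z.object({ email: z.string() }),
  outputSchema: z.object({ sent: z.boolean() }),
})
  .then(validate)
  .then(notify)
  .commit();
```

Because each step declares its input and output schemas, a mismatch between steps is caught at compile time rather than in production.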

What This Means For Your Business

When your AI needs to follow a strict process — first collect information, then validate it, then send it for approval, then execute — workflows let you define that process as a visual flow chart. Each step is guaranteed to run in the right order, and if human approval is needed, the workflow pauses, waits as long as necessary, and picks up exactly where it left off.

Extensible Tool System

Define tools with createTool — typed input/output schemas via Zod, async execution, and toModelOutput for shaping what the model sees. Agents can use tools, other agents as tools, and workflows as tools. Supports dynamic tool search for agents with large tool libraries.
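
A tool is a typed function the model can choose to call. The configuration sketch below assumes @mastra/core and zod; the execute signature and the api.example.com endpoint are assumptions for illustration, so check the current createTool docs:

```typescript
import { createTool } from '@mastra/core/tools';
import { z } from 'zod';

export const weatherTool = createTool({
  id: 'get-weather',
  description: 'Get the current temperature for a city',
  inputSchema: z.object({ city: z.string() }),
  outputSchema: z.object({ tempC: z.number() }),
  execute: async ({ context }) => {
    // context is the validated input; api.example.com is a placeholder
    // endpoint standing in for a real weather API.
    const res = await fetch(`https://api.example.com/weather?city=${context.city}`);
    const data = await res.json();
    return { tempC: data.tempC };
  },
});
```

The description field matters: it is what the model reads when deciding whether this tool fits the task at hand.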

What This Means For Your Business

Tools are the actions your AI can take — looking up weather data, querying a database, calling an API, or running a calculation. You define what each tool does and what inputs it needs, and the AI decides when to use which tool. You can even have one AI agent call another agent as a tool, or kick off an entire workflow as a single tool call.

Model Router — 600+ Models, One Interface

Connect to 40+ providers (OpenAI, Anthropic, Google, Meta, Mistral, DeepSeek, and more) through a single model string like 'openai/gpt-5.1'. The router auto-detects API keys from environment variables. Supports provider-specific options like OpenAI reasoning effort and Anthropic caching.
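
Every model is addressed by a single 'provider/model' string. The helper below only illustrates that naming convention; it is not part of Mastra, which resolves these strings internally:

```typescript
// Illustration of the 'provider/model' routing string format.
// Mastra performs this resolution itself; this helper is not its API.
function parseModelString(id: string): { provider: string; model: string } {
  const slash = id.indexOf('/');
  if (slash === -1) throw new Error(`expected 'provider/model', got '${id}'`);
  return { provider: id.slice(0, slash), model: id.slice(slash + 1) };
}
```

Switching providers is therefore a one-token change: 'openai/gpt-5.1' becomes 'anthropic/claude-opus-4' and nothing else in the application moves.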

What This Means For Your Business

Instead of writing different code for every AI provider, you just change a single model name — like switching from 'openai/gpt-5.1' to 'anthropic/claude-opus-4'. Mastra automatically finds the right API key and connects. It's like a universal adapter: one plug, any AI model in the world.

MCP Client & Server

Full Model Context Protocol support: Mastra can both consume tools from external MCP servers (MCPClient) and expose your agents, tools, and workflows as MCP servers (MCPServer) for any MCP-compatible system. Connect to registries like Klavis AI, Smithery, Composio, and more.
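
Consuming an MCP server takes a few lines of configuration. The sketch below assumes the @mastra/mcp package and uses a hypothetical server URL; verify the exact client options against the current docs:

```typescript
import { MCPClient } from '@mastra/mcp';

// 'weather' is a hypothetical MCP server entry; replace with a real
// server from a registry such as Smithery or Composio.
const mcp = new MCPClient({
  servers: {
    weather: { url: new URL('https://example.com/mcp') },
  },
});

// const tools = await mcp.getTools();
// Passing these to new Agent({ ..., tools }) lets the agent call
// every tool the remote server exposes.
```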

What This Means For Your Business

MCP is the emerging universal standard for AI tools — think of it as USB-C for AI. Mastra can both plug into existing MCP tool servers (to use tools others have built) and publish your own AI agents and tools so other systems can use them. This means your AI can instantly connect to thousands of pre-built integrations.

Advanced Memory System

Four types of memory: message history for conversation context, working memory as a persistent scratchpad (with Markdown templates or Zod schemas), semantic recall for meaning-based retrieval of past messages, and observational memory that automatically compresses long conversations into dense observations. Supports resource-scoped and thread-scoped persistence.
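
Wiring memory up is a configuration object attached to an agent. The sketch below assumes the @mastra/memory package; option names follow Mastra's documented Memory class but should be verified against the current docs:

```typescript
import { Memory } from '@mastra/memory';

const memory = new Memory({
  options: {
    lastMessages: 20,            // message history window
    semanticRecall: { topK: 3 }, // meaning-based retrieval of past messages
    workingMemory: {
      enabled: true,
      // Markdown template the agent keeps updated per user
      template: '# User\n- Name:\n- Preferences:\n',
    },
  },
});

// Attach it when constructing an agent: new Agent({ ..., memory })
```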

What This Means For Your Business

Your AI remembers things — not just the current conversation, but your preferences, past interactions, and important facts. Working memory acts like a sticky note the AI keeps updated about you. Semantic recall lets it search past conversations by meaning, not just keywords. And observational memory automatically summarises long conversations so the AI stays sharp without running out of context space.

RAG Pipeline (Retrieval-Augmented Generation)

Built-in document processing with multiple chunking strategies (recursive, sliding window), embedding generation via ModelRouterEmbeddingModel, and vector storage with PgVector, Pinecone, Qdrant, and MongoDB. Standardised APIs for the full ingest-embed-store-query cycle.
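
Chunking with overlap is the first stage of that cycle. The sketch below is plain TypeScript (not Mastra's document API) showing how a sliding window preserves context at chunk boundaries:

```typescript
// Simplified sliding-window chunking: fixed-size windows that overlap,
// so a sentence split at a boundary still appears whole in one chunk.
// Mastra's document API implements richer strategies (recursive,
// markdown-aware, etc.).
function chunkText(text: string, size = 200, overlap = 50): string[] {
  if (overlap >= size) throw new Error('overlap must be smaller than size');
  const chunks: string[] = [];
  for (let start = 0; start < text.length; start += size - overlap) {
    chunks.push(text.slice(start, start + size));
    if (start + size >= text.length) break;
  }
  return chunks;
}
```

Each chunk is then embedded and stored; at query time, only the few most relevant chunks are retrieved and handed to the model.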

What This Means For Your Business

RAG lets your AI answer questions using your company's own documents — product manuals, knowledge bases, internal wikis. Mastra breaks documents into digestible pieces, converts them into searchable embeddings, stores them in a vector database, and retrieves the most relevant pieces when your AI needs to answer a question. The result is an AI that knows your business, not just generic internet knowledge.

Built-In Evaluations & Scorers

Automated scoring of agent outputs using model-graded, rule-based, and statistical methods. Built-in scorers for answer relevancy, toxicity, and more. Supports live evaluations (real-time scoring with sampling control), trace evaluations on historical data, and custom scorers. Scores are stored automatically for trend analysis.
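
A rule-based check is the simplest kind of scorer. The toy function below is plain TypeScript, not one of Mastra's built-in scorers; it grades an answer by the fraction of query terms it contains, just to show the shape of a score:

```typescript
// Toy rule-based relevancy scorer: fraction of query terms present in
// the answer, as a number in [0, 1]. Mastra's built-in scorers are far
// more sophisticated (model-graded, statistical); this is the concept only.
function relevancyScore(query: string, answer: string): number {
  const terms = query.toLowerCase().split(/\W+/).filter(Boolean);
  if (terms.length === 0) return 0;
  const text = answer.toLowerCase();
  const hits = terms.filter((t) => text.includes(t)).length;
  return hits / terms.length;
}
```

Real scorers follow the same contract (output in, score out), which is what lets Mastra store scores uniformly for trend analysis.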

What This Means For Your Business

How do you know if your AI is doing a good job? Mastra includes a built-in grading system. Every response can be automatically scored for quality, relevance, and safety. You can run these checks in real time on live conversations or batch-process historical data. It's like having a quality inspector watching every AI interaction and reporting back with scores and trends.

Deep Observability & Tracing

OpenTelemetry-native tracing with auto-instrumented spans for agent runs, LLM calls, tool executions, and workflow steps. Exporters for Langfuse, LangSmith, Braintrust, Datadog, Sentry, PostHog, Arize, Laminar, and any OTEL-compatible platform. Sensitive data filtering, custom metadata, sampling strategies, and serverless flush support.

What This Means For Your Business

When your AI is running in production, you need to see what's happening under the hood — which model was called, how long it took, what tools were used, and whether anything went wrong. Mastra automatically records all of this and sends it to your monitoring platform of choice (Datadog, Sentry, etc.). It's like having a flight recorder for your AI: if something goes wrong, you can replay exactly what happened.

Interactive Studio UI

A local development environment at localhost:4111 for testing agents interactively, visualising workflow graphs, running tools in isolation, exploring MCP servers, viewing traces, and testing scorers. Includes a full REST API with Swagger UI for programmatic access.

What This Means For Your Business

Studio is like a control room for your AI. Your developers open it in a browser and can chat with agents, step through workflows visually, test individual tools, view performance traces, and run quality checks — all without writing a single line of test code. Product managers can even use it to see how the AI behaves before it goes to customers.

Human-in-the-Loop

Suspend any agent or workflow at critical decision points to require human approval. Mastra uses storage to persist execution state, so workflows can pause indefinitely and resume exactly where they left off. Tool-level approvals propagate through supervisor chains.
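
The pause-and-resume mechanics can be sketched in plain TypeScript; this illustrates the pattern, not Mastra's actual suspend/resume API. The workflow returns a serialisable state object, and a later call picks up exactly from it:

```typescript
// Minimal suspend/resume sketch: execution state is an ordinary,
// serialisable object. Mastra persists the equivalent state in storage,
// which is why a workflow can wait hours or days for approval.
type Suspended = { step: 'awaiting-approval'; payload: { amount: number } };

function requestRefund(amount: number): Suspended {
  // validation ran here; now pause for a human decision
  return { step: 'awaiting-approval', payload: { amount } };
}

function resume(state: Suspended, approved: boolean): string {
  if (!approved) return 'refund rejected';
  return `refund of ${state.payload.amount} issued`;
}
```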

What This Means For Your Business

For important decisions — approving a refund, sending a client email, or making a purchase — the AI pauses and asks a human to approve before continuing. The workflow remembers exactly where it stopped, even if the human takes hours or days to respond. This keeps humans in control of the high-stakes moments while the AI handles the routine work.

Multi-Framework & Multi-Runtime

Integrates with Next.js, React (Vite), Astro, Express, SvelteKit, and Hono. Runs on Node.js v22.13+, Bun, Deno, and Cloudflare. Deploy as a standalone Mastra server, inside a monorepo, on cloud platforms (Vercel, Netlify, Cloudflare, AWS Lambda, Azure, Digital Ocean), or via Mastra Cloud.

What This Means For Your Business

Whatever technology stack your product is built on — whether it's a Next.js website, an Express API, or a serverless function — Mastra fits right in. It works with all popular web frameworks and can be deployed anywhere: your own servers, AWS, Vercel, Cloudflare, or Mastra's own managed cloud service.

Why Teams Choose Mastra

The key advantages that make Mastra the go-to choice for building AI-powered products.

All-in-One AI Framework — No Assembly Required

Agents, workflows, tools, memory, RAG, evaluations, observability, MCP, and a visual Studio — all in one cohesive package. Instead of stitching together 5-6 libraries, your team gets a single framework where everything works together out of the box. This dramatically reduces setup time, debugging complexity, and maintenance burden.

Best-in-Class Observability

Auto-instrumented OpenTelemetry tracing for every agent run, LLM call, tool execution, and workflow step — with 10+ exporter integrations (Langfuse, LangSmith, Datadog, Sentry, PostHog, Arize, and more). Custom sampling strategies, sensitive data filtering, and trace IDs in every response. Production debugging that actually works.

Production-Grade Memory System

Four distinct memory types — message history, working memory (template or schema-based), semantic recall, and observational memory — give agents genuine long-term context. Resource-scoped persistence means your agent remembers user preferences across conversations, not just within a single chat session.

Open Source with Enterprise Path (Apache 2.0)

The core framework is fully open source under Apache 2.0 — free to use, modify, and deploy commercially. Enterprise features (auth, RBAC) are source-available under the Mastra Enterprise License. This gives you transparency and control, with a clear upgrade path when you need enterprise governance.

Studio — Visual Development & Testing

The built-in Studio UI at localhost:4111 lets developers chat with agents, visualise workflow graphs, test tools in isolation, explore MCP servers, view traces, and run evaluation scorers — all without writing test code. It collapses the feedback loop from minutes to seconds.

Built by the Gatsby Team, Backed by YC

Mastra was created by the team behind Gatsby (one of the most successful open-source web frameworks) and is backed by Y Combinator (W25 batch). This means battle-tested engineering, strong open-source DNA, and the resources to support the framework long-term. 21.8K+ GitHub stars and 356 contributors demonstrate strong community traction.

Supported AI Providers

Mastra connects to 20+ AI providers through a single, unified interface. Switch between providers without changing your application code.

Official Providers (18)

OpenAI
Anthropic
Google Generative AI
Google Vertex AI
xAI (Grok)
Amazon Bedrock
Azure OpenAI
Mistral
DeepSeek
Groq
Cohere
Meta (Llama)
Fireworks
Perplexity
Together.ai
Cerebras
DeepInfra
Replicate

Community Providers (2)

Ollama
LM Studio

Use Case Fit

See how Mastra aligns with different AI product use cases — from chatbots and agents to content generation and workflow automation.

AI Chatbot: Strong Fit
Customer Support Agent: Strong Fit
Internal Knowledge Base: Strong Fit
Content Generation: Strong Fit
Code Assistant: Good Fit
Workflow Automation: Strong Fit
Generative UI: Possible Fit
Semantic Search: Strong Fit
Data Extraction: Strong Fit
Multi-Step Agent: Strong Fit

Companion Services

Official services that extend Mastra's capabilities in production.

Mastra Cloud

Official

Deploy, Observe, and Collaborate — Zero Infra Setup.

A managed hosting platform purpose-built for Mastra applications. Deploy agents and workflows with a single command, get automatic scaling, built-in observability dashboards, cloud storage, and team collaboration features — all without managing your own infrastructure.

What This Means For Your Business

Instead of setting up servers, databases, and monitoring tools yourself, Mastra Cloud handles all of it. You push your AI agents to the cloud, and they're instantly live — with automatic scaling when traffic spikes, built-in dashboards to see how your agents are performing, and team features so your whole organisation can collaborate. It's like having an IT department specifically for your AI workforce, included in one service.

Key Benefits

One-command deployment — push your Mastra agents to production instantly
Automatic scaling — handles traffic spikes without manual intervention
Built-in observability — traces, logs, and performance dashboards out of the box
Cloud storage — persistent memory and data without managing databases
Team collaboration — share agents, workflows, and traces across your organisation
Currently in beta — early adopters can start building today

Developer Experience

What your engineering team gets when they adopt Mastra — language support, tooling, documentation, and community.

Primary Language: TypeScript
Documentation: Excellent
Starter Templates: 10+
Community: 21.8K+ GitHub stars, 356 contributors, active Discord

Supported Frameworks

React
Next.js
Svelte
SvelteKit
Node.js
Install
npm create mastra@latest

Package managers: npm / pnpm / yarn / bun

Dev Features
TypeScript-First
CLI Tooling
Hot-Reload Compatible

Local Development

Scaffold a project with 'npm create mastra@latest', then run 'mastra dev' to launch Studio at localhost:4111. Hot-reload, interactive agent chat, workflow visualisation, tool playground, trace viewer, and Swagger API explorer — all built in. Local model support via Ollama.

Full Type Safety · Compile-Time Validation · IDE Autocomplete

Honest Trade-Offs

No technology is perfect. Here are the real limitations of Mastra — so you make an informed decision, not a surprised one.

TypeScript/JavaScript Only · Severity: High

Mastra is built exclusively for the TypeScript/JavaScript ecosystem. Teams with Python, Go, Java, or Ruby backends would need to create a TypeScript service layer or use a different framework for their AI features.

Relatively New Framework (2024) · Severity: Medium

Mastra launched in 2024 and is still in its early major versions. While it's backed by an experienced team and growing fast, the ecosystem of plugins, community resources, and third-party integrations is smaller than more established frameworks like LangChain.

Requires Node.js v22.13+ · Severity: Medium

Mastra requires a relatively recent Node.js version (v22.13.0 or later). Teams running older Node.js versions in production will need to upgrade before adopting Mastra, which may involve infrastructure changes and testing.

No Native Generative UI · Severity: Medium

Unlike the Vercel AI SDK which can render interactive React components directly in AI responses, Mastra focuses on backend agent orchestration and doesn't include native generative UI capabilities. Teams needing chat-embedded widgets would pair Mastra with a frontend library like AI SDK UI or CopilotKit.

Mastra Cloud Is Still in Beta · Severity: Low

The managed cloud hosting platform is currently in beta. Teams that want a fully managed deployment experience may need to wait for general availability, or self-host using the standalone Mastra server in the meantime.

Build with Mastra? Let's Talk.

Our team will help you architect, build, and ship AI-powered features using Mastra — tailored to your product and use case.