# Adding Langfuse Tracing to Mastra AI Agents

In the previous article, we built an AI agent using Mastra and Deno that can search for npm packages and provide version information. Now, let's enhance our setup by adding Langfuse tracing to monitor and analyze our agent's performance.
Langfuse is an open-source observability platform for LLM applications. It helps you track, debug, and improve your AI applications by providing detailed insights into their behavior. While you can use the cloud version of Langfuse, we'll set up a local instance for more control and privacy.
## Setting Up Local Langfuse
First, let's add a `docker-compose.yml` file to our project root. This will set up all the necessary services for Langfuse to run locally:
```yaml
services:
  langfuse-worker:
    image: langfuse/langfuse-worker:3
    restart: always
    depends_on: &langfuse-depends-on
      postgres:
        condition: service_healthy
      minio:
        condition: service_healthy
      redis:
        condition: service_healthy
      clickhouse:
        condition: service_healthy
    ports:
      - 127.0.0.1:3030:3030
    environment: &langfuse-worker-env
      DATABASE_URL: postgresql://postgres:postgres@postgres:5432/postgres
      SALT: 'mysalt'
      ENCRYPTION_KEY: '4bc44298b37d77abcac27fc48271cf410c624e6bb48c9aaab5033e5a073a9b83'
      TELEMETRY_ENABLED: ${TELEMETRY_ENABLED:-true}
      LANGFUSE_ENABLE_EXPERIMENTAL_FEATURES: ${LANGFUSE_ENABLE_EXPERIMENTAL_FEATURES:-true}
      CLICKHOUSE_MIGRATION_URL: ${CLICKHOUSE_MIGRATION_URL:-clickhouse://clickhouse:9000}
      CLICKHOUSE_URL: ${CLICKHOUSE_URL:-http://clickhouse:8123}
      CLICKHOUSE_USER: ${CLICKHOUSE_USER:-clickhouse}
      CLICKHOUSE_PASSWORD: ${CLICKHOUSE_PASSWORD:-clickhouse}
      CLICKHOUSE_CLUSTER_ENABLED: ${CLICKHOUSE_CLUSTER_ENABLED:-false}
      LANGFUSE_S3_EVENT_UPLOAD_BUCKET: ${LANGFUSE_S3_EVENT_UPLOAD_BUCKET:-langfuse}
      LANGFUSE_S3_EVENT_UPLOAD_REGION: ${LANGFUSE_S3_EVENT_UPLOAD_REGION:-auto}
      LANGFUSE_S3_EVENT_UPLOAD_ACCESS_KEY_ID: ${LANGFUSE_S3_EVENT_UPLOAD_ACCESS_KEY_ID:-minio}
      LANGFUSE_S3_EVENT_UPLOAD_SECRET_ACCESS_KEY: ${LANGFUSE_S3_EVENT_UPLOAD_SECRET_ACCESS_KEY:-miniosecret}
      LANGFUSE_S3_EVENT_UPLOAD_ENDPOINT: ${LANGFUSE_S3_EVENT_UPLOAD_ENDPOINT:-http://minio:9000}
      LANGFUSE_S3_EVENT_UPLOAD_FORCE_PATH_STYLE: ${LANGFUSE_S3_EVENT_UPLOAD_FORCE_PATH_STYLE:-true}
      LANGFUSE_S3_EVENT_UPLOAD_PREFIX: ${LANGFUSE_S3_EVENT_UPLOAD_PREFIX:-events/}
      LANGFUSE_S3_MEDIA_UPLOAD_BUCKET: ${LANGFUSE_S3_MEDIA_UPLOAD_BUCKET:-langfuse}
      LANGFUSE_S3_MEDIA_UPLOAD_REGION: ${LANGFUSE_S3_MEDIA_UPLOAD_REGION:-auto}
      LANGFUSE_S3_MEDIA_UPLOAD_ACCESS_KEY_ID: ${LANGFUSE_S3_MEDIA_UPLOAD_ACCESS_KEY_ID:-minio}
      LANGFUSE_S3_MEDIA_UPLOAD_SECRET_ACCESS_KEY: ${LANGFUSE_S3_MEDIA_UPLOAD_SECRET_ACCESS_KEY:-miniosecret}
      LANGFUSE_S3_MEDIA_UPLOAD_ENDPOINT: ${LANGFUSE_S3_MEDIA_UPLOAD_ENDPOINT:-http://localhost:9090}
      LANGFUSE_S3_MEDIA_UPLOAD_FORCE_PATH_STYLE: ${LANGFUSE_S3_MEDIA_UPLOAD_FORCE_PATH_STYLE:-true}
      LANGFUSE_S3_MEDIA_UPLOAD_PREFIX: ${LANGFUSE_S3_MEDIA_UPLOAD_PREFIX:-media/}
      LANGFUSE_S3_BATCH_EXPORT_ENABLED: ${LANGFUSE_S3_BATCH_EXPORT_ENABLED:-false}
      LANGFUSE_S3_BATCH_EXPORT_BUCKET: ${LANGFUSE_S3_BATCH_EXPORT_BUCKET:-langfuse}
      LANGFUSE_S3_BATCH_EXPORT_PREFIX: ${LANGFUSE_S3_BATCH_EXPORT_PREFIX:-exports/}
      LANGFUSE_S3_BATCH_EXPORT_REGION: ${LANGFUSE_S3_BATCH_EXPORT_REGION:-auto}
      LANGFUSE_S3_BATCH_EXPORT_ENDPOINT: ${LANGFUSE_S3_BATCH_EXPORT_ENDPOINT:-http://minio:9000}
      LANGFUSE_S3_BATCH_EXPORT_EXTERNAL_ENDPOINT: ${LANGFUSE_S3_BATCH_EXPORT_EXTERNAL_ENDPOINT:-http://localhost:9090}
      LANGFUSE_S3_BATCH_EXPORT_ACCESS_KEY_ID: ${LANGFUSE_S3_BATCH_EXPORT_ACCESS_KEY_ID:-minio}
      LANGFUSE_S3_BATCH_EXPORT_SECRET_ACCESS_KEY: ${LANGFUSE_S3_BATCH_EXPORT_SECRET_ACCESS_KEY:-miniosecret}
      LANGFUSE_S3_BATCH_EXPORT_FORCE_PATH_STYLE: ${LANGFUSE_S3_BATCH_EXPORT_FORCE_PATH_STYLE:-true}
      LANGFUSE_INGESTION_QUEUE_DELAY_MS: ${LANGFUSE_INGESTION_QUEUE_DELAY_MS:-}
      LANGFUSE_INGESTION_CLICKHOUSE_WRITE_INTERVAL_MS: ${LANGFUSE_INGESTION_CLICKHOUSE_WRITE_INTERVAL_MS:-}
      REDIS_HOST: ${REDIS_HOST:-redis}
      REDIS_PORT: ${REDIS_PORT:-6379}
      REDIS_AUTH: ${REDIS_AUTH:-myredissecret}
      REDIS_TLS_ENABLED: ${REDIS_TLS_ENABLED:-false}
      REDIS_TLS_CA: ${REDIS_TLS_CA:-/certs/ca.crt}
      REDIS_TLS_CERT: ${REDIS_TLS_CERT:-/certs/redis.crt}
      REDIS_TLS_KEY: ${REDIS_TLS_KEY:-/certs/redis.key}

  langfuse-web:
    image: langfuse/langfuse:3
    restart: always
    depends_on: *langfuse-depends-on
    ports:
      - 3000:3000
    environment:
      <<: *langfuse-worker-env
      NEXTAUTH_URL: http://localhost:3000
      NEXTAUTH_SECRET: mysecret
      LANGFUSE_INIT_ORG_ID: ${LANGFUSE_INIT_ORG_ID:-1234567890}
      LANGFUSE_INIT_ORG_NAME: ${LANGFUSE_INIT_ORG_NAME:-deno_mastra}
      LANGFUSE_INIT_PROJECT_ID: ${LANGFUSE_INIT_PROJECT_ID:-1234567890}
      LANGFUSE_INIT_PROJECT_NAME: ${LANGFUSE_INIT_PROJECT_NAME:-deno_mastra}
      LANGFUSE_INIT_PROJECT_PUBLIC_KEY: ${LANGFUSE_INIT_PROJECT_PUBLIC_KEY:-sk-pb-0123456789abcdef0123456789abcdef}
      LANGFUSE_INIT_PROJECT_SECRET_KEY: ${LANGFUSE_INIT_PROJECT_SECRET_KEY:-sk-sc-0123456789abcdef0123456789abcdef}
      LANGFUSE_INIT_USER_EMAIL: ${LANGFUSE_INIT_USER_EMAIL:-[email protected]}
      LANGFUSE_INIT_USER_NAME: ${LANGFUSE_INIT_USER_NAME:-User}
      LANGFUSE_INIT_USER_PASSWORD: ${LANGFUSE_INIT_USER_PASSWORD:-12345678}

  clickhouse:
    image: clickhouse/clickhouse-server
    restart: always
    user: '101:101'
    environment:
      CLICKHOUSE_DB: default
      CLICKHOUSE_USER: clickhouse
      CLICKHOUSE_PASSWORD: clickhouse
    volumes:
      - langfuse_clickhouse_data:/var/lib/clickhouse
      - langfuse_clickhouse_logs:/var/log/clickhouse-server
    ports:
      - 127.0.0.1:8123:8123
      - 127.0.0.1:9000:9000
    healthcheck:
      test: wget --no-verbose --tries=1 --spider http://localhost:8123/ping || exit 1
      interval: 5s
      timeout: 5s
      retries: 10
      start_period: 1s

  minio:
    image: minio/minio
    restart: always
    entrypoint: sh
    command: -c 'mkdir -p /data/langfuse && minio server --address ":9000" --console-address ":9001" /data'
    environment:
      MINIO_ROOT_USER: minio
      MINIO_ROOT_PASSWORD: miniosecret
    ports:
      - 9090:9000
      - 127.0.0.1:9091:9001
    volumes:
      - langfuse_minio_data:/data
    healthcheck:
      test: ['CMD', 'mc', 'ready', 'local']
      interval: 1s
      timeout: 5s
      retries: 5
      start_period: 1s

  redis:
    image: redis:7
    restart: always
    command: >
      --requirepass ${REDIS_AUTH:-myredissecret}
    ports:
      - 127.0.0.1:6379:6379
    healthcheck:
      test: ['CMD', 'redis-cli', 'ping']
      interval: 3s
      timeout: 10s
      retries: 10

  postgres:
    image: postgres:${POSTGRES_VERSION:-latest}
    restart: always
    healthcheck:
      test: ['CMD-SHELL', 'pg_isready -U postgres']
      interval: 3s
      timeout: 3s
      retries: 10
    environment:
      POSTGRES_USER: postgres
      POSTGRES_PASSWORD: postgres
      POSTGRES_DB: postgres
    ports:
      - 127.0.0.1:5432:5432
    volumes:
      - langfuse_postgres_data:/var/lib/postgresql/data

volumes:
  langfuse_postgres_data:
    driver: local
  langfuse_clickhouse_data:
    driver: local
  langfuse_clickhouse_logs:
    driver: local
  langfuse_minio_data:
    driver: local
```
Yes, the configuration file is quite extensive, but it sets up all the necessary components for Langfuse to run locally. Note that the `${VAR:-default}` entries use Compose variable interpolation, so any of these defaults can be overridden from a `.env` file placed next to the compose file. Let's break down what each service does:

- `langfuse-worker`: handles the background processing of traces
- `langfuse-web`: the main web interface for viewing traces
- `clickhouse`: database for storing trace data
- `minio`: object storage for media and events
- `redis`: queue for processing traces
- `postgres`: main database for Langfuse
## Configuring Environment Variables
Add the following environment variables to your `.env` file:

```
OPENAI_API_KEY=...
LANGFUSE_PUBLIC_KEY=sk-pb-0123456789abcdef0123456789abcdef
LANGFUSE_SECRET_KEY=sk-sc-0123456789abcdef0123456789abcdef
LANGFUSE_HOST=http://localhost:3000
```
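Since the agent runs under Deno, it's worth failing fast when any of these variables are missing. Here's a minimal sketch of a startup guard, assuming you launch Deno with `--env-file=.env` (or export the variables another way); the `src/env-check.ts` path is just an illustration:

```ts
// src/env-check.ts -- hypothetical startup guard; import it before creating the Mastra instance
const required = [
  'OPENAI_API_KEY',
  'LANGFUSE_PUBLIC_KEY',
  'LANGFUSE_SECRET_KEY',
  'LANGFUSE_HOST',
];

// Collect all missing variables so the error reports them in one pass.
const missing = required.filter((name) => !Deno.env.get(name));
if (missing.length > 0) {
  throw new Error(`Missing environment variables: ${missing.join(', ')}`);
}
```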
## Integrating Langfuse with Mastra
Since Mastra is tightly integrated with the AI SDK, we can use the official `langfuse-vercel` client. Let's modify our `src/mastra/index.ts` file to include Langfuse tracing:
```ts
import { Mastra } from '@mastra/core';
import { npmAgent } from './agents/npm/agent.ts';
import { LangfuseExporter } from 'langfuse-vercel';

export const mastra = new Mastra({
  agents: { npmAgent },
  telemetry: {
    // This must be set to "ai" so that the LangfuseExporter treats the spans as AI SDK traces.
    serviceName: 'ai',
    enabled: true,
    export: {
      type: 'custom',
      exporter: new LangfuseExporter({
        publicKey: process.env.LANGFUSE_PUBLIC_KEY,
        secretKey: process.env.LANGFUSE_SECRET_KEY,
        baseUrl: process.env.LANGFUSE_HOST,
      }),
    },
  },
});
```
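Before running the agent, you can sanity-check that the local Langfuse instance actually accepts events. Here's a sketch using the plain `langfuse` JS SDK (an assumption: that you add it as an extra dependency, e.g. via the `npm:langfuse` specifier in Deno); it sends a single trace and flushes it:

```ts
// scripts/check-langfuse.ts -- hypothetical one-off connectivity check
import { Langfuse } from 'npm:langfuse';

const langfuse = new Langfuse({
  publicKey: Deno.env.get('LANGFUSE_PUBLIC_KEY'),
  secretKey: Deno.env.get('LANGFUSE_SECRET_KEY'),
  baseUrl: Deno.env.get('LANGFUSE_HOST'),
});

// Create a trivial trace and flush it so it appears in the UI immediately.
langfuse.trace({ name: 'connectivity-check' });
await langfuse.flushAsync();
console.log('Trace sent - check the Traces page at http://localhost:3000');
await langfuse.shutdownAsync();
```

If the trace shows up under the `deno_mastra` project, the exporter configuration above should work with the same keys.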
## Starting the Services
First, start the Langfuse services:
```bash
docker-compose up -d
```
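Compose reports the containers as started before Langfuse has finished its database migrations, so the UI can take a minute to respond. If you want to script the wait, Langfuse exposes a public health endpoint (`/api/public/health`); here's a small sketch assuming the port mapping from the compose file above:

```ts
// scripts/wait-for-langfuse.ts -- hypothetical readiness poller
const url = 'http://localhost:3000/api/public/health';

for (let attempt = 1; attempt <= 30; attempt++) {
  try {
    const res = await fetch(url);
    await res.body?.cancel(); // we only care about the status code
    if (res.ok) {
      console.log('Langfuse is ready');
      Deno.exit(0);
    }
  } catch {
    // Containers are still starting; ignore connection errors and retry.
  }
  await new Promise((resolve) => setTimeout(resolve, 2000));
}

console.error('Langfuse did not become healthy in time');
Deno.exit(1);
```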
Then start your Mastra server as before:
```bash
deno task dev
```
## Testing the Integration
After starting both servers, let's test our setup. First, open the Mastra playground at http://localhost:4111 and interact with our npm agent:

Let's ask the agent about the `@angular/core` package to see how it handles the request and how this interaction is traced.
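You can also exercise the agent without the playground: the Mastra dev server exposes a REST API alongside the UI. Here's a sketch, assuming the default port 4111 and that the route follows Mastra's `/api/agents/:agentId/generate` pattern with the agent registered as `npmAgent` (check your dev server's route listing if yours differs):

```ts
// scripts/ask-agent.ts -- hypothetical helper; every traced request also lands in Langfuse
const response = await fetch('http://localhost:4111/api/agents/npmAgent/generate', {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({
    messages: [{ role: 'user', content: 'What is the latest version of @angular/core?' }],
  }),
});

const result = await response.json();
console.log(result.text);
```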
## Analyzing Traces in Langfuse
Now, let's check how this interaction was recorded in Langfuse. Visit http://localhost:3000 and log in using the credentials from our docker-compose file:
- Email: [email protected]
- Password: 12345678
Navigate to the Traces section, and you'll see a list of recent interactions. Let's analyze one of the traces:

The trace view provides a detailed breakdown of our agent's interaction:
- At the top, we can see the overall metrics:
  - Total latency: 8.97s
  - Cost: $0.000386
  - Token usage: 1,554 input + 254 output tokens
- The trace is broken down into several steps:
  - `ai.streamText`: the main interaction (8.97s, $0.000386)
  - `ai.streamText.doStream`: the streaming process (7.62s, $0.00317)
  - `ai.toolCall getPackageVersion`: tool execution (0.95s)
  - `ai.generateText`: text generation (1.38s, $0.00017)
- On the right panel, we can see the actual conversation:
  - System prompt defining the agent's behavior
  - User query asking about `@angular/core`
  - Assistant's response with package details, including:
    - Version: 19.2.7
    - Description: Angular - the core framework
    - Author: Angular
    - License: MIT
    - Homepage link to the Angular GitHub repository
This detailed view helps us understand:
- How long each step takes
- Where costs are incurred
- The flow of information through our agent
- Any potential bottlenecks or areas for optimization
For example, we can see that most of the time is spent in the streaming process, while the actual tool call to fetch package information is relatively quick. This kind of insight is invaluable for optimizing our agent's performance.
## Next Steps
With Langfuse integrated, you can now:
- Monitor your agent's performance in real-time
- Analyze patterns in user interactions
- Track token usage and costs
- Debug issues with detailed traces
- Optimize your agent's behavior based on real usage data
The combination of Mastra and Langfuse gives you a powerful platform for building and monitoring AI agents. You can use this setup to iterate on your agent's behavior and continuously improve its performance.
You can find the complete code for this tutorial in the GitHub repository. The repository includes all the necessary configuration files and code examples we've discussed in this article.
P.S. If you want to skip the manual setup of Langfuse and deploy it quickly to the cloud, you can use my Railway template. It provides a pre-configured Langfuse instance that you can deploy with just a few clicks.