## Documentation Index

Fetch the complete documentation index at: https://docs.moda.dev/llms.txt

Use this file to discover all available pages before exploring further.
## Overview

Moda integrates with the Vercel AI SDK via its built-in telemetry support. The AI SDK's `experimental_telemetry` option emits telemetry data that Moda parses automatically, giving you conversation tracking, token usage, and analytics across all AI SDK providers.
## Installation

```bash
npm install moda-ai ai @ai-sdk/openai
```

Install additional provider packages as needed:

```bash
# Anthropic
npm install @ai-sdk/anthropic

# Google
npm install @ai-sdk/google

# Mistral
npm install @ai-sdk/mistral
```
## Quick Start

```typescript
import { Moda } from 'moda-ai';
import { generateText } from 'ai';
import { openai } from '@ai-sdk/openai';

await Moda.init('YOUR_MODA_API_KEY');
Moda.conversationId = 'session_123';

const result = await generateText({
  model: openai('gpt-4o'),
  prompt: 'Write a haiku about coding',
  experimental_telemetry: Moda.getVercelAITelemetry(),
});

console.log(result.text);
await Moda.flush();
```
## Streaming

`streamText` works the same way. Telemetry is captured after the stream completes:

```typescript
import { Moda } from 'moda-ai';
import { streamText } from 'ai';
import { openai } from '@ai-sdk/openai';

const result = streamText({
  model: openai('gpt-4o'),
  prompt: 'Explain TypeScript in 3 sentences',
  experimental_telemetry: Moda.getVercelAITelemetry(),
});

for await (const chunk of result.textStream) {
  process.stdout.write(chunk);
}

await Moda.flush();
```
## Structured Output

`generateObject` responses are captured as JSON in the assistant message:

```typescript
import { Moda } from 'moda-ai';
import { generateObject } from 'ai';
import { openai } from '@ai-sdk/openai';
import { z } from 'zod';

const result = await generateObject({
  model: openai('gpt-4o'),
  schema: z.object({
    name: z.string(),
    ingredients: z.array(z.string()),
    servings: z.number(),
  }),
  prompt: 'Generate a cookie recipe',
  experimental_telemetry: Moda.getVercelAITelemetry(),
});

console.log(result.object);
await Moda.flush();
```
## Tool Calls

Tool calls made by the model are captured as structured content blocks:

```typescript
import { Moda } from 'moda-ai';
import { generateText, tool } from 'ai';
import { openai } from '@ai-sdk/openai';
import { z } from 'zod';

const result = await generateText({
  model: openai('gpt-4o'),
  prompt: 'What is the weather in Paris?',
  tools: {
    getWeather: tool({
      description: 'Get the weather for a location',
      parameters: z.object({ city: z.string() }),
      execute: async ({ city }) => `Sunny, 72F in ${city}`,
    }),
  },
  experimental_telemetry: Moda.getVercelAITelemetry(),
});

await Moda.flush();
```
## Conversation Threading

When you set `Moda.conversationId` or `Moda.userId` before calling an AI SDK function, those values are automatically included in the telemetry metadata:

```typescript
import { Moda } from 'moda-ai';
import { generateText } from 'ai';
import { openai } from '@ai-sdk/openai';

await Moda.init('YOUR_MODA_API_KEY');

// Set conversation context
Moda.conversationId = 'support_ticket_456';
Moda.userId = 'user_789';

// First turn
await generateText({
  model: openai('gpt-4o'),
  prompt: 'I need help with my order',
  experimental_telemetry: Moda.getVercelAITelemetry(),
});

// Second turn - same conversation
await generateText({
  model: openai('gpt-4o'),
  prompt: 'Order number is #12345',
  experimental_telemetry: Moda.getVercelAITelemetry(),
});

// Both calls are grouped under the same conversation
await Moda.flush();

Moda.conversationId = null;
Moda.userId = null;
```
The `getVercelAITelemetry()` helper automatically includes `moda.conversation_id` and `moda.user_id` in the telemetry metadata when they are set.
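The merged result is an object in the shape of the AI SDK's `TelemetrySettings`. The sketch below shows roughly how such a helper could fold the conversation context into the metadata; it is an illustration of the merging behavior described above, not Moda's actual source, and `buildTelemetry` is a hypothetical name:

```typescript
// Minimal sketch (assumption, not Moda's implementation) of merging
// conversation context into an AI SDK TelemetrySettings-shaped object.
type TelemetrySettings = {
  isEnabled?: boolean;
  recordInputs?: boolean;
  recordOutputs?: boolean;
  functionId?: string;
  metadata?: Record<string, string>;
};

function buildTelemetry(
  conversationId: string | null,
  userId: string | null,
  overrides: Partial<TelemetrySettings> = {},
): TelemetrySettings {
  // Start from any caller-supplied metadata, then layer the Moda IDs on top.
  const metadata: Record<string, string> = { ...(overrides.metadata ?? {}) };
  if (conversationId) metadata['moda.conversation_id'] = conversationId;
  if (userId) metadata['moda.user_id'] = userId;
  return { isEnabled: true, ...overrides, metadata };
}
```

Because the IDs are read when the helper is called, setting `Moda.conversationId` after building the telemetry object would have no effect on that object.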
## Options Reference

```typescript
Moda.getVercelAITelemetry({
  recordInputs: true,       // Record prompt messages (default: true)
  recordOutputs: true,      // Record response content (default: true)
  functionId: 'my-chatbot', // Group telemetry by function name
  metadata: {               // Additional custom metadata
    feature: 'support-chat',
    version: '2.0',
  },
});
```

| Option | Type | Default | Description |
|---|---|---|---|
| `recordInputs` | `boolean` | `true` | Whether to record prompt messages in telemetry |
| `recordOutputs` | `boolean` | `true` | Whether to record response content in telemetry |
| `functionId` | `string` | - | Identifier for grouping telemetry by function |
| `metadata` | `Record<string, string>` | - | Custom metadata attached to spans |
## Supported Providers

Any provider supported by the Vercel AI SDK works with Moda. The model and provider are automatically captured.

| Provider | Package | Example |
|---|---|---|
| OpenAI | `@ai-sdk/openai` | `openai('gpt-4o')` |
| Anthropic | `@ai-sdk/anthropic` | `anthropic('claude-3-5-sonnet-20241022')` |
| Google | `@ai-sdk/google` | `google('gemini-1.5-pro')` |
| Mistral | `@ai-sdk/mistral` | `mistral('mistral-large-latest')` |
| Amazon Bedrock | `@ai-sdk/amazon-bedrock` | `bedrock('anthropic.claude-3-sonnet')` |
| Azure OpenAI | `@ai-sdk/azure` | `azure('gpt-4o')` |
## Troubleshooting

### Data not appearing in Moda?

- Ensure `Moda.init()` is called with `await` before your first AI SDK call
- Call `await Moda.flush()` before your program exits
- Verify your API key is correct
- Enable debug mode: `Moda.init('key', { debug: true })`

### Conversation IDs not grouping?

- Make sure `Moda.conversationId` is set before calling `Moda.getVercelAITelemetry()`
- The telemetry config is created at call time, so set the conversation ID first

### Using with other Moda instrumentations?

If you also use Moda's native OpenAI/Anthropic instrumentation, both will capture data. The AI SDK telemetry captures the high-level AI SDK call, while native instrumentation captures the underlying provider API call. This is safe but may result in duplicate entries. To avoid this, you can disable native instrumentation for providers used through the AI SDK.
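One way to reason about the split is to compute which native instrumentations are still worth keeping once some providers are routed through the AI SDK. The sketch below shows only that filtering logic; the function name and the idea of passing the result to an init option are assumptions (check Moda's `init` reference for the real setting name):

```typescript
// Hypothetical sketch: keep native instrumentation only for providers NOT
// already captured via AI SDK telemetry, so each call is recorded once.
// The list of instrumentation names and the consuming option are assumptions,
// not Moda's documented API.
const ALL_NATIVE = ['openai', 'anthropic', 'google', 'mistral'];

function nativeInstrumentationsToKeep(providersViaAiSdk: string[]): string[] {
  const viaSdk = new Set(providersViaAiSdk);
  return ALL_NATIVE.filter((provider) => !viaSdk.has(provider));
}
```

For example, if all your OpenAI traffic goes through the AI SDK, you would keep native instrumentation only for the remaining providers you still call directly.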