AI Integration
Multi-provider AI integration built on the Vercel AI SDK, supporting 12 providers and 100+ models.
The AI integration layer provides a unified interface for LLM text generation across multiple AI providers. It is built on the Vercel AI SDK and supports dynamic API key configuration via the admin panel.
Architecture
Module Structure
| File | Purpose |
|---|---|
| src/integrations/ai/index.ts | Entry point, getAIProvider(), config extraction |
| src/integrations/ai/providers.ts | Provider instances and model mapping |
| src/integrations/ai/models.ts | Model metadata (id, label, capabilities, creditCost) |
| src/integrations/ai/types.ts | Type definitions |
| src/integrations/ai/utils.ts | Helper functions |
Quick Start
Configure API Keys
Go to Admin Panel → AI Config and fill in the API keys for the providers you need. Keys can also be set via environment variables.
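As a sketch of how the two sources can be combined, the helper below prefers an admin-panel value and falls back to an environment variable. Both the helper and the env-var name `OPENAI_API_KEY` are illustrative assumptions; the precedence order and actual env names are defined in dynamic-config.ts, not here:

```typescript
// Illustrative sketch only: admin-panel config first, env var as fallback.
type ConfigStore = Record<string, string | undefined>

function resolveApiKey(
  store: ConfigStore, // values saved via Admin Panel → AI Config
  env: ConfigStore,   // process.env (or a test double)
  configKey: string,  // e.g. "ai_openai_api_key"
  envKey: string,     // e.g. "OPENAI_API_KEY" (assumed name)
): string | undefined {
  return store[configKey] || env[envKey] || undefined
}

// Usage: the admin-panel value wins over the environment variable.
const key = resolveApiKey(
  { ai_openai_api_key: "sk-admin" },
  { OPENAI_API_KEY: "sk-env" },
  "ai_openai_api_key",
  "OPENAI_API_KEY",
)
```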
Obtain Provider in API Route
```typescript
import { getAIProvider } from "@/integrations/ai"
import { streamText } from "ai"

export async function POST(req: Request) {
  const provider = await getAIProvider()
  const model = provider.languageModel("openai/gpt-4o")
  const result = await streamText({
    model,
    messages: [{ role: "user", content: "Hello" }],
  })
  return result.toDataStreamResponse()
}
```

Use Model Metadata
```typescript
import {
  getModelConfig,
  canUseModel,
  hasVisionSupport,
  getCreditCost,
  getModelsByProvider,
} from "@/integrations/ai"

const model = getModelConfig("openai/gpt-4o")
const { canUse } = canUseModel("openai/gpt-4o", isProUser)
const hasVision = hasVisionSupport("openai/gpt-4o")
const cost = getCreditCost("openai/gpt-4o")
const openaiModels = getModelsByProvider("openai")
```

Model Capabilities
Each model has metadata describing its capabilities:
| Capability | Description |
|---|---|
| vision | Can process images (multimodal) |
| reasoning | Extended thinking / chain-of-thought |
| pdf | Can process PDF documents |
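These flags can be used to gate features (e.g. image upload) before a request is sent. A minimal self-contained sketch; the registry below is a stand-in for the real metadata in models.ts, and its capability values are illustrative:

```typescript
interface ModelCapabilities { vision: boolean; reasoning: boolean; pdf: boolean }
interface ModelConfig { id: string; capabilities: ModelCapabilities }

// Stand-in registry for illustration; the real one lives in models.ts.
const registry: Record<string, ModelConfig> = {
  "openai/gpt-4o": {
    id: "openai/gpt-4o",
    capabilities: { vision: true, reasoning: false, pdf: false },
  },
  "deepseek/deepseek-reasoner": {
    id: "deepseek/deepseek-reasoner",
    capabilities: { vision: false, reasoning: true, pdf: false },
  },
}

// Unknown models report no capabilities rather than throwing.
function hasCapability(modelId: string, cap: keyof ModelCapabilities): boolean {
  return registry[modelId]?.capabilities[cap] ?? false
}
```

This is the shape behind hasVisionSupport and friends: a lookup on the model's metadata with a safe default for unknown IDs.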
Utils Reference
| Function | Description |
|---|---|
| getModelConfig(modelId) | Get model metadata by ID |
| canUseModel(modelId, isProUser) | Check if user can use model (tier check) |
| hasVisionSupport(modelId) | Check vision capability |
| hasReasoningSupport(modelId) | Check reasoning capability |
| hasPdfSupport(modelId) | Check PDF capability |
| getMaxOutputTokens(modelId) | Get max output tokens |
| getModelParameters(modelId) | Get default parameters (temperature, topP, etc.) |
| getCreditCost(modelId) | Get credit cost per call |
| getModelsByProvider(provider) | List models for a provider |
| getModelsByTier(tier) | List models by tier (free/pro) |
| getAvailableProviders() | List all providers with models |
| getAllModels() | List all models |
Model Tier
- free: Available to all users
- pro: Requires Pro subscription (canUseModel enforces this)
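The tier check reduces to a small pure function. A sketch of the logic; the model-to-tier assignments below are hypothetical stand-ins for the metadata in models.ts, and the return shape mirrors the { canUse } result shown earlier:

```typescript
type ModelTier = "free" | "pro"

// Hypothetical tier assignments for illustration; real values live in models.ts.
const tierOf: Record<string, ModelTier> = {
  "groq/llama-3.3-70b": "free",
  "anthropic/claude-opus-4-5": "pro",
}

function canUseModelSketch(
  modelId: string,
  isProUser: boolean,
): { canUse: boolean; reason?: string } {
  const tier = tierOf[modelId]
  if (!tier) return { canUse: false, reason: "unknown model" }
  if (tier === "pro" && !isProUser) {
    return { canUse: false, reason: "requires Pro subscription" }
  }
  return { canUse: true }
}
```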
Supported Providers
| Provider | Models | Config Key |
|---|---|---|
| OpenAI | 22 | ai_openai_api_key, ai_openai_base_url |
| Anthropic | 5 | ai_anthropic_api_key |
| Google | 10 | ai_google_api_key |
| xAI | 9 | ai_xai_api_key |
| Groq | 6 | ai_groq_api_key |
| Mistral | 11 | ai_mistral_api_key |
| Cohere | 3 | ai_cohere_api_key |
| DeepSeek | 2 | ai_deepseek_api_key, ai_deepseek_base_url |
| HuggingFace | 11 | ai_huggingface_api_key |
| Novita | 21 | ai_novita_api_key |
| SiliconFlow | 2 | ai_siliconflow_api_key |
| Baseten | 6 | ai_baseten_api_key |
Model List (Summary)
| Provider | Count | Examples |
|---|---|---|
| OpenAI | 22 | gpt-4o, gpt-5, o3, o4-mini, gpt-5.1-codex, gpt-oss-20b |
| Anthropic | 5 | claude-haiku-4-5, claude-sonnet-4-5, claude-opus-4-5 |
| Google | 10 | gemini-2.0-flash, gemini-2.5-pro, gemini-3-flash |
| xAI | 9 | grok-3-mini, grok-4, grok-4-fast-thinking, grok-code |
| Groq | 6 | llama-3.3-70b, qwen3-32b, kimi-k2 |
| Mistral | 11 | ministral-3b, mistral-large, codestral, devstral |
| Cohere | 3 | command-a, command-a-thinking, command-r-plus |
| DeepSeek | 2 | deepseek-chat, deepseek-reasoner |
| HuggingFace | 11 | qwen3-4b, qwen3-30b, qwen3-235b |
| Novita | 21 | deepseek-v3.2, qwen3-coder-30b, glm-4.5, minimax-m2 |
| SiliconFlow | 2 | deepseek-v3, qwen-2.5-72b |
| Baseten | 6 | deepseek-v3, qwen3-coder-480b, glm-4.7, kimi-k2 |
Reasoning Models
Models with capabilities.reasoning: true use extractReasoningMiddleware. Reasoning content is wrapped in <think> tags and stripped from the final output. Use getModelParameters for recommended settings when available.
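Conceptually, the middleware splits the raw model output on the `<think>` tags, separating reasoning from the visible answer. A minimal self-contained illustration of that split (the real implementation is the SDK's extractReasoningMiddleware, not this function):

```typescript
// Illustration only: split <think>…</think> reasoning out of model output.
function splitReasoning(text: string): { reasoning: string; output: string } {
  const parts: string[] = []
  const output = text.replace(
    /<think>([\s\S]*?)<\/think>/g,
    (_: string, inner: string) => {
      parts.push(inner.trim())
      return ""
    },
  )
  return { reasoning: parts.join("\n"), output: output.trim() }
}
```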
Model ID Format
Model IDs follow the pattern provider/model-name, e.g. openai/gpt-4o, anthropic/claude-sonnet-4-5, deepseek/deepseek-v3.2. Use these IDs when calling provider.languageModel(id).
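If you need the two parts separately (for logging, routing, or grouping), splitting on the first slash is enough. A small hypothetical helper, not part of the module's API:

```typescript
// Split "provider/model-name" into its parts.
// Splits on the FIRST "/" so any further slashes stay in the model name.
function parseModelId(modelId: string): { provider: string; model: string } {
  const i = modelId.indexOf("/")
  if (i === -1) throw new Error(`Invalid model ID: ${modelId}`)
  return { provider: modelId.slice(0, i), model: modelId.slice(i + 1) }
}
```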
Extend
Add a New Provider
1. Types (types.ts)
Add the provider to AIProviderKey:
```typescript
export type AIProviderKey =
  | "openai"
  | "newprovider" // add
  | ...
```

AIProviderConfigs already uses Partial<Record<AIProviderKey, AIProviderConfig>>, so no change is needed there.
2. Provider Instance (providers.ts)
In createProviderInstances:
```typescript
const newprovider = configs.newprovider?.apiKey
  ? createOpenAICompatible({
      name: "newprovider",
      baseURL: configs.newprovider.baseUrl || "https://api.newprovider.com/v1",
      apiKey: configs.newprovider.apiKey,
    })
  : null
```

In languageModels:

```typescript
...(p.newprovider && {
  "newprovider/model-id": p.newprovider.chatModel("actual-model-id"),
}),
```

3. Config (dynamic-config.ts)
Add config keys and sub-group in aiConfigGroups:
```typescript
ai_newprovider_api_key: {
  type: "string",
  default: "",
  env: "NEWPROVIDER_API_KEY",
  labelKey: "aiNewproviderApiKey",
  descriptionKey: "aiNewproviderApiKey",
},
```

4. i18n + Extraction
Add i18n entries in config.content.ts and add extractProviderConfigs mapping in index.ts.
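The extraction step maps flat dynamic-config values into the per-provider config objects that createProviderInstances consumes. A sketch of that mapping for the running "newprovider" example; the shape follows the AIProviderConfig type above, and the flat key `ai_newprovider_base_url` is an assumed name for illustration:

```typescript
interface AIProviderConfig { apiKey: string; baseUrl?: string }

// Illustrative sketch of the mapping extractProviderConfigs performs:
// flat dynamic-config values in, a per-provider config object out.
// Providers without an API key are omitted so no instance is created.
function extractNewproviderConfig(
  flat: Record<string, string | undefined>,
): AIProviderConfig | undefined {
  const apiKey = flat["ai_newprovider_api_key"]
  if (!apiKey) return undefined
  return { apiKey, baseUrl: flat["ai_newprovider_base_url"] }
}
```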
Add a New Model (Existing Provider)
1. Register in providers.ts
"openai/new-model": p.openai("openai-actual-model-name")2. Add metadata in models.ts
```typescript
{
  id: "openai/new-model",
  label: "New Model Name",
  provider: "openai",
  providerModelId: "openai-actual-model-name",
  capabilities: { vision: false, reasoning: false, pdf: false },
  tier: "free",
  maxOutputTokens: 16384,
  creditCost: 1,
}
```

Checklist
| Step | New Provider | New Model |
|---|---|---|
| types.ts | Add to AIProviderKey | — |
| providers.ts | Instance + languageModels | Add to languageModels |
| models.ts | — | Add metadata |
| dynamic-config.ts | Config keys + sub-group | — |
| config.content.ts | i18n | — |
| index.ts | extractProviderConfigs | — |