AI Integration

Multi-provider AI integration built on Vercel AI SDK, supporting 12 providers and 100+ models

The AI integration layer provides a unified interface for text generation (LLM) across multiple AI providers. It uses Vercel AI SDK and supports dynamic API key configuration via the admin panel.

Architecture

Module Structure

| File | Purpose |
| --- | --- |
| `src/integrations/ai/index.ts` | Entry point, `getAIProvider()`, config extraction |
| `src/integrations/ai/providers.ts` | Provider instances and model mapping |
| `src/integrations/ai/models.ts` | Model metadata (id, label, capabilities, creditCost) |
| `src/integrations/ai/types.ts` | Type definitions |
| `src/integrations/ai/utils.ts` | Helper functions |

Quick Start

Configure API Keys

Go to Admin Panel → AI Config and fill in the API keys for the providers you need. Keys can also be set via environment variables.

Obtain Provider in API Route

import { getAIProvider } from "@/integrations/ai"
import { streamText } from "ai"

export async function POST(req: Request) {
  const provider = await getAIProvider()
  const model = provider.languageModel("openai/gpt-4o")

  const result = await streamText({
    model,
    messages: [{ role: "user", content: "Hello" }],
  })

  return result.toDataStreamResponse()
}

Use Model Metadata

import {
  getModelConfig,
  canUseModel,
  hasVisionSupport,
  getCreditCost,
  getModelsByProvider,
} from "@/integrations/ai"

const model = getModelConfig("openai/gpt-4o")
const { canUse } = canUseModel("openai/gpt-4o", isProUser)
const hasVision = hasVisionSupport("openai/gpt-4o")
const cost = getCreditCost("openai/gpt-4o")
const openaiModels = getModelsByProvider("openai")

Model Capabilities

Each model has metadata describing its capabilities:

| Capability | Description |
| --- | --- |
| `vision` | Can process images (multimodal) |
| `reasoning` | Extended thinking / chain-of-thought |
| `pdf` | Can process PDF documents |
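
The capability flags are meant to be checked before building a request, e.g. only attaching image parts when the selected model supports vision. The sketch below illustrates that gate; `hasVisionSupport` here is a minimal stand-in for the helper documented below, with an assumed two-model capability map.

```typescript
// Minimal stand-in for the module's capability lookup (illustrative data).
type Capabilities = { vision: boolean; reasoning: boolean; pdf: boolean }

const capabilities: Record<string, Capabilities> = {
  "openai/gpt-4o": { vision: true, reasoning: false, pdf: true },
  "deepseek/deepseek-chat": { vision: false, reasoning: false, pdf: false },
}

function hasVisionSupport(modelId: string): boolean {
  return capabilities[modelId]?.vision ?? false
}

// Only attach image parts when the selected model can process them.
function buildUserMessage(modelId: string, text: string, imageUrl?: string) {
  if (imageUrl && hasVisionSupport(modelId)) {
    return {
      role: "user" as const,
      content: [
        { type: "text" as const, text },
        { type: "image" as const, image: imageUrl },
      ],
    }
  }
  // Non-vision models get plain text only.
  return { role: "user" as const, content: text }
}
```
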

Utils Reference

| Function | Description |
| --- | --- |
| `getModelConfig(modelId)` | Get model metadata by ID |
| `canUseModel(modelId, isProUser)` | Check if user can use model (tier check) |
| `hasVisionSupport(modelId)` | Check vision capability |
| `hasReasoningSupport(modelId)` | Check reasoning capability |
| `hasPdfSupport(modelId)` | Check PDF capability |
| `getMaxOutputTokens(modelId)` | Get max output tokens |
| `getModelParameters(modelId)` | Get default parameters (temperature, topP, etc.) |
| `getCreditCost(modelId)` | Get credit cost per call |
| `getModelsByProvider(provider)` | List models for a provider |
| `getModelsByTier(tier)` | List models by tier (free/pro) |
| `getAvailableProviders()` | List all providers with models |
| `getAllModels()` | List all models |

Model Tier

  • free: Available to all users
  • pro: Requires Pro subscription (canUseModel enforces this)
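
The tier check described above can be sketched as follows; this is a simplified stand-in for the real `canUseModel` in `utils.ts`, with an assumed two-model tier map and a hypothetical `reason` field.

```typescript
// Simplified stand-in for the tier check; the real helper reads tiers
// from the model metadata in models.ts.
type Tier = "free" | "pro"

const modelTiers: Record<string, Tier> = {
  "openai/gpt-4o": "pro",          // illustrative tier assignment
  "deepseek/deepseek-chat": "free", // illustrative tier assignment
}

function canUseModel(
  modelId: string,
  isProUser: boolean
): { canUse: boolean; reason?: string } {
  const tier = modelTiers[modelId]
  if (!tier) return { canUse: false, reason: "unknown model" }
  if (tier === "pro" && !isProUser) {
    return { canUse: false, reason: "requires Pro subscription" }
  }
  return { canUse: true }
}
```
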

Supported Providers

| Provider | Models | Config Key |
| --- | --- | --- |
| OpenAI | 22 | `ai_openai_api_key`, `ai_openai_base_url` |
| Anthropic | 5 | `ai_anthropic_api_key` |
| Google | 10 | `ai_google_api_key` |
| xAI | 9 | `ai_xai_api_key` |
| Groq | 6 | `ai_groq_api_key` |
| Mistral | 11 | `ai_mistral_api_key` |
| Cohere | 3 | `ai_cohere_api_key` |
| DeepSeek | 2 | `ai_deepseek_api_key`, `ai_deepseek_base_url` |
| HuggingFace | 11 | `ai_huggingface_api_key` |
| Novita | 21 | `ai_novita_api_key` |
| SiliconFlow | 2 | `ai_siliconflow_api_key` |
| Baseten | 6 | `ai_baseten_api_key` |

Model List (Summary)

| Provider | Count | Examples |
| --- | --- | --- |
| OpenAI | 22 | gpt-4o, gpt-5, o3, o4-mini, gpt-5.1-codex, gpt-oss-20b |
| Anthropic | 5 | claude-haiku-4-5, claude-sonnet-4-5, claude-opus-4-5 |
| Google | 10 | gemini-2.0-flash, gemini-2.5-pro, gemini-3-flash |
| xAI | 9 | grok-3-mini, grok-4, grok-4-fast-thinking, grok-code |
| Groq | 6 | llama-3.3-70b, qwen3-32b, kimi-k2 |
| Mistral | 11 | ministral-3b, mistral-large, codestral, devstral |
| Cohere | 3 | command-a, command-a-thinking, command-r-plus |
| DeepSeek | 2 | deepseek-chat, deepseek-reasoner |
| HuggingFace | 11 | qwen3-4b, qwen3-30b, qwen3-235b |
| Novita | 21 | deepseek-v3.2, qwen3-coder-30b, glm-4.5, minimax-m2 |
| SiliconFlow | 2 | deepseek-v3, qwen-2.5-72b |
| Baseten | 6 | deepseek-v3, qwen3-coder-480b, glm-4.7, kimi-k2 |

Reasoning Models

Models with capabilities.reasoning: true use extractReasoningMiddleware. Reasoning content is wrapped in <think> tags and stripped from the final output. Use getModelParameters for recommended settings when available.
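
Conceptually, the middleware separates the `<think>`-wrapped reasoning from the visible answer. The standalone parser below is a simplified illustration of that split, not the actual `extractReasoningMiddleware` implementation from the AI SDK.

```typescript
// Simplified illustration of the <think>-tag split performed by
// extractReasoningMiddleware: reasoning is captured separately and
// stripped from the final output text.
function splitReasoning(raw: string): { reasoning: string; text: string } {
  const match = raw.match(/<think>([\s\S]*?)<\/think>/)
  if (!match) return { reasoning: "", text: raw }
  return {
    reasoning: match[1].trim(),
    text: raw.replace(match[0], "").trim(),
  }
}
```

In the real integration this happens inside the SDK's middleware pipeline (via `wrapLanguageModel`), so callers simply receive clean output text.
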

Model ID Format

Model IDs follow the pattern provider/model-name, e.g. openai/gpt-4o, anthropic/claude-sonnet-4-5, deepseek/deepseek-v3.2. Use these IDs when calling provider.languageModel(id).
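
For code that needs the two halves separately, the ID format can be split on the first `/`. The helper below is hypothetical (not part of the module's public API) and just demonstrates the documented pattern.

```typescript
// Hypothetical helper: split a "provider/model-name" ID into its parts.
function parseModelId(modelId: string): { provider: string; model: string } {
  const slash = modelId.indexOf("/")
  if (slash === -1) throw new Error(`Invalid model ID: ${modelId}`)
  return {
    provider: modelId.slice(0, slash),
    // Everything after the first "/" is the model name (which may itself
    // contain dots or dashes, e.g. "deepseek-v3.2").
    model: modelId.slice(slash + 1),
  }
}
```
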

Extend

Add a New Provider

1. Types (types.ts)

Add the provider to AIProviderKey:

export type AIProviderKey =
  | "openai"
  | "newprovider"  // add
  | ...

AIProviderConfigs already uses Partial<Record<AIProviderKey, AIProviderConfig>>, so no change needed there.

2. Provider Instance (providers.ts)

In createProviderInstances:

import { createOpenAICompatible } from "@ai-sdk/openai-compatible"

const newprovider = configs.newprovider?.apiKey
  ? createOpenAICompatible({
      name: "newprovider",
      baseURL: configs.newprovider.baseUrl || "https://api.newprovider.com/v1",
      apiKey: configs.newprovider.apiKey,
    })
  : null

In languageModels:

...(p.newprovider && {
  "newprovider/model-id": p.newprovider.chatModel("actual-model-id"),
}),

3. Config (dynamic-config.ts)

Add config keys and sub-group in aiConfigGroups:

ai_newprovider_api_key: {
  type: "string",
  default: "",
  env: "NEWPROVIDER_API_KEY",
  labelKey: "aiNewproviderApiKey",
  descriptionKey: "aiNewproviderApiKey",
}

4. i18n + Extraction

Add i18n entries in config.content.ts and add extractProviderConfigs mapping in index.ts.
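
The extraction step maps raw config keys to the provider config shape consumed by `createProviderInstances`. The sketch below shows the general idea under assumed types; the actual signature and shape of `extractProviderConfigs` in `index.ts` may differ.

```typescript
// Hypothetical sketch of the extraction mapping added in index.ts:
// raw dynamic-config keys -> per-provider config objects.
type AIProviderConfig = { apiKey: string; baseUrl?: string }

function extractProviderConfigs(
  config: Record<string, string | undefined>
): { newprovider?: AIProviderConfig } {
  return {
    // Only produce a config when the key is actually set, so the
    // provider instance stays null otherwise.
    newprovider: config.ai_newprovider_api_key
      ? { apiKey: config.ai_newprovider_api_key }
      : undefined,
  }
}
```
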

Add a New Model (Existing Provider)

1. Register in providers.ts

"openai/new-model": p.openai("openai-actual-model-name")

2. Add metadata in models.ts

{
  id: "openai/new-model",
  label: "New Model Name",
  provider: "openai",
  providerModelId: "openai-actual-model-name",
  capabilities: { vision: false, reasoning: false, pdf: false },
  tier: "free",
  maxOutputTokens: 16384,
  creditCost: 1,
}

Checklist

| Step | New Provider | New Model |
| --- | --- | --- |
| `types.ts` | Add to `AIProviderKey` | — |
| `providers.ts` | Instance + `languageModels` | Add to `languageModels` |
| `models.ts` | — | Add metadata |
| `dynamic-config.ts` | Config keys + sub-group | — |
| `config.content.ts` | i18n | — |
| `index.ts` | `extractProviderConfigs` | — |
