Adding AI Providers & Models
This guide shows how to extend your Enterprise AI Chatbot Platform with additional AI providers and models. The platform is built on the Vercel AI SDK, which makes it straightforward to add virtually any AI provider.
Provider Support: The platform supports any AI provider that can be integrated with the Vercel AI SDK. You can add custom providers as needed for your specific use case.
Adding a New Provider
1 Install Provider SDK
First, install the AI SDK package for your desired provider:
# Example: Adding Google Gemini
pnpm add @ai-sdk/google
# Example: Adding Mistral AI
pnpm add @ai-sdk/mistral
# Example: Adding Cohere
pnpm add @ai-sdk/cohere
2 Update Environment Variables
Add the API key for your new provider to .env.local:
# Add to your existing .env.local file
GOOGLE_API_KEY=your-google-api-key-here
MISTRAL_API_KEY=your-mistral-api-key-here
COHERE_API_KEY=your-cohere-api-key-here
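A missing key typically only surfaces as an error at request time. If you want to fail fast instead, a small startup check can help. This is an illustrative sketch, not part of the platform; the key names and the `assertProviderKeys` helper are assumptions you should adapt to the providers you actually enable:

```typescript
// Illustrative startup check: throw early if a provider API key is missing.
// Adjust REQUIRED_KEYS to match the providers you have enabled.
const REQUIRED_KEYS = ['GOOGLE_API_KEY', 'MISTRAL_API_KEY'] as const;

export function assertProviderKeys(
  env: Record<string, string | undefined> = process.env,
): void {
  const missing = REQUIRED_KEYS.filter((key) => !env[key]?.trim());
  if (missing.length > 0) {
    throw new Error(`Missing provider API keys: ${missing.join(', ')}`);
  }
}
```

Calling `assertProviderKeys()` once during server startup turns a confusing runtime failure into a clear configuration error.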
3 Configure Provider in Code
Edit lib/ai/providers.ts to add your new provider:
import { customProvider } from 'ai';
import { createOpenAI } from '@ai-sdk/openai';
import { createAnthropic } from '@ai-sdk/anthropic';
import { createGoogleGenerativeAI } from '@ai-sdk/google'; // New import
import { createMistral } from '@ai-sdk/mistral'; // New import
import { xai } from '@ai-sdk/xai';

// Existing providers...
const openai = createOpenAI({
  apiKey: process.env.OPENAI_API_KEY,
  compatibility: 'strict',
});

const anthropic = createAnthropic({
  apiKey: process.env.ANTHROPIC_API_KEY,
});

// Add new providers
const google = createGoogleGenerativeAI({
  apiKey: process.env.GOOGLE_API_KEY,
});

const mistralProvider = createMistral({
  apiKey: process.env.MISTRAL_API_KEY,
});
export const myProvider = customProvider({
  languageModels: {
    // Existing models...
    'openai-gpt-4.1': openai.chat('gpt-4.1'),
    'claude-3-7-sonnet': anthropic('claude-3-7-sonnet-20250219'),
    // Add new models
    'google-gemini-pro': google('gemini-pro'),
    'google-gemini-pro-vision': google('gemini-pro-vision'),
    'mistral-large': mistralProvider('mistral-large-latest'),
    'mistral-medium': mistralProvider('mistral-medium-latest'),
  },
  imageModels: {
    'grok-2-image': xai.image('grok-2-image'),
    // Add image models if supported
    'google-imagen': google.image('imagen-2'),
  },
});
4 Update Model Configuration
Add your new models to the model selector by editing lib/ai/models.tsx:
export const models = [
  // Existing models...
  {
    id: 'openai-gpt-4.1',
    name: 'GPT-4.1',
    provider: 'OpenAI',
    description: 'Most capable GPT-4 model',
  },
  // Add new models
  {
    id: 'google-gemini-pro',
    name: 'Gemini Pro',
    provider: 'Google',
    description: "Google's most capable multimodal model",
  },
  {
    id: 'mistral-large',
    name: 'Mistral Large',
    provider: 'Mistral AI',
    description: "Mistral's largest and most capable model",
  },
];
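If your selector UI groups models by provider, a small helper keeps that logic in one place. This is a sketch; the `ModelInfo` interface mirrors the entries above, and `groupByProvider` is an illustrative name, not an existing platform function:

```typescript
// Shape of one entry in the models array above.
interface ModelInfo {
  id: string;
  name: string;
  provider: string;
  description: string;
}

// Group the flat models array by provider for a sectioned selector menu.
// A Map preserves the order in which providers first appear.
export function groupByProvider(models: ModelInfo[]): Map<string, ModelInfo[]> {
  const groups = new Map<string, ModelInfo[]>();
  for (const model of models) {
    const group = groups.get(model.provider) ?? [];
    group.push(model);
    groups.set(model.provider, group);
  }
  return groups;
}
```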
Provider-Specific Configuration
OpenAI Configuration
const openai = createOpenAI({
  apiKey: process.env.OPENAI_API_KEY,
  compatibility: 'strict', // For better compatibility
  baseURL: 'https://api.openai.com/v1', // Custom endpoint if needed
  organization: 'your-org-id', // Optional organization ID
});
Anthropic Configuration
const anthropic = createAnthropic({
  apiKey: process.env.ANTHROPIC_API_KEY,
  baseURL: 'https://api.anthropic.com', // Custom endpoint
});
Google Configuration
const google = createGoogleGenerativeAI({
  apiKey: process.env.GOOGLE_API_KEY,
  baseURL: 'https://generativelanguage.googleapis.com/v1beta',
});
Troubleshooting
API Key Issues:
- Verify the API key is correct and has the required permissions
- Check that billing is set up for the provider
- Ensure rate limits haven't been exceeded
Model Not Available:
- Check if the model ID is correct
- Verify your account has access to the model
- Some models may be in limited beta
Debugging Steps
- Check Environment Variables:
echo $GOOGLE_API_KEY
- Test Provider Connection:
npx tsx scripts/test-provider.ts
- Check Application Logs:
pnpm dev # Check console for errors
Best Practices
Security
- Never commit API keys to version control
- Use environment variables for all sensitive configuration
- Rotate API keys regularly in production
- Set up monitoring for API usage and costs
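When debugging configuration, it is tempting to print keys (as with `echo $GOOGLE_API_KEY` above); in shared logs, prefer masking them instead. A minimal sketch, with `maskKey` being an illustrative helper name:

```typescript
// Show just enough of a key to confirm which one is loaded,
// without exposing the full secret in logs.
export function maskKey(key: string | undefined): string {
  if (!key) return '(not set)';
  if (key.length <= 8) return '****';
  return `${key.slice(0, 4)}...${key.slice(-4)}`;
}
```

For example, `maskKey(process.env.GOOGLE_API_KEY)` lets you confirm the right key is loaded without leaking it.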
Performance
- Cache model responses when appropriate
- Set reasonable timeouts for API calls
- Implement retry logic for failed requests
- Monitor token usage to prevent budget overruns
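Retry logic for transient provider errors can be sketched as a generic wrapper with exponential backoff. The function name, attempt count, and delays are illustrative defaults, not platform settings:

```typescript
// Retry an async provider call with exponential backoff.
// maxAttempts and baseDelayMs are illustrative defaults; tune per provider.
export async function withRetry<T>(
  fn: () => Promise<T>,
  maxAttempts = 3,
  baseDelayMs = 500,
): Promise<T> {
  let lastError: unknown;
  for (let attempt = 0; attempt < maxAttempts; attempt++) {
    try {
      return await fn();
    } catch (error) {
      lastError = error;
      if (attempt < maxAttempts - 1) {
        // Wait 500ms, 1000ms, 2000ms, ... before the next attempt.
        await new Promise((resolve) => setTimeout(resolve, baseDelayMs * 2 ** attempt));
      }
    }
  }
  throw lastError;
}
```

In practice you would only retry on errors that are plausibly transient (rate limits, timeouts), not on authentication failures.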
User Experience
- Provide clear model descriptions to help users choose
- Implement graceful fallbacks when models are unavailable
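A graceful fallback can be as simple as an ordered preference list checked against the models currently available. The model ids reuse examples from this guide, but `pickModel` itself is an illustrative helper, not part of the platform:

```typescript
// Return the first preferred model id that is actually available;
// throw if none are, so the caller can surface a clear error to the user.
export function pickModel(preferred: string[], available: Set<string>): string {
  for (const id of preferred) {
    if (available.has(id)) return id;
  }
  throw new Error(`None of the preferred models are available: ${preferred.join(', ')}`);
}
```

For example, `pickModel(['google-gemini-pro', 'mistral-large'], availableIds)` falls back to Mistral when the Gemini model is down or disabled.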
Provider Added Successfully! Your new AI provider should now be available in the model selector, and users can start using it immediately, subject to their token budgets.
Finally, don't forget to update your Dockerfile and related deployment configuration to pass through the new environment variables.