# AI Configuration Guide

## Environment Variables

Add these to your `.env` file:

```env
# AI Configuration
# Choose between 'ollama' and 'openrouter'
AI_PROVIDER=openrouter

# Ollama Configuration (if AI_PROVIDER=ollama)
AI_PORT=11434
AI_MODEL=gpt-oss:20b

# OpenRouter Configuration (if AI_PROVIDER=openrouter)
OPENROUTER_API_KEY=sk-or-your-api-key-here
OPENROUTER_MODEL=gemma
OPENROUTER_BASE_URL=openrouter.ai
OPENROUTER_REL_PATH=/api
OPENROUTER_TEMPERATURE=0.7
```
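As a rough illustration of how these variables fit together, here is a minimal loader sketch in Python. The function name, the returned dict shape, and the fallback defaults are assumptions for illustration, not the application's actual code:

```python
import os

# Hypothetical loader: reads the variables from the sample .env above
# and picks a provider. Defaults mirror the sample values.
def load_ai_config(env=os.environ):
    provider = env.get("AI_PROVIDER", "ollama").lower()
    if provider == "ollama":
        return {
            "provider": "ollama",
            "port": int(env.get("AI_PORT", "11434")),
            "model": env.get("AI_MODEL", "gpt-oss:20b"),
        }
    if provider == "openrouter":
        key = env.get("OPENROUTER_API_KEY")
        if not key:
            # The API key has no sensible default, so fail fast.
            raise ValueError("OPENROUTER_API_KEY is required when AI_PROVIDER=openrouter")
        return {
            "provider": "openrouter",
            "api_key": key,
            "model": env.get("OPENROUTER_MODEL", "gemma"),
            "base_url": env.get("OPENROUTER_BASE_URL", "openrouter.ai"),
            "rel_path": env.get("OPENROUTER_REL_PATH", "/api"),
            "temperature": float(env.get("OPENROUTER_TEMPERATURE", "0.7")),
        }
    raise ValueError(f"Unknown AI_PROVIDER: {provider!r}")
```

Failing fast on a missing API key surfaces misconfiguration at startup rather than on the first AI request.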

## Available OpenRouter Models

Based on your C# implementation, these models are available:

- `gemma` - google/gemma-3-12b-it
- `dolphin` - cognitivecomputations/dolphin-mixtral-8x22b
- `dolphin_free` - cognitivecomputations/dolphin3.0-mistral-24b:free
- `gpt-4o-mini` - openai/gpt-4o-mini
- `gpt-4.1-nano` - openai/gpt-4.1-nano
- `qwen` - qwen/qwen3-30b-a3b
- `unslop` - thedrummer/unslopnemo-12b
- `euryale` - sao10k/l3.3-euryale-70b
- `wizard` - microsoft/wizardlm-2-8x22b
- `deepseek` - deepseek/deepseek-chat-v3-0324
- `dobby` - sentientagi/dobby-mini-unhinged-plus-llama-3.1-8b
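The alias-to-model mapping above amounts to a simple lookup table. A Python sketch of that table (the dict name and the pass-through fallback are assumptions for illustration, not the actual C# code):

```python
# Maps the short aliases from this guide to full OpenRouter model IDs.
OPENROUTER_MODELS = {
    "gemma": "google/gemma-3-12b-it",
    "dolphin": "cognitivecomputations/dolphin-mixtral-8x22b",
    "dolphin_free": "cognitivecomputations/dolphin3.0-mistral-24b:free",
    "gpt-4o-mini": "openai/gpt-4o-mini",
    "gpt-4.1-nano": "openai/gpt-4.1-nano",
    "qwen": "qwen/qwen3-30b-a3b",
    "unslop": "thedrummer/unslopnemo-12b",
    "euryale": "sao10k/l3.3-euryale-70b",
    "wizard": "microsoft/wizardlm-2-8x22b",
    "deepseek": "deepseek/deepseek-chat-v3-0324",
    "dobby": "sentientagi/dobby-mini-unhinged-plus-llama-3.1-8b",
}

def resolve_model(alias):
    # Unknown aliases are passed through unchanged, so a full model ID
    # placed in OPENROUTER_MODEL would also work under this scheme.
    return OPENROUTER_MODELS.get(alias, alias)
```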

## Testing

1. Set `AI_PROVIDER=openrouter` in your `.env`
2. Add your OpenRouter API key
3. Test the connection: `GET http://localhost:8083/rest/ai/test-ai`
4. Start an interview to test the full flow
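Before hitting the test endpoint, it can help to verify the `.env` is complete for the chosen provider. A small sketch; the required-key lists are inferred from the variables above, not taken from the application's actual validation:

```python
import os

# Keys each provider needs, per the sample .env in this guide
# (an assumption, not the app's real validation logic).
REQUIRED_KEYS = {
    "ollama": ["AI_PORT", "AI_MODEL"],
    "openrouter": ["OPENROUTER_API_KEY", "OPENROUTER_MODEL"],
}

def missing_keys(env=os.environ):
    """Return the names of required variables that are unset or empty."""
    provider = env.get("AI_PROVIDER", "")
    if provider not in REQUIRED_KEYS:
        return ["AI_PROVIDER"]
    return [k for k in REQUIRED_KEYS[provider] if not env.get(k)]
```

An empty result means the configuration is at least structurally complete for that provider.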

## Switching Back to Ollama

To switch back to Ollama:

1. Set `AI_PROVIDER=ollama` in your `.env`
2. Make sure Ollama is running on the specified port
3. Test the connection: `GET http://localhost:8083/rest/ai/test-ai`
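Step 2 (checking that Ollama is up) can also be done from code. A sketch using a plain TCP probe, assuming Ollama listens on `AI_PORT` on localhost; the function name is hypothetical:

```python
import socket

def port_open(host="127.0.0.1", port=11434, timeout=1.0):
    # A raw TCP connect only tells us that *something* is listening
    # on the Ollama port; it does not verify it is actually Ollama.
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False
```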