# AI Configuration Guide
## Environment Variables
Add these to your `.env` file:

```env
# AI Configuration
# Choose between 'ollama' or 'openrouter'
AI_PROVIDER=openrouter

# Ollama Configuration (if AI_PROVIDER=ollama)
AI_PORT=11434
AI_MODEL=gpt-oss:20b

# OpenRouter Configuration (if AI_PROVIDER=openrouter)
OPENROUTER_API_KEY=sk-or-your-api-key-here
OPENROUTER_MODEL=gemma
OPENROUTER_BASE_URL=openrouter.ai
OPENROUTER_REL_PATH=/api
OPENROUTER_TEMPERATURE=0.7
```
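As a sketch of how a backend might combine these variables into a request URL (the function name, defaults, and the `/v1/chat/completions` suffix here are assumptions for illustration, not the actual C# code):

```python
def resolve_ai_endpoint(env: dict) -> str:
    """Pick the chat endpoint based on AI_PROVIDER (illustrative sketch)."""
    provider = env.get("AI_PROVIDER", "ollama")
    if provider == "openrouter":
        # Base URL and relative path come straight from the .env values above.
        base = env.get("OPENROUTER_BASE_URL", "openrouter.ai")
        rel = env.get("OPENROUTER_REL_PATH", "/api")
        return f"https://{base}{rel}/v1/chat/completions"
    # Ollama listens locally on AI_PORT.
    port = env.get("AI_PORT", "11434")
    return f"http://localhost:{port}/api/chat"
```

With the values above this resolves to `https://openrouter.ai/api/v1/chat/completions` for OpenRouter and `http://localhost:11434/api/chat` for Ollama.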
## Available OpenRouter Models
Based on the C# implementation, these model aliases are available:
- `gemma`: google/gemma-3-12b-it
- `dolphin`: cognitivecomputations/dolphin-mixtral-8x22b
- `dolphin_free`: cognitivecomputations/dolphin3.0-mistral-24b:free
- `gpt-4o-mini`: openai/gpt-4o-mini
- `gpt-4.1-nano`: openai/gpt-4.1-nano
- `qwen`: qwen/qwen3-30b-a3b
- `unslop`: thedrummer/unslopnemo-12b
- `euryale`: sao10k/l3.3-euryale-70b
- `wizard`: microsoft/wizardlm-2-8x22b
- `deepseek`: deepseek/deepseek-chat-v3-0324
- `dobby`: sentientagi/dobby-mini-unhinged-plus-llama-3.1-8b
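The alias-to-model lookup above can be sketched as a plain map with a pass-through fallback (the dict and function names are illustrative, not taken from the actual implementation):

```python
# Alias-to-model map mirroring the list above (names here are illustrative).
OPENROUTER_MODELS = {
    "gemma": "google/gemma-3-12b-it",
    "dolphin": "cognitivecomputations/dolphin-mixtral-8x22b",
    "dolphin_free": "cognitivecomputations/dolphin3.0-mistral-24b:free",
    "gpt-4o-mini": "openai/gpt-4o-mini",
    "gpt-4.1-nano": "openai/gpt-4.1-nano",
    "qwen": "qwen/qwen3-30b-a3b",
    "unslop": "thedrummer/unslopnemo-12b",
    "euryale": "sao10k/l3.3-euryale-70b",
    "wizard": "microsoft/wizardlm-2-8x22b",
    "deepseek": "deepseek/deepseek-chat-v3-0324",
    "dobby": "sentientagi/dobby-mini-unhinged-plus-llama-3.1-8b",
}

def resolve_model(value: str) -> str:
    # If OPENROUTER_MODEL is not a known alias, assume it is already a
    # full OpenRouter model id and pass it through unchanged.
    return OPENROUTER_MODELS.get(value, value)
```

This lets `OPENROUTER_MODEL` hold either a short alias (`gemma`) or a full model id (`openai/gpt-4o-mini`).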
## Testing

- Set `AI_PROVIDER=openrouter` in your `.env`
- Add your OpenRouter API key
- Test the connection: `GET http://localhost:8083/rest/ai/test-ai`
- Start an interview to test the full flow
## Switching Back to Ollama

To switch back to Ollama:

- Set `AI_PROVIDER=ollama` in your `.env`
- Make sure Ollama is running on the specified port
- Test the connection: `GET http://localhost:8083/rest/ai/test-ai`