

LLM provider configuration and deployment

Architecture

β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”    β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”    β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”
β”‚ Tosche Station Chatbot │───▢│ AI Firewall (Imperva)  │───▢│  LLM Provider    β”‚
β”‚                        │◀───│                        │◀───│                  β”‚
β”‚  POST /chatbot         β”‚    β”‚  Inspect + Block       β”‚    β”‚  OpenAI / Claude β”‚
β”‚                        β”‚    β”‚  β€’ Injection           β”‚    β”‚  Ollama / Custom β”‚
β”‚                        β”‚    β”‚  β€’ Data Leak           β”‚    β”‚                  β”‚
β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜    β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜    β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜

LLM Providers

Custom Provider (via AI Firewall)

Recommended

Inserts a security proxy (AI Firewall) between the app and the LLM to detect and block prompt injection, data leakage, and other threats.

docker run -d --name tosche-station \
  -e LLM_PROVIDER=custom \
  -e LLM_BASE_URL=https://ai-firewall.yourcompany.com/v1 \
  -e LLM_API_KEY=your-api-key \
  -e LLM_MODEL=gpt-4o \
  -e AI_MODE=vulnerable \
  -p 8080:8080 tosche-station:2.1
OpenAI (Direct)
docker run -d --name tosche-station \
  -e LLM_PROVIDER=openai \
  -e LLM_API_KEY=sk-your-openai-key \
  -e LLM_MODEL=gpt-4o-mini \
  -p 8080:8080 tosche-station:2.1
Anthropic (Direct)
docker run -d --name tosche-station \
  -e LLM_PROVIDER=anthropic \
  -e LLM_API_KEY=sk-ant-your-anthropic-key \
  -e LLM_MODEL=claude-sonnet-4-20250514 \
  -p 8080:8080 tosche-station:2.1
OpenRouter
docker run -d --name tosche-station \
  -e LLM_PROVIDER=openrouter \
  -e LLM_API_KEY=sk-or-your-openrouter-key \
  -e LLM_MODEL=meta-llama/llama-3-70b-instruct \
  -p 8080:8080 tosche-station:2.1
Ollama (Local)
# Start Ollama first
ollama serve &
ollama pull llama3

# Then run Tosche Station
docker run -d --name tosche-station \
  -e LLM_PROVIDER=ollama \
  -e LLM_BASE_URL=http://host.docker.internal:11434 \
  -e LLM_MODEL=llama3 \
  -p 8080:8080 tosche-station:2.1
None (rule-based)

Default mode. No LLM is called; the chatbot answers from pattern-matching rules.

docker run -d --name tosche-station \
  -e LLM_PROVIDER=none \
  -p 8080:8080 tosche-station:2.1
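The rule-based fallback can be pictured as a small keyword matcher: the first rule whose pattern matches the message wins, and anything unmatched gets a default reply. This is an illustrative sketch only; the patterns and canned answers below are invented, not taken from Tosche Station's actual rule set.

```python
import re

# Hypothetical rules: (pattern, canned reply). Invented for illustration.
RULES = [
    (re.compile(r"\b(hours|open|closing)\b", re.I),
     "We are open 9:00-18:00, Monday to Saturday."),
    (re.compile(r"\b(price|cost|quote)\b", re.I),
     "Please ask about a specific part for pricing."),
]
DEFAULT_REPLY = "Sorry, I didn't understand. Could you rephrase?"

def rule_based_reply(message: str) -> str:
    """Return the canned answer for the first matching rule, else a default."""
    for pattern, answer in RULES:
        if pattern.search(message):
            return answer
    return DEFAULT_REPLY
```

Because no request ever leaves the process, this mode needs no API key and is safe to run fully offline.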

Why use Custom Provider?

The custom provider lets you route all LLM traffic through a security proxy like Imperva AI Firewall, which inspects requests and responses for threats.

Prompt Injection
Detects & blocks injection attempts before reaching the LLM
Data Leakage
Prevents sensitive data from being returned in LLM responses
Audit Trail
Full logging of all LLM interactions for compliance
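The reason the proxy can be dropped in with only a base-URL change is that the app talks to every provider through the same HTTP shape. The sketch below assumes an OpenAI-compatible `/chat/completions` endpoint and a JSON `messages` payload; the exact path and payload used by Tosche Station internally are assumptions, not taken from its source.

```python
import json
import urllib.request

def build_chat_request(base_url: str, api_key: str, model: str,
                       user_message: str) -> urllib.request.Request:
    """Build the outbound LLM request. Swapping base_url for the firewall's
    URL routes the identical request through the proxy instead."""
    url = base_url.rstrip("/") + "/chat/completions"
    payload = {"model": model,
               "messages": [{"role": "user", "content": user_message}]}
    return urllib.request.Request(
        url,
        data=json.dumps(payload).encode(),
        headers={"Authorization": f"Bearer {api_key}",
                 "Content-Type": "application/json"},
    )

# Direct provider vs. firewall: only the base URL differs.
direct = build_chat_request("https://api.openai.com/v1", "sk-key", "gpt-4o", "hi")
proxied = build_chat_request("https://ai-firewall.yourcompany.com/v1", "key", "gpt-4o", "hi")
```

Since the firewall sees the full request and response, it can inspect both directions (injection on the way in, leakage on the way out) without any change to application code.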

Environment Variables

Variable       Default      Description
LLM_PROVIDER   none         none | openai | anthropic | openrouter | ollama | custom
LLM_API_KEY    (empty)      API key for the LLM provider
LLM_MODEL      (auto)       Model name (auto-defaults per provider)
LLM_BASE_URL   (auto)       Override API base URL (required for custom/ollama)
LAB_MODE       vulnerable   vulnerable | baseline
AI_MODE        vulnerable   vulnerable | guarded
CSP_MODE       off          off | report | enforce
FORCE_LANG     (empty)      Force language: es | en | pt
APP_PORT       8080         Server port
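The variables above can be read with a simple fallback-to-default scheme. This is an illustrative sketch of that precedence (environment value wins, else the documented default) plus the "required for custom/ollama" rule; the real app's config loader may differ.

```python
import os

# Documented defaults from the table above.
DEFAULTS = {
    "LLM_PROVIDER": "none",
    "LLM_API_KEY": "",
    "LLM_MODEL": "",        # (auto): resolved per provider
    "LLM_BASE_URL": "",     # (auto): required for custom/ollama
    "LAB_MODE": "vulnerable",
    "AI_MODE": "vulnerable",
    "CSP_MODE": "off",
    "FORCE_LANG": "",
    "APP_PORT": "8080",
}

def load_config(env=os.environ) -> dict:
    """Resolve each variable from the environment, falling back to its default."""
    cfg = {key: env.get(key, default) for key, default in DEFAULTS.items()}
    if cfg["LLM_PROVIDER"] in ("custom", "ollama") and not cfg["LLM_BASE_URL"]:
        raise ValueError("LLM_BASE_URL is required for custom/ollama providers")
    return cfg
```

With an empty environment this yields the rule-based default (`LLM_PROVIDER=none` on port 8080), matching the table.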

Docker Example

# Full example with AI Firewall
docker run -d \
  --name tosche-station \
  -e LAB_MODE=vulnerable \
  -e AI_MODE=vulnerable \
  -e CSP_MODE=off \
  -e LLM_PROVIDER=custom \
  -e LLM_BASE_URL=https://ai-firewall.yourcompany.com/v1 \
  -e LLM_API_KEY=your-api-key \
  -e LLM_MODEL=gpt-4o \
  -e FORCE_LANG=es \
  -p 8080:8080 \
  tosche-station:2.1

# Verify
curl http://localhost:8080/health