SECURITY LAB Modo: VULNERABLE CSP: off AI: vulnerable LLM: openrouter VULNS ACTIVAS

DocumentaciΓ³n

ConfiguraciΓ³n del proveedor LLM y despliegue

Arquitectura

β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”    β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”    β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”
β”‚                  β”‚    β”‚                  β”‚    β”‚                  β”‚
β”‚  Tosche Station Chatbot │───▢│  AI Firewall (Imperva) │───▢│  Proveedor LLM    β”‚
β”‚                  │◀───│                  │◀───│                  β”‚
β”‚  POST /chatbot   β”‚    β”‚  Inspect + Block β”‚    β”‚  OpenAI / Claude β”‚
β”‚                  β”‚    β”‚  β€’ Injection     β”‚    β”‚  Ollama / Custom β”‚
β”‚                  β”‚    β”‚  β€’ Data Leak     β”‚    β”‚                  β”‚
β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜    β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜    β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜
      

Proveedores LLM

Proveedor Custom (via AI Firewall)

Recomendado

Permite insertar un proxy de seguridad (AI Firewall) entre la app y el LLM para detectar/bloquear prompt injection, data leakage, etc.

docker run -d --name tosche-station \ -e LLM_PROVIDER=custom \ -e LLM_BASE_URL=https://ai-firewall.yourcompany.com/v1 \ -e LLM_API_KEY=your-api-key \ -e LLM_MODEL=gpt-4o \ -e AI_MODE=vulnerable \ -p 8080:8080 tosche-station:2.1
OpenAI (Directo)
docker run -d --name tosche-station \ -e LLM_PROVIDER=openai \ -e LLM_API_KEY=sk-your-openai-key \ -e LLM_MODEL=gpt-4o-mini \ -p 8080:8080 tosche-station:2.1
Anthropic (Directo)
docker run -d --name tosche-station \ -e LLM_PROVIDER=anthropic \ -e LLM_API_KEY=sk-ant-your-anthropic-key \ -e LLM_MODEL=claude-sonnet-4-20250514 \ -p 8080:8080 tosche-station:2.1
OpenRouter
docker run -d --name tosche-station \ -e LLM_PROVIDER=openrouter \ -e LLM_API_KEY=sk-or-your-openrouter-key \ -e LLM_MODEL=meta-llama/llama-3-70b-instruct \ -p 8080:8080 tosche-station:2.1
Ollama (Local)
# Start Ollama first ollama serve & ollama pull llama3 # Then run Tosche Station docker run -d --name tosche-station \ -e LLM_PROVIDER=ollama \ -e LLM_BASE_URL=http://host.docker.internal:11434 \ -e LLM_MODEL=llama3 \ -p 8080:8080 tosche-station:2.1
None (basado en reglas)

Default mode. No LLM β€” chatbot uses pattern matching rules.

docker run -d --name tosche-station \ -e LLM_PROVIDER=none \ -p 8080:8080 tosche-station:2.1

ΒΏPor quΓ© usar Custom Provider?

El proveedor custom te permite enrutar todo el trΓ‘fico LLM a travΓ©s de un proxy de seguridad como Imperva AI Firewall, que inspecciona las solicitudes y respuestas en busca de amenazas.

Prompt Injection
Detects & blocks injection attempts before reaching the LLM
Data Leakage
Prevents sensitive data from being returned in LLM responses
Audit Trail
Full logging of all LLM interactions for compliance

Variables de Entorno

Variable Default Description
LLM_PROVIDERnonenone | openai | anthropic | openrouter | ollama | custom
LLM_API_KEY(empty)API key for the LLM provider
LLM_MODEL(auto)Model name (auto-defaults per provider)
LLM_BASE_URL(auto)Override API base URL (required for custom/ollama)
LAB_MODEvulnerablevulnerable | baseline
AI_MODEvulnerablevulnerable | guarded
CSP_MODEoffoff | report | enforce
FORCE_LANG(empty)Force language: es | en | pt
APP_PORT8080Server port

Ejemplo Docker

# Full example with AI Firewall
docker run -d \
  --name tosche-station \
  -e LAB_MODE=vulnerable \
  -e AI_MODE=vulnerable \
  -e CSP_MODE=off \
  -e LLM_PROVIDER=custom \
  -e LLM_BASE_URL=https://ai-firewall.yourcompany.com/v1 \
  -e LLM_API_KEY=your-api-key \
  -e LLM_MODEL=gpt-4o \
  -e FORCE_LANG=es \
  -p 8080:8080 \
  tosche-station:2.1

# Verify
curl http://localhost:8080/health