SECURITY LAB Modo: VULNERABLE CSP: off AI: vulnerable LLM: openrouter VULNS ATIVAS

DocumentaΓ§Γ£o

ConfiguraΓ§Γ£o do provedor LLM e implantaΓ§Γ£o

Arquitetura

β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”    β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”    β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”
β”‚                  β”‚    β”‚                  β”‚    β”‚                  β”‚
β”‚  Tosche Station Chatbot │───▢│  AI Firewall (Imperva) │───▢│  Provedor LLM     β”‚
β”‚                  │◀───│                  │◀───│                  β”‚
β”‚  POST /chatbot   β”‚    β”‚  Inspect + Block β”‚    β”‚  OpenAI / Claude β”‚
β”‚                  β”‚    β”‚  β€’ Injection     β”‚    β”‚  Ollama / Custom β”‚
β”‚                  β”‚    β”‚  β€’ Data Leak     β”‚    β”‚                  β”‚
β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜    β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜    β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜
      

Provedores LLM

Provedor Custom (via AI Firewall)

Recomendado

Permite inserir um proxy de seguranΓ§a (AI Firewall) entre o app e o LLM para detectar/bloquear prompt injection, data leakage, etc.

docker run -d --name tosche-station \ -e LLM_PROVIDER=custom \ -e LLM_BASE_URL=https://ai-firewall.yourcompany.com/v1 \ -e LLM_API_KEY=your-api-key \ -e LLM_MODEL=gpt-4o \ -e AI_MODE=vulnerable \ -p 8080:8080 tosche-station:2.1
OpenAI (Direto)
docker run -d --name tosche-station \ -e LLM_PROVIDER=openai \ -e LLM_API_KEY=sk-your-openai-key \ -e LLM_MODEL=gpt-4o-mini \ -p 8080:8080 tosche-station:2.1
Anthropic (Direto)
docker run -d --name tosche-station \ -e LLM_PROVIDER=anthropic \ -e LLM_API_KEY=sk-ant-your-anthropic-key \ -e LLM_MODEL=claude-sonnet-4-20250514 \ -p 8080:8080 tosche-station:2.1
OpenRouter
docker run -d --name tosche-station \ -e LLM_PROVIDER=openrouter \ -e LLM_API_KEY=sk-or-your-openrouter-key \ -e LLM_MODEL=meta-llama/llama-3-70b-instruct \ -p 8080:8080 tosche-station:2.1
Ollama (Local)
# Start Ollama first ollama serve & ollama pull llama3 # Then run Tosche Station docker run -d --name tosche-station \ -e LLM_PROVIDER=ollama \ -e LLM_BASE_URL=http://host.docker.internal:11434 \ -e LLM_MODEL=llama3 \ -p 8080:8080 tosche-station:2.1
None (baseado em regras)

Default mode. No LLM β€” chatbot uses pattern matching rules.

docker run -d --name tosche-station \ -e LLM_PROVIDER=none \ -p 8080:8080 tosche-station:2.1

Por que usar Custom Provider?

O provedor custom permite rotear todo o trΓ‘fego LLM atravΓ©s de um proxy de seguranΓ§a como Imperva AI Firewall, que inspeciona as solicitaΓ§Γ΅es e respostas em busca de ameaΓ§as.

Prompt Injection
Detects & blocks injection attempts before reaching the LLM
Data Leakage
Prevents sensitive data from being returned in LLM responses
Audit Trail
Full logging of all LLM interactions for compliance

VariΓ‘veis de Ambiente

Variable Default Description
LLM_PROVIDERnonenone | openai | anthropic | openrouter | ollama | custom
LLM_API_KEY(empty)API key for the LLM provider
LLM_MODEL(auto)Model name (auto-defaults per provider)
LLM_BASE_URL(auto)Override API base URL (required for custom/ollama)
LAB_MODEvulnerablevulnerable | baseline
AI_MODEvulnerablevulnerable | guarded
CSP_MODEoffoff | report | enforce
FORCE_LANG(empty)Force language: es | en | pt
APP_PORT8080Server port

Exemplo Docker

# Full example with AI Firewall
docker run -d \
  --name tosche-station \
  -e LAB_MODE=vulnerable \
  -e AI_MODE=vulnerable \
  -e CSP_MODE=off \
  -e LLM_PROVIDER=custom \
  -e LLM_BASE_URL=https://ai-firewall.yourcompany.com/v1 \
  -e LLM_API_KEY=your-api-key \
  -e LLM_MODEL=gpt-4o \
  -e FORCE_LANG=es \
  -p 8080:8080 \
  tosche-station:2.1

# Verify
curl http://localhost:8080/health