# API Keys & Security
The number one question we get: "Does ModelTrack see my API keys?" Here's the full answer.
## Yes, the proxy sees your API keys
ModelTrack is a reverse proxy. It forwards your LLM requests to the upstream provider (Anthropic, OpenAI, etc.). To do that, it needs to read the `Authorization` header, which contains your API key.
This is the same as any reverse proxy: nginx, Envoy, Traefik, AWS ALB. If you already use a reverse proxy or API gateway, your API keys pass through it too.
## What the proxy does (and does not) do with keys
### Forwards the key to the upstream provider
The proxy reads the `Authorization` header, forwards the request (with the key) to the real API, and returns the response.
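This pass-through behavior can be sketched with Go's standard-library reverse proxy. Everything below is illustrative, not ModelTrack's actual code: the test servers stand in for the real provider API, and `sk-test-123` is a fake key.

```go
package main

import (
	"fmt"
	"io"
	"net/http"
	"net/http/httptest"
	"net/http/httputil"
	"net/url"
)

// authSeenByUpstream sends a request carrying the given Authorization
// value through a stdlib reverse proxy and returns what the upstream saw.
func authSeenByUpstream(key string) string {
	// Stand-in for the real provider API: echo back the auth header received.
	upstream := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		io.WriteString(w, r.Header.Get("Authorization"))
	}))
	defer upstream.Close()

	// The proxy: forwards requests, headers included, to the upstream.
	target, _ := url.Parse(upstream.URL)
	proxy := httptest.NewServer(httputil.NewSingleHostReverseProxy(target))
	defer proxy.Close()

	req, _ := http.NewRequest("POST", proxy.URL+"/v1/messages", nil)
	req.Header.Set("Authorization", key)
	resp, _ := http.DefaultClient.Do(req)
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	return string(body)
}

func main() {
	// The key passes through the proxy to the upstream unchanged.
	fmt.Println(authSeenByUpstream("Bearer sk-test-123"))
}
```

The key exists in memory for the duration of the round trip and nowhere else, which is exactly the property the sections below spell out.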
### Never stores API keys
Keys exist only in memory during the request lifecycle. They are never written to disk, logged, or persisted in any database.
### Never logs API keys
The cost event logs contain model, tokens, cost, headers, and timing — but never the Authorization header or API key.
### Never sends keys to third parties
ModelTrack is fully self-hosted. There is no telemetry, no phone-home, no external service that receives your data.
### Never modifies the API key
The proxy adds X-ModelTrack-* headers for attribution but never changes the Authorization header or any other auth-related data.
## Self-hosted: keys never leave your infrastructure
ModelTrack runs entirely within your infrastructure. Unlike SaaS observability tools that require you to send data to an external service, ModelTrack processes everything locally:
```
Your App → ModelTrack Proxy (your VPC) → Anthropic/OpenAI API
                   ↓
        Cost events (your disk)
                   ↓
        Dashboard (your browser)
```

Your API keys travel from your app to the proxy (within your network) and from the proxy to the LLM provider (over HTTPS). They never touch any ModelTrack-hosted service, because there is no ModelTrack-hosted service.
## Security recommendations
- **Deploy within your VPC or internal network.** The proxy should be accessible from your application services but not from the public internet. Use a `ClusterIP` service in Kubernetes, or a private subnet in AWS/GCP.
- **Use TLS between your app and the proxy.** If the proxy is on a different host than your app, terminate TLS at the proxy or use a service mesh.
- **Rotate API keys regularly.** ModelTrack doesn't store keys, so rotation has zero impact on the proxy: your new key will be forwarded automatically.
- **Restrict access to the data directory.** The `data/` directory contains cost event logs and the SQLite database. These include request metadata (model, tokens, team, feature) but never API keys or request/response bodies.
- **Audit the code yourself.** ModelTrack is open source. The proxy is a single Go binary with no external dependencies beyond the standard library. You can read the entire codebase in an afternoon.
## What gets logged
Here is an example of a cost event that the proxy writes to `data/events.jsonl`:
```json
{
  "timestamp": "2026-03-26T10:30:00Z",
  "request_id": "req_abc123",
  "provider": "anthropic",
  "model": "claude-sonnet-4-6",
  "input_tokens": 150,
  "output_tokens": 320,
  "cost_usd": 0.00234,
  "latency_ms": 1200,
  "team": "ml-research",
  "app": "chatbot",
  "feature": "customer-support",
  "session_id": "user-query-123",
  "cache_hit": false,
  "routed": false
}
```

Notice what is not in the log: no API key, no `Authorization` header, no request body, no response body. Only the metadata needed for cost tracking and attribution.
For a full overview of our security architecture, data handling, and infrastructure, see the Security page.
## How this compares
| Approach | Sees API key? | Data leaves your infra? |
|---|---|---|
| ModelTrack (self-hosted proxy) | In memory only | No |
| SaaS LLM observability tools | Often yes | Yes (sent to their servers) |
| nginx / Envoy reverse proxy | In memory only | No |
| Cloud provider API gateway | In memory only | Stays in cloud provider |