# Python SDK

The ModelTrack Python SDK auto-instruments Anthropic and OpenAI calls with a single import. No code changes required.
## Installation

For now, copy the SDK file into your project. A pip-installable package is coming soon.

```bash
# Copy from the ModelTrack repo
cp modeltrack/sdks/python/modeltrack.py your-project/

# Coming soon:
# pip install modeltrack
```

## Auto-instrumentation
Import `modeltrack` at the top of your app. That's it: all Anthropic and OpenAI SDK instances will automatically route through the ModelTrack proxy and include attribution headers.
```python
import modeltrack  # Add this one line
import anthropic

# This client now routes through ModelTrack automatically
client = anthropic.Anthropic()

response = client.messages.create(
    model="claude-sonnet-4-6",
    max_tokens=1024,
    messages=[{"role": "user", "content": "Hello!"}]
)

# OpenAI works the same way
import openai

client = openai.OpenAI()

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "Hello!"}]
)
```

**How it works:** On import, the SDK monkey-patches `anthropic.Anthropic.__init__` and `openai.OpenAI.__init__` to set `base_url` to the proxy and inject `X-ModelTrack-*` headers. If you explicitly set `base_url` in the constructor, it won't be overridden.
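The general wrapping pattern can be sketched with a stand-in class. This is illustrative only: `FakeClient`, `PROXY_URL`, and the header value below are placeholders, not the SDK's actual internals.

```python
import functools

class FakeClient:
    """Stand-in for anthropic.Anthropic / openai.OpenAI."""
    def __init__(self, base_url=None, default_headers=None):
        self.base_url = base_url
        self.default_headers = default_headers or {}

PROXY_URL = "http://localhost:8080"  # placeholder proxy address

def _patch_init(cls):
    original = cls.__init__

    @functools.wraps(original)
    def wrapper(self, *args, **kwargs):
        # Only point at the proxy if the caller didn't set base_url explicitly
        kwargs.setdefault("base_url", PROXY_URL)
        # Merge attribution headers without clobbering caller-supplied ones
        headers = kwargs.get("default_headers") or {}
        headers.setdefault("X-ModelTrack-Team", "ml-research")  # example value
        kwargs["default_headers"] = headers
        original(self, *args, **kwargs)

    cls.__init__ = wrapper

_patch_init(FakeClient)

client = FakeClient()                          # routed through the proxy
custom = FakeClient(base_url="http://other")   # explicit base_url wins
```

The key detail is `setdefault`: the wrapper fills in values only when the caller left them unset, which is how an explicit `base_url` survives the patch.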
## Manual base URL

If you prefer not to use auto-instrumentation, set the base URL manually:

```python
import anthropic

client = anthropic.Anthropic(base_url="http://localhost:8080")

response = client.messages.create(
    model="claude-sonnet-4-6",
    max_tokens=1024,
    messages=[{"role": "user", "content": "Hello!"}]
)

# For OpenAI:
import openai

client = openai.OpenAI(base_url="http://localhost:8080")
```

## Configuration
Configure the SDK either via environment variables or programmatically with `modeltrack.configure()`.
### Environment variables

| Variable | Default | Description |
|---|---|---|
| `MODELTRACK_PROXY_URL` | `http://localhost:8080` | Proxy URL |
| `MODELTRACK_TEAM` | - | Team name for attribution |
| `MODELTRACK_APP` | - | Application name |
| `MODELTRACK_FEATURE` | - | Feature name |
| `MODELTRACK_CUSTOMER_TIER` | - | Customer tier (e.g., `enterprise`, `free`) |
| `MODELTRACK_SESSION_ID` | - | Session ID for multi-step workflows |
| `MODELTRACK_TRACE_ID` | - | Trace ID for distributed tracing |
| `MODELTRACK_PROMPT_TEMPLATE` | - | Prompt template ID for cost analysis |
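For example, you can export the variables in your shell before starting the app (the values below are illustrative):

```shell
# Attribution applies to every call the process makes
export MODELTRACK_PROXY_URL="http://localhost:8080"
export MODELTRACK_TEAM="ml-research"
export MODELTRACK_APP="chatbot"
export MODELTRACK_FEATURE="customer-support"

# Then start your app as usual, e.g.:
# python app.py
```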
### Programmatic configuration

```python
import modeltrack

modeltrack.configure(
    proxy_url="http://localhost:8080",
    team="ml-research",
    app="chatbot",
    feature="customer-support",
    customer_tier="enterprise",
    prompt_template="support-v2",
)
```

## Session tracking
Use the `session()` context manager to group multiple LLM calls into a single session. This is useful for multi-step agent workflows where you want to track the total cost of a conversation or task.
```python
import modeltrack
import anthropic

client = anthropic.Anthropic()

# All calls within this block share the same session ID
with modeltrack.session("user-query-123"):
    # Step 1: Classify the query
    classification = client.messages.create(
        model="claude-haiku-4-5",
        max_tokens=100,
        messages=[{"role": "user", "content": "Classify: refund request"}]
    )

    # Step 2: Generate a response
    response = client.messages.create(
        model="claude-sonnet-4-6",
        max_tokens=1024,
        messages=[{"role": "user", "content": "Handle refund for order #123"}]
    )

# With an optional trace ID for distributed tracing
with modeltrack.session("session-123", trace_id="trace-456"):
    response = client.messages.create(...)
```

Sessions are thread-safe: each thread gets its own session context via thread-local storage.
## Setting team, app, and feature

You can set attribution at three levels:

```python
# 1. Environment variables (set once, applies globally)
# MODELTRACK_TEAM=ml-research
# MODELTRACK_APP=chatbot
# MODELTRACK_FEATURE=customer-support

# 2. Programmatic (set once in your app startup)
import modeltrack
modeltrack.configure(team="ml-research", app="chatbot")

# 3. Per-request headers (maximum flexibility)
import anthropic
client = anthropic.Anthropic(
    default_headers={
        "X-ModelTrack-Team": "ml-research",
        "X-ModelTrack-App": "chatbot",
        "X-ModelTrack-Feature": "customer-support",
    }
)
```

## Compatibility
The Python SDK auto-patches these libraries:
- ✓ `anthropic` (Anthropic Python SDK)
- ✓ `openai` (OpenAI Python SDK)
For frameworks like LangChain, CrewAI, and LlamaIndex, see the Frameworks guide.