LangChain & Frameworks
ModelTrack works with any AI framework that lets you set a custom base URL. Here's how to integrate with the most popular ones.
The universal approach: ModelTrack is a reverse proxy. Any library that talks to Anthropic or OpenAI can be routed through it by changing the base URL to http://localhost:8080. The examples below show how to do this for each framework.
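To make the "change the base URL" idea concrete, here is a stdlib-only sketch of what routing through the proxy amounts to: the scheme and host of an API URL are swapped for the proxy's, while the path stays intact and the attribution headers ride along. The `route_through_modeltrack` helper is ours, for illustration only; it is not part of the ModelTrack SDK.

```python
from urllib.parse import urlsplit, urlunsplit

PROXY = "http://localhost:8080"

def route_through_modeltrack(url: str, team: str, app: str):
    """Rewrite an Anthropic/OpenAI API URL to point at the local
    ModelTrack proxy and build the attribution headers."""
    parts = urlsplit(url)
    proxy = urlsplit(PROXY)
    # Keep the original path and query; swap scheme and host for the proxy.
    new_url = urlunsplit((proxy.scheme, proxy.netloc, parts.path, parts.query, ""))
    headers = {
        "X-ModelTrack-Team": team,
        "X-ModelTrack-App": app,
    }
    return new_url, headers

url, headers = route_through_modeltrack(
    "https://api.anthropic.com/v1/messages", team="ml-research", app="demo"
)
print(url)  # http://localhost:8080/v1/messages
```

Every framework-specific example below is just a different way of getting its HTTP client to do this same rewrite.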
LangChain with Anthropic
Use the anthropic_api_url parameter to route through ModelTrack.
```python
from langchain_anthropic import ChatAnthropic

llm = ChatAnthropic(
    model="claude-sonnet-4-6",
    anthropic_api_url="http://localhost:8080",
    default_headers={
        "X-ModelTrack-Team": "ml-research",
        "X-ModelTrack-App": "langchain-bot",
    },
)

response = llm.invoke("What is the meaning of life?")
```

LangChain with OpenAI
Use the openai_api_base parameter.
```python
from langchain_openai import ChatOpenAI

llm = ChatOpenAI(
    model="gpt-4o",
    openai_api_base="http://localhost:8080",
    default_headers={
        "X-ModelTrack-Team": "product",
        "X-ModelTrack-App": "langchain-bot",
    },
)

response = llm.invoke("Summarize this document.")
```

CrewAI
CrewAI uses the Anthropic and OpenAI SDKs under the hood. Set the base URL via environment variables:
```bash
# Set the base URL for all LLM calls in CrewAI
export ANTHROPIC_BASE_URL=http://localhost:8080
export OPENAI_BASE_URL=http://localhost:8080

# Attribution headers
export MODELTRACK_TEAM=ml-research
export MODELTRACK_APP=crew-agent

# Then run your CrewAI app normally
python crew_app.py
```

Or use the ModelTrack SDK for auto-instrumentation:
```python
import modeltrack  # Auto-patches before CrewAI loads the SDK

modeltrack.configure(team="ml-research", app="crew-agent")

from crewai import Agent, Task, Crew
# ... your CrewAI code works as normal
```

LlamaIndex
Set api_base in the LLM constructor:
```python
from llama_index.llms.anthropic import Anthropic

llm = Anthropic(
    model="claude-sonnet-4-6",
    api_base="http://localhost:8080",
    additional_kwargs={
        "extra_headers": {
            "X-ModelTrack-Team": "ml-research",
            "X-ModelTrack-App": "llamaindex-bot",
        }
    },
)
```

The same pattern works for OpenAI models:

```python
from llama_index.llms.openai import OpenAI

llm = OpenAI(
    model="gpt-4o",
    api_base="http://localhost:8080",
    additional_kwargs={
        "extra_headers": {
            "X-ModelTrack-Team": "ml-research",
            "X-ModelTrack-App": "llamaindex-bot",
        }
    },
)
```

Vercel AI SDK (Node.js)
Use the baseURL option when creating the provider:
```typescript
import { createAnthropic } from '@ai-sdk/anthropic'
import { generateText } from 'ai'

const anthropic = createAnthropic({
  baseURL: 'http://localhost:8080',
  headers: {
    'X-ModelTrack-Team': 'product',
    'X-ModelTrack-App': 'vercel-app',
  },
})

const { text } = await generateText({
  model: anthropic('claude-sonnet-4-6'),
  prompt: 'Hello!',
})
```

And the OpenAI provider:

```typescript
import { createOpenAI } from '@ai-sdk/openai'
import { generateText } from 'ai'

const openai = createOpenAI({
  baseURL: 'http://localhost:8080',
  headers: {
    'X-ModelTrack-Team': 'product',
    'X-ModelTrack-App': 'vercel-app',
  },
})

const { text } = await generateText({
  model: openai('gpt-4o'),
  prompt: 'Hello!',
})
```

Any other framework
If your framework uses the Anthropic or OpenAI Python/Node SDK under the hood, you have three options:
1. Auto-instrumentation: Import modeltrack before importing the framework. The SDK will patch the underlying LLM clients automatically.
2. Environment variables: Set ANTHROPIC_BASE_URL or OPENAI_BASE_URL to http://localhost:8080. Many SDKs read these automatically.
3. Manual base URL: Look for a base_url, api_base, or baseURL parameter in the framework's LLM constructor.
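As an offline illustration of the environment-variable option, the sketch below mimics the lookup that SDKs honoring ANTHROPIC_BASE_URL or OPENAI_BASE_URL perform internally. The `resolve_base_url` helper is ours, for illustration; consult your SDK's documentation for the exact variable names it reads.

```python
import os

def resolve_base_url(env_var: str, default: str) -> str:
    """Stand-in for the env-var lookup an SDK does at client creation:
    use the override if set, otherwise fall back to the vendor API."""
    return os.environ.get(env_var, default)

# With the variable exported, all clients created afterwards route
# through the local ModelTrack proxy.
os.environ["ANTHROPIC_BASE_URL"] = "http://localhost:8080"

base = resolve_base_url("ANTHROPIC_BASE_URL", "https://api.anthropic.com")
print(base)  # http://localhost:8080
```

Note that the variable must be set before the client is constructed; most SDKs read it once at initialization, not per request.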
Need help integrating with a specific framework? Open an issue on GitHub.