Most AI Agent frameworks are born tied to a specific model vendor. LangChain was originally built around OpenAI, and Claude Code is naturally Anthropic-exclusive. But in practice, we often need to switch between models — for cost, latency, capability matching, or simply to avoid lock-in.
Model-agnostic means your Agent's core logic doesn't depend on any specific model's API format. Switching models requires changing only one line of configuration — no rewriting tool definitions, prompt templates, or control loops.
1. Unified Tool Description Format — Define tools once using JSON Schema, the format underlying the function-calling conventions of virtually all major providers.
2. Adapter Pattern — Write a lightweight adapter for each model provider, responsible for converting the unified message format to that model's expected format.
3. Prompts Decoupled from Models — System prompts contain no model-specific instructions (like "You are Claude"). Keep them generic.
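Principle 1 can be made concrete with a single tool definition. The sketch below uses the OpenAI-style function-calling shape; the `get_weather` tool and its fields are hypothetical, and an adapter may need to rename a field or two for a given provider:

```python
# One tool, described once, in the JSON Schema-based function-calling
# shape that most providers (OpenAI, Anthropic, DeepSeek, ...) accept,
# sometimes after a thin renaming pass inside the adapter.
get_weather_tool = {
    "name": "get_weather",  # hypothetical example tool
    "description": "Look up the current weather for a city.",
    "parameters": {  # plain JSON Schema
        "type": "object",
        "properties": {
            "city": {"type": "string", "description": "City name"},
            "unit": {"type": "string", "enum": ["celsius", "fahrenheit"]},
        },
        "required": ["city"],
    },
}
```

Because the schema lives in one place, adding a tool never touches model-specific code.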
```python
class ModelAgnosticAgent:
    def __init__(self, model_adapter, tools, system_prompt):
        self.model = model_adapter  # the only provider-specific piece
        self.tools = tools
        self.messages = [{"role": "system", "content": system_prompt}]

    def run(self, user_input):
        self.messages.append({"role": "user", "content": user_input})
        while True:
            response = self.model.chat(self.messages, self.tools)
            if response.is_final:
                return response.content
            # Record the model's tool request as well as its result, so
            # the next chat round sees the complete history.
            self.messages.append(
                {"role": "assistant", "tool_call": response.tool_call}
            )
            result = self.execute_tool(response.tool_call)  # dispatches into self.tools
            self.messages.append({"role": "tool", "content": result})
```
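What an adapter actually does is pure translation. Below is a sketch of one side of that translation, with hypothetical names throughout: Anthropic's API takes the system prompt as a separate top-level field rather than as a message, so a Claude adapter would peel it off the unified list before calling the provider. The network call itself is elided and replaced with a canned reply:

```python
from dataclasses import dataclass
from typing import Optional


@dataclass
class ModelResponse:
    """Unified response shape every adapter returns to the agent loop."""
    is_final: bool
    content: str = ""
    tool_call: Optional[dict] = None


def split_system(messages):
    """Separate the system prompt from the chat messages, since some
    providers (e.g. Anthropic) take it as a distinct API field."""
    system = ""
    rest = []
    for m in messages:
        if m["role"] == "system":
            system = m["content"]
        else:
            rest.append(m)
    return system, rest


class ClaudeAdapter:
    """Hypothetical adapter sketch: translate unified format -> provider
    format, call the API, translate the reply back."""

    def chat(self, messages, tools):
        system, chat_msgs = split_system(messages)
        # A real adapter would send `system`, `chat_msgs`, and `tools`
        # to the provider SDK here and map its reply into ModelResponse.
        return ModelResponse(is_final=True, content="(model reply)")
```

The agent loop never learns about this quirk; only the adapter does.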
Switching models is just swapping adapters:
```python
# Use Claude
agent = ModelAgnosticAgent(ClaudeAdapter(), tools, prompt)

# Switch to DeepSeek
agent = ModelAgnosticAgent(DeepSeekAdapter(), tools, prompt)
```
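The swap works because every adapter exposes the same `chat(messages, tools)` interface; the caller is pure duck typing. A minimal runnable sketch with two stub adapters (names and replies hypothetical):

```python
class StubClaudeAdapter:
    def chat(self, messages, tools):
        return {"is_final": True, "content": "claude says hi"}


class StubDeepSeekAdapter:
    def chat(self, messages, tools):
        return {"is_final": True, "content": "deepseek says hi"}


def run_once(adapter, user_input):
    """Tiny stand-in for the agent loop: one chat round, no tools."""
    messages = [{"role": "user", "content": user_input}]
    return adapter.chat(messages, tools=[])["content"]


# Same call site, different backend -- only the adapter argument changes.
reply_a = run_once(StubClaudeAdapter(), "hello")
reply_b = run_once(StubDeepSeekAdapter(), "hello")
```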
- smolagents (HuggingFace) — lightweight, supports any HuggingFace model or external API.
- DSPy — declarative programming, auto-optimizes prompts, with models as replaceable parameters.
- Hermes Agent — multi-provider configuration, one agent with multiple model backends.