Tools are the Agent's hands and feet. Well-designed tools make the model unstoppable; poorly designed ones lead to wrong tool calls, bad parameters, and infinite loops. Here's what we learned from real projects.
Don't just describe what a tool does — describe when to use it. A good description includes: trigger conditions, applicable scenarios, example inputs and outputs.
```
# ❌ Too vague
"description": "Search the web"

# ✅ With trigger conditions
"description": "Search the web for current information. Use when the answer requires real-time or recent data beyond training cutoff. Returns top 10 results with titles and URLs."
```
Parameter names are hints for the model. Use search_query instead of q, file_path instead of fp. Every parameter needs a clear description.
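Putting the two ideas together, a tool definition might look like the sketch below. The schema shape follows the common JSON Schema convention for tool parameters; the tool name and parameter names are illustrative, not from any specific API.

```python
# Hypothetical tool schema: a trigger-condition description plus
# descriptive parameter names with their own descriptions.
search_tool = {
    "name": "web_search",
    "description": (
        "Search the web for current information. Use when the answer "
        "requires real-time or recent data beyond the training cutoff."
    ),
    "parameters": {
        "type": "object",
        "properties": {
            # "search_query", not "q": the name itself tells the model what to pass
            "search_query": {
                "type": "string",
                "description": "Natural-language query, e.g. 'latest Python release date'",
            },
            "max_results": {
                "type": "integer",
                "description": "Number of results to return (default 10)",
            },
        },
        "required": ["search_query"],
    },
}
```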
Too fine: one tool does one-tenth of a task, requiring 10 sequential calls → model gets lost.
Too coarse: one tool does everything, parameter explosion → model doesn't know what to pass.
Golden rule: one tool completes one complete operation. Read file, write file, search — separate but complete.
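A minimal sketch of the golden rule, with hypothetical function names: instead of exposing open/seek/read/close as four separate tools (forcing the model to chain calls and track a handle between them), each tool completes one whole operation in a single call.

```python
# ✅ One complete operation per tool (names are illustrative).
def read_file(file_path: str) -> str:
    """Read an entire text file in a single call."""
    with open(file_path, encoding="utf-8") as f:
        return f.read()


def write_file(file_path: str, content: str) -> int:
    """Write content to a file in a single call; returns characters written."""
    with open(file_path, "w", encoding="utf-8") as f:
        return f.write(content)
```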
Tool return values become the model's input. Plain text works, but JSON is better — the model extracts key information more accurately. Return structured errors instead of empty strings on failure.
```
# ✅ Helpful error
{
  "success": false,
  "error": "File not found: /data/report.csv",
  "suggestion": "Try listing /data/ to see available files"
}
```
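One way to produce such errors, sketched with a hypothetical tool function: catch the failure case and return structured JSON with a suggestion the model can act on, rather than an empty string or a raw traceback.

```python
import json
import os


def read_csv_tool(file_path: str) -> str:
    """Read a file; on failure, return a structured error with a next step."""
    if not os.path.exists(file_path):
        parent = os.path.dirname(file_path) or "."
        return json.dumps({
            "success": False,
            "error": f"File not found: {file_path}",
            # An actionable suggestion the model can turn into its next tool call
            "suggestion": f"Try listing {parent}/ to see available files",
        })
    with open(file_path, encoding="utf-8") as f:
        return json.dumps({"success": True, "content": f.read()})
```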
Beyond ~20 tools, model selection accuracy drops significantly. If you have many tools, consider tiered exposure: core tools first, advanced tools only for complex tasks.
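Tiered exposure can be as simple as the sketch below, where tool names and the complexity flag are hypothetical: core tools are always visible, and advanced tools are added only when the task is flagged as complex, keeping the visible tool count small.

```python
# Hypothetical tool tiers; how "complexity" is determined is up to the app
# (a router model, task keywords, or an explicit user setting).
CORE_TOOLS = ["read_file", "write_file", "web_search"]
ADVANCED_TOOLS = ["run_sql", "deploy_service", "profile_memory"]


def select_tools(task_complexity: str) -> list[str]:
    """Return the tool names to expose for this turn."""
    if task_complexity == "complex":
        return CORE_TOOLS + ADVANCED_TOOLS
    return CORE_TOOLS
```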