Write Your First AI Agent — 50 Lines of Code

In the last article, we covered the concepts behind AI Agents and the ReAct loop. Now let's build one: in under 50 lines of Python, an Agent that searches the web and executes code on its own.

You'll need: Python 3.10+, and an API that supports Function Calling. We use an OpenAI-compatible interface (any model that supports tool calling via /v1/chat/completions works, including locally deployed ones).

The Agent's Two Tools

One for searching, one for computing — these two already cover the vast majority of real-world needs.

Tool 1: Web Search

def search_web(query: str) -> str:
    """Search the web, return top 5 results with titles and links."""
    import urllib.request, urllib.parse, json
    # DuckDuckGo Instant Answer API (free, no API key needed)
    url = "https://api.duckduckgo.com/?" + urllib.parse.urlencode({
        "q": query, "format": "json", "no_html": 1, "skip_disambig": 1
    })
    with urllib.request.urlopen(url, timeout=10) as resp:
        data = json.loads(resp.read())
    results = []
    if data.get("AbstractText"):
        results.append(f"Abstract: {data['AbstractText']}")
    for item in data.get("RelatedTopics", [])[:5]:
        if isinstance(item, dict) and item.get("Text"):
            results.append(f"- {item['Text']}")
    return "\n".join(results) if results else "No results found"

Tool 2: Python Execution

def run_python(code: str) -> str:
    """Execute Python code, return stdout output."""
    import subprocess
    try:
        result = subprocess.run(
            ["python3", "-c", code],
            capture_output=True, text=True, timeout=30
        )
        if result.returncode == 0:
            return result.stdout or "(no output)"
        return f"Error: {result.stderr}"
    except subprocess.TimeoutExpired:
        return "Error: code execution timed out"

⚡ Security note: In production, run_python must execute inside a sandbox (Docker/VM) to prevent malicious code from compromising the system. It is simplified here for demonstration.
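
One way to honor that note is to hand each snippet to a throwaway container. A minimal sketch, assuming Docker is installed and a `python:3.12-slim` image is available locally (both are assumptions, not part of this article's setup):

```python
import subprocess

def sandbox_cmd(code: str) -> list[str]:
    """Build a docker invocation that isolates one code run."""
    return [
        "docker", "run", "--rm",
        "--network", "none",   # no outbound network from the sandbox
        "--memory", "256m",    # cap memory usage
        "python:3.12-slim", "python", "-c", code,
    ]

def run_python_sandboxed(code: str) -> str:
    """Like run_python, but inside a disposable container."""
    try:
        result = subprocess.run(
            sandbox_cmd(code), capture_output=True, text=True, timeout=30
        )
        if result.returncode == 0:
            return result.stdout or "(no output)"
        return f"Error: {result.stderr}"
    except subprocess.TimeoutExpired:
        return "Error: code execution timed out"
```

The flags shown are only a starting point; real sandboxes also drop capabilities, limit CPU, and restrict filesystem access.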

The Agent Main Loop

The core is the ReAct loop, plus tool definitions and message management:

import json
from openai import OpenAI

# Initialize client (replace with your API endpoint and key)
client = OpenAI(
    base_url="https://api.openai.com/v1",  # or any compatible endpoint
    api_key="your-api-key"
)

# Tool definitions (JSON Schema — the model needs this to understand tools)
TOOLS = [
    {
        "type": "function",
        "function": {
            "name": "search_web",
            "description": "Search the web for current information. Use when real-time data or knowledge beyond training cutoff is needed.",
            "parameters": {
                "type": "object",
                "properties": {
                    "query": {"type": "string", "description": "Search query"}
                },
                "required": ["query"]
            }
        }
    },
    {
        "type": "function",
        "function": {
            "name": "run_python",
            "description": "Execute Python code for calculations or data processing.",
            "parameters": {
                "type": "object",
                "properties": {
                    "code": {"type": "string", "description": "Python code to execute"}
                },
                "required": ["code"]
            }
        }
    }
]
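
Writing these schemas by hand gets tedious fast; frameworks typically derive them from the function itself. A hypothetical helper (string-typed parameters only, for brevity) sketching that idea with `inspect`:

```python
import inspect

def tool_schema(fn) -> dict:
    """Derive a tool definition from a function's name, docstring,
    and parameter names (assumes every parameter is a string)."""
    params = {
        name: {"type": "string", "description": name}
        for name in inspect.signature(fn).parameters
    }
    return {
        "type": "function",
        "function": {
            "name": fn.__name__,
            "description": (fn.__doc__ or "").strip(),
            "parameters": {
                "type": "object",
                "properties": params,
                "required": list(params),
            },
        },
    }

# Stub with the same shape as the article's search tool, for illustration
def search_web_stub(query: str) -> str:
    """Search the web for current information."""
    return ""

print(tool_schema(search_web_stub)["function"]["name"])  # search_web_stub
```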

def run_agent(user_input: str, max_turns: int = 10) -> str:
    """Main Agent loop."""
    messages = [
        {"role": "system", "content": "You are a helpful assistant. You can search the web for current info and run Python for calculations. Prefer using tools for accuracy — don't guess from memory."},
        {"role": "user", "content": user_input}
    ]

    for turn in range(max_turns):
        # Call the model
        response = client.chat.completions.create(
            model="gpt-4o",  # or deepseek-chat, etc.
            messages=messages,
            tools=TOOLS,
            tool_choice="auto"
        )

        msg = response.choices[0].message

        # If the model responds directly (no more tool calls), we're done
        if not msg.tool_calls:
            return msg.content or ""

        # Record the model's tool-call message first: the API requires each
        # "tool" result to follow the assistant message that requested it
        messages.append(msg)

        # Execute each tool call
        for tool_call in msg.tool_calls:
            fn_name = tool_call.function.name
            fn_args = json.loads(tool_call.function.arguments)

            print(f"🔧 Calling tool: {fn_name}({fn_args})")

            # Execute the tool
            if fn_name == "search_web":
                result = search_web(**fn_args)
            elif fn_name == "run_python":
                result = run_python(**fn_args)
            else:
                result = f"Unknown tool: {fn_name}"

            # Add the tool result to messages
            messages.append({
                "role": "tool",
                "tool_call_id": tool_call.id,
                "content": result
            })

    return "Max turns reached. Task may be incomplete."

# Run
if __name__ == "__main__":
    answer = run_agent(
        "What's the time difference between Beijing and Shanghai? "
        "Also compute 2^20 using Python."
    )
    print(f"\n✅ Final answer:\n{answer}")

What Happens When You Run It

The Agent will:

  1. Understand there are two sub-questions
  2. First call run_python("print(2**20)") → 1048576
  3. Then call search_web("Beijing Shanghai time difference") → both in UTC+8
  4. Combine both results into a final answer

You'll see something like:

🔧 Calling tool: run_python({'code': 'print(2**20)'})
🔧 Calling tool: search_web({'query': 'Beijing Shanghai time difference'})

✅ Final answer:
2^20 = 1,048,576.
Beijing and Shanghai are both in China Standard Time (UTC+8), so there is no time difference.
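
Under the hood, the context the model sees after one tool round is strictly ordered: the assistant's tool-call message comes before its matching tool result (the API rejects the reverse order). A sketch of that state, with illustrative ids:

```python
# Context after one tool round; the assistant tool-call message must
# precede its matching "tool" result ("call_1" is an illustrative id).
messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Compute 2^20 using Python."},
    {"role": "assistant", "content": None,
     "tool_calls": [{"id": "call_1", "type": "function",
                     "function": {"name": "run_python",
                                  "arguments": '{"code": "print(2**20)"}'}}]},
    {"role": "tool", "tool_call_id": "call_1", "content": "1048576\n"},
]
roles = [m["role"] for m in messages]
print(roles)  # ['system', 'user', 'assistant', 'tool']
```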

What These 50 Lines Really Are

This isn't a toy. These 50 lines contain the core of every production Agent framework:

  1. Tool abstraction — JSON Schema describes tools, the model auto-understands them
  2. ReAct loop — Think → Act → Observe → Think again
  3. Message management — Four roles (system/user/assistant/tool) precisely control context
  4. Safety boundary — Turn limit prevents infinite loops

Understand these 50 lines, and you understand what LangChain, AutoGPT, CrewAI, and other frameworks are doing under the hood — they just add more tools, better error handling, and more complex orchestration on top of this same loop.

Three Ways to Extend

From here, you can go in three directions:

  1. Add more tools — file reading, email, API calls… anything with an interface becomes a tool
  2. Add memory — vector databases or simple file storage for cross-session knowledge
  3. Add error recovery — when a tool fails, let the model see the error and retry
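
Direction 3 can start as small as a try/except around tool dispatch, so a failure becomes an observation the model can react to rather than a crash. A minimal sketch (the helper and tool names here are made up):

```python
def safe_call(fn, **kwargs) -> str:
    """Run a tool; on failure, return the error text as the observation
    so the model can see what went wrong and retry."""
    try:
        return fn(**kwargs)
    except Exception as e:
        return f"Tool error ({type(e).__name__}): {e}. Adjust the arguments and retry."

def flaky_tool(x: str) -> str:
    raise ValueError("bad input")

print(safe_call(flaky_tool, x="hi"))
# Tool error (ValueError): bad input. Adjust the arguments and retry.
```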

Future articles will expand on each of these.

📖 Next: Agent Tool Design Best Practices — how to write tool descriptions that models actually understand