Make sure your virtual environment is still activated (you should see (.venv) in your prompt). If commands aren’t working as expected, try prefixing them with uv run.
Using your text editor, create a file named my_agent.py in your project directory:
import platform
from datetime import datetime

from gaia.agents.base.agent import Agent
from gaia.agents.base.tools import tool


class MyAgent(Agent):
    """A simple agent that can report system information."""

    def _get_system_prompt(self) -> str:
        return """You are a system monitoring assistant.
When users ask about time or system details, use the get_system_info tool."""

    def _register_tools(self):
        @tool
        def get_system_info() -> dict:
            """Get current time, date, platform, and Python version."""
            return {
                "time": datetime.now().strftime("%H:%M:%S"),
                "date": datetime.now().strftime("%Y-%m-%d"),
                "platform": platform.system(),
                "python": platform.python_version()
            }


# Use the agent
agent = MyAgent()
result = agent.process_query("What time is it and what system am I on?")
print(result.get("result"))
View full source: agent.py · tools.py
Run it (in your terminal/PowerShell):
python my_agent.py
First run may take a moment while GAIA starts Lemonade Server and loads the LLM.
You’ll see the agent thinking, creating a plan, and executing the tool:
🤖 Processing: 'What time is it and what system am I on?'
...
🔧 Executing operation
Tool: get_system_info
✅ Tool execution complete
{
  "time": "15:03:26",
  "date": "2025-12-17",
  "platform": "Windows",
  "python": "3.12.12"
}
...
✨ Processing complete!
Final output (will vary based on your system):
The current time is 15:03:26 and you are on a Windows system running Python 3.12.12.
Tip: The tool’s docstring is how the LLM knows what the tool does. Be descriptive!
"""Get current time, date, platform, and Python version.""" tells the LLM this tool can answer time-related questions.
The system prompt tells your agent who it is and how to make decisions. You define it by returning a string:
def _get_system_prompt(self) -> str:
    return """You are a system monitoring assistant.
When users ask about time or system details, use the get_system_info tool."""
For agents, a good prompt includes:
Role: What the agent specializes in — “You are a code review assistant…”
Tool guidance: When to use tools vs. respond directly — “Use the search tool for questions about files…”
Style: Tone and boundaries — “Be concise. Only answer questions about this codebase.”
The system prompt and tools work together: the prompt shapes how the agent reasons, while tools define what it can do.
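Put together, a slightly fuller _get_system_prompt for this agent could look like the sketch below. The exact wording is just one illustration, not something GAIA requires:

def _get_system_prompt(self) -> str:
    # Role, tool guidance, and style combined in one prompt (illustrative wording)
    return """You are a system monitoring assistant.

Use the get_system_info tool whenever the user asks about the current time,
date, operating system, or Python version. Answer other questions directly
without calling a tool.

Be concise and report values exactly as the tool returns them."""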
Tools are just Python functions with the @tool decorator:
@tool
def get_system_info() -> dict:
    """Get current time, date, platform, and Python version."""
    return {"time": "14:32:05", "platform": "Windows", ...}
The LLM automatically sees all registered tools and their docstrings. When you ask a question, it decides which tools (if any) to call based on their descriptions. That’s it — no configuration, no routing logic. Just write functions and the agent knows what it can do.
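For example, adding a second capability is just another decorated function inside _register_tools; the LLM chooses between tools based on their docstrings alone. The get_cpu_count tool below is a hypothetical addition, assuming the same imports as my_agent.py plus the standard library's os module:

def _register_tools(self):
    @tool
    def get_system_info() -> dict:
        """Get current time, date, platform, and Python version."""
        return {
            "time": datetime.now().strftime("%H:%M:%S"),
            "date": datetime.now().strftime("%Y-%m-%d"),
            "platform": platform.system(),
            "python": platform.python_version(),
        }

    @tool
    def get_cpu_count() -> dict:
        """Get the number of logical CPU cores on this machine."""
        # requires `import os` at the top of my_agent.py
        return {"logical_cores": os.cpu_count()}

With that in place, a question like "How many CPU cores do I have?" should lead the agent to call get_cpu_count, while time questions still go to get_system_info.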
You’ve built a simple agent. Now let’s build something practical: an agent that analyzes your system hardware and recommends which LLMs you can run locally.