First time here? Complete the Setup guide first to install Lemonade Server and uv.

Install GAIA

Choose your platform and installation type, then follow the steps below.
The steps below install amd-gaia from PyPI; this is the recommended path for most users on Windows.

Step 1: Create Project Directory

Open PowerShell and run:
mkdir my-gaia-project
cd my-gaia-project

Step 2: Create Virtual Environment

uv venv .venv --python 3.12
uv will automatically download Python 3.12 if not already installed.

Step 3: Activate the Environment

On Windows, run:
.\.venv\Scripts\Activate.ps1
On Linux/macOS, run:
source .venv/bin/activate
If you see a script execution error, run this once:
Set-ExecutionPolicy -ExecutionPolicy RemoteSigned -Scope CurrentUser
Then retry the activation command.
You should see (.venv) in your terminal prompt when activated.

Step 4: Install GAIA

uv pip install amd-gaia
Optional extras: uv pip install "amd-gaia[talk,rag]" for voice and document Q&A features.

Step 5: Verify Installation

gaia -v
Having issues? Check the Troubleshooting guide, create an issue on GitHub, or contact us at [email protected].

Build Your First Agent

Make sure your virtual environment is still activated (you should see (.venv) in your prompt). If commands aren’t working as expected, try prefixing them with uv run.
Using your text editor, create a file named my_agent.py in your project directory:
import platform
from datetime import datetime
from gaia.agents.base.agent import Agent
from gaia.agents.base.tools import tool

class MyAgent(Agent):
    """A simple agent that can report system information."""

    def _get_system_prompt(self) -> str:
        return """You are a system monitoring assistant.
When users ask about time or system details, use the get_system_info tool."""

    def _register_tools(self):
        @tool
        def get_system_info() -> dict:
            """Get current time, date, platform, and Python version."""
            return {
                "time": datetime.now().strftime("%H:%M:%S"),
                "date": datetime.now().strftime("%Y-%m-%d"),
                "platform": platform.system(),
                "python": platform.python_version()
            }

# Use the agent
agent = MyAgent()
result = agent.process_query("What time is it and what system am I on?")
print(result.get("result"))
View full source: agent.py · tools.py

Run it (in your terminal/PowerShell):
python my_agent.py
First run may take a moment while GAIA starts Lemonade Server and loads the LLM.
You’ll see the agent thinking, creating a plan, and executing the tool:
🤖 Processing: 'What time is it and what system am I on?'
...
🔧 Executing operation
  Tool: get_system_info

✅ Tool execution complete
{
  "time": "15:03:26",
  "date": "2025-12-17",
  "platform": "Windows",
  "python": "3.12.12"
}
...
✨ Processing complete!
Final output (will vary based on your system):
The current time is 15:03:26 and you are on a Windows system running Python 3.12.12.
Tip: The tool’s docstring is how the LLM knows what the tool does. Be descriptive! """Get current time, date, platform, and Python version.""" tells the LLM this tool can answer time-related questions.

How It Works

The Agent Base Class

The Agent class handles the core loop: receiving queries, calling the LLM, executing tools, and returning responses. You extend it by defining:
  • _get_system_prompt() — Instructions that shape the agent’s behavior
  • _register_tools() — Functions the agent can call to take actions

System Prompt

The system prompt tells your agent who it is and how to make decisions. You define it by returning a string:
def _get_system_prompt(self) -> str:
    return """You are a system monitoring assistant.
When users ask about time or system details, use the get_system_info tool."""
For agents, a good prompt includes:
  • Role: What the agent specializes in — “You are a code review assistant…”
  • Tool guidance: When to use tools vs. respond directly — “Use the search tool for questions about files…”
  • Style: Tone and boundaries — “Be concise. Only answer questions about this codebase.”
The system prompt and tools work together: the prompt shapes how the agent reasons, while tools define what it can do.
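As an illustration of those three elements, here is a hedged sketch of a prompt builder for a hypothetical code review agent. The read_file and search tool names are invented for this example and are not part of GAIA:

```python
def build_code_review_prompt() -> str:
    """Assemble a system prompt from role, tool guidance, and style.
    Illustrative only: the tool names below are hypothetical."""
    role = "You are a code review assistant."
    tool_guidance = (
        "Use the read_file tool for questions about file contents, "
        "and the search tool to locate symbols in the codebase."
    )
    style = "Be concise. Only answer questions about this codebase."
    # One line per element keeps the prompt easy to audit and edit.
    return "\n".join([role, tool_guidance, style])

print(build_code_review_prompt())
```

Returning a string like this from _get_system_prompt() in your Agent subclass wires it in the same way as the get_system_info example above.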

Tools

Tools are just Python functions with the @tool decorator:
@tool
def get_system_info() -> dict:
    """Get current time, date, platform, and Python version."""
    return {"time": "14:32:05", "platform": "Windows", ...}
The LLM automatically sees all registered tools and their docstrings. When you ask a question, it decides which tools (if any) to call based on their descriptions. That’s it — no configuration, no routing logic. Just write functions and the agent knows what it can do.
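GAIA's actual @tool implementation is not reproduced here, but the idea can be sketched with a minimal registry that records each function's name and docstring, which is roughly the information surfaced to the LLM. This is a simplified illustration, not GAIA internals:

```python
import platform
from datetime import datetime

# Simplified sketch of a docstring-driven tool registry (not GAIA's actual code).
TOOLS = {}

def tool(fn):
    """Register a function so its name and docstring can be offered to the LLM."""
    TOOLS[fn.__name__] = {"description": fn.__doc__, "fn": fn}
    return fn

@tool
def get_system_info() -> dict:
    """Get current time, date, platform, and Python version."""
    return {
        "time": datetime.now().strftime("%H:%M:%S"),
        "date": datetime.now().strftime("%Y-%m-%d"),
        "platform": platform.system(),
        "python": platform.python_version(),
    }

# What the LLM effectively sees: one line of name + description per tool.
for name, meta in TOOLS.items():
    print(f"{name}: {meta['description']}")
```

This is why the docstring matters so much: in a scheme like this, it is the only description of the tool the model ever sees.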

The Agent Loop

When you call agent.process_query("What time is it?"), GAIA runs an iterative loop:
1. Think: The LLM receives your query plus the system prompt and available tools. It decides what to do next.
2. Act: If the LLM decides to use a tool, GAIA executes it and captures the result.
3. Observe: The tool result is sent back to the LLM, which can then decide to call another tool or respond.
4. Respond: When the LLM has enough information, it generates a natural language response for the user.
This loop continues until the LLM decides it has a complete answer. Complex tasks may involve multiple tool calls before responding.
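The four steps above can be sketched in a few lines with a hard-coded function standing in for the real model. This is an illustration of the loop's shape only, not GAIA's internals; the fake_llm, get_time, and message format here are all invented for the example:

```python
def fake_llm(messages):
    """Stand-in for a real LLM: request a tool on the first turn,
    produce a final answer once a tool result is in the conversation."""
    tool_results = [m for m in messages if m["role"] == "tool"]
    if tool_results:
        return {"type": "answer",
                "content": f"The time is {tool_results[-1]['content']}."}
    return {"type": "tool_call", "name": "get_time"}

def get_time():
    return "15:03:26"

TOOLS = {"get_time": get_time}

def process_query(query, max_iterations=5):
    messages = [{"role": "user", "content": query}]
    for _ in range(max_iterations):
        decision = fake_llm(messages)                  # Think
        if decision["type"] == "tool_call":
            result = TOOLS[decision["name"]]()         # Act
            messages.append({"role": "tool", "content": result})  # Observe
        else:
            return decision["content"]                 # Respond
    return "Stopped after too many iterations."

print(process_query("What time is it?"))
# → The time is 15:03:26.
```

The iteration cap mirrors a common design choice in real agent loops: without it, a model that keeps requesting tools would never terminate.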

What’s Next?

You’ve built a simple agent. Now let’s build something practical: an agent that analyzes your system hardware and recommends which LLMs you can run locally.

Hardware Advisor Playbook

Build an agent that detects your hardware and recommends which LLMs you can run locally.

More Playbooks

Guides & Reference

Stuck? Join our Discord or create an issue on GitHub.