Not a developer? Start with the Desktop Installer — one download, one double-click, and you’re chatting with a GAIA agent in under 10 minutes. No terminal required.
Run AI agents 100% locally on your AMD hardware — analyze documents, generate code, answer questions, and accomplish tasks on your PC without sending data to the cloud. The GAIA Agent UI desktop app is the primary install path for end users. It ships as a native installer for Windows, macOS, and Linux, handles the Python backend setup automatically on first launch, and auto-updates.

  • Windows: Download the .exe NSIS installer
  • macOS: Download the .dmg (Apple Silicon)
  • Linux: Download the .deb or .AppImage
See the full Installation guide for step-by-step instructions per platform, first-launch setup details, update and uninstall instructions, and privacy information. If something goes wrong, the installation troubleshooting guide covers the most common failure modes.
After installing the desktop app, skip ahead to Build Your First Agent below if you want to dive straight into coding — the app takes care of the rest.

For developers

The rest of this page covers the developer install paths (npm CLI, pip, clone-and-install) for people who want to build agents, extend GAIA, or run it from source. End users should use the desktop installer above.

Agent UI (npm)

For developers who prefer npm and Node.js tooling. Ships the same Electron app as the desktop installer but driven from the command line.
npm install -g @amd-gaia/agent-ui
Then run:
gaia-ui
On first run, GAIA automatically installs the Python backend and all dependencies.
Requires Node.js 20+. If you don’t have it:
  • Windows: winget install OpenJS.NodeJS.LTS
  • macOS: brew install node@20
  • Linux: curl -fsSL https://deb.nodesource.com/setup_20.x | sudo -E bash - && sudo apt install -y nodejs
See the Agent UI guide for more details.

Update

npm install -g @amd-gaia/agent-ui@latest
On next run, GAIA automatically updates the Python backend to match. To install a specific version (current release is v0.17.2):
npm install -g @amd-gaia/agent-ui@0.17.2

Uninstall

On macOS/Linux:
npm uninstall -g @amd-gaia/agent-ui
rm -rf ~/.gaia
On Windows (PowerShell):
npm uninstall -g @amd-gaia/agent-ui
Remove-Item -Recurse -Force "$env:USERPROFILE\.gaia"

CLI Install

First time here? Complete the Setup guide first to install uv (Python package manager).
Recommended if you want to try the GAIA CLI. Install GAIA globally with a single command. Open PowerShell and run:
irm https://amd-gaia.ai/install.ps1 | iex
This will:
  • ✅ Install uv (if not already installed)
  • ✅ Download Python 3.12 (if needed)
  • ✅ Create %USERPROFILE%\.gaia\venv virtual environment
  • ✅ Install GAIA CLI (accessible globally via PATH)
  • ✅ Add GAIA to your PATH
After installation, close and reopen your terminal, then run:
gaia init --profile minimal

Manual Install

Recommended for developers integrating GAIA into their projects. This path installs amd-gaia from PyPI in a project-specific virtual environment.

Step 1: Create Project Directory

Open PowerShell and run:
mkdir my-gaia-project
cd my-gaia-project

Step 2: Create Virtual Environment

uv venv .venv --python 3.12
uv will automatically download Python 3.12 if not already installed.

Step 3: Activate the Environment

Windows (PowerShell):
.\.venv\Scripts\Activate.ps1
macOS/Linux:
source .venv/bin/activate
If PowerShell reports a script execution error, run this once:
Set-ExecutionPolicy -ExecutionPolicy RemoteSigned -Scope CurrentUser
Then retry the activation command.
You should see (.venv) in your terminal prompt when activated.

Step 4: Install GAIA

uv pip install amd-gaia
Optional extras: uv pip install "amd-gaia[talk,rag]" for voice and document Q&A features.

Step 5: Verify Installation

gaia -v

Step 6: Initialize GAIA

Install Lemonade Server and download models with a single command:
gaia init --profile minimal
Use --profile chat for the full experience (~25GB), --profile vlm for vision/document extraction (~3GB), or --profile minimal for a quick start (~400MB). See CLI Reference for all profiles.
Having issues? Check the Troubleshooting guide, create an issue on GitHub, or contact us at [email protected].

Build Your First Agent

Make sure your virtual environment is still activated (you should see (.venv) in your prompt). If commands aren’t working as expected, try prefixing them with uv run.
Using your text editor, create a file named my_agent.py in your project directory:
import platform
from datetime import datetime
from gaia.agents.base.agent import Agent
from gaia.agents.base.tools import tool

class MyAgent(Agent):
    """A simple agent that can report system information."""

    def _get_system_prompt(self) -> str:
        return """You are a system monitoring assistant.
When users ask about time or system details, use the get_system_info tool."""

    def _register_tools(self):
        @tool
        def get_system_info() -> dict:
            """Get current time, date, platform, and Python version."""
            return {
                "time": datetime.now().strftime("%H:%M:%S"),
                "date": datetime.now().strftime("%Y-%m-%d"),
                "platform": platform.system(),
                "python": platform.python_version()
            }

# Use the agent
agent = MyAgent()
result = agent.process_query("What time is it and what system am I on?")
print(result.get("result"))
View full source: agent.py · tools.py

Run it (in your terminal/PowerShell):
python my_agent.py
First run may take a moment while GAIA starts Lemonade Server and loads the LLM.
You’ll see the agent thinking, creating a plan, and executing the tool:
🤖 Processing: 'What time is it and what system am I on?'
...
🔧 Executing operation
  Tool: get_system_info

✅ Tool execution complete
{
  "time": "15:03:26",
  "date": "2025-12-17",
  "platform": "Windows",
  "python": "3.12.12"
}
...
✨ Processing complete!
Final output (will vary based on your system):
The current time is 15:03:26 and you are on a Windows system running Python 3.12.12.
Tip: The tool’s docstring is how the LLM knows what the tool does, so be descriptive. For example, """Get current time, date, platform, and Python version.""" tells the LLM this tool can answer time-related questions.

How It Works

The Agent Base Class

The Agent class handles the core loop: receiving queries, calling the LLM, executing tools, and returning responses. You extend it by defining:
  • _get_system_prompt() — Instructions that shape the agent’s behavior
  • _register_tools() — Functions the agent can call to take actions

System Prompt

The system prompt tells your agent who it is and how to make decisions. You define it by returning a string:
def _get_system_prompt(self) -> str:
    return """You are a system monitoring assistant.
When users ask about time or system details, use the get_system_info tool."""
For agents, a good prompt includes:
  • Role: What the agent specializes in — “You are a code review assistant…”
  • Tool guidance: When to use tools vs. respond directly — “Use the search tool for questions about files…”
  • Style: Tone and boundaries — “Be concise. Only answer questions about this codebase.”
The system prompt and tools work together: the prompt shapes how the agent reasons, while tools define what it can do.
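As a sketch, here is how those three elements might combine for a hypothetical code-review agent. The agent, its "search" tool, and this helper function are assumptions for illustration, not part of GAIA:

```python
# Hypothetical example: a system prompt assembled from the three
# elements above (role, tool guidance, style). In a real agent you
# would return this string from _get_system_prompt().
def code_review_system_prompt() -> str:
    role = "You are a code review assistant for this Python codebase."
    tool_guidance = (
        "Use the search tool for questions about specific files; "
        "answer general style questions directly."
    )
    style = "Be concise. Only answer questions about this codebase."
    return "\n".join([role, tool_guidance, style])

print(code_review_system_prompt())
```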

Tools

Tools are just Python functions with the @tool decorator:
@tool
def get_system_info() -> dict:
    """Get current time, date, platform, and Python version."""
    return {"time": "14:32:05", "platform": "Windows", ...}
The LLM automatically sees all registered tools and their docstrings. When you ask a question, it decides which tools (if any) to call based on their descriptions. That’s it — no configuration, no routing logic. Just write functions and the agent knows what it can do.
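For instance, a tool can also take parameters, and its type hints and docstring describe them to the LLM. The sketch below stubs @tool as a pass-through so it runs standalone; in a real agent you would use the decorator from gaia.agents.base.tools, and the disk_usage tool itself is an illustrative assumption:

```python
import shutil

# Stand-in for GAIA's @tool decorator (registration omitted) so this
# example runs outside an agent.
def tool(fn):
    return fn

@tool
def disk_usage(path: str = ".") -> dict:
    """Get total and free disk space in gigabytes for a path."""
    usage = shutil.disk_usage(path)
    gb = 1024 ** 3
    return {
        "total_gb": round(usage.total / gb, 1),
        "free_gb": round(usage.free / gb, 1),
    }
```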

The Agent Loop

When you call agent.process_query("What time is it?"), GAIA runs an iterative loop:
1. Think: The LLM receives your query plus the system prompt and available tools, and decides what to do next.
2. Act: If the LLM decides to use a tool, GAIA executes it and captures the result.
3. Observe: The tool result is sent back to the LLM, which can then decide to call another tool or respond.
4. Respond: When the LLM has enough information, it generates a natural language response for the user.
This loop continues until the LLM decides it has a complete answer. Complex tasks may involve multiple tool calls before responding.
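The loop can be sketched in a few lines. This is a minimal illustration of the pattern, not GAIA's actual implementation; the llm callable, the message format, and the scripted "clock" tool are all assumptions:

```python
# Minimal think/act/observe/respond loop. `llm` returns either
# {"tool": name, "args": {...}} or {"answer": text}.
def agent_loop(query, llm, tools, max_steps=5):
    messages = [{"role": "user", "content": query}]
    for _ in range(max_steps):
        decision = llm(messages)                    # 1. Think
        if decision.get("tool"):                    # 2. Act
            result = tools[decision["tool"]](**decision.get("args", {}))
            messages.append({"role": "tool", "content": str(result)})  # 3. Observe
        else:
            return decision["answer"]               # 4. Respond
    return "Step limit reached without a final answer."

# Toy run: a scripted "LLM" that calls a clock tool, then answers.
def scripted_llm(messages):
    if any(m["role"] == "tool" for m in messages):
        return {"answer": "It is " + messages[-1]["content"] + "."}
    return {"tool": "clock", "args": {}}

print(agent_loop("What time is it?", scripted_llm, {"clock": lambda: "12:00"}))
# prints "It is 12:00."
```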

What’s Next?

You’ve built a simple agent. Now let’s build something practical: an agent that analyzes your system hardware and recommends which LLMs you can run locally.

Hardware Advisor Playbook

Build an agent that detects your hardware and recommends which LLMs you can run locally.

More Playbooks

Chat Agent

Build a document Q&A agent with RAG capabilities

Code Agent

Build an agent that generates and validates code projects

All Playbooks

Step-by-step tutorials for building real-world agents

Guides & Reference

All User Guides

Pre-built agents for chat, voice, code, Jira, Docker, and more

Connect to External Tools

Use MCP to connect your agent to GitHub, databases, filesystems, and hundreds more

SDK Reference

Complete API documentation for all components

CLI Reference

Command-line tools for chat, voice, RAG, and more

Glossary

Learn GAIA terminology: agents, tools, RAG, NPU, and more

Developer Guide

Testing, linting, and contributing to GAIA
Stuck? Join our Discord or create an issue on GitHub.