
Installation and Setup

Yes, you can reinstall GAIA. The installer provides an option to remove your existing installation before reinstalling.
Run the installer from the command line with parameters for CI/CD or silent installations:
gaia-windows-setup.exe /S
Available parameters:
  • /S - Silent installation (no UI)
  • /D=<path> - Set installation directory (must be last parameter)
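For CI scripts, the only flags documented above are /S and /D, and /D must come last. A minimal sketch of building that command from Python; `build_install_cmd` is a hypothetical helper, not part of GAIA:

```python
# Sketch: assemble the silent-install command for a CI script.
# Only /S and /D are documented; /D=<path> must be the last argument.
# build_install_cmd is an illustrative helper, not a GAIA API.

def build_install_cmd(installer="gaia-windows-setup.exe", install_dir=None):
    cmd = [installer, "/S"]  # /S: silent installation (no UI)
    if install_dir:
        # /D=<path> must be appended last
        cmd.append(f"/D={install_dir}")
    return cmd

# Example: subprocess.run(build_install_cmd(install_dir=r"C:\GAIA"))
```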
GAIA is designed for AMD Ryzen AI systems:
Component    Requirement
Processor    AMD Ryzen AI 300-series (for optimal performance)
RAM          16GB minimum, 64GB recommended
Storage      20GB free space
OS           Windows 11 Pro 24H2 or Ubuntu 22.04+
Driver Requirements:
  • Radeon iGPU: 32.0.22029.1019 or later
  • NPU: 32.0.203.314 or later
Platform Support:
  • Windows 11: ✅ Fully supported with complete UI and CLI functionality
  • Linux (Ubuntu/Debian): ✅ Fully supported with complete UI and CLI functionality
Additional models can be installed through Lemonade Server’s model management interface:
  • System Tray Icon: Access the Lemonade model manager from the system tray
  • Web UI: Manage models through the Lemonade web interface
→ Installing Additional Models
→ Lemonade Model Management

Demo and Capabilities

Discover the capabilities of Ryzen AI with GAIA - an innovative generative AI application that runs private, local LLMs on the Neural Processing Unit (NPU).
GAIA is AMD’s generative AI application that runs local, private LLMs on Ryzen AI’s NPU hardware. It leverages the power of the NPU for faster, more efficient processing, allowing users to keep their data local without relying on cloud infrastructure. GAIA uses Lemonade Server to load and run LLM inference optimally on AMD hardware.
Key Benefits:
  • Local execution (no cloud dependency)
  • Enhanced privacy
  • Optimized for AMD NPU hardware
  • Lower power consumption
The RAG (Retrieval-Augmented Generation) pipeline combines an LLM with a knowledge base.
Components:
  • Agent: Capable of retrieving relevant information
  • Reasoning: Plans and executes multi-step tasks
  • Tool Use: Accesses external tools and APIs
  • Interactive Chat: Real-time conversation interface
This enables more accurate and contextually aware responses by grounding the LLM in your specific documents and data.
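GAIA's actual retrieval implementation is not shown here, but the grounding idea can be sketched with a toy retriever. Bag-of-words vectors and cosine similarity stand in for the learned embedding model a real RAG system would use:

```python
# Toy sketch of the retrieval step in a RAG pipeline.
# Real systems use learned embedding models; bag-of-words vectors
# stand in here so the example is self-contained.
from collections import Counter
import math

def embed(text):
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, docs, k=1):
    q = embed(query)
    ranked = sorted(docs, key=lambda d: cosine(q, embed(d)), reverse=True)
    return ranked[:k]

docs = [
    "Install dependencies with pip install -r requirements.txt",
    "The NPU accelerates matrix multiplications",
]
context = retrieve("how do I install dependencies", docs)
# The retrieved chunk is prepended to the prompt to ground the LLM:
prompt = f"Context: {context[0]}\n\nQuestion: how do I install dependencies"
```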
GAIA supports various local LLMs optimized for Ryzen AI NPU hardware, including:
Hybrid Mode (NPU + iGPU) - Ryzen AI 300 series:
  • Phi-3.5 Mini Instruct
  • Phi-3 Mini Instruct
  • Llama-2 7B Chat
  • Llama-3.2 1B/3B Instruct
  • Qwen 1.5 7B Chat
  • Mistral 7B Instruct
These models are tailored for different use cases like Q&A, summarization, and complex reasoning tasks.
→ Full Model List
The NPU (Neural Processing Unit) in Ryzen AI is specialized for AI workloads, specifically the matrix multiplications (GEMMs) in the model.
Benefits:
  • Faster Inference: Optimized hardware acceleration
  • Lower Power: More efficient than CPU/iGPU
  • System Offload: Reduces load on main processor
This results in significant performance gains for local AI processing.
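To see why GEMMs matter, note that each transformer layer is dominated by matrix multiplies (attention projections, feed-forward layers). A pure-Python sketch with illustrative toy shapes; an NPU executes these same operations in dedicated hardware:

```python
# Why GEMMs dominate: transformer inference is mostly matrix multiplies.
# Pure-Python GEMM for illustration only; shapes are toy-sized.

def matmul(A, B):
    n, k, m = len(A), len(B), len(B[0])
    return [[sum(A[i][p] * B[p][j] for p in range(k)) for j in range(m)]
            for i in range(n)]

# One projection step: x (tokens x d_model) @ W (d_model x d_head)
x = [[1.0, 2.0], [3.0, 4.0]]   # 2 tokens, d_model = 2
W = [[0.5, 0.0], [0.0, 0.5]]   # toy weight matrix
y = matmul(x, W)               # [[0.5, 1.0], [1.5, 2.0]]
```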
Absolutely. While GAIA showcases a local, private implementation:
  • Architecture scales to larger models
  • NPU optimization ensures efficient scaling
  • Enterprise deployments supported
  • Hybrid cloud/local configurations possible
The same architecture works for both small-scale and enterprise-level deployments.
Privacy:
  • No data leaves your machine
  • Complete control over sensitive information
Performance:
  • Reduced latency (no cloud communication)
  • Faster response times with NPU acceleration
Cost:
  • No ongoing cloud API costs
  • Lower power consumption
NPU (Neural Processing Unit):
  • Optimized specifically for AI inference
  • Lower power consumption
  • Faster for LLM workloads
  • AI-focused architecture
iGPU (Integrated GPU):
  • General-purpose graphics/compute
  • Higher power consumption for AI
  • Slower inference for LLMs
  • Graphics-focused architecture
The NPU provides better efficiency and speed for generative AI tasks.

Demo Components and Workflow

The GAIA demo consists of two main components working together to provide powerful local AI capabilities.

System Architecture

1. GAIA Backend. Powered by the Ryzen AI platform through Lemonade Server:
  • NPU/iGPU Acceleration: Leverages Ryzen AI hardware
  • Multiple Models: Supports various LLMs including Llama-3.2-3B-Instruct-Hybrid
  • Agent System: Specialized tasks and workflows
  • WebSocket Streaming: Real-time response delivery
  • Dual Interfaces: Both CLI and GUI available
2. Agent Interface. Works with the Lemonade SDK:
  • Repository Vectorization: Fetches and indexes code repositories
  • Local Vector Storage: Fast, local similarity search
  • Fast Indexing: ~10 seconds for 40,000 lines of code (typical laptop)
  • Ready for Queries: Instant access to indexed knowledge
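The vectorization step above can be pictured as walking a repository and splitting each file into fixed-size chunks for indexing. GAIA's actual chunking strategy and vector store are not documented here; the chunk size and `index_repo` helper below are illustrative assumptions:

```python
# Sketch: split a repository's files into line-based chunks for indexing.
# GAIA's real chunking and vector store are not documented here;
# chunk_lines=40 and index_repo are illustrative choices.
from pathlib import Path

def chunk_file(text, chunk_lines=40):
    lines = text.splitlines()
    return ["\n".join(lines[i:i + chunk_lines])
            for i in range(0, len(lines), chunk_lines)]

def index_repo(root, suffixes={".py", ".md"}):
    index = []  # (path, chunk) pairs; a real store would hold embeddings
    for path in Path(root).rglob("*"):
        if path.is_file() and path.suffix in suffixes:
            for chunk in chunk_file(path.read_text(errors="ignore")):
                index.append((str(path), chunk))
    return index
```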

Query Process

1. User Input: The query is sent to GAIA (e.g., “How do I install dependencies?”)
2. Embedding Generation: For RAG queries, the input is transformed into embeddings
3. Context Retrieval: Relevant content is retrieved from local repositories/documents
4. LLM Processing: The context is passed to the LLM via Lemonade Server for generation
5. Response Streaming: The generated response is streamed back in real time through the GAIA interfaces
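The steps above can be sketched as a minimal pipeline. Retrieval and generation are stubbed out (in GAIA the LLM call goes through Lemonade Server), and streaming is modeled as a Python generator standing in for WebSocket delivery:

```python
# The query process as a minimal pipeline. Retrieval and generation are
# stubs; in GAIA the LLM call goes through Lemonade Server.

def retrieve_context(query, knowledge):
    # Steps 2-3: naive word matching stands in for embedding similarity
    words = query.lower().split()
    return [doc for doc in knowledge if any(w in doc.lower() for w in words)]

def generate(prompt):
    # Step 4: placeholder for the LLM call via Lemonade Server
    return f"Answer based on: {prompt}"

def stream(text, chunk=8):
    # Step 5: yield the response incrementally, like WebSocket streaming
    for i in range(0, len(text), chunk):
        yield text[i:i + chunk]

knowledge = ["Run pip install -r requirements.txt to install dependencies."]
ctx = retrieve_context("how do we install dependencies", knowledge)
reply = "".join(stream(generate(ctx[0])))
```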

Use Cases and Applications

GAIA’s local AI capabilities are ideal for industries requiring high performance and privacy.
Healthcare:
  • Patient data privacy compliance
  • Medical record analysis
  • Clinical decision support
Finance:
  • Sensitive financial data processing
  • Compliance and audit trails
  • Risk analysis
Enterprise:
  • Internal document Q&A
  • Code analysis and generation
  • Knowledge management
Content Creation:
  • Writing assistance
  • Code generation
  • Document summarization
Customer Service:
  • Automated support systems
  • Intent classification
  • Response generation
GAIA emphasizes running LLMs locally, meaning:
  • All data remains on your device
  • No cloud transmission of sensitive information
  • Complete control over data and models
  • Compliance-friendly for regulated industries
  • High-performance AI without privacy trade-offs
This eliminates the need to send sensitive information to the cloud while still delivering powerful AI capabilities.
To run GAIA, you need:
Hardware:
  • Ryzen AI processor (300-series recommended)
Software:
  • Lemonade Server for managing LLMs
  • GAIA framework (available on Windows and Linux)
Interfaces:
  • Command-line interface (CLI) for scripting and automation
  • Graphical user interface (GUI) for interactive use
Both Windows and Linux platforms are fully supported.

See Also