Installation and Setup
Where are my chats stored?
Chats are stored in `~/.gaia/chat/chat.db` (a SQLite file). It is never sent off your machine. Documents you upload are at `~/.gaia/documents/`. See the installation guide for the full list of where GAIA stores data on your machine.
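Because the chat store is plain SQLite, you can inspect it with the standard library. This is a minimal sketch; the table layout isn't documented here, so it only lists whatever tables exist rather than assuming a schema:

```python
import os
import sqlite3

# Path stated in the FAQ above; expanduser resolves "~" per-user.
db_path = os.path.expanduser("~/.gaia/chat/chat.db")

if os.path.exists(db_path):
    with sqlite3.connect(db_path) as con:
        # sqlite_master is SQLite's built-in catalog of schema objects.
        tables = [row[0] for row in con.execute(
            "SELECT name FROM sqlite_master WHERE type = 'table'")]
    print("tables:", tables)
else:
    print("no chat database found at", db_path)
```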
How do I uninstall GAIA?
- Windows: Settings → Apps → Installed apps → GAIA → Uninstall
- macOS: Drag GAIA from `/Applications` to the Trash
- Linux (Debian/Ubuntu): `sudo apt remove gaia-desktop` (or `apt purge` to also remove data)
- Linux (AppImage): Delete the `.AppImage` file

You can also run `~/.gaia/venv/bin/gaia uninstall --purge` from a terminal.
Why is the first launch slow?
On first launch, GAIA downloads `uv`, creates a Python 3.12 virtualenv at `~/.gaia/venv/`, installs the `amd-gaia[ui]` package, downloads the Lemonade Server, and downloads a minimal model. This takes about 5-10 minutes depending on your internet speed. Subsequent launches are instant; the app reuses the installed environment. See the installation guide for a breakdown of each stage.
Does GAIA send my data anywhere?
Only during installation (to fetch the `uv` distribution and packages). Once installed, GAIA can run fully offline with the default Lemonade Server backend. This is a deliberate design choice; see §2 of the desktop installer plan for the rationale. Caveat: if you explicitly configure an external LLM provider (Claude API or OpenAI) via the provider settings, prompts and responses for those sessions are sent to the provider you selected. Local-first is the default; cloud is opt-in.
How do I install GAIA on a machine without internet?
How do I update GAIA?
GAIA checks for updates when it starts. To disable this, set `GAIA_DISABLE_UPDATE=1` before launching. To force a manual check, relaunch the app (the in-app updater runs on startup). See the installation guide for more.
Can I reinstall GAIA?
How do I run GAIA in silent/headless mode?
- `/S`: Silent installation (no UI)
- `/D=<path>`: Set installation directory (must be last parameter)
What are the system requirements?
| Component | Requirement |
|---|---|
| Processor | AMD Ryzen AI 300-series (for optimal performance) |
| RAM | 16GB minimum, 64GB recommended |
| Storage | 20GB free space |
| OS | Windows 11 Pro 24H2 or Ubuntu 22.04+ |
- Radeon iGPU: `32.0.22029.1019` or later
- NPU: `32.0.203.314` or later
What platforms does GAIA support?
- Windows 11: ✅ Fully supported with complete UI and CLI functionality
- Linux (Ubuntu/Debian): ✅ Fully supported with complete UI and CLI functionality
How do I install additional models?
- System Tray Icon: Access the Lemonade model manager from the system tray
- Web UI: Manage models through the Lemonade web interface
Demo and Capabilities
What is GAIA, and how does it integrate with Ryzen AI?
- Local execution (no cloud dependency)
- Enhanced privacy
- Optimized for AMD NPU hardware
- Lower power consumption
How does the agent RAG pipeline work?
- Agent: Capable of retrieving relevant information
- Reasoning: Plans and executes multi-step tasks
- Tool Use: Accesses external tools and APIs
- Interactive Chat: Real-time conversation interface
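The loop these components form can be sketched as a toy retrieve-then-prompt pipeline. The `retrieve` and `build_prompt` helpers, the word-overlap scoring, and the prompt template below are illustrative stand-ins, not GAIA's actual implementation:

```python
# Toy retrieve-then-generate loop: rank documents against the query,
# then assemble the top hits plus the question into one LLM prompt.

def retrieve(query: str, docs: list[str], k: int = 2) -> list[str]:
    """Rank documents by naive word overlap with the query."""
    q = set(query.lower().split())
    ranked = sorted(docs, key=lambda d: len(q & set(d.lower().split())),
                    reverse=True)
    return ranked[:k]

def build_prompt(query: str, context: list[str]) -> str:
    """Join retrieved context and the question into a single prompt."""
    joined = "\n".join(f"- {c}" for c in context)
    return f"Answer using this context:\n{joined}\n\nQuestion: {query}"

docs = [
    "GAIA runs LLMs locally on Ryzen AI hardware.",
    "The NPU accelerates inference at low power.",
    "Lemonade Server manages model downloads.",
]
question = "What accelerates inference?"
prompt = build_prompt(question, retrieve(question, docs))
print(prompt)
```

A real pipeline swaps the word-overlap scorer for vector similarity search and sends the assembled prompt to the local model.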
What LLMs are supported?
- Phi-3.5 Mini Instruct
- Phi-3 Mini Instruct
- Llama-2 7B Chat
- Llama-3.2 1B/3B Instruct
- Qwen 1.5 7B Chat
- Mistral 7B Instruct
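Lemonade Server exposes these models through an OpenAI-compatible chat-completions API. This sketch only builds a request body; the endpoint URL, port, and exact model identifier are assumptions, so match them to what your local server reports:

```python
import json

# Assumed local endpoint; check your Lemonade Server settings.
ENDPOINT = "http://localhost:8000/api/v1/chat/completions"

payload = {
    "model": "Llama-3.2-3B-Instruct-Hybrid",  # use a name your server lists
    "messages": [
        {"role": "user", "content": "Summarize this repository in one sentence."}
    ],
    "stream": False,  # set True for token-by-token streaming
}
body = json.dumps(payload)
print(body)
# Sending it is a single urllib.request.urlopen call on a Request with a
# Content-Type: application/json header and body.encode() as data.
```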
How does the NPU enhance LLM performance?
- Faster Inference: Optimized hardware acceleration
- Lower Power: More efficient than CPU/iGPU
- System Offload: Reduces load on main processor
Can this scale to larger LLMs or enterprise applications?
- Architecture scales to larger models
- NPU optimization ensures efficient scaling
- Enterprise deployments supported
- Hybrid cloud/local configurations possible
What are the benefits of running LLMs locally on the NPU?
- No data leaves your machine
- Complete control over sensitive information
- Reduced latency (no cloud communication)
- Faster response times with NPU acceleration
- No ongoing cloud API costs
- Lower power consumption
NPU vs iGPU: What's the difference?
NPU:
- Optimized specifically for AI inference
- Lower power consumption
- Faster for LLM workloads
- AI-focused architecture

iGPU:
- General-purpose graphics/compute
- Higher power consumption for AI
- Slower inference for LLMs
- Graphics-focused architecture
Demo Components and Workflow
System Architecture
1. GAIA Backend

Powered by the Ryzen AI platform through Lemonade Server:
- NPU/iGPU Acceleration: Leverages Ryzen AI hardware
- Multiple Models: Supports various LLMs including Llama-3.2-3B-Instruct-Hybrid
- Agent System: Specialized tasks and workflows
- WebSocket Streaming: Real-time response delivery
- Dual Interfaces: Both CLI and GUI available
- Repository Vectorization: Fetches and indexes code repositories
- Local Vector Storage: Fast, local similarity search
- Fast Indexing: ~10 seconds for 40,000 lines of code (typical laptop)
- Ready for Queries: Instant access to indexed knowledge
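The vectorization and similarity-search steps above can be sketched with a toy word-count embedding over fixed-size chunks. Real deployments use learned embeddings and an optimized index; this only shows the shape of chunk, embed, and rank-by-cosine:

```python
import math
from collections import Counter

def embed(text: str) -> Counter:
    """Toy embedding: a bag-of-words count vector."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse count vectors."""
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def index(lines: list[str], chunk_size: int = 2) -> list[tuple[str, Counter]]:
    """Group source lines into chunks and embed each chunk."""
    chunks = [" ".join(lines[i:i + chunk_size])
              for i in range(0, len(lines), chunk_size)]
    return [(c, embed(c)) for c in chunks]

store = index([
    "def add(a, b):",
    "    return a + b",
    "def sub(a, b):",
    "    return a - b",
])
query_vec = embed("return a + b")
best_chunk, _ = max(store, key=lambda item: cosine(query_vec, item[1]))
print(best_chunk)
```

Indexing here is a single linear pass over the source, which is why even tens of thousands of lines can be vectorized in seconds.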
Query Process
Use Cases and Applications
What applications or industries could benefit?
Healthcare:
- Patient data privacy compliance
- Medical record analysis
- Clinical decision support

Finance:
- Sensitive financial data processing
- Compliance and audit trails
- Risk analysis

Enterprise:
- Internal document Q&A
- Code analysis and generation
- Knowledge management

Content creation:
- Writing assistance
- Code generation
- Document summarization

Customer support:
- Automated support systems
- Intent classification
- Response generation
How does this address data privacy concerns?
- All data remains on your device
- No cloud transmission of sensitive information
- Complete control over data and models
- Compliance-friendly for regulated industries
- High-performance AI without privacy trade-offs
What toolset do I need to replicate this?
- Ryzen AI processor (300-series recommended)
- Lemonade Server for managing LLMs
- GAIA framework (available on Windows and Linux)
- Command-line interface (CLI) for scripting and automation
- Graphical user interface (GUI) for interactive use