Source Code: src/gaia/cli.py
GAIA provides a comprehensive command-line interface (CLI) for interacting with AI models and agents. The CLI allows you to query models directly, manage chat sessions, and access various utilities without writing code.

Platform Support

Windows 11

Full GUI and CLI support with installer and desktop shortcuts

Linux

Full GUI and CLI support via source installation (Ubuntu/Debian)

Quick Start

  1. Follow the Getting Started Guide to install the gaia CLI and the Lemonade LLM server
  2. Double-click the GAIA-CLI desktop icon to launch the command-line shell
  3. GAIA automatically starts Lemonade Server when needed, or start it manually:
lemonade-server serve

Initialization

Init Command

New users start here! The gaia init command is the easiest way to get GAIA running.
Initialize GAIA with a single command: installs Lemonade Server and downloads required models.
gaia init [OPTIONS]
Options:
| Option | Type | Default | Description |
| --- | --- | --- | --- |
| --profile, -p | string | chat | Profile to initialize (minimal, chat, code, rag, all) |
| --minimal | flag | false | Shortcut for --profile minimal |
| --skip-models | flag | false | Skip model downloads (only install Lemonade) |
| --skip-lemonade | flag | false | Skip Lemonade installation check (for CI with pre-installed Lemonade) |
| --force-reinstall | flag | false | Force reinstall even if a compatible version exists |
| --force-models | flag | false | Force re-download of models (deletes then re-downloads each model) |
| --yes, -y | flag | false | Skip confirmation prompts (non-interactive) |
| --verbose | flag | false | Enable verbose output |
| --remote | flag | false | Use remote Lemonade server (skip local install/start; still checks version) |
Available Profiles:
| Profile | Models | Description | Approx. Size |
| --- | --- | --- | --- |
| minimal | Qwen3-0.6B | Fast setup with a lightweight model | ~400 MB |
| chat | Qwen3-Coder-30B, nomic-embed, Qwen3-VL-4B | Interactive chat with RAG and vision | ~25 GB |
| code | Qwen3-Coder-30B | Autonomous coding assistant | ~18 GB |
| rag | Qwen3-Coder-30B, nomic-embed, Qwen3-VL-4B | Document Q&A with retrieval and vision | ~25 GB |
| all | All models | All models for all agents | ~26 GB |
All profiles also include the lightweight Qwen3-0.6B model used by gaia llm for quick queries.
Examples:
gaia init
What It Does:
  1. Checks Lemonade Server - Detects if installed and verifies version compatibility
  2. Installs/Upgrades Lemonade - Downloads and installs from GitHub releases (Windows/Linux only). Automatically uninstalls old version if version mismatch detected.
  3. Checks Server - Ensures Lemonade server is running (prompts to start if not)
  4. Downloads Models - Pulls required models for the selected profile
  5. Verifies Setup - Tests each model with inference to detect corrupted downloads
Platform Support: Automatic installation supports Windows (MSI) and Linux (DEB) only. macOS users should install Lemonade Server manually from lemonade-server.ai.
Automatic Upgrade: If your installed Lemonade version doesn’t match the expected version, gaia init will offer to automatically uninstall the old version and install the correct one.
Corrupted Model Detection: gaia init verifies each model with a quick inference test. If a model fails verification (e.g., corrupted download), you’ll see instructions to manually delete and re-download it, or use gaia init --force-models to force re-download all models.

Install Command

Install individual GAIA components.
gaia install [OPTIONS]
Options:
| Option | Type | Description |
| --- | --- | --- |
| --lemonade | flag | Install Lemonade Server |
| --yes, -y | flag | Skip confirmation prompts |
Examples:
gaia install --lemonade
If a different version of Lemonade is already installed, you’ll be prompted to uninstall first.

Uninstall Command

Uninstall GAIA components.
gaia uninstall [OPTIONS]
Options:
| Option | Type | Description |
| --- | --- | --- |
| --lemonade | flag | Uninstall Lemonade Server |
| --models | flag | Delete all downloaded models from the HuggingFace cache |
| --yes, -y | flag | Skip confirmation prompts |
Examples:
gaia uninstall --lemonade
The uninstall command automatically downloads the correct MSI version matching your installed Lemonade to ensure clean removal.
--models permanently deletes all models from ~/.cache/huggingface/hub/. Use with caution: you will need to re-download models afterwards.
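Before deleting the cache, it can be worth checking how much disk space it actually uses. A sketch assuming a Unix-like shell; the path is the one given above:

```shell
# Report the size of the HuggingFace model cache before deleting it.
# Prints a fallback message if the cache directory does not exist.
du -sh "${HOME}/.cache/huggingface/hub" 2>/dev/null || echo "cache directory not found"
```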

Kill Command

Stop running GAIA services.
gaia kill [OPTIONS]
Options:
| Option | Type | Description |
| --- | --- | --- |
| --lemonade | flag | Kill Lemonade Server and child processes |
| --port | integer | Kill the process on a specific port |
Examples:
gaia kill --lemonade
On Windows, --lemonade also kills orphaned llama-server.exe and lemonade-tray.exe processes.

Core Commands

LLM Direct Query

The fastest way to interact with AI models - no server management required.
gaia llm QUERY [OPTIONS]
Options:
| Option | Type | Default | Description |
| --- | --- | --- | --- |
| --model | string | Client default | Specify the model to use |
| --max-tokens | integer | 512 | Maximum tokens to generate |
| --no-stream | flag | false | Disable streaming response |
Examples:
gaia llm "What is machine learning?"
The Lemonade server must be running. If it is not available, the command prints instructions on how to start it.
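If you are unsure whether the server is up, a quick probe of its default port (8000) avoids a failed query. This sketch assumes bash's /dev/tcp support:

```shell
# Probe the default Lemonade server port before querying.
# /dev/tcp is a bash virtual device; the subshell fails if nothing is listening.
if (exec 3<>/dev/tcp/localhost/8000) 2>/dev/null; then
  echo "Lemonade server is reachable"
else
  echo "Lemonade server is not running; start it with: lemonade-server serve"
fi
```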

Chat Command

Start an interactive conversation or send a single message with conversation history.
gaia chat [MESSAGE] [OPTIONS]
Modes:
  • No message: Starts interactive chat session
  • Message provided: Sends single message and exits
Options:
| Option | Type | Default | Description |
| --- | --- | --- | --- |
| --query, -q | string | - | Single query to execute |
| --model | string | Qwen3-Coder-30B-A3B-Instruct-GGUF | Model name to use |
| --max-steps | integer | 10 | Maximum conversation steps |
| --index, -i | path(s) | - | PDF document(s) to index for RAG |
| --watch, -w | path(s) | - | Directories to monitor for new documents |
| --chunk-size | integer | 500 | Document chunk size for RAG |
| --max-chunks | integer | 3 | Maximum chunks to retrieve for RAG |
| --stats | flag | false | Show performance statistics |
| --streaming | flag | false | Enable streaming responses |
| --show-prompts | flag | false | Display prompts sent to the LLM |
| --debug | flag | false | Enable debug output |
| --list-tools | flag | false | List available tools and exit |
Examples:
gaia chat
Interactive Commands: During a chat session, use these special commands:
| Command | Description |
| --- | --- |
| /clear | Clear conversation history |
| /history | Show conversation history |
| /system | Show current system prompt configuration |
| /model | Show current model information |
| /prompt | Show the complete formatted prompt sent to the LLM |
| /stats | Show performance statistics (tokens/sec, latency, token counts) |
| /help | Show available commands |
| quit, exit, bye | End the chat session |

Prompt Command

Send a single prompt to a GAIA agent.
gaia prompt "MESSAGE" [OPTIONS]
Options:
| Option | Type | Default | Description |
| --- | --- | --- | --- |
| --model | string | Qwen3-0.6B-GGUF | Model to use for the agent |
| --max-tokens | integer | 512 | Maximum tokens to generate |
| --stats | flag | false | Show performance statistics |
Examples:
gaia prompt "What is the weather like today?"

Specialized Agents

Code Agent

Code Development

AI-powered code generation, analysis, and linting for Python/TypeScript
The Code Agent requires extended context. Start Lemonade with:
lemonade-server serve --ctx-size 32768
Features:
  • Intelligent Language Detection (Python/TypeScript)
  • Code Generation (functions, classes, unit tests)
  • Autonomous Workflow (planning → implementation → testing → verification)
  • Automatic Test Generation
  • Iterative Error Correction
  • Code Analysis with AST
  • Linting & Formatting
Quick Examples: Routing detects “Express” and uses TypeScript:
gaia-code "Create a REST API with Express and SQLite for managing products"
Routing detects “Django” and uses Python:
gaia-code "Create a Django REST API with authentication"
Routing detects “React” and uses TypeScript frontend:
gaia-code "Create a React dashboard with user management"
gaia-code --interactive
→ Full Code Agent Documentation

Blender Agent

3D Scene Creation

Natural language 3D modeling and scene manipulation
Features:
  • Natural Language 3D Modeling
  • Interactive Planning
  • Object Management
  • Material Assignment
  • MCP Integration
Examples: Interactive Blender mode:
gaia blender --interactive
Create specific objects:
gaia blender --query "Create a red cube and blue sphere arranged in a line"
Run built-in examples:
gaia blender --example 2
→ Full Blender Agent Documentation

SD Command

Image Generation

Generate images using Stable Diffusion on Ryzen AI
gaia sd <prompt> [OPTIONS]
Options:
| Option | Type | Default | Description |
| --- | --- | --- | --- |
| prompt | string | - | Text description of the image to generate |
| -i, --interactive | flag | false | Run in interactive mode |
| --sd-model | string | SD-Turbo | Model: SD-Turbo (fast, default), SDXL-Turbo, SDXL-Base-1.0 (photorealistic), SD-1.5 |
| --size | string | auto | Image size: 512x512, 768x768, 1024x1024 (auto-selected per model) |
| --steps | integer | auto | Inference steps (auto: 4 for Turbo, 20 for Base) |
| --cfg-scale | float | auto | CFG scale (auto: 1.0 for Turbo, 7.5 for Base) |
| --output-dir | path | .gaia/cache/sd/images | Directory to save images |
| --seed | integer | random | Seed for reproducibility |
| --no-open | flag | false | Skip prompt to open image in viewer |
Examples: Fast generation with default (SD-Turbo, ~13s):
gaia sd "a sunset over mountains"
Better quality with SDXL-Turbo (~17s):
gaia sd "cyberpunk city at night" --sd-model SDXL-Turbo
Photorealistic with SDXL-Base-1.0 (slow, ~9min):
gaia sd "portrait, photorealistic, detailed" --sd-model SDXL-Base-1.0
For automation (no prompts):
gaia sd "test image" --no-open
Interactive mode:
gaia sd -i
→ Full Image Generation Documentation

Talk Command

Voice Interaction

Speech-to-speech conversation with optional document Q&A
gaia talk [OPTIONS]
Options:
| Option | Type | Default | Description |
| --- | --- | --- | --- |
| --model | string | Qwen3-0.6B-GGUF | Model to use |
| --max-tokens | integer | 512 | Maximum tokens to generate |
| --no-tts | flag | false | Disable text-to-speech |
| --audio-device-index | integer | auto-detect | Audio input device index |
| --whisper-model-size | string | base | Whisper model [tiny, base, small, medium, large] |
| --silence-threshold | float | 0.5 | Silence threshold in seconds |
| --stats | flag | false | Show performance statistics |
| --index, -i | path | - | PDF document for voice Q&A |
Examples:
gaia talk
→ Full Voice Interaction Guide

API Server


OpenAI-compatible REST API for VSCode and IDE integrations

Quick Start

  1. Start Lemonade with extended context:
lemonade-server serve --ctx-size 32768
  2. Start the GAIA API server:
gaia api start
  3. Test the server:
curl http://localhost:8080/health

Commands

gaia api start [OPTIONS]
Options:
  • --host - Server host (default: localhost)
  • --port - Server port (default: 8080)
  • --background - Run in background
  • --debug - Enable debug logging
Examples: Foreground:
gaia api start
Background with debug:
gaia api start --background --debug
Custom host/port:
gaia api start --host 0.0.0.0 --port 8888
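Because the server is OpenAI-compatible, standard chat completion requests should work. This sketch assumes the conventional /v1/chat/completions route and uses an illustrative model name; neither is confirmed by this page:

```shell
# Sketch of an OpenAI-style chat completion request against the GAIA API server.
# Route and model name are assumptions based on OpenAI API compatibility.
curl -s http://localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "Qwen3-Coder-30B-A3B-Instruct-GGUF",
    "messages": [{"role": "user", "content": "Say hello"}]
  }' || echo "GAIA API server not reachable on localhost:8080"
```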
→ Full API Server Documentation

MCP Client


Connect GAIA agents to external MCP servers
Configure MCP servers that your agents can connect to. Servers are saved to ~/.gaia/mcp_servers.json by default, or to a custom config file using --config.
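As an illustration only, a config produced by the add commands below might look roughly like this; the actual schema is an assumption, so inspect the generated ~/.gaia/mcp_servers.json to confirm:

```json
{
  "time": "uvx mcp-server-time",
  "memory": "npx -y @modelcontextprotocol/server-memory"
}
```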

Commands

gaia mcp add

Add an MCP server to configuration.
gaia mcp add <server-name> "<command>" [--config PATH]
Arguments:
  • <server-name> - Unique identifier for the server (e.g., “time”, “memory”)
  • "<command>" - Shell command to start the MCP server (must be quoted)
Options:
  • --config PATH - Custom config file path (default: ~/.gaia/mcp_servers.json)
Examples:
# Add to user config (default)
gaia mcp add time "uvx mcp-server-time"
gaia mcp add memory "npx -y @modelcontextprotocol/server-memory"

# Add to project config (can be committed to git)
gaia mcp add time "uvx mcp-server-time" --config ./mcp_servers.json

gaia mcp list

List all configured MCP servers.
gaia mcp list [--config PATH]
Options:
  • --config PATH - Custom config file path (default: ~/.gaia/mcp_servers.json)
Example:
# List from user config
gaia mcp list

# List from project config
gaia mcp list --config ./mcp_servers.json

gaia mcp remove

Remove an MCP server from configuration.
gaia mcp remove <server-name> [--config PATH]
Arguments:
  • <server-name> - Name of the server to remove
Options:
  • --config PATH - Custom config file path (default: ~/.gaia/mcp_servers.json)
Example:
# Remove from user config
gaia mcp remove time

# Remove from project config
gaia mcp remove memory --config ./mcp_servers.json

gaia mcp tools

List tools available from a configured MCP server.
gaia mcp tools <server-name> [--config PATH]
Arguments:
  • <server-name> - Name of the server to query
Options:
  • --config PATH - Custom config file path (default: ~/.gaia/mcp_servers.json)
Example:
# List tools from time server
gaia mcp tools time

# List tools using project config
gaia mcp tools memory --config ./mcp_servers.json

gaia mcp test-client

Test connection to a configured MCP server.
gaia mcp test-client <server-name> [--config PATH]
Arguments:
  • <server-name> - Name of the server to test
Options:
  • --config PATH - Custom config file path (default: ~/.gaia/mcp_servers.json)
Example:
gaia mcp test-client time
→ Full MCP Client Guide

MCP Bridge


Expose GAIA agents as MCP servers
The MCP Bridge allows other applications to use GAIA agents as MCP servers.

Quick Start

Install MCP support:
uv pip install -e ".[mcp]"
Start MCP bridge:
gaia mcp start
Test basic functionality:
gaia mcp test --query "Hello from GAIA MCP!"

Commands

| Command | Description |
| --- | --- |
| start | Start the MCP bridge server |
| status | Check MCP server status |
| stop | Stop the background MCP bridge server |
| test | Test MCP bridge functionality |
| agent | Test the MCP orchestrator agent |
| docker | Start the Docker MCP server |
→ Full MCP Integration Guide

Model Management

Download Command

Download all models required for GAIA agents with streaming progress.
gaia download [OPTIONS]
Options:
| Option | Type | Default | Description |
| --- | --- | --- | --- |
| --agent | string | all | Agent to download models for |
| --list | flag | false | List required models without downloading |
| --timeout | integer | 1800 | Timeout per model in seconds |
| --host | string | localhost | Lemonade server host |
| --port | integer | 8000 | Lemonade server port |
Available Agents: chat, code, talk, rag, blender, jira, docker, vlm, minimal, mcp
Examples: List all models:
gaia download --list
List models for specific agent:
gaia download --list --agent chat
Download all models:
gaia download
Download for specific agent:
gaia download --agent code
Example Output:
📥 Downloading 3 model(s) for 'chat'...

📥 Qwen3-Coder-30B-A3B-Instruct-GGUF
   ⏳ [1/31] Qwen3-Coder-30B-A3B-Q4_K_M.gguf: 3.5 GB/17.7 GB (20%)
   ...
   ✅ Download complete

✅ nomic-embed-text-v2-moe-GGUF (already downloaded)

==================================================
📊 Download Summary:
   ✅ Downloaded: 2
   ⏭️  Skipped (already available): 1
==================================================

Pull Command

To download individual models, use the Lemonade Server CLI directly:
lemonade-server pull MODEL_NAME [OPTIONS]
Use lemonade-server list to see all available models and their download status.

Evaluation Commands

Evaluation Framework

Systematic testing, benchmarking, and model comparison
Tools for:
  • Ground Truth Generation
  • Automated Evaluation
  • Batch Experimentation
  • Performance Analysis
  • Transcript Testing
Quick Examples: Generate evaluation data:
gaia groundtruth -f ./data/document.html
Create sample experiment configuration:
gaia batch-experiment --create-sample-config experiments.json
Run systematic experiments:
gaia batch-experiment -c experiments.json -i ./data -o ./results
Evaluate results:
gaia eval -f ./results/experiment.json
Generate report:
gaia report -d ./eval_results
Launch visualizer:
gaia visualize
→ Full Evaluation Guide

Visualize Command

Launch interactive web-based visualizer for comparing evaluation results.
gaia visualize [OPTIONS]
Options:
| Option | Type | Default | Description |
| --- | --- | --- | --- |
| --port | integer | 3000 | Visualizer server port |
| --experiments-dir | path | ./output/experiments | Experiments directory |
| --evaluations-dir | path | ./output/evaluations | Evaluations directory |
| --workspace | path | current directory | Base workspace directory |
| --no-browser | flag | false | Don't auto-open browser |
| --host | string | localhost | Host address |
Examples:
gaia visualize
Features:
  • Interactive Comparison (side-by-side)
  • Key Metrics Dashboard
  • Quality Analysis
  • Real-time Updates
  • Responsive Design
Node.js must be installed. Dependencies are automatically installed on first run.

Utility Commands

Stats Command

View performance statistics from the most recent model run.
gaia stats [OPTIONS]

Test Commands

Run various tests for development and troubleshooting.
gaia test --test-type TYPE [OPTIONS]
Test Types:
  • tts-preprocessing - Test TTS text preprocessing
  • tts-streaming - Test TTS streaming playback
  • tts-audio-file - Test TTS audio file generation
Options:
  • --test-text - Text to use for TTS tests
  • --output-audio-file - Output file path (default: output.wav)
Examples: Test preprocessing:
gaia test --test-type tts-preprocessing --test-text "Hello, world!"
Test streaming:
gaia test --test-type tts-streaming --test-text "Testing streaming"
Generate audio file:
gaia test --test-type tts-audio-file \
  --test-text "Save this as audio" \
  --output-audio-file speech.wav

YouTube Utilities

Download transcripts from YouTube videos.
gaia youtube --download-transcript URL [--output-path PATH]
Options:
  • --download-transcript - YouTube URL to download transcript from
  • --output-path - Output file path (defaults to transcript_.txt)
Example:
gaia youtube \
  --download-transcript "https://youtube.com/watch?v=..." \
  --output-path transcript.txt

Kill Command

Terminate processes running on specific ports.
gaia kill [OPTIONS]
Options:
| Option | Type | Description |
| --- | --- | --- |
| --port | integer | Port number to kill the process on |
| --lemonade | flag | Kill Lemonade server (port 8000) |
Examples:
gaia kill --lemonade
This command will:
  • Find the process ID (PID) bound to the specified port
  • Forcefully terminate that process
  • Provide feedback about success or failure
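On Linux/macOS the same lookup can be done by hand. A sketch; lsof flag support varies by platform, and Windows would use netstat -ano instead:

```shell
# List PIDs bound to TCP port 8000 (the manual equivalent of gaia kill --port 8000).
# -t prints bare PIDs; falls back to a message when nothing is listening or lsof is absent.
lsof -t -i tcp:8000 2>/dev/null || echo "no process found on port 8000"
```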

Global Options

All commands support these global options:
| Option | Type | Default | Description |
| --- | --- | --- | --- |
| --logging-level | string | INFO | Logging verbosity [DEBUG, INFO, WARNING, ERROR, CRITICAL] |
| -v, --version | flag | - | Show the program's version and exit |

Troubleshooting

If you get connection errors, ensure Lemonade server is running:
lemonade-server serve
Check available system memory (16GB+ recommended).
Verify model compatibility:
gaia download --list
Pre-download models:
gaia download
Install additional models: See Features Guide
List available devices:
gaia test --test-type asr-list-audio-devices
Verify microphone permissions in Windows settings.
Try different audio device indices if the default doesn't work.
For optimal NPU performance:
  • Disable discrete GPUs in Device Manager
  • Ensure NPU drivers are up to date
  • Monitor system resources during execution