Source Code: src/gaia/cli.py
GAIA provides a comprehensive command-line interface (CLI) for interacting with AI models and agents. The CLI allows you to query models directly, manage chat sessions, and access various utilities without writing code.

Platform Support

  • Windows 11: Full GUI and CLI support with installer and desktop shortcuts
  • Linux: Full GUI and CLI support via source installation (Ubuntu/Debian)

Quick Start

  1. Follow the Getting Started Guide to install the gaia CLI and the Lemonade LLM server
  2. Double-click the GAIA-CLI desktop icon to launch the command-line shell
  3. GAIA automatically starts Lemonade Server when needed, or start it manually:
lemonade-server serve

Core Commands

LLM Direct Query

The fastest way to interact with AI models, with no server management required.
gaia llm QUERY [OPTIONS]
Options:
| Option | Type | Default | Description |
|---|---|---|---|
| --model | string | Client default | Specify the model to use |
| --max-tokens | integer | 512 | Maximum tokens to generate |
| --no-stream | flag | false | Disable streaming response |
Examples:
gaia llm "What is machine learning?"
The Lemonade server must be running. If it is not available, the command will provide instructions on how to start it.

Chat Command

Start an interactive conversation or send a single message with conversation history.
gaia chat [MESSAGE] [OPTIONS]
Modes:
  • No message: Starts interactive chat session
  • Message provided: Sends single message and exits
Options:
| Option | Type | Default | Description |
|---|---|---|---|
| --query, -q | string | - | Single query to execute |
| --model | string | Qwen3-Coder-30B-A3B-Instruct-GGUF | Model name to use |
| --max-steps | integer | 10 | Maximum conversation steps |
| --index, -i | path(s) | - | PDF document(s) to index for RAG |
| --watch, -w | path(s) | - | Directories to monitor for new documents |
| --chunk-size | integer | 500 | Document chunk size for RAG |
| --max-chunks | integer | 3 | Maximum chunks to retrieve for RAG |
| --stats | flag | false | Show performance statistics |
| --streaming | flag | false | Enable streaming responses |
| --show-prompts | flag | false | Display prompts sent to LLM |
| --debug | flag | false | Enable debug output |
| --list-tools | flag | false | List available tools and exit |
Examples:
gaia chat
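The RAG and diagnostics flags above can be combined; for example, to ground the chat in a local PDF and print performance statistics (the path is a placeholder):
gaia chat --index ./docs/manual.pdf --stats
A single non-interactive question can be asked the same way with -q:
gaia chat -q "What does the manual say about installation?" --index ./docs/manual.pdf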
Interactive Commands: During a chat session, use these special commands:
| Command | Description |
|---|---|
| /clear | Clear conversation history |
| /history | Show conversation history |
| /system | Show current system prompt configuration |
| /model | Show current model information |
| /prompt | Show complete formatted prompt sent to LLM |
| /stats | Show performance statistics (tokens/sec, latency, token counts) |
| /help | Show available commands |
| quit, exit, bye | End the chat session |

Prompt Command

Send a single prompt to a GAIA agent.
gaia prompt "MESSAGE" [OPTIONS]
Options:
| Option | Type | Default | Description |
|---|---|---|---|
| --model | string | Qwen2.5-0.5B-Instruct-CPU | Model to use for the agent |
| --max-tokens | integer | 512 | Maximum tokens to generate |
| --stats | flag | false | Show performance statistics |
Examples:
gaia prompt "What is the weather like today?"

Specialized Agents

Code Agent

Code Development

AI-powered code generation, analysis, and linting for Python/TypeScript
The Code Agent requires extended context. Start Lemonade with:
lemonade-server serve --ctx-size 32768
Features:
  • Intelligent Language Detection (Python/TypeScript)
  • Code Generation (functions, classes, unit tests)
  • Autonomous Workflow (planning → implementation → testing → verification)
  • Automatic Test Generation
  • Iterative Error Correction
  • Code Analysis with AST
  • Linting & Formatting
Quick Examples: Routing detects “Express” and uses TypeScript:
gaia code "Create a REST API with Express and SQLite for managing products"
Routing detects “Django” and uses Python:
gaia code "Create a Django REST API with authentication"
Routing detects “React” and uses TypeScript frontend:
gaia code "Create a React dashboard with user management"
Interactive mode:
gaia code --interactive
→ Full Code Agent Documentation

Blender Agent

3D Scene Creation

Natural language 3D modeling and scene manipulation
Features:
  • Natural Language 3D Modeling
  • Interactive Planning
  • Object Management
  • Material Assignment
  • MCP Integration
Examples: Interactive Blender mode:
gaia blender --interactive
Create specific objects:
gaia blender --query "Create a red cube and blue sphere arranged in a line"
Run built-in examples:
gaia blender --example 2
→ Full Blender Agent Documentation

Talk Command

Voice Interaction

Speech-to-speech conversation with optional document Q&A
gaia talk [OPTIONS]
Options:
| Option | Type | Default | Description |
|---|---|---|---|
| --model | string | Qwen2.5-0.5B-Instruct-CPU | Model to use |
| --max-tokens | integer | 512 | Maximum tokens to generate |
| --no-tts | flag | false | Disable text-to-speech |
| --audio-device-index | integer | auto-detect | Audio input device index |
| --whisper-model-size | string | base | Whisper model [tiny, base, small, medium, large] |
| --silence-threshold | float | 0.5 | Silence threshold in seconds |
| --stats | flag | false | Show performance statistics |
| --index, -i | path | - | PDF document for voice Q&A |
Examples:
gaia talk
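For voice Q&A over a document, --index can be combined with a larger Whisper model for better transcription (the path is a placeholder):
gaia talk --index ./docs/report.pdf --whisper-model-size small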
→ Full Voice Interaction Guide

API Server

API Server

OpenAI-compatible REST API for VSCode and IDE integrations

Quick Start

  1. Start Lemonade with extended context:
lemonade-server serve --ctx-size 32768
  2. Start GAIA API server:
gaia api start
  3. Test the server:
curl http://localhost:8080/health

Commands

gaia api start [OPTIONS]
Options:
  • --host - Server host (default: localhost)
  • --port - Server port (default: 8080)
  • --background - Run in background
  • --debug - Enable debug logging
Examples: Foreground:
gaia api start
Background with debug:
gaia api start --background --debug
Custom host/port:
gaia api start --host 0.0.0.0 --port 8888
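Because the server is OpenAI-compatible, a standard chat completions request should also work against it; the endpoint path, payload shape, and model name below follow the OpenAI API convention and are assumptions rather than values confirmed here:
curl http://localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"model": "Qwen3-0.6B-GGUF", "messages": [{"role": "user", "content": "Hello from GAIA"}]}'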
→ Full API Server Documentation

MCP Bridge

MCP Integration

Model Context Protocol for external integrations

Quick Start

Install MCP support:
uv pip install -e ".[mcp]"
Start MCP bridge:
gaia mcp start
Test basic functionality:
gaia mcp test --query "Hello from GAIA MCP!"

Commands

| Command | Description |
|---|---|
| start | Start the MCP bridge server |
| status | Check MCP server status |
| stop | Stop background MCP bridge server |
| test | Test MCP bridge functionality |
| agent | Test MCP orchestrator agent |
| docker | Start Docker MCP server |
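For example, to check whether a background bridge is running and then shut it down:
gaia mcp status
gaia mcp stop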
→ Full MCP Integration Guide

Model Management

Download Command

Download all models required for GAIA agents with streaming progress.
gaia download [OPTIONS]
Options:
| Option | Type | Default | Description |
|---|---|---|---|
| --agent | string | all | Agent to download models for |
| --list | flag | false | List required models without downloading |
| --timeout | integer | 1800 | Timeout per model in seconds |
| --host | string | localhost | Lemonade server host |
| --port | integer | 8000 | Lemonade server port |
Available Agents: chat, code, talk, rag, blender, jira, docker, vlm, minimal, mcp
Examples: List all models:
gaia download --list
List models for specific agent:
gaia download --list --agent chat
Download all models:
gaia download
Download for specific agent:
gaia download --agent code
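The connection and timeout flags combine with agent selection; for example, if the Lemonade server listens on a non-default port (the values here are illustrative):
gaia download --agent talk --port 8001 --timeout 3600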
Example Output:
📥 Downloading 3 model(s) for 'chat'...

📥 Qwen3-Coder-30B-A3B-Instruct-GGUF
   ⏳ [1/31] Qwen3-Coder-30B-A3B-Q4_K_M.gguf: 3.5 GB/17.7 GB (20%)
   ...
   ✅ Download complete

✅ nomic-embed-text-v2-moe-GGUF (already downloaded)

==================================================
📊 Download Summary:
   ✅ Downloaded: 2
   ⏭️  Skipped (already available): 1
==================================================

Pull Command

Download/install a specific model from the Lemonade Server registry.
gaia pull MODEL_NAME [OPTIONS]
Options:
| Option | Type | Description |
|---|---|---|
| --checkpoint | string | HuggingFace checkpoint (e.g., unsloth/Model-GGUF:Q4_K_M) |
| --recipe | string | Lemonade recipe (e.g., llamacpp, oga-cpu) |
| --reasoning | flag | Mark as reasoning model (like DeepSeek) |
| --vision | flag | Mark as having vision capabilities |
| --embedding | flag | Mark as embedding model |
| --reranking | flag | Mark as reranking model |
| --mmproj | string | Multimodal projector file for vision models |
| --timeout | integer | Timeout in seconds (default: 1200) |
| --host | string | Lemonade server host (default: localhost) |
| --port | integer | Lemonade server port (default: 8000) |
Examples:
gaia pull Qwen3-0.6B-GGUF
Use the user. prefix for custom models not in the official registry. Custom models require both --checkpoint and --recipe parameters.
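A custom pull might therefore look like the sketch below; the model name is a placeholder, and the checkpoint and recipe values follow the example formats shown in the table above:
gaia pull user.my-custom-model \
  --checkpoint unsloth/Model-GGUF:Q4_K_M \
  --recipe llamacpp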

Evaluation Commands

Evaluation Framework

Systematic testing, benchmarking, and model comparison
Tools for:
  • Ground Truth Generation
  • Automated Evaluation
  • Batch Experimentation
  • Performance Analysis
  • Transcript Testing
Quick Examples: Generate evaluation data:
gaia groundtruth -f ./data/document.html
Create sample experiment configuration:
gaia batch-experiment --create-sample-config experiments.json
Run systematic experiments:
gaia batch-experiment -c experiments.json -i ./data -o ./results
Evaluate results:
gaia eval -f ./results/experiment.json
Generate report:
gaia report -d ./eval_results
Launch visualizer:
gaia visualize
→ Full Evaluation Guide

Visualize Command

Launch interactive web-based visualizer for comparing evaluation results.
gaia visualize [OPTIONS]
Options:
| Option | Type | Default | Description |
|---|---|---|---|
| --port | integer | 3000 | Visualizer server port |
| --experiments-dir | path | ./output/experiments | Experiments directory |
| --evaluations-dir | path | ./output/evaluations | Evaluations directory |
| --workspace | path | current directory | Base workspace directory |
| --no-browser | flag | false | Don't auto-open browser |
| --host | string | localhost | Host address |
Examples:
gaia visualize
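The port and result directories can be overridden, for example when results live outside the default workspace (the values below are placeholders):
gaia visualize --port 3001 --experiments-dir ./results/experiments --no-browser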
Features:
  • Interactive Comparison (side-by-side)
  • Key Metrics Dashboard
  • Quality Analysis
  • Real-time Updates
  • Responsive Design
Node.js must be installed. Dependencies are automatically installed on first run.

Utility Commands

Stats Command

View performance statistics from the most recent model run.
gaia stats [OPTIONS]
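Example: after a gaia chat or gaia prompt run has completed:
gaia stats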

Test Commands

Run various tests for development and troubleshooting.
gaia test --test-type TYPE [OPTIONS]
Test Types:
  • tts-preprocessing - Test TTS text preprocessing
  • tts-streaming - Test TTS streaming playback
  • tts-audio-file - Test TTS audio file generation
Options:
  • --test-text - Text to use for TTS tests
  • --output-audio-file - Output file path (default: output.wav)
Examples: Test preprocessing:
gaia test --test-type tts-preprocessing --test-text "Hello, world!"
Test streaming:
gaia test --test-type tts-streaming --test-text "Testing streaming"
Generate audio file:
gaia test --test-type tts-audio-file \
  --test-text "Save this as audio" \
  --output-audio-file speech.wav

YouTube Utilities

Download transcripts from YouTube videos.
gaia youtube --download-transcript URL [--output-path PATH]
Options:
  • --download-transcript - YouTube URL to download transcript from
  • --output-path - Output file path (defaults to transcript_.txt)
Example:
gaia youtube \
  --download-transcript "https://youtube.com/watch?v=..." \
  --output-path transcript.txt

Kill Command

Terminate processes running on specific ports.
gaia kill --port PORT_NUMBER
Example: Kill process on port 8000:
gaia kill --port 8000
This command will:
  • Find the process ID (PID) bound to the specified port
  • Forcefully terminate that process
  • Provide feedback about success or failure

Global Options

All commands support these global options:
| Option | Type | Default | Description |
|---|---|---|---|
| --logging-level | string | INFO | Logging verbosity [DEBUG, INFO, WARNING, ERROR, CRITICAL] |
| -v, --version | flag | - | Show program's version and exit |
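For example, to print the installed version or enable verbose logging for a chat session (placing the flag after the subcommand is an assumption based on the note that every command supports it):
gaia -v
gaia chat --logging-level DEBUG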

Troubleshooting

If you get connection errors, ensure Lemonade server is running:
lemonade-server serve
Check available system memory (16GB+ recommended).
Verify model compatibility:
gaia download --list
Pre-download models:
gaia download
Install additional models: See Features Guide
List available devices:
gaia test --test-type asr-list-audio-devices
Verify microphone permissions in Windows settings.
Try different audio device indices if the default doesn't work.
For optimal NPU performance:
  • Disable discrete GPUs in Device Manager
  • Ensure NPU drivers are up to date
  • Monitor system resources during execution
For more help, see:

See Also