Import: `from gaia.llm import LLMClient`
Detailed Spec: spec/llm-client
Purpose: Unified client for local and cloud LLM providers.
LLM Client
`LLMClient` is an abstract base class. Use `create_client()` to get the right provider:
```python
from gaia.llm import create_client

# Local LLM (Lemonade server, the default)
llm = create_client()

# Generate response
response = llm.generate(
    prompt="What is AI?",
    model="Qwen3-0.6B-GGUF"
)
print(response)

# Streaming response
for chunk in llm.generate(prompt="Tell me a story", stream=True):
    print(chunk, end="", flush=True)

# Chat completions format
messages = [
    {"role": "user", "content": "Hello!"},
    {"role": "assistant", "content": "Hi! How can I help?"},
    {"role": "user", "content": "Tell me about Python"}
]
response = llm.chat(
    messages=messages,
    model="Qwen3-0.6B-GGUF"
)
print(response)
```
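Because `chat()` takes the full message list, each turn's reply has to be appended back onto that list to preserve context. A minimal sketch of such bookkeeping is below; the `ChatSession` class is illustrative and not part of gaia:

```python
from typing import Callable, Dict, List

# Illustrative helper (not a gaia API): keeps the chat-completions
# message history and appends each reply so context carries forward.
class ChatSession:
    def __init__(self, chat_fn: Callable[[List[Dict[str, str]]], str]):
        # chat_fn wraps the real call, e.g.
        #   lambda msgs: llm.chat(messages=msgs, model="Qwen3-0.6B-GGUF")
        self.chat_fn = chat_fn
        self.messages: List[Dict[str, str]] = []

    def send(self, text: str) -> str:
        self.messages.append({"role": "user", "content": text})
        reply = self.chat_fn(self.messages)
        self.messages.append({"role": "assistant", "content": reply})
        return reply
```

With a real client, `ChatSession(lambda msgs: llm.chat(messages=msgs, model="Qwen3-0.6B-GGUF"))` would give a multi-turn session without rebuilding `messages` by hand.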
Cloud Providers
```python
from gaia.llm import create_client

# Claude API
llm_claude = create_client(use_claude=True)

# OpenAI API
llm_openai = create_client(use_openai=True)

# Same interface regardless of provider
response = llm_claude.generate("Explain Python decorators")
```
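The flags above select a provider while the calling code stays unchanged. A simplified, hypothetical sketch of that dispatch logic (the real `create_client()` internals may differ) looks like:

```python
# Hypothetical sketch of flag-based provider selection; names and
# behavior are assumptions, not gaia's actual implementation.
def pick_provider(use_claude: bool = False, use_openai: bool = False) -> str:
    if use_claude and use_openai:
        raise ValueError("choose at most one cloud provider")
    if use_claude:
        return "claude"
    if use_openai:
        return "openai"
    return "lemonade"  # local Lemonade server is the default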
Lemonade Client (AMD-Optimized)
Import: `from gaia.llm.lemonade_client import DEFAULT_MODEL_NAME, DEFAULT_LEMONADE_URL`
```python
from gaia.llm.lemonade_client import DEFAULT_MODEL_NAME, DEFAULT_LEMONADE_URL

# Default configuration
MODEL = DEFAULT_MODEL_NAME   # "Qwen3-0.6B-GGUF"
URL = DEFAULT_LEMONADE_URL   # "http://localhost:8000/api/v1"

# These defaults are used internally by LLMClient when no provider is specified.
```
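The `/api/v1` base URL suggests an OpenAI-style HTTP interface. As a hedged sketch, assuming a `chat/completions` route under that base (an assumption to verify against the Lemonade server docs), a raw request could be assembled like this; the constants are hardcoded here to mirror the documented defaults:

```python
# Hedged sketch: builds (but does not send) a chat-completions request
# against the Lemonade defaults. The /chat/completions path is an
# assumption based on the OpenAI-style base URL.
DEFAULT_MODEL_NAME = "Qwen3-0.6B-GGUF"                # documented default model
DEFAULT_LEMONADE_URL = "http://localhost:8000/api/v1" # documented default URL

def build_chat_request(prompt: str):
    url = f"{DEFAULT_LEMONADE_URL}/chat/completions"
    payload = {
        "model": DEFAULT_MODEL_NAME,
        "messages": [{"role": "user", "content": prompt}],
    }
    return url, payload
```

Sending `payload` as JSON to `url` (e.g. with `requests.post`) would then hit the local server directly, bypassing `LLMClient`.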