Source Code: cpp/CMakeLists.txt — build configuration with install rules, export targets, and FetchContent support.
Prerequisites: Familiarity with CMake and C++17. See the C++ Framework Overview for build instructions and the AgentConfig reference.

Overview

gaia_core is designed to drop into any C++ project with minimal friction. The library is self-contained: all dependencies (nlohmann/json, cpp-httplib) are resolved automatically, so you never install or manage them by hand. Your project only needs CMake 3.14+ and a C++17 compiler. The shortest path is a few lines in your CMakeLists.txt:
include(FetchContent)
FetchContent_Declare(gaia
    GIT_REPOSITORY https://github.com/amd/gaia.git
    GIT_TAG        main
    SOURCE_SUBDIR  cpp
)
FetchContent_MakeAvailable(gaia)
target_link_libraries(my_app PRIVATE gaia::gaia_core)
That gives you #include <gaia/agent.h>, the full agent loop, tool registry, MCP client, and JSON utilities — no manual installs, no system packages, no dependency conflicts.

Integration Methods

Method           When to use
FetchContent     Default choice: no install step, works everywhere
Git submodule    You want the source in your repo for offline builds or pinned versions
find_package     You want a system-wide install or use a package manager
Shared library   You need a .so / .dll for plugin architectures


Subclassing Example

Here is a complete minimal agent that registers one tool and processes a query. It works with any of the four integration methods above.
time_agent.cpp
#include <chrono>
#include <ctime>
#include <iomanip>
#include <iostream>
#include <sstream>
#include <string>

#include <gaia/agent.h>
#include <gaia/types.h>

/// A minimal custom agent with one tool.
class TimeAgent : public gaia::Agent {
public:
    TimeAgent() : Agent(makeConfig()) {
        init();  // triggers registerTools() and composes system prompt
    }

protected:
    std::string getSystemPrompt() const override {
        return "You are a helpful assistant that can tell the current time. "
               "Use the get_current_time tool when the user asks about the time.";
    }

    void registerTools() override {
        toolRegistry().registerTool(
            "get_current_time",
            "Return the current local date and time as an ISO 8601 string.",
            [](const gaia::json& /*args*/) -> gaia::json {
                auto now = std::chrono::system_clock::now();
                auto time = std::chrono::system_clock::to_time_t(now);
                std::tm tm_buf{};
#ifdef _WIN32
                localtime_s(&tm_buf, &time);
#else
                localtime_r(&time, &tm_buf);
#endif
                std::ostringstream oss;
                oss << std::put_time(&tm_buf, "%Y-%m-%dT%H:%M:%S");
                return {{"current_time", oss.str()}};
            },
            {
                // This tool takes no parameters, but you could add them:
                // {"timezone", gaia::ToolParamType::STRING, false, "IANA timezone"}
            }
        );
    }

private:
    static gaia::AgentConfig makeConfig() {
        gaia::AgentConfig cfg;
        cfg.baseUrl = "http://localhost:8000/api/v1";
        cfg.modelId = "Qwen3-4B-GGUF";
        cfg.maxSteps = 10;
        return cfg;
    }
};

int main() {
    try {
        TimeAgent agent;
        auto result = agent.processQuery("What time is it right now?");

        if (result.contains("result")) {
            std::cout << result["result"].get<std::string>() << std::endl;
        }
    } catch (const std::exception& e) {
        std::cerr << "Error: " << e.what() << std::endl;
        return 1;
    }
    return 0;
}
Add it to your CMakeLists.txt:
add_executable(time_agent time_agent.cpp)
target_link_libraries(time_agent PRIVATE gaia::gaia_core)

Using Alternative LLM Backends

The GAIA C++ agent framework is not tied to Lemonade or any specific LLM provider. It talks to a standard HTTP endpoint — any server that implements the OpenAI chat completions API works out of the box. Switch backends by changing two fields:
gaia::AgentConfig cfg;
cfg.baseUrl = "http://localhost:8080/v1";   // your server's base URL
cfg.modelId = "my-model-name";              // model name your server expects

What “OpenAI-Compatible” Means

The agent uses a single HTTP endpoint: POST {baseUrl}/chat/completions. It sends a standard request body and expects a standard response:
// Request (what the agent sends)
{
  "model": "...",          // from AgentConfig::modelId
  "messages": [...],       // system prompt + conversation history
  "temperature": 0.7
}

// Response (what the agent expects)
{
  "choices": [{
    "message": { "content": "..." }
  }]
}
That is the entire API surface. No embeddings endpoint, no streaming (unless cfg.streaming = true), no fine-tuning API. Any server that handles this request/response format works — local or remote, open-source or commercial.
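To make that minimal surface concrete, here is a stdlib-only sketch that pulls the assistant text out of the response shape shown above. gaia_core itself parses responses with nlohmann/json; this hand-rolled version (which ignores JSON string escapes) only illustrates how little of the API the agent actually consumes.

```cpp
#include <cassert>
#include <string>

// Extract choices[0].message.content from the minimal response shown above.
// Happy path only: no escape handling, first "content" key wins. gaia_core
// uses nlohmann/json for this; the sketch just shows the required surface.
std::string extractContent(const std::string& body) {
    const std::string key = "\"content\"";
    auto k = body.find(key);
    if (k == std::string::npos) return "";
    auto colon = body.find(':', k + key.size());
    if (colon == std::string::npos) return "";
    auto start = body.find('"', colon);
    if (start == std::string::npos) return "";
    auto end = body.find('"', start + 1);
    if (end == std::string::npos) return "";
    return body.substr(start + 1, end - start - 1);
}
```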

Local Inference Servers

llama.cpp includes a built-in server with OpenAI-compatible endpoints. This is the most direct way to run a GGUF model locally without any Python dependencies.
# Download a model and start the server
./llama-server -m qwen3-4b.gguf --port 8080
gaia::AgentConfig cfg;
cfg.baseUrl = "http://localhost:8080/v1";
cfg.modelId = "qwen3-4b.gguf";  // llama.cpp ignores this field, but it must be non-empty
llama.cpp runs entirely in C++ — no Python, no pip. If you want a fully native stack (C++ agent + C++ inference), this is the combination to use.

Cloud and Remote Providers

You can also point the agent at cloud-hosted LLM services. Build with SSL support first:
cmake -B build -S cpp -DGAIA_ENABLE_SSL=ON
This requires OpenSSL to be installed on your system.
gaia::AgentConfig cfg;
cfg.baseUrl = "https://api.openai.com/v1";
cfg.modelId = "gpt-4o";
Set the Authorization header via the OPENAI_API_KEY environment variable, or modify the HTTP request in a custom Agent subclass.
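As a sketch of the environment-variable route (makeAuthHeader is a hypothetical helper for illustration, not part of gaia_core's API):

```cpp
#include <cstdlib>
#include <stdexcept>
#include <string>

// Hypothetical helper: read OPENAI_API_KEY and build the value for the
// Authorization header. Not part of gaia_core's public API.
std::string makeAuthHeader() {
    const char* key = std::getenv("OPENAI_API_KEY");
    if (key == nullptr || *key == '\0') {
        throw std::runtime_error("OPENAI_API_KEY is not set");
    }
    return std::string("Bearer ") + key;
}
```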
Model requirements: The agent needs a model that can produce structured JSON output with tool names and arguments. Most instruction-tuned models of roughly 4B parameters and up work well. Smaller models (under about 3B parameters) may struggle with the structured response format required for tool calling.
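For a feel of what "structured JSON output" means here, the sketch below shows a plausible tool-calling reply and a crude shape check. The field names are illustrative only; the actual schema is defined by gaia_core's system prompt, not by this snippet.

```cpp
#include <string>

// Illustrative only: a tool-calling reply roughly consists of a tool name
// plus a JSON argument object. The exact field names are an assumption.
const std::string kToolCallReply = R"({
  "tool": "get_current_time",
  "arguments": {}
})";

// Crude shape check without a JSON library: a tool-calling reply must at
// least name a tool and carry an argument object.
bool looksLikeToolCall(const std::string& reply) {
    return reply.find("\"tool\"") != std::string::npos &&
           reply.find("\"arguments\"") != std::string::npos;
}
```

Small models often fail exactly this test: they answer in prose instead of emitting the structured reply, so the agent loop cannot dispatch a tool.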

DLL Export Macros

All public classes and functions in gaia_core are annotated with the GAIA_API macro, which is generated automatically by CMake’s GenerateExportHeader. When building as a shared library:
  • Windows (MSVC): GAIA_API expands to __declspec(dllexport) when building the library, and __declspec(dllimport) when consuming it.
  • Linux: GAIA_API expands to __attribute__((visibility("default"))).
  • Static library: GAIA_API expands to nothing.
No manual annotation is needed in consumer code. Linking against the gaia::gaia_core CMake target sets all required compile definitions automatically.
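The generated header boils down to the familiar export-macro pattern. The sketch below reproduces it in simplified form; the guard names (GAIA_SHARED, GAIA_EXPORTS) are illustrative, not the ones GenerateExportHeader actually emits.

```cpp
// Simplified reconstruction of the export-macro pattern. The guard names
// (GAIA_SHARED, GAIA_EXPORTS) are illustrative assumptions.
#if defined(GAIA_SHARED) && defined(_WIN32)
#  if defined(GAIA_EXPORTS)                 // set while building the DLL
#    define GAIA_API __declspec(dllexport)
#  else                                     // set while consuming the DLL
#    define GAIA_API __declspec(dllimport)
#  endif
#elif defined(GAIA_SHARED)
#  define GAIA_API __attribute__((visibility("default")))
#else
#  define GAIA_API                          // static build: expands to nothing
#endif

GAIA_API int gaiaAbiVersion() { return 1; } // hypothetical exported symbol
```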

Troubleshooting

First configure is slow: The first configure downloads nlohmann/json, cpp-httplib, and (if tests are on) Google Test from GitHub. Subsequent builds use the CMake cache. To speed up repeated clean builds, consider using a local clone or a CMake dependency cache.
find_package(gaia_core) cannot find the package: Ensure you ran cmake --install and that the install prefix is in your CMAKE_PREFIX_PATH:
cmake -B build -DCMAKE_PREFIX_PATH=/path/to/gaia_core/install
find_dependency(nlohmann_json) fails: nlohmann/json is a public dependency. When using find_package(gaia_core), the installed config file calls find_dependency(nlohmann_json). Make sure nlohmann/json is installed system-wide (see the install steps above).
Crashes or heap corruption with the Windows DLL: Verify that your consumer project uses the same MSVC version, platform toolset, and CRT runtime (/MD vs /MT) as the gaia_core DLL. Mismatched runtimes corrupt the heap when STL objects cross the DLL boundary.

Next Steps