
Function calling, call Roxy from any LLM with tool use

Wire Roxy endpoints into OpenAI, Anthropic, or Gemini as function tools. Time to ship: 15 minutes.

Function calling is the manual, control-heavy path for letting an LLM call Roxy. You hand-define tool schemas, route tool calls to the right endpoint, and ship the results back. MCP does this automatically, which is why MCP is the default. Use function calling when your model does not support MCP, when you need tight control over which tools are exposed, or when you are gluing Roxy into an existing agent framework.

You probably want MCP instead

If you are on Claude Code, Cursor, Antigravity, Claude Desktop, or any MCP-compatible client, skip this guide. The MCP setup auto-discovers all 130+ Roxy endpoints and requires zero tool schemas. Function calling is the fallback when MCP is not available.

If you are an AI coding agent reading this guide to write function-calling code, also fetch the site-level AGENTS.md playbook at https://roxyapi.com/AGENTS.md for tight per-endpoint body shapes you will encode as tool schemas.

|               | Function calling                                      | MCP                                           |
|---------------|-------------------------------------------------------|-----------------------------------------------|
| Setup         | You write tool schemas, or auto-generate from OpenAPI | Zero config, agents discover tools at runtime |
| Control       | You pick exactly which tools to expose                | All tools available                           |
| Compatibility | Any model with tool-use support                       | Requires MCP-compatible client                |
| Best for      | Production apps with tight control                    | Prototyping, AI assistants, agent frameworks  |

What you can build with this

  • AI chatbots that call Roxy from OpenAI, Anthropic, or Gemini
  • Agents where only specific Roxy endpoints are exposed (least-privilege tool sets)
  • Serverless functions that route a user question to the right Roxy call
  • Custom agent frameworks (LangChain, LlamaIndex, AG2) with Roxy as a tool
  • Multi-provider apps that keep the same tool definitions across LLMs

What you need, 30 seconds

  1. A Roxy API key. Get one on the pricing page.
  2. An LLM key from Anthropic, OpenAI, or Google AI Studio.
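The snippets in this guide read both keys from the environment (`$ROXY_API_KEY` in the curl examples, the provider SDKs pick up their own key automatically). The values below are placeholders, not real keys:

```shell
# Placeholders only. Substitute your real keys.
export ROXY_API_KEY="your-roxy-key"
export OPENAI_API_KEY="your-openai-key"   # or ANTHROPIC_API_KEY / GEMINI_API_KEY
```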

Step 1, define your tools

Each tool maps to one Roxy endpoint. Define only the ones your app uses.

const tools = [
  {
    name: 'get_daily_horoscope',
    description: 'Get the daily horoscope for a zodiac sign',
    parameters: {
      type: 'object',
      properties: {
        sign: { type: 'string', enum: ['aries','taurus','gemini','cancer','leo','virgo','libra','scorpio','sagittarius','capricorn','aquarius','pisces'] }
      },
      required: ['sign'],
    },
  },
  {
    name: 'draw_tarot_cards',
    description: 'Draw tarot cards for a reading',
    parameters: {
      type: 'object',
      properties: {
        count: { type: 'number', description: 'Number of cards (1-78)' }
      },
      required: ['count'],
    },
  },
  {
    name: 'get_birth_chart',
    description: 'Generate a natal birth chart with planets, houses, and aspects',
    parameters: {
      type: 'object',
      properties: {
        date: { type: 'string', description: 'YYYY-MM-DD' },
        time: { type: 'string', description: 'HH:MM:SS (24h)' },
        latitude: { type: 'number' },
        longitude: { type: 'number' },
        timezone: { type: 'string', description: 'IANA id like America/New_York or decimal offset' },
      },
      required: ['date', 'time', 'latitude', 'longitude', 'timezone'],
    },
  },
];

Step 2, execute tool calls

When the model returns a tool call, route it to the right Roxy endpoint. Pick your preferred call style.

# For reference. The tool-call executor below uses fetch, but this is the underlying shape.
curl "https://roxyapi.com/api/v2/astrology/horoscope/aries/daily" \
  -H "X-API-Key: $ROXY_API_KEY"

curl -X POST https://roxyapi.com/api/v2/tarot/draw \
  -H "X-API-Key: $ROXY_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{"count": 3}'

If a request fails, the response body is { error, code } so your agent can report the issue to the user.
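A minimal `executeTool` sketch that routes the three tools from Step 1. The horoscope and tarot paths come from the curl examples above; the birth-chart path and request shape are assumptions, so check the astrology OpenAPI spec before shipping:

```javascript
const ROXY_BASE = 'https://roxyapi.com/api/v2';
const headers = {
  'X-API-Key': process.env.ROXY_API_KEY,
  'Content-Type': 'application/json',
};

async function executeTool(name, args) {
  let res;
  switch (name) {
    case 'get_daily_horoscope':
      res = await fetch(`${ROXY_BASE}/astrology/horoscope/${args.sign}/daily`, { headers });
      break;
    case 'draw_tarot_cards':
      res = await fetch(`${ROXY_BASE}/tarot/draw`, {
        method: 'POST',
        headers,
        body: JSON.stringify({ count: args.count }),
      });
      break;
    case 'get_birth_chart':
      // Assumed path; verify against the astrology OpenAPI spec.
      res = await fetch(`${ROXY_BASE}/astrology/birth-chart`, {
        method: 'POST',
        headers,
        body: JSON.stringify(args),
      });
      break;
    default:
      return { error: `Unknown tool: ${name}` };
  }
  // On failure the body is { error, code }; pass it through so the model can
  // explain the problem to the user instead of throwing.
  return res.json();
}
```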

Step 3, wire it into your LLM

Same pattern everywhere. Send messages with tools, check for tool calls, execute, send the result back.

OpenAI

import OpenAI from 'openai';
const openai = new OpenAI();

const response = await openai.chat.completions.create({
  model: 'gpt-4o',
  messages: [{ role: 'user', content: 'What is my horoscope for Aries today?' }],
  tools: tools.map(t => ({ type: 'function', function: t })),
});

const toolCall = response.choices[0].message.tool_calls?.[0];
if (toolCall) {
  const result = await executeTool(toolCall.function.name, JSON.parse(toolCall.function.arguments));
  const followUp = await openai.chat.completions.create({
    model: 'gpt-4o',
    messages: [
      { role: 'user', content: 'What is my horoscope for Aries today?' },
      response.choices[0].message,
      { role: 'tool', tool_call_id: toolCall.id, content: JSON.stringify(result) },
    ],
  });
}

Anthropic

import Anthropic from '@anthropic-ai/sdk';
const anthropic = new Anthropic();

const response = await anthropic.messages.create({
  model: 'claude-sonnet-4-5-20250929',
  max_tokens: 1024,
  messages: [{ role: 'user', content: 'Draw me 3 tarot cards' }],
  tools: tools.map(t => ({ name: t.name, description: t.description, input_schema: t.parameters })),
});

const toolUse = response.content.find(block => block.type === 'tool_use');
if (toolUse) {
  const result = await executeTool(toolUse.name, toolUse.input);
  const followUp = await anthropic.messages.create({
    model: 'claude-sonnet-4-5-20250929',
    max_tokens: 1024,
    messages: [
      { role: 'user', content: 'Draw me 3 tarot cards' },
      { role: 'assistant', content: response.content },
      { role: 'user', content: [{ type: 'tool_result', tool_use_id: toolUse.id, content: JSON.stringify(result) }] },
    ],
    tools: tools.map(t => ({ name: t.name, description: t.description, input_schema: t.parameters })),
  });
}

Gemini

import { GoogleGenAI, Type } from '@google/genai';
const genai = new GoogleGenAI({ apiKey: process.env.GEMINI_API_KEY });

const geminiTools = [{
  functionDeclarations: tools.map(t => ({
    name: t.name,
    description: t.description,
    parameters: {
      type: Type.OBJECT,
      properties: Object.fromEntries(
        Object.entries(t.parameters.properties).map(([k, v]) => [k, {
          ...v,
          type: v.type === 'string' ? Type.STRING : Type.NUMBER,
        }])
      ),
      required: t.parameters.required,
    },
  })),
}];

const response = await genai.models.generateContent({
  model: 'gemini-2.5-flash',
  contents: 'What is my birth chart for July 15 1990 at 2:30pm in New York?',
  config: { tools: geminiTools },
});

const call = response.functionCalls?.[0];
if (call) {
  const result = await executeTool(call.name, call.args);
  const followUp = await genai.models.generateContent({
    model: 'gemini-2.5-flash',
    contents: [
      { role: 'user', parts: [{ text: 'What is my birth chart for July 15 1990 at 2:30pm in New York?' }] },
      response.candidates[0].content,
      { role: 'user', parts: [{ functionResponse: { name: call.name, response: result, id: call.id } }] },
    ],
    config: { tools: geminiTools },
  });
}

Auto-generate tool definitions from OpenAPI

Every Roxy domain publishes an OpenAPI spec at https://roxyapi.com/api/v2/{domain}/openapi.json. Parse it to generate tool schemas instead of writing them by hand.

curl https://roxyapi.com/api/v2/astrology/openapi.json > astrology.openapi.json

Available domains: astrology, vedic-astrology, tarot, numerology, iching, dreams, crystals, angel-numbers, biorhythm, location. Use openapi-typescript or any OpenAPI-to-tool-schema generator.
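If you want to roll your own generator, the conversion is a straightforward walk over `paths`. The sketch below assumes only standard OpenAPI 3 fields (`operationId`, `summary`, `requestBody`); GET endpoints without a JSON body get an empty parameter schema, and path or query parameters would need extra handling:

```javascript
// Convert an OpenAPI 3 spec object into the tool-schema shape used in Step 1.
function specToTools(spec) {
  return Object.entries(spec.paths).flatMap(([path, methods]) =>
    Object.entries(methods).map(([method, op]) => ({
      // Tool names must be identifier-like for most providers.
      name: (op.operationId ?? `${method}_${path}`).replace(/[^A-Za-z0-9_]/g, '_'),
      description: op.summary ?? op.description ?? `${method.toUpperCase()} ${path}`,
      parameters: op.requestBody?.content?.['application/json']?.schema
        ?? { type: 'object', properties: {} },
    }))
  );
}

// Usage:
// const spec = await fetch('https://roxyapi.com/api/v2/tarot/openapi.json').then(r => r.json());
// const tools = specToTools(spec);
```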

Gotchas

  • Gemini uses Type enums. Type.STRING, Type.NUMBER, Type.OBJECT. Not string literals.
  • OpenAI flattens into {type: 'function', function: {...}}. Anthropic uses {name, description, input_schema} directly. Gemini wraps in functionDeclarations[]. Same intent, three syntaxes.
  • Timezone accepts both shapes. IANA string ("America/New_York") or decimal (-5). Prefer IANA for DST correctness.
  • Geocode before chart endpoints. Natal chart, kundli, panchang, synastry need latitude, longitude, timezone. Add a search_cities tool and tell the model to call it first.
  • Tell the model to call a tool, not to guess. System prompt: "For horoscope, birth chart, tarot, numerology, or any spiritual topic, always call a tool. Never answer from training data."
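The geocoding gotcha above can be handled with one extra tool definition. This is a hypothetical `search_cities` schema; the actual location endpoint's parameters may differ, so check the location domain's OpenAPI spec:

```javascript
// Hypothetical schema for a geocoding tool the model calls before any
// chart endpoint that needs latitude, longitude, and timezone.
const searchCitiesTool = {
  name: 'search_cities',
  description:
    'Look up a city to get its latitude, longitude, and timezone. ' +
    'Call this before any birth-chart, kundli, panchang, or synastry tool.',
  parameters: {
    type: 'object',
    properties: {
      query: { type: 'string', description: 'City name, e.g. "New York"' },
    },
    required: ['query'],
  },
};
```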

What to build next

  • The MCP setup is the zero-config alternative to this guide.
  • The AI chatbot tutorial is a complete working chatbot using both paths.
  • The SDK docs cover typed calls for TypeScript, Python, and PHP.
  • The API reference has a pre-filled test key to explore any endpoint before wiring a tool for it.