# Function calling, call Roxy from any LLM with tool use

> Wire Roxy endpoints into OpenAI, Anthropic, or Gemini as function tools. Time to ship: 15 minutes.

Function calling is the manual, control-heavy path for letting an LLM call Roxy. You hand-define tool schemas, route tool calls to the right endpoint, and ship the results back. MCP does this automatically, which is why MCP is the default. Use function calling when your model does not support MCP, when you need tight control over which tools are exposed, or when you are gluing Roxy into an existing agent framework.

## You probably want MCP instead

If you are on Claude Code, Cursor, Antigravity, Claude Desktop, or any MCP-compatible client, skip this guide. The [MCP setup](/docs/mcp) auto-discovers all 130+ Roxy endpoints and requires zero tool schemas. Function calling is the fallback when MCP is not available.

If you are an AI coding agent reading this guide to write function-calling code, also fetch the site-level <a href="/AGENTS.md" target="_blank" rel="noopener" title="AGENTS.md execution playbook - tight 120-line guide with Rule 0, common task body shapes, error contract, field gotchas">AGENTS.md</a> playbook at `https://roxyapi.com/AGENTS.md` for tight per-endpoint body shapes you will encode as tool schemas.

| | Function calling | MCP |
|---|---|---|
| Setup | You write tool schemas, or auto-generate from OpenAPI | Zero config, agents discover tools at runtime |
| Control | You pick exactly which tools to expose | All tools available |
| Compatibility | Any model with tool-use support | Requires MCP-compatible client |
| Best for | Production apps with tight control | Prototyping, AI assistants, agent frameworks |

## What you can build with this

- AI chatbots that call Roxy from OpenAI, Anthropic, or Gemini
- Agents where only specific Roxy endpoints are exposed (least-privilege tool sets)
- Serverless functions that route a user question to the right Roxy call
- Custom agent frameworks (LangChain, LlamaIndex, AG2) with Roxy as a tool
- Multi-provider apps that keep the same tool definitions across LLMs

## What you need, 30 seconds

1. A Roxy API key. Get one on the [pricing page](/pricing).
2. An LLM key from [Anthropic](https://console.anthropic.com/), [OpenAI](https://platform.openai.com/api-keys), or [Google AI Studio](https://aistudio.google.com/app/apikey).

## Step 1, define your tools

Each tool maps to one Roxy endpoint. Define only the ones your app uses.

```javascript
const tools = [
  {
    name: 'get_daily_horoscope',
    description: 'Get the daily horoscope for a zodiac sign',
    parameters: {
      type: 'object',
      properties: {
        sign: { type: 'string', enum: ['aries','taurus','gemini','cancer','leo','virgo','libra','scorpio','sagittarius','capricorn','aquarius','pisces'] }
      },
      required: ['sign'],
    },
  },
  {
    name: 'draw_tarot_cards',
    description: 'Draw tarot cards for a reading',
    parameters: {
      type: 'object',
      properties: {
        count: { type: 'number', description: 'Number of cards (1-78)' }
      },
      required: ['count'],
    },
  },
  {
    name: 'get_birth_chart',
    description: 'Generate a natal birth chart with planets, houses, and aspects',
    parameters: {
      type: 'object',
      properties: {
        date: { type: 'string', description: 'YYYY-MM-DD' },
        time: { type: 'string', description: 'HH:MM:SS (24h)' },
        latitude: { type: 'number' },
        longitude: { type: 'number' },
        timezone: { type: 'string', description: 'IANA id like America/New_York or decimal offset' },
      },
      required: ['date', 'time', 'latitude', 'longitude', 'timezone'],
    },
  },
];
```

## Step 2, execute tool calls

When the model returns a tool call, route it to the right Roxy endpoint. Pick your preferred call style.

### curl
```bash
# For reference. The tool-call executor below uses fetch, but this is the underlying shape.
curl "https://roxyapi.com/api/v2/astrology/horoscope/aries/daily" \
  -H "X-API-Key: $ROXY_API_KEY"

curl -X POST https://roxyapi.com/api/v2/tarot/draw \
  -H "X-API-Key: $ROXY_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{"count": 3}'
```

### TypeScript SDK
```typescript
import { createRoxy } from '@roxyapi/sdk';

const roxy = createRoxy(process.env.ROXY_API_KEY!);

async function executeTool(name: string, args: any) {
  if (name === 'get_daily_horoscope') {
    const { data } = await roxy.astrology.getDailyHoroscope({ path: { sign: args.sign } });
    return data;
  }
  if (name === 'draw_tarot_cards') {
    const { data } = await roxy.tarot.drawCards({ body: { count: args.count } });
    return data;
  }
  if (name === 'get_birth_chart') {
    const { data } = await roxy.astrology.generateNatalChart({ body: args });
    return data;
  }
  throw new Error(`unknown tool: ${name}`);
}
```

### Python SDK
```python
import os
from roxy_sdk import create_roxy

roxy = create_roxy(os.environ['ROXY_API_KEY'])

def execute_tool(name, args):
    if name == 'get_daily_horoscope':
        return roxy.astrology.get_daily_horoscope(sign=args['sign'])
    if name == 'draw_tarot_cards':
        return roxy.tarot.draw_cards(count=args['count'])
    if name == 'get_birth_chart':
        return roxy.astrology.generate_natal_chart(**args)
    raise ValueError(f'unknown tool: {name}')
```

### Raw fetch
```javascript
const API_KEY = process.env.ROXY_API_KEY;

async function executeTool(name, args) {
  const routes = {
    get_daily_horoscope: () =>
      fetch(`https://roxyapi.com/api/v2/astrology/horoscope/${args.sign}/daily`, {
        headers: { 'X-API-Key': API_KEY },
      }),
    draw_tarot_cards: () =>
      fetch('https://roxyapi.com/api/v2/tarot/draw', {
        method: 'POST',
        headers: { 'Content-Type': 'application/json', 'X-API-Key': API_KEY },
        body: JSON.stringify({ count: args.count }),
      }),
    get_birth_chart: () =>
      fetch('https://roxyapi.com/api/v2/astrology/natal-chart', {
        method: 'POST',
        headers: { 'Content-Type': 'application/json', 'X-API-Key': API_KEY },
        body: JSON.stringify(args),
      }),
  };
  const route = routes[name];
  if (!route) throw new Error(`unknown tool: ${name}`);
  const res = await route();
  return res.json();
}
```

If a request fails, the response body is `{ error, code }`. Pass that back to the model as the tool result so the agent can explain the issue to the user instead of the loop crashing.
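One defensive pattern is to catch executor exceptions, network failures included, and return them in the same `{ error, code }` shape so the model always receives JSON. A sketch; the `tool_execution_failed` code value is illustrative, not a documented Roxy error code:

```javascript
// Wrap any executeTool implementation so exceptions (network errors,
// unknown tools) come back as data instead of crashing the agent loop.
// The shape mirrors Roxy's { error, code } contract; the code value
// 'tool_execution_failed' here is illustrative, not a real Roxy code.
async function safeExecute(executor, name, args) {
  try {
    return await executor(name, args);
  } catch (err) {
    return { error: String(err.message ?? err), code: 'tool_execution_failed' };
  }
}
```

Call it as `safeExecute(executeTool, name, parsedArgs)` wherever you would call the executor directly.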

## Step 3, wire it into your LLM

Same pattern everywhere. Send messages with tools, check for tool calls, execute, send the result back.
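Each provider snippet below does one round trip. Real conversations can need several tool calls in a row, so production code usually wraps the pattern in a loop. A provider-agnostic sketch, where `callModel` is a placeholder wrapper you write to normalize your provider's response into `{ text, toolCalls, raw }` and `executeTool` is the Step 2 executor:

```javascript
// Schematic multi-turn loop. callModel wraps your provider and returns
// { text, toolCalls, raw }; the tool-result message shape pushed below
// is schematic too, so adapt it to your provider's format.
async function runAgent(callModel, executeTool, messages) {
  for (let turn = 0; turn < 5; turn++) { // cap turns to avoid runaway loops
    const { text, toolCalls, raw } = await callModel(messages);
    if (!toolCalls.length) return text;  // model answered directly
    messages.push(raw);                  // keep the assistant turn in history
    for (const call of toolCalls) {
      const result = await executeTool(call.name, call.args);
      messages.push({ role: 'tool', id: call.id, content: JSON.stringify(result) });
    }
  }
  throw new Error('agent exceeded turn limit');
}
```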

### OpenAI

```javascript
import OpenAI from 'openai';
const openai = new OpenAI();

const response = await openai.chat.completions.create({
  model: 'gpt-4o',
  messages: [{ role: 'user', content: 'What is my horoscope for Aries today?' }],
  tools: tools.map(t => ({ type: 'function', function: t })),
});

const toolCall = response.choices[0].message.tool_calls?.[0];
if (toolCall) {
  const result = await executeTool(toolCall.function.name, JSON.parse(toolCall.function.arguments));
  const followUp = await openai.chat.completions.create({
    model: 'gpt-4o',
    messages: [
      { role: 'user', content: 'What is my horoscope for Aries today?' },
      response.choices[0].message,
      { role: 'tool', tool_call_id: toolCall.id, content: JSON.stringify(result) },
    ],
  });
}
```

### Anthropic

```javascript
import Anthropic from '@anthropic-ai/sdk';
const anthropic = new Anthropic();

const response = await anthropic.messages.create({
  model: 'claude-sonnet-4-5-20250929',
  max_tokens: 1024,
  messages: [{ role: 'user', content: 'Draw me 3 tarot cards' }],
  tools: tools.map(t => ({ name: t.name, description: t.description, input_schema: t.parameters })),
});

const toolUse = response.content.find(block => block.type === 'tool_use');
if (toolUse) {
  const result = await executeTool(toolUse.name, toolUse.input);
  const followUp = await anthropic.messages.create({
    model: 'claude-sonnet-4-5-20250929',
    max_tokens: 1024,
    messages: [
      { role: 'user', content: 'Draw me 3 tarot cards' },
      { role: 'assistant', content: response.content },
      { role: 'user', content: [{ type: 'tool_result', tool_use_id: toolUse.id, content: JSON.stringify(result) }] },
    ],
    tools: tools.map(t => ({ name: t.name, description: t.description, input_schema: t.parameters })),
  });
}
```

### Gemini

```javascript
import { GoogleGenAI, Type } from '@google/genai';
const genai = new GoogleGenAI({ apiKey: process.env.GEMINI_API_KEY });

const geminiTools = [{
  functionDeclarations: tools.map(t => ({
    name: t.name,
    description: t.description,
    parameters: {
      type: Type.OBJECT,
      properties: Object.fromEntries(
        Object.entries(t.parameters.properties).map(([k, v]) => [k, {
          ...v,
          type: v.type === 'string' ? Type.STRING : Type.NUMBER,
        }])
      ),
      required: t.parameters.required,
    },
  })),
}];

const response = await genai.models.generateContent({
  model: 'gemini-2.5-flash',
  contents: 'What is my birth chart for July 15 1990 at 2:30pm in New York?',
  config: { tools: geminiTools },
});

const call = response.functionCalls?.[0];
if (call) {
  const result = await executeTool(call.name, call.args);
  const followUp = await genai.models.generateContent({
    model: 'gemini-2.5-flash',
    contents: [
      { role: 'user', parts: [{ text: 'What is my birth chart for July 15 1990 at 2:30pm in New York?' }] },
      response.candidates[0].content,
      { role: 'user', parts: [{ functionResponse: { name: call.name, response: result, id: call.id } }] },
    ],
    config: { tools: geminiTools },
  });
}
```

## Auto-generate tool definitions from OpenAPI

Every Roxy domain publishes an OpenAPI spec at `https://roxyapi.com/api/v2/{domain}/openapi.json`. Parse it to generate tool schemas instead of writing them by hand.

```bash
curl https://roxyapi.com/api/v2/astrology/openapi.json > astrology.openapi.json
```

Available domains: `astrology`, `vedic-astrology`, `tarot`, `numerology`, `iching`, `dreams`, `crystals`, `angel-numbers`, `biorhythm`, `location`. Use `openapi-typescript` or any OpenAPI-to-tool-schema generator.
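A minimal converter might look like the sketch below. It assumes operations carry an `operationId`, a `summary`, and inline JSON schemas; real specs often use `$ref`, which needs a resolver step this sketch skips:

```javascript
// Turn a parsed OpenAPI document into function-calling tool schemas.
// Operations without a request body get an empty parameters object;
// mapping path and query parameters is left to a full implementation.
function toolsFromOpenApi(spec) {
  const tools = [];
  for (const [path, methods] of Object.entries(spec.paths ?? {})) {
    for (const [method, op] of Object.entries(methods)) {
      const bodySchema = op.requestBody?.content?.['application/json']?.schema;
      tools.push({
        name: op.operationId ?? `${method}_${path.replace(/\W+/g, '_')}`,
        description: op.summary ?? op.description ?? '',
        parameters: bodySchema ?? { type: 'object', properties: {} },
      });
    }
  }
  return tools;
}
```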

## Gotchas

- **Gemini uses `Type` enums.** `Type.STRING`, `Type.NUMBER`, `Type.OBJECT`. Not string literals.
- **OpenAI flattens into `{type: 'function', function: {...}}`.** Anthropic uses `{name, description, input_schema}` directly. Gemini wraps in `functionDeclarations[]`. Same intent, three syntaxes.
- **Timezone accepts both shapes.** IANA string (`"America/New_York"`) or decimal (`-5`). Prefer IANA for DST correctness.
- **Geocode before chart endpoints.** Natal chart, kundli, panchang, synastry need `latitude`, `longitude`, `timezone`. Add a `search_cities` tool and tell the model to call it first.
- **Tell the model to call a tool, not to guess.** System prompt: "For horoscope, birth chart, tarot, numerology, or any spiritual topic, always call a tool. Never answer from training data."
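For the geocoding gotcha, the extra tool can be as small as the schema below. It is illustrative only; wire its execution to the endpoint you find in the `location` domain's OpenAPI spec:

```javascript
// A least-privilege geocoding tool the model calls before any chart tool.
// Schema only; map its execution to Roxy's location domain yourself.
const searchCitiesTool = {
  name: 'search_cities',
  description:
    'Resolve a city name to latitude, longitude, and IANA timezone. ' +
    'Always call this before get_birth_chart when the user gives a place name.',
  parameters: {
    type: 'object',
    properties: {
      query: { type: 'string', description: 'City name, e.g. "New York"' },
    },
    required: ['query'],
  },
};
```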

## What to build next

- The [MCP setup](/docs/mcp) is the zero-config alternative to this guide.
- The [AI chatbot tutorial](/docs/tutorials/ai-chatbot) is a complete working chatbot using both paths.
- The [SDK docs](/docs/sdk) cover typed calls for TypeScript, Python, and PHP.
- The [API reference](/api-reference) has a pre-filled test key to explore any endpoint before wiring a tool for it.
