# Function Calling
If you are building an AI chatbot or agent that needs to call RoxyAPI, you have two options: MCP (zero config) or function calling (manual tool definitions). This guide covers function calling for OpenAI, Anthropic, and Gemini. For MCP, see the MCP setup guide.
## When to use function calling vs MCP

| | Function calling | MCP |
|---|---|---|
| Setup | You write tool definitions (or auto-generate from OpenAPI) | Zero config, agents discover 110+ tools automatically |
| Control | You pick exactly which tools to expose | All tools available |
| Compatibility | Any model that supports tools | Requires MCP-compatible client |
| Best for | Production apps with tight control | Prototyping, AI assistants, agent frameworks |
## Define your tools
Each tool maps to one RoxyAPI endpoint. Define only the ones your app uses:
```javascript
const tools = [
  {
    name: "get_daily_horoscope",
    description: "Get the daily horoscope for a zodiac sign",
    parameters: {
      type: "object",
      properties: {
        sign: {
          type: "string",
          enum: ["aries", "taurus", "gemini", "cancer", "leo", "virgo",
                 "libra", "scorpio", "sagittarius", "capricorn", "aquarius", "pisces"]
        }
      },
      required: ["sign"]
    }
  },
  {
    name: "draw_tarot_cards",
    description: "Draw tarot cards for a reading",
    parameters: {
      type: "object",
      properties: {
        count: { type: "number", description: "Number of cards to draw (1-10)" }
      },
      required: ["count"]
    }
  },
  {
    name: "get_birth_chart",
    description: "Generate a natal birth chart with planets, houses, and aspects",
    parameters: {
      type: "object",
      properties: {
        date: { type: "string", description: "Birth date YYYY-MM-DD" },
        time: { type: "string", description: "Birth time HH:MM:SS (24h)" },
        latitude: { type: "number" },
        longitude: { type: "number" },
        timezone: { type: "number", description: "UTC offset in decimal hours" }
      },
      required: ["date", "time", "latitude", "longitude", "timezone"]
    }
  }
]
```
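Models occasionally emit arguments that do not match your schema. Before executing a call, you may want a quick sanity check — a minimal sketch with a hypothetical `validateArgs` helper (it only checks required keys and enum membership, not full JSON Schema):

```javascript
// Hypothetical helper: check a model's arguments against a tool's
// JSON-schema `parameters`. Only covers required keys and enums;
// use a real JSON Schema validator for production.
function validateArgs(tool, args) {
  const errors = []
  for (const key of tool.parameters.required ?? []) {
    if (!(key in args)) errors.push(`missing required "${key}"`)
  }
  for (const [key, value] of Object.entries(args)) {
    const schema = tool.parameters.properties?.[key]
    if (schema?.enum && !schema.enum.includes(value)) {
      errors.push(`"${key}" must be one of ${schema.enum.join(", ")}`)
    }
  }
  return errors
}
```

If `validateArgs` returns errors, send them back to the model as the tool result so it can retry with corrected arguments.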
## Execute tool calls
When the model returns a tool call, route it to the correct RoxyAPI endpoint:
```javascript
const API_KEY = process.env.ROXY_API_KEY

async function executeTool(name, args) {
  const routes = {
    get_daily_horoscope: () =>
      fetch(`https://roxyapi.com/api/v2/astrology/horoscope/${args.sign}/daily`, {
        headers: { "X-API-Key": API_KEY }
      }),
    draw_tarot_cards: () =>
      fetch("https://roxyapi.com/api/v2/tarot/draw", {
        method: "POST",
        headers: { "Content-Type": "application/json", "X-API-Key": API_KEY },
        body: JSON.stringify({ count: args.count })
      }),
    get_birth_chart: () =>
      fetch("https://roxyapi.com/api/v2/astrology/natal-chart", {
        method: "POST",
        headers: { "Content-Type": "application/json", "X-API-Key": API_KEY },
        body: JSON.stringify(args)
      })
  }
  if (!routes[name]) throw new Error(`Unknown tool: ${name}`)
  const res = await routes[name]()
  return res.json()
}
```
If a request fails, the response body contains `{ error, code }` so your agent can report the issue to the user.
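One way to surface those errors in an agent loop is a thin wrapper (a sketch; `executeToolSafely` is a hypothetical name, and the executor is passed in so you can plug in the `executeTool` from above):

```javascript
// Wrap a tool executor so failures become messages the model can relay
// instead of uncaught exceptions. `execute` is e.g. executeTool from above.
async function executeToolSafely(name, args, execute) {
  try {
    const body = await execute(name, args)
    if (body && body.error) {
      // RoxyAPI error responses carry { error, code }
      return { ok: false, message: `${name} failed (${body.code}): ${body.error}` }
    }
    return { ok: true, data: body }
  } catch (err) {
    return { ok: false, message: `${name} threw: ${err.message}` }
  }
}
```

Feed `message` back to the model as the tool result on the `ok: false` path so it can explain the failure in plain language.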
## Platform examples
The pattern is the same everywhere: send messages with tools, check for tool calls, execute them, send the result back.
### OpenAI

```javascript
import OpenAI from "openai"

const openai = new OpenAI()

const response = await openai.chat.completions.create({
  model: "gpt-4o",
  messages: [{ role: "user", content: "What is my horoscope for Aries today?" }],
  tools: tools.map(t => ({ type: "function", function: t }))
})

const toolCall = response.choices[0].message.tool_calls?.[0]
if (toolCall) {
  const result = await executeTool(toolCall.function.name, JSON.parse(toolCall.function.arguments))
  // Send the result back as a tool message to get a natural-language response
  const followUp = await openai.chat.completions.create({
    model: "gpt-4o",
    messages: [
      { role: "user", content: "What is my horoscope for Aries today?" },
      response.choices[0].message,
      { role: "tool", tool_call_id: toolCall.id, content: JSON.stringify(result) }
    ]
  })
}
```
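The model can also return several tool calls in a single turn. A sketch that resolves all of them into `tool` messages (a hypothetical `resolveToolCalls` helper; pass in the `executeTool` function from above):

```javascript
// Resolve every tool call in an assistant message into `tool` role messages.
// `message` is response.choices[0].message; `execute` is e.g. executeTool.
async function resolveToolCalls(message, execute) {
  const toolMessages = []
  for (const call of message.tool_calls ?? []) {
    const result = await execute(call.function.name, JSON.parse(call.function.arguments))
    toolMessages.push({ role: "tool", tool_call_id: call.id, content: JSON.stringify(result) })
  }
  return toolMessages
}
```

Append the returned messages after the assistant message in the follow-up request, exactly as in the single-call example.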
### Anthropic

```javascript
import Anthropic from "@anthropic-ai/sdk"

const anthropic = new Anthropic()

const response = await anthropic.messages.create({
  model: "claude-sonnet-4-6",
  max_tokens: 1024,
  messages: [{ role: "user", content: "Draw me 3 tarot cards" }],
  tools: tools.map(t => ({ name: t.name, description: t.description, input_schema: t.parameters }))
})

const toolUse = response.content.find(block => block.type === "tool_use")
if (toolUse) {
  const result = await executeTool(toolUse.name, toolUse.input)
  const followUp = await anthropic.messages.create({
    model: "claude-sonnet-4-6",
    max_tokens: 1024,
    messages: [
      { role: "user", content: "Draw me 3 tarot cards" },
      { role: "assistant", content: response.content },
      { role: "user", content: [{ type: "tool_result", tool_use_id: toolUse.id, content: JSON.stringify(result) }] }
    ],
    tools: tools.map(t => ({ name: t.name, description: t.description, input_schema: t.parameters }))
  })
}
```
### Gemini

```javascript
import { GoogleGenAI } from "@google/genai"

const genai = new GoogleGenAI({ apiKey: process.env.GEMINI_API_KEY })

const prompt = "What is my birth chart for July 15, 1990 at 2:30pm in New York?"

const response = await genai.models.generateContent({
  model: "gemini-2.5-flash",
  contents: prompt,
  config: { tools: [{ functionDeclarations: tools }] }
})

const call = response.functionCalls?.[0]
if (call) {
  const result = await executeTool(call.name, call.args)
  const followUp = await genai.models.generateContent({
    model: "gemini-2.5-flash",
    contents: [
      { role: "user", parts: [{ text: prompt }] },
      { role: "model", parts: [{ functionCall: call }] },
      { role: "user", parts: [{ functionResponse: { name: call.name, response: result } }] }
    ],
    config: { tools: [{ functionDeclarations: tools }] }
  })
}
```
## Tips

- **Use the SDK for typed calls.** The `@roxyapi/sdk` package gives you typed methods for every endpoint. For function calling, raw `fetch` is fine, since you control exactly which endpoints to expose.
- **Auto-generate tool definitions.** Every RoxyAPI domain publishes a public OpenAPI spec at `/api/v2/{domain}/openapi.json` (e.g. `/api/v2/astrology/openapi.json`). Parse it to generate tool definitions instead of writing them by hand.
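A sketch of that auto-generation approach, assuming the spec follows the standard OpenAPI 3 layout (`paths` → method → `operationId`, `summary`, `parameters`); request-body schemas and `$ref` resolution would need extra handling:

```javascript
// Turn an OpenAPI 3 spec into tool definitions. Covers path/query
// parameters only; requestBody schemas and $refs are omitted for brevity.
function toolsFromOpenApi(spec) {
  const tools = []
  for (const [path, methods] of Object.entries(spec.paths ?? {})) {
    for (const [method, op] of Object.entries(methods)) {
      const properties = {}
      const required = []
      for (const p of op.parameters ?? []) {
        properties[p.name] = { ...p.schema, description: p.description }
        if (p.required) required.push(p.name)
      }
      tools.push({
        name: op.operationId ?? `${method}_${path}`.replace(/\W+/g, "_"),
        description: op.summary ?? `${method.toUpperCase()} ${path}`,
        parameters: { type: "object", properties, required }
      })
    }
  }
  return tools
}
```

Usage: `const spec = await (await fetch("https://roxyapi.com/api/v2/astrology/openapi.json")).json()`, then `const tools = toolsFromOpenApi(spec)`.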
## What's next
- MCP Setup for zero-config agent integration
- AI Chatbot Tutorial for a complete working chatbot example
- AI Prompts for building with Cursor or Claude Code
- API Reference for all endpoints and response schemas