MCP vs Function Calling vs LangChain: AI Agent Guide

9 min read
Torsten Brinkmann
astrology · MCP · function calling · LangChain · AI agents · integration patterns

Comparing MCP, OpenAI function calling, and LangChain tools for AI agents. Schema, transport, auth, and when to choose each. By RoxyAPI.

TL;DR

  • MCP is the zero-config default for spiritual AI agents. The agent connects to a Streamable HTTP server and discovers every tool at runtime.
  • Function calling is the manual control path. You define each tool schema, route the call, and return the result inside the LLM message loop.
  • LangChain Tools wraps either approach in a class so the agent runs inside a framework with retry, memory, and orchestration baked in.
  • Pick MCP for prototypes and AI assistants. Pick function calling when you need least-privilege tool exposure. Pick LangChain when the agent is part of a larger graph.
  • Build with the astrology API and ship in 30 minutes.

Add astrology, tarot, or numerology data to an AI agent in 2026 and you face a three-way choice. Use Model Context Protocol and the agent discovers the tools automatically. Hand-write OpenAI or Anthropic function calls and you keep precise control over which endpoints are exposed. Wire in LangChain or LangGraph and you inherit a framework with retry, memory, and chain-of-thought orchestration. The three patterns wrap the same RoxyAPI HTTP endpoint, but the JSON schema, transport, auth boundary, and failure mode are different. This guide walks the same POST /astrology/natal-chart request expressed three ways so the tradeoffs land in code, not abstractions.

What does each integration pattern actually look like in code

Each pattern wraps the same RoxyAPI HTTP endpoint, but the wrapper shape differs. MCP exposes a Streamable HTTP server at https://roxyapi.com/mcp/{product}. The agent connects, lists tools, and calls them. Function calling defines a JSON tool schema sent inside every LLM request. The model picks a tool, the app dispatches the call, and the result returns to the loop. LangChain Tools wraps the same logic in a class with name, description, and a _run method.

Pattern | Tool source | Transport | Where the schema lives | What you write | What you skip
MCP | Remote server | Streamable HTTP | RoxyAPI | Domain logic only | Tool schemas, dispatcher
Function calling | Local code | Inside LLM request | You | Each tool schema, dispatcher | Framework boilerplate
LangChain Tools | Local code | LangChain agent | TS or Python class | Tool class, agent wiring | Cross-LLM portability

The mechanical difference is where the JSON schema lives. With MCP, the server publishes it once and every client sees the same shape. With function calling, each app maintains its own copy. With LangChain Tools, the framework regenerates it from the class definition.
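
For concreteness, here is the hand-maintained copy in the function-calling case. A minimal TypeScript sketch in the OpenAI tool-schema shape; the tool and parameter names are illustrative assumptions, not RoxyAPI's documented contract:

// One hand-written tool schema the app owns and re-sends with every LLM request.
// Parameter names are assumptions for illustration.
const natalChartTool = {
  type: "function" as const,
  function: {
    name: "generate_natal_chart",
    description: "Calculate a natal chart for a birth date, time, and place.",
    parameters: {
      type: "object",
      properties: {
        datetime: { type: "string", description: "Birth date and time, ISO 8601" },
        latitude: { type: "number" },
        longitude: { type: "number" },
        timezone: { type: "string", description: "IANA timezone, e.g. Asia/Kolkata" }
      },
      required: ["datetime", "latitude", "longitude", "timezone"]
    }
  }
};

This object rides inside every LLM request, which is where the per-request token cost in the scale section comes from.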

Ready to wire this up? Astrology API gives POST /astrology/natal-chart plus 23 other endpoints under one key. See pricing.

How does the auth boundary differ across the three patterns

The auth model changes across patterns and that changes who holds the key. With Remote MCP the agent client sends a Bearer token or X-API-Key header in the transport config. The November 2025 MCP specification adds OAuth 2.1 with PKCE for public servers, plus refresh-token rotation for production deployments. With function calling, the LLM never sees the key. The dispatcher in your code reads process.env.ROXY_API_KEY and adds the header server-side. With LangChain Tools, the key sits inside the tool class as an instance variable, scoped to the agent process.

Never put the API key in the JSON tool schema or the LLM system prompt. The model will dutifully echo it inside tool calls and conversation logs. Server-side dispatcher only.
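
A minimal dispatcher sketch in TypeScript, assuming the /api/v2 base path shown in the curl example later in this guide and the illustrative request fields from the schema above; only this function ever touches the key:

// Server-side dispatcher: the key lives here and nowhere else.
async function dispatch(name: string, args: Record<string, unknown>) {
  if (name !== "generate_natal_chart") throw new Error(`Unknown tool: ${name}`);
  const res = await fetch("https://roxyapi.com/api/v2/astrology/natal-chart", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      "X-API-Key": process.env.ROXY_API_KEY!, // env only, never the schema or prompt
    },
    body: JSON.stringify(args),
  });
  if (!res.ok) throw new Error(`RoxyAPI returned ${res.status}`);
  return res.json();
}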

OAuth surfaces in MCP earlier than in the other two patterns because Remote MCP servers are public endpoints. Function calling and LangChain stay inside the backend, so a bearer key is enough.

Which pattern handles long multi-step readings best

LangChain wins for multi-step orchestration because the framework was built for it. A natal chart reading that calls /location/search, then /astrology/natal-chart, then /astrology/transits, then summarises into prose maps cleanly to a LangGraph state machine. Each node is a tool call, the graph holds the state, and retries, fallbacks, and parallel branches are first-class.
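
A sketch of that graph in TypeScript, assuming the @langchain/langgraph JS package; the roxy helper, request bodies, and response field names are illustrative assumptions:

import { StateGraph, Annotation, START, END } from "@langchain/langgraph";

// Hypothetical helper around the RoxyAPI v2 base path.
async function roxy(path: string, body?: unknown) {
  const res = await fetch(`https://roxyapi.com/api/v2${path}`, {
    method: body ? "POST" : "GET",
    headers: { "Content-Type": "application/json", "X-API-Key": process.env.ROXY_API_KEY! },
    body: body ? JSON.stringify(body) : undefined,
  });
  if (!res.ok) throw new Error(`RoxyAPI ${res.status} on ${path}`);
  return res.json();
}

// State the graph carries between nodes; field names are illustrative.
const ReadingState = Annotation.Root({
  city: Annotation<string>(),
  place: Annotation<Record<string, unknown>>(),
  chart: Annotation<Record<string, unknown>>(),
  transits: Annotation<Record<string, unknown>>(),
});

// One node per tool call; a summarise LLM node would follow the last edge.
const reading = new StateGraph(ReadingState)
  .addNode("geocode", async (s) => ({ place: await roxy(`/location/search?q=${encodeURIComponent(s.city)}`) }))
  .addNode("chart", async (s) => ({ chart: await roxy("/astrology/natal-chart", { datetime: "1990-04-12T06:30:00", ...s.place }) }))
  .addNode("transits", async (s) => ({ transits: await roxy("/astrology/transits", { datetime: new Date().toISOString(), ...s.place }) }))
  .addEdge(START, "geocode")
  .addEdge("geocode", "chart")
  .addEdge("chart", "transits")
  .addEdge("transits", END)
  .compile();

const result = await reading.invoke({ city: "Mumbai" });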

MCP can chain calls but the orchestration sits inside the LLM agent loop, not in code. The agent decides what to call next based on the previous result. This is fine for chat assistants but harder to test and observe than a graph you wrote.

Function calling is the most explicit. The application drives the loop in code, decides which tool runs after which, and inspects the LLM message at every step. Slowest to write, easiest to debug.
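
A single turn of that loop in TypeScript with the openai SDK, reusing the natalChartTool schema and dispatch function sketched above; the model name is a placeholder:

import OpenAI from "openai";

const client = new OpenAI(); // reads OPENAI_API_KEY from the environment

const messages: OpenAI.Chat.Completions.ChatCompletionMessageParam[] = [
  { role: "user", content: "Natal chart for 1990-04-12 06:30 in Mumbai, please." },
];

// 1. The model sees the schema and decides to call the tool.
const first = await client.chat.completions.create({
  model: "gpt-4o", // placeholder model name
  messages,
  tools: [natalChartTool],
});

const call = first.choices[0].message.tool_calls?.[0];
if (call) {
  // 2. The app, not the model, dispatches the HTTP request.
  const data = await dispatch(call.function.name, JSON.parse(call.function.arguments));
  // 3. The result re-enters the loop as a tool message.
  messages.push(first.choices[0].message);
  messages.push({ role: "tool", tool_call_id: call.id, content: JSON.stringify(data) });
  const final = await client.chat.completions.create({ model: "gpt-4o", messages, tools: [natalChartTool] });
  console.log(final.choices[0].message.content);
}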

What are the tradeoffs at scale

Latency, retry, and cost-per-call diverge once traffic grows. MCP runs over Streamable HTTP, so each tool call is one network round-trip from the agent client to https://roxyapi.com/mcp/astrology plus the LLM round-trip for the decision. Function calling adds one HTTP call inside the dispatcher, but the LLM still pays the schema-token cost on every request.

Concern | MCP | Function calling | LangChain Tools
Token cost per request | Schema fetched once per session | Full schema in every LLM request | Same as function calling
Retry control | Client-side or server-side | You write it | Framework built-in
Observability | MCP server logs | App logs | LangSmith and LangGraph traces
Cold start | Tool list cached | None | Framework boot time

For high-volume production agents, function calling with a hand-tuned schema set is cheapest in tokens. For prototype velocity, MCP wins. For multi-step graphs, LangChain pays for itself.

When to choose each integration pattern

Use this decision flow when adding a spiritual data tool to an agent:

  1. You are targeting an MCP-compatible client and prototype velocity matters. Pick MCP. Claude Code, Cursor, Antigravity, Claude Desktop, and any other MCP-compatible client need zero schema work.
  2. Production agent with strict tool exposure. Pick function calling. Define only the endpoints the agent needs and ship them as JSON tool schemas.
  3. Multi-step graph or part of a larger LangGraph workflow. Pick LangChain Tools. The framework owns retry, memory, and orchestration.
  4. Both at once. Ship MCP first for the assistant integration, then expose a curated subset as function calls for the production app.

Geocode first. Every coordinate-dependent endpoint needs latitude, longitude, and timezone, so the agent calls GET /location/search?q={city} before any chart endpoint.

curl -s "https://roxyapi.com/api/v2/location/search?q=mumbai" \
  -H "X-API-Key: $ROXY_API_KEY"
# Returns latitude, longitude, and timezone for the city.

Then the same POST /astrology/natal-chart call expressed in each pattern. MCP needs only the client config:

{
  "mcpServers": {
    "roxyapi-astrology": {
      "url": "https://roxyapi.com/mcp/astrology",
      "headers": { "X-API-Key": "${ROXY_API_KEY}" }
    }
  }
}

The agent lists every astrology tool from the server, including generateNatalChart. No schema work, no dispatcher.
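
Function calling expresses the same request with the natalChartTool schema and dispatch function sketched in the earlier sections. The third pattern, LangChain Tools, wraps the identical fetch in a tool class. A minimal TypeScript sketch, assuming DynamicStructuredTool from @langchain/core/tools and zod, with the same illustrative field names as above:

import { DynamicStructuredTool } from "@langchain/core/tools";
import { z } from "zod";

// LangChain regenerates the JSON schema from this class definition.
const natalChart = new DynamicStructuredTool({
  name: "generate_natal_chart",
  description: "Calculate a natal chart for a birth date, time, and place.",
  schema: z.object({
    datetime: z.string().describe("Birth date and time, ISO 8601"),
    latitude: z.number(),
    longitude: z.number(),
    timezone: z.string().describe("IANA timezone, e.g. Asia/Kolkata"),
  }),
  func: async (args) => {
    const res = await fetch("https://roxyapi.com/api/v2/astrology/natal-chart", {
      method: "POST",
      headers: {
        "Content-Type": "application/json",
        "X-API-Key": process.env.ROXY_API_KEY!, // key scoped to the agent process
      },
      body: JSON.stringify(args),
    });
    return JSON.stringify(await res.json()); // LangChain tools return strings
  },
});

Bind natalChart to any tool-calling LLM and the agent sees the same capability the MCP server publishes as generateNatalChart.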

FAQ

Is MCP just function calling with extra steps?

No. Function calling sends the tool schema inside every LLM request and the application code dispatches the call. MCP runs the schema and the dispatcher on a separate server, and the agent client connects over Streamable HTTP. The agent never sees the underlying HTTP API. Different separation of concerns, different deployment shape.

Which pattern works with Claude, ChatGPT, and Gemini?

Function calling works with all three because OpenAI Tools, Anthropic Tools, and Gemini function declarations all accept JSON Schema. MCP works with any MCP-compatible client: Claude Code, Cursor, Claude Desktop, Antigravity, plus a growing list. LangChain Tools wraps any LLM that supports tool use.

Can the same agent use MCP and function calling together?

Yes. Many production agents expose a small set of high-traffic tools as function calls for token efficiency, then attach an MCP server for the long tail. The LLM sees both pools as one tool list. RoxyAPI ships an MCP server per product so a team can attach only the domains the agent needs.

How does authentication work for Remote MCP servers in 2026?

The November 2025 MCP specification mandates OAuth 2.1 with PKCE for public servers and refresh-token rotation for production deployments. Bearer API keys still work for personal-tier MCP and team servers. RoxyAPI accepts both, with the bearer flow as the default for individual developers.

Is LangChain Tools worth it for a single-domain agent?

For a single tool call, no. Function calling is shorter. LangChain pays off when the agent runs three or more tool calls in sequence with retry, memory, or branching. A natal chart reading that geocodes, calculates, then summarises is the threshold where the framework starts saving time.

Conclusion

Pick the pattern that matches the client: MCP for prototypes and AI assistants, function calling for production agents that need strict tool exposure, LangChain when the workflow is a multi-step graph. Wire the astrology API in once, expose ten domains under one subscription. The deeper architectural framing lives in REST APIs vs MCP.