Build an AI Astrology Chatbot with Next.js: Free Open Source Starter Template
Step-by-step guide to building an AI-powered astrology, tarot, and numerology chatbot using Next.js, MCP, and RoxyAPI. Free open source template included.
I wanted to build an astrology chatbot that did not hallucinate planet positions.
That sounds obvious, but if you have tried asking any LLM "Where is my Saturn right now?", you know the problem. It makes up a position. Confidently. And then interprets that made-up position as if it were real. Tarot readings are random words strung together. Numerology calculations are wrong half the time. The LLM has no idea it is lying to your user.
The fix is straightforward: compute the real data with an actual ephemeris engine, then hand the structured result to the LLM for interpretation. The LLM does what it is good at (natural language, tone, personality) and the API does what it is good at (math).
I built an open source starter template that does exactly this. It covers 8 spiritual domains, supports 3 LLM providers, auto-discovers 110+ API tools via MCP, and deploys to Vercel in under 30 minutes.
Here is how it works, how to set it up, and how to customize it for your own product.
Clone the free starter on GitHub or browse all starter apps.
The Problem with AI-Only Astrology Chatbots
Most astrology chatbot implementations fall into one of two traps.
Trap 1: Pure LLM. You send the user question directly to GPT or Claude. The model generates a response based on its training data. It sounds plausible. But the planetary positions are wrong, the tarot card descriptions are generic, and the numerology math is off. Your users who actually know astrology will notice immediately.
Trap 2: Wrapper APIs. Some API providers bundle their own AI interpretation layer. You send birth data and a question, and you get back a pre-written natural language response. This works, but you are locked into their AI voice, their interpretation style, their LLM, and their single domain (usually just astrology). You cannot customize the personality, switch models, or add tarot and numerology to the same chatbot.
The approach that actually works is separation of concerns. Let the API handle calculations. Let the LLM handle language. You control both layers.
Architecture: How the Chatbot Works
```text
User message
  -> LLM picks the right tool (e.g. "get birth chart")
  -> MCP calls RoxyAPI endpoint
  -> API returns computed data (planet positions, card draws, number calculations)
  -> LLM interprets the structured data
  -> Streams natural language response to user
```
The key piece is MCP (Model Context Protocol). Instead of manually defining every API endpoint as a tool for the LLM, the chatbot connects to RoxyAPI MCP servers at startup and auto-discovers all available tools. That is 110+ endpoints across 8 domains. The LLM sees them as callable functions and picks the right one based on the user question.
When RoxyAPI adds new endpoints, the chatbot picks them up automatically. No code changes needed.
What the user can ask
| Domain | Example questions |
|---|---|
| Western Astrology | "What is in my birth chart?" "What are this week's planetary transits?" |
| Vedic Astrology | "Generate my Kundli" "What is my current Mahadasha?" "Check gun milan compatibility" |
| Tarot | "Draw me a three-card spread" "Daily tarot card" "Yes or no reading" |
| Numerology | "What is my life path number?" "Are these two names compatible?" |
| I-Ching | "Cast a hexagram about my career" |
| Dreams | "I dreamed about flying over water" |
| Crystals | "What crystal is good for anxiety?" |
| Angel Numbers | "I keep seeing 444 everywhere" |
The chatbot also responds in the user's language. If someone asks in Hindi, it responds in Hindi. Spanish, French, Japanese, same thing. The LLM handles translation naturally.
Setting It Up (Under 30 Minutes)
Prerequisites
- Node.js 18+
- A RoxyAPI key (powers all calculations)
- An LLM API key (Google, Anthropic, or OpenAI)
Clone and install
```bash
git clone https://github.com/RoxyAPI/astrology-ai-chatbot.git
cd astrology-ai-chatbot
npm install
cp env.example .env.local
```
Add your keys
Open `.env.local` and add two keys:

```bash
ROXYAPI_KEY=your_roxyapi_key_here
LLM_PROVIDER=gemini
GOOGLE_GENERATIVE_AI_API_KEY=your_google_key_here
```

That is it. Run `npm run dev` and open `localhost:3000`.
Choosing an LLM provider
The starter ships with support for all three major providers. You swap with one environment variable:
| Provider | Env var | Model | Cost per 1M tokens (in/out) |
|---|---|---|---|
| Google Gemini (default) | GOOGLE_GENERATIVE_AI_API_KEY | Gemini 2.0 Flash | $0.10 / $0.40 |
| Anthropic | ANTHROPIC_API_KEY | Claude Haiku 4.5 | $1.00 / $5.00 |
| OpenAI | OPENAI_API_KEY | GPT-4o Mini | $0.15 / $0.60 |
Gemini is the default because it has a free tier and is the cheapest at scale. Claude gives the best interpretation quality in my testing. GPT is what most developers are already familiar with.
To switch, change `LLM_PROVIDER` in your `.env.local`:

```bash
LLM_PROVIDER=anthropic
ANTHROPIC_API_KEY=your_anthropic_key
```
The Vercel AI SDK abstracts all three behind the same interface. Same code, different model.
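Under the hood, `src/lib/ai.ts` hands the chosen model id to the matching AI SDK provider factory. Setting the SDK wiring aside, the selection logic boils down to a small lookup, sketched here as a hypothetical pure function (the model id strings are assumptions based on the table above; check each provider's docs for the exact current ids):

```typescript
// Hypothetical sketch of the provider-selection logic in src/lib/ai.ts.
// The real file passes the model id to an AI SDK factory (google(),
// anthropic(), openai()); here we model just the env-var -> model mapping.
type Provider = "gemini" | "anthropic" | "openai";

interface ModelChoice {
  envVar: string;  // which API key the provider reads
  modelId: string; // default model for that provider (ids are assumptions)
}

const MODEL_TABLE: Record<Provider, ModelChoice> = {
  gemini: { envVar: "GOOGLE_GENERATIVE_AI_API_KEY", modelId: "gemini-2.0-flash" },
  anthropic: { envVar: "ANTHROPIC_API_KEY", modelId: "claude-haiku-4-5" },
  openai: { envVar: "OPENAI_API_KEY", modelId: "gpt-4o-mini" },
};

export function resolveModel(
  raw: string = process.env.LLM_PROVIDER ?? "gemini"
): ModelChoice {
  const choice = MODEL_TABLE[raw.toLowerCase() as Provider];
  if (!choice) {
    throw new Error(`Unknown LLM_PROVIDER "${raw}" (expected gemini, anthropic, or openai)`);
  }
  return choice;
}
```

The point of keeping this in one place: switching providers touches a single env var, not scattered call sites.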
Inside the Code
The starter is a standard Next.js 16 app. Here is the full structure:
```text
src/
  app/
    api/chat/route.ts    # Chat endpoint with streaming + MCP tools
    layout.tsx           # Root layout, metadata, JSON-LD
    page.tsx             # Home page
  components/
    chat/
      ChatPanel.tsx      # Main chat container (useChat hook)
      MessageList.tsx    # Message rendering + empty state + suggestions
      MessageBubble.tsx  # Individual message styling
      MessageInput.tsx   # Input field
      StarField.tsx      # Background animation (CSS only)
  lib/
    ai.ts                # Multi-provider LLM configuration
    mcp.ts               # MCP client setup and tool discovery
    prompts.ts           # System prompt (personality, capabilities)
```
The MCP client (tool auto-discovery)
This is the interesting part. The file `src/lib/mcp.ts` connects to 8 RoxyAPI MCP servers and merges all their tools into one object:

```typescript
import { createMCPClient } from "@ai-sdk/mcp";

const API_KEY = process.env.ROXYAPI_KEY!;

const PRODUCTS = [
  "astrology-api",
  "vedic-astrology-api",
  "tarot-api",
  "numerology-api",
  "crystals-api",
  "angel-numbers-api",
  "iching-api",
  "dreams-api",
];

export async function getMCPTools() {
  // One MCP client per RoxyAPI product, connected over HTTP
  const clients = await Promise.all(
    PRODUCTS.map((slug) =>
      createMCPClient({
        transport: {
          type: "http",
          url: `https://roxyapi.com/mcp/${slug}`,
          headers: { "X-API-Key": API_KEY },
        },
      })
    )
  );

  // Merge every server's discovered tools into one flat object
  const toolSets = await Promise.all(clients.map((c) => c.tools()));
  const tools = Object.assign({}, ...toolSets);

  return { tools, close: () => Promise.all(clients.map((c) => c.close())) };
}
```
No manual tool definitions. No function schemas to maintain. The MCP protocol handles discovery, input validation, and response parsing. When RoxyAPI ships a new endpoint, it shows up as a new tool automatically.
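For a sense of what that saves: without MCP, every endpoint needs a hand-written tool definition roughly like the following (a hypothetical schema, not RoxyAPI's actual contract), multiplied by 110+ endpoints and kept in sync with the API by hand:

```typescript
// Hypothetical hand-written tool definition: the boilerplate MCP eliminates.
// Field names and the schema shape here are illustrative only.
export const getBirthChartTool = {
  name: "get_birth_chart",
  description: "Compute a natal chart for a birth date, time, and location.",
  inputSchema: {
    type: "object",
    properties: {
      date: { type: "string", description: "ISO 8601 birth date" },
      time: { type: "string", description: "Local birth time, HH:MM" },
      latitude: { type: "number" },
      longitude: { type: "number" },
    },
    required: ["date", "latitude", "longitude"],
  },
} as const;
```

Every schema change upstream would mean an edit here. Auto-discovery removes that entire maintenance surface.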
The chat endpoint
The API route at `src/app/api/chat/route.ts` is 30 lines total:

```typescript
import { streamText, convertToModelMessages } from "ai";
import { getModel } from "@/lib/ai";
import { getMCPTools } from "@/lib/mcp";
import { SYSTEM_PROMPT } from "@/lib/prompts";

export async function POST(req: Request) {
  const { messages } = await req.json();
  const { tools, close } = await getMCPTools();

  const result = streamText({
    model: getModel(),
    system: SYSTEM_PROMPT,
    messages: await convertToModelMessages(messages),
    tools,
    onFinish: close,
  });

  return result.toUIMessageStreamResponse();
}
```
The LLM receives the user message plus all available tools. It decides which tool to call (if any), calls it via MCP, gets structured data back, and interprets it as natural language. All streamed to the client in real time.
The system prompt
`src/lib/prompts.ts` controls the chatbot personality. The default is a warm, direct spiritual advisor. But this is your product, so customize it:

```typescript
export const SYSTEM_PROMPT = `You are a warm, knowledgeable spiritual advisor...

PERSONALITY:
- Warm but direct. Not overly mystical or vague.
- Explain concepts clearly for people new to these domains.
- Always ground interpretations in the actual data from tool results.
...`;
```
Want a sarcastic astrologer? A clinical numerologist? A Gen Z tarot reader? Change the prompt. The API data stays the same, only the interpretation voice changes.
Customizing for Your Product
Pick your domains
Do you only need tarot and numerology? Edit the `PRODUCTS` array in `src/lib/mcp.ts`:

```typescript
const PRODUCTS = [
  "tarot-api",
  "numerology-api",
];
```
The chatbot will only discover and use tools from those domains.
Rebrand the UI
All components are in `src/components/chat/`. The UI uses Tailwind CSS and shadcn/ui. The space theme, colors, and glass effects are in `src/app/globals.css`. No CSS-in-JS, no styled-components. Just utility classes you can search and replace.
The footer says "Powered by RoxyAPI" with a link to clone the repo. Keep it, change it, remove it, whatever fits your brand.
Generate TypeScript types
The starter includes a script to generate type-safe API types from the RoxyAPI OpenAPI spec:
```bash
npm run generate:types
```
This pulls the latest spec from roxyapi.com/docs and generates TypeScript interfaces for every request and response. Useful if you want to build custom UI components that render structured astrology data.
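For a sense of what the generated output looks like, here is a hypothetical slice (the field names are illustrative, not the actual RoxyAPI schema; run the script against the live spec for the real shapes):

```typescript
// Hypothetical example of the kind of interfaces `generate:types` produces.
// Field names are assumptions for illustration, not RoxyAPI's real schema.
export interface PlanetPosition {
  planet: string;      // e.g. "Saturn"
  sign: string;        // zodiac sign, e.g. "Pisces"
  degree: number;      // 0-29.999 within the sign
  retrograde: boolean;
}

export interface BirthChartResponse {
  planets: PlanetPosition[];
  ascendant: PlanetPosition;
}

// With generated types, a UI component can render tool output safely:
export function formatPlanet(p: PlanetPosition): string {
  const rx = p.retrograde ? " (retrograde)" : "";
  return `${p.planet} at ${p.degree.toFixed(1)}° ${p.sign}${rx}`;
}
```

Typed responses matter most when you move beyond plain chat bubbles, for example rendering a chart wheel or a card grid from the raw tool data.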
Deploy to Production
Vercel (one click)
The README includes a one-click Vercel deploy button. Click it, add your two API keys as environment variables, and you have a production chatbot in about 2 minutes.
Self-hosted
```bash
npm run build
npm start
```
Runs anywhere Node.js runs. Docker, Railway, Fly.io, your own VPS.
Security
All API keys stay server-side. The Next.js API route handles MCP connections and LLM calls. Nothing leaks to the client bundle. The browser only sees the streamed chat response.
Why "Bring Your Own AI" Matters
Some astrology API providers bundle their own AI interpretation layer. You send a question, you get back a canned response. It is simple, but there are real trade-offs:
- You cannot pick your LLM. Stuck with whatever model they chose. If a better model comes out next month, too bad.
- You cannot customize the voice. The interpretation style is fixed. Your brand sounds like their brand.
- You pay for their AI margin. They pass through LLM costs plus their markup. At scale this adds up fast.
- Single domain. Most AI wrapper services only cover astrology. Want to add tarot, numerology, or dream interpretation to the same chatbot? You need a different provider.
With the data + MCP approach, you own the AI layer entirely. Pick the cheapest model for prototyping (Gemini Flash, free tier). Switch to Claude for production quality. Fine-tune the personality for your audience. Cover 8 domains in one chatbot. Your LLM costs are transparent because you are paying the provider directly.
What This Costs to Run
Real numbers for a chatbot handling 100 conversations per day, average 5 messages each:
| Component | Monthly cost |
|---|---|
| RoxyAPI Starter (5,000 requests) | $39 |
| Gemini 2.0 Flash (~500 messages/day) | ~$2-5 |
| Vercel hosting (hobby tier) | $0 |
| Total | ~$41-44/month |
At the Professional tier (50,000 requests), you can handle 1,000+ conversations per day for $149/month plus LLM costs. Check RoxyAPI pricing for the full breakdown.
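If you want to sanity-check these numbers for your own traffic, the arithmetic is simple enough to script. The per-message token counts below are assumptions (tool results inflate the input side); the prices are the Gemini 2.0 Flash figures from the provider table:

```typescript
// Back-of-envelope LLM cost estimator. Token counts per message are
// assumptions for illustration; adjust them to match your observed usage.
interface CostInputs {
  conversationsPerDay: number;
  messagesPerConversation: number;
  inputTokensPerMessage: number;   // prompt + history + tool results
  outputTokensPerMessage: number;  // streamed reply
  inputPricePerM: number;          // USD per 1M input tokens
  outputPricePerM: number;         // USD per 1M output tokens
}

export function monthlyLlmCost(c: CostInputs): number {
  const messages = c.conversationsPerDay * c.messagesPerConversation * 30;
  const inputCost = (messages * c.inputTokensPerMessage / 1_000_000) * c.inputPricePerM;
  const outputCost = (messages * c.outputTokensPerMessage / 1_000_000) * c.outputPricePerM;
  return inputCost + outputCost;
}

// 100 conversations/day, 5 messages each, Gemini 2.0 Flash pricing:
const estimate = monthlyLlmCost({
  conversationsPerDay: 100,
  messagesPerConversation: 5,
  inputTokensPerMessage: 1_500,
  outputTokensPerMessage: 500,
  inputPricePerM: 0.10,
  outputPricePerM: 0.40,
});
// estimate lands around $5/month, the upper end of the ~$2-5 range above
```

The same function with Claude Haiku prices plugged in shows why provider choice dominates the bill at scale.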
Frequently Asked Questions
Q: Do I need to know astrology to build an astrology chatbot? A: No. The API handles all calculations and the LLM handles interpretation. You do not need to know what a Vimshottari Dasha is. The system prompt tells the LLM how to interpret the structured data it receives. Your job is building the product, not learning Vedic astrology.
Q: Can I use this for a commercial product? A: Yes. The starter template is open source. RoxyAPI plans are designed for commercial use. Add your branding, customize the UI, deploy under your domain. The footer credit is optional.
Q: How accurate are the astrology calculations? A: RoxyAPI uses astronomy-engine, an open source ephemeris library validated against NASA JPL DE405 data. Planetary positions are computed from actual astronomical models, not generated by AI. The accuracy is the same you would get from professional astrology software.
Q: Can I add this to an existing Next.js app?
A: Yes. The chat functionality is self-contained in the src/components/chat/ directory and the API route. Copy those files into your existing project, install the dependencies (ai, @ai-sdk/mcp, @ai-sdk/google), add your env vars, and you have a chat feature embedded in your app.
Q: What is MCP and why does this chatbot use it? A: MCP (Model Context Protocol) is the open standard for connecting AI agents to external tools. Instead of manually writing function definitions for every API endpoint, MCP lets the chatbot auto-discover all available tools at runtime. This means when RoxyAPI adds new endpoints (new tarot spreads, new astrology calculations), your chatbot can use them immediately without any code changes. Read more about MCP integration.
Q: Which LLM gives the best astrology interpretations? A: In my testing, Claude Haiku produces the most thoughtful interpretations. Gemini Flash is faster and cheaper, good for high-volume production use. GPT-4o Mini is solid all around. The starter lets you swap with one env var, so try all three and decide based on your quality and cost requirements.
Q: Can the chatbot handle multiple languages? A: Yes. The system prompt instructs the LLM to detect the user language and respond in kind. If someone asks in Hindi, the response comes back in Hindi. This works across all 8 domains. Domain-specific terms (nakshatra names, tarot card names) stay in their original form with brief translations when helpful.
Q: Is there a live demo I can try? A: The starters page has screenshots and links. You can also clone the repo and run it locally in under 5 minutes to see it in action with your own API keys.
Q: How is this different from building a chatbot with just an LLM? A: A pure LLM chatbot makes up astrology data. It will give you a Mars position that is wrong, interpret a tarot card that does not exist in the Rider-Waite deck, or calculate a Life Path number incorrectly. This chatbot calls real APIs to compute real data, then has the LLM interpret the verified results. Every reading is grounded in actual computation.
Get Started
The full source code is on GitHub: github.com/RoxyAPI/astrology-ai-chatbot
Clone it, add two API keys, run npm run dev. You will have a working chatbot in minutes, not months.
Browse all free starter apps including 5 mobile starters for React Native/Expo. Check the API documentation for the full list of 110+ endpoints across 8 domains. View pricing to pick a plan.