API Uptime and Latency Transparency Report: Real Data
RoxyAPI measured 99.94% API uptime and 190ms p50 latency across 150,000+ requests. Full SLA transparency report with real latency percentiles.
TL;DR
- RoxyAPI measured 99.94% uptime across 150,000+ automated requests over a week-long monitoring period
- p50 latency: 190ms, p95: 210ms, p99: 232ms, with all four tested endpoints performing within 2ms of each other
- Every failure traced to upstream network infrastructure; zero application-level errors were recorded
- Professional plan ($149/mo) includes a 99.9% uptime SLA. Enterprise plan ($699/mo) includes 99.95% with financial credits. See all plans
About the author: Brett Calloway is a Developer Advocate and AI Integration Specialist with 12 years of experience building APIs and developer tooling. He has led developer relations at two Series B SaaS companies and spoken at PyCon and JSConf on building context-rich AI agents using Model Context Protocol.
Most API providers publish an uptime number on their pricing page and never mention it again. You get a badge that says "99.9%" and a status page that shows green. That is not transparency. That is decoration.
Developers evaluating an API for production use deserve more than a marketing claim. They deserve methodology, raw latency percentiles, per-endpoint breakdowns, and an honest accounting of what went wrong during testing. This API uptime transparency report provides exactly that. We ran continuous automated monitoring against four of the heaviest computation endpoints in the Roxy platform and recorded every request, every failure, and every millisecond of latency over a week-long period. The results follow, with nothing hidden.
Why API Uptime Transparency Matters for Production Apps
Choosing an API is a trust decision. You are handing a critical dependency to a third party, and your users will blame you when it breaks. Yet the industry standard for communicating API reliability is a single percentage on a pricing page, maybe accompanied by a status page that only updates during major outages. That gap between what providers claim and what developers can verify creates real risk.
A 2026 analysis of 215+ API services found that AI and machine learning APIs averaged more incidents per service than any other category, with some providers logging an incident every two and a half days. Payment APIs were the most stable. The category you are evaluating matters as much as the provider. For developers building with spiritual intelligence, wellness, or personality data, there is almost no public reliability data available from any provider. We want to change that by publishing the numbers most companies keep internal.
The Soak Test: Methodology and Setup
Transparency starts with showing your work. Over a week-long continuous monitoring period, we sent automated requests to four of the most computationally intensive endpoints in the Roxy API: Vedic birth chart generation, Western natal chart calculation, full numerology chart analysis, and tarot card draws. These are not lightweight lookups. Each request triggers real astronomical calculations, ephemeris queries, or structured randomization against verified data sets.
The test ran continuously with steady request volume, distributing load evenly across all four endpoints. Requests used realistic payloads with varied birth dates, coordinates, and time zones. The monitoring system recorded HTTP status codes, round-trip latency to the millisecond, and any connection-level errors. No requests were cherry-picked or excluded from the results. The total sample size exceeded 150,000 requests.
Ready to build with an API you can trust? RoxyAPI gives you 9 domains, 122+ endpoints, and a 99.9% uptime SLA starting at $39/month. See all plans.
Measured API Uptime: 99.94% With Zero Application Errors
Across 150,000+ requests, the measured uptime came in at 99.94%. That number is not rounded up or calculated from a favorable window. It represents every single request sent during the monitoring period. The small fraction of failed requests (0.06%) was traced entirely to upstream infrastructure, specifically brief network-level interruptions between monitoring nodes and the API gateway. Not a single failure originated from the application layer. No 500 errors. No timeouts from slow computations. No malformed responses.
To put 99.94% in context, the Professional plan at $149 per month commits to a 99.9% uptime SLA. The measured performance exceeds that commitment. The Enterprise plan at $699 per month commits to 99.95% with financial credits if the guarantee is not met. Our measured uptime falls between these two thresholds, which is exactly the kind of honest data point developers need when choosing a tier.
Latency Percentiles: What Developers Actually Care About
Average latency is the least useful performance metric an API can publish. It hides the tail, and the tail is where your users feel pain. Here are the full percentile breakdowns from the monitoring period:
| Metric | Value |
|---|---|
| p50 (median) | 190ms |
| p95 | 210ms |
| p99 | 232ms |
| Average | 197ms |
| Maximum | ~1,200ms |
The p50 to p99 spread of just 42ms is unusually tight. Most APIs show a 3x to 10x gap between median and p99 latency because tail requests hit cold paths, retries, or resource contention. A 42ms spread means the system behaves predictably under load. The rare maximum outlier at approximately 1,200ms represents a single infrastructure-level hiccup, not a pattern. For comparison, the 2026 Nordic APIs reliability report found that most API incidents across 215+ services resolved within 30 to 90 minutes. Individual request outliers in the low single-digit seconds are standard for any production system and do not indicate degraded service.
Per-Endpoint Consistency: The Surprising Finding
The most interesting result was not the overall numbers. It was the consistency across endpoints with vastly different computation profiles. Here is the per-endpoint average latency:
| Endpoint | Avg Latency | Computation Type |
|---|---|---|
| Vedic birth chart | 198ms | Full planetary position calculation with Lahiri ayanamsha correction |
| Western natal chart | 197ms | Tropical zodiac positions, house cusps, aspect grid computation |
| Numerology chart | 199ms | Pythagorean reduction, pinnacle cycles, karmic debt detection |
| Tarot draw | 197ms | Cryptographic seed-based card selection with position interpretation |
All four endpoints performed within 2ms of each other on average. That is remarkable because these endpoints do fundamentally different things. A Vedic birth chart requires planetary longitude calculations verified against NASA JPL Horizons. A numerology chart runs iterative digit reduction with master number detection. A tarot draw generates cryptographically seeded selections from a 78-card deck. Yet the response times are nearly identical, indicating the infrastructure handles varied workloads without hot spots.
What Went Wrong: An Honest Failure Analysis
Publishing uptime without explaining what failed is incomplete transparency. During the monitoring period, the handful of failed requests shared a common pattern. They were all network-level connection errors, meaning the request never reached the application. These were brief upstream interruptions, the kind that happen when any internet-connected service routes through multiple network hops.
None of the failures were application errors. No endpoint returned a 500 status code. No request timed out waiting for a computation to finish. No response came back malformed or with missing fields. The API application itself maintained a 100% success rate on every request it received. The 0.06% failure rate was entirely attributable to the network path between the monitoring system and the API gateway. For production applications, this pattern means that a simple retry with exponential backoff would have converted every single failed request into a success.
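The retry-with-backoff pattern described above takes only a few lines of shell. This is a minimal sketch, not part of any Roxy SDK; the function name `retry_with_backoff` and the 5-attempt limit are illustrative choices:

```sh
# retry_with_backoff CMD [ARGS...]: run CMD, retrying up to 5 attempts with
# exponentially growing pauses (1s, 2s, 4s, 8s) between tries.
retry_with_backoff() {
  max=5
  delay=1
  attempt=1
  while ! "$@"; do
    if [ "$attempt" -ge "$max" ]; then
      echo "giving up after $max attempts" >&2
      return 1
    fi
    sleep "$delay"
    delay=$((delay * 2))
    attempt=$((attempt + 1))
  done
}

# Wrap any idempotent request, for example:
# retry_with_backoff curl -sf "https://roxyapi.com/api/v2/astrology/zodiac/aries" \
#   -H "X-API-Key: YOUR_API_KEY"
```

Because the failures observed in the soak test lasted seconds, even the first one-second retry would have absorbed most of them.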
How This Compares to Industry Benchmarks
Context matters when evaluating API reliability numbers. Here is how the Roxy soak test results compare to broader industry patterns from the 2026 API reliability landscape:
| Benchmark | Industry Range | Roxy Measured |
|---|---|---|
| Uptime (monthly) | 99.5% to 99.99% | 99.94% |
| p50 latency | 50ms to 500ms (varies by complexity) | 190ms |
| p95 latency | 200ms to 2,000ms | 210ms |
| p99 latency | 500ms to 5,000ms | 232ms |
| Application error rate | 0.01% to 1% | 0.00% |
Payment APIs from major fintech providers tend to sit at the high end of uptime (99.99%+) with very low latency. AI and machine learning APIs sit at the low end, with some providers logging incidents every few days. Spiritual intelligence APIs have almost no public benchmarks, which is why we are publishing ours. Developers building astrology, tarot, or numerology applications should not have to guess at reliability. The Roxy API reference documents every endpoint, and this report documents the infrastructure behind them.
How to Test API Uptime Yourself
You do not have to take our word for it. Here is a working curl command you can use to measure response time against the Roxy API:
```sh
curl -o /dev/null -s -w "HTTP Status: %{http_code}\nTotal Time: %{time_total}s\nConnect: %{time_connect}s\nTTFB: %{time_starttransfer}s\n" \
  -X GET "https://roxyapi.com/api/v2/astrology/zodiac/aries" \
  -H "X-API-Key: YOUR_API_KEY" 
```
This prints the HTTP status code, total round-trip time, TCP connect time, and time to first byte. Run it in a loop to build your own latency distribution. For a quicker start, the interactive API reference lets you make live calls with a test key pre-filled, so you can see response times before committing to a plan. The TypeScript SDK includes built-in timing headers if you want to instrument latency tracking in your application code. Combine both approaches for a complete picture of what your users will experience.
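Turning that loop into your own percentile report is straightforward. The sketch below keeps the network step commented (run it yourself with a real key) and substitutes placeholder data so the analysis runs standalone; the filename `samples.txt` and the sample values are illustrative, not measurements:

```sh
# Step 1 (network, commented out): collect real samples with the curl command
# from above, one total round-trip time in seconds per line.
#   for i in $(seq 1 100); do
#     curl -o /dev/null -s -w "%{time_total}\n" \
#       -X GET "https://roxyapi.com/api/v2/astrology/zodiac/aries" \
#       -H "X-API-Key: YOUR_API_KEY"
#   done > samples.txt

# Placeholder samples (0.180s..0.280s) so Step 2 runs without network access.
awk 'BEGIN { for (i = 180; i <= 280; i++) printf "0.%03d\n", i }' > samples.txt

# Step 2: sort the samples and report p50 / p95 / p99 in milliseconds.
sort -n samples.txt | awk '
  { v[NR] = $1 }
  END {
    printf "n=%d p50=%.0fms p95=%.0fms p99=%.0fms\n",
      NR, v[int(NR*0.50+0.5)]*1000, v[int(NR*0.95+0.5)]*1000, v[int(NR*0.99+0.5)]*1000
  }'
```

A few hundred samples spread across the day gives a distribution you can compare directly against the percentile table in this report.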
SLA Tiers: What Each Plan Guarantees
Not every application needs the same reliability commitment. Roxy offers tiered SLA guarantees that match usage patterns:
| Plan | Price | Requests/Month | Uptime SLA |
|---|---|---|---|
| Starter | $39/mo | 5,000 | Best effort |
| Professional | $149/mo | 50,000 | 99.9% |
| Business | $349/mo | 200,000 | 99.9% |
| Enterprise | $699/mo | 1,000,000 | 99.95% with financial credits |
The Professional plan SLA of 99.9% allows for up to 43 minutes of downtime per month. Our measured 99.94% translates to roughly 26 minutes, leaving a meaningful buffer. The Enterprise SLA of 99.95% allows approximately 22 minutes per month; our measured performance sits just below that threshold, which is exactly why the Enterprise tier backs its guarantee with financial credits whenever the commitment is not met. Every plan includes all 9 domains and 122+ endpoints. You can review the full tier comparison and start a subscription at roxyapi.com/pricing.
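The downtime figures above fall out of simple arithmetic: a 30-day month has 43,200 minutes, and allowed downtime is (100 − SLA%) / 100 of that. A one-liner you can adapt to any uptime level:

```sh
# Allowed downtime per 30-day month (43,200 minutes) for each uptime level.
for sla in 99.9 99.94 99.95; do
  awk -v s="$sla" \
    'BEGIN { printf "%s%% uptime -> %.1f min/month downtime\n", s, (100 - s) / 100 * 43200 }'
done
```

Running this prints 43.2, 25.9, and 21.6 minutes respectively, matching the rounded figures quoted in this section.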
Frequently Asked Questions
Q: What does 99.94% API uptime mean in practice? A: Over a 30-day month, 99.94% uptime means approximately 26 minutes of total downtime. During the Roxy soak test, the failures that caused this 0.06% gap were all brief network-level interruptions lasting seconds each, not prolonged outages. A retry mechanism in your client code would have resolved every single one.
Q: How is API uptime measured differently from status page uptime? A: Status page uptime is self-reported. Providers decide what counts as an incident and when to update the page. Soak test uptime is measured by sending real requests and recording real responses. The 99.94% figure reported here counts every HTTP request that did not receive a successful response, regardless of cause. This is a stricter and more accurate measurement than status page monitoring.
Q: What API latency should I expect from Roxy endpoints? A: Median response time (p50) is 190ms across computation-heavy endpoints like birth chart generation and numerology analysis. The p95 is 210ms and p99 is 232ms. Lighter endpoints like daily horoscopes and single-card draws typically respond faster. All endpoints perform within a 2ms average latency window of each other, indicating consistent infrastructure performance regardless of computation complexity.
Q: Does the SLA guarantee cover all endpoints and all domains? A: Yes. The uptime SLA on Professional and Enterprise plans applies to every endpoint across all 9 API domains: Western astrology, Vedic astrology, numerology, tarot, I Ching, dreams, crystals, angel numbers, and location services. There are no exclusions for specific endpoints or specific domains within the same plan tier.
Q: How do I monitor API uptime for my own integration? A: Use the curl command in the developer section above to build a simple health check. For production monitoring, point any HTTP monitoring tool (Uptime Robot, Pingdom, Better Uptime, or your own cron) at a lightweight endpoint and alert on non-2xx responses. The API returns standard HTTP status codes and includes rate limit headers on every response, so you can see exactly how much of your request quota your monitoring consumes.
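The cron-style health check suggested in the answer above can be sketched in a few lines of shell. The function name `check_health` and the alert behavior (printing to stderr) are illustrative; wire in your own alerting as needed:

```sh
# check_health URL: one-shot health check, cron-friendly.
# Prints "ok <status>" on a 2xx response; alerts and returns 1 otherwise.
check_health() {
  status=$(curl -o /dev/null -s -w "%{http_code}" --max-time 10 \
    -H "X-API-Key: YOUR_API_KEY" "$1")
  case "$status" in
    2??) echo "ok $status" ;;
    *)   echo "ALERT: $1 returned status $status" >&2; return 1 ;;
  esac
}

# Example crontab entry (every 5 minutes):
# */5 * * * * /usr/local/bin/check_health "https://roxyapi.com/api/v2/astrology/zodiac/aries"
```

The `--max-time 10` flag ensures a hung connection registers as a failure instead of blocking the next scheduled check.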
The Case for Publishing Real Numbers
The API reliability landscape in 2026 rewards transparency. Nordic APIs found that payment providers with near-empty status pages (few incidents reported) correlated with the highest developer trust. But that only works when the absence of incidents is genuine. For newer API categories like spiritual intelligence, wellness data, and personality computation, there are almost no public benchmarks. Developers building in these spaces have to evaluate providers on marketing claims alone.
This report is a step toward changing that. We will continue running soak tests and publishing the results. If the numbers get worse, we will publish that too. Developers deserve infrastructure partners who treat reliability as a shared responsibility, not a sales pitch. If you are building an astrology, tarot, numerology, or dream interpretation application, explore the full API and see the performance for yourself.