# LLM provider wrappers
Drop-in wrappers that add automatic RAIL scoring to your existing LLM calls. All wrappers return `{ response, content, railScore, evaluation }`.
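As an illustration of the return shape, here is a minimal sketch in TypeScript. The interface fields come from the line above; everything else (the `wrappedChat` name, the `Evaluation` structure, the placeholder scores) is an assumption for demonstration, not the actual SDK surface:

```typescript
// Sketch of the wrapper return shape. Only { response, content, railScore,
// evaluation } is documented; the field types below are assumptions.
interface Evaluation {
  dimension: string;
  score: number;
}

interface WrappedResult<TResponse> {
  response: TResponse;      // the provider's original response object
  content: string;          // extracted assistant text
  railScore: number;        // aggregate RAIL score
  evaluation: Evaluation[]; // per-dimension breakdown (assumed structure)
}

// Mock wrapper standing in for a wrapped provider call (e.g. OpenAI chat).
async function wrappedChat(prompt: string): Promise<WrappedResult<{ id: string }>> {
  const response = { id: "chatcmpl-demo" };      // placeholder provider response
  const content = `echo: ${prompt}`;             // placeholder assistant text
  return {
    response,
    content,
    railScore: 8.7,                              // placeholder aggregate score
    evaluation: [{ dimension: "fairness", score: 9.1 }],
  };
}

wrappedChat("hello").then(({ content, railScore }) => {
  console.log(content, railScore);
});
```

The point of the shape is that the provider's raw `response` stays available untouched, while `content`, `railScore`, and `evaluation` save you from re-parsing it.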
## OpenAI

## Anthropic

## Google Gemini

## Observability

### Langfuse

## Guardrail handler

## Error handling
| Error | Status | When |
|---|---|---|
| `AuthenticationError` | 401 | Invalid or missing API key |
| `InsufficientCreditsError` | 402 | Not enough credits |
| `InsufficientTierError` | 403 | Feature requires a higher plan |
| `ValidationError` | 400 | Invalid parameters |
| `ContentTooLongError` | 400 | Content exceeds the maximum length |
| `SessionExpiredError` | 410 | Safe-regenerate session expired |
| `ContentTooHarmfulError` | 422 | Content average score below 3.0 |
| `RateLimitError` | 429 | Rate limit exceeded |
| `RAILBlockedError` | — | Content blocked by the policy engine |
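The error names in the table above are taken from the docs; the import path and call being guarded are assumptions, so this sketch defines minimal local stand-ins to stay self-contained. In your code you would import the classes from the SDK package instead:

```typescript
// Minimal stand-ins for the SDK's typed errors (names from the table above;
// the real exports and their import path are assumptions).
class RateLimitError extends Error {
  readonly status = 429;
}
class ContentTooHarmfulError extends Error {
  readonly status = 422;
}

// Placeholder for an SDK call that may throw one of the typed errors.
async function evaluateContent(text: string): Promise<number> {
  if (text.length === 0) throw new ContentTooHarmfulError("average score below 3.0");
  return 8.2; // placeholder score
}

// instanceof narrowing lets each error class get its own recovery path.
async function safeEvaluate(text: string): Promise<number | null> {
  try {
    return await evaluateContent(text);
  } catch (err) {
    if (err instanceof RateLimitError) {
      return null; // back off and retry later
    }
    if (err instanceof ContentTooHarmfulError) {
      return null; // block or regenerate the content
    }
    throw err; // unexpected errors propagate
  }
}
```

Catching the specific subclasses rather than a bare `catch` keeps genuinely unexpected failures loud while the documented, recoverable ones are handled in place.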
## Utility functions

## TypeScript types

## What’s next
- **Evaluation API Reference**: Full HTTP parameters and response schema.
- **Compliance API**: GDPR, HIPAA, the EU AI Act, and more.
- **Credits and Pricing**: Credit costs per endpoint and mode.
- **Integrations Overview**: All supported LLM providers and observability tools.