# RAIL Score

## Docs

- [Agent: Prompt Injection Detection](https://docs.responsibleailabs.ai/api-reference/agent-prompt-injection.md): POST /railscore/v1/agent/prompt-injection - Detect prompt injection attacks in any text input.
- [Agent: Tool Call Evaluation](https://docs.responsibleailabs.ai/api-reference/agent-tool-call.md): POST /railscore/v1/agent/tool-call - Evaluate a tool call before execution.
- [Agent: Tool Result Scanning](https://docs.responsibleailabs.ai/api-reference/agent-tool-result.md): POST /railscore/v1/agent/tool-result - Scan tool output for PII and injection before passing to the agent.
- [CCPA](https://docs.responsibleailabs.ai/api-reference/ccpa.md): Check AI content against CCPA/CPRA. Covers consumer rights (know, delete, opt-out), sale of personal information, financial incentives, and sensitive data categories.
- [Compliance Check](https://docs.responsibleailabs.ai/api-reference/compliance.md): POST /railscore/v1/compliance/check - Check content against regulatory frameworks.
- [Compliance API](https://docs.responsibleailabs.ai/api-reference/compliance-overview.md): Regulatory compliance API for AI systems. Dedicated India DPDP endpoints with PII scanning, consent management, and breach notification. GDPR, HIPAA, CCPA, and EU AI Act support.
- [DPDP: Emit Events](https://docs.responsibleailabs.ai/api-reference/dpdp-emit.md): Record DPDP compliance events for audit evidence. Automatic session state updates and timer creation for DSR SLAs, breach notifications, and retention deadlines.
- [DPDP: Evaluate](https://docs.responsibleailabs.ai/api-reference/dpdp-evaluate.md): Real-time DPDP compliance gate. Returns allow, block, or require_action verdicts enforcing child protection (S.9), cross-border rules (S.16), and consent requirements (S.6).
- [DPDP: Evidence](https://docs.responsibleailabs.ai/api-reference/dpdp-evidence.md): Generate audit-grade DPDP evidence packets: DSR responses, DPBI breach reports, CERT-In incident reports, consent audits, child protection audits, and SDF annual DPIA packs.
- [DPDP: Require](https://docs.responsibleailabs.ai/api-reference/dpdp-require.md): Get required DPDP actions for each workflow step. Covers data collection, processing, decision-making, communication, retention, and DSR handling with section-level guidance.
- [DPDP: Content Scan](https://docs.responsibleailabs.ai/api-reference/dpdp-scan.md): Scan AI outputs for Indian PII (Aadhaar, PAN, mobile, UPI, passport), child signals under S.9, and purpose drift under S.4. Supports detect, mask, and block modes.
- [DPDP: Session](https://docs.responsibleailabs.ai/api-reference/dpdp-session.md): Create or retrieve DPDP compliance sessions. Track consent state, child flags, events, fulfilled obligations, and active timers across a user journey.
- [DPDP: Timers](https://docs.responsibleailabs.ai/api-reference/dpdp-timers.md): List active DPDP compliance timers for DSR SLAs (90 days), breach notifications (CERT-In 6h, DPBI 72h), and retention/erasure windows. Filter by status and approaching deadlines.
- [EU AI Act](https://docs.responsibleailabs.ai/api-reference/eu-ai-act.md): Check AI systems against EU AI Act requirements. Covers risk classification, prohibited practices, high-risk obligations, transparency requirements, and conformity assessment.
- [Evaluation](https://docs.responsibleailabs.ai/api-reference/evaluation.md): POST /railscore/v1/eval - Score AI content across 8 RAIL dimensions.
- [GDPR](https://docs.responsibleailabs.ai/api-reference/gdpr.md): Check AI content against GDPR. Covers lawful basis, data subject rights, transparency, DPO requirements, cross-border transfers, and data minimization.
- [Health & Verify](https://docs.responsibleailabs.ai/api-reference/health.md): Health check and API key verification endpoints - both are free and require no credits.
- [HIPAA](https://docs.responsibleailabs.ai/api-reference/hipaa.md): Check AI content against HIPAA Privacy and Security Rules. Covers PHI handling, minimum necessary standard, BAA requirements, breach notification, and de-identification.
- [India AI Governance](https://docs.responsibleailabs.ai/api-reference/india-ai-gov.md): Check AI systems against India's AI governance guidelines. Covers fairness, transparency, accountability, safety, privacy, and inclusivity for AI deployed in India.
- [API Reference Overview](https://docs.responsibleailabs.ai/api-reference/overview.md): Base URL, authentication, and complete endpoint listing for the RAIL Score API.
- [Safe Regeneration](https://docs.responsibleailabs.ai/api-reference/safe-regeneration.md): POST /railscore/v1/safe-regenerate - Evaluate and regenerate until content passes threshold.
- [Agent Evaluation](https://docs.responsibleailabs.ai/concepts/agent-evaluation.md): Intercept tool calls, scan tool results, and detect prompt injection in AI agent pipelines.
- [Compliance](https://docs.responsibleailabs.ai/concepts/compliance.md): Check AI content against GDPR, HIPAA, EU AI Act, CCPA, India DPDP, and more.
- [Evaluation](https://docs.responsibleailabs.ai/concepts/evaluation.md): Score AI-generated content across 8 dimensions of responsible AI.
- [Middleware](https://docs.responsibleailabs.ai/concepts/middleware.md): Drop-in provider wrappers that intercept every LLM response and attach a RAIL score automatically.
- [Concepts Overview](https://docs.responsibleailabs.ai/concepts/overview.md): Core concepts behind the RAIL Score platform.
- [Policy Engine](https://docs.responsibleailabs.ai/concepts/policy-engine.md): Declarative rules that translate RAIL scores into actions: block, warn, flag, or allow.
- [RAIL Framework: 8 Dimensions](https://docs.responsibleailabs.ai/concepts/rail-framework.md): The 8 dimensions of responsible AI evaluation - what each measures, score anchors, and examples.
- [Safe Regeneration](https://docs.responsibleailabs.ai/concepts/safe-regeneration.md): Automatically regenerate AI content that scores below your quality threshold.
- [Authentication](https://docs.responsibleailabs.ai/getting-started/authentication.md): API key management, environment variables, rate limits, and security best practices.
- [Credits & Pricing](https://docs.responsibleailabs.ai/getting-started/credits.md): Credit costs per endpoint, mode, and plan - with a full breakdown of requests per plan.
- [Quickstart](https://docs.responsibleailabs.ai/getting-started/quickstart.md): Get up and running with the RAIL Score API in under 5 minutes.
- [RAIL Score Documentation](https://docs.responsibleailabs.ai/index.md): Score AI-generated content across 8 dimensions of responsible AI. Evaluate, enforce policies, and monitor quality at scale.
- [Anthropic Integration](https://docs.responsibleailabs.ai/integrations/anthropic.md): RAILAnthropic - drop-in wrapper for Anthropic Claude.
- [Google Gemini](https://docs.responsibleailabs.ai/integrations/gemini.md): RAILGemini - drop-in wrapper for Google Gemini and Vertex AI that scores every response automatically.
- [Langfuse Integration](https://docs.responsibleailabs.ai/integrations/langfuse.md): Log RAIL scores alongside LLM traces in Langfuse.
- [LiteLLM Integration](https://docs.responsibleailabs.ai/integrations/litellm.md): Use RAIL Score as a LiteLLM guardrail - RAILGuardrail.
- [OpenAI Integration](https://docs.responsibleailabs.ai/integrations/openai.md): RAILOpenAI - drop-in wrapper that scores every OpenAI response.
- [Integrations Overview](https://docs.responsibleailabs.ai/integrations/overview.md): Drop-in wrappers for LLM providers and observability platforms.
- [OpenTelemetry](https://docs.responsibleailabs.ai/observability/opentelemetry.md): Export RAIL scores as OpenTelemetry spans for full observability.
- [Compliance](https://docs.responsibleailabs.ai/sdk/javascript/compliance.md): Check AI content against GDPR, HIPAA, EU AI Act, CCPA, India DPDP, and more using client.complianceCheck().
- [Evaluation](https://docs.responsibleailabs.ai/sdk/javascript/evaluation.md): Score AI content with client.eval(). Basic mode, deep mode, selective dimensions, custom weights, and response object reference.
- [JavaScript SDK Overview](https://docs.responsibleailabs.ai/sdk/javascript/overview.md): Install and use the RAIL Score JavaScript/TypeScript SDK.
- [Providers and Reference](https://docs.responsibleailabs.ai/sdk/javascript/providers.md): LLM provider wrappers (OpenAI, Anthropic, Gemini), observability, error handling, utility functions, and TypeScript types.
- [Safe Regeneration](https://docs.responsibleailabs.ai/sdk/javascript/safe-regeneration.md): Evaluate content against quality thresholds and iteratively regenerate improved versions with client.safeRegenerate().
- [Sessions and Policy](https://docs.responsibleailabs.ai/sdk/javascript/sessions-and-policy.md): Multi-turn session tracking, policy enforcement, and middleware for the RAIL JavaScript SDK.
- [Python: Agent Evaluation](https://docs.responsibleailabs.ai/sdk/python/agent-evaluation.md): client.agent - tool call evaluation, result scanning, and injection detection.
- [Python: Compliance](https://docs.responsibleailabs.ai/sdk/python/compliance.md): client.compliance_check() - regulatory framework checks.
- [Python: Evaluation](https://docs.responsibleailabs.ai/sdk/python/evaluation.md): client.eval() - sync and async evaluation examples.
- [Python: Middleware](https://docs.responsibleailabs.ai/sdk/python/middleware.md): RAILMiddleware - wrap any async LLM function with automatic RAIL evaluation.
- [Python SDK Overview](https://docs.responsibleailabs.ai/sdk/python/overview.md): Install and configure the RAIL Score Python SDK - sync and async clients.
- [Python: Policy Engine](https://docs.responsibleailabs.ai/sdk/python/policy-engine.md): Policy and Rule - declarative score enforcement.
- [Python: Safe Regeneration](https://docs.responsibleailabs.ai/sdk/python/safe-regeneration.md): client.safe_regenerate() - auto-fix content below threshold.
- [Python: Sessions](https://docs.responsibleailabs.ai/sdk/python/sessions.md): RAILSession - track RAIL scores across a multi-turn conversation.
- [Building a Responsible AI Chatbot](https://docs.responsibleailabs.ai/use-cases/ai-chatbot.md): Part 1 of 2 - Setup, basic evaluation, deep analysis, and understanding RAIL scores.
- [AI Chatbot: Production Features](https://docs.responsibleailabs.ai/use-cases/ai-chatbot-production.md): Part 2 of 2 - Provider wrappers, policy enforcement, session tracking, and Langfuse observability.
- [Use Cases](https://docs.responsibleailabs.ai/use-cases/overview.md): End-to-end tutorials for building responsible AI applications with RAIL Score.
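As a quick orientation, the core evaluation endpoint from the listing above can be sketched as a plain HTTP request. This is a minimal illustration only: the path POST /railscore/v1/eval comes from the Evaluation entry, but the base URL, the bearer-token auth scheme, and the body field names (content, mode) are assumptions here, not the documented schema; see the API Reference Overview and Authentication pages for the authoritative details.

```python
import json

def build_eval_request(content: str, api_key: str, mode: str = "basic"):
    """Assemble the pieces of a POST /railscore/v1/eval call.

    The path is taken from the endpoint listing; the auth header and
    body fields ("content", "mode") are illustrative assumptions.
    """
    path = "/railscore/v1/eval"
    headers = {
        "Authorization": f"Bearer {api_key}",  # assumed bearer-token auth
        "Content-Type": "application/json",
    }
    body = json.dumps({"content": content, "mode": mode}).encode("utf-8")
    return path, headers, body

# Build (but do not send) an example request for a basic-mode evaluation.
path, headers, body = build_eval_request(
    "Our model treats all users fairly.", "YOUR_API_KEY"
)
```

Send the resulting pieces with any HTTP client; the Quickstart shows the supported end-to-end flow via the SDKs.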