Installation
```shell
pip install "rail-score-sdk[litellm]" litellm
```
Use as a guardrail
```python
from rail_score_sdk.integrations import RAILGuardrail
import litellm

# Register RAIL as a guardrail
rail_guardrail = RAILGuardrail(
    api_key="YOUR_RAIL_API_KEY",
    threshold=7.0,
    mode="basic",
    action="block",  # "block" | "warn" | "regenerate"
)
litellm.callbacks = [rail_guardrail]

# All LiteLLM calls are now scored automatically
response = await litellm.acompletion(
    model="gpt-4o",
    messages=[{"role": "user", "content": "Explain quantum computing."}],
)

# Access RAIL data from the response metadata
print(response._hidden_params["rail_score"])
print(response._hidden_params["rail_threshold_met"])
```
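In `warn` mode the call still returns even when the score is low, so you can gate on the hidden params yourself. A minimal sketch of that check; the `passes_rail` helper is ours for illustration, not part of the SDK:

```python
# Hypothetical helper (not part of rail-score-sdk): decide whether to
# trust a response based on the RAIL metadata attached to the response.
def passes_rail(hidden_params: dict, threshold: float = 7.0) -> bool:
    """Return True when the RAIL score meets the configured threshold."""
    score = hidden_params.get("rail_score")
    if score is None:  # guardrail did not run (e.g. callback not registered)
        return False
    return score >= threshold

# Example with a mocked hidden-params dict shaped like the one above
print(passes_rail({"rail_score": 8.2, "rail_threshold_met": True}))  # prints True
```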
Direct usage
```python
from rail_score_sdk.integrations import RAILGuardrail

guardrail = RAILGuardrail(
    api_key="YOUR_RAIL_API_KEY",
    threshold=7.0,
)

result = await guardrail.async_post_call_success_hook(
    data={"messages": [...]},
    response=llm_response,
)
```
Because RAILGuardrail implements LiteLLM's CustomLogger interface, it works with every model LiteLLM supports, including local models served through Ollama.
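To make the hook mechanics concrete, here is a simplified, self-contained sketch of the CustomLogger-style hook shown above. The class, its fixed score, and the exception raised in `block` mode are all our illustrative assumptions; the real RAILGuardrail calls the RAIL API and may signal blocking differently:

```python
import asyncio

class FakeRailGuardrail:
    """Illustrative stand-in for RAILGuardrail; mirrors the hook's shape only."""

    def __init__(self, threshold: float = 7.0, action: str = "warn"):
        self.threshold = threshold
        self.action = action

    async def async_post_call_success_hook(self, data, response):
        score = self._score(response)  # the real SDK calls the RAIL API here
        response.setdefault("_hidden_params", {})
        response["_hidden_params"]["rail_score"] = score
        response["_hidden_params"]["rail_threshold_met"] = score >= self.threshold
        if self.action == "block" and score < self.threshold:
            # Assumed behavior for "block" mode; the real SDK may differ.
            raise ValueError("response blocked by RAIL guardrail")
        return response

    def _score(self, response) -> float:
        return 8.0  # fixed stand-in for a real RAIL score

result = asyncio.run(FakeRailGuardrail().async_post_call_success_hook({}, {}))
print(result["_hidden_params"]["rail_threshold_met"])  # prints True
```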