Use your own LLM to regenerate. The API returns a RAIL-guided prompt and a session ID; you regenerate with your model, then submit the result via safeRegenerateContinue().
```typescript
// Step 1: Start a session — the API evaluates the content and returns a guided prompt
const initial = await client.safeRegenerate({
  content: "Content that needs improvement...",
  maxRegenerations: 3,
  thresholds: { overall: { score: 8.0 } }
});

if (initial.status === "awaiting_regeneration" && initial.rail_prompt) {
  const { system_prompt, user_prompt } = initial.rail_prompt;

  // Regenerate with your model
  const completion = await openai.chat.completions.create({
    model: "gpt-4o",
    messages: [
      { role: "system", content: system_prompt },
      { role: "user", content: user_prompt }
    ]
  });

  // Step 2: Submit regenerated content for re-evaluation
  const continued = await client.safeRegenerateContinue({
    sessionId: initial.session_id,
    regeneratedContent: completion.choices[0].message.content
  });

  console.log(continued.status);       // "passed" or "awaiting_regeneration"
  console.log(continued.best_content); // Best content so far
}
```
Sessions expire after 15 minutes. Expired sessions throw a SessionExpiredError.
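Because an expired session throws, it is worth catching SessionExpiredError and starting a fresh session rather than retrying the same session ID. The sketch below is illustrative only: the real SDK client is replaced by a minimal mock (`mockClient`, `sessionCreatedAt`, and the mock's expiry logic are assumptions, not part of the documented API) so the handling pattern can run standalone.

```typescript
// Hypothetical error class mirroring the SDK's SessionExpiredError.
class SessionExpiredError extends Error {}

interface ContinueResult {
  status: string;
  best_content: string;
}

// Mock client (assumption): simulates a session created 16 minutes ago,
// past the documented 15-minute expiry window.
const mockClient = {
  sessionCreatedAt: Date.now() - 16 * 60 * 1000,
  async safeRegenerateContinue(args: {
    sessionId: string;
    regeneratedContent: string;
  }): Promise<ContinueResult> {
    if (Date.now() - this.sessionCreatedAt > 15 * 60 * 1000) {
      throw new SessionExpiredError(`Session ${args.sessionId} has expired`);
    }
    return { status: "passed", best_content: args.regeneratedContent };
  }
};

// Pattern: treat expiry as a signal to open a new session via safeRegenerate(),
// not as a transient error to retry against the same session ID.
async function submitWithExpiryHandling(
  sessionId: string,
  content: string
): Promise<string> {
  try {
    const result = await mockClient.safeRegenerateContinue({
      sessionId,
      regeneratedContent: content
    });
    return result.status;
  } catch (err) {
    if (err instanceof SessionExpiredError) {
      // Caller should restart with a fresh safeRegenerate() call here.
      return "expired";
    }
    throw err;
  }
}
```

In a real integration the catch branch would call safeRegenerate() again with the best content produced so far, since the expired session's state is gone.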