Enables an agent to critique its own output (or the output of another agent) to identify errors, hallucinations, or areas for improvement, and then iteratively refine the result. It acts as an internal feedback loop.
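The loop described above can be sketched in a few lines. This is a minimal illustration, not a production implementation: `generate` and `critique` are hypothetical stand-ins that would each be an LLM call in a real agent.

```python
def generate(task, feedback=None):
    # Stand-in for an LLM "producer" call: returns a draft,
    # optionally revised to address the critic's feedback.
    draft = f"summary of {task}"
    if feedback:
        draft += " (revised: " + ", ".join(feedback) + ")"
    return draft

def critique(draft):
    # Stand-in for an LLM "critic" call: returns a list of issues
    # found in the draft; an empty list means the draft is accepted.
    issues = []
    if "revised" not in draft:
        issues.append("add supporting detail")
    return issues

def reflect_loop(task, max_rounds=3):
    """Generate, critique, and refine until the critic approves
    or the round budget is exhausted."""
    draft = generate(task)
    for _ in range(max_rounds):
        issues = critique(draft)
        if not issues:  # critic found nothing to fix: stop early
            break
        draft = generate(task, feedback=issues)
    return draft

print(reflect_loop("quarterly report"))
```

The round cap matters in practice: without it, a critic that always finds something to nitpick would loop (and bill) forever.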
You're given a broken pipeline. Diagnose the bugs (missing components, wrong order, unnecessary blocks) and fix it.
Write the actual system prompt for an agent in this pattern. Your prompt is tested against real scenarios and graded by an AI judge.
The pipeline works, but it's expensive. Swap models, toggle optimizations, and hit cost, quality, and latency targets.
Need a refresher? Read the Reflection pattern guide →