The foundational design pattern where a complex task is decomposed into a linear sequence of smaller, discrete LLM calls. The output of one step becomes the input (context) for the next step.
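The chaining loop itself is tiny. Here's a minimal sketch; `call_llm` is a hypothetical stand-in for whatever chat-completion API you use, stubbed here for illustration:

```python
def call_llm(prompt: str) -> str:
    # Stub for illustration only; replace with a real LLM client call.
    return f"[response to: {prompt[:40]}]"

def chain(task: str, steps: list[str]) -> str:
    """Run a linear sequence of prompt templates.

    Each step's output is fed into the next step's {input} slot.
    """
    context = task
    for template in steps:
        context = call_llm(template.format(input=context))
    return context

# Example chain: summarize, then extract claims, then draft a rebuttal.
steps = [
    "Summarize the following text:\n{input}",
    "List the key claims in this summary:\n{input}",
    "Draft a rebuttal to these claims:\n{input}",
]
result = chain("Some long source document...", steps)
```

Because each step is a separate call, intermediate outputs can be logged, validated, or cached before the next step runs.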
You're given a broken pipeline. Diagnose the bugs — missing components, wrong order, unnecessary blocks — and fix it.
Write the actual system prompt for an agent built on this pattern. Your prompt is tested against real scenarios and graded by an AI.
The pipeline works, but it's expensive. Swap models, toggle optimizations, and hit cost, quality, and latency targets.
Need a refresher? Read the Prompt Chaining pattern guide →