Workflow Automation Patterns — 30 Common Patterns for AI Pipelines
A catalog of 30 workflow automation patterns used in AI pipelines, from simple sequential chains to complex saga orchestrations. Each pattern includes use cases, complexity ratings, and flow descriptions sourced from developer community discussions and distributed systems literature.
By Michael Lip · Updated April 2026
Methodology
Patterns cataloged from distributed systems literature (Hohpe & Woolf Enterprise Integration Patterns, Microsoft Azure Architecture Center, AWS Step Functions patterns), open-source workflow engines (Temporal, Airflow, Prefect, n8n), and Stack Overflow developer discussions (14 threads, 12.8K+ combined views on workflow automation). Complexity ratings are on a 1-5 scale based on implementation effort, debugging difficulty, and operational overhead. AI-specific adaptations verified through hands-on implementation with Claude API and LangChain. Data compiled April 2026.
| Pattern | Category | Flow Description | AI Pipeline Use Case | Complexity |
|---|---|---|---|---|
| Sequential Pipeline | Linear | A → B → C → D | Summarize → translate → format output | 1/5 |
| Chain with Context | Linear | A → B(+A) → C(+A,B) | Accumulate context across prompt chain | 2/5 |
| Waterfall | Linear | A → B → C, each validates before next | Input validation → processing → quality check | 2/5 |
| Fan-Out / Fan-In | Parallel | A → [B,C,D] → E | Analyze text for sentiment, entities, and topics simultaneously | 3/5 |
| Scatter-Gather | Parallel | Broadcast → collect → merge | Query multiple LLMs, merge best responses | 3/5 |
| Map-Reduce | Parallel | Split → map(fn) → reduce | Process 100 documents in parallel, aggregate results | 3/5 |
| Competing Consumers | Parallel | Queue → N workers → results | Process prompt queue with worker pool | 3/5 |
| Pipeline with Workers | Parallel | Stages with N workers each | High-throughput content generation | 4/5 |
| Retry with Backoff | Error Handling | Try → fail → wait(2^n) → retry | Handle API 429s and 500s automatically | 2/5 |
| Circuit Breaker | Error Handling | Monitor → open → half-open → close | Stop calling failing API, switch to fallback | 3/5 |
| Dead Letter Queue | Error Handling | Failed items → DLQ → manual review | Capture failed generations for human review | 2/5 |
| Fallback Chain | Error Handling | Try A → fail → try B → fail → try C | Claude → GPT-4 → Gemini fallback | 2/5 |
| Bulkhead | Error Handling | Isolate pools of resources | Separate API keys for critical vs. batch tasks | 3/5 |
| Saga (Orchestration) | Transaction | Steps with compensating actions | Generate → save → publish with rollback | 4/5 |
| Saga (Choreography) | Transaction | Events trigger compensations | Event-driven content pipeline with undo | 5/5 |
| Two-Phase Commit | Transaction | Prepare → commit/abort | Multi-system content deployment | 4/5 |
| Router | Branching | Input → route(condition) → handler | Route by language, topic, or complexity | 2/5 |
| Content-Based Router | Branching | Inspect content → route to handler | Classify intent → route to specialist prompt | 2/5 |
| Dynamic Router | Branching | Route based on runtime config | A/B test different prompt strategies | 3/5 |
| Splitter | Branching | One input → multiple messages | Split document into chapters for parallel processing | 2/5 |
| Aggregator | Composition | Collect related messages → combine | Merge parallel analysis results into report | 3/5 |
| Enricher | Composition | Add data from external source | Augment prompt with RAG context before LLM call | 2/5 |
| Filter | Composition | Remove unwanted items from flow | Filter out low-quality generations before publishing | 1/5 |
| Normalizer | Composition | Convert varied formats → standard | Standardize outputs from different LLM providers | 2/5 |
| Polling Consumer | Trigger | Check source → process if new | Poll for new content requests on schedule | 2/5 |
| Event-Driven | Trigger | Event → trigger workflow | Webhook triggers AI processing pipeline | 3/5 |
| Cron/Scheduled | Trigger | Timer → run workflow | Daily batch content generation | 1/5 |
| Human-in-the-Loop | Approval | Auto → pause → human review → continue | AI drafts content, human approves before publish | 3/5 |
| Approval Gate | Approval | Step → gate(approve/reject) → continue/stop | Quality threshold check before deployment | 2/5 |
| Feedback Loop | Iterative | Generate → evaluate → refine → repeat | Iterative prompt refinement until quality threshold | 3/5 |
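Several of the Error Handling rows above compose naturally. As one concrete illustration, the Fallback Chain (try A → fail → try B) can be sketched in a few lines. This is a minimal sketch, not a production implementation; `claude` and `gpt4` are hypothetical stubs standing in for real API clients.

```python
def fallback_chain(providers, prompt):
    """Try each (name, call) provider in order; return the first success.

    providers: list of (name, callable) pairs, e.g. Claude -> GPT-4 -> Gemini.
    Raises RuntimeError only if every provider fails.
    """
    errors = []
    for name, call in providers:
        try:
            return name, call(prompt)
        except Exception as exc:  # in practice, catch provider-specific errors
            errors.append((name, repr(exc)))
    raise RuntimeError(f"all providers failed: {errors}")


# Hypothetical stubs standing in for real API clients.
def claude(prompt):
    raise TimeoutError("primary provider down")

def gpt4(prompt):
    return f"gpt-4 answer to: {prompt}"
```

With the stubs above, `fallback_chain([("claude", claude), ("gpt-4", gpt4)], "hi")` skips the failing primary and returns the secondary's answer along with its name, so callers can log which provider actually served the request.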
Key Findings
Sequential Pipeline and Fan-Out/Fan-In cover approximately 80% of real-world AI workflow needs. The remaining 20% involve error handling patterns (Retry, Circuit Breaker) that are critical for production reliability. Stack Overflow data confirms that developers struggle most with the conceptual difference between orchestration and choreography (2.5K views, 7 upvotes) and choosing between workflow tools like Make vs. Ant (2.3K views, 9 answers). The most underused pattern is the Feedback Loop, which enables iterative self-improvement but is complex to implement with current LLM APIs.
Pattern Selection Guide
For prototyping: start with Sequential Pipeline (complexity 1/5). For production: add Retry with Backoff and Circuit Breaker (complexity 2-3/5). For scale: implement Fan-Out/Fan-In with Competing Consumers (complexity 3/5). For mission-critical: add Saga pattern with compensating actions (complexity 4-5/5). The Docker-based workflow automation thread (2.9K views) and Spring module discussion (1.6K views) on Stack Overflow confirm these are the most-requested patterns among developers building production systems.
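Retry with Backoff, the first production addition recommended above, fits in a few lines. A minimal sketch with exponential delay plus jitter; real code would catch only retryable errors (429s, timeouts) rather than bare `Exception`:

```python
import random
import time

def retry_with_backoff(fn, max_attempts=5, base_delay=1.0):
    """Call fn(); on failure wait base_delay * 2**attempt (plus jitter), then retry."""
    for attempt in range(max_attempts):
        try:
            return fn()
        except Exception:
            if attempt == max_attempts - 1:
                raise  # out of attempts: surface the last error
            time.sleep(base_delay * 2 ** attempt + random.uniform(0, base_delay))
```

The jitter term matters in practice: without it, many workers that fail together retry together, hammering the recovering API in synchronized waves.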
Frequently Asked Questions
What are the most common workflow automation patterns for AI?
The most common patterns are: Sequential Pipeline (steps run one after another), Fan-Out/Fan-In (parallel execution with aggregation), Retry with Backoff (automatic failure recovery), Circuit Breaker (preventing cascade failures), and Map-Reduce (splitting work across parallel workers). These five patterns cover approximately 80% of production AI workflow needs.
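The Sequential Pipeline reduces to a fold over a list of steps. A minimal sketch; the three steps here are hypothetical stand-ins for the summarize → translate → format LLM calls named in the table:

```python
def run_pipeline(steps, data):
    """Sequential Pipeline: feed each step's output into the next (A -> B -> C)."""
    for step in steps:
        data = step(data)
    return data

# Hypothetical stand-ins for LLM calls (summarize -> translate -> format).
summarize = lambda text: text.split(".")[0] + "."
translate = lambda text: text.upper()        # pretend translation
format_output = lambda text: {"body": text}
```

For example, `run_pipeline([summarize, translate, format_output], "First sentence. Second sentence.")` returns `{"body": "FIRST SENTENCE."}`. Because each step only sees the previous step's output, swapping in a real LLM call changes nothing about the pipeline itself.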
What is the difference between orchestration and choreography in workflows?
Orchestration uses a central controller that directs all workflow steps — one component tells others what to do and when. Choreography has each component act independently based on events, with no central coordinator. Orchestration is simpler to debug and monitor but creates a single point of failure. Choreography scales better but is harder to reason about. Most AI pipelines use orchestration for clarity.
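The structural difference can be shown side by side. A minimal sketch, assuming an in-process event bus; real choreography would use a message broker, and the `EventBus` class here is a hypothetical illustration, not a library API:

```python
# Orchestration: a central controller invokes each step, in an order it decides.
def orchestrate(steps, data):
    for step in steps:
        data = step(data)
    return data

# Choreography: components react to published events; no one is "in charge".
class EventBus:
    def __init__(self):
        self.handlers = {}

    def subscribe(self, event, handler):
        self.handlers.setdefault(event, []).append(handler)

    def publish(self, event, payload):
        for handler in self.handlers.get(event, []):
            handler(payload)
```

In the choreographed version, the drafting component simply publishes a `"drafted"` event; whichever components have subscribed (a reviewer, a logger) react on their own. The control flow lives in the subscription graph rather than in any single function, which is exactly why it is harder to trace.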
When should I use the saga pattern in AI workflows?
Use the saga pattern when your workflow involves multiple steps that each have side effects (API calls, database writes, external notifications) and you need to undo previous steps if a later step fails. In AI pipelines, this is common when generating content, saving to a CMS, then publishing — if publishing fails, you need to roll back the CMS save. Each step defines a compensating action for rollback.
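The orchestrated saga described above can be sketched as a loop over (action, compensation) pairs. A minimal sketch; production compensations would themselves need retries and logging, which this omits:

```python
def run_saga(steps):
    """Run each (action, compensate) pair in order. If any action fails,
    run the compensations for already-completed steps in reverse order,
    then re-raise the original error."""
    completed = []
    try:
        for action, compensate in steps:
            action()
            completed.append(compensate)
    except Exception:
        for compensate in reversed(completed):
            compensate()  # real compensations should be retried and logged
        raise
```

For the CMS example: `save` pairs with `delete_draft`, and `publish` pairs with a no-op (nothing to undo if the final step fails). If `publish` raises, the saga runs `delete_draft` and the caller sees the original publish error.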
How do I choose between sequential and parallel workflow patterns?
Use sequential when each step depends on the output of the previous step (e.g., summarize then translate). Use parallel (fan-out) when steps are independent and can run simultaneously (e.g., analyze sentiment AND extract entities from the same text). Parallel execution cuts total latency to roughly that of the slowest step rather than the sum of all steps, but increases complexity, cost, and error handling requirements.
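The fan-out case maps directly onto a thread pool, since API-bound work is I/O-bound. A minimal sketch using Python's standard `concurrent.futures`; the two analyzers are hypothetical stand-ins for real LLM calls:

```python
from concurrent.futures import ThreadPoolExecutor

def fan_out_fan_in(tasks, data):
    """Run independent tasks on the same input concurrently, then gather results.

    tasks: dict mapping result name -> callable. Total latency is roughly
    that of the slowest task, not the sum of all tasks.
    """
    with ThreadPoolExecutor(max_workers=len(tasks)) as pool:
        futures = {name: pool.submit(fn, data) for name, fn in tasks.items()}
        return {name: future.result() for name, future in futures.items()}

# Hypothetical analyzers standing in for LLM calls.
sentiment = lambda text: "positive" if "great" in text else "neutral"
entities = lambda text: [w for w in text.split() if w.istitle()]
```

Calling `future.result()` in the fan-in loop also re-raises any exception from the corresponding task, so failures surface at the aggregation point rather than being silently dropped.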
What is a circuit breaker pattern and why is it important for AI APIs?
A circuit breaker monitors API call failures and temporarily stops making requests when the failure rate exceeds a threshold (typically 50% over 10 requests). This prevents wasting API credits on calls that will fail, reduces load on struggling services, and allows automatic recovery when the service comes back online. It is critical for AI APIs because rate limit errors (429s) and timeout errors can cascade rapidly through pipelines.
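The closed → open → half-open state machine fits in one small class. A minimal sketch that trips on consecutive failures for brevity; a production breaker would track a rolling failure *rate* (the 50%-over-10-requests threshold mentioned above). The `clock` parameter is injectable purely to make the cooldown testable:

```python
import time

class CircuitBreaker:
    """Closed: calls pass through. Open: calls are rejected until the cooldown
    expires. Half-open: one probe call is allowed through; success closes the
    circuit, failure re-opens it."""

    def __init__(self, max_failures=5, cooldown=30.0, clock=time.monotonic):
        self.max_failures = max_failures
        self.cooldown = cooldown
        self.clock = clock          # injectable for testing
        self.failures = 0
        self.opened_at = None

    def call(self, fn):
        if self.opened_at is not None and self.clock() - self.opened_at < self.cooldown:
            raise RuntimeError("circuit open: call rejected")
        try:
            result = fn()
        except Exception:
            self.failures += 1
            if self.failures >= self.max_failures:
                self.opened_at = self.clock()  # trip the breaker
            raise
        self.failures = 0
        self.opened_at = None  # success closes the circuit
        return result
```

Note that an open circuit fails in microseconds instead of waiting out a full API timeout, which is where the credit and latency savings come from.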