Playbook & Worksheets

The Weak Link Method™

A diagnostic framework that helps you find and break the specific bottlenecks ("weak links") in your business outcomes—so your team stops working twice (prep work + real work) and starts working once (just real work).

1. Map The Outcome Chain

   You get: A visual map showing the complete chain of people, activities, time, and handoffs.

2. Reveal The Weak Links

   You get: Top weak links ranked by data—proof of what to fix first.

3. Test What AI Can Actually Fix

   You get: A clear yes/no on which links are brittle (fix with AI) vs. unbreakable (keep human-led).

4. Find Your First Win

   You get: Your #1 automation target, justified by a simple formula—giving you a defensible business case.

5. Create The Blueprint

   You get: A ready-to-build plan that tells your team exactly how to eliminate the work before the work.


Step 1: Make invisible work visible

Map The Outcome Chain

Outcome Map

What observable business result gets delivered?

Trigger: What specific event kicks this off? Be precise.

Role A

  • Activity: What they do - be specific (— min)

  • Activity: What they do - be specific (— min)

Handoff: What gets passed, how

Role B

  • Activity: What they do - be specific (— min)

  • Activity: What they do - be specific (— min)

Handoff: What gets passed, how

Role C

  • Activity: What they do - be specific (— min)

  • Activity: What they do - be specific (— min)

Outcome Delivered: —————————————

Metrics

  • Total people involved: — (count everyone who touched this)

  • Collective time: — hours (total hours spent on the chain activities)

  • Calendar time: — days (from trigger → delivery, including waits)

  • Handoffs: — (count each time work passes between people)

Scaled Impact

  • Frequency: — / mo

  • Monthly collective time: — hours

Insight: —————————————
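
If it helps to see how the numbers roll up, here is a minimal sketch of the four chain metrics plus the scaled impact. The `Activity` structure, role names, durations, and frequency are illustrative assumptions, not part of the worksheet.

```python
from dataclasses import dataclass

@dataclass
class Activity:
    role: str            # who does the work
    minutes: int         # collective time spent on this activity
    handoff_after: bool  # does the work pass to someone else next?

# Hypothetical chain for one outcome (names and numbers are made up)
chain = [
    Activity("Role A", 90, handoff_after=False),
    Activity("Role A", 45, handoff_after=True),   # handoff to Role B
    Activity("Role B", 120, handoff_after=True),  # handoff to Role C
    Activity("Role C", 60, handoff_after=False),
]
calendar_days = 6        # trigger -> delivery, including waits (measured, not computed)
frequency_per_month = 4  # how often this outcome runs

people_involved = len({a.role for a in chain})          # count everyone who touched it
collective_hours = sum(a.minutes for a in chain) / 60   # total hours across the chain
handoffs = sum(1 for a in chain if a.handoff_after)     # each pass between people
monthly_collective_hours = collective_hours * frequency_per_month

print(f"People: {people_involved}, Collective: {collective_hours:.1f} h, "
      f"Calendar: {calendar_days} d, Handoffs: {handoffs}, "
      f"Monthly: {monthly_collective_hours:.1f} h")
```

Note that calendar time is recorded rather than computed: it is the elapsed days from trigger to delivery, waits included.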

What you're doing

Documenting ONE critical business outcome from trigger to delivery.

Not:

  ✗ Mapping your entire operation

  ✗ Documenting ideal process from handbook

  ✗ Creating swimlane diagrams for ISO certification

But:

  ✓ One outcome that matters (monthly reports, client deliverables, etc.)

  ✓ How it ACTUALLY flows today (real process, not aspirational)

  ✓ Where time goes (people, activities, handoffs)

The Three Mapping Questions

Ask these in order:

Q1: "What triggers this outcome?"

    → Be specific: "First business day of month" not "monthly"

    → Write at top of whiteboard

Q2: "What actually happens, step by step?"

    → Who does what first?

    → What gets handed off, to whom?

    → Continue until outcome delivered

Q3: "What are the real costs?"

    → Count people involved

    → Sum collective hours

    → Measure calendar days (trigger → delivery)

    → Count handoffs

What done looks like

You're done with Step 1 when:

✓ The team nods and says "Yes, that's how it really works"

✓ You can see the complete chain (trigger → delivery)

✓ You know where time goes (collective hours by person)

✓ You have four metrics: people, collective time, calendar time, handoffs

Remember, you're NOT trying to:

  ✗ Document every edge case

  ✗ Make it beautiful or formal

  ✗ Get perfect time estimates

Accuracy > polish. Ship the map, move to Step 2.

Common Mistakes to avoid

MISTAKE 1: Mapping the ideal process (from handbook)

FIX: Ask "What ACTUALLY happens?" Map reality, not aspiration

MISTAKE 2: Going too granular (every click and keystroke)

FIX: Stay at activity level, not mouse-click level

MISTAKE 3: Staying too high-level ("Team does reports")

FIX: Get specific enough to see automation opportunities

MISTAKE 4: Mapping multiple outcomes at once

FIX: ONE outcome. Finish it. Then do the next.

Step 2: Turn team complaints into data

Reveal The Weak Links

Weak Link Reveal

Link in the chain | C | B | W | Total

——————————————————- | — | — | — | —

——————————————————- | — | — | — | —

——————————————————- | — | — | — | —

C = Complaints (frustration - "I hate this," "tedious," "dread")

B = Breakage (errors, rework - "breaks X%," "always wrong")

W = Waiting (bottlenecks - "sits for days," "can't start until")

What you're doing

Now that you can SEE the chain (Step 1), you need to HEAR where it rattles.

You'll go through your Outcome Map link by link, counting pain signals at each link.

The insight: Your team already knows what's broken—they complain about it, describe what breaks, tell you where things wait. 

You're not debating. You're counting.

The Three Pain Signal Questions

Signal 1: Complaints (Frustration)

What do people say about this link?

Listen for:

  • "I hate doing this"

  • "This is so tedious"

  • "I dread [this day/this part]"

  • "This takes forever"

  • "We always fight about this"

  • "Why do we still do it this way?"

  • "This is the worst part of my job"

Also watch for non-verbal signals:

  • Eye rolls when it's mentioned

  • Sighs or groans

  • "Here we go again..." tone

  • Slack vents about it

  • Team jokes about the pain

Mark each distinct complaint signal you hear.

This captures emotional/cognitive cost—the frustration tax that drains energy even when time spent is moderate.

Signal 2: Breakage (Errors & Rework)

What goes wrong at this link?

Listen for:

  • "This comes back wrong X% of the time"

  • "We always have to redo this"

  • "There's always something missing"

  • "I catch errors here constantly"

  • "People forget to do their part"

  • "Wrong data gets pasted"

Mark each thing that breaks.

This captures reliability cost—the error tax where mistakes compound and rework eats time.

Signal 3: Waiting (Bottlenecks)

Where does work sit at this link?

Listen for:

  • "We can't start until Y finishes"

  • "I'm always chasing people"

  • "By the time I get it, it's too late"

  • "Blocked on [person/system]"

  • "Calendar availability is always the issue"

  • "Sits in queue for..."

  • "I'm waiting for..."

Mark each waiting point.

This captures latency cost—the bottleneck tax where delays compound across the chain.

Signal Counting Rules

To keep scoring consistent:

RULE 1: Count unique signals, not repetitions

  ✓ Sam says "I hate this" 3 times = 1 signal

  ✗ Not 3 signals

RULE 2: Count per person, per dimension

  ✓ Sam complains + Jeremy complains = 2 signals (different people)

  ✓ Sam complains + Sam describes breakage = 2 signals (different dimensions)

RULE 3: Complaints must be specific

  ✗ "This is kind of annoying" = 0 (too vague)

  ✓ "I dread Mondays because of this" = 1 (specific emotion)

RULE 4: Breakage must be observable

  ✗ "Sometimes goes wrong" = 0 (not measurable)

  ✓ "Breaks 25% of the time" = 1 (observable frequency)

  ✓ "VLOOKUP fails every report" = 1 (specific error)

RULE 5: Waiting must block work

  ✗ "Takes a while" = 0 (not blocking)

  ✓ "Jordan can't start until I finish" = 1 (downstream blocked)

What done looks like

You're done with Step 2 when:

✓ Every link has a signal count

✓ Ranked from highest to lowest

✓ Weak links identified (3+ signals)

Common Mistakes to avoid

✗ MISTAKE: Dismissing complaints as "just venting"

✓ FIX: Complaints ARE data. Count them objectively.

✗ MISTAKE: Waiting for "better data" (time studies, surveys)

✓ FIX: Signal counting is faster and more accurate than studies

✗ MISTAKE: Debating whether something "should" be a signal

✓ FIX: If someone said it, count it. Math removes debate.

Step 3: Separate safe bets from expensive failures

Test What AI Can Actually Fix

RDS Assessment

Weak Link | R | D | S | Total | Level

——————————————————- | — | — | — | — | —

——————————————————- | — | — | — | — | —

——————————————————- | — | — | — | — | —

——————————————————- | — | — | — | — | —

——————————————————- | — | — | — | — | —

R - REPEATABLE (Pattern Strength) 

D - DEFINABLE (Criteria Clarity)

S - SAFE (Error Tolerance)

Score | Automation Level | Human Role

9 | Full Automation | Validator: AI does it, human spot-checks (2-5 min)

7-8 | High Automation | Reviewer: AI does it, human reviews every output and sensitive actions (5-10 min)

5-6 | AI Drafts | Editor: Human finishes and refines AI's first draft (saves 60% of original time)

4 | AI Assists | Operator: Human does it, AI helps with pieces (saves 30% of original time)

3 | Keep Human-Led | Owner: Human owns it, AI can't meaningfully help

What you're doing

Separating brittle weak links (automatable) from unbreakable ones (keep human-led).

The reality: Not all painful work can be automated.

  • Client relationship building = painful but unbreakable (RDS 3)

  • Data merging with format mismatches = painful AND brittle (RDS 9)

You're testing automation potential objectively using RDS:

  • R = Repeatable (can AI learn the pattern?)

  • D = Definable (can you write the rules?)

  • S = Safe (can errors be caught before damage?)

The RDS Scoring System

For each weak link, score 1-3 on:

R - REPEATABLE (Pattern Strength)
D - DEFINABLE (Criteria Clarity)
S - SAFE (Error Tolerance)

Each dimension scored 1-3. Minimum possible score: 3 (if every dimension scores 1). Maximum: 9 (if every dimension scores 3).
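
As a sketch, the score-to-level mapping can be written as a simple lookup that mirrors the table in the worksheet above; the function name and the condensed level descriptions are illustrative.

```python
def automation_level(r: int, d: int, s: int) -> tuple:
    """Map R, D, S scores (each 1-3) to a total (3-9) and an automation level."""
    for score in (r, d, s):
        if score not in (1, 2, 3):
            raise ValueError("Each dimension must be scored 1, 2, or 3")
    total = r + d + s
    if total == 9:
        level = "Full Automation (Validator: human spot-checks, 2-5 min)"
    elif total >= 7:
        level = "High Automation (Reviewer: human reviews every output, 5-10 min)"
    elif total >= 5:
        level = "AI Drafts (Editor: human finishes and refines the first draft)"
    elif total == 4:
        level = "AI Assists (Operator: human does it, AI helps with pieces)"
    else:  # total == 3
        level = "Keep Human-Led (Owner: AI can't meaningfully help)"
    return total, level

print(automation_level(3, 3, 3))  # (9, 'Full Automation ...')
print(automation_level(2, 1, 1))  # (4, 'AI Assists ...')
```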

Here's what each dimension tests:

R - REPEATABLE: Can AI Learn the Pattern?

The core question:
"If you gave someone 10 past examples of this task, could they write instructions that would work for the next 10 instances?"

Why this matters: AI learns from patterns in examples. If every instance follows a repeatable structure (even with variations), AI can learn the pattern. If every instance is completely unique, AI has nothing to learn from.


Score 3 — Almost identical every time, or variations follow clear rules

What this means:
Same steps in same order, OR variations are systematic and predictable

Observable test:

  • Last 5 times you did this task look essentially the same

  • Changes follow learnable patterns (Platform A always uses format X, Platform B always uses format Y)

  • You could write a 1-page checklist that works every time

Example:
Data extraction from standard sources (same fields, same format every month—or predictable platform-specific variations like "Google Ads always uses alphanumeric IDs, Meta always uses numeric IDs")

When to score 3:

  • Can you create a step-by-step checklist that works every time?

  • Could you train a new hire to do this with a 1-page instruction doc?

  • Do you do the exact same steps in the exact same order?

  • OR: Do variations follow predictable rules that can be written down?


Score 2 — Follows loose structure with predictable variations

What this means:
There's a template or pattern you follow, but you adapt based on context

Observable test:

  • There's a recognizable structure, but details vary by situation

  • You need judgment to decide how to adapt the template

  • Similar, but not identical, each time

Example:
Writing client email updates (always same sections—progress, issues, next steps—but specific content varies by project. You're following a template but customizing based on what happened this week and client's communication style)

When to score 2:

  • Is there a template or structure you follow, but with judgment calls?

  • Do you adapt the approach based on circumstances?

  • Could you teach this, but it would require examples and coaching, not just a checklist?


Score 1 — Every instance unique, no learnable pattern

What this means:
Every situation is fundamentally different; you "figure it out fresh" each time

Observable test:

  • Last 5 times sound completely different from each other

  • No template or pattern connects them

  • Requires deep contextual knowledge and judgment

Example:
Contract negotiation (each client's leverage, needs, alternatives, and relationship dynamics are unique—what worked with Client A won't work with Client B)

When to score 1:

  • Is every situation fundamentally different?

  • Do you "figure it out fresh" each time?

  • Would teaching this require months of apprenticeship and contextual knowledge?

  • Could you even create a template, or is each instance too unique?


D - DEFINABLE: Can You Write the Rules?

The core question:
"Can you write down the rules for when this task is 'done correctly'?"

Why this matters: AI needs explicit success criteria. If "correct" is objective and checklistable, AI can validate its own output. If "correct" is based on taste, judgment, or contextual nuance, AI can't know if it succeeded.


Score 3 — Clear objective checklist, pass/fail criteria

What this means:
"Done correctly" can be defined with factual yes/no criteria—no interpretation needed

Observable test:

  • Two people independently checking would always agree

  • All success criteria are measurable (count, compare, verify presence)

  • No room for debate about whether it's "correct"

Example:
"Are all 119 campaigns present in the merged dataset?" (count expected vs actual, yes/no answer. Either all 119 are there or they're not—no interpretation required)

When to score 3:

  • Can you create a checklist where every item is yes/no?

  • Would two people independently checking always reach the same conclusion?

  • Are the criteria purely factual? (numbers match, fields present, format correct)


Score 2 — Guidelines exist but require interpretation

What this means:
There are standards, but applying them needs judgment about quality or completeness

Observable test:

  • General guidelines exist, but "good enough" is somewhat fuzzy

  • People sometimes disagree on whether something meets the standard

  • Part objective (format), part subjective (quality)

Example:
"Does this project update match our communication standards?" (format is definable—includes timeline, next steps, blockers—but "is it complete and clear?" requires judgment. One reviewer might want more detail, another thinks it's sufficient)

When to score 2:

  • Are there guidelines, but they require interpretation?

  • Do people sometimes disagree on whether something is "done correctly"?

  • Is there a "good enough" threshold that's somewhat subjective?


Score 1 — Highly subjective, gut feeling only

What this means:
Success is based on taste, feel, or contextual judgment—"I know it when I see it"

Observable test:

  • No objective criteria can be written down

  • Experts often disagree on what's correct

  • Quality depends on unstated context or aesthetic judgment

Example:
"Is this email tone persuasive to this specific client?" (opinion-based, requires relationship knowledge. What feels persuasive to one person might feel pushy to another. Context about client personality and relationship history determines success)

When to score 1:

  • Is quality based on aesthetic judgment or "feel"?

  • Do experts often disagree on what's correct?

  • Is success context-dependent with no universal criteria?


S - SAFE: Can Errors Be Caught Safely?

The core question:
"If AI did this task wrong, could you catch the error and fix it in under 5 minutes—before it causes real damage?"

Why this matters:
AI WILL make mistakes. Always. The question isn't "will it make errors?" but "are those errors obvious, trivial to fix, and caught before they cause damage?"

This dimension measures: How safe is it to delegate this task to AI with human oversight?


Score 3 — Obvious errors, trivial to fix before any delivery

What this means:
You'd spot the error in 30 seconds by glancing at output, fix in 2 minutes, and no external stakeholders see it

Observable test:

  • Errors are visually obvious (missing data, totals don't add up)

  • This is internal-only (you review and approve before anything goes external)

  • Quick to fix (re-run automation, 2-minute manual correction)

  • Validation checks exist (expected counts, required fields, totals match)

Example:
Campaign count wrong in merge (see immediately: "Expected 119, got 117"—two campaigns missing. Re-run the automation or manually add the missing campaigns in 90 seconds. Nothing has been sent to client yet)

When to score 3:

  • Are errors visually obvious? (numbers don't add up, missing data jumps out)

  • Is this internal-only? (no external stakeholders see it until you review and approve)

  • Can you fix it in under 5 minutes? (re-run automation, quick manual correction)

  • Are there validation checks? (totals must match, required fields present)


Score 2 — Internal rework needed, time-consuming but fixable

What this means:
Error requires re-doing work internally, but caught before external stakeholders see it

Observable test:

  • Errors aren't immediately obvious—need investigation to find

  • Rework is time-consuming (30-90 minutes) but doable

  • Consequences are internal (team frustrated, deadlines slip, but no client impact)

  • Caught in internal review before going external

Example:
Internal report has wrong numbers (analysis shows Campaign X drove 40% of conversions, but actual was 25%. Team needs to redo the entire analysis with correct data—takes 90 minutes. Frustrating and delays the project, but client doesn't see the error because it's caught in internal QA)

When to score 2:

  • Do errors require investigation to find? (not immediately obvious, need to dig)

  • Is rework time-consuming? (takes 30-90 minutes to fix, but doable)

  • Are consequences internal? (team frustrated, deadlines slip, but no client impact)


Score 1 — Serious external damage possible

What this means:
Mistake reaches clients/customers and causes significant damage (revenue loss, relationship harm, legal issues, safety problems)

Observable test:

  • Errors are hard to detect (look plausible, require deep expertise to spot)

  • External stakeholders see mistakes before you catch them

  • Damage is significant (lose contract, harm relationship, compliance violation, financial loss)

Example:
Sending pricing proposal to client with wrong numbers (shows $250K instead of $350K due to formula error. Client accepts the lower price. You've just lost $100K in revenue and can't backtrack without damaging trust. The $800K annual contract relationship is now at risk)

When to score 1:

  • Are errors hard to detect? (look plausible, require deep domain knowledge to spot)

  • Are consequences external? (clients, partners, regulators see this before you catch it)

  • Is damage significant? (revenue loss, relationship damage, legal/compliance issues)


When in Doubt, Score Lower

Debating between 2 and 3? → Choose 2

Debating between 1 and 2? → Choose 1

Why: Conservative scoring prevents expensive failed builds.

Better to:

  ✓ Score RDS 6, build it, discover it could've been 8 (pleasant surprise)

Than to:

  ✗ Score RDS 9, build it, discover it should've been 4 (wasted $50K)

What done looks like

You're done with Step 3 when:

✓ Each weak link has RDS score (3-9)

✓ Automation level determined (Full/High/Drafts/Assists/Keep Human)

✓ Team understands which links are brittle vs. unbreakable

Common Mistakes to avoid

✗ MISTAKE: Over-scoring judgment work (scoring Morgan's strategic review as RDS 9)

✓ FIX: Be honest. Relationship/strategy work = RDS 3-4, keep human

✗ MISTAKE: Scoring based on future AI ("GPT-6 will handle this")

✓ FIX: Score based on current AI capabilities

✗ MISTAKE: All scores are 7-9 (suspiciously high)

✓ FIX: Some work should be human. If everything scores high, re-evaluate honestly.

Step 4: Pain × RDS = Priority Score

Find Your First Win

Weak Link Decoupling Roadmap

Weak Link Decision Matrix

Weak Link | Pain | RDS | Priority | Bucket

——————————————————- | — | — | — | —

——————————————————- | — | — | — | —

——————————————————- | — | — | — | —

——————————————————- | — | — | — | —

——————————————————- | — | — | — | —

Priority = Pain Signals × RDS Score

Bucket Rules (Visual Quadrant Mapping):

  • Decouple First: High pain + High RDS (upper-right quadrant) → Full symptom check + planning for root causes

  • Consider Later: Medium pain + Medium-to-High RDS (middle zones) → Reassess after first decoupling proves value

  • Keep Human-Led: Low RDS (bottom half) OR Low pain (left side) → Don't automate (list only with brief why)

Decouple First

Weak Link ——————————-

Pain: —

RDS: —

Priority: —

Symptom Check

Is this a SYMPTOM of another "Decouple First" weak link?

YES → It's a symptom of weak link: ______________________

Why: ______________________________________________

No further planning needed - Will resolve automatically when we fix the root cause

NO → This is a ROOT CAUSE → Continue to decoupling plan below

Decoupling Plan

Current State

  • Time: _______ hours

  • Scaled: _______ hours/month

  • Error rate: _______%

  • Blocks: ________________________ (what downstream work waits)

Target State

  • AI handles: __________________________________________________

  • Human reviews: _______________________________________________

  • New time: _______

  • Scaled: _______ hours/month

Expected Impact

  • Time freed: _______ hrs → _______ hrs (_____% reduction)

  • Error reduction: _____% → _____%

  • Speed: Day _____ → Day _____ (_____ days faster)

  • Unlocks: ______________________ (new capabilities or capacity; give specific examples)

Side Benefits

Does fixing this weak link improve weak links in OTHER buckets?

(Weak links that were filtered as symptoms are already documented in their symptom checks above—no need to list them here)


Improves weak links in "Consider Later":

  • Weak link: __________________

    • Why: ____________________________________________________


Reduces burden on "Keep Human-Led" work:

  • Weak link: __________________

    • How it helps: ____________________________________________


Consider Decoupling Later (Medium Pain + Medium-High RDS)

These are moderate-priority weak links. Reassess them in a few weeks, after measuring the impact of your first-phase decouplings. Some may drop in priority as a side effect of those fixes.

Weak Link | Pain | RDS | Priority

——————————————————- | — | — | —

Note: —————————————

Keep Human-Led (Low RDS or Low Pain)

Chain Link | Pain | RDS

———————————————————- | — | —

  How to address pain differently (if applicable): -----------------

———————————————————- | — | —

  How to address pain differently (if applicable): -----------------

What you're doing

Combining pain (Step 2) with automation potential (Step 3) using a simple formula:

Priority Score = Pain Signals × RDS Score

This finds the "jackpots"—weak links that hurt the most AND can actually be decoupled.

The Priority Formula

Why this works:

High Pain × Low RDS = Frustration you can't fix

  Example: 9 pain signals × 4 RDS = Priority 36

  → Team suffers, but AI can't help meaningfully

  → Address differently (not through automation)

Low Pain × High RDS = Wasted effort

  Example: 1 pain signal × 9 RDS = Priority 9

  → Easy to automate, but nobody cares

  → Why spend resources on low-pain work?

High Pain × High RDS = JACKPOT

  Example: 9 pain signals × 9 RDS = Priority 81

  → Hurts the most AND can be decoupled

  → ATTACK THIS FIRST

The formula finds the jackpots automatically.

The Priority Tier System

After calculating scores, use these as a guide (adjust based on your visual quadrant placement):

  • High Pain + High RDS (upper-right quadrant) → Decouple First

  • Medium Pain + Medium-High RDS (middle zones) → Consider Later

  • Low RDS (bottom half) OR Low pain (left side) → Keep Human-Led

Note: The visual quadrant placement is more important than rigid number thresholds. A weak link with Pain 7 × RDS 8 = Priority 56 in one team might map to Pain 3 × RDS 8 = Priority 24 in a smaller team, but both belong in "Decouple First" if they're in the upper-right quadrant.

Tie-breaker: If two links are in the same quadrant with similar priority scores, choose the one with higher RDS (safer/easier to automate).
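
A minimal sketch that applies the formula, the buckets, and the tie-breaker above. The pain/RDS cutoffs and the weak-link names are illustrative assumptions; your visual quadrant placement should override the exact numbers.

```python
# Illustrative cutoffs only; quadrant placement on your own matrix takes precedence.
HIGH_PAIN, MID_PAIN = 6, 3
HIGH_RDS, MID_RDS = 7, 5

def bucket(pain: int, rds: int) -> str:
    if rds < MID_RDS or pain < MID_PAIN:
        return "Keep Human-Led"
    if pain >= HIGH_PAIN and rds >= HIGH_RDS:
        return "Decouple First"
    return "Consider Later"

weak_links = {               # hypothetical weak links: (pain signals, RDS score)
    "Manual data merge": (9, 9),
    "Chasing approvals": (5, 7),
    "Strategic review": (6, 3),
}

ranked = sorted(
    ((name, pain * rds, pain, rds, bucket(pain, rds))
     for name, (pain, rds) in weak_links.items()),
    key=lambda row: (-row[1], -row[3]),  # priority desc; higher RDS breaks ties
)
for name, priority, pain, rds, assigned in ranked:
    print(f"{name}: priority {priority} (pain {pain} x RDS {rds}) -> {assigned}")
```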

What done looks like

You're done with Step 4 when:

✓ All weak links plotted on decision matrix (visual quadrants)

✓ All weak links have priority scores (Pain × RDS)

✓ Buckets assigned (Decouple First, Consider Later, Keep Human-Led)

✓ Symptom checks completed for "Decouple First" weak links

✓ Root causes identified and planned (symptoms skipped)

✓ Side benefits documented (cross-bucket improvements)

Common Mistakes to avoid

✗ MISTAKE: Politics override the formula "I know it's in the bottom-left quadrant (low pain, low RDS), but the VP wants it automated first"

✓ FIX: Show the math and visual placement publicly. Make override socially costly. "This weak link scored 14 (Pain 2 × RDS 7) and sits in 'The Real Work' quadrant. The merge scored 81 (Pain 9 × RDS 9) in 'The Jackpot' quadrant. That's 5.8× higher priority. Can you explain to the team why we're building the low-priority one first?"

✗ MISTAKE: Building multiple "Decouple First" weak links simultaneously

✓ FIX: ONE root cause at a time. Prove it works. Measure impact. Then do the next. Building 2-3 simultaneously = slower on all, proven on none.

✗ MISTAKE: Committing to "Consider Later" bucket before measuring "Decouple First" results

✓ FIX: "Consider Later" means "reassess in a few weeks after measuring first decoupling." Some weak links will drop in priority as side effects (e.g., if fixing weak link A reduces weak link B's pain from 5 signals to 2 signals). Don't overcommit.

Step 5: Spec how to decouple #1 priority

Create The Blueprint

Decoupling Blueprint

Weak Link | Pain | RDS | Priority

——————————————————- | — | — | —

Quick reference (from Roadmap):

Current: _____ hrs/instance → Target: _____ min/instance

Scaled: _____ hrs/month freed across _____ instances

The Forced Dependency (What We’re Solving)

Currently, to do ___________ (real work), _____________ (role) must manually:

Steps:

  1. ———————————————

  2. ———————————————

  3. ———————————————


TOTAL TIME: — hours


This forced dependency blocks (Downstream work that must wait):

  • ———————————

  • ———————————

The Decoupled Process (How It Will Work With AI)

Trigger: ——————————

AI Does (automated steps):

  1. ———————————————

  2. ———————————————

  3. ———————————————

Deliverable: ——————

Human Does:

Approve AI Actions / Review AI Output / Refine AI Output

What to Watch For (common issues):

  • ———————————-

  • ———————————-


Validation Criteria (How to know AI output is correct):

  • ———————————-

  • ———————————-

Role Transformation:

(Role)______'s duties transform:


From:

  • ——————————

  • ——————————


To:

  • Review AI Output: ___ min/instance

  • Freed Time For:

    • —————————

    • —————————

Defer to v2

Core use case (V1) handles _____% of instances.


Build these ONLY if they become actual problems:


Edge Case 1: __________________________________________________

  When to build: If ___________________________________________


Edge Case 2: __________________________________________________

  When to build: If ___________________________________________


Enhancement: ___________________________________________________

  When to build: If usage patterns show ________________________


DON'T BUILD NOW. Ship core use case first. Add enhancements later based on real usage, not hypothetical needs.

What you're doing

Creating a specification document that answers:

  • What's linked today that shouldn't be?

  • How does it work when decoupled?

  • What does AI do? What does human do?

Target length: 5 pages maximum

Any longer = you're in spec paralysis (see Tangles section)

The Three Decoupling Questions

Q1: "What is LINKED that shouldn't be?"

Format: "To do [real work], [role] MUST manually do [tedious work]"

The MUST = the forced dependency you're breaking.

Q2: "How does it work when decoupled?"

Format: "AI does [tedious work steps], [role] reviews in [X minutes]"

The work still gets done. The role reviews instead of executes.

Q3: "What does the role review/add?"

Design the judgment layer—what stays human, how long it takes.

AI handles execution. Humans add validation and judgment.
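
If your team keeps blueprints alongside other shared docs, the three answers can be captured as a small structured record; the `DecouplingBlueprint` fields and the example values are one hypothetical layout, not a prescribed schema.

```python
from dataclasses import dataclass, field

@dataclass
class DecouplingBlueprint:
    weak_link: str
    forced_dependency: str  # Q1: "To do [real work], [role] MUST manually do [tedious work]"
    ai_does: list           # Q2: the tedious steps AI takes over
    human_reviews: str      # Q3: the judgment layer that stays human
    review_minutes: int
    defer_to_v2: list = field(default_factory=list)

# Hypothetical example, not taken from the playbook
blueprint = DecouplingBlueprint(
    weak_link="Manual data merge",
    forced_dependency="To analyze results, the analyst must manually merge three platform exports",
    ai_does=["Pull exports", "Normalize campaign IDs", "Merge into one validated dataset"],
    human_reviews="Spot-check campaign counts and totals before analysis starts",
    review_minutes=5,
    defer_to_v2=["Handling a newly added platform's export format"],
)
print(blueprint.weak_link, "->", blueprint.review_minutes, "min review")
```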

What done looks like

You're done with Step 5 when:

✓ Blueprint is 5 pages or less (process specification, not business case)

✓ Build team can read it and know exactly what to build

✓ The role involved can read it and understand how their duties transform

✓ Clear on what AI automates vs. what human reviews


You can hand this to a builder and they can start work immediately.

Common Mistakes to avoid

✗ MISTAKE: Spec'ing every edge case upfront

   Example: " What if campaign names change mid-month? What if we add 10 new platforms?"

✓ FIX: Ship core use case (80%). Handle edge cases AS THEY ACTUALLY OCCUR.

✗ MISTAKE: Technical implementation details

   Example: "Use PostgreSQL database with..."

✓ FIX: Stay business-focused. "What AI does," not "how to code it"

About The Weak Link Method™

The Philosophy

Your team isn't slow. Your people aren't the problem.

The problem is invisible forced dependencies—weak links that chain real work (analysis, strategy, judgment) to tedious execution work (data extraction, manual merging, error fixing).

Traditional approach: "Map everything, then automate everything." → Takes quarters. Overwhelming. Often fails.

The Weak Link Method: "Map one outcome, decouple the worst link, measure, repeat." → Takes weeks. Focused. Builds momentum.

The Core insight

Your team already knows what's broken. They complain about it.

Complaints aren't noise—they're data.

Count them objectively. Combine with automation potential (RDS). The math tells you what to fix first.

No politics. No guesswork. No wishful thinking.

The outcome

After decoupling your first weak link:

✓ Team stops working twice (prep work + real work → just real work)

✓ Hours freed monthly (40-70 hours typical)

✓ Quality improves (error rates drop 60-90%)

✓ Speed increases (outcomes delivered days earlier)

✓ Capacity unlocks (new work becomes possible)


But the real transformation:

Your team now has a system for continuous improvement.

They can see weak links (Outcome Maps)

They can hear them rattle (Signal Counting)

They can test what's brittle (RDS Assessment)

They can break them systematically (Decoupling Blueprints)


One weak link at a time. Measured. Repeated. Compounding.