diff --git a/presentations-V3/18-make-impact-ai.html b/presentations-V3/18-make-impact-ai.html
new file mode 100644
index 0000000..2e53bcb
--- /dev/null
+++ b/presentations-V3/18-make-impact-ai.html
@@ -0,0 +1,1147 @@
+ Make an Impact with AI
+ +
+
+
+
+ + +
+
+
+
+ + claude/code +
+
Leadership Briefing
+

Make an Impact with AI

+

Moving from AI adoption to organizational transformation.

+

March 2026 • Strategy • start.io

+
+
+
+ Clawd mascot +
+
+ 1/32 +
+ + +
+
+
+
claude/code
+

“Using AI” ≠ Making Impact

+

The AI illusion — why adoption alone changes nothing

+
    +
  • Most organizations buy licenses and hope for magic — that is not a strategy
  • +
  • AI is an amplifier — it magnifies the strengths of high performers and the dysfunctions of struggling orgs
  • +
  • If your foundation is broken, AI makes it worse faster
  • +
  • Individual productivity gains get swallowed by downstream bottlenecks
  • +
+
+ 2/32 +
+ + +
+
+
+
claude/code
+

The Real Question

+

Not “are we using AI?” but “is AI changing our outcomes?”

+
    +
  • Shift from activity to impact — adoption metrics are vanity without outcome data
  • +
  • Having licenses is table stakes, not a strategy
  • +
  • The gap between orgs that transform and orgs that merely purchase is widening
  • +
+
+ The core challenge + We need to move from “we have AI tools” to “AI is measurably changing how we build, ship, and operate.” +
+
+ 3/32 +
+ + +
+
+
+
claude/code
+

The License Fallacy

+

Buying tools without fixing the environment = pouring water into a leaky bucket

+
    +
  • AI-generated code piling up at a manual review bottleneck just makes the bottleneck worse
  • +
  • Speed without safety is counterproductive — faster failures are still failures
  • +
  • Tools serve improvement work — they don’t constitute it
  • +
  • Widening one lane on the highway just moves the traffic jam downstream
  • +
+
+ 4/32 +
+ + +
+
+
+
claude/code
+

Trust Is the Bottleneck

+

AI adoption is fundamentally a trust problem, not a technology problem

+
    +
  • Virtuous cycle: trust → adoption → results → more trust
  • +
  • Vicious cycle: low trust → low usage → no benefits → confirmed skepticism
  • +
  • Clear policies + safety nets (code review, automated tests) = trust builders
  • +
  • Don’t mandate usage — mandating undermines trust
  • +
+
+ 5/32 +
+ + +
+
+
+
claude/code
+

Seven Capabilities That Unlock AI

+

These are the real investment — not the tools

+
+
01Clear AI Policies
+
02Healthy Data Ecosystems
+
03AI-Accessible Internal Data
+
04Strong Version Control
+
05Small Batch Discipline
+
06User-Centric Focus
+
07Quality Internal Platforms
+
+
+ Key Insight + Without these capabilities in place, AI adoption has minimal organizational impact. The greatest returns come from investing in foundational systems, not the AI tools themselves. +
+
+ 6/32 +
+ + +
+
+
+
claude/code
+

It’s a Transformation,
Not a Deployment

+

AI requires organizational change, not just tool procurement

+
    +
  • Executive sponsorship is non-negotiable — real sponsorship, not a quarterly mention
  • +
  • Cross-functional ownership: engineering + legal + security + product + leadership
  • +
  • You can’t delegate AI transformation to engineering alone
  • +
  • Cultural change precedes technical change — always
  • +
+
+ 7/32 +
+ + +
+
+
+
claude/code
+

Stop Guessing, Start Measuring

+

If you can’t measure AI’s impact, you can’t manage it

+
    +
  • Define KPIs before rolling out tools, not after
  • +
  • Three pillars: Utilization • Impact • Cost
  • +
  • Vanity metrics (“we have 500 licenses”) tell us nothing about value
  • +
  • Establish baselines first — you can’t prove improvement without a starting point
  • +
+
+ 8/32 +
+ + +
+
+
+
claude/code
+

The Three Pillars

+

A complete measurement framework for AI impact

+
+
+
Utilization
+
Are we adopting?
+
    +
  • AI tool usage (DAU/WAU%)
  • +
  • % of PRs that are AI-assisted
  • +
  • % of code that is AI-generated
  • +
  • Tasks assigned to agents
  • +
+
+
+
Impact
+
Is it working?
+
    +
  • Dev hours saved per week
  • +
  • Developer satisfaction
  • +
  • DX Core 4 metrics
  • +
  • Human-Equivalent Hours (HEH)
  • +
+
+
+
Cost
+
Is it worth it?
+
    +
  • AI spend (total + per dev)
  • +
  • Net time gain per developer
  • +
  • Agent hourly rate (AI spend ÷ HEH)
  • +
+
+
+
+ 9/32 +
+ + +
+
+
+
claude/code
+

Utilization

+

Measuring adoption — are we actually using what we bought?

+
    +
  • AI tool daily/weekly active usage rates (DAU/WAU%)
  • +
  • % of pull requests that are AI-assisted
  • +
  • % of committed code that is AI-generated
  • +
  • Tasks successfully completed by AI agents
  • +
+
+ Warning + Utilization without impact is just vanity metrics. High usage that doesn’t improve outcomes is waste, not progress. +
+
+ 10/32 +
+ + +
+
+
+
claude/code
+

Impact

+

Measuring real change — is AI actually improving outcomes?

+
    +
  • AI-driven time savings in dev hours per week
  • +
  • Developer satisfaction scores
  • +
  • PR throughput and perceived rate of delivery
  • +
  • Developer Experience Index (DXI)
  • +
  • Code maintainability and change confidence
  • +
  • Change failure % and Human-Equivalent Hours (HEH)
  • +
+
+ 11/32 +
+ + +
+
+
+
claude/code
+

Cost & ROI

+

Proving the investment — is it worth what we’re spending?

+
    +
  • Total AI spend and spend per developer
  • +
  • Net time gain per developer — savings minus tool cost
  • +
  • Agent hourly rate: AI spend ÷ HEH
  • +
  • The goal: demonstrate clear, measurable ROI
  • +
  • Track monthly, report quarterly — make it a discipline
  • +
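The arithmetic behind these three numbers can be sketched in a few lines. The `ai_roi` helper and every figure below are illustrative assumptions, not data from this deck:

```python
# Illustrative sketch: monthly AI ROI rollup for an engineering org.
# All inputs are hypothetical assumptions.

def ai_roi(devs: int, hours_saved_per_dev_week: float,
           loaded_hourly_cost: float, monthly_ai_spend: float,
           heh: float) -> dict:
    """Net time gain per developer and the agent hourly rate."""
    weeks_per_month = 4.33
    gross_savings = (devs * hours_saved_per_dev_week
                     * weeks_per_month * loaded_hourly_cost)
    net_gain = gross_savings - monthly_ai_spend      # savings minus tool cost
    return {
        "gross_savings": round(gross_savings, 2),
        "net_gain_per_dev": round(net_gain / devs, 2),
        # dollars per human-equivalent hour of agent output
        "agent_hourly_rate": round(monthly_ai_spend / heh, 2),
    }

print(ai_roi(devs=100, hours_saved_per_dev_week=3.0,
             loaded_hourly_cost=90.0, monthly_ai_spend=20_000.0,
             heh=1_200.0))
```

Tracked monthly, a falling agent hourly rate alongside a rising net gain per developer is the clearest single-chart ROI story for a quarterly report.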
+
+ 12/32 +
+ + +
+
+
+
claude/code
+

DevEx = Agent Experience

+

The equation that changes everything

+
    +
  • Everything a developer needs is what an agent needs too
  • +
  • Fast CI, modern tooling, clean APIs, good docs, reliable platforms
  • +
  • Invest in DevEx = simultaneously enable humans AND agents
  • +
  • Poor DevEx → poor agent performance → wasted AI investment
  • +
+
+ Key Insight + The single best investment you can make for AI is improving your developer experience. It’s the same investment. +
+
+ 13/32 +
+ + +
+
+
+
claude/code
+

The Platform is the Multiplier

+

Internal platforms are the distribution layer for AI

+
    +
  • Low platform quality → negligible organizational AI impact
  • +
  • High platform quality → strong, compounding AI impact
  • +
  • The platform trait most correlated with AI impact: clear feedback on task outcomes
  • +
  • Platform as a product, not a ticket queue
  • +
+
+ 14/32 +
+ + +
+
+
+
claude/code
+

What Great Looks Like

+

The ideal internal platform for humans and agents

+
    +
  • Self-service golden paths for build, test, and deploy
  • +
  • CI that completes in minutes, not hours
  • +
  • Dev environment setup in minutes
  • +
  • Abstracts complexity — K8s, networking, security
  • +
  • Extensible: teams add their own tools
  • +
  • Clear, actionable feedback at every step
  • +
+
+ 15/32 +
+ + +
+
+
+
claude/code
+

DevEx Metrics

+

What to track — the numbers that reveal platform health

+
+

Speed

    +
  • CI pipeline duration
  • +
  • Commit-to-production lead time
  • +
  • Dev environment setup time
  • +
+

Quality

    +
  • Build success rate
  • +
  • Developer satisfaction (survey)
  • +
  • Time building vs. time waiting
  • +
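One way these rollups might be computed from raw CI telemetry; the `devex_rollup` helper and the sample runs are hypothetical:

```python
# Hypothetical sketch: DevEx speed/quality rollup from raw CI run data.
from statistics import median

def devex_rollup(runs):
    """runs: dicts with 'duration_min', 'queue_wait_min', 'passed'."""
    durations = [r["duration_min"] for r in runs]
    active = sum(durations)
    waiting = sum(r["queue_wait_min"] for r in runs)
    return {
        "p50_ci_minutes": median(durations),
        "build_success_rate": sum(r["passed"] for r in runs) / len(runs),
        # share of elapsed pipeline time spent building rather than queued
        "building_vs_waiting": round(active / (active + waiting), 2),
    }

runs = [
    {"duration_min": 12, "queue_wait_min": 3, "passed": True},
    {"duration_min": 18, "queue_wait_min": 9, "passed": False},
    {"duration_min": 14, "queue_wait_min": 6, "passed": True},
]
print(devex_rollup(runs))
```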
+
+
+ 16/32 +
+ + +
+
+
+
claude/code
+

The Productivity Paradox

+

Feeling fast ≠ delivering impact

+
    +
  • Developers using AI report more flow and higher satisfaction
  • +
  • Yet: no increase in time on valuable work, no decrease in toil
  • +
  • Speed of execution alone is not a proxy for value creation
  • +
  • Must distinguish throughput from value delivery
  • +
+
+ Paradox + Organizations can feel more productive while the actual proportion of effort on high-value work remains unchanged. Measure both perception and reality. +
+
+ 17/32 +
+ + +
+
+
+
claude/code
+

The Real Bottlenecks

+

Coding is only a fraction of the delivery lifecycle

+
    +
  • Real time sinks: meetings, review waits, CI queues, env setup, context switching
  • +
  • AI that speeds coding but ignores everything else = marginal improvement
  • +
  • The pipeline goes commit → production, connecting multiple teams
  • +
  • Optimizing a non-bottleneck yields zero organizational improvement
  • +
+
+ 18/32 +
+ + +
+
+
+
claude/code
+

Reduce Unplanned Work & Noise

+

Noise destroys value — protect the signal

+
    +
  • Unplanned work (incidents, hotfixes) cannibalizes planned work
  • +
  • Context switching makes everything slower AND burns people out
  • +
  • Multitasking doesn’t increase throughput — it destroys it
  • +
  • AI can: triage incidents, auto-remediate, reduce alerts, summarize decisions
  • +
+
+ 19/32 +
+ + +
+
+
+
claude/code
+

Fix the Entire Developer Day

+

AI across the full workflow, not just code generation

+
+

Cut Meetings

  • AI summaries, async standups, decision logs
+

Slash CI Wait

  • Smart test selection, parallel pipelines, flaky test detection
+

Fast Setup

  • AI-assisted onboarding, self-service platforms
+

Faster Reviews

  • AI code review, automated style & security checks
+

Kill Switching

  • WIP limits, AI-powered task prioritization
+
+
+ 20/32 +
+ + +
+
+
+
claude/code
+

Value Stream Mapping

+

Find where time dies — then target AI at real constraints

+
    +
  • Map the full journey: commit → production
  • +
  • Measure process time vs. wait time at every step
  • +
  • Calculate % Complete & Accurate at each handoff
  • +
  • Find the actual bottlenecks
  • +
  • Target AI investment at real constraints
  • +
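A minimal sketch of the value-stream arithmetic, assuming made-up step timings: flow efficiency, rolled %C&A across handoffs, and the longest-wait step as the first AI target.

```python
# Illustrative sketch: commit→production value stream.
# Step names and all numbers are hypothetical.

steps = [
    # (name, process_hours, wait_hours, pct_complete_accurate)
    ("code review", 1.0, 18.0, 0.80),
    ("CI",          0.5,  2.0, 0.90),
    ("QA",          2.0, 24.0, 0.70),
    ("deploy",      0.2,  4.0, 0.95),
]

process = sum(p for _, p, _, _ in steps)
wait = sum(w for _, _, w, _ in steps)
flow_efficiency = process / (process + wait)   # share of elapsed time doing work

rolled_ca = 1.0
for _, _, _, ca in steps:
    rolled_ca *= ca                            # rework compounds at each handoff

bottleneck = max(steps, key=lambda s: s[2])[0]  # longest wait = first target

print(f"flow efficiency: {flow_efficiency:.0%}")
print(f"rolled %C&A: {rolled_ca:.0%}")
print(f"biggest wait: {bottleneck}")
```

With these assumed numbers, work is active only a single-digit percent of elapsed time, which is why targeting the longest wait beats making any one step faster.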
+
+ Rule + Don’t optimize a non-bottleneck — it yields zero organizational improvement. Find the constraint first. +
+
+ 21/32 +
+ + +
+
+
+
claude/code
+

Not an Engineering Initiative

+

AI requires the whole organization, not just one department

+
    +
  • Cross-functional alignment: eng + legal + security + product + leadership
  • +
  • An AI policy authored by one team alone is insufficient
  • +
  • Greatest returns come from investing in foundational systems
  • +
  • Individual gains are lost to downstream disorder without org-wide change
  • +
+
+ 22/32 +
+ + +
+
+
+
claude/code
+

The Three-Bucket Policy

+

Simple rules so everyone knows what’s okay and what’s not

+
+
+

Don’t Do This

+
  • Never share customer or personal data with AI
  • Keep sensitive business info out of AI tools
  • Don’t use AI tools we haven’t approved
+
+
+

OK, But Be Careful

+
  • Use our code with approved AI tools only
  • A human must always review AI output
  • Stick to company-approved platforms
+
+
+

Go For It

+
  • Writing repetitive code and templates
  • Brainstorming and exploring ideas
  • Writing docs and learning new things
+
+
+
+ Keep It Fresh + This is a living document — update it regularly, make it easy to find, and let people ask questions. +
+
+ 23/32 +
+ + +
+
+
+
claude/code
+

Culture Eats AI Strategy

+

Build the right culture — or the tools won’t matter

+
    +
  • Psychological safety: developers must feel safe to experiment and fail
  • +
  • Don’t mandate usage — mandating undermines trust
  • +
  • Celebrate learning from failure, not just shipping
  • +
  • Communities of practice to share what works and what doesn’t
  • +
  • Developer autonomy over tool usage increases trust and adoption
  • +
+
+ 24/32 +
+ + +
+
+
+
claude/code
+

The J-Curve

+

Expect the dip before the breakthrough

+
+ [J-curve chart: Quick wins, then The Dip, then Breakthrough; axes: TIME →, PERFORMANCE]
+
    +
  • The dip: accumulated debt + new process overhead + learning curve
  • +
  • Orgs that quit during the dip never see the payoff
  • +
  • Leadership must understand and commit through the curve
  • +
+
+ 25/32 +
+ + +
+
+
+
claude/code
+

Build the New

+

Stop automating the old — start solving new problems

+
    +
  • Current pattern: replicate routine work (code gen, test gen)
  • +
  • The bigger opportunity: solve problems we couldn’t before
  • +
  • New products, new capabilities, new ways of working
  • +
  • AI should enable innovation, not just acceleration
  • +
+
+ Mindset Shift + The question isn’t “how do we do the same things faster?” — it’s “what can we do now that we couldn’t before?” +
+
+ 26/32 +
+ + +
+
+
+
claude/code
+

Structured Experimentation

+

A framework for disciplined innovation

+
    +
  • Define the problem — real business pain, not tech novelty
  • +
  • Set hypothesis + success metric before starting
  • +
  • Time-box: 2–4 week experiments with go/no-go criteria
  • +
  • Sandbox environments — make failure cheap
  • +
  • Share results broadly regardless of outcome
  • +
+
+ 27/32 +
+ + +
+
+
+
claude/code
+

What to Experiment With

+

Real opportunities — start here

+
+

Onboarding

  • AI-powered onboarding for new developers
+

Incidents

  • Automated triage & remediation
+

Testing

  • Smart selection, generation, flaky detection
+

Knowledge

  • Internal assistants on your actual codebase
+

Analytics

  • Predictive analytics on delivery pipelines
+

Operations

  • AI agents handling routine tasks end-to-end
+
+
+ 28/32 +
+ + +
+
+
+
claude/code
+

Next 30 Days

+

Immediate actions — start this week

+
    +
  • Set clear rules for what’s okay and not okay with AI
  • +
  • Pick how we’ll measure success — usage, impact, and cost
  • +
  • Map out where time gets wasted from code to production
  • +
  • Take an honest look at where our teams stand today
  • +
  • Pick one real bottleneck and run a quick AI experiment to fix it
  • +
+
+ 29/32 +
+ + +
+
+
+
claude/code
+

Next 90 Days

+

Build the foundation — invest in what actually matters

+
    +
  • Invest in platform: cut CI times, improve dev environments
  • +
  • Connect AI to internal docs and codebases (context engineering)
  • +
  • Launch 2–3 structured experiments on real business problems
  • +
  • Begin monthly KPI measurement and reporting
  • +
  • Establish communities of practice for AI knowledge sharing
  • +
+
+ 30/32 +
+ + +
+
+
+
claude/code
+

AI Won’t Wait

+

The call to action

+
    +
  • The gap between transformers and purchasers widens every day
  • +
  • This is not about technology — it’s about building the foundation
  • +
  • AI is a magnifying glass. What are we magnifying?
  • +
+
+ “The greatest returns come not from the tools themselves, but from investing in the foundational systems that enable success.” +
+
+ 31/32 +
+ + +
+
+
+

Thank You

+

Written by

+
+
+

Haim Ari

+

start.io

+
+ & +
+

Claude

+

Anthropic

+
+
+
+ 32/32 +
+ + + + \ No newline at end of file