The AI Mandate Playbook

AI mandate expectations: high.
Margin for error: zero.

A practical hub for Controllers and finance leaders navigating the pressure to implement AI — without sacrificing the accuracy, auditability, or trust their teams depend on.


Introduction

Somewhere in the past 18 months, the question changed. It's no longer "should we be using AI?" It's "why aren't we further along?" Controllers and finance leaders across every sector are receiving the same directive: move faster on AI. The expectations are high. The margin for error is zero.

This playbook exists because most AI mandates fail not from a lack of ambition, but from a lack of sequence. Teams buy tools before their data is clean. They run pilots before they have baselines. They automate workflows that aren't documented.

There is a right order. This is it.


01

The State of AI Mandates

The pressure controllers are feeling is real, and the data confirms it. According to Deloitte's Q4 2025 CFO Signals survey, 87% of CFOs expect AI to be extremely or very important to their finance department in 2026 — a near-consensus view. More than half said integrating AI agents into finance workflows is a top transformation priority this year.

What the data also shows is a significant execution gap. L.E.K. Consulting found that only about 11% of CFOs are using AI meaningfully in finance, while roughly 35% are still at the pilot stage. An RGP survey found just 14% had seen clear, measurable impact — and 86% cited legacy tools and fragmented systems as significant barriers.

While the expectation is nearly universal, the execution is genuinely hard. That gap between what leadership wants and what responsible implementation requires is exactly the terrain this piece is designed to help you navigate.

Why Most AI Mandates Fail Before They Start

The failure modes in AI mandate execution are consistent enough across organizations that they're worth naming directly — not as a warning, but as a map of what to avoid.

Failure Mode | What It Looks Like
The 18-month roadmap | Grand transformation timelines that assume a stable landscape. Iterative, outcome-based sprints work. Grand roadmaps don't.
Pain-point thinking | Teams go straight to the technology before they've articulated the value they're trying to create. Fixing pain points optimizes the past. Defining outcomes builds toward the future.
The finance silo | AI mandates executed inside finance produce finance-only results. When the initiative lives entirely in finance, you end up with tool proliferation: narrow problems solved, new coordination overhead created.
The headcount fear | When a CFO frames an AI mandate as a headcount-reduction initiative, even implicitly, adoption stalls because the people who best understand the work stop contributing.
"People do not resist technology. They resist change that threatens their identity."
— David Fuhriman, CFO, Jewish Federation of San Diego

02

Setting a Foundation

There's a reason most teams receiving an AI mandate find themselves stuck early: they're trying to decide what to build before they've assessed what they have to build on.

David Fuhriman, CFO of the Jewish Federation of San Diego, offers a framing that cuts through the noise: AI is much more a capability to build than a tool to acquire. If AI is a tool, you buy it, install it, and wait for results. If it's a capability, you build the organizational conditions that allow AI to be useful.

David Fuhriman's Five Foundations

Foundation | What It Means in Practice
Data Readiness | If finding the answer to "How many active customers do we have right now?" requires pulling from multiple systems or debating definitions, the data isn't ready for AI.
Process Maturity | Could a new employee follow your workflows from documentation alone? If the answer involves pointing them to a specific person, the process exists as tribal knowledge, and tribal knowledge cannot be automated.
Human Capital | AI frees the team to finally get to the work they've been deferring. When that's the frame leadership communicates, the institutional knowledge holders stay engaged.
Governance | Good governance requires clear answers: What can AI access? What review applies? Who is accountable when something goes wrong? When do you disclose AI was involved?
Technology Infrastructure | This is where most organizations start, but it should actually be the last step. The right technology becomes obvious once the first four foundations are in place.
"AI doesn't fix bad data. It amplifies it."
— David Fuhriman, CFO, Jewish Federation of San Diego

Where to Start Applying AI: Outcomes First

Start with a single question: what does the CFO need to be able to do that they can't today? Then work backward to identify which process constraints are preventing it. Outcome first, process second, tool third.

Start Here
  • Repetitive, rules-based tasks with documented inputs and outputs
  • Recurring reconciliations on stable, clean data sources
  • Routine AP workflows: invoice coding, payment runs
  • Reporting that runs on a fixed cadence with consistent structure
Keep Humans Here
  • Exception handling, edge cases, anything that breaks the pattern
  • Decisions that require business context AI doesn't have access to
  • Variance explanations that require understanding why, not just what
  • Controls review and sign-off — the human remains in the loop

Phase 1

Get the Underlying Data Right

Before any AI implementation can succeed, your data needs to be accessible, clean, and consistently defined. Most teams discover during this phase that their data isn't as ready as they assumed — and that the work of fixing it is unglamorous but foundational.

Ask your team: "How many active customers do we have?" If the answer hedges, your data isn't ready.

What Data Readiness Actually Requires

Requirement | What It Means
Accessibility | Data behind a manual CSV export isn't accessible for automation. API access to source systems is the baseline.
Consistency | A single source of truth per entity, with consistent definitions across systems.
Documentation | Definitions that exist only in people's heads are not definitions. AI can only work with what is explicitly stated.
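The "active customers" test above can be made concrete. Here is a minimal sketch of a readiness check: pull the same metric from two systems under each system's own definition and flag any disagreement. The system names, record shapes, and definitions are hypothetical sample data, not any specific platform's API.

```python
# Minimal data-readiness check: does every system agree on "active customers"?
# System names, records, and definitions below are hypothetical sample data.

def active_customer_count(records, definition):
    """Count customers a system considers active, using an explicit definition."""
    return sum(1 for r in records if definition(r))

# Two systems, each with its own implicit notion of "active".
crm = [
    {"id": 1, "status": "active"},
    {"id": 2, "status": "active"},
    {"id": 3, "status": "churned"},
]
billing = [
    {"id": 1, "open_invoices": 2},
    {"id": 2, "open_invoices": 0},
    {"id": 3, "open_invoices": 0},
]

counts = {
    "crm": active_customer_count(crm, lambda r: r["status"] == "active"),
    "billing": active_customer_count(billing, lambda r: r["open_invoices"] > 0),
}

if len(set(counts.values())) > 1:
    print(f"Definitions disagree: {counts}")  # the data is not AI-ready yet
else:
    print(f"Systems agree: {counts}")
```

Until a check like this passes with one documented definition, any AI built on top is amplifying the disagreement, not resolving it.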

Phase 2

Calibrate Risk to Your Company Complexity

Not all accounting processes carry the same risk profile. Phase 2 is about honestly mapping which parts of your operation can tolerate experimentation and which require near-zero error tolerance — before you commit to any implementation path.

Old Approach | New Approach | Why It Matters
18-month transformation roadmap | 30-90 day outcome-based sprints | The AI landscape shifts faster than long plans can accommodate
Requirements fixed upfront | Iterative requirements that sharpen over time | You don't know what AI can do in your environment until you run it
Pain point as the starting question | Business outcome as the starting question | Fixing a pain point optimizes the past
Finance-led, finance-owned | Cross-functional steering committee | Finance-only AI produces finance-only results

Phase 3

Get Proof at Your Scale

This is the phase most teams skip. They move from pilot to production before establishing genuine benchmarks. Real proof means having before/after data that holds up to scrutiny, not just a vendor case study or an impressive demo.

"Ask questions of the business, not the system. When I evaluate an AI tool for a finance team, I want to know: can it answer a question a CFO would actually ask — across all the systems that question touches?"
— Dean Quiambao, Armanino

What Good Vendor Evaluation Looks Like

Ask the Vendor | What the Answer Tells You
Can you show us value within 30-60 days? | Whether their delivery model is genuinely iterative
Can we speak with a reference at our scale, in our ERP? | Whether their success stories represent your environment
What broke during implementation for that reference? | Whether they're honest about friction
Is the team still using it 3 months after go-live? | Whether adoption held

Phase 4

Build an Experimentation Practice

With clean data, calibrated risk, and genuine proof points in hand, you're ready to move fast — but only in the right environment. Phase 4 is about maximizing what you learn before anything touches your general ledger.

The word "experimentation" can imply a lack of structure. What works is the opposite: a deliberate practice of small, sandboxed automation work that produces real output, builds real capability, and generates the internal evidence base that makes Phase 5 decisions defensible.

"The thing about coding with the purpose of finding efficiencies is that it's very contagious. Once you start, you cannot stop."
— Francisco Meyo, Abridge

What to Experiment On

Good Sandbox Candidates
  • CSV-to-journal-entry automation for a recurring workflow
  • Data transformation between systems with known mapping logic
  • Custom report built on a clean data export
  • Automations replicating a manual process you understand end to end
Keep This Out of the Sandbox
  • Anything that posts to the ledger without human review
  • Automations touching data you'd need to explain to an auditor
  • Full integrations requiring security or infrastructure oversight
  • Workflows where edge cases aren't documented yet
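The first sandbox candidate above, CSV-to-journal-entry automation, can be sketched in a few lines. This is an illustrative example, not a prescribed implementation: the column names (date, account, memo, amount) are assumptions, the output is a draft batch for human review, and nothing posts to the ledger.

```python
# Hypothetical sandbox automation: turn a recurring CSV export into draft
# journal entries for human review. Column names are assumptions; nothing
# here posts to the general ledger.
import csv
import io
from decimal import Decimal

SAMPLE_CSV = """date,account,memo,amount
2025-01-31,6100,January rent,2500.00
2025-01-31,1000,January rent,-2500.00
"""

def draft_entries(csv_text):
    """Parse a CSV export into draft journal-entry lines and verify balance."""
    rows = csv.DictReader(io.StringIO(csv_text))
    entries = [
        {"date": r["date"], "account": r["account"],
         "memo": r["memo"], "amount": Decimal(r["amount"])}
        for r in rows
    ]
    # Control check before any reviewer sees the batch: debits must equal credits.
    if sum(e["amount"] for e in entries) != 0:
        raise ValueError("Batch does not balance; route to exception handling")
    return entries

for e in draft_entries(SAMPLE_CSV):
    print(e["date"], e["account"], e["amount"], e["memo"])
# A human reviews and approves the drafts before anything is posted.
```

Note the design choice: the automation raises on an unbalanced batch rather than guessing, which keeps exception handling, the first "keep humans here" item, exactly where it belongs.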

Phase 5

Implement Carefully in the GL

Production implementation in the GL is a different discipline than experimentation. Phase 5 is about moving from sandbox to real workflows with the rigor those workflows require — rollout plans, fallback procedures, and the human change management that determines whether any of it sticks.

"Auditability, observability, repeatability — when you're building agents, those three things are really, really important."
— Tom Alexander, CrossCountry Consulting

Cross-Functional Governance

Once automations move to the GL, they stop being finance projects. They become company infrastructure — touching systems IT never reviewed, data legal hasn't approved, controls that external auditors have opinions on.

Steering Committee Member | What They Need to Own
CFO | Strategic outcome definition, cross-functional authority
Controller | Implementation plan, phased rollout, surfacing blockers
CIO / IT | Vendor access approval, data security, infrastructure
Legal | Data handling, third-party AI contracts, regulatory exposure
Internal Audit | Control design review, audit trail requirements

Build vs. Buy

Build vs. Buy — What to Consider

AI has made building dramatically cheaper and faster. But the shift doesn't simplify the decision. It moves the hard question from feasibility to maintenance: how costly will this be to maintain?

The teams that will look smart in two years aren't necessarily the ones that built the most. They're the ones that built the right things — and bought the rest from vendors who've thought harder about the infrastructure, security, and audit requirements than any accounting team could reasonably sustain on its own.

Build
  • Data transformation scripts
  • Process automations with known inputs and outputs
  • Commissions calculators
  • Internal workflow automations
Buy
  • Tools that enhance data quality
  • Payroll tooling
  • AI-powered reporting & flux analysis
  • Anything with severe legal exposure
"We're not in the business of creating software. We're in the business of allowing decision makers to have financial information on a timely, accurate basis."
— Francisco Meyo, Abridge

Closing

What to Bring Back to Your CFO

At some point, you have to close the loop. The CFO or board that issued the mandate is going to ask for an update, and what you bring to that conversation matters more than most controllers plan for in advance.

The stronger move is to show judgment. Here's what we evaluated. Here's what we ruled out and why. Here's the risk framework we're applying. Here's what our first deployment will prove — and here's what it won't.

The controller who walks into that CFO conversation with a phased plan, a risk framework, and a 90-day proof point isn't responding to a mandate. They're leading one.

"The question is not whether your organization can build these foundations. The question is whether you will."
— David Fuhriman, CFO, Jewish Federation of San Diego

The work is infinite — which means the opportunity is too. Start where you are. Use what's relevant now, and know that the rest will be here when you need it.
