The AI Mandate Playbook

AI mandate expectations: high.
Margin for error: zero.

A practical hub for Controllers and finance leaders who've been told to implement AI, but can't afford to get it wrong.


Introduction

Somewhere in the past 18 months, the question changed. It used to be "should we be using AI?" Now it's "why aren't we further along?"

The data backs up the urgency. Deloitte's Q4 2025 CFO Signals survey found 87% of CFOs expect AI to be extremely or very important to their finance department in 2026. But only about 11% are using AI meaningfully in finance, and roughly 35% are still running pilots.

The expectation is near-universal. The execution is hard. We built the AI Mandate Playbook to help you close that gap.


Phase 0

Setting a Foundation

Most teams receiving an AI mandate get stuck early because they jump straight to tool selection before assessing what they actually have to work with.

David Fuhriman, CFO of the Jewish Federation of San Diego, puts it simply: AI is much more a capability to build than a tool to acquire. If AI is a tool, you buy it, install it, and wait for results. If it's a capability, you build the organizational conditions that allow AI to be useful.

David Fuhriman's Five Foundations

Each foundation, and what it means in practice:

  • Data Readiness: If finding the answer to "How many active customers do we have right now?" requires pulling from multiple systems or debating definitions, the data isn't ready for AI.
  • Process Maturity: Could a new employee follow your workflows from documentation alone? If the answer involves pointing them to a specific person, the process exists as tribal knowledge, and tribal knowledge cannot be automated.
  • Human Capital: AI frees the team to finally get to the work they've been deferring. When that's the frame leadership communicates, the institutional knowledge holders stay engaged.
  • Governance: Good governance requires clear answers: What can AI access? What review applies? Who is accountable when something goes wrong? When do you disclose AI was involved?
  • Technology Infrastructure: This is where most organizations start, but it should actually be the last step. The right technology becomes obvious once the first four foundations are in place.

Building the Right Team

Three roles tend to accelerate AI work in accounting. On lean teams, one person often covers more than one.

Accounting Engineer

Translates documented workflows into running automations. The technical barrier has dropped; what has grown is the judgment required to decide what is worth building and maintaining.

ML Ops

Ensures deployed automations continue operating correctly over time, accounting for model drift and catching quiet failures. Most finance organizations are underinvested here.

Accounting Operations

Owns process improvement across functions and creates the operational infrastructure that makes automation stick. Steve Nolan at Public.com describes this as "the track the train runs on."

Another key role worth distinguishing: what Tom Alexander, Partner and Head of AI Innovation & Transformation at CrossCountry Consulting, calls the "transformation athlete." The qualities that make someone effective at AI transformation are largely the same qualities that made them effective at previous transformations: change management, cross-functional coordination, and the ability to prioritize a funnel of competing ideas. If you have someone on your team who has demonstrated those qualities before, they are your most valuable asset in executing an AI mandate.

Where to Start Applying AI: Outcomes First

Start with a question: what does the CFO need to be able to do that they can't do today? Then work backward to identify which process constraints are preventing it. Outcome first, process second, tool third.

Start Here
  • Repetitive, rules-based tasks with documented inputs and outputs
  • Recurring reconciliations on stable, clean data sources
  • Routine AP workflows: invoice coding, payment runs (see the sketch after these lists)
  • Reporting that runs on a fixed cadence with consistent structure
Keep Humans Here
  • Exception handling, edge cases, anything that breaks the pattern
  • Decisions that require business context AI doesn't have access to
  • Variance explanations that require understanding why, not just what
  • Controls review and sign-off — the human remains in the loop
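
To make the split concrete, here is a minimal sketch of a rules-based invoice-coding pass that routes anything off-pattern to a human queue. It is illustrative only: the vendor names, GL accounts, and review threshold are invented, and a real implementation would pull its rules from your ERP rather than a hardcoded dictionary.

```python
# A minimal, illustrative split between "automate" and "keep humans here".
# Vendor names, GL accounts, and the threshold are invented examples.

CODING_RULES = {
    "Acme Office Supply": "6120 - Office Supplies",
    "CloudHost Inc": "6400 - Software & Hosting",
}
REVIEW_THRESHOLD = 10_000  # large invoices always get human review

def code_invoice(invoice: dict) -> dict:
    """Apply a documented rule, or route the exception to a person."""
    account = CODING_RULES.get(invoice["vendor"])
    if account is None or invoice["amount"] >= REVIEW_THRESHOLD:
        # Edge case: no rule matched or the amount is out of bounds.
        return {**invoice, "status": "needs_human_review"}
    return {**invoice, "gl_account": account, "status": "auto_coded"}

for inv in [
    {"vendor": "Acme Office Supply", "amount": 412.50},
    {"vendor": "Unknown Vendor LLC", "amount": 9_800.00},  # no rule: escalate
]:
    print(code_invoice(inv))
```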

Phase 1

Get the Underlying Data Right

"AI doesn't fix bad data. It amplifies it."
— David Fuhriman, CFO, Jewish Federation of San Diego

Your data needs to be accessible, clean, and consistently defined before AI can do anything useful with it. Most teams discover during this phase that their data is worse than they thought, and that fixing it is unglamorous work. It's also the work that makes everything else possible.

What Data Readiness Actually Requires

The gap between having data and having usable data comes down to three things:

  • Accessibility: Data behind a manual CSV export isn't accessible for automation. API access to source systems is the baseline (see the sketch below).
  • Consistency: A single source of truth per entity, with consistent definitions across systems.
  • Documentation: Definitions that exist only in people's heads are not definitions. AI can only work with what is explicitly stated.
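
What "accessible" looks like in code is a scheduled pull from the source system's API rather than a monthly hand-exported CSV. The sketch below assumes a hypothetical REST endpoint and response shape; it is not any specific vendor's API.

```python
# Sketch of API-level accessibility, assuming a hypothetical ERP endpoint.
# The URL, auth scheme, and response fields are placeholders.
import requests

BASE_URL = "https://erp.example.com/api/v1"    # hypothetical endpoint
HEADERS = {"Authorization": "Bearer <token>"}  # load from a secret store

def active_customer_count() -> int:
    """One system, one definition of "active", one answer."""
    resp = requests.get(
        f"{BASE_URL}/customers",
        params={"status": "active"},
        headers=HEADERS,
        timeout=30,
    )
    resp.raise_for_status()
    return len(resp.json()["customers"])
```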

Phase 2

Calibrate Risk to Your Company's Complexity

Not all accounting processes carry the same risk. Some can tolerate experimentation. Others require near-zero error tolerance. Phase 2 is about mapping which is which before you commit to anything.

The Readiness Scorecard

A useful tool for upfront calibration is the AI Mandate Readiness Scorecard. It covers seven categories:

Rate each category from 1 (strongly disagree) to 7 (strongly agree), with room for comments on each:

  • Data Readiness
  • Process Documentation
  • Tool Governance
  • Team Readiness
  • Leadership Alignment
  • Cross-Functional Buy-in
  • Iteration Cadence

What Your Score Means:

  • 7–20: Stage 1. Start with the basics.
  • 21–34: Stage 2. Build your plan.
  • 35–49: Stage 2–3. You have a foundation.
  • 50+: Stage 3–4. Ready to scale.

Two categories tend to be the most revealing.

Cross-functional buy-in: most finance teams underweight this. A low score doesn't mean the mandate is stuck, but it probably means the next priority is building the steering committee rather than evaluating tools.

Iteration cadence: teams that score low here are waiting for certainty before they move. The antidote is 30-90 day sprints with defined outcome targets.

How the operating approach changes:

  • Old: 18-month transformation roadmap. New: 30–90 day outcome-based sprints. Why it matters: the AI landscape shifts faster than long plans can accommodate.
  • Old: requirements fixed upfront. New: iterative requirements that sharpen over time. Why it matters: you don't know what AI can do in your environment until you run it.
  • Old: pain point as the starting question. New: business outcome as the starting question. Why it matters: fixing a pain point optimizes the past.
  • Old: finance-led, finance-owned. New: cross-functional steering committee. Why it matters: finance-only AI produces finance-only results.

Phase 3

Get Proof at Your Scale

By Phase 3, the foundations are in place. Now comes finding tools that can actually deliver in your environment — not just in a demo.

"Every demo you see is very impressive. You have to push on those demos. Initially, it demos great and someone gets excited and they drive the buy decision — but then once you get the software into your toolkit, it's not always delivering on the value prop."
— Drew Armanino, COO, Armanino LLP

What Good Vendor Evaluation Looks Like

Four questions to ask, and what each answer tells you:

  • "Can you show us value within 30–60 days?" tells you whether their delivery model is genuinely iterative.
  • "Can we speak with a reference at our scale, in our ERP?" tells you whether their success stories represent your environment.
  • "What broke during implementation for that reference?" tells you whether they're honest about friction.
  • "Is the team still using it 3 months after go-live?" tells you whether adoption held.

Phase 4

Build an Experimentation Practice

You have clean data, calibrated risk, and real proof points. Now you can move fast, but only in an environment where mistakes don't hit the general ledger.

"Experimentation" can sound unstructured. In practice, it's the opposite: small, sandboxed automation work that produces real output and builds the internal evidence you need before anything goes to production.

"The thing about coding with the purpose of finding efficiencies is that it's very contagious. Once you start, you cannot stop."
— Francisco Meyo, Controller, Abridge

What to Experiment On

Good Sandbox Candidates
  • CSV-to-journal-entry automation for a recurring workflow (sketched after this list)
  • Data transformation between systems with known mapping logic
  • Custom report built on a clean data export
  • Automations replicating a manual process you understand end to end
Keep This Out of the Sandbox
  • Anything that posts to the ledger without human review
  • Automations touching data you'd need to explain to an auditor
  • Full integrations requiring security or infrastructure oversight
  • Workflows where edge cases aren't documented yet
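
As a concrete version of the first candidate above, here is a minimal CSV-to-journal-entry sketch. It writes a draft file for a human reviewer and never posts anywhere; the column names and accrual account are invented for illustration.

```python
# Sandbox-sized CSV-to-journal-entry transform. Output is a draft for
# human review; nothing posts to the ledger. Column names are invented.
import csv

def csv_to_draft_je(source_path: str, draft_path: str) -> None:
    with open(source_path, newline="") as src, \
         open(draft_path, "w", newline="") as out:
        writer = csv.writer(out)
        writer.writerow(["account", "debit", "credit", "memo"])
        for row in csv.DictReader(src):
            amount = float(row["amount"])
            # One balanced pair of lines per source transaction.
            writer.writerow([row["expense_account"], amount, 0, row["memo"]])
            writer.writerow(["2100 - Accrued Liabilities", 0, amount, row["memo"]])

csv_to_draft_je("card_transactions.csv", "draft_je_for_review.csv")
```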

Phase 5

Implement Carefully in the GL

Moving to the GL is a different discipline than experimenting outside it. Phase 5 is where you need rollout plans, fallback procedures, and a real change management strategy. Without all three, adoption won't stick.

"Auditability, observability, repeatability — when you're building agents, those three things are really, really important."
— Tom Alexander, CrossCountry Consulting
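
One lightweight way to act on that quote is to have every automation run append a structured record of what ran, on which inputs, and what it produced. The record schema below is an assumption for illustration, not a standard or any specific tool's format.

```python
# Append-only audit trail for automation runs. The schema is illustrative.
import datetime, hashlib, json

def log_run(job_name: str, input_bytes: bytes, output_summary: dict) -> dict:
    record = {
        "job": job_name,  # auditability: what ran
        "ran_at": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "input_sha256": hashlib.sha256(input_bytes).hexdigest(),  # repeatability: same input, same expected output
        "output": output_summary,  # observability: what it produced
    }
    with open("automation_audit.log", "a") as log:
        log.write(json.dumps(record, sort_keys=True) + "\n")
    return record
```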

Cross-Functional Governance

Once automations move to the GL, they stop being finance projects. They touch systems IT never reviewed, data legal hasn't approved, and controls that external auditors will have opinions on.

Cindy Cruz, AI Innovation Lab Leader at CrossCountry Consulting, is direct: "Finance leaders have a responsibility, if they're the ones getting the mandate first, to bring the C-suite together to help create a steering committee."

Who sits on the steering committee, and what each member needs to own:

  • CFO: strategic outcome definition, cross-functional authority
  • Controller: implementation plan, phased rollout, surfacing blockers
  • CIO / IT: vendor access approval, data security, infrastructure
  • Legal: data handling, third-party AI contracts, regulatory exposure
  • Internal Audit: control design review, audit trail requirements

Build vs. Buy

What to Consider

AI has made building cheaper and faster. But that doesn't simplify the decision. It just moves the hard question.

"Feasibility is no longer the main question. I'd say it's more like: how costly will this be to maintain?"
— Ben Sheridan, Product Manager, Numeric (former Revenue Accountant, Snowflake)

The teams that will look smart in two years won't be the ones that built the most. They'll be the ones that built the right things and bought the rest from vendors who've already solved for security, audit, and infrastructure at a depth no accounting team should try to replicate.

Build
  • Data transformation scripts
  • Process automations with known inputs and outputs
  • Commissions calculators (see the sketch after these lists)
  • Internal workflow automations
Buy
  • Tools that enhance data quality
  • Payroll tooling
  • AI-powered reporting & flux analysis
  • Anything with severe legal exposure
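
The commissions calculator in the Build list is a good illustration of why: the logic is specific to your comp plans, the inputs are known, and a generic vendor tool would flatten it. The tiered plan below is invented purely for illustration.

```python
# Toy tiered commissions calculator. The tiers and rates are invented;
# a real one would encode your actual comp plan and its edge cases.
TIERS = [  # (quota-attainment floor, commission rate)
    (0.0, 0.05),
    (0.8, 0.08),
    (1.0, 0.12),
]

def commission(sales: float, quota: float) -> float:
    attainment = sales / quota
    rate = max(rate for floor, rate in TIERS if attainment >= floor)
    return round(sales * rate, 2)

assert commission(120_000, 100_000) == 14_400.00  # 1.2x attainment -> 12% rate
```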

Before You Build: Three Questions

1. Who is going to maintain this in a year? If you can't name the person and describe what their maintenance responsibilities would look like, the build decision needs more scrutiny.

2. What's already on your vendors' roadmaps? Teams routinely build something their existing product vendor is three months away from shipping. A quick conversation before you start building saves weeks of work and avoids the awkward discovery that you've replicated something you're already paying for.

3. Is your process actually unique? If the answer is yes (custom business logic, competitive workflows you don't want commoditized, data models specific to your organization), build it. If the answer is "we just need this standard thing done," find the vendor who's already built it well.


Closing

What to Bring Back to Your CFO

At some point, the CFO or board that issued the mandate will ask for an update. What you bring to that conversation matters.

The instinct is to show activity: a vendor shortlist, a pilot timeline, a demo that landed well. What actually works is showing judgment. What you evaluated. What you ruled out, and why. What the first deployment will prove, and what it won't.

A Controller who walks in with a phased plan, a risk framework, and a 90-day proof point isn't just responding to a mandate. They're making the case that finance should lead the company's defining initiative of the year.

"Your organization can build these foundations. The real question is whether you will."
— David Fuhriman, CFO, Jewish Federation of San Diego