The conversation in most CFOs' offices these days has shifted from "should we invest in AI?" to "can we just build software and agents ourselves?" Reasonable question. A general-purpose AI platform costs roughly $20 per seat per month for a basic subscription.[1] A purpose-built accounting platform costs significantly more. On a spreadsheet, the math looks obvious.
But the spreadsheet is missing most of the costs.
A 2025 survey by Benchmarkit and Mavvrik found that 85% of organizations misestimate their AI costs by more than 10%, and the miss is almost always too low. Nearly a quarter are off by 50% or more.[2] The interesting part: data platforms, integration infrastructure, and network access ranked as the top drivers of unexpected cost. Surprisingly, LLM token pricing, the line item most teams fixate on, ranked fifth.
This gap between the spreadsheet and reality is what we call “the build tax.”
When an accounting team runs a DIY AI agent through a real close cycle, the AI subscription turns out to be the smallest line item in that build tax. The actual spend breaks down into five layers:
None of these costs appear in a ~$20/month subscription comparison. And none of them are stable. Data volumes change. Model providers update pricing tiers. Retry rates vary with data quality. A CFO looking at this line item will see a number that changes every month and only goes up.
One team that built a PO accruals agent found the cost to actually run the finished script was negligible, roughly a penny per execution. But the cost to build and iterate that agent using an auto-build tool wasn’t taken into account.[3] The development cycle was the problem, not the runtime. And in a DIY environment, the development cycle never ends. Every change to the chart of accounts, every new entity, every model update triggers another round of iteration.
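The build-versus-run economics described above can be sketched as simple arithmetic. Everything below is illustrative: the per-iteration cost, iteration count, and run count are assumptions for the sake of the sketch, not figures from the cited team.

```python
# Illustrative sketch: why build/iteration cost, not runtime cost,
# dominates DIY agent economics. All numbers are hypothetical.

def total_cost(iterations: int, cost_per_iteration: float,
               runs: int, cost_per_run: float) -> float:
    """Lifetime cost of one agent: development iterations plus executions."""
    return iterations * cost_per_iteration + runs * cost_per_run

# Assume each build/debug iteration burns ~$15 of model tokens, while the
# finished script costs ~$0.01 per execution (the "penny per run").
lifetime = total_cost(iterations=40, cost_per_iteration=15.0,
                      runs=36, cost_per_run=0.01)  # 3 years of monthly closes
print(f"${lifetime:.2f}")  # development spend dwarfs runtime spend
```

Under these assumptions the runtime line is a rounding error; the iteration count is the variable that matters, and it resets with every chart-of-accounts change or model update.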
The AI subscription gets all the attention. The infrastructure required to make that AI reliable in a production accounting environment is where the budget actually lives.
A realistic estimate for a small- to mid-sized accounting team (3-10 people) running DIY AI agents:[4]
The monthly range spans from a lean deployment at the low end to heavier API usage and larger teams at the high end.
That table captures the infrastructure costs. It does not include the weeks or months your finance, operations, and IT teams spend building workflows, connectors, and governance. It does not include documentation, training, or adoption support. And it does not include your ERP, AP platform, reconciliation software, or close management system. DIY AI doesn't replace those. It adds a layer on top.
The infrastructure costs alone can exceed the cost of a purpose-built accounting platform. And unlike a SaaS subscription, these costs require ongoing internal management. Someone has to monitor, maintain, update, and troubleshoot every component.
When teams commit to the DIY path, they eventually discover they need a dedicated person to maintain it. The industry is calling this role an "AI Admin" or "AI Operations Lead," and the job description reads like a unicorn posting.
One recent listing required candidates to:
Qualifications included a computer science background, hands-on experience with LLM APIs and agentic workflows, plus 3-6 years in product management, solutions engineering, or technical program management.
The market rate for this role: $170,000 to $200,000 per year.[6]
That's a niche hire. The intersection of "understands accounting workflows" and "can build and maintain production AI systems" is small. And this person becomes your single point of failure — the same kind of key-person dependency risk that makes auditors uncomfortable with manual Excel processes, except now it's in your AI infrastructure.
When a prospect was asked about this dynamic, their champion put it simply: "We're accountants, not computer engineers."[7]
When you stack the real costs, the comparison shifts sharply away from the ~$20/month mental model.
A realistic annual estimate for a team running DIY AI agents in production:[8]
*The line items come from publicly verifiable inputs: BLS wage data for the AI Admin, published enterprise AI platform pricing, cloud provider pricing for infrastructure. The $300K is directional — your number will depend on team size, complexity, and how many agents you actually build — but the order of magnitude is consistent with what finance teams tell us when they work through this honestly.
The purpose-built platform side is harder to publish precisely because pricing varies by company size. But the comparison that matters is platform cost versus the loaded cost of an AI Admin plus the accounting hires you'd still need.
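One way to make the loaded-cost framing concrete is a quick back-of-the-envelope calculation. The overhead multiplier below is a common rule of thumb for benefits, payroll taxes, and overhead, not a published figure, and the salary is simply the midpoint of the range quoted above.

```python
# Illustrative loaded-cost calculation for the AI Admin hire.
# The 1.3x overhead multiplier is a rough rule of thumb, not measured data.

def loaded_cost(base_salary: float, overhead_multiplier: float = 1.3) -> float:
    """Base salary plus benefits, payroll taxes, and overhead."""
    return base_salary * overhead_multiplier

ai_admin = loaded_cost(185_000)  # midpoint of the $170K-$200K market range
print(f"AI Admin loaded cost: ${ai_admin:,.0f}")  # roughly $240K/year
```

That figure covers one person and zero infrastructure; it sits on top of the cloud, integration, and tooling spend, and on top of any accounting headcount the agents fail to offset.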
Field intelligence from pre-IPO companies makes this concrete. One Controller estimated that with purpose-built AI agents properly deployed, their team could avoid hiring 5 additional accounting staff over two years, saving several hundred thousand dollars annually. At a previous company without that capability, they made all five hires to get through the IPO.[9] The DIY path doesn't eliminate those hires. It adds one (the AI Admin) while failing to offset the accounting headcount, because the agents lack the reliability, auditability, and coverage required for public-company-grade operations.
One line in the TCO table deserves its own explanation: per-run execution cost.
Most DIY AI agents query a live model every time they run. Every close cycle, every execution, every retry consumes tokens. Cost scales with data volume, model pricing changes, and how often you run them.
Purpose-built accounting AI platforms work differently. The AI runs during the build phase to generate the automation script. Once that script is built and validated, it runs as deterministic code. For something like the PO accruals agent, token usage at runtime is negligible — about a penny per execution — because the expensive part happened during development, not in production.[10]
It's a structural difference, not a pricing difference. One model gets more expensive the more you use it. The other front-loads the AI work and runs flat at scale.
Model this out over three years: the DIY path costs more each close cycle as data volumes grow and model pricing shifts. The platform path doesn't change much at runtime.
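The flat-versus-growing argument can be made concrete with a toy model. Every input below is an illustrative assumption: the starting per-close cost, the 3% monthly growth rate, and the penny-per-run platform figure are placeholders, not measured pricing.

```python
# Toy three-year runtime model: a DIY agent whose per-close token spend
# compounds with data volume vs. a pre-built script that runs flat.
# All inputs are illustrative assumptions.

def diy_runtime_cost(months: int, start_cost: float, monthly_growth: float) -> float:
    """Cumulative token spend when per-close cost compounds each month."""
    total, cost = 0.0, start_cost
    for _ in range(months):
        total += cost
        cost *= 1 + monthly_growth
    return total

def platform_runtime_cost(months: int, cost_per_close: float) -> float:
    """Cumulative runtime cost of a validated deterministic script."""
    return months * cost_per_close

diy = diy_runtime_cost(36, start_cost=200.0, monthly_growth=0.03)  # 3% monthly data growth
flat = platform_runtime_cost(36, cost_per_close=0.01)              # a penny per execution
print(f"DIY runtime over 3 years:      ${diy:,.0f}")
print(f"Platform runtime over 3 years: ${flat:.2f}")
```

The exact numbers don't matter; the shapes do. One curve compounds with data volume and pricing changes you don't control, while the other stays flat because the expensive AI work was front-loaded into the build.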
Even if the DIY numbers worked on paper, there's one cost that doesn't show up in any model: the cost of getting it wrong.
When a DIY AI agent hallucinates — and they do — the consequences in accounting aren't a bad customer email. A hallucination that slips through review can produce a material weakness or significant deficiency finding under PCAOB AS 2201, and a material weakness triggers an adverse opinion on your internal controls.[11] That means additional audit fees, potential restatement costs, and a 10-K disclosure that raises questions about your financial statements.
Nobody builds that into a break-even analysis. But it's what keeps Controllers up at 2 AM during close. McKinsey's 2025 State of AI survey found only 39% of organizations can connect any EBIT impact to their AI work, and for most of them, the impact is below 5%.[12] In accounting, where the tolerance for error is already near zero, "no measurable ROI" doesn't fly the same way it might in other functions.
That Gartner stat from the first blog in our series is worth looking at again: at least 30% of generative AI projects get abandoned after proof of concept, with escalating costs among the reasons cited.[13] In accounting, where precision and auditability are baseline requirements, that number probably doesn't capture the full picture.
Every cost in this analysis compounds in one direction. Infrastructure spend increases with data volume. The AI Admin salary increases with market demand for a scarce skill set. Retry costs grow with model pricing changes you don't control. Audit risk grows with every close cycle that runs on unvalidated agents.
Purpose-built platforms carry costs too, and they're not small. But those costs are bounded: a subscription negotiated once, updates managed by the vendor, compliance tooling built in from the start. The platform cost curve is flat; the DIY cost curve compounds with every close cycle.
The CFO who's asking “why pay for accounting software when AI is $20 a month?” is asking the wrong question. The right question is simpler and harder: “What does it actually cost to get this wrong?”
In accounting, the answer is a restatement, a material weakness, an audit finding that shows up in your 10-K. No amount of token savings offsets that.
Key takeaways: