
The Build Tax, Quantified

Michael Whitmire
April 10, 2026

The conversation in most CFOs' offices these days has shifted from "should we invest in AI?" to "can we just build software and agents ourselves?" Reasonable question. A general-purpose AI platform costs roughly $20 per seat per month for a basic subscription.[1] A purpose-built accounting platform costs significantly more. On a spreadsheet, the math looks obvious.

But the spreadsheet is missing most of the costs.

A 2025 survey by Benchmarkit and Mavvrik found that 85% of organizations misestimate their AI costs by more than 10%, and the miss is almost always too low. Nearly a quarter are off by 50% or more.[2] The interesting part: data platforms, integration infrastructure, and network access ranked as the top drivers of unexpected cost. Surprisingly, LLM token pricing, the line item most teams fixate on, ranked fifth.

This gap between the spreadsheet and reality is what we call "the build tax."

The five cost layers nobody budgets for

When an accounting team runs a DIY AI agent through a real close cycle, the AI subscription turns out to be the smallest part of the build tax. The actual consumption breaks down into five layers:

  1. Workbook conversion. Large workbooks (the multi-tab, 50-sheet reconciliation files accountants actually use) must be converted to text before an AI model can process them. That conversion alone consumes a serious volume of input tokens.
  2. Reasoning and output. The model "thinks" through reconciliation steps, generates journal entries, produces variance explanations, and so on. Reasoning and output tokens are the most expensive in any AI pricing model.
  3. Retries. Every time an agent's output fails a sum-to-zero check or a formatting validation, the entire prompt is resubmitted. Each retry bills the full cost again. During a close cycle, these retries compound quickly.
  4. Context overflow. Large ledgers can crowd the instructions out of a model's context window, leading to errors or forcing chunking strategies that multiply the number of API calls.
  5. Code execution. Running Python scripts or code-interpreter functions to do the actual math is billed on top of the tokens.
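
To make the first three layers concrete, here is a back-of-envelope sketch of one agent run. The per-token rates come from the published GPT-4o pricing cited in the footnotes ($2.50/$10.00 per 1M input/output tokens); the token counts for a 50-sheet workbook and the retry count are purely illustrative assumptions, not measurements.

```python
# Back-of-envelope cost of one DIY agent run over a large workbook.
# Rates: published GPT-4o pricing (see footnote 4). Token counts and
# retry figures below are hypothetical placeholders -- measure your own.

INPUT_RATE = 2.50 / 1_000_000    # $ per input token
OUTPUT_RATE = 10.00 / 1_000_000  # $ per output token

def run_cost(input_tokens, output_tokens, retries=0):
    """Cost of one run; each retry resubmits the full prompt and output."""
    single = input_tokens * INPUT_RATE + output_tokens * OUTPUT_RATE
    return single * (1 + retries)

# Assume a 50-sheet workbook serializes to ~400k input tokens and the
# agent emits ~20k output tokens of entries and explanations.
base = run_cost(400_000, 20_000)
with_retries = run_cost(400_000, 20_000, retries=2)
print(f"clean run: ${base:.2f}, with 2 retries: ${with_retries:.2f}")
```

The point isn't the absolute dollar figure, which depends entirely on your data; it's that a failed validation doesn't add a marginal cost, it multiplies the whole bill.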

None of these costs appear in a ~$20/month subscription comparison. And none of them are stable. Data volumes change. Model providers update pricing tiers. Retry rates vary with data quality. A CFO looking at this line item will see a number that changes every month and only goes up.

One team that built a PO accruals agent found the cost to actually run the finished script was negligible, roughly a penny per execution. But the cost to build and iterate that agent using an auto-build tool wasn’t taken into account.[3] The development cycle was the problem, not the runtime. And in a DIY environment, the development cycle never ends. Every change to the chart of accounts, every new entity, every model update triggers another round of iteration.

The infrastructure you're not seeing

The AI subscription gets all the attention. The infrastructure required to make that AI reliable in a production accounting environment is where the budget actually lives.

A realistic estimate for a small- to mid-sized accounting team (3-10 people) running DIY AI agents:[4]

Monthly cost estimate — small to mid-sized accounting team (3–10 people)

| Cost Bucket | Monthly Range |
| --- | --- |
| Core AI platform seats | $300–$700 |
| API usage (production) | $150–$500+ |
| API usage (testing/sandbox) | $25–$100 |
| Integration and hosting | $50–$150 |
| Monitoring and logging | $20–$100 |
| Total monthly | $545–$1,550 |
| Annual range | $6,500–$18,600 |

The monthly range spans from a lean deployment at the low end to heavier API usage and larger teams at the high end.
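If you want to adapt the table to your own line items, the totals are just the bucket sums annualized. This sketch reproduces them from the ranges above (the published low-end annual figure rounds $6,540 down to $6,500):

```python
# Recompute the table's totals from its per-bucket ranges.
buckets = {
    "Core AI platform seats": (300, 700),
    "API usage (production)": (150, 500),
    "API usage (testing/sandbox)": (25, 100),
    "Integration and hosting": (50, 150),
    "Monitoring and logging": (20, 100),
}
low = sum(lo for lo, hi in buckets.values())
high = sum(hi for lo, hi in buckets.values())
print(f"monthly: ${low}-${high}, annual: ${low * 12:,}-${high * 12:,}")
```

Swap in your own vendor quotes and the same two sums give you a defensible range to put in front of a CFO.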

That table captures the infrastructure costs. It does not include the weeks or months your finance, operations, and IT teams spend building workflows, connectors, and governance. It does not include documentation, training, or adoption support. And it does not include your ERP, AP platform, reconciliation software, or close management system. DIY AI doesn't replace those. It adds a layer on top.

The infrastructure costs alone can exceed the cost of a purpose-built accounting platform. And unlike a SaaS subscription, these costs require ongoing internal management. Someone has to monitor, maintain, update, and troubleshoot every component.

The hire you didn't plan for

When teams commit to the DIY path, they eventually discover they need a dedicated person to maintain it. The industry is calling this role an "AI Admin" or "AI Operations Lead," and the job description reads like a unicorn posting.

One recent listing required candidates to:

  • Diagnose and prioritize automation opportunities across teams
  • Build and ship AI-powered workflows from prototype to production
  • Drive adoption with enablement sessions and documentation
  • Establish success metrics and dashboards
  • Develop prompt libraries, governance frameworks, and integration patterns[5]

Qualifications included a computer science background, hands-on experience with LLM APIs and agentic workflows, plus 3-6 years in product management, solutions engineering, or technical program management.

The market rate for this role: $170,000 to $200,000 per year.[6]

That's a niche hire. The intersection of "understands accounting workflows" and "can build and maintain production AI systems" is small. And this person becomes your single point of failure — the same kind of key-person dependency risk that makes auditors uncomfortable with manual Excel processes, except now it's in your AI infrastructure.

When one prospect was asked about this dynamic, their champion put it simply: "We're accountants, not computer engineers."[7]

The TCO nobody puts on the whiteboard

When you stack the real costs, the comparison shifts hard from the ~$20/month mental model.

A realistic annual estimate for a team running DIY AI agents in production:[8]

Realistic annual estimate — team running DIY AI agents in production

| Cost Category | DIY AI Agents | Purpose-Built Platform |
| --- | --- | --- |
| AI subscription / credits | ~$50,000 (~70 users at ~$60/seat/month; scales with team size and usage)* | Included |
| Infrastructure (hosting, pipelines, monitoring) | $6,500–$18,600 | Included |
| Dedicated AI Admin | ~$200,000 | Not required |
| Per-run execution cost | Token consumption on every run; scales with data volume and retries | Negligible after initial build |
| Audit and compliance tooling | Build or bolt on separately | Built in |
| Ongoing maintenance (every close cycle) | Absorbed by AI Admin | Vendor managed |
| Realistic annual total | ~$300,000 (directional estimate)* | A fraction of one FTE |

*The line items come from publicly verifiable inputs: BLS wage data for the AI Admin, published enterprise AI platform pricing, cloud provider pricing for infrastructure. The $300K is directional — your number will depend on team size, complexity, and how many agents you actually build — but the order of magnitude is consistent with what finance teams tell us when they work through this honestly.

The purpose-built platform side is harder to publish precisely because pricing varies by company size. But the comparison that matters is platform cost versus the loaded cost of an AI Admin plus the accounting hires you'd still need.

Field intelligence from pre-IPO companies makes this concrete. One Controller estimated that with purpose-built AI agents properly deployed, their team could avoid hiring 5 additional accounting staff over two years, saving several hundred thousand dollars annually. At a previous company without that capability, they made all five hires to get through the IPO.[9] The DIY path doesn't eliminate those hires. It adds one (the AI Admin) while failing to offset the accounting headcount, because the agents lack the reliability, auditability, and coverage required for public-company-grade operations.

The differentiator hidden in the architecture

One line in the TCO table deserves its own explanation: per-run execution cost.

Most DIY AI agents query a live model every time they run. Every close cycle, every execution, every retry consumes tokens. Cost scales with data volume, model pricing changes, and how often you run them.

Purpose-built accounting AI platforms work differently. The AI runs during the build phase to generate the automation script. Once that script is built and validated, it runs as deterministic code. For something like the PO accruals agent, token usage at runtime is negligible — about a penny per execution — because the expensive part happened during development, not in production.[10]

It's a structural difference, not a pricing difference. One model gets more expensive the more you use it. The other front-loads the AI work and runs flat at scale.

Model this out over three years: the DIY path costs more each close cycle as data volumes grow and model pricing shifts. The platform path doesn't change much at runtime.
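A minimal sketch of that three-year model, under loudly stated assumptions: the per-run token cost, the run count per close, and the 30% annual data-growth rate are hypothetical inputs chosen for illustration, not benchmarks. The penny-per-execution script cost is the figure from the PO accruals example above.

```python
# Illustrative three-year runtime-cost comparison. All inputs except the
# ~$0.01 script cost (cited in the text) are assumptions -- adjust freely.

TOKEN_COST_PER_RUN = 3.50    # DIY agent: live model call per run (assumed)
SCRIPT_COST_PER_RUN = 0.01   # generated script: ~a penny per execution
RUNS_PER_MONTH = 200         # executions across agents per close (assumed)
ANNUAL_DATA_GROWTH = 0.30    # token volume grows with data (assumed)

for year in range(1, 4):
    scale = (1 + ANNUAL_DATA_GROWTH) ** (year - 1)
    diy = TOKEN_COST_PER_RUN * scale * RUNS_PER_MONTH * 12
    platform = SCRIPT_COST_PER_RUN * RUNS_PER_MONTH * 12
    print(f"year {year}: DIY ~${diy:,.0f}, platform ~${platform:,.0f}")
```

Whatever numbers you plug in, the shape is the same: the DIY line scales with data volume and model pricing, while the compiled-script line stays flat.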

The cost that breaks every break-even analysis

Even if the DIY numbers worked on paper, there's one cost that doesn't show up in any model: the cost of getting it wrong.

When a DIY AI agent hallucinates — and they do — the consequences in accounting aren't a bad customer email. A hallucination that slips through review can produce a material weakness or deficiency finding under PCAOB AS 2201, which triggers an adverse opinion on your internal controls.[11] That means additional audit fees, potential restatement costs, and a 10-K disclosure that raises questions about your financial statements.

Nobody builds that into a break-even analysis. But it's what keeps Controllers up at 2 AM during close. McKinsey's 2025 State of AI survey found only 39% of organizations can connect any EBIT impact to their AI work, and for most of them, the impact is below 5%.[12] In accounting, where the tolerance for error is already near zero, "no measurable ROI" doesn't fly the same way it might in other functions.

That Gartner stat from the first blog in our series is worth looking at again: at least 30% of generative AI projects get abandoned after proof of concept, with escalating costs among the reasons cited.[13] In accounting, where precision and auditability are baseline requirements, that number probably doesn't capture the full picture.

The question worth asking

Every cost in this analysis compounds in one direction. Infrastructure spend increases with data volume. The AI Admin salary increases with market demand for a scarce skill set. Retry costs grow with model pricing changes you don't control. Audit risk grows with every close cycle that runs on unvalidated agents.

Purpose-built platforms carry costs too, and they're not small. But those costs are bounded: a subscription negotiated once, updates managed by the vendor, compliance tooling built in from the start. The cost curve is flat, whereas the DIY cost curve is exponential.

The CFO who's asking "why pay for accounting software when AI is $20 a month?" is asking the wrong question. The right question is simpler and harder: “What does it actually cost to get this wrong?”

In accounting, the answer is a restatement, a material weakness, an audit finding that shows up in your 10-K. No amount of token savings offsets that.

Key takeaways:

  • The AI subscription is the smallest line item. Infrastructure, a dedicated hire, maintenance, and risk costs dwarf the token spend. And unlike a SaaS subscription, every one of those costs requires ongoing internal management with no ceiling.
  • 85% of organizations misestimate AI costs, and the miss isn't where they expect it to be. Data platforms and integration infrastructure drive the overruns, not LLM tokens. The "$20/month" mental model is off by orders of magnitude.
  • The real question is risk, not cost. The costs of a purpose-built platform are bounded and predictable. The DIY path's costs compound every close cycle, and the downside of getting it wrong isn't a budget overrun. It's a finding in your financials.

Footnotes

  1. ChatGPT Plus and Claude Pro pricing as of early 2026.
  2. Benchmarkit and Mavvrik, "2025 State of AI Cost Governance," 2025.
  3. Internal analysis based on SME interviews and field intelligence, March 2026. Directional estimates, not published benchmarks.
  4. Based on published pricing from AWS, GCP, Datadog, OpenAI (GPT-4o: $2.50/$10.00 per 1M input/output tokens), and Anthropic (Claude Sonnet: $3/$15 per 1M tokens) as of Q1 2026.
  5. Composite of AI Operations job postings on LinkedIn and Indeed, Q1 2026.
  6. BLS, Computer and Information Systems Managers (SOC 11-3021), May 2024: median $171,200, 75th percentile ~$200,000. Corroborated by Glassdoor data for Lead AI Engineer roles (avg. $197,104, March 2026).
  7. Prospect champion quote, March 2026.
  8. Directional estimate built from publicly verifiable inputs: AI Admin salary (see footnote 6); enterprise AI seat costs based on industry estimates for ChatGPT Enterprise (~$60/seat/month; OpenAI does not publish Enterprise sticker pricing); infrastructure from published cloud provider rates (see footnote 4). Public components alone total ~$257,000-$269,000/year; the $300,000 estimate accounts for iteration, testing, and maintenance overhead.
  9. Field intelligence from pre-IPO deal evaluation, March 2026.
  10. Internal product testing, March 2026.
  11. PCAOB AS 2201 defines a material weakness as a deficiency in internal controls such that a material misstatement may not be prevented or detected on a timely basis.
  12. McKinsey, "The State of AI in 2025," 2025. Only 39% of organizations surveyed reported any EBIT impact from AI initiatives; fewer than 6% reported impact above 5%.
  13. Gartner, "Gartner Predicts 30% of Generative AI Projects Will Be Abandoned After Proof of Concept By End of 2025," July 2024.