7 Biggest SOX Compliance Risks of Using AI in Accounting (And What to Do About Them)

Vicky Levay
February 25, 2026

Accountants have been grappling with how to justify their use of AI to their auditors. Now that COSO has published formal guidance on generative AI, auditors are aligning to that guidance. This has raised expectations for the controls over AI use in financial reporting processes. And while the efficiency gains are real, so are the risks, especially when it comes to high-stakes scenarios such as SOX compliance.

Before your team goes all-in on AI-assisted accounting, here's what our compliance experts want you to know about the risks.

1. Completeness & Accuracy of AI Outputs

Here's something that catches a lot of accountants off guard: AI tools aren't like Excel.

Excel is deterministic — put in the same formula, get the same result, every single time. 

AI is non-deterministic — it samples its responses rather than computing a single fixed answer, which means you can enter the exact same prompt twice and get two meaningfully different responses.

For day-to-day tasks, that might be fine. For SOX-audited financials? That's a problem. You need to know that what you're reporting is both complete and accurate. And with AI in the mix, that requires an extra layer of validation that can't be skipped.
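To make the contrast concrete, here's a minimal sketch (our own illustration, not tied to any particular AI vendor) of deterministic versus non-deterministic behavior. The simulated "AI" just samples from a list of phrasings to mimic how the same prompt can produce different outputs:

```python
import random

def excel_style_sum(values):
    """Deterministic: the same inputs always produce the same output."""
    return sum(values)

def ai_style_answer(prompt):
    """Non-deterministic (simulated): the same prompt can yield a
    different response each run, because the model samples rather
    than computing one fixed answer."""
    phrasings = [
        "The Q3 accrual total is 41,250.",
        "Total Q3 accruals come to 41,250.",
        "Q3 accruals: 41,250 in total.",
    ]
    return random.choice(phrasings)

inputs = [12500, 18750, 10000]
# Run the "formula" twice: always identical, like Excel.
assert excel_style_sum(inputs) == excel_style_sum(inputs)
# Run the "AI" twice: the wording may differ between calls.
print(ai_style_answer("Summarize Q3 accruals"))
print(ai_style_answer("Summarize Q3 accruals"))
```

In this toy example only the phrasing varies; with a real LLM, the substance of the answer can vary too, which is exactly why an extra validation layer is needed.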

2. Human (Accountant) in the Loop

Another important principle for using AI responsibly in accounting is keeping a “human in the loop,” or “accountant in the loop” for us. As accounting teams use AI tools and agents at various points throughout their workflow, it’s important that there are human checkpoints along the way. AI simply isn’t qualified to take on full end-to-end tasks along with decision-making based on the results. To illustrate this, let’s step out of our accounting context for a moment.

The 2,000 Limes Problem

You have a potluck to attend this weekend and want to bring a homemade key lime pie. While at the office, you ask an AI agent to order the necessary ingredients so you can start making the pie this evening. 

After work, you come home to 2,000 limes in bags covering your front porch. What went wrong? No human in the loop!

Why: To accomplish this, the AI agent had to make a series of decisions (which recipe to use, which ingredients to order and in what amounts, which store to order from, when to deliver, etc). Since you’d empowered that AI agent to actually place the order without your review and approval, getting 2,000 limes is just one example of the dozens of ways this could have gone sideways. Without a human eye to interpret context and scale before approving AI output, something small (like a typo in a recipe’s serving size) could have an outsized impact. 

Minimizing AI Mistakes in Accounting

AI has a tendency to hallucinate and over-generalize, and it struggles to place data or requests in the proper context. It's an incredibly powerful tool for accountants, but it still needs a human supervisor in the loop at every major decision or handoff point.

A few things worth building into your accounting team’s AI workflow:

  • Transparency: Any AI-generated output should be clearly labeled (think color-coding or highlighting) so human reviewers know to give it extra scrutiny.
  • Training your team: AI presents wrong answers with the same confidence as correct ones. Your team needs to know what to look for and why verification matters, even when the output looks completely reasonable.
  • AI Interrogation: When working with an AI tool, try asking how it got its answer — what was factored in vs not, why it chose a particular formula, what time period it used, etc. Asking AI to effectively “show its work” and context along the way can help you identify major issues and false assumptions sooner. 

3. The Risks of Accountants as Coders

AI has made it relatively simple for non-coders and non-technical people to build tools, write scripts, and create automations. This ability to build customized tools and processes can be extremely useful, helping teams stay nimble and efficient. For accountants, building a new automation or AI agent might feel similar to simply documenting a common internal process. 

What accountants may not realize is that they're treading on unfamiliar territory: the tools and automations they're building likely qualify as technology or software, not just SOPs.

SOX has a specific set of controls around software development called IT General Controls (ITGCs). As the name implies, ITGCs are typically the domain of the IT department, not accounting teams — which means most accountants have never had to truly think about them.

ITGCs are not complex, but they do need to be followed. When an accountant builds an AI-powered tool or automation, understanding the basics of ITGCs and applying them to the AI-enabled workflows they're building helps ensure those processes can be supported during an audit. 

4. Segregation of Duties

Segregation of duties is a foundational SOX concept: the idea that no single person should control an entire process from start to finish. For example, one person should not be able to both complete and approve a transaction on their own. It's a fraud deterrent as much as an accuracy check, because true collusion between two employees is a much higher bar than one person acting alone.

With AI in the picture, some accounting teams are asking: Can we count AI as one of the two parties in a segregation of duties control?

The honest answer is: it depends.

When AI Could Help

If the goal of incorporating AI is purely a second set of eyes for accuracy, AI can potentially play that role. You'll want to take the limitations of AI into serious consideration and be smart about the contexts in which you choose to do this. For example, asking AI to double-check the result of a series of calculations would be acceptable in most cases.

When Two Humans Are Needed

If the goal of segregation of duties is fraud deterrence, holding two humans mutually accountable, AI doesn't really fit the bill. AI is famously “agreeable” (incentivized to tell users that they are right), and it's not hard for someone to prompt their way to the answer they want rather than the answer that's right.

Work with your auditor to understand where human-only segregation of duties is required, particularly for validating AI-generated work.

5. Results Are Not Repeatable

In a traditional audit, an auditor can repeat and verify your work when needed. They might take your inputs, apply your process, and arrive at the same output you did. That repeatability is a cornerstone of audit trust.

AI makes that significantly harder, if not impossible. Because LLMs are making their own decisions about how to reason through a problem, the process portion is largely invisible; accountants are basically left with an input (a prompt) and an output (the result) only. An auditor cannot simply "run it again" and expect the same result.

To compensate for this, build a habit of logging your AI usage:

  • Save the exact prompt you used
  • Note the date it was run
  • Record which tool and which version was used

Along with highlighting AI-produced outputs in your calculations, keep this log somewhere accessible so that when an auditor asks questions about an AI-produced result, you have something to show them. It won't replicate the result, but it goes a long way toward demonstrating due diligence.
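The logging habit above can be as lightweight as one structured record per AI interaction. Here's a minimal sketch (the field names and file format are our own illustration, not a standard) that appends each entry to a JSON Lines file an auditor could review later:

```python
import json
from datetime import date

def log_ai_usage(prompt, tool, model_version, output_summary,
                 log_path="ai_usage_log.jsonl"):
    """Append one AI interaction to a JSON Lines log for audit support."""
    entry = {
        "date": date.today().isoformat(),   # when the prompt was run
        "tool": tool,                       # which AI tool was used
        "model_version": model_version,     # which model version
        "prompt": prompt,                   # the exact prompt, verbatim
        "output_summary": output_summary,   # or the full output, if practical
    }
    with open(log_path, "a") as f:
        f.write(json.dumps(entry) + "\n")
    return entry

# Example entry (tool name is hypothetical):
log_ai_usage(
    prompt="Recalculate the Q3 prepaid amortization schedule",
    tool="ExampleLLM",
    model_version="v4.1",
    output_summary="Schedule matched ledger within $0.02",
)
```

A shared spreadsheet works just as well as code here; what matters is that the prompt, date, tool, and version are captured consistently and kept somewhere your team (and your auditor) can find them.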

6. LLM Model & Version Management

Even if you're using the same AI tool with the same prompt, a version update may meaningfully change your outputs. Models evolve, and the assumptions or reasoning patterns you've come to rely on for consistent outputs can shift without much warning.

This is why model version management matters. In addition to logging your prompts, note which version of the LLM was used. When a new version rolls out, you should:

  • Test it out before fully switching over
  • Formally announce the version change to the full team
  • Ask everyone to flag unexpected outputs or differences from prior results

That prompt log we mentioned earlier becomes especially useful here. It gives you a baseline for comparison when you're troubleshooting or evaluating whether a new version is behaving the way you need it to.
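One way to "test it out before fully switching over" is a simple regression check: re-run your logged prompts through the new version and flag any answers that drift from the baseline. A hedged sketch of that idea (the `run_model` callable is a hypothetical stand-in for whatever AI tool your team actually uses):

```python
def flag_version_drift(baseline_log, run_model):
    """Re-run each logged prompt against a new model version and
    report prompts whose outputs no longer match the baseline.

    baseline_log: list of {"prompt": ..., "output": ...} dicts
    run_model:    callable taking a prompt, returning the new version's output
    """
    flagged = []
    for entry in baseline_log:
        new_output = run_model(entry["prompt"])
        if new_output != entry["output"]:
            flagged.append({
                "prompt": entry["prompt"],
                "baseline": entry["output"],
                "new": new_output,
            })
    return flagged

# Example with a stand-in "model" in which one answer changed:
baseline = [
    {"prompt": "Sum Q1 accruals", "output": "41,250"},
    {"prompt": "Name the depreciation method", "output": "straight-line"},
]
new_version = {"Sum Q1 accruals": "41,250",
               "Name the depreciation method": "double-declining"}.get
print(flag_version_drift(baseline, new_version))
```

Because outputs are non-deterministic, exact string matching is deliberately strict here: treat any flagged difference as a cue for human review, not an automatic failure.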

7. Data Retention & Confidentiality

Financial data is one of the most sensitive types of data your company handles. When you're feeding it into an AI tool, two rules should be non-negotiable:

Short memory. Make sure whatever model you're using has a data retention period of 30 days or less. If the model is ever breached, shorter retention means less exposure.

No training on your data. If a model trains on your company's financial information, that data becomes embedded in its general knowledge, and it can surface in responses even after the specific data is gone. That's a confidentiality risk you can't fully control after the fact.

How you enforce this depends on your setup:

  • If you're building a custom model, work with your legal team to make sure you have the right to use any external data for training. If others can use the model, keep in mind that the information it was trained on may show up in its responses. 
  • If you're using an external LLM, look for settings that let you opt out of data retention and model training. 
    • Some platforms require an enterprise plan to access these controls; trust us, it's worth it.
    • Sometimes these options are located at the platform integration or import level, rather than account settings.

A note on FloQast: Whenever FloQast integrates with a major LLM, we already have contractual and technical safeguards in place to enforce these rules automatically. Our customers don't have to worry about managing this within FloQast's LLM integrations. 

Any time you're connecting another platform, make sure you're accounting for these requirements independently.

Built for Accountants, By Accountants

Most AI software is built by engineers who have never sat through a close, filed a 10-K, or dealt with an auditor's follow-up questions at midnight. FloQast is different.

Our two founders were accountants themselves. They knew firsthand what wasn't working, and they built FloQast to actually fix it. Our team includes former auditors and CPAs who understand your workflows, your pressures, and what compliance really looks like in practice.

If you're evaluating how to bring AI into your accounting processes responsibly, we'd love to show you how FloQast approaches it. Schedule a FloQast platform demo today to see for yourself!
