Blame the Bot? AI, Error, and Liability in Accounting Practice


By Clara Adade

Not long ago, a mid-sized Ghanaian business upgraded its accounting system to meet the Ghana Revenue Authority’s (GRA’s) new electronic VAT (E-VAT) requirement, which was rolled out in May this year.

The software was smart: it could process invoices, track transactions in real time, and automatically compile tax submissions. It worked, until it didn’t.

A month later, the system incorrectly tagged an imported machine as VAT-deductible. The return was submitted and signed off by the in-house accountant, and everything seemed in order until the GRA issued a penalty for over-claimed input VAT.

The vendor behind the software denied fault, stating that the classification rules were the client’s responsibility. The company’s accountant had relied on the tool. So who is to blame? The accountant? The vendor? Or the algorithm?

This situation, although hypothetical in Ghana’s case, is increasingly plausible, and it reveals the awkward space AI occupies in accounting. These tools are not only speeding up the job; they are changing how decisions are made. And when something goes wrong, no one seems quite sure who is answerable.

The Expanding Role of AI in Financial Reporting

Artificial intelligence has moved quickly from the periphery of accounting into its core operations. Once limited to sorting transactions or automating reconciliations, AI now plays a growing role in audit analytics, fraud detection, tax modelling, and even financial statement drafting.

Algorithms comb through vast data sets, flagging inconsistencies long before human reviewers intervene. Invoices are matched, anomalies highlighted, and reports generated — often without a single spreadsheet being opened.

The appeal is obvious. AI delivers speed, scale, and a kind of tireless precision that traditional systems can’t match. For firms under pressure to do more with less — and to close faster, report earlier, and comply with growing regulatory demands — automation looks less like a luxury and more like a necessity.

But while the tools have advanced rapidly, the frameworks surrounding them have not. In many cases, accountants rely on systems they cannot fully explain, and regulators are struggling to keep pace. The technology is evolving faster than the ethical and legal norms that ought to govern its use. This is no longer just a technical shift — it’s an emerging risk landscape, and one the profession is only beginning to understand.

The Liability Grey Zone

As more accounting tasks shift to AI, the question of who takes responsibility when things go wrong is becoming harder to answer. In Ghana, where the GRA has rolled out the E-VAT system to improve tax compliance, many businesses now rely on AI-enabled software to calculate their monthly returns. These systems are often fast and efficient — until they’re not.

Tools like Xero with Hubdoc, QuickBooks, and Dext are becoming more common, especially among SMEs, where they automate data entry and categorize transactions without much human input. Larger companies and audit firms, meanwhile, are experimenting with more complex tools like MindBridge, which scans entire general ledgers to flag anomalies, or CaseWare IDEA, which supports audit analytics and fraud detection. There’s also BlackLine for automating month-end close processes, and OneStream, used in financial planning and reporting. In theory, these tools reduce errors. In practice, they can also introduce them — quietly and without much visibility.
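To make that failure mode concrete, here is a minimal, hypothetical sketch in Python. It does not reflect the internals of any product named above; the vendor names, categories, and the silent default rule are all invented for illustration.

```python
# Hypothetical sketch: how a pattern-based categoriser can misclassify
# silently. Vendor names, categories, and rules here are invented and
# do not describe any real product's internals.

PAST_PATTERNS = {
    # vendor -> VAT treatment learned from historical postings
    "OfficeSupplies Ltd": "input_vat_deductible",
    "HeavyMachinery GmbH": "input_vat_deductible",  # learned from one old, atypical invoice
}

def classify(vendor: str, amount: float) -> str:
    """Assign a VAT treatment from past patterns, with a silent default."""
    # No confidence score, no explanation, no flag for human review:
    # the caller only ever sees the final label.
    return PAST_PATTERNS.get(vendor, "input_vat_deductible")

# In the opening scenario, the imported machine was not deductible
# input VAT, but a tool like this would label it as such, and the
# return would be filed on top of that label.
print(classify("HeavyMachinery GmbH", 250_000.0))  # -> "input_vat_deductible"
```

The point of the sketch is not the rule itself but what is missing around it: nothing in this code surfaces its reasoning, records its uncertainty, or asks a person to look twice.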

Under current accounting standards, however, there’s no grey area: the responsibility sits squarely with the professional. The Institute of Chartered Accountants, Ghana (ICAG), like most professional bodies globally, holds accountants to the standard of due care, regardless of the tools used. Whether a number was keyed in manually or generated by an AI system, it’s the accountant who’s expected to check it, understand it, and stand by it.

That’s the theory. In reality, many of the systems in use today don’t exactly invite interrogation. Some are designed to be easy to use, but not necessarily easy to question. When an invoice is misread or an expense is misclassified (say, a piece of imported machinery coded as deductible when it isn’t), it might take weeks before anyone notices. And when the error leads to a regulatory fine or a misstated report, the blame tends to land on the person who approved it, not the software that produced it.

This creates an uncomfortable kind of shared responsibility. The accountant signs off, the company adopts the tool, and the vendor often shields itself behind its terms of service. But Ghana’s legal and regulatory environment, like many others, still assumes that someone is ultimately in charge. That assumption worked well when decisions were made by people. It’s not so simple now that decisions are being made, or at least shaped, by machines.

Professional Judgment in the Age of Automation

Professional judgment has always been the cornerstone of good accounting. It’s what allows an auditor to question a number that looks technically correct but intuitively wrong. It’s what prompts a tax advisor to consider the wider implications of a one-off adjustment. And it’s what distinguishes a qualified accountant from an automated system that simply processes what it’s given.

But as AI tools become more embedded in financial workflows, the space for judgment is quietly shrinking. Many platforms operate in ways that don’t invite second-guessing, not because they hide their workings deliberately, but because their logic isn’t always visible. A tool might flag a transaction as “high risk” or assign it to a particular cost center based on past patterns, but it rarely explains why. And as these tools become more accurate and efficient, the instinct to override them weakens.

The danger isn’t that accountants will lose their jobs to automation. It’s that they’ll stop applying the judgment their roles still require. In Ghana, where many firms are adopting software to meet evolving tax and reporting obligations, there’s a risk that the presence of a sophisticated tool is mistaken for an assurance of accuracy. But no matter how advanced the system, it does not carry responsibility. That still rests with the professional.

Preserving that role means resisting the temptation to hand over decisions wholesale to technology. It means asking how the system reached its conclusion, not just what it concluded. And it means remembering that speed and scale don’t eliminate the need for skepticism. If anything, they make it more important.

The Legal Implications and What’s Missing

The law hasn’t quite caught up with the technology. Accountants are now using tools that weren’t even imagined when most professional standards were written. And while the expectations around accountability remain — sign off with care, apply judgment, follow the rules — the tools we now use to get there don’t always fit neatly into those frameworks.

In Ghana, many firms are adopting accounting systems that automate tax filings, reconciliations, and even disclosures. But if something goes wrong — a VAT return is overstated, or a journal entry is posted to the wrong account — there’s often no clear answer about who takes the fall. The accountant might have relied on the software, the business might have approved it, and the vendor might have a clause in its contract that says: “We’re not liable.”

That puts professionals in a difficult spot. Most of the time, these tools work. But when they don’t, the person signing the return or approving the numbers is the one left explaining what happened. And that’s assuming they even know how the error occurred — which isn’t always the case with AI.

The bigger issue is that there’s no real structure for how to handle this. Regulation is slow. Vendor contracts are one-sided. And the profession hasn’t yet had the difficult conversation about what shared responsibility should actually look like in an age of automated decisions. If we don’t address that soon, it’s not just risk we’ll be mismanaging — it’s trust.

Rethinking Risk and Accountability

For all its promise, AI has introduced a layer of risk the profession hasn’t fully reckoned with — not just technical risk, but accountability risk. When a tool quietly misclassifies a transaction or fails to flag something unusual, it may take months before anyone notices. By then, the numbers have been signed off, reports have been submitted, and the damage is done.

What’s missing isn’t more automation — it’s more oversight. Not just from regulators, but from within firms themselves. Too often, AI is treated like a plug-and-play solution: install it, trust it, move on. But as the tools become more complex, that mindset becomes harder to defend.

Professionals need to stay involved — not just in the review process, but in how the systems are chosen, how they’re trained, and how their outputs are used. That doesn’t mean rejecting automation. It means putting guardrails around it. Firms should be asking tougher questions about how these tools work, and accountants should feel empowered to challenge their results when something doesn’t sit right — even if it’s technically “correct.”
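What one such guardrail might look like can be sketched in a few lines of Python. This is an illustration under stated assumptions, not a prescription: the thresholds, field names, and the idea that the tool exposes a confidence score are all hypothetical.

```python
# Hypothetical guardrail: route any AI-generated classification that is
# material, or that the model is unsure about, to a named human reviewer
# before it can be posted. Threshold values and field names are assumed.

from dataclasses import dataclass

MATERIALITY_THRESHOLD = 50_000.0   # assumed firm-specific figure
MIN_CONFIDENCE = 0.90              # assumed cut-off for auto-acceptance

@dataclass
class Classification:
    account: str
    amount: float
    confidence: float  # supplied by the tool, if it exposes one

def needs_human_signoff(c: Classification) -> bool:
    """True if a person must review and approve before posting."""
    return c.amount >= MATERIALITY_THRESHOLD or c.confidence < MIN_CONFIDENCE

entry = Classification(account="input_vat_deductible",
                       amount=250_000.0, confidence=0.97)
if needs_human_signoff(entry):
    print("Hold for review: accountant must approve and document why.")
else:
    print("Auto-accept, but keep the entry in the audit trail.")
```

Even a rule this simple changes the dynamic: the tool still does the work, but a person is put back in the loop exactly where the stakes are highest.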

The goal isn’t to slow things down. It’s to make sure speed doesn’t come at the cost of accuracy or accountability. We need to build systems that keep people in the loop, not out of it — because no matter how smart the technology gets, it can’t be the one holding the pen when the numbers go out the door.

Conclusion: Not Just Smarter Systems, but Smarter Governance

AI isn’t going away. In fact, it’s only going to become more embedded in the way accountants work — from audits and tax filings to forecasting and reporting. That’s not something to fear. But it is something to manage.

The question is no longer whether we should use AI, but how we do it responsibly. We need clearer standards — not just technical ones, but ethical and professional ones — that reflect the reality of shared decision-making between humans and machines. We need better contracts that don’t simply shift liability back to the user. And we need more honest conversations in firms, classrooms, and boardrooms about what professional judgment looks like when part of the thinking is being done by a tool.

In Ghana and beyond, the profession has an opportunity to get ahead of this. Not by resisting change, but by shaping it — with clearer guidance, stronger oversight, and a renewed commitment to the one thing AI can’t replace: human accountability.

Because at the end of the day, when the numbers are wrong, the question won’t be what went wrong — it will be who was supposed to get it right.