From Innovation to Illusion: The Rise of AI Washing in Finance

🤖 What Is AI Washing?

Think of AI washing as the tech world’s version of greenwashing. It’s when companies exaggerate, misrepresent, or even invent their use of AI just to look innovative to investors, customers, or regulators. As Ryan Adams and Scott Lesmes from Morrison Foerster pointed out in a recent keynote, the days of getting away with AI hype are over—the SEC is already stepping in.

Back in March 2024, the SEC brought its first enforcement actions for misleading AI claims. Two investment advisers paid fines after one boasted about “expert, AI-driven forecasts” that never existed and the other advertised a machine learning platform that was never built. A third case went after a startup CEO who claimed to offer AI-powered recruiting tools that simply didn’t exist.

The message is clear: If you talk the AI talk, you’ve got to walk the walk.

📣 What the SEC Expects from Disclosures

While the SEC hasn’t rolled out AI-specific rules yet, it’s made it plain that existing disclosure requirements—around risk factors, the MD&A section of reports, and any major cybersecurity incidents—already cover AI.

What’s different now is the spotlight on qualitative materiality. As Adams says, “Accounting teams often look at numbers, but with AI, non-numeric risks—like reputational damage, ethical missteps, or biased models—can be just as significant.”

A few things to ask yourself:

  • Are you using third-party AI tools? Be upfront about potential data privacy or intellectual property risks.

  • Building your own models? Think about how their accuracy could affect your products or customers.

  • Promising AI-driven efficiencies? Have the evidence ready—no more marketing fluff.

In other words, your risk disclosures should be specific, not a boilerplate paragraph. And if AI is central to your business strategy or operations—especially if it affects spending, staffing, or competition—it deserves a spot in your MD&A.

🚀 AI Is Transforming Financial Reporting—Fast!

Artificial intelligence isn’t just a buzzword for tech firms—it’s on track to revolutionize how we prepare and consume financial information. That was the eye-opening takeaway from Kris Bennatti, CEO and co-founder of Hudson Labs, during her keynote at a virtual conference for finance leaders.

Having worked as a CPA and auditor at KPMG before diving into AI, Bennatti brings a rare blend of real-world accounting know-how and cutting-edge tech expertise. She reminded the audience that finance teams face unique challenges—huge reporting loads, talent gaps, and the constant juggle of complex regulations and stakeholder questions. As she put it, “We need AI more than most industries.”

❌ “XBRL Is Dead”—And Why That’s Good News

One of her boldest points? That traditional, template-based formats like XBRL aren’t the be-all and end-all anymore. Thanks to vision-language models, modern AI can read PDFs, images, and even earnings call transcripts—just like a human would. No rigid tags required.

“Structure used to be essential for making data usable. Now, it matters a lot less. That means you can focus on the message, not the format,” Bennatti said. That freedom lets finance pros spend less time wrestling with data formats and more time uncovering insights.

🎯 Why Specialized AI Outshines General Models

If you’ve tried using ChatGPT at work and felt underwhelmed, don’t worry—you’re not alone. As Kris Bennatti explained, generic AI tools often miss the mark in finance because they lack the precise context we need. By contrast, finance-focused AI is built to handle numbers accurately, tag data correctly, and reason through long, detailed documents.

Most finance teams aren’t in a position to build their own AI from scratch (unless you’re Google). Instead, look for specialized platforms that come pre-trained on financial data, have rock-solid security, and won’t mismanage your proprietary information.

“The true power of an AI solution isn’t just the underlying model—it’s how it processes, retrieves, and makes sense of your data,” Bennatti said.

🚀 Quick Wins: Where AI Is Already Earning Its Keep

Bennatti highlighted several areas where finance teams are already seeing fast returns on AI:

  • Auto-extracting tricky disclosures, like going-concern warnings or internal-control issues.

  • Summarizing earnings calls, with side-by-side comparisons to peers.

  • Scrapping manual XBRL tagging—no more endless spreadsheet work.

  • Bridging disconnected systems, letting AI transform data between your CRM, ERP, and analytics tools.
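To make the first bullet concrete, here is a toy illustration of surfacing going-concern and internal-control language in filing text. It uses a simple keyword filter rather than a trained model, and the phrase list is illustrative only, not any vendor's actual pipeline:

```python
import re

# Phrases that commonly signal going-concern or internal-control disclosures.
# Illustrative list only -- a production system would use a trained model.
RISK_PATTERNS = [
    r"substantial doubt .{0,40}going concern",
    r"material weakness in .{0,40}internal control",
]

def flag_disclosures(text: str) -> list[str]:
    """Return the sentences that match any risk pattern."""
    sentences = re.split(r"(?<=[.!?])\s+", text)
    flagged = []
    for sentence in sentences:
        if any(re.search(p, sentence, re.IGNORECASE) for p in RISK_PATTERNS):
            flagged.append(sentence.strip())
    return flagged

filing = (
    "Revenue grew 12% year over year. "
    "These conditions raise substantial doubt about our ability to "
    "continue as a going concern. "
    "We identified a material weakness in our internal control over "
    "financial reporting."
)
print(flag_disclosures(filing))  # flags the two risk sentences
```

The point isn't the regexes themselves; it's that once candidate passages are surfaced automatically, human reviewers can spend their time on judgment calls instead of scanning hundreds of pages.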

Remember: AI isn’t here to replace your expertise—it’s here to make your workflows faster and smarter. Think of it like Excel on steroids: not “intelligent” by itself, but a tool you—and only you—know how to leverage best.

⚠️ A Friendly Warning—and a Bold Invitation

Bennatti also cautioned that real-world performance can differ from lab tests. “Just because an AI model nails the bar exam doesn’t mean it can replace your lawyer,” she joked. But even with limits, the benefits are very real.

For finance leaders, the message is simple: AI is coming for both how we prepare and how we consume financial data. The real question isn’t if it will change our world, but whether you’ll be the one steering that change—or playing catch-up later.

As artificial intelligence keeps reshaping finance and accounting, leaders want practical guidance on using these tools well. Ahead of the “AI in Finance and Accounting 2025: Managing Governance, Adapting the Workforce” event (April 9–10), we sat down with Ryan Hittner, Audit & Assurance Principal at Deloitte & Touche, to learn how teams are blending classic AI with the newest generative tools, what auditors need to watch for, and why governance and human oversight remain crucial. Below, Ryan shares straightforward steps finance leaders can take to embrace AI while keeping their reporting solid and trustworthy.

FEI Weekly: Where is AI already making a difference in finance functions? Are companies really adopting it?
Ryan Hittner:
Absolutely—organizations are moving fast. On one side, “traditional” AI and intelligent automation are handling tasks like two- or three-way transaction matching, reducing errors and freeing people up for higher-value work. On the other, Generative AI (GenAI) is jumping in to draft memos, research topics, and pull key clauses from contracts without a ton of manual effort.

The real magic happens when you combine both: let proven automation keep your controls tight, then layer on GenAI where you need creativity and flexibility—say, summarizing a long report or brainstorming narrative language.

But remember, adding AI isn’t just a tech project; it changes your audit landscape. Both internal and external auditors will want to know how your AI tools work, which data they use, and how you’re monitoring them. So as you roll out new AI capabilities, beef up your governance: document each AI workflow, assign clear roles and responsibilities, and loop in your audit teams from the get-go.

Steps to Make Your Audits AI-Ready

FEI Weekly: What can organizations do to manage the risks AI brings to their audits?
Ryan Hittner:
AI adds powerful capabilities, but it also introduces new risks—especially around the accuracy of its outputs and how transparent its “thinking” is. Since everyone from executives to regulators relies on finance data, you not only need confidence in your AI tools, you must be able to show why you trust them. There’s no one-size-fits-all playbook here, but you can start by building solid procedures and controls in these key areas:

  1. Human Oversight & Transparency

    • Keep people in the loop. Wherever possible, have staff review AI results and make it clear to stakeholders which processes are human-driven and which use AI.

  2. Data Management & Audit Trail

    • Treat your AI data like gold. Put quality checks in place so the inputs your models use are accurate and up-to-date. Archive both what you feed the AI and what it spits back out, and track any changes you make to the model or its data—this paper trail is crucial for audits.

  3. Testing & Ongoing Monitoring

    • Before you go live, stress-test your AI models to prove they work as intended. Then keep an eye on them in production—set up regular validation checks so you can catch and fix issues before they become problems.

  4. Clear Documentation & Reporting

    • Write down your decisions. From why you chose a particular model to how you’ve tweaked it over time, good documentation means everyone can trace the “why” behind your AI processes. It also shows auditors you’re on top of things.
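The audit-trail point in particular lends itself to a simple pattern: record every prompt, output, and model version in an append-only log, with content hashes so auditors can later verify that archived records weren't altered. A minimal sketch, where the field names are illustrative rather than any standard:

```python
import hashlib
import json
from datetime import datetime, timezone

def log_ai_interaction(log_path, model_version, prompt, output):
    """Append one AI interaction to a JSON-lines audit log.

    Hashing the prompt and output lets auditors verify later that
    archived records were not altered after the fact.
    """
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_version": model_version,
        "prompt_sha256": hashlib.sha256(prompt.encode()).hexdigest(),
        "output_sha256": hashlib.sha256(output.encode()).hexdigest(),
        "prompt": prompt,
        "output": output,
    }
    with open(log_path, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")
    return record

rec = log_ai_interaction("ai_audit.jsonl", "model-v1.2",
                         "Summarize Q3 variances.", "Revenue up 4%...")
```

Tracking the model version alongside each record matters because, as noted above, any change to the model or its data needs its own paper trail.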

Balancing Automation with Human Review

FEI Weekly: AI is powerful, but human oversight still matters. How do you strike the right balance?
Ryan Hittner:
Once your AI consistently delivers reliable results during testing and early use, you don’t need to manually check every single output forever. Instead, consider:

  • Spot-Checking: Sample a portion of the AI’s work rather than 100% review.

  • Automated Alerts: Set up controls that flag odd results or unusual patterns so people only jump in when needed.
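Both ideas can be combined in a few lines of code: sample a fraction of AI outputs for human review, and auto-flag numeric results that drift far from the recent norm. A hedged sketch, where the sample rate and z-score threshold are placeholders, not recommendations:

```python
import random
import statistics

def select_for_review(outputs, sample_rate=0.1, seed=42):
    """Randomly pick a fraction of AI outputs for manual spot-checking."""
    rng = random.Random(seed)  # fixed seed makes the sample reproducible
    k = max(1, int(len(outputs) * sample_rate))
    return rng.sample(outputs, k)

def flag_anomalies(values, z_threshold=2.0):
    """Flag numeric outputs more than z_threshold std devs from the mean."""
    mean = statistics.fmean(values)
    stdev = statistics.pstdev(values)
    if stdev == 0:
        return []
    return [v for v in values if abs(v - mean) / stdev > z_threshold]

matches = [f"txn-{i}" for i in range(100)]
to_review = select_for_review(matches)                  # ~10 items for humans
alerts = flag_anomalies([100, 102, 98, 101, 99, 500])   # -> [500]
```

In practice the thresholds should be tuned to your own error tolerance, and any flagged item routed to the same human-review queue as the random sample.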
