The CFO in the Crosshairs: Why AI-Powered Impersonation is Finance's Most Dangerous New Threat
Why identity-based controls are no longer sufficient, and what Australian finance leaders must do about it
You have approval hierarchies. You have segregation of duties. You have multi-factor authentication on your ERP. And yet, a single video call, indistinguishable from the real thing, is all it takes to move millions out of your organisation's accounts. This is not a technology failure. It is a governance failure. And it is happening to organisations far more sophisticated than yours.
The Call Sounded Real. The Face Looked Real. It Cost $25 Million.
In early 2024, a finance worker at global engineering firm Arup joined what appeared to be a routine video call. The CFO was on screen. So were several familiar colleagues. The conversation was credible, the request urgent, and the authority unambiguous. By the end of the call, the employee had authorised fifteen wire transfers totalling approximately USD $25 million to overseas accounts. Every person on that video call, the CFO and the colleagues alike, was a deepfake. AI-generated imposters, constructed from publicly available conference footage and company videos, had replicated faces and voices convincingly enough to fool a trained finance professional in real time.
This was not a hypothetical scenario designed to scare boards into action. It happened. And it is becoming the template.
An Accelerating Threat, Not an Emerging One
Finance leaders who have spent years hardening their organisations against phishing emails and payment redirection scams are now contending with something qualitatively different. Deepfake fraud attempts surged 3,000% in 2023 alone.1 By 2024, businesses were experiencing average losses of nearly USD $500,000 per deepfake-related fraud event, with large enterprises reporting losses up to $680,000 per incident.2 Fraud losses facilitated by generative AI are projected to escalate to USD $40 billion annually in the United States alone by 2027.2
In Australia, the threat environment is no less severe. According to the Australian Signals Directorate's Annual Cyber Threat Report 2024–25, phishing-based social engineering, now increasingly enhanced by AI, was recorded in 60% of cyber incidents reported during the financial year.3 Australian businesses collectively lost AUD $2.03 billion to scams in 2024, with Scamwatch reporting a significant rise in AI-generated impersonation techniques targeting executives and finance teams directly.4 By early 2025, Australian scam losses had already increased 28% compared to the same period the prior year, despite a reduction in total reported incidents, meaning criminals are extracting larger amounts per attack.5
The mathematics of that trend should concern every CFO in the country.
Why Finance is the Primary Target
It is no coincidence that deepfake attacks overwhelmingly target finance functions. CFOs, controllers, and accounts payable teams sit at the intersection of authority and access. They can move money. They hold approval rights. They operate under genuine urgency: end-of-quarter transfers, board-directed payments, acquisitions that must remain confidential. Attackers understand this operational reality and engineer their impersonations accordingly.
The attack pattern is increasingly sophisticated. Fraudsters typically begin with passive reconnaissance, monitoring compromised inboxes, studying organisational charts, and identifying who in finance handles wire transfers. They harvest voice and video samples from public conference recordings, LinkedIn content, and company webinars. Security researchers have confirmed that as little as three seconds of audio is sufficient to produce a convincing voice clone.6 They then combine a spoofed email with a follow-up deepfake call or video conference, a multi-channel "verification" sequence that disarms even well-trained employees.
The psychological mechanics are deliberate. Finance teams operate in environments where urgency is normal, where requests from the CFO or CEO are not questioned, and where delaying a legitimate payment carries its own professional risk. Attackers exploit these hierarchies. The result is that CEO fraud now targets at least 400 companies per day globally.7
And yet, more than half of business leaders report that their employees have received no training whatsoever on identifying deepfake fraud attempts.2
The Gap in Traditional Controls
The problem is not that finance teams are careless. The problem is that the controls most organisations rely upon were built for a threat environment that no longer exists.
Dual-authorisation processes that depend on email confirmation are compromised when attackers have access to spoofed or breached email accounts. Call-back verification fails when the voice on the other end is indistinguishable from the real person. Even video calls, long considered a gold standard for verification, are no longer reliable when the technology to fabricate them in real time is freely available and improving monthly.
What organisations require is a fundamentally different posture: one that validates the transaction, not just the instruction.
The question can no longer simply be "did the right person authorise this?" It must be "does this payment, at this value, to this payee, at this time, conform to every verified parameter of legitimate activity, regardless of who appears to have requested it?"
This is the principle behind real-time financial governance. When controls are embedded in the payment workflow itself, not in the identity of the person approving it, deepfake impersonation becomes a much harder attack to execute successfully. A payment that requires cross-validated supplier data, matched ABN records, consistent historical patterns, and out-of-band confirmation cannot be released simply because someone convincing appeared on a video call.
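As an illustration, the transaction-level checks described above can be sketched in a few lines. The supplier name, account format, and threshold below are invented for the example, not drawn from any real product; the ABN checksum, however, follows the published ATO algorithm (subtract 1 from the first digit, apply the fixed weights, and test divisibility by 89):

```python
from dataclasses import dataclass

def abn_is_valid(abn: str) -> bool:
    """ATO checksum: subtract 1 from the first digit, apply the published
    weights, and the weighted sum must be divisible by 89."""
    digits = [int(c) for c in abn if c.isdigit()]
    if len(digits) != 11:
        return False
    digits[0] -= 1
    weights = [10, 1, 3, 5, 7, 9, 11, 13, 15, 17, 19]
    return sum(d * w for d, w in zip(digits, weights)) % 89 == 0

@dataclass
class Payment:
    payee: str
    bsb_account: str
    abn: str
    amount: float

# Hypothetical verified supplier master file. The supplier and its details
# are invented; the example ABN is simply one that passes the checksum.
VERIFIED_SUPPLIERS = {
    "Acme Pty Ltd": {"bsb_account": "123-456 000001",
                     "abn": "51 824 753 556",
                     "historical_max": 80_000.0},
}

def policy_failures(p: Payment) -> list[str]:
    """Checks that hold regardless of who appears to have requested the payment."""
    supplier = VERIFIED_SUPPLIERS.get(p.payee)
    if supplier is None:
        return ["payee absent from verified supplier master file"]
    failures = []
    if p.bsb_account != supplier["bsb_account"]:
        failures.append("account details do not match the verified record")
    if not abn_is_valid(p.abn) or p.abn != supplier["abn"]:
        failures.append("ABN fails checksum or does not match the verified record")
    if p.amount > supplier["historical_max"]:
        failures.append("amount outside historical pattern: out-of-band confirmation required")
    return failures
```

Because every check runs against verified reference data rather than the apparent identity of the requester, a convincing face on a video call changes nothing: the payment still fails if the account, ABN, or amount does not conform.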
What CFOs Should Be Doing Now
The organisations that are successfully managing this risk share several characteristics. They have separated the verification of payment instructions from the identity of the person issuing them. They have established rigid out-of-band authentication processes for high-value transfers, processes that cannot be waived regardless of apparent urgency or seniority. They have invested in transaction-level controls that apply policy enforcement automatically, removing reliance on human judgement in high-pressure moments.
They have also accepted a difficult truth: that "never trust, always verify" is no longer a cybersecurity philosophy. It is a finance operations philosophy.
The 2024 ACFE Report to the Nations found that a typical occupational fraud scheme runs for 12 months before detection, costing organisations an average of $9,900 per month in losses, and that was before AI dramatically lowered the barrier to sophisticated impersonation.8 The median loss per fraud case globally now sits at $145,000. For attacks enabled by deepfake technology, the numbers are an order of magnitude higher.
The CFO's role has always included stewardship of financial controls. In 2025 and beyond, that stewardship must extend to the payment moment itself, because that is where the attack now lands.
References
1. Eftsure AU. (2024). Finance worker loses $39 million to deepfake. Retrieved from https://www.eftsure.com/en-au/blog/cyber-crime/finance-worker-loses-39-million-to-deepfake/
2. Eftsure US. (2025). Deepfake statistics 2025: 25 new facts for CFOs. Retrieved from https://eftsure.com/en-au/statistics/deepfake-statistics/
3. Australian Signals Directorate (ASD). (2025). Annual Cyber Threat Report 2024–25. Retrieved from https://www.cyber.gov.au/about-us/view-all-content/reports-and-statistics/asd-cyber-threat-report-2024-2025
4. Australian Competition and Consumer Commission (ACCC), National Anti-Scam Centre. (2025). Targeting Scams Report 2024. Retrieved from https://www.scamwatch.gov.au/system/files/targeting-scams-report-2024.pdf
5. BankInfoSecurity. (2025). Australian Scam Losses Increase 28% in 2025. Retrieved from https://www.bankinfosecurity.com/australian-scam-losses-increase-28-in-2025-a-28580
6. Compass MSP. (2025). AI-Generated Deepfakes Are Here: Why Your Business Governance Must Adapt. Retrieved from https://compassmsp.com/resources/ai-generated-deepfakes-are-here-to-stay
7. Brightside AI. (2025). CEO Fraud: $50M Voice Cloning Threat to CFOs. Retrieved from https://www.brside.com/blog/deepfake-ceo-fraud-50m-voice-cloning-threat-cfos
8. Association of Certified Fraud Examiners (ACFE). (2024). Occupational Fraud 2024: A Report to the Nations. Retrieved from https://www.acfe.com/report-to-the-nations/2024/
This article is intended as a thought leadership piece for finance, risk, and governance professionals. Statistics and figures cited are sourced from publicly available third-party research and regulatory publications.

