AI: Governance and Oversight – Navigating the New Frontier of Financial Services
by Mark Turner, Richard Taylor, Richard Kerr
Wed, Mar 6, 2024
According to recent news reports, unknown bad actors used deepfake technology to defraud a Hong Kong-based multinational company of approximately USD 25 million. Specifically, news outlets reported that the bad actors deployed artificial intelligence (AI) to mislead a company employee into believing that the employee was communicating with real company personnel, when in fact those interactions were an illusion. The defrauded employee believed they were on a video call with the company's CFO and other company employees. However, the likenesses and voices of the other parties on the video, including the CFO, had been replicated through AI using publicly available video and audio footage. Based upon instructions received during the video call, the employee transferred HKD 200 million (approximately USD 25.6 million) to various Hong Kong bank accounts across 15 transfers, according to reports.
AI use is proliferating in the marketplace, bringing helpful and positive results to organizations around the world. It is no surprise that AI is also finding its way into the hands of bad actors. Replication of audio and video likenesses taken from publicly available footage is a natural extension of established social engineering attacks, which manipulate human behavior to disclose personally identifiable information (PII). Examples of social engineering attacks include business email compromise and phishing. Phishing occurs when a perpetrator sends a message falsely representing that it came from a legitimate source in order to gather PII for illicit purposes (e.g., identity theft and money transfers). Reports of the Hong Kong multinational deepfake fraud noted that the defrauded employee received a message purportedly from the CFO ahead of the deepfake video call.
Sophisticated fraud schemes facilitated through AI technologies pose significant fraud risk and other compliance challenges for organizations and compliance professionals. Days after news outlets reported the fraud suffered by the Hong Kong-based company, the SEC filed settled charges against a company and its founder for allegedly luring investors by falsely claiming that a hedge fund used AI and machine-learning technologies to execute trading strategies. The SEC further alleged that neither the hedge fund nor the AI tools ever existed.
This news report and the recent SEC enforcement action together underscore how phishing, deepfakes and other evolving AI technologies present a vast range of fraud and compliance risks to financial services firms and their investors. The SEC is tracking these developments and has proposed new rules concerning AI, as detailed in Kroll’s recent article, “AI Risks and Compliance Strategies.” Compliance professionals should proactively prepare for the unique challenges they may face as a result.
How can financial services firms guard against increasingly complex fraud attempts? What lessons can we learn from the recent fraud perpetrated through deepfake technology? The circumstances warrant reviewing good hygiene practices regarding internal AI use, as well as reinforcing authorized-instruction and money movement practices, to mitigate fraud risk arising from AI technology. While not all financial services firms maintain custody of funds or assets directly, it is a good reminder for chief compliance officers (CCOs) to regularly inventory the circumstances unique to their firms and the internal controls governing authorized parties, contact information and interactions with custodians.
The importance of human intervention as an integral step in fraud prevention cannot be overstated. Money movement systems and associated authorizations are automated for resource efficiency and client convenience. However, human involvement remains integral to reducing fraud risk. Changes to authorized parties, contact information or wire transfer instructions must be verified with pre-existing internal or client personnel on record before the request is executed. Said differently, to obtain confirmation, do not reply to the email, and do not call the number or use the video link contained in the email request. Instead, as a safeguard against email compromise and deepfake use, implement revised instructions or update information only after receiving direct confirmation from existing authorized personnel. A CCO can implement regular testing, monitoring and reporting on authorized instructions, along with regular reminders and communications regarding established protocols, as part of an overall compliance program.
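The out-of-band callback principle described above can be sketched in a few lines of Python. This is an illustrative sketch only, not an implementation of any particular firm's controls: the contact registry, the request fields and the function names are all hypothetical, and a real workflow would sit inside the firm's own payment-authorization systems.

```python
# Illustrative sketch of out-of-band verification for a wire-instruction
# change request. All names and data structures below are hypothetical.

# Contact details on record, established BEFORE any change request arrives.
# Confirmation always uses these, never details supplied in the request.
KNOWN_CONTACTS = {
    "cfo@example.com": "+1-555-0100",
}

def requires_callback(request: dict) -> bool:
    """Any request that changes authorized parties, contact information
    or wire instructions triggers an out-of-band callback; it is never
    executed on the strength of the inbound message alone."""
    return bool(request.get("changes_instructions", False))

def callback_number(request: dict):
    """Return the phone number on record for the requester. Callback
    details embedded in the request itself are deliberately ignored,
    guarding against email compromise and deepfake callbacks."""
    return KNOWN_CONTACTS.get(request.get("requester"))
```

The key design choice mirrors the article's guidance: confirmation channels are looked up from pre-existing records, so a fraudulent message cannot supply its own "call me back at this number" path.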
Maintain a “back to the basics” approach, focused on fundamental human intervention and confirmation procedures to reduce fraud risk, regardless of how convincing or time-sensitive the circumstances appear. The good news is that CCOs, risk managers and firm personnel can mitigate fraud risks by proactively taking the following steps:
In addition to the guidance outlined in “AI Risks and Compliance Strategies,” Kroll’s experts stand ready to leverage our experience in regulatory compliance to craft policies, procedures, testing, training and recordkeeping designed to help firms mitigate the risk of noncompliance when they adopt AI tools into their workplace operations. Kroll will design gap analyses targeted to identify risks and recommend enhancements to compliance programs that account for AI adoption. We will also prepare SEC-registered firms to navigate the complexities of examination and investigation inquiries, especially as the SEC continues to probe AI applications within the financial services industry. Contact our experts today to learn more.