Wed, Jan 31, 2024
Aside from broader societal concerns regarding the proliferation of artificial intelligence (AI) in almost every aspect of daily life, the use of AI tools and work product in the financial services sector exposes market participants to a spectrum of risks that demand a robust compliance, governance and supervisory response. Unmitigated and uncontrolled AI risks could expose investment advisers regulated by the U.S. Securities and Exchange Commission (SEC) to reputational, enforcement and examination liability. The underlying regulatory concerns include breaches of fiduciary duty, ineffective cybersecurity protocols, failure to protect confidential client or investor information, inadequate portfolio and risk management practices, deficient vendor oversight, and overall failures in the design, tailoring, testing, training and documentation of a firm’s compliance program. Kroll’s regulatory compliance, data analytics, cybersecurity, investigations and governance experts are uniquely equipped to help identify and mitigate the risks of AI use within SEC registrants’ ecosystems.
While AI has only recently grabbed headlines and entered the popular lexicon, its use in the financial services industry is not new and has tremendous upside potential. Both internally and externally developed AI solutions have been deployed, or are being tested, in a variety of use cases designed to gain an information advantage or to speed efficiencies and decision-making. Such use cases include identifying patterns and trends by parsing extremely large, structured and unstructured, proprietary and/or public datasets; detecting suspicious, fraudulent or outlier activity; conducting investment research and experiments; constructing model portfolios; surveilling for potentially suspicious trading activity; and even optimizing the drafting of investor correspondence and disclosures. Government regulators themselves are using machine learning and other forms of data analytics to identify potential targets for examination and/or investigation, particularly after market-moving events.
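To make the outlier-detection use case concrete, the following is a minimal sketch, not any firm’s actual system, of how trades that are unusual for a given account might be surfaced for review. The column names, threshold and sample data are illustrative assumptions.

```python
# Hypothetical sketch: flag trades that are unusual for their account using a
# robust (median/MAD) z-score, so one huge trade cannot inflate the dispersion
# estimate and mask itself. Column names and threshold are illustrative.
import pandas as pd

def flag_outlier_trades(trades: pd.DataFrame, threshold: float = 3.5) -> pd.DataFrame:
    """Return trades whose notional deviates sharply from the account's norm."""
    def robust_z(x: pd.Series) -> pd.Series:
        med = x.median()
        mad = (x - med).abs().median()
        if mad == 0:  # degenerate history (all trades identical): flag nothing
            return pd.Series(0.0, index=x.index)
        return 0.6745 * (x - med) / mad  # 0.6745 rescales MAD to ~1 std. dev.

    scored = trades.copy()
    scored["z"] = scored.groupby("account_id")["notional"].transform(robust_z)
    return scored[scored["z"].abs() > threshold]

sample = pd.DataFrame({
    "account_id": ["A1"] * 5 + ["A2"] * 5,
    "notional": [100, 110, 95, 105, 5000,   # A1 has one extreme trade
                 200, 210, 190, 205, 198],  # A2 looks routine
})
print(flag_outlier_trades(sample))  # surfaces only A1's 5000 trade
```

The median/MAD statistic is used here instead of a plain mean/standard-deviation z-score because a single extreme trade would otherwise inflate the standard deviation enough to hide itself.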
However, the benefits of AI are counterbalanced by significant risks. These risks require a firm’s legal, compliance and supervisory personnel to fulfill their gatekeeping and oversight functions by designing and implementing a robust set of AI-related policies and procedures that are documented and periodically stress-tested for effectiveness and proper tailoring.
Recent SEC examinations and published examination priorities highlight a growing emphasis on AI applications in the financial services industry. The SEC has proposed new rules targeting the unique compliance challenges AI presents. These proposed rules would also require firms to establish additional due diligence protocols to ensure that the use of AI within their ecosystems complies with federal regulatory requirements.
Put simply, AI is the use of a machine-based system to generate predictions, recommendations or other decisions for a given set of objectives. Mainstream users across industries are increasingly accessing AI due to recent advancements in AI technologies (such as ChatGPT), some of which are seamlessly built into internet search engines. Many people have been unknowingly interacting with AI for years, for example through the AI technologies that power book and movie recommendations.
Reactions to the widespread use of AI vary. Some AI pioneers warn that AI may lead to human extinction. AI proponents counter that the world will benefit tremendously from AI, for example through combating climate change, enhancing health care and driving economic growth. Recognizing this tension, SEC Chair Gary Gensler has telegraphed that AI poses both risks and rewards in the financial services industry.
In addition to the use cases described above, some AI tools aim to enhance the investment experience through speed, quality and convenience. For instance, robo-advisory firms use AI technologies to expedite trading. Firms also use AI to monitor clients’ behavior patterns and offer personalized services: AI-driven marketing tools, such as interactive and game-like features in smartphone applications, predict clients’ behaviors and preferences, and firms tailor investment recommendations according to those predictions. Firms’ research departments also use AI tools to aggregate, organize and summarize key provisions in public SEC filings, efficiently extracting information from multiple sources. Firms likewise use AI tools to offer clients conveniences, such as delivering investment-related alerts in real time through smartphone applications.
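As a minimal sketch of the filings use case, the extraction step below would typically precede any AI summarization: it pulls the “Risk Factors” section out of a 10-K’s plain text. The function name and regex are illustrative assumptions; real filings are far messier and need more robust parsing.

```python
# Hypothetical sketch: isolate the "Item 1A. Risk Factors" section of a 10-K
# before handing it to a summarization model. The regex relies on the standard
# "Item 1A"/"Item 1B" headings and is illustrative only.
import re

def extract_risk_factors(filing_text: str) -> str:
    match = re.search(
        r"Item\s+1A\.?\s*Risk\s+Factors(.*?)Item\s+1B\.?",
        filing_text, flags=re.IGNORECASE | re.DOTALL,
    )
    return match.group(1).strip() if match else ""

sample = (
    "Item 1. Business ... Item 1A. Risk Factors "
    "Our results depend on market conditions and model accuracy. "
    "Item 1B. Unresolved Staff Comments ..."
)
print(extract_risk_factors(sample))
```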
Firms also utilize AI to support their regulatory and compliance functions. For instance, they deploy AI technologies to conduct surveillance of high-risk areas, such as suspicious trading, money laundering and insider trading. In addition, firms use AI technologies to compile their regulatory reports on an automated or expedited basis. AI tools can also simplify firms’ books and records obligations, especially as electronic communications continue to proliferate across multiple mediums, such as email, text messaging, instant messaging and social media.
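As a toy illustration of a communications-surveillance pipeline (the terms, names and messages below are invented), a simple keyword screen stands in here for the trained language model a production system would use:

```python
# Toy illustration: route electronic communications from multiple channels to
# human review. A production system would apply a trained model; a static
# keyword screen stands in for it so the pipeline shape is visible.
from dataclasses import dataclass
from datetime import datetime, timezone

FLAG_TERMS = {"guarantee", "off the books", "delete this"}  # illustrative only

@dataclass
class Message:
    channel: str        # e.g. "email", "sms", "chat", "social"
    sender: str
    sent_at: datetime
    body: str

def needs_review(msg: Message) -> bool:
    """Escalate a message to human review if any flag term appears."""
    text = msg.body.lower()
    return any(term in text for term in FLAG_TERMS)

inbox = [
    Message("email", "trader1", datetime(2024, 1, 5, tzinfo=timezone.utc),
            "Returns are strong but nothing is guaranteed."),
    Message("sms", "trader2", datetime(2024, 1, 6, tzinfo=timezone.utc),
            "Let's keep this off the books."),
]
for m in inbox:
    print(m.channel, m.sender, "-> review" if needs_review(m) else "-> archive")
```

Note that the first, benign message is flagged on “guarantee”; reducing such false positives is precisely where firms hope trained models will outperform static keyword lists.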
AI exposes financial services firms to a broad range of regulatory, legal and reputational risks. These risks largely stem from AI’s inherent flaws. Because AI models make predictions based on defined datasets and assumptions, their results carry a risk of being skewed due to error and bias. Said differently, use of AI automation does not equate to accuracy or objectivity. Firms are vulnerable to both internal- and external-facing AI-related risks and ethical concerns, including confidentiality of data, cybersecurity, and “data hallucinations,” which poison results that may then be fed into financial models or be used to influence investment research or portfolio management decisions.
Firms introduce internal risks when they intentionally onboard AI tools onto their platforms. Some of these risks are easy to spot. For example, AI’s inherent flaws may cause firms to generate inadequate research, false reports, inaccurate communications or misinformed investment recommendations. Other internal risks are less obvious. For example, AI tools obtain data through various means, such as web scraping, which may implicate a firm’s legal entitlement to that data. Likewise, possession of the data may trigger unique legal questions, such as HIPAA obligations or similar privacy requirements tied to the possession or use of underlying medical data. Even less apparent, AI tools that collect from multiple data sources may inadvertently create personally identifiable information (PII), which firms must take precautions to protect: while no single data source may constitute PII on its own, several sources compiled together may collectively identify an individual.
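A minimal, entirely hypothetical illustration of this compilation risk: neither dataset below contains a name tied to a behavior on its own, yet joining them on shared quasi-identifiers attaches an identity to the “anonymous” record. All names, fields and values are invented.

```python
# Illustrative sketch (hypothetical data): two datasets that are not PII in
# isolation can re-identify a person once joined on shared quasi-identifiers.
import pandas as pd

# Source A: app telemetry -- no names, only zip code and birth year.
telemetry = pd.DataFrame({
    "zip": ["10001", "10001", "94105"],
    "birth_year": [1980, 1991, 1980],
    "portfolio_risk_score": [7.2, 3.1, 8.9],
})

# Source B: a public roster -- names alongside the same quasi-identifiers.
roster = pd.DataFrame({
    "zip": ["10001", "94105"],
    "birth_year": [1991, 1980],
    "name": ["J. Doe", "R. Roe"],
})

joined = telemetry.merge(roster, on=["zip", "birth_year"], how="inner")
# Where a (zip, birth_year) pair maps to exactly one person, the "anonymous"
# risk score is now attached to a named individual -- PII by combination.
print(joined)
```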
As to external risks, firms bear such exposure even if they do not intentionally use AI tools. For example, firms face these risks through vendors that rely on AI technologies to render services. Firms may be unaware that these vendors, such as research providers, use AI at all, and may fail to properly vet the vendors’ data security, privacy or other controls for alignment with the firms’ own compliance standards. These external-facing risks are multi-layered and more challenging to navigate because firms lack full visibility into, or control over, how vendors use AI, surveil AI-related risks and mitigate them. Ultimately, firms may unknowingly breach AI data owners’ terms and conditions or even infringe intellectual property rights.
Leaders at the highest levels of government and in corporate America are tracking AI. In July 2023, President Joe Biden and top executives of leading AI providers committed to voluntarily mitigating AI risks, such as through robust public reporting. These companies have publicized their policies and practices for using AI responsibly, mitigating AI-related risks and providing transparency to end users. The National Institute of Standards and Technology issued voluntary guidelines for AI risk management and responsible practices across industries. Likewise, the SEC proposed new rules to police the risks generated by predictive data analytics. In a nutshell, the proposed rules would require certain SEC-regulated entities to eliminate or neutralize conflicts of interest, comply with new books and records requirements, and revise their policies and procedures. In October 2023, President Biden issued an Executive Order mandating that certain federal agencies and executive departments act in accordance with prescribed principles to ensure the safe, secure and trustworthy development and use of AI. The Executive Order specifically identified financial services as an industry that must adhere to appropriate safeguards to protect Americans.
The SEC’s initial proposed AI-related rules are just the tip of the iceberg of imminent regulatory change. Consistent with the SEC’s past use of data analytics, Chair Gensler has forecast that the SEC staff may make greater use of AI to surveil for and detect suspicious conduct that may warrant opening an examination or investigation. Gensler also sought additional funding from Congress to expand the SEC’s 2024 budget for emerging AI technologies. Consistent with that message and budget request, the SEC staff is already examining how AI may affect investment analyses and decision-making, and appears to be leaving no stone unturned. Recent SEC inquiries to firms address AI from all possible touch points: disclosures, investment modeling, marketing, policies and procedures, training, supervision, data security, trade errors and incident reports, and evaluation of investor risk tolerance. This approach underscores that the SEC may also expand its focus to other AI-related risks, such as those highlighted in an SEC risk alert concerning alternative data and material nonpublic information (MNPI).
Although certain industry groups have publicly requested that the SEC withdraw its proposed AI-specific rules, chief compliance officers (CCOs) and compliance professionals should not wait for the SEC’s response to act. Firms must recognize that fiduciary, governance and other laws and regulations already in effect apply to a firm’s direct or indirect use of AI technologies. As discussed above, AI presents internal- and external-facing legal, regulatory and reputational risks for firms. The good news is that CCOs and compliance professionals can mitigate such risks by acting proactively.
Kroll’s experts stand ready to leverage our regulatory compliance experience to craft policies, procedures, testing, training and recordkeeping designed to help firms mitigate the risk of noncompliance when they adopt AI tools into their workplace operations. Kroll can design gap analyses to identify risks and recommend enhancements to firms’ compliance programs to account for AI adoption. We also prepare SEC-registered firms to navigate the complexities of examination and investigation inquiries, especially as the SEC continues to probe AI applications within the financial services industry. Contact our experts today to learn more.
by Ken C. Joseph, Esq., Ana D. Petrovic, Jonathan "Yoni" Schenker, Jack Thomas, Justin Hearon