Wed, Oct 16, 2024
There are two aspects to consider when it comes to AI and elections. One is the role of AI in shaping how elections are conducted and the risks it poses in promoting deception and disinformation, which is addressed in another part of Kroll’s election series, “What Have We Learned About GenAI and Elections?”
The other consideration, and the focus of this article, is how the outcomes of the U.S. election and of other recent elections in the UK, France and elsewhere may affect the regulation of AI development, as governments around the world struggle to balance the need for security and AI risk management against the desire to promote innovation that could unlock the technology's potential.
Thus far, regulatory approaches to AI have varied as governments around the world figure out how to deal with a dynamic and rapidly evolving AI landscape.
The EU Artificial Intelligence Act (AI Act), which took effect in August 2024, seeks to harmonize rules across the EU and is the first comprehensive regulatory framework to address AI specifically, using a risk-based approach: the higher the perceived risk of AI in a particular use or circumstance, the more stringently the AI Act's rules apply. The highest classification bans AI outright where it is deemed a clear threat to fundamental rights. The AI Act seeks to promote trustworthy AI. Of greatest relevance to businesses are the ethical guidelines, the rules with which they must comply, and the penalties for noncompliance, which can reach EUR 35 million or 7% of global annual turnover, whichever is higher.
It is not yet clear whether this year’s various EU elections will lead to efforts to fundamentally alter the AI Act. Several areas of the act are still not settled, and numerous critics argue that it creates barriers to innovation. Some EU member states may seek to loosen the rules around high-risk and general-purpose AI, which they view as too restrictive.
In the U.S., a lighter regulatory touch has prevailed thus far. While no comprehensive federal AI legislation has been enacted during the Biden administration, various states, including California, have passed their own forms of AI regulation, which businesses will need to consider.
The impending U.S. election may change this, as the Harris and Trump campaigns have expressed differing perspectives. Harris’ preferred approach can be seen in the Biden-Harris administration’s October 2023 Executive Order on Safe, Secure, and Trustworthy Artificial Intelligence, which enshrined a set of principles for responsible AI development. Trump has indicated that he favors deregulation and promoting innovation, and he has reportedly said he would repeal the executive order because it is “dangerous” and “hinders innovation”.1
In the UK, the previous Conservative government spelled out a "pro-innovation" approach in its 2023 AI Regulation White Paper, aiming to promote innovation by relying on existing laws and regulators to implement a framework of ethical principles rather than imposing new regulations. It remains to be seen whether the new Labour government will alter this approach. Prime Minister Keir Starmer has indicated a preference for regulation, though not as extensive as the EU’s, but details are limited.
China implemented Interim Measures for the Management of Generative Artificial Intelligence Services in August 2023 “to promote healthy development of generative AI, protect national security and public interests, and protect the rights of citizens, legal entities and other organizations.” The measures reflect a regulatory approach that has evolved from industry self-regulation to national standards to specific rules.
The UAE, through its UAE National Strategy for Artificial Intelligence 2031, seeks to establish itself as a global leader and hub for AI development. Among its objectives is “optimizing AI governance and regulations” and promoting the ethical use of AI through its AI Ethics Principles and Guidelines.
As with other new technologies, AI’s rapid development poses challenges for regulators. Regulations tend to be written for known technologies and risks, while emerging capabilities, particularly in generative AI, are often impossible to anticipate and thus difficult to regulate. For example, early drafts of the EU AI Act did not anticipate the emergence of large language models and had to be revised during the legislative process to reflect the technology’s rapid development.
Invariably, there will be gaps to fill as the technology and the issues evolve. Finding the right balance between fostering innovation and protecting investors, consumers and the public is at least partly driven by political choices. The outcome of the upcoming U.S. election is likely to determine whether AI regulation at the U.S. federal level accelerates or decelerates.
And, as with other kinds of regulation, there is a need for global alignment, such as on ethical standards, to discourage more profit-oriented AI players from shopping around for the least restrictive jurisdiction.
There does seem to be near-universal agreement around the world on the need for AI to be safe, secure and transparent, and not to cause harm or threaten fundamental rights. This principles-based approach is reflected in the Council of Europe Framework Convention on Artificial Intelligence and Human Rights, Democracy and the Rule of Law (the “Convention”), drafted by the Council of Europe’s 46 member states and 11 non-member states, including the U.S., Japan and Israel.
For businesses navigating this evolving and still uncertain regulatory environment, there are a number of key considerations.
Those considerations and risks can and will be addressed, because the opportunity AI presents is so compelling: making firms more efficient, enabling high-speed processing of large datasets to obtain actionable intelligence, and improving the detection of fraud and other risks across both structured and unstructured data.
With AI and generative AI, as with past disruptive technologies such as the internet, regulators are concerned about conflicts, undisclosed risks and ineffective or nonexistent oversight, given the potential for significant impacts on financial markets and on vital sectors such as healthcare.
But, much as companies and regulators became comfortable with the internet, which is itself regulated to some extent, comfort with AI and generative AI is likely to grow over time as use cases multiply. That said, industry experts believe we are in the early stages of a technology that is already remarkably powerful, and many industry leaders have publicly expressed concerns about the existential risks it poses to humanity. Effective guardrails are essential, and the responsible use of AI technologies will need to be driven by organizations, creators and developers.
1. “Experts Worry Republicans Will Repeal Biden’s AI Executive Order,” TIME.