AI is a rapidly evolving field and Kroll is focused on advancing the AI security testing approach for large language models (LLM) and, more broadly, AI and ML. Our methodology and approach are constantly updated to reflect the latest developments in these fast-changing technologies.
The biggest challenge in LLM testing is dealing with the unlimited complexity of language inputs. LLMs take text-based input and tokenize the words into elements of a vector; these vectors are then transformed across a massive neural network that captures the relationships of all words to form human languages. The scale of the complexity of these relationships means that an attacker has virtually unlimited ways to prompt an LLM to achieve their goals. Designing and testing guardrails to protect such a system is extremely challenging.
Across all our AI security testing, 92% of assessments discovered a “prompt injection” vulnerability, potentially allowing attackers to manipulate the model and its behavior through malicious inputs.
In our testing, 80% of “prompt injection” vulnerabilities were either high or medium risk. Direct and indirect prompt injection attacks can lead to sensitive data exposure, excessive computing costs and reputational damage.
By 2026, more than 80% of enterprises will have used GenAI APIs, models and/or deployed generative AI (GenAI)-enabled applications in production environments.
Thirty-four percent of organizations are either already using or implementing AI application security tools to mitigate the accompanying risks of GenAI.
Kroll has developed an AI security testing methodology that aligns with the OWASP Top 10 for LLM applications.
The OWASP LLM Top 10 is a new standard that will also continue to evolve and mature with LLM security; it serves as a baseline for our coverage. Importantly, our approach goes beyond ensuring coverage of the OWASP LLM Top 10 categories to help clients identify and understand the risks presented by LLM systems in the context of their applications and business. Our AI penetration testing approach has the following components:
All our AI penetration testing services can be delivered as part of Kroll’s ultra-flexible cyber risk retainer, along with a variety of services like penetration testing, red team and tabletop exercises. In addition to bringing solutions together in one flexible package, the retainer allows clients to gain prioritized access to Kroll’s elite digital forensics and incident response team in the event of an incident.
Kroll’s product security experts upscale your AppSec program with strategic application security services catered to your team’s culture and needs, merging engineering and security into a nimble unit.
Kroll’s multi-layered approach to cloud security consulting services merges our industry-leading team of AWS and Azure-certified architects, cloud security experts and unrivalled incident expertise.
Red team security services from Kroll go beyond traditional penetration testing, leveraging our frontline threat intelligence and the adversarial mindset used by threat actors to push the limits of your information security controls.
Kroll’s team of certified cloud pen testers uncover vulnerabilities in your cloud environment and apps before they can be compromised by threat actors.
Kroll delivers more than a typical incident response retainer—secure a true cyber risk retainer with elite digital forensics and incident response capabilities and maximum flexibility for proactive and notification services.
Fortify your defenses and maximize your technology investment with a Microsoft 365 security assessment from Kroll.
Kroll’s Virtual CISO (vCISO) services help executives, security and technology teams safeguard information assets while supporting business operations with augmented cyber expertise to reduce business risk, signal commitment to data security and enhance overall security posture.
by Ken C. Joseph, Esq., Richard Kerr