Hacking AI - Understanding cyber risks in AI-driven applications
WEBINAR ON DEMAND | CYBERSECURITY
As AI becomes more integrated into critical operations, understanding how threat actors can exploit these systems is essential. If you’re involved in AI operations, development, or cybersecurity, this session will provide valuable insights.
With the rise of AI, organizations face new vulnerabilities that differ from traditional cyber threats. From data leakage to unauthorized access, AI can open pathways that are uniquely susceptible to exploitation.
In this session, Ralph Moonen, Technical Director at Secura/Bureau Veritas, will explore the potential attack vectors in AI systems and discuss practical mitigations to safeguard your applications. Learn about real-world risks associated with AI-driven technologies and gain actionable steps to protect your organization.
Keypoints in this webinar
Here are the key points from the webinar:
1. AI as a Cybersecurity Risk: Key Threats and Vulnerabilities
- AI models can leak confidential data if trained on datasets containing sensitive information (e.g., passwords or personal data).
- Prompt injection is a common attack technique where AI systems are manipulated to exhibit unwanted behavior, such as disclosing sensitive data or performing malicious actions.
- Threat actors are increasingly using AI for attacks, such as generating deepfakes and conducting social engineering campaigns.
2. AI Regulations and Standards
- The AI Act (Europe) classifies AI systems into risk categories, such as "prohibited systems" and "high-risk systems," which require strict regulation.
- The upcoming ISO 42001 standard provides a framework for AI management, including risk assessment, ethical use, and quality assurance.
- Regional differences: Europe and China focus on stricter regulations, while the U.S. leans toward guidelines and innovation.
3. Best Practices for Securing AI
- Organizations should implement an AI usage policy and train users to recognize AI-related threats, such as deepfakes and AI-generated content.
- Opt for on-premise AI solutions where possible, as cloud-based AI poses risks of data leakage and reduced security control.
- Validate AI output and utilize resources like the LLM OWASP Top 10, which provides insights into the top threats and security guidelines for AI models.
Intended Audience
This webinar is designed for professionals involved in the development, implementation, and security of AI systems, particularly those interested in understanding the unique cybersecurity risks of AI-driven applications. Key attendees include:
- AI operators overseeing AI deployments in their organizations
- Developers and IT managers responsible for implementing AI solutions
- Pentesters focused on identifying vulnerabilities in emerging technologies
- Security professionals interested in mitigating AI-specific cyber threats
Watch the replay of this webinar today:
Questions & Answers from the Webinar
1. Are there known cases of AI-related data leaks?
Answer: Yes, there have been documented cases of AI-related data leaks. For example Prompt Security lists 8 real world examples of AI-related data leaks.
2. Are there tools to “humanize” AI output?
Answer: Yes, it is possible to humanize AI output by instructing the AI to adopt specific characteristics. Additionally, other AI models can process and refine the output to make it more natural. For instance, some software tools use AI to add human-like nuances, such as in music production, where humanization enhances the output's natural feel
3. How do you train users to recognize deepfakes?
Answer: Training users to recognize deepfakes is challenging as the technology becomes more advanced. One effective approach is using unique codewords or questions that only a real person would know. For example, a deepfake attack on a Ferrari executive was thwarted by asking questions only the real individual could answer. Increased vigilance is key in such scenarios.
4. Are there risks in blocking all AIs and only allowing Microsoft Copilot?
Answer: Completely blocking all AIs is very difficult due to the wide range of versions and access methods available. Creating user policies and usage guidelines is a more practical approach than attempting to block all AIs.
5. What would it take for AI to become sentient?
Answer: This is a complex question. While AIs can pass the Turing test, this does not imply consciousness. True sentience would require a significantly more advanced architecture, but the exact path to achieving sentience remains unclear.
6. How does Secura use AI to enhance productivity?
Answer: One example is a pen tester who used AI to quickly generate PowerShell scripts for privilege escalation during an internal test. AI can be particularly useful in automating repetitive tasks such as scripting.
7. What are the risks of using Microsoft Copilot in software development?
Answer: One risk is that input, such as API keys in source code, could unintentionally be used to train the AI model, making those keys accessible to other users. Another risk arises from training models on example code from platforms like StackOverflow, which may include vulnerabilities, leading to insecure AI-generated code. Disabling the option to train the model on user input can mitigate this risk.
ABOUT THE SPEAKER
Ralph Moonen
Ralph Moonen is Technisch Directeur bij Secura/Bureau Veritas en brengt meer dan 20 jaar ervaring in informatiebeveiliging met zich mee. Hij heeft samengewerkt met grote klanten, waaronder Fortune 500-bedrijven, overheden, financiële instellingen en internationale organisaties. Daarnaast is Ralph docent in het postdoctorale IT-auditprogramma aan de Tilburg University.
In zijn rol heeft Ralph uitgebreid onderzoek gedaan naar de cybersecurity-implicaties van kunstmatige intelligentie (AI). Hij heeft artikelen geschreven over het veilige gebruik van AI, waarbij hij mogelijke risico’s zoals datalekken en systeemmanipulatie behandelt. Zijn werk benadrukt het belang van begrip en mitigatie van de unieke beveiligingsuitdagingen die AI-technologieën met zich meebrengen.
CONTACT US FOR MORE INFORMATION
Would you like to learn more about how we can help you secure your AI applications? Please fill out the form, and we will contact you within one business day.
Why choose Secura | Bureau Veritas
At Secura/Bureau Veritas, we are dedicated to being your trusted partner in cybersecurity. We go beyond quick fixes and isolated services. Our integrated approach makes sure that every aspect of your company or organization is cyber resilient, from your technology to your processes and your people.
Secura is the cybersecurity division of Bureau Veritas, specialized in testing, inspection and certification. Bureau Veritas was founded in 1828, has over 80.000 employees and is active in 140 countries.