The topic of this “Ask the Experts” episode is one that is much discussed right now: how to secure AI. We talked to Scott Best, Senior Director of Security Products at Rambus, to find out more.
The discussion focused on the challenges of securing AI systems, drawing parallels with FPGA systems. It also covered the immense value that an AI inference model holds and how hardware-level security solutions are key to protecting it from potential adversaries.
The discussion also touched on the emerging threat of quantum computers, which could compromise public key cryptography. To counter these threats, Rambus offers a broad portfolio of security IP to protect AI silicon.
The interview concluded with a rather meta discussion on the potential of AI being used to attack AI systems, further highlighting the need for robust security measures.
Expert
- Scott Best, Senior Director of Security Products, Rambus
Key Takeaways
- Securing Inference Models: Securing AI systems revolves around protecting the inference model, which distills all the information the AI model was trained on. The model is a prime target for adversaries and competitors, so it must be secured both while it sits in memory (data at rest) and while it is pulled into a chip (data in use); a minimal software sketch of this idea follows the list.
- Hardware-Based AI Security: AI security needs to take place at the hardware level, and it’s up to chip manufacturers to implement a secure solution. This means securing data privacy and authenticity and making sure that these security measures do not hinder the system’s performance.
- Quantum Threats to Security: The advent of powerful quantum computers poses a threat to current public key cryptography. Systems being built today that are expected to be in the field for 5-10 years or more need to consider implementing quantum safe cryptography to ensure the privacy and authenticity of their data.
- Rambus Security IP: Rambus offers a broad portfolio of security IP that enables hardware-based security for AI silicon, including Root of Trust IP for data-at-rest protection, Inline Memory Encryption IP for data-in-use protection, and Quantum Safe Cryptography solutions to protect devices and data in the quantum era.
- AI-Driven Security Attacks: Adversaries could potentially use AI to attack AI, particularly in power analysis side-channel attacks, where a model could be trained to find a small signal within a lot of noise (a toy illustration of this signal-in-noise problem follows below). This further highlights the need for robust security measures in AI systems.
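To make the data-at-rest / data-in-use distinction concrete, here is a minimal software sketch of sealing an inference model with authenticated encryption. It only illustrates the privacy-and-authenticity goal: in the hardware solutions Scott describes, the keys and cryptography live in silicon (Root of Trust, inline memory encryption) rather than in host software, and the function names and placeholder data here are invented for the example.

```python
# Minimal sketch: protecting an inference model "at rest" with authenticated
# encryption (AES-GCM). In a hardware deployment this is handled by silicon
# (device-unique keys in a Root of Trust, inline memory encryption), not by
# host Python; this only shows the privacy + authenticity goal.
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

def seal_model(model_weights: bytes, key: bytes) -> bytes:
    """Encrypt and authenticate the model so it can sit in external memory."""
    nonce = os.urandom(12)                       # unique per encryption
    aesgcm = AESGCM(key)
    ciphertext = aesgcm.encrypt(nonce, model_weights, b"model-v1")
    return nonce + ciphertext                    # store nonce alongside data

def load_model(sealed: bytes, key: bytes) -> bytes:
    """Decrypt only when the model is pulled into the chip ("data in use")."""
    nonce, ciphertext = sealed[:12], sealed[12:]
    aesgcm = AESGCM(key)
    # Raises InvalidTag if the stored model was tampered with (authenticity).
    return aesgcm.decrypt(nonce, ciphertext, b"model-v1")

if __name__ == "__main__":
    key = AESGCM.generate_key(bit_length=256)    # in practice, a device-unique hardware key
    weights = b"\x00" * 1024                     # placeholder for real model weights
    sealed = seal_model(weights, key)
    assert load_model(sealed, key) == weights
```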
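The “small signal in a lot of noise” problem behind power-analysis side channels can also be illustrated with a toy simulation. The sketch below uses plain trace averaging rather than a trained model, and every number in it (leakage size, noise level, trace count, sample index) is invented; it only shows why a data-dependent leak that is invisible in a single measurement becomes recoverable across many, which is the separation an AI-based attacker would learn to exploit.

```python
# Toy illustration of power-analysis leakage: each simulated trace carries a
# tiny data-dependent component (the Hamming weight of a secret byte) buried
# in measurement noise. Averaging many traces suppresses the noise by roughly
# 1/sqrt(N) and exposes the leak. All values are invented for illustration.
import numpy as np

rng = np.random.default_rng(0)
secret_byte = 0x3C
leak = bin(secret_byte).count("1") * 0.01        # ~0.01 units per set bit
n_traces, n_samples, leak_sample = 50_000, 200, 57

# Simulate traces: Gaussian noise everywhere, plus the leak at one time sample.
traces = rng.normal(0.0, 1.0, size=(n_traces, n_samples))
traces[:, leak_sample] += leak

single = traces[0]                  # one trace: the leak is far below the noise floor
averaged = traces.mean(axis=0)      # averaging reveals the data-dependent component

print("leak sample in a single trace:", round(single[leak_sample], 3))
print("leak sample after averaging  :", round(averaged[leak_sample], 3))
print("strongest sample after avg   :", int(np.argmax(averaged)))  # typically recovers 57
```

A machine-learning attacker would replace the averaging step with a classifier trained on labeled traces, which is why hardware countermeasures against side-channel leakage remain important even when the raw signal is extremely small.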
Key Quote
In AI systems, there’s an inference model produced by a training system, and that inference model is then loaded into an AI chip, and that AI chip then executes that inference model. These inference models contain years of value to companies who created the training system and associated training data. If you’re an adversary or a competitor that wants to see what the “secret sauce” of a particular company is, then the inference model is of great interest.