How red teaming helps safeguard the infrastructure behind AI models
...
Unique risks to AI models
According to Ruben Boonen, CNE Capability Development Lead at IBM: “One problem is that you have these models hosted on giant open-source data stores. You don’t know who created them or how they were modified, and there are a number of issues that can occur here. For example, let’s say you use PyTorch to load a model hosted on one of these data stores, but it has been changed in a way that’s undesirable. It can be very hard to tell because the model might behave normally in 99% of cases.” ...
Recently, researchers discovered thousands of malicious files hosted on Hugging Face, one of the largest repositories for open-source generative AI models and training data sets. These included roughly a hundred models capable of injecting harmful code onto users’ machines. ...
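Many of these attacks work because model files are often plain Python pickle archives, and pickle can execute arbitrary code during deserialization. The sketch below (stdlib only; the `Payload` class and `looks_suspicious` helper are illustrative, not a real scanner) shows how a pickled object can trigger code on load, and how inspecting the opcode stream before unpickling can flag suspicious constructs:

```python
import pickle
import pickletools

# Pickle runs code at load time via __reduce__: unpickling this object
# calls print(). An attacker would substitute something far worse.
class Payload:
    def __reduce__(self):
        # Benign stand-in for an attacker's command execution.
        return (print, ("arbitrary code ran during unpickling",))

malicious_bytes = pickle.dumps(Payload())

def looks_suspicious(data: bytes) -> bool:
    """Crude heuristic: GLOBAL/STACK_GLOBAL + REDUCE opcodes mean the
    pickle imports something and calls it. Legitimate object pickles
    also use these, so real scanners check *which* names are imported;
    this sketch only shows where such tools hook in."""
    suspicious = {"GLOBAL", "STACK_GLOBAL", "REDUCE"}
    return any(op.name in suspicious
               for op, _arg, _pos in pickletools.genops(data))

print(looks_suspicious(malicious_bytes))            # flags the payload
print(looks_suspicious(pickle.dumps([1, 2, 3])))    # plain data passes
```

In practice, defenses include dedicated pickle scanners, safer serialization formats such as safetensors, and restricted loaders like PyTorch's `torch.load(..., weights_only=True)`, which refuses to unpickle arbitrary objects.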
In most cases, AI systems run on cloud architecture rather than local machines. After all, the cloud provides the scalable data storage and processing power required to run AI models and make them broadly accessible. However, that accessibility also expands the attack surface, allowing adversaries to exploit vulnerabilities such as misconfigured access permissions. ...
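A misconfigured access permission is often just an overly broad statement in a policy document. As a minimal sketch, assuming an AWS-style policy JSON (the `policy_doc` and `find_public_statements` helper are hypothetical), a red team script might flag statements that grant access to any principal:

```python
import json

# Hypothetical bucket policy in AWS-style policy JSON. The "*" principal
# makes the model artifacts readable by anyone on the internet.
policy_doc = json.loads("""
{
  "Statement": [
    {"Effect": "Allow", "Principal": "*",
     "Action": "s3:GetObject", "Resource": "arn:aws:s3:::models/*"}
  ]
}
""")

def find_public_statements(policy):
    """Return Allow statements whose principal is the wildcard '*'."""
    flagged = []
    for stmt in policy.get("Statement", []):
        principal = stmt.get("Principal")
        is_wildcard = principal == "*" or (
            isinstance(principal, dict) and "*" in principal.values()
        )
        if stmt.get("Effect") == "Allow" and is_wildcard:
            flagged.append(stmt)
    return flagged

print(len(find_public_statements(policy_doc)))  # 1 public statement found
```

Real cloud audits use provider tooling (e.g. IAM policy analyzers) rather than hand-rolled checks, but the principle is the same: enumerate permissions and flag any that are broader than intended.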
Red teams can proactively address several aspects of AI model theft, such as:
- API attacks
- Side-channel attacks
- Container and orchestration attacks
- Supply chain attacks
...
See the full story here: https://www.securityintelligence.com/articles/how-red-teaming-helps-safeguard-the-infrastructure-behind-ai-models/