Improving Machine Learning Security Skills at a DEF CON Competition
Machine learning (ML) security is a new discipline focused on the security of machine learning systems and the data they are built upon. It exists at the intersection of the information security and...
View ArticleEvaluating the Security of Jupyter Environments
How can you tell if your Jupyter instance is secure? The NVIDIA AI Red Team has developed a JupyterLab extension to automatically assess the security of Jupyter environments. jupysec is a tool that...
View ArticleNVIDIA AI Red Team: An Introduction
Machine learning has the promise to improve our world, and in many ways it already has. However, research and lived experiences continue to show this technology has risks. Capabilities that used to be...
View ArticleMitigating Stored Prompt Injection Attacks Against LLM Applications
Prompt injection attacks are a hot topic in the new world of large language model (LLM) application security. These attacks are unique due to how malicious text is stored in the system. An LLM is...
View ArticleAnalyzing the Security of Machine Learning Research Code
The NVIDIA AI Red Team is focused on scaling secure development practices across the data, science, and AI ecosystems. We participate in open-source security initiatives, release tools, present at...
View ArticleAI Red Team: Machine Learning Security Training
At Black Hat USA 2023, NVIDIA hosted a two-day training session that provided security professionals with a realistic environment and methodology to explore the unique risks presented by machine...
View Article