Quantcast
Channel: Joseph Lucas – NVIDIA Technical Blog
Browsing latest articles
Browse All 6 View Live

Image may be NSFW.
Clik here to view.

Improving Machine Learning Security Skills at a DEF CON Competition

Machine learning (ML) security is a new discipline focused on the security of machine learning systems and the data they are built upon. It exists at the intersection of the information security and...

View Article


Image may be NSFW.
Clik here to view.

Evaluating the Security of Jupyter Environments

How can you tell if your Jupyter instance is secure? The NVIDIA AI Red Team has developed a JupyterLab extension to automatically assess the security of Jupyter environments. jupysec is a tool that...

View Article

Image may be NSFW.
Clik here to view.

NVIDIA AI Red Team: An Introduction

Machine learning has the promise to improve our world, and in many ways it already has. However, research and lived experiences continue to show this technology has risks. Capabilities that used to be...

View Article

Image may be NSFW.
Clik here to view.

Mitigating Stored Prompt Injection Attacks Against LLM Applications

Prompt injection attacks are a hot topic in the new world of large language model (LLM) application security. These attacks are unique due to how ‌malicious text is stored in the system. An LLM is...

View Article

Image may be NSFW.
Clik here to view.

Analyzing the Security of Machine Learning Research Code

The NVIDIA AI Red Team is focused on scaling secure development practices across the data, science, and AI ecosystems. We participate in open-source security initiatives, release tools, present at...

View Article


Image may be NSFW.
Clik here to view.

AI Red Team: Machine Learning Security Training

At Black Hat USA 2023, NVIDIA hosted a two-day training session that provided security professionals with a realistic environment and methodology to explore the unique risks presented by machine...

View Article
Browsing latest articles
Browse All 6 View Live