Quantcast
Channel: Joseph Lucas – NVIDIA Technical Blog
Browsing latest articles
Browse All 10 View Live

Image may be NSFW.
Clik here to view.

Improving Machine Learning Security Skills at a DEF CON Competition

Machine learning (ML) security is a new discipline focused on the security of machine learning systems and the data they are built upon. It exists at the intersection of the information security and...

View Article


Image may be NSFW.
Clik here to view.

Evaluating the Security of Jupyter Environments

How can you tell if your Jupyter instance is secure? The NVIDIA AI Red Team has developed a JupyterLab extension to automatically assess the security of Jupyter environments. jupysec is a tool that...

View Article


Image may be NSFW.
Clik here to view.

NVIDIA AI Red Team: An Introduction

Machine learning has the promise to improve our world, and in many ways it already has. However, research and lived experiences continue to show this technology has risks. Capabilities that used to be...

View Article

Image may be NSFW.
Clik here to view.

Mitigating Stored Prompt Injection Attacks Against LLM Applications

Prompt injection attacks are a hot topic in the new world of large language model (LLM) application security. These attacks are unique due to how ‌malicious text is stored in the system. An LLM is...

View Article

Image may be NSFW.
Clik here to view.

Analyzing the Security of Machine Learning Research Code

The NVIDIA AI Red Team is focused on scaling secure development practices across the data, science, and AI ecosystems. We participate in open-source security initiatives, release tools, present at...

View Article


Image may be NSFW.
Clik here to view.

AI Red Team: Machine Learning Security Training

At Black Hat USA 2023, NVIDIA hosted a two-day training session that provided security professionals with a realistic environment and methodology to explore the unique risks presented by machine...

View Article

Image may be NSFW.
Clik here to view.

Secure LLM Tokenizers to Maintain Application Integrity

This post is part of the NVIDIA AI Red Team’s continuing vulnerability and technique research. Use the concepts presented to responsibly assess and increase the security of your AI development and...

View Article

Image may be NSFW.
Clik here to view.

Defending AI Model Files from Unauthorized Access with Canaries

As AI models grow in capability and cost of creation, and hold more sensitive or proprietary data, securing them at rest is increasingly important. Organizations are designing policies and tools, often...

View Article


Image may be NSFW.
Clik here to view.

Sandboxing Agentic AI Workflows with WebAssembly

Agentic AI workflows often involve the execution of large language model (LLM)-generated code to perform tasks like creating data visualizations. However, this code should be sanitized and executed in...

View Article


Image may be NSFW.
Clik here to view.

Structuring Applications to Secure the KV Cache

When interacting with transformer-based models like large language models (LLMs) and vision-language models (VLMs), the structure of the input shapes the model’s output. But prompts are often more than...

View Article
Browsing latest articles
Browse All 10 View Live