Quantcast
Channel: Joseph Lucas – NVIDIA Technical Blog
Viewing all articles
Browse latest Browse all 6

Mitigating Stored Prompt Injection Attacks Against LLM Applications

$
0
0

Prompt injection attacks are a hot topic in the new world of large language model (LLM) application security. These attacks are unique due to how ‌malicious text is stored in the system. An LLM is provided with prompt text, and it responds based on all the data it has been trained on and has access to. To supplement the prompt with useful context, some AI applications capture the input from the...

Source


Viewing all articles
Browse latest Browse all 6

Trending Articles