Prompt injection attacks are a hot topic in the new world of large language model (LLM) application security. These attacks are unique due to how malicious text is stored in the system. An LLM is provided with prompt text, and it responds based on all the data it has been trained on and has access to. To supplement the prompt with useful context, some AI applications capture the input from the...