From 72dbe489f34159726e6b947b83f8c83b51b8b7c6 Mon Sep 17 00:00:00 2001
From: carlospolop
Date: Tue, 10 Jun 2025 17:41:34 +0200
Subject: [PATCH] a

---
 src/AI/AI-Prompts.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/src/AI/AI-Prompts.md b/src/AI/AI-Prompts.md
index 5777f019c..da72ee550 100644
--- a/src/AI/AI-Prompts.md
+++ b/src/AI/AI-Prompts.md
@@ -406,7 +406,7 @@ Let's see common LLM prompt WAF bypasses:
 
 As already explained above, prompt injection techniques can be used to bypass potential WAFs by trying to "convince" the LLM to leak the information or perform unexpected actions.
 
-### Token Smuggling
+### Token Confusion
 
 As explained in this [SpecterOps post](https://www.llama.com/docs/model-cards-and-prompt-formats/prompt-guard/), usually the WAFs are far less capable than the LLMs they protect. This means that usually they will be trained to detect more specific patterns to know if a message is malicious or not.
 