diff --git a/src/AI/AI-Deep-Learning.md b/src/AI/AI-Deep-Learning.md
index 7e8b4f7ba..4540e422a 100644
--- a/src/AI/AI-Deep-Learning.md
+++ b/src/AI/AI-Deep-Learning.md
@@ -435,4 +435,3 @@ Moreover, to generate an image from a text prompt, diffusion models typically fo
 
 {{#include ../banners/hacktricks-training.md}}
 
-
diff --git a/src/AI/AI-MCP-Servers.md b/src/AI/AI-MCP-Servers.md
index f5bdd5d2e..0e2f4132e 100644
--- a/src/AI/AI-MCP-Servers.md
+++ b/src/AI/AI-MCP-Servers.md
@@ -102,4 +102,4 @@ For more information about Prompt Injection check:
 AI-Prompts.md
 {{#endref}}
 
-{{#include ../banners/hacktricks-training.md}}
+{{#include ../banners/hacktricks-training.md}}
\ No newline at end of file
diff --git a/src/AI/AI-Model-Data-Preparation-and-Evaluation.md b/src/AI/AI-Model-Data-Preparation-and-Evaluation.md
index e46da661a..75352a17e 100644
--- a/src/AI/AI-Model-Data-Preparation-and-Evaluation.md
+++ b/src/AI/AI-Model-Data-Preparation-and-Evaluation.md
@@ -240,4 +240,3 @@ The confusion matrix can be used to calculate various evaluation metrics, such a
 
 {{#include ../banners/hacktricks-training.md}}
 
-
diff --git a/src/AI/AI-Models-RCE.md b/src/AI/AI-Models-RCE.md
index a624ba26e..69a7297a5 100644
--- a/src/AI/AI-Models-RCE.md
+++ b/src/AI/AI-Models-RCE.md
@@ -27,4 +27,4 @@ At the time of the writing these are some examples of this type of vulnerabilit
 
 Moreover, there are some Python pickle-based models, like the ones used by [PyTorch](https://github.com/pytorch/pytorch/security), that can be used to execute arbitrary code on the system if they are not loaded with `weights_only=True`. So, any pickle-based model might be especially susceptible to this type of attack, even if it is not listed in the table above.
 
-{{#include ../banners/hacktricks-training.md}}
+{{#include ../banners/hacktricks-training.md}}
\ No newline at end of file
diff --git a/src/AI/AI-Prompts.md b/src/AI/AI-Prompts.md
index 5777f019c..f6f769d59 100644
--- a/src/AI/AI-Prompts.md
+++ b/src/AI/AI-Prompts.md
@@ -419,4 +419,4 @@ The WAF won't see these tokens as malicious, but the back LLM will actually unde
 
 Note that this also shows how previously mentioned techniques, where the message is sent encoded or obfuscated, can be used to bypass WAFs: the WAF will not understand the message, but the LLM will.
 
-{{#include ../banners/hacktricks-training.md}}
+{{#include ../banners/hacktricks-training.md}}
\ No newline at end of file
diff --git a/src/AI/AI-Reinforcement-Learning-Algorithms.md b/src/AI/AI-Reinforcement-Learning-Algorithms.md
index 387ddb27f..70a38f63b 100644
--- a/src/AI/AI-Reinforcement-Learning-Algorithms.md
+++ b/src/AI/AI-Reinforcement-Learning-Algorithms.md
@@ -77,4 +77,3 @@ SARSA is an **on-policy** learning algorithm, meaning it updates the Q-values ba
 On-policy methods like SARSA can be more stable in certain environments, as they learn from the actions actually taken. However, they may converge more slowly compared to off-policy methods like Q-Learning, which can learn from a wider range of experiences.
 
 {{#include ../banners/hacktricks-training.md}}
-
diff --git a/src/AI/AI-Risk-Frameworks.md b/src/AI/AI-Risk-Frameworks.md
index 77d4de65b..e683c7b1a 100644
--- a/src/AI/AI-Risk-Frameworks.md
+++ b/src/AI/AI-Risk-Frameworks.md
@@ -78,4 +78,4 @@ Google's [SAIF (Security AI Framework)](https://saif.google/secure-ai-framework/
 
 The [MITRE AI ATLAS Matrix](https://atlas.mitre.org/matrices/ATLAS) provides a comprehensive framework for understanding and mitigating risks associated with AI systems. It categorizes various attack techniques and tactics that adversaries may use against AI models, and also how to use AI systems to perform different attacks.
 
-{{#include ../banners/hacktricks-training.md}}
+{{#include ../banners/hacktricks-training.md}}
\ No newline at end of file
diff --git a/src/AI/AI-Supervised-Learning-Algorithms.md b/src/AI/AI-Supervised-Learning-Algorithms.md
index 91eb1f1d0..76c6f73a0 100644
--- a/src/AI/AI-Supervised-Learning-Algorithms.md
+++ b/src/AI/AI-Supervised-Learning-Algorithms.md
@@ -1028,3 +1028,4 @@ Ensemble methods like this demonstrate the principle that *"combining multiple m
 - [https://medium.com/@sarahzouinina/ensemble-learning-boosting-model-performance-by-combining-strengths-02e56165b901](https://medium.com/@sarahzouinina/ensemble-learning-boosting-model-performance-by-combining-strengths-02e56165b901)
 
 {{#include ../banners/hacktricks-training.md}}
+
diff --git a/src/AI/AI-Unsupervised-Learning-Algorithms.md b/src/AI/AI-Unsupervised-Learning-Algorithms.md
index 653874c37..a45a87bcf 100644
--- a/src/AI/AI-Unsupervised-Learning-Algorithms.md
+++ b/src/AI/AI-Unsupervised-Learning-Algorithms.md
@@ -457,4 +457,3 @@ Here we combined our previous 4D normal dataset with a handful of extreme outlie
 
 {{#include ../banners/hacktricks-training.md}}
 
-
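As context for the `AI-Models-RCE.md` hunk above, which notes that pickle-based model files can execute arbitrary code unless loaded with `weights_only=True`: a minimal, deliberately harmless sketch of the underlying pickle gadget (not part of the patch itself; the `EvilModel` class name is made up, and the benign `eval` payload stands in for what a real attacker would do with `os.system` or similar).

```python
import pickle

class EvilModel:
    """Stand-in for a malicious 'model' checkpoint: its payload runs at load time."""
    def __reduce__(self):
        # pickle serializes this object as "call eval('2 + 2') during deserialization".
        # A real attacker would return something like (os.system, ("<shell command>",)).
        return (eval, ("2 + 2",))

malicious_blob = pickle.dumps(EvilModel())  # what an attacker would ship as a .pt/.pkl file
loaded = pickle.loads(malicious_blob)       # "loading the model" runs the payload
print(loaded)                               # prints 4: the embedded expression was evaluated
```

This is the behavior `weights_only=True` mitigates in `torch.load`: it refuses to unpickle arbitrary callables and only reconstructs tensor and primitive data.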