mirror of
https://github.com/HackTricks-wiki/hacktricks.git
synced 2025-10-10 18:36:50 +00:00
a
This commit is contained in:
parent
4cfe4a56cc
commit
7eea100571
@ -435,3 +435,4 @@ Moreover, to generate an image from a text prompt, diffusion models typically fo
|
||||
|
||||
|
||||
{{#include ../banners/hacktricks-training.md}}
|
||||
|
||||
|
@ -103,4 +103,4 @@ For more information about Prompt Injection check:
|
||||
AI-Prompts.md
|
||||
{{#endref}}
|
||||
|
||||
{{#include ../banners/hacktricks-training.md}}
|
||||
{{#include ../banners/hacktricks-training.md}}
|
||||
|
@ -240,3 +240,4 @@ The confusion matrix can be used to calculate various evaluation metrics, such a
|
||||
|
||||
|
||||
{{#include ../banners/hacktricks-training.md}}
|
||||
|
||||
|
@ -27,4 +27,4 @@ At the time of the writting these are some examples of this type of vulneravilit
|
||||
Moreover, there some python pickle based models like the ones used by [PyTorch](https://github.com/pytorch/pytorch/security) that can be used to execute arbitrary code on the system if they are not loaded with `weights_only=True`. So, any pickle based model might be specially susceptible to this type of attacks, even if they are not listed in the table above.
|
||||
|
||||
|
||||
{{#include ../banners/hacktricks-training.md}}
|
||||
{{#include ../banners/hacktricks-training.md}}
|
||||
|
@ -419,4 +419,4 @@ The WAF won't see these tokens as malicious, but the back LLM will actually unde
|
||||
Note that this also shows how previuosly mentioned techniques where the message is sent encoded or obfuscated can be used to bypass the WAFs, as the WAFs will not understand the message, but the LLM will.
|
||||
|
||||
|
||||
{{#include ../banners/hacktricks-training.md}}
|
||||
{{#include ../banners/hacktricks-training.md}}
|
||||
|
@ -77,3 +77,4 @@ SARSA is an **on-policy** learning algorithm, meaning it updates the Q-values ba
|
||||
On-policy methods like SARSA can be more stable in certain environments, as they learn from the actions actually taken. However, they may converge more slowly compared to off-policy methods like Q-Learning, which can learn from a wider range of experiences.
|
||||
|
||||
{{#include ../banners/hacktricks-training.md}}
|
||||
|
||||
|
@ -78,4 +78,4 @@ Google's [SAIF (Security AI Framework)](https://saif.google/secure-ai-framework/
|
||||
The [MITRE AI ATLAS Matrix](https://atlas.mitre.org/matrices/ATLAS) provides a comprehensive framework for understanding and mitigating risks associated with AI systems. It categorizes various attack techniques and tactics that adversaries may use against AI models and also how to use AI systems to perform different attacks.
|
||||
|
||||
|
||||
{{#include ../banners/hacktricks-training.md}}
|
||||
{{#include ../banners/hacktricks-training.md}}
|
||||
|
@ -1027,4 +1027,4 @@ Ensemble methods like this demonstrate the principle that *"combining multiple m
|
||||
- [https://medium.com/@sarahzouinina/ensemble-learning-boosting-model-performance-by-combining-strengths-02e56165b901](https://medium.com/@sarahzouinina/ensemble-learning-boosting-model-performance-by-combining-strengths-02e56165b901)
|
||||
- [https://medium.com/@sarahzouinina/ensemble-learning-boosting-model-performance-by-combining-strengths-02e56165b901](https://medium.com/@sarahzouinina/ensemble-learning-boosting-model-performance-by-combining-strengths-02e56165b901)
|
||||
|
||||
{{#include ../banners/hacktricks-training.md}}
|
||||
{{#include ../banners/hacktricks-training.md}}
|
||||
|
@ -457,4 +457,4 @@ Here we combined our previous 4D normal dataset with a handful of extreme outlie
|
||||
</details>
|
||||
|
||||
|
||||
{{#include ../banners/hacktricks-training.md}}
|
||||
{{#include ../banners/hacktricks-training.md}}
|
@ -97,4 +97,3 @@ print(token_ids[:50])
|
||||
- [https://www.manning.com/books/build-a-large-language-model-from-scratch](https://www.manning.com/books/build-a-large-language-model-from-scratch)
|
||||
|
||||
|
||||
|
||||
|
@ -239,4 +239,3 @@ tensor([[ 367, 2885, 1464, 1807],
|
||||
- [https://www.manning.com/books/build-a-large-language-model-from-scratch](https://www.manning.com/books/build-a-large-language-model-from-scratch)
|
||||
|
||||
|
||||
|
||||
|
@ -217,4 +217,3 @@ print(input_embeddings.shape) # torch.Size([8, 4, 256])
|
||||
- [https://www.manning.com/books/build-a-large-language-model-from-scratch](https://www.manning.com/books/build-a-large-language-model-from-scratch)
|
||||
|
||||
|
||||
|
||||
|
@ -429,3 +429,4 @@ For another compact and efficient implementation you could use the [`torch.nn.Mu
|
||||
|
||||
|
||||
|
||||
|
||||
|
@ -700,4 +700,3 @@ print("Output length:", len(out[0]))
|
||||
- [https://www.manning.com/books/build-a-large-language-model-from-scratch](https://www.manning.com/books/build-a-large-language-model-from-scratch)
|
||||
|
||||
|
||||
|
||||
|
@ -970,3 +970,4 @@ There 2 quick scripts to load the GPT2 weights locally. For both you can clone t
|
||||
|
||||
|
||||
|
||||
|
||||
|
@ -63,4 +63,3 @@ def replace_linear_with_lora(model, rank, alpha):
|
||||
- [https://www.manning.com/books/build-a-large-language-model-from-scratch](https://www.manning.com/books/build-a-large-language-model-from-scratch)
|
||||
|
||||
|
||||
|
||||
|
@ -116,4 +116,3 @@ You can find all the code to fine-tune GPT2 to be a spam classifier in [https://
|
||||
- [https://www.manning.com/books/build-a-large-language-model-from-scratch](https://www.manning.com/books/build-a-large-language-model-from-scratch)
|
||||
|
||||
|
||||
|
||||
|
@ -106,4 +106,3 @@ You can find an example of the code to perform this fine tuning in [https://gith
|
||||
- [https://www.manning.com/books/build-a-large-language-model-from-scratch](https://www.manning.com/books/build-a-large-language-model-from-scratch)
|
||||
|
||||
|
||||
|
||||
|
@ -99,3 +99,4 @@ You should start by reading this post for some basic concepts you should know ab
|
||||
|
||||
|
||||
|
||||
|
||||
|
Loading…
x
Reference in New Issue
Block a user