# 2. Data Sampling

{{#include ../../banners/hacktricks-training.md}}

## **Data Sampling**

**Data Sampling** is an important process for preparing data for training large language models (LLMs) like GPT. It involves organizing text data into input and target sequences that the model uses to learn how to predict the next word (or token) based on the preceding words. Proper data sampling ensures that the model effectively captures language patterns and dependencies.

> [!TIP]
> The goal of this second phase is very simple: **Sample the input data and prepare it for the training phase, usually by separating the dataset into sentences of a specific length and also generating the expected response.**

### **Why Data Sampling Matters**

LLMs such as GPT are trained to generate or predict text by understanding the context provided by previous words. To achieve this, the training data must be structured in a way that allows the model to learn the relationship between sequences of words and the words that follow them. This structured approach enables the model to generalize and generate coherent and contextually relevant text.

### **Key Concepts in Data Sampling**

1. **Tokenization:** Breaking text down into smaller units called tokens (e.g., words, subwords, or characters).
2. **Sequence Length (max_length):** The number of tokens in each input sequence.
3. **Sliding Window:** A method of creating overlapping input sequences by moving a window over the tokenized text.
4. **Stride:** The number of tokens the sliding window moves forward to create the next sequence.

### **Step-by-Step Example**

Let's walk through an example to illustrate data sampling.

**Example Text**
```arduino
"Lorem ipsum dolor sit amet, consectetur adipiscing elit."
```
**Tokenization**

Assume we use a **basic tokenizer** that splits the text into words and punctuation marks:
```vbnet
Tokens: ["Lorem", "ipsum", "dolor", "sit", "amet,", "consectetur", "adipiscing", "elit."]
```
**Parameters**

- **Max Sequence Length (max_length):** 4 tokens
- **Sliding Window Stride:** 1 token

**Creating Input and Target Sequences**

1. **Sliding Window Approach:**
   - **Input Sequences:** Each input sequence consists of `max_length` tokens.
   - **Target Sequences:** Each target sequence consists of the tokens that immediately follow the corresponding input sequence.
2. **Generating Sequences:**

<table><thead><tr><th width="177">Window Position</th><th>Input Sequence</th><th>Target Sequence</th></tr></thead><tbody><tr><td>1</td><td>["Lorem", "ipsum", "dolor", "sit"]</td><td>["ipsum", "dolor", "sit", "amet,"]</td></tr><tr><td>2</td><td>["ipsum", "dolor", "sit", "amet,"]</td><td>["dolor", "sit", "amet,", "consectetur"]</td></tr><tr><td>3</td><td>["dolor", "sit", "amet,", "consectetur"]</td><td>["sit", "amet,", "consectetur", "adipiscing"]</td></tr><tr><td>4</td><td>["sit", "amet,", "consectetur", "adipiscing"]</td><td>["amet,", "consectetur", "adipiscing", "elit."]</td></tr></tbody></table>

3. **Resulting Input and Target Arrays:**

- **Input:**

```python
[
    ["Lorem", "ipsum", "dolor", "sit"],
    ["ipsum", "dolor", "sit", "amet,"],
    ["dolor", "sit", "amet,", "consectetur"],
    ["sit", "amet,", "consectetur", "adipiscing"],
]
```

- **Target:**

```python
[
    ["ipsum", "dolor", "sit", "amet,"],
    ["dolor", "sit", "amet,", "consectetur"],
    ["sit", "amet,", "consectetur", "adipiscing"],
    ["amet,", "consectetur", "adipiscing", "elit."],
]
```

**Visual Representation**

<table><thead><tr><th width="222">Token Position</th><th>Token</th></tr></thead><tbody><tr><td>1</td><td>Lorem</td></tr><tr><td>2</td><td>ipsum</td></tr><tr><td>3</td><td>dolor</td></tr><tr><td>4</td><td>sit</td></tr><tr><td>5</td><td>amet,</td></tr><tr><td>6</td><td>consectetur</td></tr><tr><td>7</td><td>adipiscing</td></tr><tr><td>8</td><td>elit.</td></tr></tbody></table>

**Sliding Window with Stride 1:**

- **First Window (Positions 1-4):** \["Lorem", "ipsum", "dolor", "sit"] → **Target:** \["ipsum", "dolor", "sit", "amet,"]
- **Second Window (Positions 2-5):** \["ipsum", "dolor", "sit", "amet,"] → **Target:** \["dolor", "sit", "amet,", "consectetur"]
- **Third Window (Positions 3-6):** \["dolor", "sit", "amet,", "consectetur"] → **Target:** \["sit", "amet,", "consectetur", "adipiscing"]
- **Fourth Window (Positions 4-7):** \["sit", "amet,", "consectetur", "adipiscing"] → **Target:** \["amet,", "consectetur", "adipiscing", "elit."]

**Understanding Stride**

- **Stride of 1:** The window moves forward by one token each time, resulting in highly overlapping sequences. This can lead to better learning of contextual relationships but may increase the risk of overfitting since similar data points are repeated.
- **Stride of 2:** The window moves forward by two tokens each time, reducing overlap. This decreases redundancy and computational load but may miss some contextual nuances.
- **Stride Equal to max_length:** The window moves forward by the entire window size, resulting in non-overlapping sequences. This minimizes data redundancy but may limit the model's ability to learn dependencies across sequences.

**Example with Stride of 2:**

Using the same tokenized text and a `max_length` of 4:

- **First Window (Positions 1-4):** \["Lorem", "ipsum", "dolor", "sit"] → **Target:** \["ipsum", "dolor", "sit", "amet,"]
- **Second Window (Positions 3-6):** \["dolor", "sit", "amet,", "consectetur"] → **Target:** \["sit", "amet,", "consectetur", "adipiscing"]
- **Third Window (Positions 5-8):** \["amet,", "consectetur", "adipiscing", "elit."] → **Target:** \["consectetur", "adipiscing", "elit.", "sed"] _(Assuming continuation)_

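
A minimal pure-Python sketch that reproduces the stride-1 windows above (the token list and parameters are the ones from the example); the full, tokenizer-backed PyTorch version follows in the next section:

```python
tokens = ["Lorem", "ipsum", "dolor", "sit", "amet,", "consectetur", "adipiscing", "elit."]
max_length = 4

def sliding_windows(tokens, max_length, stride):
    pairs = []
    # Stop early enough so that the target (shifted by one token) still fits
    for i in range(0, len(tokens) - max_length, stride):
        input_seq = tokens[i:i + max_length]
        target_seq = tokens[i + 1:i + max_length + 1]
        pairs.append((input_seq, target_seq))
    return pairs

for inp, tgt in sliding_windows(tokens, max_length, stride=1):
    print(inp, "->", tgt)
```
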
## Code Example

Let's understand this better from a code example from [https://github.com/rasbt/LLMs-from-scratch/blob/main/ch02/01_main-chapter-code/ch02.ipynb](https://github.com/rasbt/LLMs-from-scratch/blob/main/ch02/01_main-chapter-code/ch02.ipynb):
```python
# Download the text to pre-train the LLM
import urllib.request
url = ("https://raw.githubusercontent.com/rasbt/LLMs-from-scratch/main/ch02/01_main-chapter-code/the-verdict.txt")
file_path = "the-verdict.txt"
urllib.request.urlretrieve(url, file_path)

with open("the-verdict.txt", "r", encoding="utf-8") as f:
    raw_text = f.read()

"""
Create a class that will receive some params like tokenizer and text
and will prepare the input chunks and the target chunks to prepare
the LLM to learn which next token to generate
"""
import torch
from torch.utils.data import Dataset, DataLoader

class GPTDatasetV1(Dataset):
    def __init__(self, txt, tokenizer, max_length, stride):
        self.input_ids = []
        self.target_ids = []

        # Tokenize the entire text
        token_ids = tokenizer.encode(txt, allowed_special={"<|endoftext|>"})

        # Use a sliding window to chunk the book into overlapping sequences of max_length
        for i in range(0, len(token_ids) - max_length, stride):
            input_chunk = token_ids[i:i + max_length]
            target_chunk = token_ids[i + 1: i + max_length + 1]
            self.input_ids.append(torch.tensor(input_chunk))
            self.target_ids.append(torch.tensor(target_chunk))

    def __len__(self):
        return len(self.input_ids)

    def __getitem__(self, idx):
        return self.input_ids[idx], self.target_ids[idx]


"""
Create a data loader which given the text and some params will
prepare the inputs and targets with the previous class and
then create a torch DataLoader with the info
"""

import tiktoken

def create_dataloader_v1(txt, batch_size=4, max_length=256,
                         stride=128, shuffle=True, drop_last=True,
                         num_workers=0):

    # Initialize the tokenizer
    tokenizer = tiktoken.get_encoding("gpt2")

    # Create dataset
    dataset = GPTDatasetV1(txt, tokenizer, max_length, stride)

    # Create dataloader
    dataloader = DataLoader(
        dataset,
        batch_size=batch_size,
        shuffle=shuffle,
        drop_last=drop_last,
        num_workers=num_workers
    )

    return dataloader


"""
Finally, create the data loader with the params we want:
- The text used for training
- batch_size: The size of each batch
- max_length: The size of each entry in each batch
- stride: The sliding window (how many tokens the next entry advances compared to the previous one). The smaller it is, the more overlap (and potential overfitting); usually it's equal to max_length so the same tokens aren't repeated.
- shuffle: Re-order randomly
"""
dataloader = create_dataloader_v1(
    raw_text, batch_size=8, max_length=4, stride=1, shuffle=False
)

data_iter = iter(dataloader)
first_batch = next(data_iter)
print(first_batch)

# Note the batch_size of 8, the max_length of 4 and the stride of 1
[
# Input
tensor([[   40,   367,  2885,  1464],
        [  367,  2885,  1464,  1807],
        [ 2885,  1464,  1807,  3619],
        [ 1464,  1807,  3619,   402],
        [ 1807,  3619,   402,   271],
        [ 3619,   402,   271, 10899],
        [  402,   271, 10899,  2138],
        [  271, 10899,  2138,   257]]),
# Target
tensor([[  367,  2885,  1464,  1807],
        [ 2885,  1464,  1807,  3619],
        [ 1464,  1807,  3619,   402],
        [ 1807,  3619,   402,   271],
        [ 3619,   402,   271, 10899],
        [  402,   271, 10899,  2138],
        [  271, 10899,  2138,   257],
        [10899,  2138,   257,  7026]])
]

# With stride=4 this will be the result:
[
# Input
tensor([[   40,   367,  2885,  1464],
        [ 1807,  3619,   402,   271],
        [10899,  2138,   257,  7026],
        [15632,   438,  2016,   257],
        [  922,  5891,  1576,   438],
        [  568,   340,   373,   645],
        [ 1049,  5975,   284,   502],
        [  284,  3285,   326,    11]]),
# Target
tensor([[  367,  2885,  1464,  1807],
        [ 3619,   402,   271, 10899],
        [ 2138,   257,  7026, 15632],
        [  438,  2016,   257,   922],
        [ 5891,  1576,   438,   568],
        [  340,   373,   645,  1049],
        [ 5975,   284,   502,   284],
        [ 3285,   326,    11,   287]])
]
```

## Advanced Sampling Strategies (2023-2025)

### 1. Temperature-Based Mixture Weighting
State-of-the-art LLMs are rarely trained on a single corpus. Instead, they sample from several heterogeneous data sources (code, web, academic papers, forums…). The relative proportion of each source can strongly influence downstream performance. Recently, open-source models such as Llama 2 introduced a **temperature-based sampling scheme** where the probability of drawing a document from corpus *i* is
```
p(i) = w_i^α / Σ_j w_j^α
```
• *w<sub>i</sub>* – raw token percentage of corpus *i*
• *α* ("temperature") – a value in (0,1]. α < 1 flattens the distribution, giving more weight to smaller high-quality corpora.

Llama 2 used α = 0.7 and showed that lowering α increased evaluation scores on knowledge-intensive tasks while keeping the training mixture stable. The same trick is adopted by Mistral (2023) and Claude 3.

```python
import random
from collections import Counter

def temperature_sample(corpus_ids, alpha=0.7):
    counts = Counter(corpus_ids)                    # number of tokens seen per corpus
    weights = {c: n ** alpha for c, n in counts.items()}
    Z = sum(weights.values())
    return {c: w / Z for c, w in weights.items()}   # temperature-scaled sampling probabilities

# Now draw according to probs to fill every batch (corpus names below are illustrative)
probs = temperature_sample(["web"] * 70 + ["code"] * 20 + ["papers"] * 10)
batch_sources = random.choices(list(probs), weights=list(probs.values()), k=8)
```

### 2. Sequence Packing / Dynamic Batching
GPU memory is wasted when every sequence in a batch is padded to the longest example. "Packing" concatenates multiple shorter sequences until the **exact** `max_length` is reached and builds a parallel `attention_mask` so that tokens do not attend across segment boundaries. Packing can improve throughput by 20–40 % with no gradient change and is supported out-of-the-box in

* PyTorch `torchtext.experimental.agents.PackedBatch`
* HuggingFace `DataCollatorForLanguageModeling(pad_to_multiple_of=…)`

Dynamic batching frameworks (e.g. FlashAttention 2, vLLM 2024) combine sequence packing with just-in-time kernel selection, enabling thousand-token context training at 400+ K tokens/s on A100-80G.

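
A minimal, framework-agnostic sketch of greedy sequence packing (the segment-id layout is an assumption about how one might later build the block-diagonal attention mask, not a specific library's API):

```python
def pack_sequences(sequences, max_length, pad_id=0):
    """Greedily pack already-tokenized sequences (each assumed <= max_length)
    into fixed-length blocks. segment_ids mark which original sequence each
    token came from (0 = padding), so an attention mask can be built that
    prevents tokens from attending across segment boundaries."""
    blocks, segment_ids = [], []
    cur_tokens, cur_segments, seg = [], [], 1

    for seq in sequences:
        if len(cur_tokens) + len(seq) > max_length:      # block is full, flush it
            pad = max_length - len(cur_tokens)
            blocks.append(cur_tokens + [pad_id] * pad)
            segment_ids.append(cur_segments + [0] * pad)
            cur_tokens, cur_segments, seg = [], [], 1
        cur_tokens += list(seq)
        cur_segments += [seg] * len(seq)
        seg += 1

    if cur_tokens:                                       # flush the last partial block
        pad = max_length - len(cur_tokens)
        blocks.append(cur_tokens + [pad_id] * pad)
        segment_ids.append(cur_segments + [0] * pad)
    return blocks, segment_ids

# Example: pack three short sequences into blocks of 8 tokens
blocks, segs = pack_sequences([[1, 2, 3], [4, 5], [6, 7, 8, 9]], max_length=8)
```
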
### 3. Deduplication & Quality Filtering
Repeated passages cause memorization and provide an easy channel for data-poisoning. Modern pipelines therefore:

1. Run MinHash/FAISS near-duplicate detection at **document** and **128-gram** level (a simplified sketch follows below).
2. Filter out documents whose perplexity under a small reference model is > µ + 3σ (noisy OCR, garbled HTML).
3. Block-list documents that contain PII or CWE keywords using regex & spaCy NER.

The Llama 2 team deduplicated with 8-gram MinHash and removed ~15 % of CommonCrawl before sampling. OpenAI’s 2024 "Deduplicate Everything" paper reports that a duplicate ratio of ≤0.04 reduces over-fitting and speeds up convergence.

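
A minimal, dependency-free sketch of near-duplicate detection via n-gram shingling and Jaccard similarity (real pipelines use MinHash/LSH to avoid the pairwise comparison, and far larger shingles than the small `n` used here for illustration):

```python
def shingles(text, n=8):
    """Set of word n-grams ("shingles") for a document."""
    words = text.split()
    return {" ".join(words[i:i + n]) for i in range(max(len(words) - n + 1, 1))}

def jaccard(a, b):
    return len(a & b) / len(a | b) if a | b else 0.0

def near_duplicates(docs, n=8, threshold=0.8):
    """Return pairs of document indices whose n-gram overlap exceeds threshold."""
    sigs = [shingles(d, n) for d in docs]
    pairs = []
    for i in range(len(docs)):
        for j in range(i + 1, len(docs)):
            if jaccard(sigs[i], sigs[j]) >= threshold:
                pairs.append((i, j))
    return pairs

# Documents flagged here would be dropped (or collapsed to one copy) before sampling
dupes = near_duplicates(["the quick brown fox jumps over the lazy dog today",
                         "the quick brown fox jumps over the lazy dog yesterday"],
                        n=4, threshold=0.5)
```
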
## Security & Privacy Considerations During Sampling

### Data-Poisoning / Backdoor Attacks
Researchers have shown that inserting <1 % backdoored sentences can make a model obey a hidden trigger ("PoisonGPT", 2023). Recommended mitigations:

* **Shuffled mixing** – make sure adjacent training examples originate from different sources; this dilutes gradient alignment of malicious spans.
* **Gradient similarity scoring** – compute the cosine similarity of each example's gradient to the batch average; outliers are candidates for removal (see the sketch after this list).
* **Dataset versioning & hashes** – freeze immutable tarballs and verify SHA-256 before each training run.

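
A minimal sketch of gradient similarity scoring on a toy model (the model, random data, and 3σ cut-off are illustrative assumptions; in practice per-example gradients are taken on the real model or a cheap proxy of it):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

model = nn.Linear(8, 2)                      # toy stand-in for the real model
loss_fn = nn.CrossEntropyLoss()

def example_gradient(x, y):
    """Flattened gradient of the loss for a single example."""
    model.zero_grad()
    loss_fn(model(x.unsqueeze(0)), y.unsqueeze(0)).backward()
    return torch.cat([p.grad.flatten() for p in model.parameters()])

xs, ys = torch.randn(16, 8), torch.randint(0, 2, (16,))
grads = torch.stack([example_gradient(x, y) for x, y in zip(xs, ys)])

# Cosine similarity of each example's gradient to the batch-average gradient
sims = F.cosine_similarity(grads, grads.mean(dim=0, keepdim=True), dim=1)

# Examples far below the mean similarity are flagged for manual review / removal
suspects = (sims < sims.mean() - 3 * sims.std()).nonzero(as_tuple=True)[0]
```
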
### Membership-Inference & Memorization
Long overlap between sliding-window samples increases the chance that rare strings (telephone numbers, secret keys) are memorized. OpenAI’s 2024 study on ChatGPT memorization reports that raising the stride from 1 × `max_length` to 4 × reduces verbatim leakage by ≈50 % with negligible loss in perplexity.

Practical recommendations:

* Use **stride ≥ max_length** except for <1B parameter models where data volume is scarce.
* Add random masking of 1-3 tokens per window during training; this lowers memorization while preserving utility (see the sketch after this list).

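
A minimal sketch of per-window random masking (the mask token id and the 1-3 range are assumptions; GPT-2-style BPE vocabularies have no dedicated mask token, so a reserved or rarely-used id would have to be chosen):

```python
import random
import torch

def mask_random_tokens(input_ids: torch.Tensor, mask_token_id: int, n_min: int = 1, n_max: int = 3):
    """Replace a few random positions in one training window with mask_token_id."""
    ids = input_ids.clone()
    n = random.randint(n_min, min(n_max, len(ids)))
    for pos in random.sample(range(len(ids)), k=n):
        ids[pos] = mask_token_id
    return ids

# Applied on the fly to each window produced by GPTDatasetV1 before batching.
# 50256 is GPT-2's <|endoftext|> id, used here only as a placeholder mask id.
window = torch.tensor([40, 367, 2885, 1464])
masked = mask_random_tokens(window, mask_token_id=50256)
```
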
---

## References

- [Build a Large Language Model from Scratch (Manning, 2024)](https://www.manning.com/books/build-a-large-language-model-from-scratch)
- [Llama 2: Open Foundation and Fine-Tuned Chat Models (2023)](https://arxiv.org/abs/2307.09288)
- [PoisonGPT: Assessing Backdoor Vulnerabilities in Large Language Models (BlackHat EU 2023)](https://arxiv.org/abs/2308.12364)

{{#include ../../banners/hacktricks-training.md}}