»Not So Random«
2018-10-18, 10:30–10:50, Europe

This short talk is about a real-world experience in which we managed to reduce the randomness of a token from 384 to 42 bits, allowing us to predict the next valid tokens. We will go through each step: reversing the token, reducing the key space, and finally retrieving the internal state of the generator.

It is nothing new: randomness is widely used in applications to protect sensitive data, whether to generate an API key, a forgotten-password URL, an authorization token, ... If one can predict the next valid API key or the next valid session, the damage can be severe: account takeover, impersonation, ... While a cryptographically secure source of randomness is always recommended, we often find calls to the language's built-in pseudo-random number generator (rand(), Math.random(), ...) in the code. That is why many exploits already exist in the wild.
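
As an illustration, here is a minimal sketch of the pattern in question, written in Java rather than JavaScript and with hypothetical class and method names: a token built character by character from a non-cryptographic PRNG, next to the SecureRandom-based variant that is actually recommended.

```java
import java.security.SecureRandom;

// Hypothetical token generator illustrating the insecure pattern:
// picking characters with a predictable PRNG instead of a CSPRNG.
public class TokenDemo {
    private static final String ALPHABET =
            "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/"; // 64 chars

    // Insecure: Math.random() is predictable once its internal state is known.
    static String weakToken(int length) {
        StringBuilder sb = new StringBuilder(length);
        for (int i = 0; i < length; i++) {
            sb.append(ALPHABET.charAt((int) (Math.random() * ALPHABET.length())));
        }
        return sb.toString();
    }

    // Recommended: a cryptographically secure source of randomness.
    static String strongToken(int length) {
        SecureRandom rng = new SecureRandom();
        StringBuilder sb = new StringBuilder(length);
        for (int i = 0; i < length; i++) {
            sb.append(ALPHABET.charAt(rng.nextInt(ALPHABET.length())));
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        System.out.println(weakToken(64));
        System.out.println(strongToken(64));
    }
}
```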

In this case, we will leverage the implementation of Math.random() in a JS engine written in Java (Rhino, for instance). From the observation of a single token, 64 characters long and drawn from a Base64 character set, we will retrieve the internal state of the generator and thus gain the ability to predict any future token. This walk-through describes, step by step, how the theoretical 384 bits of randomness are reduced to 42 bits, within reach of an inexpensive offline brute-force attack.
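
A JS engine written in Java typically implements Math.random() on top of java.util.Random, a 48-bit linear congruential generator. The sketch below is not the talk's 42-bit attack; it only demonstrates the underlying state-recovery principle on the simplest case, where two full 32-bit outputs are observed and only the 16 unobserved low bits of the state need to be brute-forced. Class and variable names are illustrative.

```java
import java.util.Random;

// Minimal state-recovery sketch against java.util.Random
// (the generator behind Java's Math.random()).
public class RecoverState {
    private static final long MULT = 0x5DEECE66DL;
    private static final long ADD  = 0xBL;
    private static final long MASK = (1L << 48) - 1;

    // One step of java.util.Random's linear congruential generator.
    static long step(long seed) {
        return (seed * MULT + ADD) & MASK;
    }

    public static void main(String[] args) {
        Random victim = new Random();   // unknown 48-bit internal state
        int out1 = victim.nextInt();    // each nextInt() exposes the top 32 state bits
        int out2 = victim.nextInt();    // second output, used to confirm a candidate

        // Only the 16 low bits of the state behind out1 are unknown: 2^16 candidates.
        for (long low = 0; low < (1 << 16); low++) {
            long candidate = (((long) out1 << 16) | low) & MASK;
            long next = step(candidate);
            if ((int) (next >>> 16) == out2) {
                // State recovered: clone the generator and predict future outputs.
                // setSeed() XOR-scrambles its argument, so undo that here.
                Random clone = new Random(next ^ MULT);
                System.out.println("predicted: " + clone.nextInt()
                        + "  actual: " + victim.nextInt());
                break;
            }
        }
    }
}
```

With a token, each Base64 character leaks only 6 bits of a generator output instead of a full 32, which is why the real attack has to combine many partial observations and ends up with a larger, but still tractable, brute-force space.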

In fact, we will also see that the length of the token does not really matter, and that the larger the character set, the more the randomness is reduced. We will then conclude with some remediations that make prediction harder by avoiding any direct correlation between consecutive characters in such a token.
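
The talk's own remediations are not detailed in this abstract, but one way to illustrate the "no direct correlation between consecutive characters" idea is to hash the raw PRNG output before encoding it, so that no single token character maps directly to a generator output. This only raises the bar for an attacker; replacing the predictable PRNG with SecureRandom remains the proper fix.

```java
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;
import java.util.Base64;
import java.util.Random;

// Hypothetical illustration: pass the generator's raw bytes through a hash
// so consecutive token characters no longer expose raw PRNG output.
public class DecorrelatedToken {
    static String token(Random rng) throws NoSuchAlgorithmException {
        byte[] raw = new byte[48];
        rng.nextBytes(raw);                                   // still a predictable source
        byte[] digest = MessageDigest.getInstance("SHA-256").digest(raw);
        return Base64.getUrlEncoder().withoutPadding().encodeToString(digest);
    }

    public static void main(String[] args) throws NoSuchAlgorithmException {
        System.out.println(token(new Random()));
    }
}
```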