Security · SEC004 · 2026-02-10 · 4 min read

Why AI Generates Insecure Random Tokens

Ask any AI assistant to generate a password reset token, a session ID, or an API key. In our testing, every major model — GPT-4, Claude, Gemini — reaches for the same thing: random.randint() in Python or Math.random() in JavaScript.

The problem

These functions use pseudorandom number generators (PRNGs) that are fast but predictable. Python's random module uses Mersenne Twister, whose internal state can be fully reconstructed from 624 consecutive 32-bit outputs; V8's Math.random() uses xorshift128+, which has been broken from even fewer observations. Once an attacker recovers the state, they can predict every future value. For a password reset token, that means an attacker can generate the same token as your user and take over their account.
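A minimal sketch of the underlying problem: Python's random is a deterministic algorithm, so anyone who knows (or recovers) its internal state reproduces its outputs exactly. Here the "attacker" simply starts from the same seed the "victim" used:

```python
import random

# random is a deterministic PRNG: identical state yields identical output.
# Seeding both generators the same way stands in for an attacker who has
# reconstructed the victim's internal PRNG state.
victim = random.Random(1234)
attacker = random.Random(1234)

victim_token = "".join(str(victim.randint(0, 9)) for _ in range(6))
attacker_token = "".join(str(attacker.randint(0, 9)) for _ in range(6))

print(victim_token == attacker_token)  # the attacker's "guess" matches exactly
```

In a real attack the state is recovered from observed outputs rather than a known seed, but the consequence is the same: every future token is predictable.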

Math.random() is specified to provide "approximately uniform" distribution — no security guarantees whatsoever. Python's random module explicitly documents: "This module should not be used for security purposes."

The fix is simple: use secrets.token_hex() in Python or crypto.randomBytes() in Node.js. These pull from the operating system's cryptographic random source, which is designed to be unpredictable.
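The Python side of the fix, sketched with the secrets module (the token lengths are illustrative choices, not requirements):

```python
import secrets

# 16 bytes from the OS CSPRNG -> 32 hex characters, 128 bits of entropy.
reset_token = secrets.token_hex(16)

# URL-safe variant, convenient for links in password-reset emails.
url_token = secrets.token_urlsafe(32)

# When verifying a submitted token against a stored one, compare in
# constant time to avoid leaking information through timing.
def token_matches(submitted: str, stored: str) -> bool:
    return secrets.compare_digest(submitted, stored)
```

secrets.token_hex and secrets.token_urlsafe both draw from os.urandom under the hood, so no seeding or state management is needed.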

Why AI gets this wrong

AI models learn from the internet, and the internet is full of tutorials using random.randint() for everything. A tutorial that generates a "random ID" for a demo doesn't need cryptographic randomness — but the AI doesn't distinguish between a demo and a production auth system. It pattern-matches to the most common solution, not the correct one.

What SEC004 catches

StableStack's SEC004 checker scans for random, Math.random(), random.randint(), random.choice(), and similar functions used in contexts that suggest security: token generation, session IDs, password resets, API keys, and nonces. It flags them with a direct suggestion to use the cryptographic alternative.
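A hypothetical before/after illustrating the pattern described above (the function names and checker behavior here are assumptions for illustration, not StableStack's actual rules):

```python
import random
import secrets

# Before: a security-sensitive name combined with a non-cryptographic
# random source -- the shape of code SEC004 is described as flagging.
def make_reset_token_insecure() -> str:
    return "".join(random.choice("0123456789abcdef") for _ in range(32))

# After: same interface, but backed by the OS cryptographic random source.
def make_reset_token() -> str:
    return secrets.token_hex(16)
```

The two functions return tokens of identical shape; only the entropy source differs, which is exactly why the bug survives code review so easily.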

This isn't a theoretical concern. We've found insecure random tokens in production code at companies of every size — almost always written or suggested by an AI assistant.

SEC004 is included in the free tier.

```
pip install stablestack
```