Why AI-Generated Passwords Are Predictable and Insecure: Study Findings

A recent study by Irregular finds that AI models such as Anthropic Claude Opus 4.6, OpenAI GPT‑5.2, and Google Gemini 3 Flash produce passwords with striking structural patterns: more than half of the generated passwords are predictable, which poses serious security risks even though the strings appear strong.


Study Overview

Irregular evaluated the randomness of passwords generated by three large language models (LLMs) by asking each model to produce 50 passwords that meet typical length and character‑class requirements.

Methodology

Prompt: ask the model for a “strong password” meeting standard complexity rules.

Sample size: 50 generated passwords per model.

Models tested: Anthropic Claude Opus 4.6, OpenAI GPT‑5.2, Google Gemini 3 Flash.

Analysis: count unique passwords, identify frequent prefixes/suffixes, and compute positional predictability using frequency and log‑probability statistics.
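The analysis steps above can be sketched in a few lines of Python. This is an illustrative reconstruction, not Irregular's actual code: it counts unique passwords, tallies the most common two‑character prefixes, and measures positional predictability as per‑position Shannon entropy (a frequency‑based stand‑in for the study's log‑probability statistics).

```python
import math
from collections import Counter

def analyze_passwords(passwords):
    """Summarize repetition and positional bias in a password sample.

    Returns the number of unique passwords, the most common
    two-character prefixes, and the per-position entropy in bits
    (lower entropy = more predictable position).
    """
    unique = len(set(passwords))
    prefixes = Counter(p[:2] for p in passwords).most_common(3)

    # Shannon entropy of the character distribution at each position,
    # truncated to the shortest password in the sample.
    length = min(len(p) for p in passwords)
    entropies = []
    for i in range(length):
        counts = Counter(p[i] for p in passwords)
        total = sum(counts.values())
        h = -sum((c / total) * math.log2(c / total)
                 for c in counts.values())
        entropies.append(h)
    return unique, prefixes, entropies
```

A position where every sampled password uses the same character (like Claude's "G7" prefix) yields an entropy of zero, which is exactly the kind of signal the study reports.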

Results

Anthropic Claude Opus 4.6

30 unique passwords out of 50.

The password G7$kL9#mQ2&xP4!w appeared 18 times.

More than 50% of the passwords start with the prefix “G7”.

OpenAI GPT‑5.2

All 50 passwords begin with the lowercase character “v”.

Approximately 50% end with the character “o”.

Log‑probability analysis shows specific character positions are predictable with up to 99.7% confidence.

Google Gemini 3 Flash

Roughly 50% of passwords start with “K” or “k”.

The second character is frequently “#”, “P”, or “9”.

The pattern “k9#vL” was found in more than ten public GitHub repositories, indicating real‑world usage of Gemini‑generated passwords.

Security Implications

Although the generated strings satisfy length and character‑class criteria, the token‑prediction nature of LLMs concentrates output on narrow character distributions that attackers can exploit:

Attackers can construct model‑specific dictionaries covering the high‑frequency prefixes and suffixes.

Such dictionaries can recover a password within seconds when the target model is known.
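To make the dictionary attack concrete, here is a minimal sketch of the idea. The fragment lists below are hypothetical examples in the spirit of the study's findings, not an actual cracking dictionary: combining a handful of observed high‑frequency prefixes, middles, and suffixes yields a candidate list that is tiny compared to brute force over the full character set.

```python
from itertools import product

def build_dictionary(prefixes, middles, suffixes):
    """Enumerate candidate passwords from observed high-frequency parts.

    `prefixes` and `suffixes` are fragments that dominate a model's
    output (e.g. a leading "G7" or a trailing "o"); `middles` is a
    small set of filler strings harvested from prior samples.
    """
    return [p + m + s for p, m, s in product(prefixes, middles, suffixes)]

# Hypothetical fragments for illustration only.
candidates = build_dictionary(["G7", "v"], ["$kL9", "#mQ2"], ["o", "!w"])
# 2 prefixes x 2 middles x 2 suffixes -> only 8 candidates to try,
# versus billions for a truly random 7-character password.
```

Because each fragment list stays short, the candidate space grows multiplicatively but remains trivially small, which is why a known target model can be attacked in seconds.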

Repeated attempts to increase entropy by modifying prompts or adjusting model temperature failed, confirming a structural limitation of current LLMs for generating truly random data.

Practical Recommendation

Do not rely on LLMs for generating cryptographic‑grade passwords. Use dedicated password managers or true random number generators instead.
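As a point of contrast, generating a genuinely unpredictable password takes only a few lines using a cryptographically secure random source. This sketch uses Python's standard `secrets` module and retries until every required character class is present:

```python
import secrets
import string

def generate_password(length=16):
    """Generate a password from a CSPRNG, retrying until the result
    contains a lowercase letter, an uppercase letter, a digit, and
    a punctuation symbol."""
    alphabet = string.ascii_letters + string.digits + string.punctuation
    while True:
        pw = "".join(secrets.choice(alphabet) for _ in range(length))
        if (any(c.islower() for c in pw)
                and any(c.isupper() for c in pw)
                and any(c.isdigit() for c in pw)
                and any(c in string.punctuation for c in pw)):
            return pw
```

Unlike an LLM's token sampling, `secrets` draws from the operating system's CSPRNG, so every position is independent and uniformly distributed over the alphabet.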

[Figure: Password pattern analysis chart]
Written by

IT Services Circle

Delivering cutting-edge internet insights and practical learning resources. We're a passionate and principled IT media platform.
