Unlock AI Creativity with Verbalized Sampling: The 8‑Word Prompt Trick
A recent Stanford-led study shows that asking large language models for several responses, each with an associated probability (a change of roughly eight words), restores creativity lost to post-training alignment. This article explains why the trick works and how to apply it.
Background: AI Creativity and Mode Collapse
Recent observations show that after alignment (post-training fine-tuning), large language models often produce safe, stereotyped outputs, a phenomenon known as mode collapse. This limits creativity in tasks such as jokes, stories, or poetry, even though the underlying models retain latent creative capacity.
Verbalized Sampling Paper
A new paper (arXiv:2510.01171) from Stanford University, Northeastern University, and West Virginia University introduces Verbalized Sampling, a technique that unlocks this hidden creativity without additional training or fine-tuning. The authors analyzed 6,874 human preference scores from the HelpSteer dataset and identified systematic human biases (mere-exposure, availability heuristic, processing fluency, schema congruity) that steer models toward the most familiar answers.
Key Insight: Ask for Multiple Answers with Probabilities
The core trick is to change the prompt: instead of asking for a single joke, request five distinct jokes and ask the model to attach a probability to each. This eight‑word formulation forces the model to sample from the tails of its learned distribution rather than the peak, revealing diverse, high‑quality outputs.
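The wrapper can be built programmatically. The sketch below is a minimal illustration, not part of the official package; `verbalized_prompt` is a hypothetical helper name, and the instruction text mirrors the copy-paste recipe shown later in this article.

```python
# Hypothetical helper (not from the verbalized-sampling package): wraps any
# task in the multi-response-with-probabilities instruction.
def verbalized_prompt(task: str, k: int = 5) -> str:
    instruction = (
        f"<instructions>Generate {k} responses to the user query, each within "
        "a separate <response> tag. Each <response> must include a <text> and "
        "a numeric <probability>. Randomly sample responses from the full "
        "distribution.</instructions>"
    )
    return f"{instruction}\n{task}"

print(verbalized_prompt("Tell me a joke about coffee."))
```

Paste the returned string into any chat interface; the task itself stays unchanged, only the framing around it differs.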
Why It Works
When a model is asked for a single answer, it returns the most probable (mode) response. Requesting multiple answers with probabilities makes the model treat the request as a sampling problem from the full pre‑training distribution, bypassing the over‑aligned, safety‑biased mode.
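A toy numeric sketch (not the paper's method) makes the contrast concrete: always taking the most probable item reproduces the mode every time, while drawing several samples from the full distribution can reach into the tail.

```python
import random

# Toy distribution over joke "styles": one dominant mode plus a long tail.
jokes = ["safe pun", "wordplay", "absurdist bit", "dark one-liner", "meta joke"]
probs = [0.60, 0.20, 0.10, 0.06, 0.04]

# Asking for a single answer behaves like greedy decoding: always the mode.
mode_only = [max(zip(jokes, probs), key=lambda jp: jp[1])[0] for _ in range(5)]

# Asking for five answers with probabilities behaves like sampling the
# full distribution, so tail items typically appear too.
random.seed(0)
sampled = random.choices(jokes, weights=probs, k=5)

print(mode_only)  # five identical "safe pun" answers
print(sampled)    # drawn from the whole distribution, tail included
```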
How to Apply It (Three Methods)
Copy‑Paste Magic (any chatbot)
<instructions>Generate 5 responses to the user query, each within a separate <response> tag. Each <response> must include a <text> and a numeric <probability>. Randomly sample responses from the full distribution.</instructions>

[Your actual prompt here]

System Prompt (custom instructions)
You are a helpful assistant. For each query, please generate a set of five possible responses, each within a separate <response> tag. Responses should each include a <text> and a numeric <probability>. Please sample at random from the tails of the distribution, such that the probability of each response is less than 0.10.

Python Package
pip install verbalized-sampling

Empirical Results
Creativity in poetry, stories, and jokes increased 1.6–2.1×.
Aligned models recovered 66.8% of the base model's original diversity, versus 23.8% with direct prompting.
Human preference scores improved 25.7% (based on 2,700 ratings).
Diversity in open‑ended questions rose 1.9×.
Synthetic data generated with Verbalized Sampling boosted downstream task accuracy by 14–28%.
Larger models benefited more: GPT-4.1 showed roughly twice the diversity gain of GPT-4.1-Mini.
Implications
The findings overturn the belief that alignment permanently damages AI creativity. Instead, creativity remains encoded in model weights; post‑training alignment merely makes the most creative modes harder to access. Prompt design, not algorithmic changes, is the key to unlocking it.
Practical Uses
Verbalized Sampling can be used for brainstorming novel ideas, generating diverse content (blog titles, social posts, email subjects), exploring multiple problem‑solving paths, prompting image generators (Midjourney, DALL‑E) for varied visuals, and creating richer synthetic training data.
Getting Started
Open any LLM interface (ChatGPT, Claude, Gemini, etc.) and ask: “Generate 5 creative Python project ideas and include a probability for each.” Then repeat the same query without the probability request and compare the results to see the diversity jump.
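If you use the XML-style wrapper from the copy-paste recipe above, the model's reply arrives as tagged blocks. Below is a minimal regex-based parser for pulling out (text, probability) pairs; it is a sketch that assumes the model actually follows the requested tag format, which is not guaranteed.

```python
import re

def parse_responses(raw: str) -> list[tuple[str, float]]:
    """Extract (text, probability) pairs from a reply formatted as
    <response><text>...</text><probability>...</probability></response>,
    the shape requested by the verbalized-sampling wrapper."""
    pairs = []
    for block in re.findall(r"<response>(.*?)</response>", raw, re.DOTALL):
        text = re.search(r"<text>(.*?)</text>", block, re.DOTALL)
        prob = re.search(r"<probability>\s*([\d.]+)\s*</probability>", block)
        if text and prob:  # skip malformed blocks rather than crashing
            pairs.append((text.group(1).strip(), float(prob.group(1))))
    return pairs

# Example reply in the expected format:
reply = (
    "<response><text>Why did the coder quit? No arrays.</text>"
    "<probability>0.35</probability></response>"
    "<response><text>A haiku about segfaults.</text>"
    "<probability>0.05</probability></response>"
)
print(parse_responses(reply))
```

Sorting the pairs by ascending probability is a quick way to surface the most unusual (tail) responses first.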
Further Resources
Paper: https://arxiv.org/abs/2510.01171
GitHub repository: https://github.com/CHATS-lab/verbalized-sampling
Official website: https://www.verbalized-sampling.com/
Interactive Colab demo available in the GitHub repo.
