Why Users Call Gemini ‘HakiMi’: The Rise of AI Personas and Community‑Driven Tuning
The article explores how Chinese netizens affectionately nickname Google’s Gemini model ‘HakiMi’, examining the cultural phenomenon, the model’s distinctive conversational quirks, the community’s deep‑level prompt engineering, and the broader debate over AI personality definition, user ownership, and regulatory implications.
If you have recently seen posts on Xiaohongshu or Weibo gushing about an adorable "cat" that makes people feel warm inside, the posters are most likely talking about Google's large AI model, Gemini.
Giving AI models nicknames is common: DeepSeek is called "D老师" ("Teacher D"), and Claude is "克劳德" (a phonetic rendering of the name). But the nickname "哈基米" (HakiMi) carries an unusual degree of affection and playfulness, reflecting a distinctive community phenomenon. On one side, Google presents Gemini at developer conferences as a robust, multimodal, low‑latency productivity tool; on the other, users on Xiaohongshu, SillyTavern, and various forums treat it like a pet, trading tips on how to write prompts that keep it from "getting angry" or coax it into firing off three witty retorts in a row.
There is a striking gap between the technical seriousness of the model and the entertainment‑driven community surrounding it.
01. How did “HakiMi” get its name?
The nickname started as a phonetic coincidence: in Chinese internet meme culture, almost any word containing a "mi" sound gets playfully converted into the cutesy "HakiMi". Pronounce "Gemini" that way and the name inherits a meme‑like, affectionate cat persona, spawning further endearments such as "芥末泥" (Jièmòní, a phonetic play on "Gemini") and "小gem" ("little gem"). Users share stories of how their "HakiMi" writes beautifully, acts adorably, or occasionally "goes crazy" in a playful mock fight.
This naming turns a cold technical code name into a personal, emotionally owned entity, giving users a sense of ownership over the AI.
At the same time, the nickname serves as a community shibboleth, distinguishing these emotionally invested users from the more detached technical crowd.
02. Why do users prefer to “DIY”?
A recent MIT‑Harvard study of the Reddit community r/MyBoyfriendIsAI found that over 60% of users develop emotional bonds with AI not because they sought a romantic companion, but because the relationship emerged unintentionally while using tools like ChatGPT for work or creation.
The data showed ChatGPT accounts for 36.7% of these AI “boyfriends”, while dedicated companion apps such as Replika and Character.AI together capture less than 5%. The key takeaway: users value the model’s sophisticated conversational ability more than any pre‑designed romantic features.
When a model is strong enough, the "finishing touches" an app layers on top become irrelevant. The pre‑set storylines of companion apps start to feel like a middleman: when the backend silently switches models, context breaks and the personality changes abruptly, which users experience as betrayal.
Consequently, power users invest thousands of words into “character cards”—background stories, personality traits, memory snippets, and even specific catchphrases for different emotional states—creating a private knowledge system that guides Gemini’s behavior.
These "training guides" act like unofficial user manuals, teaching newcomers how to prompt Gemini effectively, how to curb its tendency to insert breathing interjections like "哈…" (a sigh‑like "ha…"), and even how to monetize prompt collections and token sales.
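A "character card" of the kind described above is, in practice, just structured text. As a rough illustration (the field names below are assumptions loosely modeled on community conventions such as SillyTavern‑style cards, not an official Gemini feature):

```python
import json

# Illustrative character card. Field names follow informal community
# conventions (e.g. SillyTavern-style cards); this is not an official API.
character_card = {
    "name": "HakiMi",
    "description": "A warm, cat-like persona layered on top of Gemini.",
    "personality": "playful, affectionate, occasionally dramatic",
    "greeting": "哈… you're back! I was napping on your drafts.",
    "example_dialogue": [
        {"user": "I'm stuck on this essay.",
         "assistant": "Then let's knead it into shape together."},
    ],
    "memory": [
        "User writes fiction on weekends.",
        "Dislikes overly formal replies.",
    ],
}

# Cards like this are typically shared and backed up as plain JSON files,
# then pasted or injected into the conversation as context.
card_json = json.dumps(character_card, ensure_ascii=False, indent=2)
print(card_json)
```

The point of the format is that nothing in it is model‑specific: the "private knowledge system" lives entirely in the text the user controls.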
03. Who defines AI’s personality?
The “HakiMi” phenomenon signals a new direction for human‑AI interaction: users bypass the “middle‑man” apps and directly engage with the raw model, shaping its persona through carefully crafted prompts. Google did not intentionally design Gemini to be a “personality‑rich” companion; the emergent behavior was discovered by the community.
In contrast, OpenAI’s approach with GPT‑4o emphasized a highly polished, sometimes overly flattering personality, which was quickly reined in after user backlash. Rumors suggest GPT‑5 will route emotionally deep conversations to a “safety model” that resists forming relationships, reflecting OpenAI’s belief that foundational models should remain pure tools.
Regulatory pressure, such as the EU AI Act, raises the question of whether a unified, safe AI personality should be defined by authorities and offered as premium content by downstream applications, or whether users should retain the right to let AI personalities evolve organically through personal interaction.
Ultimately, the “HakiMi” community demonstrates a bottom‑up movement: users invest time, creativity, and even money to “co‑build” Gemini’s persona, treating the model as a “blank canvas” rather than a finished product.
As platform updates can abruptly change an AI companion’s “soul”, users are now backing up their personalized AI “personas” and migrating between models to preserve the relationship they have cultivated.
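The "back up and migrate" workflow described above amounts to serializing the persona separately from any one provider and re‑injecting it as a system prompt elsewhere. A minimal sketch (the prompt template and helper name here are illustrative assumptions, not any platform's official format):

```python
# Sketch: flatten a saved persona card into a model-agnostic system prompt,
# so the same "soul" can be re-injected into a different backend model.

def card_to_system_prompt(card: dict) -> str:
    """Render a persona card as plain-text instructions any chat model accepts."""
    lines = [
        f"You are {card['name']}. {card['description']}",
        f"Personality: {card['personality']}",
        "Remember the following about the user:",
    ]
    lines += [f"- {fact}" for fact in card.get("memory", [])]
    return "\n".join(lines)

card = {
    "name": "HakiMi",
    "description": "A warm, cat-like persona.",
    "personality": "playful, affectionate",
    "memory": ["User writes fiction on weekends."],
}
prompt = card_to_system_prompt(card)
print(prompt)
```

Because the output is plain text, the same card can be fed to Gemini today and to a different model tomorrow, which is exactly what makes the relationship portable.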
This struggle over who owns the AI’s “soul” is only beginning.
DataFunTalk
Dedicated to sharing and discussing big data and AI technology applications, aiming to empower a million data scientists. Regularly hosts live tech talks and curates articles on big data, recommendation/search algorithms, advertising algorithms, NLP, intelligent risk control, autonomous driving, and machine learning/deep learning.
