How AI Fortune‑Telling Works—and Why It Can’t Truly Predict Love, Wealth, or Feng Shui
The article explains that predictive AI combines statistical analysis with machine learning, shows how recommendation systems and large language models generate seemingly personal fortune‑telling results, and outlines five fundamental reasons—data limits, hidden variables, randomness, cumulative small effects, and self‑fulfilling predictions—that prevent reliable forecasts of personal destiny.
Predictive AI definition and examples
Predictive AI combines statistical analysis and machine learning to discover patterns in large historical datasets and extrapolate them to infer possible future events. It works by analyzing data that has already occurred or is currently occurring to predict what might happen.
Concrete commercial scenarios cited:
Bank fraud detection: scanning 8 million transactions per second to identify fraudulent activity.
E‑commerce recommendation: predicting the next home‑goods item a user is likely to purchase.
Hospital readmission forecasting: predicting which discharged patients are likely to be readmitted.
All share three conditions: abundant reliable historical data, verifiable outcomes, and explicit feedback loops that allow incorrect predictions to be detected, corrected, and the model retrained.
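The predict–verify–retrain loop these three conditions enable can be sketched in a few lines. The toy transaction data and the perceptron-style update below are illustrative only, not a real fraud model:

```python
# Minimal sketch of the three conditions above: historical data, verifiable
# outcomes, and a feedback loop that corrects wrong predictions.

def train_step(weights, features, label, lr=0.1):
    """One round of predict -> verify outcome -> correct the model."""
    score = sum(w * x for w, x in zip(weights, features))
    prediction = 1 if score > 0 else 0       # e.g. 1 = "flag as fraud"
    error = label - prediction               # verifiable outcome supplies feedback
    return [w + lr * error * x for w, x in zip(weights, features)]

# Toy transactions: (features, observed outcome). Real systems see millions.
history = [([1.0, 0.2], 1), ([0.1, 0.9], 0), ([0.9, 0.1], 1), ([0.2, 1.0], 0)]

weights = [0.0, 0.0]
for _ in range(20):                          # the retraining loop
    for features, label in history:
        weights = train_step(weights, features, label)

predictions = [1 if sum(w * x for w, x in zip(weights, f)) > 0 else 0
               for f, _ in history]
```

Fortune-telling breaks this loop at the second step: there is no verifiable outcome to compute `error` from, so the model can never be corrected.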
Big‑data recommendation and “kill the familiar” (杀熟) price discrimination
Short‑video and e‑commerce platforms collect massive behavioral signals (view duration, repeat clicks, time of day, swipe speed, etc.) and combine them into hundreds of dimensions of a user profile. Collaborative filtering finds users with similar behavior patterns and recommends items they liked but the target user has not seen.
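Collaborative filtering as described can be sketched with a toy user–item table (names and scores are invented; real platforms combine hundreds of behavioural dimensions, not four items):

```python
# User-based collaborative filtering in miniature: find the most similar
# other user, then recommend their favourite item the target hasn't seen.
import math

ratings = {                      # user -> {item: implicit score from behavior}
    "alice": {"lamp": 5, "rug": 4, "vase": 1},
    "bob":   {"lamp": 5, "rug": 5, "clock": 4},
    "carol": {"vase": 5, "clock": 2},
}

def cosine(u, v):
    """Cosine similarity over the items two users have both interacted with."""
    common = set(u) & set(v)
    if not common:
        return 0.0
    dot = sum(u[i] * v[i] for i in common)
    return dot / (math.sqrt(sum(x * x for x in u.values())) *
                  math.sqrt(sum(x * x for x in v.values())))

def recommend(user):
    """Pick the unseen item best liked by the most similar other user."""
    others = sorted((cosine(ratings[user], ratings[o]), o)
                    for o in ratings if o != user)
    _, neighbor = others[-1]                       # nearest neighbour
    unseen = set(ratings[neighbor]) - set(ratings[user])
    return max(unseen, key=lambda item: ratings[neighbor][item], default=None)

suggestion = recommend("alice")
```

Here `recommend("alice")` returns `"clock"`: Bob is Alice's nearest neighbour by cosine similarity, and the clock is his highest-rated item she has not seen.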
The article warns of 杀熟 (“killing the familiar”, big‑data price discrimination against loyal customers): platforms exploit their deep knowledge of long‑time users to apply differential pricing or more persuasive ads (e.g., higher hotel prices shown to iPhone users, higher travel costs quoted to returning customers). AI fortune‑telling apps use the same principle: they collect gender, age, keywords, and even how long a user lingers on a paragraph, then dynamically steer the next part of the reading to maximise perceived accuracy and monetisation.
Large language model (LLM) basics
LLMs such as GPT, Claude, Qwen, and Doubao are described as “next‑token prediction machines”. Training proceeds by showing the model massive text corpora, repeatedly masking the latter part of a sentence, and asking it to guess the next word: given “今天天气真” (“The weather today is really…”), plausible continuations include 好 (nice), 热 (hot), or 糟糕 (awful). Correct guesses reinforce parameters; incorrect guesses adjust them. This loop runs billions of times over trillions of tokens. The core architecture is the Transformer, whose key innovation is the attention mechanism: when processing a given token, attention weighs the relevance of every other word in the context, enabling the model to resolve references such as “it”. The article stresses that attention does not provide logical reasoning; the model merely matches statistical language patterns.
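The “next‑token prediction” idea can be sketched with a count‑based model, a stand‑in for a real Transformer; the tiny corpus below mirrors the weather example and is invented:

```python
# Next-token prediction in miniature: count which word follows which in a
# corpus, then "generate" by picking the most frequent continuation. Real
# LLMs do this over trillions of tokens with a Transformer, not a count table.
from collections import Counter, defaultdict

corpus = "the weather is nice . the weather is hot . the weather is awful .".split()

follow = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follow[prev][nxt] += 1                    # statistics, not understanding

def predict_next(word):
    """Return the statistically most likely next token."""
    return follow[word].most_common(1)[0][0]

guess = predict_next("weather")               # every "weather" was followed by "is"
```

The model “knows” that “is” follows “weather” only because the counts say so; scaled up enormously, the same principle produces fluent paragraphs.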
Emergent abilities
When parameter counts cross certain thresholds, LLMs exhibit “emergent abilities”—logical reasoning, analogical transfer, and simple arithmetic—that were never explicitly taught. Modern top‑tier LLMs can write code, analyse papers, and simulate dialogue, but these feats remain sophisticated probability interpolation rather than true understanding.
AI fortune‑telling workflow
Collect user inputs (birth date, name, gender, desired domain such as love, wealth, career).
Construct a prompt, e.g., “You are an expert in Zi Wei Dou Shu; based on the following information, give a detailed interpretation…”.
Invoke the LLM to generate text. The model draws on statistical patterns from a large corpus of fortune‑telling literature to produce a professionally worded, structurally complete analysis.
Package the output with UI elements, background music, and payment gates to form a product.
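The four steps above can be sketched as a pipeline. `call_llm` is a hypothetical stub standing in for any chat‑completion API; all field names and strings are illustrative, not from a real app:

```python
# Sketch of the AI fortune-telling workflow: collect inputs, build a prompt,
# generate text, package it as a product.

def build_prompt(birth_date, name, gender, domain):
    """Step 2: wrap the user's inputs in a role-playing prompt."""
    return (
        "You are an expert in Zi Wei Dou Shu; based on the following "
        f"information, give a detailed interpretation of {domain}.\n"
        f"Name: {name}\nGender: {gender}\nBirth date: {birth_date}"
    )

def call_llm(prompt):
    """Step 3 (stubbed): a real app would send `prompt` to an LLM API here."""
    return "Your chart suggests a turning point is near..."  # fluent filler

def package(text):
    """Step 4: wrap the generated text with product chrome and a paywall."""
    return {"reading": text, "music": "guzheng.mp3", "paywall": True}

# Step 1: user inputs (invented example values).
prompt = build_prompt("1990-01-01", "Li Wei", "F", "love")
product = package(call_llm(prompt))
```

Note that the user's birth date only ever appears inside the prompt text; nothing in the pipeline computes anything from it.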
The generated text is not a prediction based on the user’s data; it is a generation based on language statistics. Because traditional fortune‑telling texts are intentionally vague, LLMs excel at producing statements that sound profound yet remain broad.
Why reliable prediction is hard
The article lists five fundamental obstacles:
Data limitation: Traditional inputs for fortune‑telling (birth‑date codes, feng shui layouts, facial images) are coarse and unrelated to future outcomes; the variables that truly drive life events are not captured.
Unobserved key variables: A case study of ~5 000 children showed that the best machine‑learning model performed only marginally better than a four‑variable linear regression because crucial influences (e.g., a neighbour’s tutoring or a blueberry snack) were absent from the data.
Intrinsic randomness: Accidental events, chance encounters, and random shocks cannot be encoded in any model; randomness is a property of reality.
Snowball effect of tiny advantages: Life trajectories often result from many small, cumulative benefits (the “Matthew effect”), which static, deterministic data cannot capture.
Self‑fulfilling predictions: Forecasts can alter behaviour—negative financial forecasts may cause conservatism, while optimistic love forecasts may trigger proactive courting—making the prediction appear accurate.
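The fourth obstacle, the snowball effect, can be illustrated numerically; the growth rates below are made up purely to show how compounding hides from a static snapshot:

```python
# Two people start at the same point; one has a tiny per-step advantage.
# Compounding turns it into a large gap no static starting data reveals.

def trajectory(start, growth, steps):
    """Compound a small per-step advantage over many steps."""
    value = start
    for _ in range(steps):
        value *= growth
    return value

a = trajectory(100.0, 1.02, 100)   # 2% edge per step
b = trajectory(100.0, 1.00, 100)   # no edge

gap_ratio = a / b                  # roughly a sevenfold difference
```

A model given only the identical starting values of `a` and `b` has no basis for predicting the final gap.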
Mechanisms that create an illusion of accuracy
Barnum effect: Vague, universally applicable statements (e.g., “you crave understanding”) feel personal to most readers.
Confirmation bias: Readers remember the few correct statements (e.g., 3 true out of 10) and ignore the many incorrect ones.
Dynamic content optimisation: The system tracks which sentences a user lingers on and steers subsequent generation toward those topics, mirroring short‑video recommendation loops.
Self‑fulfilling effect: The prediction influences actions, which then validate the prediction, closing a feedback loop.
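The confirmation‑bias arithmetic above (3 correct out of 10) can be made explicit; the recall numbers below are invented for illustration:

```python
# Why 3 hits out of 10 statements can feel like 75% accuracy: readers
# remember hits vividly and forget most misses.

statements = 10
hits = 3                                  # statements that happened to match

actual_accuracy = hits / statements       # what the reading really achieved

# Illustrative assumption: the reader recalls all 3 hits but only 1 miss.
remembered_hits, remembered_misses = 3, 1
perceived_accuracy = remembered_hits / (remembered_hits + remembered_misses)
```

The gap between `actual_accuracy` (0.3) and `perceived_accuracy` (0.75) is the illusion the four mechanisms above exploit.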
Side‑by‑side comparison of conditions
Bank fraud detection vs. AI fortune‑telling:
Data‑target correlation: Transaction records are tightly linked to fraud; birth data is loosely linked to destiny.
Result feedback: Fraud detection has clear, actionable feedback for model correction; fortune‑telling outcomes cannot be quantified or fed back.
Sample size: Millions of transactions per second provide stable patterns; fortune‑telling relies on subjective, sparse textual data.
Observable key variables: Complete transaction logs are fully observable; hidden influences such as a neighbour’s help never appear in the data.
What AI can and cannot reliably predict
AI excels at tasks with massive historical data, stable statistical regularities, and clear feedback—shopping preferences, anomalous account behaviour, imminent machine failures.
AI struggles with domains dominated by randomness, individual agency, or hidden variables—marriage happiness, personal wealth trajectories, or the auspiciousness of a house’s feng shui.
This article has been distilled and summarized from source material, then republished for learning and reference. If you believe it infringes your rights, please contact us and we will review it promptly.
AI Engineer Programming
In the AI era, defining problems is often more important than solving them; here we explore AI's contradictions, boundaries, and possibilities.