Tag

Markov Chain


Model Perspective
Oct 22, 2024 · Fundamentals

How Time‑Inhomogeneous Markov Chains Reveal Shifting Social Behaviors

By introducing time‑inhomogeneous Markov chains, this article shows how dynamic transition probabilities can model and predict evolving social behaviors such as online activity levels, illustrating the method with a three‑state user engagement example and visualizing future activity trends over a year.
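
As a minimal sketch of the idea (the states and probabilities below are illustrative, not taken from the article), a time-inhomogeneous chain simply lets the transition matrix depend on the time step:

```python
import numpy as np

def transition_matrix(t):
    """Transition matrix for month t: the chance of becoming Active
    rises and falls with a yearly cycle (illustrative numbers)."""
    s = 0.1 * np.sin(2 * np.pi * t / 12)      # seasonal swing
    P = np.array([
        [0.6 - s, 0.3,     0.1 + s],   # from Inactive
        [0.2,     0.5 - s, 0.3 + s],   # from Casual
        [0.1,     0.3,     0.6],       # from Active
    ])
    return P / P.sum(axis=1, keepdims=True)   # guard row sums against rounding

dist = np.array([0.5, 0.3, 0.2])              # initial user mix
for t in range(12):                           # evolve over a year
    dist = dist @ transition_matrix(t)

print(dist)   # user mix after twelve months
```

Plotting `dist` month by month would reproduce the kind of yearly activity trend the article visualizes.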

Markov Chain · Predictive Analytics · Social Behavior Modeling
0 likes · 6 min read
Architect
Aug 11, 2024 · Artificial Intelligence

Understanding Large Language Models: Tokens, Tokenization, and the Evolution from Markov Chains to Transformers

This article explains how generative AI models work by demystifying tokens, tokenization with tools like tiktoken, simple Markov‑chain training, the limitations of small context windows, and how modern LLMs use neural networks, transformers and attention mechanisms to predict the next token.
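
The Markov-chain half of that story fits in a few lines. Below is a bigram model over an assumed toy corpus, not the article's own code:

```python
import random
from collections import defaultdict

corpus = "the cat sat on the mat and the cat ran".split()

# Count bigram transitions: each token maps to the tokens seen after it
chain = defaultdict(list)
for cur, nxt in zip(corpus, corpus[1:]):
    chain[cur].append(nxt)

def generate(start, length, seed=0):
    """Sample a short sequence by repeatedly picking a seen successor."""
    random.seed(seed)
    out = [start]
    for _ in range(length - 1):
        candidates = chain.get(out[-1])
        if not candidates:        # dead end: token never seen mid-text
            break
        out.append(random.choice(candidates))
    return " ".join(out)

print(generate("the", 8))
```

The context window here is a single token, which is exactly the limitation the article contrasts with transformer attention.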

Artificial Intelligence · LLM · Markov Chain
0 likes · 20 min read
Model Perspective
Jul 24, 2024 · Fundamentals

Boost Time Series Forecast Accuracy with the Grey‑Markov Hybrid Model

This article introduces the Grey‑Markov hybrid model, explains its theoretical foundations, outlines step‑by‑step modeling procedures, and demonstrates its superior forecasting performance on a consumer price index (CPI) case study, achieving a significant reduction in prediction error.
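
The grey half of the hybrid can be sketched as a standard GM(1,1) model (the series below is illustrative, and the Markov residual-correction step is omitted):

```python
import numpy as np

x0 = np.array([100.0, 110.0, 121.0, 133.1, 146.41])   # observed series
x1 = np.cumsum(x0)                                    # accumulated series
z1 = 0.5 * (x1[1:] + x1[:-1])                         # background values

# Least-squares fit of the grey equation x0[k] + a*z1[k] = b
B = np.column_stack([-z1, np.ones_like(z1)])
a, b = np.linalg.lstsq(B, x0[1:], rcond=None)[0]

# Time-response function, evaluated one step past the data
k = np.arange(len(x0) + 1)
x1_hat = (x0[0] - b / a) * np.exp(-a * k) + b / a
x0_hat = np.diff(x1_hat, prepend=0.0)

print(x0_hat[-1])   # one-step-ahead forecast (about 161 for this series)
```

In the full hybrid, the residuals of this fit are classified into states and corrected with a Markov transition matrix.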

CPI Prediction · Grey Model · Hybrid Model
0 likes · 7 min read
Model Perspective
Mar 24, 2024 · Fundamentals

Can a Markov Chain Predict Your Mood? A Simple Model Explained

This article explains how a Markov chain—a memoryless stochastic model—can be used to define, construct, and analyze a simple three‑state mental‑state transition matrix, demonstrating both short‑term predictions and long‑term steady‑state distributions with concrete probability examples.
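
A minimal sketch, assuming Happy/Neutral/Sad states and illustrative probabilities rather than the article's numbers:

```python
import numpy as np

P = np.array([            # rows: today's mood, columns: tomorrow's
    [0.6, 0.3, 0.1],      # Happy
    [0.3, 0.4, 0.3],      # Neutral
    [0.2, 0.4, 0.4],      # Sad
])

today = np.array([1.0, 0.0, 0.0])                 # certainly Happy now
two_days = today @ np.linalg.matrix_power(P, 2)   # distribution in 2 days

# Long-run steady state: left eigenvector of P for eigenvalue 1
vals, vecs = np.linalg.eig(P.T)
pi = np.real(vecs[:, np.argmax(np.real(vals))])
pi /= pi.sum()

print(two_days)   # [0.47 0.34 0.19]
print(pi)         # steady-state distribution, independent of today's mood
```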

Markov Chain · mood prediction · psychology
0 likes · 5 min read
Model Perspective
Jul 27, 2023 · Fundamentals

Unlocking Markov Chains: From Weather Forecasts to Keyboard Predictions

This article introduces Markov chains as a mathematical model of state transitions, explains definitions, transition matrices, n‑step and steady‑state distributions, and demonstrates practical Python simulations for weather forecasting and simple keyboard word prediction.
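
A sketch of that kind of simulation, assuming a two-state Sunny/Rainy matrix: long-run visit frequencies approach the stationary distribution (2/3, 1/3):

```python
import numpy as np

rng = np.random.default_rng(42)
P = np.array([[0.8, 0.2],     # Sunny -> (Sunny, Rainy)
              [0.4, 0.6]])    # Rainy -> (Sunny, Rainy)

state, counts = 0, np.zeros(2)
for _ in range(100_000):                  # simulate many days
    state = rng.choice(2, p=P[state])
    counts[state] += 1

freq = counts / counts.sum()
print(freq)   # close to the stationary distribution (2/3, 1/3)
```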

Markov Chain · Python · machine learning
0 likes · 7 min read
Model Perspective
Nov 29, 2022 · Artificial Intelligence

MCMC Demystified: Monte Carlo Basics, Metropolis-Hastings & Gibbs Sampling

Markov Chain Monte Carlo (MCMC) extends classic Monte Carlo by generating dependent samples via a Markov chain, enabling Bayesian inference through concepts like the plug‑in principle, burn‑in, asymptotic independence, and algorithms such as Metropolis‑Hastings and Gibbs sampling, while addressing convergence and effective sample size.
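
As a sketch of Gibbs sampling (on a bivariate normal with an assumed correlation of 0.8, not necessarily the article's example), each coordinate is redrawn from its full conditional in turn:

```python
import numpy as np

rng = np.random.default_rng(0)
rho = 0.8                                 # assumed target correlation
sd = np.sqrt(1 - rho**2)

x = y = 0.0
samples = []
for i in range(20_000):
    # Full conditionals of the standard bivariate normal
    x = rng.normal(rho * y, sd)
    y = rng.normal(rho * x, sd)
    if i >= 1_000:                        # discard burn-in
        samples.append((x, y))

samples = np.array(samples)
corr = np.corrcoef(samples.T)[0, 1]
print(corr)   # close to 0.8
```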

Bayesian inference · Gibbs sampling · MCMC
0 likes · 13 min read
Model Perspective
Nov 9, 2022 · Fundamentals

Understanding Markov Chains: From Basics to Convergence and Sampling

This article explains the fundamentals of Markov chains, illustrates their transition matrix with a market example, demonstrates convergence through Python code, and outlines how to use the stationary distribution for sampling in Monte Carlo simulations.

Markov Chain · Stochastic Process · convergence
0 likes · 9 min read
Model Perspective
Oct 4, 2022 · Artificial Intelligence

How Metropolis-Hastings Improves MCMC Sampling Efficiency

This article explains the detailed‑balance condition for Markov chains, shows why finding a transition matrix for a given stationary distribution is hard, and demonstrates how Metropolis‑Hastings modifies MCMC to achieve higher acceptance rates with a concrete Python example.
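
A minimal Metropolis-Hastings sketch for an unnormalized standard-normal target (illustrative, not necessarily the article's example); with a symmetric random-walk proposal, the acceptance ratio reduces to a simple density ratio:

```python
import numpy as np

rng = np.random.default_rng(1)
target = lambda x: np.exp(-0.5 * x**2)    # unnormalized N(0, 1) density

x, samples = 0.0, []
for i in range(50_000):
    proposal = x + rng.normal(0.0, 1.0)   # symmetric random-walk step
    # Accept with probability min(1, target ratio); the proposal
    # densities cancel because the random walk is symmetric
    if rng.random() < target(proposal) / target(x):
        x = proposal
    if i >= 1_000:                        # discard burn-in
        samples.append(x)

samples = np.array(samples)
print(samples.mean(), samples.std())   # close to 0 and 1
```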

MCMC · Markov Chain · Metropolis-Hastings
0 likes · 9 min read
Model Perspective
Oct 2, 2022 · Fundamentals

Why Do Markov Chains Always Converge? A Hands‑On Exploration

This article explains the basic definition of Markov chains, illustrates a stock‑market example with transition matrices, demonstrates convergence through Python simulations, and shows how the steady‑state distribution enables sampling for Monte Carlo methods.
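
The convergence claim is easy to check numerically: under an assumed three-state market matrix, every row of P^n approaches the same steady-state vector, so the starting state stops mattering:

```python
import numpy as np

P = np.array([            # assumed bull / bear / stagnant transitions
    [0.7, 0.2, 0.1],
    [0.3, 0.5, 0.2],
    [0.3, 0.3, 0.4],
])

Pn = np.linalg.matrix_power(P, 50)
print(Pn)   # all three rows are (numerically) the same steady-state vector
```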

Markov Chain · Python · convergence
0 likes · 10 min read
Model Perspective
Aug 20, 2022 · Operations

How a Simple Weekly Restocking Rule Reduces Piano Stockouts

This article models a piano retailer’s weekly demand as a Poisson process, applies a restocking policy that orders three units only when inventory hits zero, and uses a Markov chain to estimate a roughly 10% stockout probability and average weekly sales of 0.857 units, while also exploring sensitivity to demand changes.
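
The chain behind those numbers can be reproduced under a natural reading of the setup, assuming Poisson demand with mean 1 per week and taking the stock on hand at the start of each week as the state:

```python
import math
import numpy as np

lam = 1.0                                        # assumed mean weekly demand
pmf = lambda k: math.exp(-lam) * lam**k / math.factorial(k)
tail = lambda k: 1.0 - sum(pmf(j) for j in range(k + 1))    # P(demand > k)

# State: stock at the start of the week; restock to 3 whenever it hits 0
states = [1, 2, 3]
P = np.zeros((3, 3))
for i, s in enumerate(states):
    for d in range(s):                           # demand met, stock remains
        P[i, states.index(s - d)] += pmf(d)
    P[i, states.index(3)] += tail(s - 1)         # demand >= s: sell out, restock

# Stationary distribution: left eigenvector for eigenvalue 1
vals, vecs = np.linalg.eig(P.T)
pi = np.real(vecs[:, np.argmax(np.real(vals))])
pi /= pi.sum()

stockout = sum(pi[i] * tail(s) for i, s in enumerate(states))
sales = sum(pi[i] * sum(min(d, s) * pmf(d) for d in range(30))
            for i, s in enumerate(states))
print(round(stockout, 3), round(sales, 3))   # 0.105 0.857
```

With these assumptions the stationary distribution gives a stockout probability of about 10.5% and expected weekly sales of about 0.857 units, matching the figures above.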

Markov Chain · Poisson demand · inventory management
0 likes · 4 min read
Model Perspective
Jun 11, 2022 · Fundamentals

Understanding Markov Chains: From Basics to Real-World Applications

This article introduces Markov chains, explains their definition, transition matrices, examples, Kolmogorov theorem, limiting distributions, and absorbing chains, showing how memoryless stochastic processes model diverse real‑world phenomena.
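
Absorbing chains are easiest to see on gambler's ruin (an assumed illustrative example): with Q the transient-to-transient block of the transition matrix, the fundamental matrix N = (I - Q)^(-1) yields expected absorption times and absorption probabilities:

```python
import numpy as np

# Gambler's ruin on states {0, 1, 2, 3}: 0 and 3 absorb, a fair coin
# moves between neighbors. Canonical form splits P into Q and R.
Q = np.array([[0.0, 0.5],     # transient states 1, 2
              [0.5, 0.0]])
R = np.array([[0.5, 0.0],     # transitions into absorbing states 0, 3
              [0.0, 0.5]])

N = np.linalg.inv(np.eye(2) - Q)   # fundamental matrix
t = N @ np.ones(2)                 # expected steps until absorption
B = N @ R                          # absorption probabilities

print(t)   # [2. 2.]
print(B)   # from state 1: 2/3 ruin, 1/3 win
```

Starting one step from ruin, the gambler wins with probability 1/3, matching the classical i/N formula for a fair game.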

Absorbing Chain · Markov Chain · Stochastic Process
0 likes · 10 min read
Architect
Feb 3, 2016 · Fundamentals

The Mathematics Behind Google’s PageRank Algorithm

This article explains how Google’s PageRank algorithm uses the web’s link structure, Markov processes, and stochastic matrix adjustments—including damping factor α—to overcome ranking challenges and provide a mathematically sound method for ordering search results.
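
A sketch of the core computation on an assumed four-page toy web, using power iteration on the damped link matrix:

```python
import numpy as np

# Tiny 4-page web; links[i] lists the pages that page i links to
links = {0: [1, 2], 1: [2], 2: [0], 3: [2]}
n, alpha = 4, 0.85                        # alpha is the damping factor

# Column-stochastic link matrix: M[j, i] = 1/outdegree(i) if i links to j
M = np.zeros((n, n))
for i, outs in links.items():
    for j in outs:
        M[j, i] = 1 / len(outs)

r = np.full(n, 1 / n)                     # uniform starting rank
for _ in range(100):                      # power iteration
    r = alpha * M @ r + (1 - alpha) / n   # damping spreads leftover rank

print(r)   # page 2, which collects the most inlinks, ranks highest
```

Every page here has at least one outlink; the article's stochastic-matrix adjustments are what handle dangling pages in the general case.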

Algorithm · Google · Markov Chain
0 likes · 21 min read