
Master Machine Learning Algorithms: Types, Python Code & Real-World Examples

This article categorizes machine learning algorithms into supervised, unsupervised, and reinforcement learning, then details nine common algorithms—linear regression, logistic regression, decision trees, SVM, naive Bayes, K‑NN, K‑means, random forest, and dimensionality reduction—each accompanied by a short Python code example.

Model Perspective

Machine Learning Algorithm Classification

Generally, machine learning algorithms are divided into three categories.

Supervised Learning Algorithms

These algorithms use a target variable to predict outcomes from known predictor variables, learning a function that maps inputs to desired outputs until the model reaches the required accuracy. Examples include linear regression, decision trees, random forest, k‑nearest neighbors, and logistic regression.

Unsupervised Learning Algorithms

These algorithms have no target variable and are used for clustering analysis, such as segmenting customers into groups. Examples include association algorithms and k‑means clustering.

Reinforcement Learning Algorithms

These algorithms train machines to make decisions through trial‑and‑error interactions with an environment, learning from past experience to make better judgments. An example is the Markov Decision Process.
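A full Markov Decision Process is beyond a short snippet, but the trial‑and‑error idea can be sketched with a multi‑armed bandit, the simplest reinforcement‑learning setting. The payout probabilities and exploration rate below are illustrative choices, not part of the article's examples:

```python
import random

random.seed(0)
true_payout = [0.3, 0.7]   # hidden reward probability of each arm (illustrative)
estimates = [0.0, 0.0]     # the agent's running value estimates
counts = [0, 0]
epsilon = 0.1              # exploration rate

for step in range(2000):
    if random.random() < epsilon:
        arm = random.randrange(2)                         # explore: random arm
    else:
        arm = max(range(2), key=lambda a: estimates[a])   # exploit: best so far
    reward = 1 if random.random() < true_payout[arm] else 0
    counts[arm] += 1
    # incremental average: learn from the new experience
    estimates[arm] += (reward - estimates[arm]) / counts[arm]

print(estimates)  # the estimates approach the true payout rates
```

After enough interactions, the agent's value estimates converge toward the arms' true payout rates, and it pulls the better arm almost every time.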

Common Machine Learning Algorithms and Their Python Code

Common algorithms include linear regression, logistic regression, decision trees, support vector machines (SVM), naive Bayes, k‑nearest neighbors, k‑means, random forest, dimensionality reduction, gradient boosting, and AdaBoost. The first nine are introduced below, each with its main Python code.

Linear Regression

Linear regression estimates continuous values (e.g., house prices) by fitting the best line that relates independent and dependent variables.

<code>from sklearn.datasets import load_iris
from sklearn.linear_model import LinearRegression

iris = load_iris()
X = iris.data[:, :2]   # features: sepal length and sepal width
y = iris.data[:, 2]    # continuous target: petal length

model = LinearRegression()
model.fit(X, y)
print(model.score(X, y))  # R^2 on the training data
</code>
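The fitted "best line" can be read directly off a trained model. A minimal sketch, using the same iris setup, printing the learned slope coefficients and intercept:

```python
from sklearn.datasets import load_iris
from sklearn.linear_model import LinearRegression

iris = load_iris()
X = iris.data[:, :2]   # sepal length and sepal width
y = iris.data[:, 2]    # petal length as the continuous target

model = LinearRegression().fit(X, y)
# The fitted line: y = coef_[0]*x1 + coef_[1]*x2 + intercept_
print("slope coefficients:", model.coef_)
print("intercept:", model.intercept_)
```

The two coefficients and the intercept fully define the fitted plane relating the sepal measurements to petal length.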

Logistic Regression

This classification algorithm estimates the probability of a binary outcome by fitting data to a logistic function, producing outputs between 0 and 1.

<code>from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression

iris = load_iris()
X = iris.data[:, :2]   # features: sepal length and sepal width
y = iris.target        # class labels (the three iris species)

model = LogisticRegression(max_iter=200)  # raised iteration cap so the solver converges
model.fit(X, y)
print(model.score(X, y))  # classification accuracy on the training data
</code>
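The "outputs between 0 and 1" come from the logistic (sigmoid) function itself, which squashes any real‑valued score into a probability‑like range. A quick illustration:

```python
import math

def sigmoid(z):
    """Logistic function: maps any real number into (0, 1)."""
    return 1.0 / (1.0 + math.exp(-z))

for z in (-5.0, 0.0, 5.0):
    print(z, "->", round(sigmoid(z), 4))
# Large negative inputs approach 0, large positive inputs approach 1,
# and 0 maps to exactly 0.5 -- a natural probability scale.
```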

Decision Tree

Decision trees are supervised learning algorithms used for classification and regression, splitting data based on the most important attributes.

<code>from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier

iris = load_iris()
X = iris.data[:, :2]   # features: sepal length and sepal width
y = iris.target        # class labels

model = DecisionTreeClassifier()
model.fit(X, y)
print(model.score(X, y))  # training accuracy
</code>
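The splits a tree chooses can be inspected directly. A sketch using scikit‑learn's `export_text` helper (the `max_depth=2` cap is an illustrative choice to keep the printout short):

```python
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, export_text

iris = load_iris()
X, y = iris.data[:, :2], iris.target

tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)
# export_text renders the learned splits as nested if/else rules,
# showing which attribute each node splits on and at what threshold
rules = export_text(tree, feature_names=iris.feature_names[:2])
print(rules)
```

Each line of the printout is one decision rule, making it easy to see which attributes the tree considered most informative.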

Support Vector Machine (SVM) Classification

SVM classifies data by finding the optimal separating hyperplane that maximizes the margin between classes.

<code>from sklearn.datasets import load_iris
from sklearn.svm import SVC

iris = load_iris()
X = iris.data[:, :2]   # features: sepal length and sepal width
y = iris.target        # class labels

model = SVC()
model.fit(X, y)
print(model.score(X, y))  # training accuracy
</code>
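With a linear kernel the separating hyperplane is explicit and can be printed. A sketch (the `kernel="linear"` choice is mine; the default SVC kernel is RBF, whose boundary has no single hyperplane form):

```python
from sklearn.datasets import load_iris
from sklearn.svm import SVC

iris = load_iris()
X, y = iris.data[:, :2], iris.target

# Each separating hyperplane is w . x + b = 0, with w in coef_ and b
# in intercept_; for 3 classes, SVC trains 3 pairwise classifiers.
model = SVC(kernel="linear").fit(X, y)
print("hyperplane normals:", model.coef_)
print("intercepts:", model.intercept_)
print("number of support vectors:", len(model.support_vectors_))
```

Only the support vectors — the points closest to the margin — determine the hyperplane, which is what gives the method its name.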

Naive Bayes Classification

Assuming feature independence, Naive Bayes uses Bayes’ theorem to compute posterior probabilities for classification.

Example: Given a weather dataset with the target variable “Play”, we classify “Play” vs “Don’t Play” based on weather conditions using likelihood tables and Bayes’ formula.
<code>from sklearn.datasets import load_iris
from sklearn.naive_bayes import GaussianNB

iris = load_iris()
X = iris.data[:, :2]   # features: sepal length and sepal width
y = iris.target        # class labels

model = GaussianNB()
model.fit(X, y)
print(model.score(X, y))  # training accuracy
</code>
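The weather example can be worked by hand from likelihood tables. The counts below are hypothetical, chosen only to illustrate Bayes' formula P(Play | Sunny) ∝ P(Sunny | Play) · P(Play):

```python
# Hypothetical counts (illustrative, not from a real dataset): of 14 days,
# Play = yes on 9 and no on 5; it was sunny on 3 of the "yes" days
# and on 2 of the "no" days.
p_play = 9 / 14
p_no_play = 5 / 14
p_sunny_given_play = 3 / 9
p_sunny_given_no = 2 / 5

# Bayes' theorem: posterior is proportional to likelihood * prior
score_play = p_sunny_given_play * p_play
score_no = p_sunny_given_no * p_no_play

# Normalize so the two posteriors sum to 1
p_play_given_sunny = score_play / (score_play + score_no)
print(round(p_play_given_sunny, 3))  # 0.6
```

So under these counts, a sunny day favors "Play" with posterior probability 0.6 — exactly the computation GaussianNB performs, with Gaussian likelihoods in place of count tables.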

K‑Nearest Neighbors (KNN) Algorithm

KNN classifies new cases by assigning them the most common class among the nearest neighbors based on a distance metric such as Euclidean, Manhattan, or Hamming distance.

<code>from sklearn.datasets import load_iris
from sklearn.neighbors import KNeighborsClassifier

iris = load_iris()
X = iris.data[:, :2]   # features: sepal length and sepal width
y = iris.target        # class labels

model = KNeighborsClassifier()
model.fit(X, y)
print(model.score(X, y))  # training accuracy
</code>
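The three distance metrics mentioned above differ only in how they aggregate coordinate differences. A self‑contained comparison on toy points (the points and strings are illustrative):

```python
import math

a = (1.0, 2.0)
b = (4.0, 6.0)

# Euclidean distance: straight-line length between the points
euclidean = math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
# Manhattan distance: sum of absolute coordinate differences
manhattan = sum(abs(x - y) for x, y in zip(a, b))
# Hamming distance: number of positions where two equal-length
# sequences differ (used for categorical features)
hamming = sum(c1 != c2 for c1, c2 in zip("karolin", "kathrin"))

print(euclidean, manhattan, hamming)  # 5.0 7.0 3
```

The metric choice changes which points count as "nearest", and therefore which neighbors get to vote.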

K‑Means Clustering

K‑means is an unsupervised algorithm that partitions data into a predefined number of clusters by iteratively updating cluster centroids until convergence.

<code>from sklearn.datasets import load_iris
from sklearn.cluster import KMeans

X = load_iris().data[:, :2]   # unsupervised: no target variable is used

model = KMeans(n_clusters=3, n_init=10, random_state=0)  # 3 clusters for the 3 species
model.fit(X)
print(model.predict(X))       # cluster index assigned to each sample
</code>
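The "iteratively updating centroids" loop can be written out by hand. This sketch runs the two k‑means steps — assign each point to its nearest centroid, then move each centroid to the mean of its cluster — on a toy 1‑D dataset (the data and starting centroids are illustrative):

```python
# Toy 1-D data with two obvious groups
points = [1.0, 1.5, 2.0, 10.0, 10.5, 11.0]
centroids = [0.0, 5.0]            # arbitrary starting centroids

for _ in range(10):               # a few iterations reach convergence here
    # Step 1: assign each point to its nearest centroid
    clusters = [[], []]
    for p in points:
        nearest = min(range(2), key=lambda i: abs(p - centroids[i]))
        clusters[nearest].append(p)
    # Step 2: move each centroid to the mean of its assigned points
    centroids = [sum(c) / len(c) if c else centroids[i]
                 for i, c in enumerate(clusters)]

print(centroids)  # [1.5, 10.5] -- the means of the two groups
```

After the first pass the centroids land on the two group means and every later iteration leaves them unchanged, which is the convergence condition.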

Random Forest Algorithm

Random forest combines multiple decision trees, each voting for a class, and selects the class with the most votes.

<code>from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier

iris = load_iris()
X = iris.data[:, :2]   # features: sepal length and sepal width
y = iris.target        # class labels

model = RandomForestClassifier()
model.fit(X, y)
print(model.score(X, y))  # training accuracy
</code>
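The voting can be observed by querying the individual trees. A sketch (note that scikit‑learn's forest actually averages class probabilities rather than counting hard votes, so the two usually agree but are not defined identically; `n_estimators=25` and `random_state=0` are illustrative choices):

```python
from collections import Counter
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier

iris = load_iris()
X, y = iris.data[:, :2], iris.target

forest = RandomForestClassifier(n_estimators=25, random_state=0).fit(X, y)

sample = X[:1]   # one flower
# Ask every tree in the ensemble for its individual prediction
votes = [int(tree.predict(sample)[0]) for tree in forest.estimators_]
majority = Counter(votes).most_common(1)[0][0]

print("vote tally:", Counter(votes))
print("forest prediction:", forest.predict(sample)[0])
```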

Dimensionality Reduction

Dimensionality reduction techniques such as PCA help identify the most important variables from high‑dimensional data for building robust models.

<code>from sklearn.datasets import load_iris
from sklearn.decomposition import PCA

X = load_iris().data[:, :2]   # unsupervised: no target variable is used

model = PCA(n_components=1)   # project the two features onto one component
model.fit(X)
print(model.transform(X)[:5]) # first five samples in the reduced space
</code>
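How much information a projection keeps can be checked with `explained_variance_ratio_`. A sketch on all four iris features (using the full feature set here, unlike the two‑feature examples above, to make the reduction meaningful):

```python
from sklearn.datasets import load_iris
from sklearn.decomposition import PCA

X = load_iris().data          # all four iris features this time
pca = PCA(n_components=2).fit(X)

# Fraction of the total variance each principal component captures
print(pca.explained_variance_ratio_)
# The first component alone captures most of the variance, which is
# why a 4-D dataset can often be summarized in 1-2 dimensions.
```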
Tags: machine learning, Python, algorithms, reinforcement learning, unsupervised learning, supervised learning
Written by

Model Perspective

Insights, knowledge, and enjoyment from a mathematical modeling researcher and educator. Hosted by Haihua Wang, a modeling instructor and author of "Clever Use of Chat for Mathematical Modeling", "Modeling: The Mathematics of Thinking", "Mathematical Modeling Practice: A Hands‑On Guide to Competitions", and co‑author of "Mathematical Modeling: Teaching Design and Cases".
