Demystifying AI: From Linear Regression to Neural Networks with TensorFlow.js

This article walks through the fundamentals of artificial intelligence: linear and logistic regression, loss functions, gradient descent, and neural network basics, illustrated with TensorFlow.js code examples, visual analogies, and practical demos to help readers grasp the core concepts and their real-world applications.

ELab Team

Artificial Intelligence Overview

The purpose of this sharing session is to explore AI concepts, emphasizing abstraction, computation, and mathematics, and to show how massive data can be turned into rules that solve problems.

Artificial intelligence is intelligence demonstrated by machines.

According to Ren Zhengfei, artificial intelligence is essentially statistics – the study and manipulation of data using probability‑based models.

Linear Regression

Example 🌰

We implement a simple linear model using TensorFlow.js. After clicking several points on the screen and training, we obtain a model such as y ≈ 1.0402x + 0.3363 (the exact values depend on the clicked points and the random initialization).

Choose the model y = ax + b

Convert x and y into tensors (features and labels)

Parameters a and b are learned during training

Training involves a loss function and gradient descent

Obtain the trained model

import * as tf from '@tensorflow/tfjs'

window.a = tf.variable(tf.scalar(Math.random()))
window.b = tf.variable(tf.scalar(Math.random()))

// build model y = ax + b
const model = (xs, a, b) => xs.mul(a).add(b)

// training loop
const training = ({ points, trainTimes }) => {
  const learningRate = 0.1 // learning rate
  const optimizer = tf.train.sgd(learningRate) // stochastic gradient descent
  const ys = tf.tensor1d(points.map(p => p.y)) // sample y values (labels)
  for (let i = 0; i < trainTimes; i++) {
    // each step nudges a and b to reduce the loss
    optimizer.minimize(() => loss(predict(points.map(p => p.x)), ys))
  }
}

// prediction
const predict = x => {
  return tf.tidy(() => {
    const xs = tf.tensor1d(x)
    const predictYs = model(xs, window.a, window.b)
    return predictYs
  })
}

// loss (mean squared error)
const loss = (predictYs, ys) => predictYs.sub(ys).square().mean()
export default { training, predict }

Loss Function

A loss function evaluates model quality by measuring the error between predictions and true values; a smaller loss indicates a better model.

Squared error is widely used for regression problems.

The goal is to minimize the distance between the orange line (predictions) and the black points (actual data).
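To make this concrete, here is a small illustrative example (pure Python, not from the original article) of mean squared error, the same quantity the `loss` function above computes with tensors:

```python
# Mean squared error: average of the squared differences
# between predictions and true values
def mse(predictions, targets):
    return sum((p - t) ** 2 for p, t in zip(predictions, targets)) / len(targets)

# Predictions from a line y = 1.0 * x + 0.3 vs. actual points
xs = [0.0, 1.0, 2.0]
ys = [0.2, 1.4, 2.5]
preds = [1.0 * x + 0.3 for x in xs]  # [0.3, 1.3, 2.3]
print(mse(preds, ys))                # 0.02 — small loss, the line fits well
```

A worse-fitting line would produce larger per-point errors, and squaring amplifies the big ones, which is why training drives the curve toward the data.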

Gradient Descent

Gradient descent may involve feature scaling and mean normalization to accelerate convergence.

Analogy: descending a mountain – a large step may overshoot the valley, a small step may take forever; the learning rate α controls the step size, and choosing it well is key to finding the minimum efficiently.
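The mountain analogy can be sketched in a few lines (an illustrative example, not from the original article), minimizing f(w) = (w − 3)² whose gradient is 2(w − 3):

```python
def gradient_descent(w0, alpha, steps):
    # Minimize f(w) = (w - 3)^2; its gradient is 2 * (w - 3)
    w = w0
    for _ in range(steps):
        w -= alpha * 2 * (w - 3)  # step downhill, scaled by learning rate
    return w

print(gradient_descent(0.0, 0.1, 50))  # small steps: converges toward 3
print(gradient_descent(0.0, 1.1, 5))   # step too large: overshoots and diverges
```

With α = 0.1 each step shrinks the distance to the minimum by a constant factor; with α = 1.1 each step overshoots to the other side of the valley by more than it started with, so the iterates fly off.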

Logistic Regression

Linear regression introduced the core ideas of model, loss function, and gradient descent; logistic regression builds on them and is a stepping stone to neural networks.

Logistic regression solves classification problems, e.g., predicting whether an image contains a cat.

Model

The model applies a sigmoid function to a linear combination of the inputs, mapping any real value to a probability between 0 and 1. (Other activation functions, such as ReLU and tanh, appear later in neural networks.)
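A quick illustration of how the sigmoid turns raw scores into probabilities (example values chosen for illustration, not from the original article):

```python
import math

# Sigmoid squashes any real-valued score z into the (0, 1) range,
# so the output can be read as a probability
def sigmoid(z):
    return 1 / (1 + math.exp(-z))

print(sigmoid(0))   # 0.5   — maximal uncertainty
print(sigmoid(4))   # ≈ 0.982 — confidently "cat"
print(sigmoid(-4))  # ≈ 0.018 — confidently "not cat"
```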

Loss Function

Derived from maximum likelihood estimation assuming a Bernoulli distribution, the loss differs from the squared error used in linear regression.
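The resulting binary cross-entropy loss rewards confident correct predictions and heavily punishes confident wrong ones. A minimal illustration (values chosen for demonstration, not from the original article):

```python
import math

# Binary cross-entropy for a single example with true label y (0 or 1)
# and predicted probability a
def binary_cross_entropy(y, a):
    return -y * math.log(a) - (1 - y) * math.log(1 - a)

print(binary_cross_entropy(1, 0.9))  # ≈ 0.105 — confident and right: small loss
print(binary_cross_entropy(1, 0.1))  # ≈ 2.303 — confident and wrong: large loss
```

Unlike squared error, this loss grows without bound as a confident prediction approaches the wrong label, which pairs well with the sigmoid output during gradient descent.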

Gradient Descent

Training repeats until convergence.

Neural Networks

What Is a Neural Network

Biological neurons receive inputs and emit outputs; artificial neurons compute a weighted sum of inputs, add a bias, and apply an activation function.
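That computation fits in a few lines. The following sketch (illustrative, with made-up weights) shows a single artificial neuron:

```python
import math

def neuron(inputs, weights, bias):
    # Weighted sum of inputs plus bias, passed through a sigmoid activation
    z = sum(w * x for w, x in zip(weights, inputs)) + bias
    return 1 / (1 + math.exp(-z))

# Two inputs, hypothetical weights and bias:
# z = 0.5*1.0 + (-0.25)*2.0 + 0.1 = 0.1, sigmoid(0.1) ≈ 0.525
print(neuron([1.0, 2.0], [0.5, -0.25], 0.1))
```

A network is just many such units wired together, each layer feeding its activations to the next.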

Neural Network Model

Typical architecture consists of an input layer, one or more hidden layers, and an output layer.

Input layer receives feature vectors.

Hidden layers process data through neurons.

Output layer produces predictions.

Forward Propagation

Compute activations layer by layer from input to output, storing intermediate values.

Backward Propagation

Calculate gradients of the loss with respect to each weight and bias, then update them using gradient descent.

Example 🌰

The following Python snippets implement a minimal single-neuron network for recognizing cats.

import numpy as np

# Sigmoid activation
def sigmoid(z):
    return 1 / (1 + np.exp(-z))

# Sigmoid derivative, expressed in terms of the activation a = sigmoid(z)
def sigmoid_derivatives(a):
    return a * (1 - a)

# Forward propagation: weighted sum plus bias, then activation
def forward_propagation(W, b, X):
    Z = np.dot(W.T, X) + b
    A = sigmoid(Z)
    return A

Define the loss function (binary cross-entropy; the epsilon term guards against taking log(0)).

def loss_function(y, a):
    return -y * np.log(a) - (1 - y) * np.log(1 - a)

def Loss_Fn(Y, A):
    m = Y.shape[1]
    epsilon = 1e-5
    J = (1 / m) * np.sum(-Y * np.log(A + epsilon) - (1 - Y) * np.log(1 - A + epsilon))
    return J

Training loop with gradient descent.

def train(X, Y, alpha, iterations):
    # initialize_parameters and backward_propagation are defined elsewhere
    W, b = initialize_parameters(input_nums, output_nums)
    for i in range(iterations):
        A = forward_propagation(W, b, X)        # forward pass
        dW, dB = backward_propagation(Y, A, X)  # gradients of the loss
        W -= alpha * dW                         # gradient descent update
        b -= alpha * dB
    return W, b
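The training loop calls `initialize_parameters` and `backward_propagation`, which the source does not show. A minimal sketch consistent with the forward pass and cross-entropy loss above (an assumption on our part, not the original code) could look like:

```python
import numpy as np

def initialize_parameters(input_nums, output_nums):
    # Small random weights, zero bias; shapes match np.dot(W.T, X) + b
    # in forward_propagation
    W = np.random.randn(input_nums, output_nums) * 0.01
    b = np.zeros((output_nums, 1))
    return W, b

def backward_propagation(Y, A, X):
    # For a sigmoid output with cross-entropy loss, the error term
    # simplifies to dZ = A - Y; gradients are averaged over the m examples
    m = Y.shape[1]
    dZ = A - Y
    dW = (1 / m) * np.dot(X, dZ.T)
    dB = (1 / m) * np.sum(dZ, axis=1, keepdims=True)
    return dW, dB
```

Note how the sigmoid derivative cancels against the cross-entropy derivative, leaving the simple `A - Y` term — one reason this loss pairs naturally with a sigmoid output.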

Summary

We reviewed linear and logistic regression models, their loss functions, and gradient descent, then introduced neural network fundamentals, demonstrating that front‑end developers can experiment with machine learning using TensorFlow.js.

Republication Notice

This article has been distilled and summarized from source material, then republished for learning and reference. If you believe it infringes your rights, please contact admin@besthub.dev and we will review it promptly.
