Artificial Intelligence · 9 min read

Building a Simple Neural Network from Scratch in Python

This article walks through constructing a basic neural network using only Python and NumPy, explains the underlying concepts such as neurons, training cycles, sigmoid activation, and weight‑adjustment formulas, and provides complete, runnable code with sample inputs and outputs.

Python Programming Learning Circle

As part of learning artificial intelligence, the author set a goal to build a simple neural network in Python without using any neural‑network libraries, implementing everything from first principles.

Part 1 – What Is a Neural Network?

A neural network mimics the brain’s billions of neurons that fire when sufficient synaptic input is received; here we simulate this with three inputs and one output using matrix operations.

<code>from numpy import exp, array, random, dot

# Four training examples, three inputs each
training_set_inputs = array([[0, 0, 1], [1, 1, 1], [1, 0, 1], [0, 1, 1]])
# Desired output for each example, transposed into a column vector
training_set_outputs = array([[0, 1, 1, 0]]).T
random.seed(1)  # make every run reproducible
# Initialise the weights to random values in [-1, 1)
synaptic_weights = 2 * random.random((3, 1)) - 1
for iteration in range(10000):
    # Forward pass: weighted sum of inputs through the sigmoid function
    output = 1 / (1 + exp(-(dot(training_set_inputs, synaptic_weights))))
    # Adjust the weights by the error-weighted sigmoid derivative
    synaptic_weights += dot(training_set_inputs.T, (training_set_outputs - output) * output * (1 - output))
# Predict the output for a new, unseen input
print(1 / (1 + exp(-(dot(array([1, 0, 0]), synaptic_weights)))))
</code>

The network learns to map the four training examples to their correct outputs; for the new situation [1, 0, 0], the expected answer is 1, and the final print statement produces a value very close to it.

Part 2 – Training Process

1. Feed the inputs through the network using the current weights and compute the output.

2. Calculate the error between the actual output and the desired output.

3. Adjust the weights slightly in the direction that reduces the error.

4. Repeat the above steps 10,000 times.

After training, the weights converge to values that allow the network to make accurate predictions on new data.
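To see this convergence directly, the same training loop can be made to report its mean absolute error as it shrinks. This is a sketch added for illustration, not part of the original article; it reuses the article's data and update rule in plain NumPy:

```python
import numpy as np

np.random.seed(1)
inputs = np.array([[0, 0, 1], [1, 1, 1], [1, 0, 1], [0, 1, 1]])
targets = np.array([[0, 1, 1, 0]]).T
weights = 2 * np.random.random((3, 1)) - 1

for i in range(10000):
    output = 1 / (1 + np.exp(-inputs @ weights))           # step 1: forward pass
    error = targets - output                               # step 2: error
    weights += inputs.T @ (error * output * (1 - output))  # step 3: adjust
    if i % 2500 == 0:
        print(f"iteration {i}: mean |error| = {np.mean(np.abs(error)):.4f}")
```

The printed error falls steadily toward zero, which is what "the weights converge" means in practice.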

Part 3 – Neuron Output Formula

The weighted sum of inputs is passed through the sigmoid function, which maps any real number into the (0, 1) interval, giving the neuron’s final output.
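A minimal sketch of that squashing behaviour (added here for illustration):

```python
import numpy as np

def sigmoid(x):
    """Map any real number into the open interval (0, 1)."""
    return 1 / (1 + np.exp(-x))

print(sigmoid(0.0))    # exactly 0.5: no evidence either way
print(sigmoid(10.0))   # close to 1
print(sigmoid(-10.0))  # close to 0
```

Large positive sums saturate near 1, large negative sums near 0, and a sum of zero sits exactly in the middle.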

Part 4 – Weight‑Adjustment Formula

Each weight is increased by the input multiplied by the error and by the sigmoid derivative of the current output, output × (1 − output). Because that derivative shrinks as the output saturates toward 0 or 1, confident neurons receive smaller adjustments, a simple yet effective rule for learning.
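A single update step can be traced with one hypothetical example (the input and starting weights below are invented for illustration):

```python
import numpy as np

x = np.array([[1, 0, 1]])                   # one training example
target = np.array([[1]])                    # desired output
weights = np.array([[0.5], [-0.2], [0.1]])  # arbitrary starting weights

output = 1 / (1 + np.exp(-x @ weights))     # forward pass
error = target - output
# Error-weighted derivative: the adjustment shrinks as output saturates
adjustment = x.T @ (error * output * (1 - output))
weights += adjustment

new_output = 1 / (1 + np.exp(-x @ weights))
print(output, new_output)  # the new output moves closer to the target
```

One step nudges the output toward the target; repeating it thousands of times is exactly what the training loop does.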

Part 5 – Full Python Implementation

The complete code defines a NeuralNetwork class with methods for the sigmoid function, its derivative, training, and inference.

<code>from numpy import exp, array, random, dot


class NeuralNetwork():
    def __init__(self):
        # Seed the generator so every run produces the same starting weights
        random.seed(1)
        # Random weights for 3 inputs -> 1 output, in the range [-1, 1)
        self.synaptic_weights = 2 * random.random((3, 1)) - 1

    def __sigmoid(self, x):
        # Squash the weighted sum into the (0, 1) interval
        return 1 / (1 + exp(-x))

    def __sigmoid_derivative(self, x):
        # Gradient of the sigmoid, expressed in terms of its output x
        return x * (1 - x)

    def train(self, training_set_inputs, training_set_outputs, iterations):
        for iteration in range(iterations):
            output = self.think(training_set_inputs)
            error = training_set_outputs - output
            # Error-weighted derivative: saturated (confident) outputs
            # receive smaller adjustments
            adjustment = dot(training_set_inputs.T, error * self.__sigmoid_derivative(output))
            self.synaptic_weights += adjustment

    def think(self, inputs):
        # Forward pass: weighted sum of inputs through the sigmoid
        return self.__sigmoid(dot(inputs, self.synaptic_weights))


if __name__ == "__main__":
    neural_network = NeuralNetwork()
    print("Random starting synaptic weights:")
    print(neural_network.synaptic_weights)
    training_set_inputs = array([[0, 0, 1], [1, 1, 1], [1, 0, 1], [0, 1, 1]])
    training_set_outputs = array([[0, 1, 1, 0]]).T
    neural_network.train(training_set_inputs, training_set_outputs, 10000)
    print("New synaptic weights after training:")
    print(neural_network.synaptic_weights)
    print("Considering new situation [1, 0, 0] -> ?:")
    print(neural_network.think(array([1, 0, 0])))
</code>

Running the script yields random initial weights, learned weights after training, and a prediction of approximately 0.9999 for the new input [1, 0, 0], demonstrating that the network has effectively learned the pattern.

The tutorial shows that even a single‑neuron network can learn, adapt, and make predictions, illustrating fundamental concepts of machine learning.

Tags: Artificial Intelligence, machine learning, neural network, NumPy
Written by

Python Programming Learning Circle

A global community of Chinese Python developers offering technical articles, columns, original video tutorials, and problem sets. Topics include web full‑stack development, web scraping, data analysis, natural language processing, image processing, machine learning, automated testing, DevOps automation, and big data.
