Designing MLPs for XOR and Any Boolean Function: Layers and Node Requirements
This article explains how to construct a multi‑layer perceptron that implements XOR and arbitrary n‑bit Boolean functions, detailing the hidden‑node count for single‑ and multi‑layer designs and the minimal number of network layers needed.
Scene Description
Neural network concepts are inspired by neuroscience; the brain processes information in multiple layers, e.g., visual signals passing through V1, V2, V4 before object recognition. Deep neural networks mimic this multilayer structure and can represent complex functions compactly. This article explores how to construct a Multi‑Layer Perceptron (MLP) that implements any n‑bit Boolean function, such as parity.
Problem Description
How can XOR be realized with an MLP whose inputs are binary?
If only one hidden layer is used, how many hidden nodes are required to implement an arbitrary n‑input Boolean function?
When moving from a single hidden layer to multiple hidden layers, how many nodes are needed?
What is the minimal number of network layers after reasonable configuration?
Background Knowledge
Mathematical logic, deep learning.
Answer and Analysis
1. XOR with a two‑input MLP
One well-known construction uses two hidden threshold units and one output unit: one hidden unit computes the OR of the two inputs, the other computes their NAND, and the output unit computes the AND of the two hidden activations. The output fires exactly when the inputs differ, which is XOR.
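The construction can be sketched with step-activation units; the specific weights and thresholds below are one common choice, not the only one:

```python
def step(z):
    # Heaviside step activation: fires (1) when the weighted sum exceeds 0
    return 1 if z > 0 else 0

def xor_mlp(x1, x2):
    h1 = step(x1 + x2 - 0.5)       # hidden unit 1: OR gate
    h2 = step(-x1 - x2 + 1.5)      # hidden unit 2: NAND gate
    return step(h1 + h2 - 1.5)     # output unit: AND of h1 and h2

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", xor_mlp(a, b))
```

The output unit fires only when both hidden units fire, i.e. when at least one input is 1 (OR) and not both are 1 (NAND).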
2. Hidden nodes for arbitrary n‑input Boolean functions (single hidden layer)
Any n‑input Boolean function can be expressed in Disjunctive Normal Form (DNF): an OR of conjunction (AND) terms. Each hidden neuron can represent one conjunction term, and the output neuron computes their disjunction. For example, a function whose DNF has six conjunctions can be realized by a three‑layer MLP (input, hidden, output) with six hidden neurons.
Using Karnaugh maps to simplify the DNF can reduce the number of hidden neurons. In the worst case, however, the minimal DNF of an n‑input Boolean function still contains 2^(n‑1) conjunctions; the parity function attains this bound, since no two of its minterms can be merged. A single‑hidden‑layer MLP therefore needs at most 2^(n‑1) hidden neurons, and needs that many for parity.
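The DNF-to-network mapping can be sketched as follows. Each hidden unit is a threshold detector for one minterm (weight +1 where the minterm has a 1, weight −1 where it has a 0), and the output unit ORs the detectors; the minterm list for 3-input parity below is an illustrative example:

```python
def step(z):
    return 1 if z > 0 else 0

def dnf_mlp(minterms):
    """Single-hidden-layer threshold network for the Boolean function
    whose true assignments are the tuples in `minterms`."""
    def f(x):
        hidden = []
        for m in minterms:
            # Unit fires iff x matches minterm m exactly:
            # weighted sum peaks at sum(m) only for the matching input
            z = sum((1 if mi else -1) * xi for mi, xi in zip(m, x))
            hidden.append(step(z - (sum(m) - 0.5)))
        # Output unit: OR of all minterm detectors
        return step(sum(hidden) - 0.5)
    return f

# 3-input parity: minimal DNF has 2^(3-1) = 4 conjunctions
parity3 = dnf_mlp([(1, 0, 0), (0, 1, 0), (0, 0, 1), (1, 1, 1)])
```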
3. Nodes required when using multiple hidden layers
For two‑input XOR, three nodes suffice (two hidden units and one output unit). Four‑input parity is three chained XOR operations, so 3×3 = 9 nodes are enough; six inputs require five XOR operations, 3×5 = 15 nodes; and so on. In general, an n‑input parity function can be built from n−1 chained XOR blocks using 3·(n−1) nodes (including the output node).
Thus, using multiple hidden layers reduces the node count from exponential in n (2^(n‑1) hidden neurons) to linear (3·(n‑1) nodes).
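The chained construction can be sketched by reusing the 3-node XOR block from above and folding it across the inputs:

```python
def step(z):
    return 1 if z > 0 else 0

def xor_unit(a, b):
    # One 3-node XOR block: two hidden threshold units plus one output unit
    h1 = step(a + b - 0.5)       # OR
    h2 = step(-a - b + 1.5)      # NAND
    return step(h1 + h2 - 1.5)   # AND

def parity_chain(bits):
    # Chain n-1 XOR blocks: 3*(n-1) nodes total, but depth grows linearly in n
    acc = bits[0]
    for b in bits[1:]:
        acc = xor_unit(acc, b)
    return acc
```

Note the trade-off: the node count is linear, but because each block consumes the previous block's output, the depth of this chain is also linear in n.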
4. Minimal number of network layers
Instead of chaining the XOR blocks, pair the signals and compute the XOR of each pair in parallel, tournament style: each round halves the number of remaining signals, so ⌈log₂ N⌉ rounds suffice. Since each XOR block is two layers deep (one hidden layer plus its output), the minimal depth is 2·⌈log₂ N⌉ layers.
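The tournament pairing can be sketched as a loop that halves the signal list each round (the function and variable names here are illustrative):

```python
import math

def step(z):
    return 1 if z > 0 else 0

def xor_unit(a, b):
    h1 = step(a + b - 0.5)       # OR
    h2 = step(-a - b + 1.5)      # NAND
    return step(h1 + h2 - 1.5)   # AND

def parity_tree(bits):
    # Pair signals off in parallel each round; each XOR block adds 2 layers,
    # so the total depth is 2 * ceil(log2(n))
    layer = list(bits)
    while len(layer) > 1:
        nxt = [xor_unit(layer[i], layer[i + 1])
               for i in range(0, len(layer) - 1, 2)]
        if len(layer) % 2:
            nxt.append(layer[-1])   # an odd signal passes through unchanged
        layer = nxt
    return layer[0]

n = 8
print(2 * math.ceil(math.log2(n)))  # minimal depth for 8 inputs -> 6
```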
Hulu Beijing