
How the Artificial Bee Colony Algorithm Optimizes Complex Problems

The article explains the Artificial Bee Colony (ABC) algorithm, its biological inspiration, core components, and detailed steps—including employed, onlooker, and scout bee phases—followed by a complete Python implementation applied to a pressure‑vessel design optimization problem.

Model Perspective

Basic Principle of Artificial Bee Colony Algorithm

Artificial Bee Colony (ABC) algorithm, proposed by Karaboga in 2005, is a swarm‑intelligence optimization method inspired by the foraging behavior of honey bees. It models three bee types—employed, onlooker, and scout—to explore and exploit food sources, which correspond to candidate solutions.

Components

Food source : Represents a solution; its quality is measured by a fitness value.

Employed bee : Exploits a specific food source and shares its information with onlookers.

Onlooker bee : Waits in the hive and chooses among the advertised food sources in proportion to their quality.

Scout bee : Searches randomly for new food sources when existing ones are abandoned.

The algorithm iterates through employed bee search, onlooker bee selection via roulette‑wheel, and scout bee mutation, updating the population and preserving the best individuals.

Population Initialization

Randomly generate a set of individuals within the defined bounds to form the initial population.
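As a minimal sketch (the function and parameter names here are illustrative, not taken from the implementation below), uniform initialization can be vectorized with NumPy:

```python
import numpy as np

def init_population(pop, lb, ub, rng=None):
    """Sample `pop` individuals uniformly within per-dimension bounds [lb, ub]."""
    if rng is None:
        rng = np.random.default_rng()
    lb, ub = np.asarray(lb, dtype=float), np.asarray(ub, dtype=float)
    # Broadcasting stretches each uniform sample into the [lb, ub] interval
    return lb + rng.random((pop, lb.size)) * (ub - lb)
```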

Employed Bee Search

Each food source is assigned an employed bee. Each bee generates a new candidate by perturbing its position relative to a randomly chosen partner, scaled by a random vector φ drawn uniformly from [−1, 1]; the candidate replaces the current source only if it is better (greedy selection).
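The perturbation step alone can be sketched as follows (an illustrative helper, assuming minimization and a 2-D NumPy population array):

```python
import numpy as np

def employed_candidate(X, i, rng=None):
    """Candidate for bee i: v = x_i + phi * (x_i - x_k), phi ~ U(-1, 1) per dimension."""
    if rng is None:
        rng = np.random.default_rng()
    pop, dim = X.shape
    k = rng.integers(pop)
    while k == i:                 # the partner k must differ from i
        k = rng.integers(pop)
    phi = 2 * rng.random(dim) - 1
    return X[i] + phi * (X[i] - X[k])
```

Greedy selection then keeps the candidate only if its fitness improves on `X[i]`; otherwise the source's trial counter is incremented.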

Onlooker Bee Search

Onlookers are selected proportionally to fitness using roulette‑wheel selection and then perform a similar perturbation.
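For a minimization problem, the implementation below maps costs to probabilities with an exponential transform before the roulette wheel. A compact sketch (helper names are illustrative; positive costs assumed):

```python
import numpy as np

def selection_probs(cost):
    """Convert costs to probabilities; lower cost -> higher selection probability."""
    F = np.exp(-cost / cost.mean())
    return F / F.sum()

def roulette(P, rng=None):
    """Return an index drawn with probability proportional to P."""
    if rng is None:
        rng = np.random.default_rng()
    # First cumulative bin exceeding a uniform draw on [0, sum(P))
    return int(np.searchsorted(np.cumsum(P), rng.random() * P.sum()))
```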

Scout Bee Search

If a bee exceeds a limit of unsuccessful trials, it becomes a scout and generates a new random solution.
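The scout phase can be sketched as a simple reset pass (illustrative names; `trials` plays the role of the trial counter `C` in the implementation below):

```python
import numpy as np

def scout_phase(X, trials, limit, lb, ub, rng=None):
    """Re-initialize any solution whose trial counter has reached `limit`."""
    if rng is None:
        rng = np.random.default_rng()
    lb, ub = np.asarray(lb, dtype=float), np.asarray(ub, dtype=float)
    for i in range(X.shape[0]):
        if trials[i] >= limit:
            X[i] = lb + rng.random(lb.size) * (ub - lb)   # fresh random source
            trials[i] = 0                                 # reset its counter
    return X, trials
```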

Algorithm Flow

These three phases are repeated until the maximum number of iterations is reached, continuously improving the best fitness.

Below is a Python implementation of the ABC algorithm applied to a pressure‑vessel design problem.

<code>import numpy as np
import copy
import matplotlib.pyplot as plt

def initialization(pop, ub, lb, dim):
    """Population initialization"""
    X = np.zeros([pop, dim])
    for i in range(pop):
        for j in range(dim):
            X[i, j] = (ub[j] - lb[j]) * np.random.random() + lb[j]
    return X

def BorderCheck(X, ub, lb, pop, dim):
    """Boundary check"""
    for i in range(pop):
        for j in range(dim):
            if X[i, j] > ub[j]:
                X[i, j] = ub[j]
            elif X[i, j] < lb[j]:
                X[i, j] = lb[j]
    return X

def CaculateFitness(X, fun):
    """Calculate fitness of all individuals"""
    pop = X.shape[0]
    fitness = np.zeros([pop, 1])
    for i in range(pop):
        fitness[i] = fun(X[i, :])
    return fitness

def SortFitness(Fit):
    """Fitness sorting"""
    fitness = np.sort(Fit, axis=0)
    index = np.argsort(Fit, axis=0)
    return fitness, index

def SortPosition(X, index):
    """Sort positions according to fitness"""
    Xnew = np.zeros(X.shape)
    for i in range(X.shape[0]):
        Xnew[i, :] = X[index[i], :]
    return Xnew

def RouletteWheelSelection(P):
    """Roulette‑wheel selection"""
    C = np.cumsum(P)
    r = np.random.random() * C[-1]
    out = 0
    for i in range(P.shape[0]):
        if r < C[i]:
            out = i
            break
    return out

def ABC(pop, dim, lb, ub, MaxIter, fun):
    """Artificial Bee Colony algorithm"""
    L = round(0.6 * dim * pop)          # limit parameter
    C = np.zeros([pop, 1])              # trial counter
    nOnlooker = pop                     # number of onlookers

    X = initialization(pop, ub, lb, dim)
    fitness = CaculateFitness(X, fun)
    fitness, sortIndex = SortFitness(fitness)
    X = SortPosition(X, sortIndex)

    GbestScore = copy.copy(fitness[0])
    GbestPositon = np.zeros([1, dim])
    GbestPositon[0, :] = copy.copy(X[0, :])
    Curve = np.zeros([MaxIter, 1])
    Xnew = np.zeros([pop, dim])
    fitnessNew = copy.copy(fitness)

    for t in range(MaxIter):
        # Employed bee phase
        for i in range(pop):
            k = np.random.randint(pop)
            while k == i:
                k = np.random.randint(pop)
            phi = (2 * np.random.random([1, dim]) - 1)
            Xnew[i, :] = X[i, :] + phi * (X[i, :] - X[k, :])
        Xnew = BorderCheck(Xnew, ub, lb, pop, dim)
        fitnessNew = CaculateFitness(Xnew, fun)

        for i in range(pop):
            if fitnessNew[i] < fitness[i]:
                X[i, :] = copy.copy(Xnew[i, :])
                fitness[i] = copy.copy(fitnessNew[i])
            else:
                C[i] = C[i] + 1

        # Calculate selection probabilities
        F = np.zeros([pop, 1])
        MeanCost = np.mean(fitness)
        for i in range(pop):
            F[i] = np.exp(-fitness[i] / MeanCost)
        P = F / sum(F)

        # Onlooker bee phase: only sources picked by roulette generate candidates
        Xnew = copy.copy(X)
        visited = np.zeros(pop, dtype=bool)
        for m in range(nOnlooker):
            i = RouletteWheelSelection(P)
            visited[i] = True
            k = np.random.randint(pop)
            while k == i:
                k = np.random.randint(pop)
            phi = (2 * np.random.random([1, dim]) - 1)
            Xnew[i, :] = X[i, :] + phi * (X[i, :] - X[k, :])
        Xnew = BorderCheck(Xnew, ub, lb, pop, dim)
        fitnessNew = CaculateFitness(Xnew, fun)

        for i in range(pop):
            if not visited[i]:
                continue                      # source not tried in this phase
            if fitnessNew[i] < fitness[i]:
                X[i, :] = copy.copy(Xnew[i, :])
                fitness[i] = copy.copy(fitnessNew[i])
            else:
                C[i] = C[i] + 1

        # Scout bee phase
        for i in range(pop):
            if C[i] >= L:
                for j in range(dim):
                    X[i, j] = np.random.random() * (ub[j] - lb[j]) + lb[j]
                C[i] = 0

        fitness = CaculateFitness(X, fun)
        fitness, sortIndex = SortFitness(fitness)
        X = SortPosition(X, sortIndex)

        if fitness[0] <= GbestScore:
            GbestScore = copy.copy(fitness[0])
            GbestPositon[0, :] = copy.copy(X[0, :])
        Curve[t] = GbestScore

    return GbestScore, GbestPositon, Curve

# Example: pressure vessel design
def fun(X):
    x1, x2, x3, x4 = X[0], X[1], X[2], X[3]
    g1 = -x1 + 0.0193 * x3
    g2 = -x2 + 0.00954 * x3
    g3 = -np.pi * x3**2 * x4 - 4 * np.pi * x3**3 / 3 + 1296000  # volume constraint
    g4 = x4 - 240
    if g1 <= 0 and g2 <= 0 and g3 <= 0 and g4 <= 0:
        fitness = (0.6224 * x1 * x3 * x4 +
                   1.7781 * x2 * x3**2 +
                   3.1661 * x1**2 * x4 +
                   19.84 * x1**2 * x3)
    else:
        fitness = 1e33
    return fitness

pop = 50
dim = 4
lb = np.array([0, 0, 10, 10])
ub = np.array([100, 100, 100, 100])
MaxIter = 500
GbestScore, GbestPositon, Curve = ABC(pop, dim, lb, ub, MaxIter, fun)
print('Best fitness:', GbestScore)
print('Best solution [Ts, Th, R, L]:', GbestPositon)

plt.figure(1)
plt.plot(Curve, 'r-', linewidth=2)
plt.xlabel('Iteration')
plt.ylabel('Fitness')
plt.grid()
plt.title('ABC')
plt.show()
</code>

Reference: Fan Xu, “Python Intelligent Optimization Algorithms: From Principles to Code Implementation and Applications”.

Tags: Optimization, Python, swarm intelligence, metaheuristic, Artificial Bee Colony
Written by

Model Perspective

Insights, knowledge, and enjoyment from a mathematical modeling researcher and educator. Hosted by Haihua Wang, a modeling instructor and author of "Clever Use of Chat for Mathematical Modeling", "Modeling: The Mathematics of Thinking", "Mathematical Modeling Practice: A Hands‑On Guide to Competitions", and co‑author of "Mathematical Modeling: Teaching Design and Cases".
