by Stuart Reid (www.stuartreid.co.za)
Originally inspired by the behaviour of flocking birds, the particle swarm optimization algorithm solves optimization problems by moving a "swarm" of candidate solutions (particles, represented as vectors) through the search space towards the best particle found so far. As such, the algorithm can be classified as a direct-search, population-based, meta-heuristic, global optimization algorithm. This IPython notebook introduces the algorithm using NumPy and mentions a few of its applications in quantitative finance.
import math
import numpy
import numpy.random as nrand
import matplotlib.pyplot as plt
import matplotlib.gridspec as gridspec
This class contains the logic belonging to each particle. A particle is defined by three n-dimensional vectors: its current position in the search space, the best position it has ever visited, and its velocity. The velocity describes the direction in which the particle will travel next. The new velocity is calculated as the current velocity plus a cognitive component and a social component, all multiplied by an inertia (constriction) coefficient. The cognitive component is the element-wise product of a uniformly distributed n-dimensional random vector between zero and one and the difference between the particle's best position and its current position; the social component is the analogous product using the swarm's best position instead.
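In symbols, this is the constriction form of the velocity update. Writing $x_t$ for a particle's position, $v_t$ for its velocity, $p$ for its personal best and $g$ for the swarm's best, the code below implements (with $\chi = 0.729844$, $c_1 = c_2 = 1.496180$, and $\odot$ denoting element-wise multiplication of the random vectors $r_1, r_2$):

$$v_{t+1} = \chi \left( v_t + c_1 r_1 \odot (p - x_t) + c_2 r_2 \odot (g - x_t) \right), \qquad x_{t+1} = x_t + v_{t+1}$$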
class Particle:
    def __init__(self, n, r):
        self.n = n  # Number of dimensions
        self.position = nrand.uniform(low=r[0], high=r[1], size=n)
        self.velocity = nrand.rand(n) / 100.0
        # Copy so that in-place position updates do not also move pbest
        self.pbest = self.position.copy()

    def update_particle(self, gbest):
        self.update_velocity(gbest)
        self.update_position()

    def update_velocity(self, gbest):
        r1 = nrand.rand(self.n)
        r2 = nrand.rand(self.n)
        # Note the parentheses: the random vectors scale the *differences*
        social = 1.496180 * r1 * (gbest.position - self.position)
        cognitive = 1.496180 * r2 * (self.pbest - self.position)
        self.velocity += social + cognitive
        self.velocity *= 0.729844  # Constriction / inertia coefficient

    def update_position(self):
        self.position += self.velocity
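As a quick sanity check, a particle repeatedly updated against a best position pinned at a known optimum should drift towards it. The sketch below restates the update rule above in standalone form; the toy fitness function, the fixed target at zero, and the random seed are my own choices for illustration:

```python
import numpy as np
import numpy.random as nrand

# Standalone restatement of the particle update above
class Particle:
    def __init__(self, n, r):
        self.n = n
        self.position = nrand.uniform(low=r[0], high=r[1], size=n)
        self.velocity = nrand.rand(n) / 100.0
        self.pbest = self.position.copy()

    def update_particle(self, gbest):
        r1 = nrand.rand(self.n)
        r2 = nrand.rand(self.n)
        social = 1.496180 * r1 * (gbest.position - self.position)
        cognitive = 1.496180 * r2 * (self.pbest - self.position)
        self.velocity = 0.729844 * (self.velocity + social + cognitive)
        self.position = self.position + self.velocity

nrand.seed(42)  # fixed seed, purely for reproducibility

# A stand-in "best" particle pinned at the optimum of f(x) = x^2
best = Particle(1, [-9.0, 9.0])
best.position[:] = 0.0

f = lambda x: float(np.sum(x * x))
p = Particle(1, [-9.0, 9.0])
start = f(p.position)
for _ in range(200):
    p.update_particle(best)
    if f(p.position) < f(p.pbest):
        p.pbest = p.position.copy()

print(start, f(p.pbest))  # the personal best improves on the starting fitness
```

The personal best is monotone by construction; what the run demonstrates is that the constriction coefficient keeps the oscillations around the attractor from diverging.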
This is a simple one-dimensional minimization objective function which I have created: the sine of x minus half of x, with a constant penalty value of 5.0 outside the range [-8, 8]. The resulting fitness landscape is non-convex and has multiple local optima.
def problem_1d(x):
    # Penalize solutions outside the feasible range [-8, 8]
    if x > 8.0 or x < -8.0:
        return 5.0
    return math.sin(x) - (x / 2.0)
Example output for the objective function, sampled at x values from -9.0 to 9.0 in increments of 0.01:
values = []
solution = -9.0
while solution < 9.0:
    values.append(problem_1d(solution))
    solution += 0.01
x_axis = numpy.linspace(-9.0, 9.0, len(values))
plt.plot(x_axis, values)
This class contains the optimizer code. It maintains the swarm, finds the global best particle, and moves every other particle towards it.
class Optimizer:
    def __init__(self, m, n, opt):
        self.opt = opt  # The objective function to minimize
        self.swarm = []
        for i in range(m):
            self.swarm.append(Particle(n, [-9.0, 9.0]))

    def get_best(self):
        gbest = None
        min_fit = float('+inf')
        for i in range(len(self.swarm)):
            # The objective used here is one-dimensional
            f = self.opt(self.swarm[i].pbest[0])
            if f < min_fit:
                gbest = i
                min_fit = f
        return gbest

    def optimize(self, steps):
        positions = [self.swarm[i].position[0] for i in range(len(self.swarm))]
        fitnesses = [self.opt(positions[i]) for i in range(len(positions))]
        for i in range(steps):
            positions, fitnesses = self.optimize_step()
        return positions, fitnesses

    def optimize_step(self):
        gbest = self.get_best()
        positions = []
        fitnesses = []
        for i in range(len(self.swarm)):
            if i != gbest:
                self.swarm[i].update_particle(self.swarm[gbest])
            pos_fit = self.opt(self.swarm[i].position[0])
            pbest_fit = self.opt(self.swarm[i].pbest[0])
            if pos_fit < pbest_fit:
                # Copy so that later in-place position updates do not move pbest
                self.swarm[i].pbest = self.swarm[i].position.copy()
                pbest_fit = pos_fit
            positions.append(self.swarm[i].pbest[0])
            fitnesses.append(pbest_fit)
        # Append the global best last so it can be highlighted when plotting
        gbest_fit = self.opt(self.swarm[gbest].position[0])
        positions.append(self.swarm[gbest].position[0])
        fitnesses.append(gbest_fit)
        return positions, fitnesses
optimizer = Optimizer(10, 1, problem_1d)
pos, fit = optimizer.optimize(0)
plt.plot(x_axis, values)
plt.plot(pos, fit, 'go')
pos, fit = optimizer.optimize(5)
plt.plot(x_axis, values)
plt.plot(pos, fit, 'go')
plt.plot(pos[-1], fit[-1], 'ko')
pos, fit = optimizer.optimize(5)
plt.plot(x_axis, values)
plt.plot(pos, fit, 'go')
plt.plot(pos[-1], fit[-1], 'ko')
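Finally, as a standalone sanity check, a larger swarm run for more steps should settle into the deep basin of the objective around x = 5π/3 ≈ 5.24, where sin(x) - x/2 attains its minimum on [-8, 8]. The sketch below restates the pieces above in condensed form; the swarm size of 20, the 100 steps, and the random seed are my own choices for illustration:

```python
import math
import numpy.random as nrand

def problem_1d(x):
    if x > 8.0 or x < -8.0:
        return 5.0
    return math.sin(x) - (x / 2.0)

class Particle:
    def __init__(self, n, r):
        self.n = n
        self.position = nrand.uniform(low=r[0], high=r[1], size=n)
        self.velocity = nrand.rand(n) / 100.0
        self.pbest = self.position.copy()

    def update_particle(self, gbest):
        r1 = nrand.rand(self.n)
        r2 = nrand.rand(self.n)
        social = 1.496180 * r1 * (gbest.position - self.position)
        cognitive = 1.496180 * r2 * (self.pbest - self.position)
        self.velocity = 0.729844 * (self.velocity + social + cognitive)
        self.position = self.position + self.velocity

class Optimizer:
    def __init__(self, m, n, opt):
        self.opt = opt
        self.swarm = [Particle(n, [-9.0, 9.0]) for _ in range(m)]

    def get_best(self):
        fits = [self.opt(p.pbest[0]) for p in self.swarm]
        return fits.index(min(fits))

    def optimize(self, steps):
        for _ in range(steps):
            gbest = self.get_best()
            for i, p in enumerate(self.swarm):
                if i != gbest:
                    p.update_particle(self.swarm[gbest])
                if self.opt(p.position[0]) < self.opt(p.pbest[0]):
                    p.pbest = p.position.copy()
        gbest = self.get_best()
        return self.swarm[gbest].pbest[0], self.opt(self.swarm[gbest].pbest[0])

nrand.seed(7)
best_x, best_f = Optimizer(20, 1, problem_1d).optimize(100)
print(best_x, best_f)  # the best position found and its fitness
```

Because the swarm's global best is re-selected from the personal bests at every step, the reported fitness can only improve over the run; with a swarm of this size, some particles almost always start inside the wide basin around 5π/3 and drag the rest of the swarm into it.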