Welcome to the world of Particle Swarm Optimization (PSO)! This clever algorithm draws inspiration from the collective behavior of bird flocks and fish schools. By mimicking how individuals in a swarm share information and learn from each other, PSO has proven to be a powerful tool in many fields. In this lesson, we will walk through the MATLAB code for the Particle Swarm Optimization (PSO) algorithm. From engineering to data analysis, PSO helps us find optimal solutions to difficult problems. Let's dive in and explore the simplicity and effectiveness of Particle Swarm Optimization together!

Here we present the MATLAB code for the Particle Swarm Optimization (PSO) algorithm. You just need to define your own objective function in the given code. The code is ready to run and can be adapted with slight modifications. If you have any difficulty, leave a comment below. Happy learning!

% Objective function: save this in a separate file named sphere_func.m
% (or, in MATLAB R2016b and later, place it at the end of the script file)
function result = sphere_func(x)
    % x is a vector of input values
    % result is the value of the sphere function at x
    
    % Compute the sum of squares of the elements of x
    result = sum(x.^2);
end

% Main script
clc;
clear;

% PSO Parameters
pop_size = 50;
num_vars = 30;
lb = -100;
ub = 100;
num_generations = 1000;
w = 0.5;  % Inertia weight
c1 = 1.5;  % Cognitive coefficient
c2 = 1.5;  % Social coefficient

% Initialize population, velocity, personal best, and global best
population = lb + (ub - lb) * rand(pop_size, num_vars);
velocity = zeros(pop_size, num_vars);
pbest = population;
pbest_value = inf * ones(pop_size, 1);
for i = 1:pop_size
    pbest_value(i) = sphere_func(population(i, :));
end
[gbest_value, gbest_index] = min(pbest_value);
gbest = population(gbest_index, :);

% Main loop
for gen = 1:num_generations
    for i = 1:pop_size
        % Update velocity
        velocity(i, :) = w * velocity(i, :) + ...
                         c1 * rand(1, num_vars) .* (pbest(i, :) - population(i, :)) + ...
                         c2 * rand(1, num_vars) .* (gbest - population(i, :));
        
        % Update position
        population(i, :) = population(i, :) + velocity(i, :);
        
        % Boundary check
        population(i, :) = max(population(i, :), lb);
        population(i, :) = min(population(i, :), ub);
        
        % Update personal best
        current_value = sphere_func(population(i, :));
        if current_value < pbest_value(i)
            pbest(i, :) = population(i, :);
            pbest_value(i) = current_value;
        end
    end
    
    % Update global best
    [current_gbest_value, current_gbest_index] = min(pbest_value);
    if current_gbest_value < gbest_value
        gbest = pbest(current_gbest_index, :);
        gbest_value = current_gbest_value;
    end
    
    % Display current best value
    fprintf('Generation %d: Best Value = %f\n', gen, gbest_value);
end

% Final evaluation
fprintf('Final Best Value = %f\n', gbest_value);
disp('Best Solution:');
disp(gbest);
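For reference, the velocity and position updates inside the main loop implement the standard PSO equations:

```latex
v_i^{t+1} = w\,v_i^{t} + c_1 r_1 \left(p_i - x_i^{t}\right) + c_2 r_2 \left(g - x_i^{t}\right)
\qquad
x_i^{t+1} = x_i^{t} + v_i^{t+1}
```

where x_i is the position of particle i, v_i its velocity, p_i its personal best, g the global best, and r_1, r_2 are uniform random numbers in [0, 1], drawn independently for each dimension.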

As you can see, the PSO algorithm is easy to follow and implement. The PSO algorithm is used to solve complex optimization problems in fields ranging from Artificial Intelligence to Machine Learning. Read more about PSO, here.
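To adapt the code to your own problem, you only need to swap out the objective function. As an illustration, here is a sketch using the Rastrigin benchmark function (the function name and bounds below are an example, not part of the original code):

```matlab
% Save as rastrigin_func.m -- an alternative benchmark objective
function result = rastrigin_func(x)
    % Rastrigin function: f(x) = 10*n + sum(x_i^2 - 10*cos(2*pi*x_i))
    % Highly multimodal; global minimum f = 0 at x = 0
    n = numel(x);
    result = 10 * n + sum(x.^2 - 10 * cos(2 * pi * x));
end
```

Then replace both calls to sphere_func in the script with rastrigin_func, and set lb = -5.12 and ub = 5.12, the bounds conventionally used for this benchmark.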

In case you want to implement the PSO algorithm in a Python environment, click here for the Python code. If you’re interested in the original research article published by Kennedy and Eberhart in 1995, please see here.
