In this lesson, Introduction to Differential Evolution Algorithm, we will study Differential Evolution (DE), one of the most popular population-based metaheuristic algorithms. Its underlying design is simple and easy to implement. DE falls under the umbrella of Evolutionary Algorithms (EAs), which are known for their intelligent search capabilities, low problem dependence, and robustness. DE has been used for a wide range of applications such as feature selection, parameter optimization, and pattern recognition, and it is particularly useful when dealing with high-dimensional, non-linear, and noisy problems. The algorithm's effectiveness lies in its ability to maintain a diverse population of candidate solutions and in its use of a mutation operator to explore the search space. DE is also highly customizable, allowing users to fine-tune the algorithm to suit their specific needs. As a beginner, learning DE is an excellent first step into the world of metaheuristics and optimization algorithms. This chapter gives you a friendly introduction to basic Differential Evolution and all of its main implementation details. If you're interested in Python and MATLAB implementations of DE, just click on the links.

Differential Evolution Vectors
Position update mechanism of DE (credits: Matt Eding)

Basic Design Principle:

Like any other evolutionary algorithm, DE starts the search process by initializing a random population of search agents. If the dimension of the search space is D and the number of search agents is N, then the initial population is generated as an N x D random matrix whose elements lie within the search bounds. After generating the population, the DE algorithm evaluates its fitness using the given objective function. For a minimization problem, search agents with lower objective function values are considered fitter than agents with higher objective function values; for a maximization problem, agents with higher objective function values are considered fitter. See how 50 randomly generated particles occupy the search space.

50 search particles in the search bounds [-5,5]
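The initialization step described above can be sketched in a few lines of NumPy. This is an illustrative sketch, not code from the article; the function and parameter names are my own.

```python
import numpy as np

# Sketch of DE population initialization: an N x D matrix of agents
# drawn uniformly within the search bounds. Names are illustrative.
def init_population(n_agents, dim, lower, upper, seed=None):
    """Return an n_agents x dim matrix with entries in [lower, upper]."""
    rng = np.random.default_rng(seed)
    return lower + (upper - lower) * rng.random((n_agents, dim))

# 50 search agents in a 2-D search space bounded by [-5, 5]
pop = init_population(50, 2, lower=-5.0, upper=5.0, seed=42)
```

Plotting the two columns of `pop` against each other reproduces a scatter like the figure above.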

Once a random population has been generated in the search space, it is time to choose individuals from the population and change their positions to look for better positions in terms of fitness values. DE uses the operators mutation, crossover, and selection to update the positions of the search agents. First, we will discuss the mutation operator in a little detail. The mutation operator, also referred to as the mutation strategy, can be considered the backbone of the Differential Evolution algorithm. It generates a trial vector by perturbing the parent vector with the scaled difference of two randomly chosen vectors from the population. Here we are discussing the mutation operator of the canonical Differential Evolution algorithm; more advanced mutation operators are available in the inventory of the Differential Evolution algorithm, but we will start with the basic one. Below, the mathematical equation for generating a trial vector using two random vectors from the population is given.

V_{i,j} = X_{i,j} + F\,(X_{r_{1},j} - X_{r_{2},j}) \\
\\
V_{i,j}: trial~vector, \quad X_{i,j}: parent~vector \\
F: scaling~factor, \quad X_{r_{1},j},\, X_{r_{2},j}: random~vectors
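The equation can be implemented directly in NumPy. Below is a minimal sketch, with illustrative names; it only assumes a population matrix like the one generated during initialization.

```python
import numpy as np

# Sketch of the canonical mutation step: V_i = X_i + F * (X_r1 - X_r2),
# where r1 and r2 are distinct random indices different from i.
# Function and variable names are illustrative, not from the article.
def mutate(pop, i, F, rng):
    candidates = [k for k in range(len(pop)) if k != i]
    r1, r2 = rng.choice(candidates, size=2, replace=False)
    return pop[i] + F * (pop[r1] - pop[r2])

rng = np.random.default_rng(0)
pop = rng.uniform(-5.0, 5.0, size=(10, 2))  # 10 agents in 2 dimensions
trial = mutate(pop, 0, F=0.8, rng=rng)
```

Note that with F = 0 the trial vector reduces to the parent vector, which makes the role of F as an amplification factor easy to see.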

Note that in the above equation, i denotes the index of the individual while j denotes the dimension under consideration. F is a scaling factor, and r1 and r2 are random indices that select two distinct individuals from the population. The scaling factor F controls the amplification of this difference vector and is usually set between 0 and 2. A higher F leads to a more extensive exploration of the search space but may also result in a slower convergence rate or even divergence. In short, F is a control parameter that needs to be tuned properly for good performance. After generating the trial vector, we mix in information from the parent vector using the crossover operator. The crossover operator in Differential Evolution combines the parent and trial vectors to produce the next generation of candidate solutions: it selects some components from the trial vector and the rest from the parent vector to create a new solution. The crossover rate parameter determines the probability of selecting a component from the trial vector over the parent vector. Several crossover operators are available, such as binomial crossover, exponential crossover, and uniform crossover. The most commonly used is the binomial crossover, which uses a binary mask to decide which genes are inherited from the trial vector and which from the parent vector. The choice of crossover operator depends on the specific problem being addressed and can be tuned for better performance. Here, the mathematical procedure for the binomial crossover operator is given.

\begin{equation*}
U_{i,j} = \begin{cases}
           V_{i,j} \quad &\text{if}~\, (rand_{j}(0,1)\leq CR) ~~\text{or}~~ (j=j_{rand}) \\
           X_{i,j} \quad &\text{otherwise}
           \end{cases}
\end{equation*}
\\
U_{i,j}: offspring~vector
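The binomial crossover above translates into a short vectorized routine. This is a sketch with illustrative names; the j_rand index in the equation guarantees that the offspring inherits at least one component from the trial vector.

```python
import numpy as np

# Sketch of binomial crossover: each component comes from the trial
# vector with probability CR; the forced j_rand index guarantees at
# least one trial component. Names are illustrative, not from the article.
def binomial_crossover(parent, trial, CR, rng):
    d = len(parent)
    mask = rng.random(d) <= CR    # rand_j(0,1) <= CR
    mask[rng.integers(d)] = True  # j = j_rand
    return np.where(mask, trial, parent)

rng = np.random.default_rng(0)
offspring = binomial_crossover(np.zeros(4), np.ones(4), CR=0.5, rng=rng)
```

With CR = 1 the offspring is an exact copy of the trial vector, and with CR = 0 it differs from the parent in only the single j_rand component.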

CR is the crossover rate, and in most cases its value is taken to be 0.5. A value of 0.5 means that the offspring vector will inherit, on average, 50% of its components from the parent vector and the remaining 50% from the trial vector. That is how DE changes the position of the search agents. The offspring vector is then evaluated using the objective function and assigned a fitness value. Now DE must choose whether to keep the offspring vector or the parent vector. This is where the selection mechanism comes in; the most popular is the greedy selection mechanism: the fitness values of the parent and offspring vectors are compared, and the vector with the better fitness survives. This process is iterated until a termination criterion is satisfied. The flow chart for basic Differential Evolution is given below.

Flow chart of the Differential Evolution
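Putting the three operators together with greedy selection gives the complete basic DE loop. The following is a minimal end-to-end sketch of the steps in the flow chart, assuming a minimization problem; all names, default parameter values, and the sphere test function are illustrative choices, not prescriptions from the article.

```python
import numpy as np

# End-to-end sketch of basic DE: initialization, mutation, binomial
# crossover, and greedy selection, repeated until max_iters is reached.
# Names and defaults are illustrative.
def differential_evolution(objective, lower, upper, n_agents=20,
                           F=0.8, CR=0.5, max_iters=200, seed=0):
    rng = np.random.default_rng(seed)
    dim = len(lower)
    pop = lower + (upper - lower) * rng.random((n_agents, dim))
    fitness = np.array([objective(x) for x in pop])
    for _ in range(max_iters):
        for i in range(n_agents):
            # Mutation: V_i = X_i + F * (X_r1 - X_r2)
            r1, r2 = rng.choice([k for k in range(n_agents) if k != i],
                                size=2, replace=False)
            trial = pop[i] + F * (pop[r1] - pop[r2])
            trial = np.clip(trial, lower, upper)  # keep within bounds
            # Binomial crossover with a guaranteed j_rand component
            mask = rng.random(dim) <= CR
            mask[rng.integers(dim)] = True
            offspring = np.where(mask, trial, pop[i])
            # Greedy selection (minimization): keep the fitter vector
            f_off = objective(offspring)
            if f_off <= fitness[i]:
                pop[i], fitness[i] = offspring, f_off
    best = np.argmin(fitness)
    return pop[best], fitness[best]

# Usage: minimize the sphere function in 2-D over [-5, 5]^2
sphere = lambda x: float(np.sum(x ** 2))
x_best, f_best = differential_evolution(sphere,
                                        lower=np.array([-5.0, -5.0]),
                                        upper=np.array([5.0, 5.0]))
```

Because selection is greedy, the best fitness in the population never gets worse from one generation to the next.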

Summary:

DE is a vector-based metaheuristic algorithm that draws inspiration from pattern search and genetic algorithms through its use of crossover and mutation. Its explicit updating equations take genetic algorithms a step further and make theoretical analysis possible. DE is a stochastic, self-organizing search algorithm that does not rely on gradient or derivative information of the objective function. In addition, DE uses real-valued vectors as solution strings, so no encoding and decoding is needed for your optimization problems. One of the great things about DE is its versatility and adaptability to different optimization problems. With numerous variants and implementation details, DE can be customized to suit specific application domains. DE has been successfully applied in many domains, such as engineering design, finance, robotics, machine learning, and artificial intelligence: it has been used in the design of antennas, the prediction of stock market trends, the development of autonomous robots, and the training of neural networks. This widespread application highlights its effectiveness in solving optimization problems across different domains.

If you're interested in Python and MATLAB implementations of DE, just click on the links. Enjoy learning basic DE, and read more about advanced DE variants and their applications.
