DE/best/1/bin #

Name #

DE/best/1/bin, Differential Evolution “best/1/bin” Variant

Taxonomy #

Differential Evolution (DE) is a population-based optimization algorithm that belongs to the field of Evolutionary Computation, which is a subfield of Computational Intelligence. DE is closely related to other Evolutionary Algorithms such as Genetic Algorithms (GA) and Evolution Strategies (ES).

  • Computational Intelligence
    • Evolutionary Computation
      • Evolutionary Algorithms
        • Genetic Algorithms (GA)
        • Evolution Strategies (ES)
        • Differential Evolution (DE)
          • DE/best/1/bin

Strategy #

DE/best/1/bin is a specific variant of the Differential Evolution algorithm that employs a unique mutation and crossover strategy to generate new candidate solutions. The algorithm maintains a population of candidate solutions, which are iteratively improved through a process of mutation, crossover, and selection.

Mutation #

In the mutation step, a mutant vector is created for each individual in the population. The mutant vector is generated by adding the weighted difference between two randomly selected population members to the best individual in the current population. This approach is denoted as “DE/best/1”, where “best” refers to the best individual being used as the base vector, and “1” indicates that one difference vector is used.
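As a sketch, the mutation step can be written in Python with NumPy (the function name and population layout are illustrative, not part of the source):

```python
import numpy as np

rng = np.random.default_rng(0)

def mutate_best_1(pop, best_idx, F):
    """DE/best/1 mutation: v_i = x_best + F * (x_r1 - x_r2)."""
    NP = len(pop)
    mutants = np.empty_like(pop)
    for i in range(NP):
        # Draw two distinct indices, both different from the target index i.
        r1, r2 = rng.choice([j for j in range(NP) if j != i],
                            size=2, replace=False)
        mutants[i] = pop[best_idx] + F * (pop[r1] - pop[r2])
    return mutants
```

Note that with F = 0 every mutant collapses onto the best individual, which makes the role of the scaled difference vector easy to see.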

Crossover #

After mutation, a crossover operation is performed between each individual and its corresponding mutant vector to create a trial vector. The “bin” in DE/best/1/bin refers to the binomial crossover strategy, where each component of the trial vector is independently inherited from either the target vector or the mutant vector based on a predefined crossover probability.
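A minimal sketch of binomial crossover, assuming NumPy vectors (the forced index j_rand is standard in DE and guarantees the trial differs from the target in at least one component):

```python
import numpy as np

rng = np.random.default_rng(1)

def binomial_crossover(target, mutant, CR):
    """Binomial crossover: each component comes from the mutant with
    probability CR; index j_rand is forced from the mutant so the trial
    vector always differs from the target in at least one component."""
    D = len(target)
    take_mutant = rng.random(D) <= CR
    take_mutant[rng.integers(D)] = True   # force j_rand from the mutant
    return np.where(take_mutant, mutant, target)
```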

Selection #

Finally, the selection step determines whether the trial vector or the original individual survives into the next generation. If the trial vector has a better fitness value than the original individual, it replaces the original individual in the population; otherwise, the original individual is retained.
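The selection step is a one-line greedy comparison; a sketch for minimization:

```python
def select(target, trial, f):
    """Greedy one-to-one selection for minimization: the trial vector
    replaces the target only if its fitness is at least as good."""
    return trial if f(trial) <= f(target) else target
```

Breaking ties in favor of the trial vector (the `<=`) is a common convention that lets the population drift across flat regions of the fitness landscape.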

Procedure #

Data Structures:

  • Population: A set of candidate solutions, each represented as a real-valued vector.
  • Mutant Vector: A vector created by adding the weighted difference between two population members to the best individual.
  • Trial Vector: A vector created through the crossover operation between an individual and its corresponding mutant vector.


Parameters:

  • Population Size (NP): The number of individuals in the population.
  • Crossover Probability (CR): The probability of inheriting a component from the mutant vector during crossover.
  • Scaling Factor (F): A parameter that controls the magnitude of the differential variation.


Steps:

  1. Initialize the population with NP individuals randomly distributed in the search space.
  2. While the termination criterion is not met, do:
    1. For each individual xi in the population, do:
      1. Select the best individual xbest from the current population.
      2. Randomly select two individuals xr1 and xr2 from the population, where the indices r1, r2, and i are mutually distinct.
      3. Create a mutant vector vi using the formula: vi = xbest + F * (xr1 - xr2).
      4. Create a trial vector ui by performing binomial crossover between xi and vi, where jrand is a randomly chosen component index that guarantees ui inherits at least one component from vi:
        1. For each component j in xi, do:
          1. If rand(0, 1) ≤ CR or j = jrand, set uij = vij.
          2. Otherwise, set uij = xij.
      5. Evaluate the fitness of the trial vector ui.
      6. If ui has a better fitness than xi, replace xi with ui in the population.
    2. If the termination criterion is met, stop and return the best solution found. Otherwise, continue to the next generation.
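The full procedure can be sketched as follows in Python with NumPy. Function and parameter names are illustrative; clipping trial vectors to the bounds is one common way to handle out-of-range components, though the pseudocode above does not prescribe it:

```python
import numpy as np

def de_best_1_bin(f, bounds, NP=30, F=0.5, CR=0.9, max_gens=200, seed=0):
    """Minimal DE/best/1/bin sketch (minimization).

    f      -- objective function mapping a 1-D array to a scalar
    bounds -- sequence of (low, high) pairs, one per dimension
    """
    rng = np.random.default_rng(seed)
    bounds = np.asarray(bounds, dtype=float)
    low, high = bounds[:, 0], bounds[:, 1]
    D = len(bounds)

    # Step 1: random initial population within the search space.
    pop = rng.uniform(low, high, size=(NP, D))
    fitness = np.array([f(x) for x in pop])

    for _ in range(max_gens):
        for i in range(NP):
            best = pop[np.argmin(fitness)]        # current best individual
            # DE/best/1 mutation: v = x_best + F * (x_r1 - x_r2)
            r1, r2 = rng.choice([j for j in range(NP) if j != i],
                                size=2, replace=False)
            v = best + F * (pop[r1] - pop[r2])
            # Binomial crossover with one forced index j_rand.
            take = rng.random(D) <= CR
            take[rng.integers(D)] = True
            u = np.clip(np.where(take, v, pop[i]), low, high)
            # Greedy selection.
            fu = f(u)
            if fu <= fitness[i]:
                pop[i], fitness[i] = u, fu

    i_best = np.argmin(fitness)
    return pop[i_best], fitness[i_best]
```

On a smooth unimodal function such as the sphere function, this variant typically converges in well under the default generation budget.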

Considerations #


Advantages:

  • Simple and straightforward to implement.
  • Requires few control parameters compared to other Evolutionary Algorithms.
  • Exhibits fast convergence and good performance on a wide range of optimization problems.


Disadvantages:

  • Performance can be sensitive to the choice of control parameters (NP, CR, F).
  • May converge prematurely or stagnate on complex multimodal problems.
  • Lacks an explicit mechanism for maintaining population diversity, which can lead to reduced exploration capabilities.

Heuristics #

Setting Population Size (NP) #

  • As a general rule, set NP to be 5 to 10 times the dimensionality of the problem.
  • For high-dimensional problems (e.g., 100 dimensions or more), consider using smaller population sizes to reduce computational overhead.
  • If the population size is too small, the algorithm may converge prematurely or have insufficient exploration. If too large, convergence may be slow, and computational cost will increase.

Choosing Crossover Probability (CR) #

  • CR controls how many components of the trial vector are inherited from the mutant vector. Higher values (e.g., 0.9) copy most components from the mutant, accelerating convergence toward the best individual (exploitation), while lower values (e.g., 0.1) change only a few components per trial, producing a slower, more conservative search (exploration).
  • A good starting point is to set CR between 0.5 and 0.9. Adjust based on the problem characteristics and desired balance between exploration and exploitation.
  • If the algorithm converges prematurely, try decreasing CR to encourage more exploration. If convergence is slow, try increasing CR to focus on exploitation.

Adjusting Scaling Factor (F) #

  • F controls the magnitude of the differential variation. Higher values (e.g., 1.0) result in larger step sizes and more exploration, while lower values (e.g., 0.1) result in smaller step sizes and more exploitation.
  • A common range for F is between 0.4 and 1.0. Start with a value around 0.5 and adjust based on the problem and desired balance between exploration and exploitation.
  • If the algorithm is not converging or is exploring too much, try reducing F. If the algorithm is converging prematurely or not exploring enough, try increasing F.

Termination Criteria #

  • Common termination criteria include reaching a maximum number of generations (iterations), achieving a satisfactory fitness value, or observing no improvement in the best solution for a specified number of generations.
  • The choice of termination criteria depends on the problem and the available computational resources. Adjust based on the desired trade-off between solution quality and computational time.
  • Consider implementing multiple termination criteria and stopping the algorithm when any one of them is met.
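Combining several criteria is straightforward: stop as soon as any one of them fires. A sketch (parameter names are illustrative):

```python
def should_stop(gen, max_gens, best_f, target_f, stall_gens, max_stall):
    """Stop when ANY criterion is met: the generation budget is exhausted,
    a satisfactory fitness has been reached, or the best solution has not
    improved for max_stall consecutive generations."""
    return gen >= max_gens or best_f <= target_f or stall_gens >= max_stall
```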