DE/rand/1/exp #

Name #

DE/rand/1/exp (Differential Evolution, rand/1/exp)

Taxonomy #

DE/rand/1/exp is a variant of Differential Evolution (DE), a population-based optimization algorithm from the field of Evolutionary Computation, itself a subfield of Computational Intelligence.

  • Computational Intelligence
    • Evolutionary Computation
      • Evolutionary Algorithms
        • Differential Evolution
          • DE/rand/1/exp

Strategy #

DE/rand/1/exp operates on a population of candidate solutions, iteratively improving them through a process of mutation, crossover, and selection. The algorithm maintains a population of fixed size, where each individual represents a potential solution to the optimization problem.

Mutation #

In the mutation step, DE/rand/1/exp creates a mutant vector for each individual in the population. The mutant vector is generated by adding the weighted difference between two randomly selected individuals to a third randomly selected individual. This mutation strategy is denoted as “rand/1”, indicating that the base vector is randomly chosen and one difference vector is used.
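
A minimal sketch of the rand/1 mutation step in Python, assuming NumPy and a population stored as an NP-by-D array (the function name and random-generator handling are illustrative, not prescribed by the algorithm):

```python
import numpy as np

def rand_1_mutation(pop, i, F, rng=np.random.default_rng()):
    """Build a rand/1 mutant for target index i: v = x_r1 + F * (x_r2 - x_r3)."""
    NP = len(pop)
    # Three distinct indices, all different from the target index i
    candidates = [r for r in range(NP) if r != i]
    r1, r2, r3 = rng.choice(candidates, size=3, replace=False)
    return pop[r1] + F * (pop[r2] - pop[r3])
```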

Crossover #

After mutation, the algorithm performs crossover between each individual (target vector) and its corresponding mutant vector to create a trial vector. In DE/rand/1/exp, crossover uses the exponential scheme: starting at a randomly chosen index, consecutive components are copied from the mutant vector into the trial vector, wrapping around the end of the vector, until a uniform random number exceeds the crossover probability (CR) or all D components have been copied. The remaining components are taken from the target vector, so the trial vector inherits a single contiguous (circular) block from the mutant.
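
A sketch of the exponential crossover in Python (NumPy assumed; `x` is the target vector, `v` the mutant):

```python
import numpy as np

def exponential_crossover(x, v, CR, rng=np.random.default_rng()):
    """Copy a contiguous (circular) block of components from mutant v into a copy of target x."""
    D = len(x)
    u = x.copy()
    j = rng.integers(D)   # random starting index
    L = 0                 # number of components copied so far
    while True:
        u[j] = v[j]
        j = (j + 1) % D   # wrap around the end of the vector
        L += 1
        if L >= D or rng.random() >= CR:
            break
    return u
```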

Selection #

Finally, the selection step determines whether the trial vector replaces the target vector in the population for the next generation. DE/rand/1/exp uses a greedy, one-to-one selection scheme: the trial vector replaces the target vector only if its fitness is at least as good, with ties conventionally resolved in favor of the trial vector.
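
The selection step is a simple one-to-one comparison; a sketch for minimization:

```python
def greedy_selection(target, f_target, trial, f_trial):
    """Return the survivor and its fitness: the trial replaces the target only if it is at least as good."""
    if f_trial <= f_target:
        return trial, f_trial
    return target, f_target
```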

The algorithm continues this process of mutation, crossover, and selection for a predefined number of generations or until a satisfactory solution is found.

Procedure #

  1. Initialize the population
    1. Set the population size (NP) and problem dimensionality (D)
    2. Randomly initialize NP individuals within the search space bounds
  2. While stopping criteria not met, do:
    1. For each individual (target vector) in the population, do:
      1. Mutation:
        1. Randomly select three distinct individuals (r1, r2, r3) from the population, different from the target vector
        2. Calculate the mutant vector: v = x_r1 + F * (x_r2 - x_r3), where F is the mutation scale factor
      2. Crossover (exponential):
        1. Randomly select a starting index k from [1, D]
        2. Initialize the trial vector u as a copy of the target vector x
        3. Set j = k and L = 0, then repeat:
          1. Set u_j = v_j
          2. Set j = (j mod D) + 1 and L = L + 1
          3. Stop when rand() >= CR or L == D; the remaining components of u keep their values from x
      3. Selection:
        1. Evaluate the fitness of the trial vector u
        2. If u has an equal or better fitness than x, replace x with u in the population
    2. Update the best solution found so far
  3. Return the best solution found
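
Putting the steps together, the sketch below is a compact, self-contained implementation of this procedure in Python for box-constrained minimization (NumPy assumed; the sphere objective, parameter defaults, and clipping-based boundary repair are illustrative choices, not part of the canonical algorithm):

```python
import numpy as np

def de_rand_1_exp(f, bounds, NP=50, F=0.5, CR=0.9, max_gen=1000, seed=None):
    """Minimize f over box constraints using DE/rand/1/exp.

    f      : callable mapping a length-D array to a scalar fitness value
    bounds : sequence of (lower, upper) pairs, one per decision variable
    """
    rng = np.random.default_rng(seed)
    bounds = np.asarray(bounds, dtype=float)
    D = bounds.shape[0]
    lower, upper = bounds[:, 0], bounds[:, 1]

    # 1. Initialize NP individuals uniformly within the bounds
    pop = lower + rng.random((NP, D)) * (upper - lower)
    fit = np.array([f(ind) for ind in pop])

    for _ in range(max_gen):                      # 2. main generational loop
        for i in range(NP):
            # 2.1 Mutation (rand/1): v = x_r1 + F * (x_r2 - x_r3)
            candidates = [r for r in range(NP) if r != i]
            r1, r2, r3 = rng.choice(candidates, size=3, replace=False)
            v = pop[r1] + F * (pop[r2] - pop[r3])

            # 2.2 Exponential crossover: copy a circular block from the mutant
            u = pop[i].copy()
            j = rng.integers(D)
            L = 0
            while True:
                u[j] = v[j]
                j = (j + 1) % D
                L += 1
                if L >= D or rng.random() >= CR:
                    break

            # Repair boundary violations by clipping to the nearest bound
            u = np.clip(u, lower, upper)

            # 2.3 Greedy selection (ties resolved in favor of the trial vector)
            fu = f(u)
            if fu <= fit[i]:
                pop[i], fit[i] = u, fu

    # 3. Return the best solution found
    best = int(np.argmin(fit))
    return pop[best], fit[best]


if __name__ == "__main__":
    # Example: minimize the 10-dimensional sphere function
    sphere = lambda x: float(np.sum(x ** 2))
    solution, fitness = de_rand_1_exp(sphere, [(-5.0, 5.0)] * 10, seed=1)
    print(solution, fitness)
```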

Data Structures #

  • Population: An array of NP individuals, each representing a candidate solution
  • Individual: A vector of D real-valued components, representing a point in the search space
  • Mutant Vector: A vector created during the mutation step, used to generate the trial vector
  • Trial Vector: A vector created during the crossover step, potentially replacing the target vector in the next generation

Parameters #

  • NP: Population size
  • D: Problem dimensionality (number of decision variables)
  • F: Mutation scale factor, controls the magnitude of the differential variation
  • CR: Crossover probability, determines the fraction of components inherited from the mutant vector

Considerations #

Advantages #

  • Simple and easy to implement
  • Requires few control parameters
  • Effective in solving a wide range of optimization problems
  • Exhibits good exploration and exploitation capabilities

Disadvantages #

  • Performance may deteriorate in high-dimensional search spaces
  • Convergence speed can be slow for complex problems
  • Sensitive to the choice of control parameters (F and CR)

Heuristics #

Population Size (NP) #

  • As a rule of thumb, set NP to be about 10 times the dimensionality of the problem
  • For more complex problems, a larger population size may be required to maintain diversity and prevent premature convergence

Mutation Scale Factor (F) #

  • Typically chosen from the range [0.4, 1.0]
  • Smaller values of F lead to more exploitative search, while larger values promote exploration
  • A good starting point is F = 0.5, which can be adjusted based on the problem characteristics

Crossover Probability (CR) #

  • CR is a probability in the range [0.0, 1.0]
  • Higher values of CR cause the trial vector to inherit more components from the mutant, producing larger, more exploratory moves; lower values keep the trial vector close to the target, which tends to suit separable problems
  • With exponential crossover, CR controls the expected length of the copied block (roughly (1 - CR^D) / (1 - CR) components), so a high value such as CR = 0.9 is a common starting point that can be adapted to the problem

Stopping Criteria #

  • Set a maximum number of generations based on the available computational resources and problem complexity
  • Alternatively, monitor the progress of the best solution and stop if no significant improvement is observed over a certain number of generations
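
A minimal sketch of such an improvement-based test, assuming `history` holds the best fitness value recorded at each generation (minimization; the patience and tolerance values are illustrative):

```python
def should_stop(history, patience=50, tol=1e-8):
    """Stop when the best fitness has improved by no more than tol over the last `patience` generations."""
    if len(history) <= patience:
        return False
    return history[-patience - 1] - history[-1] <= tol
```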

Boundary Constraints #

  • Ensure that the initial population is generated within the search space bounds
  • After mutation and crossover, check if the trial vector violates any boundary constraints and repair it (see the sketch after this list) by either:
    • Resetting the violating components to the nearest boundary value
    • Randomly reinitializing the violating components within the bounds
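
Both repair strategies are straightforward to express with NumPy (a sketch; `lower` and `upper` are assumed to be arrays of per-dimension bounds with the same shape as the trial vector):

```python
import numpy as np

def repair_clip(u, lower, upper):
    """Reset violating components to the nearest boundary value."""
    return np.clip(u, lower, upper)

def repair_reinit(u, lower, upper, rng=np.random.default_rng()):
    """Randomly reinitialize violating components within the bounds."""
    u = u.copy()
    mask = (u < lower) | (u > upper)
    u[mask] = lower[mask] + rng.random(mask.sum()) * (upper[mask] - lower[mask])
    return u
```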

Problem-Specific Adaptations #

  • Consider incorporating problem-specific knowledge into the mutation or crossover operators to improve the search efficiency
  • Experiment with different mutation strategies (e.g., DE/best/1/exp, DE/current-to-best/1/exp) to balance exploration and exploitation based on the problem characteristics
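
For reference, the strategies mentioned above differ only in how the mutant vector is formed; a sketch of the corresponding formulas (`best` is the current best individual, `x_i` the target, and `x_r1`, `x_r2` are randomly selected, distinct individuals):

```python
def mutant_best_1(best, x_r1, x_r2, F):
    """DE/best/1: v = x_best + F * (x_r1 - x_r2)."""
    return best + F * (x_r1 - x_r2)

def mutant_current_to_best_1(x_i, best, x_r1, x_r2, F):
    """DE/current-to-best/1: v = x_i + F * (x_best - x_i) + F * (x_r1 - x_r2)."""
    return x_i + F * (best - x_i) + F * (x_r1 - x_r2)
```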