Adaptive Random Search #
Name #
Adaptive Random Search, ARS
Taxonomy #
Adaptive Random Search is a stochastic optimization algorithm that falls under the broader field of Computational Intelligence. It is closely related to other random search techniques such as Pure Random Search and Adaptive Step Size Random Search.
- Computational Intelligence
  - Stochastic Optimization
    - Random Search
      - Pure Random Search
      - Adaptive Step Size Random Search
      - Adaptive Random Search
Strategy #
Adaptive Random Search is a global optimization algorithm that uses a simple yet effective strategy to find the optimal solution in a search space. The algorithm starts by generating a random initial solution and evaluating its fitness. In each iteration, a new solution is generated by adding a random perturbation to the current best solution. The step size of the perturbation is adapted based on the success rate of the previous iterations.
Exploration and Exploitation #
The algorithm balances exploration and exploitation by adjusting the step size of the perturbation. When recent iterations frequently produce improvements, the step size is increased to encourage broader exploration. When improvements are rare, the step size is decreased to focus the search around the current best solution, promoting exploitation.
Step Size Adaptation #
The step size adaptation mechanism is the key feature of Adaptive Random Search. It allows the algorithm to efficiently explore the search space and converge towards the optimal solution. The adaptation is based on the success rate of the previous iterations, ensuring that the algorithm spends more time exploring promising regions while quickly moving away from unproductive areas.
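As a concrete illustration, the adaptation rule can be written as a small Python function. This is a sketch only; the names follow the Procedure and Parameters sections below and are not part of any library:

```python
def adapt_step_size(step_size, success_rate, success_threshold=0.2, step_factor=2.0):
    """Grow the step size when improvements are frequent, shrink it otherwise."""
    if success_rate > success_threshold:
        return step_size * step_factor  # high success rate: widen the search
    return step_size / step_factor      # low success rate: refine around the best solution
```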
Procedure #
- Initialize:
  - Generate a random initial solution `x_best`
  - Evaluate the fitness of `x_best`, `f(x_best)`
  - Set the initial step size `step_size`
  - Set the success rate threshold `success_threshold`
  - Set the step size adjustment factor `step_factor`
  - Set the iteration counter `iter = 0` and the success counter `success_count = 0`
- While stopping criteria not met:
  - Generate a new solution `x_new` by adding a random perturbation to `x_best`: `x_new = x_best + step_size * random_vector()`
  - Evaluate the fitness of `x_new`, `f(x_new)`
  - If `f(x_new)` is better than `f(x_best)`:
    - Update `x_best = x_new` and `f(x_best) = f(x_new)`
    - Increment the success counter `success_count`
  - Update the iteration counter `iter += 1`
  - If `iter % adaptation_interval == 0`:
    - Calculate the success rate `success_rate = success_count / adaptation_interval`
    - If `success_rate > success_threshold`: increase the step size, `step_size *= step_factor`
    - Else: decrease the step size, `step_size /= step_factor`
    - Reset the success counter `success_count = 0`
- Return the best solution found, `x_best`, and its fitness `f(x_best)`
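The procedure above translates into a short program. The following is a minimal Python sketch, assuming minimization of a real-valued objective over a box-constrained search space; the function name and the interpretation of `step_size` as a fraction of each dimension's range are illustrative choices, not a reference implementation:

```python
import random

def adaptive_random_search(objective, bounds, max_iter=1000, step_size=0.1,
                           success_threshold=0.2, step_factor=2.0,
                           adaptation_interval=10):
    """Minimize `objective` over `bounds`, a list of (low, high) pairs.

    `step_size` is interpreted as a fraction of each dimension's range.
    """
    # Generate and evaluate a random initial solution.
    x_best = [random.uniform(lo, hi) for lo, hi in bounds]
    f_best = objective(x_best)
    success_count = 0
    for iteration in range(1, max_iter + 1):
        # Perturb x_best with a random vector scaled by the current step size,
        # clamping each coordinate back into its bounds.
        x_new = [min(max(x + step_size * (hi - lo) * random.uniform(-1.0, 1.0), lo), hi)
                 for x, (lo, hi) in zip(x_best, bounds)]
        f_new = objective(x_new)
        if f_new < f_best:  # minimization: lower fitness is better
            x_best, f_best = x_new, f_new
            success_count += 1
        if iteration % adaptation_interval == 0:
            # Adapt the step size based on the recent success rate.
            success_rate = success_count / adaptation_interval
            if success_rate > success_threshold:
                step_size *= step_factor  # frequent improvements: explore more widely
            else:
                step_size /= step_factor  # rare improvements: exploit around x_best
            success_count = 0
    return x_best, f_best

# Example: minimize the 2-D sphere function f(x) = x1^2 + x2^2.
best, fitness = adaptive_random_search(lambda x: sum(v * v for v in x),
                                       [(-5.0, 5.0)] * 2, max_iter=2000)
print(best, fitness)
```

Clamping perturbed coordinates back into the bounds is one of several reasonable choices here; resampling out-of-bounds points until they fall inside the search space is another.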
Data Structures #
- `x_best`: The current best solution
- `f(x_best)`: The fitness of the current best solution
- `x_new`: A new solution generated by adding a perturbation to `x_best`
- `f(x_new)`: The fitness of the new solution
- `step_size`: The current step size for the perturbation
- `success_count`: The count of successful iterations
- `iter`: The iteration counter
Parameters #
- `success_threshold`: The threshold for the success rate to trigger step size adaptation
- `step_factor`: The factor by which the step size is adjusted
- `adaptation_interval`: The number of iterations between step size adaptations
Considerations #
Advantages #
- Simplicity: Adaptive Random Search is easy to understand and implement, making it accessible to a wide range of users.
- Flexibility: The algorithm can be applied to various optimization problems without requiring problem-specific knowledge or modifications.
- Efficiency: The step size adaptation mechanism helps the algorithm converge quickly towards good solutions, keeping the number of function evaluations relatively low.
Disadvantages #
- Lack of guarantee: As a stochastic optimization algorithm, Adaptive Random Search does not provide any guarantee of finding the global optimum.
- Sensitivity to parameters: The performance of the algorithm can be sensitive to the choice of parameters, such as the success threshold and step size adjustment factor.
- Scalability: The algorithm may struggle with high-dimensional problems or problems with complex landscapes, as the random perturbations become less effective in exploring the search space.
Heuristics #
Step Size Initialization #
- Start with a step size that is a fraction of the search space range, e.g., 10% of the difference between the upper and lower bounds of each dimension.
- If the problem has different scales across dimensions, consider normalizing the search space or using dimension-specific step sizes, as sketched below.
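For example, a small hypothetical helper that derives dimension-specific step sizes from the search space bounds, following the 10% heuristic above:

```python
def initial_step_sizes(bounds, fraction=0.1):
    """Per-dimension step sizes as a fraction (here 10%) of each dimension's range."""
    return [fraction * (hi - lo) for lo, hi in bounds]

# A 2-D search space with very different scales per dimension:
print(initial_step_sizes([(-5.0, 5.0), (0.0, 100.0)]))  # -> [1.0, 10.0]
```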
Success Threshold #
- A typical value for the success threshold is 0.2, meaning that if more than 20% of the iterations in an adaptation interval are successful, the step size is increased.
- Adjust the threshold based on the desired balance between exploration and exploitation. A higher threshold will lead to more exploitation, while a lower threshold will encourage exploration.
Step Size Adjustment Factor #
- A common value for the step size adjustment factor is 2, meaning that the step size is doubled when the success rate is above the threshold and halved when it is below.
- Larger factors will lead to more aggressive step size changes, while smaller factors will result in more gradual adaptations.
Adaptation Interval #
- The adaptation interval determines how frequently the step size is adapted based on the success rate.
- A typical value is 10 iterations, meaning that the step size is adapted after every 10 iterations.
- Longer intervals will lead to more stable adaptations, while shorter intervals will allow for faster adaptation to changes in the search landscape.
Stopping Criteria #
- Define a maximum number of iterations or function evaluations based on the available computational budget.
- Alternatively, stop the algorithm when the step size falls below a certain threshold, indicating that the search has converged to a local optimum.
- Consider implementing a stagnation detection mechanism, where the algorithm stops if the best solution does not improve for a specified number of iterations; a sketch combining these criteria follows below.
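For example, a sketch only, where all names and default values are assumptions and `stagnation` counts the iterations since the last improvement:

```python
def should_stop(iteration, step_size, stagnation, max_iter=10000,
                min_step_size=1e-8, max_stagnation=500):
    """Combine the three stopping criteria described above."""
    return (iteration >= max_iter             # computational budget exhausted
            or step_size < min_step_size      # step size suggests convergence
            or stagnation >= max_stagnation)  # best solution has not improved
```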