ISSN: 3048-6815

Optimizing Metaheuristic Algorithms via Reinforcement Learning-Driven Parameter Adaptation for Enhanced Global Search Capabilities

Abstract

Metaheuristic algorithms, celebrated for their efficacy in solving complex optimization problems, often rely on manually tuned parameters. The performance of these algorithms is highly sensitive to those parameters, and suboptimal settings can lead to premature convergence or inefficient exploration of the search space. This paper introduces a novel framework for dynamically adapting metaheuristic algorithm parameters using reinforcement learning (RL). Specifically, we employ Q-learning to train an agent that adjusts the parameters of Particle Swarm Optimization (PSO) and the Genetic Algorithm (GA) during the optimization process. The state space is defined by the current search progress and solution quality, while the action space consists of discrete parameter adjustments. The reward function is designed to balance exploration and exploitation based on the algorithm's observed performance. We evaluate the proposed framework on a suite of benchmark optimization problems, demonstrating significant improvements in solution quality, convergence speed, and robustness compared to static parameter settings and other adaptive approaches. The results indicate that RL-driven parameter adaptation offers a promising avenue for enhancing the global search capabilities of metaheuristic algorithms.
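The abstract's scheme can be illustrated with a minimal sketch: a tabular Q-learning agent that adjusts the PSO inertia weight during a run. The details below (a two-state stagnation signal, three discrete actions on the inertia weight, a relative-improvement reward, and the sphere benchmark) are illustrative assumptions, not the paper's actual design.

```python
import random

def sphere(x):
    """Benchmark objective: sum of squares, global minimum 0 at the origin."""
    return sum(xi * xi for xi in x)

def pso_with_q_adaptation(dim=5, n_particles=20, iters=200, seed=0):
    """PSO whose inertia weight is adapted online by a tiny Q-learning agent.

    Hypothetical discretization: state 0 = improving, state 1 = stagnating;
    actions = {decrease, keep, increase} the inertia weight w.
    """
    rng = random.Random(seed)
    actions = (-0.1, 0.0, 0.1)          # discrete adjustments to w
    q = {}                               # Q-table: (state, action) -> value
    alpha, gamma, eps = 0.5, 0.9, 0.2    # learning rate, discount, epsilon

    # Initialize the swarm.
    pos = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_f = [sphere(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_f[i])
    gbest, gbest_f = pbest[g][:], pbest_f[g]

    w, c1, c2 = 0.7, 1.5, 1.5
    state = 0
    for _ in range(iters):
        # Epsilon-greedy choice of an inertia-weight adjustment.
        if rng.random() < eps:
            a = rng.randrange(len(actions))
        else:
            a = max(range(len(actions)), key=lambda k: q.get((state, k), 0.0))
        w = min(0.9, max(0.3, w + actions[a]))  # clamp w to a sane range

        prev_best = gbest_f
        for i in range(n_particles):
            for d in range(dim):
                vel[i][d] = (w * vel[i][d]
                             + c1 * rng.random() * (pbest[i][d] - pos[i][d])
                             + c2 * rng.random() * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            f = sphere(pos[i])
            if f < pbest_f[i]:
                pbest[i], pbest_f[i] = pos[i][:], f
                if f < gbest_f:
                    gbest, gbest_f = pos[i][:], f

        # Reward = relative improvement of the global best this iteration.
        reward = (prev_best - gbest_f) / (abs(prev_best) + 1e-12)
        next_state = 0 if gbest_f < prev_best else 1
        best_next = max(q.get((next_state, k), 0.0) for k in range(len(actions)))
        q[(state, a)] = q.get((state, a), 0.0) + alpha * (
            reward + gamma * best_next - q.get((state, a), 0.0))
        state = next_state
    return gbest_f
```

The same pattern extends to the GA case by replacing the action set with, say, mutation-rate or crossover-rate adjustments; richer state features (diversity, iteration budget used) and function-approximation variants of Q-learning are natural extensions.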


How to Cite

Pradeep Upadhyay (2025). Optimizing Metaheuristic Algorithms via Reinforcement Learning-Driven Parameter Adaptation for Enhanced Global Search Capabilities. JANOLI International Journal of Artificial Intelligence and its Applications, Issue 3.