
Adaptive Hyperparameter Optimization for Deep Learning Models using Reinforcement Learning with Dynamic Exploration-Exploitation Balancing

Abstract

Deep learning models have achieved state-of-the-art performance in various domains, but their effectiveness heavily relies on the proper tuning of hyperparameters. Traditional hyperparameter optimization methods often suffer from high computational costs and limited adaptability to different datasets and model architectures. This paper proposes a novel adaptive hyperparameter optimization approach that leverages reinforcement learning (RL) with dynamic exploration-exploitation balancing. The RL agent learns to select optimal hyperparameter configurations based on the observed performance of the deep learning model. A key contribution is the dynamic adjustment of the exploration-exploitation trade-off, allowing the agent to efficiently explore the hyperparameter space while also exploiting promising regions. We evaluate our approach on several benchmark datasets and deep learning architectures, demonstrating its superior performance compared to existing hyperparameter optimization techniques in terms of accuracy, convergence speed, and computational efficiency. The results highlight the potential of adaptive RL-based methods for automating and improving the hyperparameter tuning process in deep learning.
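The abstract does not spell out the agent's exact formulation, so the following is only a minimal sketch of the general idea: each hyperparameter configuration is treated as an arm of a multi-armed bandit, the agent's reward is the observed validation score, and the exploration rate decays over the tuning budget as a simple stand-in for dynamic exploration-exploitation balancing. The configuration grid, the train_and_evaluate surrogate, and the linear epsilon schedule are illustrative assumptions, not the paper's method.

import math
import random

# Hypothetical discrete search space; the paper's actual hyperparameter space
# and RL state/action design are not specified in the abstract.
CONFIGS = [
    {"lr": lr, "batch_size": bs}
    for lr in (1e-4, 3e-4, 1e-3)
    for bs in (32, 64, 128)
]


def train_and_evaluate(config):
    # Toy surrogate objective standing in for actual model training; a real
    # run would train the deep learning model with `config` and return a
    # validation metric such as accuracy.
    score = 0.9 - 0.05 * abs(math.log10(config["lr"] / 3e-4))
    score -= abs(config["batch_size"] - 64) / 640
    return score + random.gauss(0, 0.01)


def tune(budget=50, eps_start=1.0, eps_end=0.05):
    """Bandit-style tuner: epsilon decays linearly over the budget, shifting
    the agent from exploring the space to exploiting promising configs."""
    value = {i: 0.0 for i in range(len(CONFIGS))}  # running mean reward per arm
    count = {i: 0 for i in range(len(CONFIGS))}

    for t in range(budget):
        eps = eps_start + (eps_end - eps_start) * t / max(budget - 1, 1)
        if random.random() < eps:
            arm = random.randrange(len(CONFIGS))   # explore: random configuration
        else:
            arm = max(value, key=value.get)        # exploit: best mean reward so far

        reward = train_and_evaluate(CONFIGS[arm])
        count[arm] += 1
        value[arm] += (reward - value[arm]) / count[arm]  # incremental mean update

    best = max(value, key=value.get)
    return CONFIGS[best], value[best]


if __name__ == "__main__":
    best_config, best_score = tune()
    print(best_config, round(best_score, 4))

More elaborate schedules (e.g., adapting epsilon to the variance of recent rewards) or a policy-gradient agent could replace the linear decay; the sketch only illustrates how observed performance feeds back into the selection of hyperparameter configurations.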


How to Cite

Anjali Vasishtha (2025). Adaptive Hyperparameter Optimization for Deep Learning Models using Reinforcement Learning with Dynamic Exploration-Exploitation Balancing. JANOLI International Journal of Computer Science and Engineering, Issue 2.