Enhancing Gray Wolf Optimization with Recurrent Neural Networks for Improved Optimization Performance


Dr. A. D. C. Navin Dhinnesh
Dr. S. Suganthi
Dr. D. Usha
Dr. D. Illakiam

Abstract

The optimization of complex problems is a challenging task that often requires advanced techniques to achieve accurate and efficient solutions. This paper presents a novel approach that integrates the Gray Wolf Optimization (GWO) algorithm with Recurrent Neural Networks (RNNs) to improve the performance and efficiency of solving optimization problems. The proposed Modified Gray Wolf Optimization Algorithm using Recurrent Neural Networks leverages the learning and memory capabilities of RNNs to strengthen the exploration and exploitation phases of GWO. In the proposed approach, each candidate solution is encoded and represented with an RNN architecture. The RNN processes the solution representation recurrently over a sequence of time steps, capturing dependencies and dynamics within the optimization problem. Extensive experiments were conducted on benchmark functions and real-world optimization problems to evaluate the proposed algorithm. The results demonstrate that the Modified Gray Wolf Optimization Algorithm using Recurrent Neural Networks outperforms the conventional GWO algorithm and other state-of-the-art optimization algorithms in terms of convergence speed and solution quality. Furthermore, this study opens avenues for future research, including the exploration of different RNN architectures, parameter settings, and their applications in various optimization domains.
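
The abstract does not specify how the RNN is coupled to the GWO update, so the following is only a minimal sketch under assumptions: a small, untrained Elman-style recurrent cell keeps one hidden state per wolf, reads that wolf's rank and recent fitness improvement at each iteration, and outputs a gate that modulates the standard GWO exploration coefficient "a". The function and parameter names (sphere, ElmanCell, rnn_gwo, the (0.5 + gate) modulation) are illustrative choices, not the authors' published method.

# Sketch only: the paper's implementation is not published; the RNN coupling
# below (an untrained Elman cell modulating GWO's coefficient "a" per wolf
# from its recent fitness history) is an assumption made for illustration.
import numpy as np

rng = np.random.default_rng(0)

def sphere(x):
    """Assumed benchmark objective: the sphere function (minimization)."""
    return float(np.sum(x ** 2))

class ElmanCell:
    """Minimal recurrent cell: h_t = tanh(Wx x_t + Wh h_{t-1} + b)."""
    def __init__(self, n_in, n_hidden):
        self.Wx = rng.normal(0, 0.3, (n_hidden, n_in))
        self.Wh = rng.normal(0, 0.3, (n_hidden, n_hidden))
        self.Wo = rng.normal(0, 0.3, (1, n_hidden))
        self.b = np.zeros(n_hidden)

    def step(self, x, h):
        h = np.tanh(self.Wx @ x + self.Wh @ h + self.b)
        gate = 1.0 / (1.0 + np.exp(-(self.Wo @ h)[0]))  # scalar in (0, 1)
        return h, gate

def rnn_gwo(obj, dim=10, n_wolves=20, iters=200):
    X = rng.uniform(-5, 5, (n_wolves, dim))
    cell = ElmanCell(n_in=2, n_hidden=8)
    H = np.zeros((n_wolves, 8))            # one hidden state per wolf
    prev_fit = np.array([obj(x) for x in X])

    for t in range(iters):
        fit = np.array([obj(x) for x in X])
        order = np.argsort(fit)
        leaders = X[order[:3]].copy()      # alpha, beta, delta wolves
        a_base = 2.0 * (1.0 - t / iters)   # standard GWO schedule: 2 -> 0

        for i in range(n_wolves):
            # RNN input: normalized rank and recent improvement of this wolf.
            feat = np.array([np.where(order == i)[0][0] / n_wolves,
                             np.tanh(prev_fit[i] - fit[i])])
            H[i], gate = cell.step(feat, H[i])
            a = a_base * (0.5 + gate)      # per-wolf modulation (assumed design)

            # Standard GWO position update driven by the three leaders.
            X_new = np.zeros(dim)
            for leader in leaders:
                r1, r2 = rng.random(dim), rng.random(dim)
                A, C = 2 * a * r1 - a, 2 * r2
                D = np.abs(C * leader - X[i])
                X_new += leader - A * D
            X[i] = X_new / 3.0
        prev_fit = fit

    best = min(range(n_wolves), key=lambda i: obj(X[i]))
    return X[best], obj(X[best])

if __name__ == "__main__":
    x_best, f_best = rnn_gwo(sphere)
    print(f"best fitness found: {f_best:.6f}")

In this sketch the recurrent cell's weights are fixed random values; the paper's learning component would presumably train or adapt them across iterations, which is where the reported gains in convergence speed and solution quality would come from.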
