Grey Wolf Optimization algorithm based on Cauchy-Gaussian mutation and improved search strategy


Scientific Reports volume 12, Article number: 18961 (2022)


The traditional Grey Wolf Optimization algorithm (GWO) has received widespread attention due to its strong convergence performance, few parameters, and easy implementation. However, in practical optimization projects it suffers from slow convergence and easily falls into local optima. To address these problems, this paper proposes a Grey Wolf Optimization algorithm based on Cauchy-Gaussian mutation and an improved search strategy (CG-GWO). The Cauchy-Gaussian mutation operator is introduced to increase the population diversity of the leader wolves and improve the global search ability of the algorithm. This work retains outstanding grey wolf individuals through a greedy selection mechanism to preserve the convergence speed of the algorithm. An improved search strategy is proposed to expand the optimization space of the algorithm and improve the convergence accuracy. Experiments are performed on 16 benchmark functions covering unimodal functions, multimodal functions, and fixed-dimension multimodal functions to verify the effectiveness of the algorithm. Experimental results show that, compared with four classic optimization algorithms, the particle swarm optimization algorithm (PSO), whale optimization algorithm (WOA), sparrow search algorithm (SSA), and farmland fertility algorithm (FFA), the CG-GWO algorithm achieves better convergence accuracy, convergence speed, and global search ability. The proposed algorithm also shows better performance than a series of improved algorithms, including the improved grey wolf algorithm (IGWO), the modified Grey Wolf Optimization algorithm (mGWO), and the Grey Wolf Optimization algorithm inspired by enhanced leadership (GLF-GWO).

In recent years, swarm intelligence optimization algorithms have been widely applied to optimization problems in various fields due to their flexibility, high robustness, and simple implementation. They mainly work by simulating the predation, migration, and other behaviors of various creatures in nature, and include the Grey Wolf Optimization algorithm1 (GWO), particle swarm optimization algorithm2 (PSO), whale optimization algorithm3 (WOA), sparrow search algorithm4 (SSA), and others. Optimization algorithms can effectively improve system efficiency, reduce energy consumption, and help optimizers use resources rationally, and this effect becomes more pronounced as the scale of optimization problems increases.

The grey wolf optimizer (GWO) is a swarm intelligence optimization algorithm proposed by Mirjalili et al. in 2014, which simulates the hunting behavior and leadership mechanism of grey wolf packs. Due to its few parameters and easy implementation, the algorithm is widely used in parameter optimization5,6,7, knapsack problems8,9, economic scheduling problems10,11,12, shop scheduling problems13,14, fault diagnosis15,16,17, feature selection18,19,20, image processing21,22,23 and many other fields. However, in practical optimization projects, the GWO algorithm suffers from slow convergence, insufficient global search ability, and a tendency to fall into local optima, which has attracted the attention of many scholars and prompted a series of studies on the Grey Wolf Optimization algorithm. Long et al.24 proposed an improved Grey Wolf Optimization algorithm (IGWO) inspired by particle swarm optimization. The algorithm adds a nonlinear adjustment strategy for the control parameters and a modified position-updating equation based on the personal historical best position and the global best position. Experimental results showed that the algorithm could find more accurate solutions with a higher convergence speed and fewer fitness function evaluations. Mittal et al.25 focused on the proper balance between local search and global search in the GWO algorithm and proposed the mGWO algorithm by changing the parameter adjustment strategy. Based on benchmark problems and the WSN clustering problem, they verified that the algorithm converged fast, had fewer opportunities to get stuck at local minima, and was very effective for practical applications. Gupta et al.26 proposed a Grey Wolf Optimization algorithm inspired by enhanced leadership (GLF-GWO). They introduced a Lévy-flight search mechanism to update the leaders and enhanced the local search ability of the algorithm through a greedy selection mechanism. Experimental results showed that the GLF-GWO algorithm had better global search ability and avoided falling into local optima. Bansal et al.27 introduced opposition-based learning (OBL) to improve the exploration ability of the traditional Grey Wolf Optimization algorithm. The proposed algorithm effectively dealt with optimization stagnation while maintaining a fast convergence speed, and its effectiveness was proved by experiments. In 2016, Mirjalili et al.28 integrated an archive mechanism and a leader selection mechanism into the traditional Grey Wolf Optimization algorithm and proposed the multi-objective Grey Wolf Optimization algorithm (MOGWO). They experimented on 10 multi-objective benchmark problems to compare it with a decomposition-based multi-objective evolutionary algorithm and a multi-objective particle swarm algorithm, and the results showed that the proposed algorithm was more competitive. Gharehchopogh29 used Gaussian mutation, Cauchy mutation, and Lévy flight to increase the global search capability of the tunicate swarm algorithm (TSA); the resulting QLGCTSA also incorporates a quantum rotation gate to enhance local search and increase population diversity. Experimental results showed that QLGCTSA outperformed other competing optimization algorithms. The QLGCTSA algorithm is very helpful for our research work, and we likewise use Cauchy and Gaussian mutation.
The differences are as follows: QLGCTSA applied mutation operators to all search agents, while CG-GWO applies the Cauchy-Gaussian mutation only to the leader wolves; QLGCTSA used mutation operators to improve the global search ability, while CG-GWO uses the mutation operator to enhance the local search ability and avoid falling into local optima. To better clarify the research gaps, the algorithms mentioned above are compared in Table 1.

Because the Grey Wolf Optimization algorithm suffers from slow convergence and easily falls into local optima, a Grey Wolf Optimization algorithm based on Cauchy-Gaussian mutation and an improved search strategy is proposed. The main contributions are as follows:

We design the Cauchy-Gaussian mutation operator, which acts on the leader wolves. It enlarges the search range when the leader wolves tend toward a local optimal solution, effectively improving the local exploitation ability of the leader wolves and avoiding premature convergence to a local optimum.

We propose a greedy selection mechanism30, whose main function is to avoid the excessive population diversity caused by mutation. The greedy selection mechanism balances the diversity of the population and ensures the convergence speed of the algorithm.

We design an improved search strategy that applies to all grey wolf individuals. This strategy incorporates the average position of all individuals, which effectively expands the search space and improves the global search ability of the algorithm.

The rest of this paper is organized as follows: section “Classical Grey Wolf optimizer” provides a brief overview of the classical GWO. Section “Proposed method” discusses the proposed improved algorithm, CG-GWO, in detail. Section “Experimental simulation and result analysis” presents the numerical experiments and the discussion of convergence accuracy, convergence speed, algorithm performance, algorithm runtime, and a case study of a real-world application. The conclusion and future works are presented in section “Conclusion and future works”.

The social hierarchy in the grey wolf population is divided into four levels, as shown in Fig. 1. The first level is called the α wolf, which plays the role of decision maker in the pack. The α wolf has management ability and corresponds to the optimal solution in GWO. The second and third levels are called the β wolf and δ wolf respectively, corresponding to the sub-optimal and third-optimal solutions in GWO. They are mainly responsible for assisting the α wolf in decision-making and jointly leading the other wolves to keep approaching the prey. The fourth level is called the ω wolves, which represent the other solutions in the optimization process and update their positions by following the decisions of the α, β and δ wolves31. During the algorithm's iterations, grey wolf individuals of all levels are in a state of competition. After each iteration, the leader wolves must be reselected according to the distance between each grey wolf and the prey.

The position update of the α, β and δ wolves in the grey wolf population depends on the position of the prey, as shown in formulas (1) and (2).
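
\[X(t + 1) = X_{p} (t) - A \cdot D \quad (1)\]

\[D = \left| C \cdot X_{p} (t) - X(t) \right| \quad (2)\]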

where \(X(t + 1)\) represents the position of the grey wolf after the update, \(X_{p} (t)\) represents the position of the prey, A is the coefficient vector, and D represents the distance between a grey wolf and the prey.
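
\[A = 2a \cdot r_{1} - a \quad (3)\]

\[C = 2 \cdot r_{2} \quad (4)\]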

where \(X(t)\) represents the current position of the grey wolf, C is the coefficient vector, the convergence factor \(a\) decreases linearly from 2 to 0 over the course of iterations, and \(r_{1}, r_{2}\) are random vectors in [0, 1].

According to the social hierarchy of grey wolves, the ω wolves depend on the leader wolves to update their positions. The distances between an ω wolf and the leader wolves are calculated by formula (5), and the direction of movement of the grey wolf individual is finally determined according to formulas (6) and (7).
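
\[D_{\alpha } = \left| C_{1} \cdot X_{\alpha } - X \right|, \quad D_{\beta } = \left| C_{2} \cdot X_{\beta } - X \right|, \quad D_{\delta } = \left| C_{3} \cdot X_{\delta } - X \right| \quad (5)\]

\[X_{1} = X_{\alpha } - A_{1} \cdot D_{\alpha }, \quad X_{2} = X_{\beta } - A_{2} \cdot D_{\beta }, \quad X_{3} = X_{\delta } - A_{3} \cdot D_{\delta } \quad (6)\]

\[X(t + 1) = \frac{X_{1} + X_{2} + X_{3}}{3} \quad (7)\]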

where \(X_{\alpha }, X_{\beta }, X_{\delta }\) represent the positions of the α, β and δ wolves respectively, X is the current position of the grey wolf individual, \(D_{\alpha }, D_{\beta }, D_{\delta }\) represent the distances between the grey wolf individual and each leader wolf respectively, and \(X(t + 1)\) is the updated position of the grey wolf.

The pseudo code of the traditional GWO algorithm is shown in Fig. 2.

The pseudo code of the traditional GWO algorithm.
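
As a complement to the pseudo code, the following is a minimal Python sketch of the traditional GWO loop built directly from formulas (1)–(7) (Python is the language used in our experiments; the function and variable names here are illustrative rather than taken from any reference implementation).

import numpy as np

def gwo(f, lb, ub, dim, pop_size=30, max_iter=200):
    """Minimal sketch of the traditional GWO described by formulas (1)-(7)."""
    X = np.random.uniform(lb, ub, (pop_size, dim))      # initialize the pack
    fitness = np.apply_along_axis(f, 1, X)
    for t in range(max_iter):
        # Re-select the leader wolves (three best solutions) after each iteration.
        alpha, beta, delta = X[np.argsort(fitness)[:3]]
        a = 2 - 2 * t / max_iter                        # a decreases linearly from 2 to 0
        for i in range(pop_size):
            X_new = np.zeros(dim)
            for leader in (alpha, beta, delta):
                r1, r2 = np.random.rand(dim), np.random.rand(dim)
                A = 2 * a * r1 - a                      # formula (3)
                C = 2 * r2                              # formula (4)
                D = np.abs(C * leader - X[i])           # formula (5)
                X_new += leader - A * D                 # formula (6)
            X[i] = np.clip(X_new / 3, lb, ub)           # formula (7)
            fitness[i] = f(X[i])
    best = np.argmin(fitness)
    return X[best], fitness[best]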

In the later iterations of the traditional GWO algorithm, grey wolves gradually move closer to the α wolf, which results in a lack of diversity in the local search of the population, and the algorithm tends to converge prematurely. To solve this problem, this work introduces the Cauchy-Gaussian mutation operator to improve the diversity of the leader wolves and enhance the local search ability. After each iteration, the α, β and δ wolves are selected for mutation. The mutated and original positions are compared based on the greedy selection mechanism, and the individuals with better fitness enter the next iteration. The mathematical definition of the Cauchy-Gaussian mutation strategy is described as follows:
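
A mutation equation consistent with the symbol definitions below perturbs the leader position multiplicatively with a weighted mixture of Cauchy and Gaussian noise:

\[U_{leader} (t + 1) = X_{leader} (t)\left[ 1 + \lambda_{1} \, cauchy(0,\sigma^{2}) + \lambda_{2} \, Gauss(0,\sigma^{2}) \right]\]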

where \(U_{leader} (t + 1)\) represents the post-mutation position of the leader wolves, \(X_{leader}\) represents the current position of the leader wolves, \(cauchy(0,\sigma^{2})\) is a random variable following the Cauchy distribution, \(Gauss(0,\sigma^{2})\) is a random variable following the Gaussian distribution, \(f(X_{leader})\) represents the fitness value of a leader wolf, \(f(X_{\alpha })\) represents the fitness value of the α wolf, and \(\lambda_{1}, \lambda_{2}\) are dynamic parameters adaptively adjusted with the number of iterations.
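
A typical adaptive schedule, assumed here because it is consistent with the definitions above and below, lets the Cauchy term dominate the early iterations and the Gaussian term dominate the later ones:

\[\lambda_{1} = 1 - \frac{t^{2}}{T^{2}}, \quad \lambda_{2} = \frac{t^{2}}{T^{2}}\]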

where t represents the current iteration number, and T represents the maximum number of iterations.
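
Putting the pieces together, a minimal Python sketch of the mutation and greedy selection step is given below. The exact form of \(\sigma\) is an assumption: here it is scaled by the fitness gap between the leader and the α wolf, consistent with the symbols listed above; names are illustrative.

import numpy as np

def cauchy_gaussian_mutation(X_leader, f, f_alpha, t, T):
    """Sketch of the Cauchy-Gaussian mutation with greedy selection for a leader wolf."""
    f_leader = f(X_leader)
    # Assumed sigma: scaled by the fitness gap between this leader and the alpha wolf.
    sigma = np.exp((f_alpha - f_leader) / (abs(f_alpha) + 1e-12))
    # Assumed adaptive weights (see the schedule above): Cauchy noise dominates
    # early iterations (exploration), Gaussian noise dominates late ones.
    lam1 = 1 - t**2 / T**2
    lam2 = t**2 / T**2
    cauchy = sigma * np.random.standard_cauchy(X_leader.shape)
    gauss = sigma * np.random.standard_normal(X_leader.shape)
    U = X_leader * (1 + lam1 * cauchy + lam2 * gauss)
    # Greedy selection: keep the mutated position only if it improves the fitness.
    return U if f(U) < f_leader else X_leader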

The traditional GWO algorithm has few parameters in the search process and is easy to implement, but its global search ability is weak, so in some cases it easily falls into a local optimum. This work proposes an improved search strategy to improve the global search ability of the algorithm and expand the search space. The global search space is expanded on the basis of the current grey wolf position \(X(t)\) generated by the traditional GWO position update formula (7). The mathematical definition is described as follows:

where \(U(t + 1)\) represents the position of the grey wolf individual after applying the improved search strategy, \(X_{rand} (t)\) represents the position of a grey wolf individual randomly selected from the population at the tth iteration, \(X(t)\) represents the current position of the grey wolf individual, \(r_{1}, r_{2}, r_{3}, r_{4}, r_{5}\) are random vectors in [0, 1], \(X_{\alpha } (t)\) represents the current position of the α wolf, \(X_{avg} (t)\) represents the average position of the grey wolf population in the current iteration, and \(ub, lb\) are the upper and lower bounds of the decision variables.
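
The position for the next iteration is then chosen greedily between the two candidates:

\[X(t + 1) = \begin{cases} U(t + 1), & f(U(t + 1)) < f(X(t)) \\ X(t), & \text{otherwise} \end{cases}\]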

where \(X(t + 1)\) represents the position of the grey wolf individual in the \((t + 1)\)th iteration, \(f(U(t + 1))\) is the fitness value after updating the position through the improved search strategy, and \(f(X(t))\) is the fitness value of the current position.

In the improved search strategy, new solutions are generated around either random solutions or the optimal solution, which helps to enhance the search and communication between grey wolf individuals. If the exploration formula (13) cannot provide a better position, the traditional GWO method is used to update the position of the grey wolf individual.
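
Since the exact exploration formula (13) is not reproduced above, the sketch below illustrates the mechanism with an assumed two-branch form that uses exactly the variables listed (a random wolf \(X_{rand}\), the α wolf, the population average \(X_{avg}\), the random values \(r_{1}\)–\(r_{5}\), and the bounds ub, lb), followed by the greedy acceptance rule above. Scalar random values are used for simplicity.

import numpy as np

def improved_search(X_gwo, X_pop, X_alpha, f, lb, ub):
    """Sketch of the improved search strategy with greedy acceptance.

    X_gwo   : position already updated by the traditional GWO formula (7)
    X_pop   : positions of the whole population, shape (pop_size, dim)
    X_alpha : current position of the alpha wolf
    """
    r1, r2, r3, r4, r5 = np.random.rand(5)
    X_rand = X_pop[np.random.randint(len(X_pop))]   # a randomly selected wolf
    X_avg = X_pop.mean(axis=0)                      # average position of the population
    if r5 >= 0.5:
        # Assumed branch: explore around a randomly selected individual.
        U = X_rand - r1 * np.abs(X_rand - 2 * r2 * X_gwo)
    else:
        # Assumed branch: explore around the alpha wolf relative to the population mean.
        U = (X_alpha - X_avg) - r3 * (lb + r4 * (ub - lb))
    U = np.clip(U, lb, ub)
    # Greedy acceptance: keep U only if it improves on the traditional GWO update.
    return U if f(U) < f(X_gwo) else X_gwo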

The pseudo code of CG-GWO algorithm is shown in Fig. 3 and the flow chart is shown in Fig. 4.

The pseudo code of CG-GWO algorithm.

The flow chart of CG-GWO algorithm.

To verify the performance of our approach, 16 benchmark functions are selected for the simulation experiments, including five unimodal functions (F1–F5) shown in Table 2, six multimodal functions (F6–F11) shown in Table 3, and five fixed-dimension multimodal functions (F12–F16) shown in Table 4. Unimodal functions have only one global optimal solution and no local optimal solutions, so they are used to test the convergence and exploitation abilities of the algorithm. Multimodal functions have one global optimal solution among many local optimal solutions; they are used to test the global search and local optimization abilities of the algorithm. Fixed-dimension multimodal functions are more complex, combining multiple basic functions to test the stability of the algorithm.
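
As illustrative examples of the first two categories (not necessarily the exact functions in Tables 2 and 3), the Sphere function is a classic unimodal benchmark and the Rastrigin function a classic multimodal one:

import numpy as np

def sphere(x):
    """Classic unimodal benchmark: a single global minimum f(0) = 0."""
    return np.sum(x**2)

def rastrigin(x):
    """Classic multimodal benchmark: one global minimum at 0 among many local minima."""
    return np.sum(x**2 - 10 * np.cos(2 * np.pi * x) + 10)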

This work conducts two sets of comparative experiments to test the effectiveness of our approach. On the one hand, the proposed algorithm is compared with classic optimization algorithms: the traditional Grey Wolf Optimization algorithm (GWO), particle swarm optimization algorithm (PSO), whale optimization algorithm (WOA), sparrow search algorithm (SSA) and farmland fertility algorithm (FFA)32. On the other hand, the proposed algorithm is compared with a series of improved algorithms: the enhanced-leadership-inspired Grey Wolf Optimization algorithm (GLF-GWO), the inspired grey wolf optimizer (IGWO) and the modified Grey Wolf Optimization algorithm (mGWO).

All experiments in this work are implemented on a PC (8 GB memory, 903 GB hard disk, CPU: Intel i7-4790) using a Python 3.6.8 environment. To ensure the fairness of the experiments, all algorithms are independently run 30 times on each function. The population size is set to 30 and the maximum number of iterations is 200. Finally, the optimal value (Best), average value (Ave), worst value (Worst) and standard deviation (SD) are recorded for all benchmark functions.
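
For reference, these statistics can be collected with a harness along the following lines (a sketch; `run_optimizer` stands for any of the compared algorithms and is assumed to return the best fitness found in one run):

import numpy as np

def collect_stats(run_optimizer, f, runs=30):
    """Run an optimizer independently `runs` times and report Best/Ave/Worst/SD."""
    results = np.array([run_optimizer(f) for _ in range(runs)])
    return {"Best": results.min(), "Ave": results.mean(),
            "Worst": results.max(), "SD": results.std()}

# Example with the GWO sketch above on the Sphere function:
# stats = collect_stats(lambda f: gwo(f, lb=-100, ub=100, dim=30)[1], sphere)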

To verify the optimization effect of our approach on convergence accuracy, it is compared with GWO, PSO, WOA, SSA, FFA, IGWO, mGWO and GLF-GWO on the 16 benchmark functions in simulation experiments. Experimental results are presented in Tables 5 and 6, where bold entries indicate the better comparison results.

As shown in Table 5, the optimal value, average value and worst value of the CG-GWO algorithm were at the optimal level on all benchmark functions. All experiments were given the same initial population size and number of iterations. On the benchmark function F12, our approach and the other four classical optimization algorithms in the experiment could all find the optimal value; however, the average value, worst value and standard deviation of our approach were significantly better. On the benchmark functions F1, F2, F3, F4, F7, F9, and F10, our approach showed absolute superiority over the other classic optimization algorithms in the experiment, with all statistics better by several orders of magnitude. On the benchmark functions F5 and F13, although the advantage of our approach was less obvious, it was better than the other four classic optimization algorithms in all statistics. On the benchmark functions F6 and F8, both the CG-GWO algorithm and the WOA algorithm found the optimal value, but the results of our approach were more concentrated and more stable.

As shown in Table 6, the CG-GWO algorithm showed great superiority on unimodal functions compared with the series of improved GWO algorithms. Especially on the benchmark functions F1, F2, and F3, the optimal value, average value, worst value and standard deviation were far superior to those of the other improved GWO algorithms in the experiment. The stability of our approach can also be clearly seen on the benchmark functions F4 and F5. On multimodal functions, our approach found the theoretical optimal value on the benchmark functions F6, F7, F8 and F10, and all statistics were better than those of the other optimization algorithms in the experiment. On the benchmark function F9, all improved algorithms except the traditional GWO algorithm found the optimal value of the function, but our approach showed greater superiority in comparison. On fixed-dimension multimodal functions, the optimization accuracy of our approach was comparable to that of the other improved algorithms in the experiment. Experiments showed that all optimization algorithms found the optimal value on the benchmark function F12, and the GLF-GWO algorithm showed the same superiority in terms of average and worst values; in contrast, our approach had a better standard deviation and a more stable optimization effect. On the benchmark function F13, the GLF-GWO algorithm and the CG-GWO algorithm also found the optimal value, but our approach showed more significant concentration and stability.

Convergence performance experiments are conducted to observe more intuitively the convergence effect and convergence speed of the CG-GWO algorithm, the classical optimization algorithms, and the series of improved GWO algorithms. The fitness convergence curves of each algorithm on the 16 benchmark functions are drawn as shown in Figs. 5 and 6, using the number of iterations as the abscissa and the fitness value as the ordinate.

Convergence curve comparison of classic optimization algorithms.

Convergence curve comparison of improved GWO algorithms.

It can be seen from Fig. 5 that the convergence curve of the CG-GWO algorithm is below those of the other classical optimization algorithms in the experiment, and the convergence accuracy and convergence speed on the 16 benchmark functions are significantly improved. For the unimodal functions, as shown in F1 and F4, CG-GWO converged quickly to the optimal value, while SSA still had not reached the optimal value after 100 iterations. As shown in F2 and F3, our approach reached the optimal value by the 10th iteration, whereas the GWO algorithm did not converge to the optimal value until the 50th iteration. In contrast, the PSO, WOA, SSA and FFA algorithms still had not reached the optimal value after 200 iterations, and the results of the WOA and SSA algorithms were far from the optimal value. As shown in F5, PSO did not converge to the optimal value. For the multimodal functions, as shown in F6, GWO and PSO did not tend to converge even after 200 iterations. The SSA algorithm converged around the 110th iteration but fell into a local optimum. Both WOA and CG-GWO had a steep convergence curve at the beginning of the iterations; although our approach converged more slowly than WOA in the early stage, its convergence speed and accuracy exceeded those of WOA after 40 iterations. This comparison also fully reflects the global search ability of our approach. As shown in F7–F8, PSO, SSA and FFA did not converge to the optimal value after 200 iterations. As shown in F9–F11, CG-GWO converged to the optimal value faster than the other algorithms. For the fixed-dimension multimodal functions, as shown in F12–F16, all optimization algorithms could converge close to the optimal value, but the CG-GWO algorithm converged fastest and the WOA algorithm converged slowest; the convergence curve of the WOA algorithm did not stabilize until the 50th iteration.

It can be seen from Fig. 6 that the CG-GWO algorithm showed its superiority compared with the traditional GWO algorithm and the series of improved algorithms. On the unimodal functions F1–F5, all algorithms could converge to the optimal value, but our approach converged to a stationary value faster. On the multimodal functions F6 and F7, our approach explored closer to the theoretical optimal value after 200 iterations, and its convergence speed was also faster, whereas the other improved algorithms in the experiment either did not converge to a stable value or fell into a local optimum. On the multimodal functions F8, F9, F10 and F11, all optimization algorithms in the experiment could also converge to the optimal value, but our approach converged more quickly. On the fixed-dimension multimodal functions F12 and F16, the convergence speed of all algorithms in the experiment was comparable, but the convergence accuracy of our approach was higher. Similarly, on the fixed-dimension multimodal functions F13–F15, our approach converged to the optimal value faster than the other optimization algorithms in the experiment.

Therefore, our approach is superior to the other optimization algorithms in the experiment in terms of convergence accuracy and convergence speed, and its global search ability is also significantly improved, which effectively avoids falling into local optima. The experiments prove the effectiveness of the improved ideas and reflect the superiority of the CG-GWO algorithm in solving more complex optimization problems.

To evaluate the optimization ability and stability of the improved algorithm, this work draws box plots33 of all algorithms on the 16 benchmark functions. Comparative analysis is performed based on the upper quartile, the median, the lower quartile and the outliers in the box plots. This work conducts 30 independent experiments for all algorithms. Since the experimental results of different algorithms differ greatly, different coordinate systems are used when drawing the box plots so that the comparison results can be observed more intuitively. Experimental results are presented in Figs. 7 and 8.

Boxplot comparison of classic optimization algorithms.

Boxplot comparison of improved GWO algorithms.

It can be seen from Fig. 7 that the CG-GWO algorithm has weaker dispersion in the optimization process and its optimization values are more concentrated. On the unimodal functions F1–F5, the order of magnitude of the optimization results of our approach was much smaller than that of the other classic optimization algorithms in the experiment, reflecting the superior optimization accuracy and higher stability of the improved algorithm. On the multimodal functions F6–F11, our approach was more concentrated and had fewer outliers than the other classic optimization algorithms in the experiment, which verified the robustness of our approach in terms of global search ability. On the fixed-dimension multimodal functions F12–F13, the CG-GWO algorithm and the GWO algorithm had similar optimization effects, far better than those of the other classic optimization algorithms in the experiment; however, our approach had fewer outliers, more concentrated optimization values and better optimization results. As shown in F14–F16, CG-GWO had fewer outliers and was more stable.

As shown in Fig. 8, most improved GWO algorithms showed superiority over the traditional GWO algorithm. On the unimodal functions F1–F5, the CG-GWO algorithm had a better optimization effect. On the benchmark functions F3 and F5, the results of the algorithms were relatively close, but our approach showed higher convergence stability and fewer outliers. On the multimodal functions F6–F11, only the GLF-GWO algorithm came close to our approach; although its global search ability was enhanced by the Lévy-flight search mechanism, it produced more outliers. On the fixed-dimension multimodal functions F12–F16, the GLF-GWO algorithm was also relatively close to our approach. It was better than the CG-GWO algorithm in terms of the range of optimal-value variation, but our approach had far fewer outliers. Experimental results demonstrated the better global search ability and higher stability of our approach.

To verify the computational effectiveness of the CG-GWO algorithm, this paper compares the runtime of the CG-GWO algorithm with that of the other algorithms. Experimental results are presented in Table 7. To save space, four representative functions are selected to draw bar graphs, which are presented in Fig. 9.

As shown in Table 7 and Fig. 9, the Cauchy-Gaussian mutation in the improved strategy of the CG-GWO algorithm consumes extra time. On F3, F8 and F10, only GLF-GWO had a longer runtime than CG-GWO; however, the increase in time consumption was not large and is acceptable in practical applications. On F12, the runtime of CG-GWO was in the middle of the field. Thus, the CG-GWO algorithm retains acceptable computational efficiency.

Therefore, the CG-GWO algorithm has a more stable convergence ability compared with classic optimization algorithms and a series of improved GWO algorithms. Our approach shows its superiority in the optimization accuracy and the degree of dispersion. Experimental results prove that our approach has better optimization ability and stable performance.

In this section, the performance of the nine algorithms mentioned above, PSO, WOA, SSA, FFA, GWO, IGWO, mGWO, GLF-GWO and CG-GWO, is evaluated on a real-world engineering application: pressure vessel design34.

Pressure vessels are usually spherical or cylindrical in shape, and cylindrical vessels may be oriented vertically or horizontally. Vertical vessels have many uses, such as fractionating towers, contactor towers, reactors and vertical separators. This problem aims to minimize the overall cost of the material, forming, and welding of a cylindrical pressure vessel capped at both ends by hemispherical heads, as shown in Fig. 10. The mathematical formulation of the pressure vessel design problem is described as follows:

where \(0 \le T_{s} ,T_{h} \le 99\) and \(10 \le R,L \le 200.\)
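
The four design variables are the shell thickness \(T_{s}\), head thickness \(T_{h}\), inner radius R and cylinder length L. As a sketch, the cost function and constraints below follow the standard formulation from the engineering optimization literature34; the exact formulation used in a given study may differ in minor details.

import numpy as np

def pressure_vessel_cost(x):
    """Total cost of material, forming, and welding; x = [Ts, Th, R, L]."""
    Ts, Th, R, L = x
    return (0.6224 * Ts * R * L + 1.7781 * Th * R**2
            + 3.1661 * Ts**2 * L + 19.84 * Ts**2 * R)

def pressure_vessel_constraints(x):
    """Constraint values g_i(x) <= 0 in the standard formulation (assumed here)."""
    Ts, Th, R, L = x
    return [
        -Ts + 0.0193 * R,                                          # minimum shell thickness
        -Th + 0.00954 * R,                                         # minimum head thickness
        -np.pi * R**2 * L - (4.0 / 3.0) * np.pi * R**3 + 1296000,  # minimum enclosed volume
        L - 240,                                                   # maximum length
    ]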

The pressure vessel design problem is one of the most common benchmark problems; researchers have used it in many studies to confirm the efficacy of their new optimization algorithms. The comparison of the optimal results obtained for the pressure vessel design problem by CG-GWO and the other algorithms mentioned above is presented in Table 8. According to the cost results in Table 8, CG-GWO reported the lowest cost of 5884.6849, with optimum parameters 0.779216, 0.396459, 40.265625 and 200.000000.

The comparison of the statistical results for the pressure vessel design problem over 50 independent runs is shown in Table 9. It can be seen from the results that the stability of CG-GWO is better, as it presented excellent results in terms of the Ave and Std values.

This work proposed the CG-GWO algorithm to address the slow convergence of the traditional Grey Wolf Optimization algorithm and its tendency to fall into local optima. The Cauchy-Gaussian mutation operator was introduced to act on the leader wolves, and the relative weight of the Cauchy and Gaussian mutation was dynamically adjusted according to the current iteration period to improve the global search ability of the algorithm. At the same time, a greedy mechanism was added to the Cauchy-Gaussian mutation of the leader wolves: outstanding individuals produced during mutation were retained, avoiding excessive population diversity and ensuring the convergence speed of the algorithm. CG-GWO also added an improved search strategy to keep the algorithm from falling into local optima and to improve its convergence accuracy; in the improved strategy, new solutions are generated around random solutions or the optimal solution. Experimental results showed that our approach effectively improved the accuracy and speed of convergence. In the accuracy experiments, CG-GWO showed superiority of several orders of magnitude; in the convergence experiments, it converged to the optimal value relatively quickly; and the box plots showed the high stability of the algorithm. Although CG-GWO did not have much advantage in terms of runtime, its superiority was also demonstrated on the pressure vessel design problem, where CG-GWO found the optimal value more consistently than the other algorithms mentioned in the paper. In conclusion, CG-GWO showed good optimization ability and stability on unimodal functions, multimodal functions, and fixed-dimension multimodal functions; it can effectively avoid falling into local optima and expand the individual search space.

Although the CG-GWO algorithm showed good convergence accuracy, convergence speed and stability in most cases, its stability is slightly poor in some specific situations. At the same time, the Cauchy-Gaussian mutation consumes extra time, so the longer running time is a limitation of the algorithm. The proposed algorithm also has some limitations in practical applications. If the scale of the problem is too large, the calculation of \(X_{avg} (t)\) in the improved search strategy becomes relatively complicated, which affects the optimization progress of the algorithm. Also, if multiple variables in the actual problem affect each other, it is difficult to select the leader wolves, and the determination of the variable \(\sigma\) in the Cauchy-Gaussian mutation becomes complicated, which also brings great challenges to the proposed algorithm. In future work, we will pay attention to improving the stability and efficiency of the algorithm, and the CG-GWO algorithm will be applied to more complex practical engineering optimization problems to help optimizers determine the final optimization plan more quickly and accurately.

The datasets generated and/or analyzed during the current study are not publicly available, as the benchmark functions used in the article are described in the experimental section, but they are available from the corresponding author on reasonable request.

Mirjalili, S., Mirjalili, S. M. & Lewis, A. Grey Wolf Optimizer. Adv. Eng. Softw. 69, 46–61 (2014).

Kennedy, J. & Eberhart, R. Particle swarm optimization. In Proceedings of ICNN'95—International Conference on Neural Networks, vol. 4, 1942–1948 (1995).

Mirjalili, S. & Lewis, A. The Whale Optimization Algorithm. Adv. Eng. Softw. 95, 51–67 (2016).

Xue, J. & Shen, B. A novel swarm intelligence optimization approach: Sparrow search algorithm. Syst. Sci. Control Eng. 8(1), 22–34 (2020).

Salgotra, R., Singh, U. & Sharma, S. On the improvement in Grey Wolf Optimization. Neural Comput. Appl. 32, 3709–3748 (2020).

Miao, D. et al. Parameter estimation of PEM fuel cells employing the hybrid Grey Wolf Optimization method. Energy 193, 571–582 (2020).

Kulkarni, O. & Kulkarni, S. Process parameter optimization in WEDM by Grey Wolf optimizer. Mater. Today Proc. 5(2), 4402–4412 (2018).

Luo, K. & Zhao, Q. A binary grey wolf optimizer for the multidimensional knapsack problem. Appl. Soft Comput. 83, 105645 (2019).

Zewen, L. et al. A hybrid grey wolf optimizer for solving the product knapsack problem. Int. J. Mach. Learn. Cybern. 12, 201–222 (2020).

Kamboj, K., Bath, K. & Dhillon, S. Solution of non-convex economic load dispatch problem using Grey Wolf Optimizer. Neural Comput. Appl. 27(5), 1301–1316 (2015).

Kadali, S. et al. Economic generation schedule on thermal power system considering emission using grey wolves optimization. Energy Procedia 117, 509–518 (2017).

Qiu, J. et al. Planning and optimal scheduling method of regional integrated energy system based on Gray Wolf Optimizer algorithm. IOP Conf. Ser. Earth Environ. Sci. 546, 022059 (2020).

Yang, Z. & Liu, C. A hybrid multi-objective gray wolf optimization algorithm for a fuzzy blocking flow shop scheduling problem. Adv. Mech. Eng. 10(3), 168781401876553 (2018).

Jiang, T. A hybrid Grey Wolf optimization for job shop scheduling problem. Int. J. Comput. Intell. Appl. 17(03), 1850016 (2018).

Zhang, X. et al. An optimized time varying filtering based empirical mode decomposition method with grey wolf optimizer for machinery fault diagnosis. J. Sound Vib. 418, 55–78 (2018).

Zeng, B. et al. A transformer fault diagnosis model based on hybrid Grey Wolf Optimizer and LS-SVM. Energies 12(21), 4170 (2019).

Jiang, Y. Fault diagnosis of subway plug door based on Isomap and GWO-SVM. ICIEA 2020, 106–110 (2020).

Emary, E., Zawbaa, M. & Hassanien, E. Binary Grey Wolf optimization approaches for feature selection. Neurocomputing 172, 371–381 (2016).

Pei, H., Pan, J. & Chu, S. Improved binary Grey Wolf optimizer and its application for feature selection. Knowl.-Based Syst. 2020, 195 (2020).

Kitonyi, M. & Segera, R. Hybrid gradient descent Grey Wolf optimizer for optimal feature selection. Biomed. Res. Int. 2021, 1–33 (2021).

Kumaran, N., Vadivel, A. & Kumar, S. Recognition of human actions using CNN-GWO: A novel modeling of CNN for enhancement of classification performance. Multimedia Tools Appl. 77(18), 23115–23147 (2018).

Yao, X. et al. Multi-threshold image segmentation based on improved Grey Wolf optimization algorithm. IOP Conf. Ser. Earth Environ. Sci. 252, 042105 (2019).

Bharanidharan, N. & Harikumar, R. Modified Grey Wolf randomized optimization in dementia classification using MRI images. IETE J. Res. 2020, 1–10 (2020).

Long, W. et al. Inspired grey wolf optimizer for solving large-scale function optimization problems. Appl. Math. Model. 60, 112–126 (2018).


Mittal, N., Sohi, S. & Singh, U. Modified Grey Wolf optimizer for global engineering optimization. Appl. Comput. Intell. Soft Comput. 2016, 1–16 (2016).

Gupta, S. & Deep, K. Enhanced leadership-inspired grey wolf optimizer for global optimization problems. Eng. Comput. 36, 1777–1800 (2019).

Bansal, C. & Singh, S. A better exploration strategy in Grey Wolf Optimizer. J. Ambient. Intell. Humaniz. Comput. 12(1), 1099–1118 (2020).

Mirjalili, S. et al. Multi-objective grey wolf optimizer: A novel algorithm for multi-criterion optimization. Expert Syst. Appl. 47, 106–119 (2016).

Gharehchopogh, S. An improved tunicate swarm algorithm with best-random mutation strategy for global optimization problems. J. Bionic Eng. 19, 1177 (2022).

Bengag, A., Bengag, A. & Elboukhari, M. A Novel Greedy forwarding mechanism based on density, speed and direction parameters for Vanets. Int. J. Interact. Mobile Technol. (iJIM) 14(08), 196 (2020).

Heidari, A. & Pahlavani, P. An efficient modified grey wolf optimizer with Lévy flight for optimization tasks. Appl. Soft Comput. 60, 115–134 (2017).

Shayanfar, H. & Gharehchopogh, S. Farmland fertility: A new metaheuristic algorithm for solving continuous optimization problems. Appl. Soft Comput. 71, 728–746 (2018).

Xiaobing, Y. et al. Evaluate the effectiveness of multiobjective evolutionary algorithms by box plots and fuzzy TOPSIS. Int. J. Comput. Intell. Syst. 12(2), 733–743 (2019).

Yang, X.-S. & He, X. Firefly algorithm: Recent advances and applications. Int. J. Swarm Intell. 1(1), 36 (2013).

This work was supported by grants from the National Natural Science Foundation of China (Major Program, No. 51991365), and the Natural Science Foundation of Shandong Province of China (ZR2021MF082).

College of Computer Science and Technology, China University of Petroleum (East China), Qingdao, 266580, China

Kewen Li, Shaohui Li, Zongchao Huang, Min Zhang & Zhifeng Xu


K. L. and S. L. wrote the main manuscript text, Z. H. and M. Z. prepared figures, and Z. X. prepared tables. All authors reviewed the manuscript.

The authors declare no competing interests.

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.

Li, K., Li, S., Huang, Z. et al. Grey Wolf Optimization algorithm based on Cauchy-Gaussian mutation and improved search strategy. Sci Rep 12, 18961 (2022). https://doi.org/10.1038/s41598-022-23713-9


