Elite Opposition-Based Selfish Herd Optimizer

Abstract: The selfish herd optimizer (SHO) is a new metaheuristic optimization algorithm for solving global optimization problems. In this paper, an elite opposition-based selfish herd optimizer (EOSHO) is applied to function optimization. Elite opposition-based learning is a commonly used strategy for improving the performance of metaheuristic algorithms: it enlarges the searched region and enhances the exploration ability of the algorithm. The elite opposition-based selfish herd optimizer is validated on 6 benchmark functions. The results show that the proposed algorithm obtains more precise solutions and also has a high degree of stability.

The selfish herd optimizer (SHO), proposed by Fausto, Cuevas, et al. [8], is based on a simulation of the widely observed selfish herd behavior manifested by individuals within a herd of animals subjected to some form of predation risk. The algorithm is inspired by Hamilton's selfish herd theory [9]. In this paper, an elite opposition-based selfish herd optimizer (EOSHO) is applied to function optimization: the search ability of the population is improved by applying elite opposition-based learning to the selfish herd group. EOSHO is validated on 6 benchmark functions. The results show that the proposed algorithm obtains precise solutions and also has a high degree of stability.
The remainder of the paper is organized as follows: Section 2 briefly introduces the original selfish herd optimizer; Section 3 presents the new elite opposition-based selfish herd optimizer (EOSHO); simulation experiments and results analysis are described in Section 4. Finally, conclusions and future work are discussed in Section 5.

Initializing the population
The algorithm begins by initializing a set A of N individual positions. SHO models two different groups of search agents: a group of prey and a group of predators. The number of prey $N_h$ and the number of predators $N_p$ are calculated by the following equations:

$N_h = \lfloor N \cdot \mathrm{rand}(0.7,\ 0.9) \rfloor$  (1)

$N_p = N - N_h$  (2)

According to the theory of selfish herds, each animal has its own survival value. The survival value of each individual is therefore defined as follows:

$SV_{\mathbf{a}_i} = \dfrac{f(\mathbf{a}_i) - f_{\mathrm{worst}}}{f_{\mathrm{best}} - f_{\mathrm{worst}}}$  (4)

where $f(\mathbf{a}_i)$ is the fitness value obtained by evaluating the objective function at position $\mathbf{a}_i$, and $f_{\mathrm{best}}$ and $f_{\mathrm{worst}}$ are the best and worst fitness values in the current population.
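To make the setup concrete, the following Python sketch (illustrative only; the paper's experiments were run in MATLAB, and the 70-90% split ratio follows the original SHO paper [8]) initializes the population and computes survival values for a minimization problem:

import numpy as np

def initialize_population(N, n, low, high, rng):
    # Random positions A = {a_1, ..., a_N} inside the search bounds.
    return rng.uniform(low, high, size=(N, n))

def split_herd_and_predators(N, rng):
    # Eqs. (1)-(2): 70-90% of the population become prey, the rest predators.
    N_h = int(np.floor(N * rng.uniform(0.7, 0.9)))
    return N_h, N - N_h

def survival_values(fitness):
    # Eq. (4): normalize fitness to [0, 1]; 1 = best, 0 = worst (minimization).
    f_best, f_worst = fitness.min(), fitness.max()
    return (f_worst - fitness) / (f_worst - f_best + 1e-30)

rng = np.random.default_rng(0)
A = initialize_population(50, 30, -100.0, 100.0, rng)
N_h, N_p = split_herd_and_predators(50, rng)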

Herd movement operator
According to the selfish herd theory, an attraction (gravity) coefficient between prey group member i and prey group member j is defined first. Since the prey avoid the predators, there is also exclusion between them, expressed by a repulsion factor. Using these factors, the leader's position in the next generation is updated, and the followers and deserters of the herd update their locations accordingly, where $\overline{SV}_h$ represents the mean survival value of the herd's aggregation and $\hat{\mathbf{r}}$ denotes a unit vector pointing in a random direction within the given n-dimensional solution space. The full operator definitions are given in [8].
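The exact operators are lengthy; as a rough illustration only, the following continuation of the Python sketch moves followers toward the leader and lets deserters wander along a random unit vector $\hat{\mathbf{r}}$. It is a deliberately simplified stand-in for the herd movement operators of equations (7)-(8), not the paper's exact formulas:

def herd_movement(H, SV, rng):
    # Followers (survival value at or above the herd mean) are attracted to
    # the leader; deserters take a step along a random unit direction r.
    H = H.copy()
    leader = int(np.argmax(SV))
    for i in range(len(H)):
        if i == leader:
            continue
        if SV[i] >= SV.mean():      # follower: attracted toward the leader
            H[i] += 2.0 * rng.random() * SV[leader] * (H[leader] - H[i])
        else:                        # deserter: random unit direction
            r = rng.normal(size=H.shape[1])
            H[i] += 2.0 * rng.random() * (1.0 - SV[i]) * (r / np.linalg.norm(r))
    return H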

Predator movement operators
Each predator pursues a prey selected according to a pursuit probability; from this probability, the predator's position-updating formula (equation (13)) is obtained.
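Continuing the sketch, a minimal stand-in for the predator update, assuming (as the text suggests) that predators favor weak prey and pick a target by roulette selection; this is an illustration, not the exact equation (13):

def predator_movement(P, H, SV_h, rng):
    # Each predator picks a prey by roulette (lower survival value means a
    # more attractive target) and takes a random step toward it.
    P = P.copy()
    weights = (1.0 - SV_h) + 1e-30
    prob = weights / weights.sum()
    for q in range(len(P)):
        r = rng.choice(len(H), p=prob)   # roulette selection of the target
        P[q] += 2.0 * rng.random() * (H[r] - P[q])
    return P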

Predation phase
For each predator $\mathbf{p}_q$, the set of threatened prey is defined as the herd members lying within the predator's attack radius R, i.e. $T_{\mathbf{p}_q} = \{\mathbf{h}_i : \|\mathbf{h}_i - \mathbf{p}_q\| \le R\}$. When multiple prey enter the range of the predator's attack radius, the predator chooses which prey to kill by roulette-wheel selection (equation (14)).
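A sketch of the predation phase under the same assumptions; the use of 1 − SV as the roulette weight and the specific value of R are illustrative choices:

def predation(P, H, SV_h, R, rng):
    # For each predator, prey inside the attack radius R form the threatened
    # set T; one member of T is killed by roulette selection.
    killed = set()
    for q in range(len(P)):
        dists = np.linalg.norm(H - P[q], axis=1)
        T = [i for i in range(len(H)) if dists[i] <= R and i not in killed]
        if not T:
            continue
        w = np.array([1.0 - SV_h[i] + 1e-30 for i in T])
        killed.add(rng.choice(T, p=w / w.sum()))
    return killed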

Restoration phase
The predation phase removes individuals from the herd, so a mating operation is used to produce new individuals that restore the population; whether a herd member joins the set of mating candidates depends on its survival value. To obtain a new individual, values for its n dimensions are drawn at random from the set of mating candidates (equation (16)), and the resulting offspring replaces the hunted individual.
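A sketch of the restoration phase, assuming dimension-wise roulette mixing of mating candidates weighted by survival value (one common reading of equation (16), stated here as an assumption):

def restoration(H, SV_h, killed, rng):
    # Each killed prey is replaced by an offspring whose j-th coordinate is
    # copied from a mating candidate chosen by roulette on survival value.
    H = H.copy()
    candidates = [i for i in range(len(H)) if i not in killed]
    w = SV_h[candidates] + 1e-30
    prob = w / w.sum()
    for dead in killed:
        for j in range(H.shape[1]):
            donor = rng.choice(candidates, p=prob)
            H[dead, j] = H[donor, j]
    return H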
The specific implementation steps of the standard selfish herd optimizer (SHO) are summarized in the pseudo code shown in Algorithm 1.
Algorithm 1. Selfish herd optimizer (SHO)
1. Begin
2. Initialize the animal population A;
3. Define the number of herd members and predators within A by equations (1) and (2);
4. While (t < iterMax)
5. Calculate the fitness and survival values of each member by equation (4);
6. Update the positions of the herd members by equations (7) and (8);
7. Update the positions of the predators by equation (13);
8. Let each predator choose and kill a threatened prey by roulette according to equation (14);
9. Generate new individuals from n randomly drawn mating candidates by equation (16);
10. End While
11. Memorize the best solution achieved;
12. End
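As a compact illustration of Algorithm 1, the following driver ties together the sketches from the previous subsections; again, this is a simplified sketch, not a faithful reimplementation of [8]:

def sho(fitness_fn, N, n, low, high, iter_max, R, seed=0):
    rng = np.random.default_rng(seed)
    A = initialize_population(N, n, low, high, rng)
    N_h, N_p = split_herd_and_predators(N, rng)
    H, P = A[:N_h].copy(), A[N_h:].copy()
    best_x, best_f = None, float("inf")
    for t in range(iter_max):
        f_h = np.apply_along_axis(fitness_fn, 1, H)   # step 5: fitness
        SV_h = survival_values(f_h)                   # step 5: survival values
        if f_h.min() < best_f:                        # step 11: memorize best
            best_f, best_x = float(f_h.min()), H[f_h.argmin()].copy()
        H = herd_movement(H, SV_h, rng)               # step 6
        P = predator_movement(P, H, SV_h, rng)        # step 7
        killed = predation(P, H, SV_h, R, rng)        # step 8
        H = restoration(H, SV_h, killed, rng)         # step 9
        # (a full implementation would refresh survival values between steps)
        np.clip(H, low, high, out=H)                  # keep prey in bounds
    return best_x, best_f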

Elite opposition-based selfish herd optimizer (EOSHO)
Opposition-based learning, a technique proposed by Tizhoosh [10], has been applied to a variety of optimization algorithms [11,12,13]. Its main purpose is to find candidate solutions that are closer to the global optimal solution. It has been proved that an opposite candidate solution has a greater chance of approaching the global optimum than the original (forward) candidate solution. Elite opposition-based learning has successfully improved the performance of many optimization algorithms, and it has been applied in many research fields, such as reinforcement learning, window memory in morphological algorithms, and image processing using opposite fuzzy sets.
In this paper, we apply elite opposition-based learning to the movement of the prey group H: the individual with the best fitness value in H is selected as the elite prey, in the hope that this elite individual can guide the movement of the whole group. For each prey $\mathbf{h}_i = (h_{i,1}, \dots, h_{i,n})$, its elite opposition-based solution $\check{\mathbf{h}}_i$ is defined as

$\check{h}_{i,j} = \eta \cdot (da_j + db_j) - h_{i,j}, \quad i = 1, \dots, N, \; j = 1, \dots, n$  (17)

where N is the size of H, n is the dimension, $\eta \in (0, 1)$ is a random coefficient, and $da_j$ and $db_j$ are the dynamic boundaries of the j-th dimension of the group, defined as follows:

$da_j = \min_i(h_{i,j}), \quad db_j = \max_i(h_{i,j})$  (18)

To prevent an elite opposition-based learning point from jumping out of the dynamic boundary, any coordinate that leaves the range $[da_j, db_j]$ is reset as follows:

$\check{h}_{i,j} = \mathrm{rand}(da_j, db_j) \quad \text{if } \check{h}_{i,j} < da_j \text{ or } \check{h}_{i,j} > db_j$  (19)

By this method, the global search capability of the algorithm is enhanced and the diversity of the population is improved.
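Continuing the Python sketch, equations (17)-(19) translate directly into a few vectorized lines; the greedy rule of keeping the N fittest individuals from the union of the herd and its opposite is a common EOBL selection rule and is assumed here:

def elite_opposition(H, rng):
    # Eqs. (17)-(19): reflect the herd through its dynamic boundaries.
    da, db = H.min(axis=0), H.max(axis=0)   # dynamic boundaries, Eq. (18)
    eta = rng.random()                       # eta ~ U(0, 1)
    opp = eta * (da + db) - H                # opposite herd, Eq. (17)
    out = (opp < da) | (opp > db)
    reset = rng.uniform(da, db, size=H.shape)
    opp[out] = reset[out]                    # reset escaped coordinates, Eq. (19)
    return opp

def eobl_step(H, fitness_fn, rng):
    # Evaluate herd and opposite herd together and keep the N fittest.
    union = np.vstack([H, elite_opposition(H, rng)])
    f = np.apply_along_axis(fitness_fn, 1, union)
    return union[np.argsort(f)[:len(H)]]

Consistent with the description above, this step would be inserted into the main loop right after the herd movement, so that the prey group H is replaced by the fittest half of the union before the predators act.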

Simulation experiments and result analysis
In this section, 6 standard test functions [14,15] are used to evaluate the performance of EOSHO. The rest of this section is organized as follows: the experimental setup is given in Section 4.1, and the performance of each algorithm is compared in Section 4.2. The space dimension, search range, optimal value and number of iterations of the 6 functions are shown in Table 1.

[Table 1. Benchmark test functions; columns: Benchmark Test Functions, Dim, Range.]
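Functions of the kind listed in Table 1 plug directly into the sho sketch above. The two classic benchmarks below (Sphere, unimodal; Rastrigin, multimodal) are typical examples of such test functions, not necessarily the exact entries of Table 1:

def sphere(x):
    # Unimodal: f(x) = sum(x_i^2); global minimum f = 0 at x = 0.
    return float(np.sum(x ** 2))

def rastrigin(x):
    # Multimodal: f(x) = 10 n + sum(x_i^2 - 10 cos(2 pi x_i)); minimum f = 0.
    return float(10 * x.size + np.sum(x ** 2 - 10 * np.cos(2 * np.pi * x)))

# Population size 50 and 1000 iterations follow Section 4; the attack
# radius R is a hypothetical value chosen for illustration.
best_x, best_f = sho(sphere, N=50, n=30, low=-100.0, high=100.0,
                     iter_max=1000, R=10.0)
print(best_f)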

Experimental setup
All of the algorithms were programmed in MATLAB R2016a; the numerical experiments were run on an AMD Athlon(tm) II X4 640 processor with 2 GB of memory.

Comparison of algorithm performance
The proposed elite opposition-based selfish herd optimizer is compared with swarm intelligence optimization algorithms, namely CS [6], PSO [1], MVO [16], ABC [17] and SHO [7]; optimization performance is compared by means of the mean and the standard deviation of the results. The settings of the control parameters of each algorithm are given in Table 2.
For the benchmark functions in Table 1, the comparison of the test results is shown in Tables 3 and 4. In this paper, the population size is 50, the maximum number of iterations is 1000, and the results are obtained over 30 independent trials. Best, Mean and Std. denote the optimal fitness value, the average fitness value and the standard deviation, respectively. We compare the results of EOSHO and the other algorithms on each test function and list the ranking of EOSHO on the right side of the tables. In the parameter table (Table 2), D denotes the dimension of the problem and trial denotes an internal parameter of ABC.
According to the results in Table 3, unimodal functions are suitable for benchmarking the exploitation ability of an algorithm. Compared with the original SHO algorithm, the calculation accuracy of the EOSHO algorithm is greatly improved, and EOSHO obtains higher optimization precision. For the 3 unimodal test functions, EOSHO provides good results with smaller variance, which means that EOSHO has an advantage over the MVO, PSO, CS, ABC and SHO algorithms in solving unimodal problems, with comparatively fast convergence and high accuracy. Overall, the EOSHO algorithm accelerates convergence and enhances calculation accuracy.
Fig. 1. Convergence curves of $f_1$. Fig. 2. Convergence curves of $f_2$. Fig. 3. Convergence curves of $f_3$.
Fig. 4. Standard deviation for $f_1$. Fig. 5. Standard deviation for $f_2$. Fig. 6. Standard deviation for $f_3$.
According to the results in Table 4, the performance of the EOSHO algorithm on the multimodal functions is much better than that of the other comparison algorithms. The ranking of the optimal fitness value of the EOSHO algorithm is first, which indicates that EOSHO is substantially improved.
Fig. 9. Convergence curves of $f_5$. Fig. 10. Standard deviation for $f_5$. Fig. 11. Convergence curves of $f_6$. Fig. 12. Standard deviation for $f_6$.
According to Fig. 9 and Fig. 11, EOSHO has a faster convergence speed and higher optimization accuracy; according to Fig. 10 and Fig. 12, EOSHO has strong stability. It can be seen that EOSHO has higher convergence accuracy and stronger robustness. On the whole, the elite opposition-based strategy improves the accuracy of the SHO algorithm, which helps it find the global optimal solution.

Conclusions and future works
In this paper, to improve the convergence speed and calculation accuracy of the SHO algorithm, we add an elite opposition-based learning strategy to the prey movement operators. The resulting EOSHO algorithm better balances exploration and exploitation, and the elite opposition-based learning strategy increases the diversity of the population to avoid search stagnation. The EOSHO algorithm is tested on 6 benchmark functions. Its optimization performance is greatly improved compared with the basic SHO algorithm, and among the six algorithms compared, EOSHO attains the smallest optimal fitness values.
For EOSHO, various ideas still deserve future study. First, EOSHO could be applied to the many NP-hard problems in the literature, such as the planar graph coloring problem, and to the training of radial basis probabilistic neural networks. Second, it is suggested to apply it to more engineering examples.