Abstract
In the last decade, numerous multi/many-objective evolutionary algorithms (MOEAs) have been proposed to handle multi/many-objective problems (MOPs) with challenges such as discontinuous Pareto Fronts (PFs), degenerate PFs, etc. MOEAs in the literature can be broadly divided into three categories based on the selection strategy employed: dominance-, decomposition-, and indicator-based MOEAs. Each category of MOEAs has its advantages and disadvantages when solving MOPs with diverse characteristics. In this work, we propose a Hybrid Selection based MOEA, referred to as HS-MOEA, which is a simple yet effective hybridization of the dominance-, decomposition- and indicator-based concepts. In other words, we propose a new environmental selection strategy in which Pareto dominance, reference vectors and an indicator are combined to effectively balance the diversity and convergence properties of the MOEA during the evolution. The superior performance of HS-MOEA compared to state-of-the-art MOEAs is demonstrated through experimental simulations on the DTLZ and WFG test suites with up to 10 objectives.
Introduction
Multi-objective optimization problems (MOPs) contain two or more conflicting objectives that need to be optimized simultaneously. MOPs with more than three objectives are referred to as many-objective optimization problems (MaOPs). Due to the conflicting nature of the objectives, the optimization of MOPs results in a set of trade-off solutions instead of a single optimal solution. For better decision-making, the trade-off solutions are expected to be optimal and well-spread to cover the entire decision range. Multi/Many-objective Optimization Evolutionary Algorithms (MOEAs), due to their population-based nature, can provide the entire set of trade-off solutions in a single run and are extensively used to solve MOPs. MOEAs achieve this with the help of three leading operators, namely mating selection, recombination, and environmental selection. Mating selection is responsible for selecting high-quality parents that form a mating pool and are employed to generate offspring solutions. The recombination process generates quality offspring solutions by combining various solutions (crossover) and/or by perturbing a single solution (mutation) from the mating pool. Finally, the goal of environmental selection is to select prospective solutions from the combination of parent and offspring solutions for further evolution. As the performance of MOEAs significantly depends on environmental selection, various environmental selection operators have been proposed, based on which MOEAs are classified as Pareto-based1, Decomposition2,3 or Reference Vector-based, and Indicator-based MOEAs4. Each class of MOEAs possesses its own advantages and disadvantages in achieving a proper balance between convergence and diversity as the number of objectives increases (i.e., MaOPs) and depending on the characteristics of the problem.
Since the output of an MOEA is a set of Pareto optimal solutions, the Pareto dominance relation naturally becomes a criterion to distinguish solutions during the evolutionary process. In Pareto-based MOEAs (PMOEAs)5,6,7,8, when solutions are not distinguishable by Pareto dominance, a diversity criterion is employed where diverse solutions are preferred over crowded ones. In MaOPs, as the number of objectives increases, the proportion of non-dominated solutions in a population tends to increase exponentially9, resulting in the failure of the Pareto dominance-based criterion in distinguishing solutions. This places more emphasis on the density-based secondary selection criterion, referred to as ‘active diversity promotion’10. The ineffectiveness of Pareto dominance in providing the required selection pressure as the number of objectives increases limits the scalability of PMOEAs. In addition, as the number of objectives increases, diversity-preserving mechanisms such as crowding distance and clustering become computationally expensive. Therefore, reference vectors are employed to reduce the computational complexity while maintaining diversity11.
Unlike PMOEAs that compare individuals using two criteria (i.e., dominance relation and density), Indicator-based MOEAs (IMOEAs)12 adopt a single value, referred to as an indicator, to measure both convergence and diversity (IBEA12, ISDE+13). However, developing an indicator that balances both diversity and convergence is challenging, and balancing them becomes more difficult as the number of objectives increases. Some indicators12 are biased towards convergence, while others favour diversity14. Therefore, in14, a stochastic combination of both convergence- and diversity-biased indicators is considered. The IMOEA based on hypervolume15 is effective but computationally expensive. ISDE+ is computationally efficient but fails to preserve the essential corner solutions.
In Reference Vector-based MOEAs (RV-MOEAs)16,17, the population members are guided towards the optimal Pareto Front (PF) in the directions specified by the weight or reference vectors. In general, the reference vectors are selected by sampling a uniform set of points on the hyperplane \(\sum_{i}{f}_{i}=1\) in the normalized M-objective space, referred to as the Normal Boundary Intersection (NBI) method18. In other words, it is implicitly assumed that the optimal PF is bounded by a unit simplex of reference vectors and is non-degenerate, continuous, and smooth without significant nonlinearities. However, there exist several MOPs characterized by degenerate and discontinuous PFs. As a result, several of the uniform weight vectors fail to get associated with any of the solutions and are referred to as ineffective weight vectors. In addition, it has been observed that the number of non-dominated solutions obtained by MOEA/D19, a primitive RV-MOEA, is often much smaller than the number of weight vectors because (1) multiple weight vectors can share a single good solution, and (2) all solutions are not always non-dominated. Therefore, the Pareto dominance criterion has been integrated into RV-MOEAs (NSGA-III11,20, RVEA21, MOEA/DD22, TDEA8, PMEA23). On the other hand, a set of uniform weight vectors may not be able to approximate PFs of different sizes and shapes. In other words, the initialization of the weight vectors should depend on the shape and size of the PF, which may not be known in advance. Therefore, RV-MOEAs24,25,26,27 with weight vector adaptation during the evolution were proposed to effectively handle MOPs with regular as well as irregular PFs. Instead of adapting the weight vectors, a combination of uniform weight vectors and a secondary criterion (e.g., the polar-metric23) to select the solutions corresponding to the ineffective weight vectors has also been investigated.
Motivated by the work in23, we propose a hybridized framework, referred to as HS-MOEA, that employs ISDE+ as a secondary criterion, in addition to Pareto dominance, to select solutions corresponding to the ineffective weight vectors. In other words, the aim of this study is to develop a new environmental selection strategy that benefits from the advantages of Pareto-, decomposition- and indicator-based approaches. First, Pareto dominance prevents the selection of dominated solutions. Second, weight vectors assist in the selection of well-diversified and convergent solutions in each generation. Third, if the weight vectors fail to differentiate the high-quality parent solutions, the indicator assists the selection process by considering both convergence and diversity. The ability of ISDE+ to select a set of converged and diverse solutions from the unselected ones with respect to a set of already selected solutions is expected to aid the uniform weight vectors in achieving better convergence and diversity.
The rest of this paper is organized as follows. The second section presents the preliminaries. The third section introduces related work and motivation for the current study. The fourth section contains details of HS-MOEA. The fifth section presents experimental setup and comparison results of HS-MOEA with a number of state-of-the-art MOEAs. The last section presents the conclusions and future directions.
Preliminaries
Generally, an MOP is formulated as:

$$\underset{x\in \Omega }{\min}\; F(x) = {\left({f}_{1}(x), {f}_{2}(x),\dots ,{f}_{M}(x)\right)}^{T}\quad (1)$$

where \(x\) represents a \(D\)-dimensional decision vector in the decision space \(\Omega \), and \(M\) is the number of objectives.
In multi-objective optimization, the following concepts have been well defined and widely applied.
1. Pareto Dominance:
For any two solutions \(x\) and \(y\), \(x\) is said to dominate \(y,\) denoted as \(x\prec y\) iff \({f}_{i}(x)\le {f}_{i}(y)\) \(\forall \) \(i\in \{\mathrm{1,2},\dots ,M\}\) and \({f}_{j}(x)<{f}_{j}(y)\) for at least one \(j\in \{\mathrm{1,2},\dots ,M\}\).
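For illustration, the dominance relation above can be expressed compactly in code. The following is a minimal sketch (the function name and the use of NumPy are our own choices, not part of the original work), assuming minimization of all objectives:

```python
import numpy as np

def dominates(fx, fy):
    """Return True if objective vector fx Pareto-dominates fy (minimization):
    fx is no worse in every objective and strictly better in at least one."""
    fx, fy = np.asarray(fx, float), np.asarray(fy, float)
    return bool(np.all(fx <= fy) and np.any(fx < fy))
```

For example, `[1, 2]` dominates `[2, 3]`, whereas `[1, 3]` and `[2, 2]` are mutually non-dominated.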
2. Pareto Optimality:
A solution \({\mathrm{x}}^{*}\) is said to be Pareto-optimal if there is no other solution \(\mathrm{x}\in\Omega \) such that \(\mathrm{x}\prec {\mathrm{x}}^{*}\).
3. Pareto-optimal Set (PS):
It is the set of all Pareto-optimal solutions and is defined as \(\mathrm{PS}=\left\{\mathrm{x}\in\Omega \right|\mathrm{x is Pareto optimal}\}\).
4. Pareto-optimal Front (PF):
It is the image of the Pareto-optimal set in the objective space and is defined as \(PF=\left\{f(x)\in {R}^{M}\right|x\in PS\}\).
5. Ideal point:
Ideal point is a vector \({\mathrm{z}}^{*}=({\mathrm{z}}_{1}^{*},{\mathrm{z}}_{2}^{*},\dots ,{\mathrm{z}}_{\mathrm{M}}^{*} )\), which is the infimum of \({f}_{i}\) for each \(\mathrm{i}\in \{\mathrm{1,2},\dots ,\mathrm{M}\}\) in the PF.
6. Nadir point:
Nadir point is a vector \({\mathrm{z}}^{\mathrm{nad}}=({\mathrm{z}}_{1}^{\mathrm{nad}},{\mathrm{z}}_{2}^{\mathrm{nad}},\dots ,{\mathrm{z}}_{\mathrm{M}}^{\mathrm{nad}} )\), which is the supremum of \({f}_{i}\) for each \(\mathrm{i}\in \{\mathrm{1,2},\dots ,\mathrm{M}\}\) in the PF.
7. Weight vector:
A weight vector \(w\) is an \(M\)-dimensional vector \(w=({w}_{1},{w}_{2},\dots ,{w}_{M})\) such that \({\sum }_{i=1}^{M}{w}_{i}=1\) and \({w}_{i}\ge 0 \forall i\in \{\mathrm{1,2},\dots ,M\}\). The Normal Boundary Intersection (NBI) method is a systematic approach that places points on a normalized hyperplane, i.e., on a \((M-1)\)-dimensional unit simplex. It generates \(\left(\genfrac{}{}{0pt}{}{H+M-1}{M-1}\right)\) weight vectors, where \(M\) is the number of objectives of the problem and \(H\) is the number of divisions considered along each objective coordinate. Moreover, all the generated points are uniformly distributed on this lattice structure.
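The simplex-lattice construction above can be sketched in code; this is an illustrative stars-and-bars implementation of the Das–Dennis procedure (function name ours, not the authors' code):

```python
from itertools import combinations
import numpy as np

def das_dennis_weights(M, H):
    """Generate all C(H+M-1, M-1) weight vectors whose components are
    non-negative multiples of 1/H and sum to one (the NBI simplex lattice)."""
    weights = []
    # Choose positions of M-1 "bars" among H stars and M-1 bars;
    # the star counts between bars give one lattice point each.
    for bars in combinations(range(H + M - 1), M - 1):
        counts, prev = [], -1
        for b in bars:                      # stars between consecutive bars
            counts.append(b - prev - 1)
            prev = b
        counts.append(H + M - 2 - prev)     # stars after the last bar
        weights.append([c / H for c in counts])
    return np.array(weights)
```

For M = 3 and H = 2 this yields C(4, 2) = 6 vectors, each summing to one.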
8. Penalty-based Boundary Intersection (PBI) Operator:
Let \(\mathrm{w}=({\mathrm{w}}_{1},{\mathrm{w}}_{2},\dots ,{\mathrm{w}}_{\mathrm{M}})\) be a direction vector, where \(\sum_{\mathrm{i}}{\mathrm{w}}_{\mathrm{i}}=1\). The PBI operator is defined as:
$$ \begin{aligned} g^{pbi} \left( {x,w,z^{*} } \right) &= d_{1} + \theta d_{2}, \hfill \\ {\text{where}}\quad d_{1} &= \left( {F\left( x \right) - z^{*} } \right)^{T} w/\|w\|, \hfill \\ d_{2} &= \| F\left( x \right) - z^{*} - \left( {d_{1} /\|w\|} \right)w\| \hfill \\ \end{aligned} \quad (2)$$

where \(\| \cdot \| \) denotes the \({L}_{2}\) norm and \(\theta \) is a penalty parameter.
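Equation (2) translates directly into code. The sketch below assumes NumPy and a default θ = 5 (a value commonly used in decomposition-based MOEAs; the definition above leaves θ unspecified):

```python
import numpy as np

def pbi(F, w, z_star, theta=5.0):
    """Penalty-based boundary intersection value g = d1 + theta * d2 (Eq. 2).

    d1 is the projection length of F(x) - z* onto the direction w;
    d2 is the perpendicular distance from F(x) - z* to the line along w."""
    d = np.asarray(F, float) - np.asarray(z_star, float)
    w = np.asarray(w, float)
    d1 = np.dot(d, w) / np.linalg.norm(w)
    d2 = np.linalg.norm(d - d1 * w / np.linalg.norm(w))
    return d1 + theta * d2
```

A solution lying exactly on the direction vector has d2 = 0, so only its convergence distance d1 is penalized.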
Related work and motivation
In RV-MOEAs, the objective vector corresponding to each solution is converted into a scalar value based on a series of uniformly distributed weight vectors. To maintain population diversity during the evolution, RV-MOEAs assign the same search-space preference to each direction vector. However, the performance of RV-MOEAs strongly depends on the shape of the PF28, which is not known in advance. In addition, the size and shape of the PF vary over generations. Therefore, it is essential to adapt the weight vectors during the evolution process or to employ a secondary selection criterion to aid the uniform weight vectors.
In the literature, attempts have been made to improve the performance of RV-MOEAs on MaOPs with both regular and irregular PFs. In NSGA-III11, significant changes to the selection operator were made compared to its predecessor NSGA-II, where diversity among population members is promoted by a set of well-spread reference points. The employment of reference vectors improves the scalability of the algorithm by reducing the computational complexity that arises due to the increase in the number of objectives. NSGA-III was further modified (referred to as A-NSGA-III20) so that ineffective reference points are re-allocated based on the distribution and association of the solutions. Ineffective reference vectors are the ones that do not have any population members associated with them. RVEA21 employs a scalarization approach, termed the Angle Penalized Distance, that assesses convergence by calculating the distance between the candidate solution and the weight vector. In27, a weight vector adaptation strategy was employed to enhance the performance of RVEA. TDEA8 enhances the convergence of NSGA-III in high-dimensional objective spaces by (1) incorporating a new dominance scheme and (2) employing the aggregation function-based fitness evaluation scheme of MOEA/D. In the Polar Metric based Evolutionary Algorithm (PMEA)23, a metric inspired by the PBI operator, referred to as the polar-metric (p-metric), is proposed to measure convergence and diversity. During environmental selection, a weight vector adjustment strategy is employed to select well-diversified solutions. The environmental selection of PMEA is demonstrated in Fig. 1, where xi and wi represent the solutions and weight vectors, respectively. The values on the perpendicular lines represent the p-metric values of the solutions with respect to the corresponding weight vectors. As in Fig. 1a, PMEA assigns \({x}_{1},{x}_{4}, {x}_{5}\) to \({w}_{2},{w}_{3}, {w}_{1}\), respectively, and selects them for future evolution.
Solutions \({x}_{2}\) and \({x}_{3}\) are not selected. In addition, \({w}_{4}\) does not have any associated solutions and is considered ineffective. The ineffective weight vector (\({w}_{4}\)) is re-initialized (as \({w}_{5}\)) to pass through the nearest non-selected solution (x3), as shown in Fig. 1b. After the re-initialization, solution x3, now associated with weight vector \({w}_{5}\), is selected. In other words, starting with uniform weight vectors, PMEA first selects solutions based on the p-metric. Later, a weight vector adaptation is performed to select the rest of the solutions necessary to create the population for the next generation. However, in the next generation, the weight vectors are uniformly re-initialized before evaluating the p-metric. From Fig. 1b, it can be noticed that the adapted weight vectors are not well diversified, which can affect the population diversity during the evolution process. In other words, adapting the weights in each generation based on the distribution of the population may not be appropriate.
Motivated by these observations, we propose to combine the advantages of Pareto, decomposition, and indicator methods in a single framework. In the proposed framework, the uniform weight vectors are assisted by ISDE+ indicator, which serves as a secondary criterion.
Hybrid selection based multi/many-objective evolutionary algorithm (HS-MOEA)
In this section, Hybrid Selection based Multi/Many-Objective Evolutionary Algorithm (HS-MOEA) is introduced. The different steps of HS-MOEA are detailed below.
Initialization
A set of uniform weight vectors (\(w\)) is generated using the NBI method; subsequently, a population of size \(N\) (equal to \(\left|w\right|\)) is initialized within the permissible boundaries, as shown in Line 01 of Algorithm 1.
Mating selection and offspring generation
In HS-MOEA, the mating selection is performed using the ISDE+ indicator. The indicator value of each solution in the population (Pt) at the tth generation is evaluated using Eq. (3) (Line 04, Algorithm 1).
where \(P_{SB\left( x \right)} \subseteq P_{t}\) is the set of solutions \(y\) such that \(SB\left(y\right)<SB\left(x\right)\), \(SB\) denotes the sum of normalized objectives, and \({N}_{SB(x)}\) represents the size of \({P}_{SB(x)}\).
The solutions with the highest ISDE+ values are considered better. Binary tournament selection based on the ISDE+ values is then performed to generate the mating pool (Line 05, Algorithm 1). After mating selection, the offspring population is generated using variation operators, namely mutation and crossover, as shown in Line 06, Algorithm 1. In the current work, the operators employed are Polynomial Mutation (PM)2 and Simulated Binary Crossover (SBX)2.
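Since Eq. (3) is not reproduced here, the following is only a simplified sketch of the ISDE+ idea13 as we read it: each solution is scored by its minimum shift-based distance to the better-converged solutions (those with smaller SB), and the best-converged solution receives a maximal sentinel score. The function name and the sentinel value are our own illustrative choices, not the published formula:

```python
import numpy as np

def isde_plus(F):
    """Illustrative sketch of the ISDE+ indicator (larger is better).

    F: (N, M) array of normalized objective vectors (minimization)."""
    F = np.asarray(F, float)
    order = np.argsort(F.sum(axis=1))       # ascending SB (convergence rank)
    score = np.empty(len(F))
    score[order[0]] = np.inf                # least SB: highest indicator value
    for k in range(1, len(order)):
        i = order[k]
        dists = []
        for j in order[:k]:                 # only better-converged solutions
            shifted = np.maximum(F[j], F[i])    # shift-based position of j
            dists.append(np.linalg.norm(F[i] - shifted))
        score[i] = min(dists)               # crowded solutions score low
    return score
```

A solution close (after shifting) to a better-converged one gets a small score and thus loses the binary tournament, which is how the indicator penalizes crowding.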
Normalization
Normalization (Line 07, Algorithm 1) maps the unscaled objective space to a scaled one so that badly scaled objectives are handled properly. In HS-MOEA, the normalization of the \({i}^{th}\) objective of a population member \(x\) is given as:

$$\overline{f}_{i}(x) = \frac{{f}_{i}(x)-{z}_{i}^{*}}{{z}_{i}^{nad}-{z}_{i}^{*}}\quad (5)$$
where \({z}_{i}^{*}\) and \({z}_{i}^{nad}\) are the lowest and highest values of the \({i}^{th}\) objective function, respectively.
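Assuming the min–max form common in the MOEA literature, the normalization step can be sketched as follows, with the ideal and nadir points estimated from the current population (naming is ours):

```python
import numpy as np

def normalize(F):
    """Min-max normalization of the objective matrix F (shape N x M) using
    the ideal point z* and nadir point z^nad estimated from the population."""
    F = np.asarray(F, float)
    z_star = F.min(axis=0)                      # estimated ideal point
    z_nad = F.max(axis=0)                       # estimated nadir point
    # Guard against a zero range on any objective.
    return (F - z_star) / np.maximum(z_nad - z_star, 1e-12)
```

After this mapping, every objective lies in [0, 1], so badly scaled objectives contribute equally to SB, distances and angles.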
Environmental Selection
The environmental selection selects a set of N converged but diversified solutions from a combined population (R) of size \(2N\) (Line 08, Algorithm 1). The working mechanism is detailed in Algorithm 2. Non-dominated sorting6 (Lines 01 ~ 02, Algorithm 2) is performed to classify the population \(R\) into several fronts (Fr) and identify the population \({P}^{ND}=\bigcup_{l=1:L}{Fr}_{l}\) (where \(L\) satisfies \(\left|\bigcup_{l=1:L}{Fr}_{l}\right|\ge N\) and \(\left|\bigcup_{l=1:L-1}{Fr}_{l}\right|<N)\).
Association
In HS-MOEA, the association procedure (Line 04, Algorithm 2) is performed in the normalized objective space, where the ideal point \({z}^{*}\) is shifted to the origin. At each generation, the individuals of the \({P}_{t}^{ND}\) population are associated with the reference vectors (\(w\)). For the association operator, the norm of each solution \(x\) in \({P}_{t}^{ND}\) is evaluated as:

$$\Vert F(x)\Vert = \sqrt{\sum_{i=1}^{M}{F}_{i}{(x)}^{2}}\quad (6)$$
Then, the angle between \(F(x)\) and \({w}^{i}\) is defined as:

$$angle\left(F\left(x\right),{w}^{i}\right)=\mathrm{arccos}\left(\frac{F\left(x\right)\circ {w}^{i}}{\Vert F(x)\Vert \, \Vert {w}^{i}\Vert }\right)\quad (7)$$

where \(F\left(x\right)\circ {w}^{i}=\sum_{j=1}^{M}{F}_{j}(x)\cdot {w}_{j}^{i}\) is the dot product of \(F\left(x\right)=\left({F}_{1}\left(x\right), {F}_{2}\left(x\right),\dots , {F}_{M}\left(x\right)\right)\) and \({w}^{i}=({w}_{1}^{i}, {w}_{2}^{i},\dots , {w}_{M}^{i})\).
During association, each solution is assigned to its closest reference vector. \({K}_{i}\) denotes the set of solutions associated with the weight vector \({w}_{i}\) during the association process; its size can range from \(0\) to \(N\). Figure 2 illustrates the association operator, where the filled circles are the solutions associated with the corresponding nearest weight vector.
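The association step reduces to an arccos of cosine similarities. An illustrative vectorized sketch (naming ours) assuming the objectives are already normalized with the ideal point at the origin:

```python
import numpy as np

def associate(F, W):
    """Assign each normalized objective vector to the reference vector
    subtending the smallest angle; returns one index per solution."""
    F = np.asarray(F, float)
    W = np.asarray(W, float)
    Fn = F / np.linalg.norm(F, axis=1, keepdims=True)
    Wn = W / np.linalg.norm(W, axis=1, keepdims=True)
    cos = np.clip(Fn @ Wn.T, -1.0, 1.0)     # cosine of each solution/vector angle
    return np.argmin(np.arccos(cos), axis=1)
```

Because arccos is monotonically decreasing, the same assignment could be obtained with `np.argmax(cos, axis=1)`; the explicit angle form mirrors Eq. (7).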
Two step selection
For each \(x\) and \(y\) \(\in \) \({K}_{i}\), \(x\) is preferred to \(y\) iff \(PBI\left(x,{w}^{i}\right)<PBI(y,{w}^{i})\) where \(PBI\left(x,w\right)={d}_{1}+\theta {d}_{2}\), \({d}_{2}=\) perpendicular distance of \(x\) over \(w\), \({d}_{1}=\) distance between the origin and the projected point of \(x\) over \(w\), \(\theta =\) penalty parameter. PBI refers to the penalty-based boundary intersection16.
First, one solution is selected from each non-empty \({K}_{i}\) \(\forall i=\mathrm{1,2},\dots ,N\) based on the \(PBI\) value and saved in \(S\) (the set of selected solutions). The remaining solutions are stored in \(U\) (the set of unselected solutions). If the size of \(S\) is \(N\), then the whole set \(S\) is declared as the parent population of the next generation (Line 06, Algorithm 2); otherwise, a second round of selection is performed. In the second round, \((N-\left|S\right|)\) solutions are to be selected from \(U\) using the ISDE+ indicator. For each \(x\in U\), the ISDE+ value, referred to as IUSDE+, is calculated (Line 08, Algorithm 2). To evaluate the indicator, the solutions in U are sorted in ascending order of the normalized sum of objectives (SB) (Line 09, Algorithm 2). The solution with the least SB is assigned the highest possible indicator value of one. To evaluate the IUSDE+ of a given solution \(x\in U\), the solutions in U with better convergence (smaller SB) than x, together with the solutions in set S, are shifted as in Eq. (8). Then, the \((N-\left|S\right|)\) solutions from U with the largest IUSDE+ are selected (P1) (Line 10, Algorithm 2) and added to S (Line 11, Algorithm 2), which becomes the population (P) for the next generation.
where \(A={U}_{SB}\left(x\right)\cup S\), with \({U}_{SB}\left(x\right)\subseteq U\) denoting the set of solutions \(y\in U\) such that \(SB\left(y\right)<SB\left(x\right)\); \(S\) and U represent the sets of solutions selected and unselected, respectively, by the weight vector association.
In other words, the use of the ISDE+ indicator enables the selection of converged, yet diverse solutions with respect to the already selected solutions (S). First, Pareto dominance prevents the selection of dominated solutions. Second, weight vectors assist in the selection of well-diversified and convergent solutions in each generation. Third, if the weight vectors fail to differentiate the high-quality parent solutions, the indicator assists the selection process by considering both convergence and diversity. The advantage of employing ISDE+ is that, in the second step, it selects solutions while accounting for those already selected through weight vector association. In other words, HS-MOEA benefits from both the reference vectors and the indicator.
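Putting the pieces together, the two-step environmental selection can be sketched as below. This is a simplified stand-in, not the authors' implementation: the ideal point is assumed at the origin (normalized space), θ = 5 is assumed for PBI, and the second step uses the plain SB ordering in place of the full IUSDE+ computation:

```python
import numpy as np

def two_step_select(F, W, N, theta=5.0):
    """Illustrative sketch of the two-step environmental selection.

    Step 1: associate each solution with the reference vector of smallest
    angle and keep the best-PBI solution of every non-empty niche (set S).
    Step 2: fill the remaining slots from the unselected set U; here the
    full ISDE+ computation is replaced by the SB ordering for brevity."""
    F = np.asarray(F, float)
    W = np.asarray(W, float)
    Fn = F / np.maximum(np.linalg.norm(F, axis=1, keepdims=True), 1e-12)
    Wn = W / np.linalg.norm(W, axis=1, keepdims=True)
    niche = np.argmin(np.arccos(np.clip(Fn @ Wn.T, -1.0, 1.0)), axis=1)
    S = []
    for k in np.unique(niche):              # step 1: one solution per niche
        members = np.where(niche == k)[0]
        d1 = F[members] @ Wn[k]             # projection along the vector
        d2 = np.linalg.norm(F[members] - np.outer(d1, Wn[k]), axis=1)
        S.append(int(members[np.argmin(d1 + theta * d2)]))
    U = [i for i in range(len(F)) if i not in S]
    U.sort(key=lambda i: F[i].sum())        # step 2: SB-based stand-in
    S += U[:max(0, N - len(S))]
    return sorted(S)
```

The stand-in second step keeps the best-converged leftovers; the actual algorithm instead maximizes IUSDE+ so that the fill-in solutions are also diverse with respect to S.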
Computational complexity
The non-dominated sorting (Line 01, Algorithm 2) of a population of size \(2N\) having M-dimensional objective vectors requires \(O(Nlo{g}^{M-2}N )\) computations11. Associating a maximum of \(2N\) population members with \(H (\approx N)\) reference points requires \(O(MNH)\) operations. Thereafter, calculating the ISDE+ of at most \(N\) population members requires \(O({N}^{2})\). Therefore, the computational complexity of one generation of HS-MOEA is \(O({N}^{2}lo{g}^{M-2}N )\) or \(O(M{N}^{2})\), whichever is larger.
Experimental setup, results and discussion
Experiments were conducted on 16 scalable test problems from the DTLZ29 and WFG30 test suites, comprising 7 and 9 problems, respectively. For each test problem, 2-, 4-, 6-, 8- and 10-objective instances are considered. The parameter values employed are presented in Table 1. On each instance, 30 independent runs were performed for each algorithm on a PC with a 3.30 GHz Intel (R) Core (TM) i7-8700 CPU and Windows 10 Pro 64-bit operating system with 16 GB RAM. As a stopping criterion, the maximum number of generations for DTLZ1 and WFG2 is set to 700, and for DTLZ3 and WFG1 it is set to 1000. For the other problems (DTLZ2, DTLZ4–7, and WFG3–9) it is set to 250. All algorithms considered employ a population size (N) of 100, 165, 182, 240 and 275 for 2-, 4-, 6-, 8- and 10-objectives, respectively13. Simulated binary crossover and polynomial mutation with distribution indices and probabilities set to \({n}_{c}=20\), \({n}_{m}=20\), \({p}_{c}=1.0\) and \({p}_{m}=1/D\), respectively, are employed.
In order to compare the efficiency of HS-MOEA with state-of-the-art algorithms such as PMEA, TDEA, RVEA, NSGAIII, MOEA/D, ISDE+, IBEA and ONEBYONE31, a quantitative indicator, namely HyperVolume (HV), is employed. A larger HV value indicates better performance. The experimental results (mean and standard deviation values of normalized HV) on the benchmark suites are presented in Table 2. In addition, statistical tests (t-test) at a 5% significance level were conducted to compare the significance of the difference between the mean metric values yielded by HS-MOEA and the state-of-the-art algorithms. The signs “ + ”, “−” and “≈” against the HV values indicate that HS-MOEA is statistically “better”, “worse” and “comparable” with respect to the corresponding algorithm, respectively. The last row of Table 2 represents the overall performance of HS-MOEA in terms of the number of instances in which it is better (Win-W), comparable (Tie-T) and worse (Loss-L) with respect to the corresponding algorithm.
As shown in Table 2 and Fig. 3, HS-MOEA significantly outperforms or is comparable to PMEA, TDEA, RVEA, NSGAIII, MOEA/D, MOEA/DD, ISDE+ and ONEBYONE in 72/80 ≈ 90%, 62/80 ≈ 77.5%, 67/80 ≈ 83.75%, 60/80 ≈ 75%, 65/80 ≈ 81.25%, 65/80 ≈ 81.25%, 47/80 ≈ 58.75% and 64/80 ≈ 80% of the cases, respectively. It can be observed that HS-MOEA consistently performs better than or similar to PMEA. Similarly, HS-MOEA outperforms ONEBYONE on most of the problems. However, in the case of WFG1 and WFG2, ONEBYONE performs better than HS-MOEA. On the other hand, TDEA, RVEA and NSGA-III perform better than HS-MOEA on the DTLZ1, WFG1 and WFG2 test problems. However, the improvements seem to be minimal compared to the advantages HS-MOEA achieves on other problems such as DTLZ5, DTLZ7 and WFG3–9. MOEA/D and MOEA/DD perform similarly to HS-MOEA, performing slightly better on DTLZ1. On WFG3, MOEA/D performs better than all the state-of-the-art algorithms, including HS-MOEA. However, the degraded performance of MOEA/D on the remaining 15 problems seems to outweigh its superior performance on WFG3.
Among the state-of-the-art algorithms, ISDE+ exhibits competitive performance compared to HS-MOEA. The superiority of ISDE+ over HS-MOEA can be seen on DTLZ1, WFG1, WFG6 and WFG9. Although the improvement is statistically significant, HS-MOEA remains close behind. Conversely, HS-MOEA has a slight advantage over ISDE+ on problems such as DTLZ7 and WFG2 that have disconnected PFs.
To demonstrate the effectiveness of HS-MOEA, a more detailed analysis corresponding to DTLZ1 and DTLZ7 is presented. The GD and Delta indicators, which reflect the convergence and diversity performance of MOEAs, are summarized in Tables 3 and 4, respectively. Lower values of both GD and Delta indicate the superiority of the algorithm. The convergence (GD) of HS-MOEA is consistently better than that of PMEA. However, the convergence of HS-MOEA lags behind ISDE+ on DTLZ1, which was designed to test the convergence performance of MOEAs. On the other hand, the diversity (Delta) of HS-MOEA is consistently better than that of ISDE+, although HS-MOEA fails to maintain the diversity with respect to PMEA on DTLZ7, which has a discontinuous PF. In other words, the convergence of HS-MOEA is better than or comparable to PMEA, while its diversity is better than or comparable to ISDE+.
The improved performance of HS-MOEA arises because it benefits from the best qualities of each component: (1) the ability of Pareto dominance to eliminate low-quality solutions, (2) the uniform weight vectors that maintain diversity, and (3) the ISDE+ indicator that promotes both convergence and diversity on MOPs with irregular or discontinuous PFs. Therefore, the performance of HS-MOEA is competitive or better in most cases. The significance is more visible on problems with discontinuous PFs such as DTLZ7.
Figures 4 and 5 present the parallel coordinates that describe the distribution of the solutions corresponding to PMEA, ISDE+ and HS-MOEA on 8-objective instances of DTLZ1 and DTLZ7. From the figures, it is evident that HS-MOEA is able to provide a well converged and diverse set of solutions compared to PMEA on both the DTLZ1 and DTLZ7 instances. However, the parallel coordinates of ISDE+ and proposed HS-MOEA seem nearly identical on both DTLZ1 and DTLZ7. From the results in Table 4, it is evident that ISDE+ slightly outperforms HS-MOEA on DTLZ1, which has continuous linear PF, while HS-MOEA performs better on DTLZ7 that has discontinuities in the PF.
Conclusion
This paper proposes a Hybrid Selection based Multi/Many-objective Evolutionary Algorithm, named HS-MOEA. In HS-MOEA, a new environmental selection strategy that benefits from the advantages of Pareto dominance, reference vectors and an indicator is proposed. HS-MOEA is compared with a number of state-of-the-art MOEAs on widely used test instances. Experimental results demonstrate the superiority of HS-MOEA over the compared algorithms, mainly on problems with discontinuous PFs such as DTLZ7. In the future, we would like to investigate the possibility of weight vector adaptation using the ISDE+ indicator. In other words, new positions of the ineffective weight vectors and the consequent adjustment of the effective weight vectors can be estimated by employing the indicator values.
References
Dutta, S. & Das, K. N. A survey on pareto-based eas to solve multi-objective optimization problems. Soft Comput. Prob. Solving 807–820. https://doi.org/10.1007/978-981-13-1595-4_64 (2019).
Mashwani, W. K. & Salhi, A. Multiobjective memetic algorithm based on decomposition. Appl. Soft Comput. 21, 221–243. https://doi.org/10.1016/j.asoc.2014.03.007 (2014).
Mashwani, W. K. & Salhi, A. Multiobjective evolutionary algorithm based on multimethod with dynamic resources allocation. Appl. Soft Comput. 39, 292–309. https://doi.org/10.1016/j.asoc.2015.08.059 (2016).
Hisao, I., Noritaka, T. & Yusuke, N. in 2008 IEEE Congress on Evolutionary Computation (IEEE World Congress on Computational Intelligence). 2419–2426.
Zitzler, E., Laumanns, M. & Thiele, L. TIK-report 103 (2001).
Deb, K., Pratap, A., Agarwal, S. & Meyarivan, T. A fast and elitist multiobjective genetic algorithm: NSGA-II. IEEE Trans. Evol. Comput. 6, 182–197. https://doi.org/10.1109/4235.996017 (2002).
Laumanns, M., Thiele, L., Deb, K. & Zitzler, E. Combining convergence and diversity in evolutionary multiobjective optimization. Evol. Comput. 10, 263–282. https://doi.org/10.1162/106365602760234108 (2002).
Yuan, Y., Xu, H., Wang, B. & Yao, X. A new dominance relation-based evolutionary algorithm for many-objective optimization. IEEE Trans. Evol. Comput. 20, 16–37. https://doi.org/10.1109/TEVC.2015.2420112 (2016).
Dutta, S., M, S. S., Mallipeddi, R., Das, K. N. & Lee, D.-G. A mating selection based on modified strengthened dominance relation for NSGA-III. Mathematics 9. https://doi.org/10.3390/math9222837 (2021).
Purshouse, R. C. & Fleming, P. J. On the evolutionary optimization of many conflicting objectives. IEEE Trans. Evol. Comput. 11, 770–784. https://doi.org/10.1109/TEVC.2007.910138 (2007).
Deb, K. & Jain, H. An evolutionary many-objective optimization algorithm using reference-point-based nondominated sorting approach, part i: Solving problems with box constraints. IEEE Trans. Evol. Comput. 18, 577–601. https://doi.org/10.1109/TEVC.2013.2281535 (2014).
Zitzler, E. & Künzli, S. in International conference on parallel problem solving from nature. 832–842 (Springer).
Pamulapati, T., Mallipeddi, R. & Suganthan, P. N. ISDE+—An indicator for multi and many-objective optimization. IEEE Trans. Evol. Comput. 23, 346–352. https://doi.org/10.1109/TEVC.2018.2848921 (2019).
Li, B., Tang, K., Li, J. & Yao, X. Stochastic ranking algorithm for many-objective optimization based on multiple indicators. IEEE Trans. Evol. Comput. 20, 924–938. https://doi.org/10.1109/TEVC.2016.2549267 (2016).
Bader, J. & Zitzler, E. HypE: An algorithm for fast hypervolume-based many-objective optimization. Evol. Comput. 19, 45–76. https://doi.org/10.1162/EVCO_a_00009 (2011).
Zhang, Q. & Li, H. MOEA/D: A multiobjective evolutionary algorithm based on decomposition. IEEE Trans. Evol. Comput. 11, 712–731. https://doi.org/10.1109/TEVC.2007.892759 (2007).
Zhang, Q., Liu, W. & Li, H. in 2009 IEEE Congress on Evolutionary Computation. 203–208.
Das, I. & Dennis, J. E. Normal-Boundary Intersection: A New Method for Generating the Pareto Surface in Nonlinear Multicriteria Optimization Problems. SIAM J. on Optimization 8, 631–657. https://doi.org/10.1137/s1052623496307510 (1998).
Ishibuchi, H., Doi, K., Masuda, H. & Nojima, Y. in 2015 IEEE Symposium Series on Computational Intelligence. 861–868.
Jain, H. & Deb, K. An evolutionary many-objective optimization algorithm using reference-point based nondominated sorting approach, part ii: Handling constraints and extending to an adaptive approach. IEEE Trans. Evol. Comput. 18, 602–622. https://doi.org/10.1109/TEVC.2013.2281534 (2014).
Cheng, R., Jin, Y., Olhofer, M. & Sendhoff, B. A reference vector guided evolutionary algorithm for many-objective optimization. IEEE Trans. Evol. Comput. 20, 773–791. https://doi.org/10.1109/TEVC.2016.2519378 (2016).
Li, K., Deb, K., Zhang, Q. & Kwong, S. An evolutionary many-objective optimization algorithm based on dominance and decomposition. IEEE Trans. Evol. Comput. 19, 694–716. https://doi.org/10.1109/TEVC.2014.2373386 (2015).
Xu, H., Zeng, W., Zeng, X. & Yen, G. G. A polar-metric-based evolutionary algorithm. IEEE Trans. Cybern. 51, 3429–3440. https://doi.org/10.1109/TCYB.2020.2965230 (2021).
Liu, H., Chen, L., Zhang, Q. & Deb, K. in 2016 IEEE Congress on Evolutionary Computation (CEC). 4763–4769.
Asafuddoula, M., Singh, H. K. & Ray, T. An enhanced decomposition-based evolutionary algorithm with adaptive reference vectors. IEEE Trans. Cybern. 48, 2321–2334. https://doi.org/10.1109/TCYB.2017.2737519 (2018).
Cai, X., Mei, Z. & Fan, Z. A decomposition-based many-objective evolutionary algorithm with two types of adjustments for direction vectors. IEEE Trans. Cybern. 48, 2335–2348. https://doi.org/10.1109/TCYB.2017.2737554 (2018).
Liu, Q., Jin, Y., Heiderich, M. & Rodemann, T. in 2019 IEEE Congress on Evolutionary Computation (CEC). 1726–1733.
Ishibuchi, H., Setoguchi, Y., Masuda, H. & Nojima, Y. Performance of decomposition-based many-objective algorithms strongly depends on pareto front shapes. IEEE Trans. Evol. Comput. 21, 169–190. https://doi.org/10.1109/TEVC.2016.2587749 (2017).
Deb, K., Thiele, L., Laumanns, M. & Zitzler, E. Scalable multi-objective optimization test problems. in Proceedings of the 2002 Congress on Evolutionary Computation, CEC'02 (Cat. No.02TH8600) 825–830 Vol. 1 (2002).
Huband, S., Hingston, P., Barone, L. & While, L. A review of multiobjective test problems and a scalable test problem toolkit. IEEE Trans. Evol. Comput. 10, 477–506. https://doi.org/10.1109/TEVC.2005.861417 (2006).
Liu, Y., Gong, D., Sun, J. & Jin, Y. A many-objective evolutionary algorithm using a one-by-one selection strategy. IEEE Trans. Cybern. 47, 2689–2702. https://doi.org/10.1109/TCYB.2016.2638902 (2017).
Acknowledgements
This work was supported by the Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Education (2021R1I1A3049810).
Contributions
Conceptualization, S.D. and R.M.; methodology, S.D. and R.M.; software, S.D.; validation, S.D. and R.M.; formal analysis, S.D. and R.M.; investigation, K.N.D.; resources, K.N.D.; data curation, S.D. and R.M.; writing—original draft preparation, S.D.; writing—review and editing, R.M. and K.N.D.; visualization, S.D.; supervision, R.M.; project administration, K.N.D. All authors have read and agreed to the published version of the manuscript.
Ethics declarations
Competing interests
The authors declare no competing interests.
Additional information
Publisher's note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
About this article
Cite this article
Dutta, S., Mallipeddi, R. & Das, K.N. Hybrid selection based multi/many-objective evolutionary algorithm. Sci Rep 12, 6861 (2022). https://doi.org/10.1038/s41598-022-10997-0