1 Introduction

China's construction industry is undergoing profound changes. Traditional construction methods in China consume large amounts of energy and produce large amounts of waste and carbon emissions. In 2020, the construction industry accounted for 46.5% of the country's total energy consumption and 51.3% of its total carbon emissions [1], and these figures are still rising as housing demand grows. For the sustainable development of the country, China has announced its goals of cutting carbon emissions from construction and providing higher housing quality, which require new construction methods for the industry.

Prefabricated construction (PC) differs from traditional construction methods: it transfers part of the cast-in-situ work (mainly component manufacturing) into factories, which is known as off-site work. PC is therefore more efficient, cleaner, and of better quality. Compared with traditional construction methods, PC can reduce construction waste, shorten project periods, reduce carbon emissions, save energy, and raise productivity [2,3,4,5,6].

Recently, China has introduced a series of policies, including standards, initiatives, guidance, and incentives, to develop PC [7]. Yet PC in China still faces many challenges, including immature technology, a lack of experienced workers, and unfamiliarity with the working process [8,9,10]. These problems can lead to rework, which increases costs and worktime. Other countries have faced such problems before: North America [11] once encountered transportation issues with PC components; PC development in Australia [12] was hindered by an unfavorable research environment and deficient planning and marketing; PC in South Korea [13] suffered from high construction costs due to immature design, workforce-related problems, and insufficient market size. China still faces these problems to varying degrees.

Rework [14] (whether due to errors or to the nature of the process) is one of the major reasons for high costs and prolonged working time in Chinese PC practice. The ideal PC workflow is close to a concurrent process [15], but the common practice adopted in China is a linear process, which leads to rework. DSM is an optimization method well suited to the manufacturing industry [16]; it focuses on the coupled processes that cause wide-ranging rework, and it can make a linear process concurrent from a macro perspective. As PC shares production characteristics with manufacturing, DSM can be applied to optimize the PC process. Researchers have found that the design stage is critical: errors in this phase can cause a wide range of rework [14]. It is therefore necessary to confirm the design work and optimize the process through DSM. This paper proposes a DSM-based optimization model to reduce costs and worktime; the model improves the optimization methods available for PC processes and is valuable for PC planning and control.

The remainder of this paper is organized as follows: Sect. 2 briefly reviews the relevant research; Sect. 3 introduces the DSM method and the building of the DSM model; Sect. 4 presents the optimization process and discusses the results with a case study; and Sect. 5 summarizes the work.

2 Literature Review

2.1 Concurrent Engineering

Traditional Chinese building design is a linear process, while PC design is better suited to a concurrent one [15, 17]. The comparison is shown in Fig. 1.

Fig. 1
A set of two workflows: A, the concurrent workflow, links structural design and functional design through mutual information input and feedback; B, the linear workflow, contains the rework cycle between structural design and functional design.

The comparison of the PC design process and the traditional design process

PC design needs to be integrated [18]: PC component manufacturing and construction can ideally be considered as early as the design stage [19]. However, the PC design process adopted in most Chinese projects is the linear type [17], meaning that the work teams of different phases communicate little at early stages. This communication gap persists throughout PC design [15], increasing the chance of errors. Hence, rework can occur widely, raising costs and worktime.

Concurrent Engineering (CE) [20] is an idea proposed by the IDA (Institute for Defense Analyses) and widely adopted in the manufacturing industry. CE emphasizes customer-oriented (integrated) design, whole-lifecycle design, and communication throughout all design stages [15]. It rearranges the product design process and integrates strongly interacting processes into clusters [21]. Information exchanges can remain intensive, but within smaller groups, because they occur in different clusters concurrently [22]. This enhances communication between work teams within each cluster and prevents wide-ranging rework, providing higher working efficiency and better products. Current studies [23, 24] indicate that the concurrent way of construction integrates project teams, improves building quality, and reduces project time and costs, giving companies a competitive edge. Therefore, to reduce rework, the design process should be optimized to make it concurrent.

2.2 Optimization Methods

There are many studies on optimization methods for complex processes, e.g., the Critical Path Method (CPM), the IDEF method, and the DSM method.

CPM [25] is usually combined with PERT for evaluation. It optimizes a process by identifying the critical path within the process system and optimizing the work on that path to minimize overall worktime and resource usage. CPM is widely used in project management. Sroka et al. [26] linearized a CPM-Cost model for construction projects and tested it on an example. Mazlum et al. [27] used fuzzy CPM and PERT to plan and improve a project for an online internet branch. These studies show the utility and compatibility of CPM. However, CPM does not address the interactions between work items, so it is not an ideal way to solve process-coupling problems.

The IDEF method [28] represents process systems graphically. It is a comprehensive modeling family comprising 16 specific methods that describe a system in terms of functions, information exchanges, process capture, etc. In a product study, Li et al. [29] used three of the IDEF methods to build a functional model, a message model, and a model of semantic relationships. However, the IDEF method cannot explicitly show the interaction strength between processes, so it is not an ideal way to deal with coupled processes either.

The DSM method shows the strength of interdependence between processes in a matrix. It provides a simple but effective tool for modeling the interactions in a system and evaluating its coupling strength, and the DSM model is compatible with multiple optimization approaches for improving coupling relations. It can therefore address coupling problems well. Pei et al. [30] used a DSM model and a genetic algorithm (GA) to cluster the processes of a complex product; the model finds the optimum results with the least coupling strength and evolves over iterations to ensure the credibility of the optimal results. DSM is often used in the manufacturing industry [31] for project management and is compatible with software simulation [32]. PC shares features with manufacturing, including component production and shipment and multiple stakeholders [14]. Unlike traditional construction, component production and erection must also be considered, which means an integrated design [33] is needed to coordinate different project teams, fulfill stakeholders' needs, and manage this concurrent process. DSM is therefore suitable for PC processes, as it has shown its utility in the integration of manufacturing projects and in concurrent processes [34].

Some research has used DSM for PC process optimization at various depths. Chen et al. [35] built a QFD (Quality Function Deployment) model to form the DSM matrix of the PC component design process, together with an evaluation system; with this model, a GA can find the most reasonable working order. Wang et al. [34] and Wang [22] decomposed the PC design process and set up a process system; these studies introduced the combination of the DSM method and GAs to cluster processes. Shen et al. [14] proposed an optimization model and an evaluation system for PC processes based on DSM and built a risk management system for dealing with rework risks. In summary, the research shows the utility of combining DSM optimization with GAs and finds that the design stage is the most influential part of the work [14, 33]. However, current research only clusters the processes, which is not practical enough for PC design. It remains necessary to optimize the sequence of the detailed design process while considering rework impacts such as costs and time.

3 Methodology

This paper aims to find the optimal sequence of the design work. First, a questionnaire survey was conducted to verify the system of PC design processes, which was based on a literature review [17, 33, 36, 37]. Then, a matrix representing the process system was built and clustering was carried out based on graph theory. A DSM model with an improved GA was used for process optimization. The optimization model determines the optimum sequence of work within the clusters and evaluates the optimization effects.

3.1 DSM Model

DSM (Design Structure Matrix) [38] is a matrix-based method for optimizing projects. There are two types of DSM. The Boolean (binary) type contains only 0s and 1s: a 0 indicates that two items have no relationship, while a 1 indicates that they do. The Numeric Design Structure Matrix (NDSM) uses numbers to show the strength of relationships. Based on the directed graph (digraph) of a process chain, the DSM can represent a complex system and show the interactions between processes. A digraph can be represented by a binary DSM or an NDSM, as in Fig. 2 (the digraph) and Fig. 3 (the DSM based on it), since a DSM has the same structure as the adjacency matrix of a digraph [39].

Fig. 2
A schematic diagram of a digraph labels nodes A, B, and C in a double-layer circle. Node A points to nodes B and C and node C points to A.

Digraph

Fig. 3
A schematic diagram of the DSM as a 4-by-4 grid. The first cell is blank, and the first row and column are labeled A, B, and C. The diagonal cells have cross marks.

DSM based on the digraph in Fig. 2
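The mapping from a digraph to its binary DSM can be sketched as below, using the digraph of Fig. 2 (A → B, A → C, C → A). Python is used for illustration only, and the row/column convention is an assumption, since it varies between DSM authors:

```python
import numpy as np

# Binary DSM for the digraph of Fig. 2: A -> B, A -> C, C -> A.
# Convention assumed here: entry (i, j) = 1 means work i sends
# information to work j (some DSM authors transpose this).
tasks = ["A", "B", "C"]
idx = {t: k for k, t in enumerate(tasks)}

dsm = np.zeros((3, 3), dtype=int)
edges = [("A", "B"), ("A", "C"), ("C", "A")]
for src, dst in edges:
    dsm[idx[src], idx[dst]] = 1

print(dsm)
# A two-way dependency (here A <-> C) marks a coupled, iterative pair.
```

A pair of symmetric 1s off the diagonal is exactly the "strongly connected" situation that Sect. 3.3 later detects via the accessibility matrix.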

3.2 Questionnaire Survey

This study surveyed experienced experts in the PC industry to obtain a general evaluation of the relationships between the work items for this optimization.

Two questionnaires were designed: one to identify the design processes and one to build the DSM model of the PC design process. The first contains the background information of the experts and a five-point recognition scale, and was sent out to confirm the process system. The results of the 11 returned questionnaires are shown in Fig. 4. The experts came from cities such as Beijing, Shanghai, and Chongqing, which are representative of PC development in China. Their recognition scores for the process system are shown in Table 2; the average score of 4.27/5 indicates positive feedback. Some modifications were also suggested in the questionnaires, including "The deepening design requires the complete involvement of the general contractor", "The component design will lag behind", advancing some contents of the scheme stage, and adding local product research. The confirmed process system is shown in Table 1.

The second questionnaire used a four-point scale for designers in the PC industry to determine the strength of relationships between the design processes. The scale measures the information exchanges between work items, such as the frequency of communication between designers at different stages and the number of transferred documents and files. Scores range from 0 to 3: 0 means no connection, 1 a mild relationship, 2 a moderate relationship, and 3 a strong relationship. The NDSM was then built from the results of this four-point scale. A five-point scale was also used to evaluate the importance of each work item in terms of rework impact.

Fig. 4
A set of three pie charts: A, job titles, with intermediate engineer having the highest share; B, years of working in the PC industry, where 2 to 5 years comprise 55%; C, job description, where PC design comprises 73%.

Background information of the participants

Table 1 The confirmed system of the PC design processes
Table 2 Recognition scores for the system

3.3 Optimization Model Building

Two approaches were considered to optimize the process. The first, based on graph theory, clusters the processes so that work in different clusters can be done concurrently. The second, a simulation using a GA, finds the optimal workflow within the clusters.

Optimization Based on Graph Theory. In graph theory, coupled processes have two-way links in the digraph, so they can be recognized by finding the strongly connected paths. As mentioned above, the digraph can represent the DSM model of the design process. If there is a path from work i to work j and also a path from j to i, the path is strongly connected. The processes on strongly connected paths are coupled processes (they have two-way links), as shown in Fig. 5.

Fig. 5
A flow diagram of the cyclic path from work i to work j.

Strongly connected path between i and j

An Accessibility Matrix (in which element (i, j) is 1 when there is a path from i to j and 0 otherwise) can be built from the digraph of the PC design process to find the strongly connected paths. If work i is strongly connected to work j, the product of element (i, j) and element (j, i) is 1. Therefore, the matrix D of Eq. (1) determines the strongly connected paths, and hence the clusters of coupled work.

$$D = P.*P^{T}$$
(1)

In the equation, P is the accessibility matrix derived from the NDSM (Fig. 6), P^T is its transpose, and .* denotes the element-wise product.
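Eq. (1) can be sketched as follows. The toy 4-work adjacency matrix is illustrative, not the paper's 22-process data, and Warshall's transitive-closure algorithm is assumed here as one way to obtain the accessibility matrix:

```python
import numpy as np

def reachability(adj: np.ndarray) -> np.ndarray:
    """Transitive closure of a binary adjacency matrix (Warshall's algorithm)."""
    p = adj.astype(bool).copy()
    for k in range(adj.shape[0]):
        # any path i -> k combined with k -> j yields a path i -> j
        p |= np.outer(p[:, k], p[k, :])
    return p.astype(int)

# Toy example: works 0-2 form a cycle; work 3 only receives information.
adj = np.array([[0, 1, 0, 0],
                [0, 0, 1, 0],
                [1, 0, 0, 1],
                [0, 0, 0, 0]])

P = reachability(adj)
D = P * P.T          # Eq. (1): element-wise product P .* P^T
print(D)
# Works i and j are coupled iff D[i, j] == 1 for i != j;
# here works 0, 1, 2 fall into one cluster, while work 3 stays outside.
```

This reproduces, in miniature, the clustering result of Sect. 4.1, where all but two processes end up in a single coupled cluster.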

Fig. 6
A matrix of the NDSM with 23 columns and 22 rows. The diagonal cells are blank, whereas the other cells contain numbers.

The strength of relationships between PC design work (NDSM)

Optimization Through the GA. The genetic algorithm (GA) [40] is an intelligent algorithm for solving multi-objective problems. Its idea originates from genetic operations (inheritance, crossover, mutation) and the principle of "survival of the fittest". The algorithm encodes an initial population, selects the fittest individuals with a fitness function, and lets the winners inherit, cross over, and mutate their codes; it then selects the fittest again and repeats the operation until a final solution is found or the predetermined number of iterations is reached. However, a GA may converge prematurely and produce an unwanted result. An improved algorithm [14] solves this problem by changing the crossover and mutation rates according to the convergence of the results. When the results of the offspring population are scattered, the crossover and mutation rates stay low to accelerate convergence; when the results are highly converged, the rates rise so that better results can be found, as shown in Eqs. (2) and (3). The GA thus avoids prematurity and the model can reach the optimal result. Roulette wheel selection was used to select the fittest individuals.

$$P_{c} = \begin{cases} P_{c}' \times (1/\beta), & \alpha > a \text{ and } \beta > b \\ P_{c}', & \text{otherwise} \end{cases}$$
(2)
$$P_{m} = \begin{cases} P_{m}' \times (1/\beta), & \alpha > a \text{ and } \beta > b \\ P_{m}', & \text{otherwise} \end{cases}$$
(3)

Here, Pc and Pm are the crossover and mutation rates of the improved algorithm, and Pc′ and Pm′ are their preset values. α is the average fitness of the population divided by its greatest fitness, and β is the minimum fitness divided by the greatest fitness; the fitness of each individual is obtained from Eq. (10). The constants a and b are preset values ranging from 0.5 to 1 and from 0 to 1, respectively; the lower their values, the more easily the algorithm converges.
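The adaptive rate rule of Eqs. (2) and (3) can be sketched as below. The preset rates `pc0`, `pm0` and the thresholds `a`, `b` are illustrative values chosen for the example, not values taken from the paper:

```python
def adaptive_rates(fitness, pc0=0.8, pm0=0.05, a=0.7, b=0.3):
    """Adaptive crossover/mutation rates per Eqs. (2)-(3).

    alpha = mean fitness / max fitness; beta = min fitness / max fitness.
    When the population is highly converged (alpha > a and beta > b),
    both rates are raised by a factor 1/beta to restore diversity;
    otherwise the preset rates apply. pc0, pm0, a, b are illustrative.
    """
    f_max = max(fitness)
    alpha = sum(fitness) / len(fitness) / f_max
    beta = min(fitness) / f_max
    if alpha > a and beta > b:
        return pc0 / beta, pm0 / beta
    return pc0, pm0

# A converged population triggers higher rates:
pc, pm = adaptive_rates([0.95, 0.96, 0.98, 1.0])
# A scattered population keeps the preset rates:
pc2, pm2 = adaptive_rates([0.10, 0.50, 0.60, 1.0])
```

Since β ≤ 1, dividing by β can only raise the rates, which matches the stated intent: more exploration once the population has converged.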

Based on the DSM, real-number encoding represents each candidate sequence explicitly as an ordered list of process numbers. According to the project goals, the objective functions were designed to find the results with the lowest working time and costs. The functions for multi-objective optimization are built from the Iteration Risk matrix (IR matrix) [41, 42], the Rework Factor matrix (RF matrix), and the Core Work evaluation CW(j). The IR matrix [43] reflects the impact of the process sequence on rework (errors that occur in the later part of the sequence cause more rework). Its elements are calculated according to Eq. (4), in which i and j denote different processes and n is the total number of processes.

$${\text{IR}}(i,j) = \exp (j/n - (i - j)/n)$$
(4)
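A minimal sketch of building the IR matrix from Eq. (4), using 1-based process indices as in the paper:

```python
import numpy as np

def iteration_risk(n: int) -> np.ndarray:
    """IR matrix of Eq. (4): IR(i, j) = exp(j/n - (i - j)/n).

    Indices i, j are 1-based as in the paper. Risk grows with the
    position j of the affected work and shrinks with the distance
    (i - j) between the two works.
    """
    ir = np.zeros((n, n))
    for i in range(1, n + 1):
        for j in range(1, n + 1):
            ir[i - 1, j - 1] = np.exp(j / n - (i - j) / n)
    return ir

ir = iteration_risk(5)   # small n = 5 example, not the 22-process case
```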

Based on research on rework propagation, the Rework Factor matrix (RF matrix) [44] gives the overall probability of rework. In Eq. (5), m is the number of steps of a path from process i to process j in the digraph, so m ranges from 1 to i − j. \(C_{i-j-1}^{m-1}\) is the total number of possible m-step paths from i to j, and k indexes a specific path. PR(i, j) is an element of the Rework Probability matrices [42] and gives the probability of rework when information is delivered from work i to work j. \(PR_k^{(m)}(i,j)\) is the product of all the PR values along path k.

$$RF(i,j) = 1 - \prod_{m = 1}^{i - j} \prod_{k = 1}^{C_{i-j-1}^{m-1}} \left( 1 - PR_{k}^{(m)}(i,j) \right)$$
(5)
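Eq. (5) can be illustrated by brute-force path enumeration. The 3-process `pr` matrix below is invented for the example, and this exhaustive approach is only practical for small n (the number of paths grows combinatorially):

```python
import itertools
import numpy as np

def rework_factor(pr: np.ndarray, i: int, j: int) -> float:
    """Overall rework probability RF(i, j) per Eq. (5); 1-based indices, i > j.

    An m-step path from process i back to process j passes through m - 1
    of the i - j - 1 intermediate processes; pr holds the single-step
    rework probabilities PR. RF is the probability that at least one
    path triggers rework, assuming independent paths.
    """
    between = range(j + 1, i)              # intermediate process indices
    survive = 1.0                          # prob. that no path triggers rework
    for m in range(1, i - j + 1):          # m = number of steps on the path
        for mids in itertools.combinations(between, m - 1):
            nodes = [i, *sorted(mids, reverse=True), j]
            p_path = np.prod([pr[a - 1, b - 1] for a, b in zip(nodes, nodes[1:])])
            survive *= (1.0 - p_path)
    return 1.0 - survive

# Illustrative 3-process PR matrix (values invented for the example):
pr = np.array([[0.0, 0.0, 0.0],
               [0.2, 0.0, 0.0],
               [0.1, 0.3, 0.0]])
rf_31 = rework_factor(pr, 3, 1)  # combines path 3->1 and path 3->2->1
```

For the matrix above, the direct path contributes probability 0.1 and the two-step path 0.3 × 0.2 = 0.06, so RF(3, 1) = 1 − 0.9 × 0.94.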

The Core Work evaluation (CW) considers the importance of each work item in terms of rework impact. In Eqs. (6) and (7), w_j1 scores the probability of rework, w_j2 the cost of resolving it, and w_j3 the extra worktime it causes.

$$w_{j} = w_{j1} \times \sum_{k = 2}^{3} w_{jk}$$
(6)
$$CW(j) = \frac{w_{j}}{\sum_{j = 1}^{n} w_{j}}$$
(7)
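Eqs. (6) and (7) reduce to a few lines of code; the questionnaire scores below are invented for illustration:

```python
import numpy as np

def core_work(w1, w2, w3):
    """Core Work weights per Eqs. (6)-(7).

    w1: rework probability scores, w2: rework cost scores,
    w3: extra worktime scores (all from the five-point questionnaire).
    """
    w = np.asarray(w1) * (np.asarray(w2) + np.asarray(w3))   # Eq. (6)
    return w / w.sum()                                        # Eq. (7): normalize

# Invented scores for three processes:
cw = core_work([4, 2, 5], [3, 1, 4], [2, 2, 3])
```

The normalization in Eq. (7) makes CW a relative weight, so the values sum to 1 across all processes.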

The multi-objective optimization model then seeks the minimum costs and time based on the aforementioned parameters and variables. The main equation f(x) and the related equations are shown as Eqs. (8), (9), and (10), where ωERC and ωERT are weight coefficients.

Main functions:

$$ERC = \sum_{i = 1}^{n} \sum_{j = i + 1}^{n} \left( RF(i,j) \times \sum_{u = i}^{n} \left( \mathrm{Cost}_{u} \times RF(u,i) \right) \times CW(j) \times IR(i,j) \right)$$
(8)
$$ERT = \sum_{i = 1}^{n} \sum_{j = i + 1}^{n} \left( RF(i,j) \times \sum_{u = i}^{n} \left( \mathrm{Time}_{u} \times RF(u,i) \right) \times CW(j) \times IR(i,j) \right)$$
(9)
$$f(x) = \min TEL = \omega_{ERC} \times ERC + \omega_{ERT} \times ERT$$
(10)
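A sketch of evaluating Eq. (10) for one candidate sequence follows. For brevity it substitutes the one-step rework probabilities for the full path-enumerated RF matrix of Eq. (5), and all input data, the weight values, and the random example are illustrative, not the paper's case data:

```python
import numpy as np

def total_expected_loss(order, cost, time, pr, cw, w_erc=0.5, w_ert=0.5):
    """TEL of Eq. (10) for one candidate sequence (illustrative sketch).

    order maps sequence positions to process ids; cost, time, pr and cw
    are reordered accordingly before Eqs. (8)-(9) are evaluated.
    """
    n = len(order)
    c = cost[order]
    t = time[order]
    p = pr[np.ix_(order, order)]
    w = cw[order]

    # Eq. (4) with 0-based loop indices mapped to 1-based i, j
    ir = np.exp([[(2 * (j + 1) - (i + 1)) / n for j in range(n)]
                 for i in range(n)])
    rf = p          # simplification: one-step rework prob. in place of Eq. (5)

    erc = sum(rf[i, j] * sum(c[u] * rf[u, i] for u in range(i, n))
              * w[j] * ir[i, j]
              for i in range(n) for j in range(i + 1, n))      # Eq. (8)
    ert = sum(rf[i, j] * sum(t[u] * rf[u, i] for u in range(i, n))
              * w[j] * ir[i, j]
              for i in range(n) for j in range(i + 1, n))      # Eq. (9)
    return w_erc * erc + w_ert * ert                           # Eq. (10)

# Random illustrative data for 5 processes:
rng = np.random.default_rng(0)
n = 5
tel = total_expected_loss(np.arange(n), rng.random(n), rng.random(n),
                          rng.random((n, n)), rng.random(n))
```

In the GA, this value (or its reciprocal as a fitness) is evaluated for every permutation-encoded individual, and the algorithm searches for the order that minimizes it.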

4 Optimization Results and Discussion

4.1 Cluster Result with Graph Theory

A real case is used to evaluate the model. According to the expert investigation, the NDSM of the design process is shown in Fig. 6. Based on graph theory and the NDSM, the accessibility matrix was built, as in Fig. 7, and the result of Eq. (1), which determines the clusters of coupled processes, is shown in Fig. 8.

Fig. 7
The 22-by-22 accessibility matrix of the NDSM. The cells contain 1s, except the bottom row and the rightmost column, which contain 0s apart from the final diagonal cell.

Accessibility Matrix

Fig. 8
The 22-by-22 matrix D of the work in the PC design process. The cells contain 1s, except the bottom two rows and the rightmost two columns, which contain 0s.

The matrix D

The result D shows that the work items in the PC design process are highly interconnected. All the processes except work No. 21 (PC molds design) and work No. 22 (Completion design) are coupled and are placed in one cluster.

4.2 Optimization Model Based on GA

Using the time and cost data of a practical case and the functions above, the optimal sequence of the design processes was obtained through the GA. The result is shown in Fig. 9. Fig. 11 shows that the maximum value of 1/TEL (i.e., the minimum TEL) converged after about 1200 iterations, providing a credible answer. The result also shows that the optimization reduces the cost (ERC) by 13.9% and the worktime (ERT) by 14.81%.

Fig. 9
A table with 22 columns and 2 rows showing the optimal sequence of design processes obtained through the GA. The last column contains Completion design.

Sequence of PC design process after optimization

The optimal result shows that the optimization cuts down cost and worktime by re-sequencing. However, there is a logical error in the resulting sequence, probably due to the subjectivity of the data: Structural calculation should start before the Design of structural components, because the latter relies on information provided by the former. The optimization result was therefore rectified, as in Fig. 10. The calculation shows that the final result achieves a 12% decrease in cost (ERC) and a 12.43% decrease in working time (ERT).

In the optimized sequence, some processes that originally tended to come later were moved ahead, such as Environment protection and energy saving design, Fire design, and Collision detection, because they are strong information-output processes. Accordingly, strong information-receiving processes such as Civil air defense design and Prefabricated rate calculation were moved back. The information-outputting processes start first and the information-receiving processes follow, so the information flows predominantly forward rather than backward, reducing the occurrence of rework. Overall, the optimized sequence of the work is more reasonable and has positive impacts on cost and worktime reduction.

According to the importance scores, process 1 (Prefabrication planning) and process 2 (Building design) are the most important work items, with the highest cost and worktime increases if rework occurs. Process 3 (Mechanical and electrical design) was rated the process with the greatest probability of rework. These processes must therefore be carefully implemented, and the corresponding stakeholders should communicate closely with each other.

Fig. 10
A table with 22 columns and 2 rows showing the rectified sequence of design processes. The last column contains Completion design.

The optimization result after rectification (final result)

Fig. 11
A line graph of the convergence of the optimization: the plotted value rises from about 4.38 to about 4.92, converging after roughly 1200 iterations.

The change of the minimum TEL in each population group with iterations

5 Conclusions

PC design is a critical part of the prefabrication process. Rework occurring in the linear design process increases costs and time, so the process needs to be optimized. A system of detailed PC design processes was confirmed through a literature review and expert investigation. A matrix of interactions between processes was constructed, and clusters of coupled processes were found based on DSM and graph theory. A DSM model based on the GA was built to find the optimal sequence of the PC design processes in a concurrent way, and a case study was used to demonstrate the optimization effect. Besides the positive outcomes, it was found that although PC design has concurrent features, the majority of the processes cannot be done separately, as indicated by the matrix D (the result of the element-wise product). Therefore, the optimized PC design is concurrent between clusters, while the processes within a cluster remain linear. With the logical sequencing of the detailed work in the clusters, the information flows predominantly forward, and the work of different phases can be done concurrently. Communication is thus enhanced and the probability of errors is reduced, which mitigates the impacts of rework.

In summary, the optimization model provides a new perspective on PC design, and the results and findings offer new insights into the development of PC in China. The model extends existing process optimization methods and has value for the planning and control of PC management. The main contributions are as follows:

  • A process system in the PC design stage was analysed, and the detailed processes were confirmed according to the literature review and expert investigation.

  • Necessary data were collected, including the strength of relationships between processes, the probability of rework, and the importance of each work item in terms of rework impact.

  • Based on the DSM and the GA, the entire optimization process was simulated in MATLAB. The results reduced the costs and worktime.

This study still calls for further research. In the future, more practical information and objective data should be collected to identify more detailed processes and improve the model. Other optimization methods, such as particle swarm optimization, discrete event simulation, and Euler nets, could be used to compare the results and find the optimal one.