Simulation-Optimisation of a Granularity Controlled Consumer Supply Network Using Genetic Algorithms

ARTICLE INFO — Article history: Received: 29 August, 2018; Accepted: 05 December, 2018; Online: 20 December, 2018

ABSTRACT — Decision support systems for Supply Chain (SC) management can be significantly improved if an effective, viable method is utilised. This paper presents a robust simulation-optimisation approach (SOA) for the design and analysis of a granularity-controlled, complex system known as a Consumer Supply Network (CSN), incorporating uncertain demand and capacity. The objectives pursued in this study are minimising the total cost of running the network and calculating the optimum order quantities and the optimum inventory capacity associated with each product family. A mixed integer nonlinear programming (MINLP) model was formulated, mathematically described, simulated and optimised using Genetic Algorithms (GA). Also, the influence of the problem's attributes (e.g. product classes, consumers, various planning horizons) and of the controllable parameters of the search algorithm (e.g. population size, crossover rate and mutation rate), as well as the mutual interaction of various dependencies, on the quality of the solution was scrutinised using the Taguchi method along with regression. The robustness of the proposed SOA was demonstrated by a series of representative case studies.


Introduction
The main challenges affecting today's Supply Chains (SCs) are globalisation, environmental and technological turbulence, and rapid changes in the economy. These have driven companies to recognise that, in order to remain competitive in the global market, they need to gain more from their SCs.
Supply Chains are defined as links (relationships) between every unit (enterprise) in a manufacturing process from raw materials to customers. Traditionally, products were made and flowed to consumers through SCs. However, due to globalisation and complexity of the economy, today's SCs are better characterised as Supply Networks (SNs).
Consumer Supply Networks (CSNs) refer to complex networks consisting of sets of companies working in unison to supply, manufacture, distribute and deliver final products and services to end-users (Figure 1), being controlled by information flow.
CSNs are examples of industrial systems that are naturally large, complex, stochastic and dynamic. These attributes translate into difficulties in representing the actual behaviour and in planning, optimising and anticipating performance. The combination of these attributes also makes the choice of an appropriate solution methodology difficult at best, if not simply impossible at this point in time [1]. Simulation is a powerful tool for the modelling, analysis and validation of CSNs. However, its major disadvantage is that it produces a very detailed analysis, but strictly for a given configuration. Simulation cannot change the configuration of the system, and any optimisation would be searching for the best combination of variables for that fixed system.

A recurrent, key issue when attempting to optimise a CSN is the granularity of the model. An appropriate granularity, i.e. the size of the smallest indivisible unit (of product, part, flow, time, etc.) of the process, makes the difference between a successful implementation of the optimisation methodology and an algorithm that does not converge or gets consistently stuck in local optima. Additionally, the chosen granularity has to be easy to translate into practice: a purely theoretical solution that cannot be implemented in real life is of little help.

This paper is an extension of the work initially presented at the IntelliSys Conference [2], in which a unique simulation-optimisation approach (SOA) within an integrated methodology was developed. A small-scale Multi-Period, Multi-Product Consumer Supply Network (MPMPCSN) model was designed using mixed integer non-linear programming (MINLP). Then, the optimum order quantities were determined using a GA optimisation algorithm, which simultaneously minimises the total inventory cost. In this way, the unique advantages of simulation were combined with an optimisation method, and higher-quality solutions were achieved.
Also, the quality of the solutions obtained by the proposed framework was checked by fine-tuning the search algorithm's parameters, combining the simulation model with the Taguchi method. Hence, in this study, a series of computational trials on realistic test problems is designed and analysed to demonstrate the generalisability of the proposed SOA for problems of similar size at different granularity levels.
The rest of this article is organised as follows: Section 2 is devoted to reviewing modelling methodologies that have been used to solve CSN problems. Section 3 presents the proposed MINLP model. Section 4 provides details about granularity. The optimisation module of the SOA methodology is described in Section 5. The numerical examples are given in Section 6 and discussed in Section 7. Section 8 concludes the paper.

Literature Review
A number of potential solution methods for the class of problems of similar size and complexity have been developed in the literature ranging from classical mathematical programming to hybrid and systematic methods [1,3].
Optimisation methodologies combined with mathematical models contribute mainly to solution validation. A stable optimal solution can be obtained for a given objective function subject to several constraints. However, such methods are unable to provide the gradient of the design space over time [4]. The extent of the optimisation problem cannot be expanded beyond a certain limit, as the complexity of the problem adversely affects the computational cost, which makes these methods less efficient and less practical [5]. This concern can be addressed by using simulation methodologies.
Simulation models can deal with all attributes of CSN problems, which makes them a powerful analytical tool in this area [6]. In particular, CSN simulation provides a model that suitably represents processes associated with specific business units (e.g. the ordering system, manufacturing plants, distribution centres) in the presence of uncertainty [7,8]. Simulation modelling methods almost always come together with mathematical and algorithm-based models. The main advantage of simulation approaches is the possibility to explore what-if scenarios that provide a deeper understanding of the dependencies in a system. The operations of a real system that are usually very dangerous, expensive or impractical to implement can be evaluated for resilience and robustness subject to various predefined inputs (e.g. time horizon, resources) and at any desired granularity level via simulation modelling. Using computer programming, the performance of a real system subject to controlled and environmental changes can be simulated; therefore, many input values and their combinations can be explored through simulation models [9]. Simulation models also offer flexibility in developing and assessing different scenarios, with reasonably high processing speed. In addition, an embedded standard reporting system makes them unique for the modelling, analysis and validation of complex systems.
As pointed out, independent deployment of optimisation and simulation methodologies has some benefits. However, it also has limitations. The main drawback of simulation models is that they can only work with a set configuration of a solution. On the other hand, finding the optimal solution by independently using the traditional optimisation approaches incurs heavy computational cost. Therefore, the integration of the two methods may lead to a uniquely efficient optimisation.
SOA is a key factor of modern design across industries [3]. SOA is often used in the design, modelling and analysis of systems, and it can provide an optimal setting for the parameters of a simulation model [10]. Due to its high computational requirements, researchers had long given little attention to the use of SOA in CSNs [10-12]; more recently, however, SOA has become a hot research topic for the optimisation of CSNs. In SOA, the optimisation core can search the solution space globally (the ergodicity of GA), whereas the simulation module acts as a quality assessment unit.
Following the advances in computational power, increased efforts have been made to leverage simulation for optimisation/simulation-based optimisation of hybrid systems with behaviours that can be discrete or continuous [13]. CSNs are hybrid systems with a high level of complexity.
Inventory control planning problems have been tackled using many metaheuristic algorithms [5,14-18]. GA has been widely used to solve related problems [19]. By exploring the solution space, GA finds optimal or near-optimal solutions. However, like other evolutionary algorithms (EAs), GA cannot carry out self-validation and risks converging to local optima [20]. Hence, a valid question is whether or not the obtained solution is a high-quality candidate.
The parameters of the search algorithm (population size, crossover and mutation rates), as well as the interactions between these parameters, have significant impacts on the quality of solutions, since the entire search population or its fitness function can be highly affected by variations of these parameters. This makes a mechanism for parameter tuning essential. However, it is very hard to perform perfect tuning due to the complexity of the interactions among an EA's parameters. Most often, trial and error is used in OR studies, but experimentally tuning the parameters this way is impractical and very expensive [21]. We therefore propose using statistical methods based on designed experiments as a more robust approach [22].
In [23], the authors present a multi-echelon SN simulation-based optimisation model for multi-criteria Production-Distribution (P-D) design. The model offers concurrent optimisation of the network's structure, the set of control strategies and the quantitative parameters of the control strategy. The modelling, simulation and subsequent optimisation of networked entities are performed using a graphical interface developed in C++. In that study, candidate solutions are evaluated by a discrete-event simulation (DES) module, and a multi-objective GA is developed to find compromise solutions regarding structural, qualitative and quantitative variables. The toolbox developed in the research considers a real Production-Distribution model, which makes it a unique decision support system. However, no evidence is shown regarding parameter tuning of the GA.
In [24], the authors describe a two-phase Mixed Integer Linear Programming model addressing the planning and scheduling systems of a build-to-order SN. They use GA to optimise the aggregate costs of both subsystems. Three different scenarios were developed, in which distinct recombination rates for genes were used to improve the quality of solutions.
In [25], the researchers model a P-D network over a tactical planning horizon with uncertain demand and capacity. The proposed algorithm incorporates a simulation and an optimisation module; each calculates the total P-D costs of the network. The problem is mathematically formulated as a MILP, and the fitness function (total cost) is evaluated via the simulation core. The solution resulting from the optimisation module is then compared recursively with the output obtained from the simulation module, and this procedure iterates until the difference between the two solutions reaches a set threshold. The study reports data obtained from implementing the proposed SOA on an SN problem of reduced scale. Although both the simulation and the optimisation module are included in the proposed approach, there is no interaction or connection between them: the simulation module is only used to produce initial values for the parameters of the mathematical model. Also, the capacity to generalise the model to similar or larger problems was not addressed. Moreover, no evidence was shown that a better-quality solution could be reached with different configurations of the optimisation parameters.
In [15], the authors developed a modified Particle Swarm Optimisation model (MPSO) for a location-allocation Supply Network problem. They formulated a two-echelon Distribution Network (DN) considering multi-product, multi-period inventory subject to uncertain seasonal demand. The determination of the order quantities and the vendors' locations are the main objectives of that paper. They used the Taguchi method to tune the parameters of the MPSO, considered parameter tuning in their model, and performed a sensitivity analysis for similar problems with different granularity levels.
In a similar study [26], the researchers developed a PSO algorithm attempting to find the maximum profit for a channel of a two-echelon SN for a single product. Sales quantity and production rate were used as the decision variables of the model. Using a combination of GA, PSO and simulated annealing (SA), they conducted a detailed sensitivity analysis. However, the improvement of the proposed heuristic is computed using another heuristic, which seems very inefficient.
In [27], the authors proposed a simulation-optimisation approach to reduce the number of delayed customer orders while keeping costs under control in an integrated production-distribution supply chain. The hybrid model combined linear programming and discrete-event simulation. This research shows the great potential of the SOA approach; however, no effort was made regarding the tuning of the control factors of the GA.
In [28], the researchers developed an agent-based simulation-optimisation model through which an online auction policy within the context of the agricultural supply chain was optimised. Three different scenarios, namely oversupply, balance and insufficient supply, with different demand and supply quantities, were presented to obtain the optimal lot size and to determine the optimum online auction policy for inventory control. However, no investigation into improving the quality of the solutions derived from the proposed methodology was provided.
An important observation concerning SOA studies is that, in almost all of them, the tuning of the model's variables (e.g. lead time, production rate) was only attempted in the optimisation module for small problems; good examples are [20] and [22]. In other studies, evidence in this regard seems to be missing altogether [23,29]. Furthermore, very few ([15,24]) reported efforts to tune the optimisation parameters: the selection methodology, mutation and crossover in GA, or the swarm's cognitive and social components in PSO. They reported that this had been done by trial and error, a typical approach in the majority of OR studies [21]: the simulation model is run several times and the better solution is selected. Due to the complexity of the interactions among the parameters of the search algorithm, as well as the high computational cost, it is unclear how many iterations would be sufficient for a problem of a given size. Moreover, as the scale of the problem increases, the complexity of the interactions increases exponentially, so the difficulties associated with this class of SN planning problems will escalate further if a more detailed model is simulated. It is therefore necessary to study the variation of solution quality in more depth.

This paper presents an integrated simulation-optimisation approach to solve a class of CSN problems using GA. The objective is to minimise the total cost while obtaining an optimum or near-optimum inventory level associated with each product family. An important feature of the problem under investigation is that both the demand and the inventory capacity are uncertain. The randomness of the uncertain parameters is captured by the simulation model, while the optimal quantities are searched for by GA. Also, a fine-tuning mechanism for the optimisation algorithm's controllable parameters is applied using Taguchi experimental design and ANOVA to improve the quality of the solution.
In Section 3, the mathematical model, parameters and notation of the proposed problem are summarised.

Mathematical Model
This section presents a mathematical model for a multi-product, multi-period consumer supply network. The model considers a planning horizon of T periods (indexed by t), a set of P product families (indexed by i) and a set of R retailers (indexed by j), with limited budget and inventory restrictions.
Among the parameters of the model are the backorder intensity rate and the capacity severity rate for each product family at the end of each period. The objective function (1) comprises the minimisation of the total CSN costs, consisting of ordering costs, purchasing costs, transportation costs from the manufacturing plants (MP) to the retailers (RE), inventory holding and handling costs at the distribution centre (DC), and backordering costs, subject to the set of constraints (2)-(4). Constraint (2) bounds the order quantity of a product family in a period between lower and upper limits; note that the maximum order quantity for a product family from a retailer cannot exceed a given multiple of the maximum demand quantity over the entire planning period. Constraint (3) limits the capacity of the inventory. The order quantity is a positive integer that is normalised between 0 and 1 by (4). Table 1 and Table 2 show a numerical representation of the order quantities and their normalised values for P = 3, R = 5 and T = 2.
Note: the first entry of Table 1 shows that the quantity of product family 1 to be manufactured for consumer 1 in time interval t = 1 is 259 units.
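Because the original equation typesetting did not survive extraction, the objective (1) and constraints (2)-(4) can only be sketched generically; all notation below (order quantity q_{ijt}, binary ordering decision y_{ijt}, inventory I_{it}, backorder B_{it}, unit costs c, demand D_{ijt}) is assumed for illustration rather than being the authors' own symbols:

```latex
\min \; Z \;=\; \sum_{t=1}^{T}\sum_{i=1}^{P}\sum_{j=1}^{R}
  \Big( c^{\mathrm{ord}}_{i}\, y_{ijt} + c^{\mathrm{pur}}_{i}\, q_{ijt}
      + c^{\mathrm{tr}}_{ij}\, q_{ijt} \Big)
  \;+\; \sum_{t=1}^{T}\sum_{i=1}^{P}
  \Big( c^{\mathrm{hold}}_{i}\, I_{it} + c^{\mathrm{back}}_{i}\, B_{it} \Big)
  \tag{1}
```

subject to order-quantity bounds of the form \(q^{\min}_{i} \le q_{ijt} \le \gamma \max_{t} D_{ijt}\) (2), an inventory capacity restriction \(\sum_{i} v_i I_{it} \le V\) (3), and the normalisation \(\tilde{q}_{ijt} = q_{ijt}/q^{\max} \in [0,1]\) (4).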
The decisions regarding the inventory level and the quantity of orders are related through (5). The order quantity is the main decision variable, since the inventory level is obtained recursively from it. The demand quantity is unknown but bounded; it can be expressed by probabilistic distribution functions such as the normal or uniform distributions. In this model, a uniform distribution between given lower and upper bounds is used to model the demand, as in (6).
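As a concrete illustration, bounded uniform demand in the spirit of (6) can be sampled as follows; the bounds, dimensions and function name are hypothetical, not the paper's actual data:

```python
import numpy as np

# Fixed seed so the sampled demands stay unchanged across optimisation runs,
# mirroring the setup where demand is generated once and then frozen.
rng = np.random.default_rng(seed=42)

def sample_demand(low, high, n_products, n_retailers, n_periods):
    """Bounded uncertain demand drawn from a uniform distribution, as in (6).
    low/high are assumed lower/upper demand bounds (illustrative values)."""
    return rng.integers(low, high + 1, size=(n_products, n_retailers, n_periods))

# A Small-scale-like instance: 3 product families, 5 retailers, 2 periods.
D = sample_demand(50, 300, n_products=3, n_retailers=5, n_periods=2)
```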
Also, each product family has a set unit volume, so the total volume of the orders, i.e. the total volume occupied by the inventory, is calculated by (7). If a solution breaks any constraint, it is infeasible and the associated evaluation should therefore be penalised in proportion to how severely the constraints are violated. In this problem, the penalty coefficients α1 and α2 are defined and applied to the fitness function via (8). The problem size, and especially changes in the planning period, result in changes of α1 and α2.
Also, the penalties weight the average number of backlogged orders and the average volume occupied by the inventory, respectively. Depending on the planning policy in use, these values may vary. For example, if the customer satisfaction rate is 100%, shortages are not allowed and the backlog term is zero. Conversely, if a company is unable to deliver on its promises on time, the backlog term can be set according to the safety stock level. Note that in both cases the inventory capacity cannot be exceeded, so the capacity violation term is zero. A candidate solution is regarded as feasible only if both conditions are satisfied.
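A minimal sketch of the penalised evaluation in the spirit of (8); the function name, arguments and weight values are hypothetical stand-ins for the paper's penalty coefficients:

```python
def penalised_fitness(total_cost, avg_backlog, avg_capacity_violation, alpha1, alpha2):
    """Fitness = network cost plus penalties proportional to constraint violation.
    A feasible solution (both violation terms zero) keeps its raw cost;
    alpha1/alpha2 are assumed to grow with problem size and planning period."""
    return total_cost + alpha1 * avg_backlog + alpha2 * avg_capacity_violation

feasible = penalised_fitness(100.0, 0.0, 0.0, alpha1=5.0, alpha2=7.0)    # no penalty
infeasible = penalised_fitness(100.0, 2.0, 1.0, alpha1=5.0, alpha2=7.0)  # penalised
```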

Granularity
In the systems engineering literature, granularity translates into the level of detail one can decide to consider in a model or decision-making process where the same functionality is expressed with different 'sized' designs [30]. In an SN, the size of the problem determines its granularity level, which has a significant influence on the computation time and the algorithm's efficiency. Measures such as the number of product families, the number of facilities and the number of planning periods are some of the important factors affecting the granularity level [31]. In this study, in order to verify the robustness of the proposed methodology, three case studies with different granularity levels are considered for the design of experiments, represented by a tree structure with two levels (Figure 2). The leaves at level 1 correspond to individual scenarios with distinct problem sizes, known as the Small, Medium and Large-scale problems. Level 1 is developed based on the problem size categories proposed by Mousavi, Bahreininejad, Musa and Yusof [15], shown in Table 3. The roots at level 2 are the numbers of experiments considered for each category; these are determined according to the number of parameters and the levels of variation of each parameter, which will be developed using the Taguchi method (see Section 6). Note: a problem with attribute values of 7, 6, 11 and 2 (cf. Table 3) is counted as a Medium-scale problem.

Solution Approach
To solve the MPMPCSN problem discussed in this paper, the GA optimisation method is used. GAs are based on the principles of natural selection and genetics, evolving better solutions through multiple consecutive generations. Selection, crossover and mutation are GA implementations of similar phenomena occurring in the natural world [23]. Based on the quality of the solutions, individuals have a probability to be selected, evolve into new generations and converge towards optimality. Finally, the solutions are tested against the termination criteria of the evolving procedure. A good search space and genetic operators must maintain an equilibrium between exploration and exploitation, which is key to reaching optimality [32-34].

Generation and Initialisation
The first step in implementing the GA is to generate a random population of solutions (chromosomes). Chromosomes are resizable according to the problem's attributes and vary based on the problem type, level of complexity, number and type of variables, granularity, etc. Each chromosome consists of several atomic structures, genes, representing the characteristics of the solution (e.g. number of suppliers, position of manufacturing plants, types of products considered) [35]. Real coding has been used for this type of problem (Figure 3).
The GA is designed to evolve over a number of generations, so having a large population has a serious impact on the computation time. A carefully selected population size that offers sufficient variety but still permits fast-enough evolution is needed.
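The real-coded initialisation step can be sketched as follows; the population size, gene count and function name are illustrative assumptions:

```python
import numpy as np

def init_population(pop_size, n_genes, low, high, rng):
    """Real-coded chromosomes: each gene is an order quantity normalised
    to [low, high]; one row per candidate solution."""
    return rng.uniform(low, high, size=(pop_size, n_genes))

pop = init_population(pop_size=4, n_genes=30, low=0.0, high=1.0,
                      rng=np.random.default_rng(0))
```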

Genetic Operators: Selection, Crossover, and Mutation
Genetic operators may affect the optimal fitness value achieved by the designed algorithm. The GA operators presented in this paper are selection, crossover and mutation. Roulette Wheel, Tournament and Ranked selection are the most popular selection mechanisms, and they are the ones used in this study [33,36].
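For a minimisation problem such as this one, Roulette Wheel selection must invert the cost so that cheaper solutions get larger slices of the wheel. The sketch below is one common way to do this, not necessarily the paper's exact implementation:

```python
import numpy as np

def roulette_select(population, costs, rng):
    """Roulette Wheel selection for minimisation: fitness = 1/cost,
    so a lower total network cost yields a higher selection probability."""
    fitness = 1.0 / np.asarray(costs, dtype=float)
    probs = fitness / fitness.sum()
    return population[rng.choice(len(population), p=probs)]

rng = np.random.default_rng(1)
population = [[0.1, 0.4], [0.9, 0.2]]
costs = [1.0, 1000.0]  # the first candidate is far cheaper
picks = [roulette_select(population, costs, rng) for _ in range(200)]
```

With these illustrative costs, the cheap candidate wins almost every spin of the wheel.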
In the following step, the offspring population is created by applying single-point crossover and mutation. New offspring are produced by combining the characteristics of two parents, and they can be better than both parents if they inherit the best characteristics from each. This mechanism is performed with a certain probability; throughout this study, Pc and Pm refer to the crossover and mutation probabilities, respectively. Two individuals are produced per randomly selected pair of parents, followed by mutating the genes of the offspring population with the specified probability. Mutation is implemented to preserve the variety of the solution pool and prevent the GA from getting stuck in local optima, by exploring the entire search space and maintaining diversity in the population [37]. It is also likely that some randomly lost genetic information is recovered through mutation. Pm should be set carefully too, such that diversity in the population is preserved without negatively affecting the overall fitness of the current population by removing good solutions. Mutation can finely tune the balance between exploration and exploitation; typically, the mutation rate is small (below 2-5%).
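The single-point crossover and per-gene mutation described above can be sketched as follows; the probabilities pc and pm stand in for the crossover and mutation rates, and the demo values are illustrative (pc is forced to 1.0 so the crossover branch always runs):

```python
import numpy as np

def crossover(p1, p2, pc, rng):
    """Single-point crossover applied with probability pc: children swap
    gene segments after a random cut point."""
    if rng.random() < pc:
        cut = int(rng.integers(1, len(p1)))  # cut strictly inside the chromosome
        return (np.concatenate([p1[:cut], p2[cut:]]),
                np.concatenate([p2[:cut], p1[cut:]]))
    return p1.copy(), p2.copy()

def mutate(chrom, pm, rng, low=0.0, high=1.0):
    """Each gene is resampled with small probability pm, preserving diversity
    without destroying too many good solutions."""
    out = chrom.copy()
    mask = rng.random(len(chrom)) < pm
    out[mask] = rng.uniform(low, high, int(mask.sum()))
    return out

rng = np.random.default_rng(7)
p1, p2 = np.zeros(6), np.ones(6)
c1, c2 = crossover(p1, p2, pc=1.0, rng=rng)  # pc=1.0 forces crossover in the demo
```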

Simulation
After initialising the first population, each chromosome is evaluated for fitness. The fitness function is a metric used to measure the quality of the represented solution. The fitness value of a chromosome is the most important factor in GA evaluation and is always problem-dependent [38]. The fitness function defined for the MPMPCSN is the total cost of running the network, which is to be minimised; so the lower the fitness value, the higher the survival chance of a chromosome.

Stopping Conditions
The optimal or near-optimal solution is reached through an iterative procedure that runs until the stopping condition is satisfied. Choosing the termination criteria depends on the complexity of the problem structure as well as the size of the solution pool [39]. Often, a maximum number of generations is adopted, which is the case in this study.
The traditional GA has several shortcomings. As a result of premature convergence, the search operators (selection, crossover, mutation) may not be very useful towards the end of a search procedure [40]. Also, obtaining an absolute global optimum is not guaranteed; however, providing good solutions within a reasonable time is generally expected [41,42]. GA may also be ineffective if the starting point in the search space is at a great distance from the optimal solutions [43]. This deficiency limits the use of GA in real-time applications, but it can be overcome if GA is hybridised with other local search methods where a closed-form expression of the objective function can be appropriately evaluated [42]. Simulation tools are unique methods that integrate tightly with mathematical and algorithm-based models. Overall, to improve GA performance and obtain accurate solutions, the population size, selection mechanism, crossover and mutation rates and the computational time need to be tuned. Further validation and evaluation of the proposed model and the solution approach are discussed in the following section.

Computational Experiments
This section provides experimental results obtained from applying the proposed SOA methodology to practical tests associated with MPMPCSN problems at different granularity levels.
A manufacturing CSN with a central distribution centre is considered, in which orders received from consumers are processed. The demand quantities for the Small, Medium and Large-scale problems were randomly generated first and remained unchanged throughout the rest of the optimisation algorithm (see Appendix A, Table 22-Table 24), because any variation of the demand causes changes in the other parameters. The purchase cost per unit of each product family and the corresponding volume for the three problem sizes are given in Appendix A (Table 21). All other costs of running the network, consisting of the ordering cost, backordering cost, holding cost, handling cost and transportation cost, are computed via (9)-(14). In addition, the fixed parameters of the model are presented in Table 4.

Results and Discussion
As discussed above, the performance of the GA optimisation algorithm is mostly influenced by its controllable parameters: the selection method, the crossover and mutation rates, the population size and the maximum number of iterations. Thus, through utilising a Taguchi Orthogonal Array Design along with regression analysis and an optimisation solver, the optimal parameter set was determined. More details are given in the following sections.

Process of Experiment Design
The two main components of the Taguchi method are the number of parameters and their variation levels. In order to analyse the results obtained from ANOVA (analysis of variance) and the S/N (signal-to-noise) ratio, it is required to create a set of tables of numbers known as orthogonal arrays. These tables are used first to reduce the number of experiments and then to determine the most critical parameters with a high impact on the outcomes. In this study, we consider the GA controllable parameters as significant factors at 3 levels (Table 5). The Taguchi Orthogonal Array Design L27(3^5), shown in Table 6, gives the layout of the orthogonal array for 5 factors at 3 levels; the three levels of the selection factor refer to the Roulette Wheel, Tournament and Ranked selection methods, respectively.

Signal-to-Noise (S/N) Ratio Method
S/N ratios evaluate the size of the apparent effect (signal) against the size of the random fluctuations (noise) witnessed in the data. The higher this indicator, the better the compromise; it can be calculated in different ways according to the type of optimisation problem (minimisation/maximisation) [44]. In this study, S/N ratio values are calculated to determine the best combination of GA control factors. The proposed optimisation algorithm was run four times for each parameter set to obtain more refined solutions. The numerical results for the Small, Medium and Large-scale problems are reported in Table 7, Table 8 and Table 9, respectively. The problem aims to minimise the response value (y). Therefore, to minimise the mean-square deviation (MSD) from the target value 0 and maximise the S/N ratio, the MSD has to be calculated using (15). The S/N ratio in this case is defined by (16), where n is the sample size.
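Since equations (15)-(16) did not survive extraction, the standard smaller-the-better form they describe can be sketched as follows (MSD from target 0 over the n replicate runs, then S/N in decibels); the function name is an assumption:

```python
import numpy as np

def sn_smaller_is_better(responses):
    """Smaller-the-better Taguchi S/N ratio: MSD = mean(y_i^2) over the n
    replicate runs (four per parameter set here), then S/N = -10*log10(MSD)."""
    y = np.asarray(responses, dtype=float)
    msd = np.mean(y ** 2)          # mean-square deviation from target value 0
    return -10.0 * np.log10(msd)   # higher S/N -> less deviation and variation
```

A smaller, more consistent response therefore yields a higher S/N ratio.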
An example of the calculation of the S/N ratio for one control parameter is shown in column 1 of Table 10, and the results corresponding to each case study are summarised in Table 10, Table 11 and Table 12. The difference between the levels of the factors in Table 10-Table 12 determines which parameter has more effect on the quality characteristic (the total cost of the network). As can be seen from Table 10, one control factor is by far the most important in its impact on the S/N ratio (1.19), while the remaining four factors are also significant. Table 11 shows that three of the factors have roughly double the effect of the other two. Also, Table 12 shows that, while one control factor has a negligible effect on the S/N ratio in the Large-scale problem, the contribution of each of the other four parameters to the S/N ratio is more than 10%.
The S/N ratios computed for the five control factors (Table 10-Table 12) are essential for sketching the S/N ratio response diagrams for the Small, Medium and Large-scale problems. A higher S/N ratio corresponds to the data set with the minimum variation, which is considered the best data set. Therefore, the best levels of the control factors are as follows: for the Small-scale problem, level 1 (Roulette Wheel selection), level 1 (90% crossover), level 1 (10% mutation), level 2 (120 chromosomes) and level 1 (200 iterations), respectively; for the Medium-scale problem, level 2 (Tournament selection), level 2 (85% crossover), level 1 (10% mutation), level 1 (200 chromosomes) and level 1 (500 iterations), respectively; and for the Large-scale problem, level 2 (Tournament selection), level 1 (90% crossover), level 1 (10% mutation), level 3 (300 chromosomes) and level 1 (3500 iterations), respectively. This can also be observed from the S/N ratio response diagrams (Figure 4). The difference values in the rows of Table 10-Table 12 determine the contribution level of each parameter to obtaining a lower cost. For example, for the Small-scale problem, the total cost of running the network is affected most by the number of generations, followed by the mutation rate, the selection method, the population size and the crossover rate.
To determine the significance levels of these parameters, the ANOVA method is utilised, for which the data given in Table 7-Table 9 are used again. The results obtained from ANOVA are summarised in Table 13-Table 15.

ANOVA Method
From ANOVA, the percentage contribution ratio (PCR) of each parameter can be calculated. The PCR indicates the significance of all main factors and their interactions on the output. The calculation is performed by comparing the mean square against an estimate of the experimental error at a specific confidence level. The total sum of squared deviations from the total mean S/N ratio is calculated via (17),
where the summation runs over the number of experiments in the orthogonal array and uses the mean S/N ratio of each experiment. The ANOVA tables for the S/N ratios corresponding to the data in Table 10-Table 12 are summarised in Table 13-Table 15, which report the total sum of squares and the total mean square. The F-ratios and P-values provided in the "F" and "P" columns are calculated via (18) and (19), respectively. The F-ratio indicates which parameters have a significant effect on the quality characteristic, and the P-value determines the significance percentage of each parameter on the quality characteristic. It can be observed from Table 13 that the difference between the mean values of the levels of the selection-method control factor is insignificant (0.68 > α = 0.05). Therefore, any selection strategy can be chosen for implementation of the proposed SOA for the small-scale problem. However, the differences between the mean values of the crossover rate, the mutation rate and the number of iterations are significant (0.006, 0.002 and 0.002 < α = 0.05). Thus, the best control-factor setting for maximising the S/N ratio is crossover at level 1, mutation at level 1, population size at level 2 and number of iterations at level 1. In the medium-scale problem, all of the control factors contribute strongly to the performance of the SOA (Table 14). According to Table 15, only the mutation rate, the population size and the number of iterations significantly influence the performance of the SOA in the large-scale problem, while there is no restriction in choosing the selection strategy or the crossover rate.
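The PCR computation described above can be sketched as follows. This is a simplified single-factor illustration that ignores interaction and error terms; the function name and the demonstration data are hypothetical:

```python
from collections import defaultdict

def factor_pcr(sn_ratios, level_assignment):
    """Percentage contribution ratio (PCR) of one control factor.

    sn_ratios:        S/N ratio of each orthogonal-array experiment.
    level_assignment: the level of this factor used in each experiment.
    """
    grand_mean = sum(sn_ratios) / len(sn_ratios)
    # Total sum of squared deviations from the grand mean S/N ratio.
    ss_total = sum((s - grand_mean) ** 2 for s in sn_ratios)
    # Group the S/N ratios by the factor level used in each run.
    by_level = defaultdict(list)
    for s, lvl in zip(sn_ratios, level_assignment):
        by_level[lvl].append(s)
    # Between-level sum of squares for this factor.
    ss_factor = sum(
        len(v) * (sum(v) / len(v) - grand_mean) ** 2 for v in by_level.values()
    )
    return 100.0 * ss_factor / ss_total

# Illustrative 4-run array: this factor explains all of the variation.
demo_pcr = factor_pcr([1.0, 1.0, 3.0, 3.0], [1, 1, 2, 2])
```

A factor whose level means coincide with the grand mean would receive a PCR of zero, which is the situation the text describes for the selection strategy in the small-scale problem.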

Confirmation test
The final step of the verification phase is to perform the confirmation test with the optimal levels of the GA parameters drawn from Taguchi's design approach for each case study. The results obtained from the proposed methodology and from the GA solver for the small-, medium- and large-scale problems, along with the averages of the best and the worst results, are summarised in Table 17. The quality of a solution is measured by the value of its standard deviation: the solution candidate with the maximum standard deviation is considered the worst solution, and the one with the minimum value is regarded as the best solution; the corresponding experiments (e.g. experiment No. 15) are identified in Table 17. As can be seen from Figure 5, the proposed algorithm outperforms both the best and the worst solutions acquired from the GA solver (5% ≅ $2,498). A similar improvement was also observed for the medium-scale and large-scale problems, with 4% ≅ $129,835.5 and 2% ≅ $130,522, respectively. Table 18-Table 20 present the optimum quantities of each product family to be manufactured for consumers over the given planning horizon.
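The reported cost savings are simple relative differences between the baseline GA-solver cost and the cost achieved by the proposed SOA. The sketch below back-calculates an illustrative small-scale baseline of about $49,960 from the reported 5% ≅ $2,498 figure; that baseline is an assumption for illustration, not a value taken from the paper:

```python
def improvement(baseline_cost, proposed_cost):
    """Absolute and relative saving of the proposed SOA versus a baseline solution."""
    saving = baseline_cost - proposed_cost
    return saving, 100.0 * saving / baseline_cost

# Illustrative: a 5% saving of $2,498 implies a baseline of roughly $49,960.
saving, pct = improvement(49960.0, 49960.0 - 2498.0)
```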

Conclusion and outlook to future work
In this paper, an advanced decision-making system for a class of CSN problems was proposed. A novel SOA incorporating GA as its optimisation module was designed for the MPMPCSN problem. The robustness and effectiveness of the proposed methodology were verified by performing twenty-seven computational trials on three practical test problems at different granularity levels (small-scale, medium-scale and large-scale). In addition, a tuning mechanism was recommended to improve the quality of the obtained solutions, which is affected by the controllable parameters of the optimisation module. To this end, two statistical techniques, the Taguchi method and ANOVA, were utilised. The optimum levels of the controllable parameters of the GA were determined as follows: for the small-scale problem, level 1 (Roulette Wheel selection), level 1 (90% crossover), level 1 (10% mutation), level 2 (120 chromosomes) and level 1 (200 iterations), respectively; for the medium-scale problem, level 2 (Tournament selection), level 2 (85% crossover), level 1 (10% mutation), level 1 (200 chromosomes) and level 1 (500 iterations), respectively; and for the large-scale problem, level 2 (Tournament selection), level 1 (90% crossover), level 1 (10% mutation), level 3 (300 chromosomes) and level 1 (3500 iterations), respectively. Compared with using the GA solver alone, the proposed SOA resulted in 5%, 4% and 2% improvements in the total cost of the CSN for the small-, medium- and large-scale problems, respectively. It was also observed that the computational cost and time were reduced significantly.

Conflict of Interest
The authors declare no conflict of interest.