A Comparative Study of a Hybrid Ant Colony Algorithm MMACS for the Strongly Correlated Knapsack Problem

Article history: Received: 15 August, 2018 Accepted: 22 October, 2018 Online: 01 November, 2018


Introduction
The MMACS algorithm is a hybrid metaheuristic that was proposed in a previous work [1] and employed to solve one of the most complex variants of the knapsack problem: the Strongly Correlated Knapsack Problem (SCKP). The approach combines a proposed Ant Colony Optimization (ACO) algorithm with a 2-opt algorithm. The ACO scheme combines two ant algorithms: the Max-Min Ant System and the Ant Colony System. At a first stage, the proposed ACO aims to solve the SCKP to optimality. If an optimal solution is not found, a proposed 2-opt algorithm is applied. Even if the 2-opt heuristic fails to find the optimal solution, it usually improves solution quality by reducing the gap between the found solution and the optimum.

Solving a combinatorial optimization problem to optimality with an approximate method requires an adequate balance between exploitation of the best available solutions and wide exploration of the search space. On the one hand, exploitation intensifies the search around the most promising areas of the search space, which are in most cases close to the best-found solutions. On the other hand, exploration diversifies the search in order to discover new and better areas of the search space. The behavior of the ants with respect to this duality between exploitation and exploration can be adjusted through the parameter values.

A comparative study was conducted on the hybrid ant colony algorithm MMACS. Firstly, this study presents the behavior of the MMACS algorithm and its dependence on the parameter values while solving the SCKP. Secondly, the performance of MMACS is compared with that of two well-known ant colony algorithms: the Max-Min Ant System (MMAS) and the Ant Colony System (ACS).
Finally, the MMACS algorithm was compared with two recent state-of-the-art algorithms that show significant results when solving the SCKP to optimality. The paper is organized as follows. In the next section, we define the Strongly Correlated Knapsack Problem. We then present the studied Ant Colony Optimization algorithms, discuss the influence of the ACO parameters and the local search, report the computational results and, finally, conclude.

Strongly Correlated Knapsack Problem (SCKP)
The SCKP is an NP-hard problem whose goal is to find a subset of items that maximizes an objective function while satisfying a resource constraint. In the SCKP, the profit of each item is linearly related to its weight; in other words, the profit of an item is equal to its weight plus a fixed constant. The difficulty of this problem compared to the classical knapsack problem resides in the strong correlation between the variables that characterize the problem. According to Pisinger [2], strongly correlated instances are hard to solve for two reasons. First, they are badly conditioned, in the sense that there is a large gap between the continuous and the integer solutions of the problem. Second, sorting the items according to decreasing efficiencies corresponds to a sorting according to the weights; thus, for any small interval of the ordered items (i.e. a "core"), there is a limited variation in the weights, making it difficult to satisfy the capacity constraint with equality.

The SCKP can be formulated as follows:

maximize $\sum_{i=1}^{n} (w_i + k)\, x_i$   (1)

subject to the constraint:

$\sum_{i=1}^{n} w_i\, x_i \le c$   (2)

where $x_i$ is a decision variable associated with an item $i$, which has value 1 if the item is selected and 0 otherwise, $w_i$ is the weight of item $i$, drawn uniformly at random from $[1, R]$, $k$ is a positive constant, $c$ is the knapsack capacity and $n$ is the number of items. The capacity $c$ is set as proposed by Pisinger in [2]:

$c = \frac{i}{S+1} \sum_{j=1}^{n} w_j$

where a series of $S = 100$ instances is generated and $i = 1, \dots, S$ is the test instance number. From equation (1), the profit $profit(x)$ of a solution $x$ can be written as:

$profit(x) = \sum_{i \in x} w_i + b\,k$   (3)

where $b$ is the number of items in $x$. According to equation (3), maximizing $profit(x)$ amounts to maximizing the number of selected items. In other words, the items with the lowest weights should be selected first, until the sum of the weights is about to exceed the capacity $c$.
This can be achieved through the use of greedy algorithms [3,4]. However, a greedy algorithm does not guarantee optimal solutions, since it chooses the locally most attractive item with no concern for its effect on the global solution. The convergence to local optima caused by greedy algorithms, called stagnation, should be avoided; hence the idea of alternating between greedy and stochastic approaches.
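To make the greedy idea concrete, here is a minimal Python sketch. The function names are ours, and the capacity rule is a deliberately simplified illustration (half the total weight), not Pisinger's series formula used for the paper's benchmarks:

```python
import random

def generate_sckp_instance(n, R, k, seed=0):
    """Strongly correlated instance: weight w_i uniform in [1, R], profit w_i + k."""
    rng = random.Random(seed)
    weights = [rng.randint(1, R) for _ in range(n)]
    # Illustrative capacity only; the paper uses Pisinger's c = i/(S+1) * sum(w).
    capacity = sum(weights) // 2
    return weights, capacity

def greedy_fill(weights, capacity, k):
    """Pick items by increasing weight while they fit; profit = sum(w) + k * b."""
    chosen, load = [], 0
    for i in sorted(range(len(weights)), key=lambda i: weights[i]):
        if load + weights[i] <= capacity:
            chosen.append(i)
            load += weights[i]
    profit = load + k * len(chosen)
    return chosen, load, profit
```

As discussed above, this heuristic is fast but can stall on a local optimum: it never reconsiders an item once skipped.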
The proposition of Pisinger in [5] is one of the most well-known works showing significant results when solving the SCKP to optimality. Pisinger proposed a specialized algorithm for this problem, in which he used a surrogate relaxation to transform the problem into a Subset-sum problem. He started the resolution by applying a greedy algorithm, then used a 2-optimal heuristic and a dynamic programming algorithm to solve the problem to optimality. More recently, Han [6] proposed an evolutionary algorithm inspired by the concept of quantum computing. The study in [6] shows that the proposed algorithm, called the Quantum-Inspired Evolutionary Algorithm (QEA), can find high-quality results when solving strongly correlated knapsack problems.

Ant Colony Optimization (ACO)
ACO [7,8] is a constructive population-based metaheuristic inspired by the behavior of real ants seeking an adequate path, often the shortest one, between their colony and a food source. The communication between ants is mediated by trails of a chemical substance called pheromone. Several ant colony optimization algorithms have been proposed in the literature. In this section, we present the Max-Min Ant System proposed by Stützle and Hoos [9,10], the Ant Colony System proposed by Gambardella and Dorigo [11], and a recent hybrid ant colony algorithm called MMACS.

MMAS
The Max-Min Ant System [9,10] is one of the most effective solvers for certain optimization problems. In MMAS, ants apply a random proportional rule to select the next item. The probabilistic action choice rule is defined as follows:

$p_{ij}^{k} = \frac{[\tau_j]^{\alpha}\,[\eta_j]^{\beta}}{\sum_{l \in N_i^k} [\tau_l]^{\alpha}\,[\eta_l]^{\beta}}, \quad j \in N_i^k$   (4)

where $\tau$ and $\eta$ are respectively the pheromone factor and the heuristic factor, $\alpha$ and $\beta$ are two parameters that determine the relative influence of the pheromone trail and the heuristic information, and $N_i^k$ is the feasible neighborhood of an ant $k$ that selected an item $i$ and chooses to select an item $j$. Besides, MMAS exploits the best solutions found by letting only the best ant deposit pheromone. This best ant can be the one that found the best solution during the last iteration, or the one that found the best solution since the beginning of the execution. The pheromone update can be formulated as follows:

1. Pheromone evaporation, applied to all components:

$\tau_j \leftarrow (1 - \rho)\,\tau_j$   (5)

2. Pheromone deposit, applied to the components selected by the best ant:

$\tau_j \leftarrow \tau_j + \Delta\tau_j^{best}$   (6)

In addition, MMAS bounds the range of the pheromone trails to $[\tau_{min}, \tau_{max}]$ in order to escape the stagnation that can be caused by an excessive growth of pheromone trails. The pheromone trails are initialized to the upper limit $\tau_{max}$ to ensure exploration of the search space, and reinitialized when the system approaches stagnation.
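A minimal Python sketch of the random proportional rule (4) and of the trail bounding described above; storing the pheromone and heuristic values per item in dictionaries is our assumption for illustration:

```python
import random

def random_proportional_choice(candidates, tau, eta, alpha, beta, rng):
    """Pick the next item j with probability proportional to tau_j^alpha * eta_j^beta."""
    scores = [(tau[j] ** alpha) * (eta[j] ** beta) for j in candidates]
    return rng.choices(candidates, weights=scores, k=1)[0]

def clamp_trails(tau, tau_min, tau_max):
    """MMAS keeps every trail inside [tau_min, tau_max] to fight stagnation."""
    return {j: min(tau_max, max(tau_min, t)) for j, t in tau.items()}
```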

ACS
The ACS algorithm [11] achieves performance improvements through the use of a more aggressive action choice rule. In ACS, ants choose items according to the so-called pseudorandom proportional rule:

$j = \begin{cases} \arg\max_{l \in N_i^k} \{\tau_l\,[\eta_l]^{\beta}\} & \text{if } q \le q_0 \\ S & \text{otherwise} \end{cases}$   (7)

where $q$ is a random variable uniformly distributed in $[0, 1]$, $q_0$ ($0 \le q_0 \le 1$) is a parameter and $S$ is a random variable selected according to the probability distribution given in equation (4). Besides, only one ant, called the best-so-far ant, is allowed to deposit pheromone after each iteration. The global pheromone trail update is given as follows:

$\tau_j \leftarrow (1 - \rho)\,\tau_j + \rho\,\Delta\tau_j$   (8)

This pheromone trail update is applied only to the components of the best-so-far solution, where the parameter $\rho$ represents the pheromone evaporation. In addition to the global pheromone update, ants apply a local pheromone update rule immediately after choosing a new item during the solution construction:

$\tau_j \leftarrow (1 - \varepsilon)\,\tau_j + \varepsilon\,\tau_0$   (9)

where $0 < \varepsilon < 1$ and $\tau_0$ are two parameters, the value of $\tau_0$ being equal to the initial value of the pheromone trails, which is 0.1. The local update happens during the solution construction in order to discourage other ants from making the same choices, which increases the exploration of alternative solutions.
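The pseudorandom proportional rule (7) and the local update (9) can be sketched as follows; the function names and the per-item dictionary storage are ours:

```python
import random

def pseudorandom_proportional(candidates, tau, eta, beta, q0, rng):
    """Exploit (argmax of tau * eta^beta) with probability q0, else explore via rule (4)."""
    if rng.random() <= q0:
        return max(candidates, key=lambda j: tau[j] * (eta[j] ** beta))
    scores = [tau[j] * (eta[j] ** beta) for j in candidates]
    return rng.choices(candidates, weights=scores, k=1)[0]

def local_update(tau, j, epsilon, tau0=0.1):
    """Local update applied right after item j is chosen: tau_j <- (1-eps)*tau_j + eps*tau0."""
    tau[j] = (1 - epsilon) * tau[j] + epsilon * tau0
```

Note how the local update pulls the chosen item's trail towards the small constant tau0, which is precisely what makes the same choice less attractive to the next ant.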

MMACS
The MMACS algorithm combines the Max-Min Ant System with the Ant Colony System and an algorithm based on the 2-opt heuristic. The scheme of MMACS is based on the ACO scheme presented in [12] and the ACS scheme presented in [11]: it uses an MMAS pheromone update rule and a choice rule inspired by the ACS aggressive action choice rule. In MMACS, as in MMAS, the pheromone amounts are limited to an interval $[\tau_{min}, \tau_{max}]$ in order to avoid premature stagnation, and the pheromone trails are initially set to $\tau_{max}$. After the construction of all solutions in one cycle, the best ant updates the pheromone trails by applying a rule similar to the MMAS pheromone update rule. Once all ants finish the solution construction, the pheromone trails are decreased to simulate evaporation by multiplying each component by a pheromone persistence ratio equal to $(1 - \rho)$, where $0 \le \rho \le 1$, as given by equation (5). After that, an amount of pheromone is laid on the best solution found by the ants by applying the pheromone update rule given in equation (6); the deposited amount is computed from $S_{best}$, the best solution built since the beginning of the execution, and $S_k$, the best solution of the cycle. Besides, in MMACS, each ant constructs a solution by applying a choice rule whose decision making is based on both:

1. a random proportional rule that selects a random item using the probability distribution;

2. a guided selection rule that chooses the next item as the best available option.
Like ACS, MMACS balances between greedy and stochastic approaches by applying the pseudorandom proportional rule (7). At each construction step, an ant $k$ draws a random variable $q$ uniformly distributed in $[0, 1]$. If $q$ is less than a fixed parameter $q_0$ ($0 \le q_0 \le 1$), the ant makes the best possible choice as indicated by the pheromone trails and the heuristic information (exploitation); otherwise, with probability $1 - q_0$, the ant applies the random proportional rule (4) to select the next item (biased exploration). The heuristic factor used in the probability rule (4) is given by equation (11), where $d_{S_k}$, the remaining capacity once an ant $k$ has built a partial solution $S_k$, is

$d_{S_k} = c - \sum_{i \in S_k} w_i$

As shown by equation (11), the heuristic information value and the item weight are inversely proportional: the smaller the weight, the higher the heuristic information value. In addition, the closer the remaining capacity is to the item weight, the higher the heuristic information value; this is helpful at the end of the execution, when the knapsack is about to be filled. Finally, the execution of the MMACS algorithm ends either when an optimum is found or, in the worst case, after a fixed number of iterations. The pseudocode of the MMACS algorithm is given in Algorithm 1.
Algorithm 1 MMACS pseudocode applied to the SCKP
  Initialize pheromone trails to τ_max
  repeat
    repeat
      Construct a solution
      Update S_best
    until the maximum number of ants is reached or the optimum is found
    Update pheromone trails
  until the maximum number of cycles is reached or the optimum is found
  Apply a local search algorithm

S_best is the best solution found all along the execution.
The construction procedure is given in Algorithm 2.

Algorithm 2 Construct Solution
  Select a first item at random
  Remove from Candidates each item that violates the resource constraint
  while Candidates ≠ Ø do
    if a randomly chosen q is greater than q_0 then
      Choose an item j from Candidates with probability P_ij^k
    else
      Choose the next best item
    end if
    Remove from Candidates each item that violates the resource constraint
  end while
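A possible Python rendering of the construction procedure. Since the exact heuristic factor of equation (11) is not reproduced here, the eta formula below is only an assumed stand-in that favours light items and items whose weight is close to the remaining capacity, as described in the text:

```python
import random

def construct_solution(weights, capacity, tau, q0, beta, rng):
    """One ant builds a feasible SCKP solution (sketch of Algorithm 2)."""
    load = 0
    candidates = [j for j in range(len(weights)) if weights[j] <= capacity]
    solution = []
    first = rng.choice(candidates)                      # random first item
    solution.append(first)
    load += weights[first]
    candidates = [j for j in candidates
                  if j != first and load + weights[j] <= capacity]
    while candidates:
        remaining = capacity - load
        # Assumed heuristic: favour light items and items close to the remaining capacity.
        eta = {j: 1.0 / weights[j] + 1.0 / (1 + abs(remaining - weights[j]))
               for j in candidates}
        if rng.random() <= q0:                          # exploitation
            j = max(candidates, key=lambda l: tau[l] * eta[l] ** beta)
        else:                                           # biased exploration
            scores = [tau[l] * eta[l] ** beta for l in candidates]
            j = rng.choices(candidates, weights=scores, k=1)[0]
        solution.append(j)
        load += weights[j]
        candidates = [l for l in candidates
                      if l != j and load + weights[l] <= capacity]
    return solution, load
```

Pruning the candidate list after every selection is what keeps the constructed solution feasible by construction.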

Parameters in ACO
In the ACO algorithm, the relevant parameters that require careful settings are the heuristic information parameter β, the pheromone parameter α and the pheromone evaporation rate ρ. Those parameters can influence the algorithm's performance by improving its convergence speed and its global optimization ability.

The heuristic information parameter β
The ants' solution construction is biased by a heuristic value η that represents the attractiveness of each item.
The parameter β determines the relative importance of this heuristic value. In our case, the heuristic information makes items with small weights desirable choices. In other words, increasing the value of β favors the selection of items with small weights, a behavior close to that of a greedy algorithm. Conversely, decreasing the value of β makes the heuristic factor uninformative; as a result, ants easily fall into local optima.

The pheromone parameter α
Besides the heuristic value, the ants' solution construction is influenced by the pheromone trails. The pheromone parameter α determines the relative influence of the pheromone trails τ; it reflects the importance given to the amplification of the pheromone amounts. In other words, increasing α favors the choice of items associated with the highest pheromone trail values. When the value of α is large, ants tend to choose the same solution components: the strong cooperation between them makes the ants drift towards the same part of the search space. In that case, the convergence of the algorithm accelerates, which causes it to fall into local optima; this premature convergence must be prevented. Therefore, several tests were conducted to evaluate the influence of α on the solutions' quality, and additional experiments were carried out to examine the similarity of the solutions within one cycle. Analyzing this similarity makes it possible to assign the value of α appropriately, so as to avoid the premature stagnation of the search that can be caused by an excessive reliance upon the pheromone trails at the expense of the heuristic information.

The pheromone evaporation rate ρ
The amount of pheromone decreases to simulate evaporation: each component is multiplied by a constant persistence ratio equal to 1 − ρ. This reduction of the pheromone trails gives the ants the possibility of abandoning bad decisions previously taken; indeed, the pheromone value of an item that is never chosen decreases exponentially with the number of iterations.
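The exponential decrease mentioned above follows directly from the repeated multiplication by (1 − ρ); a one-line sketch:

```python
def trail_after(t, tau_initial, rho):
    """Trail left on a component never reinforced for t evaporation steps:
    tau(t) = tau(0) * (1 - rho)^t."""
    return tau_initial * (1 - rho) ** t
```

For example, with the trails initialized to 6 and ρ = 0.02, a never-chosen item keeps only a small fraction of its initial trail after a few hundred iterations.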

Local search
Local search algorithms are used in most applications of ACO to combinatorial optimization problems in order to improve the solutions found by the ants. Among those algorithms, we cite the 2-opt heuristic. The 2-opt [13] is a simple local search algorithm. When applied to knapsack problems, it consists of exchanging an item present in the current solution with another item that is not part of this solution, in order to improve it. The new solution must satisfy the constraints and be better than or equal to the old one. In other words, the 2-opt algorithm takes a current solution as input and returns a better accepted solution to the problem, if one exists. The 2-opt algorithm is applied once the ants have completed their solution construction, thereby improving the solution by approaching, or even reaching, the best one. Our proposed 2-opt algorithm is given in Algorithm 3.
Algorithm 3 2-opt applied to S_best
  repeat
    for each item i in S_best do
      for each item j not in S_best do
        S'_best ← S_best with item i replaced by item j
        if the constraints are satisfied by S'_best and S'_best is better than S_best then
          Update the best solution
        end if
      end for
    end for
  until no improvement is made
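A minimal Python sketch of this exchange heuristic. It assumes the paper's profit structure profit = Σw + k·b: since a one-for-one swap keeps the number of selected items b unchanged, a swap improves the solution exactly when it yields a heavier feasible load:

```python
def two_opt(solution, weights, capacity):
    """Swap one selected item with one unselected item while this improves the profit.
    With profit = sum(w) + k*b and b fixed by the swap, improvement == heavier feasible load."""
    selected = set(solution)
    load = sum(weights[i] for i in selected)
    improved = True
    while improved:
        improved = False
        outside = [j for j in range(len(weights)) if j not in selected]
        for i in list(selected):
            for j in outside:
                new_load = load - weights[i] + weights[j]
                if new_load <= capacity and new_load > load:
                    selected.remove(i)
                    selected.add(j)
                    load = new_load
                    improved = True
                    break
            if improved:
                break
    return sorted(selected), load
```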

Computational results
In this section, we study the results of a set of experiments carried out to determine the efficacy of the MMACS algorithm. The proposed algorithm was programmed in C++ and compiled with GNU g++ on an Intel Core i7-4770 CPU (3.40 GHz) with 3.8 GB of RAM.
Through the experiments, we analyze the influence of the parameter selection on the performance of MMACS. We then identify the convenient parameter settings that produce the best results; those parameter settings are employed for the rest of the experiments. At a later stage, we compare the results of MMACS with those of the two well-known ant colony algorithms MMAS [9,10] and ACS [11]. After that, the results of MMACS are compared with those of the evolutionary algorithm QEA [6] and with the optimal values.

Benchmark instances
In order to evaluate the performance of the MMACS algorithm, experiments were conducted on two sets of instances.

Pisinger Set
The first set contains 100 different instances with n items, where the number of items n varies from 50 to 2000. Those benchmark instances, used for the comparison with the algorithms in [5] and [6], are available at http://www.diku.dk/pisinger/codes.html.

Generated Set
The second set contains 3 different instances with 100, 250 and 500 items, respectively. Those instances, used for the comparison with the algorithm proposed in [6], were randomly generated using a generator similar to the one in [2].

Parametric analysis of MMACS
In order to evaluate the influence of the parameter values on the performance of MMACS, we conducted tests with different parameter values and compared the results. The experiments were performed on the Pisinger set instances of size 50, 100, 200 and 500; for each instance, 10 runs were applied (10 runs × 100 instances × 4 problem sizes). In each experiment, we fixed all parameters to their default values and varied only the studied parameter. The MMAS and ACS default parameter settings are those recommended by the authors in [14]. The default parameter settings are given in Table 1 and the tested values for each parameter are presented in Table 2.

Tables 3-29 report the results of the MMACS algorithm in response to the variation of the parameters, where N is the number of items and R is the range of coefficients. The data are then visualized using different curves: Figures 1-5 present the effect of the studied parameters on the performance of MMACS. The abscissa axis of the curves in those figures shows the instance size, which varies between 50 and 500. In each figure, the left and right plots show the behavior of the MMACS algorithm while solving the SCKP instances with a range of coefficients equal to 1000 and 10000, respectively. In those curves, we examine the percentage of exact solutions, the relative deviation of the best solution found by MMACS from the optimal solution value, and the execution time as functions of the studied parameter.

Influence of parameter α

We set the value of β to 5 and ρ to 0.02, then vary the value of α in order to study its influence on the quality of the solutions and on the execution time. Tables 3-6 report the results of MMACS applied to the SCKP with α varying between 1 and 5; the results are visualized by the curves in Figure 1. In fact, the differences among the various settings of α are almost insignificant.
However, the value α = 1 gives better results in terms of execution time. Additional experiments were conducted to analyze the similarity of the solutions within one cycle, in order to assign the value of α appropriately. To this end, we compute the similarity ratio proposed in [15], which is associated with a set of solutions S and defined from the set of items V and the frequency of appearance of each item in the solutions of S. The analysis shows that the ants are not deeply influenced by the pheromone trails, yet they quickly concentrate on a very small area of the search space, which they explore intensely.

Influence of parameters β and ρ
In this section, we examine the influence of different values of the heuristic information parameter β and of the pheromone evaporation rate ρ. We fix the value of α to 1, then vary the two parameters β and ρ simultaneously. We modify the values of β in order to control the influence of the heuristic information and to examine its effect on the performance of MMACS; we vary the values of ρ in order to study its influence on the selection of items and, consequently, on the performance of MMACS.

The MMACS algorithm with β fixed to 3 and with β fixed to 4 behaves in a similar way for almost all problems, with both ranges of coefficients 1000 and 10000. The results show that, for both settings, the percentage of exact solutions increases for numbers of items between 50 and 200, then decreases considerably for the large number of items 500; consequently, the gap results progress in the reverse way.
However, the MMACS algorithm with the heuristic parameter fixed to β = 5 shows acceptable results. The results are presented in Figure 3, where each curve represents a value of ρ. The curves have almost the same evolution, with insignificant differences between the values. The percentage of exact solutions increases with the number of items, the gap results show a remarkable decrease for large instances, and the execution time has the same variation for all values of ρ.
However, the value ρ2 = 0.02 selected for MMACS can be changed to ρ1 in order to obtain a slight improvement in the gap values for large instances with a range of coefficients equal to 10000.

Influence of parameter q 0
In MMACS, we study the effect of q_0, which represents the probability of selecting the best available choice in equation (7). The results are given in Tables 22-25 and plotted in Figure 4. The curves in Figure 4 are associated with the values of q_0: 0, 0.5, 0.75, 0.9 and 0.99. For all instances, only the values 0.9 and 0.99 show an increase in the percentage of exact solutions; with the other values of q_0, MMACS does not succeed, in most cases, in finding the exact solution, as clearly shown by the corresponding decreasing curves. As for the execution time, the five curves grow in almost the same way, but the curve corresponding to q_0 = 0.9 reaches the lowest values. For large instances, the differences between the curves are significant: the best results, corresponding to the lowest gap values, are given by the curve associated with q_0 = 0.9. We conclude that MMACS behaves like ACS with respect to the q_0 parameter: values close to 1 are the good ones, as suggested in the literature.

Influence of colony size
Tables 26-29 show the effect of the colony size on the quality of the solutions. In these tables, the ant colony size m varies between 1 and 100; the results are compared in Figure 5. As shown by the curves in Figure 5, increasing the number of ants improves the percentage of exact solutions. This increase also causes the execution time to grow, although in practice a compromise between solution quality and execution time is generally sought.

MMACS experimental settings
The parameter values were fixed after a set of experimental tests. We set α to 1, β to 5 and ρ to 0.02, where α and β are the two parameters that determine the relative importance of the pheromone and heuristic factors and ρ is the evaporation rate. The number of cycles and the number of ants were both set to 20. As for the pheromone trail bounds, we set τ_max to 6 and τ_min to 0.01. Finally, the parameter q_0 was set to 0.9.

MMACS results
After fixing the parameter values of MMACS, the empirical results are presented in this section. At a first stage, the MMACS results are compared with those of MMAS [9,10] and ACS [11]. After that, the performance of MMACS when solving the SCKP is evaluated and compared with recent methods from the literature: the 2-optimal heuristic [5] and the QEA algorithm [6].

Comparison of MMACS, MMAS and ACS
We test MMACS and the two ACO algorithms MMAS [9,10] and ACS [11], then compare the obtained results: the percentage of exact solutions is reported in Table 30, the deviation of the best solution found from the optimal solution in Table 31 and the execution time in Table 32. The three ACO algorithms are then compared in Figure 6. The results show that, for all instances, the MMACS algorithm outperforms the two other ACO algorithms in terms of percentage of exact solutions, execution time and deviation of the best solution found from the optimal solution.

Comparison of MMACS with state-of-the-art algorithms
Experiments were conducted on two sets of instances; the results are presented in this section. In the first part of the experiments, MMACS, the 2-optimal heuristic and QEA solve the instances of the Pisinger set. In the second part, MMACS and QEA solve the instances of the generated set.
Experimental results on the instances of the Pisinger set: We present in this part the results of the experiments performed on the Pisinger set instances of the Strongly Correlated Knapsack Problem. Table 33 shows that, in most cases, MMACS outperforms both state-of-the-art algorithms. In fact, QEA could not solve these problems to optimality, unlike the 2-optimal heuristic, which showed better results than MMACS in one case out of four. Besides, our proposed algorithm MMACS reached one hundred percent of solved problems starting from 1000 items with a range of coefficients equal to 1000.

Experimental results on the instances of the generated set: Additional experiments were conducted on the instances of the generated set, where we compare the results of the MMACS algorithm with those of the state-of-the-art algorithm QEA [6]. In QEA, the population size, the maximum number of generations, the global migration period in generations, the local group size and the rotation angle were set to 10, 1000, 100, 2 and 0.01π, respectively. Both algorithms were run under the same computational conditions on the instances of the generated set. In this part of the experiments, the SCKP numeric parameters were set to R = 10 and k = 5, and the numbers of items are 100, 250 and 500. The generated instances used here are similar to those presented in [6]. The exact solutions of the generated instances were obtained using a dynamic programming algorithm [16] that we implemented. Table 34 shows that MMACS found 30/30 exact solutions for all instances: the MMACS best found solutions (BFS) are equal to the optima. Those significant results were obtained within an acceptable execution time when compared with QEA. The execution time (CPU) and the gap between the found solution and the optimum (Gap) were averaged over 30 runs. Besides, MMACS and QEA were compared using the Wilcoxon signed-rank test [17]. This nonparametric test shows that the two groups of data are different according to the z-statistic value and the p-value at the 0.01 significance level, as shown in Table 35.
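For reference, the Wilcoxon signed-rank statistic behind this comparison can be computed as in the textbook sketch below (statistic W only; the z-statistic and p-value reported in Table 35 require an additional normal approximation, which is omitted here):

```python
def wilcoxon_signed_rank(xs, ys):
    """Wilcoxon signed-rank statistic W = min(W+, W-) for paired samples.
    Zero differences are dropped; tied absolute differences get averaged ranks."""
    diffs = [x - y for x, y in zip(xs, ys) if x != y]
    ranked = sorted(range(len(diffs)), key=lambda i: abs(diffs[i]))
    ranks = [0.0] * len(diffs)
    i = 0
    while i < len(ranked):
        j = i
        while j + 1 < len(ranked) and abs(diffs[ranked[j + 1]]) == abs(diffs[ranked[i]]):
            j += 1
        avg = (i + j) / 2 + 1            # ranks are 1-based
        for idx in ranked[i:j + 1]:
            ranks[idx] = avg
        i = j + 1
    w_plus = sum(r for d, r in zip(diffs, ranks) if d > 0)
    w_minus = sum(r for d, r in zip(diffs, ranks) if d < 0)
    return min(w_plus, w_minus)
```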

Conclusion
This paper presented a comparative study of the proposed hybrid algorithm MMACS applied to the strongly correlated knapsack problem. We gave an experimental analysis of the impact of the different parameters on the behavior of the MMACS algorithm. The experiments show that the default parameter settings proposed in the literature give the best possible results, essentially in terms of execution time. It is also noticeable from the results that the ants in MMACS construct solutions relying mainly on the heuristic information rather than on the pheromone trails. In fact, initializing the pheromone trails to the upper bound helps the ants to start the search in promising zones. Besides, MMACS balances exploitation and exploration by employing a choice rule that alternates between greedy and stochastic approaches. The MMACS results were then compared with those of MMAS and ACS: the three algorithms show very different behaviors when solving the SCKP, and the MMACS algorithm outperforms both ant algorithms. In the second part of the experiments, we compared MMACS with other recent metaheuristics; overall, MMACS gave solutions of better quality. As a perspective, we propose to give more attention to the exploitation of the best solutions, in order to avoid early search stagnation and to achieve the best performance of MMACS.