Volume 2008, Issue 1 861512
Research Article
Open Access

A Discrete Particle Swarm Optimization Algorithm for Uncapacitated Facility Location Problem

Ali R. Guner
Department of Industrial and Manufacturing Engineering, College of Engineering, Wayne State University, Detroit, MI 48202, USA

Mehmet Sevkli (Corresponding Author)
Department of Industrial Engineering, Faculty of Engineering, Fatih University, 34500 Büyükçekmece, Istanbul, Turkey
First published: 08 April 2008
Citations: 55
Academic Editor: T. Blackwell

Abstract

A discrete version of particle swarm optimization (DPSO) is employed to solve the uncapacitated facility location (UFL) problem, one of the most widely studied problems in combinatorial optimization. In addition, a hybrid version with a local search is defined to obtain more efficient results. The results are compared with those of a continuous particle swarm optimization (CPSO) algorithm and of two other metaheuristic studies, namely, a genetic algorithm (GA) and evolutionary simulated annealing (ESA). To make a fair comparison, all algorithms are applied to the same benchmark suites collected from the OR-Library. In conclusion, the results show that the DPSO algorithm is slightly better than the CPSO algorithm and competitive with GA and ESA.

1. Introduction

Efficient supply chain management has led to increased profit, increased market share, reduced operating cost, and improved customer satisfaction for many businesses. One strategic decision in supply chain management is facility location [1]. Location problems are classified into categories according to assumptions such as limits on facility capacities and on the number of sites that may be opened. The uncapacitated facility location (UFL) problem assumes the cost of satisfying the client requirements has two components: a fixed cost of setting up a facility at a given site, and a transportation cost of satisfying the customer requirements from a facility. The capacities of all the facilities are assumed to be infinite [2].

1.1. Literature Review

There are many different titles for the UFL problem in the literature: the problem of a nonrecoverable tools optimal system [3], the standardization and unification problem [4], the location of bank accounts problem [5], the warehouse location problem [6], the uncapacitated facility location problem [7], and so on. These different interpretations reflect the breadth of academic interest in this mathematical model. The UFL problem is one of the most widely studied problems in combinatorial optimization; thus there is a very rich operations research (OR) literature on this kind of problem [8]. The important approaches to UFL problems can be classified into two main categories: exact algorithms and metaheuristics-based methods.

There is a variety of exact algorithms to solve the UFL problem, such as branch and bound [6, 9], linear programming [10], Lagrangean relaxation algorithms [11], the dual approach (DUALLOC) of Erlenkotter [12], and the primal-dual approaches of Körkel [13]. Although Erlenkotter [12] developed this dual approach as an exact algorithm, it can also be used as a heuristic to find good solutions. Since the UFL problem is NP-hard [14], exact algorithms may not be able to solve large practical problems efficiently. Accordingly, there are several studies that solve the UFL problem with heuristic and metaheuristic methods. Alves and Almeida [15] proposed simulated annealing algorithms and reported that they produce high-quality solutions but are quite expensive in computation time. A new version of the evolutionary simulated annealing algorithm (ESA), called distributed ESA, was presented by Aydin and Fogarty [16]; they stated that it yields good-quality solutions within short times. Another popular metaheuristic, tabu search, was applied by Al-Sultan and Al-Fawzan [17]. Their implementation produces good solutions but takes significant computing time, which limits the applicability of the algorithm. Michel and Van Hentenryck [18] also applied tabu search, and their proposed algorithm generates more robust solutions. Sun [19] examined a tabu search procedure against the Lagrangean method and the heuristic procedures reported by Ghosh [2]. Genetic algorithms (GA) have also been applied by Kratica et al. and Jaramillo et al. [20, 21]. Finally, there are artificial neural network approaches to the UFL problem in Gen et al. [22] and Vaithyanathan et al. [23].

Particle swarm optimization (PSO) is one of the recent metaheuristics, invented by Eberhart and Kennedy [24] based on the metaphor of social interaction and communication, such as bird flocking and fish schooling. On one hand, it can be counted as an evolutionary method, exploring the neighborhood of solutions (particles) across a population (swarm) and exploiting the generational information gained. On the other hand, it differs from other evolutionary methods in that it has no evolutionary operators such as crossover and mutation. Another advantage is its ease of use, with fewer parameters to adjust. In PSO, the potential solutions, the so-called particles, move around in a multidimensional search space with a velocity that is constantly updated by the particle's own experience and the experience of the particle's neighbors or of the whole swarm. PSO has been successfully applied to a wide range of applications such as function optimization, neural network training [25], task assignment [26], and scheduling problems [27, 28].

Since PSO was initially developed for continuous optimization problems, most existing PSO applications address continuous function value optimization [29–32]. Recently, a few studies have applied PSO to discrete combinatorial optimization problems [26–28, 33–37].

1.2. UFL Problem Definition

In a UFL problem, there are a number of customers, m, to be satisfied by a number of facilities, n. Each facility has a fixed cost, fcj, and a transport cost, cij, is accrued for serving customer i from facility j. There is no capacity limit for any candidate facility, and the whole demand of each customer has to be assigned to one of the facilities. We are asked to determine the number of facilities to be established and to specify those facilities such that the total cost (1) is minimized. The mathematical formulation of the problem can be stated as follows [14]:

min Σ_{i=1}^{m} Σ_{j=1}^{n} cij xij + Σ_{j=1}^{n} fcj yj (1)

subject to

Σ_{j=1}^{n} xij = 1, i = 1, …, m (2)

0 ≤ xij ≤ yj, yj ∈ {0, 1}, i = 1, …, m; j = 1, …, n (3)

where xij represents the fraction of customer i's demand supplied from facility j, and yj indicates whether facility j is established (yj = 1) or not (yj = 0).

Constraint (2) ensures that every customer's demand is met by an open facility, while (3) links service to open facilities and keeps the integrality of yj. Since it is assumed that there is no capacity limit for any facility, the demand size of each customer can be ignored, and therefore (2) is stated without a demand variable.

Since the main decision in the UFL problem is opening or closing facilities, it is classified as a discrete problem. PSO, on the other hand, was mainly designed for continuous problems and thus has some drawbacks when applied to a discrete problem. This tension motivated us to apply PSO algorithms to the UFL problem.

The organization of the paper is as follows. In Section 2, the implementation of both the continuous and the discrete PSO algorithms for the UFL problem is given, with the details of how a local search procedure is embedded. Section 3 reports the experimental settings and results. There are three sets of comparisons: the first is between the CPSO and DPSO algorithms; the second is between CPSO with local search (CPSOLS) and DPSO with local search (DPSOLS); and the third is between DPSOLS and two other algorithms from the literature. Finally, Section 4 provides the conclusion.

2. PSO Algorithms for UFL Problem

As mentioned in Section 1, PSO is a population-based optimization technique inspired by nature: it is a simulation of the social behaviour of a swarm, that is, bird flocking or fish schooling. Suppose the following scenario: a flock of birds is randomly searching for food in an area where there is only one piece of food available; none of them knows where it is, but each can estimate how far away it is at each iteration. The problem here is "what is the best strategy to find and get the food?". Obviously, the simplest strategy is to follow the bird known to be nearest to the food. PSO's inventors were inspired by such natural scenarios to solve optimization problems. In PSO, each single solution, called a particle, is considered a bird; the group becomes a swarm (population); and the search space is the area to explore. Each particle has a fitness value calculated by a fitness function, and a velocity of flying towards the optimum, the food. All particles fly across the problem space following the particle nearest to the optimum. PSO starts with an initial population of solutions, which is updated iteration by iteration. Therefore, PSO can be counted as an evolutionary algorithm besides being a metaheuristic method, exploiting the search experience of each single particle as well as the best of the whole swarm.

2.1. Continuous PSO Algorithm for UFL Problem

The continuous particle swarm optimization (CPSO) algorithm used here was proposed by the authors, Sevkli and Guner [33]. In CPSO, each particle has three key vectors: position (Xi), velocity (Vi), and open facility (Yi). Xi denotes the ith position vector in the swarm, Xi = [xi1, xi2, xi3, …, xin], where xik is the position value of the ith particle with respect to the kth dimension (k = 1, 2, 3, …, n). Vi denotes the ith velocity vector in the swarm, Vi = [vi1, vi2, vi3, …, vin], where vik is the velocity value of the ith particle with respect to the kth dimension. Yi represents the opening or closing of facilities based on the position vector (Xi), Yi = [yi1, yi2, yi3, …, yin], where yik represents opening or closing the kth facility of the ith particle. For an n-facility problem, each particle contains n dimensions.

Initially, the position and the velocity vectors are generated as continuous uniform random variables, using the following rules:

xik = xmin + (xmax − xmin) × r1,  vik = vmin + (vmax − vmin) × r2, (4)

where xmin = −10.0, xmax = 10.0, vmin = −4.0, vmax = 4.0, which are consistent with the literature [38], and r1 and r2 are uniform random numbers in [0, 1], drawn afresh for each dimension and particle. The position vector Xi = [xi1, xi2, xi3, …, xin] holds the continuous position values for the n facilities, but it does not by itself represent a candidate solution from which a total cost (fitness value) can be calculated. In order to create a candidate solution, the position vector is converted to a binary vector, Yi(Xi), which is also a key element of a particle. In other words, a continuous set is converted to a discrete set for the purpose of creating a candidate solution. The fitness of the ith particle is calculated using the open facility vector (Yi). For simplicity, let fi(Yi(Xi)) be denoted as fi.
In order to show how an open facility vector is derived from a position vector, an instance of a 5-facility problem is illustrated in Table 1. Position values are converted to binary variables using the following formula:

yik = ⌊ |xik| mod 2 ⌋ (5)
Table 1. An illustration of deriving the open facility vector from the position vector for a 6-customer, 5-facility problem.

                               Particle dimension (k)
ith particle vectors           1       2      3      4      5
Position vector (Xi)           1.8     3.01   −0.99  0.72   −5.45
Velocity vector (Vi)           −0.52   2.06   3.56   2.45   −1.44
Open facility vector (Yi)      1       1      0      0      1
In (5), the absolute value of a position value is divided by 2 and the remainder is rounded down to the nearest integer, which is then assigned to the corresponding element of the open facility vector. For example, the fifth element of the open facility vector, yi5, is calculated as follows:

yi5 = ⌊ |−5.45| mod 2 ⌋ = ⌊1.45⌋ = 1 (6)
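The decode rule (5) is a one-liner in code. The following sketch (ours, not from the paper) reproduces the Table 1 example:

```python
import math

def decode(position):
    # open facility vector from a continuous position vector,
    # y_k = floor(|x_k| mod 2), as in (5)
    return [math.floor(abs(x) % 2) for x in position]

# the position vector of Table 1
print(decode([1.8, 3.01, -0.99, 0.72, -5.45]))  # → [1, 1, 0, 0, 1]
```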

Considering the 5-facility, 6-customer example shown in Table 2, the total cost of the open facility vector (Yi) can be calculated as follows:

Table 2. An example of a 5-facility, 6-customer problem.

Facility locations    1     2     3     4     5
Fixed cost            12    5     3     7     9

Customer
1                     2     3     6     7     1
2                     0     5     8     4     12
3                     11    6     14    5     8
4                     19    18    21    16    13
5                     3     9     8     7     10
6                     4     7     9     6     0

f = (12 + 5 + 9) + (1 + 0 + 6 + 13 + 3 + 0) = 26 + 23 = 49 (7)
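The fitness computation can be sketched as follows (our sketch) using the data of Table 2: since capacities are unlimited, each customer is simply assigned to its cheapest open facility.

```python
def total_cost(fixed, cost, y):
    # fitness (1) of an open facility vector y: fixed costs of the open
    # facilities plus, per customer, the cheapest open facility
    setup = sum(f for f, open_ in zip(fixed, y) if open_)
    serve = sum(min(c for c, open_ in zip(row, y) if open_) for row in cost)
    return setup + serve

fixed = [12, 5, 3, 7, 9]        # Table 2, fixed costs
cost = [[2, 3, 6, 7, 1],        # rows: customers 1-6
        [0, 5, 8, 4, 12],       # columns: facilities 1-5
        [11, 6, 14, 5, 8],
        [19, 18, 21, 16, 13],
        [3, 9, 8, 7, 10],
        [4, 7, 9, 6, 0]]
print(total_cost(fixed, cost, [1, 1, 0, 0, 1]))  # → 49
```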

For each particle in the swarm, define Pi = [pi1, pi2, …, pin] as the personal best, where pik denotes the position value of the ith personal best with respect to the kth dimension. The personal bests are determined just after generating the Yi vectors and their corresponding fitness values, and in every generation the personal best of each particle is updated based on its position vector and fitness value. Regarding the objective function fi(Yi(Xi)), the fitness value of the personal best of the ith particle, Pi, is denoted by fiP. At the beginning, the personal bests are equal to the positions (Pi = Xi), explicitly Pi = [pi1 = xi1, pi2 = xi2, pi3 = xi3, …, pin = xin], and the fitness values of the personal bests are equal to the fitness values of the positions, fiP = fi.

Then the best particle in the whole swarm is selected as the global best. G = [g1, g2, g3, …, gn] denotes the position of the globally best particle achieved so far, with fitness fg = f(YG). At the beginning, the global best fitness value is determined as the best of the personal bests over the whole swarm, fg = min_i fiP, with its corresponding position vector Xg used as G = Xg, where G = [g1 = xg1, g2 = xg2, g3 = xg3, …, gn = xgn], and YG = [yg1, yg2, yg3, …, ygn] denotes the open facility vector of the global best found.

The velocity of each particle is updated based on its personal best and the global best in the following way:

vik(t) = w × vik(t−1) + c1 r1 (pik − xik(t−1)) + c2 r2 (gk − xik(t−1)) (8)

where w is the inertia weight used to control the impact of the previous velocity on the current one, t is the generation index, r1 and r2 are different random numbers in [0, 1] for each dimension and particle, and c1 and c2 are the learning factors, also called the cognitive and social parameters. The next step is to update the positions:

xik(t) = xik(t−1) + vik(t) (9)

After updating the position values of all particles, the corresponding open facility vectors and their fitness values can be determined in order to start a new iteration, if the predetermined stopping criterion has not been met yet. In this study, we employed the gbest model of Kennedy et al. [38] for CPSO, which is elaborated in the pseudocode of Algorithm 1.
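A minimal sketch of one CPSO swarm update combining (8) and (9) follows; the clamping of the velocity to [vmin, vmax] is our addition, a common PSO practice that matches the initialization bounds but is not spelled out in the text:

```python
import random

def cpso_step(X, V, P, G, w=0.9, c1=0.5, c2=0.5, vmin=-4.0, vmax=4.0):
    """One swarm update: velocity rule (8) followed by position rule (9).
    X, V, P are lists of per-particle vectors; G is the global best position."""
    for i in range(len(X)):
        for k in range(len(X[i])):
            r1, r2 = random.random(), random.random()
            v = (w * V[i][k]
                 + c1 * r1 * (P[i][k] - X[i][k])
                 + c2 * r2 * (G[k] - X[i][k]))
            V[i][k] = max(vmin, min(vmax, v))  # clamp (our assumption)
            X[i][k] += V[i][k]                 # position update (9)
```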

2.2. Discrete PSO Algorithm for UFL Problem

The discrete PSO (DPSO) algorithm used here was first proposed by Pan et al. [37] for the no-wait flowshop scheduling problem; we adapt it to the UFL problem. In DPSO, each particle consists only of the open facility vector Yi = [yi1, yi2, yi3, …, yin], where yik represents opening or closing the kth facility of the ith particle. For an n-facility problem, each particle contains n dimensions, and the dimensions of Yi are binary random numbers. The fitness of the ith particle is calculated by fi(Yi).

The open facility vector (Yi) of particle i at iteration t is updated as follows [37]:

Yit = c2 ⊗ F3( c1 ⊗ F2( w ⊗ F1(Yit−1), Pit−1 ), Gt−1 ) (10)

λit = F1(Yit−1) if r < w;  λit = Yit−1 otherwise (11)

Equation (10) consists of three components. The first component (11) is the velocity of the particle: F1 represents the exchange operator (Figure 1), which selects two distinct facilities of the open facility vector Yit−1 of the particle and swaps them, with probability w. In other words, a uniform random number r is generated in [0, 1]; if r is less than w, the exchange operator is applied to generate a perturbed vector λit = F1(Yit−1); otherwise the current vector is kept, λit = Yit−1.

Figure 1. Exchange operator.
δit = F2(λit, Pit−1) if r < c1;  δit = λit otherwise (12)

The second component (12) is the cognition part of the particle, representing the particle's own experience: F2 represents one-cut crossover (Figure 2), applied with probability c1. Note that λit and Pit−1 are the first and second parents for the crossover operator, respectively. The result is either δit = F2(λit, Pit−1) or δit = λit, depending on the value of a uniform random number.

Figure 2. One-cut crossover operator.
Yit = F3(δit, Gt−1) if r < c2;  Yit = δit otherwise (13)

The third component (13) is the social part of the particle, representing the experience of the whole swarm: F3 represents the two-cut crossover operator (Figure 3), applied with probability c2. Note that δit and Gt−1 are the first and second parents for the crossover operator, respectively, and the result is either Yit = F3(δit, Gt−1) or Yit = δit, depending on the value of a uniform random number. Both one-cut and two-cut crossovers produce two children; in this study, one of the children is selected at random.

Figure 3. Two-cut crossover operator.

The corresponding Yi vectors and their fitness values are then determined so as to start a new iteration, if the predetermined stopping criterion has not been met yet. We apply the gbest model of Kennedy and Eberhart for DPSO as well. The pseudocode of DPSO is given in Algorithm 2.
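The three operators and the chained update (10)-(13) can be sketched as follows. This is our illustrative sketch with our own function names; the paper's Figures 1-3 define the operators, and the exact child-selection details are as described above (one of the two crossover children chosen at random):

```python
import random

def exchange(Y):
    # F1: swap two distinct, randomly chosen positions (Figure 1)
    Y = Y[:]
    a, b = random.sample(range(len(Y)), 2)
    Y[a], Y[b] = Y[b], Y[a]
    return Y

def one_cut(p1, p2):
    # F2: one-cut crossover (Figure 2); one of the two children at random
    cut = random.randrange(1, len(p1))
    return random.choice([p1[:cut] + p2[cut:], p2[:cut] + p1[cut:]])

def two_cut(p1, p2):
    # F3: two-cut crossover (Figure 3); one of the two children at random
    a, b = sorted(random.sample(range(1, len(p1)), 2))
    return random.choice([p1[:a] + p2[a:b] + p1[b:],
                          p2[:a] + p1[a:b] + p2[b:]])

def dpso_update(Y, P, G, w=0.9, c1=0.5, c2=0.5):
    # chain (11)-(13): each operator fires with its own probability
    lam = exchange(Y) if random.random() < w else Y               # velocity (11)
    delta = one_cut(lam, P) if random.random() < c1 else lam      # cognition (12)
    return two_cut(delta, G) if random.random() < c2 else delta   # social (13)
```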

    Algorithm 1: Pseudocode of the CPSO algorithm for the UFL problem.

  Begin
    Initialize particles (population) randomly
    For each particle
      Calculate open facility vector (5)
      Calculate fitness value using open facility vector (1)
      Set position vector and fitness value as personal best (Pi)
    End
    Select the best particle and its position vector as global best (Gt)
    Do {
      Update inertia weight
      For each particle
        Update velocity (8)
        Update position (9)
        Find open facility vector
        Calculate fitness value using open facility vector (1)
        Update personal best (Pi)
        Update global best (Gt) and its position vector
      End
      Apply local search to global best (for CPSOLS)
    } While (maximum iteration is not reached)
  End

    Algorithm 2: Pseudocode of the DPSO algorithm for the UFL problem.

  Begin
    Initialize open facility vectors randomly
    Calculate fitness values using Yi (1)
    Do {
      Find personal best (Pi)
      Find global best (Gt)
      For each particle
        Apply velocity component (11)
        Apply cognition component (12)
        Apply social component (13)
        Calculate fitness value using Yi (1)
      End
      Apply local search to global best (for DPSOLS)
    } While (maximum iteration is not reached)
  End

2.3. Local Search for CPSO and DPSO Algorithm

Left to themselves, CPSO and DPSO conduct a rather rough search and produce premature results that do not offer satisfactory solutions. For this reason, a local search algorithm is embedded into CPSO and DPSO so as to produce more satisfactory solutions. In this study, we apply a simple local search method to neighbours of the global best particle in every generation. In CPSO, the global best has three vectors, and the local search is applied to its position vector (Xg); since DPSO has a single vector, the local search is applied to its open facility vector (Yg).

The neighbourhood structure, which determines the neighbour solutions to move to, is one of the key elements of a metaheuristic, and the performance of the hybrid algorithm depends on its efficiency. For both algorithms, a flip operator is employed as the neighbourhood structure. It is defined as picking one position (i) of the global best at random and then changing its value using formula (14) for CPSOLS and formula (15) for DPSOLS. Since continuous and binary values are stored in the Xi and Yi vectors, respectively, the two formulas are slightly different:

xgi = sign(xgi) × (|xgi| + 1), which inverts ⌊|xgi| mod 2⌋ (14)

ygi = 1 − ygi (15)

The local search algorithm applied in this study is sketched in Algorithm 3. The global best found at the end of each iteration of CPSO or DPSO is adopted as the initial solution, s0, by the local search algorithm. In order not to lose the best found so far and to diversify the search, the global best is first modified at two randomly chosen facilities (η and κ). Then the flip operator is applied as long as it yields a better solution. The final solution, s, replaces the old global best if its fitness value is better than the initial one.

    Algorithm 3: Pseudocode for the local search.

  Begin
    Set global best open facility vector (Yg) to s0 (for DPSOLS)
    Set global best position vector (Xg) to s0 (for CPSOLS)
    Modify s0 based on η, κ and set to s
    Set loop = 0
    Repeat:
      Apply Flip to s and get s1
      if (f(s1) ≤ f(s0))
        Replace s with s1
      else
        loop = loop + 1
    Until loop ≥ n
    if (f(s) ≤ f(s0)) Replace Yg with s (for DPSOLS)
    if (f(s) ≤ f(s0)) Replace Xg with s (for CPSOLS)
  End
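A sketch of the local search for the binary (DPSOLS) case follows. This is our reading, in which a flip is accepted when it does not worsen the current solution (the prose says "as long as it gets a better solution"); `fitness` stands for whatever function evaluates (1):

```python
import random

def flip(s):
    # flip operator (15): invert one randomly chosen bit
    s = s[:]
    i = random.randrange(len(s))
    s[i] = 1 - s[i]
    return s

def local_search(s0, fitness):
    n = len(s0)
    s = s0[:]
    for i in random.sample(range(n), 2):  # diversify at eta and kappa
        s[i] = 1 - s[i]
    loop = 0
    while loop < n:                       # stop after n rejected flips
        s1 = flip(s)
        if fitness(s1) <= fitness(s):
            s = s1
        else:
            loop += 1
    return s if fitness(s) <= fitness(s0) else s0  # keep old best if worse
```

With a toy fitness such as `sum` (minimize the number of open facilities), the search monotonically removes facilities until n flips in a row are rejected.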

3. Experimental Results

The experimental study was completed in three stages: first, we compared the CPSO and DPSO algorithms without local search; then we compared them with local search (CPSOLS and DPSOLS) with respect to solution quality; finally, the DPSOLS results were compared with those of two other metaheuristics, namely, a genetic algorithm (GA) and an evolutionary simulated annealing algorithm (ESA). The experiments reported in this section were carried out over 15 benchmark problems well known to UFL researchers. The benchmarks are taken from the OR-Library [39], a collection of benchmarks for operations research (OR) studies. There are currently 15 UFL test problems in the OR-Library. Among these, 12 are relatively small, ranging in size from m × n = 50 × 16 to m × n = 50 × 50; the other three are relatively large, with m × n = 1000 × 100. The benchmarks are introduced in Table 3 with their optimum values. Although the optimum values are known, it is hard to hit the optima on every attempt. Since the main idea is to test the performance of the CPSO and DPSO algorithms on the UFL benchmarks, the results of both algorithms are reported in Table 4 in terms of solution quality: average relative percent error (ARPE), hit-to-optimum rate (HR), and average computational processing time (ACPU) in seconds. ARPE is the average percentage difference from the optimum, defined as follows:

ARPE = (1/R) Σ_{i=1}^{R} [ (Hi − U) / U × 100 ] (16)

where Hi denotes the solution value of the ith replication, U is the optimal value reported in the literature, and R is the number of replications. HR is the ratio of the number of runs that reached the optimum to the total number of trials.
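The two quality indexes can be sketched as follows (our sketch, with hypothetical replication values for illustration):

```python
def arpe(results, optimum):
    # average relative percent error, as in (16)
    return sum((h - optimum) / optimum * 100 for h in results) / len(results)

def hit_rate(results, optimum):
    # HR: fraction of replications that reached the optimum
    return sum(h == optimum for h in results) / len(results)

runs = [100.0, 103.0, 100.0, 101.0]   # hypothetical replication values
print(round(arpe(runs, 100.0), 6))    # → 1.0
print(hit_rate(runs, 100.0))          # → 0.5
```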
Table 3. Benchmarks tackled with the sizes and the optimum values.
Benchmarks
Problems Size (n × m) Optimum
  
Cap71 16×50 932615.75
Cap72 16×50 977799.40
Cap73 16×50 1010641.45
Cap74 16×50 1034976.98
Cap101 25×50 796648.44
Cap102 25×50 854704.20
Cap103 25×50 893782.11
Cap104 25×50 928941.75
Cap131 50×50 793439.56
Cap132 50×50 851495.33
Cap133 50×50 893076.71
Cap134 50×50 928941.75
CapA 100×1000 17156454.48
CapB 100×1000 12979071.58
CapC 100×1000 11505594.33
Table 4. Experimental results gained for CPSO and DPSO without local search.
CPSO DPSO
Problem ARPE HR ACPU ARPE HR ACPU
  
Cap71 0.026 0.83 0.1218 0.000 1.00 0.0641
Cap72 0.050 0.83 0.1318 0.000 1.00 0.0651
Cap73 0.034 0.73 0.1865 0.000 1.00 0.0708
Cap74 0.095 0.00 0.1781 0.000 1.00 0.0693
Cap101 0.183 0.00 0.8818 0.000 1.00 0.3130
Cap102 0.135 0.33 0.7667 0.000 1.00 0.3062
Cap103 0.145 0.00 0.9938 0.000 1.00 0.3625
Cap104 0.286 0.60 0.6026 0.002 0.93 0.2021
Cap131 0.911 0.00 3.6156 0.173 0.13 2.5464
Cap132 0.756 0.00 3.5599 0.090 0.17 2.6328
Cap133 0.496 0.00 3.7792 0.042 0.43 2.5292
Cap134 0.691 0.23 3.3333 0.000 1.00 1.7167
CapA 21.242 0.00 29.5739 8.654 0.00 24.8972
CapB 10.135 0.00 27.1318 4.918 0.00 22.0652
CapC 8.162 0.00 27.6149 4.545 0.00 23.1340

Obviously, the higher the HR, the better the solution quality, while the lower the ARPE, the better. The computational times for CPSO [33] and DPSO are measured as the time to reach the best value over 1000 iterations, while those for CPSOLS and DPSOLS are measured over 250 iterations. All algorithms and related software were coded in Borland C++ Builder 6 and run on an Intel Centrino 1.7 GHz PC with 512 MB of memory.

Few parameters are needed for the DPSO and DPSOLS algorithms: the size of the population (swarm) equals the number of facilities; the social and cognitive probabilities are set to c1 = c2 = 0.5; and the inertia weight w is set to 0.9. Each problem is run for 30 replications. Two termination criteria are applied to every run: the first is reaching the optimum solution; the other is reaching the maximum iteration number, which is chosen so that results are obtained in a reasonable CPU time.

The performance of the CPSO algorithm is not very impressive given the time spent over 1000 iterations. CPSO found 6 optimal solutions among the 15 benchmark problems, whereas DPSO found 12. The ARPE index, which should be low for good solution quality, is very high for CPSO on the CapA, CapB, and CapC benchmarks, and none of the attempts on these benchmarks hit the optimum value. The ARPE index of DPSO is better than that of CPSO but still not as low as expected. In terms of CPU time, DPSO is better than CPSO as well. When the results are examined statistically using a t-test at the 99% confidence level, DPSO produced significantly better fitness results than CPSO when the CapA, CapB, and CapC fitness results are excluded. It might be possible to improve solution quality by running the algorithms for further iterations, but then the main motivation for employing heuristics, namely getting better quality within a shorter time, would be lost. This made it essential to strengthen CPSO and DPSO by hybridizing them with a local search algorithm; thus, a simple local search algorithm was employed for this purpose, as mentioned before.

The results of CPSOLS and DPSOLS are shown in Table 5. The performance of CPSOLS and DPSOLS is very impressive compared to CPSO and DPSO with respect to all three indexes. HR is 1.00, meaning that 100% of the runs reached the optimum, for all benchmarks except CapB and CapC for CPSOLS and except CapA, CapB, and CapC for DPSOLS. On the other hand, it should be mentioned that DPSOLS found the optimum of every instance in at least one run, while CPSOLS did so for every instance except CapC. The ARPE values of CPSOLS and DPSOLS are very small and very similar to each other, so there is no meaningful difference; when compared statistically using a t-test at the 99% confidence level, CPSOLS and DPSOLS can be considered equal. In terms of CPU time, however, CPSOLS consumed 18% more time than DPSOLS, so the results of DPSOLS are slightly better than those of CPSOLS.

Table 5. Experimental results of CPSOLS and DPSOLS.
CPSOLS DPSOLS
Problem ARPE HR ACPU ARPE HR ACPU
  
Cap71 0.000 1.00 0.0146 0.000 1.00 0.0130
Cap72 0.000 1.00 0.0172 0.000 1.00 0.0078
Cap73 0.000 1.00 0.0281 0.000 1.00 0.0203
Cap74 0.000 1.00 0.0182 0.000 1.00 0.0109
Cap101 0.000 1.00 0.1880 0.000 1.00 0.1505
Cap102 0.000 1.00 0.0906 0.000 1.00 0.0557
Cap103 0.000 1.00 0.2151 0.000 1.00 0.1693
Cap104 0.000 1.00 0.0370 0.000 1.00 0.0344
Cap131 0.000 1.00 1.4281 0.000 1.00 0.4922
Cap132 0.000 1.00 1.0245 0.000 1.00 0.2745
Cap133 0.000 1.00 1.3651 0.000 1.00 0.4516
Cap134 0.000 1.00 0.3635 0.000 1.00 0.0594
CapA 0.037 1.00 16.3920 0.051 0.53 14.5881
CapB 0.327 0.63 19.6541 0.085 0.40 17.6359
CapC 0.091 0.00 17.4234 0.036 0.13 15.7685

The experimental study also includes the comparative results in Table 6. A genetic algorithm (GA) introduced by Jaramillo et al. [21] and an evolutionary simulated annealing algorithm (ESA) proposed by Aydin and Fogarty [16] are taken for comparison. These algorithms were coded and run under the same conditions as the DPSOLS algorithm. The performance of DPSOLS is slightly better than that of GA on both indexes; with respect to CPU time in particular, DPSOLS is much better than GA. Compared with ESA, the deviations from the optimum of the two algorithms are very similar; however, especially for CapA, CapB, and CapC, GA and ESA consume more CPU time than DPSOLS.

Table 6. Summary of results gained from different algorithms for comparison.
Deviation from optimum [33] Average CPU
Problem GA [21] ESA [16] DPSOLS GA [21] ESA [16] DPSOLS
  
Cap71 0.00 0.00 0.000 0.287 0.041 0.0130
Cap72 0.00 0.00 0.000 0.322 0.028 0.0078
Cap73 0.00033 0.00 0.000 0.773 0.031 0.0203
Cap74 0.00 0.00 0.000 0.200 0.018 0.0109
Cap101 0.00020 0.00 0.000 0.801 0.256 0.1505
Cap102 0.00 0.00 0.000 0.896 0.098 0.0557
Cap103 0.00015 0.00 0.000 1.371 0.119 0.1693
Cap104 0.00 0.00 0.000 0.514 0.026 0.0344
Cap131 0.00065 0.00008 0.000 6.663 2.506 0.4922
Cap132 0.00 0.00 0.000 5.274 0.446 0.2745
Cap133 0.00037 0.00002 0.000 7.189 0.443 0.4516
Cap134 0.00 0.00 0.000 2.573 0.079 0.0594
CapA 0.00 0.00 0.051 184.422 17.930 14.5881
CapB 0.00172 0.00070 0.085 510.445 91.937 17.6359
CapC 0.00131 0.00119 0.036 591.516 131.345 15.7685

4. Conclusion

In this paper, a recent metaheuristic algorithm called DPSO is applied to solve UFL benchmark problems. The algorithm was tested on several benchmark problem instances, and optimal results were obtained in reasonable computing times. The results of DPSO with local search (DPSOLS) are compared with those of CPSO [33] with local search (CPSOLS) and of two other metaheuristic approaches, namely, GA [21] and ESA [16]. It is concluded that DPSOLS is slightly better than CPSOLS and competitive with GA and ESA.

The main purpose of this paper is to test the performance of the CPSO and DPSO algorithms under the same conditions. Compared with CPSO, DPSO proves to be the better algorithm for UFL problems. It should also be noted that, since CPSO represents each particle by three key vectors, position (Xi), velocity (Vi), and open facility (Yi), it allocates more memory per particle than DPSO. In addition, to the best of our knowledge, this is the first application of a discrete PSO algorithm to the UFL problem.
