International Journal of Electrical and Computer Engineering (IJECE)
Vol. , No. , April 2013, pp. 238-245
ISSN: 2088-8708

The Effect of Rearrangement of the Most Incompatible Particle on Increase of Convergence Speed of PSO

Abbas Fadavi*, Karim Faez**
* Department of Mechatronics, Science and Research Branch, Islamic Azad University, Semnan, Iran
** Department of Electrical Engineering, Amirkabir University of Technology, Tehran, Iran

Article Info

Article history:
Received Dec 27, 2012
Revised Feb 25, 2013
Accepted Mar 16, 2013

Keywords:
Convergence speed
PSO
Global optimum
Standard search spaces

ABSTRACT

This article presents a new method for increasing the convergence speed of Particle Swarm Optimization (PSO). PSO is an optimization method inspired by the collective movement of birds and fish looking for food. It is composed of a group of particles, each of which tries to move in the direction indicated by the best individual experience and the best group experience. Many articles have tried to extend PSO so that the global optimum is reached in less time. A common problem of this model is that particles fall into local optima. By finding the most incompatible particle and rearranging it in the search space, we increase the convergence speed of several existing PSO methods. Tests of this method on standard search spaces demonstrate that it performs well in increasing the convergence speed of the particles.

Copyright © 2013 Institute of Advanced Engineering and Science. All rights reserved.

Corresponding Author:
Abbas Fadavi,
Department of Mechatronics, Science and Research Branch, Islamic Azad University, Semnan, Iran.
Email: abbas_fadavi@yahoo.

Journal homepage: http://iaesjournal.com/online/index.php/IJECE

1. INTRODUCTION
PSO is an efficient method, based on collective wisdom, for solving optimization problems. The method was presented by Eberhart and Kennedy in 1995 and was inspired by the collective behavior of birds and fish searching for food. A group of birds is randomly looking for food in a space in which there is only a small amount of food, and none of the birds knows where the food is. One of the best strategies is to follow the bird that is closest to the food. Each particle in PSO corresponds to one bird in this model of collective movement. Each particle has a merit value that is calculated by a fitness function: the closer a particle is to the target (the food in the bird model), the higher its fitness. After identifying the optimum particles in the current state, each particle keeps moving in the search space. In fact, each particle tries to adjust its direction of movement according to the best individual experience and the best group experience in order to move toward the final solution.

Different variants of the PSO algorithm have been presented.

Standard PSO: in this method, the movement of each particle follows Equations (1) and (2):

    v_i(t+1) = w * v_i(t) + c1 * r1 * (pbest_i(t) - x_i(t)) + c2 * r2 * (gbest(t) - x_i(t))    (1)
    x_i(t+1) = x_i(t) + v_i(t+1)                                                               (2)

where
x_i(t): the position of particle i at time t
x_i(t+1): the position of particle i at time t+1
pbest_i(t): the best position found by particle i up to time t
gbest(t): the best position found by any particle up to time t
v_i(t): the velocity of particle i at time t
v_i(t+1): the velocity of particle i at time t+1
r1, r2: two random variables with uniform distribution in the interval [0, 1]
w: regulates the effect of the velocity at time t on the velocity at time t+1
c1: regulates the effect of the best result obtained by the particle itself on its position at time t+1
c2: regulates the effect of the best result obtained by all particles on its position at time t+1
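To make the update rule concrete, the following is a minimal sketch of a standard PSO loop implementing Equations (1) and (2) in Python. The function name standard_pso, the use of NumPy, the default parameter values and the sphere objective in the usage line are illustrative assumptions, not code from the paper.

    import numpy as np

    def standard_pso(objective, dim=30, n_particles=10, w=0.7, c1=2.0, c2=2.0,
                     bounds=(-5.12, 5.12), iterations=100, seed=0):
        """Minimal standard PSO loop following Equations (1) and (2)."""
        rng = np.random.default_rng(seed)
        lo, hi = bounds
        x = rng.uniform(lo, hi, (n_particles, dim))        # positions x_i(t)
        v = np.zeros((n_particles, dim))                   # velocities v_i(t)
        pbest = x.copy()                                   # best position found by each particle
        pbest_val = np.array([objective(p) for p in x])
        gbest = pbest[pbest_val.argmin()].copy()           # best position found by the swarm

        for _ in range(iterations):
            r1 = rng.random((n_particles, dim))
            r2 = rng.random((n_particles, dim))
            # Equation (1): velocity update
            v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
            # Equation (2): position update
            x = x + v
            val = np.array([objective(p) for p in x])
            improved = val < pbest_val                     # update personal bests
            pbest[improved], pbest_val[improved] = x[improved], val[improved]
            gbest = pbest[pbest_val.argmin()].copy()       # update the global best
        return gbest, pbest_val.min()

    # Illustrative usage on a simple sphere objective
    best_pos, best_err = standard_pso(lambda p: float(np.sum(p ** 2)))

The same loop structure is reused later when the rearrangement step of the presented method is added.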
In this method, each particle chooses its next position according to its own experience and by consulting the other particles. The particles repeat this procedure and converge toward the solution.

Turbulent PSO: by monitoring the speed of each particle, this method finds lazy particles and moves them out of their static state. If the speed of a particle falls below a minimum threshold, the method assigns it a higher speed. As a result, particles captured in local optima jump out and become active particles again thanks to the high speed.

Multiplicate PSO: this method randomly combines five different rules. In each step, particles use different values of w, c1 and c2, which gives the particles high flexibility. The particles still converge to one optimum point.

Random arrangement of w: in this method, a different w is chosen randomly for each particle in each iteration. If a particle is captured in a local optimum, the varying w helps it escape.

APSO: Zhan and coworkers presented a way to adapt the PSO parameters based on fuzzy rules. The method first calculates a parameter named the learning factor; this factor is then used, through four fuzzy rules, to set the values of the PSO parameters.

SAPSO: in this method, the error of each particle is analyzed in each iteration. If the pbest of a particle has not improved compared with the previous iteration, the w, c1 and c2 of that particle are adjusted accordingly.

SPSO: Eberhart presented a method based on dynamic adjustment of the w parameter. In this method, w decreases as the iterations proceed, so particles search with high speed in the first iterations and this speed drops as w decreases. In other words, the particles search globally in the early iterations and locally in the later iterations.

IPSO: this method increases the efficiency of SPSO by improving it. SPSO decreases w based only on the iteration number; in IPSO, the closeness of the particles to the gbest particle also influences the decrease of w. The closer the particles are to gbest, the more they converge toward a local optimum, so when the particles concentrate around gbest, the decrease of w used in SPSO is reduced.

The different PSO methods were reviewed in this first section. The capture of particles in local optima is examined in the second section. The presented method is explained in the third section, and its results on standard search spaces are illustrated in the fourth section.

2. PARTICLES BEING CAPTURED IN THE LOCAL OPTIMUM
One of the cases in which particles are captured in a local optimum has been described in the literature; this case is called oscillation. To explain oscillation, Equation (1) can be simplified by assuming w = 0 and c1 = c2 = 1 and ignoring r1 and r2. Equations (1) and (2) then reduce to

    x_i(t+1) = pbest_i(t) + gbest(t) - x_i(t)    (3)
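Figures 1 and 2 below illustrate this oscillation geometrically; it can also be checked numerically by iterating the simplified update of Equation (3) directly. A minimal sketch in Python follows, where the concrete values of pbest, gbest and the starting position are illustrative assumptions:

    # Simplified update from Equation (3): w = 0, c1 = c2 = 1, r1 and r2 ignored,
    # so x(t+1) = pbest + gbest - x(t).
    pbest, gbest = 2.0, 5.0          # illustrative fixed best positions
    x = 1.0                          # illustrative starting position of the particle
    for t in range(6):
        print(t, x)
        x = pbest + gbest - x        # the particle jumps back and forth between 1.0 and 6.0

As long as pbest_i and gbest do not change, applying the simplified update twice returns the particle to its starting point, so it oscillates between two positions instead of converging.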
Figure 1. Movement of the particle x_i toward gbest

Figure 2. Movement of the particle x_i toward pbest_i

In Figure 1, the particle x_i is analyzed at time t. The best experience of the swarm, gbest, and the best experience of particle i, pbest_i, are shown. As demonstrated in Figure 1, the term gbest(t) - x_i(t) is greater than the term pbest_i(t) - x_i(t), so x_i moves toward gbest. In Figure 2, the particle x_i is shown at time t+1. Here, in contrast to the previous iteration, pbest_i(t+1) - x_i(t+1) is greater than gbest(t+1) - x_i(t+1); therefore x_i moves toward pbest_i. After analyzing the two iterations, we can see that the particle x_i oscillates between two positions, pulled alternately by gbest and pbest_i, and does not move toward the optimum point.

3. THE PRESENTED METHOD
Examination of the particles in the PSO method shows that some of the particles become captured. These particles, like the others, incur computational cost; however, they do not help in finding the global optimum and therefore decrease the efficiency of PSO. The efficiency of PSO can be increased by re-activating these particles. To determine the positions of the particles, PSO pays attention to the best response of the particles, which is called gbest. In the same way, in each iteration we find the particle whose position at time t gives the worst response of the fitness function and call it gworst(t).

In different iterations the particles move toward the optimum point according to their pbest and the gbest of the swarm. Because of this, the particle that has the worst position relative to the other particles has most probably been captured in a local optimum, since it could not move toward the global optimum based on gbest. Through rearrangement, we send this worst particle out of the local optimum. For example, to determine the movement of particle x_i at time t+1, we use the following rule (a code sketch of this rule inside a full PSO loop is given at the end of this section):

    if x_i(t) == gworst(t):
        x_i(t+1) = r
    else:
        x_i(t+1) = x_i(t) + v_i(t+1)

where r is a random point in the search space. In this way, the gworst particle leaves the local optimum and behaves like an active particle in the following iterations.

As an example, Figure 3 shows the error of a hypothetical particle in standard PSO; the Rastrigin search space has been chosen, and for clarity the other particles are not displayed. The error of the particle is shown by the dark line. As can be seen in the figure, the particle is captured in a local optimum after searching for a while, and from then on its error does not decrease. The light line shows the error of the same particle if it is rearranged at iteration 30. As can be seen, the rearrangement causes the particle to leave the local optimum and move toward the global optimum. In the first case, the examined particle adds its computational cost to the algorithm but does not help the other particles reach the global optimum. In the second case, the particle is identified as gworst by the algorithm at iteration 30 and, through rearrangement, turns into an active particle that helps the other particles find the solution.

Figure 3. The error of a particle in the standard PSO algorithm over iterations. Dark line: without rearrangement of the worst particle; light line: with rearrangement of the worst particle
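The rearrangement rule above can be attached to any PSO loop. The following is a minimal Python sketch of the standard PSO loop from the introduction with the rearrangement of gworst added; the identifier gworst_index, the resetting of the rearranged particle's velocity to zero, and the search bounds are assumptions made for illustration and are not specified in the paper.

    import numpy as np

    def pso_with_rearrangement(objective, dim=30, n_particles=10, w=1.0, c1=2.0, c2=2.0,
                               bounds=(-5.12, 5.12), iterations=100, seed=0):
        """Standard PSO plus rearrangement of the worst particle (gworst) in each iteration."""
        rng = np.random.default_rng(seed)
        lo, hi = bounds
        x = rng.uniform(lo, hi, (n_particles, dim))
        v = np.zeros((n_particles, dim))
        pbest = x.copy()
        pbest_val = np.array([objective(p) for p in x])
        gbest = pbest[pbest_val.argmin()].copy()

        for _ in range(iterations):
            r1 = rng.random((n_particles, dim))
            r2 = rng.random((n_particles, dim))
            v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)   # Equation (1)
            x = x + v                                                   # Equation (2)

            # Find gworst(t), the particle whose current position has the worst fitness,
            # and rearrange it: re-initialize it at a random point r of the search space.
            val = np.array([objective(p) for p in x])
            gworst_index = val.argmax()
            x[gworst_index] = rng.uniform(lo, hi, dim)   # x_i(t+1) = r for the gworst particle
            v[gworst_index] = 0.0                        # assumption: reset its velocity as well
            val[gworst_index] = objective(x[gworst_index])

            improved = val < pbest_val
            pbest[improved], pbest_val[improved] = x[improved], val[improved]
            gbest = pbest[pbest_val.argmin()].copy()
        return gbest, pbest_val.min()

Only the worst particle in each iteration is affected, so the step can be added in the same way to each of the six PSO variants examined in the next section.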
4. RESULTS
We implemented six different PSO methods in software and then, in each of them, rearranged the worst particle in each iteration using the presented method. The experiments were carried out in three standard search spaces (Rastrigin, Griewank and Ackley); definitions of these three benchmark functions are sketched after the APSO results below. In each experiment, ten particles were used for the search, with w = 1, c1 = 2, c2 = 2, and the dimension of each of the three spaces set to 30. Each experiment was repeated 100 times, and the graphs in the figures show the mean over the 100 runs.

Standard PSO
The error versus iterations for the standard PSO method is shown in Figure 4. The worst particle in each iteration was then rearranged by the presented method and the error versus iterations was plotted again. As shown in the figure, the error with rearrangement of the worst particle is lower than the error without rearrangement of the worst particle.

Figure 4. The error of the standard PSO algorithm versus iterations. Dark line: without rearrangement of the worst particle; light line: with rearrangement of the worst particle. (a) Ackley search space; (b) Griewank search space; (c) Rastrigin search space

Turbulent PSO
The error versus iterations for the Turbulent PSO method is shown in Figure 5. The worst particle in each iteration was then rearranged by the presented method and the error versus iterations was plotted again. As shown in the figure, the error with rearrangement of the worst particle is lower than the error without rearrangement of the worst particle.

Figure 5. The error of the Turbulent PSO algorithm versus iterations. Dark line: without rearrangement of the worst particle; light line: with rearrangement of the worst particle. (a) Ackley search space; (b) Griewank search space; (c) Rastrigin search space

Random arrangement
The error versus iterations for the random arrangement of w method is shown in Figure 6. The worst particle in each iteration was then rearranged by the presented method and the error versus iterations was plotted again. As shown in the figure, the error with rearrangement of the worst particle is lower than the error without rearrangement of the worst particle.

Figure 6. The error of the random arrangement algorithm versus iterations. Dark line: without rearrangement of the worst particle; light line: with rearrangement of the worst particle. (a) Ackley search space; (b) Griewank search space; (c) Rastrigin search space

APSO
The error versus iterations for the APSO method is shown in Figure 7. The worst particle in each iteration was then rearranged by the presented method and the error versus iterations was plotted again. As shown in the figure, the error with rearrangement of the worst particle is lower than the error without rearrangement of the worst particle.

Figure 7. The error of the APSO algorithm versus iterations. Dark line: without rearrangement of the worst particle; light line: with rearrangement of the worst particle. (a) Ackley search space; (b) Griewank search space; (c) Rastrigin search space
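For reference, the three benchmark search spaces used in these experiments are standard test functions. The definitions below are minimal Python sketches of their commonly used forms; the search ranges given in the comments are the conventional ones and are an assumption, since the paper does not state them.

    import numpy as np

    def rastrigin(p):
        # Global minimum 0 at p = 0; typically searched over [-5.12, 5.12]^n.
        p = np.asarray(p)
        return float(10 * p.size + np.sum(p ** 2 - 10 * np.cos(2 * np.pi * p)))

    def griewank(p):
        # Global minimum 0 at p = 0; typically searched over [-600, 600]^n.
        p = np.asarray(p)
        i = np.arange(1, p.size + 1)
        return float(1 + np.sum(p ** 2) / 4000 - np.prod(np.cos(p / np.sqrt(i))))

    def ackley(p):
        # Global minimum 0 at p = 0; typically searched over [-32.768, 32.768]^n.
        p = np.asarray(p)
        n = p.size
        return float(-20 * np.exp(-0.2 * np.sqrt(np.sum(p ** 2) / n))
                     - np.exp(np.sum(np.cos(2 * np.pi * p)) / n) + 20 + np.e)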
SAPSO
The error versus iterations for the SAPSO method is shown in Figure 8. The worst particle in each iteration was then rearranged by the presented method and the error versus iterations was plotted again. As shown in the figure, the error with rearrangement of the worst particle is lower than the error without rearrangement of the worst particle.

Figure 8. The error of the SAPSO algorithm versus iterations. Dark line: without rearrangement of the worst particle; light line: with rearrangement of the worst particle. (a) Ackley search space; (b) Griewank search space; (c) Rastrigin search space

IPSO
The error versus iterations for the IPSO method is shown in Figure 9. The worst particle in each iteration was then rearranged by the presented method and the error versus iterations was plotted again. As shown in the figure, the error with rearrangement of the worst particle is lower than the error without rearrangement of the worst particle.

Figure 9. The error of the IPSO algorithm versus iterations. Dark line: without rearrangement of the worst particle; light line: with rearrangement of the worst particle. (a) Ackley search space; (b) Griewank search space; (c) Rastrigin search space

As observed in the figures above, the presented method turns particles captured in local optima into active particles and increases the convergence speed of the different particle swarm optimization algorithms. In fact, the presented method is not an independent method; it is a technique that is added to other PSO algorithms to decrease their average error over the iterations. As also observed in the figures, there is only a slight difference between the error with and without rearrangement of the worst particle in the early iterations. This is because the particles are not yet captured in local optima in the early iterations, since they are still discovering better points; therefore, rearrangement of the particles does not yet lead to a decrease in error.

REFERENCES