Large-scale global optimization through consensus of opinions over complex networks

Abstract

Purpose

Large-scale optimization tasks have many applications in science and engineering. There are many algorithms to perform such optimization tasks. In this manuscript, we aim at using consensus in multi-agent systems as a tool for solving large-scale optimization tasks.

Method

The model is based on consensus of opinions among agents interacting over a complex networked structure. For each optimization task, a number of agents are considered, each with an opinion value. These agents interact over a networked structure and update their opinions based on their best-matching neighbor in the network. A neighbor with the best value of the objective function (of the optimization task) is referred to as the best-matching neighbor for an agent. We use structures such as pure random, small-world and scale-free networks as interaction graphs. The optimization algorithm is applied to a number of benchmark problems and its performance is compared with that of a number of classic methods including genetic algorithms, differential evolution and particle swarm optimization.

Results

We show that the agents could solve various large-scale optimization tasks through collaborating with each other and getting into consensus in their opinions. Furthermore, we find pure random topology better than small-world and scale-free topologies in that it leads to faster convergence to the optimal solution. Our experiments show that the proposed consensus-based optimization method outperforms the classic optimization algorithms.

Conclusion

Consensus in multi-agent systems can be efficiently used for large-scale optimization problems. The connectivity structure of the consensus network affects convergence to the optimal solution, with random structures performing better than heterogeneous networks.

AMS subject classification

15A04, 54A20, 60J20, 92D25

Background

Networks are everywhere and we confront many networks in our daily life; they are practically present wherever any kind of information is transmitted or exchanged. Networks such as the Internet, the World Wide Web, and engineering, social, biological and economic networks have been heavily studied in the last decade, and many applications have been developed based on network science (Albert & Barabasi 2002; Albert et al. 1999; Barabasi & Albert 1999; Boccaletti et al. 2006; Newman et al. 2006; Newman 2003; Newman & Park 2003; Strogatz 2001; Watts & Strogatz 1998). The progress in network science accelerated after the seminal work of Watts & Strogatz (1998) on the collective behavior of small-world networks and of Barabasi & Albert (1999) on scale-free graphs. Watts and Strogatz discovered that many real-world networks have the small-world property, in that their characteristic path length scales logarithmically with network size (Watts & Strogatz 1998) – a property also observed in random networks. At the same time, these networks show high levels of transitivity (clustering coefficient) (Watts & Strogatz 1998) – much higher than in corresponding random networks. Furthermore, many real networks from different disciplines were shown to have a power-law degree distribution (Barabasi & Albert 1999); the probability of finding a node with degree k is proportional to k^(−γ), with γ typically in the range 2–3. Real networks have also been shown to have more complex properties such as motifs (Milo et al. 2002) and community structure (Girvan & Newman 2002). These structural features influence the dynamics and functionality of networks. For example, the synchronization and consensus properties of networks largely depend on their structure (Belykh et al. 2005; Lu et al. 2004).

The most striking patterns of networked structures appear when a number of agents (each with simple behavior) interact, leading to complex behaviors as a result of collective motion. Synchronization of interacting agents – as the most striking form of collective behavior – has many applications in science and engineering. For example, techniques available in network theory can be used for efficient distributed inference in sensor networks (Barbarossa & Scutari 2007; Scutari et al. 2008). In this work we used tools available in network science to perform a numerical optimization task. Optimization is an approach that iteratively improves the performance of a system, which is quantified by a single measure called the cost (or objective) function.

In order to use network theory for solving an optimization task, we used the concept of consensus formation in the opinions of multi-agent systems. Let us consider a network of agents where a (discrete or continuous) opinion value is associated with each agent. Agents can influence each other's opinions through the connections existing between them, i.e., the edges of the network. Given some simple update rules, and provided certain conditions are met, the agents can reach a consensus in their opinions through a number of opinion updates (Kozma & Barrat 2008; Carletti et al. 2006). In this work we considered the evolution of continuous opinions based on a modified version of the bounded confidence model (Deffuant et al. 2001), which has been extensively studied in recent years (Gandica et al. 2010; Weisbuch 2004; Urbig et al. 2008). Previous studies of continuous opinion formation have mainly considered uniform agents (Gandica et al. 2010; Weisbuch 2004; Urbig et al. 2008). In reality, however, agents are diverse in their wealth and social status, and hence have diverse influence on others (Holyst et al. 2001; Lewenstein et al. 1992; Jalili 2013a; Jalili 2013b). Therefore, we associated a specific weight with each agent, resulting in faster consensus.

The paradigm proposed in this manuscript was applied to a number of benchmark problems. We first considered a simple function with many local optima and showed that the proposed optimization strategy could successfully find the optimum. We then applied the method to a number of benchmark problems from the CEC 2010 competition benchmark set (Tang et al. 2009). We compared the performance of the proposed consensus-based optimization approach with that of a number of classic optimization methods, including genetic algorithms, differential evolution and particle swarm optimization.

Methods

Optimization through consensus in the network

The paradigm we have proposed for large-scale optimization tasks is based on consensus in networked structures. In opinion formation models, there is a population of agents, each with a (discrete or continuous) opinion value representing its information about a subject (Deffuant et al. 2001; Gandica et al. 2010; Weisbuch 2004; Urbig et al. 2008). The term opinion is not easy to define in reality; however, it can be considered as a discrete or continuous value expressing an individual's degree of desire or preference. This opinion is often represented as a real number when the model is unimodal, or as a vector of real numbers when the model is multimodal. In this paper we aim at optimizing an objective function, and therefore each agent has an opinion value containing all the input parameters of the desired objective function, i.e., a multimodal model.

Opinion formation in multi-agent systems

The agents update their opinions as a result of interactions with their neighboring agents. Consider two neighboring agents i and j with opinions x_i and x_j, respectively. Their opinions at time n + 1 are functions of their previous opinions, i.e., x_i(n + 1) = f_1(x_i(n), x_j(n)) and x_j(n + 1) = f_2(x_i(n), x_j(n)). If certain conditions are met, after a number of updates the agents can reach a consensus in their opinions (Kozma & Barrat 2008; Carletti et al. 2006). The collective behavior of the agents over complex networks largely depends on the structural properties of the networks (Amblard & Deffuant 2004), and a minor modification in the structure of the network can have drastic effects on the behavior of opinion formation (Nardini et al. 2008).

There are a number of rules for modelling opinion formation in complex networks. For example, with discrete opinions, in the voter model randomly selected agents replace their opinions with that of one of their neighbours (Krapivsky & Redner 2003). Agents may also influence their neighbouring agents to change their opinions, based on their own strength and the neighbours' thresholds (Leskovec et al. 2006). In the evolution of continuous opinions on a network, the opinions of two connected agents are updated only if their difference is less than a threshold, i.e., the agents have evolving opinions (Deffuant et al. 2001; Amblard & Deffuant 2004; Lorenz 2007; Kurmyshev et al. 2011; Hegselmann & Krause 2002; Guo & Cai 2009).

In this work we considered a specific form of the continuous bounded confidence model in which each agent has an opinion in the range [−1, 1] – denoted the opinion space – and updates its opinion based on a specific rule (Deffuant et al. 2001; Fortunato et al. 2005). First, each agent takes a random value from the opinion space. Then, at each subsequent step, each agent finds its best-matching neighbour, i.e., the neighbour with the best value of the objective function, and updates its opinion value toward this best-matching neighbour. The update rule for agent i is

$$x_i(n+1) = x_i(n) + \mu\left(x_j(n) - x_i(n)\right) \quad \text{if } f(x_i(n)) > f(x_j(n)), \qquad i = 1, 2, \ldots, N, \quad j = \arg\min_{k \in N_i} f(x_k(n)),$$
(1)

where f is the cost (or objective) function to be optimized, N is the network size and N_i is the set of neighbours of agent i. μ is the convergence (or influence) parameter, which often takes a value between 0 and 1. This parameter controls the speed of convergence: small values of μ correspond to slow but smooth convergence, while large values correspond to faster but more oscillatory convergence.
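As a concrete illustration of update rule (1), the following minimal Python sketch performs one sweep of unweighted updates over a hand-built neighbour list; the quadratic objective, the ring neighbourhood and all names are only illustrative and are not part of the original algorithm description.

    import numpy as np

    def cbo_step_unweighted(x, neighbors, f, mu=0.6):
        """One sweep of update rule (1): each agent moves a fraction mu of the way
        toward its best-matching neighbour if that neighbour has a lower cost."""
        x_new = x.copy()
        for i, nbrs in enumerate(neighbors):
            j = min(nbrs, key=lambda k: f(x[k]))   # best-matching neighbour of agent i
            if f(x[i]) > f(x[j]):                  # update only if j is strictly better
                x_new[i] = x[i] + mu * (x[j] - x[i])
        return x_new

    # Toy run (illustrative): minimise f(x) = x^2 on a ring of 5 agents.
    f = lambda v: v ** 2
    x = np.array([0.9, -0.4, 0.1, 0.7, -0.8])
    ring = [[4, 1], [0, 2], [1, 3], [2, 4], [3, 0]]
    print(cbo_step_unweighted(x, ring, f))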

To some extent, the above model of opinion formation imitates the behavior of agents in real social networks. A person may know many individuals in a society; however, he/she is influenced mainly by his/her closest friends (i.e., neighbors in the network). In many cases, individuals receive the greatest influence from their best (closest) friends and try to become similar to them, i.e., they move their opinion closer to that of their closest friends. People often behave like their best friends in order to establish and maintain their friendships, and are influenced by them more than by others in their life (Barry & Wentzel 2006). They also project their own attitudes and habits onto their friends. Furthermore, research has shown that, in general, the influence of the very best friend is approximately equal or comparable to the influence of multiple friends (Berndt & Murphy 2003).

In our model, each agent finds the neighboring agent that has the best value of the objective function – denoted the best-matching neighbor. For example, if the objective function is an energy function, the neighbor with the minimal energy is selected. The agents then update their opinions using equation (1). It has been shown that choosing proper connection weights can enhance the consensus properties of the network, i.e., the network reaches consensus in a shorter time (Jalili 2013a; Jalili 2013b; Yang et al. 2009; Brunetti et al. 2012). Therefore, we also used appropriate weights while updating the opinions. The update equation reads

$$x_i(n+1) = x_i(n) + \mu\,\frac{f(x_j(n)) + \epsilon}{f(x_i(n)) + \epsilon}\left(x_j(n) - x_i(n)\right) \quad \text{if } f(x_i(n)) > f(x_j(n)), \qquad i = 1, 2, \ldots, N, \quad j = \arg\min_{k \in N_i} f(x_k(n)),$$
(2)

where ϵ is a small value (in order to make the denominator non-zero). The above weighted update rule can be justified as follows. Suppose an objective function is to be minimized. Once the best-matching neighbor of an agent is found, it influences the agent according to its fitness, i.e., its value of the objective function. To this end, the weight in the update rule of an agent is the fitness value of the best-matching neighbor divided by the fitness value of the agent itself, which typically results in a value in the range 0–1 (note that the opinions are updated only when the fitness of the best-matching neighbor is better than that of the agent). It is worth mentioning that in some cases the opinions are multi-dimensional, i.e., x is a vector, in which case the best-matching agent is obtained separately for each dimension.
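A sketch of the weighted rule (2), assuming nonnegative objective values so that the weight (f(x_j)+ϵ)/(f(x_i)+ϵ) stays in (0, 1]; the function and parameter names are ours:

    import numpy as np

    def cbo_step_weighted(x, neighbors, f, mu=0.6, eps=1e-12):
        """One sweep of update rule (2): the step toward the best-matching
        neighbour is scaled by (f(x_j)+eps)/(f(x_i)+eps)."""
        x_new = x.copy()
        for i, nbrs in enumerate(neighbors):
            j = min(nbrs, key=lambda k: f(x[k]))
            if f(x[i]) > f(x[j]):
                w = (f(x[j]) + eps) / (f(x[i]) + eps)   # fitness-based weight
                x_new[i] = x[i] + mu * w * (x[j] - x[i])
        return x_new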

The method largely depends on the diffusion of good opinions (i.e., those that are good in terms of the objective function) through the network. Agents with opinion values close to the optimal one disseminate their opinions by communicating with their neighbors, i.e., getting into consensus with them. Indeed, opinions closer to the optimal value of the objective function have a better chance of being selected as best-matching neighbors, and thus of spreading.

The above rule for opinion formation is somewhat inspired by communication in human societies. Our friends influence our behavior in daily life; however, we are usually affected only when our friends are better than us in some respect. Here, similarly, for each agent the best-matching neighbor is first found, and its opinion is then updated (using equation (2)) only if the fitness of the best-matching neighbor is better (i.e., it gives a lower value of the objective function) than that of the agent.

It is worth mentioning that the consensus (or synchronization) properties of dynamical networks largely depend on their structure, and some topologies are favored for fast consensus (Belykh et al. 2005; Ajdari Rad et al. 2008). Network topology also plays an important role in the evolution of other dynamical phenomena over complex networks, such as the evolution of cooperative behavior among interacting agents (Perc & Szolnoki 2010; Perc 2009).

A pseudo-code of the proposed consensus-based optimization algorithm is given in the following section.

Pseudo-Code for the proposed consensus-based optimization method

Function CBO

N: number of agents in the population (network size)

M: number of attributes of opinion vector

Boundaries: the range of the opinions

F: desired objective function which is needed to be optimized (minimized in this case)

Begin

  • Initialize the N × M matrix X with random opinion values (drawn from a normal distribution) within Boundaries;

  • net = Create a structured network;

  • Repeat

  • for each agent i in population do

  • for each attribute a do

  • neighbors_opinion = mask the other attributes of the opinions x of the neighbors of agent i in network net with a dummy value;

  • self_opinion = mask the other attributes of the opinion x of agent i with a dummy value;

  • j = find the best agent in neighbors_opinion resulting in the best value for F;

  • if neighbors_opinion of agent j optimizes F better than self_opinion then

    weight = (F(neighbors_opinion_j) + ϵ) / (F(self_opinion) + ϵ);
    x_{i,a} = x_{i,a} + μ · weight · (x_{j,a} − x_{i,a});
    x_{i,a} = mod(x_{i,a}, Boundaries);
  • end if

  • end for

  • end for

  • Until stopping condition(s) has/have been met

  • End
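The pseudo-code above translates fairly directly into the following Python sketch using numpy and networkx. All function and variable names are ours, the default parameter values are only examples, and the per-attribute masking step is interpreted here as evaluating the cost with agent i's other attributes held fixed while attribute a is swapped with the neighbour's value; this is one possible reading of the pseudo-code, not necessarily the authors' exact implementation.

    import numpy as np
    import networkx as nx

    def cbo(F, M, boundaries=(-5.0, 5.0), N=1000, mu=0.6, eps=1e-12,
            max_iter=100, graph=None, seed=None):
        """Consensus-based optimization sketch: N agents with M-dimensional opinions
        interact over a fixed graph and move toward their best-matching neighbours."""
        rng = np.random.default_rng(seed)
        lo, hi = boundaries
        if graph is None:                               # default topology: ER random graph
            graph = nx.erdos_renyi_graph(N, 0.1, seed=seed)
        X = rng.uniform(lo, hi, size=(N, M))            # random initial opinions
        for _ in range(max_iter):
            fitness = np.apply_along_axis(F, 1, X)      # cost of every agent
            X_next = X.copy()
            for i in range(N):
                nbrs = list(graph.neighbors(i))
                if not nbrs:
                    continue
                for a in range(M):
                    trial = X[i].copy()
                    def masked_cost(k):                 # swap only attribute a
                        trial[a] = X[k, a]
                        return F(trial)
                    j = min(nbrs, key=masked_cost)      # best-matching neighbour
                    fj = masked_cost(j)
                    if fitness[i] > fj:                 # neighbour is better: update
                        w = (fj + eps) / (fitness[i] + eps)
                        X_next[i, a] = X[i, a] + mu * w * (X[j, a] - X[i, a])
                        X_next[i, a] = lo + (X_next[i, a] - lo) % (hi - lo)   # wrap into Boundaries
            X = X_next
        best = np.argmin(np.apply_along_axis(F, 1, X))
        return X[best], F(X[best])

    # Example call (illustrative): x_best, f_best = cbo(lambda x: np.sum(x**2), M=5, N=100, max_iter=50)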

At the beginning of the process, the agents are initialized with random values within the range acceptable for their opinion values. As indicated by Watts and Dodds (2007), there is often "a minority of individuals who influence an exceptional number of their peers" (Watts & Dodds 2007); such influence is mainly due to their specific position in the network. The influentials hypothesis shows that cascades initiated by influential individuals differ markedly from those initiated by non-influential ones, both in size and in likelihood (Watts & Dodds 2007). This means that the initial opinions of influential agents would probably bias the result of the consensus. This phenomenon does not happen in the proposed method, since CBO is not based on the bounded confidence model: every agent selects its best-matching neighbor regardless of that neighbor's social power or degree.

Consensus of opinion values

In this section, we provide a mathematical proof that the update rule expressed in equation (2) leads the opinions to converge. To this end, let us rewrite it as follows (Hegselmann & Krause 2002):

$$x(t+1) = A(t, x(t)) \cdot x(t),$$
(3)

where x(t) = [x_1(t), x_2(t), …, x_N(t)] is the opinion vector at time t and A is a time-dependent state transition matrix that also depends on the opinion vector. We would like to verify whether, starting from initial opinion values x(0), all opinions converge to a single value, that is, lim_{t→∞} x_i(t) = x* for i = 1, 2, …, N. Let us define the diameter d of the opinions as

$$d(x) = \max_{1 \le i, j \le N} \left| x_i - x_j \right|,$$
(4)

Lemma 1 (Krause 2000): Consider a stochastic matrix A (i.e., a nonnegative matrix whose row-sums equal 1); then one has

$$\max_{1 \le i, j \le N} \left| x_i(t+1) - x_j(t+1) \right| \le d(A) \cdot \max_{1 \le i, j \le N} \left| x_i(t) - x_j(t) \right|,$$
(5)

or equivalently,

$$d(x(t+1)) = d\big(A(t, x(t)) \cdot x(t)\big) \le \left(1 - \min_{1 \le i, j \le N} \sum_{k=1}^{N} \min(a_{ik}, a_{jk})\right) d(x(t)),$$
(6)

The above lemma was proved in (Seneta 1981); here we also give an alternative, simpler proof.

Proof: Expression (6) can be written as

$$\begin{aligned}
d(Ax) &= \max_{1 \le i, j \le N} \left| A_i x - A_j x \right| = \max_{1 \le i, j \le N} \left| \sum_{k=1}^{N} \left( a_{ik} x_k - a_{jk} x_k \right) \right| \\
&= \max_{1 \le i, j \le N} \left| \sum_{k=1}^{N} \big( a_{ik} - \min(a_{ik}, a_{jk}) + \min(a_{ik}, a_{jk}) - a_{jk} \big)\, x_k \right| \\
&\le \max_{1 \le i, j \le N} \left[ \sum_{k=1}^{N} \big( a_{ik} - \min(a_{ik}, a_{jk}) \big) \max_{1 \le l \le N} x_l - \sum_{k=1}^{N} \big( a_{jk} - \min(a_{ik}, a_{jk}) \big) \min_{1 \le l \le N} x_l \right] \\
&= \left( 1 - \min_{1 \le i, j \le N} \sum_{k=1}^{N} \min(a_{ik}, a_{jk}) \right) \left( \max_{1 \le i \le N} x_i - \min_{1 \le j \le N} x_j \right) \\
&= \left( 1 - \min_{1 \le i, j \le N} \sum_{k=1}^{N} \min(a_{ik}, a_{jk}) \right) \max_{1 \le i, j \le N} \left| x_i - x_j \right| = \left( 1 - \min_{1 \le i, j \le N} \sum_{k=1}^{N} \min(a_{ik}, a_{jk}) \right) d(x).
\end{aligned}$$
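The contraction bound (6) is also easy to check numerically. The short sketch below (our own illustration, not part of the original proof) draws a random row-stochastic matrix and verifies the inequality:

    import numpy as np

    rng = np.random.default_rng(0)
    N = 6
    A = rng.random((N, N))
    A /= A.sum(axis=1, keepdims=True)          # make A row-stochastic
    x = rng.random(N)

    d = lambda v: v.max() - v.min()            # opinion diameter, eq. (4)
    # contraction coefficient 1 - min_{i,j} sum_k min(a_ik, a_jk) from eq. (6)
    coef = 1 - min(np.minimum(A[i], A[j]).sum() for i in range(N) for j in range(N))
    assert d(A @ x) <= coef * d(x) + 1e-12
    print(d(A @ x), coef * d(x))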

Our proposed weighted update rule for opinion formation, as expressed by equation (2), can be rewritten as

$$x_i(t+1) = x_i(t) + \mu w_j \left( x_j(t) - x_i(t) \right) = \left( 1 - \mu w_j \right) x_i(t) + \mu w_j x_j(t), \qquad \text{i.e., } x(t+1) = A \cdot x(t),$$
(7)

It is clear that in the above representation, matrix A is a stochastic matrix (since 0 ≤ μw_j ≤ 1, each row of A is nonnegative and sums to one).
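For a single update, row i of A contains 1 − μw_j in column i, μw_j in the column of the best-matching neighbour j, and zeros elsewhere. A minimal construction (names and example values are ours) makes this explicit:

    import numpy as np

    def update_matrix(best_neighbor, weights, mu=0.6, N=5):
        """Row i encodes x_i(t+1) = (1 - mu*w) x_i(t) + mu*w x_j(t) with
        j = best_neighbor[i] and w = weights[i]; None means no update (identity row)."""
        A = np.eye(N)
        for i, j in enumerate(best_neighbor):
            if j is not None:
                A[i, i] = 1 - mu * weights[i]
                A[i, j] = mu * weights[i]
        return A

    A = update_matrix(best_neighbor=[1, 2, None, 2, 0],
                      weights=[0.8, 0.5, 0.0, 0.3, 0.9])
    assert np.allclose(A.sum(axis=1), 1) and (A >= 0).all()   # row-stochastic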

Theorem 1: The product of two stochastic matrices is a stochastic matrix.

Proof: Let A and B be two stochastic matrices and let C = A·B be their product. Each entry of C is a sum of products of entries of A and B. Since A and B are stochastic, their entries are non-negative and, thus, the entries of C are also non-negative. The row-sums of C (C_i for i = 1, 2, …, N) can be obtained as

$$C_i = \sum_{j=1}^{N} C_{ij} = \sum_{j=1}^{N} \sum_{k=1}^{N} A_{ik} B_{kj} = \sum_{k=1}^{N} A_{ik} \sum_{j=1}^{N} B_{kj} = \sum_{k=1}^{N} A_{ik} \cdot 1 = 1,$$
(8)

Therefore, C is a matrix with nonnegative entries and row-sums equal to 1, and thus it is a stochastic matrix.
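A quick numerical sanity check of Theorem 1 (our own illustration):

    import numpy as np

    rng = np.random.default_rng(1)
    A = rng.random((4, 4)); A /= A.sum(axis=1, keepdims=True)   # stochastic A
    B = rng.random((4, 4)); B /= B.sum(axis=1, keepdims=True)   # stochastic B
    C = A @ B
    # entries of the product stay nonnegative and its row-sums remain 1
    assert (C >= 0).all() and np.allclose(C.sum(axis=1), 1)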

Let t1 and t2 represent time steps (t2 < t1) and let B(t1, t2) = A(t1 − 1)A(t1 − 2)⋯A(t2), which models the accumulated weights between times t2 and t1 (Hegselmann & Krause 2002). It can easily be shown that 1 − r ≤ e^(−r) for any r ≥ 0 (the exponential is convex and 1 − r is its tangent line at r = 0).

Theorem 2 (Convergence Theorem): Consider the opinion update rule (2) and suppose B(t1, t2) = [b_ij(t1, t2)] is the stochastic matrix of accumulated weights, where b_ij is an element of B. Let the sequence 0 = t_0 < t_1 < t_2 < … and the values δ_1, δ_2, …, δ_m, … be such that 0 ≤ δ_t ≤ 1 and Σ_{t=0}^{∞} δ_t = ∞. If Σ_{k=1}^{N} min(b_ik(t_m, t_{m−1}), b_jk(t_m, t_{m−1})) ≥ δ_m for all m ≥ 1 and 1 ≤ i, j ≤ N, then for any initial condition a consensus is reached, i.e., lim_{t→∞} x_i(t) = x* for i = 1, …, N.

Proof: see the Appendix section.

Optimization tasks

We applied our optimization procedure to a number of benchmark problems and compared its performance with some well-known methods including genetic algorithms (GA), particle swarm optimization (PSO), differential evolution (DE) and the distributed dual averaging (DDA) algorithm. GA has been successfully applied to many optimization problems and was used as a baseline paradigm in this work. It starts with a population of random solutions – denoted chromosomes. Therefore, the first step is to encode the initial solutions from phenotype to genotype. The objective function is then used for ranking the chromosomes. GA works iteratively and, in each step, applies operators such as parent selection, recombination (crossover) and mutation (Holland 1975). A number of parameters must be tuned for a GA to work well, including the crossover probability, mutation probability, population model and parent selection model. The crossover probability P_c indicates the probability of creating a new chromosome from two parents. The mutation probability P_m indicates the portion of the population that undergoes mutation in each iteration of the algorithm.

DE is one of the best-performing evolutionary algorithms and is frequently used for optimization tasks; it often reaches the optimal solution in fewer steps than other optimization algorithms. DE takes the difference of a randomly selected pair of chromosomes – reflecting the diversity of the population – and adds it to another chromosome in the population. Then, it uses crossover operators such as binomial and exponential crossover to combine the chromosomes (Storn & Price 1997). The parameters of the algorithm are as follows: β is a real value that scales the difference between the two selected chromosomes and controls the amplification of differential variation; P_r indicates the probability of using the mutant (trial) vector; N_v is an integer indicating the number of chromosome pairs used in calculating the mutant vector.
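For reference, one generation of the DE/rand/1 scheme with binomial crossover described above can be sketched as follows; this is a generic textbook-style implementation with parameter names matching the text, not the exact code used in our experiments.

    import numpy as np

    def de_generation(pop, F_obj, beta=0.9, pr=0.9, rng=np.random.default_rng()):
        """One DE generation: differential mutation, binomial crossover, greedy selection."""
        N, D = pop.shape
        new_pop = pop.copy()
        for i in range(N):
            a, b, c = rng.choice([k for k in range(N) if k != i], 3, replace=False)
            mutant = pop[a] + beta * (pop[b] - pop[c])     # scaled difference vector
            cross = rng.random(D) < pr
            cross[rng.integers(D)] = True                  # keep at least one mutant gene
            trial = np.where(cross, mutant, pop[i])
            if F_obj(trial) <= F_obj(pop[i]):              # greedy replacement
                new_pop[i] = trial
        return new_pop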

We also compared our algorithm with PSO, a well-known optimization algorithm based on swarm intelligence (Kennedy & Eberhart 1995). In this algorithm, there is a population of agents, called particles (the swarm), interacting with other agents – much like in our algorithm. PSO has two components: a cognitive and a social component. The cognitive component is the experience of each particle, while the social component is the experience of the community the agents belong to. PSO has shown a high degree of flexibility and acceptable speed in solving many optimization problems. Here we used one of the best extensions of PSO, namely PSO with inertia weights (Eberhart & Shi 2000). The inertia weight plays an important role in balancing exploration and exploitation and in making the algorithm more stable. PSO has a number of control parameters; let us denote the parameters controlling the cognitive and social power of the algorithm by c1 and c2, respectively.
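One inertia-weight PSO step, as a hedged sketch; the inertia value w = 0.7 is an assumed example, while c1 and c2 correspond to the cognitive and social coefficients discussed above.

    import numpy as np

    def pso_step(x, v, pbest, gbest, F_obj, w=0.7, c1=0.1, c2=0.9,
                 rng=np.random.default_rng()):
        """Inertia-weight PSO: blend previous velocity with attraction toward
        personal bests (cognitive) and the global best (social), then move."""
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = x + v
        cost = np.apply_along_axis(F_obj, 1, x)
        improved = cost < np.apply_along_axis(F_obj, 1, pbest)
        pbest = np.where(improved[:, None], x, pbest)                      # update personal bests
        gbest = pbest[np.argmin(np.apply_along_axis(F_obj, 1, pbest))]     # update global best
        return x, v, pbest, gbest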

The distributed dual averaging (DDA) algorithm – inspired by Nesterov's dual averaging algorithm (Xiao 2010; Nesterov 2009) – has been proposed for optimizing convex functions (Duchi & Wainwright 2012). Similar to the CBO algorithm, DDA is a network-based optimization method in which each node computes a sub-differential of a local function while receiving information from its neighboring nodes. A weight matrix models the weighting process of the method. In each iteration, each node updates its solution vector by combining the stochastic-weight-matrix average of its neighbors' parameters with the sub-gradient of the objective function. DDA is computationally efficient and its convergence time depends on properties of the objective function and the underlying network topology. Expander graphs have been proposed as an efficient connection topology for the DDA algorithm (Duchi & Wainwright 2012). The alternating direction method of multipliers (ADMM) is another optimization method, which uses properties of dual decomposition and augmented Lagrangian methods simultaneously (Boyd & Vandenberghe 2004). The Lagrange dual function is obtained via the convex conjugate and the dual problem is solved using gradient ascent.
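The DDA update described above can be sketched for the unconstrained case with the Euclidean proximal function ψ(x) = ½‖x‖², for which the primal step reduces to x = −α(t)·z. Here P is a doubly stochastic weight matrix respecting the network, grad_f returns a (sub)gradient of node i's local function, and α(t) is a step-size sequence; all names and the step-size choice are our assumptions.

    import numpy as np

    def dda_step(Z, X, P, grad_f, t, alpha=lambda t: 1.0 / np.sqrt(t + 1)):
        """One distributed dual averaging iteration (unconstrained, Euclidean prox):
        average the dual variables over neighbours, add local subgradients, map to primal."""
        G = np.array([grad_f(i, X[i]) for i in range(X.shape[0])])
        Z = P @ Z + G                  # dual update with consensus weights P
        X = -alpha(t) * Z              # proximal/primal step for psi(x) = 0.5*||x||^2
        return Z, X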

Benchmark problems

We evaluated the performance of the proposed optimization strategy on a number of benchmark problems. As the first problem, we considered the following cost function

$$F_1(x) = -e^{-2 \ln 2 \left( \frac{x - 0.1}{0.8} \right)^2} \sin^6(5 \pi x),$$
(9)

which is a function with many local optima. The minimum value of −1 is achieved at the optimal point x* = 0.1.

We also used a number of benchmark functions that have been introduced for large-scale optimization competitions (Tang et al. 2009). The first function of this type is the Shifted Rastrigin's function, defined as

$$F_2(x) = F_{\text{rastrigin}}(x) = \sum_{i=1}^{D} \left( x_i^2 - 10 \cos(2 \pi x_i) + 10 \right),$$
(10)

which is a multimodal, shifted, separable and scalable function. The other function of this type considered here is Shifted Ackley’s function, which is defined as

$$F_3(x) = F_{\text{ackley}}(x) = -20 \exp\left( -0.2 \sqrt{\frac{1}{D} \sum_{i=1}^{D} x_i^2} \right) - \exp\left( \frac{1}{D} \sum_{i=1}^{D} \cos(2 \pi x_i) \right) + 20 + e.$$
(11)

We also considered Shifted Schwefel’s function, defined as

$$F_4(x) = F_{\text{schwefel}}(x) = \sum_{i=1}^{D} \left( \sum_{j=1}^{i} x_j \right)^2,$$
(12)

and Shifted Elliptic function, defined as

$$F_5(x) = F_{\text{elliptic}}(x) = \sum_{i=1}^{D} \left( 10^6 \right)^{\frac{i-1}{D-1}} x_i^2.$$
(13)

In all of the above functions except F1, x ∈ [−5, 5]^D. Furthermore, the global optimum – F2* = F3* = F4* = F5* = 0 – is achieved at a point x*, which is a random, different vector of real numbers in each run.
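For reproducibility, the benchmark functions (9)-(13) can be coded directly from their definitions. In this sketch the shift is implemented by subtracting a random optimum vector x_star, consistent with the statement that the optimum location changes from run to run; that interpretation and all names are ours.

    import numpy as np

    def f1(x):                                     # F1, eq. (9); minimum -1 at x = 0.1
        return -np.exp(-2 * np.log(2) * ((x - 0.1) / 0.8) ** 2) * np.sin(5 * np.pi * x) ** 6

    def rastrigin(x, x_star):                      # F2, eq. (10)
        z = x - x_star
        return np.sum(z ** 2 - 10 * np.cos(2 * np.pi * z) + 10)

    def ackley(x, x_star):                         # F3, eq. (11)
        z, D = x - x_star, x.size
        return (-20 * np.exp(-0.2 * np.sqrt(np.sum(z ** 2) / D))
                - np.exp(np.sum(np.cos(2 * np.pi * z)) / D) + 20 + np.e)

    def schwefel_1_2(x, x_star):                   # F4, eq. (12)
        return np.sum(np.cumsum(x - x_star) ** 2)

    def elliptic(x, x_star):                       # F5, eq. (13)
        z, D = x - x_star, x.size
        return np.sum((1e6 ** (np.arange(D) / (D - 1))) * z ** 2)

    x_star = np.random.uniform(-5, 5, size=5)      # random optimum for this run
    assert abs(rastrigin(x_star, x_star)) < 1e-9   # all shifted functions vanish at x_star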

Network structures

One of the key ingredients of the proposed optimization algorithm is the graph structure used for connecting the agents, which is kept unchanged during the optimization process; in other words, the set of neighbours of each agent does not change. In this work, we used a number of well-known graph structures including Erdős-Rényi random, Watts-Strogatz small-world and Barabasi-Albert scale-free networks.

We used the model introduced by Erdős and Rényi for the construction of pure random networks (Erdős & Rényi 1960). In this model, N nodes are considered and each pair is connected with probability P. Research has shown that real networks are neither random nor regular but somewhere in between; they are indeed small-world. In order to construct small-world networks, we used the original model proposed by Watts and Strogatz, as follows (Watts & Strogatz 1998). Starting with a regular ring graph in which each node is connected to its k nearest neighbours, each edge is rewired with probability P, provided that self-loops and duplicate edges are prohibited. They showed that for intermediate values of the rewiring probability P, one obtains a network with a low characteristic path length, comparable to that of random networks, and a clustering coefficient (i.e., transitivity) much higher than that of corresponding random networks.

The Erdős-Rényi and Watts-Strogatz models result in networks with almost homogeneous degree distributions. However, many real networks have been shown to have heterogeneous degree distributions; there are many low-degree nodes, while a few hub nodes have high degrees (Albert & Barabasi 2002; Barabasi & Albert 1999; Barabási 2009). Barabasi and Albert proposed a preferential attachment growth model for constructing such networks, which is used in this work (Barabasi & Albert 1999). The model starts with k + 1 all-to-all connected nodes. In each step, a new node with k links is added to the network. This node connects to the existing nodes with probability proportional to their degree, i.e., the higher the degree of an existing node, the higher the probability of making a connection with the new node. The model results in scale-free networks whose degree distribution obeys a power law (Barabasi & Albert 1999).
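The three interaction topologies can be generated with standard networkx constructors; the parameter values below match those reported in the Results section (N = 1000, k = 4, P = 0.1), with the BA model's k links per new node mapped to networkx's m parameter, and the seed is only illustrative.

    import networkx as nx

    N = 1000
    er = nx.erdos_renyi_graph(N, p=0.1, seed=42)           # Erdős-Rényi random graph
    ws = nx.watts_strogatz_graph(N, k=4, p=0.1, seed=42)   # Watts-Strogatz small-world
    ba = nx.barabasi_albert_graph(N, m=4, seed=42)         # Barabási-Albert scale-free

    for name, g in [("ER", er), ("WS", ws), ("BA", ba)]:
        print(name, g.number_of_nodes(), g.number_of_edges())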

Results and discussion

In this section, we report the performance of the proposed Consensus-Based Optimization (CBO) method as compared to other tools including DE, GA and PSO. In order to assess the performance of the methods, they were applied to the five benchmark functions expressed by equations (9)-(13). Function F1 is a simple function with many local optima and a minimum of F1(x*) = −1 at x* = 0.1 (Figure 1a). All classic optimization tools can solve such a simple task. We investigated whether the proposed CBO algorithm could find the optimum of this function. Figure 1b shows the performance of CBO when Barabasi-Albert (BA), Watts-Strogatz (WS) and Erdős-Rényi (ER) networks were used as the interaction topology. We set the network parameters as N = 1000 and k = 4 for the BA model; N = 1000, k = 4 and P = 0.1 for the WS model; and N = 1000 and P = 0.1 for the ER model. As can be seen, CBO is efficient and found the optimum value with all topologies. For this simple problem, GA gave the best performance, followed by CBO. However, in order to better compare the methods, we applied them to more difficult objective functions.

Figure 1

a) Function F_1. b) The optimum value as a function of iteration steps using CBO based on Barabasi-Albert (BA), Watts-Strogatz (WS) and Erdős-Rényi (ER) models, DDA based on Cycle (C) and Expander (E) network models, and the GA, DE and PSO algorithms. The networks have N = 1000 nodes and the results are averages over 50 runs.

In order to assess the ability of CBO to solve more complex optimization tasks, we considered connection graphs with the structural properties described above. Figures 2 and 3 show the influence of the parameters on the performance of the CBO algorithm in optimizing the objective function F2. μ controls the speed of convergence; we found μ = 0.6 to be optimal for this optimization task (Figure 2). Large values of μ result in zigzag convergence, which is reflected in the large variance observed for such cases. Like other optimization algorithms, CBO is sensitive to the number of agents (i.e., the network size N): as the number of agents increases, the performance improves (Figure 3). However, increasing the size of the network increases the computational complexity of the algorithm. For our simulations, we fixed the network size at N = 1000, since for larger networks the improvement is not significant. We also set the number of agents in the other optimization algorithms to N = 1000, which makes their results comparable. For the parameters of GA, we set P_c = 0.85 and P_m = 0.05. Furthermore, we used a steady-state population model and a fitness-proportionate parent selection policy. The parameters of the DE algorithm were set as β = 0.9, P_r = 0.9 and N_v = 1, with binomial crossover and random parent selection. We set c1 = 0.1 and c2 = 0.9 for PSO, resulting in a balanced framework for both exploration and exploitation.

Figure 2

Effect of μ on the performance of CBO; the plot shows the mean and standard error of the objective function F_2 at the end of the convergence (i.e., the last iteration) as a function of μ. The connection graph is Barabasi-Albert (BA) with N = 1000 and data show averages over 50 runs.

Figure 3

Effect of network size on the performance of CBO; the plot shows the mean and standard error of the objective function F_2 at the end of the convergence (i.e., the last iteration) as a function of the size of the Barabasi-Albert (BA) network. Data show averages over 50 runs.

Figures 4, 5, 6 and 7 show the performance of the CBO algorithm using various graph topologies as compared to the DE, GA, PSO and DDA methods for optimizing F2-F5, respectively. The results were averaged over 50 runs. Note that some algorithms could not find the optimum solution within the considered number of iterations in some runs, resulting in a drift from the optimum value in their reported profile (which shows the average over 50 runs). The majority of the algorithms could find the optimum; however, they showed significantly different performance. CBO with BA and ER connection structures among the agents showed the best performance (i.e., finding the optimum in the fewest iteration steps) for F2 (Figure 4). They could find the optimum solution in about 16 steps, far fewer than GA and DE needed to converge. PSO and CBO with the WS topology could not find the solution in some runs, resulting in a drift in their performance profiles. Except for F2, DDA based on the Expander network worked better than, or as well as, the one based on the Cycle network. Although DDA has been shown to be a well-performing optimizer for convex functions, it did not perform well on the objective functions considered in this work. This is due to the fact that these functions are highly non-convex, and a method specially designed for convex functions might not work properly on them.

Figure 4

The objective function F 2 as a function of iterations in different optimization algorithms. The optimization methods are distributed dual averaging (DDA) with two network models Cycle (C) and Expander (E), genetic algorithm (GA), differential evolution (DE), particle swarm optimization (PSO) and the proposed consensus-based optimization (CBO) with three schemes for the connection topology between the agents: Barabasi-Albert (BA), Watts-Strogatz (WS), and Erdős-Rényi (ER). All methods have a population of 1000 nodes, the desired function has 5 dimensions and the optimum output is 0. Data show averages over 50 runs.

Figure 5

Objective function F_3 as a function of iteration steps, averaged over 50 runs. Other designations are as in Figure 4.

Figure 6

Objective function F_4 as a function of iteration steps, averaged over 50 runs. Other designations are as in Figure 4.

Figure 7

Objective function F_5 as a function of iteration steps, averaged over 50 runs. Other designations are as in Figure 4.

All algorithms could find the optimum solution for objective function F3 as expressed by equation (11), with the results shown in Figure 5. However, the CBO variants showed much faster convergence than the others. In terms of the network topology in CBO, the BA and ER topologies worked better than the case where the WS model was used for constructing the interaction topology. CBO was also better than GA, DE, PSO and DDA in optimizing the objective functions F4 (equation (12)) and F5 (equation (13)); the results are shown in Figures 6 and 7, respectively. While the ER topology resulted in slightly faster convergence than BA and WS for F4, their performance was almost the same for F5. It is indeed expected that consensus on random networks is the fastest as compared to BA and WS networks, mainly because random networks often have a shorter average path length than the other models.

We also compared the computational complexity of these methods by measuring their convergence times for the considered objective functions (Table 1). CBO showed the best performance (i.e., the smallest number of iterations until convergence), followed by PSO and GA. DE and DDA were the slowest of these optimization algorithms.

Table 1 Iteration counts, averaged over 50 runs, until convergence for the optimization methods: distributed dual averaging (DDA), genetic algorithm (GA), differential evolution (DE), particle swarm optimization (PSO) and the proposed consensus-based optimization (CBO)

In sum, our experiments showed that performing an optimization task with a simple consensus network provides the solution with better performance than a number of classic optimization tools including GA, DE, PSO and DDA. Furthermore, we found pure random topologies constructed by the Erdős-Rényi model more effective than small-world topologies constructed by the Watts-Strogatz model and scale-free topologies obtained through the Barabasi-Albert preferential attachment model.

Conclusion

In this paper we introduced a novel application of the consensus phenomenon in complex networks. Consensus in networked structures has many applications, ranging from engineering (e.g., sensor networks) to sociology (e.g., opinion formation in social networks). In this manuscript, we used network consensus to solve optimization tasks. We considered a number of agents interacting over a networked structure with a random, small-world or scale-free topology. Each agent was associated with an opinion value that could change through collaboration with neighboring agents. The agents worked collectively with their friends (defined by the considered network topology and kept unchanged during the optimization process) to solve an optimization task. To this end, each agent adapted its opinion value based on its best-matching neighbor, i.e., the neighbor with the best value of the objective function. The proposed consensus-based optimization (CBO) method was applied to a number of benchmark problems and its performance was compared with that of a number of classic optimization tools such as genetic algorithms, differential evolution and particle swarm optimization. Our experiments showed that CBO generally found the optimal solution faster and more reliably than these methods. We also found the Erdős-Rényi random topology better than the Watts-Strogatz small-world and Barabasi-Albert scale-free topologies, in that CBO solved the optimization tasks faster when the ER graph was used as the connection graph.

Appendix

Proof of the Convergence Theorem: Based on the assumption made in the theorem

$$\min_{1 \le i, j \le N} \sum_{k=1}^{N} \min\big( b_{ik}(t_m, t_{m-1}),\, b_{jk}(t_m, t_{m-1}) \big) \ge \delta_m, \qquad m \ge 1,$$
(14)

We have

$$\left( 1 - \min_{1 \le i, j \le N} \sum_{k=1}^{N} \min\big( b_{ik}(t_m, t_{m-1}),\, b_{jk}(t_m, t_{m-1}) \big) \right) d(x(m)) \le (1 - \delta_m)\, d(x(m)).$$
(15)

Having (15) and the results of Lemma 1, one concludes that

$$d(x(m+1)) = d\big( B \cdot x(m) \big) \le \left( 1 - \min_{1 \le i, j \le N} \sum_{k=1}^{N} \min\big( b_{ik}(t_m, t_{m-1}),\, b_{jk}(t_m, t_{m-1}) \big) \right) d(x(m)) \le (1 - \delta_m)\, d(x(m)),$$
(16)

That is

$$d\big( B \cdot x(m) \big) \le (1 - \delta_m)\, d(x(m)).$$
(17)

Having (17) and using the inequality 1 − r ≤ e^(−r), one concludes

$$(1 - \delta_m)\, d(x(m)) \le e^{-\delta_m}\, d(x(m)).$$
(18)

We have

$$e^{-\delta_m}\, d(x(m)) \le e^{-\delta_m} e^{-\delta_{m-1}}\, d(x(m-1)) \le \cdots \le e^{-\delta_m} e^{-\delta_{m-1}} \cdots e^{-\delta_0}\, d(x(0)) = e^{-\sum_{i=0}^{m} \delta_i}\, d(x(0)).$$
(19)

Using (17)-(19) and the fact that Σ_{t=0}^{∞} δ_t = ∞, we conclude that

$$\lim_{m \to \infty} d(x(m+1)) = \lim_{m \to \infty} d\big( B \cdot x(m) \big) \le \lim_{m \to \infty} e^{-\sum_{i=0}^{m} \delta_i}\, d(x(0)) = 0.$$
(20)

Or, in other words

$$\lim_{t \to \infty} d(x(t)) = 0.$$
(21)

Since B is a stochastic matrix and x(t + 1) = B(x(t),t).x(t), we have

$$\min_{j} x_j(t) \le x_i(t+1) \le \max_{j} x_j(t), \qquad i = 1, \ldots, N,$$
(22)

which becomes the following as time approaches infinity:

$$\lim_{t \to \infty} \min_{j} x_j(t) \le \lim_{t \to \infty} x_i(t+1) \le \lim_{t \to \infty} \max_{j} x_j(t).$$
(23)

Let us denote the limit of the lower bound of the opinions by p and that of their upper bound by q. Then, the above expression becomes

$$p \le \lim_{t \to \infty} x_i(t+1) \le q.$$
(24)

Now, using equations (21) and (4) we have

$$\lim_{t \to \infty} \max_{i} x_i(t) - \lim_{t \to \infty} \min_{i} x_i(t) = q - p = 0 \;\Rightarrow\; p = q.$$
(25)

Applying the Squeeze Theorem (Steinhaus 1938), we have

$$\lim_{t \to \infty} x_i(t+1) = \lim_{t \to \infty} \big[ B(x(t), t) \cdot x(t) \big]_i = x^*, \qquad i = 1, \ldots, N.$$
(26)

And this completes the proof.

References

  • Ajdari Rad A, Jalili M, Hasler M: Efficient rewirings for enhancing synchronizability of dynamical networks. Chaos 2008, 18: 037104. 10.1063/1.2967738

  • Albert R, Jeong H, Barabasi A-L: Diameter of the World-Wide Web. Nature 1999, 401: 130–131. 10.1038/43601

  • Albert R, Barabasi A-L: Statistical mechanics of complex networks. Rev Modern Physics 2002, 74: 47–97. 10.1103/RevModPhys.74.47

  • Amblard F, Deffuant G: The role of network topology on extremism propagation with the relative agreement opinion dynamics. Physica A 2004, 343: 725–738.

  • Barabási AL: Scale-free networks: a decade and beyond. Science 2009, 325: 412–413. 10.1126/science.1173299

  • Barabasi A-L, Albert R: Emergence of scaling in random networks. Science 1999, 286: 509–512.

  • Barbarossa S, Scutari G: Decentralized maximum likelihood estimation for sensor networks composed of nonlinearly coupled dynamical systems. IEEE Trans Signal Process 2007, 55: 3456–3470.

  • Barry CMN, Wentzel KR: Friend influence on prosocial behavior: The role of motivational factors and friendship characteristics. Dev Psychol 2006, 42: 153–162.

  • Belykh I, Hasler M, Laurent M, Nijmeijer H: Synchronization and graph topology. Int J Bifurcation Chaos 2005, 15: 3423–3433. 10.1142/S0218127405014143

  • Berndt TJ, Murphy LM: Influences of friends and friendships: Myths, truths, and research recommendations. Adv Child Dev Behav 2003, 30: 275–310.

  • Boccaletti S, Latora V, Moreno Y, Chavez M, Hwang DU: Complex networks: structure and dynamics. Physics Reports 2006, 424: 175–308. 10.1016/j.physrep.2005.10.009

  • Boyd S, Vandenberghe L: Convex optimization. Cambridge University Press; 2004.

  • Brunetti S, Cordasco G, Gargano L, Lodi E, Quattrociocchi W: “Minimum Weight Dynamo and Fast Opinion Spreading”. 38th international conference on Graph-Theoretic Concepts in Computer Science 2012, 249–261.

  • Carletti T, Fanelli D, Grolli S, Guarino A: How to make an efficient propaganda. Europhysics Letters 2006, 74: 222–228. 10.1209/epl/i2005-10536-9

  • Deffuant G, Neau D, Amblard F, Weisbuch G: Mixing beliefs among interacting agents. Adv Comp Sys 2001, 3: 87–98.

  • Duchi JC, Agarwal A, Wainwright MJ: Dual averaging for distributed optimization: convergence analysis and network scaling. IEEE Trans Automatic Control 2012, 57: 592–606. 10.1109/TAC.2011.2161027

  • Eberhart RC, Shi Y: Comparing inertia weights and constriction factors in particle swarm optimization. In Proceedings of the Congress on Evolutionary Computation. 2000, 1: 84–88.

  • Erdős P, Rényi A: On the evolution of random graphs. Pub Mathematic Ins Hungarian Acad Sci 1960, 5: 17–61.

  • Fortunato S, Latora V, Pluchino A, Rapisarda A: “Vector opinion dynamics in a bounded confidence consensus model”. Int J Modern Physics C 2005, 16: 1535–1553. 10.1142/S0129183105008126

  • Gandica Y, Del Castillo-Mussot M, Vázquez GJ, Rojas S: Continuous opinion model in small-world directed networks. Physica A 2010, 389: 5864–5870. 10.1016/j.physa.2010.08.025

  • Girvan M, Newman MEJ: Community structure in social and biological networks. Proc Nat Acad Sci USA 2002, 99: 7821–7826. 10.1073/pnas.122653799

  • Guo L, Cai X: Continuous opinion dynamics in complex networks. Com Computational Physics 2009, 5: 1045–1053.

  • Hegselmann R, Krause U: Opinion dynamics and bounded confidence models, analysis, and simulation. J Art Soc Social Simulation 2002, 5(3).

  • Holland JH: Adaptation in Natural and Artificial Systems. Ann Arbor, MI: The University of Michigan Press; 1975.

  • Holyst JA, Kacperski K, Schweitzer F: Social impact models of opinion dynamics. Ann Rev Computatio Physics 2001, 20: 531–535.

  • Jalili M: “Effects of leaders and social power on opinion formation in complex networks”. Simulation - Transactions of the Society for Modeling and Simulation International 2013a. 10.1177/0037549712462621

  • Jalili M: Social power and opinion formation in complex networks. Physica A 2013b, 392: 959–966. 10.1016/j.physa.2012.10.013

  • Kennedy J, Eberhart R: “Particle swarm optimization”. In Proceedings IEEE International Conference Neural Networks 1995, 4: 1942–1948.

  • Kozma B, Barrat A: Consensus formation on adaptive networks. Physical Review E 2008, 77: 016102.

  • Krapivsky PL, Redner S: Dynamics of majority rule in two-state interacting spin systems. Phys Rev Let 2003, 90: 238701.

  • Krause U: A discrete nonlinear and non-autonomous model of consensus formation. Com Diff Eq 2000, 2000: 227–236.

  • Kurmyshev E, Juárez HA, González-Silva RA: Dynamics of bounded confidence opinion in heterogeneous social networks: concord against partial antagonism. Physica A 2011, 390: 2945–2955. 10.1016/j.physa.2011.03.037

  • Leskovec J, Adamic LA, Huberman BA: The dynamics of viral marketing. ACM Conference on Electronic Commerce. Ann Arbor: Michigan, USA; 2006.

  • Lewenstein M, Nowak A, Latane B: Statistical mechanics of social impact. Physical Rev A 1992, 45: 703–716.

  • Lorenz J: Continuous opinion dynamics under bounded confidence: a survey. Int J Modern Physics C 2007, 18: 1819–1838. 10.1142/S0129183107011789

  • Lu JH, Yu XH, Chen GR, Cheng DZ: Characterizing the synchronizability of small-world dynamical networks. IEEE Tran Cir Syst I: Fundamental Theory Ad App 2004, 51: 787–796. 10.1109/TCSI.2004.823672

  • Milo R, Shen-Orr S, Itzkovitz S, Kashtan N, Chklovskii D, Alon U: Network motifs: simple building blocks of complex networks. Science 2002, 298: 824–827. 10.1126/science.298.5594.824

  • Nardini C, Kozma B, Barrat A: Who’s talking first? consensus or lack thereof in coevolving opinion formation models. Phys Rev Let 2008, 100: 158701.

  • Nesterov Y: Primal-dual subgradient methods for convex problems. Mathematical Programming 2009, 120: 221–259. 10.1007/s10107-007-0149-x

  • Newman MEJ: The structure and function of complex networks. SIAM Review 2003, 45: 167–256. 10.1137/S003614450342480

  • Newman MEJ, Park J: Why social networks are different from other types of networks. Physical Review E 2003, 68: 036122.

  • Newman MEJ, Barabasi A-L, Watts DJ: The structure and dynamics of networks. Princeton University Press; 2006.

  • Perc M: Evolution of cooperation on scale-free networks subject to error and attack. New J Physics 2009, 11: 033027. 10.1088/1367-2630/11/3/033027

  • Perc M, Szolnoki A: Coevolutionary games-A mini review. Biosystems 2010, 99: 109–125. 10.1016/j.biosystems.2009.10.003

  • Scutari G, Barbarossa S, Pescosolido L: Distributed decision through self-synchronizing sensor networks in the presence of propagation delays and asymmetric channels. IEEE Trans Signal Process 2008, 56: 1667–1684.

  • Seneta E: Non-negative matrices and Markov chains. 2nd edition. Berlin: Springer Verlag, New York; 1981.

  • Steinhaus H: A note on the ham sandwich theorem. Mathesis Polska 1938, 9: 26–28.

  • Storn R, Price K: Differential evolution - a simple and efficient heuristic for global optimization over continuous spaces. J Global Optimization 1997, 11: 341–359. 10.1023/A:1008202821328

  • Strogatz SH: Exploring complex networks. Nature 2001, 410: 268–276. 10.1038/35065725

  • Tang K, Li X, Suganthan PN, Yang Z, Weise T: Benchmark functions for the CEC'2010 special session and competition on large-scale global optimization. Technical report, University of Science and Technology of China; 2009: 1–23.

  • Urbig D, Lorenz J, Herzberg H: Opinion dynamics: the effect of the number of peers met at once. J Artific Soc Social Simulation 2008, 11: 4.

  • Watts DJ, Dodds PS: Influentials, networks, and public opinion formation. J Consumer Res 2007, 34: 441–458. 10.1086/518527

  • Watts DJ, Strogatz SH: Collective dynamics of ‘small-world’ networks. Nature 1998, 393: 440–442. 10.1038/30918

  • Weisbuch G: Bounded confidence and social networks. Eur Physic J B 2004, 38: 339–343. 10.1140/epjb/e2004-00126-9

  • Xiao L: Dual averaging methods for regularized stochastic learning and online optimization. J Machine Lear Res 2010, 11: 2543–2596.

  • Yang H-X, Wu Z-X, Zhou C, Zhou T, Wang B-H: Effects of social diversity on the emergence of global consensus in opinion dynamics. Physical Review E 2009, 80: 046108.


Acknowledgment

The authors would like to thank Dr. Hamid Beigy for his insightful comments and discussion on the topic.

Author information

Corresponding author

Correspondence to Mahdi Jalili.

Additional information

Competing interests

The authors declare that they have no competing interests.

Authors’ contributions

OAS and MJ designed the research and developed the methods. OAS did the numerical simulations. OAS and MJ wrote the manuscript. Both authors read and approved the final manuscript.

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 2.0 International License (https://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

About this article

Cite this article

Askari-Sichani, O., Jalili, M. Large-scale global optimization through consensus of opinions over complex networks. Complex Adapt Syst Model 1, 11 (2013). https://doi.org/10.1186/2194-3206-1-11

  • Received:

  • Accepted:

  • Published:

  • DOI: https://doi.org/10.1186/2194-3206-1-11

Keywords