Large-scale global optimization through consensus of opinions over complex networks
© Sichani and Jalili; licensee Springer. 2013
Received: 31 October 2012
Accepted: 28 March 2013
Published: 24 April 2013
Large-scale optimization tasks have many applications in science and engineering. There are many algorithms to perform such optimization tasks. In this manuscript, we aim at using consensus in multi-agent systems as a tool for solving large-scale optimization tasks.
The model is based on consensus of opinions among agents interacting over a complex networked structure. For each optimization task, a number of agents are considered, each with an opinion value. These agents interact over a networked structure and update their opinions based on their best-matching neighbor in the network. A neighbor with the best value of the objective function (of the optimization task) is referred to as the best-matching neighbor for an agent. We use structures such as pure random, small-world and scale-free networks as interaction graphs. The optimization algorithm is applied to a number of benchmark problems and its performance is compared with a number of classic methods including genetic algorithms, differential evolution and particle swarm optimization.
We show that the agents could solve various large-scale optimization tasks through collaborating with each other and getting into consensus in their opinions. Furthermore, we find pure random topology better than small-world and scale-free topologies in that it leads to faster convergence to the optimal solution. Our experiments show that the proposed consensus-based optimization method outperforms the classic optimization algorithms.
Consensus in multi-agent systems can be efficiently used for large-scale optimization problems. The connectivity structure of the consensus network influences convergence to the optimal solution, with random structures showing better performance than heterogeneous networks.
AMS subject classification
15A04, 54A20, 60J20, 92D25
Networks are everywhere and we confront many networks in our daily life; they are present practically wherever information is transmitted or exchanged. Networks such as the Internet, the World Wide Web, and engineering, social, biological and economical networks have been heavily studied in the last decade, and many applications have been developed based on network science (Albert & Barabasi 2002; Barabasi & Albert 1999; Boccaletti et al. 2006; Newman 2003; Newman & Park 2003; Strogatz 2001). Progress in network science accelerated after the seminal works of Watts & Strogatz (1998) on the collective behavior of small-world networks and Barabasi & Albert (1999) on scale-free graphs. Watts and Strogatz discovered that many real-world networks have the small-world property, in that their characteristic path length scales logarithmically with network size (Watts & Strogatz 1998) – a property also observed in random networks. At the same time, these networks show high levels of transitivity (clustering coefficient) – much higher than corresponding random networks (Watts & Strogatz 1998). Furthermore, many real networks from different disciplines were shown to have a power-law degree distribution (Barabasi & Albert 1999); the probability of having a node with degree k scales as k^(−γ), with γ typically in the range 2–3. Real networks have also been shown to have more complex properties such as motifs (Milo et al. 2002) and community structure (Girvan & Newman 2002). These structural features influence the dynamics and functionality of networks. For example, the synchronization and consensus properties of networks largely depend on their structure (Belykh et al. 2005; Lu et al. 2004).
The most striking patterns of networked structures appear when a number of agents (each with simple behavior) interact, leading to complex collective behaviors. Synchronization of interacting agents – a prominent form of collective behavior – has many applications in science and engineering. For example, techniques available in network theory can be used for efficient distributed inference in sensor networks (Barbarossa & Scutari 2007; Scutari et al. 2008). In this work we used tools available in network science to perform a numerical optimization task. Optimization is an approach that iteratively improves the performance of a system, which is formulated through a single standard measure called the cost (or objective) function.
In order to use network theory for solving an optimization task, we used the concept of consensus formation in the opinions of multi-agent systems. Let us consider a network of agents where a (discrete or continuous) opinion value is associated with each agent. Agents can influence each other's opinions through the connections existing between them, i.e., the edges of the network. Under some simple update rules, and if certain conditions are met, the agents can reach a consensus in their opinions through a number of opinion updates (Kozma & Barrat 2008; Carletti et al. 2006). In this work we considered the evolution of continuous opinions based on a modified version of the bounded confidence model (Deffuant et al. 2001), which has been extensively studied in recent years (Gandica et al. 2010; Weisbuch 2004; Urbig et al. 2008). Previous studies of continuous opinion formation have mainly considered uniform agents (Gandica et al. 2010; Weisbuch 2004; Urbig et al. 2008). However, in reality agents are diverse in their wealth and social status, and hence have diverse influence on others (Holyst et al. 2001; Lewenstein et al. 1992; Jalili 2013a; Jalili 2013b). Therefore, we associated a specific weight with each agent, resulting in faster consensus.
The paradigm proposed in this manuscript was applied to a number of benchmark problems. We first considered a simple function with many local optima and showed that the proposed optimization strategy could successfully find the optimum. We then applied the method to a number of benchmark problems from the CEC 2010 competition benchmark set (Tang et al. 2009). We compared the performance of the proposed consensus-based optimization approach with that of a number of classic optimization methods, including genetic algorithms, differential evolution and particle swarm optimization.
Optimization through consensus in the network
The paradigm we have proposed for large-scale optimization tasks is based on consensus in networked structures. In opinion formation models, there is a population of agents, each with a (discrete or continuous) opinion value representing its information about a subject (Deffuant et al. 2001; Gandica et al. 2010; Weisbuch 2004; Urbig et al. 2008). The term opinion is not easy to define in reality; however, it can be considered as a discrete or continuous value expressing an individual's degree of desire or preference. An opinion is often represented as a real number when the model is unimodal, or as a vector of real numbers when the model is multimodal. In this paper we aim at optimizing an objective function, and therefore each agent has an opinion value containing all the input parameters of the desired objective function, i.e., a multimodal model.
Opinion formation in multi-agent systems
The agents update their opinions as a result of interactions with their neighboring agents. Consider two neighboring agents i and j with opinions x_i and x_j, respectively. Their opinions at time n + 1 will be a function of their previous opinions, i.e., x_i(n + 1) = f1(x_i(n), x_j(n)) and x_j(n + 1) = f2(x_i(n), x_j(n)). If certain conditions are met, after a number of updates the agents can reach a consensus in their opinions (Kozma & Barrat 2008; Carletti et al. 2006). The collective behavior of agents over complex networks largely depends on the structural properties of the networks (Amblard & Deffuant 2004), and minor modifications in the structure of the network can have drastic effects on the behavior of opinion formation (Nardini et al. 2008).
There are a number of rules for modelling opinion formation in complex networks. For example, with discrete opinions, in the voter model randomly selected agents replace their opinions with that of one of their neighbours (Krapivsky & Redner 2003). Agents might also influence their neighbouring agents to change their opinions, depending on their strength and the neighbours' thresholds (Leskovec et al. 2006). In the evolution of continuous opinions on a network, the opinions of two connected agents are updated if their difference is less than a threshold, i.e., the agents have evolving opinions (Deffuant et al. 2001; Amblard & Deffuant 2004; Lorenz 2007; Kurmyshev et al. 2011; Hegselmann & Krause 2002; Guo & Cai 2009).
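In our model, each agent moves its opinion toward that of its best-matching neighbor. A sketch of such an update rule, written here under the assumption that the objective f is to be minimized (the exact form of the rule used in our experiments may differ in detail), is:

```latex
b_i(n) = \operatorname*{arg\,min}_{j \in N_i} f\bigl(x_j(n)\bigr),
\qquad
x_i(n+1) = x_i(n) + \mu \bigl(x_{b_i}(n) - x_i(n)\bigr),
\quad i = 1, \dots, N,
```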
where f is the desired cost (or objective) function to be optimized, N is the network size and N_i is the set of neighbours of agent i. The parameter μ is the convergence (or influence) parameter, which often takes a value between 0 and 1. It controls the speed of convergence: small values of μ correspond to slow but smooth convergence, while large values of μ correspond to faster but more oscillatory convergence.
To some extent, the above model for opinion formation imitates the behavior of agents in real social networks. A person may know many individuals in society; however, he/she is mainly influenced by his/her closest friends (i.e., neighbors in the network). In many cases, individuals receive the maximum influence from their best (closest) friends and try to make themselves similar to them, i.e., move their opinions closer to those of their closest friends. People often behave like their best friends in order to establish and maintain their friendships, and are influenced by them more than by others in their life (Barry & Wentzel 2006). They also project their own attitudes and habits onto their friends. Furthermore, research has shown that, in general, the influence of the very best friend is approximately equal or comparable to the influence of multiple friends (Berndt & Murphy 2003).
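A weighted variant consistent with the justification below (the exact normalization is our assumption) reads:

```latex
w_i(n) = \frac{f\bigl(x_i(n)\bigr)}{f\bigl(x_{b_i}(n)\bigr) + \epsilon},
\qquad
x_i(n+1) = x_i(n) + \mu\, w_i(n)\bigl(x_{b_i}(n) - x_i(n)\bigr),
```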
where ϵ is a small value (in order to make the denominator non-zero). The above weighted update rule can be justified as follows. Suppose an objective function is to be minimized. When the best-matching neighbor is found for an agent, it influences the agent according to its fitness, i.e., its value of the objective function. To this end, the weight in the update rule of an agent is set to the fitness value at that agent divided by the fitness value of the best-matching neighbor, often resulting in a value in the range 0–1 (note that the opinions are updated only when the fitness of the best-matching neighbor is better than that of the agent). It is worth mentioning that in some cases the opinions are multi-dimensional, i.e., x is a vector, in which case the best-matching agent is obtained separately for each dimension.
The method largely depends on the diffusion of good opinions (i.e., those that are good in terms of the objective function) through the network. Agents with opinion values close to the optimal objective function disseminate their opinions by communicating with their neighbors, i.e., getting into consensus with them. The influence of opinions is thus twofold: opinions closer to the optimal value of the objective function have a better chance of being selected as best-matching neighbors, and, once selected, they pull the opinions of neighboring agents toward them.
The above rule for opinion formation is somehow inspired by communication in human societies. Our friends influence our behavior in daily life; however, we are usually affected only when our friends are better than us. Here, similarly, for each agent, first, the best matching agent is found, and then, its opinion is updated (using equation (2)) if the fitness of the best-matching neighbor is better (i.e., it results in a lower value in the objective function) than that of the agent.
It is worth mentioning that the consensus (or synchronization) properties of dynamical networks largely depend on their structure, and some topologies are favored for fast consensus (Belykh et al. 2005; Ajdari Rad et al. 2008). Network topology also plays an important role in the evolution of other dynamical phenomena over complex networks, such as the evolution of cooperative behavior among interacting agents (Perc & Szolnoki 2010; Perc 2009).
A pseudo-code of the proposed consensus-based optimization algorithm is given in the following section.
Pseudo-Code for the proposed consensus-based optimization method
N: number of agents in the population (network size)
M: number of attributes of opinion vector
Boundaries: the range of the opinions
F: desired objective function which is needed to be optimized (minimized in this case)
Initialize the N × M matrix X of opinion values at random (normally distributed) within Boundaries;
net = Create a structured network;
Repeat
  for each agent i in population do
    for each attribute a do
      neighbors_opinion = mask all other attributes of the opinions x of the neighbors of agent i in network net by a dummy value;
      self_opinion = mask all other attributes of the opinion x of agent i;
      j = find the agent in neighbors_opinion giving the best value of F;
      if neighbor_opinion of agent j optimizes F better than self_opinion then
        update the opinion of agent i toward that of agent j using the weighted update rule;
Until stopping condition(s) has/have been met
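The pseudo-code above can be sketched as a runnable program. The following is a minimal illustration only: the random interaction graph, the clipped fitness-ratio weight and all parameter values are assumptions made for this sketch, and for brevity the best-matching neighbor is selected per whole opinion vector rather than per attribute.

```python
import random

def cbo_minimize(f, dim, bounds, n_agents=30, mu=0.5, p_edge=0.2,
                 eps=1e-12, steps=200, seed=0):
    """Sketch of consensus-based optimization: each agent repeatedly moves
    its opinion toward the best-matching neighbor on a fixed random graph."""
    rng = random.Random(seed)
    lo, hi = bounds
    # Random initial opinions (one dim-dimensional vector per agent).
    x = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_agents)]
    # Fixed Erdos-Renyi-style interaction graph, unchanged during the run.
    nbrs = [set() for _ in range(n_agents)]
    for i in range(n_agents):
        for j in range(i + 1, n_agents):
            if rng.random() < p_edge:
                nbrs[i].add(j)
                nbrs[j].add(i)
    best_x, best_f = None, float("inf")
    for _ in range(steps):
        fit = [f(xi) for xi in x]
        for i in range(n_agents):
            if fit[i] < best_f:
                best_x, best_f = list(x[i]), fit[i]
        new_x = []
        for i in range(n_agents):
            if not nbrs[i]:
                new_x.append(x[i])
                continue
            j = min(nbrs[i], key=lambda k: fit[k])  # best-matching neighbor
            if fit[j] < fit[i]:
                # Assumed fitness-ratio weight, clipped to [0, 1] as a guard.
                w = min(1.0, abs(fit[j]) / (abs(fit[i]) + eps))
                new_x.append([xi + mu * w * (xj - xi)
                              for xi, xj in zip(x[i], x[j])])
            else:
                new_x.append(x[i])  # no better neighbor: opinion unchanged
        x = new_x
    return best_x, best_f
```

On a simple convex function the opinions contract toward the region of the best agents; opinions are updated synchronously, using the fitness values of the previous step.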
At the beginning of the process, the agents are initialized with random values in the range accepted by their opinion values. As indicated by Watts and Dodds (2007), “a minority of individuals who influence an exceptional number of their peers”, there is often a minority of agents that have a significant influence on others, which is mainly due to their specific position in the network. The influentials hypothesis indicates that cascades initiated by influential individuals differ markedly from those initiated by non-influential ones in size and likelihood (Watts & Dodds 2007). This means that the initial opinions of influential agents would probably bias the result of the consensus. This phenomenon does not happen in the proposed method, since CBO is not based on the bounded confidence model: every agent selects its best-matching neighbor regardless of social power and degree.
Consensus of opinion values
The above lemma was proved in (Seneta 1981); here we also give an alternative proof using a simpler method.
It is clear that in the above representation, matrix A is a stochastic matrix.
Theorem 1: The product of two stochastic matrices is a stochastic matrix.
Therefore, C is a matrix with nonnegative entries and row sums equal to 1, and thus it is a stochastic matrix.
Let t1 and t2 represent time steps (t1 < t2) and B(t1, t2) = A(t2 − 1)A(t2 − 2)…A(t1), which models the accumulated weights between times t1 and t2 (Hegselmann & Krause 2002). It can be simply shown that for any r ≥ 0, 1 − r ≤ e^(−r).
Theorem 2 (Convergence Theorem): Considering opinion update rule (2), let B(t1, t2) = [b_ij(t1, t2)] be the stochastic matrix of accumulated weights, and let the sequences 0 = t0 < t1 < t2 < … ≤ T and δ1, δ2, …, δm, … be such that 0 ≤ δm ≤ 1 and Σ_{m=1}^∞ δm = ∞. If b_ij(t_{m−1}, t_m) ≥ δm for all m ≥ 1 and 1 ≤ i, j ≤ N, then for any initial condition there exists a consensus, i.e., lim_{t→∞} x_i(t) = x* for i = 1, …, N.
Proof: see the Appendix section.
We applied our optimization procedure to a number of benchmark problems and compared its performance with some well-known methods, including genetic algorithms (GA), particle swarm optimization (PSO), differential evolution (DE) and the distributed dual averaging (DDA) algorithm. GA has been successfully applied to many optimization problems and was used as a baseline in this work. It starts with a population of random solutions, denoted chromosomes; the first step is thus to encode the initial solutions from phenotype to genotype. The objective function is then used for ranking the chromosomes. GA works iteratively and, in each step, applies operators such as parent selection, recombination (crossover) and mutation (Holland 1975). A number of parameters should be tuned in order for a GA to work well, including the crossover probability, mutation probability, population model and parent selection model. The crossover probability P_c indicates the probability of creating a new chromosome from two parents. The mutation probability P_m indicates the portion of the population that undergoes mutation in each iteration of the algorithm.
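As an illustration of the operators just described, a minimal real-coded GA with tournament selection, one-point crossover and Gaussian mutation might look as follows; the operators and parameter values are generic textbook choices, not the exact configuration used in the experiments.

```python
import random

def ga_minimize(f, dim, bounds, pop_size=40, p_c=0.8, p_m=0.1,
                generations=100, seed=0):
    """Sketch of an elitist real-coded genetic algorithm."""
    rng = random.Random(seed)
    lo, hi = bounds
    pop = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(pop_size)]

    def tournament():
        # Binary tournament: the fitter of two random chromosomes wins.
        a, b = rng.sample(pop, 2)
        return a if f(a) < f(b) else b

    best = min(pop, key=f)
    for _ in range(generations):
        new_pop = [best]  # elitism: always keep the best chromosome
        while len(new_pop) < pop_size:
            p1, p2 = tournament(), tournament()
            if rng.random() < p_c and dim > 1:  # one-point crossover (P_c)
                cut = rng.randrange(1, dim)
                child = p1[:cut] + p2[cut:]
            else:
                child = list(p1)
            for g in range(dim):  # Gaussian mutation (P_m), clamped to bounds
                if rng.random() < p_m:
                    child[g] = min(hi, max(lo, child[g] + rng.gauss(0, 0.5)))
            new_pop.append(child)
        pop = new_pop
        best = min(pop, key=f)
    return best, f(best)
```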
DE is one of the best-performing evolutionary algorithms frequently used for optimization tasks, and often reaches the optimal solution in fewer steps than other optimization algorithms. DE uses the difference of a randomly selected pair of chromosomes – indicating the diversity of the population – and adds it to one of the chromosomes in the population. Then, it uses crossover operators such as binomial and exponential crossover to combine the chromosomes (Storn & Price 1997). The parameters of the algorithm are as follows. β is a real value that is the coefficient of the difference between two selected chromosomes and controls the amplification of differential variation. P_r indicates the probability of using the mutant (trial) vector. N_v is an integer indicating the number of chromosome pairs used in calculating the mutant vector.
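A minimal DE/rand/1/bin sketch of the scheme just described is given below; the parameter values and the clamping of mutants to the bounds are illustrative assumptions.

```python
import random

def de_minimize(f, dim, bounds, pop_size=40, beta=0.5, p_r=0.9,
                generations=100, seed=0):
    """Sketch of differential evolution with binomial crossover."""
    rng = random.Random(seed)
    lo, hi = bounds
    pop = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(pop_size)]
    fit = [f(x) for x in pop]
    for _ in range(generations):
        for i in range(pop_size):
            # Pick three distinct chromosomes different from the target i.
            r1, r2, r3 = rng.sample([k for k in range(pop_size) if k != i], 3)
            # Differential mutation: base + beta * (difference), clamped.
            mutant = [min(hi, max(lo, pop[r1][g]
                                  + beta * (pop[r2][g] - pop[r3][g])))
                      for g in range(dim)]
            j_rand = rng.randrange(dim)  # guarantee at least one mutant gene
            trial = [mutant[g] if (rng.random() < p_r or g == j_rand)
                     else pop[i][g] for g in range(dim)]
            f_trial = f(trial)
            if f_trial < fit[i]:  # greedy one-to-one selection
                pop[i], fit[i] = trial, f_trial
    i_best = min(range(pop_size), key=lambda k: fit[k])
    return pop[i_best], fit[i_best]
```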
We also compared our algorithm with PSO, a well-known optimization algorithm based on swarm intelligence (Kennedy & Eberhart 1995). In this algorithm, there is a population of agents called particles forming a swarm, which interact with each other – as in our algorithm. PSO has two components: a cognitive component, the experience of each particle, and a social component, the experience of the community the agents belong to. PSO has shown a high degree of flexibility and acceptable speed in solving many optimization problems. Here we used one of the best-performing extensions of PSO, namely PSO with inertia weights (Eberhart & Shi 2000). The inertia weight plays an important role in balancing exploration and exploitation, making the algorithm more stable. PSO has a number of control parameters; let us denote the parameters controlling the cognitive and social power of the algorithm as c1 and c2, respectively.
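The inertia-weight PSO variant can be sketched as follows; the values of w, c1 and c2 below are common textbook settings assumed for illustration, not necessarily those tuned for our experiments.

```python
import random

def pso_minimize(f, dim, bounds, n_particles=30, w=0.72, c1=1.49, c2=1.49,
                 iterations=100, seed=0):
    """Sketch of particle swarm optimization with an inertia weight."""
    rng = random.Random(seed)
    lo, hi = bounds
    x = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    v = [[0.0] * dim for _ in range(n_particles)]
    pbest = [list(p) for p in x]                  # cognitive: personal best
    pbest_f = [f(p) for p in x]
    g = pbest_f.index(min(pbest_f))
    gbest, gbest_f = list(pbest[g]), pbest_f[g]   # social: global best
    for _ in range(iterations):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                # Inertia term plus cognitive and social attractions.
                v[i][d] = (w * v[i][d]
                           + c1 * r1 * (pbest[i][d] - x[i][d])
                           + c2 * r2 * (gbest[d] - x[i][d]))
                x[i][d] = min(hi, max(lo, x[i][d] + v[i][d]))
            fx = f(x[i])
            if fx < pbest_f[i]:
                pbest[i], pbest_f[i] = list(x[i]), fx
                if fx < gbest_f:
                    gbest, gbest_f = list(x[i]), fx
    return gbest, gbest_f
```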
Distributed dual averaging (DDA) – inspired by Nesterov’s dual averaging algorithm (Xiao 2010; Nesterov 2009) – has been proposed for optimizing convex functions (Duchi & Wainwright 2012). Similar to the CBO algorithm, DDA is a network-based optimization method in which each node computes a sub-differential of a local function while receiving information from its neighboring nodes. There is also a weight matrix to model the weighting process of the method. In each iteration, each node updates its solution vector by multiplying the stochastic weight matrix by the summation of its neighbors’ parameters and the sub-gradient of the objective function. DDA is computationally efficient and its convergence time depends on properties of the objective function and the underlying network topology. Expander graphs have been proposed as an efficient connection topology for the DDA optimization algorithm (Duchi & Wainwright 2012). The alternating direction method of multipliers (ADMM) is another optimization method, which uses properties of dual decomposition and augmented Lagrangian methods simultaneously (Boyd & Vandenberghe 2004). The Lagrange dual function is obtained via the convex conjugate, and the dual problem is solved using gradient ascent.
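To make the DDA update concrete, here is a toy sketch for scalar quadratic local functions f_i(x) = (x − a_i)^2 on a complete graph with uniform weights; taking ψ(x) = x²/2 reduces the proximal step to x = −α(t)·z. The step-size schedule and the running average of the iterates are our illustrative assumptions.

```python
import math

def dda_quadratic(a, iterations=3000, c=0.5):
    """Toy distributed dual averaging: node i holds f_i(x) = (x - a[i])**2;
    the averaging matrix is uniform (complete graph). Returns the
    time-averaged iterate of each node."""
    n = len(a)
    z = [0.0] * n    # dual variables (accumulated sub-gradients)
    x = [0.0] * n    # primal iterates
    avg = [0.0] * n  # running averages of the iterates
    for t in range(1, iterations + 1):
        z_mean = sum(z) / n                          # consensus step
        g = [2.0 * (x[i] - a[i]) for i in range(n)]  # local sub-gradients
        z = [z_mean + g[i] for i in range(n)]        # dual update
        alpha = c / math.sqrt(t)                     # decaying step size
        x = [-alpha * z[i] for i in range(n)]        # prox step, psi = x^2/2
        avg = [avg[i] + (x[i] - avg[i]) / t for i in range(n)]
    return avg
```

The minimizer of the averaged objective (1/n)Σ_i f_i is the mean of the a_i, toward which the time-averaged node estimates drift.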
which is a function with many local optima. The optimal point, at which the minimum value of 1 is achieved, is x* = 0.1.
In all above functions except F1, x ∈ [−5, 5] D . Furthermore, the global optimum - which is F2* = F3* = F4* = F5* = 0 - is achieved at x * which is a random and different vector of real numbers in each run.
One of the key ingredients of the proposed optimization algorithm is the graph structure used for connecting the agents, which is kept unchanged during the optimization process. In other words, the sets of neighbours of the agents are not changed. In this work, we used a number of well-known graph structures, including Erdős-Rényi random, Watts-Strogatz small-world and Barabasi-Albert scale-free networks.
We used the model introduced by Erdős and Rényi for the construction of pure random networks (Erdős & Rényi 1960). In this model, N nodes are considered and each pair is connected with probability P. Research has shown that real networks are neither random nor regular but somewhere in between; they are indeed small-world. In order to construct small-world networks, we used the original model proposed by Watts and Strogatz (Watts & Strogatz 1998), as follows. Starting with a regular ring graph in which each node is connected to its k nearest neighbours, each edge is rewired with probability P, provided that self-loops and duplicate edges are prohibited. They showed that for some intermediate values of the rewiring probability P, one obtains a network with a low characteristic path length, comparable to that of random networks, and a clustering coefficient (i.e., transitivity) much higher than that of corresponding random networks.
Erdős-Rényi and Watts-Strogatz models result in networks with an almost homogeneous degree distribution. However, it has been shown that many real networks have a heterogeneous degree distribution; there are many low-degree nodes in the network, while a few nodes are hubs with high degrees (Albert & Barabasi 2002; Barabasi & Albert 1999; Barabási 2009). Barabasi and Albert proposed a preferential attachment growth model for constructing such networks, which is used in this work (Barabasi & Albert 1999). The model starts with k + 1 all-to-all connected nodes. In each step, a new node with k links is added to the network. This node attaches to the old nodes with probability proportional to their degree, i.e., the higher the degree of an old node in the network, the higher the probability of making a connection with the new node. The model results in scale-free networks whose degree distribution obeys a power law (Barabasi & Albert 1999).
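The three generators above can be sketched in a few lines each; these are plain adjacency-set implementations written for illustration (real studies would typically use a graph library), with k assumed even in the Watts-Strogatz sketch.

```python
import random

def erdos_renyi(n, p, rng):
    """ER model: each pair of nodes is connected with probability p."""
    adj = [set() for _ in range(n)]
    for i in range(n):
        for j in range(i + 1, n):
            if rng.random() < p:
                adj[i].add(j); adj[j].add(i)
    return adj

def watts_strogatz(n, k, p, rng):
    """WS model: ring with k nearest neighbours, each ring edge rewired
    with probability p (no self-loops or duplicate edges)."""
    adj = [set() for _ in range(n)]
    for i in range(n):
        for d in range(1, k // 2 + 1):  # connect k/2 neighbours per side
            j = (i + d) % n
            adj[i].add(j); adj[j].add(i)
    for i in range(n):
        for d in range(1, k // 2 + 1):
            j = (i + d) % n
            if rng.random() < p and j in adj[i]:
                choices = [m for m in range(n) if m != i and m not in adj[i]]
                if choices:  # rewire (i, j) to a new endpoint m
                    m = rng.choice(choices)
                    adj[i].discard(j); adj[j].discard(i)
                    adj[i].add(m); adj[m].add(i)
    return adj

def barabasi_albert(n, k, rng):
    """BA model: start from k+1 complete nodes; each new node attaches k
    links preferentially to high-degree nodes (repeated-node pool trick)."""
    adj = [set() for _ in range(n)]
    for i in range(k + 1):
        for j in range(i + 1, k + 1):
            adj[i].add(j); adj[j].add(i)
    pool = [i for i in range(k + 1) for _ in range(k)]  # node i repeated deg(i) times
    for v in range(k + 1, n):
        targets = set()
        while len(targets) < k:  # k distinct, degree-proportional targets
            targets.add(rng.choice(pool))
        for t in targets:
            adj[v].add(t); adj[t].add(v)
        pool.extend(targets)
        pool.extend([v] * k)
    return adj
```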
Results and discussion
All algorithms could find the optimum solution for objective function F3 as expressed by equation (11), with the results shown in Figure 5. However, the CBO algorithm showed much faster convergence than the others. In terms of network topology in CBO, BA and ER topologies worked better than the case where the WS model was used for constructing the interaction topology. CBO was also better than GA, DE, PSO and DDA in optimizing the objective functions F4 as expressed by equation (12) and F5 as expressed by equation (13); the results are shown in Figures 6 and 7, respectively. While ER topology resulted in slightly faster convergence than BA and WS for F4, their performance was almost the same for F5. Consensus on random networks is expected to be the fastest compared to BA and WS networks, mainly because random networks often have a shorter average path length than the other models.
Iteration count averaged over 50 runs until convergence for the optimization methods, including distributed dual averaging (DDA), genetic algorithm (GA), differential evolution (DE), particle swarm optimization (PSO) and the proposed consensus-based optimization (CBO)
In sum, our experiments showed that performing an optimization task with a simple consensus network provides the solution with a better performance than a number of classic optimization tools including GA, DE, PSO and DDA. Furthermore, we found pure random topologies constructed by Erdős-Rényi model more effective than small-world topologies constructed by Watts-Strogatz model and scale-free topologies obtained through Barabasi-Albert preferential attachment model.
In this paper we introduced a novel application of the consensus phenomenon in complex networks. Consensus in networked structures has many applications ranging from engineering (e.g., sensor networks) to sociology (e.g., opinion formation in social networks). In this manuscript, we used network consensus to solve optimization tasks. We considered a number of agents interacting over a networked structure with random, small-world or scale-free topology. Each agent was associated with an opinion value which could change through collaboration with neighboring agents. The agents worked collectively with their friends (defined by the considered network topology and kept unchanged during the optimization process) to solve an optimization task. To this end, each agent adapted its opinion value based on its best-matching neighbor, i.e., the neighbor with the best value of the objective function. The proposed consensus-based optimization (CBO) method was applied to a number of benchmark problems and its performance was compared with that of a number of classic optimization tools such as genetic algorithms, differential evolution and particle swarm optimization. Our experiments showed that CBO could always find the optimal solution faster and more reliably. We also found the Erdős-Rényi random topology better than the Watts-Strogatz small-world and Barabasi-Albert scale-free topologies, in that CBO solved the optimization tasks faster when the random topology was used as the connection graph.
And this completes the proof.
The authors would like to thank Dr. Hamid Beigy for his insightful comments and discussion on the topic.
- Ajdari Rad A, Jalili M, Hasler M: Efficient rewirings for enhancing synchronizability of dynamical networks. Chaos 2008, 18: 037104. doi:10.1063/1.2967738
- Albert R, Jeong H, Barabasi A-L: Diameter of the World Wide Web. Nature 1999, 401: 130–131. doi:10.1038/43601
- Albert R, Barabasi A-L: Statistical mechanics of complex networks. Rev Modern Physics 2002, 74: 47–97. doi:10.1103/RevModPhys.74.47
- Amblard F, Deffuant G: The role of network topology on extremism propagation with the relative agreement opinion dynamics. Physica A 2004, 343: 725–738.
- Barabási A-L: Scale-free networks: a decade and beyond. Science 2009, 325: 412–413. doi:10.1126/science.1173299
- Barabasi A-L, Albert R: Emergence of scaling in random networks. Science 1999, 286: 509–512.
- Barbarossa S, Scutari G: Decentralized maximum likelihood estimation for sensor networks composed of nonlinearly coupled dynamical systems. IEEE Trans Signal Process 2007, 55: 3456–3470.
- Barry CMN, Wentzel KR: Friend influence on prosocial behavior: the role of motivational factors and friendship characteristics. Dev Psychol 2006, 42: 153–162.
- Belykh I, Hasler M, Laurent M, Nijmeijer H: Synchronization and graph topology. Int J Bifurcation Chaos 2005, 15: 3423–3433. doi:10.1142/S0218127405014143
- Berndt TJ, Murphy LM: Influences of friends and friendships: myths, truths, and research recommendations. Adv Child Dev Behav 2003, 30: 275–310.
- Boccaletti S, Latora V, Moreno Y, Chavez M, Hwang DU: Complex networks: structure and dynamics. Physics Reports 2006, 424: 175–308. doi:10.1016/j.physrep.2005.10.009
- Boyd S, Vandenberghe L: Convex optimization. Cambridge University Press; 2004.
- Brunetti S, Cordasco G, Gargano L, Lodi E, Quattrociocchi W: Minimum weight dynamo and fast opinion spreading. 38th International Conference on Graph-Theoretic Concepts in Computer Science 2012, 249–261.
- Carletti T, Fanelli D, Grolli S, Guarino A: How to make an efficient propaganda. Europhysics Letters 2006, 74: 222–228. doi:10.1209/epl/i2005-10536-9
- Deffuant G, Neau D, Amblard F, Weisbuch G: Mixing beliefs among interacting agents. Adv Comp Sys 2001, 3: 87–98.
- Duchi JC, Agarwal A, Wainwright MJ: Dual averaging for distributed optimization: convergence analysis and network scaling. IEEE Trans Automatic Con 2012, 57: 592–606. doi:10.1109/TAC.2011.2161027
- Eberhart RC, Shi Y: Comparing inertia weights and constriction factors in particle swarm optimization. Proceedings of the Congress on Evolutionary Computation 2000, 1: 84–88.
- Erdős P, Rényi A: On the evolution of random graphs. Pub Mathematic Ins Hungarian Acad Sci 1960, 5: 17–61.
- Fortunato S, Latora V, Pluchino A, Rapisarda A: Vector opinion dynamics in a bounded confidence consensus model. Int J Modern Physics C 2005, 16: 1535–1553. doi:10.1142/S0129183105008126
- Gandica Y, Del Castillo-Mussot M, Vázquez GJ, Rojas S: Continuous opinion model in small-world directed networks. Physica A 2010, 389: 5864–5870. doi:10.1016/j.physa.2010.08.025
- Girvan M, Newman MEJ: Community structure in social and biological networks. Proc Nat Acad Sci USA 2002, 99: 7821–7826. doi:10.1073/pnas.122653799
- Guo L, Cai X: Continuous opinion dynamics in complex networks. Com Computational Physics 2009, 5: 1045–1053.
- Hegselmann R, Krause U: Opinion dynamics and bounded confidence models, analysis, and simulation. J Art Soc Social Simulation 2002, 5(3).
- Holland JH: Adaptation in natural and artificial systems. Ann Arbor, MI: The University of Michigan Press; 1975.
- Holyst JA, Kacperski K, Schweitzer F: Social impact models of opinion dynamics. Ann Rev Computational Physics 2001, 20: 531–535.
- Jalili M: Effects of leaders and social power on opinion formation in complex networks. Simulation – Transactions of the Society for Modeling and Simulation International 2013a. doi:10.1177/0037549712462621
- Jalili M: Social power and opinion formation in complex networks. Physica A 2013b, 392: 959–966. doi:10.1016/j.physa.2012.10.013
- Kennedy J, Eberhart R: Particle swarm optimization. Proceedings IEEE International Conference on Neural Networks 1995, 4: 1942–1948.
- Kozma B, Barrat A: Consensus formation on adaptive networks. Physical Review E 2008, 77: 016102.
- Krapivsky PL, Redner S: Dynamics of majority rule in two-state interacting spin systems. Phys Rev Let 2003, 90: 238701.
- Krause U: A discrete nonlinear and non-autonomous model of consensus formation. Com Diff Eq 2000, 2000: 227–236.
- Kurmyshev E, Juárez HA, González-Silva RA: Dynamics of bounded confidence opinion in heterogeneous social networks: concord against partial antagonism. Physica A 2011, 390: 2945–2955. doi:10.1016/j.physa.2011.03.037
- Leskovec J, Adamic LA, Huberman BA: The dynamics of viral marketing. ACM Conference on Electronic Commerce. Ann Arbor, Michigan, USA; 2006.
- Lewenstein M, Nowak A, Latane B: Statistical mechanics of social impact. Physical Rev A 1992, 45: 703–716.
- Lorenz J: Continuous opinion dynamics under bounded confidence: a survey. Int J Modern Physics C 2007, 18: 1819–1838. doi:10.1142/S0129183107011789
- Lu JH, Yu XH, Chen GR, Cheng DZ: Characterizing the synchronizability of small-world dynamical networks. IEEE Trans Cir Syst I: Fundamental Theory and App 2004, 51: 787–796. doi:10.1109/TCSI.2004.823672
- Milo R, Shen-Orr S, Itzkovitz S, Kashtan N, Chklovskii D, Alon U: Network motifs: simple building blocks of complex networks. Science 2002, 298: 824–827. doi:10.1126/science.298.5594.824
- Nardini C, Kozma B, Barrat A: Who’s talking first? Consensus or lack thereof in coevolving opinion formation models. Phys Rev Let 2008, 100: 158701.
- Nesterov Y: Primal-dual subgradient methods for convex problems. Mathematical Programming 2009, 120: 221–259. doi:10.1007/s10107-007-0149-x
- Newman MEJ: The structure and function of complex networks. SIAM Review 2003, 45: 167–256. doi:10.1137/S003614450342480
- Newman MEJ, Park J: Why social networks are different from other types of networks. Physical Review E 2003, 68: 036122.
- Newman MEJ, Barabási A-L, Watts DJ: The structure and dynamics of networks. Princeton University Press; 2006.
- Perc M: Evolution of cooperation on scale-free networks subject to error and attack. New J Physics 2009, 11: 033027. doi:10.1088/1367-2630/11/3/033027
- Perc M, Szolnoki A: Coevolutionary games – a mini review. Biosystems 2010, 99: 109–125. doi:10.1016/j.biosystems.2009.10.003
- Scutari G, Barbarossa S, Pescosolido L: Distributed decision through self-synchronizing sensor networks in the presence of propagation delays and asymmetric channels. IEEE Trans Signal Process 2008, 56: 1667–1684.
- Seneta E: Non-negative matrices and Markov chains. 2nd edition. New York: Springer; 1981.
- Steinhaus H: A note on the ham sandwich theorem. Mathesis Polska 1938, 9: 26–28.
- Storn R, Price K: Differential evolution – a simple and efficient heuristic for global optimization over continuous spaces. J Global Optimization 1997, 11: 341–359. doi:10.1023/A:1008202821328
- Strogatz SH: Exploring complex networks. Nature 2001, 410: 268–276. doi:10.1038/35065725
- Tang K, Li X, Suganthan PN, Yang Z, Weise T: Benchmark functions for the CEC'2010 special session and competition on large-scale global optimization. Technical report, University of Science and Technology of China; 2009.
- Urbig D, Lorenz J, Herzberg H: Opinion dynamics: the effect of the number of peers met at once. J Artific Soc Social Simulation 2008, 11: 4.
- Watts DJ, Dodds PS: Influentials, networks, and public opinion formation. J Consumer Res 2007, 34: 441–458. doi:10.1086/518527
- Watts DJ, Strogatz SH: Collective dynamics of ‘small-world’ networks. Nature 1998, 393: 440–442. doi:10.1038/30918
- Weisbuch G: Bounded confidence and social networks. Eur Physic J B 2004, 38: 339–343. doi:10.1140/epjb/e2004-00126-9
- Xiao L: Dual averaging methods for regularized stochastic learning and online optimization. J Machine Lear Res 2010, 11: 2543–2596.
- Yang H-X, Wu Z-X, Zhou C, Zhou T, Wang B-H: Effects of social diversity on the emergence of global consensus in opinion dynamics. Physical Review E 2009, 80: 046108.
This article is published under license to BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.