Collective Decision-making and Opinion Dynamics

Social learning and information aggregation in social networks

We study the behavioral foundations of non-Bayesian models of learning over social networks and develop a taxonomy of conditions for information aggregation in a general framework. As our main behavioral assumption, we postulate that agents follow social learning rules that satisfy “imperfect recall,” according to which they treat the current beliefs of their neighbors as sufficient statistics for the entire history of their observations. We augment this assumption with various restrictions on how agents process the information provided by their neighbors and obtain representation theorems for the corresponding learning rules (including the canonical model of DeGroot). We then obtain general long-run learning results that are not tied to the learning rules’ specific functional forms, thus identifying the fundamental forces that lead to learning, non-learning, and mislearning in social networks. Our results illustrate that, in the presence of imperfect recall, long-run aggregation of information is closely linked to (i) the rate at which agents discount their neighbors’ information over time, (ii) the curvature of agents’ social learning rules, and (iii) whether their initial tendencies are amplified or moderated as a result of social interactions.
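As a concrete point of reference, the sketch below simulates the canonical DeGroot rule mentioned above: each agent's next belief is a fixed weighted average of its neighbors' current beliefs, one simple way to encode imperfect recall. The network, weights, and initial beliefs are illustrative assumptions, not taken from the paper.

```python
import numpy as np

# Minimal sketch of the DeGroot rule: each agent's next belief is a fixed
# weighted average of its neighbors' current beliefs, x(t+1) = W x(t), with
# W row-stochastic. Weights and initial beliefs below are illustrative.
W = np.array([
    [0.6, 0.3, 0.1],   # weights agent 0 places on agents 0, 1, 2
    [0.2, 0.5, 0.3],
    [0.1, 0.4, 0.5],
])
x = np.array([0.9, 0.2, 0.4])  # initial beliefs about an unknown state

for t in range(50):
    x = W @ x  # imperfect recall: only current neighbor beliefs enter the update

print(x)  # on a strongly connected, aperiodic network, beliefs converge to a consensus
```

Whether such a consensus actually aggregates the agents' dispersed information, and how that depends on discounting, curvature, and the amplification of initial tendencies, is the kind of long-run question this project addresses.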

Decision-making in groups and over social networks

We are interested in the rules that individuals use to update their opinions over time as new information becomes available, especially in the context of group dynamics. While the rational decision-theoretic model of learning has people combine their prior opinions with new observations and evidence via Bayes’ rule to reach an updated conclusion, we investigate the ways in which human decision-making deviates from this perfectly rational ideal.
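For comparison, here is a minimal sketch of the Bayesian benchmark for a binary state with conditionally independent binary signals; the prior, signal precision, and signal sequence are illustrative assumptions.

```python
# Bayesian benchmark: update a belief about a binary state after observing
# a noisy binary signal that matches the truth with probability `precision`.
def bayes_update(prior, signal, precision):
    like_if_1 = precision if signal == 1 else 1 - precision
    like_if_0 = 1 - precision if signal == 1 else precision
    return prior * like_if_1 / (prior * like_if_1 + (1 - prior) * like_if_0)

belief = 0.5                      # flat prior over the two states
for s in [1, 1, 0, 1]:            # hypothetical sequence of noisy observations
    belief = bayes_update(belief, s, precision=0.7)
print(round(belief, 3))           # posterior after conditioning on all evidence
```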

Computations for fully rational agents

For a collection of fully rational agents, the calculations required to reach optimal decisions constitute a computationally difficult problem. We are broadly interested in understanding what these agents must compute, as they observe their neighbors’ behavior and beliefs, in order to make decisions; how long these computations take; and what methods might practicably approximate this process.
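As a rough, hypothetical illustration of why exact rationality is demanding (our own back-of-the-envelope count, not a result from this work): a fully Bayesian agent must in principle reason over every possible history of its neighbors' actions, and the number of such histories grows exponentially in the number of neighbors and time periods.

```python
# With n neighbors, k possible actions per period, and T periods, there are
# k**(n * T) possible observation histories to condition on.
n_neighbors, n_actions, n_periods = 5, 2, 20
histories = n_actions ** (n_neighbors * n_periods)
print(f"{histories:.2e}")  # about 1.27e+30 possible histories
```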

A theory of misinformation spread on social networks

In this work, we study a strategic model of online news dissemination on a Twitter-like social network. Agents with heterogeneous priors decide whether to forward a piece of news they have received to their followers. Each agent forwards the news if doing so persuades their followers, in aggregate, to think more like them. At the micro level, we show how novelty and affirmation motives emerge naturally from the utility-maximizing behavior of agents for whom persuasion is the main motive for sharing news. We characterize the dynamics of the news spread and establish the equation governing the steady-state size of news cascades. We derive exact necessary and sufficient conditions for the emergence of a cascade and, based on these, formulate the problem of finding the news precision level that maximizes the ex-ante likelihood of a sharing cascade. We show that if the cost of broadcasting to followers is sufficiently small, then a cascade occurs almost surely for sufficiently accurate news. When the broadcasting cost passes a certain threshold, the optimal precision needed for a cascade depends on the aggregate wisdom of the crowd, and more precisely, on whether the aggregation of agents’ prior beliefs concentrates on the truth. When the society as a whole is biased, i.e., there is a gap between the true state and the aggregate of prior perspectives, the truth almost always triggers a cascade. In contrast, in a wise or unbiased society, cascades are more likely to occur for false news, i.e., information that is likely to be inaccurate. Our results complement empirical findings that inaccurate or false news spreads more widely on social networks than accurate information.
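The cascade condition has the flavor of a branching-process threshold. The sketch below is a deliberately simplified, non-strategic stand-in (our illustration, not the model described above): each exposed follower forwards the news independently with probability p_share to a random number of their own followers, and large cascades become possible roughly when the expected number of new exposures per exposure exceeds one.

```python
import random

# Simplified branching-process stand-in for a sharing cascade: the original
# poster broadcasts to their followers, and each exposed follower forwards
# independently with probability `p_share`. Cascades can take off roughly
# when p_share * mean_followers exceeds one.
def cascade_size(p_share, mean_followers, max_nodes=10_000, seed=0):
    rng = random.Random(seed)
    frontier = rng.randint(0, 2 * mean_followers)  # followers of the source
    total_exposed = 1 + frontier
    while frontier and total_exposed < max_nodes:
        new_exposures = 0
        for _ in range(frontier):
            if rng.random() < p_share:
                new_exposures += rng.randint(0, 2 * mean_followers)
        frontier = new_exposures
        total_exposed += new_exposures
    return total_exposed

# Subcritical sharing probabilities die out quickly; supercritical ones can
# blow up to the size cap in at least some runs.
for p_share in (0.05, 0.20):
    sizes = [cascade_size(p_share, mean_followers=10, seed=s) for s in range(100)]
    print(p_share, max(sizes))
```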
