Menglong (Vincent) Guan
Abstract: This paper presents an experimental study of how people choose sets of information sources (referred to as information bundles). The findings reveal that subjects frequently fail to choose the more instrumentally valuable bundle in binary choices, largely because of the challenge of integrating the two information sources within each bundle to identify their joint information content. These mistakes cannot be attributed to an inability to use information bundles. Instead, they are strongly explained by subjects' tendency to follow a simple but imperfect heuristic when comparing bundles, which I call "common source cancellation" (CSC). Under this heuristic, subjects mistakenly disregard the information source that two bundles share and focus solely on comparing the sources they do not share. As a result, choices between information bundles are made without adequately considering each bundle's joint information content. Notably, CSC emerges as a robust explanation for information bundle choices across all subjects, including those who make perfect use of information bundles when forming inferences.
Abstract: We explore theoretically and experimentally whether trustees can use information design as a signaling device to boost trusting acts. In our main setting, a trustee partially or fully decides a binary payoff allocation and designs an information structure; a trustor then decides whether to invest. In the control setting, information design is not available. In line with the standard equilibrium analysis, we find that introducing information design increases trustworthiness and trusting acts, and some trustees choose full trustworthiness with the most informative structure. We also find systematic behavioral deviations, including some trustees choosing zero trustworthiness with the least informative structure and trustors overtrusting under less informative structures. Finally, we provide a model of heterogeneity in prosociality and strategic sophistication that rationalizes the experimental findings.
Abstract: We experimentally study how people’s demand for information structures is shaped by their informativeness—the reduction in uncertainty they produce. To do this, we introduce new methods that remove confounds to information demand, such as failures of Bayesian reasoning. We show that people (i) strongly demand informativeness when it has instrumental value but also (ii) display a sharp aversion to informativeness when it cannot be used to improve choice, sometimes leading to costly errors in information choice. Several strands of evidence suggest that this aversion is driven by subjective information-processing costs that rise with informativeness.
Three Faces of Complexity in Strategic Choice [draft available soon] (with Ryan Oprea)
Abstract: We study whether and by what mechanism complexity affects strategic choices in experiments based on the infinitely repeated prisoner’s dilemma. When responding to an opponent’s known strategy, people are drawn to simple strategies by implementation (procedural) complexity and make less optimal choices under computational complexity. In settings involving strategic uncertainty, simple strategies are more focal even when neither implementation nor computational complexity is present, and people are drawn further toward simple strategies when either form of complexity is introduced. Moreover, the effect of complexity, especially implementation complexity, weakens and can even dominate the influence of strategic uncertainty, the most important previously documented driver of strategy choices.
Preference for Sample Features and Belief Updating [preliminary draft available upon request] (with ChienHsun Lin, Jing Zhou and Ravi Vora)
Abstract: We experimentally investigate how individuals use and value different statistical characteristics of a set of realized signals, referred to as sample features, in the classic belief updating task. We find that neither the valuation nor the realized usefulness of sample features responds monotonically to changes in their instrumental value (informativeness). Subjects prefer sample features that contain proportion (the relative frequency of realized outcomes) over those that do not, and their belief updating aligns most closely with the Bayesian benchmark when they use proportion, even though it is not the most informative feature. Combining preferences and belief updating performance, we show that, on average, subjects make better use of the sample features they prefer. Taken together, our findings shed light on the association between preferences over sample features and errors in belief updating in the classic experimental setting for studying belief updating behaviors.