Menglong (Vincent) Guan
Abstract: This paper presents an experimental study of how people choose sets of information sources (referred to as information bundles). The findings reveal that subjects frequently fail to choose the more instrumentally valuable bundle in binary choices, largely because of the difficulty of integrating the sources within a bundle to identify their joint information content. These mistakes cannot be attributed to an inability to use information bundles. Instead, they are strongly explained by subjects' tendency to follow a simple but imperfect heuristic when valuing bundles, which I call "common source cancellation" (CSC). Under CSC, subjects mistakenly disregard the information source common to the two bundles and focus solely on comparing the sources the bundles do not share. As a result, choices between information bundles are made without adequately considering each bundle's joint information content. Notably, CSC robustly explains information bundle choices for all subjects, including those who make perfect use of information bundles when drawing inferences.
Abstract: We explore theoretically and experimentally whether trustees can use information design as a signaling device to encourage trusting acts. In our main setting, a trustee partially or fully determines a binary payoff allocation and designs an information structure; a trustor then decides whether to invest. In the control setting, information design is not available. In line with the standard equilibrium analysis, we find that introducing information design increases both trustworthiness and trusting acts, and some trustees choose full trustworthiness with the most informative structure. We also find systematic behavioral deviations, including some trustees choosing zero trustworthiness with the least informative structure and trustors overtrusting under less informative structures. Finally, we provide a model of heterogeneity in prosociality and strategic sophistication that rationalizes the experimental findings.
Abstract: We experimentally study how people’s demand for information structures is shaped by their informativeness—the reduction in uncertainty they produce. To do this, we introduce new methods that remove confounds in information demand, such as failures of Bayesian reasoning. We show that people (i) strongly demand informativeness when it has instrumental value but also (ii) display a sharp aversion to informativeness when it cannot be used to improve choice, sometimes leading to costly errors in information choice. Several strands of evidence suggest that this aversion is driven by subjective information-processing costs that rise with informativeness.
Three Faces of Complexity in Strategic Choice [draft available soon] (with Ryan Oprea)
Abstract: We experimentally show that the complexity of strategies strongly shapes strategy selection in the infinitely repeated prisoner's dilemma in several distinct ways. In our experiment, subjects choose from a menu of pre-set strategies, and we vary (i) whether they have to implement their chosen strategies themselves, (ii) whether they have to compute the payoff consequences of those strategies prior to selecting them, and (iii) whether subjects face strategic uncertainty about the strategy of their counterpart. We find that subjects display a strong aversion to implementing algorithmically complex strategies and have significant difficulty identifying best-response strategies when forced to compute them themselves. We also find that algorithmically simple strategies serve as focal points, producing an attraction to simple strategies in the face of strategic uncertainty, particularly when subjects must infer the payoff consequences of those strategies themselves. Overall, the influence of complexity on strategy choice is at least as strong as that of strategic uncertainty, the most important driver of strategy choice documented in the literature so far.
Preference for Sample Features and Belief Updating [preliminary draft available upon request] (with ChienHsun Lin, Jing Zhou and Ravi Vora)
Abstract: We experimentally investigate how individuals use and value different statistical characteristics of a set of realized signals, referred to as sample features, in the classic belief updating task. We find that neither the valuation nor the realized usefulness of sample features responds monotonically to changes in their instrumental value (informativeness). Subjects prefer sample features that contain proportion (the relative frequency of realized outcomes) over those that do not, and their belief updating aligns most closely with the Bayesian benchmark when using proportion, despite proportion not being the most informative feature. Combining preferences with belief updating performance, we show that, on average, subjects make better use of the sample features they prefer. Taken together, our findings shed light on the association between preferences over sample features and errors in belief updating in this classic experimental setting.