Research

Working Papers

Confidence in Inference

Abstract
Do people behave as if they know how informative signals are? I axiomatically identify the restrictions on choice implied by certainty about the information structure in a sampling environment. Certainty is equivalent to a separability axiom and yields parallel linear indifference curves in the space of samples. This equivalence does not depend on expected utility or Bayesian updating and holds for a broad class of monotone updating and choice rules. A controlled experiment rejects certainty about the information structure for 95% of subjects. Subjects are insensitive to the stated information structure and instead choose based solely on sample characteristics. Many decisions display a sample-size neglect bias. Using an incentive-compatible confidence elicitation method, I find that sample-size neglect is positively associated with confidence, suggesting that subjects act as if they are uncertain about the information structure even when it is explicitly provided.

A Procedural Model of Complexity Under Risk

[Summary slides]

Abstract
I consider a decision-maker who uses rules to simplify lotteries in order to compare them. I characterize expected utility in this setting and highlight its complexity requirements, which a purely axiomatic characterization overlooks. Relaxing these requirements, I characterize two models of complexity aversion: an outcome-support-size cost model and an entropy cost model. I then consider an additional aspect of complexity: decision-makers find it easier to evaluate a lottery when its outcomes are close in value. To capture this, I characterize a third model of complexity aversion, in which the DM first partitions outcomes that are close in value and then evaluates the lottery together with the complexity of the partition. This representation offers a measure of complexity that is not restricted to probabilities and support size but also accounts for the cardinal values of the outcomes. I also compare the models empirically and find support for partition complexity.

Updating and Misspecification: Evidence from the Classroom

with Marc-Antoine Chatelain, Paul Han and Xiner Xu

Abstract
Misspecification is theoretically linked with updating failures, but empirical evidence has been lacking. We document the empirical relevance of misspecification and estimate its impact on updating. We collect a novel high-frequency dataset on students' beliefs about their grades in a freshman course. Students are overconfident, their beliefs do not improve over time, and they overestimate the testing noise by a factor of 3. Our RCT exogenously shocks and improves students' beliefs about the testing noise. Treated students reduce their prediction errors by 32%. We estimate the impact of misspecification structurally and find that a lower bound of 25% of prediction errors can be attributed to misspecification. Our findings suggest that misspecification is a major obstacle to processing information correctly, but that it can be alleviated via simple interventions.

Work in Progress

What makes a matching market congested?

with Justin Hadad

Abstract
We study a decentralized matching market where each applicant sends a fixed number of applications and each firm then makes one offer. Our game emulates matching markets under time constraints: agents who are unmatched after the round remain unmatched. Congestion arises from two basic market failures: some firms receive no applications (an issue of coverage), and some applicants receive multiple offers (an issue of collisions). We study how market size, the degree of preference alignment, and the number of applications affect congestion. In contrast to the literature, aligned preferences and screening worsen congestion. Furthermore, additional applications do not always alleviate congestion, and optimal quotas are typically small.