Research

Working Papers

Confidence in Inference

Abstract
I study a decision-maker who chooses between objects, each associated with a sample of signals. I axiomatically characterize the set of choices that are consistent with established models of belief updating. A simple thought experiment yields a natural choice pattern that lies outside this set. In particular, the effect of increasing sample size on choice cannot be rationalized by these models. In a controlled experiment, 95% of subjects' choices violate these models. Using a novel incentive-compatible confidence elicitation mechanism, I find that confidence in correctly interpreting samples influences choice. As the thought experiment suggests, many subjects display a sample-size neglect bias that is positively associated with confidence.
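
A textbook illustration of the sample-size effect at issue (a standard Bayesian benchmark, not the paper's axiomatization): with two equally likely states and i.i.d. binary signals of precision q > 1/2, a sample containing a favorable and b unfavorable signals yields posterior odds

    P(H \mid s) / P(L \mid s) = (q / (1 - q))^{a - b}.

Scaling the sample up while holding the proportion a/(a+b) fixed scales up a - b and therefore strengthens the posterior; a decision-maker who neglects sample size instead responds to the proportion alone.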

A Procedural Model of Complexity Under Risk

[Summary slides]

Abstract
I consider a decision-maker who uses rules to simplify lotteries in order to compare them. I characterize expected utility in this setting and highlight the complexity requirements that a purely axiomatic characterization overlooks. Relaxing these requirements, I characterize two models of complexity aversion: an outcome support size cost model and an entropy cost model. I then consider an additional aspect of complexity: decision-makers find it easier to evaluate a lottery when its outcomes are close in value. To capture this, I characterize a third model of complexity aversion, in which the decision-maker first partitions outcomes that are close in value and then evaluates the lottery together with the complexity of the partition. This representation offers a measure of complexity that is not restricted to probabilities and support size but also accounts for the cardinal values of the outcomes. I also compare the models empirically and find support for partition complexity.
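
Schematically, and with notation that is mine rather than the paper's, the first two cost-based representations value a lottery p over outcomes x as expected utility net of a complexity penalty:

    V(p) = \sum_x p(x) u(x) - c \, |supp(p)|    (support size cost)
    V(p) = \sum_x p(x) u(x) - \kappa H(p), where H(p) = -\sum_x p(x) \log p(x)    (entropy cost)

In the partition representation the penalty is instead defined on a coarsening of the outcomes, which is how it can depend on how far apart the outcome values are.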

Updating and Misspecification: Evidence from the Classroom

with Marc-Antoine Chatelain, Paul Han and Xiner Xu

Abstract
Misspecification is theoretically linked with updating failures, but empirical evidence has been lacking. We document the empirical relevance of misspecification and estimate its impact on updating. We collect a novel high-frequency dataset on students' beliefs about their grades in a freshman course. Students are overconfident, their beliefs do not improve over time, and they overestimate the testing noise by a factor of 3. Our RCT exogenously shocks and improves students' beliefs about the testing noise. Treated students reduce their prediction errors by 32%. Estimating the impact of misspecification structurally, we find that at least 25% of prediction errors can be attributed to misspecification. Our findings suggest that misspecification is a major obstacle to processing information correctly, but that it can be alleviated via simple interventions.
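
To see why overstating noise dampens updating, consider a standard normal-normal benchmark (illustrative only; the paper's structural model may differ): with prior belief \theta \sim N(\mu, \sigma_\theta^2) and an observed grade g = \theta + \epsilon, \epsilon \sim N(0, \sigma_\epsilon^2), the posterior mean is

    \mu' = \mu + \frac{\sigma_\theta^2}{\sigma_\theta^2 + \sigma_\epsilon^2} (g - \mu).

A student who perceives the noise standard deviation as 3\sigma_\epsilon uses weight \sigma_\theta^2 / (\sigma_\theta^2 + 9\sigma_\epsilon^2) on each grade, so beliefs move little even after repeated feedback.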

Work in Progress

Congestion in Matching

with Justin Hadad

Abstract
We examine matching markets with a limited number of rounds, in which N applicants apply to M receivers and N > M. Our environment mirrors executing k rounds of the deferred acceptance mechanism. Because rounds are limited, not all desirable matches are realized, causing congestion. Allowing applicants to submit more applications does not alleviate congestion: with one round of offers, a quota of at most three applications strictly outperforms scenarios with more applications. The optimal quota increases with the number of rounds. Our findings also suggest that quotas are powerful: for sufficiently unbalanced markets, a quota of one outperforms an additional round of offers.
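
For readers unfamiliar with the environment, the sketch below runs k rounds of applicant-proposing deferred acceptance with a per-applicant application quota. It is a minimal illustration under my own simplifying assumptions (strict preferences, one seat per receiver), not the paper's implementation.

# Minimal sketch: k rounds of applicant-proposing deferred acceptance with a
# per-applicant application quota (illustrative; not the paper's code).
def truncated_da(app_prefs, rec_prefs, rounds, quota):
    # app_prefs[i]: applicant i's ranking of receivers, best first.
    # rec_prefs[j]: receiver j's ranking of applicants, best first.
    rank = [{a: r for r, a in enumerate(p)} for p in rec_prefs]
    next_idx = [0] * len(app_prefs)    # next entry on each applicant's list
    sent = [0] * len(app_prefs)        # applications used so far
    held = {}                          # receiver -> tentatively held applicant
    matched = [False] * len(app_prefs)
    for _ in range(rounds):
        for a, prefs in enumerate(app_prefs):
            if matched[a] or sent[a] >= quota or next_idx[a] >= len(prefs):
                continue                    # no application this round
            j = prefs[next_idx[a]]          # apply to next receiver on list
            next_idx[a] += 1
            sent[a] += 1
            incumbent = held.get(j)
            if incumbent is None or rank[j][a] < rank[j][incumbent]:
                if incumbent is not None:
                    matched[incumbent] = False   # displaced, free next round
                held[j] = a                      # receiver tentatively holds a
                matched[a] = True
    return held                                  # matches after `rounds` rounds

# Example: 3 applicants, 2 receivers, one round, quota of one.
print(truncated_da([[0, 1], [0, 1], [1, 0]], [[2, 0, 1], [1, 2, 0]],
                   rounds=1, quota=1))   # -> {0: 0, 1: 2}; applicant 1 unmatched

With rounds=1 and quota=1 each applicant makes a single offer, and one desirable match goes unrealized, which is the congestion the abstract describes.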