Confidence in Inference
Abstract
I study a decision-maker who chooses between objects, each associated with a sample of signals. I axiomatically characterize the set of choices that are consistent with established models of belief updating. A simple thought experiment yields a natural choice pattern that lies outside this set. In particular, the effect of increasing sample size on choice cannot be rationalized by these models. In a controlled experiment, 95% of subjects' choices violate models of belief updating. Using a novel incentive-compatible confidence elicitation mechanism, I find that confidence in correctly interpreting samples influences choice. As suggested by the thought experiment, many subjects display a sample size neglect bias, which is positively associated with confidence.
A Procedural Model of Complexity Under Risk
Abstract
I consider a decision-maker who uses rules to simplify lotteries in order to compare them. I characterize expected utility in this setting and highlight its complexity requirements, which a purely axiomatic characterization overlooks. I relax these requirements to characterize two models of complexity aversion: an outcome support size cost model and an entropy cost model. I then consider an additional aspect of complexity: decision-makers find it easier to evaluate a lottery when its outcomes are close in value. To capture this, I characterize a third model of complexity aversion. Here the decision-maker first groups together outcomes that are close in value and then evaluates the lottery along with the complexity of the partition. This representation offers a measure of complexity that is not restricted to probabilities and support size but also accounts for the cardinal values of the outcomes. I also compare the models empirically and find support for partition complexity.
Updating and Misspecification: Evidence from the Classroom
with Marc-Antoine Chatelain, Paul Han and Xiner Xu