I study a decision-maker who chooses between objects, each associated with a sample of signals. I axiomatically characterize the set of choices that are consistent with established models of belief updating. A simple thought experiment yields a natural choice pattern that lies outside this set. In particular, the effect of increasing sample size on choice cannot be rationalized by these models. In a controlled experiment, 95% of subjects' choices violate models of belief updating. Using a novel incentive-compatible confidence elicitation mechanism, I find that confidence in correctly interpreting samples influences choice. As suggested by the thought experiment, many subjects display a sample-size neglect bias, which is associated with higher confidence.

**A Procedural Model of Complexity Under Risk**

I consider a decision-maker who uses rules to simplify lotteries in order to compare them. I characterize expected utility in this setting and highlight its complexity requirements, which a purely axiomatic characterization overlooks. I relax these requirements to characterize two models of complexity aversion: an outcome support-size cost model and an entropy cost model. I then consider an additional aspect of complexity: decision-makers find it easier to evaluate a lottery when its outcomes are close in value. To capture this, I characterize a third model of complexity aversion, in which the decision-maker first partitions outcomes that are close in value and then evaluates the lottery together with the complexity of the partition. This representation offers a measure of complexity that is not restricted to probabilities and support size but also accounts for the cardinal values of the outcomes. I also compare the models empirically and find support for partition complexity.

**Updating Bias and Model Misspecification: Evidence from the Classroom** *(draft coming soon)*

with Marc-Antoine Chatelain, Paul Han, and Xiner Xu

Agents may fail to learn because they misperceive information or update with bias. We study this issue by collecting, in an incentive-compatible manner, a rich high-frequency dataset on students' beliefs in the context of freshman courses. Using students' beliefs about their expected grade on each test and about the noisiness of testing, we estimate, for the first time outside of the lab, measures of misspecification and of updating biases, as well as their evolution over time. We find that students tend to overestimate testing noise and underreact to information. We investigate the role of model misspecification by conducting a randomized controlled trial in which treated students are given additional, non-personal information on the testing noise. The treatment tends to improve students' beliefs about the noisiness of testing and how they process information. Our findings suggest that misspecification is a major obstacle to processing information, but one that can be alleviated via simple interventions.