The Bob Critique

(or why any claim of learning needs to include an analysis of observations across trials)


Consider Bob, who is engaged in a repeated guessing task. At the beginning of every trial, the experimenter makes an independent draw from the same symmetric distribution on a set of real numbers. Bob is asked to provide his best guess of that trial-specific draw. After Bob guesses, the trial-specific draw is revealed to him. While Bob is not explicitly given the distribution, he can learn it from experience.


However, suppose that Bob simply guesses the draw revealed on the previous trial. Note that Bob's guesses do not become more accurate across trials, and his trial-specific guesses do not converge to the mean.


Although the running average of Bob's responses will converge to the mean, a researcher should not conclude that Bob has learned the distribution.
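A short simulation makes the point concrete. As an illustrative sketch (the choice of a uniform distribution on [-1, 1] with mean 0 is my assumption, not part of the setup above), Bob's lag-one strategy produces per-trial errors that never shrink, even as the running average of his guesses converges to the true mean:

```python
import random
import statistics

random.seed(0)

# Hypothetical environment: i.i.d. draws from a symmetric distribution,
# here uniform on [-1, 1] with mean 0.
n_trials = 100_000
draws = [random.uniform(-1, 1) for _ in range(n_trials)]

# Bob's strategy: on each trial, guess the previously revealed draw.
guesses = [0.0] + draws[:-1]  # arbitrary first guess before any feedback

# Per-trial squared error does not shrink from early to late trials...
early_mse = statistics.mean((g - d) ** 2 for g, d in zip(guesses[:1000], draws[:1000]))
late_mse = statistics.mean((g - d) ** 2 for g, d in zip(guesses[-1000:], draws[-1000:]))

# ...yet the average of Bob's responses converges to the true mean (0).
avg_guess = statistics.mean(guesses)

print(f"early MSE: {early_mse:.3f}, late MSE: {late_mse:.3f}")
print(f"average guess: {avg_guess:.4f} (true mean: 0)")
```

A researcher who looks only at `avg_guess` sees a response centered on the true mean; a researcher who looks at the trial-by-trial errors sees that no learning has occurred.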


If a researcher only analyzes data averaged across trials, Bob's responses might be mistakenly identified as evidence of learning. In fact, thousands of psychology and neuroscience papers analyze only responses that have been averaged across trials and would declare that Bob is “learning,” or even that Bob is “Bayesian learning.” (Economists, do not laugh: these are now appearing in top economics journals too!)


When researchers place subjects in a stochastic environment, any claim that subjects have learned aspects of that environment should be supported by an analysis of responses across trials. Otherwise, they risk concluding that Bob is learning, and they are vulnerable to the Bob Critique. Here are my efforts on the matter.