One of the most common findings in behavioral decision research is that people often have unrealistic beliefs about how much they know, but only recently have researchers begun to examine the consequences of these unrealistic beliefs. Unfortunately, examination of this issue is complicated by the use of different ways of characterizing unrealistic beliefs about one’s knowledge. This paper examines the implications of two common measures – labeled overconfidence and unjustified confidence – showing how and where they can lead to different conclusions when used for prediction. The authors first consider the conceptual, measurement, and analytic issues that distinguish these measures. Next, they provide a set of simulations designed to elucidate when these two methods of characterizing unrealistic beliefs about one’s knowledge will lead to different conclusions. Finally, they illustrate the main findings from the simulations with three empirical examples drawn from their own data. The results highlight the need for clarity in the match between research question and measurement strategy.