Presenting the results of educational assessments to lay audiences requires a difficult balance between accuracy and simplicity. This study investigates how effectively the National Assessment of Educational Progress (NAEP) has communicated information about the performance of American students to the print media. In the 1990 assessment of mathematics, NAEP tried two methods of characterizing student performance. The first selected arbitrary points on the distribution of student scores, called "anchor points," and used items answered correctly by most students at each point as a basis for descriptions of student performance. The second set three points on the scale, called "achievement levels," to reflect judgments about what students should be able to do and provided verbal descriptions of the proficiency of students at each level. A review of a large number of newspaper and magazine articles found that press writers made wide use of both the anchor-point and achievement-level descriptions but often used them unsatisfactorily. The resulting reports appeared clear, but many were in fact simplistic or incorrect. For example, many writers who presented actual test items confused the percentage of students reaching each level with the typically much higher percentage answering illustrative items correctly. Writers rarely mentioned the judgmental basis of the achievement levels. The results suggest that better methods are needed to profile student performance for lay audiences.