In the article “The assessment of thoughtful literacy in NAEP: why the states aren’t measuring up,” the authors perform a study to determine the differences between state achievement tests in reading comprehension and the National Assessment of Educational Progress (NAEP) Framework. It is often assumed that these state achievement tests are fairly comparable to NAEP, since the educational community agrees that reading comprehension requires the reader to use prior knowledge to arrive at thoughtful, logical answers and thus become a mature reader. However, this is not the case. Researchers have observed many teachers teaching to the test, which reveals that students are memorizing and reciting details instead of thinking about and responding to text. In 2005, many states reported students achieving proficiency at rates averaging 40% higher than NAEP standards. The article states that these exaggerated levels of achievement may be due to states “lowering the bar” under the pressure of the No Child Left Behind Act.
To determine whether state tests measure up to NAEP, the authors collected fourth-grade sample state achievement tests in reading comprehension from California, Florida, Wisconsin, Illinois, New York, North Carolina, Pennsylvania, and Texas. Each sample test was classified by item type (open-ended or multiple-choice format); item objective (assessing vocabulary knowledge, familiarity with genre, text organization, characterization, or text detail); and item purpose and cognitive demand (text-emphasis or higher-order interpretation items). The authors defined text-emphasis items as those with improbable distractors that require very little thought to answer. Higher-order interpretation items require readers to answer questions logically, drawing on their understanding of the text or their personal experiences.
The authors found that NAEP used 57% open-ended questions to assess comprehension, while the sample state assessments used an average of only 7%. Although Florida used the most open-ended questions among the states, its sample test still contained fewer than half as many open-ended questions as NAEP.
Vocabulary items were used regularly in the state tests to assess comprehension, but rarely by NAEP. California and Wisconsin allotted 25% of their comprehension items to vocabulary assessment; the average for the other states was 17%.
NAEP devoted only 2% of its items to assessing the ability to identify genre elements. Texas and Wisconsin averaged about the same as NAEP. However, California, Illinois, and North Carolina used more than 15% of their items to assess knowledge of genre.
For the objective of text organization, NAEP and the state average both allotted 25% of their items, while Florida used 13% and North Carolina used 10%.
NAEP used more items for characterization and detail (46% and 24%, respectively), while the state averages were at least 11 percentage points below NAEP for characterization and about 6 points below for detail.
Data collected by the authors suggest that both NAEP and the state tests devote more than half of their items to assessing higher-order thinking skills. However, when the actual items used for each objective were compared, NAEP rated higher than the state tests.
In conclusion, the authors determined that there are significant differences between NAEP and the state tests. NAEP allocates more open-ended items for reading assessment, more items requiring higher-order thinking responses, and fewer genre and vocabulary items for comprehension. The authors suggest that teachers who encourage students to reach a mature reading level will better prepare them for both state and national accountability assessments.