It wasn’t too long ago that we were in the midst of a high-stakes game of legislative testing chicken. The Great Testing Debate of 2015 eventually ground to a (sort of) conclusion that involved reducing the amount of testing overall and creating a pilot program to look into new possibilities on the testing front. Yes, I remember that I promised you a breakdown of that big, messy compromise a while back. No, we aren’t going to do that today.
Oh, stop looking so disappointed. We have plenty of time in the coming weeks to get our edu-nerd on when it comes to testing frequency and pilot programs. But as my policy friends Ben DeGrow and Ross Izard pointed out in a joint op-ed near the end of the legislative session, discussions about the tests being used, and how to make them better, are equally important.
That’s why I was glad to read last week that the PARCC consortium will be cutting down on both the amount of time students will spend taking tests and the logistical headaches caused by administering tests in two separate windows. Education Week provides a nice summary of the changes:
In the face of rising opposition to testing, the PARCC consortium has decided to carve 90 minutes off its 10- to 11-hour-long assessment, and shift the start of testing to later in the school year … The revised design will take effect with the 2015-16 PARCC tests in English/language arts and mathematics, which are given to students in grades 3-11.
… In addition to shortening the test, the new design also folds PARCC’s two testing windows into one 30-day window, and requires that states wait until three-quarters of the way through the school year before starting the testing period … Next year, states will have one 30-day period to give the tests. It can’t begin until 75 percent of instruction has been completed—day 133 in a 180-day instructional year—and it must be finished by the time 90 percent of instruction has been completed.
For you math folks, that’s a roughly 15 percent reduction in testing time even after you factor in the longer individual “units.” That certainly sounds like an improvement to me. I’m sure the schools will be happy with the consolidated window, too.
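For anyone who wants to check that figure, here is the back-of-the-envelope arithmetic, assuming the 10- to 11-hour total cited above:

```latex
\frac{90 \text{ min}}{10 \text{ hr} \times 60 \text{ min/hr}} = \frac{90}{600} = 15\%
\qquad
\frac{90}{11 \times 60} = \frac{90}{660} \approx 13.6\%
```

So the cut works out to between roughly 14 and 15 percent, depending on which end of the original testing-time range you start from.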
But that doesn’t mean all is well and good in PARCC land. Some concerns still linger. In particular, some folks worry that shifting the testing window could make it difficult to fairly compare this year’s PARCC results with next year’s. The concern is that some test takers, especially those who tested early this year, will have more instructional time before testing under the new system.
Interesting stuff, to be sure. The timing of tests is an aspect of the debate that didn’t get much play during the legislative session, but it’s a crucial piece of the testing discussion. I’ll admit that this particular comparability wrinkle hasn’t previously struck me as a major issue, but there may be something to it in some instances. Then again, more instructional time ahead of a test for some students sounds like a win to me.
A new sampling technique for field testing new questions has also raised some eyebrows, because it means some students will have to take an extra experimental section rather than having the new questions quietly slipped into operational tests. This piece concerns me far less; in fact, it’s hard for me not to think the approach could help answer whether sampling could ever truly be a viable full-scale testing strategy in the future.
It could also affect test balance depending on how the sampling is done. Regardless of all that, I suspect the kids chosen to participate in the extra testing session will be somewhat unenthusiastic about the extra time spent in front of a computer. Yet allowing students to self-select in or out could throw off the sample. That, my friends, is a pickle.
All told, I’m happy to see PARCC responding to the legitimate criticisms that have been leveled against it in recent months. Whether this move is enough to save a tainted brand is up for grabs. It doesn’t deal with the issues of federal intrusion (real or perceived), nor does it address more general concerns about the value of standardized testing overall. Still, progress is progress.