So new research suggests that, contrary to the public perception created by Academically Adrift, college students aren't going through college without learning anything. I say contrary to the public perception, rather than contrary to the text, because the book in fact found that not only did the group of college students tested show statistically significant gains between their first and fourth semesters (not between their first and last years, a common misrepresentation of the study) as a whole, but so did every individual subgroup within the sample. I suppose I can't blame the media too much for the misperception that the book showed only failure, given that its authors took every opportunity to play up the idea of a failed higher education system.
Of course, it's easy to show pessimistic findings when every aspect of your methodology is bent towards that result. Richard Haswell, in a review essay from Research in the Teaching of English (PDF):
I should be clear that my final sounding of this book is not that the authors misinterpret their own findings. I believe that their findings cannot be interpreted at all. As regards the significance of their research and its methodology to the college composition profession, my conclusion is terse. If you want to cite these authors in support of the failings of undergraduate writing, don’t. If you want to cite these authors in support of the successes of undergraduate writing, don’t. Academically Adrift’s data—as generated and analyzed—cannot be relied on.
Harsh judgment on a book published by the University of Chicago Press. But consider two research scenarios. Research Team A wants to test the null hypothesis that students do not gain in writing skills during college. What do the researchers do? Whether using a cross-sectional or longitudinal design, they make sure the earlier group of writers is equivalent to the later group. They randomly select participants rather than let them self-select. They create writing prompts that fit the kind of writing that participants are currently learning in courses. They apply measures of the writing that are transparent and interpretable. They space pretest and post-test as far apart as they can to allow participants maximum chance to show gain. They control for retest effects. They limit measures and discussion to no more than what their statistical testing and empirical findings allow.

Meanwhile Team B is testing the same hypothesis. What do they do? They create a self-selected set of participants and show little concern when more than half of the pretest group drops out of the experiment before the post-test. They choose to test that part of the four academic years when students are least likely to record gain, from the first year through the second year, ending at the well-known “sophomore slump.” They choose prompts that ask participants to write in genres they have not studied or used in their courses. They keep secret the ways that they measured and rated the student writing. They disregard possible retest effects. They run hundreds of tests of statistical significance looking for anything that will support the hypothesis of nongain and push their implications far beyond the data they thus generate.
I am not speculating about the intentions or motives of the authors of Academically Adrift (AA). I am just noting that AA follows the methodology of Team B and not Team A.

But of course, the conclusion that college is worthless pleases many people in our media with flagrantly anti-academic biases, so I doubt this new study will get much press.