Noncognitive Measures Are ‘Not a Magic Wand’
January 18, 2013, 12:00 pm
By Eric Hoover
Los Angeles — On Thursday night an admissions dean here asked me why I’d described noncognitive assessment as the “next frontier” in college admissions. That was presumptuous, he said, too optimistic.
I’d chosen the word “frontier” for a reason, I told him. Frontiers are places of promise and possibility, but they also abound with uncertainty. That’s a fair way of describing how many admissions deans view the prospect of using alternative measures of student potential. Nobody’s calling them a panacea.
At a conference here hosted by the University of Southern California’s Center for Enrollment Research, Policy, and Practice, many attendees have predicted that the future of college admissions will include more assessments of attributes not captured by standardized-test scores and grade-point averages. As science reveals more and more about what matters in learning, it follows that our measures of merit will evolve.
For now, a sense of new possibilities goes hand in hand with concerns. Patrick Kyllonen, senior research director of the Educational Testing Service’s Center for Academic and Workforce Readiness and Success, said noncognitive assessments could help colleges better serve students once they enroll—and help employers make better hiring decisions.
Yet such assessments, like conventional tests, are susceptible to coaching, Mr. Kyllonen said. Wherever tests go, test prep follows.
Pamela T. Horne, associate vice provost for enrollment management and dean of admissions at Purdue University, said she saw promise in noncognitive assessment. But she questioned whether, during a time of tight resources, many colleges could justify investing time and money in experimental measures. She also wondered about applicants: Would teenagers welcome more application requirements?
Jon Boeckenstedt, associate vice president for enrollment management at DePaul University, uttered the quote of the day when he said: “There is no silver bullet in trying to predict who’s going to do well.”
At DePaul, applicants who do not submit test scores must complete essays designed to measure noncognitive traits. The university has required such essays for several years, and applicants' scores have told it a little—not a lot—about their chances of success, Mr. Boeckenstedt said.
Applicants with higher essay scores were retained at a higher rate than those with lower scores, regardless of their ACT scores, he said. But the high-school GPA remains the best predictor of first-year success at DePaul.
David Coleman, president of the College Board, called for realistic expectations. Despite research showing the predictive value of noncognitive measures, they don't add much when combined with SAT or ACT scores, he said. "They are not a magic wand."
Noncognitive assessments might prove most useful in evaluating applicants with high grades and low test scores, and those with low grades and high test scores, Mr. Coleman suggested. The potential of noncognitive measures to reveal the talents of students who fall short on both counts struck him as less promising.
“The thought that with that double negative you’re going to get a positive,” he said, “is not realistic.”
‘How We Separate Merit From Privilege’
January 20, 2013, 7:06 pm
By Eric Hoover
Los Angeles — At a conference hosted by the University of Southern California’s Center for Enrollment Research, Policy, and Practice last week, Charles E. Lovelace Jr. uttered the most memorable quote. The next great challenge in college admissions, he said, is “how we separate merit from privilege.”
Mr. Lovelace is executive director of the University of North Carolina at Chapel Hill’s Morehead-Cain Foundation, which annually provides full-ride scholarships to 50 undergraduates. In an article today, I describe how and why the university revamped its selection process, incorporating noncognitive measures of students’ potential. Morehead-Cain is one of many scholarship providers that assess attributes such as character and leadership.
Most colleges, however, have not adopted nontraditional assessments. Why?
On Friday, I moderated a panel with Inside Higher Ed’s Scott Jaschik, who asked that question of the admissions officers in the room. He described the “contradictions” he saw: Colleges describe the importance of noncognitive qualities, yet cling to the ACT and SAT, which measure cognitive skills.
To expand on Scott’s good observation, I’d ask colleges why they have not even experimented with other measures of merit, embracing the same trial-and-error process that colleges preach to students. After all, admissions officials often lament the limitations of standardized-test scores, which correlate with race and family income. Where’s the will to innovate?
I'll take the optimistic (or naïve) view: Just because noncognitive assessments haven't taken hold in undergraduate admissions does not mean they won't in the future. After all, tests evolve, and so do people's opinions of what tests should measure. (The mighty SAT, it's worth remembering, was not handed down from the heavens; once it was just a brand-new invention, based on an informed hunch.)
Changes are visible. Scholarship providers, along with some graduate programs and medical schools, have adopted noncognitive measurements that could provide useful models for undergraduate admissions. And science is delivering new insights into how learning happens, which have implications for assessment design.
For now, practical concerns explain much of the hesitation among admissions officials. Testing companies, for one thing, haven't provided colleges with noncognitive assessments for applicants, so institutions must create their own or adapt them from models developed by researchers. That takes work.
Pamela T. Horne, associate vice provost for enrollment management and dean of admissions at Purdue University, has worried about the time and money that new assessments would require. Noncognitive measures, she said, are also susceptible to manipulation, or “coaching,” a concern many of her colleagues share.
It’s not my job to tell colleges which tests to use, but as an observer at this fascinating conference last week, I overheard objections to noncognitive assessments that also apply to conventional assessments: They’re “not perfect,” they “help some students more than others,” and they “don’t add much beyond high-school GPA.” Coaching is a valid concern, for sure, but the form of coaching known as test preparation hasn’t prompted most colleges to wash their hands of the SAT.
Looking ahead, one can either accept or reject Mr. Lovelace’s premise. If separating merit from privilege is a college’s goal, then enrollment leaders must ask themselves if they have the best tools for achieving that goal. If not, they have a choice: They can find, develop, or demand new tools, or they can keep wielding the ones they’ve got.
What’s This Test For?
January 17, 2013, 11:44 pm
By Eric Hoover
Los Angeles — My father, a handy fellow around the house, has often told me to make sure I have the right tool for the job. I was reminded of that advice on Thursday while listening to Sheldon Zedeck, a professor of psychology at the University of California at Berkeley.
Here at a conference sponsored by the University of Southern California’s Center for Enrollment Research, Policy, and Practice, Mr. Zedeck described his research on predicting the effectiveness of lawyers. The Law School Admission Test, or LSAT, plays a large role in determining who gets into law schools. Research has shown that it does a good job of predicting a student’s success … in law school.
But to measure an applicant’s potential for success as a lawyer, Mr. Zedeck has concluded, you need a different tool. “The LSAT is not a good predictor of effective lawyering,” he said.
Mr. Zedeck helped design and develop experimental tests meant to assess 26 factors, such as problem solving, advocacy, and communication skills, that the LSAT doesn’t measure. He based those measures on the factors that legal professionals associated with success in their field.
The verdict? Mr. Zedeck said his tests reliably predicted lawyers' effectiveness on the job. He found that performance did not vary significantly by the race and gender of applicants, suggesting that the measures could help law schools diversify their applicant pools. He also suggested that the measures, used to complement the LSAT, could help law schools better align their curricula with skills that are valued within the profession.
“You could do the same things for undergraduates,” he said, “by asking, What do you want your undergraduates to be like?”
If you buy Mr. Zedeck’s work, there’s nothing wrong with the LSAT. Or the ACT or the SAT, for that matter. It’s just that, like any tool, they’re good for some jobs but not for others.