How to assess learning outcomes in higher education
November 18, 2013

How Best to Assess?

ST. LOUIS – As the Obama administration pushes ahead with a controversial plan to create a new federal system for rating colleges – with a focus on affordability, access and outcomes – the subject of how best to assess higher education learning and other outcomes was a particularly hot one here at the annual meeting of the Association for the Study of Higher Education.

A Thursday morning pre-conference symposium on performance indicators, convened by ASHE’s Council on Public Policy in Higher Education, brought together a number of big names in assessment for a lively discussion of how best to evaluate colleges’ educational quality, which resulted in almost no agreement of any kind.

And a paper presented at the main conference later that day tackled the question of why legislators and members of the public complain that no one quite knows just what colleges are doing with students’ time and money – even as those in higher education may feel they already have more assessment data than they know what to do with.

Different Audiences, Different Needs

At the morning session, “Higher Education Indicators: What is the Best Approach to Meeting the Information Needs of Multiple Stakeholders?,” panelists were asked to make a five-minute case for which indicators they believe to be most important, and for which audience.

The first speaker, Lisa Lattuca of the University of Michigan, argued that the most pressing need is for measures of learning as opposed to measures of achievement. Lattuca, a professor in the Center for the Study of Higher and Postsecondary Education, proposed a system in which students would be evaluated on their progress in broadly applicable areas such as critical thinking and communication competency as well as specific learning in their major. The data gleaned from these assessments would serve both to show educators where they are succeeding and where they need to improve, and to inform policy makers and members of the public about what institutions and programs are trying to teach students and whether students are learning it.

The importance and the difficulty of measuring student learning proved a central theme of the discussion, as well as the point of most disagreement. That’s not surprising considering the current state of the broader debate on assessment: while affordability and access are also key aspects of the equation (and are not necessarily easy to measure, either), the issue of learning outcomes has tended to be most contentious. “Throughput” outcomes (i.e., completion rates) are simple enough to track – though even their accuracy is disputed – but the question of what exactly students are gaining on their way through is another matter entirely.

Despite a federal focus on educational quality that began some eight years ago with the Spellings Commission, efforts to quantify actual learning have made little clear headway – and many in higher education remain concerned that any such effort is bound to be dangerously reductive.

One panel speaker, Robert Shireman, former U.S. deputy under secretary of education and now executive director of California Competes, said that the best-case scenario would be a sort of “Thinkbit” for students – like a Fitbit, the wearable device that tracks the user’s physical activity, except that the Thinkbit would tell you, “Is there real learning going on?”

But “we don’t have a Thinkbit yet,” Shireman said, and “as the president of the United States said, we are going to have ratings” – and soon.

Considering the tools that exist now, Shireman said, his best suggestion for improving outcomes assessment is that the Department of Education use the email addresses of everyone who has applied for federal financial aid to survey current and former students about their experiences in college. (Shireman has already broached this idea elsewhere.)

“Take what [the National Survey of Student Engagement] has learned about the kinds of things you can ask people” – about what they are doing in college, the kinds of feedback they’re getting on their work, the availability of faculty, etc. – and use that information, Shireman said, to get a real sense of how effectively colleges are serving their students and to what extent they are engaged in the types of activities that promote learning.

But NSSE’s own director, Alexander McCormick of Indiana University, took issue with Shireman’s idea, citing Campbell’s Law: “The more any quantitative social indicator is used for social decision-making, the more subject it will be to corruption pressures and the more apt it will be to distort and corrupt the social processes it is intended to monitor.”

McCormick pointed out two flaws in the idea of using NSSE-style data to rate colleges. One, he said, is that “most of the variability in student engagement is among students, not between institutions” – differences between institutions account for only about 10 percent of the overall variation, so “you’re comparing the tips of a lot of different icebergs.”

The other problem, according to McCormick, is that once students realized that their responses were being used to rate colleges, they would be likely to respond in ways that they saw as best for their own interests. (This concern is particularly salient in light of President Obama’s proposal to tie federal student aid to college ratings.)

McCormick recommended instead an “independent, auditable, transparent process of assessment, broken down by major academic units at large institutions.” This would be based on samples of syllabuses, tests and assignments across years and fields of study. Institutions would receive “letter grades for academic challenge, academic support, clarity of learning objectives and alignment of assessment with objectives.”

Countered Shireman: “Useful is what students do and what they get, whereas a syllabus shows what faculty want to do or think they’re doing… Imagine all the amazing syllabi we’d get from for-profits if we used syllabi to measure what colleges do.” (Shireman led the Obama administration’s efforts to crack down on for-profit institutions.)

One thing on which Shireman, McCormick, and the panel’s other participants did agree was that no one system of assessment could meet the needs of all interested parties. Students, educators, taxpayers, and policy makers all need different types of information, and for different purposes. That’s “where the president’s plan falls apart,” Shireman said.

But, he added, a ratings system could be “personalized” to some degree in order to address the specific interests of students, legislators, and other stakeholders, which “would temper some of the problems associated with a ratings system.”

“Data Gluttony” vs. “the Black Box”

Later in the ASHE meeting, at another panel on assessment issues, Corbin M. Campbell of Teachers College, Columbia University, stated the multiple-stakeholders problem succinctly: “No assessment can serve two masters.”

In the working paper she presented, titled “Assessing College Quality: Illuminating the Black Box and Contending With Data Gluttony in Higher Education,” Campbell addresses the disconnect between the “black box” – the sense among the public and policy makers that colleges aren’t being held accountable for their outcomes, and no one really knows what students are getting – and “data gluttony” – the sense within higher education that institutions are constantly assessing educational processes and their effectiveness.

(The idea of data gluttony was nicely illustrated at that same panel by Natasha Jankowski of the National Institute for Learning Outcomes Assessment at the University of Illinois. In a survey of 725 institutions, which was conducted in 2009 and again in 2013, Jankowski and Timothy Cain of the University of Georgia found that the average number of assessments used by each institution rose from three to five in just four years.)

According to Campbell, institutions have focused on “formative” assessments intended to help improve student learning and institutional quality (such as NSSE and the Collegiate Learning Assessment), while policy makers and members of the public typically see “summative” assessments that allow for broad comparisons across institutions (such as performance funding models and regional accreditation for policy makers, and U.S. News and other popular college rankings for members of the public).

The summative assessments seen by the public are generally “corporate-driven” and focused on inputs, resources, and reputations. As a result, “almost all the data they see lacks the ability to describe teaching quality, academic rigor, and educational experiences.”

“Higher educators have made great strides in developing formative assessment measures for institutions,” Campbell writes, but “in order to illuminate the black box, [they] must focus on the needs of the public for summative assessment of educational quality across institutions.”
