The debate over Obama's rating system
February 20, 2014


Rating (and Berating) the Ratings

Doug Lederman, Michael Stratford and Scott Jaschik, Inside Higher Ed, February 7, 2014

WASHINGTON — The Obama administration on Thursday released hundreds of pages of formal comments on its proposed college rating system, documents that mostly underscore the deep reservations that many higher education leaders have about the plan but also highlight pockets of support.

Nearly every major higher education group submitting comments on the rating system expressed concerns about the proposal.

Read more: http://www.insidehighered.com/news/2014/02/07/colleges-and-analysts-respond-obama-ratings-proposal#ixzz2sdoTxAb9 Inside Higher Ed


Molly Corbett Broad, president of the American Council on Education, in a letter signed by 19 higher education associations, outlined a range of pragmatic concerns about how the ratings regime may harm higher education but also questioned whether producing such a system was an appropriate role for the federal government to play in the first place.

“Beyond the many questions and technical challenges that surround the development and implementation of a proposed rating system, rating colleges and universities is a significant expansion of the federal role in higher education and breaks new ground for the department,” Broad wrote. “Moreover, it is extremely important to note that a federal rating system will carry considerably more weight and authority than those done by others.”

Comments from other higher education associations largely echoed the concerns of many college leaders: they worry that a ratings system will create improper incentives for institutions, undermine the value of higher education and cut off access to institutions that serve low-income and underprivileged students.

But none were as forceful in criticizing the proposed ratings system as the National Association of Independent Colleges and Universities. David Warren, the group’s president, said that his members were fundamentally opposed to the concept of a college ratings system.

“Private, independent college leaders do not believe it is possible to create a single metric that can successfully compare the broad array of American higher education institutions without creating serious unintended consequences,” Warren wrote, adding that any rating system would reflect policy makers’ priorities rather than those of individual students searching for a college.

“By its nature, a metric is quantitative,” he wrote. “Whereas finding a ‘best fit’ college has qualitative aspects that are equally as, or even more important than, the quantitative aspects.”

The Association of Public and Land-grant Universities, for its part, embraced the notion that its member institutions should be judged based on certain metrics. The group said the department should pursue an alternative to a federal ratings system, in which colleges are judged based on a set of measures (including graduation, loan repayment, and employment rates) that are adjusted by a “student readiness index” that controls for different types of student populations.

The public-private divide was stark among the institutions’ responses, too. Private colleges clearly got their talking points from NAICU, with many of them echoing the group’s arguments that a rating system would be reductionist and that tying federal aid to such a system would hurt, not help, the low-income students administration officials say they aim to help.

The president’s plan, “while well intentioned,” said Arthur Kirk, president of St. Leo University, “is inherently flawed in its strategy to tie the ‘value’ of a college education to federal funding for students through a single rating system. The purpose of the plan and proposed rating system is to increase access to higher education for all students, and especially to help students from disadvantaged backgrounds. Yet implementing a ratings system using data as it is currently collected through IPEDS will likely disenfranchise the very students it is supposed to help.”

And to almost a person, the presidents of private institutions encouraged the administration not to bother creating a new system, but to depend on NAICU’s existing University and College Accountability Network (U-CAN) instead. Public universities were far from wholly united either, and they were not enthusiastic about the idea of a federal rating system.
But like their associations, APLU and the American Association of State Colleges and Universities, several major public university systems essentially took as a given that the government plans to create a new accountability regime one way or the other, and went to significant lengths to lay out how such a system might work well — and the pitfalls it might face.

“As a public system, the State University of New York knows that our responsibility to meet economic and social objectives lies at the heart of our duty to accountability and transparency,” State University of New York System Chancellor Nancy Zimpher wrote in the system’s submission. “As a public system, we embrace the rating system initiative as a tool for stakeholders, one that will enhance student success through more informed decision‐making.”

The California State University System’s response, for example, which ran to 7,500 words (and two Excel spreadsheets), sorted among dozens of metrics that a rating system might incorporate, differentiating between those that are already collected (percent of students who are Pell recipients, net tuition by income level, annual completions, etc.) and those that aren’t (proportion of need-based aid offered in loans, median indebtedness of graduates, first-year retention gap between minority and non-minority students).

One of the country’s other large public systems, the University of Wisconsin System, departed from its peers by co-signing a letter with the state’s private and technical colleges that argued for improving the information the government already collects through the Integrated Postsecondary Education Data System rather than creating a new rating system.

Accountability vs. Consumer Choice

Among the philosophical questions that the respondents were encouraged to consider was exactly what purpose the new federal system should strive to fulfill.
The Education Department’s own request framed the issue plainly, noting how different the metrics and approach might be if the system is primarily for accountability as opposed to providing consumer information. Several commenters urged the department to focus on the former rather than the latter.

“For practical and political reasons, the number of metrics used should be small and focused on accountability for minimum standards, rather than on comprehensive evaluation and comparison of institutions, which would be more important for consumer information,” wrote Nate Johnson, a former state official in Florida who is now a consultant. “The Department should make clear that it is not attempting to cover everything of importance in higher education.”

The New America Foundation, whose staff includes several former Education Department officials who work closely with the administration, generally agreed with that view — with a twist. “A formal ratings system focused on accountability would be the best way to leverage the federal government’s unique position in the higher education system,” the foundation said in its submission. “But the ratings system must be part of a larger effort around data and transparency that carries with it a strong consumer information component. We strongly believe that students and other consumers of higher education deserve high-quality, actionable data — and believe that the federal government can help collect and play an important role in making data available to consumers.”

The more the system is focused on institutional accountability over consumer choice, the simpler it can (and should) be, several commenters argued. David A. Longanecker, president of the Western Interstate Commission for Higher Education and himself a former top Education Department official (with the bruises to show for it), said that “[t]he most critical information for students or prospective students and for policy makers is simple: how likely it is that the student will succeed and how much will it cost?”

He argued for a system based on just three metrics: “the completion rate from institutions with similar missions, roles, and characteristics”; “the success of students in subsequent employment,” though he encouraged a broad definition of success and strongly discouraged any use of income data; and a measure of the “net price” for three categories of students: Pell Grant students, students with other federal grants, and students with no grants.

Appropriate Focus on Job Outcomes?

The part of the administration’s ratings system that would evaluate colleges based on the post-graduation success of graduates (measured by such factors as employment rates and average salaries) was harshly criticized by many college leaders, especially at private institutions. Many said that such a system would unfairly punish colleges that educate many people who perform essential jobs that don’t make them wealthy.

Sister Carol Jean Vale, president of Chestnut Hill College, in Pennsylvania, wrote that “the most complex issue considered by the administration is the post-graduation income metric.” She wrote that “[i]f what is earned in the first few years of post-college becomes the major measure of ‘value,’ then the unintended consequence would be that society would lose much of its community-based occupations such as social workers or P-12 teachers because colleges and universities would compete to offer only the highest ‘valued’ degree leading to highest income potential. On levels too many to mention, this vision is much too narrowly focused and harmful to the larger good.”

Christina H. Paxson, president of Brown University, also raised concerns about the impact on colleges that train teachers. Further, she said such a system would “undervalue graduate school attendance, which would appear as low wages in a rating.”

Measuring post-graduation earnings (and debt levels) raised particular issues for some specialized institutions. Grafton Nunes, president of the Cleveland Institute of Art, wrote that graduates of his college (a nonprofit institution founded in 1882) “designed everything from the Tiffany lamps of the 19th century, to inventing the cab-over-engine truck, designing the first Mustang, Corvette and Thunderbird, the Crossfire and the Genesis, hold the patents on products such as the Dirt Devil, the spin brush and the Swiffer, designed the Moen fixtures in your home and all the lawn mowers from Sears and Cub Cadet that you ride.”

But Nunes added that “none of these professions are reflected in the Department of Education lists of jobs available to students of art and design. And up to now, the DOE has shown no interest in correcting these omissions. If the professions available to this cohort of students cannot be accurately reflected in a simple list of professions, how can we who are presidents of art and design colleges trust fair and accurate representation in a one size fits all metric?”

One might not expect opposition to the idea from medical schools, since many of their graduates go on to earn high salaries. But Atul Grover, chief public policy officer for the Association of American Medical Colleges, noted that many medical students graduate with heavy debt and (at least while they are medical residents) modest salaries.

“The median education debt for indebted medical school graduates in 2013 was $175,000. During residency training, physicians earn a stipend; however, that income is generally not sufficient to begin full repayment of educational loans, and is certainly not indicative of the future practicing physician’s salary,” Grover wrote. “As a result, medical residents depend on federal financial aid options such as forbearance and income-based repayment to postpone or reduce their obligations until they become licensed physicians. Any proposed rating system must not penalize borrowers who use these repayment options after graduation.”

One college leader who urged a focus on post-graduation success was Joseph E. Aoun, president of Northeastern University, an institution known for its co-op program, which many graduates credit with getting them on the path to good jobs. Aoun wrote that it was important to broaden outcome measures beyond graduation rates and to consider “the work colleges are doing to prepare students for long-term career success.”

But Aoun cautioned against simple measures and stressed the need to develop sophisticated ways to evaluate success. He pointed to alumni surveys covering not only salaries but also career satisfaction, “long-term return on investment,” student loan debt-to-income ratios, and “sense of purpose several years after graduation.” He said that Northeastern and some other colleges are working on such surveys, which could be useful.

Despite all the skepticism about measuring job placement and earnings post-graduation, a letter submitted by Young Invincibles, a group that advocates on behalf of young Americans, was much more supportive of the concept. The letter cited various national polls, as well as focus groups that Young Invincibles has conducted with students, and said that data on job placement and salaries are exactly what prospective students want.

“[E]mployment prospects were very important to respondents, as was graduates’ ability to repay their debt,” the letter said. “Students consistently ranked these two outcomes as most valuable to their decision. Furthermore, in a survey of student leaders, we similarly found that 81 percent of respondents prioritized a school’s job placement rate as the most important information, an overwhelming show of support for this metric.” And yet, the letter added, many students reported that this information was “difficult” for them to find at colleges today.

For-Profits Focus on Demographics

The executives of several for-profit education companies also submitted comments to the department, echoing many of the concerns raised by other higher education sectors that enroll large numbers of low-income and other underserved student populations.

The for-profit-college leaders focused their comments largely on how the department should adjust performance metrics based on student population. Officials at DeVry and Kaplan both said that the administration should look beyond the number of Pell Grant recipients at an institution and also weight outcomes based on a range of other factors, such as whether students are caring for dependent children, working full-time while taking classes, or have delayed or interrupted their attendance in college.

David J. Adams, the deputy general counsel at Kaplan, also said gender, race, ethnicity and immigrant status “should also be carefully considered in any rating system,” since those demographic characteristics are related to graduation rates and other outcomes.

The ratings system “must be based on meaningful metrics that enable like comparisons and allow students to judge the efficacy of institutions educating and preparing students like themselves,” Adams wrote. “That cannot happen unless there is accounting for the pivotal role that student demographics play in outcomes, independent of the types of institutions that students attend.”

Among other respondents who offered distinctive perspectives:
  • The National Association of Student Financial Aid Administrators suggested that the government consider selecting 10 common metrics that would apply to all institutions, then give colleges flexibility to choose five more, say, from a set of 40 or so others that they believe “best reflect the institution and its students. This could provide valuable context to the standardized ratings done by the federal government.”
  • The Institute for College Access and Success called on the department to apply its rating system to institutions that primarily grant bachelor’s degrees before expanding it to other institutions. “While no data are perfect, the limitations of currently available data are far more pronounced at institutions serving a less traditional student population,” the group wrote, citing, for example, federal graduation rates at community colleges that leave out transfer students.
  • The Student Press Law Center said that the department should include several transparency metrics in its ratings system, such as how well public colleges follow state open records laws or whether all institutions properly report campus safety data.

Read more: http://www.insidehighered.com/news/2014/02/07/colleges-and-analysts-respond-obama-ratings-proposal#ixzz2sdnteyy5 Inside Higher Ed
