Across the nation, hundreds of thousands of high school students from every state, of every race and ethnicity, and of every economic status are entering their senior year and preparing to choose where (or whether) to continue their educational path at one of nearly 4,000 degree-granting colleges and universities across the country. To help decide which institution might be the best fit, many of these prospective college freshmen turn to commercially produced resources such as U.S. News and World Report’s “Best Colleges,” Forbes’s “Top 25 Colleges in U.S.” list of institutions that produce the highest mid-career salary earners, or any of the dozens of other college rankings offered annually for public consumption.
Approximately 60 percent of high school graduates across the country enroll in college. The difficulty these students face in discerning the right-for-them match is highlighted by transfer statistics: a recent National Student Clearinghouse study found that during the 2021-2022 academic year, two million of the more than 13 million enrolled undergraduate students transferred colleges (more than 15 percent), and the National Student Satisfaction and Priorities Report found that only 51 percent of seniors would re-enroll in the same college if they were to do it over again. These statistics raise questions about the factors behind this misalignment between students and their initial choice of a post-secondary educational institution.
In recent years, rankings have earned the disdain of many, including US Secretary of Education Miguel Cardona, who labeled rankings a “joke” that “do little more than Xerox privilege” and called upon colleges to cease worshipping at the “false altar” of U.S. News and World Report’s rankings. As the anti-rankings movement gained prominence, rankings systems experienced unprecedented turbulence: in February 2023, top-ranked schools began withholding the data and other information used for medical school and law school rankings, and some undergraduate institutions have followed suit. Amidst this blow to the practice, major players such as U.S. News and World Report have mounted a defense, with the publication’s Executive Chairman & CEO Eric Gertler penning a Wall Street Journal op-ed in March 2023 making the case for undergraduate rankings.
A better understanding of the history, intent, and methodology of various commercial college-rankings efforts offers much-needed context about how these lists can be used reasonably and rationally, and how the pros and cons of such systems can guide and inform a more productive process for future rankings.
How Rankings Came to Be
For prospective college students and their families, narrowing down the universe of thousands of available institutions into a manageable list that meets their educational, geographic, and economic needs is a daunting exercise. It could require parsing data from thousands of schools on offerings and outcomes, taking road trips to visit and tour campuses, and finding and connecting with appropriately informed experts (alumni, guidance counselors, higher education professionals, etc.).
Enter college rankings: one handy list that is ordered from “best” to “worst.”
America’s first college rankings came about in the early 1900s, originating as attempts to trace the history of the country’s most eminent alumni. These rankings simply found the alma mater of those who topped Who’s Who in America and ranked colleges accordingly. This method of outcomes-based rankings, in which successful graduates are attributed to institutions, persisted through the 1960s until the next rankings revolution came along: peer reputation ratings. By polling those in the higher-education community, the new phase of college rankings gauged the collective opinion of academic officials such as college deans and presidents on peer institutions. Instead of using the views of eminent alumni as a proxy for academic quality, these rankings used the opinions of college administrators, deemed experts of and within academia. While peer reputation came to dominate rankings, these lists remained largely undiscovered by students.
Until, that is, 1983, when the national periodical U.S. News and World Report entered the picture.
Beginning with a college-ranking list based solely on reputation, U.S. News and World Report (USNWR) exploded onto the scene by targeting its list specifically for student consumption, in contrast to previous lists that served mainly academic purposes. Four years later, in 1987, the list began its decades-long and ongoing run as an annual, highly anticipated event.
Over the years, USNWR periodically changed the mix and weighting of its ranking criteria based on the magazine’s assessment of what seemed to matter most to its audience. Its first list in 1983 was based solely on peer reputation. While still the most heavily weighted element of USNWR’s formula, peer reputation now counts for only 20 percent of a college’s standing, and many objective criteria have come to fill the other 80 percent, including graduation rate (17.6 percent), financial resources per student (10 percent), and class size (8 percent). The formula has been somewhat dynamic, most often adapting in response to criticism from public and private colleges. Over time, the weight of institutional reputation in the formula has steadily declined, and other criticized measures, such as acceptance rates, have been eliminated from the calculation entirely.
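To make the arithmetic of such a formula concrete, the sketch below computes a fixed-weight composite score of the kind described above. Only the handful of weights cited in this article are drawn from the source; the catch-all “other_factors” bucket and every metric value are hypothetical placeholders, not USNWR’s actual formula or data.

```python
# A minimal sketch (not USNWR's actual method) of a fixed-weight composite score.
# Only the weights cited in the article are real; "other_factors" and all
# metric values below are hypothetical placeholders.

WEIGHTS = {
    "peer_reputation": 0.20,      # cited above
    "graduation_rate": 0.176,     # cited above
    "financial_resources": 0.10,  # cited above
    "class_size": 0.08,           # cited above
    "other_factors": 0.444,       # placeholder for the remainder of the formula
}

def composite_score(metrics: dict) -> float:
    """Combine already-normalized (0-100) metrics into one weighted score."""
    return sum(weight * metrics[name] for name, weight in WEIGHTS.items())

# Hypothetical, already-normalized scores for an illustrative college.
example_college = {
    "peer_reputation": 72.0,
    "graduation_rate": 88.0,
    "financial_resources": 65.0,
    "class_size": 90.0,
    "other_factors": 70.0,
}

# The single number below is what a ranked list would be sorted by.
print(round(composite_score(example_college), 1))  # -> 74.7
```

Because the resulting order depends entirely on the chosen weights, even modest changes to WEIGHTS can reshuffle schools whose underlying numbers have not changed, a point taken up in the criticisms below.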
Seeing USNWR’s success at selling magazines, many other publications began producing their own college rankings, including The Economist, Money, Forbes, and the Wall Street Journal (in collaboration with Times Higher Education). Alongside magazines, organizations like Niche and the Princeton Review have also published rankings. Niche and others have incorporated subjective measures, offering grades for schools’ “party scene” and ranks for prettiest campus, best food, best Greek life, and other areas of potential interest, some measured by self-reported student surveys. Other variations of rankings have incorporated factors such as diversity, safety, research funding, alumni earnings, and—in a return to days of yore—eminence of alumni. Most rankings draw in some capacity on the federal government’s “College Scorecard” tool, which tracks and compiles data on institutional performance such as post-graduate earnings, diversity, and financial aid statistics.
College rankings continue to attract a sizeable audience of students, college administrators, and everyday people who simply want to know how colleges stack up against each other. Whether directly through the rankings list or through the conceptions of prestige that the list reinforces, rankings significantly influence applicants’ decision-making. This effect has been documented in several research studies that show upticks in applications received when a school’s rank improves; one such study quantifies the effect as a 1 percent boost in applications per one-spot rise in the rankings. The bottom line is that prospective students care about and act upon the position of schools in college rankings lists.
In Defense of Rankings
While top universities are generally well-known and often have longstanding prestige behind their name, many schools lack broad name recognition, and prospective first-year students investigating their options may find it difficult to differentiate between these lesser-known institutions. To the degree that rankings reflect academic quality and other valuable characteristics (the qualities a list of “best colleges” is typically assumed to convey), they can help students make distinctions between schools on those dimensions. If a college is able to put forward more resources for its students, graduate more students on time, earn the respect of its peers, attract high-quality faculty, and so on, it typically ranks higher. In this regard, rankings can be a legitimate resource for prospective applicants as they identify which schools to investigate further.
Rankings also can provide some accountability for the higher education system itself by allowing easy and transparent comparison on select individual elements among universities. Quantitative measures in particular provide an equitable field of comparison for those aspects of the college experience and those elements of an institution’s track record. For example, when USNWR’s ranking was based solely on reputation, Oberlin College enjoyed a strong fifth place in the liberal arts rankings. Come 1988, however, USNWR implemented quantitative metrics, and Oberlin began to fall. Some years later, it dropped out of the top 25 and sits today at 39th. Oberlin rode high on reputation but scored low on statistical indicators of quality. By empirically tracking the success of institutions, rankings enhance the incentive for improvement and make it more difficult for colleges to attain high public prestige without the numbers to back it up.
College rankings are not perfect, but they are not useless either. Although small distinctions between institutions are difficult to discern, it is reasonable to assume that institutions that are higher on the list have more resources available to dedicate to students, a higher probability of graduating students on time, greater name recognition to aid graduates in post-college pursuits, and more. Especially for schools with less historical prestige among peers, this general distinction would be near-impossible without the aggregated, standard assessment of college rankings. By helping students make this distinction, rankings can guide students to schools that are more likely to give them the college-match outcomes they are trying to achieve.
In the same way that students depend on rankings to help form their perception of the higher education landscape, so, too, does the public. No matter how maligned, rankings lists have seemingly come to embody the success of an institution. Effective reform of college ranking processes requires that the conversation on the future of rankings acknowledge their upside and magnify those elements while mitigating negative aspects.
The Case Against Commercial Rankings
Criticisms of college-ranking lists generally fall into two categories: “lack of accuracy” claims, which allege that rankings cannot draw meaningful distinctions between colleges to the degree of precision they profess; and “flawed factors” criticisms, which point out that even if it were possible to properly rank colleges sequentially, many of the factors used to do so lack meaningful relevance to institutional quality.
Lack of Accuracy
Changes in a college’s rank can occur simply because of tweaks in the ranking formula; that is, a college’s fall or rise in the rankings may be attributable not to any change in how it operates—its practices and policies could remain identical from year to year—but to the formula itself. What value does a school’s precise ranking have now if next year’s formula would place it four spots lower? Indeed, one study found that, for schools outside the top 30, movement within four spots of a school’s previous ranking should be considered “noise,” as it can be explained by formula changes alone; within the top 30, the “noise” threshold is two spots.
A further concern about reliability comes from the revelation that colleges have been found to submit incorrect data, artificially boosting their rank. Schools caught doing so include Tulane, George Washington, Dakota Wesleyan, Hampton, Drury, Oklahoma City, St. Louis University, and St. Martin’s University, among others. Most recently, a professor at Columbia University called out his own institution after doubting its second-place ranking. As it turns out, the university had submitted incorrect data that inappropriately elevated it to a tie with Harvard and the Massachusetts Institute of Technology rather than its earned rank of 18th.
The secretive nature of data submissions to USNWR makes it nearly impossible to know how many universities might sit at the wrong spot on the list. Indeed, a 2013 survey of admissions directors found that 93 percent believed “other higher education institutions have falsely reported standardized test scores or other admissions data,” while a mere 7 percent agreed that “rankings producers have reliable systems in place to prevent fabrication of standardized test scores or other such data.” Given the frequency of misreporting and the constant changes to the formula, the minute distinctions that characterize today’s rankings are unlikely to reflect genuine differences in quality among institutions.
Flawed Factors
When USNWR ranks the “best” colleges and universities, a comparison of academic quality is overtly implied. Yet many argue that the factors currently used to measure and rank schools are inadequate proxies for quality.
The heaviest-weighted factor is peer reputation, measured by asking university administrators to rate their peer institutions on a scale of 1 (marginal) to 5 (distinguished). Colin Diver, president of Reed College, argues in an op-ed that no administrator is equipped to fill out the reputation survey for so many schools:
I’m asked to rank some 220 liberal arts schools nationwide into five tiers of quality. Contemplating the latter, I wonder how any human being could possess, in the words of the cover letter, ‘the broad experience and expertise needed to assess the academic quality’ of more than a tiny handful of these institutions. Of course, I could check off ‘don’t know’ next to any institution, but if I did so honestly, I would end up ranking only the few schools with which Reed directly competes or about which I happen to know from personal experience. Most of what I may think I know about the others is based on badly outdated information, fragmentary impressions, or the relative place of a school in the rankings-validated and rankings-influenced pecking order.
This skepticism is validated by research confirming Diver’s final assertion that administrators’ opinions of other institutions are based almost entirely on those institutions’ prior USNWR rankings, making “reputation” nothing more than a repeating mirror of the overall rankings.
Further issues abound. Faculty compensation, another measure, is not adjusted to account for regional cost of living, desirability of locale, or the dispersion of salaries within an institution. The use of average SAT/ACT scores as a weighting factor fails to account for the increasing number of colleges offering test-optional admissions and ignores research showing that while highly selective admissions processes requiring higher test scores typically lead to less racial and ethnic diversity in the admitted class, diversity itself may boost academic outcomes. Using the rate of alumni donations to measure institutional quality has also drawn criticism, with reasonable questions raised about its relevance to current students’ experiences. Similar arguments about a lack of proper context or relevance can be made for many of the other ranking factors, too.
Other Negative Consequences
A lack of accuracy and fidelity to what is supposedly being measured are not the only concerning issues with the college-rankings game, and inaccurate lists may not even be the most significant negative consequence. The pursuit of higher rankings by colleges can actually harm institutional quality, magnify inequities, distort mission focus, and further cloud the picture that prospective students see.
Some colleges’ strategic plans have come to include raising or maintaining their spot in the rankings. The University of Houston, for example, lists entering the top 50 of USNWR’s rankings as an element of its formal strategic plan, and the State of Florida uses USNWR’s metrics in its public university funding formula, a move that the Chronicle of Higher Education found to “exacerbate inequities among the state’s colleges.” When colleges treat rankings as a primary determinant of success, they are compelled to work toward improving their performance as measured by ranking factors rather than toward actual improvement of the academics and educational experience they offer to students.
In pursuit of a higher ranking, Baylor University in Waco, Texas, offered cash incentives to its students to retake the SAT, with students who scored 50 points higher than their original score earning $1,000 in annual scholarship money. And when acceptance rate was an element of the formula, colleges were documented encouraging students who fell short of their admission standards to apply anyway, just so the college could reject the application and appear more selective.
The pursuit of rankings also can increase tuition costs: using dollars spent as a proxy for quality bends the incentive structure to favor spending. One study of the USNWR formula, for example, found that to enter the top 20, a college ranked in the mid-30s would have to spend an extra $112 million annually on faculty compensation and student resources alone.
In addition to driving up costs, college rankings can harm underrepresented students by incentivizing the admission of students with high standardized test scores, less chance of incurring debt, and a higher probability of graduating on time. These criteria may play no small role in encouraging broad systematic discrimination against low-income students, students of color, and first-generation applicants whose promise is overshadowed by their potential to burden a college’s standing. This effect was demonstrated in a 2021 study, which found that in the decade from 2005 to 2015, nearly 189,200 Pell Grant recipients and 152,900 first-generation students who likely would have been admitted in the absence of rankings were not admitted to nationally ranked universities.
James Murphy, deputy director for higher education policy at the think tank Education Reform Now, argued that rankings’ “main function is to confer prestige rather than recognizing institutions that make significant contributions to society through scholarship and by advancing social mobility.”
The End of “America’s Best Colleges”?
The call for the elimination of standardized, commercially produced college rankings has grown so loud that serious discussion of meaningful reform is warranted. In 2022, some of the most prestigious law schools in the country pulled out of USNWR’s law school rankings, triggering dozens more to follow: Yale withdrew first, followed by Harvard and then UC Berkeley, Columbia, Georgetown, Stanford, and the University of Michigan, among others. Within just a few months, at least 40 law schools and more than a dozen medical schools had pledged to no longer participate in the USNWR rankings. Speculation began that undergraduate institutions could be next when rankings season came around. Indeed, in February 2023, the Rhode Island School of Design and Colorado College announced they would no longer contribute information to USNWR for its annual rankings. The president of the latter institution, L. Song Richardson, proclaimed, “We expect that we will drop in the rankings based on our decision to leave the U.S. News & World Report [process]. If this occurs, it will not be because our educational quality has changed, but because U.S. News & World Report will continue to rank us using incomplete data.” In June 2023, Columbia University followed suit.
Other college presidents, such as Princeton’s Christopher Eisgruber, prominent public policy authors like Malcolm Gladwell, and government officials, including US Secretary of Education Miguel Cardona, have advocated for the elimination of undergraduate school rankings.
Building More Informative Rankings
Still, many students understandably search for an objective measure to compare colleges when deciding where to apply, the general public appears interested in the competition for top honors among colleges, and higher education institutions largely remain tied to where they rank relative to peer competitors—so much so that the print edition of U.S. News and World Report’s “Best Colleges” guide regularly makes annual bestseller lists.
Amidst the positives and negatives of college-ranking efforts, important questions remain: Can a single formula really be prescribed to generate a “best” ranked list that is even mildly applicable to an entire nation of students, diverse in race, ethnicity, gender, economic status, post-graduation plans, and more? And can the college-rankings process be reformed in a way that preserves its positive aspects while downsizing its negative impact? Dismantling the self-reinforcement of the formula, lowering the incentives and ability to “chase” factors, and making the ranking list more reflective of true academic quality and more representative of schools that actually are best for students would all be welcome changes.
Researchers have created alternative statistical models that lend unique perspectives on the higher education landscape and offer new and meaningful points of comparison. Harvard Professor Raj Chetty’s project Opportunity Insights, for example, focuses on social mobility and ranks colleges on their ability to lift students from bottom-quintile earnings to top-quintile earnings. The Brookings Institution has similarly tackled the economics of college with “value-added” rankings that attempt to distill the economic worth of a college by assessing the extent to which its graduates’ financial success can be attributed specifically to the university experience rather than to factors such as pre-college ability or familial wealth.
Customizable Ranking System
A singular list of “America’s Best Colleges” is an inadequate resource for a group of prospective college students with diverse backgrounds and aspirations; as a resource that is intended to guide individual decisions about college, standard commercially available rankings fall critically short.
One potential remedy is the creation of an interface that allows students to fashion their own ranking formula according to their personal priorities. Instead of journalists and editors guessing at the factors most important to potential applicants, students themselves could assign a level of importance to any given statistic from an array of available data to compile their own list. A system like this could include more statistics and thereby offer greater insight into the college experience. With an assortment of potential ranking factors, from Raj Chetty’s social mobility indicators to Brookings’s economic value added, and from Niche’s student surveys to Forbes’s eminence of alumni, a customizable ranking system would allow students to pick and choose the indicators that align with what they hope to get out of their college experience and weight them according to their own priorities.
The most advanced—and most recent—iteration of this concept is The New York Times’s new “Build Your Own College Rankings” tool. The resource allows users to drag a slider to rate the importance of high earnings, low sticker price, academic profile, party scene, racial diversity, economic mobility, net price, athletics, campus safety, and economic diversity. Some of these sliders rely on a single dataset (such as “economic mobility,” as calculated in Chetty’s Opportunity Insights database), while others, such as “academic profile,” aggregate multiple statistics (graduation rate, SAT/ACT scores, student-faculty ratio). This tool is the first attempt at customizable rankings since Forbes’s 2010 “DIY rankings,” which worked similarly but was soon discontinued, reportedly due to low use that was interpreted as a lack of demand. The future success of The New York Times tool may provide insight into the demand, or lack thereof, for such a customizable tool today.
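As a rough illustration of what a slider-based tool like this computes under the hood, the sketch below rescales each statistic to a common 0-1 range, applies one student’s weights, and sorts the results. The colleges, statistics, and weights are invented for illustration only; a real tool such as the Times’s would presumably draw on far richer data and more careful normalization.

```python
# A minimal sketch of the customizable-ranking idea: each student supplies
# weights, each statistic is rescaled to 0-1, and colleges are sorted by the
# resulting personal score. All inputs below are invented for illustration.

LOWER_IS_BETTER = {"net_price"}  # statistics where a smaller value is preferable

def rescale(values: dict, invert: bool = False) -> dict:
    """Min-max rescale a {college: value} mapping to 0-1; flip if lower is better."""
    lo, hi = min(values.values()), max(values.values())
    span = (hi - lo) or 1.0  # avoid division by zero when all values are equal
    scaled = {c: (v - lo) / span for c, v in values.items()}
    return {c: 1.0 - s for c, s in scaled.items()} if invert else scaled

def personal_ranking(data: dict, weights: dict) -> list:
    """Rank colleges by a weighted sum of rescaled statistics."""
    rescaled = {stat: rescale({c: stats[stat] for c, stats in data.items()},
                              invert=stat in LOWER_IS_BETTER)
                for stat in weights}
    scores = {c: sum(w * rescaled[stat][c] for stat, w in weights.items())
              for c in data}
    return sorted(scores.items(), key=lambda item: item[1], reverse=True)

# Hypothetical inputs: three made-up colleges and one student's priorities.
data = {
    "College A": {"graduation_rate": 0.92, "net_price": 24_000, "mobility": 0.05},
    "College B": {"graduation_rate": 0.78, "net_price": 12_000, "mobility": 0.12},
    "College C": {"graduation_rate": 0.85, "net_price": 18_000, "mobility": 0.08},
}
weights = {"graduation_rate": 0.2, "net_price": 0.3, "mobility": 0.5}

for college, score in personal_ranking(data, weights):
    print(f"{college}: {score:.2f}")  # College B ranks first for this student
```

The rescaling step is the design choice worth noting: without putting statistics on a common scale, a raw dollar figure like net price would swamp a rate expressed between 0 and 1, no matter what weights a student chose.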
A calculator similar to “Build Your Own Rankings” but offered outside the for-profit sector, and one that incorporates even more data, could expand the reach and availability of such a helpful tool, and likely would gain substantial credibility among users and higher education institutions. The necessary data for this more comprehensive system are already obtainable and already used by commercial rankers to construct their lists; much of the data is publicly available through College Scorecard, the Common Data Set, and the Integrated Postsecondary Education Data System, for example. The US Department of Education could compile these statistics into one publicly available database where students could select and assign their own weights to each factor and generate their personal rankings list. Because some datasets, such as Forbes’s alumni eminence measure, are collected privately, the effort to construct this first public-good dataset would be strengthened by collaboration among commercial rankings organizations, researchers, universities, and the federal government to pool statistical databases, address the concerns of user stakeholders, and shape the future of college rankings.
The widespread adoption of a public, customizable college ranking tool is likely to diminish the chasing of rankings by higher education institutions, as the selection and weighting of individual factors would be so diverse as to make targeting a particular factor to climb in the rankings largely futile. Instead, colleges would have greater incentive to chart the course they deem best for students. When the overall rankings scheme is representative of the priorities of applicants, colleges are incentivized to appeal to those priorities. More and better research on which institutional characteristics contribute to which student outcomes, along with the value-added metrics such research could yield (both quantitative components, such as measures of academic quality, and qualitative elements, such as the types of clubs or sports offered), also would be a welcome addition.
By being pulled in all directions in proportion to the interests of the applicant pool, colleges could focus on appealing to what students say they want rather than what USNWR thinks they need.
Conclusion
The merits of rankings are clear: students, academic leaders, higher-education donors, and everyday people want to know the comparative strength of colleges and universities. Because transparency about quality-driving factors and outcomes is imperative for a sector as consequential as higher education, properly constructed and properly intentioned rating systems can provide a useful and valuable service. Still, it is apparent that popular commercial college rankings lists are—at least to some degree—imprecise, misleading to students, stimulators of racial and socioeconomic discrimination, drivers of increased college costs, and often harmful to institutions striving for an ever-higher position.
Individual customization of the ranking system and the advent of statistics that show the reality of the higher education landscape could better guide students toward institutions where they are likely to find a fulfilling and successful college experience. A customizable system would also stymie attempts by institutions to “chase” the few statistics included in standard formulas, helping mitigate financial inefficiencies, discrimination in admissions, and structural disincentives to genuine institutional improvement.
ABOUT THE AUTHORS
David Colin is a student and senior class president at Rye Country Day School in Westchester County, New York.
Brian Backstrom is the director of education policy studies at the Rockefeller Institute of Government.