How Money Ranked Colleges: An In-Depth Look at Our Methodology
College is now the second-largest financial expenditure for many families, exceeded only by buying a home. So it isn’t surprising that parents and students are taking a harder look at the costs and payoffs of any college they consider.
To help, Money has drawn on the research and advice of dozens of the nation’s top experts on education quality, financing, and value to develop a new, uniquely practical analysis of more than 700 of the nation’s best-performing colleges.
Money’s Best Colleges for Your Money rankings are the first to combine the most accurate pricing estimates available with students’ likely earnings after graduation and a unique analysis of how much “value” a college adds.
We estimate a college’s “value add” by calculating its performance on important measures such as graduation rates, student loan default rates, and post-graduation earnings, after adjusting for the types of students it admits. We believe this analysis gives students and parents a much better indication of which colleges will provide real value for their tuition dollars.
We developed our ratings in partnership with one of the nation’s leading experts on higher education data and accountability metrics, Mark Schneider. The former commissioner of the National Center for Education Statistics, he is currently a vice president at the American Institutes for Research (AIR) and president of College Measures, a for-profit partnership of AIR and Optimity Advisors that collects and publishes public data comparing students’ educational records with their later earnings.
The final methodology decisions were made by the Money editorial team, in consultation with Schneider and College Measures.
In building our rankings, Money focused on the three basic factors that surveys show are the most important to parents and students:
- Quality of education
- Affordability
- Outcomes
Because these three factors are so interrelated and crucial to families, we gave them equal weights in our ranking.
In each of these three major categories, we consulted with our advisers to identify the most reliable and useful data to assess a school’s performance. We also balanced the basic data in each category with at least one “value-added” measure.
To gauge how well a college is performing relative to its peers, we gathered federal and college data on the average test scores and grade point averages of students at each college, the percentage of each graduating class receiving degrees in each major, and the percentage of students with incomes low enough to qualify for Pell Grants (about 90% of which go to families with incomes below $50,000). We then used the statistical technique of regression analysis to determine the impact of a student’s test scores, economic background, and college major on key factors, such as graduation rates and future earnings. That enables us to see how much better or worse a particular college performed than would be expected given the characteristics of its student body.
We used this estimate of relative performance by the college as an important part of the ranking, as you’ll see below.
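For readers who want to see the mechanics, here is a minimal sketch of that value-added calculation. The column names and figures are hypothetical stand-ins for the federal data described above, and Money’s exact model specification is not spelled out in this article, so treat this as an illustration of the technique rather than the production analysis:

```python
# A minimal sketch of the "value-added" regression described above.
# All data are illustrative; the real model and inputs may differ.
import pandas as pd
import statsmodels.api as sm

colleges = pd.DataFrame({
    "sat_avg":   [1050, 1210, 1380, 990, 1150],   # average admitted-student SAT
    "pct_pell":  [0.45, 0.30, 0.12, 0.55, 0.38],  # share of Pell-eligible students
    "grad_rate": [0.52, 0.68, 0.91, 0.44, 0.63],  # six-year graduation rate
})

# Regress the outcome on student-body characteristics...
X = sm.add_constant(colleges[["sat_avg", "pct_pell"]])
model = sm.OLS(colleges["grad_rate"], X).fit()

# ...then treat the residual (actual minus predicted) as the school's "value add."
colleges["predicted"] = model.predict(X)
colleges["value_add"] = colleges["grad_rate"] - colleges["predicted"]
print(colleges[["grad_rate", "predicted", "value_add"]])
```

The same actual-minus-predicted pattern recurs in the value-added measures described throughout this methodology.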
Money assigned each indicator a weight based on our analysis of the importance of the factor to families, the reliability of the data, and our view of the value of the information the data provided.
To avoid overloading readers with too many choices or too much data, and to ensure that we fairly compared apples with apples, we decided to analyze only those colleges that, in our view, passed these minimal quality and data tests. We decided that to be included in our rankings, a college must meet these four criteria:
- Be a public or not-for-profit four-year college or university.
- Have enough cost, quality, and outcomes data available to provide at least moderate confidence in our assessment.
- Not be in financial trouble, as indicated by a below-investment-grade rating for its bonds from Moody’s, or by inclusion on the U.S. Department of Education’s list of schools under the strictest level of “Heightened Cash Monitoring” because of indications of low “financial responsibility.”
- Have a graduation rate at or above the median for its type of school (public or private), or if the rate is below the median, have a graduation rate at least 25% above what would be expected given the incomes and test scores of its students.
This eliminated some colleges that may be good values, but might be facing temporary financial difficulties or have too few alumni reporting their incomes to PayScale, our salary data source, for us to evaluate them. But it left us with a robust universe of more than 700 colleges. In our view, even the lowest-ranked of the schools on our list demonstrate that they provide at least some value for your tuition dollars.
We then used the following data and methodologies in our three basic categories to create our rankings:
QUALITY OF EDUCATION: 33.3% weighting
For this factor, we used six indicators that provide meaningful information about the quality of a school’s instruction, weighted as shown:
(1) Graduation rates: 35%. Education experts and college officials generally agree that one of the most important reflections of a college’s quality is its graduation rate. (The American Association of State Colleges and Universities calls it “a legitimate indicator” of college quality.) Many rankings use this commonly cited federal statistic on the percentage of freshmen who graduate within six years. Because of its importance and wide acceptance, we assigned this measure a comparatively heavy weight.
(2) Value-added graduation rate: 35%. Many education experts and college officials point out that the basic graduation rate number, while useful, is an insufficient indicator of a college’s value because research shows that wealthier students and students who got good grades in high school are more likely to finish whatever college they attend. So elite, expensive schools such as, say, Harvard, would be expected to have high graduation rates. For that reason, we also calculated each school’s relative performance after accounting for the economic background and academic preparation of its students. The higher a school’s graduation rate was above the rate that would be predicted for a school with that particular mix of students, the more value that particular college is assumed to have added. This “value-added” graduation rate analysis is widely accepted. (A 2013 OECD paper noted that such “value-added measurement provides a ‘fairer’ estimate of the contribution of educational institutions.”) Because of its reliability and acceptance, we weighed this factor heavily.
(3 & 4) Peer quality: 15%. Decades of research have shown that undergraduates have a major impact on their peers. Students who room with better students get better grades, for example. By contrast, students surrounded by less conscientious peers—for example, heavy drinkers—study less and get worse grades. And students who room or socialize with more successful students tend to get better jobs upon graduation.
Our peer quality measure consists of these two indicators:
- Academic preparation of students (10%). We gathered the federal data on accepted students’ high school grade point averages and their scores on the ACT and SAT. We analyzed the overall relationship between test scores and grade point averages for all colleges that reported both data points, and then used that relationship to fill in an estimated test score for schools that don’t make their students’ test scores public. (A sketch of this imputation follows this list.) This is an imperfect measure, and there is controversy over the usefulness and validity of standardized tests to begin with. However, the SAT and ACT tests currently provide the only nationally comparable data on student abilities. In addition, many studies have found a high correlation between test scores and academic success.
- Yield (5%). The federally reported “yield” is the percentage of accepted students who enroll in a given college. The higher the yield, the more likely it is that the school was the student’s first choice, or best option, and that applicants perceive the college’s quality as high.
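As referenced in the academic-preparation item above, here is a simple sketch of the test-score imputation, assuming a linear fit of average SAT scores on average GPAs across the schools that report both. The numbers are made up, and the actual fitting procedure may have been more elaborate:

```python
# A sketch of filling in missing average test scores from GPA data.
# Figures are illustrative only.
import numpy as np

gpa_reported = np.array([3.2, 3.4, 3.6, 3.8, 3.9])      # schools reporting both
sat_reported = np.array([1020, 1100, 1190, 1300, 1360])

slope, intercept = np.polyfit(gpa_reported, sat_reported, 1)

def impute_sat(gpa):
    """Estimate an average SAT score for a school that reports only GPA."""
    return slope * gpa + intercept

print(round(impute_sat(3.5)))  # estimated score for a non-reporting school
```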
(5 & 6) Faculty quality: 15%. Research shows that students who get more individual attention from faculty tend to achieve more, both in college and after graduation. (See, for example, the recent Gallup-Purdue Index.) While there are no nationally comparable data on the number and quality of student interactions with faculty, we believe these two data points are useful indicators:
- Student-faculty ratio (10%). This is a standard, federally published metric used by many ranking organizations.
- Quality of professors (5%). Money asked the nation’s largest independent source of student ratings of professors, RateMyProfessors.com, to calculate the average overall rating for all professors at each school for helpfulness, clarity, and quality. We did not include students’ ratings of the professors’ “hotness” or “easiness,” which are also collected by the site. Although research has found that students do tend to give higher marks to easier professors, independent investigators have also found that RateMyProfessors.com quality ratings are generally reliable and provide students “with useful information about quality of instruction.”
Change from our 2014 rankings: For 2015, we shifted a small amount of weight from the other measures to graduation rates because of the declining reliability of data such as test scores and GPA. As more colleges go “test optional,” for example, only students with high test scores will submit them, making a school’s average test score appear artificially high. Graduation rates, on the other hand, reflect many aspects of a school’s quality, since students who are unhappy with their experience will vote with their feet and drop or transfer out.
AFFORDABILITY: 33.3% weighting
For this factor we also used six indicators, weighted as shown:
(1) Net Price of a degree: 30%. Money has developed a unique and, many experts tell us, more accurate estimate of college prices. We started with the “sticker” price provided by the college to the federal government. The full cost of attendance includes tuition, fees, room, board, books, travel, and miscellaneous costs. For public colleges, we used the in-state tuition and fees.
We then subtracted the average amount of institutional aid provided per student by the college, including need-based grants, merit aid, and athletic scholarships. (The aid data are provided by colleges on the Common Data Set, which we accessed through Peterson’s.) That gave us an estimate of the net price the college charged an average student for the most recent academic year.
Next, we used federal data on the percentage of students who graduate from that college in four, five, and six years to calculate an average time to degree, which for the vast majority of schools is now more than four years.
Judith Scott-Clayton of Columbia University, for example, has found that the average college graduate pays for 4.5 years of college, not just four. However, because Money’s 2015 ranking includes only those schools with the best graduation rates, the average time to degree for all of the schools on our list is just 4.3 years.
We multiplied the net price of a single year by the average number of years it typically takes students to finish at that school (which ranges from four to six years, depending on the school) and added in an inflation factor (since tuition prices typically rise every year) to estimate the total net price of a degree for freshmen entering that school in the fall of 2015.
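As a worked illustration of the steps just described, the sketch below computes a total degree price from hypothetical figures. Money does not publish its exact inflation assumption, so a 3% annual increase is assumed here:

```python
# Illustrative degree-price estimate: (sticker price - institutional aid),
# summed over the average time to degree with annual tuition inflation.
# All inputs are hypothetical.
sticker_price = 45_000       # published one-year cost of attendance
institutional_aid = 18_000   # average grant/merit/athletic aid per student
years_to_degree = 4.3        # average time to degree at this school
inflation = 0.03             # assumed annual price growth

net_price_year1 = sticker_price - institutional_aid

# Sum the net price over the (possibly fractional) years, inflating each year.
total, year = 0.0, 0.0
while year < years_to_degree:
    fraction = min(1.0, years_to_degree - year)
    total += net_price_year1 * (1 + inflation) ** year * fraction
    year += 1.0

print(f"Estimated net price of the degree: ${total:,.0f}")
```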
College counselors and financial aid experts have told us that the Money calculation is more realistic and helpful to parents and students than other popular price estimates, most of which use the cost of a single year, not the full degree. Sandy Baum, a senior fellow at the Urban Institute and one of the nation’s leading researchers on college costs and aid, says our estimates of the net prices of individual institutions “provide good benchmarks.” She added an important caveat, however: “Students at each institution face a wide range of net prices, so no individual student should assume that the schools on this list with highest net prices would end up being most expensive for them.”
The federal net price estimate for a year’s costs is lower than Money’s estimate because the Department of Education subtracts almost all grants—federal, state and institutional—while we only subtract aid provided by the school, for reasons described below.
Money gives the net price sub-factor a very heavy weighting because surveys show that the cost of college is now one of families’ biggest worries. A 2013 Harvard Institute of Politics survey reported, for example, that 70% of young adults said finances were a major factor in their college decision.
(2 & 3) Educational debt: 30%. Surveys show debt to be another of most families’ biggest worries. We weighed this factor equally with net price because of those concerns and because we believe it is an indicator of how fairly colleges distribute their grants and scholarships. Schools with comparatively low net prices but high borrowing are likely giving grant aid to wealthier students but shorting the packages of needier students, thus forcing them to borrow more. The higher the debt factor, the lower the school ranked.
Our educational debt assessment is based on these two indicators:
- Student borrowing (20%). The federal government reports the percentage of freshmen who borrow, as well as their total average amount of federal and private student loans. We combined these data to estimate the average debt per student. We then multiplied that number by the average number of years it takes students at that college to earn a bachelor’s degree. The result is our estimate of total indebtedness for graduating seniors. (A sketch of these debt estimates follows this list.)
- Parent borrowing (10%). The federal government reports the total amount of parent PLUS loans awarded to parents at each college each year. We divided this number by the school’s enrollment to calculate an average parent PLUS debt per student. While other organizations generally don’t include parental debt in their rankings, Money believes parent educational borrowing is a financial burden, and should be an important consideration.
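Here is a sketch of the two debt estimates above, with hypothetical figures; the precise federal fields Money combined may differ:

```python
# Student debt: share of freshmen who borrow x average annual loan,
# scaled by average years to degree. Parent debt: total PLUS dollars
# divided by enrollment. All numbers are illustrative.
pct_freshmen_borrowing = 0.60
avg_annual_loan = 7_500       # federal plus private loans per borrower
years_to_degree = 4.3

est_debt_at_graduation = pct_freshmen_borrowing * avg_annual_loan * years_to_degree

total_plus_loans = 4_000_000  # PLUS dollars awarded at the school in a year
enrollment = 5_000
avg_plus_debt_per_student = total_plus_loans / enrollment

print(f"Estimated student debt at graduation: ${est_debt_at_graduation:,.0f}")
print(f"Average parent PLUS debt per student: ${avg_plus_debt_per_student:,.0f}")
```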
(4 & 5) Ability to repay: 30%. The ability to repay loans taken out to finance a college education is another indication of a school's affordability for its students.
We evaluated ability to repay using these two indicators:
- Student loan default risk index, or SLDRI (15%). Each year, the federal government publishes the number of former students who left college three years ago and have since defaulted on their federal student loans. Using a methodology proposed by The Institute for College Access and Success (TICAS), Money adjusts these numbers for the share of students at the college who take out federal student loans. TICAS says this is a fairer and more accurate indicator of the odds that any particular student at the college will end up defaulting on a student loan. (A sketch of this adjustment follows this list.)
- Student loan default risk index–value added (15%). We calculated the relationship between schools’ SLDRI and average student test scores and the percentage of the student body from low-income households. Schools that had a lower default rate than would be expected given their student population were ranked more highly.
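One common reading of the TICAS adjustment multiplies the official three-year default rate, which covers only borrowers, by the share of students who borrow, so that schools where few students take loans are not flattered by a low raw default rate. A sketch, with illustrative figures:

```python
# Default risk index: probability that any given student (borrower or not)
# at the school ends up defaulting. This is one plausible reading of the
# TICAS adjustment; the exact formula Money used may differ.
cohort_default_rate = 0.08   # share of borrowers defaulting within 3 years
share_who_borrow = 0.55      # share of the student body with federal loans

default_risk_index = cohort_default_rate * share_who_borrow
print(f"Student loan default risk index: {default_risk_index:.3f}")
```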
(6) Affordability for low- and moderate-income families: 10%. This year, Money added a new affordability factor to the rankings, using data colleges report to the federal government on the average net price paid in the 2012-13 academic year by students in families with annual incomes of $48,000 or less. We did this to reflect how affordable the college is for disadvantaged students and working-class families (the median family income in the U.S. in 2013 was $52,000). Another reason for focusing on this income group is that the data for it are far more reliable than the government’s net price data for higher income groups. The government’s net price data are only reported for students who receive federal aid, which means the data cover almost all students with incomes below $50,000, but only about half of students from families in the highest quartile.
Changes from our 2014 rankings: To make room for the new affordability indicator, we reduced the weighting of our net price indicator. In addition, because PLUS loans are an imperfect indicator of how much parents may be borrowing to fund their children’s tuition (there are no data on how much parents of students at each college are borrowing from private lenders, or against their retirement plans or home equity, for example), we slightly reduced our weight on the PLUS indicator and shifted it to the default indicator.
OUTCOMES: 33.3% weighting
For our third category, we used a total of nine indicators, weighted as shown below.
(One data point we could not use: the percentage of students who find jobs within a year of graduation. Although many colleges claim that a high percentage of their new grads are employed, the data are generally not considered reliable enough to compare one college against another. That's because each college has a different method of surveying graduates, and unemployed grads may be less likely to answer such surveys, which would tend to make a college's employment rate look better than it really is.)
(1 &2) College career services: 15%. We used one quantitative and one qualitative indicator to assess the value of a college’s career services:
- Caseload of career services staffers (10%). Surveys show that the single most important reason students now give for attending college is to increase their odds of landing a good job and launching a career. Unfortunately, most colleges don’t provide much career coaching or job assistance. At the typical four-year college, there is only one professional career services staffer for every 1,000 undergraduates, which equates to an average availability of just one hour of personal attention per undergraduate per year. While we couldn’t get reliable data on the quality of the career services that schools provide, we could at least tell parents how overworked the staff is. So Money included in its ranking the number of full-time equivalent staffers in each college’s career services office as reported to Peterson’s. (We called several hundred schools that hadn’t reported the data to Peterson’s to fill in the missing numbers.) We then calculated a caseload per staffer, based on each school’s enrollment. The lower the caseload, the higher the college is ranked.
- Formal programs linking alumni with job-seeking students (5%). Personal referrals provide a huge advantage in the job market. (A 2010 Federal Reserve study found, for example, that job applicants who received a personal referral from an employee were more likely to get interviews, get hired, and be offered higher starting salaries.) Colleges that help students make connections with their working alumni provide a valuable service.
(3, 4 & 5) Post-graduation earning power: 35%. We used three indicators to assess this factor:
- PayScale.com earnings for students who graduated within the last five years (20%). We considered the salaries on PayScale.com reported by graduates of the class of 2009 and later. The typical person in this group has two years of work experience. PayScale provided Money with aggregated data on more than 1.4 million people who in the last three years have filled out an online form that asks what their job is, what their salary is, where they went to college, what they majored in, and when they graduated, the largest such database available. Matthew Chingos, a Brookings Institution economist who has studied the PayScale data, notes that it is currently “the only game in town” for anyone who wants to compare colleges’ outcomes nationwide. We did not consider the earnings of anyone with a graduate degree, because there is no way to isolate the effect of the undergraduate degree on the eventual earnings of, say, a lawyer or doctor.
- PayScale.com earnings for mid-career workers (5%). Here we used the average earnings reported by those who graduated before 2004. The typical worker in this group has 15 years of work experience. We weighted early earnings more heavily than mid-career earnings because Trey Miller, a RAND Corporation economist who has studied the relationship between college choice and earnings, noted that a college choice has a much stronger impact on the type and pay of a first job after graduation than it does on job type and pay for a person who has been in the workforce for, say, 10 years. By then, experience and skills acquired since graduation will also play an important role.
- Estimated market value of alumni skills (10%). In April 2015, the Brookings Institution published an analysis of new data shedding light on the value added by each college. One of Brookings’ indicators was an estimate of the market value of the 25 most commonly cited skills listed by alumni of each college in their LinkedIn profiles. Jonathan Rothwell, the author of the Brookings study, said this new measure is based on millions of LinkedIn profiles and is a new and potentially better way to discover earning potential. In addition, the data is for all graduates, including those who have earned additional degrees, so it captures the earnings of some schools’ higher earners.
(6, 7, 8 & 9) Earnings value-add: 50%. We used four numbers to calculate how much more or less the graduates of each school are earning, compared with graduates of similar colleges.
- Early-career PayScale earnings after adjusting for majors (20%). In other analyses of which colleges seem to produce the highest earners, engineering and tech-oriented colleges dominate because computer scientists and engineers tend to earn very high salaries. But what if your child isn’t fated to be an engineer? Which school produces the highest earners for other majors? Money used a regression analysis to estimate the average impact on reported earnings of each of three “buckets” of majors: Business, STEM (science, technology, engineering, and math), and “everything else,” which is mostly made up of humanities, education, visual and performing arts, and behavioral and social sciences. We calculated the average earnings for each group of majors. Then, using federal data on the number of graduates in each major at a college, we calculated what the expected earnings would be for that school if each student earned an average salary for his or her field of study. Colleges where graduates earn more than the predicted salary are ranked more highly. (A sketch of this adjustment follows this list.)
- Mid-career PayScale earnings after adjusting for majors (5%). We weighed mid-career earnings less heavily because, as previously noted, where a person went to school generally has a greater impact on salary earlier in a career than it does later on.
- Value-added early-career PayScale earnings (20%). We also conducted the same kind of “value-added” analysis we did in the educational quality and affordability categories. To find colleges that accomplish what education is supposed to do—help hardworking students from any background get ahead—we calculated the impact of test scores and of coming from a low-income family on graduates’ earnings. Then, using federal data on the test scores and economic background of each school’s student body, we calculated what the expected earnings would be for each school. Colleges where graduates earned more than would be predicted, given their student body, are ranked more highly.
- Value-added mid-career PayScale earnings (5%).
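As referenced in the first item above, here is a sketch of the majors adjustment with hypothetical numbers. The value-added earnings measures in the last two items follow the same actual-minus-predicted pattern shown earlier, just with test scores and family income as the predictors:

```python
# Expected earnings for a school: each major "bucket's" average salary,
# weighted by the school's share of graduates in that bucket. The residual
# against actual reported earnings is the earnings value-add. All figures
# are illustrative.
avg_salary_by_bucket = {"business": 52_000, "stem": 65_000, "other": 43_000}

school_major_shares = {"business": 0.25, "stem": 0.30, "other": 0.45}
actual_reported_earnings = 55_000   # early-career PayScale figure for the school

expected = sum(share * avg_salary_by_bucket[bucket]
               for bucket, share in school_major_shares.items())
value_add = actual_reported_earnings - expected

print(f"Expected earnings given major mix: ${expected:,.0f}")
print(f"Earnings above/below expectation:  ${value_add:+,.0f}")
```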
Changes from our 2014 rankings: To make room for the new indicators, we slightly reduced the weighting we had given to mid-career earnings, because, as cited above, a college typically has the largest impact on early-career earnings.
How we calculated these rankings: For each of our data points, for each college, we calculated a “Z-score”—a statistical technique that turns lots of seemingly different kinds of data into a standard set of numbers that are easily comparable. Each Z-score tells you how far any particular number—such as, say, a graduation rate of 75%—is from the average for the entire group under consideration. We added up each school’s Z-score for each of the factors we used, and normalized that score into an easily understandable five-point scale. We then ranked the schools according to their total score.
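Here is a sketch of that scoring step with two indicators and hypothetical data. Money does not specify its exact normalization, so a simple min-max rescaling onto a 1-to-5 scale is shown as one plausible choice:

```python
# Convert each indicator to a Z-score, combine with the category weights,
# and rescale the totals to a five-point scale. Illustrative data only.
import numpy as np

grad_rate = np.array([0.52, 0.68, 0.91, 0.44, 0.63])
net_price = np.array([98_000, 120_000, 180_000, 85_000, 110_000])

def z(x):
    return (x - x.mean()) / x.std()

# Lower net price is better, so its Z-score enters with a negative sign.
total = 0.35 * z(grad_rate) - 0.30 * z(net_price)

# Rescale totals onto a 1-to-5 scale.
score = 1 + 4 * (total - total.min()) / (total.max() - total.min())
print(np.round(score, 2))
```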
CAVEATS
While we believe our rankings are better than any others out there, we know they are not perfect. Here are some caveats that we are aware of, and hope to address in future rankings.
Data on student learning. What students actually learn is a big unanswered question in higher education. Very few colleges try to measure learning, and very few of the ones that do release the results publicly. In addition, we were not able to find good data on basic indicators of academic rigor, such as the number of pages of writing or reading required per assignment. We will continue to explore the data in hopes of finding useful indicators of student learning.
Geographical adjustment of wages. Some colleges in low-wage areas, such as parts of the South and Midwest, get lower rankings because we have not adjusted the PayScale data for cost of living. Wages are higher in New York, for example, because rents and other living costs are much higher, but that doesn’t mean the graduate’s lifestyle is better. We will consider making geographic adjustments of earnings data in the future.
Alumni satisfaction. The information that’s currently available on alumni satisfaction—based on surveys and donation rates—is incomplete for many of the colleges on our list, so we were unable to include it as a measure. We are looking for ways to improve the alumni data and make it part of future rankings.
Out-of-state-public college tuition. Many students are interested in attending public colleges out of state. But public colleges charge higher tuition to out-of-state students. We will consider developing a cost and value comparison for out-of-state students.
Graduate earnings. The PayScale data is admittedly imperfect. It does not reflect unemployed or part-time workers, which means that the numbers we see may be skewed higher than the real average for all graduates. Offsetting that, at least in part: We are using data only on those who stopped their education at a bachelor’s degree, thus excluding high earners such as doctors, lawyers, and MBAs.
Net prices. Money's estimated net price is likely to be higher than the average price actually paid by most families. It is crucial to understand that while the Money net price estimate is the average price charged by the college, you and your family will pay less than that if your student receives any federal, state, or private scholarships. As an analogy, if you’re buying a can of soup, you have to pay what the grocery store charges, unless you have a coupon. Just as coupons can be used at competing supermarkets, most federal, state, and private scholarships can be used at many competing colleges. So we help you identify which college has the lowest net price at which you can apply any additional scholarships. In addition, our net price is based on the average student’s time-to-degree. Your student may finish in four years. And while many students take more than four years to finish a degree, they aren’t necessarily paying full tuition for the five or six years before they graduate, since they may, for example, take a year off to work. Money attempted to account for this by adjusting the estimated time to degree for all schools with large and established co-op work programs, such as Northeastern University. In addition, Money is not adding to the cost of a degree any amount for “opportunity cost,” which is the amount in earnings a student loses by not finishing a degree on time and delaying entry into the higher-paying job market of college graduates. So, while we may, in some cases, be overestimating the price of a degree, we are also underestimating the total economic expense to a student of schools that don’t speed them to graduation.
ACKNOWLEDGMENTS
David Morton and Gareth Harper of College Measures collected and analyzed our data.
Katie Bardaro, the lead economist for Payscale.com, produced unique earnings reports and analyses for Money, and advised us on ways to account for the impact of majors on earnings.
Michael DeLeon, a graduate student at Teachers College, Columbia University, served as Money’s research assistant, analyzing data and conducting statistical tests.
Among the many experts who volunteered advice and suggestions:
- Anthony Carnevale, director and research professor, Georgetown University Center on Education and the Workforce.
- Trey Miller, associate economist, RAND Corporation
- Jennifer Lewis Priestley, statistics professor, Kennesaw State University
- Matt Reed, program director, The Institute for College Access and Success
- Joseph Yeado, higher education research and policy analyst, The Education Trust
Among the dozens of experts we interviewed or consulted:
- Beth Akers and Matthew Chingos, Brookings Institution education researchers
- Robert Archibald, economics professor, William & Mary
- Jeffrey Arnett, research psychology professor, Clark University
- Sandy Baum, senior fellow, Urban Institute
- Douglas Bennett, former president, Earlham College
- Charles Blaich, director, Center of Inquiry at Wabash College. Also at the Center: assistant director Kathleen Wise
- Brandon Busteed, executive director, Gallup Education
- Corbin Martin Campbell, associate professor, Columbia University’s Teachers College
- Jesse Cunha, assistant professor, Graduate School of Business and Public Policy, Naval Postgraduate School
- John Curtis, director of research and public policy, American Association of University Professors
- William Destler, president, Rochester Institute of Technology
- Marilyn Emerson, past president, Independent Educational Consultants Association
- Greg Fenves, provost, University of Texas
- David Figlio, director, Northwestern University Institute for Policy Research
- Douglas Harris, associate economics professor, Tulane University
- Terry Hartle, senior vice president, American Council on Education. Also at ACE: associate vice president Dan Madzelan
- David Hawkins, director of public policy and research, National Association for College Admission Counseling. Also at NACAC: president Katy Murphy and president-elect Jeffrey Fuller
- Kerry Healey, president, Babson College
- Lisa Heffernan, author, GrownandFlown.com
- Braden Hosch, assistant vice president for institutional research, planning & effectiveness, Stony Brook University
- Mark Kantrowitz, publisher, Edvisors.com
- Michael McPherson, president, Spencer Foundation
- Ben Miller, senior policy analyst, New America Foundation
- National College Advising Corps (several staffers)
- Josipa Roksa, associate director, Center for Advanced Study of Teaching and Learning in Higher Education, University of Virginia
- Joyce Serido, research professor, University of Arizona
- Michael Viollt, president, Robert Morris University Illinois
- Carl Wieman, professor in the physics and education departments at Stanford University, Nobel Prize winner, and former associate director for science at the White House Office of Science and Technology Policy