College is now the second-largest financial expenditure for many families, exceeded only by the purchase of a home. So it isn’t surprising that parents and students are taking a hard look at the costs and payoffs of any college they consider.
To help families do that, Money has drawn on the research and advice of dozens of the nation’s top experts on education quality, financing, and value to develop a uniquely practical analysis of more than 700 of the nation’s best-performing colleges.
Data collection and analysis were led by American Institutes for Research/College Measures researcher Cameron Smither, with the help of research associate Deaweh Benson and scholar Janet Gao. Money’s editorial staff, however, is solely responsible for the final ranking decisions.
Here is the methodology behind our new 2018 rankings, first in a short version, followed by a more comprehensive one.
Money’s Methodology, in Brief
To make our initial cut, a college had to:
- Have at least 500 students.
- Have sufficient, reliable data to be analyzed.
- Not be in financial distress.
- Have a graduation rate that was at or above the median for its institutional category (public, private or historically black college or university), or have a high “value-added” graduation rate (in other words: score in the top 25% of our test of graduation rates after accounting for the average test scores and percentage of low income students among its enrollees).
A total of 727 schools met these requirements. We ranked them on 26 factors in three categories:
Quality of education (1/3 of weighting), which was calculated using:
- Six-year graduation rate (30%). This measure is widely recognized as one of the most important indicators of a college’s quality. This year we updated our six-year graduation rate to capture students who transferred into a college as well as first-time, full-time students.
- Value-added graduation rate (30%). This is the difference between a school’s actual graduation rate and its expected rate, based on the economic and academic profile of the student body (measured by the percentage of attendees receiving Pell grants, which are given to low-income students; the average standardized test scores and high school GPAs of incoming freshmen; and the share of traditional, full-time students on campus).
- Peer quality (10%). This is measured by the standardized test scores of entering freshmen (5%), and the percentage of accepted students who enroll in that college, known as the “yield” rate (5%).
- Instructor quality (10%). This is measured by the student-to-faculty ratio.
- Financial troubles (15%). Financial difficulties can affect the quality of education, and a growing number of schools are facing funding challenges. Financial troubles are signaled by a school having low bond ratings, being labeled by the U.S. Department of Education as having financial issues, or having layoffs in the past year.
- Pell Grant recipient outcomes (5%). This year we added a new measure that tracks how many Pell Grant recipients a school graduates, as a way to analyze how well schools support their low-income students.
Affordability (1/3 of weighting), which was calculated using:
- Net price of a degree (30%). This is the estimated amount a typical freshman starting in 2018 will pay to earn a degree, taking into account the college’s sticker price; how much the school awards in grants and scholarships; and the average time it takes students to graduate from the school, all as reported to the U.S. Department of Education. (The data on merit and need-based grants you’ll see reported on our website are numbers reported by the colleges to Peterson’s, and are for your information but are not used in the rankings.)
- Debt (20%). This takes into account both the estimated average student debt upon graduation (15%) and average amount borrowed through the parent federal PLUS loan programs (5%).
- Student loan repayment and default risk (15%). This combines the percentage of students who have paid down at least $1 in principal within five years of payments, as reported on the federal College Scorecard, and the Student Loan Default Risk Index, a calculation that expresses the number of borrowers who default on their federal student loans as a percentage of the school’s enrollment.
- Value-added student loan repayment measures (15%). These are the school’s performance on the student loan repayment and default measures after adjusting for the economic and academic profile of the student body.
- Affordability for low-income students (20%). This is based on federally collected data on the net price that students from families earning $0 to $30,000 pay.
Outcomes (1/3 of weighting), which was calculated using:
- Graduates’ earnings (12.5%), as reported by alumni to PayScale.com. This includes early career earnings, three years after graduation (7.5%), and mid-career earnings, 20 years after graduation, for those whose education stopped at a bachelor’s degree (5%).
- Earnings adjusted by majors (15%). To see whether students at a particular school earn more or less than would be expected given the subjects they choose to study, we adjusted PayScale.com’s data for the mix of majors at each school, for both early career earnings (10%) and mid-career earnings (5%).
- College Scorecard 10-year earnings (10%). The earnings of federal financial aid recipients at each college, as reported to the IRS 10 years after the student started at the college.
- Value-added earnings (12.5%). To see if a school is helping launch students to better-paying jobs than competitors that take in students with similar academic and economic backgrounds, we adjusted PayScale.com’s earnings data — both early career earnings (7.5%) and mid-career earnings (5%) — for the student body’s average test scores and high school GPA, and for the percentage of low-income students at each school.
- College Scorecard employment outcomes (25%). This includes two measures: the share of alumni who are neither working nor enrolled six years after starting college (12.5%) and the share of alumni who are earning less than $25,000 — roughly equivalent to the earnings of a high school graduate — six years after starting (12.5%).
- Job meaning (5%). We used the average score of each school’s alumni on PayScale.com’s survey question, “Does your work make the world a better place?”
- Socio-economic mobility index (20%). We included data provided by the Equality of Opportunity Project that reveals the percentage of students each school moves from low-income backgrounds to upper-middle-class jobs by the time they reach their mid-30s.
Finally, we used statistical techniques to turn all the data points into a single score and ranked the schools based on those scores.
For a more detailed description of the methodology, read on.
Money’s Methodology, in Detail
Money’s Best Colleges for Your Money rankings combine the most accurate pricing estimates available with indicators of alumni financial success, along with a unique analysis of how much value a college adds when compared to other schools that take in similar students.
We estimate a college’s “value added” by calculating its performance on important measures such as graduation rates, student loan repayment and default rates, and post-graduation earnings, after adjusting for the types of students it admits. We believe this analysis gives students and parents a much better indication of which colleges will provide real value for their tuition dollars.
In building our rankings, Money focused on the three basic factors that surveys show are the most important to parents and students:
- Quality of education
- Affordability
- Outcomes
Because these three factors are so interrelated and crucial to families, we gave them equal weights in our ranking.
To gauge how well a college is performing relative to its peers, we gathered federal and college data on the admissions selectivity of each school, such as average test scores and grade point averages of students at each college; the percentage of each graduating class receiving degrees in each major; and the percentage of students with incomes low enough to qualify for Pell Grants (the large majority of which go to families with incomes below $50,000). We then used the statistical technique of regression analysis to determine the impact of a student’s test scores, economic background, and college major on key factors, such as graduation rates and future earnings. That enables us to see how much better or worse a particular college performed than would be typical for schools with similar students.
Value-added measures are a way to capture some parts of a college’s quality that more commonly used objective numbers can’t capture, by giving some indication of how a college affects graduates’ outcomes — regardless of where they started. We used this estimate of comparative performance by the college as an important part of the ranking, as you’ll see below.
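As a rough sketch of that regression step (the schools, scores, and rates below are invented for illustration and are not Money’s actual data or model):

```python
import numpy as np

# Hypothetical inputs for five schools: average SAT score, share of
# Pell Grant recipients, and actual six-year graduation rate.
sat = np.array([1050.0, 1200.0, 1350.0, 1100.0, 1450.0])
pell_share = np.array([0.45, 0.28, 0.12, 0.38, 0.10])
grad_rate = np.array([0.55, 0.70, 0.88, 0.66, 0.92])

# Ordinary least squares: predict each school's graduation rate from
# the academic and economic profile of its student body.
X = np.column_stack([np.ones_like(sat), sat, pell_share])
coef, *_ = np.linalg.lstsq(X, grad_rate, rcond=None)
predicted = X @ coef

# "Value added" is the gap between actual and predicted performance:
# a positive number means the school outperforms schools with
# similar students.
value_added = grad_rate - predicted
```

In practice Money’s models use more predictors and many more schools, but the logic is the same: the residual, not the raw rate, is what gets ranked.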
Money assigned each indicator a weight based on our analysis of the importance of the factor to families, the reliability of the data, and our view of the value of the information the data provided.
To ensure that we fairly compared apples with apples, we decided to analyze only those colleges that, in our view, passed these minimal quality and data tests. We decided that to be included in our rankings, a college must meet these criteria:
- Be a public or private nonprofit college in the United States that enrolls freshmen.
- Have at least 500 in-person undergraduate students and at least 150 full-time first-time (FTFT) undergraduate students.
- Charge tuition in dollars. We screened out military academies that require a commitment of service in exchange for free tuition because we don’t know how to value the cost of the time commitment and risk.
- Have sufficient reliable data to be analyzed.
- Not show signs of financial distress, as defined by having at least two of the following indicators: Low ratings by bond-rating agencies, warnings from accreditors, layoffs in the last year, being placed on “heightened cash management” by the U.S. Department of Education.
- Have a graduation rate that was at or above the median for its institutional category (public, private, or HBCU). Or have a high “value-added” graduation rate (in other words: score in the top 25% of our test of graduation rates after accounting for the average standardized test scores, average high school grade point average, share of entering class that are first-time, full-time students, and percentage of students receiving a Pell grant).
- Have a Cohort Default Rate (CDR) lower than 25%.
This eliminated some colleges that may be good values, but might be facing temporary financial difficulties or be too small for us to evaluate. But it left us with a robust universe of more than 700 colleges. In our view, even the lowest-ranked of the schools on our list demonstrate that they provide at least some value for your tuition dollars.
We then used the following data and methodologies in our three basic categories to create our rankings:
Quality of Education: 1/3 of weighting
Changes from our 2017 rankings: For 2018, we introduced two pieces of data based on newly available information from the Education Department on graduation rates for transfer students and Pell Grant recipients. We slightly reduced the weight on financial trouble to make room for the Pell Grant measure.
For this factor, we combined the following seven indicators:
Graduation rates: 30%. Education experts and college officials generally agree that one of the most important reflections of a college’s quality is its graduation rate. (The U.S. Department of Education calls them “helpful indicators of institutional quality.”) Many rankings use the most commonly cited graduation rate statistic — the percentage of freshmen who graduate within six years. Yet that rate is based only on first-time, full-time students, meaning it misses a large (and growing) population of students. To help address that, this year we used newly available federal data to expand that graduation rate by specifically including the share of students who transferred into a college and earned a degree within six years. The included graduation rate is calculated using bachelor’s degree-seeking students only.
Value-added graduation rate: 30%. Many education experts and college officials point out that the basic graduation rate number, while useful, is an insufficient indicator of a college’s value because research shows that wealthier students and students who got good grades in high school are more likely to graduate on time no matter what college they attend — so elite, expensive schools would be expected to have high graduation rates. For that reason, we also calculated each school’s relative performance after accounting for the economic background and academic preparation of its students. The higher a school’s graduation rate was above the rate that would be predicted for a school with that particular mix of students, the more value that particular college is assumed to have added. This “value-added” graduation rate analysis is widely accepted. (A 2013 OECD paper found that such “value-added measurement provides a ‘fairer’ estimate of the contribution of educational institutions.”)
Peer quality: 10%. Decades of research have shown that undergraduates have a major impact on their peers. Students who room with better students get better grades, for example. By contrast, students surrounded by less studious peers — for example, heavy drinkers, video game players, etc. — study less and get worse grades. And students who room or socialize with more successful students tend to get better jobs upon graduation.
Our peer quality measure consists of these two indicators:
- Academic preparation of students (5%). We used composite test scores of incoming freshmen to estimate the academic qualifications of the student body. While there is much debate over the usefulness and validity of standardized tests, the SAT and ACT tests currently provide the only nationally comparable data on student abilities.
- Yield (5%). The federally reported “yield” is the percentage of accepted students who enroll in a given college. The higher the yield, the more likely it is that the school was the student’s first choice, or best option, and that applicants perceive the college’s quality as high.
Faculty: 10%. Research shows that students who get more individual attention from faculty tend to achieve more, both in college and after graduation. (See, for example, the recent Gallup-Purdue study.) So we include each school’s student-to-faculty ratio.
Financial troubles: 15%. We added this factor in 2017 because financial difficulties affect the quality of education through layoffs of staff, closure of programs, and reductions of services. And a growing number of schools are facing significant funding problems. A school was deemed to be facing financial trouble if it:
- Is on the “Heightened Cash Monitoring 2” list published by the U.S. Department of Education, reflecting concerns about the school’s financial stability.
- Has bonds that are rated below-investment grade by Moody’s or Standard & Poor’s.
- Has received warnings from its accreditor.
- Has had staff layoffs in the most recent academic year.
Pell Grant recipient outcomes: 5%. We added this factor in 2018 as a way to analyze how well schools help low-income students succeed. We used the number of Pell recipients in the bachelor’s degree-seeking cohort who actually earned a degree six years after enrolling. With this new measure, schools get credit for each additional Pell Grant recipient they graduate — so schools that serve a large number of low-income students and serve them well rise to the top.
Affordability: 1/3 of weighting
Changes from our 2017 rankings: For 2018, we reduced two student loan default measures from a combined 15% to a combined 10% and instead put more emphasis on the two student loan repayment measures. We — and many experts — believe that looking at how well graduates are managing repayment is a better way to gauge the burden of student loans.
For this factor we used eight indicators, weighted as shown:
Net price of a degree: 30%. Money developed a unique and — according to many experts — more accurate estimate of college prices. We started with the “sticker” price provided by the college to the federal government. The full cost of attendance includes tuition, fees, room, board, books, travel, and miscellaneous costs. For public colleges, we used the in-state tuition and fees.
We then subtracted the average amount of institutional aid provided per student by the college. That gave us an estimate of the net price the college charged an average student for the most recent academic year these data were available. (The data on merit and need-based grants you’ll see reported on our website are numbers reported by the colleges to Peterson’s, and are for your information but are not used in the rankings.)
Next, we used federal data on the percentage of students who graduate from that college in four, five, and six years respectively to calculate an average time to degree, which for the vast majority of schools is now more than four years.
Judith Scott-Clayton of Columbia University, for example, has found that the average college graduate pays for 4.5 years of college, not just four. However, because Money’s 2018 rankings include only those schools with the best graduation rates, the average time to degree for all of the schools on our list is just 4.3 years.
We generated an estimated net price for each of six years, inflating each year’s price slightly, since tuition prices typically rise each year. We then created a weighted average total net price, based on those prices and the proportions of students who complete their degrees in four, five, and six years. A handful of exceptions are made for institutions with longstanding co-op programs.
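The pricing arithmetic described above can be sketched as follows; every figure here is hypothetical, standing in for the federally reported data Money actually uses:

```python
# Hypothetical inputs for one school.
sticker = 45_000            # full annual cost of attendance
institutional_aid = 18_000  # average grants/scholarships from the school itself
inflation = 0.03            # assumed annual tuition growth

# Net price for year one, then an inflated price for each of six years.
year1_net = sticker - institutional_aid
yearly = [year1_net * (1 + inflation) ** y for y in range(6)]

# Hypothetical shares of graduates finishing in exactly 4, 5, and 6 years.
finish_shares = {4: 0.60, 5: 0.25, 6: 0.15}

# Weighted average total net price of a degree: the cost of each
# possible path to graduation, weighted by how many students take it.
total_net_price = sum(share * sum(yearly[:years])
                      for years, share in finish_shares.items())
```

Because some students take five or six years, the weighted total comes out higher than four years of net price alone — which is precisely the point of folding time-to-degree into the estimate.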
The federal net price estimate for a year’s costs is typically lower than Money’s estimate because the Department of Education subtracts almost all grants — federal, state and institutional — while we only subtract aid provided by the school, for reasons described below.
Money gives the net price sub-factor a very heavy weighting because surveys show that the cost of college is now one of families’ biggest worries. A 2014 Harvard Institute of Politics survey reported, for example, that 70% of young adults say finances were a major factor in their college decision.
Educational debt: 20%. Surveys show debt to be another of families’ biggest worries.
Our educational debt assessment is based on these two indicators:
- Student borrowing (15%). Our measure of student borrowing starts with the average dollar value of federal student loans by entering undergraduates, multiplies that by the share of entering undergraduates that take out a federal student loan, then multiplies that again by the institution’s average time to degree. This generates the average total federal student loan amount across all entering undergraduate students. These data are reported by the federal government or calculated using federally reported data. A handful of exceptions are made for institutions with longstanding co-op programs.
- Parent borrowing (5%). The federal government reports the total amount of parent PLUS loans awarded to parents at each college each year. We divided this number by the school’s total undergraduate enrollment to calculate an average parent PLUS debt per undergraduate student. While other organizations generally don’t include parental debt in their rankings, Money believes parent educational borrowing is a financial burden, and should be an important consideration.
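The two borrowing calculations above can be sketched with invented figures (none of these numbers come from Money’s data):

```python
# Student borrowing: average federal loan among borrowers, scaled by
# the share of entering undergrads who borrow and the school's
# average time to degree. All inputs are hypothetical.
avg_loan_per_borrower = 6_500
share_borrowing = 0.55
avg_time_to_degree = 4.3

# Average total federal student loan amount across ALL entering
# undergraduates, borrowers and non-borrowers alike.
avg_student_debt = avg_loan_per_borrower * share_borrowing * avg_time_to_degree

# Parent borrowing: total PLUS dollars awarded in a year, spread
# across the whole undergraduate enrollment.
total_plus_awarded = 4_000_000
undergrad_enrollment = 5_000
avg_parent_debt = total_plus_awarded / undergrad_enrollment
```

Averaging over all students, rather than just borrowers, is what lets schools where few students need loans score well on this measure.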
Ability to repay: 30%. The ability to repay loans taken out to finance a college education is another indication of a school’s affordability for its students.
We evaluated ability to repay using these four indicators:
- Student loan default risk index (SDRI) (5%). Each year, the federal government publishes the number of former students who left college three years ago and have since defaulted on their federal student loans. Using a methodology proposed by The Institute for College Access and Success (TICAS), Money adjusts these numbers for the share of undergraduate students at the college who take out federal student loans. TICAS says this is a fairer and more accurate indicator of the odds that any particular student at the college will end up defaulting on a student loan.
- Value-added SDRI (5%). We analyzed how much higher or lower the school’s SDRI was than that of schools with similar student bodies.
- Federal student loan repayment (10%). We used the data released on the federal College Scorecard on the percentage of student borrowers who pay down at least $1 on their principal in their first five years of payments. We believe this provides different and important information about alumni financial stability since some schools have started to game default rates by helping students to take advantage of hardship programs that delay defaults until after the three-year measuring window is closed.
- Value-added student loan repayment (10%). We analyzed how much higher or lower the school’s repayment rate was than that of schools with similar student bodies.
Affordability for low-income families: 20%. The federal government reports the average price paid at each school by federal aid recipients who are in the lowest income group ($0-$30,000). Money uses this as an indicator of how well the college targets need-based financial aid. At some schools, those from the lowest-income group are expected to come up with more than $20,000 per year — which may be more than the entire annual income of the family.
Outcomes: 1/3 of weighting
Changes from our 2017 rankings: For 2018, we eliminated a previous measure we called the estimated market value of alumni skills (it was weighted 10%), based on a Brookings Institution report that gathered the most commonly listed skills on alumni LinkedIn profiles and assigned them values based on data from analytics firm BurningGlass Technologies. To replace it, we increased the weight given to the share of alumni who are unemployed six years after school and the share of alumni earning at least $25,000 — both of which we felt were important measures of alumni outcomes that did not focus solely on high earnings. Also, PayScale.com changed its methodology this year for collecting the salary data it provides to Money, setting “early career” to exactly three years’ experience (instead of within five) and mid-career to exactly 20 years’ experience (instead of more than 10 years). As a result, we do not recommend comparing year-to-year performance on the PayScale.com earnings and value-added earnings measures.
For our third category, we used a total of 12 indicators, weighted as shown below.
College Scorecard earnings: 10%. Median earnings for 2013-14, as reported to the IRS, of people who started as college freshmen and received federal aid in 2002 or 2003 and are currently working and not enrolled, published on the U.S. Department of Education’s College Scorecard.
PayScale.com earnings: 40%. Although the earnings data reported on PayScale.com are based on what visitors to the site voluntarily enter, our previous analysis shows that they are reliable. PayScale provided Money with aggregated data on more than 2.3 million people who in the last nine years have filled out an online form that asks what their job is, what their salary is, where they went to college, what they majored in, and when they graduated. This adds important information to the federal earnings data, because PayScale’s sample covers only the students who earned a bachelor’s degree from the school, not the dropouts. It also screens out those who have earned graduate degrees, and thus may be earning more because of their graduate, rather than undergraduate, education.
- Early career earnings (7.5%). Earnings as reported for graduates with three years’ work experience.
- Mid-career earnings (5%). We used the average earnings reported by those who do not have graduate degrees and have 20 years’ work experience.
- Value-added early career earnings (7.5%). We analyzed how the school’s early-career earnings compared with those of schools with similar student bodies.
- Value-added mid-career earnings (5%). We analyzed how the school’s mid-career earnings compared with those of schools with similar student bodies.
- Major-adjusted early career earnings (10%). We estimated the impact of each school’s mix of majors on its average earnings, subtracted out that impact, and then calculated how the school’s adjusted earnings compared with what would be typical for schools with similar major mixes. This allows us to compare, for example, schools that specialize in producing teachers with schools that produce engineers, and to determine which art schools produce the highest-earning artists.
- Major-adjusted mid-career earnings (5%).
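The major-mix adjustment can be sketched with invented figures (the majors, salaries, and shares below are hypothetical, and the real analysis uses regression across all schools rather than this simple subtraction):

```python
# Hypothetical national average early-career pay by major.
major_pay = {"engineering": 70_000, "education": 40_000, "business": 55_000}

# One school's (hypothetical) mix of graduates and observed average pay.
school_mix = {"engineering": 0.10, "education": 0.60, "business": 0.30}
school_avg_pay = 50_000

# Pay we'd expect from the school's major mix alone.
expected_pay = sum(share * major_pay[m] for m, share in school_mix.items())

# A positive gap means graduates out-earn what their majors predict --
# a teacher-heavy school can still look strong on this measure.
major_adjusted_gap = school_avg_pay - expected_pay
```

Here the school’s majors predict modest pay, so its $50,000 average counts in its favor rather than against it.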
Job meaning: 5%. Our 2016 Money/Barnes & Noble College survey found that parents and students alike put a higher emphasis on finding a “fulfilling” career than on finding a high-paying job. So we include alumni responses to a PayScale.com survey question about whether their job “makes the world a better place.”
Share of alumni who are unemployed six years after starting school: 12.5%. This data from the federal College Scorecard shows the percentage of people who started as freshmen in 2006-07 or 2007-08, and were neither enrolled in school nor employed in 2013-14.
Share of alumni earning at least $25,000 within six years of starting college: 12.5%. The federal government publishes this data to show which colleges produce alumni who are earning at least as much as the typical 2013-14 high school graduate.
Socio-economic mobility: 20%. In January 2017, a team of economists led by Raj Chetty of Stanford published a paper showing data for each college on how many low-income students (i.e., with family incomes in the lowest quintile, or below $25,000 a year) were earning incomes that put them in the top quintile for their age group (earning at least $58,000) by 2014. We used the Equality of Opportunity Project’s “mobility rate” in the rankings this year because that is the indicator that shows “which colleges are contributing to the American Dream,” according to a co-author of the study, Brown University economist John Friedman.
How We Calculated These Rankings
For each of our data points, for each college, we calculated a “Z-score” — a statistical technique that turns lots of differently scaled data into a standard set of numbers that are easily comparable. Each Z-score tells you how far any particular number — such as, say, a graduation rate of 75% — is from the average for the entire group under consideration. We then rescaled each of the three main factors so that they had the same ranges. Finally, we ranked the schools according to their total score across our three main factors.
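A minimal illustration of the Z-score step, using a handful of invented graduation rates:

```python
import statistics

# Hypothetical graduation rates for five schools.
grad_rates = [0.55, 0.62, 0.75, 0.81, 0.92]

mean = statistics.fmean(grad_rates)
stdev = statistics.pstdev(grad_rates)  # population st. dev. over the full group

# Each Z-score says how many standard deviations a school sits from
# the group average, putting differently scaled measures (rates,
# dollars, ratios) on one common footing.
z_scores = [(rate - mean) / stdev for rate in grad_rates]
```

Once every indicator is expressed this way, weights can be applied and the results summed into a single comparable score per school.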
While we believe our rankings are better than any others out there, we know they are not perfect. Here are some caveats that we are aware of and hope to address in future rankings.
Student learning. What students actually learn is a big unanswered question in higher education. The various assessments of college student learning are controversial, few colleges use them and very few of the ones that do release the results publicly. In addition, we were not able to find good data on basic indicators of academic rigor, such as the number of pages of writing or reading required per assignment. We will continue to explore the data in hopes of finding useful indicators of student learning.
Geographical cost of living adjustments. Some colleges in low-wage, low-living-cost areas, such as parts of the South and Midwest, may get lower rankings because we have not adjusted the earnings data for cost of living. But several factors added to the Money rankings recently lessen the impact of any geographic wage differential. The College Scorecard employment data treat all regions equally, as does PayScale’s “Job Meaning” survey. The student loan default and repayment data should also, in theory, treat all regions equally, since they show which alumni have sufficient disposable income to repay their loans, possibly benefiting students in low-cost areas. On the other side of the equation, we also don’t adjust for cost of living as part of our net price calculation for colleges that are in expensive areas, such as major cities.
Out-of-state public college tuition. Many students are interested in attending public colleges out of state. But public colleges charge higher tuition to out-of-state students. We will consider developing a cost and value comparison for out-of-state students.
Net prices. Money’s estimated net price is likely to be higher than the average price actually paid by most families. It is crucial to understand that while the Money net price estimate is based on the average price charged by the college, you and your family will pay less than that if your student receives any federal, state, or private scholarships. As an analogy, if you’re buying a can of soup, you have to pay what the grocery store charges, unless you have a coupon. Just as coupons can be used at competing supermarkets, most federal, state, and private scholarships can be used at many competing colleges. So we help you identify which college has the lowest net price, to which you can then apply any additional scholarships. In addition, our net price is based on the average student’s time-to-degree. Your student may finish in four years. And while many students take more than four years to finish a degree, they aren’t necessarily paying full tuition for the five or six years before they graduate, since they may, for example, take a year off to work. Money attempted to account for this by adjusting the estimated time to degree for all schools with large and established co-op work programs, such as Northeastern University. In addition, Money is not adding to the cost of a degree any amount for “opportunity cost,” which is the amount in earnings a student loses by not finishing a degree on time and delaying entry into the higher-paying job market of college graduates. So, while we may, in some cases, be overestimating the price of a degree, we are also underestimating the total economic expense to a student of schools that don’t speed them to graduation.
Former Money senior writer Kim Clark was instrumental in developing our methodology and rankings. Mark Schneider, now the director of the National Center for Education Statistics, and Matthew Soldner, now the commissioner of the National Center for Education Evaluation and Regional Assistance at NCES, consulted with Money on the creation of these rankings when they were at the American Institutes for Research. Money is thankful to several higher education experts whose insights in recent years helped us make decisions on what data to use and how to weight it. This year, in particular, we thank Michael Itzkowitz, senior fellow at Third Way and former director of the College Scorecard; Elise Miller, vice president for research and policy analysis at the Association of Public and Land-grant Universities; and Ben Miller, senior director of postsecondary education at the Center for American Progress.
The Money editorial staff is solely responsible for the final ranking decisions.