More than one-third of parents surveyed by Gallup last spring said they were anxious about paying for their children’s college. That’s no surprise: after years of continuous increases, college sticker prices are so high now that only families who have saved significantly for 18 years or who are independently wealthy can afford to pay them outright.
Yet affordable, high-quality colleges do exist—if you know where to look. To help families in their college search, MONEY’s sixth annual Best Colleges for Your Money ranking offers a practical analysis of more than 700 four-year colleges.
Data collection and analysis for the rankings were led by American Institutes for Research (AIR) researcher Dr. Audrey Peek, with the help of research associate Deaweh Benson and research assistant Merykokeb Belay. Quality control was led by Dr. Eric Larsen of AIR. MONEY’s editorial staff, however, is solely responsible for the final ranking decisions.
Here is the methodology behind our new 2019 rankings, first in a short version, followed by a more comprehensive one.
MONEY’s Methodology, in Brief
To make our initial cut, a college had to:
- Have at least 500 students.
- Have sufficient, reliable data to be analyzed.
- Not be in financial distress.
- Have a graduation rate that was at or above the median for its institutional category (public, private or historically black college or university), or have a high “value-added” graduation rate (in other words: score in the top 25% of graduation rates after accounting for the average test scores and percentage of low-income students among its enrollees).
A total of 744 schools met these requirements. We ranked them on 26 factors in three categories:
Quality of education (1/3 of weighting), which was calculated using:
- Six-year graduation rate (30%). This measure is widely recognized as one of the most important indicators of a college’s quality. We adjust our six-year graduation rate to capture students who transferred into a college as well as first-time students.
- Value-added graduation rate (30%). This is the difference between a school’s actual graduation rate and its expected rate, based on the economic and academic profile of the student body (measured by the percentage of attendees receiving Pell grants, which are given to low-income students; the average standardized test scores and high school GPAs of incoming freshmen; and the share of traditional, full-time students on campus).
- Peer quality (10%). This is measured by the standardized test scores of entering freshmen (5%), and the percentage of accepted students who enroll in that college, known as the “yield” rate (5%).
- Instructor quality (10%). This is measured by the student-to-faculty ratio.
- Financial troubles (15%). Financial difficulties can affect the quality of education, and a growing number of schools are facing funding challenges. Financial troubles are signaled by a college having low bond ratings, being labeled by the U.S. Department of Education as having financial issues, or having layoffs in the past year.
- Pell Grant recipient outcomes (5%). This measures how many Pell Grant recipients a school graduates, as a way to analyze how well schools support their low-income students.
Affordability (1/3 of weighting), which was calculated using:
- Net price of a degree (30%). This is the estimated amount a typical freshman starting in 2019 will pay to earn a degree, taking into account the college’s sticker price; how much the school awards in grants and scholarships; and the average time it takes students to graduate from the school, all as reported to the U.S. Department of Education. (The data on merit and need-based grants you’ll see reported on our website are numbers reported by the colleges to Peterson’s. We publish these for your information, but they are not used in the rankings.)
- Affordability for low-income students (20%). This is based on federally collected data on the net price for one year paid by students from families earning $0 to $30,000.
- Debt (20%). For all undergraduates, this takes into account both the estimated average student debt upon graduation (15%) and the average amount borrowed through the federal parent PLUS loan program (5%).
- Ability to repay debt (15%). Half of this measure is the percentage of students who have paid down at least $1 in principal within five years of entering repayment, as reported on the federal College Scorecard (7.5%). The other half is the Student Loan Default Risk Index (7.5%), a calculation that expresses the number of borrowers who default on their federal student loans as a percentage of the school’s enrollment.
- Value-added student loan repayment measures (15%). These are the school’s performance on the student loan repayment and default measures after adjusting for the economic and academic profile of the student body.
Outcomes (1/3 of weighting), which was calculated using:
- Graduates’ earnings (12.5%), as reported by alumni to PayScale.com. This includes earnings for alumni whose education stopped at a bachelor’s degree, measured three years after graduation (7.5%) and 20 years after graduation (5%).
- Graduates’ earnings adjusted by majors (15%). To take into account the subjects students choose to study, this measure adjusts PayScale.com’s data for the mix of majors at each school for early career earnings (10%) and mid-career earnings (5%).
- College Scorecard employment outcomes (25%). This includes two measures for federal financial aid recipients: the share of alumni who are neither working nor enrolled six years after starting college (12.5%) and the share of alumni who are earning less than $25,000 — roughly equivalent to the earnings of a high school graduate — six years after starting (12.5%).
- Earnings 10 years after college entry (10%). This captures the earnings of federal financial aid recipients at each college 10 years after the student started at the college, as reported to the IRS and presented in the College Scorecard.
- Value-added earnings (12.5%). To capture whether a school is helping launch students to better-paying jobs than competitors that take in students with similar academic and economic backgrounds, this measure adjusts PayScale.com’s earnings data for the student body’s average test scores and high school GPA, and for the percentage of low-income students at each school.
- Job impact (5%). This measure is the average score of each school’s alumni on PayScale.com’s survey question of “Does your work make the world a better place?”
- Socio-economic mobility index (20%). We included data provided by Opportunity Insights that shows the percentage of students at each school who move from low-income backgrounds to upper-middle-class jobs by the time students reach their mid-30s.
For a more detailed description of the methodology, read on.
MONEY’s Methodology, in Detail
MONEY’s Best Colleges for Your Money rankings combine the most accurate pricing estimates available with indicators of alumni financial success, along with a unique analysis of how much value a college adds when compared to other schools that take in similar students.
We estimate a college’s “value added” by calculating its performance on important measures such as graduation rates, student loan repayment and default rates, and post-graduation earnings, after adjusting for the types of students it admits. We believe this analysis gives students and parents a much better indication of which colleges will provide real value for their tuition dollars.
In building our rankings, MONEY focused on the three basic factors that surveys show are the most important to parents and students:
- Quality of education
- Affordability
- Outcomes
Because these three factors are so interrelated and crucial to families, we gave them equal weights in our ranking.
To gauge how well a college is performing relative to its peers, we gathered federal and college data on each school’s admissions selectivity (such as average test scores and average freshman GPA), the percentage of each graduating class receiving degrees in each major, and the percentage of students with incomes low enough to qualify for Pell Grants (the majority of which go to families with incomes below $50,000). We then used the statistical technique of regression analysis to determine the impact of a student’s test scores, economic background, and college major on key factors, such as graduation rates and future earnings. That enables us to see how much better or worse a particular college performed than would be typical for schools with similar students.
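The regression step described above can be illustrated with a minimal sketch. This is not MONEY’s actual model (which uses multiple predictors); it fits a simple one-variable regression on made-up data and treats the residual — actual graduation rate minus the rate predicted from the student mix — as the “value added.”

```python
# Illustrative sketch (not MONEY's actual model): estimate a school's
# "value added" as the residual from a regression of graduation rate
# on a single predictor, here average SAT score. All data are hypothetical.
def fit_line(xs, ys):
    """Ordinary least squares for y = a + b*x; returns (intercept, slope)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return my - b * mx, b

# Hypothetical schools: (average SAT, actual six-year graduation rate)
sat = [1000, 1100, 1200, 1300, 1400]
grad = [0.45, 0.55, 0.60, 0.75, 0.80]

a, b = fit_line(sat, grad)
# Value added = actual rate minus the rate predicted for that student mix.
value_added = [g - (a + b * s) for s, g in zip(sat, grad)]
```

A school with a positive residual graduates more students than would be expected for schools admitting similar classes; a negative residual means it graduates fewer.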
Value-added measures are a way to capture some parts of a college’s quality that more commonly used objective numbers can’t capture. They give some indication of how a college affects graduates’ outcomes — regardless of where those graduates started.
MONEY assigned each indicator a weight based on our analysis of the importance of the factor to families, the reliability of the data, and our view of the value of the information the data provided.
To ensure that we fairly compared apples with apples, we decided to analyze only those colleges that, in our view, passed these minimal quality and data tests. We decided that to be included in our rankings, a college must meet these criteria:
- Be a public or private nonprofit college in the United States that enrolls freshmen.
- Have at least 500 in-person undergraduate students and at least 150 full-time first-time (FTFT) undergraduate students.
- Charge tuition in dollars. We screened out military academies that require a commitment of service in exchange for free tuition because we don’t know how to value the cost of the time commitment and risk.
- Have sufficient data to be analyzed. Any college missing graduation rates and PayScale earnings data was removed from our universe. And any college missing five or more metrics was removed from the final ranking.
- Not show signs of financial distress, as defined by having at least two of the following indicators: Low ratings by bond-rating agency Standard & Poor’s, warnings from accreditors, layoffs in the last year, or being placed on “heightened cash management 2” by the U.S. Department of Education.
- Have a graduation rate at or above the median for its institutional category (public, private, or HBCU), or have a high “value-added” graduation rate (that is, score in the top 25% of graduation rates after accounting for average standardized test scores, average high school grade point average, the share of the entering class that are first-time, full-time students, and the percentage of students receiving a Pell Grant). For these colleges, a graduation rate of 40% was used as a cutoff.
- Have a Cohort Default Rate (CDR) lower than 25%.
This eliminated some colleges that may be good values, but might be facing temporary financial difficulties or may be too small for us to evaluate. But it left a robust universe of more than 700 colleges. In our view, even the lowest-ranked of the schools on our list demonstrate that they provide at least some value for your tuition dollars.
We then used the following data and methodologies in our three basic categories to create our rankings:
Quality of Education: 1/3 of weighting
Changes from our 2018 rankings: this year, Moody’s declined to share its list of rated colleges with below-investment-grade bond ratings.
For this factor, we combined the following seven indicators:
Graduation rates: 30%. Education experts and college officials generally agree that one of the most important reflections of a college’s quality is its graduation rate. (The U.S. Department of Education calls them “helpful indicators of institutional quality.”) Many rankings use the most commonly cited graduation rate statistic — the percentage of freshmen who graduate within six years. Yet that rate is based only on first-time, full-time students, meaning it misses a large (and growing) population of students. To help address that, this year we also used newly available federal data to expand that graduation rate, specifically including the share of students who transferred into a college and earned a degree within six years. The included graduation rate is calculated using bachelor’s degree-seeking students only.
Value-added graduation rate: 30%. Many education experts and college officials point out that the basic graduation rate number, while useful, is an insufficient indicator of a college’s value because research shows that wealthier students and students who got good grades in high school are more likely to graduate on time no matter what college they attend — so elite, expensive schools would be expected to have high graduation rates. For that reason, we also calculated each school’s relative performance after accounting for the economic background and academic preparation of its students. The higher a school’s graduation rate was above the rate that would be predicted for a school with that particular mix of students, the more value that particular college is assumed to have added. This “value-added” graduation rate analysis is widely accepted. (A 2013 OECD paper found that such “value-added measurement provides a ‘fairer’ estimate of the contribution of educational institutions.”)
Peer quality: 10%. Decades of research have shown that undergraduates have a major impact on their peers. Students who room with better students get better grades, for example. By contrast, students surrounded by less studious peers — for example, heavy drinkers, video game players, etc. — study less and get worse grades. And students who room or socialize with more successful students tend to get better jobs upon graduation.
Our peer quality measure consists of these two indicators:
- Academic preparation of students (5%). We used composite test scores of incoming freshmen to estimate the academic qualifications of the student body. While there is much debate over the usefulness and validity of standardized tests, the SAT and ACT tests currently provide the only nationally comparable data on student abilities.
- Yield (5%). The federally reported “yield” is the percentage of accepted students who enroll in a given college. The higher the yield, the more likely it is that the school was the student’s first choice, or best option, and that applicants perceive the college’s quality as high.
Faculty: 10%. Research shows that students who get more individual attention from faculty tend to achieve more, both in college and after graduation. (See, for example, the recent Gallup-Strada study.) We include each school’s student-to-faculty ratio.
Financial troubles: 15%. We added this factor in 2017 because a growing number of schools are facing significant funding problems. Financial difficulties affect the quality of education through layoffs of staff, closure of programs, and reduction of services. A school was deemed to be facing financial trouble if it:
- Is on the “Heightened Cash Monitoring 2” list published by the U.S. Department of Education, reflecting concerns about the school’s financial stability.
- Has bonds that are rated below-investment grade by Standard & Poor’s.
- Has received warnings from its accreditor.
- Has had staff layoffs in the most recent academic year that were covered in a media outlet.
Pell Grant recipient outcomes: 5%. We added this factor in 2018 as a way to analyze how well schools help low-income students succeed. We used the number of Pell recipients in the bachelor’s degree-seeking cohort who actually earned a degree six years after enrolling. With this measure, schools get credit for each additional Pell Grant recipient they graduate — so schools that serve a large number of low-income students and serve them well rise to the top.
Affordability: 1/3 of weighting
For this factor we used eight indicators, weighted as shown:
Net price of a degree: 30%. MONEY developed a unique and — according to many experts — more accurate estimate of college prices. We started with the “sticker” price provided by the college to the federal government. The full cost of attendance includes tuition, fees, room, board, books, travel, and miscellaneous costs. (In calculating tuition and fees for public colleges, in-state amounts were used.)
We then subtracted the average amount of institutional aid provided per student by the college. That gave us an estimate of the net price the college charged an average student for the most recent academic year these data were available. (The data on merit and need-based grants you’ll see reported on our website are numbers reported by the colleges to Peterson’s. They are not used in the rankings.)
Next, we used federal data on the percentage of students who graduate from that college in four, five, and six years respectively to calculate an average time to degree, which for the vast majority of schools is now more than four years. The National Student Clearinghouse Research Center reports students at private, four-year colleges are enrolled an average 4.8 years before graduating. At public colleges, it’s 5.2 years.
We generated an estimated net price for a series of six years, inflating each slightly since tuition prices typically rise each year. We then created a weighted average total net price, based on those prices and the proportion of students who complete their degrees within four years, five years, and in six years. A handful of exceptions are made for institutions with longstanding co-op programs, as those programs intentionally extend the length of a degree and no tuition is charged during co-op periods.
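The weighted-average net price calculation above can be sketched as follows. The first-year price, the 3% inflation assumption, and the completion shares are all hypothetical placeholders, not MONEY’s actual inputs.

```python
# Illustrative sketch of the weighted-average total net price idea.
# All numbers, including the inflation rate, are hypothetical.
def total_net_price(year1_price, years, inflation=0.03):
    """Cumulative net price for a student who takes `years` to finish,
    inflating the annual price slightly each year."""
    return sum(year1_price * (1 + inflation) ** t for t in range(years))

year1 = 30000  # hypothetical first-year net price after institutional aid
# Hypothetical shares of an entering class finishing in 4, 5, and 6 years.
shares = {4: 0.60, 5: 0.25, 6: 0.15}

# Weight each completion path's total cost by the share of students on it.
weighted_avg = sum(share * total_net_price(year1, yrs)
                   for yrs, share in shares.items())
```

The longer a school’s typical time to degree, the more heavily the pricier five- and six-year paths weigh on its estimated cost of a degree.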
The federal net price estimate for a year’s costs is typically lower than MONEY’s estimate because the Department of Education subtracts almost all grants — federal, state and institutional — while we only subtract aid provided by the school, for reasons described in the ‘caveats’ section below.
MONEY gives the net price sub-factor a very heavy weighting because surveys show that the cost of college is now one of families’ biggest worries. About half of freshmen in 2017 said cost was a “very important” factor in where they ended up attending, according to an annual survey from the Higher Education Research Institute at UCLA.
Affordability for low-income families: 20%. The federal government reports the annual average price paid at each school by federal aid recipients who are in the lowest income group ($0-$30,000). MONEY uses this as an indicator of how well the college targets need-based financial aid. At some schools, those from the lowest-income group are expected to come up with more than $20,000 per year — which may be more than the entire annual income of the family.
Educational debt: 20%. Surveys show debt to be another of most families’ biggest worries, and research shows student debt can have long-lasting effects on young adults’ finances.
Our educational debt assessment is based on these two indicators:
- Student borrowing (15%). Our measure of student borrowing starts with the average dollar value of federal student loans by entering undergraduates, multiplies that by the share of entering undergraduates that take out a federal student loan, then multiplies that again by the institution’s average time to degree. This generates the average total federal student loan amount across all entering undergraduate students. These data are reported by the federal government or calculated using federally reported data. A handful of exceptions are made for institutions with longstanding co-op programs.
- Parent borrowing (5%). The federal government reports the total amount of parent PLUS loans awarded to parents at each college each year. We divided this number by the school’s total undergraduate enrollment to calculate an average parent PLUS debt per undergraduate student. While other organizations generally don’t include parental debt in their rankings, MONEY believes parent educational borrowing is a financial burden, and should be an important consideration.
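The two borrowing estimates above reduce to simple arithmetic, sketched below with hypothetical figures (no real school’s data):

```python
# Illustrative sketch of the student- and parent-borrowing estimates.
# All figures are hypothetical.
avg_loan_per_borrower = 6500   # avg annual federal loan among borrowers
share_who_borrow = 0.55        # share of entering undergrads who borrow
avg_time_to_degree = 4.8       # average years enrolled before graduating

# Average total federal student loan amount across ALL entering
# undergraduates, borrowers and non-borrowers alike.
avg_student_debt = (avg_loan_per_borrower * share_who_borrow
                    * avg_time_to_degree)

# Parent borrowing: total annual PLUS dollars divided by enrollment.
total_plus_loans = 4_000_000   # hypothetical annual parent PLUS volume
undergrad_enrollment = 5_000
avg_parent_debt = total_plus_loans / undergrad_enrollment
```

Multiplying by the share who borrow spreads the debt over the whole student body, so a school where few students borrow scores better than one with the same loan size but widespread borrowing.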
Ability to repay: 30%. The ability to repay loans taken out to finance a college education is another indication of a school’s affordability for its students.
We evaluated ability to repay using these four indicators:
- Student loan default risk index (SDRI) (5%). Each year, the federal government publishes the number of former students who left college three years ago and have since defaulted on their federal student loans. Using a methodology proposed by The Institute for College Access and Success (TICAS), MONEY adjusts these numbers for the share of undergraduate students at the college who take out federal student loans. TICAS says this is a fairer and more accurate indicator of the odds that any particular student at the college will end up defaulting on a student loan.
- Value-added SDRI (5%). We measured whether a college’s SDRI was above or below the rates typical of schools with similar student bodies.
- Federal student loan repayment (10%). We used the data released on the federal College Scorecard on the percentage of student borrowers who pay down at least $1 on their principal in their first five years of payments. We believe this provides different and important information about alumni financial stability since some schools have started to game default rates by helping students to take advantage of hardship programs that delay defaults until after the three-year measuring window is closed.
- Value-added student loan repayment (10%). We measured whether a college’s student loan repayment rate was above or below the rates typical of schools with similar student bodies.
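The TICAS-style adjustment behind the default risk index can be sketched as scaling the official cohort default rate (defaults among borrowers) by the share of students who borrow, giving defaults per enrolled student. The figures below are hypothetical.

```python
# Illustrative sketch of the Student Loan Default Risk Index idea:
# adjust the cohort default rate for how many students borrow at all.
# All numbers are hypothetical.
def default_risk_index(cohort_default_rate, borrowing_rate):
    """Estimated chance that any given student at the school ends up
    defaulting on a federal student loan."""
    return cohort_default_rate * borrowing_rate

# School A: high default rate among borrowers, but few students borrow.
school_a = default_risk_index(cohort_default_rate=0.10, borrowing_rate=0.20)
# School B: lower default rate among borrowers, but most students borrow.
school_b = default_risk_index(cohort_default_rate=0.06, borrowing_rate=0.80)
```

Here School B looks riskier overall despite its lower official default rate, which is the distortion the adjustment is meant to correct.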
Outcomes: 1/3 of weighting
For our third category, we used a total of 12 indicators, weighted as shown below.
Earnings 10 years after college entry: 10%. Median earnings for 2014-15, as reported to the IRS, of people who started as college freshmen and received federal aid in 2003 or 2004 and are currently working and not enrolled, published on the U.S. Department of Education’s College Scorecard.
PayScale.com earnings: 40%. Although the earnings data reported on PayScale.com are based on what visitors to the site voluntarily enter, our previous analysis shows they are reliable. PayScale provided MONEY with aggregated data on more than 3 million people who in the last 10 years have filled out an online form asking what their job is, what their salary is, where they went to college, what they majored in, and when they graduated. This adds important information to the federal earnings data, because PayScale’s figures cover only the students who earned a bachelor’s degree from the school, not the dropouts. The PayScale data also screen out those who have earned graduate degrees, and thus may be earning more because of their graduate, rather than undergraduate, education.
- Early career earnings (7.5%). Earnings as reported for graduates with three years’ work experience.
- Mid-career earnings (5%). We used the average earnings reported by alumni who do not have graduate degrees and have 20 years’ work experience.
- Value-added early career earnings (7.5%). We analyzed how the school’s early-career earnings compared with schools with similar student bodies.
- Value-added mid-career earnings (5%). We analyzed how the school’s mid-career earnings compared with schools with similar student bodies.
- Major-adjusted early career earnings (10%). We analyzed the impact of the popularity of each major on schools’ average earnings, and adjusted each school’s average earnings for its mix of majors. We then calculated how the school’s adjusted earnings compared with what would be typical for schools with similar major mixes. This allows us to compare, for example, schools that specialize in producing teachers with schools that produce engineers, by seeing how well each performs against other schools with a similar mix of majors.
- Major-adjusted mid-career earnings (5%).
Job impact: 5%. Our 2016 MONEY/Barnes & Noble College survey found that parents and students alike put a higher emphasis on finding a “fulfilling” career than on finding a high-paying job. So we include alumni responses to a PayScale.com survey question about whether their job “makes the world a better place.”
Share of alumni who are not employed six years after starting school: 12.5%. This data from the federal College Scorecard shows the percentage of federal financial aid recipients who started as freshmen in 2006-07 or 2007-08, and were neither enrolled in school nor employed in 2013-14.
Share of alumni earning at least $25,000 within six years of starting college: 12.5%. The federal government publishes these data to show which colleges produce alumni who are earning at least as much as the typical 2013-14 high school graduate. It is based on incoming freshmen in 2002-03 or 2003-04 who received federal financial aid.
Socio-economic mobility: 20%. Opportunity Insights, formerly the Equality of Opportunity Project, published “mobility rate” data for each college on how many low-income students (i.e., with family incomes in the lowest quintile, or below $25,000 a year) were enrolled and how many went on to earn incomes that put them in the top quintile for their age group (earning at least $58,000) by 2014. We used this “mobility rate” data in the rankings as an indicator of which colleges are helping promote upward mobility.
How We Calculated These Rankings
For each of our data points, for each college, we calculated a “Z-score” — a statistical technique that turns lots of differently scaled data into a standard set of numbers that are easily comparable. Each Z-score tells you how far any particular number — such as, say, a graduation rate of 75% — is from the average for the entire group under consideration. Finally, we ranked the schools according to their total score across our three main factors.
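Z-score standardization as described above can be shown with a toy example (the numbers are invented, not the rankings’ actual inputs). Each metric is rescaled to mean 0 and standard deviation 1, so metrics measured in percentages and in dollars can be added together; a “lower is better” metric such as price gets its sign flipped.

```python
# Illustrative sketch of Z-score standardization on hypothetical data.
from statistics import mean, pstdev

def z_scores(values):
    """Rescale a list of values to mean 0 and standard deviation 1."""
    mu, sigma = mean(values), pstdev(values)
    return [(v - mu) / sigma for v in values]

grad_rates = [0.55, 0.65, 0.75, 0.85]             # hypothetical grad rates
net_prices = [150_000, 120_000, 180_000, 90_000]  # hypothetical net prices

# Higher is better for graduation rate; lower is better for price,
# so the price Z-score is subtracted.
totals = [zg - zp for zg, zp in zip(z_scores(grad_rates),
                                    z_scores(net_prices))]
best = max(range(len(totals)), key=totals.__getitem__)
```

In this toy example the fourth school wins: it pairs the highest graduation rate with the lowest price, so both Z-scores pull in its favor.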
While we believe our rankings are better than any others out there, we know they are not perfect. Here are some caveats that we are aware of and hope to address in future rankings.
Student learning. What students actually learn is a big unanswered question in higher education. The various assessments of college student learning are controversial, few colleges use them, and very few of those that do release the results publicly. In addition, we were not able to find good data on basic indicators of academic rigor, such as the number of pages of writing or reading required per assignment. We will continue to explore the data in hopes of finding useful indicators of student learning.
Standardized Tests. More colleges are going “test-optional,” giving applicants the choice of whether to submit SAT or ACT scores. If this trend grows to the point where a large minority of students at colleges in MONEY’s universe do not submit scores, we’ll have to evaluate the effect on our rankings, particularly our value-added assessments, which rely heavily on scores to predict academic performance.
Geographical cost of living adjustments. Some colleges in low-wage, low-living-cost areas, such as parts of the South and Midwest, may get lower rankings because we have not adjusted the earnings data for cost of living. But several factors added to the MONEY rankings recently lessen the impact of any geographic wage differential. The College Scorecard employment data treats all regions equally, as does PayScale’s “Job Meaning” survey. The student loan default and repayment data also should treat all regions equally, since they show which alumni have sufficient disposable income to repay their loans — possibly benefiting students in low-cost areas. On the other side of the equation, we also don’t adjust our net price calculation for the cost of room and board at colleges in expensive areas, such as major cities.
Measuring economic mobility. The Opportunity Insights data we use are now a few years old, but they remain the best available in a national dataset. We recognize other experts have recently suggested different ways of measuring how well a college serves low-income students and promotes socioeconomic mobility, and we plan to investigate whether we should replicate their models to update our measure in the future.
Out-of-state public college tuition. Many students are interested in attending public colleges out of state. Colleges charge higher prices to out-of-state students, and we haven’t developed a strong cost and value comparison for those cases. Out-of-state students can still use our rankings to assess public colleges on their educational quality and alumni outcomes, but the affordability metrics—specifically net price and average borrowing levels—likely will not apply.
Net prices. MONEY’s estimated net price is likely to be higher than the average price actually paid by most families. It is crucial to understand that while the MONEY net price estimate is based on the average price charged by the college, you and your family will pay less than that if your student receives any federal, state, or private scholarships. As an analogy, if you’re buying a can of soup, you have to pay what the grocery store charges, unless you have a coupon. Just as coupons can be used at competing supermarkets, most federal, state, and private scholarships can be used at many competing colleges. So we help you identify which college has the lowest net price at which you can apply any additional scholarships. In addition, our net price is based on the average student’s time-to-degree. Your student may finish in four years. And while many students take more than four years to finish a degree, they aren’t necessarily paying full tuition for the five or six years before they graduate, since they may, for example, take a year off to work. MONEY attempted to account for this by adjusting the estimated time to degree for all schools with large and established co-op work programs, such as Northeastern University. In addition, MONEY is not adding to the cost of a degree any amount for “opportunity cost,” which is the amount in earnings a student loses by not finishing a degree on time and delaying entry into the higher-paying job market of college graduates. So, while we may, in some cases, be overestimating the price of a degree, we are also underestimating the total economic expense to a student of schools that don’t speed them to graduation.
Former MONEY senior writer Kim Clark was instrumental in developing our methodology and rankings. Mark Schneider, now the director of the Institute of Education Sciences, and Matthew Soldner, now the commissioner of the National Center for Education Evaluation and Regional Assistance at NCES, consulted with MONEY on the creation of these rankings in 2015 when they were at the American Institutes for Research. Cameron Smither, now at the American Association of State Colleges & Universities, also influenced MONEY’s rankings while he worked at AIR. MONEY is thankful to several higher education experts whose insights in recent years helped us make decisions on what data to use and how to weight it.
The MONEY editorial staff is solely responsible for the final ranking decisions.