Roughly three-quarters of Americans view the high cost of college as the biggest barrier to earning a degree, according to a recent survey from NORC at the University of Chicago.
That makes sense, given how college costs have climbed in recent decades. Today, deciding whether, and where, to attend college is one of the biggest financial decisions you’ll make.
To guide families in that decision, Money’s latest Best Colleges offers a practical analysis of more than 600 four-year colleges.
Data collection and analysis for the rankings were led by Money’s rankings partner, Witlytic. Money’s editorial staff, however, is solely responsible for the final ranking decisions.
Here is the methodology behind our new 2022 rankings, first in a short version, followed by a more comprehensive one.
To make our initial cut, a college had to:
A total of 671 schools met these requirements. We ranked them on 24 factors in three categories:
Finally, we separated the country’s most selective schools — the colleges where the acceptance rate for the past three years fell below 20% — into their own list. All the colleges are scored on the same scale, so readers can still compare a college on the selective list with a college on the main list.
For a more detailed description of the methodology, read on.
The Best Colleges for Your Money rankings combine the most accurate pricing estimates available with indicators of alumni financial success, along with an analysis of how much value a college adds when compared to other schools that take in similar students.
We estimate a college’s “value add” by calculating its performance — after adjusting for the types of students it admits — on important measures such as graduation rates, student loan repayment and default rates, and post-graduation earnings.
To gauge how well a college is performing relative to its peers, we looked at data on the admissions selectivity (as measured by standardized test scores and the average freshman GPA) and the percentage of students with incomes low enough to qualify for Pell grants (the majority of which go to families with incomes below $50,000).
We then used the statistical technique of regression analysis to model the expected impact a student’s test scores, GPA and income have on key factors, such as graduation rates and future earnings. The difference between these expected values and the actual reported values represents the school’s “value add.”
Value-added measures are a way to capture aspects of a college’s quality that more commonly used metrics miss. They give an indication of how a college affects graduates’ outcomes, rather than simply rewarding colleges that admit the highest-performing students.
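To make the value-add idea concrete, here is a minimal sketch, in Python, of how a regression-based expectation could be computed. The column names, toy figures and simple linear model are illustrative assumptions, not Money’s actual specification.

```python
# Illustrative sketch of a regression-based "value add" calculation.
# The data and column names are hypothetical; Money's actual model uses
# more inputs and covers hundreds of schools.
import pandas as pd
from sklearn.linear_model import LinearRegression

# Hypothetical college-level inputs
df = pd.DataFrame({
    "avg_test_score": [1050, 1210, 1320, 980],
    "avg_hs_gpa":     [3.1, 3.5, 3.8, 2.9],
    "pell_share":     [0.45, 0.30, 0.15, 0.55],
    "grad_rate":      [0.52, 0.68, 0.88, 0.49],
})

X = df[["avg_test_score", "avg_hs_gpa", "pell_share"]]
y = df["grad_rate"]

model = LinearRegression().fit(X, y)

# The expected graduation rate for a school with this mix of students,
# and the gap between actual and expected: the "value add"
df["expected_grad_rate"] = model.predict(X)
df["value_add"] = df["grad_rate"] - df["expected_grad_rate"]
```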
To ensure that we fairly compared apples to apples, we decided that to be included in our rankings, a college must meet these criteria:
This eliminated some colleges that may be good values but might be facing temporary financial difficulties or may be too small for us to evaluate. Even so, it left a robust universe of nearly 700 schools. In our view, even the lowest-ranked schools on our list demonstrate that they provide at least some value for your tuition dollars.
We then used the following data and calculations in our three basic categories to create our rankings:
For this factor, we combined the following seven indicators:
Graduation rates: 30%. Education experts and college officials generally agree that one of the most important reflections of a college’s quality is its graduation rate. The most commonly cited graduation rate statistic — the percentage of freshmen who graduate within six years — is based only on first-time, full-time students, meaning it misses a large population of students at many colleges. To help address that, we also use recently available federal data to expand that graduation rate to include the share of full-time students who transferred into a college and earned a degree within six years. The graduation rate we use is calculated for bachelor’s degree-seeking students only.
Value-added graduation rate: 30%. Many education experts and college officials point out that the basic graduation rate number, while useful, is an insufficient indicator of a college’s value because research shows that wealthier students and students who got good grades in high school are more likely to graduate on time no matter what college they attend. In other words, elite schools, which disproportionately enroll such advantaged students, are expected to have high graduation rates. For that reason, we calculated each school’s relative performance after accounting for the economic background and academic preparation of its students (measured by the percentage of attendees receiving Pell grants, the majority of which are given to low-income students; the standardized test scores and high school GPAs of incoming freshmen; and the share of traditional, full-time students on campus). The higher a school’s graduation rate was above the rate predicted for a school with that particular mix of students, the more value that particular college is assumed to have added. (A 2013 OECD review of research on value-added analysis says it can provide “a more ‘accurate’ estimate of the contribution educational institutions make to students’ academic progress” by isolating student attainment from other contributing factors such as family characteristics and socioeconomic background.)
Peer quality: 10%. Decades of research have shown that undergraduates have a major impact on their peers.
Our peer quality measure consists of these two indicators:
Faculty: 10%. Research shows that students who get more individual attention from faculty tend to achieve more, both in college and after graduation. (See, for example, this Gallup-Strada study.) We include each school’s student-to-faculty ratio.
Financial troubles: 10%. Financial difficulties can affect the quality of education, through layoffs of staff, closure of academic programs and reduction of services. A school was deemed to be facing financial trouble if it:
Pell Grant recipient outcomes: 10%. To analyze how well schools help low-income students succeed, we multiplied the share of federal Pell grant recipients on campus by the six-year graduation rate for Pell grant recipients in the school’s bachelor’s degree-seeking cohort. Schools that serve a large share of low-income students and help them graduate rise to the top.
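As a simple illustration, this indicator boils down to a single multiplication; the figures below are hypothetical, not drawn from any actual college.

```python
# Hypothetical example of the Pell outcomes indicator: the share of Pell
# recipients on campus multiplied by the six-year graduation rate for Pell
# recipients in the bachelor's degree-seeking cohort.
pell_share = 0.40          # 40% of undergraduates receive Pell grants
pell_grad_rate_6yr = 0.65  # 65% of those students graduate within six years

pell_outcome_score = pell_share * pell_grad_rate_6yr  # 0.26
```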
For this factor we used 10 indicators, weighted as shown:
Net price of a degree: 30%. Money uses a unique estimate of college prices. We started with the “sticker” price provided by the college to the federal government. The full cost of attendance includes tuition, fees, room, board, books, travel and miscellaneous costs. (In calculating tuition and fees for public colleges, in-state amounts were used.)
We then subtracted the average amount of institutional aid provided per student by the college. That gave us an estimate of the net price the college charged an average student for the most recent academic year these data were available.
Next, we used federal data on the percentage of students who graduate from that college in four, five and six years to factor in how long a typical student is enrolled before earning a degree. The National Student Clearinghouse Research Center reported in 2016, the most recent year available, that students at private, not-for-profit four-year colleges are enrolled for an average of 4.8 years before graduating. At public colleges, it’s 5.2 years.
We generated an estimated net price for each of six years, inflating each year’s price slightly since tuition typically rises annually. We then created a weighted average total net price, based on those yearly prices and the proportion of students who complete their degrees within four, five and six years. A handful of exceptions are made for institutions with longstanding co-op programs, as those programs intentionally extend the length of a degree and no tuition is charged during co-op periods.
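The sketch below shows the general shape of that calculation. The starting net price, inflation rate and completion shares are placeholder assumptions, not Money’s actual figures.

```python
# Simplified sketch of the weighted-average degree price. All inputs are
# placeholders; Money's estimate starts from each college's reported
# sticker price and average institutional aid.
ANNUAL_INCREASE = 0.03    # assumed yearly tuition inflation
net_price_year1 = 22_000  # sticker price minus average institutional aid

# Estimated net price for each of six possible years of enrollment
yearly_prices = [net_price_year1 * (1 + ANNUAL_INCREASE) ** yr for yr in range(6)]

# Cumulative cost of a degree finished in four, five or six years
cost_4yr = sum(yearly_prices[:4])
cost_5yr = sum(yearly_prices[:5])
cost_6yr = sum(yearly_prices[:6])

# Hypothetical shares of graduates finishing in each time frame
share_4yr, share_5yr, share_6yr = 0.55, 0.30, 0.15

weighted_net_price = (cost_4yr * share_4yr
                      + cost_5yr * share_5yr
                      + cost_6yr * share_6yr)
```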
The federal net price estimate for a year’s costs is typically lower than Money’s estimate because the Department of Education subtracts almost all grants — federal, state and institutional — while we only subtract aid provided by the school, for reasons described in the ‘caveats’ section below.
Money gives the net price sub-factor a very heavy weighting because surveys consistently show that the cost of college is one of families’ biggest worries. About half of freshmen in 2019 said cost was a “very important” factor in where they ended up attending, according to a survey from the Higher Education Research Institute at UCLA.
Net price by income bracket: 20%. The federal government reports the annual average price paid at each school by federal aid recipients broken down into five income groups. Given the way college costs have outpaced income growth in the last few decades, even many middle-income families report that paying for college is out of reach. As a result, we look at three income brackets to capture affordability for low- and middle-income families.
We award the most weight to the lowest income group as an indicator of how well the college targets need-based financial aid:
Educational debt: 20%. Surveys show unaffordable debt to be another of students’ biggest worries about attending college, and research shows student debt can have long-lasting effects on young adults’ finances.
Our educational debt assessment is based on these two indicators:
Ability to repay: 30%. The ability to repay loans taken out to finance a college education is another indication of a school’s affordability for its students.
We evaluated ability to repay using these four indicators:
Changes from the 2020 rankings: We did not use average salary data from Payscale in our rankings this year, which substantially changes the makeup of our outcomes bucket. Instead of having the three “raw” earnings figures we’ve used in the past (Payscale earnings of graduates after 5 years, Payscale earnings of graduates after 15 years and College Scorecard earnings), we use just the federal College Scorecard earnings. We also switched from using Payscale earnings to using Scorecard earnings in our value-added and major-adjusted analysis. Despite these changes, we kept the relative weight on raw earnings, value-added earnings and major-adjusted earnings equal.
We also introduced two new measures in this bucket and swapped the employment measures based on what was available in the College Scorecard. Those changes are outlined below.
For our third category, we used a total of seven indicators, weighted as shown below.
Earnings 10 years after college entry: 25%. These are the median earnings in 2018 and 2019 of people who started as college freshmen and received federal aid between 2007 and 2009, and who were working and not enrolled, as published on the U.S. Department of Education’s College Scorecard.
Major-adjusted early career earnings: 15%. Research shows that a student’s major has a significant influence on their earning potential. To account for this, we used College Scorecard program-level earnings of students who graduated in 2017 and 2018. These are the earnings of graduates from the same academic programs at the same colleges, one and two years after they earned a degree. We then calculated an average salary, weighted by the share of bachelor’s degree completers in each major (grouped by CIP program levels). Then, we used a clustering algorithm to group colleges that have a similar mix of majors and compared the weighted average against colleges in the same group. Colleges whose weighted average was higher than that of their group scored well. This allows us to compare schools that produce graduates who go into fields with very different earnings potential. (A simplified sketch of this calculation appears after the full list of outcome indicators below.)
Value-added early career earnings: 15%. We analyzed how students’ actual earnings 10 years after enrolling compared with the earnings predicted for schools with similar student bodies.
Share of graduates who don’t have a job after one year: 10%. This data from the College Scorecard shows the percentage of federal financial aid recipients who graduated in 2016-17 or 2017-18 and were neither enrolled in school nor employed a year later. Lower values are scored higher, so that colleges where a larger share of graduates are working perform better. This data is new this year. In the past, we measured the share of students who weren’t working or in school six years after first enrolling.
Share of students earning more than a high school graduate within six years of starting college: 10%. The federal government publishes these data to show which colleges produce alumni who are earning at least as much as the typical high school graduate. It is based on incoming freshmen in 2011-12 or 2012-13 who received federal financial aid. This data is new this year, and replaces a similar measure we used in 2020 that captured the share of students who earned more than $28,000.
Economic mobility: 10%. Think tank Third Way recently published new economic mobility data for each college, based on a college’s share of low- and moderate-income students and a college’s “Price-to-Earnings Premium” (PEP) for low-income students. The PEP measures the time it takes students to recoup their educational costs given the earnings boost they obtain by attending an institution. Low-income students were defined as those whose families make $30,000 or less. We used this as an indicator of which colleges are helping promote upward mobility. This data is new this year, and replaces an older mobility rate we had been using from Opportunity Insights.
Return on investment: 10%. This data comes from a new analysis from the Bipartisan Policy Center that estimates how attending a college benefits a student’s earnings. The center’s return on investment model factors in the cost of attending a school alongside the college’s earnings premium, or the additional earnings over a lifetime gained by those who attended the college relative to the typical earnings of a high school graduate. We used BPC’s “full model,” which makes adjustments to account for state-specific wage differences and the effect of labor market discrimination.
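Below is a simplified sketch of the major-adjustment step described earlier (see “Major-adjusted early career earnings”). The colleges, earnings figures and the choice of k-means are illustrative assumptions; Money has not published the specific clustering method it uses.

```python
# Rough sketch of the major-adjustment idea: take each college's earnings
# average (already weighted by its mix of completers), cluster colleges
# with a similar mix of majors, and compare each college with its cluster.
import pandas as pd
from sklearn.cluster import KMeans

# Hypothetical shares of completers by broad program group
major_mix = pd.DataFrame({
    "engineering": [0.40, 0.05, 0.35, 0.10],
    "business":    [0.30, 0.25, 0.30, 0.20],
    "humanities":  [0.30, 0.70, 0.35, 0.70],
}, index=["College A", "College B", "College C", "College D"])

# Hypothetical program-share-weighted average earnings for each college
weighted_earnings = pd.Series([58_000, 41_000, 55_000, 39_000],
                              index=major_mix.index)

# Group colleges that graduate students in a similar mix of fields
clusters = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(major_mix)

# Score each college against the average of its own cluster; positive
# values mean the college out-earns peers with a similar mix of majors
cluster_avg = weighted_earnings.groupby(clusters).transform("mean")
major_adjusted_score = weighted_earnings - cluster_avg
```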
For each of our data points, for each college, we calculated a “Z-score” — a statistical technique that turns lots of differently scaled data into a standard set of numbers that are easily comparable. Each Z-score tells you how far any particular number — such as, say, a graduation rate of 75% — is from the average for the entire group under consideration. We capped outliers in our outcomes bucket to limit how well a college can score for having exceptionally high earnings.
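As an illustration of that standardization step, here is a minimal sketch; the cap of plus or minus three standard deviations is an assumption, since Money does not publish its exact outlier threshold.

```python
# Minimal sketch of the Z-score standardization with outlier capping.
# The +/-3 cap is an assumed threshold for illustration only.
import numpy as np

def capped_z_scores(values, cap=3.0):
    values = np.asarray(values, dtype=float)
    z = (values - values.mean()) / values.std()  # distance from the group average
    return np.clip(z, -cap, cap)                 # cap extreme values

# Example: graduation rates across a small group of colleges
grad_rates = [0.75, 0.52, 0.88, 0.61, 0.97]
print(capped_z_scores(grad_rates))
```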
Finally, we separated the country’s most selective schools — the colleges where the acceptance rate for the past three years fell below 20% — into their own list. Then we ranked the schools according to their total score across our three main factors.
We imputed SAT/ACT scores for colleges with missing data using the average values for a college’s Carnegie classification. For any other missing values, we assigned the average value of the remainder of this list for that specific data point.
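A sketch of those imputation rules, assuming hypothetical column names for a college-level data set:

```python
# Hedged sketch of the imputation rules described above. Column names
# ("sat_avg", "act_avg", "carnegie_class") are hypothetical.
import pandas as pd

def impute_missing(df: pd.DataFrame) -> pd.DataFrame:
    df = df.copy()
    # Missing SAT/ACT averages get the mean for the college's Carnegie classification
    for col in ["sat_avg", "act_avg"]:
        df[col] = df[col].fillna(df.groupby("carnegie_class")[col].transform("mean"))
    # Any other missing indicator gets the average of the remaining colleges
    numeric_cols = df.select_dtypes(include="number").columns
    df[numeric_cols] = df[numeric_cols].fillna(df[numeric_cols].mean())
    return df
```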
If a college does not appear on our list that means it did not meet one of the initial screening requirements outlined above. Those requirements include enrolling at least 500 students and having sufficient data to analyze. But the most common reason colleges don’t make the cut is that their six-year graduation rate is too low.
Colleges needed a six-year graduation rate at or above the median for their institutional category. This year, those medians were 56% for public colleges, 64.6% for private, not-for-profit colleges, and 37.5% for historically Black colleges and universities.
You may notice that some colleges with graduation rates below those cutoffs are still included. In those cases, the colleges were originally disqualified but were brought back into the universe because they had exceptionally high value-added graduation rates, meaning our analysis shows their graduation rates are much higher than expected given the students who enroll. This year, about 65 colleges were added to the universe this way.
While we believe our rankings are better than any others out there, we know they are not perfect. Here are some caveats that we are aware of.
Limitations on earnings and employment data. The earnings data in the College Scorecard only captures students who received federal financial aid. At some colleges, this may capture the majority of students, if most students either borrow student loans or receive federal grants. But at some campuses, the earnings data will be based on a small portion of the students that college educates. The earnings data available is also limited to short time frames. For program-level earnings, for example, it only captures alumni within two years of graduation. Plus, many smaller college programs are not reported due to privacy concerns. As the government continues to release more earnings data that captures more students over a longer period of time, we feel our analysis will improve.
Student learning. What students actually learn is a big unanswered question in higher education. The various assessments of college student learning are controversial; few colleges use them, and very few of those that do release the results publicly. In addition, we have not been able to find good data on basic indicators of academic rigor, such as the number of pages of writing or reading required per assignment.
Standardized tests. Since 2020, colleges have made a near-universal shift to “test-optional” policies, giving applicants the choice of whether to submit SAT or ACT scores. Some colleges have made the switch permanent, while others are conducting multi-year experiments, and it’s unclear how many will stick with the policy. Still, if this trend continues and grows to the point where a large share of students at the colleges in Money’s universe do not submit scores, we’ll have to evaluate the effect that has on our rankings, particularly the value-add assessments that rely heavily on scores to predict academic performance.
Geographical cost of living adjustments. Some colleges in low-wage, low-living-cost areas, such as parts of the South and Midwest, may get lower rankings because we have not adjusted the earnings data for the cost of living. While there are some factors in our rankings that lessen the impact of any geographic wage differential — including the College Scorecard employment data, student loan default rate, and debt repayment rate — we hope to address this issue more thoroughly in future rankings. We note, though, that we also do not conduct any cost of living adjustments in our net price of a degree calculation, and colleges in the South and West often score well here.
Out-of-state public college tuition. Many students are interested in attending public colleges out of state. Colleges charge higher prices to out-of-state students, and we haven’t developed a strong cost and value comparison for those cases. Out-of-state students can still use our rankings to assess public colleges on their educational quality and alumni outcomes, but the affordability metrics — specifically net price and average borrowing levels — likely will not apply.
Net prices. Money’s estimated net price of a degree is likely to be higher than the average price actually paid by most families at that institution. It is crucial to understand that while the Money net price estimate is based on the average price charged by the college, you and your family will pay less than that if your student receives any federal, state or private scholarships. We do not include that aid in our estimate, because it’s money that you can take to any college. In addition, our net price is based on the average student’s time-to-degree. If you finish in four years, that likely will reduce your cost.
Former Money senior writer Kim Clark was instrumental in developing our methodology and rankings. Mark Schneider, now the director of the Institute of Education Sciences, and Matthew Soldner, now the commissioner of the National Center for Education Evaluation and Regional Assistance at the Institute of Education Sciences, consulted with Money on the creation of these rankings in 2014 when they were at the American Institutes for Research. Cameron Smither, now at the American Association of State Colleges & Universities, also influenced Money’s rankings while he worked at AIR.
Money is thankful to several higher education experts whose insights in recent years helped us make decisions on what data to use and how to weigh it.
The Money editorial staff is solely responsible for the final ranking decisions.
Disclaimer: All the information for Money’s Best Colleges of 2022, which includes tuition prices, financial aid, percentages, average student debt and early career earnings, was researched by Money. However, Money does not guarantee that this information will apply to each person, nor does it guarantee any early career earnings.