University Rankings as a Tool for Social Impact
How Times Higher Education developed and implemented a new set of social impact rankings to help higher education build a more sustainable, resilient future. Part of the Innovating Higher Education series.
Times Higher Education has been producing university rankings since 2004, both to comment on the higher education sector and to serve as an advocate for it. In 2017, we began developing a set of university rankings based on the United Nations’ Sustainable Development Goals (SDGs). The COVID-19 pandemic has demonstrated not only our planet’s interconnected nature but also the urgency of building sustainability, and the 17 interlinked SDGs serve as a road map toward a more sustainable, resilient future. All elements of society need to play their part, and we believe that higher education is well positioned to demonstrate its relevance to society by placing the SDGs at the center of best practices.
By bringing together sustainability and higher education, we believe we can both demonstrate the progress that universities are making and support them in moving more quickly and effectively toward delivering on the goals. We’ve been in the business of ranking universities for a while, and as we’ve learned how to collect and analyze data over almost two decades, we’ve also learned how influential these rankings can be. We realized we had an opportunity to use our core competence in rankings as a strategic lever in higher education, fashioning something new to evaluate a different kind of performance: social impact.
Our World University Rankings now cover 1,397 universities from 92 countries; tens of millions of students access our website to read them, and billions will come across stories that refer to the ranking. Because university leadership takes them seriously, as do governments and individuals, we had an opportunity to create a new ranking that looked directly at how higher education worked in society, that would be more open to the wide range of universities across the world, and that would drive positive behavior.
Strategic Rationale and Design
The UN Sustainable Development Goals provide a consistent and well-understood framework to build a ranking. But while the SDGs have clear relevance to the world, how could we adapt the Goals to be relevant to universities? Were there risks in generating a ranking rather than an evaluation tool?
We decided that to ensure the resulting rankings had an impact, there should be a single measure of impact. The ranking also had to be open to a wide range of institutions from across the world, and it had to be relevant both to institutions and to an external audience. This still left us with a significant set of challenges:
Metrics had to be designed in a way that would minimize unintended consequences—such as promoting bad behavior in order to hit arbitrary targets—and we had to take an approach that didn’t just reward wealthy universities. Most of all, we needed to recognize existing best practices and find ways to promote them.
After consulting extensively with university partners and organizations in the field of sustainability, we began developing our approach.
Multiple Rankings
Our design was to create a ranking for each of the SDGs—starting with 11 in 2019 and expanding in future years—but with a single overall evaluation of performance, a single impact ranking. To encourage participation and to reduce the effort required of university data collection teams, we minimized the number of SDGs for which universities had to provide data in order to be considered in the final ranking.
In the end, we decided that we would ask for data on four SDGs: in addition to SDG 17, Partnerships for the Goals—which encourages cross-sectoral and international cooperation—universities could provide data on any three others, enabling them to focus on the SDGs where they had data or a particular focus. In doing so they could gain visibility for their activities, even if they were unable to provide data for other SDGs. (At a high level, we decided that the score for SDG 17, Partnerships for the Goals, would be worth 22 percent of the overall ranking, and each of the other three SDGs would be worth 26 percent; where universities submitted data for more than three additional SDGs, we would take the scores from the strongest three.)
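To make the arithmetic concrete, here is a minimal sketch, in Python, of how an overall score could be assembled under this scheme. The 22 percent and 26 percent weights and the best-three selection come from the description above; everything else (the function name, the 0–100 score scale, the input format) is an illustrative assumption rather than Times Higher Education’s actual implementation.

```python
# Illustrative sketch only, not Times Higher Education's actual code.
# Weights follow the description above: SDG 17 counts for 22 percent and the
# strongest three of the other submitted SDGs count for 26 percent each.

def overall_impact_score(sdg_scores):
    """Combine per-SDG scores (assumed here to be on a 0-100 scale).

    `sdg_scores` maps an SDG number (1-17) to that SDG's score.
    SDG 17 is mandatory; at least three other SDGs must be submitted.
    """
    if 17 not in sdg_scores:
        raise ValueError("SDG 17 (Partnerships for the Goals) is required")

    other_scores = sorted(
        (score for sdg, score in sdg_scores.items() if sdg != 17),
        reverse=True,
    )
    if len(other_scores) < 3:
        raise ValueError("Scores for at least three additional SDGs are required")

    # 22% for SDG 17 plus 26% for each of the best three other SDGs.
    return 0.22 * sdg_scores[17] + 0.26 * sum(other_scores[:3])


# Example: a university submitting data for SDGs 3, 4, 5, 11, and 17.
# Only the best three of SDGs 3, 4, 5, and 11 count toward the total.
print(overall_impact_score({17: 80.0, 3: 75.0, 4: 60.0, 5: 90.0, 11: 70.0}))
```

Because the strongest three scores are selected automatically in a scheme like this, a university is never penalized for submitting data on additional SDGs.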
Detailed Measures of Progress
To relate the goals to specific university actions, we based the questions that we asked on a theory of change connecting the SDGs to four key aspects of university activity: research, stewardship, outreach, and teaching. We asked for direct evidence of activities, and we encouraged universities to make that evidence public. This gave us the ability to generate specific questions that universities could use to evidence the work they were doing. Within each SDG there was a series of questions whose answers—alongside relevant bibliometric data—provided the university’s score for that SDG.
The theory of change allows us to differentiate between inward-focused—or, perhaps more accurately, academic-focused—areas of impact, such as research and stewardship, and the more outwardly focused areas of impact around outreach and teaching. Social impact could be demonstrated in a range of ways, and when we asked for evidence of programs we tried to do so in a way that allowed universities to provide the cases that were most relevant to them. This meant that we saw very different health outreach programs from universities in the UK or the US, for example, and from a university like Amrita University in India, which works directly with extremely poor village communities.
Evidence-based questions also required us to adopt a new calculation approach. Most rankings rely on quantitative data only, so how should we assess more qualitative insights? Our approach tried to minimize the subjectivity of assessment by evaluating each piece of qualitative evidence on three criteria: whether the university asserted progress, how well the submitted evidence matched the question, and whether the evidence was in the public domain. This last criterion enabled us to have confidence that the evidence not only represented good practice but was also available for others to see and emulate or challenge.
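These three criteria lend themselves to a simple cumulative rubric. The sketch below, again in Python, is purely illustrative: the criteria come from the approach described in this section, but the point values, the tiered structure, and all of the names are assumptions, not the published methodology.

```python
# Illustrative rubric only; the point values and tiering are assumptions.
from dataclasses import dataclass


@dataclass
class EvidenceSubmission:
    claims_progress: bool   # the university asserts progress on the question
    matches_question: bool  # the submitted evidence actually addresses the question
    is_public: bool         # the evidence is available in the public domain


def score_evidence(submission):
    """Score one qualitative answer on a cumulative 0-3 scale.

    Each later criterion earns credit only if the earlier ones are met,
    so well-matched evidence in the public domain scores highest.
    """
    score = 0
    if submission.claims_progress:
        score += 1
        if submission.matches_question:
            score += 1
            if submission.is_public:
                score += 1
    return score


# Example: evidence that matches the question but has not been made public.
print(score_evidence(EvidenceSubmission(True, True, False)))  # prints 2
```

A cumulative structure like this keeps assessor judgment narrow: each answer is checked against a small number of yes/no criteria rather than scored on an open-ended scale.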
Results From the First Two Editions
We believe that this approach has been successful, and we hope to be able to make this index of examples publicly available in the future. However, simply creating the ranking mechanism would not be sufficient for the project to succeed. For the approach to work, we would need higher education institutions to volunteer their time to participate, no small hurdle: The time available to university data teams is limited, and the existing demands from rankers and statutory data collection are already extreme. There are also, inevitably, concerns that universities are likely to have about any new undertaking. What if they perform badly, or at least less well than they do in existing rankings?
However, there have been direct and obvious successes from the rankings.
A notable comment comes from Arizona State University, an Ashoka U Changemaker Campus (and fifth on the Impact Rankings in 2020), whose president Michael Crow commented: “This is more than a target to motivate behavior. This is a commitment Arizona State University has made to demonstrate that sustainability is achievable.”
What Could This Mean for the Future?
Adopting the Sustainable Development Goals as a target—and using this ranking as a metric—offers real opportunities for universities in a time of increasing challenges:
Reconnecting to Core Values | Social impact has always been an imperative for universities, whether implicitly or explicitly, but it frequently gets subsumed by funding mechanisms, or by the needs of teaching and research. However, the expansion, massification, and universalization of higher education make the funding requirements for universities far more visible, which in turn requires universities to better demonstrate their importance to society.
Market Differentiation | Even at an individual institutional level, there is a need to distinguish one university from another. At Times Higher Education we estimate that only approximately 5,000 of the world’s 22,000 universities could claim to be “research-led.” Once you step away from the small number of truly international “superbrands,” how can a university demonstrate its relevance to the people funding it and living beside it? That differentiation may come from tackling the impact agenda.
Behavioral Change | There are signs that this approach to measuring impact is having a direct positive effect on behavior. We are seeing universities publishing more of their policies as public documents, leadership taking stronger stances on sustainability, and sustainability teams arguing for increased action as a result of the work they have done.
We hope that in the future governments will also be able to relate to and act on these measurements in ways that more conventional rankings fail to deliver. As university funding comes under threat, these rankings provide stronger insight into the value that universities are providing to their countries. They may help countries to understand how better to marshal their resources to address the SDGs, and where work is already proving effective. The inclusiveness of the rankings, the breadth of the measures used, and the relevance of those measures to societal and governmental goals make them a useful additional source of information, especially in parts of the world with less developed higher education information infrastructures.
As more universities and their leadership choose to participate in the initiative, we also think that they can gain benefits from opening their data and actions for others to see, and hopefully to learn from. After all, collaboration and partnerships are baked into the goals.
Cross-posted from Stanford Social Innovation Review Magazine Article: University Rankings as a Tool for Social Impact