A recent shift in Kenyan university rankings has stirred debate, with Kenyatta University (KU) placed above the long-reigning University of Nairobi (UoN). While some have praised this development as a testament to KU’s growing academic stature, others have questioned the credibility and consistency of university ranking systems.
The discourse surrounding this change highlights the complexities of university evaluations and raises questions about the accuracy and fairness of ranking methodologies.
University rankings systematically evaluate higher education institutions based on various parameters like academic performance, research output, faculty quality, and international reach. These rankings are often used by prospective students, faculty, and policymakers to gauge the relative strengths of institutions.
Each ranking body has its own formula and priorities, resulting in differing rankings for the same institutions. Rankings are influential, shaping the landscape of higher education by providing comparative assessments of universities globally and regionally. However, these rankings also prompt debates on their accuracy, fairness, and the impact they have on universities and stakeholders.
Four key ranking bodies are commonly cited. QS World University Rankings, produced by Quacquarelli Symonds, emphasises academic reputation, employer reputation, and graduate employability.
Times Higher Education World University Rankings assesses institutions primarily through research influence and industry partnership income. Academic Ranking of World Universities, or Shanghai Rankings, focuses on research output, with high regard for publications in top-tier journals and major academic awards. U-Multirank, funded by the European Union, allows users to customise rankings based on chosen criteria, providing a flexible, user-driven approach to institutional evaluations.
Rankings generally rely on a mix of data sources, including expert opinions, self-reported data from universities, and independent audits. Core parameters are common across ranking bodies, but their weightings vary according to each system’s priorities.
Key factors include academic reputation (usually 30–40 percent of the ranking score), based on global surveys of academic peers, and employer reputation (10–20 percent), gauging feedback from employers on graduate performance. Research output and impact hold significant weight, especially in rankings like Times Higher Education, where citations contribute up to 30 percent of the score.
Other parameters include student-to-faculty ratio (10–20 percent), internationalization (5–10 percent), industry income and partnerships (2–10 percent), faculty quality (10–15 percent), and citations per faculty (10–20 percent).
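To make the arithmetic behind these weightings concrete, a ranking score is essentially a weighted sum of normalised indicator scores. The sketch below uses one hypothetical weighting drawn from within the ranges quoted above, applied to two invented institutions; neither the weights nor the scores reflect any real ranking body's formula or any actual university's data.

```python
# Hypothetical weights, each chosen from within the percentage
# ranges quoted in the article; they must sum to 1.0 (i.e. 100%).
WEIGHTS = {
    "academic_reputation":   0.35,  # 30-40% range
    "employer_reputation":   0.15,  # 10-20% range
    "citations_per_faculty": 0.20,  # 10-20% range
    "student_faculty_ratio": 0.15,  # 10-20% range
    "internationalization":  0.10,  # 5-10% range
    "industry_income":       0.05,  # 2-10% range
}

def composite_score(indicators: dict) -> float:
    """Weighted sum of indicator scores, each normalised to 0-100."""
    assert abs(sum(WEIGHTS.values()) - 1.0) < 1e-9, "weights must sum to 1"
    return sum(WEIGHTS[k] * indicators[k] for k in WEIGHTS)

# Two invented institutions with made-up indicator scores (0-100):
uni_a = {"academic_reputation": 80, "employer_reputation": 70,
         "citations_per_faculty": 65, "student_faculty_ratio": 60,
         "internationalization": 55, "industry_income": 50}
uni_b = {"academic_reputation": 75, "employer_reputation": 72,
         "citations_per_faculty": 78, "student_faculty_ratio": 66,
         "internationalization": 60, "industry_income": 58}

print(composite_score(uni_a))  # 68.5
print(composite_score(uni_b))  # 71.45
```

Note that uni_a leads on academic reputation yet ranks below uni_b overall, because this particular weighting rewards citations more heavily. A ranking body that put 40 percent on academic reputation and less on citations could reverse the order, which is precisely why different bodies produce different rankings for the same institutions.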
In Kenya, the rise of KU over UoN has fuelled calls for standardised local metrics, with advocates suggesting that the Commission for University Education develop a domestic ranking framework. A Kenya-specific ranking system could encourage local institutions to focus on quality and best practices while allowing for fairer comparisons.
This approach would reduce reliance on international rankings, which may overlook unique strengths and challenges faced by Kenyan institutions. Additionally, it could promote a fairer, inclusive approach to quality improvement by providing institutions with a local benchmark that considers their distinct contexts.
University rankings, while valuable for benchmarking institutional performance, should be interpreted cautiously. Prospective students and faculty should consider which parameters align with their personal or professional goals, rather than relying solely on overall rankings.
— The writer is a professor of physical chemistry at the University of Eldoret