Abstract. This research article addresses methodological challenges in scientometrics, focusing on errors stemming from the choice of classification scheme and document type. Using two case studies, we examine how these methodological choices affect the publication and citation rankings of institutions. We compute seven bibliometric indicators for over 8,434 institutions using 23 classification schemes derived from Clarivate’s InCites suite, and compare results obtained when including all document types versus only citable items. Given the critical role university rankings play in research management, and the methodological controversies surrounding them, our goal is to propose a methodology that incorporates uncertainty levels when reporting bibliometric performance in professional practice. We then analyze differences in error estimates across research fields and between institutions from different geographic regions. Our findings underscore the importance of the responsible use of metrics in research evaluation and provide valuable insights for both bibliometricians and consumers of bibliometric data.