Guidelines and declarations

Several guidelines and declarations have been published in recent years to promote the responsible use of publication metrics in the assessment of research and researchers.

The University of Jyväskylä, along with other Finnish universities and research organizations, is a signatory to the DORA declaration and is committed to following the Finnish national guidelines for researcher assessment.

Recommendation for the responsible evaluation of a researcher in Finland

Good practice in researcher evaluation. Recommendation for the responsible evaluation of a researcher in Finland was published in 2020. The recommendation was written for the context of individual researcher assessment, but the same principles apply to assessments of larger entities. Its overarching principles are transparency, integrity, equity, competence and diversity.

The national recommendation provides guidelines on various topics in researcher assessment: constructing the assessment procedure, assessing research outputs, recognizing the diversity of research activities, and researcher participation in the process. Organizations committed to the recommendation are required to create policies to implement it and monitoring procedures to follow up on realized outcomes.

The recommendation also includes guidelines for responsible publication metrics. Publication metrics can be used to support assessment, but assessment should focus on qualitative evaluation by experts. It is important to ensure that quantitative metrics are based on relevant data, and that both the methods and the results of the analysis are scientifically sound and transparent.

The Declaration on Research Assessment (DORA)

The DORA declaration can be summarized in this simple central tenet:

Do not use journal-based metrics, such as Journal Impact Factors, as a surrogate measure of the quality of individual research articles, to assess an individual scientist’s contributions, or in hiring, promotion, or funding decisions.

It then elaborates on this principle, giving guidelines on how research organizations, funders, individual researchers, publishers, and metrics providers should implement it.

Research organizations should be explicit about the criteria used to reach hiring, tenure, and promotion decisions, clearly highlighting that the scientific content of a paper is much more important than publication metrics or the identity of the journal in which it was published. Organizations should also consider the diversity of research outputs (such as datasets and software) and the diversity of relevant impact measures, such as influence on policy and practice.

Individual researchers, when serving on committees that make decisions about funding, hiring, tenure, or promotion, are reminded to base assessments on scientific content rather than publication metrics. Shaping the academic culture is every individual's responsibility: citing primary literature rather than reviews in order to give credit where it is due; using a range of metrics and indicators beyond (and preferably instead of) those based on journal name and prestige in one's own impact statements and CV; challenging practices and narratives that rely inappropriately on Journal Impact Factors; and promoting and teaching best practices that focus on the value and influence of specific research outputs.

The Leiden Manifesto

The Leiden Manifesto for research metrics consists of 10 principles to guide research evaluation. These are:

  1. Quantitative evaluation should support qualitative, expert assessment.
  2. Measure performance against the research missions of the institution, group, or researcher.
  3. Protect excellence in locally relevant research.
  4. Keep data collection and analytical processes open, transparent, and simple.
  5. Allow those evaluated to verify data and analysis.
  6. Account for variation by field in publication and citation practices.
  7. Base assessment of individual researchers on a qualitative judgement of their portfolio.
  8. Avoid misplaced concreteness and false precision.
  9. Recognize the systemic effects of assessment and indicators.
  10. Scrutinize indicators regularly and update them.

The Metric Tide

The Metric Tide is an independent review of the role of metrics in research assessment and management in the United Kingdom. Its core recommendations are:

  • Robustness: basing metrics on the best possible data in terms of accuracy and scope
  • Humility: recognising that quantitative evaluation should support – but not supplant – qualitative, expert assessment
  • Transparency: keeping data collection and analytical processes open and transparent, so that those being evaluated can test and verify the results
  • Diversity: accounting for variation by field, and using a variety of indicators to support diversity across the research system
  • Reflexivity: recognising the systemic and potential effects of indicators and updating them in response