Using lotteries to allocate research funding: Perspectives from Switzerland
Date: 09 September 2021
The Swiss National Science Foundation (SNSF) has been employing random selection in grant decisions since 2019. Dr. Marco Bieri, Scientific Officer at SNSF, describes the rationale for using random selection and reflects on SNSF’s experiences so far.
Why did you start employing random selection in the grant allocation process?
The limitations of peer review are well documented in the scientific literature.1 There is also increasing evidence of a degree of randomness in the review process. For example, several studies have shown that two independent panels evaluating the same set of applications reached different decisions. This indicates that the outcomes depended not only on the scientific content of the proposals, but also on which panel evaluated them. “Research on research” therefore encourages funders to take uncertainty into account and to consider introducing an element of randomness into the grant allocation process. In our evaluations, we sometimes encountered situations where evaluation panels were effectively forced to make a decision on a small share of the proposals around the funding cutoff, even though they found it virtually impossible to discriminate between them using the predefined quality criteria. In such situations, there is a danger of biased decisions being made, of irrelevant criteria being applied, or of existing criteria being weighted excessively or inconsistently.2 For this reason, we decided to use random selection (RS) as a “tie-breaker” in cases where proposals around the funding cutoff cannot be objectively differentiated any further.
In which funding schemes do you use random selection?
We first trialled RS in the Postdoc.Mobility funding scheme.3 The scheme is aimed at junior postdocs who would like to enhance their scientific knowledge and become scientifically more independent through a research stay abroad of up to 24 months. Following changes in the career funding portfolio, we knew we would have to deal with a significant increase in proposals. This made some amendments to the evaluation process necessary. In this context, we also considered the latest findings on peer review and decided to trial RS. After carefully evaluating the trial, the Presiding Board of the SNSF Research Council decided that RS could be used as an optional element in all SNSF funding schemes.4-6 In all cases, RS is only applied following a rigorous scientific assessment, and only to a small share of proposals of similar quality around the funding cutoff. In the course of the Postdoc.Mobility pilot, we also simulated a simple and cost-effective remote evaluation and compared it to the official outcome of the established evaluation procedure relying on panel meetings. We recently published the results of this study in BMJ Open, in which we also address RS, as it was an element in both evaluation formats.7
Could you explain how this random selection works in practice?
As mentioned, all proposals undergo a rigorous scientific assessment. For Postdoc.Mobility, we assign every proposal to two reviewers who are part of a panel representing a research area. The panel reviewers independently assess the proposals, and we then perform a triage, i.e. a pre-selection: clearly outstanding proposals and those that are clearly out of the running are recommended for funding or rejection respectively, without panel discussion. The remaining proposals, the “middle group” of proposals that are neither clearly excellent nor clearly poor, are discussed and ranked in the panel meetings.

Based on a newly established procedure at the SNSF, the scientific evaluation and the funding decision are now separate. This means that the committee that governs the evaluation panel decides where the funding cutoff lies and to which proposals RS might potentially be applied. In this context, the SNSF is refining a method based on Bayesian statistics that can be used to consistently and objectively identify groups of proposals for a random draw.8

Finally, we apply a physical method to draw the lots. We write the numbers of the relevant proposals on pieces of paper and place each one in an opaque capsule. The capsules are then drawn from a transparent bowl, one by one, yielding a final ranking based on chance. This defines which applications lie above the funding cutoff and which lie below it. The entire process is documented and recorded on film. In the approval or rejection letter sent to the applicant, we explicitly mention if the decision was reached by drawing lots. This increases transparency: researchers who are not awarded a grant at least learn that their proposal was of very high quality, even if it could not be funded.
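As a rough sketch, the tie-breaker logic described above can be expressed in a few lines of Python. The function and parameter names here are purely illustrative (they are not part of any SNSF system), and the SNSF performs the actual draw physically with numbered capsules, but the underlying allocation rule is the same: clearly ranked proposals are funded first, and the remaining slots are filled by chance among the indistinguishable group.

```python
import random

def allocate_with_lottery(above_cutoff, tie_group, slots_left, rng=None):
    """Illustrative tie-breaker allocation.

    above_cutoff: proposals the panel could clearly rank above the line.
    tie_group:    proposals of indistinguishable quality around the cutoff.
    slots_left:   funding slots remaining after the clear cases.
    Returns (funded, approved_but_not_funded).
    """
    rng = rng or random.Random()
    draw = list(tie_group)
    rng.shuffle(draw)  # the random draw: a chance-based ranking of the tie group
    funded = list(above_cutoff) + draw[:slots_left]
    approved_but_not_funded = draw[slots_left:]
    return funded, approved_but_not_funded
```

Note that the proposals left over after the draw are not simply “rejected”: as discussed below, they form a separate “approved but not funded” category, since they passed the scientific assessment and only lost out by chance.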
What are the main benefits of using random selection?
The motivations for using RS are manifold. For the SNSF, reducing subconscious biases in decision-making is definitely a central benefit of RS. Random funding allocation can also help to reduce the workload for evaluators. And it may encourage risk-taking, i.e. it could help research projects that might be considered too risky based on traditional evaluation methods relying on expert review. RS also makes the decision more transparent for applicants, who know that their proposal – though rejected – was of high quality and worthy of funding. Hence, with RS we not only have categories for “funded” and “rejected” proposals, but also an additional category for proposals that are “approved but not funded”. These are proposals which we would have funded if sufficient financial resources had been available.
How has random selection been received by applicants and by the Swiss research community in general?
The announcement that decisions could be made by drawing lots prompted a few questions from researchers applying for the Postdoc.Mobility scheme. These were mostly of a technical nature, with no one objecting to the use of RS as such. Over the two-year trial of RS in the Postdoc.Mobility scheme, we noted that the acceptance level among applicants was generally high, with some of them even welcoming the selective use of RS. Among the panelists, on the other hand, the introduction of the procedure led to some discussion. With increasing practical experience, however, we saw the level of trust rise among the panel reviewers as well. There was some concern that the introduction of RS might harm SNSF’s reputation and that a funding decision based on chance was not compatible with a merit-based system. However, as already mentioned, we apply RS only as a last resort or “tie-breaker” if rigorous scientific assessment fails to produce a clear result. Grant allocation is therefore still based on merit, and RS is only applied to very few proposals. It is not our intention to replace peer review with a mechanism that allocates research funding at random.
Note: The use of random selection in research funding decisions was the subject of the panel discussion ‘Luck of the draw’ during the Celebrating Openness event that took place in October 2020 (event co-organized by the UB and the OSCG).
- Guthrie, S., Ghiga, I. & Wooding, S. What do we know about grant peer review in the health sciences? F1000Res 6, 1335 (2018).
- Severin, A. et al. Gender and other potential biases in peer review: cross-sectional analysis of 38 250 external peer review reports. BMJ Open 10, e035058 (2020). doi: https://doi.org/10.1136/bmjopen-2019-035058
- Swiss National Science Foundation Postdoc.Mobility. https://www.snf.ch/en/XIZpfY3iVS5KRRoD/funding/careers/postdoc-mobility
- Swiss National Science Foundation - Drawing lots as a tie-breaker. https://www.snf.ch/en/JyifP2I9SUo8CPxI/news/news-210331-drawing-lots-as-a-tie-breaker
- Singh Chawla, D. Swiss funder draws lots to make grant decisions. Nature (2021). doi: https://doi.org/10.1038/d41586-021-01232-3
- Adam, D. Science funders gamble on grant lotteries. Nature 575, 574–575 (2019). https://www.nature.com/articles/d41586-019-03572-7
- Bieri, M., Roser, K., Heyard, R. & Egger, M. Face-to-face panel meetings versus remote evaluation of fellowship applications: simulation study at the Swiss National Science Foundation. BMJ Open 11, e047386 (2021). doi: https://doi.org/10.1136/bmjopen-2020-047386
- Heyard, R., Ott, M., Salanti, G. & Egger, M. Rethinking the Funding Line at the Swiss National Science Foundation: Bayesian Ranking and Lottery. arXiv:2102.09958 [stat] (2021).