Gambling with Funding

Can funding be made more fair by instituting a lottery?

Over the years in the lab, the following conversation has become more and more common at seminars and social hours:

Scientist A: "So, how's it going?"

Scientist B: "Well, my R01 didn't get funded..."

Scientist A: "Yeah, mine neither."

Scientist B: "So...more wine?

Many of the more established professors remember the halcyon days of single grant submissions and straightforward renewals; in contrast, those who have recently joined the professorial ranks have only known the cycle of rejection, revision, and resubmission that constitutes the current NIH funding system. The reality is that despite an anticipated bump in NIH funding in the upcoming budget, the federal research money pot is not deep enough to fund all the worthy proposals out there.

So what is there to do when funding paylines are at a level where it's virtually impossible to clearly separate who deserves funding and who does not? Barring a massive injection of funds to cover all worthy proposals (which is unlikely given the number of researchers joining the field and general economic pressures on the budget from other areas), the current funding system deserves a long, hard look and alternative funding schemes should be considered.

This is the point that Ferric Fang (Editor in Chief of Infection and Immunity) and Arturo Casadevall (Founding Editor in Chief of mBio) have made in their recent editorial in mBio calling for a funding lottery (an idea they previously proposed in an op-ed in the Wall Street Journal last year). While the editorial discusses how the NIH extramural funding system could be restructured, its broader ideas about how we think about the scientific endeavor may be informative for other funding agencies as well.

In the current NIH extramural funding system, grants are peer reviewed by study sections made up of other investigators. These committees score proposals based on criteria such as the worthiness and feasibility of the proposal, the level of innovation of the work, and the track record and likelihood of success of the researcher. When all scores are collected, a certain percentage of the best-scoring grants are awarded depending on the funding available, sometimes as low as 1 award per 10 proposals. One can easily imagine that when money is plentiful, it is easy to defend funding work scoring in the top 10%-30% over a proposal at the 50th percentile. But these days, study sections are effectively tasked with saying that a proposal at the 15th percentile is so much worse scientifically than one at the 10th that it deserves to be denied money. The authors say that making this distinction is virtually impossible and offer several lines of evidence to support this assessment:

1. Studies have found that reviewers of the same grant often assign different scores, leading to variability during peer review. This random difference between scores can make or break a given proposal, and some studies find only a weak correlation between percentile scores and the ultimate productivity (e.g., publications) of the funded researcher. So peer review may not be great at picking the best from the very good (or possibly even from the good).

2. There may be biases in the peer review process, with reviewers rewarding certain proposals based on the topic area, who the investigator is, and other factors such as gender, race, and institution. Small differences in scores again have exaggerated effects on 'fundability'.

3. Fundamentally, it is particularly difficult to evaluate transformative potential. Sometimes 'niche' work can nucleate revolutionary ideas (e.g., CRISPR biology was seen as a microbiological curiosity, with early papers famously struggling to find a journal home), but this can take decades to manifest and cannot be accurately judged at an early funding stage. Thus, current proposals tend to take a conservative approach, studying ideas that are very likely to succeed (or already have) and otherwise eschewing risk.

Thus, the editors use these arguments to make a pitch for a lottery-based system for awarding grants. In this system, a peer-review-based triage process would first whittle the meritorious proposal pool down to a reasonable number (e.g., the top 30% of grants if the payline were 10%). A computer-generated lottery would then select proposals from this pool to fund.
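To make the mechanics concrete, here is a minimal sketch in Python of how such a two-stage process might work. The function name, the example scores, and the 30%/10% thresholds are purely illustrative assumptions, not details from the editorial: peer review triages proposals into a meritorious pool, and a random draw then fills the payline from that pool.

```python
import random

def lottery_fund(proposals, triage_fraction=0.30, payline_fraction=0.10, seed=None):
    """Hypothetical two-stage award: peer-review triage, then a random draw.

    `proposals` is a list of (name, percentile_score) pairs, where a lower
    percentile score is better (as in NIH scoring).
    """
    rng = random.Random(seed)
    # Stage 1: triage -- keep the meritorious pool (e.g., the best 30% by score).
    ranked = sorted(proposals, key=lambda p: p[1])
    pool = ranked[: max(1, int(len(ranked) * triage_fraction))]
    # Stage 2: lottery -- randomly award up to the payline from that pool.
    n_awards = max(1, int(len(proposals) * payline_fraction))
    return rng.sample(pool, min(n_awards, len(pool)))

# Example: 100 hypothetical proposals with percentile scores 1..100.
proposals = [(f"R01-{i:03d}", i) for i in range(1, 101)]
awards = lottery_fund(proposals, seed=42)
print(sorted(name for name, _ in awards))
```

Under a scheme like this, every proposal that clears triage has the same chance of funding, which is the property the authors argue blunts the effect of small, noisy score differences near the payline.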

They argue that the idea is not as extreme as it sounds: if current paylines are such that minor differences in scores separate success and failure, then a lottery of sorts is already in place. Importantly, they argue that a totally random lottery will reduce the exaggerated impact of implicit biases, would save money on convening study sections and would psychologically buffer the sting of a rejection by allowing researchers to ascribe negative decisions to bad luck.

While the editorial focuses on a lottery mechanism, many other reform proposals have been made by scientists, including referendum-style voting on grants and funding individual investigators based on their track records (as HHMI currently does). It is not clear which of these ideas, if any, will truly make it into policy and offer respite for weary investigators, but several of the messages behind these proposals are important for the general public to appreciate:

1. Because our ability to predict the future importance of discoveries is questionable, it is good, much as in evolution, to have many irons in the fire. The more diverse the science being funded, the more likely it is that something can serve as the foundation of truly transformative work in the future. How many great ideas have been lost from our scientific pipeline by being seen as too risky and 'unfundable'?

2. Acknowledging our own shortsightedness, it is important to understand that for innovation to occur, successes will be accompanied by a hugely disproportionate level of failure: there are few paths from point A to point B, but many wrong turns in between. Thus, while it is important that the public sees tax dollars being spent to revolutionize science and medicine, the public and political case should be made that investment needs to remain high (and inefficient) even in the face of low perceived 'productivity'.

3. Finally, though success in science is rare and spreading money around 'randomly' might be better for the future of the scientific enterprise as a whole, is this at odds with practical concerns about investigators' careers? If a renewal does not come simply because of bad luck (even if the project was a success), what implications does this have for things like promotion and tenure? How do institutions re-evaluate their metrics of success and failure?

So, for all that we wring our hands about publication being the key to success (i.e., publish or perish), the granting agencies arguably have the greatest impact in shaping scientific innovation and, by extension, scientific careers. It is clear that the funding conundrum is not going to be solved overnight, but as the authors state, the current NIH system has remained largely unchanged for decades and may no longer reflect the reality of research in the 21st century. What changes should be made, and whether they will even be effective, remains unclear. Ironically, these questions could be addressed by conducting more transparent research (funded by NIH?) into how the system currently both supports and excludes scientists. Meta, much?
