Institute of Genetics: Stakeholder Engagement Report to the CIHR Peer Review Expert Panel

November 2016

Main Messages

The main messages and overall comments align with the feedback our community has provided over the past two years, although those ad hoc comments have not been integrated into this analysis.

  1. The current system is a total failure
  2. Face-to-face peer review needs to be restored
  3. CIHR needs to listen to its community

Stakeholder Engagement Approach

We originally planned to gather input only from our former IAB, as its members were selected to represent the breadth of IG's community. However, none of them sent us their input directly. The results presented here therefore reflect only what was submitted through CIHR's electronic submission portal, which may include answers from our past IAB as well as from members of our community who heard about the survey through other channels.

Methodological Approach

As the answers to each question were generally consistent, a short summary is presented for each question. To provide a clear and unbiased view of the stakeholders' perspectives, quotes are included in this report. These quotes have been edited only to correct typing mistakes.

Participants

All the participants associate themselves with the biomedical pillar, and only 3 of the 15 participants are women. A geographical bias is observed amongst the 14 respondents who identified their location: 7 from Alberta, 3 from British Columbia, 3 from Ontario and 1 from Quebec. Participation was also stronger among senior investigators (9) than mid-career (5) and new/early-career (1) investigators. Nine of the 14 respondents who answered (1 refused) applied to at least one of the following competitions, 4 of them successfully: the 2014-2015 Foundation Grant competition, the 2015-2016 Foundation Grant competition and the 2015 Project Grant competition. For those competitions, 3 of the 14 served as either a reviewer or a virtual chair.

Summary of Stakeholder Input

Question 1: Does the design of CIHR's reforms of investigator-initiated programs and peer review processes address their original objectives?

None of the respondents felt that the reforms addressed the objectives, and many thought that the objectives themselves were flawed, unclear or presented incoherently. Only one answer stated that the original intent "was absolutely correct". Another common theme was the failure to support women scientists as well as new investigators.

"The CIHR reforms are the very worst where the online reviews for the round 2 Foundation grants I did for two years were opinionated and without the accountability of scholarship."

"The primary impact of the changes to peer review has been to lower the quality of the reviews, and decimate morale among the scientific community. Despite what Alain Beaudet seems to believe, 15% of all reviewers failing to meet deadlines is not a noteworthy accomplishment, it's a disaster."

"I think that the redesign has brought out the most superficial of review processes with non-experts making decisions based on their gut feelings about a grant, and not about the science."

"1 - the F-scheme is a flawed system in which the review criteria are set up to favour senior, male PIs -one example - using invited seminars & leadership positions as a criteria to judge scientists when there is a clear, documented evidence of a MASSIVE gender bias (against women) for both these things."

"I believe that the intent was absolutely correct. To provide stable funding to established leaders to take risks, and then to provide opportunistic sources for project-based ideas. The issue with the implementation clouded this."

"The peer-review process from everything that I have heard is not providing a vigorous and fair review."

"The objectives of the peer review changes were poorly articulated, which is one of the reasons they failed. In presentations by CIHR high level administrators and on the CIHR web site, there has been no coherent explanation of what the changes are trying to achieve… The peer review expert panel may do a wonderful job in proposing changes, which will then be ignored by CIHR, allowing them to claim they consulted and are simply following your recommendations. Yes, my distrust of CIHR now runs that deep."

"The leadership of CIHR was well aware of the problems that would occur before the last round of reviews. They should be held accountable rather than being put in charge of fixing their own mistakes."

"The current process is "peer review" in name only. There are an inordinate number of reviewers with lack of expertise and/or interest in the process."

"If the objectives were to increase the quality of peer review and decrease the burden of the application and review processes, the reforms failed miserably on all counts."

Question 2: Do the changes in program architecture and peer review allow CIHR to address the challenges posed by the breadth of its mandate, the evolving nature of science, and the growth of interdisciplinary research?

Common answer: no (often in CAPS). A common theme was that, given the current level of funding, any process would be random in the end. Many answers to this question focused instead on the failure of the online system and its lack of accountability.

Issues raised pertaining to the original question:

"These changes emphasize the high-level science without allowing sufficient depth for evaluation"

"The changes allow reviewers to neglect their obligations to applicants. CIHR have delegated peer review to a poorly organized internet chat room, where virtual chairs have zero power to compel reviewers to behave responsibly. The results are equally predictable and devastating."

"The present review system, in the view of many in the research community, was strongly biased to the "applied" end of the research spectrum, with criteria biased against those carrying out fundamental health research."

"Honestly, I think we need to do far more than tweak a program to truly change our science. But certainly, there were as many complaints with the previous system as there are with these. Our community is striking non-evidence based in their critique. In Trump-like denial, our community refuses to acknowledge that Canada supports University science at levels higher than all other G7 countries and it is the lack of business investment that drops our science funding to the bottom of the league tables. We also do not acknowledge the random nature of face-to-face peer review, which they hold sacrosanct despite clear evidence at its inability to rank proposals in a reproducible order, nor to select those proposals that will generate the most impactful science - especially at the current funding levels."

"The project format in particular favoured 'spin' and hyped claims of impact and translation over substantive and detailed proposals."

"The idea of the interdisciplinary review is good but with a low success rate I don't think that it is possible to do a fair and meaningful review."

"No, they made it worse. The review process did not assure that each application was assessed by expert reviewers. Nor did it require that each reviewer justify their scores and rankings for each application. This eliminated an essential control on the quality of the reviews."

"This is a difficult question to answer because the mandate keeps changing."

"The changes are foisting this on the community, rather than nurturing them. It is marginalizing basic science to the extreme. We are, I fear, nearing a tipping point in the basic science community where we will lose a generation (or more) of talent."

Question 3: What challenges in the adjudication of applications for funding have been identified by public funding agencies internationally and in the literature on peer review, and how do CIHR's reforms address these?

Common challenges were identified, but no respondent thought that the reforms addressed them.

"Based on my experience public funding agencies internationally use high quality professionals in their agencies to put together panels and study sections. This is totally missing at CIHR."

"face-to-face review and discussion of grants is essential"

"Talking to colleagues, my sense is that reviewer fatigue sets in when we're asked to work for journals or agencies where the quality of review is very poor, and the likelihood of success for talented applicants is very low. CIHR have managed to meet both those criteria."

"The CIHR reforms go against the standard norm for grant review in other countries - namely face to face review. This has been one of the biggest failures of the CIHR reforms."

"Again, there is no scientific way to rank proposals at the funding percentages we have now. No literature review or peer comparison will change this. Certainly the superstar proposals will get identified (1-3%) but the remainder is a crap shoot."

"Fairness, integrity, lack of conflict of interest, appropriate expertise of reviewers, ability to collect high quality reviews, ability o hold reviewers accountable for the quality of reviews, and providing reviewers with appropriate information to make a "good" decision in adjudication. Dealing with an increased "load" of applications is also an issue. In my opinion, the changes at CIHR do not address these issues.(...) CIHR has systematically ignored voices from its community and failed to engage the community in building a better review system. While the Peer Review Expert Panel is a positive initiative, it is only the start of a process that must have at its heart the goal of listening to the research community when making decisions… CIHR has abandoned its mandate which is stipulated in the act of parliament that created it."

"The review process fails to meet the international standard for a minimum quality of peer review. This was documented in detail recently by a peer review working group.(...) Why a second "international review" is required is beyond me and seems to be a play by CIHR management for space and time. An independent consultant reviewed the management and execution of these so-called reforms and wrote one of the most critical reports I have seen in my career."

"I am aware that reviewer fatigue is an issue, but this process seems to have inflated the number of required reviews rather than minimize it."

Question 4: Are the mechanisms set up by CIHR, including but not limited to the College of Reviewers, appropriate and sufficient to ensure peer review quality and impacts?

No, the majority of the respondents believe that the mechanisms set up by CIHR, including but not limited to the College of Reviewers, failed. A general feeling that CIHR has lost credibility and/or needs to be reformed was expressed. However, two respondents noted that, if reviewer matching were done appropriately, the system could be as good as other systems. One noted that having only CIHR-funded investigators review would be "a really good step in the right direction."

"CIHR needs to be transformed. It was modelled on the NIH and should consider evolving to a selection of internal staff with the scientific excellence and rack records of the NIH director and the heads of the NIH institutes."

"CIHR continue to stumble from one self-inflicted disaster to another. I've never seen morale so low in a scientific community, and the blame for this lies squarely with CIHR. There is nothing coming out of that organization to suggest they have any interest in high (or even moderate) quality review."

"The new proposals to have only CIHR funded investigators review are a really good step in the right direction. Better matching of applications would also help. The college of reviewers is not a problem per se, as long as matching is well done."

"The college of reviewers system is established to have been a debacle, which was anticipated by the community. The community's view is that the college of reviewers system led to reviews that were exceptionally low in quality, leading to a great loss of trust in CIHR leadership… It is not clear what is meant by peer review "impact". Does this mean ability to identify research proposals that have potential for impactful research? No, the current system does a very poor job of identifying the most impactful proposals."

"During face-to-face meetings between reviewers, it would become obvious when reviewers lacked the expertise to back their opinions, or when they were biased against certain fields of research."

"The processes were farcical, disorganized, poorly executed, and open to abuse by poor reviewers."

"The college of reviewers is not yet functional to my knowledge and so it is difficult to assess how effective it will be. This of course is due to the leadership decisions around the roll out of the CIHR reforms which probably attempted to revise too many aspects of the open program at once. I think the virtual review process has been roundly and rightly criticized in the media and by scientists."

"Too many applications were reviewed by people that did not possess the necessary expertise. Reviewers were not required to participate in the discussion of the grants and they were not required to justify their scores and rankings in any forum."

"Peer review quality has degraded significantly."

Question 5: What are international best practices in peer review that should be considered by CIHR to enhance quality and efficiency of its systems?

The NIH system and face-to-face peer review were repeatedly cited as best practices.

Thorough answers:

"As background, I run an organization with laboratories in 6 countries (Brazil, UK, Sweden, Germany, United States and Canada), and am well acquainted with the funding mechanisms in those countries. Canada is different, but not materially. I like the MRC's program to allow grantees to rebut the reviews prior to a decision. I like Wellcome's (now-defunct) "strategic award" program, which allowed far-out ideas to be funded, ideas that would never be endorsed by peers. I like Takeda's TEC program, in which reviewers are given 3 "chits" and ~50 grants, and every grant you select gets funded - no need for peer consensus ("Innovative grants cannot be identified by peer review because by definition, innovators have no peers.", Tachi Yamada) And personally, I like programs like the IMI and Genome Canada, which awards large grants to ideas that have broader impact and that invite industry expertise to solve important problems - ideas that the CIHR community would find too difficult/big to support, or the NIH would be precluded from because of the industry ties. And finally, I love ARPA (DARPA), which again is a modified peer process (few decision makers) that supports far out (mostly dumb) ideas."

"Issue 1- Reviewers must have sufficient expertise to review. Highly qualified, respected, scientific chairs are needed to assign grants to reviewers. These individuals ned to be very familiar with the research community to identify reviewers with background able to assess particular applications. No algorithm or CIHR staffer has a hope of matching grants with reviewers. Issue 2- Reviewers must be identified who can deliver high quality reviews- One of the largest issues is to identify reviewers who "care" enough to spend the time and effort to deliver reasoned, dispassionate reviews. The old face to face panels succeeded because when someone has to present a review to a panel of peers across a table, there is no opportunity to do a superficial job without being caught. Informally committee chairs would not invite back individuals who did a bad job. Moreover, this was very rare because reviewers would at all cost want top avoid embarrassment in front of their peers. Assessment and accountability of reviewers are essential if the goal of the peer review system is to identify fairly the best grant applications. Reviewers need to feel that their contribution is valued. In part this comes from a community that is aware of an individual's reviewing contributions. Amazingly, the most recent CV form for the Project scheme did not provide a page to include information on one's peer review activities. This is an oversight that needs correction. Individuals who contribute need to be applauded and recognized openly and those who don't need to be shamed. Issue 3- Information provided to reviewers. A high quality CV, providing the critical information to reviewers in a simple way needs to be provided. The current CIHR CV requires revision to be optimal. Also, CIHR needs more carefully to balance the simplicity of application forms with the need for depth of information in order to make critical decisions. That is, the project scheme applications have been reduced in length, which makes grant writing and review faster and easier, but at the expense of having insufficient information for reviewers to make good decisions."

Question 6: What are the leading indicators and methods through which CIHR could evaluate the quality and efficiency of its peer review systems going forward?

Many approaches were suggested; they are presented below. The top themes are:

  • Change in CIHR leadership to restore trust
  • Consultation with scientists (chairs, reviewers and applicants)

"Right now there are none that are rigorous and accountable. Using the same methods of NIH as seen on their web pages would be helpful to CIHR but only after the appointments of new full-time President and internal staff."

"Reviewer satisfaction. Applicant satisfaction. The speed with which they fire Alain Beaudet for being a horrendously incompetent head."

"Leading indicators would be quality of publications in top tier journals, their citation rate and the translation of findings to the public good (health)."

"gender equality career-stage equality"

"gender and stage of career equity funding for the full breadth of researchers (basic to clinical)"

"I would try to find a way to support unique excellent research, not use the term "internationally competitive""

"The best evaluation would be through surveys of the CIHR community regarding their impression of the review process."

"Consult and listen to your constituency. Arbitrary changes without true consultation is not going to work. Surveys and other tools are helpful."

"Surveys of scientists, including not only consulting with but also heeding the words of leaders in the research community."

"Looking at means from initial scores with the huge standard deviations that we presently see SHOULD be a leading indicator that the current system is woefully broken. Face to face review, with named experts, it a time-tested method in use by most leading research funding agencies. I do not understand the need to search for new metrics to patch an already broken system."

"Virtual chairs can provide feedback on engagement of reviewers. Reviewers can report on quality of on-line discussions. Applicants can report back on the utility of the feedback that they receive. CIHR can look at the concordance of scores for applications that were submitted in duplicate to the Foundation and Project grant schemes."

"Right now with the low percent of funds available for grants, you cannot judge quality. More funds are needed to general research grants, less SPOR and empirical research. Data-driven research should be a priority."
