Institute of Nutrition, Metabolism and Diabetes (INMD): Stakeholder Engagement Report to the CIHR Peer Review Expert Panel

November 21, 2016

Main Messages

  1. The overall tone of the responses is one of frustration, incredulity, and anger; some respondents seemed to suggest that lack of funding, poor implementation, and lack of transparency doomed even the best intentions for peer review reform.
  2. Some respondents worried that the new system disenfranchised investigators, particularly those in the mid-stages of their careers.
  3. Respondents offered several suggestions to improve the peer review process. For instance, respondents suggested looking to other nations for guidance. The National Institutes of Health and the Juvenile Diabetes Research Foundation were identified as exemplars.

Stakeholder Engagement Approach

INMD used the on-line survey developed by the CIHR Performance and Accountability Branch that was posted on the CIHR web site. A personalized message from the Scientific Director was sent to former INMD Institute Advisory Board members asking them to complete the survey, followed by one reminder e-mail to those invited to participate. In addition, personalized messages from the Scientific Director were sent to key INMD stakeholders requesting their participation in the survey. These stakeholders were largely health charities related to the INMD mandate, as well as professional associations. These partners approached the task in various ways: in one case, the partner conducted its own survey of members (largely researchers), received 57 responses, and submitted a summary of those responses directly to INMD; the others submitted individual responses.

Participants

Thirteen respondents participated, from across Canada (n=11) and Europe (n=1); one participant did not share demographic information. One of these responses represents a total of 49 respondents in the Kidney Community, a field of research that falls within the INMD mandate. Of those 49 respondents, 47% were women and 53% were men. Of the individual responses, 23% were female and 62% were male (two respondents declined to identify their gender). The respondents in the Kidney Community represented the following Pillars: 45% Biomedical, 29% Clinical, 19% Health Services & Systems, and 7% Social, Cultural, Environmental and Population Health. The individual respondents represented the following Pillars: 54% Biomedical, 15% Clinical, 7% Health Services & Systems, and 15% Social, Cultural, Environmental and Population Health; two respondents did not specify their theme of research. Respondents represented the following career stages (please note that, for the remainder of the document, percentages for Kidney Community respondents are presented first): new investigators (16%; 8%), mid-career (50%; 23%), and senior (34%; 54%). Two respondents did not provide their career stage. Of the individual respondents, 69% (n=9) indicated that they had applied for the 2014-2015 grant competition; of these, three were successful and two declined to respond. While one respondent had served as a virtual chair, most (n=10) had not; one respondent preferred not to answer, and one did not respond.

Summary of Stakeholder Input

Question 1: Does the design of CIHR's reforms of investigator-initiated programs and peer review processes address their original objectives?

Participants variably described their perceptions of CIHR's original objectives. They perceived that CIHR had set out (1) to reduce reviewer fatigue, (2) to enhance reviewer expertise, and (3) to "support world-class researchers." In general, respondents did not believe that CIHR's reforms of investigator-initiated programs and peer review processes addressed their original objectives (60%; 70%), even among those who had originally supported the reforms. Survey responses ranged from those that were wholly positive (33%; 15%) that CIHR met its objectives, to those who considered that "the reforms are a complete disaster. Peer-review has been destroyed." Others were unsure (7%; 15%). Most of those who responded positively did not provide details, simply answering "yes." One exception elaborated: "I am in a privileged position to have received a project scheme, and I am a new investigator. Many, although not all, of my peer reviews were helpful in considering my grant. I agree with the decision to change the criteria so that someone can hold both a foundation scheme and a project scheme." One respondent answered that the objectives were met, but only sarcastically: "If the objective of the reforms were to systematically dismantle trust in CIHR and destabilize careers, then yes, job well done."

Participants identified a variety of challenges that impeded CIHR from meeting its objectives, including (1) a lack of resources, (2) poor implementation, and (3) a lack of transparency. Several respondents articulated that there were "too many researchers vying for a limited amount of funding." There was a pervasive sense that "Many truly excellent grants are not funded because the success rates are too low," a problem that would linger even if CIHR had been able to fully meet its stated objectives. Some perceived that the lack of resources may be an insurmountable challenge, and one respondent worried that "Canada has lost its global competitive edge in basic medical research because of inadequate funding for investigator-initiated fundamental research…" Even among those who believed that the objectives had been met, there was a sense that the main problem was not the reforms themselves, but how they were implemented. Respondents noted, however, that a lack of transparency made it difficult to fully assess whether or not the objectives had been met: "Unless CIHR cares to release the results of this pilot so that the community might assess how successful it has been, it's hard to conclude anything other than it's been a complete and utter failure of an experiment."

As a result, anger reigned, and there was a sense of a "growing movement of sarcasm among the research population." Some worried that CIHR's policies were discriminatory, particularly toward mid-career researchers: "It is clear that all career stages are not treated equally in the new scheme. Male, senior investigators have been most successful. Mid career and early career investigators are discriminated against. Women are particularly discriminated against in the Foundation scheme." Others worried that lower quality, or less innovative, research was being funded: "I think that the Foundation grant program will provide increased funding for established researchers with a more clinical focus and will not capture the energy and innovation from smaller labs and research groups."

Question 2: Do the changes in program architecture and peer review allow CIHR to address the challenges posed by the breadth of its mandate, the evolving nature of science, and the growth of interdisciplinary research?

The responses to this question were very strongly weighted to the "no" side: 79% of the Kidney Community responses and 92% of the INMD responses were negative. There was a pervasive sense that the changes in program architecture and peer review did not address the multiple challenges faced by CIHR. Only a few felt that the changes to program architecture helped: "Architecture (potentially) facilitates reviewers from a wider range of disciplines. Also facilitated more appropriate reviewers for grants. Reviewers can review across committees - and/or the concept of 'committee' is much more flexible." Most, however, characterized the system as arbitrary, inconsistent, and "short-sighted", and perceived that CIHR achieved the exact opposite of its goals. That is, accessibility decreased while complexity increased; in turn, this created additional burden for applicants: "Program accessibility has been reduced for most (except the select few foundation grantees) and complexity has been increased due to the horrible review structure/system and application process. For most applicants, this challenge is now greater."

The reasons cited for the negative responses related to the "chaos" of the current system, the lack of accountability in the on-line review, and the feeling that the new format emphasizes marketing over science. There was a perception that changes to CIHR's program architecture created a system in which reviewers lacked the appropriate expertise to knowledgeably review grants. Consequently, applicants were presented with inconsistent reviews of their work. Respondents suggested that the limited space for comprehensive comments and the lack of face-to-face interaction amongst reviewers compounded these challenges: "The bottom line is that many of us peer review because we: 1) like helping our colleagues improve their science; 2) enjoy learning about new and different science both in our fields and peripheral to them and 3) enjoy serving on panels, meeting peers and having discussions about science. The new system does not give reviewers any of these benefits as our feedback is now character limited and must fit into boxes; the applications are so devoid of details as to be mundane and boring and we have to sit and stare at computer screens and carry on asynchronous online review. Further, the CIHR did not consider that it might actually take MORE time to sit at your keyboard and type for multiple days on end to have an online discussion about a grant."

The few positive responses to this question were qualified: for example, "Maybe, but too early to tell", or "yes somewhat." Some respondents indicated that CIHR wasn't fully to blame; that is, limited funding may be an insurmountable barrier. With more applicants applying for fewer resources, there was a perception that CIHR was reluctant to fund innovative or "risky" proposals; this was perceived to have significant implications for the evolving nature of science: "Considering that funding is not available for novel innovative research that has limited or no preliminary data, I don't see how interdisciplinary research can take place in Canada." Consequently, respondents worried that collaborative work was becoming impossible, and that talented researchers would begin to defect from CIHR to seek funding elsewhere.

Participants suggested a need for more international reviewers, and to provide appropriate incentives to retain qualified, expert reviewers. Others perceived that some of the reforms should be reversed: "The gold standard for peer review is face-to-face committees. The CIHR should make the effort to return to this system where all grants are thoroughly reviewed and then triaged/streamlined during the face-to-face panel. This gives reviewers the chance to discuss any discrepancies that might exist and come to a consensus."

Question 3: What challenges in adjudication of applications for funding have been identified by public funding agencies internationally and in the literature on peer review and how do CIHR's reforms address these?

One respondent acknowledged that peer review is "never free of bias since the process is adjudicated by humans" and that all granting agencies must reconcile this challenge. However, this respondent perceived that "CIHR has taken the worst elements of other granting agencies' processes." The same themes that were pertinent for other survey questions also resonated with the issue of adjudication of applications. Specifically, responses included perceptions that funding was being directed away from Biomedical research, that CIHR is a risk-averse funding environment, that there are too many inconsistent approaches to evaluating and reviewing applications, and that there is a need for continuous and rigorous evaluation of reviews and of the review process. For instance, participants perceived that "reviewer fatigue and poor correlation between review score and impactful research," "injustices in funding decisions," and a lack of transparency and accountability remain key challenges. Respondents also continued to critique the lack of face-to-face meetings; one, however, perceived additional challenges: "there are not enough reviewers in the system or per grant, the reviewers are not given enough freedom to be clear and helpful, and there is not enough requirement that they be accountable." Anger continued to permeate responses: "the key people overseeing it at CIHR should not be allowed near the review process again."

Participants offered several suggestions for addressing these problems and challenges. For instance: "individuals who have applications in a competition not being allowed to sit on a panel begins to address (bias)… continuing the process of not allowing chairs/vice chairs score an application also helps." Face-to-face meetings should be re-instated: "The discussion of grants is critical and my impression (though not a participant) is that the online discussion was not adequate. While I do not believe it needs to be face-to-face (in fact I find the travel cumbersome as a reviewer) it must be in REAL TIME by telecon or Webex. I am OK with further triaging (not full discussion) being performed at this stage, but prior to that only clear consensus triage grants should not come forward."

Question 4: Are the mechanisms set up by CIHR, including but not limited to the College of Reviewers, appropriate and sufficient to ensure peer review quality and impacts?

Similar to the responses to Question 2, the majority of respondents (80%; 77%) did not feel that the mechanisms set up by CIHR were appropriate or sufficient. There was a general consensus that the removal of topic-specific review panels reduced the expertise available to review grants accurately. Additionally, the removal of face-to-face meetings decreased accountability, and reviewers complained that they were asked to review grants for which they did not have substantive knowledge. "Peer review at the CIHR is in shambles. The College of Reviewers does not exist. All of the peer review expertise that used to exist at the CIHR has been squandered." One respondent seemed to describe the crux of the problem: "Ever heard of accountability of the reviewers and of the review process? Try that for a start. Try providing expert review rather than sending applications to unqualified reviewers. The undisclosed nature of the process by which CIHR matches applications to reviewers is a scandal. Where is accountability for scientific rigour?" Other respondents echoed these sentiments, describing the CIHR reforms as an "abysmal," "incoherent mess of garbage" that has "failed the scientific community." Again, the loss of face-to-face meetings was blamed: their absence minimized opportunities for new and inexperienced reviewers to obtain mentorship and to develop the skills and expertise needed to provide rigorous peer review.

Question 5: What are international best practices in peer review that should be considered by CIHR to enhance quality and efficiency of its systems?

Responses to this question included: face-to-face meetings, ensuring that real experts in the field are reviewing the grants, being clear about the criteria for evaluation of grants, and potentially blinding the entire review process with codes for names on the CV and the application. Specifically, respondents suggested looking to the National Institutes of Health and the Juvenile Diabetes Research Foundation as models of exemplary peer review programs. Respondents also suggested re-instating mandatory face-to-face meetings for triaging and reviewing grants, engaging peer reviewers who are also "active members of the research community (i.e., active grants)," and drawing on the expertise not only of international expert reviewers, but also of patients or lay individuals "to understand what research projects are truly meaningful for patients."

Question 6: What are the leading indicators and methods through which CIHR could evaluate the quality and efficiency of its peer review systems going forward?

"If everyone in the community is upset with a system, then chances are it has [failed]." This respondent's statement seemed to represent other respondents' responses. For instance, "CIHR set the global low-water mark for peer review in 2016. I cannot imagine it could get worse, although you never know. I think this is evident by the unprecedented level of dissatisfaction. Scientists are used to getting grants rejected, it actually doesn't usually bother most of us beyond the initial sting - what bothers us is chaos." Respondents indicated that the following methods/models would be beneficial for evaluating the quality and efficiency of its peer review system: Feedback from international leaders as part of the evaluation committees; continuous monitoring of successes of the grantees (in terms of scientific outcomes); including milestones within grants; research and researchers being evaluated by standard methods of research impact; having a sub-set of applications evaluated by two or more panels; design reviewer instructions and application form together; and insisting that committee members (and chairs and SOs) explicitly evaluate the performance of all other members. Moving forward, respondents suggested that it would be important "to assess questions of bias by measuring "where the funding is going and who is serving on the review panels." Another respondent also suggested that CIHR "pay attention to the many well established paradigms for scientific peer review, especially in countries with a significant rate of production of Nobel Laureates and other innovators in the health sciences."

Question 7: In July, CIHR released a report outlining steps it will take to correct some of the issues raised by the community. What is your opinion of this response?

The nearly unanimous response to this question indicated that members of the Kidney Community felt that this was a very small step in the right direction.

Prepared by Kori A. LaDonna, PhD
