Institute of Musculoskeletal Health and Arthritis (IMHA): Stakeholder Engagement Report to the CIHR Peer Review Expert Panel

November 2016

Main Messages

Three key concerns were evident in the feedback collected from CIHR-IMHA's research community:

Skewing Funding Towards Particular Types of Research and Career Stages: while the majority of participants found some merit in the conceptual basis of the Project and Foundation Schemes, they felt that each scheme was operationalized in a way that disadvantaged particular types of research (e.g., novel ideas and basic research) and career stages (e.g., first-year New Investigators and Mid-career Investigators). They also expressed concern about "lumping everything" into the two schemes. As a result, close to half of participants expected the new suite of programming to fall short of the objectives of accelerating the creation of new knowledge, developing Canadian capacity, and supporting researchers across career stages.

Compromised Peer Review: over 80% of respondents viewed the initial reforms to peer review as very problematic. While many acknowledged that peer review is an imperfect process, they felt that the reforms exacerbated these imperfections by creating uncertainty and abandoning mechanisms that helped reduce the potential for bias and low-quality reviews (e.g., face-to-face review). CIHR's recent amendments to peer review were viewed by most as a step in the right direction, but not a complete correction. Most respondents wanted to see improvement in the quality of reviews and the matching of expertise to applications. Many were apprehensive about the extent to which software could address these challenges, and many feared it would take several years to optimize the College of Reviewers for the Canadian context.

Incentivizing Counterproductive Competitive Strategies: half of respondents expressed concern that the current configuration of the Foundation Scheme involved too many disincentives for "High Rollers" and would increase application pressure in the Project Scheme. They also feared that uncertainty and distrust in peer review would encourage researchers to submit a large number of applications. Nearly a quarter of respondents (23%) recommended lowering the cap for the Project Scheme to allow the funding of more applications. Another popular recommendation was adding support for trainees to the Foundation Scheme.

Stakeholder Engagement Approach

CIHR-IMHA engaged stakeholders via focus groups, email correspondence, and the CIHR webform. One focus group was held at the University of Manitoba (7 participants) and another at McGill University (5 participants). CIHR-IMHA promoted the webform to its newsletter subscriber list (n=1191); twelve completed it. Twelve researchers were invited to provide feedback via email correspondence; two did so.

Participants

Twenty-six stakeholders provided feedback, split evenly between men and women (13 each). Eighteen participants identified a primary pillar of research: Biomedical (13); Clinical (3); Health Systems and Services (1); and Social, Cultural, Environmental and Population Health (1). Nineteen participants identified their career stage: Senior (10); New/Early Career (5); and Mid-career (4). Thirteen participants indicated that they had applied to the Foundation or Project Scheme, four of whom reported success. Six participants had served as a chair or reviewer for the new schemes (this includes the pilot competition).

Question 1: Does the design of CIHR's reforms of investigator-initiated programs and peer review processes address their original objectives?

Overall, 12% of respondents were confident that the reforms would achieve their objectives, 19% thought they would partially do so, 62% disagreed, and 8% were uncertain. More than half of participants liked the idea of CIHR offering longer-term and more flexible funding for programs of research while also maintaining a separate funding scheme for specific projects. However, most did not think CIHR got the mechanics of these two schemes right, nor did they like the prospect of "lumping everything" into them. The leading concerns raised are outlined in more detail below.
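
The four percentages above sum to 101 because of rounding. As a minimal arithmetic sketch, assuming all 26 participants answered this question (an assumption; the report does not state this explicitly), a breakdown of 3, 5, 16, and 2 respondents reproduces the reported figures exactly:

    # Consistency check for the Question 1 breakdown (assumes n = 26,
    # i.e., that every participant answered this question).
    counts = {"confident": 3, "partially": 5, "disagreed": 16, "uncertain": 2}
    n = sum(counts.values())  # 26
    for label, k in counts.items():
        print(f"{label}: {k}/{n} = {round(100 * k / n)}%")  # 12, 19, 62, 8
    # The rounded percentages sum to 101 -- a rounding artifact, not an error.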

Comparing "Apples to Eggplants": this concern centred on the diversity of applications received and the evaluation criteria provided to reviewers. While the experience of the reviewers in CIHR-IMHA's sample varied in terms of the diversity of the applications they received, most reviewers expressed reservations about the evaluation criteria and their weightings. In particular, they felt that asking reviewers to score diversity of portfolios, commercialization potential, translational potential, and the promotion of collaboration across disciplines necessitated too much speculation and was not equally relevant to all applications. The majority of reviewers felt that the current assessment criteria disadvantaged innovative ideas and basic research.

Incentivizing Counterproductive Competitive Strategies: participants viewed the current configuration of the Foundation Scheme as unattractive, particularly for the "High Rollers" for whom they believed the scheme was designed. More specifically, they thought highly competitive researchers would be able to secure more funding overall through the Project Scheme and stagger grants to create a safety net – all without being locked in for seven years. Participants feared this would greatly increase application pressure in the Project Scheme. Additionally, they anticipated that widespread distrust in peer review would exacerbate the problem (e.g., tempting researchers to submit as many applications as they could in the hope that at least one would be successful). Nearly a quarter of respondents (23%) recommended lowering the cap on the Project Scheme to fund more applications. Another popular recommendation was adding support for trainees to the Foundation Scheme to increase its attractiveness.

Gaps in Support for Particular Career Stages: participants expected Mid-career Investigators to face great difficulties in both the Project and Foundation Schemes. While most participants agreed that adding a quota for New Investigators in the Foundation Scheme was beneficial, they thought there would still be a major gap in competitiveness between first-year New Investigators and those about to transition to mid-career. There was no consensus on how to address the perceived challenges for particular career stages.

Compromised Peer Review: participants expressed strong beliefs about the importance of face-to-face meetings for minimizing error and bias in peer review and improving the quality of reviews. They also identified face-to-face meetings as valuable for networking and learning. Those who participated in the remote peer review process described it as dismal – particularly due to low participation rates and low-quality exchanges. While the recent amendments to peer review were viewed as improvements, there was still concern about how well the shift to synchronous remote review would work. Participants recommended ensuring that the synchronous remote reviews involve both audio and visual communication.

Question 2: Do the changes in program architecture and peer review allow CIHR to address the challenges posed by the breadth of its mandate, the evolving nature of science, and the growth of interdisciplinary research?

Most participants acknowledged that it is a challenging time to practice and fund scientific research, given high application pressure, major resource constraints, and the lack of a national science policy in Canada. They also acknowledged that peer review is a time-consuming and imperfect process that will likely always be subject to criticism. Yet outside of support for the conceptual basis of the Project and Foundation Schemes, participants were hard-pressed to identify features of the reforms that they thought worked particularly well, or that represented a well-strategized response to the challenges CIHR faces. Over two-thirds of participants felt that the reforms resulted in a great deal of chaos and uncertainty. Close to a third suspected budget considerations were a leading factor in decision making, especially with regard to peer review.

In addition to the concerns mentioned in response to question one, participants viewed the following changes as problematic: the alphanumeric scoring system; lost opportunities to discuss grants amongst a larger set of panel/committee members; the time required to fill out the common CV; and the introduction of "new field dominated" forms. With respect to effectively assessing multidisciplinary research, participants felt that more refined matching of expertise, and perhaps additional reviews, would be needed. While participants appreciated the recent amendments to the Project Scheme, they anticipated it would take considerably more work to improve the matching of expertise to applications while managing the burden on reviewers.

Question 3: What challenges in adjudication of applications for funding have been identified by public funding agencies internationally and in the literature on peer review and how do CIHR's reforms address these?

Participants identified multiple challenges with respect to adjudicating funding applications (though few referenced specific studies or organizations in doing so):

  • Identifying and implementing mechanisms to minimize bias, error, and conflict of interest;
  • Managing the cost and time requirements of peer review;
  • Providing clear assessment criteria that necessitate little speculation;
  • Providing useful comments for applications that show promise but do not succeed on the first try;
  • Balancing efforts to reduce the burden on reviewers, and to find the best match of expertise for each application, with ensuring that each reviewer has the necessary context;
  • Calibrating scores across reviewers;
  • Establishing an effective triage system to optimize the use of reviewers; and
  • Attracting/developing an appropriate pool of experienced reviewers with the necessary expertise.

While participants were not able to determine how well CIHR's reforms addressed all of the above challenges, they did feel strongly that the reforms abandoned (at least originally) the strongest mechanism for minimizing bias and error (face-to-face review). Several participants also felt strongly that the assessment criteria required too much speculation and that the instructions provided to reviewers were insufficient. Most participants acknowledged that CIHR is still in the process of formulating its response to the challenges of matching expertise to applications and managing the burden on reviewers. There was both optimism and apprehension about the College of Reviewers – particularly in terms of how ambitious it could be, how long it would take to optimize, and what role software would play in it.

Question 4: Are the mechanisms set up by CIHR, including but not limited to the College of Reviewers, appropriate and sufficient to ensure peer review quality and impacts?

Participants found this question challenging to answer for the following reasons:

  • Multiple reforms were enacted at once, making it difficult to determine the appropriateness or sufficiency of any particular mechanism;
  • Many participants felt that the reforms lacked clarity and transparency;
  • Several concerns were at least partially addressed in the recent amendments to the Project Scheme; and
  • Most participants were unclear on the specifics of CIHR's vision for the College of Reviewers.

Few participants had strong opinions about the College of Reviewers. The most common feedback was that it would likely be complicated and time-consuming to get right, that there may not be a large enough base of Canadian researchers to support it, and that the quality of instructions and assessment criteria would play a large role in its success. Some also worried that CIHR would get caught up in an overly ambitious approach.

Question 5: What are international best practices in peer review that should be considered by CIHR to enhance quality and efficiency of its systems?

Several participants did not have a response to this question, or were only able to identify the names of agencies that they felt had better practices than CIHR. The most detailed recommendations included:

  • Ensure that at least three or four experts review each grant and have the opportunity to discuss applications face-to-face, and in front of a larger panel of experts, thereby increasing opportunities to correct misunderstandings, address gaps in expertise, and eliminate bias (e.g., NIH);
  • Ensure that experienced Scientific Officers and Chairs are selected, given clear instructions, and utilized to help match expertise to applications, focus discussion, address discrepancies in scoring, and hold reviewers accountable (e.g., NIH). Many participants felt that CIHR does not have policies "with teeth" or a track record of holding reviewers accountable on par with the efforts of the NIH. While many participants thought asynchronous remote review exacerbated the accountability issue, they saw the need for more clarity and policing as a broader issue;
  • Make better use of external/international reviewers: given the size of Canada's research community, many participants thought CIHR would have to use more external reviewers. One participant recommended a process from "the ERC" which involved using up to five external reviewers per application, but only releasing the external reports once the internal ones had been submitted. Some cautioned that using external reviewers presents special challenges when it comes to calibrating scores;
  • Limit the number of applications per reviewer: most participants thought 8-12 grants per reviewer would be reasonable, but questioned whether CIHR could maintain this given the ratio of applications to reviewers (a back-of-envelope illustration of this arithmetic follows this list). Improving the use of triage was identified as a promising way to make the most of reviewers' time, but this too was described as challenging to get right. Other recommendations included offering honorariums to external reviewers and continuing to reflect on the ideal application length;
  • Implement an international audit process; and
  • Consider a regional allocation process similar to the NIH's (this was recommended by one participant who felt regional disparities in funding were exacerbated by the reforms).
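
As a back-of-envelope sketch of the reviewer-load concern noted above, the required reviewer pool scales with the number of applications and the number of reviews per application. The figures below are hypothetical, not CIHR data:

    # Hypothetical feasibility check for an 8-12 application cap per reviewer.
    from math import ceil

    applications = 3000   # hypothetical competition size, not a CIHR figure
    reviews_per_app = 4   # e.g., the three-to-four reviews recommended above
    max_load = 10         # mid-range of the 8-12 cap participants suggested

    # Each application needs reviews_per_app assignments; each reviewer can
    # absorb at most max_load of them.
    reviewers_needed = ceil(applications * reviews_per_app / max_load)
    print(reviewers_needed)  # 1200 reviewers for this scenario
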
Question 6: What are the leading indicators and methods through which CIHR could evaluate the quality and efficiency of its peer review systems going forward?

Several participants did not answer this question. Those who did focused on indicators that can be tracked immediately, such as asking applicants to rate the quality of the reviews/comments they receive, as well as surveying chairs and reviewers about their satisfaction with the process. Other short-term recommendations included monitoring the discrepancy in scoring for applications, the number of applications per applicant, and the extent to which unsuccessful applications are improved, resubmitted, and funded.
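
One illustrative way to operationalize the scoring-discrepancy indicator mentioned above (a sketch, not CIHR's actual method) is to compute the spread of reviewer scores for each application and flag large spreads for follow-up:

    # Sketch of a scoring-discrepancy indicator; all scores are hypothetical.
    from statistics import stdev

    scores = {
        "APP-001": [4.1, 4.3, 3.9],   # reviewers largely agree
        "APP-002": [2.5, 4.8, 3.0],   # high spread: candidate for discussion
    }

    for app_id, s in scores.items():
        flag = " <- review discrepancy" if stdev(s) > 0.5 else ""
        print(f"{app_id}: spread = {stdev(s):.2f}{flag}")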

Recommendations that cannot be assessed immediately with respect to the recent competitions include: "bibliometric assessments of the association between funding awards and volume/impact of published output" and "systematic assessments of the association between funding awards and the number and quality of trainees."
