Institute of Health Services and Policy Research: Stakeholder Engagement

November 2016

Main Message

  1. The Health Services and Policy Research community believes that the virtual peer review system does not work efficiently. It places considerable responsibility on the chair without any reward or acknowledgment, reduces accountability for reviewers, and too often assigns applications to reviewers with inadequate experience and background knowledge. Incentives and/or accountability measures should be built into the system to ensure its success.
  2. Program architecture should encourage interdisciplinary research (which must be reviewed by a properly constituted interdisciplinary committee), provide funding amounts adequate to reduce the continual application cycle, and consider various means of reducing application pressure (e.g., allowing funds to be used past the expiry date of a grant).
  3. We can learn from international funding bodies, and CIHR may want to conduct a systematic review of other granting agencies' approaches.

Stakeholder Engagement Approach

IHSPR obtained input in two ways:

  1. Direct email to former Institute Advisory Board (IAB) members. All members received a personal email with all six questions and were asked for their input.
  2. A one-hour session during the Annual Community Based Primary Health Care (CBPHC) Meeting. Participants were sent the questions in advance so they could reflect on them, and an open discussion was facilitated during the meeting, allowing ten minutes per question.

Participants

IHSPR asked a total of approximately 50 individuals for input, either via direct email or during the one-hour session at the annual CBPHC meeting. These individuals included HSPR researchers (early, mid, and late career), SPOR-related individuals (those associated with the Networks and SUPPORT Units), and community stakeholders (individuals involved with data platforms and representatives from organizations such as the Canadian Medical Association, the Canadian Nurses Association, the Canadian Institute for Health Information, and the Institute for Clinical Evaluative Sciences).

Question 1: Does the design of CIHR's reforms of investigator-initiated programs and peer review processes address their original objectives?

One respondent framed their comments around the following excerpts:

International Review Panel report (2011)

"CIHR's peer review system, which must serve the diverse science and research workforce, is critical to the success of the agency. The peer review system is currently under review and improvements are underway. Nevertheless, the proliferation of committees and reviewers needs immediate attention to ensure the continued health of the process. In addition, the IRP suggests that strategic changes to the grants policy, such as awarding larger and longer grants and creating a regular and more formal process for research program portfolio planning, would enhance the efficient and effective performance of the research enterprise in Canada."

Proposed Changes to CIHR's Open Suite of Programs and Enhancements to the Peer Review Process (2012)

"Our peer review system and processes fail to adequately accommodate research across all of CIHR's pillars, new and evolving areas of research, and paradigm-shifting research. They also fail to ensure that the right expertise is engaged in reviewing the spectrum of grant applications received. At the same time, growing application pressure, and the complexity of many applications, has meant that potential peer reviewers increasingly express their reluctance to volunteer for the heavy workload."

Proliferation of committees and reviewers: It is not clear what the proliferation of reviewers means (reviewers who do a "one-off"?), but in my experience of the Foundation review, and from what I heard from colleagues, the new virtual panels have in fact greatly increased the number of "committees", since each application gets its own small panel of 4-5. Thus, I am not at all convinced that the virtual review panels are effective in reducing this proliferation. In my experience as both reviewer and applicant, the virtual nature of the committee interactions had a substantial negative impact on the quality of the reviews. Instead of two (mostly) thoughtful reviews, I received terse one-liners, and in one reviewer's case, only "A" in each response box. I would say that this is wholly to be expected. Virtual meetings remove the accountability and, yes, the social pressure of producing a thoughtful review to maintain status relative to other peer reviewers in in-person meetings (where an uninformed and disengaged reviewer loses reputation). In-person meetings are a very effective way to harness desirability bias to the funding agency's advantage, while virtual reviews essentially remove that mechanism, producing less than optimal reviews (the science on bullying in social media could also be used to caution against reviews that are unduly negative or injurious in an entirely virtual setting). That said, CIHR has returned to in-person peer review meetings this round, which should be a positive development.

In addition, many people find that in-person meetings are beneficial not only for determining which grants to fund, since they allow reviewers to have thoughtful scientific discussion and let a conversation evolve in ways that may change how a given grant is ranked, but also for the reviewers themselves. Face-to-face meetings allow reviewers to network and meet people they would not normally have the opportunity to meet. At present there is nothing rewarding the reviewer. This type of discussion and networking cannot be accomplished virtually.

Strategic changes to the grants policy (longer and larger grants): The Foundation grant scheme has certainly achieved that aim, but it may have come at the cost of other CIHR missions, as the split of funds between the Foundation and Project schemes contributed to lower success rates in the Project scheme. If persistent differentials remain among successful Foundation grantees (in terms of pillars and career stage), this will create distortion in the system. Remedial measures regarding junior researchers were taken, which should mitigate effects among this group, but the Foundation stream may remain better suited to certain pillars (fundamental biological research). Thus, depending on your administrative data analysis, this point could also suggest that the goal of addressing the observation that the peer review system failed "to adequately accommodate research across all of CIHR's pillars" may not have been reached with these latest reforms.

Decrease the complexity of applications: This was attempted in the last Project round, but it was too extreme and prejudicial to certain types of research, most notably the cutting-edge kind that the reforms were meant to promote. It is very difficult to express complex ideas in a tightly controlled, text-box-structured framework. The streamlining of the budget was, however, a positive change, except for the global budget approach spread equally over the tenure of the grant, which can only lead to grant inflation: many grants require more substantial investments in the first year than in later years, and the only way to get enough operating funds for the first year is to "pad" the remaining years with equivalent spending.

Additionally, the Foundation Scheme will not achieve the goal of getting people off the treadmill of continually applying for grants, because of the overall lack of funding. Many people who applied ended up receiving less than they asked for, and the amounts offered will not sustain a program of research for seven years without additional funding.

Question 2: Do the changes in program architecture and peer review allow CIHR to address the challenges posed by the breadth of its mandate, the evolving nature of science, and the growth of interdisciplinary research?

The evolving nature of science and the growth of interdisciplinary research work in large part through training. The loss of the training grants in favor of the Foundation program – which is primarily investigator-driven, not idea-driven – is not a positive development in this regard.

The loss of some of the special programs, notably the Partnerships for Health System Improvement (PHSI) competition, is particularly troublesome. This was a program that facilitated the building of partnerships between researchers and decision makers, with matching funding required. The work was typically of direct relevance to system improvement, with integrated knowledge translation, and the program was highly competitive, ensuring high-quality research.

The importance of interdisciplinary research is now widely recognized, especially if we are to tackle some of the more challenging systemic weaknesses in Canadian health care. Anything that inhibits our ability to build effective interdisciplinary teams – for example, the loss of formal team grants and the focus on individual investigators (through the Foundation scheme) – might be problematic.

One other issue is the end date of a grant. Researchers tend to feel forced to spend all their money by this date; with extra time, they could use the money more thoughtfully. What is the reason for the expiry date, and can it actually be changed? Removing it would lessen the pressure to continually apply (at least somewhat).

Question 3: What challenges in adjudication of applications for funding have been identified by public funding agencies internationally and in the literature on peer review and how do CIHR's reforms address these?

Ensure that the right expertise is engaged in reviewing the spectrum of grant applications received: It is not at all clear that the current system with five reviewers, or even the College of Reviewers, will reach that goal. Many international agencies (ANR, ERA-NET, Horizon 2020) rely solely on panel members for peer review. Other agencies, such as SSHRC, request reports from external expert evaluators, which are then considered by the grant panel in its assessment.

Question 4: Are the mechanisms set up by CIHR, including but not limited to the College of Reviewers, appropriate and sufficient to ensure peer review quality and impacts?

Not in my opinion. Establishing the College of Reviewers is a positive move but, in itself, does not solve the fundamental problem: poor-quality review. Matching reviewers to applications, in terms of knowledge and experience, is essential. Addressing the problem of institutional recognition for peer review activity is also critical – junior faculty focus on activities that will enhance their tenure package.

I am also concerned about the reliance on ranking of applications to inform the final judgements. I would like to see a thorough investigation of the robustness of this approach, especially since each reviewer ranks a different subset of applications.
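This concern lends itself to simulation. As a minimal sketch (in Python; every parameter is an illustrative assumption, not CIHR's actual procedure), one could generate applications with known quality, have each simulated reviewer rank only a random subset with some judgement noise, aggregate the partial rankings, and measure how often the truly strongest applications land in the funded group:

    # Toy simulation of rank aggregation when each reviewer ranks only a
    # subset of applications. All parameters are illustrative assumptions.
    import random
    import statistics

    random.seed(1)

    N_APPS = 100          # applications in the competition (assumed)
    N_REVIEWERS = 40      # reviewers available (assumed)
    APPS_PER_REVIEWER = 10
    NOISE = 0.15          # reviewer judgement noise (assumed)
    TOP_K = 20            # number of applications funded (assumed)

    true_quality = [random.random() for _ in range(N_APPS)]

    # Each reviewer ranks a random subset by noisy perceived quality.
    percentiles = {app: [] for app in range(N_APPS)}
    for _ in range(N_REVIEWERS):
        subset = random.sample(range(N_APPS), APPS_PER_REVIEWER)
        ranked = sorted(subset, key=lambda a: true_quality[a] + random.gauss(0, NOISE))
        for rank, app in enumerate(ranked):
            # Convert within-subset rank to a percentile so that rankings
            # over different subsets are (naively) comparable.
            percentiles[app].append(rank / (APPS_PER_REVIEWER - 1))

    # Aggregate by mean percentile across the reviewers who saw each app.
    scored = [a for a in range(N_APPS) if percentiles[a]]
    agg = {a: statistics.mean(percentiles[a]) for a in scored}
    funded = sorted(scored, key=lambda a: agg[a], reverse=True)[:TOP_K]

    truly_best = sorted(range(N_APPS), key=lambda a: true_quality[a], reverse=True)[:TOP_K]
    overlap = len(set(funded) & set(truly_best)) / TOP_K
    print(f"Fraction of the true top {TOP_K} recovered: {overlap:.2f}")

Varying the noise level, the subset size, and the number of reviewers per application would show how sensitive the final funding list is to the fact that reviewers rank non-overlapping sets of applications.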

There is a lack of reviewer accountability: reviewers wrote comments online and did not show up to defend them. Grant recipients should be accountable to CIHR and should have to review for a certain number of competitions. It would also be useful to know the volunteer rate among reviewers.

The College is a good concept, but how involved is it with reviewers? Many people indicated they reviewed early in their careers because it was good for their CVs. Now junior people are reviewing and they don't have the expertise; more senior people need to be brought back into the College to teach the younger ones. Perhaps allowing people to get "certified" by the College would be beneficial: if they could then use this certification at their university as part of their annual review, it would actually mean something. Another option would be to require everyone who receives a CIHR grant to review for a certain number of years or competitions.

The virtual chair experience was not successful for many people. Without the stronger committee structure of the past, matching reviewers to applications is more difficult: two-thirds of reviewers are well matched, while the others can be very far off. In health services and policy research, reviewers sometimes do not understand the basics of how the research is done; there could definitely be better matching. In addition, facilitating 45 reviews is extremely difficult; the role of the chair is a lot of work. Chairs should be empowered and perhaps rewarded for their work.

One way to reduce application volume is to have universities and research institutes screen applications through an internal review process, vetting them before submission. Sick Kids has the highest success rate in the country: all grants that go in must be signed off by the VP Research, who will only sign off once the application has gone through the internal review process. We should start to socialize the idea that research institutes and universities do this before a grant goes in. It should be noted that this may be difficult at small universities, where the people who would have to sign off are the same people applying for the grants.

Another interesting option would be to include patients in review panels to help decide how and what we fund. Should we be looking at what stakeholders other than researchers think?

CIHR should undertake a systematic survey of best practices in peer review. This would help guide evidence-based decisions on how best to move forward.

Question 5: What are international best practices in peer review that should be considered by CIHR to enhance quality and efficiency of its systems?

Seeking external expert reports on grants (though I do not know whether this is considered an international best practice).

One consideration is conditional approvals: where a grant is strong but has some minor and remediable weaknesses, instead of rejection and resubmission (often then reviewed by a different set of reviewers), investigators would have the opportunity to address the problems, with funding to follow. This could improve the efficiency of the process on both the applicant and reviewer sides.

Having another reviewer look over someone's review would help to determine whether the review is appropriate. Reviews that show a discrepancy can be used for quality monitoring. There is an established science of quality monitoring that we can learn from. In addition, perhaps applicants should be allowed to provide feedback to reviewers.
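As an illustration of what discrepancy-based monitoring could look like (a hypothetical Python sketch; the scores, scale, and threshold are invented for the example, not an existing CIHR tool):

    # Sketch of discrepancy-based quality monitoring of reviews.
    # Scores, scale, and threshold are hypothetical.
    import statistics

    # Reviewer scores per application on an assumed 0-5 scale.
    scores = {
        "APP-001": [4.2, 4.0, 4.3],
        "APP-002": [4.5, 2.1, 4.4],   # one outlying review
        "APP-003": [3.0, 3.4, 2.9],
    }

    SPREAD_THRESHOLD = 1.0  # assumed trigger for a second look

    def flag_discrepant(scores, threshold):
        """Return applications whose reviewer scores diverge enough
        to warrant a secondary read of the written reviews."""
        flagged = {}
        for app, vals in scores.items():
            spread = max(vals) - min(vals)
            if spread >= threshold:
                median = statistics.median(vals)
                flagged[app] = {
                    "spread": round(spread, 2),
                    "median": median,
                    "outlier": max(vals, key=lambda v: abs(v - median)),
                }
        return flagged

    print(flag_discrepant(scores, SPREAD_THRESHOLD))
    # {'APP-002': {'spread': 2.4, 'median': 4.4, 'outlier': 2.1}}

Flagged applications could then be routed to the chair or an additional reviewer, turning score discrepancies into a routine quality signal rather than an anomaly noticed by chance.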

Question 6: What are the leading indicators and methods through which CIHR could evaluate the quality and efficiency of its peer review systems going forward?

Satisfaction surveys of applicants and reviewers

Analysis of variation to identify any concerning trends in funding success, applications, etc.

Reviewers should receive statistical measures about their reviews. For example: "You indicated that X applications should be funded, compared with other reviewers, who said Y should be funded."
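A hedged sketch of what such feedback might look like (the reviewer names and counts are invented; this is one possible statistic, not a CIHR report format):

    # Sketch of per-reviewer feedback: each reviewer's implied "fund
    # rate" compared against the panel average. Data are hypothetical.
    import statistics

    # (applications rated fundable, applications reviewed) per reviewer
    fund_votes = {
        "Reviewer A": (6, 10),
        "Reviewer B": (2, 10),
        "Reviewer C": (5, 12),
    }

    rates = {r: funded / total for r, (funded, total) in fund_votes.items()}
    panel_mean = statistics.mean(rates.values())

    for reviewer, rate in rates.items():
        delta = rate - panel_mean
        print(f"{reviewer}: rated {rate:.0%} of applications fundable "
              f"({delta:+.0%} vs. the panel average of {panel_mean:.0%})")

Reported alongside score distributions and funding outcomes, this kind of statistic would let reviewers see how their calibration compares with their peers'.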
