Design Discussion Document - Proposed Changes to CIHR's Open Suite of Programs and Enhancements to the Peer Review Process - References Cited, Annex, Footnotes

References Cited

  • [1] Demicheli, V. and Di Pietrantonj, C. (2008). Peer review for improving the quality of grant applications. The Cochrane Library, 2: 1-15.
  • [2] Anonymous. (2009). Making an Impact: A preferred framework and indicators to measure return on investment in health research. Retrieved on January 18, 2012
  • [3] Nason, E. (2008). Health and Medical Research in Canada: Observatory on health research systems. Retrieved on January 18, 2012
  • [4] Zerhouni, E., et al. (2011). International Review Panel Report 2005-2010. Retrieved on November 1, 2011
  • [5] Ioannidis, J.P.A. (2011). [Comment]. Fund people not projects. Nature, 477: 529-531.
  • [6] Azoulay, P., Graff Zivin, J.S., and Manso, G. (2009). Incentives and Creativity: Evidence from the Academic Life Sciences (NBER Working Paper No. 15466). Retrieved on November 1, 2011
  • [7] Anonymous. (2011). NIH Research Grant Program R01. Retrieved on November 4, 2011
  • [8] Anonymous. (2011). Grand Challenges in Global Health: Overview. Retrieved on November 1, 2011
  • [9] Graves, N., Barnett, A.G. and Clarke, P. (2011). Funding grant proposals for scientific research: Retrospective analysis of scores by members of grant review panel. BMJ, 343: 1-8.
  • [10] Anonymous. (2008). National Institutes of Health 2007-2008 Peer Review Self Study: Final Draft. Retrieved on November 4, 2011
  • [11] Obrecht, M., Tibelius, K., and D'Aloisio, G. (2007). Examining the value added by committee discussion in the review of applications for research awards. Research Evaluation, 16(2): 79-91.
  • [12] Vener, K.J., Feuer, E.J., and Gorelic, L. (1993). A statistical model validating triage for the peer review process: keeping the competitive applications in the review pipeline. FASEB J, 7: 1312-1319.
  • [13] Cole, S., Cole, J.R., and Simon, G.A. (1981). Chance and Consensus in Peer Review. Science, New Series, 214(4523): 881-886.
  • [14] Mayo, N.E., et al. (2006). Peering at peer review revealed high degree of chance associated with funding of grant applications. Journal of Clinical Epidemiology, 59: 842-848.
  • [15] Cicchetti, D.V. (1991). The reliability of peer review for manuscript and grant submissions: A cross-disciplinary investigation. Behavioral and Brain Sciences, 14: 119-186.
  • [16] Bell, J.I., et al. (2006). International review panel report 2000-2005. Retrieved on November 1, 2011
  • [17] Anonymous. (2011). European Peer Review Guide: Integrating Policies and Practices into Coherent Procedures. Retrieved on November 3, 2011
  • [18] Marsh, H.W., Jayasinghe, U.W. and Bond, N.W. (2008). Improving the peer-review process for grant applications: Reliability, validity, bias and generalizability. American Psychologist, 63(3): 160-168.

Annex

In addition to the references cited in the main text, CIHR has considered a wide range of evidence and opinions to inform the design of its new Open Suite of Programs. CIHR's review of various journal articles, expert opinions, existing practices and technical reports was comprehensive, but not exhaustive. The following list includes evidence that was considered to inform the new design.

Evidence Considered to Inform the New Design

Anonymous. (1996). Report of the Committee on Rating of Grant Applications. Retrieved on November 4, 2011

Anonymous. (2008). National Institutes of Health 2007-2008 Peer Review Self Study: Final Draft. Retrieved on November 4, 2011

Anonymous. (2008). Promoting Excellence in Research: An International Blue Ribbon Panel Assessment of Peer Review Practices at the Social Sciences and Humanities Research Council of Canada. Retrieved on November 8, 2011

Anonymous. (2009). AED Survey Report. Retrieved on November 4, 2011

Anonymous. (2009). Making an Impact: A preferred framework and indicators to measure return on investment in health research. Retrieved on January 18, 2012

Anonymous. (2009). Results of EPSRC Peer Review Survey. Retrieved on November 14, 2011

Anonymous. (2010). Peer Review: A Guide for Researchers. Retrieved on November 4, 2011

Anonymous. (2011). A summary of peer review for project grant applications to NHMRC 2011. Retrieved on November 8, 2011

Anonymous. (2011). Empowering the Nation through Discovery and Innovation: NSF Strategic Plan for Fiscal Years 2011-2016. Retrieved on November 4, 2011

Anonymous. (2011). European Science Foundation Survey Analysis Report on Peer Review Practices. Retrieved on November 3, 2011

Anonymous. (2011). European Peer Review Guide: Integrating Policies and Practices into Coherent Procedures. Retrieved on November 3, 2011

Anonymous. (2011). Grand Challenges in Global Health: Overview. Retrieved on November 1, 2011

Anonymous. (2011). NIH Research Grant Program R01. Retrieved on November 4, 2011

Anonymous. (2011). Principles for Review of Research Proposals in Canada. Ottawa, ON: Natural Sciences and Engineering Research Council.

Anonymous. (2011). [Hearing Charter]. The Merit Review Process: Ensuring Limited Federal Resources are Invested in the Best Science. Retrieved on November 8, 2011

Anonymous. (2011). [Editorial]. Tough love: A British research council's 'blacklisting' rule is a radical, unpopular but courageous effort to address a crisis in the peer-review system. Nature, 464(7288): 465. Retrieved on October 1, 2011

Andrade, H.B., de los Reyes López, E., and Martín, T.B. (2009). Dimensions of scientific collaboration and its contribution to the academic research groups' scientific quality. Research Evaluation, 18(4): 301-311.

Azoulay, P., Graff Zivin, J.S., and Manso, G. (2009). Incentives and Creativity: Evidence from the Academic Life Sciences (NBER Working Paper No. 15466). Retrieved on November 4, 2011

Bacchetti, P., et al. (2008). Simple, defensible sample sizes based on cost efficiency. Biometrics, 64: 577-594.

Baxt, W.G., et al. (1998). Who reviews the reviewers? Feasibility of using a fictitious manuscript to evaluate peer reviewers' performance. Annals of Emergency Medicine, 32: 310-317.

Bell, J.I., et al. (2006). International review panel report 2000-2005. Retrieved on November 1, 2011

Benda, W.G.G., and Engels, T.C.E. (2011). The predictive validity of peer review: A selective review of the judgmental forecasting qualities of peers, and the implications for innovation in science. International Journal of Forecasting, 27: 166-182.

Berg, T.D., and Erwin, C.E. (2009). Blind no more. Journal of Adolescent Health, 45: 7.

Berlin, J.A. (1997). Does blinding of readers affect the results of meta-analyses? Lancet, 350: 185-186.

Bonetta, L. (2006). Growing pains for NIH grant review. Cell, 125: 823-825.

Bornmann, L., and Daniel, H-D. (2005). Criteria used by a peer review committee for selection of research fellows. A Boolean probit analysis. International Journal of Selection and Assessment, 13: 296-303.

Bornmann, L. and Daniel, H-D. (2005). Selection of research fellowship recipients by committee peer review, reliability, fairness and predictive validity of Board of Trustees' decisions. Scientometrics, 63: 297-320.

Bornmann, L. and Daniel, H-D. (2007). Convergent validation of peer review decisions using the h index. Extent of and reasons for type I and type II errors. Journal of Informetrics, 1: 204-213.

Bornmann, L., Wallon, G. and Ledin, A. (2008). Does the committee peer review select the best applicants for funding? An investigation of the selection process for two European molecular biology organization programmes. PLoS ONE, 3: e3480.

Bornmann, L., Mutz, R. and Daniel, H-D. (2008). Latent Markov modeling applied to grant peer review. Journal of Informetrics, 2: 217-228.

Bornmann, L. and Daniel, H-D. (2008). The effectiveness of the peer review process: Inter-referee agreement and predictive validity of manuscript refereeing at Angewandte Chemie. Angewandte Chemie International Edition, 47: 7173-7178.

Bornmann, L. and Daniel, H-D. (2009). Extent of type I and type II errors in editorial decisions: A case study on Angewandte Chemie International Edition. Journal of Informetrics, 3: 348-352.

Bornmann, L., Leydesdorff, L., and van den Besselaar, P. (2010). A meta-evaluation of scientific research proposals: Different ways of comparing rejected to awarded applications. Journal of Informetrics, 4: 211-220.

Braben, D.W. (2008). [Comment]. Why Peer Review Thwarts Innovation. New Scientist, No. 2644, February 23.

Brown, T. (2004). Peer Review and the Acceptance of New Scientific Ideas: Discussion paper from a Working Party on equipping the public with an understanding of peer review. Retrieved on November 4, 2011

Cañibano, C., and Bozeman, B. (2009). Curriculum vitae method in science policy and research evaluation: The state-of-the-art. Research Evaluation, 18: 86-94.

Cañibano, C., Otamendi, J., and Andújar, I. (2009). An assessment of selection processes among candidates for public research grants: The case of the Ramón y Cajal Programme in Spain. Research Evaluation, 18: 153-161.

Cicchetti, D.V. (1991). The reliability of peer review for manuscript and grant submissions: A cross-disciplinary investigation. Behavioral and Brain Sciences, 14: 119-186.

Cook, W.D., Golany, B., Kress, M., and Penn, M. (2005). Optimal allocation of proposals to reviewers to facilitate effective ranking. Management Science, 51: 655-661.

Cole, S., Cole, J.R., and Simon, G.A. (1981). Chance and Consensus in Peer Review. Science, New Series, 214(4523): 881-886.

Costello, L.C. (2010). [Perspective]. Is NIH funding the "best science by the best scientists"? A critique of the NIH R01 research grant review policies. Academic Medicine, 85: 775-779.

Davidoff, F. (1998). Masking, blinding, and peer review: The blind leading the blinded. Annals of Internal Medicine, 128: 66-68.

Demicheli, V. and Di Pietrantonj, C. (2008). Peer review for improving the quality of grant applications. The Cochrane Library, 2: 1-15.

Dinov, I.D. (2006). [Correspondence]. Grant review: American Idol or Big Brother? Cell, 127: 662.

Donovan, C., and Butler, L. (2007). Testing novel quantitative indicators of research 'quality', esteem and 'user engagement': An economics pilot study. Research Evaluation, 16: 231-242.

Dowdy, S.F. (2006). [Correspondence]. The anonymous American Idol manuscript reviewer. Cell, 127: 662.

Dumais, S.T., and Nielsen, J. (1992). Automating the assignment of submitted manuscripts to reviewers. Retrieved on November 14, 2011

Errami, M., et al. (2007). eTBLAST: A web server to identify expert reviewers, appropriate journals and similar publications. Nucleic Acids Research, 35: W12-W15.

Fang, F.C., and Casadevall, A. (2009). [Editorial]. NIH peer review reform – change we need or lipstick on a pig? Infection and Immunity, 77: 929-932.

Geard, N. and Noble, J. (2010). Modelling academic research funding as a resource allocation problem. In: 3rd World Congress on Social Simulation, 6-9 September 2010, University of Kassel, Germany.

Gordon, R. and Poulin, B. J. (2009). Cost of the NSERC science grant peer review system exceeds the cost of giving every qualified researcher a baseline grant. Accountability in Research, 16: 13-40.

Grant, J., and Allen, L. (1999). Evaluating high risk research: An assessment of the Wellcome Trust's Sir Henry Wellcome Commemorative Awards for Innovative Research. Research Evaluation, 8: 201-204.

Graves, N., Barnett, A.G. and Clarke, P. (2011). Funding grant proposals for scientific research: Retrospective analysis of scores by members of grant review panel. BMJ, 343: 1-8.

Graves, N., Barnett, A.G. and Clarke, P. (2011). [Correspondence]. Cutting random funding decisions. Nature, 469: 299.

Harari, O. (1998). Attracting the best minds. Management Review, 87: 23-26.

Haslam, N., and Laham, S. (2009). Early-career scientific achievement and patterns of authorship: The mixed blessings of publication leadership and collaboration. Research Evaluation, 18: 405-410.

Hettich, S., and Pazzani, M.J. (2006). Mining for proposal reviewers: Lessons learned at the National Science Foundation. Retrieved on November 14, 2011

Hodgson, C. (1997). How reliable is peer review? An examination of operating grant proposals simultaneously submitted to two similar peer review systems. Journal of Clinical Epidemiology, 50: 1189-1195.

Holbrook, A. (2000). Evaluation of research sponsored by federal granting councils in Canada: the social contract. Research Evaluation, 9: 47-56.

Ioannidis, J.P.A. (2011). [Comment]. Fund people not projects. Nature, 477: 529-531.

Ismail, S., Farrands, A. and Wooding, S. (2009). Evaluating Grant Peer Review in the Health Sciences: A review of the literature. Retrieved on November 4, 2011

Jadad, A.R., et al. (1996). Assessing the quality of reports of randomized clinical trials: Is blinding necessary? Controlled Clinical Trials, 17: 1-12.

Jayasinghe, U.W., Marsh, H.W. and Bond, N. (2003). A multilevel cross-classified modelling approach to peer review of grant proposals: The effect of assessor and researcher attributes on assessor ratings. Journal of the Royal Statistical Society, 166: 279-300.

Jayasinghe, U.W., Marsh, H.W. and Bond, N. (2006). A new reader trial approach to peer review in funding research grants: An Australian experiment. Scientometrics, 69: 591-606.

Johnson, S.C., and Hauser, S.L. (2008). Peer review at National Institutes of Health: Small steps forward. Annals of Neurology, 65(4): A15-A17.

Johnson, V.E. (2008). Statistical analysis of the National Institutes of Health peer review system. PNAS, 105: 11076-11080.

Justice, A.C., et al. (1998). Does masking author identity improve peer review quality? A randomized controlled trial. JAMA, 280: 240-242.

Kaplan, D. (2007). [Point]. Statistical analysis in NIH peer review – identifying innovation. FASEB J, 21: 305-308.

Kaplan, D., Lacetera, N. and Kaplan, C. (2008). Sample size and precision in NIH peer review. PLoS ONE, 3: e2761.

Langfeldt, L. (2001). The decision-making constraints and processes of grant peer review, and their effects on the review outcome. Social Studies of Science, 31: 820-841.

Langfeldt, L. (2006). The policy challenges of peer review: managing bias, conflict of interests and interdisciplinary assessments. Research Evaluation, 15: 31-41.

Lao, N., and Cohen, W.W. (2010). Relational retrieval using a combination of path-constrained random walks. Machine Learning, 81: 53-67.

Ledford, H. (2008). Stats reveal bias in NIH grant review. In Nature News. doi:10.1038/news.2008.988.

Libby, E., and Glass, L. (2010). The calculus of committee composition. PLoS ONE, 5: e12642.

Mandviwalla, M., et al. (2008). Improving the peer review process with information technology. Decision Support Systems, 46: 29-40.

Marsh, H.W., Jayasinghe, U.W. and Bond, N.W. (2008). Improving the peer-review process for grant applications: Reliability, validity, bias and generalizability. American Psychologist, 63: 160-168.

Mayo, N.E., et al. (2006). Peering at peer review revealed high degree of chance associated with funding of grant applications. Journal of Clinical Epidemiology, 59: 842-848.

McNutt, R.A., et al. (1990). The effects of blinding on the quality of peer review. JAMA, 263: 1371-1376.

Moher, D., et al. (1998). Does quality of reports of randomised trials affect estimates of intervention efficacy reported in meta-analyses? Lancet, 352: 609-613.

Morissette, K., et al. (2011). Blinded versus unblinded assessments of risk bias in studies included in a systematic review (Review). Cochrane Database of Systematic Reviews, 9: MR000025.

Munger, K. (2006). [Correspondence]. American Idol and NIH grant review - redux. Cell, 127: 661-662.

Nason, E. (2008). Health and Medical Research in Canada: Observatory on health research systems. Retrieved on January 18, 2012

Obrecht, M., Tibelius, K., and D'Aloisio, G. (2007). Examining the value added by committee discussion in the review of applications for research awards. Research Evaluation, 16(2): 79-91.

Pagano, M. (2006). [Correspondence]. American Idol and NIH Grant Review. Cell, 126: 637-638.

Pagano, M. (2006). [Response]. More money and less time! Cell, 127: 664-665.

Porter, A.L., Roessner, D.J., and Herberger, A.E. (2008). How interdisciplinary is a body of research? Research Evaluation, 17: 273-282.

Rafols, I., Porter, A.L., and Leydesdorff, L. (2009). Science overlay maps: A new tool for research policy and library management. Retrieved on November 14, 2011

Reale, E., Barbara, A., and Costantini, A. (2007). Peer review for the evaluation of academic research: lessons from the Italian experience. Research Evaluation, 16: 216-228.

Reckling, F.J., and Fischer, C. (2010). Factors influencing approval probability in Austrian Science Fund (FWF) decision-making procedures – FWF stand-alone projects programme, 1999 to 2008. Retrieved on November 4, 2011

Rodriguez-Navarro, A. (2011). A simple index for the high-citation tail of citation distribution to quantify research performance in countries and institutions. PLoS ONE, 6: e20510.

Roebber, P.J. and Schultz, D.M. (2011). Peer review, program officers and science funding. PLoS ONE, 6: e18680.

Rothwell, P.M., and Martyn, C.N. (2000). Reproducibility of peer review in clinical neuroscience. Is agreement between reviewers any greater than would be expected by chance alone? Brain, 123: 1964-1969.

Rubin, H.R., et al. (1993). [Comment]. How reliable is peer review of scientific abstracts? Looking back at the 1991 annual meeting of the Society of General Internal Medicine. Journal of General Internal Medicine, 8: 255-258.

Schroter, S., et al. (2004). Effects of training on quality of peer review: Randomised controlled trial. BMJ, 328: 673.

Schroter, S., Groves, T. and Højgaard, L. (2010). Surveys of current status in biomedical science grant review: Funding organisations' and grant reviewers' perspectives. BMC Medicine, 8: 62.

Schuemie, M.J., and Kors, J.A. (2008). Jane: Suggesting journals, finding experts. Bioinformatics, 24: 727-728.

Spiegel, A.M. (2010). [Commentary]. New guidelines for NIH peer review: Improving the system or undermining it? Academic Medicine, 85: 746-748.

Spier, R.E. (2002). Peer review and innovation. Science and Engineering Ethics, 8: 99-108.

Tavana, M., LoPinto, F., and Smither, J.W. (2008). Examination of the similarity between a new Sigmoid function-based consensus ranking method and four commonly-used algorithms. International Journal of Operational Research, 3: 384-398.

Thorngate, W. (2002). Mining the archives: Analyses of CIHR research grant adjudications. Retrieved on November 4, 2011

van den Besselaar, P., and Leydesdorff, L. (2009). Past performance, peer review and project selection: A case study in the social and behavioural sciences. Research Evaluation, 18: 273-288.

Van Noorden, R., and Brumfiel, G. (2010). [Special Report]. Nature News: Fixing a grant system in crisis. Retrieved on November 8, 2011

van Rooyen, S., et al. (1999). Effect of open peer review on quality of reviews and on reviewers' recommendations: a randomised trial. BMJ, 318: 23-27.

Vener, K.J., Feuer, E.J., and Gorelic, L. (1993). A statistical model validating triage for the peer review process: keeping the competitive applications in the review pipeline. FASEB J, 7: 1312-1319.

Wagner, R.M., and Jordan, G.B. (2009). Moving towards impact when evaluating research programs: Introduction to a special section. Research Evaluation, 18(5): 339-342.

Ware, M. (2008). Peer Review: benefits, perceptions and alternatives. Retrieved on November 4, 2011

Zaïane, O.R., Chen, J., and Goebel, R. (2009). Mining research communities in bibliographical data. Lecture Notes in Computer Science, 5439: 59-76.

Zerhouni, E., et al. (2011). International Review Panel Report 2005-2010. Retrieved on November 1, 2011

Zulueta, M.A., and Bordons, M. (1999). A global approach to the study of teams in multidisciplinary research areas through bibliometric indicators. Research Evaluation, 8: 111-118.

Zumeta, W., and Raveling, J.S. (2003). Attracting the best and the brightest. Issues in Science and Technology, 19: 36-40.

Footnotes

  1. CIHR defines a knowledge-user as an individual who is likely to be able to use the knowledge generated through research to make informed decisions about health policies, programs and/or practices. A knowledge user's engagement in the research process may vary depending on the nature of the research and their information needs. Examples of knowledge users may include: a practitioner, policy-maker, educator, decision-maker, health care administrator, community leader, or an individual in a health charity, patient group, private sector organization or a media outlet.
  2. Integrated Knowledge Translation is defined on the CIHR website.
  3. In this context, CIHR defines a new/early career researcher as an applicant who has either never before applied to CIHR, or who completed their most recent degree five years or less before the original competition date.
  4. CIHR defines a knowledge-user as an individual who is likely to be able to use the knowledge generated through research to make informed decisions about health policies, programs and/or practices. A knowledge user's engagement in the research process may vary depending on the nature of the research and their information needs. Examples of knowledge users may include: a practitioner, policy-maker, educator, decision-maker, health care administrator, community leader, or an individual in a health charity, patient group, private sector organization or a media outlet.
  5. Integrated Knowledge Translation is defined on the CIHR website.
  6. Through its current Open Suite of Programs, CIHR funds a small number of very large grants. CIHR is still working to determine the best mechanism to support the large grants that do not fit within the current modeling parameters of the Foundation/Programmatic Research and Project Schemes.
  7. In this context, CIHR defines a new/early career researcher as an applicant who has either never before applied to CIHR, or who completed their most recent degree five years or less before the original competition date.
  8. CIHR defines a knowledge-user as an individual who is likely to be able to use the knowledge generated through research to make informed decisions about health policies, programs and/or practices. A knowledge user's engagement in the research process may vary depending on the nature of the research and their information needs. Examples of knowledge users may include: a practitioner, policy-maker, educator, decision-maker, health care administrator, community leader, or an individual in a health charity, patient group, private sector organization or a media outlet.
  9. In this context, CIHR defines partners as organizations identified by the applicants themselves that contribute cash and/or in-kind resources to specific projects of research, according to terms negotiated by the applicants.
  10. CIHR defines a knowledge-user as an individual who is likely to be able to use the knowledge generated through research to make informed decisions about health policies, programs and/or practices. A knowledge user's engagement in the research process may vary depending on the nature of the research and their information needs. Examples of knowledge users may include: a practitioner, policy-maker, educator, decision-maker, health care administrator, community leader, or an individual in a health charity, patient group, private sector organization or a media outlet.
  11. In this context, CIHR defines partners as organizations identified by the applicants themselves that contribute cash and/or in-kind resources to specific projects of research, according to terms negotiated by the applicants.