Evaluation strategy and information sources
Midterm Evaluation of the Pandemic Preparedness Strategic Research Initiative
2.1 Evaluation issues and questions
Table 3 lists the evaluation issues and questions addressed in this evaluation. These were validated by the PPSRI Midterm Evaluation Steering Committee. As this was a midterm evaluation, all questions had a formative intent, aiming to identify possible improvements and alternatives.
2.2 Information sources
The following data sources were used to address the evaluation questions. Appendix 2 provides a matrix summarizing the indicators and data sources for each evaluation question.
Review of documentation relevant to program design and delivery: Documents relevant to program design and delivery were reviewed systematically, searching for information relevant to the evaluation questions and the program logic model. The documentation reviewed included:
- background material on the AI/PI and the PPSRI including the reports on planning and consultation leading up to program design;
- minutes and decision records of meetings of the Task Group;
- notes from peer review sessions, where these were available.
Appendix 3 contains a list of documents reviewed.
Review of administrative data on program outputs: III provided administrative databases containing data on applicants, applications and the results of all PPSRI funding opportunities to date. These data, provided in Excel format, were transferred to SPSS, and indicators relevant to program uptake and funding-opportunity results were extracted.
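The uptake-indicator extraction described above could be sketched as follows. This is an illustrative sketch only: the record fields and funding-opportunity names are invented and do not reflect the actual III database schema.

```python
# Hypothetical sketch of extracting program-uptake indicators from
# administrative application records. Field names and records are
# illustrative, not the actual III database schema.
from collections import defaultdict

applications = [
    {"opportunity": "Team Grants 2007", "applicant": "A", "funded": True},
    {"opportunity": "Team Grants 2007", "applicant": "B", "funded": False},
    {"opportunity": "Catalyst Grants 2008", "applicant": "C", "funded": True},
    {"opportunity": "Catalyst Grants 2008", "applicant": "A", "funded": True},
]

def uptake_indicators(apps):
    """Per funding opportunity: number of applications and success rate."""
    counts = defaultdict(lambda: {"applications": 0, "funded": 0})
    for app in apps:
        row = counts[app["opportunity"]]
        row["applications"] += 1
        row["funded"] += app["funded"]  # True counts as 1
    return {
        opp: {**row, "success_rate": row["funded"] / row["applications"]}
        for opp, row in counts.items()
    }

indicators = uptake_indicators(applications)
```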
Key informant interviews: A main source of evaluation information was key informant interviews with 22 PPSRI stakeholders. These key informants, identified by the consultants with the help of III and input from the Evaluation Steering Committee, included key partners and participants in program design, implementation and funding, as well as researchers and peer reviewers. Although no stakeholder declined to participate, four of those approached did not reply to the invitation or could not be reached. Table 4 summarizes the number of interviewees by category.
Seven interviews were conducted in-person at the Canadian Pandemic Preparedness Meeting: From Discovery to Frontlines, held November 6-8, 2008 in Winnipeg, and the remainder were conducted by telephone between December 2008 and March 2009. The interviews ranged from 20 to 90 minutes in length, and were conducted in English or French, using a semi-structured interview guide (Appendix 4), addressing all evaluation issues and questions identified above. The majority of the interviews were recorded. Analysis was based on interview notes, with reference to the recordings for accuracy.
Survey of researchers: non-applicants and successful and unsuccessful applicants. The evaluation data sources also included a brief web survey of the research community for the PPSRI. This included nominated principal investigators (NPIs), principal investigators and co-applicants who applied to any of the PPSRI initiatives, whether successfully or unsuccessfully. For principal investigators and co-applicants, only those who would have been eligible for CIHR funding as an NPI were included, i.e., trainees and others working in non-eligible organizations were excluded. Also surveyed were researchers who did not apply to the PPSRI although their work is relevant to pandemic preparedness. This group included NPIs who received grants in the last eight years from any other CIHR grant program whose application keywords, project title, title of funding opportunity and abstract review indicated that pandemic influenza and/or influenza might have been relevant to their research work.[1] III provided lists of distinct researchers in Excel format to the consultants, including the following fields: role (NPI, co-investigator or co-applicant), funding reference number, last name, first name, preferred language, current primary institution, and primary e-mail address. The consultants merged these lists to create a final list of distinct researchers.
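The merge into a final list of distinct researchers could be sketched as below. The field names follow those listed above; the records, the choice of primary e-mail address as the deduplication key, and the keep-first rule are all assumptions for illustration, not the consultants' documented procedure.

```python
# Illustrative sketch of merging role-specific researcher lists into one
# list of distinct researchers. Records are invented; keying on the primary
# e-mail address and keeping the first record seen are assumptions.
def merge_researcher_lists(*lists):
    """Merge lists keyed on primary e-mail address, keeping the first
    record seen for each researcher (e.g. the NPI record when the NPI
    list is passed first)."""
    merged = {}
    for records in lists:
        for rec in records:
            key = rec["primary_email"].lower()
            merged.setdefault(key, rec)
    return list(merged.values())

npis = [
    {"role": "NPI", "last_name": "Tremblay", "primary_email": "a@uni.ca"},
]
coapplicants = [
    {"role": "co-applicant", "last_name": "Tremblay", "primary_email": "a@uni.ca"},
    {"role": "co-applicant", "last_name": "Singh", "primary_email": "b@uni.ca"},
]

final_list = merge_researcher_lists(npis, coapplicants)
```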
The survey collected information on a subset of the evaluation questions (see Appendix 4). It was adapted for each of the four subsamples: researchers responded to the version of the survey that represented their closest association with the program.[2] The survey was pretested in English and French with two III pandemic researchers prior to its launch, and one minor adjustment was made.
The survey invitation was emailed by the consultants to respondents, with a header flagging that it was sent on behalf of III. Each invitation contained a unique URL. The survey was available in English and French and took about 15 minutes to complete. Two reminders were sent, at one-week intervals. The survey was open for data collection for a total of three weeks in February 2009.
From a total population of 486 researchers, 157 responses were received. Seventeen email addresses were no longer valid, four invitees were not available, and two stated they did not know enough to respond; excluding these 23 leaves 463 eligible invitees and an overall response rate of 34% (157/463). The table below shows the numbers of researchers who were invited and who responded in each of the categories. As might be expected, response rates were highest among funded applicants, especially NPIs.
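The response-rate arithmetic above, spelled out with the figures given in the text (486 invitees; 17 invalid addresses, 4 unavailable, 2 not knowledgeable enough; 157 responses):

```python
# Response-rate calculation using the figures reported in the text.
invited = 486
excluded = 17 + 4 + 2          # invalid addresses, unavailable, not aware enough
eligible = invited - excluded  # 463 eligible invitees
responses = 157
response_rate = responses / eligible  # about 0.34
```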
Table 6 shows survey respondents' characteristics. These data suggest that the sample can be considered representative of the disciplines and settings of researchers involved in influenza-related research, insofar as there are no striking absences of respondents in the expected categories.
In terms of distribution across the PPSRI priority research areas, a majority of applicants stated that their work is relevant to prevention and treatment (61%) and vaccines and immunization (52%) (62 respondents checked both these categories). Ethical, legal or social aspects were relevant to 39% of respondents and virus biology and diagnostics were relevant to 35%.
The evaluation data were analyzed using standard quantitative and qualitative techniques. Survey data were received and stored on a secure server and transferred to SPSS, in which descriptive analyses were conducted comparing non-applicants, successful applicants and unsuccessful applicants, as well as research domains. Qualitative data from the key informant interviews, integrating material from the document review, were analyzed using matrix techniques: respondent types were crossed with evaluation questions, interview material was entered into summary matrices, and emergent patterns were synthesized across and within rows and columns.
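The matrix technique described above can be illustrated as a respondent-type by evaluation-question grid that is read across rows and down columns. The respondent types, question labels and notes below are invented for illustration, not taken from the evaluation's actual matrices.

```python
# Minimal sketch of a qualitative summary matrix: respondent types crossed
# with evaluation questions. All entries here are invented examples.
from collections import defaultdict

matrix = defaultdict(list)  # (respondent_type, question) -> list of notes

def file_note(respondent_type, question, note):
    matrix[(respondent_type, question)].append(note)

file_note("researcher", "Q1 relevance", "priorities match field needs")
file_note("peer reviewer", "Q1 relevance", "priorities seen as too broad")
file_note("researcher", "Q2 design", "application timelines too short")

def row(respondent_type):
    """All material for one respondent type, across questions (a row)."""
    return {q: notes for (rt, q), notes in matrix.items() if rt == respondent_type}

def column(question):
    """All material for one question, across respondent types (a column)."""
    return {rt: notes for (rt, q), notes in matrix.items() if q == question}
```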
Note that of the 157 survey respondents, 135 (86%) had heard of the PPSRI. The questionnaire was constructed so that only those who had heard of the PPSRI were asked questions about its features. Those who had not heard of it – a group that included some researchers who applied to it – were asked only questions about strategic research funding more generally, and provided background information. Thus, most of the survey data are from respondents who were aware of the PPSRI. In addition, in the presentation of the survey results, 'don't know' responses are excluded from the denominator, so that percentages reflect only responses from individuals knowledgeable enough to respond to the particular question. In most cases, 'don't know' response frequencies were less than 5% of the total and so would not affect interpretation. However, for a few of the questions, the proportion of 'don't know' responses was quite high: these are mentioned in the corresponding text.
[1] As a test of the extent to which these CIHR applicants completely cover the intended population of researchers who could make a research contribution to pandemic preparedness within the priority research domains, the obtained list was compared to the available list of British Columbia and Quebec researchers in the Interprovincial Directory of Researchers database, identified using the keywords 'influenza' and 'pandemic'. Of the 20 researchers thus identified in the Directory, two were not among the CIHR or PPSRI applicants. This suggests that our list reasonably, but not totally, captured the relevant population.
[2] If a respondent had submitted successful applications both as an NPI and as a co-investigator or co-applicant, they were provided with the NPI version; if they had submitted both a successful and an unsuccessful application, they were provided with the version for successful applicants.