Educational Research and Evaluation

At the Bok Center we view evaluation as an integral part of instruction, curriculum development, and university strategic planning. Anchored within a Center for Teaching and Learning, our work is deeply connected to the practice of teaching and scholarship, and we are committed to:

1) Collaborating Across the Disciplines

Universities have generally arranged themselves into units that reflect academic disciplines, their attendant cultures, and their specialized ways of knowing. While a great deal of research has focused on learning and instruction, only a limited attempt has been made to understand how disciplinary context affects how students learn, how and what we teach, and what and how we measure. Taking a one-size-fits-all approach to evaluation, without providing discipline-specific translation and appropriate modification of methods, is a barrier to faculty adoption of assessment and evaluation in the instructional process. At Bok we consciously try to root the practice of evaluation within the disciplines. This begins with familiarizing ourselves with discipline-specific distinctions in philosophical approach, intended learning goals, and content. Most importantly, at Bok the design of an evaluation process for a particular course, program, or curriculum is driven by answering faculty’s pedagogical questions in a methodologically robust manner.

2) Bringing Research to Practice

Traditional top-down models for developing and disseminating educational research are limited: such studies often do not address practitioner needs, or their findings are not translated in ways that are meaningful for users. And while many universities have established assessment offices or staff dedicated to facilitating assessment and evaluation, faculty often view these units as administrative because they are disconnected from the practice of teaching and scholarship. Here at the Bok Center, we offer a new educational space where research, evaluation, and practice intersect. In expanding our role to incorporate research and evaluation, the Center is uniquely positioned to serve as a model of how to bridge the gap between research and practice and to disseminate knowledge. Faculty engage in direct dialogue with expert teaching staff, educational researchers, and evaluation specialists to collaborate and translate findings into best practices. We also offer professional development opportunities that serve as a venue for disseminating the most current knowledge to incoming faculty members and graduate students, the future generation of educators. Such a collaborative model advances educational research and improves undergraduate education at Harvard.

3) A Highly Centralized Approach

Evaluation works best in an office that is independent, impartial, and insulated from the influence of the programs being evaluated; this arrangement avoids conflicts of interest. The Bok Center reports to the Dean of FAS, which allows for more objective evaluation of programs in the College. Evaluation also works best when it is highly centralized. A decentralized approach carries a number of liabilities, including: 1) cost escalation, as few economies of scale can be realized and each lesson learned must be repeated in every case; 2) the inability to ensure uniform and comprehensive implementation across the FAS; 3) an absence of coordination with reporting requirements; and 4) the inability to focus the best expertise on evaluation issues. Finally, successful implementation of evaluation in the academic sector should be tied to the scholarship of teaching and learning; this cannot occur effectively under the reporting structure of a line administrator or administrative office. If line administrators become too directly involved in setting academic policy, many groups of faculty will quickly disengage from evaluation. At the Bok Center, our focus is on teaching and learning, our Center is overseen by a faculty director, and our evaluation team works closely with pedagogy experts in the disciplines and at the Graduate School of Education.

4) Building Knowledge Networks

In many universities, graduate students constitute the largest teaching resource. Incorporating integrative evaluation training into their professional development can help address local challenges in fostering more self-reflective instructional practices within Harvard departments and programs. This training can also have a broader impact by enhancing the teaching practice of the future professoriate. The Bok Center is committed to the professional development of Harvard graduate students as effective college and university teachers. Currently, the Center offers training that prepares graduate students to address the issues they will face in their teaching. We have expanded our professional development and training role to enhance research skills among graduate students and others who want to produce and use evaluation and educational research in their teaching, grant work, or research. We provide professional development opportunities including training courses and workshops (in both online, open-source and live formats) and a Bok Center Fellows program, in which graduates of the GSE gain practical experience working closely with faculty and administration to evaluate their courses and programs.

What We Do in the Bok Center

Our team’s primary objectives include:

• Communicating the value and aims of evaluation to our faculty and the university community in ways that are meaningful to them.

• Partnering with faculty and academic and co-curricular programs to help them build and integrate systematic processes of self-reflection into their teaching and programming.

• Enabling continuous learning and development by organizing and promoting assessment and evaluation materials, workshops, and internship opportunities for both faculty and graduate students.

How We Conduct Our Work

Our work is guided by two principles: impartiality and independence. Impartiality is the absence of bias in the methodology and approach of the evaluation, and in considering and presenting the achievements and challenges of the program. Independence is freedom from conflict of interest and the ability to be transparent in reporting results. Independence and impartiality are necessary at all stages of the evaluation process, from designing evaluation instruments and protocols to presenting findings.

However, evaluation does not occur in a vacuum. A good evaluation requires cooperation between program staff and evaluators in order to generate the most valid data and interpretations to answer the questions set forth by the program. In this cooperation, each party brings different strengths to the evaluation process. The program staff bring real questions about their activities, resources, and needs; content expertise about the program; and the context for the evaluation. The Derek Bok Research and Evaluation team offers objectivity, methodological expertise, confidentiality, and constructive, relevant recommendations.

Program staff will work with the Bok team to articulate program goals and frame evaluation questions. They will have the opportunity to review draft evaluation plans, instruments, and reports in order to correct factual errors and to comment on the findings and recommendations. This review is conducted in a way that maintains the independence and impartiality of the evaluation team, so that the resulting evaluation is as objective as possible. The Bok Center Research and Evaluation team will lead the evaluation and will explore with the program staff the strengths and shortcomings of the various evaluation questions and of the approaches that might be used to answer them. Good question design is a critical dimension of evaluation quality. To maintain objectivity and ensure scale validity, item development is the responsibility of the evaluation team, not the program staff. We work closely with program staff to ensure that valid and reliable items are created to match the program’s evaluation questions; this is an iterative process. Once the evaluation’s substantive content has been reviewed by the program staff, the Bok Center evaluation team will finalize all instruments, collect and analyze data, and generate reports.

As evaluators, we commit to the guiding principles for the professional practice of evaluation developed by the American Evaluation Association:

  • Systematic Inquiry: Evaluators conduct systematic, data-based inquiries about whatever is being evaluated.
  • Competence: Evaluators provide competent performance to stakeholders.
  • Integrity/Honesty: Evaluators ensure the honesty and integrity of the entire evaluation process.
  • Respect for People: Evaluators respect the security, dignity and self-worth of the respondents, program participants, clients, and other stakeholders with whom they interact.
  • Responsibilities for General and Public Welfare: Evaluators articulate and take into account the diversity of interests and values that may be related to the general and public welfare.

Authorship:
While most of the research conducted by Bok Research and Evaluation (R&E) is intended for internal use within the university, occasionally one of our clients will have the opportunity to publish results that are directly related to the work R&E conducted for them.
If a client seeks to publish in a journal any aspect of work to which R&E has substantially contributed, we have the following expectations:

1) Depending on the nature of the data to be collected, IRB approval may be necessary. If Bok R&E will be substantially involved in data collection, analysis, and/or reporting, then the Director of R&E should be listed as at least a co-PI on the IRB project protocol.

2) If data collected from the student body in FAS is to be published, approval must be received from the Dean of FAS.

3) If Bok R&E contributes significantly to the publication, our expectation is that the relevant team members will be authors on the paper. Our criteria for authorship are based on the “Uniform Requirements for Manuscripts Submitted to Biomedical Journals,” developed by the ICMJE and last updated in December 2016; many journals voluntarily follow these recommendations. If all of the following criteria are met, we expect that the contributing Bok R&E team member(s) will be included as authors on the publication:

(i) “Substantial contributions to conception and design, or acquisition of data, or analysis and interpretation of data;
(ii) Drafting the information product or revising it critically for important intellectual content; and
(iii) Final approval of the version to be published.”