You will find lots of talk in the literature on teaching and learning about the importance of articulating learning goals and learning objectives for your students. Indeed, at some universities, faculty are required to submit formal learning objectives when proposing a course and/or to include them in the course catalogue and on the syllabus.
Though we agree that learning goals and objectives are important, at the Bok Center we tend to talk about them in slightly different terms when consulting with instructors. There are two reasons for this:
- First, the terms themselves can be unnecessarily confusing. In common parlance, "goal" and "objective" are synonyms; what, then, is the difference between a learning goal and a learning objective?
- Second, these terms can have the ring of bureaucracy about them, particularly to instructors in more humanities-oriented disciplines who may well reject the notion that it is possible to determine in advance where a student's encounter with a text or object may or may not lead. Is the requirement to articulate learning objectives not just part of a plan to reduce the ineffable process of intellectual exploration to something crudely quantifiable?
The first of these two concerns is not, in fact, that severe. The distinction between "learning goals" and "learning objectives" is actually fairly commonsensical: in this context, goals generally refer to the higher-order ambitions you have for your students, while objectives are the specific, measurable competencies that you would assess in order to determine whether your goals have been met. (To give one example: if your goal were to teach students how to critique theories of state formation, the corresponding objective might be: "By the end of this course, students should be able to write an essay that explains one major theory of state formation and makes an argument about how well it describes the historical experience of a relevant country.")
The second concern is perhaps best countered by acknowledging that while your goals and objectives might not be entirely quantifiable, this is hardly an excuse for not at least engaging in the process of thinking them through. Whether you are a graduate student teaching for the first time or a senior faculty member with many years of experience behind you, every course you teach presents some mixture of freedom and constraint. Many of the things that we teach, and the ways that we relate to our students, are predetermined by the calendars and status hierarchies of our universities. As a graduate student, for example, you may be free to decide how you will organize the individual discussion sections or labs in the course you are teaching, but most likely not the syllabus itself, which will have been set by your course head. Likewise, as a faculty member, you may be given wide latitude to choose the subjects covered in your courses, the readings you assign, and the terms of your students' assignments, but you will almost certainly have to factor your department's curriculum or the needs of its graduate program into your decisions.

Our disciplinary identities impinge upon our teaching still further. It is hard to go against the grain and choose not to assign a term paper in a writing-intensive discipline, or to engage students in a creative art project in a heavily quantitative STEM field, even when we suspect that an unconventional assignment might better test our students' mastery. Whether we recognize it or not, we all come to the act of teaching with at least a few stubborn preconceptions about what we and our students are "supposed to" be doing. Pretending that these preconceptions don't exist leads us to over-naturalize them and, in turn, to forget that our students are unlikely to share all of them.