Concept Inventories as Diagnostic Instruments for Improving Pedagogy

Date: Monday, December 15, 2008 - 11:06am

UCSB COMPUTER SCIENCE DEPARTMENT PRESENTS:
WEDNESDAY, JANUARY 7, 2009
3:30 – 4:30 PM
Computer Science Conference Room, Harold Frank Hall Rm. 1132

HOST: PHILLIP CONRAD

SPEAKER: David Klappholz
Associate Professor, Computer Science
Stevens Institute of Technology

Title: Concept Inventories as Diagnostic Instruments for Improving Pedagogy

Abstract:

A bit over twenty years ago, David Hestenes, a physics professor at
Arizona State University, discovered that students in introductory
physics courses were able to solve quantitative and algorithmic problems
in Newtonian mechanics without conceptually mastering Newton’s notion of
force. (Actually, it was Hestenes’ graduate student who made the
discovery; he and Hestenes then verified its validity.) Hestenes and a
number of colleagues then developed a new type of diagnostic instrument,
the Force Concept Inventory /Test/ (FCI), which none of us would likely
have heard of had a famous physics researcher at Harvard not learned of
Hestenes’ discovery and verified that it was as true of Harvard students
as of students at Arizona State. (For some reason the word “test” was
dropped from Force Concept Inventory /Test/, so that “concept inventory”
sounds like a list of some kind rather than a survey instrument.)

Hestenes’ original intent in developing the FCI was: (i) to restrict its
scope to a single fundamental concept, Newton’s concept of force; and
(ii) for it to serve as a diagnostic instrument that would help educators
understand students’ mental mismodelings of aspects of Newton’s concept
of force, so that educators could correct ineffective pedagogy; it was
never meant as a grading instrument. Once the rest of the Science,
Technology, Engineering, and Mathematics (STEM) community decided to
start constructing their own concept inventories (CIs), they almost
universally: (i) constructed individual CIs that cover all the concepts
in one or more introductory courses; and (ii) turned them into grading
instruments, essentially “conceptual final exams.” In this flurry of
activity (more than 30 CIs have been constructed), the notion of a CI
used for diagnostic purposes to inform improved pedagogy, and the
statistical validation necessary if a CI is to serve that purpose, have
been lost.

In this talk we will discuss the history of CIs, explain in detail
what they were originally meant to be, and show how they can be
validated for that purpose. Attendees interested in pursuing the
development of diagnostic CIs will be offered an opportunity to join
Dr. Klappholz and his educational psychology collaborator, Dr. Steven J.
Condly, in a planned NSF proposal on this topic.

Bio:

Dr. David Klappholz is an associate professor of computer science at
Stevens Institute of Technology, where his specialty is software
engineering. In addition to his interest in empirical software
engineering research, Prof. Klappholz works, under NSF funding, with an
educational psychologist on issues relating to engineering education
pedagogy. He is also a member of a Stevens-based, DoD-supported team
that is crafting a reference-standard M.S. curriculum in software
engineering, a curriculum with a heavy systems engineering slant. In a
previous incarnation, Prof. Klappholz did research, supported by NSF,
IBM Research, DoE, and others, on parallel machine architecture,
automatic code parallelization, compiler optimizations, and, in his
professional infancy, on natural language understanding and translation.