This Springer Brief provides theory, practical guidance, and support
tools to help designers create complex, valid assessment tasks for
hard-to-measure, yet crucial, science education standards.
Understanding, exploring, and interacting with the world through models
characterizes science in all its branches and at all levels of
education. Model-based reasoning is central to science education and
thus to science assessment. Interest in developing and using models has
grown with the release of the Next Generation Science Standards, which
identify developing and using models as one of the eight practices of science and
engineering. However, the interactive, complex, and often
technology-based tasks that are needed to assess model-based reasoning
in its fullest forms are difficult to develop.
Building on research in assessment, science education, and learning
science, this Brief describes a suite of design patterns that can help
assessment designers, researchers, and teachers create tasks for
assessing aspects of model-based reasoning: Model Formation, Model Use,
Model Elaboration, Model Articulation, Model Evaluation, Model Revision,
and Model-Based Inquiry. Each design pattern lays out considerations
concerning targeted knowledge and ways of capturing and evaluating
students' work. These design patterns are available at http://design-drk.padi.sri.com/padi/do/NodeAction?state=listNodes&NODE_TYPE=PARADIGM_TYPE.
The ideas are illustrated with examples from existing assessments and
the research literature.