From CoolFutures

More than a decade of science education research has shown that making and defending claims with supporting evidence are hallmarks of sound scientific understanding (Berland & McNeill, 2010; Kuhn, 2010; National Research Council, 2012a; Osborne, 2010). Scientific argumentation involves both scientific reasoning to draw inferences from initially available information (Holyoak & Morrison, 2005), and critical thinking to sort out evidence for making conditional claims (Yeh, 2001).

Most science education research on the construction of written scientific arguments has focused on the scientific reasoning necessary to coordinate evidence with scientific knowledge. The critical reasoning that embodies uncertainties, expressed to reflect argument strength, has been largely neglected in work on scientific argumentation (Duschl & Osborne, 2002; NRC, 2012a, b). Helping students evaluate sources of uncertainty in their claims and their evidence is therefore important for developing scientific argumentation practice. Scientific uncertainty is related to the conceptual and methodological limitations imposed by the specific scientific inquiry method(s) applied to an investigation (Allchin, 2012). Helping students understand the limitations of the data and models from which they draw conclusions, and to explore their own personal uncertainties, is thus a key element to incorporate into developing scientific argumentation practices.

While there is considerable research on argumentation with data (e.g., Osborne, 2010) and some, though much less, on argumentation with models (e.g., Pallant & Lee, 2015), we notice that the use of computationally supported argumentation in education in general, and in EfS in particular, is still underdeveloped. This is despite considerable progress in computational argumentation (Scheuer et al., 2010), such as argumentation mapping. To our knowledge, computational methods that aggregate evidence and calculate the strength of hypotheses (claims) relative to the available knowledge and evidence are completely absent from K-12 education. While scientists routinely use methods such as statistical meta-analysis, Bayesian methods and fuzzy logic for evidence integration, this aspect of scientific practice is not included in science curricula and is unknown to most teachers. In consequence, evidential reasoning and decision-making under uncertainty is often portrayed as a largely subjective process taking place in scientists' heads. Demystifying evidentiary reasoning by including computational methods and tools for modelling evidentiary arguments thus becomes central to the EfS curriculum. Research on computer-supported argumentation in education (e.g., Wegerif, 2007), along with work in Artificial Intelligence (e.g., Pearl & Mackenzie, 2018), can be applied to that effort.
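To make the evidence-integration idea concrete, the following is a minimal sketch of one of the methods named above, Bayesian updating: each piece of evidence shifts the estimated strength of a claim according to how much more likely that evidence is if the claim is true than if it is false. The claim wording and all likelihood values are purely illustrative assumptions, not drawn from any real dataset or from the works cited.

```python
# Illustrative sketch: aggregating independent pieces of evidence for a
# claim via Bayes' rule. All numbers below are made up for demonstration.

def bayesian_update(prior, likelihood_if_true, likelihood_if_false):
    """Return the posterior probability of a claim after one piece of evidence."""
    numerator = likelihood_if_true * prior
    denominator = numerator + likelihood_if_false * (1 - prior)
    return numerator / denominator

# Start agnostic about a hypothetical claim, e.g. "local warming is accelerating".
posterior = 0.5

# Each tuple: (P(evidence | claim true), P(evidence | claim false)).
evidence = [
    (0.8, 0.3),  # strong supporting observation
    (0.6, 0.5),  # weak, nearly uninformative observation
    (0.9, 0.2),  # another strong supporting observation
]

for lik_true, lik_false in evidence:
    posterior = bayesian_update(posterior, lik_true, lik_false)

print(f"Posterior strength of the claim: {posterior:.3f}")  # → 0.935
```

The same loop also makes the pedagogical point in the paragraph above explicit: the strength of a claim is not a subjective feeling but a quantity that can be computed, inspected, and argued about, and a weak piece of evidence (the second tuple) visibly moves the posterior far less than a strong one.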