Effective Use of K12 Data

There are some very important issues we need to confront in our use of K12 achievement data. The first is the difference between explanation and prediction, which is covered in this excellent paper. In essence, in explanatory modeling we test hypotheses about the causes of the effects we observe. In contrast, in predictive modeling we forecast future effects that will be produced by a set of causes we observe, given an underlying theory that relates them to each other.
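To make the distinction concrete, here is a minimal sketch in Python contrasting the two framings on simulated student data. The column names (hours of tutoring, attendance, test score) and the effect sizes are purely illustrative assumptions, not findings from the paper cited above.

```python
# A minimal sketch (with made-up variables) contrasting explanation and prediction.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(0)

# Hypothetical student-level data: tutoring hours, attendance rate, test score.
n = 500
hours_tutoring = rng.uniform(0, 40, n)
attendance = rng.uniform(0.6, 1.0, n)
test_score = 50 + 0.4 * hours_tutoring + 30 * attendance + rng.normal(0, 8, n)
X = np.column_stack([hours_tutoring, attendance])

# Explanatory framing: estimate the effect sizes implied by a causal theory
# (here, simply the fitted coefficients on the full sample).
explain_model = LinearRegression().fit(X, test_score)
print("estimated effects:", dict(zip(["tutoring", "attendance"], explain_model.coef_)))

# Predictive framing: hold out data and ask how well we forecast unseen students.
X_train, X_test, y_train, y_test = train_test_split(X, test_score, random_state=0)
predict_model = LinearRegression().fit(X_train, y_train)
print("held-out MAE:", mean_absolute_error(y_test, predict_model.predict(X_test)))
```

The same model class serves both purposes; what differs is the question being asked and the evidence used to answer it.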

The second issue is the difference between the types of evidence (this chart provides a handy guide to this issue). At the top of the evidence pyramid are meta-analyses, which combine the results of multiple studies while taking into account the strength of the methodologies they used. Single studies based on solid data and analysis are in the middle. And anecdotes -- which too often seem to be given great weight in K12 discussions -- are at the bottom.

The third issue is the difference between frequentist and Bayesian statistics. Academics (including not a few people with doctorates in education) are fond of statistical tests like p-values that fundamentally assume the underlying system generating a set of data does not change over time (as is true of most physical systems). In contrast, the social systems that most real-world managers (be they in the private or public sector) have to deal with every day have data generating processes that are constantly evolving. As such, rather than traditional frequentist statistics, real-world managers use Bayesian statistics. This difference is explained more fully in this paper. The essence of the Bayesian approach is that you start with a prior view -- for example, one based on accumulated research findings (or, even better, meta-analyses that summarize them) -- and then update that prior view using new data you collect, to generate what is called a "posterior" view. This becomes your new prior as the cycle of continuous learning repeats.
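Here is a minimal sketch of that prior-to-posterior cycle, assuming a simple Beta-Binomial model; the prior and the proficiency numbers are invented for illustration only.

```python
# A minimal sketch of the prior -> data -> posterior cycle using a
# conjugate Beta-Binomial model; all numbers are illustrative assumptions.
from scipy.stats import beta

# Prior: suppose accumulated research suggests roughly 60% of students reach
# proficiency under a given intervention; encode that belief as Beta(12, 8).
prior_a, prior_b = 12, 8

# New data collected this term: 45 of 60 students reached proficiency.
successes, trials = 45, 60

# Posterior: conjugate update of the Beta prior with the Binomial data.
post_a = prior_a + successes
post_b = prior_b + (trials - successes)
posterior = beta(post_a, post_b)

print("posterior mean proficiency rate:", posterior.mean())
print("95% credible interval:", posterior.interval(0.95))

# Next term, this posterior becomes the new prior, and the cycle repeats.
```

The point is not the particular model, but the discipline of stating your prior belief explicitly and letting each round of new data revise it.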

The fourth issue is that, when you read traditional frequentist analyses and weigh their conclusions, you should keep in mind the criticisms that have been leveled at this approach (most famously by John Ioannidis). You can read three of these critiques here, here, and here. And here is a damning new report which finds that most published educational research findings (usually based on frequentist statistics) have never been replicated.

The fifth issue is the need to become more disciplined about the approach we take to "experimenting our way to improved K12 performance." Here is an excellent overview of how to take a systematic approach to integrating ongoing experimentation into the operation of a school, and here is a detailed guide to using this approach. Here is another guide to running experiments in schools, and here is an OpEd from Brookings asking why more schools don't use this approach and why they "underinvest in evidence".
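As a concrete illustration, here is a minimal sketch of analyzing a simple randomized trial of a new instructional practice. The data are simulated, and the assumed 3-point effect and group sizes are hypothetical, not drawn from the guides cited above.

```python
# A minimal sketch (with simulated data) of a simple randomized school
# experiment; names, effect size, and sample size are illustrative.
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(1)

# Randomly assign 120 students to treatment (new practice) or control.
n = 120
treated = rng.permutation(np.array([True] * 60 + [False] * 60))

# Simulated outcomes: assume a modest 3-point effect on a test scored 0-100.
scores = rng.normal(70, 10, n) + 3 * treated

# Compare the two groups with a difference in means and a two-sample t-test.
diff = scores[treated].mean() - scores[~treated].mean()
stat, p_value = ttest_ind(scores[treated], scores[~treated])
print(f"estimated effect: {diff:.1f} points (p = {p_value:.3f})")
```

Even a small analysis like this forces the questions that matter in practice: what was randomized, what outcome was measured, and how large an effect the experiment can realistically detect.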

Last but not least, when we analyze data to draw either explanatory or predictive conclusions, we have to write clearly about our research for the non-quant audiences whom we are trying to convince to accept and implement our recommendations. This recent paper provides an excellent guide to doing just this.