Amid all the shuffle of the #mtt2k phenomenon and my piece on Khan Academy this week — which is well on its way to being the most-read and -retweeted article I’ve ever done — Konstantin Kakaes put up a response to critiques of his Slate piece on educational technology. In it, he addresses both my critique and that of Paul Karafiol. I wanted to give just a few counter-critiques here. I haven’t had a chance to read Paul’s piece, so I’m just going to focus on the part of the response that referenced my post. (Here’s the full post I wrote about the Slate article.)
Let’s go back to the original Slate piece, which said:
Though no well-implemented study has ever found technology to be effective, many poorly designed studies have—and that questionable body of research is influencing decision-makers.
The Slate piece suggests that researcher bias, brought on by having a financial stake in the outcome of the research, has something to do with this, and Kakaes gives a couple of examples to illustrate the point. And clearly this is not false — corporately-connected biases do exist and lead almost invariably to bad research. (Just like I could give a couple of examples to illustrate the opposite point… but never mind about that for now.)
Just as clearly, not all educational research is biased by corporate interests. I gave links to some of my favorite educational journals as examples, and those links are by no means exhaustive. I can’t recall a single article among those that had any fingers in a corporate pie. It’s just not the case that ALL education research is so tainted, so let’s just all agree to ignore those that are.
So now we’re only talking about research done by non-corporately influenced researchers, so we ought to be able to judge the research on its own merits, right? Not so fast. A few paragraphs later, the Slate piece makes a big step upward in its claim about education research:
Despite the lack of empirical evidence, the National Council of Teachers of Mathematics takes the beneficial effects of technology as dogma. There is a simple shell game that goes on: Although there is no evidence technology has been useful in teaching kids math in the past, anyone can come up with a new product and claim that this time it is effective.
That second sentence is a biggie. The author is claiming one of two things: (1) that no research study has ever shown any evidence that technology has been useful in teaching kids math, regardless of whether or not it is tainted by corporate interests; or (2) that all educational research is in fact tainted by corporate interests and is therefore not trustworthy. This is a bold statement that deserves unpacking.
It’s an easy library exercise to see that (2) is not true. You don’t even have to go to a library — just browse the links I provided in my article. Unless you believe that researchers who do not disclose financial stakes in the outcome of their research are lying. But if that’s the case, there’s no real reason to restrict that suspicion to education research only! You should be paranoid about all research and find it untrustworthy, and in that event there’s no point in discussing research at all.
So let’s go with interpretation (1). Going by the author’s own words, education research has failed to meet a pitifully low bar: There are no research studies that provide any evidence that technology has been even so much as useful in teaching kids math. But I think a cursory glance through the journals I linked last time should provide enough counterexamples to that claim. In other words, while there are no doubt some clunkers in the mix of articles that get published in education research journals, there are also plenty of articles that do provide evidence that technology can be useful in helping students learn math.
There is a third possible meaning to this statement: that there is no research article that simultaneously gives evidence for the usefulness of technology in teaching kids math, is free of corporate funding, and is “well-designed”.
But like a lot of discussions about technology where the anti-technology side wants “evidence”, once the evidence is provided, the goalposts begin a steady march backward. In the reply article, the author says:
Others criticized the piece for neglecting a body of empirical evidence that “proves” the efficacy of some technology or other.
The link is to my article, and the word “proves” is Kakaes’ word. The quotes around it should not be taken to mean that I said there was research that “proves” the efficacy of technology; in fact I did not say that, because the lack of proof is not at issue in the Slate piece — the lack of evidence is. This is far more than a mere semantic point. Are we looking for evidence, or for proof? There is a big difference.
If we’re looking for evidence, we have that in spades. I don’t have to give a mountain of research in order to refute the original claim (which, remember, said that there is no evidence of technology’s usefulness) — I just have to give one piece of evidence. So, here. [PDF]. Since I’m feeling generous, here’s another and here’s another. If you want to quibble, two of those are studies in physics and engineering (respectively), but I think we get the point. The evidence is there.
And this is just for one particular kind of technology (classroom response systems). If you open it up to more kinds of technology — discussion boards, computer algebra systems, graphing calculators, programming languages — your evidence multiplies. And technically those links are not even really about the technology as such; they are about instructional methods that effectively use technology. But these nuances are often lost in these discussions, when people frustratingly lump all “technology” together as if it were uniform, and then expect “technology” to do something for student learning, as if of its own accord, without any reference to the instructional design choices being made by teachers in the trenches.
Does this evidence constitute a proof? By no means. This isn’t pure mathematics, or even laboratory physics. Education research is messy and complicated. There are confounds upon confounds. So should we be skeptical of the evidence and seek further understanding when a research result is published? Absolutely. Just as we should with research in the natural sciences. But does it make sense to flatly deny that the evidence is there, in every study? That seems as imprudent as blindly accepting it.
The only thing left for the nay-sayer side to say is: OK, we’ve got evidence, and it’s clear the research is not corporately funded — but the research that gives evidence of the usefulness of technology is not “well-designed”. (Whereas the research showing technology is not useful is well-designed? I’m losing track of the rules at this point.) But this term “well-designed” means whatever the other person wants it to mean. The Slate article wants it to mean research that is at the same level of empirical standards as FDA research, clinical trials, or advanced laboratory physics.
Education research is probably never going to be at this level, because we are working with human beings and human brains, neither of which are well-understood. But even within such an imperfect environment, we can still design studies that capture evidence of some sort of effect — like technology being used effectively to improve student learning. If it turns out that upon further study, the effect is more complicated than just the use of technology, then fine — more research is needed. But it doesn’t mean the evidence doesn’t exist.
In the end, all of us who are interested in education have a choice to make. We can either take the conservative approach and try nothing to improve student learning until it passes a litmus test for empirical rigor that we concede can never be met, or we can go with the best evidence we have that certain approaches improve student learning, try to help students as much as we can based on it, and stay open to change if it appears there are still better ways. Which way is better for students?