Before reading the research study, I bought the app myself and reviewed it for the instructional design features and functions that I think are important. I was quite pleased with the app. It gives feedback for every learner response, levels up when the learner does well, and remediates when the learner makes mistakes (see image at right and caption). The user interface is nice and clean, without distracting elements that are irrelevant to the learning task. It is also easy to use: the required learner response is pretty intuitive, and basic instructions are included.
I really only had two criticisms of the app. First, it is not mastery-based; the learner can choose easy, medium, or difficult options. I would prefer that the learner automatically start on the easy setting and earn the opportunity to use the more difficult settings. My other criticism is that there are no reports of learner performance. If I were a parent or teacher of a student using this app, I would want to know which fractions my child has mastered and which are posing difficulty.
So that is my basic review, in a nutshell. Once I had used the app myself for a while, I went ahead and read parts of the research study. I will confess that I didn’t read the entire paper. As a research methodologist, my main interest was in the methods section of the paper. And as an instructional designer, my main interest was in the app’s effectiveness in improving students’ abilities to use fractions. I plan to go back and read the rest of the paper this week. But what I can say now is that the research design looked good: the author used a basic crossover design, with pre-, mid-, and post-tests. There are a couple of details that I need to go back and examine further, like why the author only used items 1-20 of the tests. (If you read the study and have methodological comments, please share them!)
So why am I writing this now, when I’ve just told you that I need to do some further reading of the study? Because I want to give HUGE kudos to Motion Math for sponsoring this research. A faculty member at the University of Southern California conducted the research, and the research paper has been made public for all of us to read. If you want to check it out, please do, by clicking here.
Folks, this is exactly the kind of thing we should not only encourage from app publishers, but demand of them. There are upwards of 20,000 educational apps in the iTunes app store to date. One of the bigger problems customers have is figuring out which ones are “good.” The most common reviews and review sites offer opinions on whether kids like the apps and think they’re fun. Very few app publishers offer actual empirical evidence that their apps work in teaching kids.
We need more evidence that apps work. If you think so too, vote with your dollars by supporting publishers who have empirical data, preferably from studies conducted by independent scientists. I don’t know the Motion Math folks. The closest connection I have to them is that I follow them on Twitter and own a few of their apps, so I have no vested interest in your buying the Motion Math app. What I do have a vested interest in, and I know you all do as well, is promoting effective instruction for our kids. By supporting, with our dollars, companies that can demonstrate effectiveness, we can encourage other companies to start doing the same.
So think about it. And if you agree, consider spending $2.99 here to tell Motion Math that you like their focus on evidence of effectiveness. And tell them how much we would like them to do the same for the rest of their apps!
What other educational apps do you know of that provide empirical evidence of their effectiveness? I would love to know about them!
The picture that appears here is a screenshot of Motion Math that appeared originally at http://www.cultofmac.com/133990/study-claims-ipad-app-boosts-student-math-skills/. I reviewed Motion Math HD, v. 1.1.6 on Saturday, August 25, 2012.