[edm-discuss] EDM techniques for paper-based curricula

  • From: Kevin Hall <kevhall@xxxxxxxxxxx>
  • To: <edm-discuss@xxxxxxxxxxxxx>
  • Date: Mon, 21 Sep 2009 17:50:22 +0000

I am a teacher and curriculum author who plans to seek research partners to
evaluate the effectiveness of paper-based instructional materials being
designed in light of PSLC research.  At the moment, the curriculum is still
being written and is not ready to be evaluated.  However, when it is ready,
what EDM techniques could be used to identify its effective and ineffective
elements in order to continually refine it?  Because it is a paper-based
curriculum, data could most feasibly be taken during students' tests and
quizzes (for example, if assessments are scantron-type, the data can easily be
coded and logged to a database).

The two most relevant articles I've read on using EDM to identify effective
and ineffective components of a curriculum are Feng, Heffernan, & Beck [1] and
Pavlik, Cen, & Koedinger [2].  I'm a classroom teacher, not an EDM researcher,
so please excuse any errors in the following analysis, but it seems neither
technique would work well on a paper-based curriculum.  The reason is that
both methods use learning curve analysis.  Learning curves can be tracked for
software-based curricula because the software logs every interaction a student
has with the program.  But if you're just using paper-based quizzes/tests as
data, you see only a small subset of students' attempts to solve problems: you
see their attempts on quizzes, but not during in-class discussions, homework
problems, etc.  You can probably record their attempts on at most 10% of the
problems they see.  Can a meaningful learning curve be generated and analyzed
with such sporadic data-taking?  If not, how can authors of paper-based
curricula use EDM techniques to continually refine their materials?
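For concreteness, here is a minimal sketch of the kind of power-law
learning-curve fit such analyses rest on.  Everything in it is invented for
illustration (the 0.5 * n^-0.3 "true" curve, the 20 opportunities, the choice
of which two opportunities land on a quiz); it is not from any real dataset.
With dense tutor-log data you get one point per practice opportunity, while
quiz-only data leaves just the few opportunities that happen to fall on an
assessment, so any noise in those few points dominates the fit.

```python
import math

def fit_power_curve(points):
    """Least-squares fit of error = a * opportunity^(-b), done as a
    linear regression in log-log space.
    points: list of (opportunity_number, error_rate) pairs."""
    xs = [math.log(n) for n, _ in points]
    ys = [math.log(e) for _, e in points]
    mx = sum(xs) / len(xs)
    my = sum(ys) / len(ys)
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    intercept = my - slope * mx
    return math.exp(intercept), -slope  # (a, b)

# Dense "tutor-log" style data: an error rate at every practice
# opportunity 1..20, generated from a hypothetical true curve.
dense = [(n, 0.5 * n ** -0.3) for n in range(1, 21)]

# Paper-based data: only the opportunities that happen to fall on a
# quiz survive (here, opportunities 4 and 16 out of 20).
sparse = [dense[3], dense[15]]

a_dense, b_dense = fit_power_curve(dense)    # recovers a ~ 0.5, b ~ 0.3
a_sparse, b_sparse = fit_power_curve(sparse)
# The sparse fit also recovers the curve here only because this toy
# data is noiseless; with real quiz scores, a two-point fit would
# swing wildly with measurement noise.
print(a_dense, b_dense, a_sparse, b_sparse)
```

The point of the sketch is the data shape, not the fitting math: the dense
list is what tutor logs provide, and the two-element list is roughly what a
quiz-only pipeline would have to work with.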

Many school districts these days require teachers to give scantron-type tests
and quizzes.  Commercial software tracks each student's skill level for each
identified skill and reports progress back to teachers and administrators.
One such product is the widely used ExamView suite.  In my EDM readings, I
have been surprised not to find much research so far using the data collected
from such systems.  I'm sure there are easy ways to use the data to identify
broad trends, such as which schools/teachers/curricula are on average more
effective.  But I'd like to find a way to see inside a curriculum, and to see
which pieces of it work and which need to be redesigned.  In other words, my
goal is to "close the development loop" in authoring curricula.
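As a very rough illustration of what "seeing inside" a curriculum might look
like with only scantron data, here is a sketch that aggregates item responses
by skill across successive quizzes and flags skills whose error rate fails to
drop, a crude signal that the unit teaching that skill may need redesign.  All
of the names, records, and the 0.1 threshold are made up for the example; a
real analysis would need far more data and care.

```python
from collections import defaultdict

# Hypothetical scantron export: (student, quiz_number, skill, correct)
records = [
    ("s1", 1, "fractions", 0), ("s1", 2, "fractions", 1),
    ("s2", 1, "fractions", 0), ("s2", 2, "fractions", 1),
    ("s1", 1, "slope", 0),     ("s1", 2, "slope", 0),
    ("s2", 1, "slope", 1),     ("s2", 2, "slope", 0),
]

def error_rate_by_quiz(records):
    """Per-skill error rate on each successive assessment."""
    totals = defaultdict(lambda: [0, 0])  # (skill, quiz) -> [errors, attempts]
    for student, quiz, skill, correct in records:
        t = totals[(skill, quiz)]
        t[0] += 1 - correct
        t[1] += 1
    out = defaultdict(dict)
    for (skill, quiz), (err, n) in totals.items():
        out[skill][quiz] = err / n
    return dict(out)

def flag_stalled(rates, min_drop=0.1):
    """Flag skills whose error rate fails to fall by at least min_drop
    between the first and last assessment."""
    flagged = []
    for skill, by_quiz in rates.items():
        quizzes = sorted(by_quiz)
        if by_quiz[quizzes[0]] - by_quiz[quizzes[-1]] < min_drop:
            flagged.append(skill)
    return flagged

rates = error_rate_by_quiz(records)
print(rates)                # {'fractions': {1: 1.0, 2: 0.0}, 'slope': {1: 0.5, 2: 1.0}}
print(flag_stalled(rates))  # ['slope']
```

In this toy run, "fractions" improves between quizzes while "slope" gets
worse, so only "slope" is flagged for a second look at the corresponding
curriculum unit.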


Closing the development loop is critical now because of a recent explosion in
the number of open-source curricula being written by volunteers and
distributed over the web.  The State of California is considering shifting to
such textbooks.  As all these new curricula are being developed, how can the
EDM community enable authors to test their materials' effectiveness without
having to get major grants for controlled, experimental studies?


My own project, which is still in its early stages, is available at the
following URL if you'd like to see it:



Thank you to any EDM community members who are willing to respond.  I always
enjoy reading your research.


Kevin Hall


[1] Feng, Heffernan, & Beck.  Using Learning Decomposition to Analyze
Instructional Effectiveness in the ASSISTment System.

[2] Pavlik, Cen, & Koedinger.  Performance Factors Analysis – A New
Alternative to Knowledge Tracing.

