On 9/21/09, Kevin Hall <kevhall@xxxxxxxxxxx> wrote:
> reason is that both methods use learning curve analysis. Learning curves
> can be tracked for software-based curricula because the software logs every
> interaction a student has with the program. But if you’re just using
> paper-based quizzes/tests as data, you see only a small subset of students’
> attempts to solve problems: you see their attempts on quizzes, but not
> during in-class discussions, homework problems, etc. You can probably
> record their attempts on at most 10% of the problems they see. Can a
> meaningful learning curve be generated and analyzed with such sporadic
> data-taking?

Actually, learning curve research done with intelligent tutors suffers from a similar problem, especially if the tutor is used sporadically. Imagine the following scenario:

September 16: use tutor and solve 3 problems on skill A (right, wrong, right)
November 15: use tutor and solve 3 more problems on skill A (wrong, right, right)

[Note: we do not need to assume that two months elapsed between uses of the tutor, although that is one possibility. Perhaps during the intervening time the student worked on different skills on the tutor.]

There were a couple of months of "something" happening between the 3rd and 4th attempts at solving problems with skill A. Odds are the students saw examples in class, solved homework problems, etc. The typical assumption is that those anomalies even out over time, but I'm not aware of anyone testing that assumption, or proposing a good workaround.

> If not, how can authors of paper-based curricula use EDM
> techniques to continually refine their materials?

One thought would be to create fictional practice opportunities that are unobserved. The EDMer has no idea how the student actually did on those practice opportunities, but still counts them as opportunities. A dynamic Bayesian network approach would simply add the extra time slices and leave the outcome node in those slices unobserved.
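To make the unobserved-time-slice idea concrete, here is a minimal sketch using Bayesian Knowledge Tracing-style updates (the simplest dynamic Bayes net used for skill tracking). The parameter values (slip, guess, learn) and the choice of 19 phantom opportunities are invented for illustration, not taken from any real system:

```python
# Sketch: track P(student knows skill A) across observed and phantom
# practice opportunities, BKT-style. All parameter values are made up.

def update_known(p_known, correct, slip=0.1, guess=0.2, learn=0.15):
    """One observed opportunity: condition on correctness, then apply
    the learning transition."""
    if correct:
        evidence = p_known * (1 - slip)
        total = evidence + (1 - p_known) * guess
    else:
        evidence = p_known * slip
        total = evidence + (1 - p_known) * (1 - guess)
    posterior = evidence / total
    return posterior + (1 - posterior) * learn

def update_unobserved(p_known, learn=0.15):
    """One phantom opportunity: no evidence, learning transition only."""
    return p_known + (1 - p_known) * learn

p = 0.3  # prior probability the student already knows skill A
# September data: right, wrong, right
for c in (True, False, True):
    p = update_known(p, c)
# ~19 unobserved practice opportunities in the intervening months
for _ in range(19):
    p = update_unobserved(p)
# November data: wrong, right, right
for c in (False, True, True):
    p = update_known(p, c)
# p is now the estimated probability the student knows skill A
```

An observed opportunity conditions on correctness and then applies the learning transition; a phantom opportunity applies only the transition, which is exactly what leaving the outcome node unobserved buys you in the dynamic Bayes net.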
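One way to operationalize counting the phantom opportunities is to pick the count that makes the overall learning curve most lawful. A crude sketch, assuming a power-law error curve err(t) = a * t^(-b) and grid-searching its parameters for each candidate count (the outcome data, parameter grids, and bounds are all invented for illustration):

```python
# Hypothetical sketch: choose the number of phantom practice
# opportunities N that lets a power-law learning curve best fit the
# observed outcomes.

# Observed outcomes as error indicators (1 = wrong, 0 = right).
# First burst: right, wrong, right; second burst: wrong, right, right.
burst1 = [0, 1, 0]
burst2 = [1, 0, 0]

def sse(n_phantom, a, b):
    """Sum of squared errors between outcomes and the curve a * t**(-b),
    with n_phantom unobserved opportunities between the two bursts."""
    total, t = 0.0, 0
    for err in burst1:
        t += 1
        total += (err - a * t ** (-b)) ** 2
    t += n_phantom  # practice we never saw
    for err in burst2:
        t += 1
        total += (err - a * t ** (-b)) ** 2
    return total

def best_fit(n_phantom):
    # Crude grid search over plausible power-law parameters.
    return min(sse(n_phantom, a, b)
               for a in (0.2, 0.4, 0.6, 0.8)
               for b in (0.1, 0.3, 0.5, 0.7))

# Bound N and pick the count that gives the most lawful curve.
best_n = min(range(0, 1001), key=best_fit)
```

A real analysis would fit a and b properly (e.g., by regression) and would pool the phantom count across a class, or treat it as a random effect, rather than estimating it per student.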
For the above data on skill A, a traditional learning curve approach could have something like:

Practice opportunity     Correct?
1                        yes
2                        no
3                        yes
23 (i.e., 4 + "19")      no
24                       yes
25                       yes

where the "19" is an estimate of how much practice the student had in the intervening months. The big question would be deciding how many phantom practice opportunities to include. One solution would be to pick whatever number gives the smoothest learning curve (subject to some bounds, such as 0 <= N <= 1000), and to insist that all members of a class have the same value (or treat it as a mixed effects model).

Actually, estimating phantom practice opportunities based on learning curves being lawful sounds like a reasonable and interesting approach. Any grad students with some spare cycles out there? :-)

> Many school districts these days require teachers to give scantron-type
> tests and quizzes. Commercial software tracks each student’s skill level
> for each identified skill and reports progress back to teachers and
> administrators. One such product is the ExamView suite, which is widely
> used and which can be seen here:
> http://www.einstruction.com/pdf/brochures/K-12_ExamViewAS%20SS.pdf
> In my EDM readings, I have been surprised not to find much research so
> far using the data collected from such systems.

A big reason is that EDM grew out of the ITS and AIED communities, so the initial push came from intelligent tutoring systems researchers. We're picking up some people who use traditional data sources, but they're a minority of the population. Also, the field is new enough that there is plenty of low-hanging fruit, so why not work with a data source that is plentiful?

> I’m sure there are easy ways to use the data to identify broad trends,
> such as which schools/teachers/curricula are on average more effective.

Yes. Tread carefully, though, as some teachers and schools don't want ratings such as those available. On a prior project I came up with some of those for internal consumption.

joe

--
Joseph E.
Beck
Assistant Professor
Computer Science Department, Fuller Labs 138
Worcester Polytechnic Institute