atw: Re: Should we always give users what they ask for?

  • From: "Christine Kent" <c.bkent@xxxxxxxxxxxxxx>
  • To: <austechwriter@xxxxxxxxxxxxx>
  • Date: Mon, 9 Mar 2009 15:56:27 +1100

Is this a valid test?  What does it actually test?  What kind of material
was communicated?  What fonts and font sizes were used in the two media?
What kinds of online display were used?  Was the text supported with
pictures?  Was the text laid out the same way for both print and screen,
or optimised separately for each medium?  What kinds of questions were
asked?

 

Without answers to these questions you cannot know which learning
situations this test applies to.

 

The results may apply to someone learning philosophy, but not to a learner
who needs to press a button, push in a rod, and pull down a lever, in that
order, at the right speed, or die.  That person will be mentored.  The
mentor will press the button as they say, "Press the button".  They will
push in the rod and say the words.  Then they will pull down the lever and
say the words.  Next, they will get the learner to perform each step while
the mentor says the words for it.  Then they will get the learner to do it
entirely on their own.  If there are any nuances, such as speed or
pressure, the mentor will introduce them verbally afterwards.  Somewhere
near the machine is a picture showing a man pressing the button, pushing
in the rod and pulling down the lever.  Not a written word in sight,
except maybe some writing under the pictures for good measure, in the full
knowledge that very few people in that environment will ever read it.

 

This is a bit different from the level of comprehension of a student who
reads a treatise in ancient Greek and is required to précis it. 

 

You still have to define your learner and learning environment before you
can test "comprehension".

 

Christine

 

From: austechwriter-bounce@xxxxxxxxxxxxx
[mailto:austechwriter-bounce@xxxxxxxxxxxxx] On Behalf Of Geoffrey Marnell
Sent: Monday, 9 March 2009 3:41 PM
To: austechwriter@xxxxxxxxxxxxx
Subject: atw: Re: Should we always give users what they ask for?

 

Hi Stuart,

 

The statistic was explained earlier. Two groups of people were given the
same text to read: one group read it in printed form, the other online.
Both groups were then given the same set of questions about the text they
had read. When the experiment was repeated many times, the result was
this: those who read the printed text got more questions right. The
difference between the groups was 60%.
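
 

For illustration only (the actual scores aren't reported here, so the
numbers below are invented): a common way such a figure is computed is as
the gap between the groups' mean scores, relative to the print group. A
minimal Python sketch under that assumption:

    # Hypothetical mean scores out of 10 questions; the numbers are
    # invented, chosen only so that the arithmetic lands on 60%.
    print_mean = 8.0     # average correct answers, print group
    screen_mean = 3.2    # average correct answers, online group

    # Relative difference, taking the print group as the baseline
    difference = (print_mean - screen_mean) / print_mean
    print(f"{difference:.0%}")  # -> 60%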

 

Cheers

 

 

Geoffrey Marnell

Principal Consultant

Abelard Consulting Pty Ltd

T: +61 3 9596 3456

F: +61 3 9596 3625

W: www.abelard.com.au

  _____  

From: austechwriter-bounce@xxxxxxxxxxxxx
[mailto:austechwriter-bounce@xxxxxxxxxxxxx] On Behalf Of Stuart Burnfield
Sent: Monday, March 09, 2009 3:21 PM
To: Austechwriter
Subject: atw: Re: Should we always give users what they ask for?

 

Sorry folks, pressed Send by mistake. I was going to finish off
by saying:

My feeling is that comprehension is a factor (of course), 
reader preferences are a factor (of course), the environment
is a factor, the nature of the text or task is a factor...

So "always"? No. "reader preferences carry the day: yes or no?"
Maybe? Sometimes? It depends?

My main reservation about this thread is that I don't know how 
much weight to put on the comprehension studies you cite.
"Up to 60%" isn't a statistic I can do anything with. What does
it mean?
- every subject's comprehension was worse and the worst of
  the lot was 60% worse
- some were worse, some were better, but on average more
  were worse
- results varied depending on the material, and for a particular
  sort of material the readers' (reader's?) comprehension was
  60% worse
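
To make this concrete, here's a toy sketch in Python (the
scores are invented, not from any study) showing how the same
data can yield an "up to 60%" headline under the first reading
while the group averages differ by far less:

    # Invented per-subject comprehension scores (fraction of
    # questions correct), paired print vs. online. Purely
    # illustrative numbers.
    print_scores  = [0.90, 0.80, 0.85, 0.70, 0.75]
    screen_scores = [0.80, 0.32, 0.85, 0.75, 0.60]

    # Reading 1: the worst individual drop ("up to 60% worse")
    worst_drop = max(
        (p - s) / p
        for p, s in zip(print_scores, screen_scores)
    )

    # Reading 2: some worse, some better; compare group means
    def mean(xs):
        return sum(xs) / len(xs)

    mean_drop = ((mean(print_scores) - mean(screen_scores))
                 / mean(print_scores))

    print(f"worst individual drop: {worst_drop:.0%}")  # 60%
    print(f"drop in group means:   {mean_drop:.0%}")   # 17%

Same data, two very different headlines. That's exactly why the
bare percentage doesn't tell us much.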

Stuart
