Explained by Chris
Reiser and Dick (1990) present their new model for evaluating instructional software,
including a study in which the model was field tested.
Their model focuses on the extent to which students learn the skills
a software package is intended to teach.
Using this approach, educators will be better able to reliably identify software that is instructionally effective.
Software evaluation organizations are helping teachers and school administrators
select appropriate software for the students' needs.
Most organizations use similar criteria to evaluate instructional software.
Evaluators have to make subjective judgements about:
- the accuracy of the content and its match to typical school objectives,
- the instructional and technical quality of the software.
Without testing students after they use the software, judgements regarding the instructional effectiveness of software are primarily speculative.
Subject-matter experts (teachers) are not able to reliably identify software that is instructionally effective
- The teachers' ratings were not valid indicators of the instructional effectiveness of the software.
- The most effective spelling program received the lowest teacher rating.
- The least effective spelling program was highly rated by the teachers.
- The rank correlation between the educators' ratings and the actual instructional effectiveness of the various versions of the program was -.75.
- Those versions that the educators rated as likely to be highly effective were actually the least effective, and vice versa.
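The reported -.75 is a Spearman rank correlation. A short Python sketch shows how such a value is computed; the nine program-version ranks below are invented for illustration (deliberately chosen so the result lands on -.75), not the study's data:

```python
# Spearman rank correlation via the standard formula for untied ranks:
#   rho = 1 - 6 * sum(d_i^2) / (n * (n^2 - 1))
def spearman_rho(ranks_a, ranks_b):
    n = len(ranks_a)
    d2 = sum((a - b) ** 2 for a, b in zip(ranks_a, ranks_b))
    return 1 - 6 * d2 / (n * (n ** 2 - 1))

# Hypothetical data: educators' rating ranks vs. actual effectiveness
# ranks for nine versions of a program (invented, not from the paper).
rating_rank        = [1, 2, 3, 4, 5, 6, 7, 8, 9]
effectiveness_rank = [6, 7, 8, 9, 3, 4, 5, 1, 2]

print(spearman_rho(rating_rank, effectiveness_rank))  # → -0.75
```

A strongly negative rho like this is exactly the pattern described above: versions rated highest tended to be the least effective.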
Subjective ratings are often more negative than ratings based on field tests
- In 10 of the 36 cases, the earlier ratings were more negative than the ratings based on the field tests.
- In another seven instances, there were areas of agreement and disagreement.
- Valuable information can be gathered by field testing software.
Subjective ratings differ widely from person to person
- Very low correlation between the two sets of ratings.
- Their ratings of the instructional and technical characteristics of 29 pieces of software diverged even more.
- Subjective evaluations of software are not reliable.
|Description of the model|
The authors' primary criterion to judge the effectiveness of software
is the extent to which students learn the skills the software is intended to teach.
Step 1: Identify software of interest
Step 2: Identify general characteristics of software
Step 3: Still interested in software?
Step 4: Identify or develop instructional objectives
Step 5: Identify or develop test items and attitude questions
Step 6: Conduct one-on-one evaluation
Step 7: Is further evaluation necessary?
Step 8: Need to change test items?
Step 9: Make changes to test items
Step 10: Conduct small group evaluation
Step 11 (two weeks later): Administer retention test
Step 12: Write evaluation report
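The twelve steps above form a flow with three decision points (steps 3, 7, and 8). A minimal Python sketch of that flow, where the branch targets reflect our reading of the model, not code published by the authors:

```python
# Sketch of the 12-step evaluation flow. Decisions at steps 3, 7,
# and 8 are modeled as booleans; the function returns the sequence
# of step numbers an evaluator would actually carry out.
def evaluate(still_interested, further_eval_needed, items_need_change):
    trace = [1, 2, 3]                 # identify software, characteristics, decide
    if not still_interested:
        return trace                  # step 3 "no": stop early
    trace += [4, 5, 6, 7]             # objectives, test items, one-on-one, decide
    if further_eval_needed:
        trace.append(8)               # step 8: do test items need changes?
        if items_need_change:
            trace.append(9)           # revise test items
        trace += [10, 11]             # small group, then retention test
    trace.append(12)                  # always ends with the evaluation report
    return trace

print(evaluate(True, True, False))  # → [1, 2, 3, 4, 5, 6, 7, 8, 10, 11, 12]
```

In the field test summarized below, the evaluators went through the full sequence including the small-group evaluation and the two-week retention test.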
|Initial evaluation of the model|
The authors field tested the model with a spelling program, beginning with a one-on-one evaluation (step 6) of three students selected on the basis of a 15-word pretest:
- one with 8 correct answers (out of 15),
- one with 5 correct (average),
- and one with 1 correct.
- The high-ability student (8 in pretest) spelled 14 words correctly, gain: 6.
- The average student (5 in pretest) spelled 10 words correctly, gain: 5.
- The low-ability student (1 in pretest) spelled 9 words correctly, gain: 8.
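The one-on-one gain scores above can be checked in a couple of lines (the numbers are from the summary above; the script itself is ours):

```python
# Gain scores for the three one-on-one students (15-word spelling test).
pretest  = {"high": 8, "average": 5, "low": 1}
posttest = {"high": 14, "average": 10, "low": 9}

gains = {k: posttest[k] - pretest[k] for k in pretest}
print(gains)  # → {'high': 6, 'average': 5, 'low': 8}
```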
Small-group evaluation and retention test (average scores across students):
- pretest 4.2 correct,
- posttest 11.1 correct,
- retention test 6.8 correct.
Retained gain relative to the possible gain: 2.6 (average gain from pretest to retention test) * 100 / 10.8 (possible average gain = 15 - 4.2) = 24%.
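The gain arithmetic above can be reproduced with a few lines of Python (the averages are those reported above; the script is ours, not the authors'):

```python
# Gain arithmetic for the small-group evaluation (15-item spelling test).
MAX_SCORE = 15
pretest, posttest, retention = 4.2, 11.1, 6.8

immediate_gain = posttest - pretest     # ≈ 6.9 words
retained_gain  = retention - pretest    # ≈ 2.6 words
possible_gain  = MAX_SCORE - pretest    # 10.8 words

# Share of the possible gain still retained two weeks later
retained_share = retained_gain / possible_gain
print(round(retained_share * 100))  # → 24 (percent)
```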
Responses to the attitude questions:
- Have learned something from the lesson (1.1 average response).
- Program operated smoothly and effectively (1.1 average response).
- Would recommend the instruction to a friend (1.1 average response).
- Enjoyed the experience (1.3 average response).
Final decision about this spelling program:
- They were pleased with the large initial gain on the posttest,
- and they were impressed with the extremely positive response of the students toward this instruction.
- They liked the information about the performance of particular students (who did better than expected).
Therefore, the teachers indicated that they would recommend this software program to their colleagues.
|Implications of the initial evaluation of the model|
The authors interviewed the classroom teacher and the resource teacher
to get their reactions to the software evaluation process:
- They appreciated the idea to collect student data as part of the software evaluation process.
- If they had the time to do so, they would use the model to evaluate other pieces of software.
- Unless they were given some release time to evaluate software, they would be unlikely to use the model.
- They suggested that resource teachers could conduct such evaluations, or that software evaluation services incorporate the model.
- The teachers also suggested simplifying the model: keep only the one-on-one stage (dropping the small-group session), with all three students participating individually but together in a single one-on-one session.
Chris Mueller (email@example.com)
++41 (0)52 301 3301 phone
|97 05 04|