
ARTIFICIAL INTELLIGENCE and TUTORING SYSTEMS: Computational and Cognitive Approaches to the Communication of Knowledge, by Etienne Wenger (1987)

Chapter 6 / Existing CAI traditions: other early contributions, pp. 101-122

**Introduction**

This chapter presents two separate projects that existed before SCHOLAR (a classic of Intelligent Tutoring Systems) and evolved out of established CAI traditions:

- One line of research, conducted at the Institute for Mathematical Studies at Stanford University (IMSSS), was geared toward the production of complete curricula for use in real settings.
- The other, at the University of Leeds in England, dealt with the automation of intelligent teaching decisions.

**Early attempts to tailor problem-solving experiences**

IMSSS has a long tradition of research in educational uses of computers.
Systems have been developed for teaching in domains as varied as logic,
axiomatic mathematics, and foreign languages, not to mention the computer
speech generation of its MISS system, which contributed to the development
of CAI.

We are going to explore the following tutors:

- EXCHECK, a proof checker that uses natural inference methods;
- INTEGRATION, Kimball's tutor for symbolic integration;
- BIP, a tutor for novice programmers that optimizes the sequencing of programming tasks using a symbolic representation of the curriculum;
- the QUADRATIC tutor, O'Shea's self-improving tutor for simple quadratic equations; and
- Self's tutor, which formalizes teaching decisions in terms of a model of the student's knowledge.

Even though these systems are not very typical of the ITS paradigm
(they were motivated more by an interest in educational issues), they have had
some direct influence on the field. A distinguishing feature is the emphasis
on large experiments with the systems in real teaching contexts, and on gathering
and analyzing data about their performance.

**EXCHECK**

A classic that proved useful in class (it was the core of an undergraduate course at Stanford for years), EXCHECK couples an intelligent interface with a powerful model of domain expertise to provide a learning environment where the student gets feedback during problem solving. It emulates human proof techniques with macro-operators that invoke a theorem prover while bringing to bear knowledge specific to the domain of proofs in set theory. It communicates with the student via a formal language of abbreviations.

**Conclusion**
It lacks most of the features associated with AI: it does not build a global model
of the student, and it does not use pedagogical strategies to make its
interventions contextually relevant and effective.

Nevertheless, for the domain of mathematical proofs, this is a **nontrivial
achievement**: a friendly environment in which the student receives intelligent
feedback and has their work verified in terms they understand.

**INTEGRATION**

Described in Ralph Kimball's doctoral dissertation (1973; 1982), this tutor
uses matrices of probabilistic values to represent judgmental knowledge.
The domain expertise is represented as a matrix that relates all problem
classes to all solution methods.

Each matrix element is a value indicating the probability that applying a
given problem-solving approach to a given problem class will generate
a subproblem in a new class. The student's knowledge is represented as a
similar matrix and compared with the expert's.

The simple language interface basically consists of multiple-choice questions,
where the tutor maintains full control over the interaction.

For diagnosis, the system updates the student's matrix. In this way, Kimball
claims, we can obtain precise measurements of student learning that reveal its
discontinuities.
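The scheme above can be sketched in a few lines. This is our own hypothetical miniature, not Kimball's program: the class names, method names, numbers, and the simple moving-average update rule are all invented for illustration.

```python
# A hypothetical miniature of Kimball's representation: rows are problem
# classes, columns are solution methods, and each entry is the probability
# that applying that method to problems of that class is appropriate.
problem_classes = ["polynomial", "trigonometric", "rational"]
methods = ["substitution", "by_parts", "partial_fractions"]

expert = [[0.8, 0.1, 0.1],
          [0.5, 0.4, 0.1],
          [0.2, 0.1, 0.7]]

# The student starts with a uniform matrix of the same shape.
student = [[1.0 / len(methods)] * len(methods) for _ in problem_classes]

def observe(student, row, col, success, rate=0.3):
    """Nudge one row of the student's matrix after an observed attempt
    (a simple moving-average update; Kimball's exact rule differs)."""
    n = len(student[row])
    for j in range(n):
        if success:
            target = 1.0 if j == col else 0.0
        else:
            target = 0.0 if j == col else 1.0 / (n - 1)
        student[row][j] = (1 - rate) * student[row][j] + rate * target

def weakest_class(expert, student):
    """Diagnosis: pick the problem class where the student's row is
    furthest from the expert's, to select the next exercise."""
    gaps = [sum(abs(e - s) for e, s in zip(er, sr))
            for er, sr in zip(expert, student)]
    return problem_classes[gaps.index(max(gaps))]

observe(student, 0, 0, success=True)
next_class = weakest_class(expert, student)
```

Note that each row remains a probability distribution after the update, and the expert-student distance directly drives exercise selection, which is the core of the idea.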

The system adopts the student's approach as its own standard when it leads to a
better solution than the expert's, a simple form of self-improvement.

To conclude, the advantages of this simple idea are that it:

- uses probability theory to update the student model,
- measures the distance between the expert and the student in order to select appropriate exercises, and
- incorporates the student's best moves into the system's expertise.

**Conclusion**
Unfortunately, it did not receive the recognition it deserved.
However, in practice, a reasonable tutorial interaction can be achieved with
probabilities **as long as explanations are not required**.

**BIP**

It is presented as a "problem-solving laboratory" for introductory programming
classes. It attempts to individualize instruction by selecting tasks
from a pool of 100 sample problems.

Its representation is more traditional AI; however, it is not really an active
programming tutor. Its Curriculum Information Network (CIN) is the more
important contribution, because it provides a complex representation of the
curriculum highlighting pedagogically relevant relations between topics.

**BIP-I**

The curriculum is divided into three conceptual layers, from top to bottom:

- Techniques, the central issues of expertise,
- Skills, low-level knowledge units (not internally ordered, not mutually disjoint), and
- Tasks, which exercise skills.
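The skills-tasks layers suggest a simple selection mechanism, sketched below. The task names and skill sets are invented for illustration, and the greedy ranking is a simplification, not BIP's actual procedure.

```python
# A hypothetical miniature of task selection over the skills layer: each
# task exercises a set of low-level skills, and the next task is the one
# that exercises the most skills the student has not yet mastered.
tasks = {
    "print_sum":    {"output", "arithmetic"},
    "count_loop":   {"loop", "counter", "output"},
    "average_list": {"loop", "arithmetic", "variables", "output"},
}

def next_task(tasks, mastered):
    # Rank tasks by how many still-unmastered skills they exercise.
    return max(tasks, key=lambda name: len(tasks[name] - mastered))

choice = next_task(tasks, mastered={"output"})
```

Because skills are not mutually disjoint, one task can advance several skills at once, which is exactly what this ranking exploits.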

**Conclusion**
In a test with two groups of students using the same tutor, one group with
the task-selection strategy above and the other with the predetermined
branching typical of CAI, the BIP-I group performed significantly better.

**BIP-II**

BIP-II refines and augments the information contained in the CIN by ordering
the skills and organizing them into networks of their own. Skills are now also
connected by pedagogical links, including analogical relations, functional
dependencies, and relative difficulty, all in a second network whose
nodes are the primitive elements of the domain.

Everything else is similar to BIP-I, especially the task-selection procedure,
which is now more refined and precise in determining the skills that need to be
exercised. As a result, the proposed sequences differ, especially when the
student performs well initially.

**Conclusion**
Even though these new links are supposed to support inferences about the
student's knowledge, their real potential for use in diagnosis and remediation
has not been explored.

As far as feedback is concerned, BIP is unable to diagnose logical errors, since it tests only input-output behavior without analyzing the algorithm. It can only check the syntax of a program by scanning for all the required keywords.

**The automation of intelligent teaching decisions**

In England, while SCHOLAR was being developed at BBN, a group at the
Computer-based Learning project at the University of Leeds came to similar
conclusions after working on advanced CAI systems for teaching medical
diagnosis and arithmetic operations. Hartley and Sleeman (1973) tried to
define some characteristics of "Intelligent Teaching Systems".

Their classification into four classes concentrates on the teaching process.
Between CAI and ITS they see an intermediate type, **generative systems**,
which generate tasks by assembling problems. They also divide ITS into two
nondisjoint categories.

One of these is adaptive systems, or more specifically **self-improving**
systems, which refine their knowledge by evaluating their own performance.

The two tutors presented next are the pioneering work of two of
their students:

- Tim O'Shea's QUADRATIC tutor explores the design of self-improving systems that monitor their own performance, while
- John Self attempts to define teaching decisions formally in terms of a student model.

**The QUADRATIC tutor**

The domain is the solution of simple quadratic equations of the form
x^{2} + c = bx, based on Vieta's general root theorem.
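Rewriting x^{2} + c = bx as x^{2} - bx + c = 0 shows, by Vieta's theorem, that the two roots satisfy r1 + r2 = b and r1 * r2 = c. A minimal sketch of that rule (ours, not O'Shea's code), assuming the tutor's exercises are built to have integer solutions:

```python
# x^2 + c = bx rewrites to x^2 - bx + c = 0, so by Vieta's theorem the
# roots satisfy r1 + r2 = b and r1 * r2 = c. Search integer candidates
# the way the taught rules suggest.
def vieta_roots(b, c):
    for r1 in range(-abs(c) - 1, abs(c) + 2):
        r2 = b - r1            # enforce the sum condition
        if r1 * r2 == c:       # check the product condition
            return sorted([r1, r2])
    return None                # no integer solution found

roots = vieta_roots(5, 6)      # x^2 + 6 = 5x, roots 2 and 3
```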

It is a self-improving tutor: it can set up experiments using
variations of its strategies and adopt those that seem to produce the best
results. It uses a database (also self-updated) of possible modifications and
their expected results, which O'Shea calls a "theory of instruction" for the
domain.

For improvements to be possible, not only is it necessary to have an explicit
and modular representation of the teaching strategies, but the **tutorial
objectives** must also be clearly defined.

So, there are four distinct tutorial goals:

- increase the number of students completing the session,
- improve their score on the post-test,
- decrease the time taken by the students to learn the rules and their combinations and
- decrease computer time used in the process.

The tutor consists of three main components:

- a task-difficulty matrix, for selecting new problems with well-defined teaching goals (fixed for the domain),
- a student model, a regularly updated set of hypotheses about what the student's current knowledge of the rules and their combinations may be, and
- the tutorial strategies, the core of the tutor, expressed as a set of production rules.
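The experiment-and-adopt cycle can be sketched as follows. Everything here is our invention for illustration: the strategy parameter, the scoring function, and the simulated students stand in for a real classroom trial.

```python
import random

# A hypothetical miniature of the self-improving loop: the tutor tries a
# variation of its current strategy and adopts it only if it improves a
# stated tutorial goal, here the mean post-test score.
random.seed(0)

def post_test_score(strategy, student_skill):
    # Stand-in for a real teaching session: in this simulation, more
    # worked examples per rule simply helps weaker students.
    return min(1.0, student_skill + 0.1 * strategy["examples_per_rule"])

def run_experiment(current, variant, n_students=50):
    students = [random.random() for _ in range(n_students)]
    def mean_score(strategy):
        return sum(post_test_score(strategy, s) for s in students) / n_students
    # Adopt the variant only when it measurably beats the current strategy.
    return variant if mean_score(variant) > mean_score(current) else current

current = {"examples_per_rule": 1}
variant = {"examples_per_rule": 3}
adopted = run_experiment(current, variant)
```

The point of the sketch is the control structure: strategies are explicit data that can be varied, evaluated against a declared goal, and swapped in, which is what makes self-improvement possible at all.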

**Conclusion**
It is a first attempt at automating educational research. Although it did not
produce any dramatic improvement in the domain, it was well accepted by the
students, who engaged with the learning and performed well.

We should also point out the lack of sufficient statistical evaluation of
the modifications to the teaching strategies.

The most fundamental limitation of the system is that its learning is empirical
rather than analytical, because it is impossible to reason about rules without
knowing the principles they embody. We can say that it builds an "empirical
theory of instruction".

**Self's tutor**

The domain is the acquisition of simple conjunctive concepts in a relational
language close to first-order logic.
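A minimal sketch of what acquiring a conjunctive concept can look like (this is a standard specific-to-general scheme, not Self's actual program; the attributes and examples are invented):

```python
# The hypothesis is the conjunction of attribute values shared by all
# positive examples; a conjunct contradicted by a positive example is
# dropped, i.e. replaced by the wildcard "?".
def learn_conjunctive(positives):
    hypothesis = list(positives[0])
    for example in positives[1:]:
        hypothesis = [h if h == e else "?"
                      for h, e in zip(hypothesis, example)]
    return hypothesis

def matches(hypothesis, instance):
    return all(h in ("?", v) for h, v in zip(hypothesis, instance))

# Attributes: (size, colour, shape); target concept: "red circles".
positives = [("small", "red", "circle"), ("large", "red", "circle")]
h = learn_conjunctive(positives)
```

Because the learner's hypothesis is an explicit data structure, a tutor can inspect it at any point and choose the next example to teach with, which is the connection to Self's formal treatment of teaching actions.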

Taking an analytical approach, in contrast with O'Shea's empirical experiments,
Self is interested in formalizing teaching actions in terms of a
student model that is both predictive and inspectable, which started an
important trend in the field.

**Conclusion**
Its learning model constructs optimal instructional sequences in an
artificial domain. It is an elegant piece of research, but it remains a
laboratory experiment because it does not address many difficult issues,
notably diagnosis.

**References**

**Suppes, P.**

(1981) University-level Computer-assisted Instruction at Stanford: 1968-1980.

Institute for Mathematical Studies in the Social Sciences, Stanford University, Stanford, California.

**McDonald, J.**

(1981) The EXCHECK CAI system. In Suppes, P. (Ed.) University-level Computer-assisted Instruction at Stanford: 1968-1980.

Institute for Mathematical Studies in the Social Sciences, Stanford University, Stanford, California.

**Blaine, L.H.**

(1981) Programs for structured proofs. In Suppes, P. (Ed.) University-level Computer-assisted Instruction at Stanford: 1968-1980.

Institute for Mathematical Studies in the Social Sciences, Stanford University, Stanford, California.

**Smith et al.**

(1975) Computer-assisted axiomatic mathematics: informal rigor.

**Blaine, L.H.; and Smith, R.L.**

(1977) Intelligent CAI: the role of the curriculum in suggesting computational models of reasoning.

Proceedings of the National ACM Conference, Seattle, Washington, pp. 241-246. Association for Computing Machinery, New York.

**O'Shea, T.**

(1979b) A self-improving quadratic tutor.

Int Jrnl Man-Machine Studies, vol. 11, pp. 97-124. (Reprinted in Sleeman, D.H.; and Brown, J.S. (Eds) Intelligent Tutoring Systems. Academic Press, London.)

**O'Shea et al.**

(1984) Tools for creating intelligent computer tutors.

In Elithorn, A.; and Banerji, R. (Eds) Human and Artificial Intelligence. North-Holland, London.

**Heines, J.M.; and O'Shea, T.**

(1985) The design of a rule-based CAI tutorial.

Int Jrnl Man-Machine Studies, vol. 23, pp. 1-25.

**Self, J.A.**

(1974) Student models in CAI.

Int Jrnl Man-Machine Studies, vol. 6, pp. 261-276

**Self, J.A.**

(1977) Concept teaching. Artificial Intelligence,

vol. 9, no. 2, pp. 197-221

© Vivian Synteta (11/04/99) updated 11/04/99

synteta8@etu.unige.ch