Incremental learning with partial instance memory

Marcus A. Maloof and Ryszard S. Michalski

Agents that learn on-line with partial instance memory reserve some of the previously encountered examples for use in future training episodes. In earlier work, we selected extreme examples—those from the boundaries of induced concept descriptions—combined these with incoming instances, and used a batch learning algorithm to generate new concept descriptions. In this paper, we extend this work by combining our method for selecting extreme examples with two incremental learning algorithms, AQ11 and GEM. Using these new systems, AQ11-PM and GEM-PM, and two real-world applications, computer intrusion detection and blasting-cap detection in X-ray images, we conducted a lesion study to analyze the trade-offs between predictive accuracy, examples held in memory, learning time, and concept complexity. Empirical results showed that although the use of our partial-memory model did decrease predictive accuracy, it also decreased memory requirements, decreased learning time, and in some cases, decreased concept complexity. We also present results from an experiment using the STAGGER Concepts, a synthetic data set involving concept drift, suggesting that our methods perform comparably to the FLORA2 system in terms of predictive accuracy, but store fewer examples. Moreover, these outcomes are consistent with earlier results using our partial-memory model and batch learning.

Paper available in PDF from ScienceDirect (subscription required).
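The partial-memory loop the abstract describes can be sketched in a few lines: each training episode combines the retained extreme examples with the incoming batch, induces a new model, and then re-selects the extreme examples to carry forward. The sketch below is illustrative only; the boundary-based selection is approximated here by keeping the examples farthest from their class centroid, and a nearest-centroid learner stands in for AQ11/GEM.

```python
# Illustrative sketch of on-line learning with partial instance memory.
# "Farthest from the class centroid" is a rough proxy for the paper's
# boundary-based selection of extreme examples; the nearest-centroid
# learner is likewise a placeholder for AQ11/GEM.

def train(examples):
    """Fit a nearest-centroid model: class label -> mean feature vector."""
    sums, counts = {}, {}
    for x, y in examples:
        s = sums.setdefault(y, [0.0] * len(x))
        for i, v in enumerate(x):
            s[i] += v
        counts[y] = counts.get(y, 0) + 1
    return {y: [v / counts[y] for v in s] for y, s in sums.items()}

def predict(model, x):
    """Assign x to the class with the nearest centroid."""
    dist = lambda c: sum((a - b) ** 2 for a, b in zip(x, c))
    return min(model, key=lambda y: dist(model[y]))

def select_extremes(model, examples, per_class=2):
    """Keep, per class, the examples farthest from their centroid
    (a stand-in for examples on the concept boundary)."""
    by_class = {}
    for x, y in examples:
        by_class.setdefault(y, []).append(x)
    kept = []
    for y, xs in by_class.items():
        c = model[y]
        xs.sort(key=lambda x: -sum((a - b) ** 2 for a, b in zip(x, c)))
        kept.extend((x, y) for x in xs[:per_class])
    return kept

def partial_memory_learn(episodes, per_class=2):
    """Each episode: train on memory + new batch, then re-select memory."""
    memory, model = [], None
    for batch in episodes:
        training = memory + batch
        model = train(training)
        memory = select_extremes(model, training, per_class)
    return model, memory
```

With `per_class=1` and two classes, only two examples survive between episodes, which is the trade-off the lesion study quantifies: far lower memory and learning time at some cost in accuracy.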

@article{maloof.ai.04,
  author = "Maloof, M.A. and Michalski, R.S.",
  title = "Incremental learning with partial instance memory",
  journal = "Artificial Intelligence",
  year = 2004,
  volume = 154,
  pages = "95--126"
}