From a Broad Survey of Continual Learning to a Novel Project Hypothesis
After selecting Rehearsal as the most promising strategy and studying state-of-the-art implementations such as iCaRL (Rebuffi et al., 2017), a key limitation emerged:
**The Inefficiency of Constant Rehearsal:** Existing methods employ a *static* rehearsal strategy. They constantly replay exemplars at a fixed intensity, regardless of whether the model is actually forgetting. This is computationally wasteful, analogous to studying flashcards for a subject you already know perfectly.
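To make the inefficiency concrete, here is a minimal sketch of a static rehearsal policy (class and method names are illustrative, not from iCaRL): a fixed-capacity exemplar buffer whose contents are replayed at the same intensity on every step, whether or not forgetting is occurring.

```python
import random

class StaticRehearsalBuffer:
    """Toy sketch of a static rehearsal policy: a fixed number of
    stored exemplars is replayed on every training step, whether or
    not the model is actually forgetting."""

    def __init__(self, capacity=200):
        self.capacity = capacity
        self.exemplars = []  # (x, y) pairs from past tasks

    def store(self, x, y):
        # Reservoir-style sampling keeps the buffer within capacity.
        if len(self.exemplars) < self.capacity:
            self.exemplars.append((x, y))
        else:
            idx = random.randrange(self.capacity)
            self.exemplars[idx] = (x, y)

    def replay_batch(self, k=32):
        # Replayed at a *fixed* intensity k on every step -- the
        # inefficiency noted above: replay happens even when
        # nothing is being forgotten.
        k = min(k, len(self.exemplars))
        return random.sample(self.exemplars, k)
```

The waste is in `replay_batch` being called unconditionally: its cost is paid on every step, independent of whether past-task performance has degraded.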
The identified gap (inefficient rehearsal) led to a crucial question: "How can a system know *when* it's starting to forget?"
The answer lies in a different but related field: **Stream Mining**. This field has developed robust algorithms for **Concept Drift Detection**, which identify when the statistical properties of a data stream change. By re-framing "forgetting" as a drop in performance on past tasks, we can see it as a form of concept drift.
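One simple detector in this family, loosely following the Drift Detection Method (DDM) of Gama et al., tracks the running error rate on a stream of per-example correct/incorrect outcomes and signals drift when the error level rises well above the best level seen so far. The sketch below is a simplification for illustration, not a faithful reference implementation:

```python
import math

class DriftDetector:
    """Minimal DDM-style detector (simplified for illustration):
    tracks the running error rate p and its std s on a stream of
    Bernoulli error outcomes; drift is signalled when p + s rises
    well above the lowest level observed so far."""

    def __init__(self, drift_threshold=3.0):
        self.n = 0
        self.p = 1.0                # running error rate
        self.p_min = float("inf")   # best (lowest) error rate seen
        self.s_min = float("inf")   # std at that best point
        self.threshold = drift_threshold

    def update(self, error: bool) -> bool:
        self.n += 1
        # Incremental estimate of the error probability.
        self.p += (float(error) - self.p) / self.n
        s = math.sqrt(self.p * (1 - self.p) / self.n)
        if self.p + s < self.p_min + self.s_min:
            self.p_min, self.s_min = self.p, s
        # DDM-style rule: error level exceeds the best level by
        # roughly `threshold` standard deviations.
        return self.n > 30 and self.p + s > self.p_min + self.threshold * self.s_min
```

Fed the accuracy on past-task examples, a detector like this turns "forgetting" into an explicit, cheap-to-evaluate event rather than something addressed blindly on every step.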
This synthesis suggests a hybrid system with three components:
- **Rehearsal (exemplar replay):** Provides the strong, proven baseline for storing and replaying exemplars.
- **Drift detection:** Acts as an efficient, real-time "forgetting alarm" by monitoring performance on past tasks.
- **Adaptive controller:** Connects the two components to initiate "rehearsal bursts" only when necessary.
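The hypothesized control loop can be sketched in a few lines. All names here are illustrative stand-ins (the model update, the probe, and the replay step are passed in as callables), assuming a detector with an `update(error) -> bool` interface:

```python
def adaptive_rehearsal_step(train_on_new, probe_error, rehearse,
                            detector, burst_steps=20):
    """One step of the hypothesized drift-triggered rehearsal scheme
    (illustrative sketch, not from any published method):
      1. learn from the incoming new-task batch as usual,
      2. probe performance on stored past-task exemplars,
      3. rehearse in a burst only if the detector flags forgetting.
    Returns True when a rehearsal burst was triggered."""
    train_on_new()                      # normal new-task update
    if detector.update(probe_error()):  # "forgetting alarm" fired
        for _ in range(burst_steps):    # rehearsal burst
            rehearse()
        return True
    return False                        # no drift: zero replay cost
```

The design point is that replay cost is now conditional: in steady state the loop pays only for a single probe evaluation per step, and the full rehearsal expense is incurred only when the detector signals drift.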