Methodologies for Continuous Life-Long Machine Learning for AI Systems
Authors:
- Dr. James Crowder, CAES APD
- John Carbone, Electrical and Computer Engineering Dept, Southern Methodist University
Abstract:
Current machine learning architectures, strategies, and methods are typically static and non-interactive, making them incapable of adapting to changing and/or heterogeneous data environments in real time or near real time.
Typically, real-time applications must process large amounts of disparate data, learn from them, and provide actionable intelligence in the form of recognition of evolving activities. Applications like Rapid Situational Awareness (RSA), used in support of critical systems (e.g., Battlefield Management and Control), require critical analytical assessment and decision support: automatically processing massive and increasing amounts of data to recognize evolving events, raise alerts, and provide actionable intelligence to operators and analysts.
Herein we prescribe potential methods and strategies for continuously adapting, life-long machine learning within a self-learning and self-evaluating environment, to enhance real-time/near-real-time support for mission-critical systems. We describe the notion of continuous adaptation, which requires an augmented paradigm that enhances traditional probabilistic machine learning: specifically, systems that must operate aptly in harsh or soft unknown environments without a priori statistically trained neural networks or fully developed learning rules for situations never before anticipated. This leads to a hypothesis requiring new machine learning processes in which abductive learning is applied. We utilize varying unsupervised/self-supervised learning techniques and statistical/fuzzy models for extracting entities, relationships, and descriptors, along with topic and group discovery and abductive inference algorithms, to expand the system's aperture so it can envision what outlying factors could also have caused current observations. Once extended plausible explanations are found, we show how a system uses the aforementioned implements to learn about new or modified causal relationships and to extend, reinterpret, or create new situation-driven memories.
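To make the abductive step concrete, the sketch below shows one minimal way such inference can be framed: score candidate explanations (hypotheses) by how plausibly each accounts for the observed cues, and return the best explanation. This is an illustrative assumption, not the authors' implementation; all hypothesis names, priors, and likelihood values are invented for the example.

```python
# Minimal abductive-inference sketch: pick the hypothesis that best
# explains a set of observations, scored as log prior + sum of log
# likelihoods. All names and numbers here are illustrative placeholders.

import math

# Prior plausibility of each candidate cause (illustrative values).
PRIORS = {
    "routine_patrol": 0.70,
    "convoy_movement": 0.25,
    "staged_ambush": 0.05,
}

# P(observation | hypothesis): how strongly each cause predicts each cue.
LIKELIHOODS = {
    "routine_patrol":  {"dust_plume": 0.3, "radio_silence": 0.1, "night_activity": 0.2},
    "convoy_movement": {"dust_plume": 0.8, "radio_silence": 0.3, "night_activity": 0.4},
    "staged_ambush":   {"dust_plume": 0.4, "radio_silence": 0.9, "night_activity": 0.8},
}

def abduce(observations):
    """Return (best_hypothesis, log_score): the explanation that best
    accounts for the observed cues."""
    scores = {}
    for hyp, prior in PRIORS.items():
        score = math.log(prior)
        for obs in observations:
            # Unmodeled cues get a tiny likelihood rather than zero.
            score += math.log(LIKELIHOODS[hyp].get(obs, 1e-6))
        scores[hyp] = score
    best = max(scores, key=scores.get)
    return best, scores[best]

best, _ = abduce(["radio_silence", "night_activity"])
print(best)  # → staged_ambush: the low-prior cause best explains these cues
```

A life-long learner in the spirit of the abstract would go further: when no existing hypothesis scores well, it would widen the hypothesis set (the "system aperture") and update priors and likelihoods as new causal relationships are confirmed.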