Resource Introduction
This site is deprecated. Go to http://agi.io
The Memory-Prediction Framework (MPF) has been widely applied to unsupervised learning problems, for both classification and prediction. However, so far there has been no attempt to incorporate MPF/HTM (Hierarchical Temporal Memory) into reinforcement learning or other adaptive systems; that is, to use the knowledge embodied within the hierarchy to control a system, or to generate behaviour for an agent. This problem is interesting because the human neocortex is believed to play a vital role in the generation of behaviour, and the MPF is a model of the human neocortex.
We propose some simple and biologically-plausible enhancements to the Memory-Prediction Framework. These cause it to explore and interact with an external world, while trying to maximize a continuous, time-varying reward function. All behaviour is generated and controlled within the MPF hierarchy. The homogeneity (self-sameness) of the MPF hierarchy is preserved.
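The agent loop implied by the abstract can be illustrated with a toy sketch. Everything below is a hypothetical illustration, not the paper's actual method: `MPFHierarchy` here is a trivial stand-in (a one-level lookup table with epsilon-greedy exploration) for a real memory-prediction hierarchy, and `env_step` is an assumed environment interface returning the next observation and a reward.

```python
# Hypothetical sketch of an agent that explores an external world while
# trying to maximize a reward signal. The class and function names are
# illustrative assumptions, not the paper's API.
import random

class MPFHierarchy:
    """Toy stand-in for an MPF/HTM hierarchy: remembers the
    best-rewarded action seen for each observation."""
    def __init__(self):
        self.memory = {}  # observation -> (action, reward)

    def predict_action(self, observation, actions, epsilon=0.1):
        # Exploit a remembered high-reward action most of the time;
        # otherwise explore a random action.
        if observation in self.memory and random.random() > epsilon:
            return self.memory[observation][0]
        return random.choice(actions)

    def update(self, observation, action, reward):
        # Keep only the best-rewarded action per observation.
        best = self.memory.get(observation)
        if best is None or reward > best[1]:
            self.memory[observation] = (action, reward)

def run_episode(env_step, hierarchy, actions, steps=100):
    """Interact with the environment for `steps` timesteps,
    accumulating the time-varying reward."""
    observation, total = 0, 0.0
    for _ in range(steps):
        action = hierarchy.predict_action(observation, actions)
        next_observation, reward = env_step(observation, action)
        hierarchy.update(observation, action, reward)
        observation = next_observation
        total += reward
    return total
```

In the paper's setting, prediction and action selection would happen inside a homogeneous multi-level hierarchy rather than a flat lookup table; this sketch only shows the perceive-predict-act-reward cycle the enhancements add.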