Eric Laukien - Feynman Machine: The Universal Dynamical Systems Computer (2016)

Created: August 5, 2017 / Updated: February 6, 2021 / Status: finished / 5 min read (~883 words)

  • Generate predictions for the next state in an RNN and correct the prediction based on the error that prediction produces (see the sketch after this list)
  • Can it only learn based on sequential training data? (given that predictions are based on prior states)
    • Given it is "the universal dynamical systems computer", I'm guessing it's only for time-based data sets
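
A minimal sketch of this predict-then-correct loop, assuming a toy Elman-style RNN with an online error-driven readout update (my own illustration, not the paper's algorithm):

```python
import numpy as np

# Toy predict-then-correct loop (illustrative only, not the paper's algorithm):
# an Elman-style RNN predicts the next element of a sequence, and the readout
# weights are corrected online using the prediction error.
rng = np.random.default_rng(0)
n_in, n_hid, lr = 4, 16, 0.01
W_in  = rng.normal(0, 0.1, (n_hid, n_in))    # input -> hidden
W_rec = rng.normal(0, 0.1, (n_hid, n_hid))   # hidden -> hidden recurrence
W_out = rng.normal(0, 0.1, (n_in, n_hid))    # hidden -> predicted next input

seq = rng.normal(size=(100, n_in))           # stand-in for sequential training data
h = np.zeros(n_hid)
for t in range(len(seq) - 1):
    h = np.tanh(W_in @ seq[t] + W_rec @ h)   # update hidden state
    pred = W_out @ h                         # prediction of seq[t + 1]
    err = seq[t + 1] - pred                  # error made by that prediction
    W_out += lr * np.outer(err, h)           # correct the readout with the error
```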

  • One major difficulty is that the computation appears to be emergent across large populations of neurons in a region, but at the same time the details of individual synaptic connections, the "wiring diagram", the importance of individual and bursts of spikes, and the numerous genetically defined neuron types all seem crucial to understanding
  • We propose in this paper to describe a simple model which is both derived directly from computational models built on empirical neuroscience, and also uses information-theoretic results from a relatively new branch of Applied Mathematics, namely the study of coupled Dynamical Systems
  • The Theorem of Floris Takens: models derived from time series signals are essentially true analogues of the system producing the signals (see the delay-embedding sketch after this list)
  • The general Feynman Machine: a simple network or hierarchy of regions composed of paired predictive encoder and generative decoder modules
  • Feynman Machines have several interesting properties which distinguish them from existing Deep Learning and similar systems
    • Due to the much higher density and locality of processing, Feynman Machine-based systems can perform at least comparably while dramatically reducing the footprint in computational power, training data and fine-tuning
    • Feynman Machines can be arbitrarily split into modules distributed across clusters and the Internet, and systems running on low power devices such as phones can be dynamically and robustly augmented using low-bandwidth connections to larger networks running in the cloud
    • Models can be trained on powerful infrastructure and transferred for operation and further custom learning on smaller devices
    • The same architecture can be used as a component in unsupervised, semi-supervised, fully supervised and reinforcement learning contexts
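
Takens' theorem is the basis of delay-coordinate embedding: under mild conditions, stacking time-delayed copies of a single observed signal reconstructs a space equivalent (diffeomorphic) to the underlying system's state space. A minimal numpy sketch of such an embedding (my own illustration; the parameter choices are arbitrary):

```python
import numpy as np

def delay_embed(x, dim=3, tau=1):
    """Delay-coordinate embedding: row t is [x_t, x_{t+tau}, ..., x_{t+(dim-1)*tau}]."""
    n = len(x) - (dim - 1) * tau
    return np.column_stack([x[i * tau : i * tau + n] for i in range(dim)])

# Example: a scalar observation of a quasi-periodic system.
t = np.linspace(0, 20 * np.pi, 2000)
x = np.sin(t) + 0.5 * np.sin(2.1 * t)     # one-dimensional time series
states = delay_embed(x, dim=3, tau=25)    # reconstructed 3-D state-space trajectory
print(states.shape)                       # (1950, 3)
```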

  • A Dynamical System is a mathematical model whose dynamics are characterised by explicit update rules, typically differential equations in continuous time or difference equations in discrete time
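
For instance, a one-line difference equation already defines a discrete-time dynamical system (the logistic map is my choice of example, not the paper's):

```python
# Discrete-time dynamical system given by an explicit update rule:
# the logistic map x_{t+1} = r * x_t * (1 - x_t).
def logistic_trajectory(x0=0.2, r=3.9, steps=10):
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1 - xs[-1]))
    return xs

print(logistic_trajectory())   # a short trajectory of the system
```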

  • A Feynman Machine is a collection of Dynamical Systems modules called regions, connected together in a network or hierarchy, and to its external world, via sensorimotor channels, each of which carries a (usually high-dimensional) time series signal of some kind
    • Each region is capable of adaptively learning, representing, predicting, and interacting with the spatiotemporal, sensorimotor structure of its "world"
  • Unlike the Turing Machine (or any digital computer), the Feynman Machine is not "programmed" in the everyday sense. Instead, the structure of the network, the choice and configuration of regions, and connections to its external world together dictate the functionality and capability of the system, and the actual performance is achieved by online learning of the structure in the world
  • A Feynman Machine region typically has two "faces": the "visible", "downward" or "input" face and the "hidden", "upward" or "output" face (sketched after this list)
  • Each face has both inputs and outputs, but their semantics are different for each face
  • The "visible inputs" correspond to sensorimotor inputs to the region, and the "visible outputs" to predictions of future inputs, feedback predictive signals to lower Regions and/or control/behavioral/attention signals
  • The "hidden outputs" correspond to encodings of the visible inputs which are directed to visible inputs in higher regions, and the "hidden inputs" receive feedback/control/attention/routing signals from higher regions' visible outputs

  • It has been known for over a century that the neocortex has a laminar structure which involves very significant local feedback loops, but there is no clear consensus on the reasons for the preservation of this structure over scales of six orders of magnitude across mammalian species, and over the majority of 225 million years of evolution
  • In terms of the Feynman Machine, the cortical layers map onto a region's faces as follows:
    • Layer 4 acts as the downward input face, receiving afferent sensorimotor inputs and predicting individual transitions
    • Layer 2/3 temporally pools over these transitions and acts as part of the encoder's hidden output
    • Layer 5 combines input from Layer 2/3 and from higher regions via Layer 1, thus forming the main hidden input face
    • Layers 5 and 6 both form outputs: Layer 5 projects to motor areas as well as upwards, and Layer 6 produces attention/feedback/modulation signals to lower regions and to Layer 4
  • We show in our experiments that the Feynman Machine operation is not very sensitive to the exact choice of algorithm in each Encoder and Decoder module, and that different connection schemes may be used to connect modules internally as well as between regions
  • The important thing is that each region has an internal feedback loop where encodings affect decodings and vice versa

  • Our software implementations of the Feynman Machine share a common architecture: a ladder-like hierarchy of paired encoders and decoders
  • This resembles the ladder networks of Rasmus et al., but extends them to predict the next timestep of data
  • Each encoder/decoder pair is a kind of spatiotemporal autoencoder, receiving information, mapping it to a hidden representation, and using that representation to predict the next frame of information (see the sketch below)
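
A toy version of that ladder, assuming linear encoders and decoders just to show the data flow (the actual implementations use the sparse predictive modules described in the paper):

```python
import numpy as np

# Toy ladder of paired encoder/decoder stages (data-flow sketch only):
# encode upward through the hierarchy, then decode downward, combining each
# stage's encoding with feedback from above, to predict the next input frame.
rng = np.random.default_rng(0)
sizes = [8, 6, 4]   # layer widths from bottom (visible) to top of the ladder
enc = [rng.normal(0, 0.1, (sizes[i + 1], sizes[i])) for i in range(len(sizes) - 1)]
dec = [rng.normal(0, 0.1, (sizes[i], sizes[i + 1])) for i in range(len(sizes) - 1)]

def predict_next(frame):
    """Upward pass of encodings, then downward pass producing a next-frame prediction."""
    acts = [frame]
    for W in enc:                             # upward: visible -> hidden encodings
        acts.append(np.tanh(W @ acts[-1]))
    feedback = np.zeros(sizes[-1])            # top of the ladder receives no feedback
    for i in reversed(range(len(dec))):       # downward: encoding + feedback -> prediction
        feedback = dec[i] @ (acts[i + 1] + feedback)
    return feedback                           # prediction of the next bottom-level frame

print(predict_next(rng.normal(size=sizes[0])).shape)   # (8,)
```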

  • Laukien, Eric, Richard Crowder, and Fergal Byrne. "Feynman Machine: The Universal Dynamical Systems Computer." arXiv preprint arXiv:1609.03971 (2016).