
Colloquium Artificial Intelligence - Matthew Cook, University of Groningen

When: Wednesday 12-10-2022, 15:00 - 16:00
Where: 5161.0165 Bernoulliborg

Title: Learning in networks of relations

Abstract:

Belief propagation, also known as the generalized distributive law, is an algorithm for calculating sums of products. One common application is calculating conditional probabilities, which can be used for inference in AI. The representation used in this algorithm, called a factor graph, can "learn" its internal quantities by "observing" data. By incorporating lateral connections (as occur in the brain), the system will self-organize to reflect the topology of the data (similar to Kohonen's "self-organizing maps", but the dimensionality emerges rather than needing to be built in). This allows the system to naturally become robust to noise, performing signal restoration, cue integration, and decision making.

If the operations involved satisfy the absorptive law in addition to the distributive law, then factor graphs with cycles will reliably and quickly converge (unlike the traditional case of addition and multiplication, where inference becomes intractable once the graph has cycles). In this general case we can think of the factor graph as a network of relations.

Learning can be done via a clustering algorithm like k-means (using a large number of means), by simply remembering a subset of individual data samples ("snapshots"), or by any other method that represents a distribution via samples from that distribution. Latent (unobserved) variables can also be learned in this way, using just Hebbian learning, dynamic competition among units, and homeostasis for the average activity levels of units.

When used for probabilistic inference, the resulting structure does not need to break the symmetry of how random variables are treated, as both P(x|y) and P(y|x) are given by the same matrix (related to Sklar's theorem on copulas, but without the need for an ordered domain). This simplifies belief networks and makes forward models the same as inverse models.

Looking at the structure underlying the sum-of-products representation, we can define "zipper diagrams", along with rewriting rules that can be used to simplify these diagrams. These rules can automatically discover the inherent structure of a world that is observed in a brain-like way.
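To make the sum-of-products idea concrete, here is a minimal sketch of exact inference in a two-variable model p(x, y) = p(x)·p(y|x). The binary domains and all probability values are invented for illustration; they are not from the talk.

```python
# A minimal sketch of belief propagation's sum-of-products computation on a
# toy chain p(x, y) = p(x) * p(y | x) with binary variables. All probability
# values here are invented for illustration.
import numpy as np

p_x = np.array([0.6, 0.4])            # prior over x
p_y_given_x = np.array([[0.9, 0.1],   # rows index x, columns index y
                        [0.2, 0.8]])

# Marginalization is a sum of products: p(y) = sum_x p(x) * p(y | x).
p_y = p_x @ p_y_given_x

# Conditioning on an observation y = 1: p(x | y=1) is proportional
# to p(x) * p(y=1 | x).
unnormalized = p_x * p_y_given_x[:, 1]
p_x_given_y1 = unnormalized / unnormalized.sum()

print("p(y)       =", p_y)
print("p(x | y=1) =", p_x_given_y1)
```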
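The claim about cycles can be illustrated in the (max, min) semiring, where the absorptive law max(a, min(a, b)) = a holds. The sketch below runs synchronous message passing around a hypothetical three-variable cycle; because the updates are monotone on a finite lattice, they settle to a fixed point. The relation matrices and domain size are arbitrary choices for this sketch.

```python
# A sketch of message passing in the (max, min) semiring, where "product" is
# min and "sum" (marginalization) is max, so the absorptive law
# max(a, min(a, b)) = a holds. The three-variable cycle, domain size, and
# random relation matrices are invented for illustration.
import numpy as np

rng = np.random.default_rng(0)
K = 4                                       # domain size of each variable
R = [rng.random((K, K)) for _ in range(3)]  # pairwise relations on the cycle

# One message per directed edge, going clockwise around the cycle.
msgs = [np.ones(K) for _ in range(3)]

for it in range(100):
    new = []
    for i in range(3):
        m_in = msgs[(i - 1) % 3]                    # message arriving at edge i
        combined = np.minimum(m_in[:, None], R[i])  # min-combine with relation
        new.append(combined.max(axis=0))            # max-marginalize the source
    if all(np.array_equal(a, b) for a, b in zip(msgs, new)):
        print(f"fixed point reached after {it} sweeps")
        break
    msgs = new
```

With ordinary addition and multiplication the same loopy updates may oscillate or converge to the wrong answer, which is the contrast the abstract draws.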
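Snapshot-based learning can be approximated as in the sketch below: a relation between x and y is represented by a remembered subset of joint samples, and a query is answered by matching against those snapshots. The synthetic data, the Gaussian similarity kernel, and its width are assumptions added here, not methods stated in the abstract.

```python
# A sketch of "snapshot" learning: a relation between x and y is represented
# by a remembered subset of joint samples. Inference here uses a Gaussian
# similarity kernel, an assumption added for this sketch; the data are
# synthetic.
import numpy as np

rng = np.random.default_rng(1)
x = rng.normal(size=1000)
y = 0.5 * x + 0.1 * rng.normal(size=1000)     # a noisy relation y ~ x / 2
data = np.column_stack([x, y])
snapshots = data[rng.choice(len(data), 50, replace=False)]  # keep 50 samples

def infer_y(x_query, width=0.3):
    """Weight snapshots by how well their x-part matches, then average y."""
    w = np.exp(-((snapshots[:, 0] - x_query) ** 2) / (2 * width ** 2))
    return (w @ snapshots[:, 1]) / w.sum()

print("E[y | x = 1.0] is approximately", infer_y(1.0))   # should be near 0.5
```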
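The symmetry point, that P(x|y) and P(y|x) come from the same matrix, can be seen directly from a joint table: normalizing its columns gives one conditional and normalizing its rows gives the other. The counts below are made up for illustration.

```python
# One joint table yields both conditionals: normalize columns for P(x | y)
# and rows for P(y | x). The counts are made up for illustration.
import numpy as np

counts = np.array([[30.,  5.],   # rows index x, columns index y
                   [10., 55.]])
joint = counts / counts.sum()                           # P(x, y)

p_x_given_y = joint / joint.sum(axis=0, keepdims=True)  # column-normalize
p_y_given_x = joint / joint.sum(axis=1, keepdims=True)  # row-normalize

print("P(x | y):\n", p_x_given_y)
print("P(y | x):\n", p_y_given_x)
```

Since both conditionals are read off the same table, the forward model and the inverse model are the same stored object, which is the simplification of belief networks the abstract points to.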