Bayes’ rule uses a signal and an assumed joint probability distribution of signals and events to estimate the probability of an event of interest. Call this event a first-order event and the signal a first-order signal. Which joint probability distribution is the correct one is itself a second-order event, so second-order events are first-order joint probability distributions over first-order events and signals. The second-order signal consists of a first-order event and a first-order signal.
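As a minimal sketch of this first-order step, the code below assumes a toy joint distribution over two hypothetical events (rain, no rain) and two hypothetical signals (dark sky, clear sky); all names and numbers are illustrative, not taken from the argument above.

```python
# joint[event][signal] = P(event, signal); numbers are illustrative.
joint = {
    "rain":    {"dark_sky": 0.3, "clear_sky": 0.1},
    "no_rain": {"dark_sky": 0.1, "clear_sky": 0.5},
}

def posterior(joint, signal):
    """P(event | signal) via Bayes' rule: P(e, s) / sum over e' of P(e', s)."""
    total = sum(row[signal] for row in joint.values())
    return {event: row[signal] / total for event, row in joint.items()}

print(posterior(joint, "dark_sky"))  # {'rain': 0.75, 'no_rain': 0.25}
```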
If a particular first-order joint probability distribution puts higher probability on the co-occurrence of this first-order event and signal than other first-order distributions do, then observing this event and signal raises the posterior probability of that particular distribution. The update applies Bayes’ rule to second-order events using second-order signals, which requires assuming a joint probability distribution of second-order signals and events. This second-order distribution is over first-order joint distributions and first-order signal-event pairs.
A second-order signal-event pair is a third-order signal, and a second-order distribution is a third-order event. The third-order distribution is then over second-order distributions and signal-event pairs.
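A sketch of the second-order update just described, assuming a hypothetical finite set of two candidate first-order joint distributions (model_A, model_B) with a uniform second-order prior; the third-order step would repeat the same computation one level up, with these candidates as the possible outcomes.

```python
# Candidate first-order joint distributions over (event, signal) pairs.
candidates = {
    "model_A": {("rain", "dark_sky"): 0.3, ("rain", "clear_sky"): 0.1,
                ("no_rain", "dark_sky"): 0.1, ("no_rain", "clear_sky"): 0.5},
    "model_B": {("rain", "dark_sky"): 0.1, ("rain", "clear_sky"): 0.2,
                ("no_rain", "dark_sky"): 0.3, ("no_rain", "clear_sky"): 0.4},
}

# The assumed second-order marginal over the candidate joints.
prior = {"model_A": 0.5, "model_B": 0.5}

def update(prior, candidates, pair):
    """Bayes' rule one level up: P(model | pair) is proportional to
    P(model) * P(pair | model), where P(pair | model) is read off the model."""
    unnorm = {m: prior[m] * candidates[m][pair] for m in prior}
    total = sum(unnorm.values())
    return {m: w / total for m, w in unnorm.items()}

print(update(prior, candidates, ("rain", "dark_sky")))
# {'model_A': 0.75, 'model_B': 0.25}
```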
A joint distribution of any order n may be decomposed into a marginal distribution over events and a conditional distribution of signals given events, where both the signals and the events are of the same order n. The conditional distribution of any order n >= 2 is known by definition: the n-order event is the joint probability distribution of (n-1)-order signals and events, so the probability of an (n-1)-order signal-event pair (i.e., the n-order signal) given the n-order event (i.e., the (n-1)-order distribution) is the one listed in that (n-1)-order distribution.
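To make the “known by definition” point concrete at order n = 2, the following sketch (same hypothetical toy setup as above) shows that the second-order conditional is a table lookup in the candidate distribution, and the second-order joint is the product of that conditional with the second-order marginal.

```python
# At n = 2 the conditional P(pair | model) is a lookup in the model itself,
# because the second-order event IS a first-order joint distribution.
model_A = {("rain", "dark_sky"): 0.3, ("rain", "clear_sky"): 0.1,
           ("no_rain", "dark_sky"): 0.1, ("no_rain", "clear_sky"): 0.5}
p_model_A = 0.5  # second-order marginal: prior probability of model_A

def conditional(model, pair):
    """P(second-order signal | second-order event): a table lookup."""
    return model[pair]

# Second-order joint = second-order marginal * second-order conditional.
print(p_model_A * conditional(model_A, ("rain", "dark_sky")))  # 0.15
```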
The marginal distribution over events was assumed above, but it may instead be formulated as a new event of interest to be learned. The new signal in this case is the occurrence of the original event (not the marginal distribution). The empirical frequencies of the original events are a sufficient statistic for a sequence of such new signals. To apply Bayes’ rule, a joint distribution over the new signals and the new events (the distributions of the original events) needs to be assumed. This joint distribution may itself be learned from among many candidates, over which there is a second-order joint distribution. Extending the Bayesian updating to higher orders proceeds as above. The joint distribution may again be decomposed into a conditional over signals and a marginal over events. The conditional is known by definition for all orders, now including the first, because the probability of a signal is the probability of occurrence of an original event, which is given by the marginal distribution (the new event) over the original events.
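One standard way to instantiate this step, not dictated by the argument itself, is a Dirichlet prior over the marginal distribution of events; the sketch below assumes uniform pseudocounts and shows that the empirical counts are all the update needs.

```python
from collections import Counter

events = ["rain", "no_rain"]
prior_pseudocounts = {"rain": 1.0, "no_rain": 1.0}  # uniform Dirichlet prior

observed = ["rain", "no_rain", "rain", "rain"]  # the sequence of new signals
counts = Counter(observed)  # sufficient statistic: empirical frequencies

# Posterior mean of the event distribution under the assumed Dirichlet prior.
total = sum(prior_pseudocounts.values()) + len(observed)
posterior_mean = {e: (prior_pseudocounts[e] + counts[e]) / total for e in events}
print(posterior_mean)  # {'rain': 0.666..., 'no_rain': 0.333...}
```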
Returning to the discussion of learning the joint distributions: only the first-order events affect decisions, so only the marginal distribution over first-order events matters directly. The joint distributions of higher orders and the first-order conditional distribution matter only through their influence on updating the first-order marginal distribution.
The marginal of order n is the distribution over the (n-1)-order joint distributions. After reducing compound lotteries, the marginal of order n becomes the average of the (n-1)-order joint distributions, weighted by the probabilities the n-order marginal assigns to them. This average is itself an (n-1)-order joint distribution, which may be split into an (n-1)-order marginal and conditional, where if n-1 >= 2, the conditional is known. If the conditional is known, then the marginal may again be reduced as a compound lottery. Thus the hierarchy of marginal distributions of all orders collapses to the first-order joint distribution. This takes us back to the start: learning the joint distribution. The discussion above about learning a (second-order) marginal distribution (the first-order joint distribution) also applies here. The empirical frequencies of signal-event pairs are then the signals. Applying Bayes’ rule with some prior over joint distributions constitutes regularisation of the empirical frequencies, preventing overfitting to limited data.
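The reduction of the compound lottery amounts to weighted averaging, as in this sketch (same hypothetical candidates as earlier, with illustrative posterior weights):

```python
candidates = {
    "model_A": {("rain", "dark_sky"): 0.3, ("rain", "clear_sky"): 0.1,
                ("no_rain", "dark_sky"): 0.1, ("no_rain", "clear_sky"): 0.5},
    "model_B": {("rain", "dark_sky"): 0.1, ("rain", "clear_sky"): 0.2,
                ("no_rain", "dark_sky"): 0.3, ("no_rain", "clear_sky"): 0.4},
}
belief = {"model_A": 0.75, "model_B": 0.25}  # second-order marginal (illustrative)

# Reduce the compound lottery: average the candidate joints under the belief.
pairs = next(iter(candidates.values()))
reduced = {p: sum(belief[m] * candidates[m][p] for m in belief) for p in pairs}
print(reduced[("rain", "dark_sky")])  # 0.75*0.3 + 0.25*0.1 = 0.25
```

The result is itself a first-order joint distribution, which is exactly the collapse of the hierarchy described above.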
Regularisation is itself learned from previous learning tasks, specifically from the risk of overfitting in similar tasks, i.e. from how non-representative a limited data set generally is. Learning regularisation in turn requires a prior belief over the joint distributions of samples and population averages. Applying regularisation learned from past tasks to the current one relies on a prior belief over how similar different learning tasks are.
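A crude empirical-Bayes sketch of this, with entirely hypothetical past-task data: choose the regularisation strength (the prior pseudocount weight) that would have best shrunk each past task’s early sample frequency toward its eventual long-run frequency.

```python
# Each past task: (early-sample frequency from n = 10 observations,
#                  long-run frequency); all values hypothetical.
past_tasks = [(0.9, 0.7), (0.2, 0.35), (0.6, 0.55), (0.0, 0.15)]
n = 10            # early-sample size in each past task
prior_mean = 0.5  # assumed common prior mean across tasks

def shrunk(freq, alpha):
    """Posterior-mean style estimate: blend sample frequency with prior mean."""
    return (n * freq + alpha * prior_mean) / (n + alpha)

def loss(alpha):
    """How badly the shrunk estimate missed the long-run frequency, over tasks."""
    return sum((shrunk(f, alpha) - truth) ** 2 for f, truth in past_tasks)

best = min((loss(a), a) for a in range(0, 51))[1]
print(best)  # the regularisation strength carried to the next, similar task
```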