
Bayesian updating of higher-order joint probabilities

Bayes’ rule uses a signal and the assumed joint probability distribution of signals and events to estimate the probability of an event of interest. Call this event a first-order event and the signal a first-order signal. Which joint probability distribution is the correct one is a second-order event, so second-order events are first-order probability distributions over first-order events and signals. The second-order signal consists of a first-order event and a first-order signal.

If a particular first-order joint probability distribution puts higher probability on the co-occurrence of this first-order event and signal than other first-order distributions do, then observing this event-signal pair increases the likelihood of that particular distribution. The increase comes from applying Bayes’ rule to second-order events and second-order signals, which requires assuming a joint probability distribution of second-order signals and events. This second-order distribution is over first-order joint distributions and first-order signal-event pairs.
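
As a minimal sketch of this second-order update (the candidate distributions, the prior and the observation below are illustrative assumptions, not anything from the literature):

```python
import numpy as np

# Two candidate first-order joint distributions over (event, signal) pairs,
# each a 2x2 table with rows = events and columns = signals. Second-order
# events are these candidate tables; a second-order signal is one observed
# (event, signal) pair. All numbers are illustrative.
candidates = [
    np.array([[0.4, 0.1],    # candidate A: event and signal co-occur often
              [0.1, 0.4]]),
    np.array([[0.25, 0.25],  # candidate B: event and signal independent
              [0.25, 0.25]]),
]
prior = np.array([0.5, 0.5])  # second-order prior over the candidates

def update(prior, event, signal):
    # Bayes' rule: the likelihood of a candidate is the probability it
    # assigns to the observed (event, signal) pair.
    likelihood = np.array([c[event, signal] for c in candidates])
    posterior = prior * likelihood
    return posterior / posterior.sum()

posterior = update(prior, event=0, signal=0)
print(posterior)  # ~[0.615, 0.385]: the co-occurrence favours candidate A
```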

The third-order distribution is over second-order distributions and signal-event pairs. A second-order signal-event pair is a third-order signal. A second-order distribution is a third-order event.

A joint distribution of any order n may be decomposed into a marginal distribution over events and a conditional distribution of signals given events, where both the signals and the events are of the same order n. The conditional distribution of any order n>=2 is known by definition, because the n-order event is the joint probability distribution of (n-1)-order signals and events; thus the probability of an (n-1)-order signal-event pair (i.e., the n-order signal) given the n-order event (i.e., the (n-1)-order distribution) is the one listed in that (n-1)-order distribution.
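
In symbols (notation mine): writing s_n for the n-order signal and e_n for the n-order event,

```latex
P_n(s_n, e_n) = P_n(s_n \mid e_n)\, P_n(e_n),
\qquad
P_n(s_n \mid e_n) = e_n(s_{n-1}, e_{n-1}) \quad \text{for } n \ge 2,
```

since for n >= 2 the event e_n is itself an (n-1)-order joint distribution and the signal is the pair s_n = (s_{n-1}, e_{n-1}).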

The marginal distribution over events is an assumption above, but may be formulated as a new event of interest to be learned. The new signal in this case is the occurrence of the original event (not the marginal distribution). The empirical frequencies of the original events are a sufficient statistic for a sequence of new signals. To apply Bayes’ rule, a joint distribution over signals and the distributions of events needs to be assumed. The joint distribution itself may be learned from among many, over which there is a second-order joint distribution. Extending the Bayesian updating to higher orders proceeds as above. The joint distribution may again be decomposed into a conditional over signals and a marginal over events. The conditional is known by definition for all orders, now including the first, because the probability of a signal is the probability of occurrence of an original event, which is given by the marginal distribution (the new event) over the original events.
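
A minimal sketch of learning such a marginal, assuming a Dirichlet prior (a standard conjugate choice, my assumption rather than anything specified above):

```python
import numpy as np

# Learning the marginal distribution over K original events. The new signals
# are occurrences of the original events; their empirical counts are a
# sufficient statistic, so the posterior depends on the signal sequence only
# through them. Prior and data are illustrative.
K = 3
alpha = np.ones(K)             # symmetric Dirichlet prior over the marginal
signals = [0, 2, 2, 1, 2, 0]   # observed occurrences of the original events
counts = np.bincount(signals, minlength=K)
posterior_alpha = alpha + counts
posterior_mean = posterior_alpha / posterior_alpha.sum()
print(posterior_mean)  # regularised empirical frequencies: [3, 2, 4] / 9
```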

Returning to the discussion of learning the joint distributions, only the first-order events affect decisions, so only the marginal distribution over first-order events matters directly. The joint distributions of higher orders and the first-order conditional distribution only matter through their influence on updating the first-order marginal distribution.

The marginal of order n is the distribution over the (n-1)-order joint distributions. After reducing compound lotteries, the marginal of order n is the average of the (n-1)-order joint distributions. This average is itself a (n-1)-order joint distribution, which may be split into an (n-1)-order marginal and conditional, where if n-1>=2, the conditional is known. If the conditional is known, then the marginal may be again reduced as a compound lottery. Thus the hierarchy of marginal distributions of all orders collapses to the first-order joint distribution. This takes us back to the start – learning the joint distribution. The discussion above about learning a (second-order) marginal distribution (the first-order joint distribution) also applies. The empirical frequencies of signal-event pairs are the signals. Applying Bayes’ rule with some prior over joint distributions constitutes regularisation of the empirical frequencies to prevent overfitting to limited data.
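
A sketch of the reduction, reusing the illustrative candidates and posterior weights from the earlier snippet: the posterior-weighted average of the candidate first-order joints is itself a first-order joint distribution.

```python
import numpy as np

# Reducing the compound lottery: averaging the candidate first-order joint
# distributions by their posterior weights yields a single first-order
# joint distribution. Candidates and weights are illustrative.
candidates = [
    np.array([[0.4, 0.1], [0.1, 0.4]]),
    np.array([[0.25, 0.25], [0.25, 0.25]]),
]
weights = np.array([0.615, 0.385])  # e.g. the posterior from the earlier sketch
predictive = sum(w * c for w, c in zip(weights, candidates))
print(predictive)        # a valid first-order joint distribution
print(predictive.sum())  # 1.0
```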

Regularisation is itself learned from previous learning tasks, specifically the risk of overfitting in similar learning tasks, i.e. how non-representative a limited data set generally is. Learning regularisation in turn requires a prior belief over the joint distributions of samples and population averages. Applying regularisation learned from past tasks to the current one uses a prior belief over how similar different learning tasks are.

How to learn whether an information source is accurate

Two sources may be used to check each other over time. One of these sources may be your own senses, which show whether the event that the other source predicted occurred or not. The observation of an event is really another signal about the event. It is a noisy signal because your own eyes may lie (optical illusions, deepfakes).

First, one source sends a signal about the event; then the second source sends its own signal. You will never know whether the event actually occurred, but the second source is the aggregate of all the future information you receive about the event, so it may be very accurate. The second source may send many signals in sequence about the event, yielding more information about the first source over time. Then the process repeats for a second event, a third, etc. This is how belief about the trustworthiness of a source is built.

You cannot learn the true accuracy of a source, because the truth is unavailable to your senses, so you cannot compare a source’s signals to the truth. You can only learn the consistency of different sources of sensory information. Knowing the correlation between various sensory sources is both necessary and sufficient for decision making, because your objective function (utility or payoff) is your perception of successfully achieving your goals. If your senses are deceived so that you believe you have achieved what you sought, but actually have not, then you get the feeling of success. If your senses are deceived into telling you that you have failed, then you do not feel success even if you actually succeeded. The problem with deception arises purely from the positive correlation between the deceit and the perception of deceit. If deceit increases the probability that you later perceive you have been deceived and are unhappy about that perception, then deceit may reduce your overall utility despite temporarily increasing it. If you never suspect the deception, then your happiness is as if the deception were the truth.
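
A toy illustration of the last point (numbers and notation mine): if deceit yields an immediate perceived gain g, but with probability p you later perceive the deception and suffer disutility d, then the expected utility change from the deceit is

```latex
\Delta U = g - p\, d ,
```

which is negative when pd > g, and equals g (as if the deception were the truth) when the deception is never suspected (p = 0).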

Your senses send signals to your brain. We can interpret these signals as information about which hypothetical state of the world has occurred – we posit that there exists a world which may be in different states with various probabilities and that there is a correlation between the signals and these states. Based on the information, you update the probabilities of the states and choose a course of action. Actions result in probability distributions over different future sensations, which may be modelled as a different sensation in each state of the world, each state having a probability attached. (Later we may remove the states of the world from the model and talk about a function from past perceptions and actions into future perceptions. The past is only accessible through memory. Memory is a current perception, so we may also remove time from the model.)

You prefer some future sensations to others. These need not be sensory pleasures. These could be perceptions of having improved the world through great toil. You would prefer to choose an action that results in preferable sensations in the future. Which action this is depends on the state of the world.

To estimate the best action (the one yielding the most preferred sensations), you use past sensory signals. The interpretation of these signals depends on the assumed or learned correlation between the signals and the state. The assumption may be instinctive from birth. The learning is really about how sensations at a point in time are correlated with the combination of sensations and actions before that point. An assumption that the correlation is stable over time enables you to use past correlation to predict future correlation. This assumption in turn may be instinctive or learned.

The events most people are interested in distinguishing are of the form “action A results in the most preferred sensations”, “action B causes the most preferred sensations”, “action A yields the least preferred sensations”. Any event that is useful to know is of a similar form by Blackwell’s theorem: information is useful if and only if it changes decisions.

The usefulness of a signal source depends on how consistent the signals it gives about the action-sensation links (events) are with your future perceptions. These future perceptions are the signals from the second source – your senses – against which the first source is checked. The signals of the second source have the form “memory of action A and a preferred sensation at present”. Optimal learning about the usefulness of the first source uses Bayes’ rule and a prior probability distribution on the correlations between the first source and the second. The events of interest in this case are the levels of correlation. A signal about these levels is whether the first source gave a signal that coincided with later sensory information.

If the first source recommended a “best action” that later yielded a preferred sensation, then this increases the probability of high positive correlation between the first source and the second on average. If the recommended action was followed by a negative sensation, then this raises the probability of a negative correlation between the sources. Any known correlation is useful information, because it helps predict the utility consequences of actions.
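
A minimal sketch of this updating, assuming a Beta-Bernoulli model (my choice of conjugate prior) for the rate at which the first source’s signals coincide with later sensory information:

```python
# A Beta prior over the rate at which the first source's recommendations
# coincide with later preferred sensations. Rates near 1 indicate a reliable
# source, rates near 0 a negatively correlated (contrarian) one. Data are
# illustrative.
a, b = 1.0, 1.0                # uniform Beta(1, 1) prior over the rate
outcomes = [1, 1, 0, 1, 1, 1]  # 1 = recommended action preceded a preferred sensation
a += sum(outcomes)
b += len(outcomes) - sum(outcomes)
print(a / (a + b))  # posterior mean rate: 6/8 = 0.75
```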

Counterfactuals should be mentioned as a side note. Even if an action A resulted in a preferred sensation, a different action B might have led to an even better sensation in the counterfactual universe where B was chosen instead. Of course, B might equally well have led to a worse sensation. Counterfactuals require a model to evaluate – what the output would have been after a different input depends on the assumed causal chain from inputs to outputs.

Whether two sources are separate or copies is also a learnable event.

P-value cannot be less than 1/1024 in ten binary choices

Baez-Mendoza et al (2021) claim that for rhesus macaques choosing which of two others to reward in each trial, “the difference in the other’s reputation based on past interactions (i.e., how likely they were to reciprocate over the past 20 trials) had a significant effect on the animal’s choices [odds ratio (OR) = 1.54, t = 9.2, P = 3.5 × 10^-20; fig. S2C]”.

In 20 trials, there are ten chances to reciprocate if I understand the meaning of reciprocation in the study (monkey x gives a reward to the monkey who gave x a reward in the last trial). Depending on interpretation, there are 6-10 chances to react to reciprocation. Six if three trials are required for each reaction: the trial in which a monkey acts, the trial in which another monkey reciprocates and the trial in which a monkey reacts to the reciprocation. Ten if the reaction can coincide with the initial act of the next action-reciprocation pair.

Under the null hypothesis that the monkey allocates rewards randomly, the probability of giving the reward to the monkey who previously reciprocated the most 10 times out of 10 is 1/1024. The p-value is the probability, under the null hypothesis, of an outcome at least as extreme as the one observed. So the p-value cannot be smaller than about 0.001 for a 20-trial session, which offers at most 10 chances to react to reciprocation. The p-value cannot be 3.5*10^-20 as Baez-Mendoza et al (2021) claim. Their supplementary material does not offer an explanation of how this p-value was calculated.
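
The bound is easy to verify numerically; a sketch using scipy:

```python
from scipy.stats import binomtest

# Under the null of random 50/50 reward allocation, the most extreme outcome
# in ten binary choices is 10 out of 10, so the one-sided p-value is at
# least 0.5**10 = 1/1024.
result = binomtest(k=10, n=10, p=0.5, alternative="greater")
print(result.pvalue)  # 0.0009765625 = 1/1024
print(0.5 ** 20)      # ~9.5e-07: the bound under the 20-chances interpretation
```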

Interpreting reciprocation or trials differently so that 20 trials offer 20 chances to reciprocate, the minimal p-value is 1/1048576, approximately 10^-6, again far from 3.5*10^-20.

A possible explanation is the sentence “The group performed an average of 105 ± 8.7 (mean ± SEM) trials per session for a total of 22 sessions.” If the monkey has a chance to react to past reciprocation in a third of the 105*22 ≈ 2310 trials, then the p-value can indeed be of the order 10^-20. It would be interesting to know how the authors divide the trials into the reputation-building and reaction blocks.

Contraception increases high school graduation – questionable numbers

In Stevenson et al 2021 “The impact of contraceptive access on high school graduation” in Science Advances, some numbers do not add up. In the Supplementary Material, Table S1 lists the pre-intervention Other, non-Hispanic cohort size in the 2010 US Census and 2009 through 2017 1-year American Community Survey data as 300, but Table S2 as 290 = 100+70+30+90 (Black + Asian + American Indian + Other/Multiple Races). The post-intervention cohort size is 200 in Table S1, but 230 in Table S2, so the difference is in the other direction (S2 larger) and cannot be due to the same adjustment of one Table for both cohorts, e.g. omitting some racial group or double-counting multiracial people. The main conclusions still hold with the adjusted numbers.

It is interesting that the graduation rate for the Other race group is omitted from the main paper and the Supplementary Material Table S3, because by my calculations, in Colorado, the Other graduation rate decreased after the CFPI contraception access expansion, but in the Parallel Trends states (the main comparison group of US states that the authors use), the Other graduation rate increased significantly. The one missing row in the Table is exactly the one in which the results are the opposite to the rest of the paper and the conclusions of the authors.

Improving the accuracy of waist circumference measurement

The waistline needs to be measured to determine clothing size and to evaluate health, but the instructions for doing so are vague: keep the stomach neutral, neither sucked in nor bulged out; hold the measuring tape neither too loosely nor too tightly. A person’s waist expands and contracts with the breath, so at which point of the breath cycle should the measurement be taken? How much tension should the measuring tape have?

Averaging several measurements would be more accurate than a single vaguely defined one, both in the sense of replication across different measurers and in the sense of corresponding to the physical quantity of interest (health status, tightness of the clothes). One simple way is to average the measured perimeter of a maximally sucked-in gut and a maximally pushed-out gut.

A better measure, but one quite difficult to take, is the average waist circumference across the whole cycle of breathing in and out, over several cycles. A device for this would be a loop of flexible measuring tape pulled into a roll by a spring, similar to a metal construction measuring tape. The roll would contain electronics that continuously record how much tape is in the roll, and therefore the length of the loop outside it. The length data could be averaged to obtain the average waistline.

A more modern method is to use two cameras pointed at the person from two sides at waist height, recording a 3D video of the waist over the breath cycle. The video could then be averaged on a computer to find the mean volume. Whole-body volume could be determined similarly – no need for Archimedes’ Eureka method of submerging the body in a bathtub with a volume scale on it. The volume and the weight determine the density of the body, which gives partial information about its fat percentage.

Identifying unmeasurable effort in contests

To distinguish unmeasurable effort from unmeasurable exogenous factors like talent or environmental interference in contests, assumptions are needed, even for partial identification when overall performance can be objectively measured (e.g., chess move quality evaluated by a computer). Combining one of the following assumptions with the additive separability of effort and the exogenous factors provides sign restrictions on coefficient estimates. Additive separability means that talent or the environment changes performance the same way at any effort level.

One such identifying assumption is that effort is greatest when it makes the most difference – against an equal opponent. By contrast, effort is lower against much better and much worse opponents.

A similar identifying assumption is that if there is personal conflict between some contest participants but not others, then effort is likely higher against a hated opponent than a neutral one.

The performance of a given contestant against an equal opponent, compared with performance against an unequal one, is a lower bound on how much effort affects performance. Similarly, performance against a hated rival compared with performance against a neutral contestant is a lower bound on the effect of effort. The lower bound is not the total influence of effort, because even against an unequal neutral opponent, effort is still positive.
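
A minimal formalisation of the lower bound under additive separability (notation mine), with performance p equal to effort e plus the exogenous term a:

```latex
p_{\text{equal}} - p_{\text{unequal}}
  = (e_{\text{equal}} + a) - (e_{\text{unequal}} + a)
  = e_{\text{equal}} - e_{\text{unequal}}
  < e_{\text{equal}} \quad \text{when } e_{\text{unequal}} > 0,
```

so the observed performance gap understates the total effect of effort, and the exogenous term cancels because the contestant is the same.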

Computer vision training sets of photos are endogenous

In principle, every pixel could be independent of any other, so the number of possible photos is the number of colours raised to the power of the number of pixels – astronomically large (a one-megapixel photo with 256 intensity levels per pixel already has 256^1000000 possible configurations). No training data set is large enough to cover these photo possibilities many times over, as required for statistical analysis, of which machine learning is a subfield. The problem is solved by restricting attention to a small subset of possible photos. In this case, there is a reasonable number of possible photos, which can be covered by a reasonably large training data set.

Useful photos on any topic usually contain just one main object, such as a face, with fewer than 100 secondary objects (furniture, clothes, equipment). There is a long right tail – some useful photos have dozens of instances of the main object, like a group photo full of faces, but I do not know of a photo with a thousand distinguishable faces. Photos of mass events may show ten thousand people, but lack the resolution to make any individual face useful.

Only selected photos are worth analysing. Only photos sufficiently similar to these are worth putting in a computer vision training dataset. The sample selection occurs both on the input and the output side: few of the billions of pixel arrangements actually occur as photos to be classified by machine vision and most of the training photos are similar to those. There are thus fewer outputs to predict than would be generated from a uniform random distribution and more inputs close to those outputs than would occur if input data was uniform random. Both speed learning.

When photo resolution improves, more objects of interest may appear in photos without losing usefulness to blur. Then such photos become available in large numbers and are added to the datasets.

Moon phase and sleep correlation is not quite a sine wave

Casiraghi et al. (2021) in Science Advances (DOI: 10.1126/sciadv.abe0465) show that human sleep duration and onset depend on the phase of the moon. Their interpretation is that light availability during the night caused humans to adapt their sleep over evolutionary time. Casiraghi et al. fit a sine curve to both sleep duration and onset as functions of the day in the monthly lunar cycle, but their Figure 1 A, B for the full sample and the blue and orange curves for the rural groups in Figure 1 C, D show a statistically significant deviation from a sine function. Instead of same-sized symmetric peaks and troughs, sleep duration has two peaks with a small trough between them, then a large sharp trough which falls more steeply than it rises, then two peaks again. Sleep onset has a vertically reflected version of this pattern. These features are statistically significant, based on the confidence bands Casiraghi and coauthors have drawn in Figure 1.

The significant departure of sleep patterns from a sine wave calls into question the interpretation that light availability over evolutionary time caused these patterns. What fits the interpretation of Casiraghi et al. is that sleep duration is shortest right before the full moon; what does not fit is that duration is longest right after the full and new moons, yet shorter during the waning crescent between them.

It would better summarise the data to use the first four terms of a Fourier series instead of just the first term. There seems little danger of overfitting, given N=69 and t>60.
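
A sketch of such a fit by ordinary least squares (the `day` and `duration` arrays are placeholders, not the study’s data; the lunar period is taken as 29.53 days):

```python
import numpy as np

# Fit the first four Fourier terms of the lunar cycle to sleep duration.
# Harmonic 1 alone is the pure sine fit; harmonics 1-4 can capture the
# asymmetric peaks and troughs described above.
period = 29.53
day = np.arange(60)  # placeholder observation days
duration = np.random.default_rng(0).normal(7.0, 0.5, size=day.size)  # placeholder hours

X = np.column_stack(
    [np.ones(day.size)]
    + [f(2 * np.pi * k * day / period) for k in range(1, 5) for f in (np.sin, np.cos)]
)
coef, *_ = np.linalg.lstsq(X, duration, rcond=None)
fitted = X @ coef  # the four-harmonic summary of the cycle
```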

A questionable choice of the authors is to plot the sleep duration and onset of only the 35 best-fitting participants in Figure 2. A more honest choice yielding the same number of plots would pick every other participant in the ranking from the best fit to the worst.

In the section Materials and Methods, Casiraghi et al. fitted both a 15-day and a 30-day cycle to test for the effect of the Moon’s gravitational pull on sleep. The 15-day component was weaker in urban communities than in rural ones, but any effect of gravity should be the same in both. By contrast, the effect of moonlight should be weaker in urban communities, yet the urban community data (Figure 1 C, D green curve) fit a simple sine curve better than the rural data. It seems strange that sleep in urban communities would correlate more strongly with the amount of moonlight, as Figure 1 shows.

Clinical trials of other drugs in other species to predict a drug’s effect in humans

Suppose we want to know whether a drug is safe or effective for humans, but do not have data on what it does in humans, only on its effects in mice, rats, rhesus macaques and chimpanzees. In general, we can predict the effect of the drug on humans better with the animal data than without it. Information on “nearby” realisations of a random variable (the effect of the drug) helps predict the realisation we are interested in. The method should weight nearby observations more than observations further away when predicting. For example, a positive effect of the drug in animals predicts a positive effect in humans, and the larger the effect in animals, the greater the predicted effect in humans.
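
A minimal sketch of such distance-weighted prediction (the distances, effects and kernel bandwidth below are hypothetical placeholders):

```python
import numpy as np

# Distance-weighted prediction of the human effect from animal effects.
# All numbers are made up for illustration.
species = ["mouse", "rat", "macaque", "chimpanzee"]
distance_to_human = np.array([0.20, 0.19, 0.06, 0.01])  # stand-in genetic distances
effect = np.array([1.8, 1.6, 0.9, 0.7])                 # made-up observed effects

weights = np.exp(-distance_to_human / 0.05)  # nearer species get more weight
predicted_human_effect = weights @ effect / weights.sum()
print(predicted_human_effect)  # ~0.79, dominated by the closest species
```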

A limitation of weighting is that it does not take into account the slope of the effect when moving from further observations to nearer ones. For example, a very large effect of the drug in mice and rats but a small effect in macaques and chimpanzees predicts the same effect in humans as a small effect in rodents and a large one in monkeys and apes, if the weighted average effect across animals is the same in both cases. However, intuitively, the first case should have a smaller predicted effect in humans than the second, because moving to animals more similar to humans, the effect becomes smaller in the first case but larger in the second. The idea is similar to a proportional-integral-derivative (PID) controller in engineering: the weighted average plays the role of the proportional term and the slope that of the derivative term.

The slope of the effect of the drug is extra information that increases the predictive power of the method if the assumption that the similarity of effects decreases in genetic distance holds. Of course, if this assumption fails in the data, then imposing it may result in bias.

Assumptions may be imposed on the method using constrained estimation. One constraint is the monotonicity of the effect in some measure of distance between observations. The method may allow for varying weights by adding interaction terms (e.g., the effect of the drug times genetic similarity). The interaction terms unfortunately require more data to estimate.

Extraneous information about the slope of the effect helps justify the constraints and reduces the need for adding interaction terms, thus decreases the data requirement. An example of such extra information is whether the effects of other drugs that have been tested in these animals as well as humans were monotone in genetic distance. Using information about these other drugs imposes the assumption that the slopes of the effects of different drugs are similar. The similarity of the slopes should intuitively depend on the chemical similarity of the drugs, with more distant drugs having more different profiles of effects across animals.

The similarity of species in terms of the effects drugs have on them need not correspond to genetic similarity or the closeness of any other observable characteristic of these organisms, although the two often coincide. The similarity of interest is how similar the effects of the drug are across these species. Estimating this similarity based on the similarity of other drugs across these animals may also be done by a weighted regression, perhaps with constraints or added interaction terms. More power for the estimation may be obtained from simultaneous estimation of the drug-effect-similarity of the species and the effect of the drug in humans. An analogy is demand and supply estimation in industrial organisation, where observations about each side of the market give information about the other side. Another analogy is duality in mathematics, in this case between the drug-effect-similarity of the species and the given drug’s similarity of effects across these species.

The similarity of drugs in terms of their effects on each species need not correspond to chemical similarity, although it often does. The similarity of interest for the drugs is how similar their effects are in humans, and also in other species.

The inputs into the joint estimation of drug similarity, species similarity and the effect of the given drug in humans are the genetic similarity of the species, the chemical similarity of the drugs and the effects for all drug-species pairs that have been tested. In the matrix where the rows are the drugs and the columns the species, we are interested in filling in the cell in the row “drug of interest” and the column “human”. The values in all the other cells are informative about this cell. In other words, there is a benefit from filling in these other cells of the matrix.
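
One way to make the matrix framing concrete is low-rank matrix completion, a standard heuristic for filling such tables (all values below are made up):

```python
import numpy as np

# Fill the drug x species effect matrix by iterative rank-1 (low-rank)
# imputation. np.nan marks untested (species, drug) pairs, including the
# cell of interest: (drug of interest, human).
Y = np.array([
    [np.nan, 0.7, 0.9, 1.6],  # drug of interest; human column untested
    [0.5,    0.6, 0.8, 1.4],  # a similar approved drug, tested everywhere
    [0.1,    0.2, 0.2, 0.3],  # a dissimilar drug
])  # columns: human, chimpanzee, macaque, rat

filled = np.where(np.isnan(Y), np.nanmean(Y), Y)
for _ in range(100):
    U, s, Vt = np.linalg.svd(filled, full_matrices=False)
    low_rank = s[0] * np.outer(U[:, 0], Vt[0])   # best rank-1 approximation
    filled = np.where(np.isnan(Y), low_rank, Y)  # keep observed cells fixed
print(filled[0, 0])  # imputed effect of the drug of interest in humans
```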

Given the duality of drugs and species in the drug effect matrix, there is information to be gained from running clinical trials of chemically similar human-use-approved drugs in species in which the drug of interest has been tested but the chemically similar ones have not. The information is directly about the drug-effect-similarity of these species to humans, which indirectly helps predict the effect of the drug of interest in humans from the effects of it in other species. In summary, testing other drugs in other species is informative about what a given drug does in humans. Adapting methods from supply and demand estimation, or otherwise combining all the data in a principled theoretical framework, may increase the information gain from these other clinical trials.

Extending the reasoning, each (species, drug) pair has some unknown similarity to the (human, drug of interest) pair. A weighted method to predict the effect in the (human, drug of interest) pair may gain power from constraints that the similarity of different (species, drug) pairs increases in the genetic closeness of the species and the chemical closeness of the drugs.

Define Y_{sd} as the effect of drug d in species s. Define X_{si} as the observable characteristic (gene) i of species s. Define X_{dj} as the observable characteristic (chemical property) j of drug d. The simplest method is to regress Y_{sd} on all the X_{si} and X_{dj} and use the coefficients to predict the Y_{sd} of the (human, drug of interest) pair. If there are many characteristics i and j and few observations Y_{sd}, then variable selection or regularisation is needed. Constraints may be imposed, like X_{si}=X_i for all s and X_{dj}=X_j for all d.
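
A sketch of that regression with regularisation, using synthetic placeholder data (the feature counts, penalty and every number below are assumptions for illustration):

```python
import numpy as np
from sklearn.linear_model import Lasso

# Each observation is a tested (species, drug) pair whose features
# concatenate the species characteristics X_si with the drug
# characteristics X_dj. The L1 penalty selects variables when
# characteristics outnumber observations.
rng = np.random.default_rng(0)
n_pairs, n_genes, n_chem = 30, 40, 25
X_species = rng.normal(size=(n_pairs, n_genes))  # X_si for each pair's species
X_drug = rng.normal(size=(n_pairs, n_chem))      # X_dj for each pair's drug
X = np.hstack([X_species, X_drug])
y = X[:, 0] - 0.5 * X[:, n_genes] + rng.normal(scale=0.1, size=n_pairs)  # Y_sd

model = Lasso(alpha=0.05).fit(X, y)
x_human_drug = rng.normal(size=n_genes + n_chem)  # features of (human, drug of interest)
print(model.predict(x_human_drug[None, :]))       # predicted Y for the pair of interest
```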

Fused LASSO (least absolute shrinkage and selection operator), clustered LASSO and prior LASSO seem related to the above method.

Leader turnover due to organisation performance is underestimated

Berry and Fowler (2021) “Leadership or luck? Randomization inference for leader effects in politics, business, and sports” in Science Advances propose a method they call RIFLE for testing the null hypothesis that leaders have no effect on organisation performance. The method is robust to serial correlation in outcomes and leaders, but not to endogenous leader turnover, as Berry and Fowler honestly point out. The endogeneity is that the organisation’s performance influences the probability that the leader is replaced (economic growth causes voters to keep a politician in office, losing games causes a team to replace its coach).

To test whether such endogeneity is a significant problem for their results, Berry and Fowler regress the turnover probability on various measures of organisational performance. They find small effects, but this underestimates the endogeneity problem, because Berry and Fowler use linear regression, forcing the effect of performance on turnover to be monotone and linear.

If leader turnover is increased by both success (get a better job elsewhere if the organisation performs well, so quit voluntarily) and failure (fired for the organisation’s bad performance), then the relationship between turnover and performance is U-shaped. Average leaders keep their jobs, bad and good ones transition elsewhere. This is related to the Peter Principle that an employee is promoted to her or his level of incompetence. A linear regression finds a near-zero effect of performance on turnover in this case even if the true effect is large. How close the regression coefficient is to zero depends on how symmetric the effects of good and bad performance on leader transition are, not how large these effects are.
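
A simulation of this point (all numbers illustrative): turnover depends strongly but symmetrically on performance, and linear regression reports almost nothing.

```python
import numpy as np

# Simulate the U-shape: turnover probability rises with both very good and
# very bad performance. A linear fit then finds a slope near zero even
# though the true effect of performance on turnover is large.
rng = np.random.default_rng(0)
performance = rng.normal(size=100_000)
turnover_prob = 0.1 + 0.2 * performance**2 / (1 + performance**2)  # symmetric U
turnover = (rng.random(performance.size) < turnover_prob).astype(float)

print(np.polyfit(performance, turnover, 1)[0])  # linear slope: ~0
print(np.polyfit(performance, turnover, 2)[0])  # quadratic term: clearly positive
```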

The problem for the RIFLE method of Berry and Fowler is that the small apparent effect of organisation performance on leader turnover from OLS regression misses the endogeneity in leader transitions. Such endogeneity biases RIFLE, as Berry and Fowler admit in their paper.

The endogeneity may explain why Berry and Fowler find stronger leader effects in sports (coaches in various US sports) than in business (CEOs) and politics (mayors, governors, heads of government). A sports coach may experience more asymmetry in the transition probabilities after good and bad performance than a politician. For example, if teams fire coaches after bad performance much more frequently than they poach coaches from well-performing competitors, then the effect of performance on turnover is close to monotone: bad performance causes firing. OLS discovers this monotone effect. On the other hand, if politicians move on with equal likelihood after exceptionally good and exceptionally bad performance of the administrative units they lead, then linear regression finds no effect of performance on turnover. The regression then misses the endogeneity that biases RIFLE; without the bias, RIFLE might show a large leader effect in politics as well.

The unreasonably large effect of governors on crime (the governor effect explains 18-20% of the variation in both property and violent crime) and the difference between the zero effect of mayors on crime and the large effect of governors that Berry and Fowler find make me suspect that something is wrong with that particular analysis in their paper. In a checks-and-balances system, the governor should not have that large an influence on the state’s crime. A mayor works more closely with the local police, so would be expected to have more influence on crime.