Tag Archives: statistics

Leader turnover due to organisation performance is underestimated

Berry and Fowler (2021) “Leadership or luck? Randomization inference for leader effects in politics, business, and sports” in Science Advances propose a method they call RIFLE for testing the null hypothesis that leaders have no effect on organisation performance. The method is robust to serial correlation in outcomes and leaders, but not to endogenous leader turnover, as Berry and Fowler honestly point out. The endogeneity is that the organisation’s performance influences the probability that the leader is replaced (economic growth causes voters to keep a politician in office, losing games causes a team to replace its coach).

To test whether such endogeneity is a significant problem for their results, Berry and Fowler regress the turnover probability on various measures of organisational performance. They find small effects, but this underestimates the endogeneity problem, because Berry and Fowler use linear regression, forcing the effect of performance on turnover to be monotone and linear.

If leader turnover is increased by both success (get a better job elsewhere if the organisation performs well, so quit voluntarily) and failure (fired for the organisation’s bad performance), then the relationship between turnover and performance is U-shaped. Average leaders keep their jobs, bad and good ones transition elsewhere. A linear regression finds a near-zero effect in this case even if the true effect is large. How close the regression coefficient is to zero depends on how symmetric the effects of good and bad performance on leader transition are, not how large these effects are.
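A minimal simulation (all numbers invented) illustrates the point: when turnover probability is U-shaped in performance, the OLS slope is approximately zero even though performance strongly drives turnover.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
performance = rng.standard_normal(n)            # standardised performance
# U-shape: very bad and very good performance both raise turnover probability
p_turnover = np.clip(0.1 + 0.2 * performance**2, 0, 1)
turnover = rng.random(n) < p_turnover

slope = np.cov(performance, turnover)[0, 1] / performance.var(ddof=1)
print(f"OLS slope: {slope:+.3f}")               # approximately zero
mid = np.abs(performance) < 0.5
print(f"turnover of average performers: {turnover[mid].mean():.2f}")   # ~0.12
print(f"turnover of extreme performers: {turnover[~mid].mean():.2f}")  # ~0.40
```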

The problem for the RIFLE method of Berry and Fowler is that the small apparent effect of organisation performance on leader turnover from OLS regression misses the endogeneity in leader transitions. Such endogeneity biases RIFLE, as Berry and Fowler admit in their paper.

The endogeneity may explain why Berry and Fowler find stronger leader effects in sports (coaches in various US sports) than in business (CEOs) and politics (mayors, governors, heads of government). A sports coach may face more asymmetric transition probabilities after good and bad performance than a politician. For example, if teams fire coaches after bad performance much more frequently than they poach coaches from well-performing competitors, then the effect of performance on turnover is close to monotone: bad performance causes firing. OLS discovers this monotone effect. On the other hand, if politicians move with equal likelihood after exceptionally good and exceptionally bad performance of the administrative units they lead, then linear regression finds no effect of performance on turnover. This misses the endogeneity that biases RIFLE; without the bias, RIFLE might show a large leader effect in politics as well.

The unreasonably large effect of governors on crime (the governor effect explains 18-20% of the variation in both property and violent crime) and the difference between the zero effect of mayors on crime and the large effect of governors that Berry and Fowler find make me suspect that something is wrong with that particular analysis in their paper. In a checks-and-balances system, the governor should not have that large an influence on the state’s crime. A mayor works more closely with the local police, so would be expected to have more influence on crime.

Virulence of a disease may cause vaccines to be effective

My uninformed speculation: vaccines may be so effective against Covid-19 (90-95% vs flu vaccine 70%) for the same reason why Covid-19 is so infectious – it binds strongly to biochemicals in the organism. If high affinity to the angiotensin-converting enzyme 2 on the surfaces of lung cells is positively correlated with strong binding to antibodies and immune cells, then the immune system, once triggered, removes the viral particles faster for those respiratory viruses that infect cells more easily. Strong binding and the consequent intense immune triggering may also be the reason for the life-threatening immune overreaction (cytokine storm) to the novel coronavirus.
This hypothesis could be tested on a cross-sectional dataset of viral diseases using some measure of the infectiousness of a disease, the effectiveness of a vaccine against it and the frequency of immune overreaction to it.
Infectiousness may be measured by ID50: the number of microbes that makes half of the exposed organisms sick. This measure depends on the state of the organisms studied. For example, if people’s immune systems are weaker in the winter on average, then ID50 measured in the winter is lower than in the summer.
Vaccine effectiveness is typically measured in percent – what fraction of vaccinated people are protected, in the sense that they do not catch the disease in circumstances in which unvaccinated people catch it. This measure may depend on the exposure dose. For example, if a large enough dose of the microbe makes everyone sick, vaccinated or not, then exposure to this dose shows zero effect of the vaccine. Similarly, if a small enough dose fails to infect anyone, then the vaccine effect also appears to be zero; at least the lack of infections among the unvaccinated then shows that no information about vaccine efficacy can be obtained from such an exposure test.
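For concreteness, a small sketch of the standard effectiveness calculation, with invented case counts:

```python
def vaccine_effectiveness(cases_vax, n_vax, cases_unvax, n_unvax):
    """Effectiveness as 1 minus the risk ratio of vaccinated to unvaccinated."""
    return 1 - (cases_vax / n_vax) / (cases_unvax / n_unvax)

# Hypothetical trial: 5 cases among 10,000 vaccinated,
# 100 cases among 10,000 unvaccinated -> 95% effectiveness
print(vaccine_effectiveness(5, 10_000, 100, 10_000))  # 0.95
```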
Immune overreaction needs to be confidently ascribable to the disease studied for it to be a relevant measure for testing the theory about the connection between virulence and vaccine efficacy.

Tissue sampling by piggybacking on vaccination or testing campaigns

Obtaining tissue samples from a large population of healthy individuals is useful for many research and testing applications. Establishing the distribution of genes, transcriptomes, cell distributions and morphologies in a normal population allows comparing clinical laboratory findings to reference values obtained from this baseline. The genetic composition of the population can be used to estimate historical migration patterns in paleoanthropology and selective pressures in evolutionary biology.

Gathering tissue samples from many people is expensive and time-consuming, unless it happens as a byproduct of existing programs. Collecting used vaccination needles or coronavirus nasal swabs that have a few cells attached allows anonymous tissue sampling of almost the entire population. A few cells per person are enough for many analyses in modern biology. Bulk collection of needles or swabs has built-in untraceability of biological material to an individual, which should alleviate privacy concerns and reduce the bureaucratic burden of ethics approvals.

If top people have families and hobbies, then success is not about productivity

Assume:

1 Productivity is continuous and weakly increasing in talent and effort.

2 The sum of efforts allocated to all activities is bounded, and this bound is similar across people.

3 Families and hobbies take some effort, thus less is left for work. (For this assumption to hold, it may be necessary to focus on families with children in which the partner is working in a different field. Otherwise, a stay-at-home partner may take care of the cooking and cleaning, freeing up time for the working spouse to allocate to work. A partner in the same field of work may provide a collaboration synergy. In both cases, the productivity of the top person in question may increase.)

4 The talent distribution is similar for people with and without families or hobbies. This assumption would be violated if for example talented people are much better at finding a partner and starting a family.

Under these assumptions, reasonably rational people would be more productive without families or hobbies. If success is mostly determined by productivity, then people without families should be more successful on average. In other words, most top people in any endeavour would not have families or hobbies that take time away from work.

In short, if responsibilities and distractions cause lower productivity, and productivity causes success, then success is negatively correlated with such distractions. Therefore, if successful people have families with similar or greater frequency than the general population, then success is not driven by productivity.
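A toy simulation of assumptions 1-4 (all parameters invented) shows the implied underrepresentation of families at the top:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100_000
talent = rng.standard_normal(n)
family = rng.random(n) < 0.7               # independent of talent (assumption 4)
effort = np.where(family, 0.6, 1.0)        # families take effort away (assumption 3)
productivity = talent + effort             # increasing in talent and effort (assumption 1)
success = productivity + 0.5 * rng.standard_normal(n)

top = success > np.quantile(success, 0.99)            # the top 1 percent
print(f"family share overall:   {family.mean():.2f}")        # 0.70
print(f"family share in top 1%: {family[top].mean():.2f}")   # well below 0.70
```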

One counterargument is that people first become successful and then start families. In order for this to explain the similar fractions of singles among top and bottom achievers, the rate of family formation after success must be much greater than among the unsuccessful, because catching up from a late start requires a higher rate of increase.

Another explanation is irrationality of a specific form – one which reduces the productivity of high effort significantly below that of medium effort. Then single people with lots of time for work would produce less through their high effort than those with families and hobbies via their medium effort. Productivity per hour naturally falls with increasing hours, but the issue here is total output (the hours times the per-hour productivity). An extra work hour has to contribute negatively to success to explain the lack of family-success correlation. One mechanism for a negative effect of hours on output is burnout of workaholics. For this explanation, people have to be irrational enough to keep working even when their total output falls as a result.
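A small numerical illustration of the burnout mechanism, under an assumed linear decline in per-hour productivity:

```python
# Total output = hours * per-hour productivity, with an assumed linear decline
for hours in range(1, 13):
    per_hour = max(1.0 - 0.08 * hours, 0.0)
    print(hours, round(hours * per_hour, 2))  # output peaks around 6 hours, then falls
```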

If the above explanations seem unlikely but the assumptions reasonable in a given field of human endeavour, then reaching the top and staying there is mostly not about productivity (talent and effort) in this field. For example, in academic research.

A related empirical test of whether success in a given field is caused by productivity is to check whether people from countries or groups that rank as highly corrupt on corruption indices disproportionately succeed in this field. Either conditional on entering the field or unconditionally. In academia, in fields where convincing others is more important than the objective correctness of one’s results, people from more nepotist cultures should have an advantage. The same applies to journals – the general interest ones care relatively more about a good story, the field journals more about correctness. Do people from more corrupt countries publish relatively more in general interest journals, given their total publications? Of course, conditional on their observable characteristics like the current country of employment.

Another related test for meritocracy in academia or the R&D industry is whether coauthored publications and patents are divided by the number of coauthors in their influence on salaries and promotions. If there is an established ranking of institutions or job titles, then do those at higher ranks have more quality-weighted coauthor-divided articles and patents? The quality-weighting is the difficult part, because usually there is no independent measure of quality (unaffected by the dependent variable, be it promotions, salary, publication venue).
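A minimal sketch of the proposed metric (the quality weights are the hard, assumption-laden part, so they are simply taken as given here):

```python
def coauthor_divided_score(publications):
    """publications: list of (quality_weight, n_coauthors) pairs."""
    return sum(quality / n for quality, n in publications)

# A hypothetical researcher: one 4-author paper of quality 10, two solo papers
print(coauthor_divided_score([(10.0, 4), (3.0, 1), (1.0, 1)]))  # 6.5
```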

Learning and evolution switch the sign of autocorrelations

Animals are more successful if they learn or evolve to predict locations of food, mates and predators. Prediction of anything relies on correlations over time in the environment. These correlations may be positive or negative. Learning is more difficult if the sign of the correlation switches over time, which occurs in nature due to resource depletion, learning and evolution.

If a herbivore eats a tasty patch of plants or a predator a nest full of eggs, then the next day that food is not there (negative correlation), but the next year at the same time it is probably there again (positive correlation) because the plants regrow from roots or seeds, and if the prey found the nesting spot attractive one year, then other members of the prey species will likely prefer it the next year as well. However, over many generations, if the plants in that location get eaten before dispersing seeds or the young in that nest before breeding, then the prey will either learn or evolve to avoid that location, or go extinct. This makes the autocorrelation negative again on sufficiently long timescales.

Positive correlation is the easiest to learn – just keep doing the same thing and achieve the same successful outcome. Negative correlation is harder, because the absence of success at one time predicts success from the same action at another time, and vice versa. Learning a changing correlation requires a multi-parameter mental model of the superimposed different-frequency oscillations of resource abundance.
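A small simulation (parameters invented) of how short-run depletion and seasonal regrowth produce opposite-signed autocorrelations at different lags:

```python
import numpy as np

rng = np.random.default_rng(2)
days = 365 * 40
t = np.arange(days)
seasonal = np.sin(2 * np.pi * t / 365)       # yearly regrowth cycle
shocks = rng.standard_normal(days)
depletion = shocks - np.roll(shocks, 1)      # what was taken yesterday is missing today
abundance = seasonal + depletion

def autocorr(x, lag):
    return np.corrcoef(x[:-lag], x[lag:])[0, 1]

print(f"lag 1 day:  {autocorr(abundance, 1):+.2f}")    # negative: depletion
print(f"lag 1 year: {autocorr(abundance, 365):+.2f}")  # positive: regrowth
```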

There is a tradeoff between exploiting known short-period correlations and experimenting to learn longer-period correlations. There may always be a longer pattern to discover, but finite lifetimes make learning very low-frequency events not worthwhile.

The most liveable cities rankings are suspicious

The “most liveable cities” rankings do not publish their methodology, only vague talk about a weighted index of healthcare, safety, economy, education, etc. An additional suspicious aspect is that the top-ranked cities are all large – there are no small towns. There are many more small than big cities in the world (city sizes follow Zipf’s law), so by chance alone, one would expect most of the top-ranked towns in any ranking that is not size-based to be small. The liveability rankings do not mention restricting attention to sizes above some cutoff. Even if a minimum size were required, one would expect most of the top-ranked cities to be close to this lower bound, just based on the size distribution.

The claimed ranking methodology includes several variables one would expect to be negatively correlated with the population of a city (safety, traffic, affordability). The only plausibly positively size-associated variables are culture and entertainment, if these measure the total number of venues and events, not the per-capita number. Unless the index weights entertainment very heavily, one would expect big cities to be at a disadvantage in the liveability ranking based on these correlations, i.e. the smaller the town, the greater its probability of achieving a given liveability score and placing in the top n of the rankings. So the “best places to live” should be almost exclusively small towns. Rural areas not so much, because these usually have limited access to healthcare, education and amenities. The economy of remote regions grows less overall and the population is older, but some (mining) boom areas radically outperform cities in these dimensions. Crime is generally low, so if rural areas were included in the liveability index, then some of them would have a good chance of attaining a top rank.
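A quick sanity check of the size argument, with an invented power-law size distribution and a liveability score mildly decreasing in size:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 10_000
size = (1 - rng.random(n)) ** -1.0      # Pareto city sizes, Zipf-like
# Safety, traffic and affordability worsen with size; noise covers the rest
score = -0.1 * np.log(size) + rng.standard_normal(n)
top10 = np.argsort(score)[-10:]
print(np.sort(size[top10]))             # overwhelmingly small cities
```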

For any large city, there exists a small town with better healthcare, safety, economy, education, younger population, more entertainment events per capita, etc (easy examples are university towns). The fact that these do not appear at the top of a liveability ranking should raise questions about its claimed methodology.

The bias in favour of bigger cities is probably coming from sample selection and hometown patriotism. If people vote mostly for their own city and the respondents of the liveability survey are either chosen from the population approximately uniformly randomly or the sample is weighted towards larger cities (online questionnaires have this bias), then most of the votes will favour big cities.

Blind testing of bicycle fitting

Claims that getting a professional bike fit significantly improves riding comfort and speed and reduces overuse injuries seem suspicious – how can a centimetre here or there make such a large difference? A very wrong fit (e.g. an adult using a children’s bike) of course creates big problems, but most people can adjust their bike to a reasonable fit based on a few online suggestions.

To determine the actual benefit of a bike fit requires a randomised trial: have professionals determine the bike fit for a large enough sample of riders, measure and record the objective parameters of the fit (centimetres of seatpost out of the seat tube, handlebar height from the ground, pedal crank length, etc). Then randomly change the fit by a few centimetres or leave it unchanged, without the cyclist knowing, and let the rider test the bike. Record the speed, ask the rider to rate the comfort, fatigue, etc. Repeat for several random changes in fit. Statistically test whether the average speed, comfort rating and other outcome variables across the sample of riders are better with the actual fit or with small random changes. To eliminate the placebo effect, blind testing is important – the cyclists should not know whether and how the fit has been changed.
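The comparison at the end could be a paired test across riders; a sketch with simulated data (all numbers made up):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
n_riders = 40
# Hypothetical outcomes: speed (km/h) of each rider under the professional fit
# and under a small blinded perturbation of that fit
speed_pro = 30 + rng.standard_normal(n_riders)
speed_perturbed = speed_pro - 0.05 + 0.5 * rng.standard_normal(n_riders)

t, p = stats.ttest_rel(speed_pro, speed_perturbed)  # paired t-test
print(f"t = {t:.2f}, p = {p:.3f}")  # is the professional fit measurably better?
```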

Another approach is to have each rider test a large sample of different bike fits, find the best one empirically, record its objective parameters and then have a sample of professional fitters (who should not know what empirical fit was found) choose the best fit. Test statistically whether the professionals choose the same fit as the cyclist.

A simpler trial that does not quite answer the question of interest checks the consistency of different bike fitters. The same person with the same bike in the same initial configuration goes to various fitters and asks them to choose a fit. After each fitting, the objective sizing of the bike is recorded and then the bike is returned to the initial configuration before the next fit. The test is whether all fitters choose approximately the same parameters. Inconsistency implies that most fitters cannot figure out the objectively best fit, but consistency does not imply that the consensus of the fitters is the optimal sizing. They could all be wrong the same way – consistency is insufficient to answer the question of interest.
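A minimal version of the consistency check, with hypothetical fitters’ choices and an invented tolerance:

```python
import numpy as np

# Saddle heights chosen by five independent fitters for the same rider and bike
saddle_height_cm = np.array([74.0, 74.5, 76.5, 73.8, 75.2])
tolerance_cm = 0.5                                   # assumed irrelevance threshold
print(f"spread: {np.ptp(saddle_height_cm):.1f} cm")  # 2.7 cm
print(np.ptp(saddle_height_cm) <= tolerance_cm)      # False: the fitters disagree
```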

Committing to an experimental design without revealing it

Pre-registering an experiment in a public registry of clinical trials keeps the experimenters honest (avoids ex post modifications of hypotheses to fit the data and “cherry-picking” the data by removing “outliers”), but unfortunately reveals information to competing research groups. This is an especially relevant concern in commercial R&D.

The same verifiability of honesty could be achieved without revealing scientific details by initially publicly distributing an encrypted description of the experiment, and after finishing the research, publishing the encryption key. Ex post, everyone can check that the specified experimental design was followed and all variables reported (no p-hacking). Ex ante, competitors do not know the trial details, so cannot copy it or infer the research direction.
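A minimal sketch of the scheme using the Python cryptography package’s Fernet cipher (any authenticated encryption would do): publish the token before the trial, publish the key after it.

```python
from cryptography.fernet import Fernet

# The full experimental design (contents hypothetical)
design = b"Hypotheses: H1, H2. N = 500. Primary outcome: ... Analysis plan: ..."

key = Fernet.generate_key()
token = Fernet(key).encrypt(design)   # post `token` publicly before the trial

# ...after the research is finished, publish `key`; anyone can verify:
assert Fernet(key).decrypt(token) == design
```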

Distinguishing discrimination in admissions from the opposite discrimination in grading

There are at least two potential explanations for why students from group A get a statistically significantly higher average grade in the same course than those from group B. The first is discrimination against A in admissions: if members of A face a stricter ability cutoff to be accepted at the institution, then conditional on being accepted, they have higher average ability. One form of a stricter ability cutoff is requiring a higher score from members of A, provided admissions test scores are positively correlated with ability.

The second explanation is discrimination in favour of group A in grading: students from A are given better grades for the same work. To distinguish this from admissions discrimination against A, one way is to compare the relative grades of groups A and B across courses. If the difference in average grades is due to ability, then it should be quite stable across courses, compared to a difference coming from grading standards, which varies with each grader’s bias for A.
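A sketch of this comparison on simulated data (all parameters invented): if the A-minus-B grade gap varies across courses by more than sampling noise alone would produce, course-specific grading bias is the more plausible driver.

```python
import numpy as np

rng = np.random.default_rng(5)
n_courses, n_per_group = 20, 50
ability_gap = 0.3                                   # stable component (admissions)
grader_bias = 0.4 * rng.standard_normal(n_courses)  # course-specific component (grading)

gaps = np.empty(n_courses)
for c in range(n_courses):
    grades_a = ability_gap + grader_bias[c] + rng.standard_normal(n_per_group)
    grades_b = rng.standard_normal(n_per_group)
    gaps[c] = grades_a.mean() - grades_b.mean()

print(f"observed variance of gaps: {gaps.var(ddof=1):.3f}")
print(f"expected from sampling noise alone: {2 / n_per_group:.3f}")
```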

Of course, there is no clear line for how much the relative grades of group A vary across courses under grading discrimination, as opposed to admissions bias. Only statistical conclusions can be drawn about the relative importance of the two opposing mechanisms driving the grade difference. The distinction is more difficult to make when there is a “cartel” in grading discrimination, so that all graders try to boost group A by the same amount, i.e. to minimise the variance in the advantage given to A. Conscious avoidance of detection could be one reason to reduce the dispersion in the relative grade improvement of A.

Another complication when trying to distinguish the causes of the grade difference is that ability may affect performance differentially across courses. An extreme case is if the same trait improves outcomes in one course, but worsens them in another, for example lateral thinking is beneficial in a creative course, but may harm performance when the main requirement is to follow rules and procedures. To better distinguish the types of discrimination, the variation in the group difference in average grades should be compared across similar courses. The ability-based explanation results in more similar grade differences between more closely related courses. Again, if graders in similar courses vary less in their bias than graders in unrelated fields, then distinguishing the types of discrimination is more difficult.

Ways in which an eater can get negative calories from food

There are at least four ways in which an eater may have less energy and nutrients after consuming a food: mechanical, chemical, physical and biological. The mechanical way is that chewing and other parts of digestion take energy, so if a food requires serious mastication and contains few calories, then more energy may be spent than absorbed. This has been claimed for raw celery.
Chemically, one food may react with another in a way that makes one or both of them less digestible. The less effective absorption reduces the nutrients obtained compared to not eating the second reactant. The chemical pathway to inefficient digestion may have multiple steps. For example, ascorbic acid leaches calcium from the body, and calcium is required for the absorption of vitamin D, so eating more citrus fruits may indirectly reduce one’s vitamin D levels.
When calculating the calorie content of food, indigestible fibre is subtracted from carbohydrates before adding up the energy obtained from carbohydrates, fats and proteins. However, if fibre reduces the absorption of calories (in addition to its known reduction of the absorption of iron, zinc, magnesium, calcium and phosphorus), then the food’s bioavailable calorie content is less than that obtained by simply subtracting the fibre. To derive the correct calorie content, the fibre should then have negative weight in the calculation, not zero. This difference may explain why in Western countries, a high-fibre diet predicts better health in multiple dimensions in large prospective studies (Nurses’ Health Study, Framingham Heart Study), controlling for calorie intake, lifestyle and many other factors. If the calorie absorption is overestimated for people eating lots of fibre (because their calorie intake is larger than their absorption), then their predicted health based on the too-high calorie estimate is worse than their actual health. This is because most people in Western countries overeat, so eating less improves health outcomes. If the predicted health is underestimated, then the high-fibre group looks unusually healthy, which is attributed to the beneficial effects of fibre, but may actually be due to absorbing fewer calories.
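As a worked example of the accounting (the negative fibre weight of -2 kcal/g is purely illustrative):

```python
def calories(carbs_g, fat_g, protein_g, fibre_g, fibre_weight=0.0):
    """Atwater-style calories; a negative fibre_weight models absorption blocking."""
    return 4 * (carbs_g - fibre_g) + 9 * fat_g + 4 * protein_g + fibre_weight * fibre_g

print(calories(50, 10, 20, 15))                   # standard label: 310 kcal
print(calories(50, 10, 20, 15, fibre_weight=-2))  # bioavailable: 280 kcal
```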
A food may chemically break down tissues: for example, bromelain and papain, from fresh pineapple and papaya respectively, denature meat proteins and so cause mouth sores. Rebuilding the damaged tissue requires energy and nutrients, the quantity of which may exceed that absorbed from the food.
Chemically causing diarrhea reduces the time that foods (including the laxative agent) spend in the gut, thus reduces nutrient absorption.
Stimulants like caffeine speed up metabolism and cause greater energy expenditure, but may give zero calories themselves, resulting in a net negative caloric balance.
Just like chemical damage, physical injury to the body necessitates spending calories and nutrients for tissue repair. For example, scratchy food (phytoliths, bran) may cause many microscopic wounds to the digestive tract.
Cold food requires the body to spend energy on heating, so if the calorie content is small, then the net energy obtained is negative. Examples are ice cubes and cold water.
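A back-of-the-envelope check using standard physical constants:

```python
SPECIFIC_HEAT_WATER = 4.186  # J/(g*K)
LATENT_HEAT_FUSION = 334.0   # J/g for melting ice
J_PER_KCAL = 4184.0

grams, delta_t = 500, 37     # half a litre, warmed from 0 to 37 degrees C
water = grams * SPECIFIC_HEAT_WATER * delta_t / J_PER_KCAL
ice = (grams * LATENT_HEAT_FUSION + grams * SPECIFIC_HEAT_WATER * delta_t) / J_PER_KCAL
print(f"cold water: {water:.0f} kcal, ice: {ice:.0f} kcal")  # ~19 vs ~58 kcal
```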
A food substance may physically partially block the absorption of another, for example a gelling agent (methylcellulose, psyllium husks) may turn a juice into a gel in the gut and thereby reduce its absorption. Based on my personal experience, psyllium husks gel liquid feces, thus effectively reducing diarrhea. Mixing psyllium husks with carrot juice and with asparagus powder dissolved in water before consuming them during the same meal results in the excretion of separated faint orange and green gels somewhat distinct from the rest of the feces (photos available upon request, not posted to keep the blog family-friendly). This is suggestive evidence that the gelling agent both kept the juices from mixing in the gut and reduced the absorption of the colourful compounds by keeping the juice in the centre of the gel away from the intestinal wall.
Biologically, a food may reduce the nutrients available to the organism by causing infection, the immune response to which requires energy and depletes the body’s reserves of various substances. Infection may lead to diarrhea, although the mechanism is chemical, namely the toxins excreted by the microbes. Infection with helminths (intestinal worms) that suck blood through the wall of the gut requires the replenishment of blood cells, which uses up calories, protein and iron.
If the food takes a long time to chew or is bulky, then chemical and electrical signals of satiation are sent from the gastrointestinal tract to the appetite centre of the brain. These signals reduce the desire to eat, thus decrease calorie intake.