Tag Archives: being and seeming

Calorie claims of exercise classes and Strava

“This machine burns more calories per minute than any other – up to 800 calories in a 45-minute session” (SWEAT by BXR | VersaClimbing)

Strava shows: Morning Elliptical 1h36m 1,115Cal 136bpm.

If the ad and Strava mean calories (small calories), not kilocalories (large calories), then the unit Cal must be a typo for cal – see “Calories” vs “calories” – History of Science and Mathematics Stack Exchange.

Both “up to 800 calories in a 45-minute session” and 1,115 cal in 1h36m are less than the basal metabolic rate of 1200 kcal/day = 50 kcal/h = 50,000 cal/h. (Examining Variations of Resting Metabolic Rate of Adults: A Public Health Perspective – PMC (nih.gov)) Read in small calories, the claims are thus trivially false. The marketing probably intends kilocalories.

The sustained (an hour or more) power output of elite cyclists is about 400W. (Lance Armstrong: Cycling Power | CIO , Tour de France 2009: Power estimates (sportsscientists.com) )

1 W = 1 J/s = 3600 J/h, so 400 W = 1440 kJ/h ≈ 344 kcal/h, because a calorie is 4.184 joules.

Less-than-elite athletes likely burn significantly fewer calories than 344kcal in an hour, especially when not competing. Any exercise class, app or sports watch claiming over 200 kcal/h is suspect.
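The arithmetic above can be checked in a few lines of Python. The figures are the ones quoted in the text; the helper function is mine:

```python
# Sanity-check the unit conversions quoted above.
WATT_TO_J_PER_H = 3600        # 1 W = 1 J/s = 3600 J/h
CAL_IN_JOULES = 4.184         # 1 (small) calorie = 4.184 J

def watts_to_kcal_per_hour(watts):
    """Convert sustained power in watts to kilocalories per hour."""
    return watts * WATT_TO_J_PER_H / (CAL_IN_JOULES * 1000)

elite_kcal_per_h = watts_to_kcal_per_hour(400)   # elite cyclist, sustained
print(round(elite_kcal_per_h))                   # → 344

# Basal metabolic rate of 1200 kcal/day expressed in small calories per hour.
bmr_cal_per_h = 1200 * 1000 / 24
print(bmr_cal_per_h)                             # → 50000.0
```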

Robustness is a form of efficiency

Efficiency means using the best way to achieve a goal – mathematically, selecting the maximizer of an objective function. The goal may be anything. For example, the objective function may be a weighted average of performance across various situations.

Robustness means performing well in a wide variety of circumstances. Mathematically, performing well may mean maximizing the weighted average performance across situations, where the weights are the probabilities of the situations. Performing well may also mean maximizing the probability of meeting a minimum standard – this probability sums the probabilities of situations in which the (situation-specific) minimum standard is reached. In any case, some objective function is being maximized for robustness. The best way to achieve a goal is being found. The goal is either a weighted average performance, the probability of exceeding a minimum standard or some similar objective. Thus robustness is efficiency for a particular objective.

The robustness-efficiency tradeoff is just a tradeoff between different objective functions. One objective function in this case is a weighted average that puts positive weight on the other objective function.
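A toy numerical example (my own invented numbers, not from the text) shows both robustness criteria as ordinary objective functions maximized over actions, and the tradeoff between them:

```python
# Two "robust" criteria, each an ordinary objective function over actions.
situations = ["boom", "normal", "crisis"]
probs = [0.3, 0.6, 0.1]

# performance[action][i] = payoff of the action in situation i (invented numbers)
performance = {
    "specialised": [10, 9, 0],   # great usually, fails in a crisis
    "robust":      [6, 6, 6],    # decent everywhere
}

def expected_performance(action):
    """Probability-weighted average performance across situations."""
    return sum(p * x for p, x in zip(probs, performance[action]))

def prob_meeting_standard(action, standard=5):
    """Probability of reaching a minimum standard across situations."""
    return sum(p for p, x in zip(probs, performance[action]) if x >= standard)

best_by_average = max(performance, key=expected_performance)
best_by_standard = max(performance, key=prob_meeting_standard)
print(best_by_average, best_by_standard)   # → specialised robust
```

The two objective functions pick different actions, which is exactly the robustness-efficiency tradeoff described above.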

Whatever the goal, working towards it efficiently is by definition the best thing to do. The goal usually changes over time, but most of this change is a slow drift. Reevaluating the probabilities of situations usually changes the goal, in particular if the goal is a weighted average or a sum of probabilities that includes some of these situations. A rare event occurring causes a reevaluation of the probability of this event, thus necessarily the probability of at least one other event. If the probabilities of rare events are revised up, then the goal tends to shift away from single-situation efficiency, or performance in a small number of situations, towards robustness (efficiency for a combination of a large number of situations).

To be better prepared for emergencies and crises, society should prepare efficiently. The most efficient method may be difficult to determine in the short term. If the expected time until the next crisis is long, then the best way includes gathering resources and storing these in a large number of distributed depots. These resources include human capital – the skills for solving emergencies. Such skills are produced by training, stored in people’s brains, and kept fresh with practice. Both the physical and mental resources are part of the economic production in the country. Economic growth is helpful for creating emergency supplies, raising medical capacity and freeing up time for preparedness training. Unfortunately, economic growth is often wasted on frivolous consumption of goods and services, often to impress others. Resources wasted in this way may reduce preparedness by causing people to go soft physically and mentally.

Solving a crisis requires cooperation. Consumption of social media may polarize a society, reducing collaboration and thus preparedness.

How to learn whether an information source is accurate

Two sources may be used to check each other over time. One of these sources may be your own senses, which show whether the event that the other source predicted occurred or not. The observation of an event is really another signal about the event. It is a noisy signal because your own eyes may lie (optical illusions, deepfakes).

First, one source sends a signal about the event, then the second source sends its own. You will never know with certainty whether the event actually occurred, but the second source is the aggregate of all the future information you receive about the event, so it may be very accurate. The second source may send many signals in sequence about the event, yielding more information about the first source over time. Then the process repeats for a second event, a third, etc. This is how belief about the trustworthiness of a source is built.

You cannot learn the true accuracy of a source, because the truth is unavailable to your senses, so you cannot compare a source’s signals to the truth. You can only learn the consistency of different sources of sensory information. Knowing the correlation between various sensory sources is both necessary and sufficient for decision making, because your objective function (utility or payoff) is your perception of successfully achieving your goals. If your senses are deceived so that you believe you have achieved what you sought, but actually have not, then you get the feeling of success; but if your senses are deceived into telling you that you have failed, then you do not feel success even if you actually succeeded. The problem with deception arises purely from the positive correlation between the deceit and the perception of deceit. If deceit increases the probability that you later perceive you have been deceived and are unhappy about that perception, then deceit may reduce your overall utility despite increasing it initially. If you never suspect the deception, then your happiness is as if the deception were the truth.

Your senses send signals to your brain. We can interpret these signals as information about which hypothetical state of the world has occurred – we posit that there exists a world which may be in different states with various probabilities and that there is a correlation between the signals and these states. Based on the information, you update the probabilities of the states and choose a course of action. Actions result in probability distributions over different future sensations, which may be modelled as a different sensation in each state of the world, which have probabilities attached. (Later we may remove the states of the world from the model and talk about a function from past perceptions and actions into future perceptions. The past is only accessible through memory. Memory is a current perception, so we may also remove time from the model.)

You prefer some future sensations to others. These need not be sensory pleasures. These could be perceptions of having improved the world through great toil. You would prefer to choose an action that results in preferable sensations in the future. Which action this is depends on the state of the world.

To estimate the best action (the one yielding the most preferred sensations), you use past sensory signals. The interpretation of these signals depends on the assumed or learned correlation between the signals and the state. The assumption may be instinctive from birth. The learning is really about how sensations at a point in time are correlated with the combination of sensations and actions before that point. An assumption that the correlation is stable over time enables you to use past correlation to predict future correlation. This assumption in turn may be instinctive or learned.

The events most people are interested in distinguishing are of the form “action A results in the most preferred sensations”, “action B causes the most preferred sensations”, “action A yields the least preferred sensations”. Any event that is useful to know is of a similar form by Blackwell’s theorem: information is useful if and only if it changes decisions.

The usefulness of a signal source depends on how consistent the signals it gives about the action-sensation links (events) are with your future perceptions. These future perceptions are the signals from the second source – your senses – against which the first source is checked. The signals of the second source have the form “memory of action A and a preferred sensation at present”. Optimal learning about the usefulness of the first source uses Bayes’ rule and a prior probability distribution on the correlations between the first source and the second. The events of interest in this case are the levels of correlation. A signal about these levels is whether the first source gave a signal that coincided with later sensory information.

If the first source recommended a “best action” that later yielded a preferred sensation, then this increases the probability of high positive correlation between the first source and the second on average. If the recommended action was followed by a negative sensation, then this raises the probability of a negative correlation between the sources. Any known correlation is useful information, because it helps predict the utility consequences of actions.
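This updating can be sketched with Bayes’ rule in a toy two-type model. The numbers are my own assumptions: an “accurate” source’s recommendations are confirmed by later sensations 90% of the time, a “random” source’s 50%:

```python
# Bayesian updating of the belief that a source is accurate, based on
# whether its recommendations are later confirmed by one's own senses.
def update(prior_accurate, confirmed,
           p_match_if_accurate=0.9, p_match_if_random=0.5):
    """Posterior probability the source is accurate after one observation."""
    like_acc = p_match_if_accurate if confirmed else 1 - p_match_if_accurate
    like_rnd = p_match_if_random if confirmed else 1 - p_match_if_random
    return prior_accurate * like_acc / (
        prior_accurate * like_acc + (1 - prior_accurate) * like_rnd)

belief = 0.5                                  # uninformative prior
for confirmed in [True, True, False, True]:   # observed track record
    belief = update(belief, confirmed)
print(round(belief, 3))                       # → 0.538
```

Each confirmation multiplies the odds of accuracy by 0.9/0.5 = 1.8, each miss by 0.1/0.5 = 0.2, so trust builds gradually from a run of confirmations and drops sharply on a miss.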

Counterfactuals should be mentioned as a side note. Even if an action A resulted in a preferred sensation, a different action B might have led to an even better sensation in the counterfactual universe where B was chosen instead. Of course, B might equally well have led to a worse sensation. Counterfactuals require a model to evaluate – what the output would have been after a different input depends on the assumed causal chain from inputs to outputs.

Whether two sources are separate or copies is also a learnable event.

Signalling the precision of one’s information with emphatic claims

Chats both online and in person seem to consist of confident claims which are either extreme absolute statements (“vaccines don’t work at all”, “you will never catch a cold if you take this supplement”, “artificial sweeteners cause cancer”) or profess no knowledge (“damned if I know”, “we will never know the truth”), sometimes blaming the lack of knowledge on external forces (“of course they don’t tell us the real reason”, “the security services are keeping those studies secret, of course”, “big business is hiding the truth”). Moderate statements that something may or may not be true, especially off the center of all-possibilities-equal, and expressions of personal uncertainty (“I have not studied this enough to form an opinion”, “I have not thought this through”) are almost absent. Other than in research and official reports, I seldom encounter statements of the form “these are the arguments in this direction and those are the arguments in that direction. This direction is somewhat stronger.” or “the balance of the evidence suggests x” or “x seems more likely than not-x”. In opinion pieces in various forms of media, the author may give arguments for both sides, but in that case, concludes something like “we cannot rule out this and we cannot rule out that”, “prediction is difficult, especially now in a rapidly changing world”, “anything may happen”. The conclusion of the opinion piece does not recommend a moderate course of action supported by the balance of moderate-quality evidence.

The same person confidently claims knowledge of an extreme statement on one topic and professes certainty of no knowledge at all on another. What could be the goal of making both extreme and no-knowledge statements confidently? If the person wanted to pretend to be well-informed, then confidence helps with that, but claiming no knowledge would be counterproductive. Blaming the lack of knowledge on external forces and claiming that the truth is unknowable or will never be discovered helps excuse one’s lack of knowledge. The person can then pretend to be informed to the best extent possible (a constrained maximum of knowledge) or at least know more than others (a relative maximum).

Extreme statements suggest to an approximately Bayesian audience that the claimer has received many precise signals in the direction of the extreme statement and as a result has updated the belief far from the average prior belief in society. Confident statements also suggest many precise signals to Bayesians. The audience does not need to be Bayesian to form these interpretations – updating in some way towards the signal is sufficient, as is the behavioural belief that confidence or extreme claims demonstrate the quality of the claimer’s information. A precisely estimated zero, such as confidently saying both x and not-x are equally likely, also signals good information. Similarly, being confident that the truth is unknowable.

Being perceived as having precise information helps influence others. If people believe that the claimer is well-informed and has interests more aligned than opposed to theirs, then it is rational to follow the claimer’s recommendation. Having influence is generally profitable. This explains the lack of moderate-confidence statements and claims of personal but not collective uncertainty.

A question that remains is why confident moderate statements are almost absent. Why not claim with certainty that 60% of the time, the drug works and 40% of the time, it doesn’t? Or confidently state that a third of the wage gap/racial bias/country development is explained by discrimination, a third by statistical discrimination or measurement error and a third by unknown factors that need further research? Confidence should still suggest precise information no matter what the statement is about.

Of course, if fools are confident and researchers honestly state their uncertainty, then the certainty of a statement shows the foolishness of the speaker. If confidence makes the audience believe the speaker is well-informed, then either the audience is irrational in a particular way or believes that the speaker’s confidence is correlated with the precision of the information in the particular dimension being talked about. If the audience has a long history of communication with the speaker, then they may have experience that the speaker is generally truthful, acts similarly across situations and expresses the correct level of confidence on unemotional topics. The audience may fail to notice when the speaker becomes a spreader of conspiracies or becomes emotionally involved in a topic and therefore is trying to persuade, not inform. If the audience is still relatively confident in the speaker’s honesty, then the speaker sways them more by confidence and extreme positions than by admitting uncertainty or a moderate viewpoint.

The communication described above may be modelled as the claimer conveying three-dimensional information with two two-dimensional signals. One dimension of the information is the extent to which the statement is true. For example, how beneficial is a drug or how harmful an additive. A second dimension is how uncertain the truth value of the statement is – whether the drug helps exactly 55% of patients or may help anywhere between 20 and 90%, between which all percentages are equally likely. A third dimension is the minimal attainable level of uncertainty – how much the truth is knowable in this question. This is related to whether some agency is actively hiding the truth or researchers have determined it and are trying to educate the population about it. The second and third dimensions are correlated: the lower the minimal attainable uncertainty, the more certain the truth value of the statement can be. It cannot be more certain than the laws of physics allow.

The two dimensions of one signal (the message of the claimer) are the extent to which the statement is true and how certain the claimer is of the truth value. Confidence emphasises that the claimer is certain about the truth value, regardless of whether this value is true or false. The claim itself is the first dimension of the signal. The reason the third dimension of the information is not part of the first signal is that the claim that the truth is unknowable is itself a second claim about the world, i.e. a second two-dimensional signal saying how much some agency is hiding or publicising the truth and how certain the speaker is of the direction and extent of the agency’s activity.

Opinion expressers in (social) media usually choose an extreme value for both dimensions of both signals. They claim some statement about the world is either the ultimate truth or completely false or unknowable and exactly in the middle, not a moderate distance to one side. In the second dimension of both signals, the opinionated people express complete certainty. If the first signal says the statement is true or false, then the second signal is not sent and is not needed, because if there is complete certainty of the truth value of the statement, then the statement must be perfectly knowable. If the first signal says the statement is fifty-fifty (the speaker does not know whether true or false), then in the second signal, the speaker claims that the truth is absolutely not knowable. This excuses the speaker’s claimed lack of knowledge as due to an objective impossibility, instead of the speaker’s limited data and understanding.

Moon phase and sleep correlation is not quite a sine wave

Casiraghi et al. (2021) in Science Advances (DOI: 10.1126/sciadv.abe0465) show that human sleep duration and onset depend on the phase of the moon. Their interpretation is that light availability during the night caused humans to adapt their sleep over evolutionary time. Casiraghi et al. fit a sine curve to both sleep duration and onset as functions of the day in the monthly lunar cycle, but their Figure 1 A, B for the full sample and the blue and orange curves for the rural groups in Figure 1 C, D show a statistically significant deviation from a sine function. Instead of same-sized symmetric peaks and troughs, sleep duration has two peaks with a small trough between, then a large sharp trough which falls more steeply than it rises, then two peaks again. Sleep onset has a vertically reflected version of this pattern. These features are statistically significant, based on the confidence bands Casiraghi and coauthors have drawn in Figure 1.

The significant departure of sleep patterns from a sine wave calls into question the interpretation that light availability over evolutionary time caused these patterns. What fits the interpretation of Casiraghi et al. is that sleep duration is shortest right before full moon, but what does not fit is that the duration is longest right after full and new moons, but shorter during a waning crescent moon between these.

It would better summarise the data to use the first four terms of a Fourier series instead of just the first term. There seems little danger of overfitting, given N=69 and t>60.
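A sketch of such a fit, on synthetic data rather than the study’s (the period, coefficients and noise level are invented for illustration):

```python
import numpy as np

# Fit the first four harmonics of a Fourier series in the ~29.5-day lunar
# cycle by ordinary least squares, instead of a single sine term.
period = 29.5
days = np.arange(60)                       # two months of daily observations
rng = np.random.default_rng(0)
# Synthetic sleep durations (minutes): constant + two harmonics + noise.
sleep = (420 - 20 * np.sin(2 * np.pi * days / period)
         + 10 * np.sin(4 * np.pi * days / period)
         + rng.normal(0, 5, days.size))

# Design matrix: constant plus sin/cos pairs for harmonics 1..4.
cols = [np.ones_like(days, dtype=float)]
for k in range(1, 5):
    cols += [np.sin(2 * np.pi * k * days / period),
             np.cos(2 * np.pi * k * days / period)]
X = np.column_stack(cols)
coef, *_ = np.linalg.lstsq(X, sleep, rcond=None)
print(np.round(coef[:3], 1))   # constant and first harmonic dominate here
```

With nine coefficients and dozens of observations per participant, the overfitting risk is indeed small, and the extra harmonics can capture the asymmetric peaks and troughs that a single sine cannot.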

A questionable choice of the authors is to plot the sleep duration and onset of only the 35 best-fitting participants in Figure 2. A more honest choice yielding the same number of plots would pick every other participant in the ranking from the best fit to the worst.

In the section Materials and Methods, Casiraghi et al. fitted both a 15-day and a 30-day cycle to test for the effect of the Moon’s gravitational pull on sleep. The 15-day component was weaker in urban communities than rural, but any effect of gravity should be the same in both. By contrast, the effect of moonlight should be weaker in urban communities, but the urban community data (Figure 1 C, D green curve) fits a simple sine curve better than rural. It seems strange that sleep in urban communities would correlate more strongly with the amount of moonlight, like Figure 1 shows.

Preventing cheating is hopeless in online learning

Technology makes cheating easy even in in-person exams with invigilators next to the test-taker. For example, in-ear wireless headphones not visible externally can play a loop recording of the most important concepts of the tested material. A development of this idea is to use a hidden camera in the test-taker’s glasses or pen to send the exam contents to a helper who looks up the answers and transmits the spoken solutions via the headphones. Without a helper, sophisticated programming is needed: the image of the exam from the hidden camera is sent to a text-recognition (OCR) program, which pipes it to a web search or an online solver such as Wolfram Alpha, then uses a text-to-speech program to speak the results into the headphones.

A small screen on the inside of the glasses would be visible to a nearby invigilator, so is a risky way to transmit solutions. A small projector in the glasses could in theory display a cheat sheet right into the eye. The reflection from the eye would be small and difficult to detect even looking into the eyes of the test-taker, which are mostly pointed down at the exam.

If the testing is remote, then the test-taker could manipulate the cameras through which the invigilators watch, so that images of cheat sheets are replaced with the background and the sound of helpers saying answers is removed. The sound is easy to remove with a microphone near the mouth of the helper, the input of which is subtracted from the input of the computer webcam. A more sophisticated array of microphones feeding sound into small speakers near the web camera’s microphone can be used to subtract a particular voice from the web camera’s stereo microphone’s input. The technology is the same as in noise-cancelling headphones.

Replacing parts of images is doable even if the camera and its software are provided by the examiners and completely non-manipulable. The invigilators’ camera can be pointed at a screen which displays an already-edited video of the test-taker. The editing is fast enough to make it nearly real-time. The idea of the edited video is the same as in old crime movies where a photo of an empty room is stuck in front of a stationary security camera. Then the guard sees the empty room on the monitor no matter what actually goes on in the room.

There is probably a way to make part of the scene invisible to a camera even with 19th century technology, namely the Pepper’s Ghost illusion with a two-way mirror. The edges of the mirror have to be hidden somehow.

Partial cleaning may make surfaces look dirtier

Incomplete cleaning may increase the visual perception of dirt by removing a uniform covering of thinner dirt, thereby increasing the contrast between the patches of thicker grime and the surface’s normal colour. If something is uniformly grimy, then the colour of the covering dirt may be perceived as the thing’s normal hue. Cleaning may remove approximately the same thickness of dirt from all points on the surface. If some patches initially have a thicker layer, then these remain the colour of the dirt after the cleaning, but other areas may be fully cleaned and revert to the original look of the surface. The human visual system mostly perceives contrast, not the absolute wavelength of the reflected light, as various optical illusions demonstrate. Higher contrast between the thicker patches of grime and the rest of the surface then enhances the perception of dirtiness.

Bar-coding videos to prevent faking

To prevent clips from being cut out of a video or inserted, add a non-repeating sequence of bar codes onto either the whole frame or the main object of the video, such as a person talking. The bar code can use subtle “watermark” shading that does not interfere with viewing – it only needs to be readable by computer. The sequence of bar codes can be recreated at a later time if the algorithm is known, so if a clip is cut out of the video or added, then the sequence no longer matches the replication. Altering the video frame by frame also changes the bar code, although the forger can bypass this security feature by reading the original bar code, removing it before retouching and adding it back later. Still, these extra steps make faking the video somewhat more difficult. The main security feature is that the length of the video cannot be changed without altering the sequence of bar codes, which is easily detected.

The maker of the video may generate the bar codes cryptographically using a private key. This enables confirming the source of the video, for example in a copyright dispute.
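A minimal sketch of the keyed-code idea, using an HMAC over the frame index (my own construction; a real watermarking scheme would also embed the code robustly in the pixels, which is not shown here):

```python
import hashlib
import hmac

# Derive a per-frame code from a private key and the frame index. Only the
# key holder can regenerate the sequence; cutting or inserting frames shifts
# later codes out of place, which verification detects.
def frame_code(key: bytes, frame_index: int) -> str:
    msg = frame_index.to_bytes(8, "big")
    return hmac.new(key, msg, hashlib.sha256).hexdigest()[:16]

def verify(key: bytes, recovered_codes: list) -> bool:
    """Check that the codes recovered from a video match the regenerated sequence."""
    return all(hmac.compare_digest(code, frame_code(key, i))
               for i, code in enumerate(recovered_codes))

key = b"videomaker-private-key"          # hypothetical key, for illustration
codes = [frame_code(key, i) for i in range(100)]
assert verify(key, codes)

# Cutting frames 40-49 shifts every later code and is detected.
cut = codes[:40] + codes[50:]
assert not verify(key, cut)
```

Because HMAC is keyed, the sequence also serves as proof of origin: only someone holding the private key could have produced codes that verify, which is the copyright-dispute use mentioned above.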

Probably the idea of bar-coding videos has already been implemented, because watermarks and time stamps on photos have long been used. The main novelty relative to treating each frame as a photo is to link the bar codes to each other over time.

If top people have families and hobbies, then success is not about productivity

Assume:

1 Productivity is continuous and weakly increasing in talent and effort.

2 The sum of efforts allocated to all activities is bounded, and this bound is similar across people.

3 Families and hobbies take some effort, thus less is left for work. (For this assumption to hold, it may be necessary to focus on families with children in which the partner is working in a different field. Otherwise, a stay-at-home partner may take care of the cooking and cleaning, freeing up time for the working spouse to allocate to work. A partner in the same field of work may provide a collaboration synergy. In both cases, the productivity of the top person in question may increase.)

4 The talent distribution is similar for people with and without families or hobbies. This assumption would be violated if for example talented people are much better at finding a partner and starting a family.

Under these assumptions, reasonably rational people would be more productive without families or hobbies. If success is mostly determined by productivity, then people without families should be more successful on average. In other words, most top people in any endeavour would not have families or hobbies that take time away from work.

In short, if responsibilities and distractions cause lower productivity, and productivity causes success, then success is negatively correlated with such distractions. Therefore, if successful people have families at a similar or greater frequency than the general population, then success is not driven by productivity.

One counterargument is that people first become successful and then start families. In order for this to explain the similar fractions of singles among top and bottom achievers, the rate of family formation after success must be much greater than among the unsuccessful, because catching up from a late start requires a higher rate of increase.

Another explanation is irrationality of a specific form – one which reduces the productivity of high effort significantly below that of medium effort. Then single people with lots of time for work would produce less through their high effort than those with families and hobbies via their medium effort. Productivity per hour naturally falls with increasing hours, but the issue here is total output (the hours times the per-hour productivity). An extra work hour has to contribute negatively to success to explain the lack of family-success correlation. One mechanism for a negative effect of hours on output is burnout of workaholics. For this explanation, people have to be irrational enough to keep working even when their total output falls as a result.
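Toy arithmetic (my own invented numbers) for the burnout regime, in which per-hour productivity declines so steeply that an extra hour lowers total output:

```python
# Total output = hours * per-hour productivity, with productivity
# declining linearly in hours worked (invented slope, for illustration).
def per_hour_productivity(hours):
    return max(0.0, 10 - 0.15 * hours)

def total_output(hours):
    return hours * per_hour_productivity(hours)

# A 40-hour week out-produces a 60-hour week under this decline:
print(round(total_output(40)), round(total_output(60)))   # → 160 60
```

For the explanation in the text to work, workers must sit on the downward-sloping part of this curve and keep working there anyway, which is the specific irrationality required.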

If the above explanations seem unlikely but the assumptions reasonable in a given field of human endeavour, then reaching the top and staying there is mostly not about productivity (talent and effort) in this field. For example, in academic research.

A related empirical test of whether success in a given field is caused by productivity is to check whether people from countries or groups that rank as more corrupt on corruption indices disproportionately succeed in this field. Either conditional on entering the field or unconditionally. In academia, in fields where convincing others is more important than the objective correctness of one’s results, people from more nepotist cultures should have an advantage. The same applies to journals – the general interest ones care relatively more about a good story, the field journals more about correctness. Do people from more corrupt countries publish relatively more in general interest journals, given their total publications? Of course, conditional on their observable characteristics like the current country of employment.

Another related test for meritocracy in academia or the R&D industry is whether coauthored publications and patents are divided by the number of coauthors in their influence on salaries and promotions. If there is an established ranking of institutions or job titles, then do those at higher ranks have more quality-weighted coauthor-divided articles and patents? The quality-weighting is the difficult part, because usually there is no independent measure of quality (unaffected by the dependent variable, be it promotions, salary, publication venue).

Visually distinct social classes in agrarian societies

One argument advanced for why slavery in the US was special among the world’s slaveholding societies is that one race enslaved another. However, before the age of genetic testing, the races could only have been distinguished visually. Similarly obvious differences in the looks of slaves and masters, or serfs and nobility occurred in all agrarian societies. The obviousness of distinct looks is meant in the statistical sense: with what accuracy could people classify others into slaves and masters, or peasants and lords, averaged both across the population judging and the population judged? I believe the accuracy was close to perfect – comparable to the classification accuracy of US slaves and slaveholders – for the following reasons.

Serfs were malnourished in childhood, thus short. They did hard physical labour without stretching much, thus were bent over, with back and leg muscles better developed than the rest. They spent the day outdoors without sunscreen, wearing limited clothing, thus were tanned. The lack of sunglasses caused them to squint, creating characteristic wrinkles on the face. They seldom had opportunity to wash, thus had ingrained dirt in their skin that would not have come out with a single hard scrubbing. Both corporal punishment and intrafamily violence caused many of them to have visible scars, missing teeth, crooked noses. By contrast, the well-fed nobility were tall and practised proper erect posture in childhood for table manners and dance lessons. Their physical exercise was mostly cardiovascular, without heavy lifting, thus they were either slim or fat, but not muscular. Fencing may have developed noblemen’s quadriceps, biceps and wrist muscles, not so much the trunk. The nobility’s fashionable paleness was further ensured by wearing gloves and hats and carrying parasols during the short time spent outdoors.

All these physical contrasts ensured that even in the same clothes and surroundings, without talking or moving, a peasant and a noble could be distinguished at a glance. In this sense there was nothing special about US slavery.

The belief that US slaves were more distinguishable from their owners than those of other slaveholding societies is based on modern experience – nowadays, people of the same race but different social class are difficult to distinguish based on their physical appearance. Similar nutrition, sports opportunities and outdoor exposure lead to similar stature, musculature and tan.