Tag Archives: manipulation

Delivered food and restaurants are unhealthy due to moral hazard

Consumers observe the taste and cost of food directly, but checking the ingredients for healthiness takes extra effort and time. Rational inattention then implies that eaters are unlikely to verify the health claims. Thus food suppliers are subject to moral hazard: consumers buy based on the healthiness they expect, not the actual ingredients the seller chooses, so the seller has an incentive to improve taste, reduce the production cost and cut price even when this makes the food less healthy.

The standard solutions to moral hazard in economic theory are verification, repeated interaction and vertical integration (selling the firm to the agent). In the context of food, safety standards and truth-in-advertising laws restrict the substances manufacturers may add and the claims they can make. Regulators verify the claims and punish sellers for illegal additives or false advertising. Also, if a food supplier is found to use unhealthy ingredients (or excessive amounts of sugar, salt and fat), then some consumers may switch to alternative providers, which is a repeated-game punishment for the original seller.

The weakness of both verification and repeated interaction is imperfect monitoring: small increases in unhealthy substances are difficult to detect, because tests are noisy and food composition varies naturally. The variation sometimes makes the amount of an ingredient exceed the healthy limit, so honest suppliers would also be punished with positive probability. Incentives are created by the difference in payoffs, so reducing the payoff of the honest decreases their motive to stay honest. The imperfect monitoring allows unscrupulous sellers to outcompete the providers of healthy food on taste and price, for example by using various tricks to circumvent the legal requirements on labelling (https://sanderheinsalu.com/ajaveeb/?p=728).
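
A minimal sketch of this monitoring problem, with invented numbers (the limit, the noise level and the ingredient amounts are assumptions, not data from any real food-testing regime):

import random

LIMIT = 10.0     # legal limit on the unhealthy ingredient, grams per serving (assumed)
NOISE_SD = 1.5   # standard deviation of test noise plus natural variation (assumed)

def punishment_probability(true_amount, trials=100_000):
    # Fraction of noisy tests in which the measured amount exceeds the limit.
    over = sum(true_amount + random.gauss(0, NOISE_SD) > LIMIT for _ in range(trials))
    return over / trials

print(punishment_probability(9.0))   # roughly 0.25: honest, yet punished in a quarter of tests
print(punishment_probability(11.0))  # roughly 0.75: cheating, yet escapes in a quarter of tests

The honest and the dishonest supplier face similar punishment probabilities, so the payoff difference between honesty and cheating is small, which is the weakened incentive described above.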

The remaining solution to the moral hazard problem is vertical integration of the buyer and the supplier, i.e. home cooking. Of course, the ingredients bought to be cooked at home are subject to similar moral hazard – unhealthy substances can be added at any stage of the production process. The risk could in principle be even larger than for processed foods and restaurant meals, but in practice, it seems that simple and unprocessed ingredients are more difficult to manipulate than prepared meals, which are a mixture of many components. Adding sugar, salt, fat or monosodium glutamate to flour, rice or dry beans without mentioning it on the nutrition label is easier to detect than the same (amounts of) additives in shrimp fried rice, bread or a burrito. Raw meats and fish do have extra salt and food colouring added, but usually less than for ready-to-eat meals.

Relative prices are another reason why raw ingredients may be manipulated less than processed foods. There is a per-unit cost of adding unhealthy substances, as well as a fixed cost due to the risk of lawsuits and fines, especially if the additives are not declared on the label. Unprocessed ingredients are less differentiated, so their price competition is more intense. The increase in the price that customers are willing to pay when an ingredient tastes better than the competitors’ may be small if price is the main dimension of competition. The slightly higher price may not justify the per-unit cost of the additives. In contrast, for processed foods the margin may respond strongly to taste, motivating manipulation of the ingredients.

The taste of the final dish is likely to respond less to manipulating one ingredient than to altering the composition of the entire food, both because the ingredient may be only a small part of the final dish and because the taste of a dish is largely determined by the seasoning and the cooking method. In this case, additives to ingredients do not improve taste that much, reducing the profitability of manipulating these.

Intense price competition motivates cost-cutting, including by substituting cheaper ingredients or using additives (e.g. preservatives) that reduce the manufacturing cost. However, if the additives cost more than they save on production cost (such as preservatives for dry goods that already keep indefinitely), then they are unprofitable to include.

Demand for cooking ingredients may also respond less to price and taste than demand for restaurant meals or delivered food (raw ingredients may even be an inferior good, whereas eating out is more like a luxury good). In this case, there is a range of fixed costs of unhealthy substances for which adding these to raw ingredients is unprofitable, but adding them to processed foods is profitable.
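
A back-of-the-envelope sketch of this fixed-cost argument, with invented numbers (the margins, quantities and costs are illustrative assumptions): an additive pays off only if the extra margin it earns across all units sold exceeds its per-unit cost times the quantity plus the fixed legal-risk cost.

def additive_profitable(extra_margin, per_unit_cost, units_sold, fixed_cost):
    # Adding the substance raises profit if the gain exceeds variable plus fixed cost.
    return (extra_margin - per_unit_cost) * units_sold > fixed_cost

units = 1_000_000
fixed = 50_000  # expected cost of lawsuits and fines (assumed)
# Processed food: taste-sensitive demand, so a large margin gain per unit.
print(additive_profitable(0.10, 0.02, units, fixed))  # True
# Raw ingredient: intense price competition, so a tiny margin gain per unit.
print(additive_profitable(0.03, 0.02, units, fixed))  # False

Any fixed cost between the two net gains (here between 10,000 and 80,000) makes the additive profitable in processed food but not in raw ingredients, which is the range mentioned above.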

Food security is a manipulative term

Food security is a manipulative political code phrase designed to scare people and thereby make them support agricultural subsidies, as I have written before. The fear is created by association with sieges before the age of gunpowder, when castles were starved into submission. In modern times, no enemy is silly enough to try to surround and starve a country that is not a city state (e.g. Singapore), because any enemy with a large enough force to prevent food from getting into a country is also strong enough to conquer it quickly by frontal attack. Even unintentional starvation is a public relations disaster (e.g. Yemen), as is a war that drags on, but a quick takeover without too many casualties (e.g. Crimea) actually increases the conqueror’s leader’s popularity in domestic politics.

Even if an enemy were stupid and tried to starve a country, the defence against this is not farm subsidies, but many distributed small stockpiles of food. Farms as a food supply are easy to destroy by firebombing the crops and livestock from the air. A small number of large centralised stockpiles are also vulnerable. However, if each household is obliged to keep n months’ worth of non-perishable food at home, then starving the country into submission would take at least n months and bombardment would not shorten that period.

What is really meant by food security is that food prices might rise. However, in all except the very poorest countries in the world, food is so cheap that any reasonable price rise would not cause starvation. For example, according to the USDA, 9 medium baked potatoes fulfill all the nutritional needs of an adult. Similarly, people can survive for a long time eating just wheat flour and water. Wheat flour is 80 cents per kilo, and a kilo of it has 3600 kcal, which is enough for an adult for two days. The price of flour would have to rise at least a hundred times for the cost to lead to starvation in developed countries. Other emergency foods that do not go bad and can be prepared without heating are also cheap, e.g. milk powder, instant oatmeal, canned meats and vegetables.
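
The flour arithmetic above, spelled out with the same round numbers (the 1800 kcal daily requirement is the assumption implied by a kilo lasting two days):

PRICE_PER_KG = 0.80   # dollars per kilo of wheat flour
KCAL_PER_KG = 3600    # calories in a kilo of flour
KCAL_PER_DAY = 1800   # assumed daily requirement, so a kilo lasts two days

cost_per_day = PRICE_PER_KG * KCAL_PER_DAY / KCAL_PER_KG
print(round(cost_per_day, 2))        # 0.4 dollars per day on a flour-and-water diet
print(round(cost_per_day * 100, 2))  # 40.0 dollars per day even after a hundredfold price rise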

A price rise is a financial problem, not a real resource constraint, and as such has a financial solution – insurance. Those afraid of a price rise can use forward contracts to lock in the price. Insurance against a very low-probability event like food prices rising a hundred times is cheap (if such insurance is offered, which it might not be due to the low demand).

Why research with more authors gets cited more

Empirically, articles with more authors are cited more, according to Wuchty et al. (2007). The reasons may be good or bad. A good reason is that coauthored papers may have higher quality, e.g. due to division of labour increasing the efficiency of knowledge production. I propose the following bad reasons, independent of potential quality differences between coauthored and solo articles. Suppose that researchers cite the works of their friends more frequently than warranted. A given scientist is more likely to have a friend among the authors of an article with a greater number of collaborators, which increases its probability of getting a “friendly citation”.
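
A sketch of this probability argument, assuming (purely for illustration) that each author is independently a friend of a given citing scientist with probability p:

def prob_some_friend(n_authors, p=0.05):
    # Probability that at least one of the n authors is a friend: 1 - (1 - p)^n.
    return 1 - (1 - p) ** n_authors

for n in (1, 3, 5, 10):
    print(n, round(prob_some_friend(n), 3))
# 1 0.05, 3 0.143, 5 0.226, 10 0.401: the chance of a friendly citation rises with the author count.

The same calculation applies to the defensive-citing and referee arguments below: the chance that some referee is among the authors of a given prior work also rises with that work’s number of authors.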

Another reason is defensive citing, i.e. including relatively unrelated papers in the reference list before submitting to a journal, in case the referees happen to be the authors of those works. The reason for adding these unnecessary citations is the belief, warranted or not, that a referee is more likely to recommend acceptance of a paper if it cites the referee’s publications. The probability that the set of referees overlaps with the set of authors of a given prior work increases in the number of authors of that work. Thus defensive citing is more effective when targeted to collaborative instead of solo papers.

The referees may also directly ask the author to cite certain papers in the revision (I have had this experience). If the referees are more likely to request citations to their own or their coauthors’ work, then articles with more authors are again referenced more.

Valderas et al. (2007) offer some additional explanations. One is measurement error. Suppose that letters to the editor, annual reports of the learned society, its presidential inaugural addresses, and other non-research items in scientific journals are counted as publications. These have both fewer authors and fewer citations than regular research articles, which creates a positive correlation between the popularity of a piece of writing and its number of authors.

If self-citations are not excluded and researchers cite their own work more frequently than that of others, then papers with more authors get cited more.

Articles with more collaborators are presented more frequently, thus their existence is more widely known. Awareness of a work is a prerequisite of citing it, so the wider circulation of multi-author publications gives them a greater likelihood of being referenced, independent of quality.

Bad popular science books

There is a class of books that are marketed as popular science, but have profit from sales as their only goal, disregarding truth. Easily visible signs of these are titles that include clickbait keywords (sex, seduction, death, fear, apocalypse, diet), controversial or emotional topics (evolution, health, psychology theories, war, terrorism), radical statements about these topics (statements opposite to mainstream thinking, common sense or previous research), and big claims about the authors’ qualifications that are actually hollow (a PhD from an obscure institution or not in the field of the book). The authors typically include a journalist (or writer, or some other professional marketer of narratives) and a person who seems to be qualified in the field of the book. Of course these signs are an imperfect signal, but their usefulness is that they are visible from the cover.
Inside such a book, the authors cherry-pick pieces of science and non-science that support the claim that the book makes, and ignore contradicting evidence, even if that evidence is present in the same research articles that the book cites as supporting it. Most pages promise that soon the book will prove the claims that are made on that page, but somehow the book never gets to the proof. It just presents more unfounded claims.
A book of this class does not define its central concepts or claims precisely, so it can flexibly interpret previous research as supporting its claims. The book does not make precise what would constitute evidence refuting its claim, but sets up “straw-man” counterarguments to its claim and refutes them (mischaracterising the actual counterarguments to make them look ridiculous).
Examples of these books that I have read to some extent before becoming exasperated by their demagoguery: Sex at dawn, Games people play.

“What if” is a manipulative question

“What if this bad event happens?” is a question used as a high-pressure sales tactic (for insurance, maintenance, upgrades and various protective measures). People suffering from anxiety or depression also tend to ask that question, which is called catastrophising. The question generates vague fears and is usually unhelpful for finding reasonable preventive or corrective measures for the bad event. Fearful people tend to jump on anything that looks like it might be a prevention or cure, which sometimes makes the problem worse (e.g. quack remedies for an imagined rare disease worsen health).
A more useful question is: “What is the probability of this bad event happening?” This question directs attention to statistics and research about the event. Often, the fear-generating event is so unlikely that it is not worth worrying about. Even if it has significant probability, checking the research on it is more likely to lead to solutions than vague rumination along the lines of “what if.” Even if there are no solutions, statistics on the bad event often suggest circumstances that make it more likely, thus information on which situations or risk factors to avoid.
These points have been made before, as exemplified by the aphorisms “Prepare for what is likely and you are likely to be prepared” and “Safety is an expensive illusion.”

News is gradually biased by re-reporting

The (science) news cycle occurs when the original source is quoted by another news outlet, which is quoted by another outlet, and so on, creating a “telephone game”, a.k.a. “Chinese whispers”, familiar from kindergarten. Each re-reporting introduces noise into the previous report, so the end result may differ diametrically from the original story. This news cycle has been identified and mocked before, e.g. by PhD Comics.
The telephone game of news outlets has an additional aspect that I have not seen mentioned, namely that the re-reporting does not add random noise, but noise that biases the previous source deliberately. Each news outlet, blog or other re-poster has a slant and focusses on those aspects of the story that favour its existing viewpoint.
A single outlet usually does not change the story to the complete opposite of the original, because outright lying is easy to detect and would damage the outlet’s reputation. However, many outlets in a sequence can each bias the story a little, until the final report is the opposite of the original. Each outlet’s biasing decision is difficult to detect, because the small bias is hidden in the noise of rephrasing and selectively copying the previous outlet’s story. So each outlet can claim to report unbiased news, if readers do not question why the outlet used second-hand (really n-th hand) sources, not the original article (the first in the sequence). A single manipulator thus has an incentive to create many websites that report each other’s stories in a sequence.
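A toy simulation of this accumulation, with assumed numbers: the claim is a number on a scale where +1 and -1 are opposite stories, and each outlet copies the previous report while nudging it by a small slant plus rephrasing noise.

import random

def rereport(original=1.0, outlets=10, slant=-0.2, noise_sd=0.1):
    # Each outlet copies the previous report, shifting it a little towards its own viewpoint.
    claim = original
    trajectory = [round(claim, 2)]
    for _ in range(outlets):
        claim = max(-1.0, min(1.0, claim + slant + random.gauss(0, noise_sd)))
        trajectory.append(round(claim, 2))
    return trajectory

print(rereport())  # typically drifts from +1 to about -1: each step is small, the total reversal is complete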
The moral of this text is that to get accurate information, read the original source. Whenever you see an interesting news article, work backward along the sequence of reports to see whether the claims are the same as in the first report. The first report is not guaranteed to be true, but at least the biases and honest errors introduced later can be removed this way.

Sugar-free, fat-free and low-salt claims

The three main ingredients of unhealthy food are sugar, salt and fat. The packaging of junk food often has claims of sugar-free, fat-free or low-salt in big colourful letters on the front. The trick is that the absence of one of the three ingredients is compensated by a larger amount of the other two, as can be checked from the nutrition information label.
Sometimes the claims on the front of the pack directly contradict the nutrition label, so are downright lies. I have seen packaging with the claim “sugar-free” on the front, with sugars listed in significant quantity on the nutrition label. There are some legal sanctions for falsifying the nutrition information label, but almost no restrictions on what can be claimed elsewhere on the pack, so any contradictions should almost always be resolved in favour of the nutrition label.
I have seen a sugar-free claim on a pack on which the ingredient list included brown sugar. This suggests the existence of a legal loophole (brown sugar not equalling sugar somehow) that the manufacturer wanted to use.
If the manufacturer does not want to outright lie, then a trick I have seen is to claim “no added sugar” or “no sugar or artificial sweeteners” on the pack, but add other sweeteners, e.g. sugarcane juice, molasses, high fructose corn syrup. Similarly, “no added salt” can be bypassed by adding salty ingredients, for example dried salted meat or bacon to a snack mix.
Another trick is to create the sugar in the food during the manufacturing process. For example, heating starch for a long time or adding the enzyme amylase breaks the starch into smaller-molecule sugars. So a manufacturer can claim “no added sweeteners” and yet produce sugars in the food by processing the starch in it.
A similar trick for salt is to add sodium and chloride in other ingredients and let them combine into NaCl in the food.

How superstition grows out of science

Priests in Ancient Egypt could predict eclipses and the floods of the Nile by observing the stars and the Moon and recording their previous positions when the events of interest happened. The rest was calculation, nothing magical. Ordinary people saw the priests looking at the stars and predicting events in the future, and thought that the stars magically told priests things and that the prediction ability extended to all future events (births, deaths, outcomes of battles). The priests encouraged this belief, because it gave them more power. This is one way astrology could have developed – by distorting and exaggerating the science of astronomy. Another way is via navigators telling the latitude of a ship using the stars or the sun. People would have thought that if heavenly bodies could tell a navigator his location on the open sea, then why not other secrets?
Engineers in Ancient Rome calculated the strength of bridges and aqueducts, and estimated the amount of material needed for these works. Ordinary people saw the engineers playing with numbers and predicting the amount of stones needed for a house or a fort. Numbers “magically” told engineers about the future, and ordinary people thought this prediction ability extended to all future events. Thus the belief in numerology could have been born.
When certain plants were discovered to have medicinal properties against certain diseases, then swindlers imitated doctors by claiming that other natural substances were powerful cures against whatever diseases. The charlatans and snake oil salesmen distorted and exaggerated medicine.
Doctors diagnosed diseases by physical examination before laboratory tests were invented. Thus a doctor could look at parts of a person’s body, tell what diseases the person had, and predict the symptoms that the person would experience in the future. Exaggerating this, palm readers claimed to predict a person’s future life course by looking at the skin of their palm.
In the 20th century, some medicines were discovered to be equally effective at somewhat lower doses than previously thought. Then homeopathy exaggerated this by claiming that medicines are effective when diluted so much that on average not a single molecule of the drug remains in the water given to the patient.
In all these cases, superstition only adds bias and noise to scientific results. Science does not know everything, but it is a sufficient statistic (https://en.wikipedia.org/wiki/Sufficient_statistic) for superstitious beliefs, in the sense that any true information contained in superstition is also contained in science. Nothing additional can be learned from superstition once the scientific results are known.

Scientific thinking coordination game

If most people in a society use the scientific method for decision-making, then telling stories will not persuade them – they will demand evidence. In that case, bullshit artists and storytellers will not have much influence. It is then profitable to learn to provide evidence, which is positively correlated with learning to understand and use evidence. If young people respond to incentives and want to become influential in society (get a high income and social status), then young people will learn and use the scientific method, which reinforces the demand for evidence and reduces the demand for narratives.
If most people are not scientifically minded, but believe stories, then it is profitable to learn to tell stories. The skilled storytellers will be able to manipulate people, thus will gain wealth and power. Young people who want to climb the social and income ladder will then gravitate towards narrative fields of study. They will not learn to understand and use evidence, which reinforces the low demand for evidence.
Both the scientific and the narrative society are self-reinforcing, thus there is a coordination game in which people choose to become evidence-users or storytellers. Note that using the scientific method does not mean being a scientist. Most researchers who I have met do not use science in their everyday decisions, but believe the stories they read in the media or hear from their friends. I have met Yale PhDs in STEM fields who held beliefs that most people in the world would agree are false.
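A minimal sketch of the coordination game, with made-up payoffs in which each skill pays more the larger the share of society already using it:

def payoff(skill, share_scientific):
    # Assumed payoffs: the return to a skill grows with the share of the audience receptive to it.
    if skill == "evidence":
        return 2 * share_scientific       # evidence pays when audiences demand evidence
    return 2 * (1 - share_scientific)     # storytelling pays when audiences believe stories

for share in (0.2, 0.5, 0.8):
    print(share, payoff("evidence", share), payoff("stories", share))
# At 0.2 storytelling pays more, at 0.8 evidence pays more, and 0.5 is the tipping point,
# so both the mostly-scientific and the mostly-narrative society are self-reinforcing equilibria.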
One signal of not thinking scientifically is asking people what the weather is like in some place one has not visited (I don’t mean asking in order to make small talk, but asking to gain information). Weather statistics for most places in the world are available online and are much more accurate than acquaintances’ opinions of the weather. This is because weather statistics are based on a much longer time series and on physically measured temperature, rainfall, wind, etc, not on a person’s guess of these variables.

Remembering the sacrifice

Many times and in many places I have seen a call to remember the sacrifice of the soldiers who died in some past conflict. Often, this call seems an attempt to direct attention away from the misguidedness of the particular conflict or the incompetence and selfish motives of the leadership who decided to enter the war. It tries to make people focus on the noble courage of the soldiers, not the sordid power-hunger of the rulers. The bravery of soldiers is discussed in another post (http://sanderheinsalu.com/ajaveeb/?p=595); here I would like to clarify this sacrifice business.

If the soldiers volunteered under reasonably accurate information about the reasons for the conflict and the chances of success, then indeed they chose to sacrifice themselves for the cause. Then we should remember their sacrifice. If, however, they were conscripted (dictatorships often call this volunteering) using the threat of punishment for them or their family, then they did not make the sacrifice any more than a sacrificial animal sacrifices itself. Others sacrificed the conscripts to further their own ends.

These ends are unlikely to prioritize defeating an evil regime and making the world a better place, although the propaganda claims this was the objective. Mostly the goal of leaders is to preserve and expand their power, whether by defending the country against takeover or conquering additional subjects and wealth. Even if this is acknowledged, current propaganda may point out some good side effect of sacrificing the soldiers, e.g. defeating an old enemy. This is again a distraction attempt. The world is complex and interconnected, so every event, including a mass death, has some beneficial side effects, just like every event has some negative side effects. One should consider the overall consequences of an event, not just one side effect.

If the soldiers genuinely volunteered, but due to being misled by propaganda, then they wanted to sacrifice themselves for one cause, but their leaders sacrificed them for another. Usually volunteers underestimate the length of the war and the probability of dying. Thus even when they know the true goal of the conflict, the sacrifice they are led to is larger than the one they intended to make.

The most clear and direct self-sacrifice is made by suicide bombers. They probably think that their bombing serves a good purpose, but such belief is almost always misguided. Religious indoctrination of the bombers manipulates them into believing in a noble cause, hiding the true goals of the leaders ordering the bombing.

I have not heard many calls to remember the sacrifice of present-day child soldiers. Rather, there are calls to pity and save them. The situation of many soldiers in many wars has been similar to that of children forced to fight – ignorance and fear of punishment. Obeying the conscription order often offers a greater survival probability than refusal.

Instead of remembering the sacrifice of conscripts, we should remember them being sacrificed. Remember with pity. Remember to prevent.