Tag Archives: manipulation

Political parties claim inconsistent patience

If the stated preferences of politicians are taken at face value, then they have inconsistent patience across different policy areas. Left-wingers want to invest in education, infrastructure and prevention of climate change. These investments have a present cost and a long-delayed benefit, which suggests patient preferences (a high discount factor, a low discount rate). On the other hand, the left wants to increase borrowing, redistributive transfers and government spending in general, which have a current benefit (including an electoral one, but focus on the societal benefit for now) and a future cost. Preferring a current benefit and a future cost implies impatience.
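A minimal numeric illustration of this discounting logic, with made-up payoffs (the payoff numbers and the one-period horizon are assumptions chosen only to show the sign of the comparison):

```python
# Present value of a policy that pays "now" today and "later" next period,
# discounted with factor delta (patient: delta near 1; impatient: delta near 0).
def present_value(now, later, delta):
    return now + delta * later

investment = (-10, 12)  # education/infrastructure/climate: current cost, delayed benefit
borrowing = (10, -15)   # borrowing/transfers/spending: current benefit, delayed cost

for delta in (0.9, 0.5):
    print(f"delta={delta}:",
          f"investment PV={present_value(*investment, delta):+.1f},",
          f"borrowing PV={present_value(*borrowing, delta):+.1f}")
# delta=0.9: investment PV=+0.8, borrowing PV=-3.5  (patience favours investing)
# delta=0.5: investment PV=-4.0, borrowing PV=+2.5  (impatience favours borrowing)
```

With these payoffs, the investment is worthwhile only for a discount factor above 10/12 ≈ 0.83 and the borrowing only for one below 10/15 ≈ 0.67, so no single discount factor rationalises wanting both, which is the inconsistency described above.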

For right-wing parties, these preferences are switched (impatient on education and climate, patient on redistribution), so their inconsistency is the mirror image of the one described above. In summary, both sides of the political divide claim policy preferences that simultaneously imply patience and impatience, which suggests motives other than societal benefit. One way to reason about these other motives is described at https://sanderheinsalu.com/ajaveeb/?p=1015

M-diagram of politics

Suppose a politician claims that X is best for society. Quiz:

1. Should we infer that X is best for society?

2. Should we infer that the politician believes that X is best for society?

3. Should we infer that X is best for the politician?

4. Should we infer that X is best for the politician among policies that can be ‘sold’ as best for society?

5. Should we infer that the politician believes that X is best for the politician?

This quiz illustrates the general principle in game theory that players best-respond to their perceptions, not reality. Sometimes the perceptions may coincide with reality. Equilibrium concepts like Nash equilibrium assume that on average, players have correct beliefs.

The following diagram illustrates the reasoning of the politician claiming X is best for society. In case the M-diagram does not load, here is its description: the top row has ‘Official goal’ and ‘Real goal’, the bottom row has ‘Best way to the official goal’, ‘Best way to the real goal that looks like a reasonable way to the official goal’ and ‘Best way to the real goal’. Arrows point in an M-shaped pattern from the bottom-row items to the top-row items. The arrow from ‘Best way to the real goal that looks like a reasonable way to the official goal’ to ‘Official goal’ is the constraint on the claims of the politician.

The correct answer to the quiz is 5.

This post is loosely translated from the original Estonian one https://www.sanderheinsalu.com/ajaveeb/?p=140

Platform providers fake being popular

Crowdfunding platforms, stock exchanges and other providers of two-sided markets want to appear popular, because having more buyers attracts more sellers and vice versa. The platform’s revenue is usually proportional to the number of users, because it charges a commission fee on trades or advertisers pay it to show ads to users. The exchange’s marginal cost of a user is close to zero, so extra users are almost pure profit, which gives the platform an incentive to fake a high volume of trades, a large limit order book and a small bid-ask spread.

The platform’s cost of posting a great volume of outstanding buy and sell orders at a small spread is that many investors try to trade at these favourable bid and ask prices. Either the market maker has to take the other side of these attempted transactions, or it is exposed as fraudulent. Taking the other side results in a large loss if some investors are better informed than the exchange.

The platform could falsely display a large trading volume, but keep the order book honestly small by adding fake trades at prices between the bid and the ask only, so no investor’s real limit order is ignored. This seems difficult to detect, unless one side of the limit order book is empty (e.g. no buyers) and at least one at-market order on the other side (e.g. a sell) is outstanding. In this case, any trades occurring would have to satisfy the at-market order. However, the platform or real investors can then take the other side of the at-market order at a very favourable price to themselves, which discourages at-market orders. A large trading volume with a thin order book is still slightly suspicious, because it requires that crossing buy and sell orders between the bid and ask prices arrive almost simultaneously, in order to be matched without appearing on the order book for long, and without triggering the real limit orders. Displaying the fake buys and sells on the order book risks attracting actual matching trades, which the platform would have to honour (at a cost).

Without automated quote matching (for example, on the Funderbeam crowdfunding platform), there are no at-market orders. Instead, everyone either posts a limit order or picks an order from the other side to trade with, e.g. a buyer chooses a sell. Investors can pick an order with a worse price (a higher sell or a lower buy) than the best available on the other side, which frequently occurs on Funderbeam. Choosing a worse price is irrational unless the traders in question are colluding, in which case the asset effectively does not change ownership. Reasons to carry out such seemingly irrational trades are to manipulate price and volume, e.g. the price can be raised or lowered by targeted trades outside the bid-ask interval. Volume can only increase with added trades, rational or not, and such seemingly greater activity is exactly what benefits the stakeholders of the platform. The employees of the market maker have a natural motive to fake-trade between themselves to make their firm look good, even without any inappropriate pressure from their boss.

Another way to attract issuers and investors is to demonstrate successful initial public offerings, meaning that the funds are raised quickly (good for issuers) and the price of the newly listed stock (or other asset) goes up, which benefits investors. Adding fake capital-raisers is difficult, because potential investors will check the background of the supposed issuer. Inserting spoof investors into an actual funding campaign is costly, because real money would have to be invested. One way to manipulate popularity upward is to simultaneously add a fake issuer and fake investors who satisfy its funding need. The idea is to not leave time for real investors to participate in the campaign, by pretending that the capital-raiser achieved its target funding level before most investors could react. This is easier in markets with a small number of real investors and without an auto-invest feature. However, the real investors who were supposedly pre-empted may still research the supposedly very popular issuer.

A costless way to briefly boost the popularity of a real fundraising campaign is to add fake investors after the target funding is achieved, and forbid issuers from increasing the target or accepting funds from those who subscribed after the goal was reached. Any campaign meeting its target can then be made to look heavily oversubscribed. However, if the issuers are informed in advance of the restriction not to increase the target, then they may find an alternative unrestricted platform to raise funds. On the other hand, if the restriction is not mentioned beforehand, then it will likely anger the issuers who will then create negative publicity for the platform. Competition between exchanges thus curtails their manipulation incentives.

The platform can motivate real investors to raise their bids when the campaign reaches its target by rationing demand: bidders in an oversubscribed share issue get only a fraction of what they wanted to buy. Anticipating this, buyers will increase their requested quantities so that the allocated fraction of the new, larger bid equals their actual demand. This makes the campaign look oversubscribed and creates a feedback loop: if other investors increase their quantities, then rationing reduces the fraction of a given investor’s demand that will be satisfied, so this investor raises her or his requested amount, which in turn makes others increase theirs.

If investors know of the bid rationing in advance, then they may select a rival market provider without this restriction, but if rationing takes them by surprise, then they may leave and publicly criticise the platform. Capital-raisers compare exchanges, so if many market providers inflate demand and the issuers pay attention to the level of oversubscription (instead of the fraction of campaigns reaching the target, which is what should matter to the capital-raiser), then the biggest inflator wins. Of course, platforms may not want to reveal unsuccessful campaigns (e.g. Funderbeam does not), so public data on the fraction of issuers who achieved their funding goal is unlikely to exist.

Theoretically, the feedback from bid rationing to increased quantity demanded could lead to infinite amounts requested. A countervailing incentive is that with positive probability, other investors do not honour their commitment to buy, in which case a given investor may be required to buy the amount (s)he demanded, instead of the lower amount (s)he actually wanted. If there is no commitment to buy (for example, on Funderbeam the bids are only non-binding indications of interest), then the danger of overcommitting is absent, so the rational choice seems to be requesting an infinite amount. Investors do not indicate infinite interest, so either they are irrational or some other penalty exists for outbidding one’s capability to pay.
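A minimal simulation of the rationing feedback loop described above, under stylised assumptions (a fixed funding target, pro-rata allocation, and investors who naively request their true demand divided by last round’s fill fraction; none of this is taken from an actual platform’s rules):

```python
# Stylised bid-rationing feedback: requests inflate every round.
target = 100.0                       # funding target of the campaign
actual_demand = [30.0, 40.0, 50.0]   # what each investor truly wants to buy
requests = actual_demand[:]          # round 1: investors ask for their true demand

for round_no in range(1, 8):
    total = sum(requests)
    fill = min(1.0, target / total)  # pro-rata fraction of each request that is filled
    print(f"round {round_no}: total requested = {total:7.1f}, fill fraction = {fill:.3f}")
    # Each investor inflates the next request so that fill * request = true demand.
    requests = [d / fill for d in actual_demand]
```

Because total true demand exceeds the target, the total requested grows by the same factor (here 1.2) every round and never settles, which is the divergence towards infinite requests noted above; only some penalty for over-requesting, such as a binding commitment to buy, caps it.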

Delivered food and restaurants are unhealthy due to moral hazard

Consumers observe the taste and cost of food directly, but checking the ingredients for healthiness takes extra effort and time. Rational inattention then implies that eaters are unlikely to verify the health claims. Thus food suppliers are subject to moral hazard: consumers buy based on the healthiness they expect, not the actual ingredients the seller chooses, so the seller has an incentive to improve taste, reduce the production cost and cut price even when this makes the food less healthy.

The standard solutions to moral hazard from economic theory are verification, repeated interaction and vertical integration (selling the firm). In the context of food, safety standards and truth-in-advertising laws restrict the substances manufacturers may add and the claims they can make. Regulators verify the claims made and punish illegal additives or false advertising. Also, if a food supplier is found to use unhealthy ingredients (or unhealthy amounts of sugar, salt and fat), then some consumers may switch to alternative providers, which is a repeated-game punishment for the original seller.

The weakness of both verification and repeated interaction is imperfect monitoring: small increases in unhealthy substances are difficult to detect, because tests are noisy and food composition varies naturally. The variation sometimes makes the amount of an ingredient exceed the healthy limit, so honest suppliers would also be punished with positive probability. Incentives are created by the difference in payoffs, so reducing the payoff of the honest decreases their motive to stay honest. The imperfect monitoring allows unscrupulous sellers to outcompete the providers of healthy food on taste and price, for example by using various tricks to circumvent the legal requirements on labelling (https://sanderheinsalu.com/ajaveeb/?p=728).
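A small numerical illustration of that payoff-difference argument, with made-up profits, detection probabilities and fines (none of these numbers come from data):

```python
# Per-period payoffs under a noisy test that sometimes flags honest suppliers too.
profit_honest = 10.0   # profit from the genuinely healthy recipe
profit_cheat = 13.0    # higher profit from the cheaper, tastier, unhealthier recipe
fine = 20.0            # penalty when a test flags the product

p_flag_honest = 0.10   # false positive: natural variation pushes an ingredient over the limit
p_flag_cheat = 0.25    # probability that a small amount of cheating is caught

payoff_honest = profit_honest - p_flag_honest * fine  # 10 - 2 = 8
payoff_cheat = profit_cheat - p_flag_cheat * fine     # 13 - 5 = 8

print(payoff_honest - payoff_cheat)  # 0.0: the incentive to stay honest has vanished
```

With perfect monitoring (p_flag_honest = 0) the honest payoff would be 10 against 8, so the false positives created by noisy tests and natural variation are exactly what erodes the motive to stay honest.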

The remaining solution to the moral hazard problem is vertical integration of the buyer and the supplier, i.e. home cooking. Of course, the ingredients bought to be cooked at home are subject to similar moral hazard – unhealthy substances can be added at any stage of the production process. The risk could in principle be even larger than for processed foods and restaurant meals, but in practice, it seems that simple and unprocessed ingredients are more difficult to manipulate than prepared meals, which are a mixture of many components. Adding sugar, salt, fat or monosodium glutamate to flour, rice or dry beans without mentioning it on the nutrition label is easier to detect than the same (amounts of) additives in shrimp fried rice, bread or a burrito. Raw meats and fish do have extra salt and food colouring added, but usually less than for ready-to-eat meals.

Relative prices are another reason why there may be less manipulation of ingredients than of processed foods. There is a per-unit cost of adding unhealthy substances, as well as a fixed cost due to the risk of lawsuits and fines, especially if the additives are not declared on the label. Unprocessed ingredients are less differentiated, so their price competition is more intense. The increase in the price that customers are willing to pay if an ingredient tastes better than the competitors’ may be small if price is the main dimension of competition. The slightly higher price may not justify the per-unit cost of the additives. In contrast, for processed foods the margin may respond greatly to taste, motivating manipulation of the ingredients.

The taste of the final dish is likely to respond less to manipulating one ingredient than to altering the composition of the entire food, both because the ingredient may be only a small part of the final dish and because the taste of a dish is largely determined by the seasoning and the cooking method. In this case, additives to ingredients do not improve taste that much, reducing the profitability of manipulating these.

Intense price competition motivates cost-cutting, including by substituting cheaper ingredients or using additives (e.g. preservatives) that reduce the manufacturing cost. However, if the additives cost more than they save on production cost (such as preservatives for dry goods that already keep indefinitely), then they are unprofitable to include.

Demand for cooking ingredients may also respond less to price and taste than demand for restaurant meals or delivered food (raw ingredients may even be an inferior good, whereas eating out is more like a luxury good). In this case, there is a range of fixed costs of unhealthy substances for which adding them to ingredients is unprofitable, but adding them to processed foods is profitable.
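A minimal sketch of that fixed-cost range, with illustrative margins, per-unit costs and quantities (all numbers are assumptions chosen only to exhibit the inequality):

```python
# An additive is profitable iff extra_margin * quantity > per_unit_cost * quantity + fixed_cost,
# i.e. iff fixed_cost < (extra_margin - per_unit_cost) * quantity.
def max_profitable_fixed_cost(extra_margin, per_unit_cost, quantity):
    """Largest fixed cost (lawsuit/fine risk) at which the additive still pays off."""
    return (extra_margin - per_unit_cost) * quantity

# Raw ingredient: taste barely shifts willingness to pay, so the extra margin is tiny.
raw = max_profitable_fixed_cost(extra_margin=0.02, per_unit_cost=0.01, quantity=100_000)
# Processed food: taste shifts willingness to pay a lot.
processed = max_profitable_fixed_cost(extra_margin=0.30, per_unit_cost=0.01, quantity=100_000)

print(round(raw), round(processed))  # 1000 29000
# Any fixed cost between these two thresholds makes the additive unprofitable
# in the raw ingredient but profitable in the processed food.
```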

Food security is a manipulative term

Food security is a manipulative political code phrase designed to scare people and thereby make them support agricultural subsidies, as I have written before. The fear is created by association with sieges before the age of gunpowder, where castles were starved into submission. In modern times, no enemy is silly enough to try to surround and starve a country that is not a city state (e.g. Singapore), because any enemy with a large enough force to prevent food from getting into a country is also strong enough to conquer it quickly by frontal attack. Even unintentional starvation is a public relations disaster (e.g. Yemen), as is a war that drags on, but a quick takeover without too many casualties (e.g. Crimea) actually increases the conqueror’s leader’s popularity in domestic politics.

Even if an enemy were stupid and tried to starve a country, the defence against this is not farm subsidies, but many distributed small stockpiles of food. Farms as a food supply are easy to destroy by firebombing the crops and livestock from the air. A small number of large centralised stockpiles are also vulnerable. However, if each household is obliged to keep n months’ worth of non-perishable food at home, then starving the country into submission would take at least n months and bombardment would not shorten that period.

What is really meant by food security is that food prices might rise. However, in all except the very poorest countries in the world, food is so cheap that any reasonable price rise would not cause starvation. For example, according to the USDA, 9 medium baked potatoes fulfill all the nutritional needs of an adult. Similarly, people can survive for a long time eating just wheat flour and water. Wheat flour costs 80 cents per kilo, and a kilo of it has 3600 kcal, which is enough for an adult for two days. The price of flour would have to rise at least a hundred times for the cost to lead to starvation in developed countries. Other emergency foods that do not go bad and can be prepared without heating are also cheap, e.g. milk powder, instant oatmeal, canned meats and vegetables.
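The arithmetic behind the hundredfold figure, as a quick check (the 80-cent price and 3600 kcal per kilo are the numbers above; the 2000 kcal daily requirement is a standard benchmark):

```python
price_per_kg = 0.80   # wheat flour, USD per kilo
kcal_per_kg = 3600    # energy content of wheat flour
kcal_per_day = 2000   # rough adult daily energy requirement

cost_per_day = price_per_kg * kcal_per_day / kcal_per_kg
print(round(cost_per_day, 2))        # 0.44 USD/day on the flour-and-water survival diet
print(round(100 * cost_per_day, 2))  # 44.44 USD/day after a hundredfold price rise
```

Less than half a dollar a day at current prices; only at around a hundredfold rise does the survival ration start to strain a developed-country budget, which is the claim above.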

A price rise is a financial problem, not a real resource constraint, and as such has a financial solution – insurance. Those afraid of a price rise can use forward contracts to lock in the price. Insurance against a very low-probability event like food prices rising a hundred times is cheap (if such insurance is offered, which it might not be due to the low demand).

Why research with more authors gets cited more

Empirically, articles with more authors are cited more, according to Wuchty et al. (2007). The reasons may be good or bad. A good reason is that coauthored papers may have higher quality, e.g. due to division of labour increasing the efficiency of knowledge production. I propose the following bad reasons, independent of potential quality differences between coauthored and solo articles. Suppose that researchers cite the works of their friends more frequently than warranted. A given scientist is more likely to have a friend among the authors of an article with a greater number of collaborators, which increases its probability of getting a “friendly citation”.
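A minimal sketch of why this probability rises with the number of authors, assuming (purely for illustration) that each author independently has the same chance p of being a friend of the citing scientist:

```python
# P(at least one friend among n independent authors) = 1 - (1 - p)**n
p = 0.05  # assumed chance that any single author is a friend of the citing scientist

for n in (1, 3, 5, 10):
    print(f"{n} authors: P(a friend among them) = {1 - (1 - p) ** n:.3f}")
# 1 author: 0.050, 3 authors: 0.143, 5 authors: 0.226, 10 authors: 0.401
```

The same formula applies to the defensive-citing and referee stories below, with p reinterpreted as the chance that any single author of the cited work ends up as a referee.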

Another reason is defensive citing, i.e. including relatively unrelated papers in the reference list before submitting to a journal, in case the referees happen to be the authors of those works. The reason for adding these unnecessary citations is the belief, warranted or not, that a referee is more likely to recommend acceptance of a paper if it cites the referee’s publications. The probability that the set of referees overlaps with the set of authors of a given prior work increases in the number of authors of that work. Thus defensive citing is more effective when targeted to collaborative instead of solo papers.

The referees may also directly ask the author to cite certain papers in the revision (I have had this experience). If the referees are more likely to request citations to their own or their coauthors’ work, then articles with more authors are again referenced more.

Valderas et al. (2007) offer some additional explanations. One is measurement error. Suppose that letters to the editor, annual reports of the learned society, its presidential inaugural addresses, and other non-research pieces in scientific journals are counted as publications. These have both fewer authors and fewer citations than regular research articles, which creates a positive correlation between the popularity of a piece of writing and its number of authors.

If self-citations are not excluded and researchers cite their own work more frequently than that of others, then papers with more authors get cited more.

Articles with more collaborators are presented more frequently, thus their existence is more widely known. Awareness of a work is a prerequisite of citing it, so the wider circulation of multi-author publications gives them a greater likelihood of being referenced, independent of quality.

Bad popular science books

There is a class of books that are marketed as popular science, but have profit from sales as their only goal, disregarding truth. Easily visible signs of these are titles that include clickbait keywords (sex, seduction, death, fear, apocalypse, diet), controversial or emotional topics (evolution, health, psychology theories, war, terrorism), radical statements about these topics (statements opposite to mainstream thinking, common sense or previous research), and big claims about the authors’ qualifications that are actually hollow (a PhD from an obscure institution or not in the field of the book). The authors typically include a journalist (or writer, or some other professional marketer of narratives) and a person who seems to be qualified in the field of the book. Of course, these signs are an imperfect signal, but they are useful because they are visible from the cover.
Inside such a book, the authors cherry-pick pieces of science and non-science that support the claim that the book makes, and ignore contradicting evidence, even if that evidence is present in the same research articles that the book cites as supporting it. Most pages promise that soon the book will prove the claims that are made on that page, but somehow the book never gets to the proof. It just presents more unfounded claims.
A book of this class does not define its central concepts or claims precisely, so it can flexibly interpret previous research as supporting its claims. The book does not make precise what would constitute evidence refuting its claim, but sets up “straw-man” counterarguments to its claim and refutes them (mischaracterising the actual counterarguments to make them look ridiculous).
Examples of these books that I have read to some extent before becoming exasperated by their demagoguery: Sex at dawn, Games people play.

“What if” is a manipulative question

“What if this bad event happens?” is a question used as a high-pressure sales tactic (for insurance, maintenance, upgrades and various protective measures). People suffering from anxiety or depression also tend to ask that question, which is called catastrophising. The question generates vague fears and is usually unhelpful for finding reasonable preventive or corrective measures for the bad event. Fearful people tend to jump on anything that looks like it might be a prevention or cure, which sometimes makes the problem worse (e.g. quack remedies for an imagined rare disease worsen health).
A more useful question is: “What is the probability of this bad event happening?” This question directs attention to statistics and research about the event. Often, the fear-generating event is so unlikely that it is not worth worrying about. Even if it has significant probability, checking the research on it is more likely to lead to solutions than vague rumination along the lines of “what if.” Even if there are no solutions, statistics about the bad event often suggest circumstances that make it more likely, thus information on which situations or risk factors to avoid.
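A minimal sketch of replacing “what if” with numbers, using made-up probabilities and costs (all figures are illustrative assumptions, and risk aversion is ignored):

```python
# Compare the expected loss from the feared event with the price of the offered protection.
p_event = 1e-4           # assumed annual probability of the bad event
loss_if_event = 5_000.0  # assumed loss if the event happens
protection_cost = 50.0   # assumed annual price of the insurance / upgrade / remedy

expected_loss = p_event * loss_if_event
print(expected_loss, protection_cost)  # 0.5 vs 50.0
# Here the protection costs 100 times the expected loss, so the "what if"
# pitch is selling fear rather than value.
```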
These points have been made before, as exemplified by the aphorisms “Prepare for what is likely and you are likely to be prepared” and “Safety is an expensive illusion.”

News is gradually biased by re-reporting

The (science) news cycle occurs when the original source is quoted by another news outlet, which is quoted by another outlet, and so on, creating a “telephone game”, a.k.a. “Chinese whispers”, familiar from kindergarten. Each re-reporting introduces noise to the previous report, so the end result may differ diametrically from the original story. This news cycle has been identified and mocked before, e.g. by PhD Comics.
The telephone game of news outlets has an additional aspect that I have not seen mentioned, namely that the re-reporting does not add random noise, but noise that biases the previous source deliberately. Each news outlet, blog or other re-poster has a slant and focusses on those aspects of the story that favour its existing viewpoint.
A single outlet usually does not change the story to the complete opposite of the original, because outright lying is easy to detect and would damage the outlet’s reputation. However, many outlets in a sequence can each bias the story a little, until the final report is the opposite of the original. Each outlet’s biasing decision is difficult to detect, because the small bias is hidden in the noise of rephrasing and selectively copying the previous outlet’s story. So each outlet can claim to report unbiased news, if readers do not question why the outlet used second-hand (really n-th hand) sources, not the original article (the first in the sequence). A single manipulator thus has an incentive to create many websites that report each other’s stories in a sequence.
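A minimal simulation of that gradual drift, assuming a one-dimensional “slant” scale and a per-outlet bias small enough to hide in ordinary rephrasing noise (the scale and all numbers are illustrative assumptions):

```python
import random

random.seed(0)

position = 0.0   # the original story, on a slant scale from -1 to +1
noise_sd = 0.05  # honest rephrasing noise per re-report
bias_step = 0.04 # deliberate per-outlet push, smaller than the noise

for outlet in range(1, 21):
    position += random.gauss(0, noise_sd) + bias_step
    if outlet % 5 == 0:
        print(f"after {outlet} outlets: position = {position:+.2f}")
# No single step stands out against the noise, yet the expected cumulative
# drift after 20 outlets is 20 * 0.04 = 0.8 on a -1..1 scale, enough to
# flip the slant of the story.
```

The per-step bias is buried in the rephrasing noise, which is why comparing only adjacent links in the chain does not expose the manipulation.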
The moral of this text is that to get accurate information, read the original source. Whenever you see an interesting news article, work backward along the sequence of reports to see whether the claims are the same as in the first report. The first report is not guaranteed to be true, but at least the biases and honest errors introduced later can be removed this way.

Sugar-free, fat-free and low-salt claims

The three main ingredients of unhealthy food are sugar, salt and fat. The packaging of junk food often has claims of sugar-free, fat-free or low-salt in big colourful letters on the front. The trick is that the absence of one of the three ingredients is compensated by a larger amount of the other two, as can be checked from the nutrition information label.
Sometimes the claims on the front of the pack directly contradict the nutrition label, so are downright lies. I have seen packaging with the claim “sugar-free” on the front, with sugars listed in significant quantity on the nutrition label. There are some legal sanctions for falsifying the nutrition information label, but almost no restrictions on what can be claimed elsewhere on the pack, so any contradictions should almost always be resolved in favour of the nutrition label.
I have seen a sugar-free claim on a pack on which the ingredient list included brown sugar. This suggests the existence of a legal loophole (brown sugar not equalling sugar somehow) that the manufacturer wanted to use.
If the manufacturer does not want to outright lie, then a trick I have seen is to claim “no added sugar” or “no sugar or artificial sweeteners” on the pack, but add other sweeteners, e.g. sugarcane juice, molasses, high fructose corn syrup. Similarly, “no added salt” can be bypassed by adding salty ingredients, for example dried salted meat or bacon to a snack mix.
Another trick is to create the sugar in the food during the manufacturing process. For example, heating starch for a long time or adding the enzyme amylase breaks the starch into smaller-molecule sugars. So a manufacturer can claim “no added sweeteners” and yet produce sugars in the food by processing the starch in it.
A similar trick for salt is to add sodium and chloride in other ingredients and let them combine into NaCl in the food.