Tag Archives: manipulation

Platform providers fake being popular

Crowdfunding platforms, stock exchanges and other providers of two-sided markets want to appear popular, because having more buyers attracts more sellers and vice versa. The platform’s revenue is usually proportional to the number of users, because it charges a commission fee on trades or advertisers pay it to show ads to users. The exchange’s marginal cost of a user is close to zero, giving it an incentive to fake a high volume of trades, a large limit order book and a small bid-ask spread.

The platform’s cost of posting a large volume of outstanding buy and sell orders at a small spread is that many investors will try to trade at these favourable bid and ask prices. The platform must then either take the other side of these attempted transactions or be exposed as fraudulent. Taking the other side results in a large loss if some investors are better informed than the exchange.

The platform could falsely display a large trading volume, but keep the order book honestly small by adding fake trades at prices between the bid and the ask only, so no investor’s real limit order is ignored. This seems difficult to detect, unless one side of the limit order book is empty (e.g. no buyers) and at least one at-market order on the other side (e.g. a sell) is outstanding. In this case, any trades occurring would have to satisfy the at-market order. However, the platform or real investors can then take the other side of the at-market order at a very favourable price to themselves, which discourages at-market orders. A large trading volume with a thin order book is still slightly suspicious, because it requires that crossing buy and sell orders between the bid and ask prices arrive almost simultaneously, in order to be matched without appearing on the order book for long, and without triggering the real limit orders. Displaying the fake buys and sells on the order book risks attracting actual matching trades, which the platform would have to honour (at a cost).
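To make the suspicion concrete, here is a minimal sketch (with made-up field names and thresholds, not any platform’s actual data format) of how an outside observer might flag the combination described above: reported volume that is large relative to the displayed order book, with every trade printing strictly inside the bid-ask spread.

```python
# Hypothetical heuristic for the pattern described above. All field names
# and the threshold are illustrative assumptions, not real exchange data.

def suspicion_flags(trades, best_bid, best_ask, book_depth, depth_ratio=10.0):
    """trades: list of (price, quantity) prints over some time window."""
    volume = sum(quantity for _, quantity in trades)
    inside = [price for price, _ in trades if best_bid < price < best_ask]
    flags = []
    if book_depth > 0 and volume / book_depth > depth_ratio:
        flags.append("volume large relative to displayed depth")
    if trades and len(inside) == len(trades):
        flags.append("every trade printed strictly inside the spread")
    return flags

print(suspicion_flags([(10.2, 500), (10.3, 700)],
                      best_bid=10.0, best_ask=10.5, book_depth=100))
```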

On platforms without automated order matching, such as the Funderbeam crowdfunding platform, there are no at-market orders. Instead, everyone either posts a limit order or picks an order from the other side to trade with, e.g. a buyer chooses a sell. Investors can pick an order with a worse price (a higher sell or a lower buy) than the best available on the other side, which frequently occurs on Funderbeam. Choosing a worse price is irrational unless the traders in question are colluding, in which case the asset is effectively not changing ownership. Reasons to carry out such seemingly irrational trades are to manipulate price and volume: the price can be raised or lowered by targeted trades outside the bid-ask interval, and any added trade, rational or not, increases volume. Such seemingly greater activity is exactly what benefits the stakeholders of the platform. The employees of the market maker have a natural motive to fake-trade between themselves to make their firm look good, even without any inappropriate pressure from their boss.

Another way to attract issuers and investors is to demonstrate successful initial public offerings, meaning that the funds are raised quickly (good for issuers) and the price of the newly listed stock (or other asset) goes up, which benefits investors. Adding fake capital-raisers is difficult, because potential investors will check the background of the supposed issuer. Inserting spoof investors into an actual funding campaign is costly, because real money would have to be invested. One way to manipulate popularity upward is to simultaneously add a fake issuer and fake investors who satisfy its funding need. The idea is to not leave time for real investors to participate in the campaign, by pretending that the capital-raiser achieved its target funding level before most investors could react. This is easier in markets with a small number of real investors and without an auto-invest feature. However, the real investors who were supposedly pre-empted may still research the supposedly very popular issuer.

A costless way to briefly boost the popularity of a real fundraising campaign is to add fake investors after the target funding is achieved, and forbid issuers from increasing the target or accepting funds from those who subscribed after the goal was reached. Any campaign meeting its target can then be made to look heavily oversubscribed. However, if the issuers are informed in advance of the restriction not to increase the target, then they may find an alternative unrestricted platform to raise funds. On the other hand, if the restriction is not mentioned beforehand, then it will likely anger the issuers who will then create negative publicity for the platform. Competition between exchanges thus curtails their manipulation incentives.

The platform can motivate real investors to raise their bids when the campaign reaches its target by rationing demand: bidders in an oversubscribed share issue get only a fraction of what they wanted to buy. Anticipating this, buyers will increase their requested quantities so that the fraction of their new bid equals their actual demand. This makes the campaign look oversubscribed and creates a feedback loop: if other investors increase their quantities, then rationing reduces the fraction of a given investor’s demand that will be satisfied, so this investor raises her or his requested amount, which in turn makes others increase theirs.
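To illustrate why this loop need not settle down, below is a minimal simulation under assumed numbers: 10 identical investors who each truly want 100 shares, 800 shares on offer, and pro-rata rationing. Whenever total true demand exceeds supply, each bid inflates by a constant factor every round, so the requested amounts grow without bound.

```python
# Sketch of the bid-rationing feedback loop; all numbers are assumptions.
# An investor who expects fill fraction f bids demand / f, so that the
# rationed allocation equals what the investor actually wants.

def simulate_bids(n=10, demand=100.0, supply=800.0, rounds=6):
    bid = demand                                # round 0: bid true demand
    for t in range(rounds):
        fill = min(1.0, supply / (n * bid))     # pro-rata fill fraction
        print(f"round {t}: bid={bid:.0f}, fill fraction={fill:.3f}")
        bid = demand / fill                     # inflate to offset rationing

simulate_bids()  # bids grow by the factor n * demand / supply each round
```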

If investors know of the bid rationing in advance, then they may select a rival market provider without this restriction, but if rationing takes them by surprise, then they may leave and publicly criticise the platform. Capital-raisers compare exchanges, so if many market providers inflate demand and the issuers pay attention to the level of oversubscription (instead of the fraction of campaigns reaching the target, which is what should matter to the capital-raiser), then the biggest inflator wins. Of course, platforms may not want to reveal unsuccessful campaigns (e.g. Funderbeam does not), so public data on the fraction of issuers who achieved their funding goal is unlikely to exist.

Theoretically, the feedback from bid rationing to increased quantity demanded could lead to infinite amounts requested. A countervailing incentive is that with positive probability, other investors do not honour their commitment to buy, in which case a given investor may be required to buy the amount (s)he demanded, instead of the lower amount (s)he actually wanted. If there is no commitment to buy (for example, on Funderbeam the bids are only non-binding indications of interest), then the danger of overcommitting is absent, so the rational choice seems to be requesting an infinite amount. Investors do not indicate infinite interest, so either they are irrational or some other penalty exists for outbidding one’s capability to pay.

Delivered food and restaurants are unhealthy due to moral hazard

Consumers observe the taste and cost of food directly, but checking the ingredients for healthiness takes extra effort and time. Rational inattention then implies that eaters are unlikely to verify the health claims. Thus food suppliers are subject to moral hazard: consumers buy based on the healthiness they expect, not the actual ingredients the seller chooses, so the seller has an incentive to improve taste, reduce the production cost and cut price even when this makes the food less healthy.

The standard solutions to moral hazard in economic theory are verification, repeated interaction and vertical integration (selling the firm). In the context of food, safety standards and truth-in-advertising laws restrict the substances manufacturers may add and the claims they can make. Regulators verify the claims made and punish illegal additives or false advertising. Also, if a food supplier is found to use unhealthy ingredients (or excessive amounts of sugar, salt and fat), then some consumers may switch to alternative providers, which is a repeated-game punishment for the original seller.

The weakness of both verification and repeated interaction is imperfect monitoring: small increases in unhealthy substances are difficult to detect, because tests are noisy and food composition varies naturally. The variation sometimes makes the amount of an ingredient exceed the healthy limit, so honest suppliers would also be punished with positive probability. Incentives are created by the difference in payoffs, so reducing the payoff of the honest decreases their motive to stay honest. The imperfect monitoring allows unscrupulous sellers to outcompete the providers of healthy food on taste and price, for example by using various tricks to circumvent the legal requirements on labelling (https://sanderheinsalu.com/ajaveeb/?p=728).

The remaining solution to the moral hazard problem is vertical integration of the buyer and the supplier, i.e. home cooking. Of course, the ingredients bought to be cooked at home are subject to similar moral hazard – unhealthy substances can be added at any stage of the production process. The risk could in principle be even larger than for processed foods and restaurant meals, but in practice, it seems that simple and unprocessed ingredients are more difficult to manipulate than prepared meals, which are a mixture of many components. Adding sugar, salt, fat or monosodium glutamate to flour, rice or dry beans without mentioning it on the nutrition label is easier to detect than the same (amounts of) additives in shrimp fried rice, bread or a burrito. Raw meats and fish do have extra salt and food colouring added, but usually less than for ready-to-eat meals.

Relative prices are another reason why there may be less manipulation of ingredients than of processed foods. There is a per-unit cost of adding unhealthy substances, as well as a fixed cost due to the risk of lawsuits and fines, especially if the additives are not declared on the label. Unprocessed ingredients are less differentiated, so their price competition is more intense. The increase in the price that customers are willing to pay when an ingredient tastes better than the competitors’ may be small if price is the main dimension of competition. The slightly higher price may not justify the per-unit cost of the additives. In contrast, for processed foods the margin may respond greatly to taste, motivating manipulation of the ingredients.

The taste of the final dish is likely to respond less to manipulating one ingredient than to altering the composition of the entire food, both because the ingredient may be only a small part of the final dish and because the taste of a dish is largely determined by the seasoning and the cooking method. In this case, additives to ingredients do not improve taste that much, reducing the profitability of manipulating these.

Intense price competition motivates cost-cutting, including by substituting cheaper ingredients or using additives (e.g. preservatives) that reduce the manufacturing cost. However, if the additives cost more than they save on production cost (such as preservatives for dry goods that already keep indefinitely), then they are unprofitable to include.

Demand for cooking ingredients may also respond less to price and taste than for restaurant meals or delivered food (raw ingredients may even be an inferior good, but eating out is more like a luxury good). In this case, there is a range of fixed costs of unhealthy substances for which adding these to ingredients is unprofitable, but to processed foods profitable.
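The argument of the last few paragraphs reduces to a simple profitability comparison. A sketch with entirely made-up numbers: an additive pays off only if the extra margin it earns across all units sold covers its per-unit cost plus the fixed cost of expected fines and lawsuits.

```python
# Illustrative profitability condition for an additive; all numbers invented.

def additive_profitable(margin_uplift, units, per_unit_cost, fixed_cost):
    return margin_uplift * units > per_unit_cost * units + fixed_cost

# Raw ingredient: intense price competition, so taste barely moves the margin.
print(additive_profitable(margin_uplift=0.01, units=100_000,
                          per_unit_cost=0.02, fixed_cost=50_000))   # False

# Processed food: the margin responds strongly to taste.
print(additive_profitable(margin_uplift=0.50, units=100_000,
                          per_unit_cost=0.02, fixed_cost=30_000))   # True
```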

Food security is a manipulative term

Food security is a manipulative political code phrase designed to scare people and thereby make them support agricultural subsidies, as I have written before. The fear is created by association with sieges before the age of gunpowder, where castles were starved into submission. In modern times, no enemy is silly enough to try to surround and starve a country that is not a city state (e.g. Singapore), because any enemy with a large enough force to prevent food from getting into a country is also strong enough to conquer it quickly by frontal attack. Even unintentional starvation is a public relations disaster (e.g. Yemen), as is a war that drags on, but a quick takeover without too many casualties (e.g. Crimea) actually increases the conqueror’s leader’s popularity in internal politics.

Even if an enemy were stupid and tried to starve a country, the defence against this is not farm subsidies, but many distributed small stockpiles of food. Farms as a food supply are easy to destroy by firebombing the crops and livestock from the air. A small number of large centralised stockpiles are also vulnerable. However, if each household is obliged to keep n months’ worth of non-perishable food at home, then starving the country into submission would take at least n months, and bombardment would not shorten that period.

What is really meant by food security is that food prices might rise. However, in all but the very poorest countries in the world, food is so cheap that any plausible price rise would not cause starvation. For example, according to the USDA, 9 medium baked potatoes fulfil all the nutritional needs of an adult. Similarly, people can survive for a long time eating just wheat flour and water. Wheat flour costs about 80 cents per kilo, and a kilo of it has 3600 kcal, which is enough for an adult for almost two days. The price of flour would have to rise at least a hundred times for its cost to cause starvation in developed countries. Other emergency foods that do not go bad and can be prepared without heating are also cheap, e.g. milk powder, instant oatmeal, canned meats and vegetables.
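The arithmetic behind the flour example, using the numbers from the text (the price and the daily calorie requirement are approximations):

```python
price_per_kg = 0.80   # dollars, approximate price of wheat flour
kcal_per_kg = 3600
kcal_per_day = 2000   # rough daily requirement of an adult

cost_per_day = price_per_kg * kcal_per_day / kcal_per_kg
print(f"flour cost per day: ${cost_per_day:.2f}")                     # ~$0.44
print(f"after a 100x price rise: ${100 * cost_per_day:.2f} per day")  # ~$44
```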

A price rise is a financial problem, not a real resource constraint, and as such has a financial solution: insurance. Those afraid of a price rise can use forward contracts to lock in the price. Insurance against a very low-probability event like food prices rising a hundred times is cheap (if such insurance is offered, which it might not be, due to the low demand).

Why research with more authors gets cited more

Empirically, articles with more authors are cited more, according to Wuchty et al. (2007). The reasons may be good or bad. A good reason is that coauthored papers may have higher quality, e.g. due to division of labour increasing the efficiency of knowledge production. I propose the following bad reasons, independent of potential quality differences between coauthored and solo articles. Suppose that researchers cite the works of their friends more frequently than warranted. A given scientist is more likely to have a friend among the authors of an article with a greater number of collaborators, which increases its probability of getting a “friendly citation”.

Another reason is defensive citing, i.e. including relatively unrelated papers in the reference list before submitting to a journal, in case the referees happen to be the authors of those works. The reason for adding these unnecessary citations is the belief, warranted or not, that a referee is more likely to recommend acceptance of a paper if it cites the referee’s publications. The probability that the set of referees overlaps with the set of authors of a given prior work increases in the number of authors of that work. Thus defensive citing is more effective when targeted to collaborative instead of solo papers.
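Both the friendly-citation and the defensive-citing arguments rest on the same probability fact: if each author of a paper is, independently with probability p, a friend (or a likely referee) of a given scientist, then the chance that an n-author paper contains at least one such person is 1 - (1 - p)^n, which increases in n. A small sketch with an assumed p:

```python
# Probability that an n-author paper includes at least one friend/referee,
# assuming independence and an illustrative per-author probability p = 0.05.

def overlap_probability(p, n):
    return 1 - (1 - p) ** n

for n in (1, 3, 5, 10):
    print(n, round(overlap_probability(0.05, n), 3))
# 1 author: 0.05; 10 authors: about 0.401
```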

The referees may also directly ask the author to cite certain papers in the revision (I have had this experience). If the referees are more likely to request citations to their own or their coauthors’ work, then articles with more authors are again referenced more.

Valderas et al. (2007) offer some additional explanations. One is measurement error. Suppose that letters to the editor, annual reports of the learned society, its presidential inaugural addresses, and other non-research in scientific journals are counted as publications. These have both fewer authors and fewer citations than regular research articles, which creates a positive correlation between the popularity of a piece of writing and its number of authors.

If self-citations are not excluded and researchers cite their own work more frequently than that of others, then papers with more authors get cited more.

Articles with more collaborators are presented more frequently, thus their existence is more widely known. Awareness of a work is a prerequisite of citing it, so the wider circulation of multi-author publications gives them a greater likelihood of being referenced, independent of quality.

Bad popular science books

There is a class of books that are marketed as popular science, but have profit from sales as their only goal, disregarding truth. Easily visible signs of these are titles that include clickbait keywords (sex, seduction, death, fear, apocalypse, diet), controversial or emotional topics (evolution, health, psychology theories, war, terrorism), radical statements about these topics (statements opposite to mainstream thinking, common sense or previous research), and big claims about the authors’ qualifications that are actually hollow (a PhD from an obscure institution or not in the field of the book). The authors typically include a journalist (or writer, or some other professional marketer of narratives) and a person who seems to be qualified in the field of the book. Of course, these signs are an imperfect signal, but their usefulness is that they are visible from the covers.
Inside such a book, the authors cherry-pick pieces of science and non-science that support the claim that the book makes, and ignore contradicting evidence, even if that evidence is present in the same research articles that the book cites as supporting it. Most pages promise that soon the book will prove the claims that are made on that page, but somehow the book never gets to the proof. It just presents more unfounded claims.
A book of this class does not define its central concepts or claims precisely, so it can flexibly interpret previous research as supporting its claims. The book does not make precise what would constitute evidence refuting its claim, but sets up “straw-man” counterarguments to its claim and refutes them (mischaracterising the actual counterarguments to make them look ridiculous).
Examples of such books that I have read to some extent before becoming exasperated by their demagoguery: Sex at Dawn and Games People Play.

“What if” is a manipulative question

“What if this bad event happens?” is a question used as a high-pressure sales tactic (for insurance, maintenance, upgrades and various protective measures). People suffering from anxiety or depression also tend to ask this question, which is called catastrophising. The question generates vague fears and is usually unhelpful for finding reasonable preventive or corrective measures for the bad event. Fearful people tend to jump on anything that looks like it might be a prevention or cure, which sometimes makes the problem worse (e.g. quack remedies for an imagined rare disease worsen health).
A more useful question is: “What is the probability of this bad event happening?” This question directs attention to statistics and research about the event. Often, the fear-generating event is so unlikely that it is not worth worrying about. Even if it has significant probability, checking the research on it is more likely to lead to solutions than vague rumination along the lines of “what if.” Even if there are no solutions, statistics on the bad event often suggest circumstances that make it more likely, thus information on which situations or risk factors to avoid.
These points have been made before, as exemplified by the aphorisms “Prepare for what is likely and you are likely to be prepared” and “Safety is an expensive illusion.”

News is gradually biased by re-reporting

The (science) news cycle occurs when the original source is quoted by another news outlet, which is quoted by another outlet, and so on, creating a “telephone game”, a.k.a. “Chinese whispers”, familiar from kindergarten. Each re-reporting adds noise to the previous report, so the end result may differ diametrically from the original story. This news cycle has been identified and mocked before, e.g. by PhD Comics.
The telephone game of news outlets has an additional aspect that I have not seen mentioned: the re-reporting does not add random noise, but noise that deliberately biases the previous report. Each news outlet, blog or other re-poster has a slant and focusses on those aspects of the story that favour its existing viewpoint.
A single outlet usually does not change the story to the complete opposite of the original, because outright lying is easy to detect and would damage the outlet’s reputation. However, many outlets in a sequence can each bias the story a little, until the final report is the opposite of the original. Each outlet’s biasing decision is difficult to detect, because the small bias is hidden in the noise of rephrasing and selectively copying the previous outlet’s story. So each outlet can claim to report unbiased news, if readers do not question why the outlet used second-hand (really n-th hand) sources, not the original article (the first in the sequence). A single manipulator thus has an incentive to create many websites that report each other’s stories in a sequence.
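A minimal simulation of this biased telephone game, with all numbers assumed: the story is scored on a scale from -1 to 1, and each outlet adds a small slant hidden inside larger random rephrasing noise. No single step looks dishonest, yet the sign of the story can flip along the chain.

```python
import random

# Each outlet shifts the story by a small deliberate slant plus larger
# random rephrasing noise; the slant compounds along the chain of outlets.

def rereport(story, slant=0.15, noise=0.3, outlets=10, seed=1):
    random.seed(seed)
    for i in range(outlets):
        story += slant + random.uniform(-noise, noise)
        story = max(-1.0, min(1.0, story))    # stories saturate at extremes
        print(f"outlet {i + 1}: {story:+.2f}")

rereport(story=-0.8)  # starts clearly negative, typically ends positive
```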
The moral of this text is that to get accurate information, read the original source. Whenever you see an interesting news article, work backward along the sequence of reports to see whether the claims are the same as in the first report. The first report is not guaranteed to be true, but at least the biases and honest errors introduced later can be removed this way.

Sugar-free, fat-free and low-salt claims

The three main ingredients of unhealthy food are sugar, salt and fat. The packaging of junk food often has claims of sugar-free, fat-free or low-salt in big colourful letters on the front. The trick is that the absence of one of the three ingredients is compensated by a larger amount of the other two, as can be checked from the nutrition information label.
Sometimes the claims on the front of the pack directly contradict the nutrition label, so are downright lies. I have seen packaging with the claim “sugar-free” on the front, with sugars listed in significant quantity on the nutrition label. There are some legal sanctions for falsifying the nutrition information label, but almost no restrictions on what can be claimed elsewhere on the pack, so any contradictions should almost always be resolved in favour of the nutrition label.
I have seen a sugar-free claim on a pack on which the ingredient list included brown sugar. This suggests the existence of a legal loophole (brown sugar not equalling sugar somehow) that the manufacturer wanted to use.
If the manufacturer does not want to outright lie, then a trick I have seen is to claim “no added sugar” or “no sugar or artificial sweeteners” on the pack, but add other sweeteners, e.g. sugarcane juice, molasses, high fructose corn syrup. Similarly, “no added salt” can be bypassed by adding salty ingredients, for example dried salted meat or bacon to a snack mix.
Another trick is to create the sugar in the food during the manufacturing process. For example, heating starch for a long time or adding the enzyme amylase breaks the starch into smaller-molecule sugars. So a manufacturer can claim “no added sweeteners” and yet produce sugars in the food by processing the starch in it.
A similar trick for salt is to add sodium and chloride in separate ingredients and let them combine into NaCl in the food.

How superstition grows out of science

Priests in Ancient Egypt could predict eclipses and the floods of the Nile by observing the stars and the Moon and recording their previous positions when the events of interest happened. The rest was calculation, nothing magical. Ordinary people saw the priests looking at the stars and predicting future events, and thought that the stars magically told the priests things and that the prediction ability extended to all future events (births, deaths, outcomes of battles). The priests encouraged this belief, because it gave them more power. This is one way astrology could have developed – by distorting and exaggerating the science of astronomy. Another way is via navigators determining a ship’s latitude using the stars or the sun. People would have thought that if the heavenly bodies could tell a navigator his location on the open sea, then why not other secrets?
Engineers in Ancient Rome calculated the strength of bridges and aqueducts, and estimated the amount of material needed for these works. Ordinary people saw the engineers playing with numbers and predicting the amount of stones needed for a house or a fort. Numbers “magically” told engineers about the future, and ordinary people thought this prediction ability extended to all future events. Thus the belief in numerology could have been born.
When certain plants were discovered to have medicinal properties against certain diseases, swindlers imitated doctors by claiming that other natural substances were powerful cures against any disease whatsoever. The charlatans and snake oil salesmen distorted and exaggerated medicine.
Doctors diagnosed diseases by physical examination before laboratory tests were invented. Thus a doctor could look at parts of a person’s body, tell what diseases the person had, and predict the symptoms that the person would experience in the future. Exaggerating this, palm readers claimed to predict a person’s future life course by looking at the skin of their palm.
In the 20th century, some medicines were discovered to be equally effective at somewhat lower doses than previously thought. Then homeopathy exaggerated this by claiming that medicines are effective when diluted so much that on average not a single molecule of the drug remains in the water given to the patient.
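A back-of-the-envelope check of the dilution claim: a common homeopathic dilution labelled 30C means diluting 1:100, thirty times over. Even starting from a full mole of the drug, the expected number of molecules remaining is effectively zero.

```python
avogadro = 6.022e23     # molecules in one mole
dilution = 100.0 ** 30  # a 30C dilution: 1:100, repeated 30 times
remaining = avogadro / dilution
print(remaining)        # ~6e-37: on average, not a single molecule left
```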
In all these cases, superstition only adds bias and noise to scientific results. Science does not know everything, but it is a sufficient statistic (https://en.wikipedia.org/wiki/Sufficient_statistic) for superstitious beliefs, in the sense that any true information contained in superstition is also contained in science. Nothing additional can be learned from superstition once the scientific results are known.

Scientific thinking coordination game

If most people in a society use the scientific method for decision-making, then telling stories will not persuade them – they will demand evidence. In that case, bullshit artists and storytellers will not have much influence. It is then profitable to learn to provide evidence, which is positively correlated with learning to understand and use evidence. If young people respond to incentives and want to become influential in society (get a high income and social status), then young people will learn and use the scientific method, which reinforces the demand for evidence and reduces the demand for narratives.
If most people are not scientifically minded, but believe stories, then it is profitable to learn to tell stories. The skilled storytellers will be able to manipulate people, thus will gain wealth and power. Young people who want to climb the social and income ladder will then gravitate towards narrative fields of study. They will not learn to understand and use evidence, which reinforces the low demand for evidence.
Both the scientific and the narrative society are self-reinforcing, thus there is a coordination game in which people choose to become evidence-users or storytellers. Note that using the scientific method does not mean being a scientist. Most researchers whom I have met do not use science in their everyday decisions, but believe the stories they read in the media or hear from their friends. I have met Yale PhDs in STEM fields who held beliefs that most people in the world would agree are false.
One signal of not thinking scientifically is asking people what the weather is like in some place one has not visited (I don’t mean asking in order to make small talk, but asking to gain information). Weather statistics for most places in the world are available online and are much more accurate than acquaintances’ opinions of the weather. This is because weather statistics are based on a much longer time series and on physically measured temperature, rainfall, wind, etc, not on a person’s guess of these variables.