
Golf as a cartel monitoring device for skilled services

Many explanations have been advanced for golf and similar costly, seemingly boring, low-effort group activities. One is signalling one’s wealth and leisure through an expensive, time-consuming sport; another is networking during a low-effort group activity that does not interfere with talking.

An additional explanation is monitoring others’ time use. A cartel agrees to restrict the quantity that its members provide, in order to raise price. In skilled services (doctors, lawyers, engineers, notaries, consultants) the quantity sold is work hours. Each member of a cartel has an incentive to secretly increase supply to obtain more profit. Monitoring is thus needed to sustain the cartel. One way to check that competitors are not selling more work hours is to observe their time use by being together. To reduce boredom, the time spent in mutual monitoring should be filled somehow, and the activity cannot be too strenuous, otherwise it could not be sustained for long enough to meaningfully decrease hours worked. Playing golf fulfills these requirements.

A prediction from this explanation for golf is that participation in time-consuming group activities would be greater in industries selling time-intensive products and services. By contrast, if supply is relatively insensitive to hours worked, for example in capital-intensive industries or standard software, then monitoring competitors’ time use is ineffective in restricting their output and sustaining a cartel. Other ways of checking quantity must then be found, such as price-matching guarantees, which give customers an incentive to report a competitor’s lower price.

Rain-based waterfall in a building

Many large buildings have a high lobby – some of these reach the roof, which in that case is often transparent. Some edifices also have an indoor water curtain in the lobby, or water running down a decorative wall. A way to save the cost of pumping the water up for such a waterfall is to rely on rain. The roofs of high buildings are flat anyway, in order to direct water into internal drainpipes, as opposed to sending the water over the eaves of a slanting roof. If the pipe from the roof is made wide and transparent and placed in the middle of the lobby, then whenever it rains, a waterfall occurs in the pipe. This works best in rainy climates and will be especially spectacular in intense rain and with a single pipe receiving the water from the whole inward-slanted roof. The lobby may become noisy, though, with the sound of all that rushing water.

Disagreement over policy due to preferences vs beliefs

Disagreement about the best policy is due to different preferences or beliefs, or both. Believing that different preferences cause the opinion differences discourages debate (there is no point arguing over taste, after all) and leads to polarisation and partisanship. For example, right-wingers may believe that left-wingers prefer to disincentivise entrepreneurs with high taxes, and left-wingers may believe that right-wingers prefer to harm the poor by reducing government transfers. To put it starkly: the other side just prefers evil policy by nature.

By contrast, believing that disagreement over what should be done is caused by differing beliefs assumes that the other side is good-hearted, but mistaken. For example, left-wingers may believe that right-wingers mistakenly believe that transfers to the poor disincentivise them from working or finance their addictions. Right-wingers may believe that left-wingers mistakenly believe that entrepreneurs are not discouraged by higher taxes – being entrepreneurial by nature, they start companies because it is interesting, not out of greed. Mistaken opponents’ opinions can be corrected using data and logic, patience and understanding.

Even if policy disagreement is interpreted as coming from divergent preferences, some such differences are interpreted as less evil than others. For example, impatience is perceived as better than selfishness. Many policies trade off non-simultaneous benefits and costs: invest in infrastructure now to use it after some years, mitigate climate change now to reduce harm to future generations. Paying a current cost for a future benefit may be optimal for patient people, but not for impatient ones, causing a policy disagreement. The same opinion difference may be due to altruistic people wanting to invest to help others (future users of the infrastructure or the environment), but selfish ones preferring to keep the money now. Believing the same disagreement to be due to selfishness polarises people more than perceiving unequal patience as the cause.

Platform providers fake being popular

Crowdfunding platforms, stock exchanges and other providers of two-sided markets want to appear popular, because having more buyers attracts more sellers and vice versa. The platform’s revenue is usually proportional to the number of users, because it charges a commission fee on trades or advertisers pay it to show ads to users. The exchange’s marginal cost of serving a user is close to zero, so attracting users by appearing popular is nearly pure profit, which gives it an incentive to fake a high volume of trades, a large limit order book and a small bid-ask spread.

The platform’s cost of posting a large volume of outstanding buy and sell orders at a small spread is that many investors try to trade at these favourable bid and ask prices. The market maker either has to take the other side of these attempted transactions or is exposed as fraudulent. Taking the other side results in a large loss if some investors are better informed than the exchange.

The platform could falsely display a large trading volume, but keep the order book honestly small by adding fake trades at prices between the bid and the ask only, so no investor’s real limit order is ignored. This seems difficult to detect, unless one side of the limit order book is empty (e.g. no buyers) and at least one at-market order on the other side (e.g. a sell) is outstanding. In this case, any trades occurring would have to satisfy the at-market order. However, the platform or real investors can then take the other side of the at-market order at a very favourable price to themselves, which discourages at-market orders. A large trading volume with a thin order book is still slightly suspicious, because it requires that crossing buy and sell orders between the bid and ask prices arrive almost simultaneously, in order to be matched without appearing on the order book for long, and without triggering the real limit orders. Displaying the fake buys and sells on the order book risks attracting actual matching trades, which the platform would have to honour (at a cost).

Without automated quote matching, there are no at-market orders, for example on the Funderbeam crowdfunding platform. Instead, everyone either posts a limit order or picks an order from the other side to trade with, e.g. a buyer chooses a sell. Investors can pick an order with a worse price (a higher sell or a lower buy) than the best available on the other side, which frequently occurs on Funderbeam. Choosing a worse price is irrational, unless the traders in question are colluding, in which case the asset is effectively not changing ownership. Reasons to carry out such seemingly irrational trades are to manipulate price and volume, e.g. price can be raised or lowered by targeted trades outside the bid-ask interval. Any added trades, rational or not, increase volume, and such seemingly greater activity is exactly what benefits the stakeholders of the platform. The employees of the market maker have a natural motive to fake-trade between themselves to make their firm look good, even without any inappropriate pressure from their boss.
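As a rough illustration of the red flags in the last two paragraphs, here is a minimal sketch in Python. The data structures and field names are my own hypothetical assumptions, not Funderbeam’s or any exchange’s actual API: the first check flags trades printing strictly inside the current best bid and ask (which touch no resting limit order, so a steady stream of them is consistent with fake volume), and the second flags trades executed at a worse price than the best quote on the opposite side (candidates for collusive trades).

```python
# Minimal sketch of two red-flag checks on reported trades, assuming a
# hypothetical feed of (price, best bid, best ask, aggressor side) records.
# Illustrative only: a real feed would also need timestamps and sizes.

from dataclasses import dataclass
from typing import List, Optional

@dataclass
class TradePrint:
    price: float
    best_bid: Optional[float]   # None if the buy side of the book is empty
    best_ask: Optional[float]   # None if the sell side of the book is empty
    buyer_initiated: bool       # True if the buyer picked/accepted the order

def inside_spread_prints(prints: List[TradePrint]) -> List[TradePrint]:
    """Trades printed strictly between the best bid and ask touch no resting
    limit order, so a steady stream of them is consistent with fake volume
    added between the quotes."""
    return [p for p in prints
            if p.best_bid is not None and p.best_ask is not None
            and p.best_bid < p.price < p.best_ask]

def worse_than_best_quote(prints: List[TradePrint]) -> List[TradePrint]:
    """Trades where the initiating side accepted a worse price than the best
    quote on the other side (a buyer paying more than the lowest ask, or a
    seller accepting less than the highest bid): candidates for collusive or
    price-manipulating trades."""
    flagged = []
    for p in prints:
        if p.buyer_initiated and p.best_ask is not None and p.price > p.best_ask:
            flagged.append(p)
        elif not p.buyer_initiated and p.best_bid is not None and p.price < p.best_bid:
            flagged.append(p)
    return flagged
```

A large share of total volume caught by either check, sustained over time, would be the suspicious pattern described above; an occasional flagged trade proves nothing by itself.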

Another way to attract issuers and investors is to demonstrate successful initial public offerings, meaning that the funds are raised quickly (good for issuers) and the price of the newly listed stock (or other asset) goes up, which benefits investors. Adding fake capital-raisers is difficult, because potential investors will check the background of the supposed issuer. Inserting spoof investors into an actual funding campaign is costly, because real money would have to be invested. One way to manipulate popularity upward is to simultaneously add a fake issuer and fake investors who satisfy its funding need. The idea is to leave no time for real investors to participate in the campaign, by pretending that the capital-raiser reached its target funding level before most investors could react. This is easier in markets with a small number of real investors and without an auto-invest feature. However, the real investors who were supposedly pre-empted may still research the ostensibly popular issuer.

A costless way to briefly boost the popularity of a real fundraising campaign is to add fake investors after the target funding is achieved, and to forbid issuers from increasing the target or accepting funds from those who subscribed after the goal was reached. Any campaign meeting its target can then be made to look heavily oversubscribed. However, if the issuers are informed in advance of the restriction not to increase the target, then they may find an alternative, unrestricted platform on which to raise funds. On the other hand, if the restriction is not mentioned beforehand, then it will likely anger the issuers, who will then create negative publicity for the platform. Competition between exchanges thus curtails their manipulation incentives.

The platform can motivate real investors to raise their bids when the campaign reaches its target by rationing demand: bidders in an oversubscribed share issue get only a fraction of what they wanted to buy. Anticipating this, buyers will increase their requested quantities so that the allocated fraction of their new bid equals their actual demand. This makes the campaign look oversubscribed and creates a feedback loop: if other investors increase their quantities, then rationing reduces the fraction of a given investor’s demand that will be satisfied, so this investor raises her or his requested amount, which in turn makes others increase theirs.
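A stylised version of this feedback loop, in my own notation rather than the post’s: with target T, total bids B and a pro-rata fill fraction f, an investor with true demand d who anticipates rationing inflates the bid until the filled fraction of it equals d.

```latex
% Stylised bid-rationing loop (illustrative notation, not from the post).
% T: target amount, d_i: investor i's true demand, b_i: submitted bid,
% B = sum_j b_j: total bids, f: pro-rata fill fraction.
\[
  f = \min\!\left(1, \frac{T}{B}\right), \qquad
  f\, b_i = d_i \;\Longrightarrow\; b_i = d_i \,\max\!\left(1, \frac{B}{T}\right).
\]
% Each inflated bid raises B, which lowers f and pushes every other bid up:
% with n symmetric investors (d_i = d, b_i = b) and total true demand nd > T,
% the consistency condition b = d*(nb/T) has no finite positive solution.
```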

If investors know of the bid rationing in advance, then they may select a rival market provider without this restriction, but if rationing takes them by surprise, then they may leave and publicly criticise the platform. Capital-raisers compare exchanges, so if many market providers inflate demand and the issuers pay attention to the level of oversubscription (instead of the fraction of campaigns reaching the target, which is what should matter to the capital-raiser), then the biggest inflator wins. Of course, platforms may not want to reveal unsuccessful campaigns (e.g. Funderbeam does not), so public data on the fraction of issuers who achieved their funding goal is unlikely to exist.

Theoretically, the feedback from bid rationing to increased quantity demanded could lead to infinite amounts requested. A countervailing incentive is that with positive probability, other investors do not honour their commitment to buy, in which case a given investor may be required to buy the amount (s)he demanded, instead of the lower amount (s)he actually wanted. If there is no commitment to buy (for example, on Funderbeam the bids are only non-binding indications of interest), then the danger of overcommitting is absent, so the rational choice seems to be requesting an infinite amount. Investors do not indicate infinite interest, so either they are irrational or some other penalty exists for outbidding one’s capability to pay.
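A toy simulation of this escalation (the numbers and parameter names are my own illustrative assumptions): each round, every investor re-submits a bid equal to true demand divided by the previous round’s fill fraction. With non-binding indications of interest the requested amounts grow without bound, while a cap on what an investor could credibly be forced to honour stops the growth.

```python
# Toy simulation of bid escalation under pro-rata rationing.
# All numbers and names below are illustrative assumptions, not from the post.

def simulate_total_bids(true_demand: float, n_investors: int, target: float,
                        rounds: int, max_honourable: float = float("inf")):
    """Each round, every investor bids true_demand divided by last round's
    fill fraction, capped at what (s)he could actually honour if forced to buy."""
    bids = [true_demand] * n_investors            # round 0: bid actual demand
    totals = [sum(bids)]
    for _ in range(rounds):
        fill = min(1.0, target / sum(bids))       # pro-rata fill fraction
        bids = [min(true_demand / fill, max_honourable) for _ in bids]
        totals.append(sum(bids))
    return totals

# 10 investors each wanting 100, target 500: true demand is twice the target.
print(simulate_total_bids(100, 10, 500, rounds=5))
# Non-binding bids: totals double every round (1000, 2000, 4000, ..., 32000).
print(simulate_total_bids(100, 10, 500, rounds=5, max_honourable=300))
# With a credible obligation to honour at most 300 each, totals stop at 3000.
```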

Star job candidates benefit from appearing to be worse

Employers have a cost of making a job offer: filling out forms, getting approval, not being able to make other offers simultaneously in case too many job candidates accept, etc. A company that believes it is not the top choice of candidates would want to avoid making an offer to a star applicant (one who is likely to receive better alternative offers from top employers and thus turn down the lower-ranked company’s offer).

If the star job-seeker is uncertain about the offers she or he will get, or wants a bargaining chip to use with the most preferred company, then (s)he prefers to obtain the lower-ranked employer’s offer, even when planning to reject it. A way to entice the company into offering a job is to pretend to be more attainable (have a worse outside option) by faking lower talent and potential when interviewing with lower-ranked employers. For this pretence to be (partly) credible, it must have a cost for the job-seeker, otherwise all the best candidates would pretend to be worse and increase their chance of obtaining offers from their backup employers. Then the next-best candidates would have to fake being less good to receive jobs, etc. This race to the bottom would only end once all candidates look like the worst possible, which does not seem realistic.

One potential cost is that faking lower talent has a random outcome, which may be so bad that the employer does not want to offer a job at all. This would temper the incentive to appear worse. Another cost is information leakage – if bad performance at a less desirable interview becomes known to higher-ranked employers, then the candidate may forfeit her or his most preferred interviews and jobs. It could also be that the top job-seekers cannot hide their quality, for example because their genius shines through despite their best efforts, or because employers base offers solely on recommendation letters, which the candidate cannot see or affect around the time of applying.

Why rational agents may react negatively to honesty

Emotional people may of course dislike an honest person, just because his truthful opinion hurt their feelings. In contrast, rational agents’ payoff cannot decrease when they get additional information, so they always benefit from honest feedback. However, rational decision makers may still adjust their attitude to be more negative towards a person making truthful, informative statements. The reason is Bayesian updating about two dimensions: the honesty of the person and how much the person cares about the audience’s feelings. Both dimensions of belief positively affect attitude towards the person. His truthful statements increase rational listeners’ belief about his honesty, but may reduce belief in his tactfulness, which may shift rational agents’ opinions strongly enough in the negative direction to outweigh the benefit from honesty.

The relative effect of information about how much the person cares, compared to news about his honesty, is greater when the latter is relatively more certain. In the limit, if the audience is completely convinced that the person is honest (or certain of his dishonesty), then the belief about his honesty stays constant no matter what he does, and only the belief about tact moves. Then telling an unpleasant truth unambiguously worsens the audience’s attitude. Thus if a reasonably rational listener accuses a speaker of “brutal honesty” or tactlessness, then it signals that the listener is relatively convinced either that the speaker is a liar or that he is a trustworthy type. Therefore an accusation of tactlessness may be taken as an insult or a compliment, depending on one’s belief about the accuser’s belief about one’s honesty.
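A toy numerical version of this two-dimensional updating, including the limit case above (the priors, likelihoods and attitude weights are my own illustrative assumptions, not from the post):

```python
# Toy Bayesian update over two dimensions, honesty and caring about the
# listener's feelings, after observing a blunt, unpleasant but true remark.
# All priors, likelihoods and attitude weights are illustrative assumptions.

from itertools import product

# Probability of making the blunt truthful remark, by (honest, caring) type.
LIKELIHOOD = {
    (True,  True):  0.30,   # honest but caring types often soften the truth
    (True,  False): 0.90,   # honest and uncaring types just say it
    (False, True):  0.05,   # dishonest types rarely volunteer unpleasant truths
    (False, False): 0.10,
}

def update(p_honest: float, p_caring: float):
    """Posterior P(honest) and P(caring) after the remark, assuming the two
    dimensions are independent in the prior."""
    joint = {(h, c): (p_honest if h else 1 - p_honest) *
                     (p_caring if c else 1 - p_caring) * LIKELIHOOD[(h, c)]
             for h, c in product([True, False], repeat=2)}
    total = sum(joint.values())
    post_honest = (joint[(True, True)] + joint[(True, False)]) / total
    post_caring = (joint[(True, True)] + joint[(False, True)]) / total
    return post_honest, post_caring

def attitude(p_honest: float, p_caring: float, w: float = 0.5) -> float:
    """Attitude increasing in both beliefs, here a simple weighted average."""
    return w * p_honest + (1 - w) * p_caring

for prior_honest in (0.5, 0.99):
    before = attitude(prior_honest, 0.5)
    after = attitude(*update(prior_honest, 0.5))
    print(f"prior P(honest)={prior_honest}: attitude {before:.3f} -> {after:.3f}")
# With an uncertain prior on honesty (0.5) the remark improves the attitude;
# when the listener is already almost sure the speaker is honest (0.99), the
# honesty belief barely moves, the caring belief drops, and attitude worsens.
```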

If tact takes effort, and the cost of this effort is lower for those who care about the audience’s emotions, then pleasant comments are an informative signal (in the Spence signalling sense) that the speaker cares about the feelings of others. In that case the inference that brutal honesty implies an uncaring nature is correct.

On the other hand, if the utility of rational agents only depends on the information content of statements, not directly on their positive or negative emotional tone, then the rational agents should not care about the tact of the speaker. In this case, there is neither a direct reason for the speaker to avoid unpleasant truths (out of altruism towards the audience), nor an indirect benefit from signalling tactfulness. Attitudes would only depend on one dimension of belief: the one about honesty. Then truthfulness cannot have a negative effect.

Higher order beliefs may still cause honesty to be interpreted negatively even when rational agents’ utility does not depend on the emotional content of statements. The rational listeners may believe that the speaker believes that the audience’s feelings would be hurt by negative comments (for example, the speaker puts positive probability on irrational listeners, or on their utility directly depending on the tone of the statements they hear), in which case tactless truthtelling still signals not caring about others’ emotions.

Social welfare functions derived from revealed preference

The social welfare functions used in policy evaluation typically put more weight on poorer people, justifying redistribution from the rich to the poor. The reasoning is that the marginal benefit of a unit of money is greater for the poor than for the rich. However, people with a greater marginal value of money are more motivated to earn and save, other things equal, so are more likely to become rich. In this case, the rich have on average a higher marginal benefit of money than the poor, or a lower marginal cost of accumulating it. If the justification for redistribution is an interpersonal utility comparison, then revealed preference suggests a greater Pareto weight for richer people, thus redistribution in the opposite direction to the usual.

If the marginal utility of money decreases in wealth or income, then people earn until the marginal benefit equals the marginal cost, so the comparison between the rich and the poor depends on their marginal cost of earning, evaluated at their current wealth and income. The cost and benefit of earning may both be higher or lower for richer people. In a one-shot model, whoever has the greater marginal benefit of money should receive redistributive transfers to maximise a utilitarian welfare criterion. Dynamic indirect effects sometimes reverse this conclusion, because taxation reduces incentives for future work.
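In compact form (my own notation): each person chooses earnings to balance the marginal utility of money against the marginal cost of earning it, so the observed optimum reveals only that the two are equal at the chosen point, not which group values an extra unit more.

```latex
% Earning until marginal benefit equals marginal cost (notation mine).
% u_i: person i's utility of money, c_i: cost of earning it, w_i^*: chosen wealth.
\[
  \max_{w}\; u_i(w) - c_i(w)
  \quad\Longrightarrow\quad
  u_i'(w_i^*) = c_i'(w_i^*).
\]
% Comparing the rich and the poor's marginal utility of money at their chosen
% wealth levels is therefore the same as comparing their marginal costs of
% earning, which may go either way.
```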

Those with a high marginal utility of money are more motivated to convince the public that their marginal utility is high and that they should receive a subsidy. The marginal utility is the difference between a benefit and a cost, which together determine whether the poor or the rich have a greater incentive to lobby for redistributive transfers. The marginal cost of an hour of persuasion equals the person’s hourly wage, so it depends on whether her income is derived mostly from capital or from labour. For example, both rentiers and low-wage workers have a low opportunity cost of time, so they optimally lobby more than high-wage workers. If lobbying influences policy (which is empirically plausible), then the tax system resulting from the persuasion competition burdens high-wage workers most heavily and leaves loopholes and low rates for capital income and low wages. This seems to be the case in most countries.

A tax system based on lobbying is inefficient, because it is not the people with the greatest benefit that receive the subsidies (which equal the value of government services minus the taxes), but those with the largest difference between the benefit and the lobbying cost. However, the resulting taxation is constrained efficient under the restriction that the social planner cannot condition policy on people’s marginal costs of lobbying.