Tag Archives: being and seeming

Identifying useful work in large organisations by revealed preference

Some members of large organisations seemingly do work, but actually contribute negatively by wasting other people’s time: sending mass emails, adding regulations, changing things for the sake of change (and to pad their CVs with “completed projects”), or blocking change with endless committees, consultations and discussions with stakeholders. Even if there is a small benefit from this pretend-work, it is outweighed by the cost to the organisation of the hours other members waste on it. It is difficult to distinguish such negative-value-added activity from positive contributions (being proactive and entrepreneurial, leading by example). Opinions differ on which initiatives are good or bad and how much communication or discussion is enough.
Asking others to rate the work of a person would be informative if the feedback were honest, but people usually do not want to criticise colleagues officially and are not motivated to respond thoughtfully to surveys. Selection bias is also a problem, as online ratings show – the people motivated enough to rate a product, service or person are more likely to hold extreme opinions.
Modern technology offers a way to study the revealed preferences of all members of the organisation without taking any of their time. If most recipients block a given sender, move her or his emails to junk, or keep the emails open only briefly, then this suggests the emails are not particularly useful. Aggregate email activity can be tracked without violating privacy if no human sees information about any particular individual’s email filtering or junking, only the total number of people ignoring a given sender.
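A minimal sketch of such privacy-preserving aggregation (the `(recipient, sender)` event schema and the minimum-count threshold are my assumptions, not part of the original):

```python
from collections import Counter

def ignored_sender_report(filter_events, min_count=5):
    """Count how many distinct recipients block, junk or ignore each sender,
    publishing only aggregate totals above a minimum count, so that no
    individual recipient's filtering choices are identifiable.

    filter_events: iterable of (recipient, sender) pairs, one per
    block/junk/ignore action (hypothetical schema).
    """
    unique_pairs = set(filter_events)  # count each recipient-sender pair once
    totals = Counter(sender for _recipient, sender in unique_pairs)
    return {sender: n for sender, n in totals.items() if n >= min_count}
```

The threshold plays the role of the no-human-sees-individuals rule: senders ignored by only a handful of recipients are not reported at all, so no single recipient’s choice can be inferred.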
Making meetings, consultations and discussions optional and providing an excuse not to attend (e.g. two voluntary meetings at the same time) similarly allows members of the organisation to “vote with their feet” about which meeting they find (more) useful. This provides an honest signal, unlike politeness-constrained and time-consuming feedback.
Anonymity of surveys helps mitigate the reluctance to officially criticise colleagues, but people may not believe that anonymity will be preserved. Even with trust in the feedback mechanism, the time cost of responding may preclude serious and thoughtful answers.

The most liveable cities rankings are suspicious

The “most liveable cities” rankings do not publish their methodology, only vague talk about a weighted index of healthcare, safety, economy, education, etc. An additional suspicious aspect is that the top-ranked cities are all large – there are no small towns. Small cities greatly outnumber big ones (city sizes follow the power-law distribution known as Zipf’s law), so by chance alone, one would expect most of the top-ranked towns in any ranking that is not size-based to be small. The liveability rankings do not mention restricting attention to sizes above some cutoff. Even if a minimum size were required, one would expect most of the top-ranked cities to be close to this lower bound, just based on the size distribution.
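The size-distribution argument can be checked by simulation: draw city sizes from a heavy-tailed Zipf-like distribution, rank the cities by a score that is independent of size, and inspect the sizes of the top-ranked cities. All parameters below are illustrative assumptions:

```python
import random

def top_ranked_sizes(n_cities=10_000, top_n=10, seed=1):
    """Sizes of the top cities in a ranking whose score is independent of
    size, when sizes follow a Zipf-like (Pareto) distribution."""
    rng = random.Random(seed)
    # Inverse-transform sampling of a Pareto tail: many small cities, few huge.
    sizes = [int(1_000 / (1 - rng.random())) for _ in range(n_cities)]
    # A size-independent "liveability" score for each city.
    scores = [rng.random() for _ in range(n_cities)]
    ranked = sorted(range(n_cities), key=scores.__getitem__, reverse=True)
    return [sizes[i] for i in ranked[:top_n]]
```

Under this distribution only about one city in ten exceeds 10,000 inhabitants, so almost every run returns a top list dominated by small towns – the opposite of what the published rankings show.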

The claimed ranking methodology includes several variables one would expect to be negatively correlated with the population of a city (safety, traffic, affordability). The only plausibly positively size-associated variables are culture and entertainment, if these measure the total number of venues and events, not the per-capita number. Unless the index weights entertainment very heavily, one would expect big cities to be at a disadvantage in the liveability ranking based on these correlations, i.e. the smaller the town, the greater its probability of achieving a given liveability score and placing in the top n. So the “best places to live” should be almost exclusively small towns. Rural areas are less likely candidates, because they usually have limited access to healthcare, education and amenities. Remote regions grow less overall and have older populations, but some (mining) boom areas radically outperform cities in these dimensions. Crime there is generally low, so if rural areas were included in the liveability index, then some of them would have a good chance of attaining a top rank.

For any large city, there exists a small town with better healthcare, safety, economy and education, a younger population, more entertainment events per capita, etc. (easy examples are university towns). The fact that these do not appear at the top of a liveability ranking should raise questions about its claimed methodology.

The bias in favour of bigger cities probably comes from sample selection and hometown patriotism. If people vote mostly for their own city, and the respondents of the liveability survey are either drawn approximately uniformly at random from the population or the sample is weighted towards larger cities (online questionnaires have this bias), then most of the votes will favour big cities.

Overbidding incentives in crowdfunding

Crowdfunding campaigns on Funderbeam and other platforms fix a price for the shares or loan notes and invite investors to submit the quantity they want to buy. If demand exceeds supply, then the financial instruments are rationed pro rata, or investors requesting quantities below a threshold get what they asked for and the others receive the threshold amount plus a pro rata share of the quantity remaining after the threshold amounts are allocated. Rationing creates an incentive to oversubscribe: an investor who wants n shares and expects to be rationed to a fraction x of the demanded quantity will rationally put in an order for n/x > n shares to counteract the rationing. For a mechanism not to invite such manipulation, the amount allocated to a given bidder in the event of oversubscription should not depend on that bidder’s bid quantity. For example, everyone gets the minimum of their demanded amount and a threshold quantity, where the threshold is determined so as to equate demand and supply. If there are s shares and all m investors demand more than s/m, then each gets s/m.

If some investors demand less than s/m, then the allocation proceeds recursively as follows. The i1 investors who asked for less than s/m each get what they requested. Their total t1 is subtracted from s to get s1, and the number of remaining investors is reduced to m1 = m - i1. Then the i2 investors asking for less than s1/m1 get what they demanded (t2 in total), and the new remaining amount s2 = s1 - t2 and number of investors m2 = m1 - i2 are determined. Repeat until no remaining investor asks for less than sj/mj, then divide the remaining amount equally between the remaining investors.
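A sketch of this recursive procedure (sometimes called uniform or equal-split rationing; the function name and interface are mine):

```python
def allocate(supply, demands):
    """Give each bidder min(demand, threshold), with the threshold set so
    that the allocations exactly exhaust the supply. A bidder's allocation
    stops depending on her bid once the bid exceeds the threshold, so
    overbidding is pointless."""
    alloc = {}
    remaining = supply
    active = dict(enumerate(demands))
    while active:
        equal_share = remaining / len(active)
        small = {i: d for i, d in active.items() if d < equal_share}
        if not small:
            # Everyone left demands at least the equal share: split equally.
            for i in active:
                alloc[i] = equal_share
            break
        # Satisfy small demands in full, then recurse on the rest.
        for i, d in small.items():
            alloc[i] = d
            remaining -= d
            del active[i]
    return [alloc[i] for i in range(len(demands))]
```

For example, with a supply of 100 and demands of 10, 60 and 60, the first investor gets 10 and the other two split the remaining 90 equally: raising a bid from 60 to 600 would not change the allocation.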

An alternative is to let the market work by allowing the price to adjust, instead of fixing it in advance. Everyone should then submit demand curves: for each price, how many shares are they willing to buy. This may be too complicated for the unsophisticated crowdfunding investors.

However, complexity is probably not the main reason for the inefficient allocation mechanism that invites overbidding. The crowdfunding platform wants to appear popular among investors to attract companies to raise funds on it, so wants to increase the number of oversubscribed campaigns. Rationing is a way to achieve such manipulation if the fundraisers ignore the investors’ incentives to overbid and do not compare the platform to competing ones with similar allocation mechanisms. If fundraisers are irrational in this way, then they do not choose competing platforms without overbidding incentives, because funding campaigns there seem to attract less investor interest. Competing platforms with more efficient allocation mechanisms then go out of business, which eliminates comparison possibilities.

Feedback requests by no-reply emails

“We value your feedback” sent from a no-reply email address shows not only that the feedback is not valued, but also that the organisation is lying. More generally, when someone’s words and deeds conflict, this is informative about his or her lack of truthfulness. If in addition the deeds are unpleasant, then this is the worst of the four possibilities (good or bad deeds combined with honest admission or lying).

The fact of sending such no-reply feedback requests suggests that the organisations doing it are either stupid, needlessly angering customers with insincere solicitations, or believe that the customers are stupid and will fail to draw the statistically correct (Bayesian) conclusion about the organisation.

Some organisations send an automated feedback request by email (Mintos) or post (Yale Student Health) in response to every inquiry or interaction, even ones that clearly did not resolve the problem. The information about the non-resolution could easily be scraped from the original customer emails, without wasting anyone’s time by asking them to fill out feedback forms. The inefficient time-wasting by sending feedback requests is again informative about the organisation.

Blind testing of bicycle fitting

Claims that getting a professional bike fit significantly improves riding comfort and speed and reduces overuse injuries seem suspicious – how can a centimetre here or there make such a large difference? A very wrong fit (e.g. an adult using a children’s bike) of course creates big problems, but most people can adjust their bike to a reasonable fit based on a few online suggestions.

Determining the actual benefit of a bike fit requires a randomised trial: have professionals fit the bikes of a large enough sample of riders, and measure and record the objective parameters of the fit (centimetres of seatpost out of the seat tube, handlebar height from the ground, pedal crank length, etc). Then randomly change the fit by a few centimetres or leave it unchanged, without the cyclist knowing, and let the rider test the bike. Record the speed; ask the rider to rate comfort, fatigue, etc. Repeat for several random changes in fit. Statistically test whether the average speed, comfort rating and other outcome variables across the sample of riders are better with the professional fit or with small random changes to it. To eliminate the placebo effect, blinding is important – the cyclists should not know whether and how the fit has been changed.
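The within-rider comparison in such a trial is a paired test. A minimal sketch using only the standard library; the speed numbers in the usage note are hypothetical:

```python
import math
import statistics

def paired_t_statistic(fitted_speeds, perturbed_speeds):
    """Paired t-statistic for each rider's speed with the professional fit
    versus a small random perturbation of it. Large positive values would
    support the claim that the professional fit helps."""
    diffs = [a - b for a, b in zip(fitted_speeds, perturbed_speeds)]
    mean_diff = statistics.mean(diffs)
    standard_error = statistics.stdev(diffs) / math.sqrt(len(diffs))
    return mean_diff / standard_error
```

With hypothetical speeds of [30, 31, 29, 32, 30] km/h under the professional fit and [29, 30, 29, 31, 29] under the perturbed fit, the statistic is 4.0, which a t-table with 4 degrees of freedom converts to a p-value. The same function applies to comfort or fatigue ratings.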

Another approach is to have each rider test a large sample of different bike fits, find the best one empirically, record its objective parameters and then have a sample of professional fitters (who should not know what empirical fit was found) choose the best fit. Test statistically whether the professionals choose the same fit as the cyclist.

A simpler trial that does not quite answer the question of interest checks the consistency of different bike fitters. The same person with the same bike in the same initial configuration goes to various fitters and asks them to choose a fit. After each fitting, the objective sizing of the bike is recorded and then the bike is returned to the initial configuration before the next fit. The test is whether all fitters choose approximately the same parameters. Inconsistency implies that most fitters cannot figure out the objectively best fit, but consistency does not imply that the consensus of the fitters is the optimal sizing. They could all be wrong the same way – consistency is insufficient to answer the question of interest.

Why princesses and princes are described as attractive

The bards and scribes who recorded events for posterity received their income essentially in the form of tips from the rulers and the rich, so had an incentive to flatter, describing their patrons as more attractive, virtuous, brave, etc than they really were.

In addition to the exaggeration of their actual attractiveness in reports that have reached us, the children of the wealthy probably really were more beautiful than the poor. Richer youth were better fed and cared for, thus had fewer developmental abnormalities (e.g. bent legs from calcium deficiency) and diseases. The poor were malnourished, lived in dirty conditions and were subject to violence, therefore were more likely stunted, stank and had skin diseases, missing teeth and scars. The latter two distinctions in looks have to a lesser extent lasted to the present day, for the same reason.

Attractiveness consists of the visual, audible and olfactory signals of a fit mate (healthy, fertile conspecific), because organisms evolved to consider fit mates attractive. In times when most people were malnourished and diseased, a well fed and healthy rich person would have been much fitter than most, thus a preferred sexual partner for others.

On the other hand, conditional on surviving to adulthood, the poor likely had better immune-related genes, because they were under stronger selection pressure. Poorer people also experienced more infections, thus acquired stronger immunity to more diseases if they survived. Then conditional on equal looks, a person from a poorer background would have been a fitter mate. Also, the ruling class intermarried to keep wealth in the family, so were inbred (hereditary diseases among European royalty are an example consequence). For these two reasons, it is not surprising that the rulers and the rich found some poor people sexually attractive, specifically the outwardly healthiest-appearing among those who reached maturity.

Political parties claim inconsistent patience

If the stated preferences of politicians are taken at face value, then they have inconsistent patience across different policy areas. Left-wingers want to invest in education, infrastructure and the prevention of climate change. These investments have a present cost and a long-delayed benefit, which suggests patient preferences (a high discount factor, a low discount rate). On the other hand, the left wants to increase borrowing, redistributive transfers and government spending in general, which have a current benefit (including an electoral one, but focus on the societal benefit for now) and a future cost. Preferring a current benefit at a future cost implies impatience.
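The inconsistency can be made concrete with net present values under a single discount factor. The cashflow numbers below are illustrative assumptions, not taken from any party platform:

```python
def npv(cashflows, delta):
    """Net present value of a list of per-period cashflows under
    discount factor delta (period 0 is the present)."""
    return sum(c * delta ** t for t, c in enumerate(cashflows))

# Both stylised policies trade 100 today against 150 in 20 years,
# in opposite directions.
education = [-100] + [0] * 19 + [150]   # cost now, benefit later
borrowing = [100] + [0] * 19 + [-150]   # benefit now, cost later
```

For a discount factor above roughly 0.98 the investment has positive NPV and the borrowing negative; below 0.98 the reverse holds. No single discount factor rationalises supporting both, which is the inconsistency claimed above.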

For right-wing parties, these preferences are switched (impatient on education, climate, but patient on redistribution), so their inconsistency is the mirror image of the one described above. In summary, both sides of the political divide claim policy preferences that simultaneously imply patience and impatience, which suggests motives other than societal benefit. One way to reason about these other motives is described in https://sanderheinsalu.com/ajaveeb/?p=1015

M-diagram of politics

Suppose a politician claims that X is best for society. Quiz:

1. Should we infer that X is best for society?

2. Should we infer that the politician believes that X is best for society?

3. Should we infer that X is best for the politician?

4. Should we infer that X is best for the politician among policies that can be ‘sold’ as best for society?

5. Should we infer that the politician believes that X is best for the politician?

This quiz illustrates the general principle in game theory that players best-respond to their perceptions, not reality. Sometimes the perceptions may coincide with reality. Equilibrium concepts like Nash equilibrium assume that on average, players have correct beliefs.

The following diagram illustrates the reasoning of the politician claiming X is best for society. In case the diagram does not load, here is its description: the top row has ‘Official goal’ and ‘Real goal’; the bottom row has ‘Best way to the official goal’, ‘Best way to the real goal that looks like a reasonable way to the official goal’ and ‘Best way to the real goal’. Arrows point in an M-shaped pattern from the bottom-row items to the top-row items. The arrow from ‘Best way to the real goal that looks like a reasonable way to the official goal’ to ‘Official goal’ is the constraint on the claims of the politician.

The correct answer to the quiz is 5.

This post is loosely translated from the original Estonian one https://www.sanderheinsalu.com/ajaveeb/?p=140

Platform providers fake being popular

Crowdfunding platforms, stock exchanges and other providers of two-sided markets want to appear popular, because having more buyers attracts more sellers and vice versa. The platform’s revenue is usually proportional to the number of users, because it charges a commission fee on trades or advertisers pay it to show ads to users. The exchange’s marginal cost of a user is close to zero, giving it an incentive to fake a high volume of trades, a large limit order book and a small bid-ask spread.

The platform’s cost of posting a great volume of outstanding buy and sell orders at a small spread is that many investors try to trade at these favourable bid and ask prices. Either the market maker has to take the other side of these attempted transactions or is found fraudulent. Taking the other side results in a large loss if some investors are better informed than the exchange.

The platform could falsely display a large trading volume, but keep the order book honestly small by adding fake trades at prices between the bid and the ask only, so no investor’s real limit order is ignored. This seems difficult to detect, unless one side of the limit order book is empty (e.g. no buyers) and at least one at-market order on the other side (e.g. a sell) is outstanding. In this case, any trades occurring would have to satisfy the at-market order. However, the platform or real investors can then take the other side of the at-market order at a very favourable price to themselves, which discourages at-market orders. A large trading volume with a thin order book is still slightly suspicious, because it requires that crossing buy and sell orders between the bid and ask prices arrive almost simultaneously, in order to be matched without appearing on the order book for long, and without triggering the real limit orders. Displaying the fake buys and sells on the order book risks attracting actual matching trades, which the platform would have to honour (at a cost).

Without automated quote matching there are no at-market orders, for example on the Funderbeam crowdfunding platform. Instead, everyone either posts a limit order or picks an order from the other side to trade with, e.g. a buyer chooses a sell. Investors can pick an order with a worse price (a higher sell or a lower buy) than the best available on the other side, which frequently occurs on Funderbeam. Choosing a worse price is irrational unless the traders in question are colluding, in which case the asset effectively does not change ownership. Reasons to carry out such seemingly irrational trades are to manipulate price and volume, e.g. the price can be raised or lowered by targeted trades outside the bid-ask interval. Added trades, rational or not, can only increase volume, and such seemingly greater activity is exactly what benefits the stakeholders of the platform. The employees of the market maker have a natural motive to fake-trade between themselves to make their firm look good, even without any inappropriate pressure from their boss.

Another way to attract issuers and investors is to demonstrate successful initial public offerings, meaning that the funds are raised quickly (good for issuers) and the price of the newly listed stock (or other asset) goes up, which benefits investors. Adding fake capital-raisers is difficult, because potential investors will check the background of the supposed issuer. Inserting spoof investors into an actual funding campaign is costly, because real money would have to be invested. One way to manipulate popularity upward is to simultaneously add a fake issuer and fake investors who satisfy its funding need. The idea is to not leave time for real investors to participate in the campaign, by pretending that the capital-raiser achieved its target funding level before most investors could react. This is easier in markets with a small number of real investors and without an auto-invest feature. However, the real investors who were supposedly pre-empted may still research the supposedly very popular issuer.

A costless way to briefly boost the popularity of a real fundraising campaign is to add fake investors after the target funding is achieved, and forbid issuers from increasing the target or accepting funds from those who subscribed after the goal was reached. Any campaign meeting its target can then be made to look heavily oversubscribed. However, if the issuers are informed in advance of the restriction not to increase the target, then they may find an alternative unrestricted platform to raise funds. On the other hand, if the restriction is not mentioned beforehand, then it will likely anger the issuers who will then create negative publicity for the platform. Competition between exchanges thus curtails their manipulation incentives.

The platform can motivate real investors to raise their bids when the campaign reaches its target by rationing demand: bidders in an oversubscribed share issue get only a fraction of what they wanted to buy. Anticipating this, buyers will increase their requested quantities so that the allocated fraction of the new bid equals their actual demand. This makes the campaign look oversubscribed and creates a feedback loop: if other investors increase their quantities, then rationing reduces the fraction of a given investor’s demand that will be satisfied, so this investor raises her or his requested amount, which in turn makes others increase theirs.
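The feedback loop can be simulated: each round, every investor divides her true demand by the fill fraction implied by the previous round's total bids. The supply and demand numbers are illustrative:

```python
def overbidding_rounds(supply, true_demands, rounds=5):
    """Total bids per round when each investor bids
    true_demand / expected_fill_fraction, with the expected fraction taken
    from the previous round's rationing. With excess demand the totals
    escalate without bound, as each round of inflation feeds the next."""
    bids = list(true_demands)
    totals = []
    for _ in range(rounds):
        totals.append(sum(bids))
        fill = min(1.0, supply / sum(bids))  # pro-rata fill fraction
        bids = [d / fill for d in true_demands]
    return totals
```

With a supply of 100 and three investors truly wanting 50 each, total bids grow 150, 225, 337.5, 506.25, … – geometric escalation that only some outside penalty for overcommitting can stop.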

If investors know of the bid rationing in advance, then they may select a rival market provider without this restriction, but if rationing takes them by surprise, then they may leave and publicly criticise the platform. Capital-raisers compare exchanges, so if many market providers inflate demand and the issuers pay attention to the level of oversubscription (instead of the fraction of campaigns reaching the target, which is what should matter to the capital-raiser), then the biggest inflator wins. Of course, platforms may not want to reveal unsuccessful campaigns (e.g. Funderbeam does not), so public data on the fraction of issuers who achieved their funding goal is unlikely to exist.

Theoretically, the feedback from bid rationing to increased quantity demanded could lead to infinite amounts requested. A countervailing incentive is that with positive probability, other investors do not honour their commitment to buy, in which case a given investor may be required to buy the amount (s)he demanded, instead of the lower amount (s)he actually wanted. If there is no commitment to buy (for example, on Funderbeam the bids are only non-binding indications of interest), then the danger of overcommitting is absent, so the rational choice seems to be requesting an infinite amount. Investors do not indicate infinite interest, so either they are irrational or some other penalty exists for outbidding one’s capability to pay.

Star job candidates benefit from appearing to be worse

Employers incur a cost when making a job offer: filling out forms, getting approval, not being able to make other offers simultaneously in case too many candidates accept, etc. A company that believes it is not the top choice of candidates will want to avoid making an offer to a star applicant (one who is likely to receive better alternative offers from top employers and thus turn down the lower-ranked company’s offer).

If the star job-seeker is uncertain about the offers she or he will get, or wants a bargaining chip to use with the most preferred company, then (s)he prefers to obtain the lower-ranked employer’s offer, even when planning to reject it. A way to entice the company into offering a job is to pretend to be more attainable (have a worse outside option) by faking lower talent and potential when interviewing with lower-ranked employers. For this pretence to be (partly) credible, it must have a cost for the job-seeker, otherwise all the best candidates would pretend to be worse and increase their chance of obtaining offers from their backup employers. Then the next-best candidates would have to fake being less good to receive jobs, etc. This race to the bottom would only end once all candidates look like the worst possible, which does not seem realistic.

One potential cost is that faking lower talent has a random outcome, which may be so bad that the employer does not want to offer a job at all. This would temper the incentive to appear worse. Another cost is information leakage – if bad performance at a less desirable interview becomes known to higher-ranked employers, then the candidate may forfeit her or his most preferred interviews and jobs. It could also be that the top job-seekers cannot hide their quality, for example because their genius shines out despite their best effort, or employers base offers solely on recommendation letters, which the candidate cannot see or affect around the time of applying.