
Training programs should be hands-on and use the scientific method

The current education and training programs (first aid, fire warden, online systems) in universities mostly take the form of people sitting in a room passively watching a video or listening to a talk. A better way would be to involve the trainees interactively, because active learning makes people understand faster and remember longer. Hands-on exercises in first aid or firefighting are also more interesting and useful.

At a minimum, the knowledge of the trainees should be tested, in as realistic a way as possible (using hands-on practical exercises). The test should use the scientific method to avoid bias: the examiner should be unconnected to the training provider. The trainer should not know the specific questions of the exam in advance (to prevent “teaching to the test”), only the general required knowledge. Such independent examination permits assessing the quality of the training in addition to the knowledge of the trainees. Double-blind testing is easiest if the goal of the training (the knowledge hoped for) is well defined (procedures, checklists, facts, mathematical solutions).

One problem is how to motivate the trainees to make an effort in the test. For example, in university lectures and tutorials, students do not try to solve the exercises, despite this being a requirement. Instead, they wait for the answers to be posted. One way to incentivise effort is to create competition by publicly revealing the test results.

Directing help-seekers to resources is playing hot potato

In several mental health first aid guidelines, one of the steps is to direct the help-seeker to resources (suggest asking friends, family, professionals for help, reading materials on how to cope with the mental condition). This can provide an excuse to play hot potato: send the help-seeker to someone else instead of providing help. For example, the therapist or counsellor suggests seeing a doctor and obtaining a prescription, and the doctor recommends meeting a therapist instead.

The hot potato game is neither limited to sufferers of mental health issues, nor to doctors and counsellors. It is very common in universities: many people „raise awareness”, „coordinate” the work of others or „mentor” them, „manage change”, „are on the team or committee”, „create an action plan” (or strategy, policy or procedure), „start a conversation” about an issue or „call attention” to it, instead of actually doing useful work. One example is extolling the virtues of recycling, as opposed to physically moving recyclable items from the garbage bin to the recycling bin, and non-recyclable waste in the other direction. Another example is calling attention to mental health, instead of volunteering to visit the mentally ill at home and help them with tasks. A third example is talking about supporting and mentoring early career academics, instead of donating part of one’s salary to create a new postdoc position and thereby putting one’s money where one’s mouth is.

All the seeming-work activities mentioned above allow avoiding actual work and padding one’s CV. Claiming to manage and coordinate other people additionally helps with empire-building – hiring more subordinates to whom one’s own work can be outsourced.

To motivate people to do useful work, as opposed to coordinating or managing, the desirable outcomes of the work should be clearly defined, measured, and incentivised. Mere discussions, committee meetings and action plans should attract no rewards, rather the reverse, because these waste other people’s time. More generally, using more inputs for the same output should be penalised; for example, for academics, receiving more grant money should count negatively towards promotion, given the same patent and publication output.

One way to measure the usefulness of someone’s activity is to use the revealed preference of colleagues (https://sanderheinsalu.com/ajaveeb/?p=1093). Some management and coordination is beneficial, but universities tend to overdo it, so it has negative value added.

Dark-coloured buildings and cars are silly

Many buildings in Australia, especially new developments, are black, dark grey or brown, or at least have such a roof. Many cars are black (other dark colours are less prevalent). The dark colouring increases both cooling and heating costs, because dark surfaces absorb solar radiation and emit infrared radiation faster. In addition, the dark buildings are depressing and ugly. Dark-coloured cars are more difficult to notice, especially in low-visibility conditions, and thus have more accidents. White or yellow vehicles are the safest (Lardelli-Claret et al. 2002, Solomon and King 1995).

For cars, the choice of black colour is probably caused by the owner’s desire to seem wealthy by making the car look expensive – limousines in films and popular culture are often black. For buildings, the association in people’s minds between colour and price is weak. If anything, light-coloured houses, reminiscent of Mediterranean villas and the White House, may slightly raise the owner’s status. The reason for dark-coloured roofs may be the cost – tar paper is a cheap material, easy to install. Windows may appear dark due to the one-way glass used. However, for walls, the cheapest material is usually bare concrete, as shown by its choice for purely functional structures (warehouses, barriers, piers, military buildings). For private dwellings, wood or brick may be the cheapest. Neither concrete, wood nor brick is particularly dark in colour, so the choice to build black or brown houses is puzzling. Maybe it is an architectural fad – fashions often trump practicality.

Gambling deterrence mechanism

Compulsive gambling is driven by the hope of winning a large amount, so one way to deter gambling addiction is to forbid paying out winnings to people registered as having this problem. In a one-shot interaction, casinos and lottery organisers clearly have an incentive to keep both the stakes and the winnings, but problem gambling is repeated. Sufficiently patient casinos are motivated to establish a reputation for paying out winnings, if the punishment is small or unlikely enough, because such reputation attracts other gamblers, which increases the long-run expected profit of the casino. The gamblers are not interested in reporting the casino for illegally paying out, because they benefit from the payout, and the closure of the establishment would prevent them from satisfying their craving.

However, the gamblers’ desire for big winnings, even with very low probability, can be used to motivate them to report – the law can offer a large sum to anyone who proves that a casino made an illegal payout. The reward can be financed from an even larger fine levied on the law-breaking casino. The reward should of course be in addition to any winnings of the whistleblower if the latter is a patron of the casino, because a gambler should not lose money by reporting. Gamblers are impatient, unlike casinos, so the repeated interaction with an establishment does not outweigh an immediate payout, even if collecting the payout leads to less opportunity to gamble in the future.
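The incentive comparison can be sketched numerically (all figures below are hypothetical, chosen only to illustrate the asymmetry between an impatient gambler and a patient casino):

```python
# All numbers are hypothetical, for illustration only.

def discounted_value(per_period, delta, periods=1000):
    """Present value of a per-period payoff stream under discount factor delta."""
    return sum(per_period * delta ** t for t in range(periods))

# Impatient gambler: low discount factor, so future gambling is worth little.
gambler_delta = 0.5
session_value = 10          # subjective value of one future gambling session
reward = 1000               # whistleblower reward offered by the law

future_gambling = discounted_value(session_value, gambler_delta)
print(reward > future_gambling)         # True: reporting beats future gambling

# Patient casino: high discount factor, so a reputation for paying out
# (which attracts gamblers) can outweigh a small or unlikely fine.
casino_delta = 0.95
profit_per_period = 100
fine = 500
detection_prob = 0.1

reputation_value = discounted_value(profit_per_period, casino_delta)
print(reputation_value > detection_prob * fine)   # True: the casino pays out
```

With these numbers the one-off reward dwarfs the gambler’s discounted value of future sessions, while the casino’s discounted reputation value dwarfs the expected fine, which is exactly the asymmetry the reward scheme exploits.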

Identifying useful work in large organisations by revealed preference

Some members of large organisations seemingly do work, but actually contribute negatively by wasting other people’s time. For example, by sending mass emails, adding regulations, changing things for the sake of changing them (and to pad their CV with „completed projects”) or blocking change with endless committees, consultations and discussions with stakeholders. Even if there is a small benefit from this pretend-work, it is outweighed by the cost to the organisation from the wasted hours of other members. It is difficult to distinguish such negative-value-added activity from positive contributions (being proactive and entrepreneurial, leading by example). Opinions differ on what initiatives are good or bad and how much communication or discussion is enough.
Asking others to rate the work of a person would be informative if the feedback was honest, but usually people do not want to officially criticise colleagues and are not motivated to respond thoughtfully to surveys. Selection bias is also a problem, as online ratings show – the people motivated enough to rate a product, service or person are more likely to have extreme opinions.
Modern technology offers a way to study the revealed preferences of all members of the organisation without taking any of their time. If most email recipients block a given sender, move her or his emails to junk or spend very little time reading (keeping the email open), then this suggests the emails are not particularly useful. Aggregate email activity can be tracked without violating privacy if no human sees information about any particular individual’s email filtering or junking, only about the total number of people ignoring a given sender.
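As a sketch of how such privacy-preserving aggregation might work (the event format and function name are hypothetical, not any real mail system’s API):

```python
from collections import Counter

def sender_ignore_rates(events):
    """events: iterable of (sender, action) pairs, where action is one of
    'blocked', 'junked', 'read'. Returns the per-sender fraction of recipients
    who blocked or junked the sender. Only aggregate counts are kept, so no
    human sees any individual recipient's filtering choices."""
    ignored = Counter()
    total = Counter()
    for sender, action in events:
        total[sender] += 1
        if action in ('blocked', 'junked'):
            ignored[sender] += 1
    return {s: ignored[s] / total[s] for s in total}

# Hypothetical event log from the mail server.
events = [
    ('newsletter@uni.example', 'junked'),
    ('newsletter@uni.example', 'blocked'),
    ('newsletter@uni.example', 'read'),
    ('colleague@uni.example', 'read'),
]
rates = sender_ignore_rates(events)
print(round(rates['newsletter@uni.example'], 2))  # 0.67: two of three recipients ignore this sender
```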
Making meetings, consultations and discussions optional and providing an excuse not to attend (e.g. two voluntary meetings at the same time) similarly allows members of the organisation to „vote with their feet” about which meeting they find (more) useful. This provides an honest signal, unlike politeness-constrained and time-consuming feedback.
Anonymity of surveys helps mitigate the reluctance to officially criticise colleagues, but people may not believe that anonymity will be preserved. Even with trust in the feedback mechanism, the time cost of responding may preclude serious and thoughtful answers.

The most liveable cities rankings are suspicious

The „most liveable cities” rankings do not publish their methodology, offering only vague talk about a weighted index of healthcare, safety, economy, education, etc. An additional suspicious aspect is that the top-ranked cities are all large – there are no small towns. There are many more small cities than big ones in the world (city sizes follow Zipf’s law), so by chance alone, one would expect most of the top-ranked towns in any ranking that is not size-based to be small. The liveability rankings do not mention restricting attention to sizes above some cutoff. Even if a minimum size were required, one would expect most of the top-ranked cities to be close to this lower bound, just based on the size distribution.
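A quick simulation illustrates the point, assuming city sizes follow a Zipf-like power law and liveability scores are drawn independently of size:

```python
import random

random.seed(0)
N = 100_000

# City sizes with a Zipf-like (power-law) distribution: the r-th largest city
# has size proportional to 1/r, so small towns vastly outnumber big cities.
sizes = [1 / (r + 1) for r in range(N)]
big_cutoff = sizes[N // 100]          # only the largest 1% count as "big"

# Liveability scores drawn independently of size, as a size-neutral index implies.
scores = [random.random() for _ in range(N)]

top10 = sorted(range(N), key=lambda i: scores[i], reverse=True)[:10]
small_in_top = sum(1 for i in top10 if sizes[i] < big_cutoff)
print(small_in_top)   # by chance alone, nearly all of the top 10 are small towns
```

Since 99% of the simulated cities are small and the scores ignore size, the top ten are almost always dominated by small towns, unlike the published rankings.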

The claimed ranking methodology includes several variables one would expect to be negatively correlated with the population of a city (safety, traffic, affordability). The only plausible positively size-associated variables are culture and entertainment, if these measure the total number of venues and events, not the per-capita number. Unless the index weights entertainment very heavily, one would expect big cities to be at a disadvantage in the liveability ranking based on these correlations, i.e. the smaller the town, the greater its probability of achieving a given liveability score and placing in the top n. So the “best places to live” should be almost exclusively small towns. Rural areas are less likely candidates, because these usually have limited access to healthcare, education and amenities. The economy of remote regions grows less overall and the population is older, although some (mining) boom areas radically outperform cities in these dimensions. Crime is generally low, so if rural areas were included in the liveability index, then some of these would have a good chance of attaining top rank.

For any large city, there exists a small town with better healthcare, safety, economy, education, younger population, more entertainment events per capita, etc (easy examples are university towns). The fact that these do not appear at the top of a liveability ranking should raise questions about its claimed methodology.

The bias in favour of bigger cities is probably coming from sample selection and hometown patriotism. If people vote mostly for their own city and the respondents of the liveability survey are either chosen from the population approximately uniformly randomly or the sample is weighted towards larger cities (online questionnaires have this bias), then most of the votes will favour big cities.
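A minimal sketch of this selection effect, with made-up town names and populations, assuming respondents are sampled in proportion to population and each votes for their home town:

```python
import random

random.seed(1)

# Hypothetical towns; uniform sampling of people means respondents arrive
# in proportion to population, and each votes for their home town.
towns = ['Metropolis', 'Midtown', 'Smallville']
populations = [5_000_000, 200_000, 20_000]

sample = random.choices(towns, weights=populations, k=1000)
votes = {t: sample.count(t) for t in towns}
winner = max(votes, key=votes.get)
print(winner)   # the biggest city wins, regardless of actual liveability
```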

Overbidding incentives in crowdfunding

Crowdfunding campaigns on Funderbeam and other platforms fix a price for the shares or loan notes and invite investors to submit the quantity they want to buy. If demand exceeds supply, then the financial instruments are rationed pro rata, or investors requesting quantities below a threshold get what they asked for and the others receive the threshold amount plus a pro rata share of the quantity remaining after the threshold amounts are allocated. Rationing creates an incentive to oversubscribe: an investor who wants n shares and expects to be rationed to a fraction x of her demanded quantity will rationally put in an order for n/x > n shares to counteract the rationing. For a mechanism not to invite such manipulation, the amount allocated to a given bidder in the event of oversubscription should not depend on that bidder’s bid quantity. For example, everyone gets the minimum of their demanded amount and a threshold quantity, where the threshold is determined so as to equate demand and supply. If there are s shares and all m investors demand more than s/m, then each gets s/m.
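A stylised simulation of the overbidding spiral under pro rata rationing (the numbers are invented for illustration):

```python
# Stylised example: 10 investors each truly want 20 shares, but only 100 exist.
# Under pro rata rationing each receives bid * supply / total_bids, so an
# investor expecting to be rationed to fraction x bids true_demand / x.
supply = 100
investors = 10
true_demand = 20

bids = [true_demand] * investors
for _ in range(5):                        # a few rounds of best responses
    fraction = min(1.0, supply / sum(bids))
    bids = [true_demand / fraction] * investors

# Stated demand escalates without bound, yet each investor still ends up with
# supply / investors = 10 shares, exactly as under truthful bidding.
print(sum(bids))   # 6400.0, against a true total demand of only 200
```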

If some investors demand less than s/m, then the allocation process is recursive as follows. The i1 investors who asked for less than s/m each get what they requested. Their total t1 is subtracted from s to get s1 and the number of remaining investors reduced to m1=m-i1. Then the i2 investors asking for less than s1/m1 get what they demanded (t2 in total), and the new remaining amount s2=s1-t2 and number of investors m2=m1-i2 determined. Repeat until the number of investors asking for less than sj/mj is zero. Divide the remaining amount equally between the remaining investors.
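The recursive procedure above can be sketched as follows (a hypothetical allocate function, not any platform’s actual code):

```python
def allocate(supply, demands):
    """Threshold rationing as described above: investors demanding less than
    the current equal share receive their full request; the rest split what
    remains equally. A bidder's allocation never rises with their own bid
    once capped, so overbidding is pointless."""
    remaining = supply
    pending = dict(enumerate(demands))    # investor index -> demanded quantity
    alloc = [0.0] * len(demands)
    while pending:
        share = remaining / len(pending)
        small = {i: d for i, d in pending.items() if d < share}
        if not small:                     # no one below the threshold: split equally
            for i in pending:
                alloc[i] = share
            break
        for i, d in small.items():        # satisfy small demands in full
            alloc[i] = d
            remaining -= d
            del pending[i]
    return alloc

print(allocate(100, [10, 20, 80, 90]))   # [10, 20, 35.0, 35.0]
```

Note that raising the last bid from 90 to 900 leaves the allocation unchanged, which is the property that removes the overbidding incentive.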

An alternative is to let the market work by allowing the price to adjust, instead of fixing it in advance. Everyone should then submit demand curves: for each price, how many shares are they willing to buy. This may be too complicated for the unsophisticated crowdfunding investors.

However, complexity is probably not the main reason for the inefficient allocation mechanism that invites overbidding. The crowdfunding platform wants to appear popular among investors to attract companies to raise funds on it, so wants to increase the number of oversubscribed campaigns. Rationing is a way to achieve such manipulation if the fundraisers ignore the investors’ incentives to overbid and do not compare the platform to competing ones with similar allocation mechanisms. If fundraisers are irrational in this way, then they do not choose competing platforms without overbidding incentives, because funding campaigns there seem to attract less investor interest. Competing platforms with more efficient allocation mechanisms then go out of business, which eliminates comparison possibilities.

Bending polycarbonate glasses

Polycarbonate glasses (sunglasses, safety goggles) can be bent to better fit one’s head by heating the thermoplastic polymer with a lighter, gas stove or other heat source. Example photos are below. Polycarbonate is not flammable and tolerates repeated heating and cooling. The only problem is that the paint discolours when heated and the plastic buckles and wrinkles when bent, so the lenses should not be heated, lest they become unusable. However, for the temples (the side arms), a better fit outweighs cosmetic appearance in many applications.

Feedback requests by no-reply emails

„We value your feedback” sent from a no-reply email address shows not only that the feedback is not valued, but also that the organisation is lying. More generally, when someone’s words and deeds conflict, then this is informative about his or her lack of truthfulness. If in addition the deeds are unpleasant, then this is the worst of the four possibilities (good or bad deeds combined with honest admission or lying).

The fact of sending such no-reply feedback requests suggests that either the organisations doing it are stupid, needlessly angering customers with insincere solicitations, or believe that the customers are stupid, failing to draw the statistically correct (Bayesian) conclusion about the organisation.

Some organisations send an automated feedback request by email (Mintos) or post (Yale Student Health) in response to every inquiry or interaction, even ones that clearly did not resolve the problem. The information about the non-resolution could easily be scraped from the original customer emails, without wasting anyone’s time by asking them to fill out feedback forms. The inefficient time-wasting by sending feedback requests is again informative about the organisation.

Blind testing of bicycle fitting

Claims that getting a professional bike fit significantly improves riding comfort and speed and reduces overuse injuries seem suspicious – how can a centimetre here or there make such a large difference? A very wrong fit (e.g. an adult using a children’s bike) of course creates big problems, but most people can adjust their bike to a reasonable fit based on a few online suggestions.

To determine the actual benefit of a bike fit requires a randomised trial: have professionals determine the bike fit for a large enough sample of riders, measure and record the objective parameters of the fit (centimetres of seatpost out of the seat tube, handlebar height from the ground, pedal crank length, etc). Then randomly change the fit by a few centimetres or leave it unchanged, without the cyclist knowing, and let the rider test the bike. Record the speed, ask the rider to rate the comfort, fatigue, etc. Repeat for several random changes in fit. Statistically test whether the average speed, comfort rating and other outcome variables across the sample of riders are better with the actual fit or with small random changes. To eliminate the placebo effect, blind testing is important – the cyclists should not know whether and how the fit has been changed.
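The statistical test at the end could be a paired t-test on the outcome differences; the sketch below uses invented speed data for eight hypothetical riders:

```python
import math
import statistics

def paired_t(actual, perturbed):
    """Paired t statistic for the difference in an outcome (e.g. speed)
    between the professional fit and a randomly perturbed fit."""
    diffs = [a - p for a, p in zip(actual, perturbed)]
    n = len(diffs)
    return statistics.mean(diffs) / (statistics.stdev(diffs) / math.sqrt(n))

# Hypothetical average speeds (km/h) for 8 riders under the two conditions.
speed_fit   = [32.1, 29.8, 35.0, 31.2, 28.9, 33.4, 30.5, 31.8]
speed_moved = [31.9, 30.1, 34.8, 31.0, 29.2, 33.1, 30.6, 31.5]

t = paired_t(speed_fit, speed_moved)
# A |t| below about 2.36 (the 5% two-sided critical value with 7 degrees of
# freedom) would mean no detectable benefit from the professional fit.
print(round(t, 2))
```

The same calculation applies to comfort ratings or any other recorded outcome, one test per variable.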

Another approach is to have each rider test a large sample of different bike fits, find the best one empirically, record its objective parameters and then have a sample of professional fitters (who should not know what empirical fit was found) choose the best fit. Test statistically whether the professionals choose the same fit as the cyclist.

A simpler trial that does not quite answer the question of interest checks the consistency of different bike fitters. The same person with the same bike in the same initial configuration goes to various fitters and asks them to choose a fit. After each fitting, the objective sizing of the bike is recorded and then the bike is returned to the initial configuration before the next fit. The test is whether all fitters choose approximately the same parameters. Inconsistency implies that most fitters cannot figure out the objectively best fit, but consistency does not imply that the consensus of the fitters is the optimal sizing. They could all be wrong in the same way – consistency is insufficient to answer the question of interest.