Tag Archives: economic theory

On the optimality of self-quarantine

Is self-quarantine early in an epidemic optimal, either individually or for society?

Individual incentives are easier to analyse, so let’s start with these. Conditional on catching a disease, other things equal, later is better. The reasons are discounting and advances in treatment over time. A delay of many years may increase the severity conditional on infection (old age weakens immunity), but such long time intervals are typically not relevant in an epidemic.

Conditional on falling ill within the next year (during which discounting and advances in treatment are negligible), it is better to catch the disease when few others are infected, so hospitals have spare capacity. This suggests either significantly before or long after the peak of the epidemic. Self-quarantine, if tight enough, may postpone one’s infection past the peak.

Another individually optimal choice is to get infected early (also called vaccination with live unattenuated virus), although not if immunity increases very little or even decreases. The latter means that one infection raises the probability of another with the same disease, like for malaria, HIV and herpes, which hide out in the organism and recur. Cancer displays similar comebacks. For viral respiratory diseases, as far as I know, immunity increases after infection, but not to 100%. The optimality of self-quarantine vs trying to be infected early then depends on the degree of immunity generated, the quality of the quarantine, whether the disease will be eradicated soon after the epidemic, and other details of the situation.

Individual optimality also depends on what the rest of the population is doing. If their self-quarantine is close to perfect, then an individual’s risk of catching the disease is very low, so no reason to suffer the disutility of isolation. If others quarantine themselves moderately, so the disease will be eradicated soon, but currently is quite infectious, then self-isolation is individually optimal. If others do almost nothing, and the disease spreads easily and does not generate much immunity, then an individual will either have to self-quarantine indefinitely or will catch it. Seasonal flu and the common cold (various rhinoviruses and adenoviruses) are reasonable examples. For these, self-quarantine is individually suboptimal.

Social welfare considerations seem to weigh in favour of self-quarantine, because a sick person infects others, which speeds up the epidemic. One exception to the optimality of self-quarantine comes from economies of scale in treatment when prevalence is not so high as to overwhelm the health system. If the epidemic is fading, but the disease increases immunity and is likely to become endemic, with low prevalence, then it may be better from a social standpoint to catch the disease when treatment is widely available, medical personnel have just had plenty of experience with this illness, and not many other people remain susceptible. This is rare.

Herd immunity is another reason why self-quarantine is socially suboptimal for some diseases. The logic is the same as for vaccination. If catching chickenpox as a child is a mild problem and prevents contracting and spreading it at an older age when it is more severe, then sending children to a school with a chickenpox epidemic is a smart idea.

Reducing the duration of quarantine for vulnerable populations is another reason why being infected sooner rather than later may be socially optimal. Suppose a disease is dangerous for some groups, but mild or even undetectable for most of the population, spreads widely and makes people resistant enough that herd immunity leads to eradication. During the epidemic, the vulnerable have to be isolated, which is unpleasant for them. The faster the non-vulnerable people get their herd immunity and eradicate the infection, the shorter the quarantine required for the vulnerable.

For most epidemics, but not all, self-quarantine is probably socially optimal.

Affirmative action, unequal contests and incentives for effort

Firms using affirmative action policies may perform better because of a welcoming work environment, better candidates, peer effects in diverse teams, but also because of stronger incentives that are targeted better. Unequal standards in contests, such as a lower bar for promotion for historically underrepresented groups, may motivate greater effort than equal ones. The reasoning is as follows.

If people expect to have unequal performance, then equal standards may demotivate everyone, because the high performers think the promotion or bonus is almost assured even without further effort, and the low performers believe the prize is unattainable, so no point in trying. In this case, setting a higher bar for the better-performing group can incentivise both groups, like different divisions in sports. The result that equalising a contest motivates greater effort is fairly general in game theory. Contests may even motivate overprovision of effort relative to the socially efficient level.
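
The claim that equalising a contest raises total effort can be illustrated with a standard two-player Tullock contest, a textbook model used here purely as illustration (the stake values below are made up). Player i exerts effort x_i, wins with probability x_i/(x_1+x_2) and values the prize at v_i; from the first-order conditions, equilibrium total effort is v_1·v_2/(v_1+v_2), which, for a fixed total stake v_1+v_2, is largest when the stakes are equal – so a handicap that evens out effective stakes increases effort.

```python
# Two-player Tullock contest: player i exerts effort x_i, wins with
# probability x_i / (x_1 + x_2) and values the prize at v_i.
# From the first-order conditions, equilibrium total effort is
# x_1 + x_2 = v_1 * v_2 / (v_1 + v_2).

def tullock_total_effort(v1: float, v2: float) -> float:
    """Equilibrium total effort in a two-player Tullock contest."""
    return v1 * v2 / (v1 + v2)

# Symmetric contest: both contestants have an effective stake of 1.
symmetric = tullock_total_effort(1.0, 1.0)   # 0.5

# Lopsided contest with the same total stake of 2.
lopsided = tullock_total_effort(1.6, 0.4)    # 0.32

# Equalising the contest raises total effort.
assert symmetric > lopsided
```

Holding the total stake fixed at 2, equilibrium effort falls from 0.5 to 0.32 as the contest becomes more unequal, matching the intuition that a near-sure winner and a near-sure loser both slack off.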

A similar effort-increasing effect of unequal standards occurs even if the groups have equal performance, provided their preferences differ. For example, if men value winning a contest (for evolutionary or other reasons), then they exert greater effort in a competitive environment where some but not all can get promoted. If women care little about winning and focus on absolute compensation, then promoting all of them does not significantly reduce their work incentives. An employer who does not internalise the full cost of the employees’ effort wants them to overwork, thus in such an environment optimally sets a high bar for the promotion of men, but a low bar for women.

On the other hand, if there is a limited number of promotion slots, then it may be optimal to give all these to men, because this increases total effort in the firm the most, and use other compensation (salary, bonuses, flex-work) to motivate women.

Ebay should allow conditional bids

Ebay should allow buyers to bid for a single item across multiple auctions: make a bid for one item, then if outbid, automatically make the same bid on the next identical (as defined by the buyer) item and so on. This increases efficiency by joining multiple auctions for identical items into one market with many sellers and buyers. It also reduces selling times, because a buyer who just wants one unit does not have to wait until being outbid before bidding for the next identical item. Buyers generally are not continuously watching the auction, so there is a delay between being outbid and manually making the next bid. Buyers are willing to pay to reduce the delay, as evidenced by purchases at “buy it now” prices greater than the highest bids in the auctions.

More generally, bids conditional on being outbid would help merge auctions into markets, gaining efficiency and speed. For example, a buyer has different values for used copies of the same item in different condition and wants just one copy of the item. Conditional bids allow the buyer to enter a sequence of different-sized bids, one for each copy, with each bid in the sequence conditional on the preceding bids losing.

Linking the bids is not computationally difficult because Ebay already sends an automatic email to a buyer who has been outbid. Instead of an email, the event of being outbid can be used to trigger entering a bid on the next copy of the item.
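
The triggering logic is indeed simple; below is a minimal sketch of how a conditional-bid sequence could work. The item names, amounts and the place_bid callback are hypothetical – Ebay offers no such feature or API today.

```python
from collections import deque

class ConditionalBidQueue:
    """Sketch of the proposed feature: a buyer ranks identical (or
    near-identical) copies of an item and attaches a bid to each; being
    outbid on one copy triggers the bid on the next copy in the sequence."""

    def __init__(self, bids):
        # bids: iterable of (item_id, amount) in the buyer's preference order.
        self.pending = deque(bids)
        self.active = None  # (item_id, amount) of the currently standing bid

    def start(self, place_bid):
        self._advance(place_bid)

    def on_outbid(self, place_bid):
        # Event handler: called when the active bid loses.
        self._advance(place_bid)

    def _advance(self, place_bid):
        if self.pending:
            self.active = self.pending.popleft()
            place_bid(*self.active)   # enter the next conditional bid
        else:
            self.active = None        # sequence exhausted; buyer drops out

# Usage: a buyer wants one used copy, valuing better-condition copies more.
placed = []
q = ConditionalBidQueue([("copy-A", 30.0), ("copy-B", 25.0), ("copy-C", 20.0)])
q.start(lambda item, amt: placed.append((item, amt)))
q.on_outbid(lambda item, amt: placed.append((item, amt)))  # outbid on copy-A
# placed == [("copy-A", 30.0), ("copy-B", 25.0)]
```

Each bid in the sequence is entered only after the preceding one loses, which is exactly the conditionality described above.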

Faster selling times benefit everyone: sellers sell faster; buyers do not have to waste time checking whether they have been outbid and then making the next bid; and Ebay can charge higher fees to appropriate part of the increased surplus from greater efficiency. Ebay can also use the data on which items buyers consider similar enough to classify products and remove duplicate ads.

A browser extension or app can provide the same functionality: an email with a title containing “You have been outbid” triggers code that logs the user in (with credentials saved in a password manager or the browser) and enters a bid on the next copy of the item.

Prefereeing increases the inequality of research output

Why do top researchers in economics publish almost exclusively in the top 5 journals? Random idea generation and mistakes in the course of implementation should imply significant variance in the quality of finished research projects, even for the best scientists. So top people should have more papers at every quality level.

Nepotism is not necessary to explain why those at top universities find it easier to publish in top journals. Researchers at the best departments have frequent access to editors and referees of top journals (their colleagues), so can select ideas that the editors and referees like and further tailor the project to the tastes of these gatekeepers during writing. Researchers without such access to editors and referees choose their projects “blindly” and develop the ideas in directions that only match gatekeeper tastes by chance. This results in much “wasted work” if the goal is to publish well (which may or may not be correlated with the social welfare from the research).

In addition to selecting and tailoring projects, those with access can also select journals better, because they know the preferences of the editorial board. So for any given project, networking with the gatekeepers allows choosing a journal whose editors are likely to like this project. This reduces the number of rejections before eventual acceptance, allowing publications to accumulate more quickly and saving the labour of some rounds of revision (at journals that reject after a revise-and-resubmit, for example).

A similar rich-get-richer positive feedback operates in business, especially for firms that sell to other firms (B2B). Top businesspeople get access to decisionmakers at other organisations, so can learn what the market desires, thus can select and tailor products to the wants of potential customers. Better selection and targeting avoids wasting product development costs. The products may or may not increase social welfare.

Information about other business leaders’ preferences also helps target the marketing of any given product to those predisposed to like the product. Thus successful businesspeople (who have access to influential decisionmakers) have a more popular selection of products with lower development and marketing costs.

On the seller side, firms would not want their competitors to know what the buyers desire, but the buyer side has a clear incentive to inform all sellers, not just those with access. Empirically, few buyers publish on their websites any information about their desired products. One reason may be that info is costly to provide, e.g. requests for product characteristics reveal business secrets about the buyer. However, disclosure costs would also prevent revealing info via networking. Another reason buyers do not publicly announce their desired products may be that the buyers are also sellers of other products, so they trade information for information with their suppliers who are also their customers. The industry or economy as a whole would benefit from more information-sharing (saving the cost of unwanted products), so some trading friction must prevent this mutually beneficial exchange.

One friction is an agency conflict between managers and shareholders. If managers are evaluated based on relative performance, then the managers of some firms may collude to only share useful information with each other, not with those outside their circle. The firms managed by the circle would benefit from wider sharing of their product needs, because outside companies would enter the competition to supply them, reducing their costs. However, those outside firms would get extra profit, making their managers look good, thus lowering the relative standing of the managers in the circle.

Popularity inequality and multiple equilibria

Suppose losing a friend is more costly for a person with few contacts than for one with many. Then a person with many friends has a lower cost of treating people badly, e.g. acting as if friends are dispensable and interchangeable. The lower cost means that unpleasant acts can signal popularity. Suppose that people value connections with popular others more than with unpopular ones. This creates a benefit from costly, thus credible, signalling of popularity – such signals attract new acquaintances. Having a larger network in turn reduces the cost of signalling popularity by treating friends badly.

Suppose people on average value a popular friend more than the disutility from being treated badly by that person (so the bad treatment is not too bad, more of a minor annoyance). Then a feedback loop arises where bad treatment of others attracts more connections than it loses. The popular get even more popular, reducing their cost of signalling popularity, which allows attracting more connections. Those with few contacts do not want to imitate the stars of the network by also acting unpleasantly, because their expected cost is larger. For example, there is uncertainty about the disutility a friend gets from being treated badly or about how much the friend values the connection, so treating her or him badly destroys the friendship with positive probability. An unpopular person suffers a large cost from losing even one friend.

Under the assumptions above, a popular person can rely on the Law of Large Numbers to increase her or his popularity in expectation by treating others badly. A person with few friends does not want to take the risk of losing even them if they turn out to be sensitive to nastiness.
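
A toy calculation of this risk argument, with all probabilities invented purely for illustration: each act of nastiness breaks the targeted friendship with probability q, while the public display of popularity attracts on average g new connections per existing friend.

```python
import math

# Toy model of the Law-of-Large-Numbers argument above; the parameters
# q and g are invented for illustration only. Treating each of n friends
# badly breaks each friendship independently with probability q, while
# the display of popularity attracts on average g new connections per
# existing friend.

def expected_friends_after(n, q, g):
    """Expected popularity after one round of bad behaviour."""
    return n * (1 - q) + n * g

def prob_all_lost(n, q):
    """Probability that nastiness destroys every one of the n friendships."""
    return q ** n

q, g = 0.3, 0.4   # assumed: g > q, so nastiness gains friends on average

# Expected popularity rises for the popular and unpopular alike...
assert expected_friends_after(50, q, g) > 50
assert expected_friends_after(2, q, g) > 2

# ...but the downside risk differs enormously: someone with 50 friends
# essentially never ends up friendless, while someone with 2 friends
# faces a 9% chance of losing both. A risk-averse unpopular person
# therefore does not imitate the stars of the network.
assert math.isclose(prob_all_lost(2, q), 0.09)
assert prob_all_lost(50, q) < 1e-20
```

The expected gain is the same per friend for everyone; what differs is the chance of the worst outcome, which is what deters the unpopular under risk aversion.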

Multiple equilibria may exist in the whole society: one in which everyone has many contacts and is nasty to them and one in which people have few friends and act nice. Under the assumption that people value a popular friend more than the disutility from being treated badly, the equilibrium with many contacts and bad behaviour actually gives greater utility to everyone. This counterintuitive conclusion can be changed by assuming that popularity is relative, not a function of the absolute number of friends. Total relative popularity is constant in the population, in which case the bad treatment equilibrium is worse by the disutility of bad treatment.

In order for there to be something to signal, it cannot be common knowledge that everyone is equally popular. Signalling with reasonable beliefs requires unequal popularity. Inequality reduces welfare if people are risk averse (in this case over their popularity). Risk aversion further reduces average utility in the popular-and-nasty equilibrium compared to the pooling equilibrium where everyone has few friends and does not signal (acts nice).

In general, if one of the benefits of signalling is a reduction in the cost of signalling, then the amount of signalling and inequality increases. My paper “Dynamic noisy signaling” (2018) studies this in the context of education signalling in Section V.B “Human capital accumulation”.

Overbidding incentives in crowdfunding

Crowdfunding campaigns on Funderbeam and other platforms fix a price for the shares or loan notes and invite investors to submit the quantity they want to buy. If demand exceeds supply, then the financial instruments are rationed pro rata, or investors requesting quantities below a threshold get what they asked and others receive the threshold amount plus a pro rata share in the quantity remaining after the threshold amounts are allocated. Rationing creates an incentive to oversubscribe: an investor who wants n shares and expects to be rationed to a fraction x of her demanded quantity will rationally put in an order for n/x > n shares to counteract the rationing. For a mechanism not to invite such manipulation, the amount allocated to a given bidder in the event of oversubscription should not depend on that bidder’s bid quantity. For example, everyone gets the minimum of their demanded amount and a threshold quantity, where the threshold is determined so as to equate demand and supply. If there are s shares and all m investors demand more than s/m, then each gets s/m.

If some investors demand less than s/m, then the allocation process is recursive as follows. The i1 investors who asked for less than s/m each get what they requested. Their total t1 is subtracted from s to get s1 and the number of remaining investors reduced to m1=m-i1. Then the i2 investors asking for less than s1/m1 get what they demanded (t2 in total), and the new remaining amount s2=s1-t2 and number of investors m2=m1-i2 determined. Repeat until the number of investors asking for less than sj/mj is zero. Divide the remaining amount equally between the remaining investors.
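
A sketch of the recursive rule above, together with naive pro-rata rationing for comparison (the numbers are invented). The key property is that inflating one’s demand raises one’s allocation under pro rata but not under the threshold rule:

```python
def pro_rata(demands, s):
    """Naive pro-rata rationing: scale demands by s / total if oversubscribed."""
    total = sum(demands)
    if total <= s:
        return list(demands)
    return [d * s / total for d in demands]

def threshold_allocation(demands, s):
    """The recursive rule from the text: repeatedly fill in full every demand
    below the equal split of what remains, then divide the rest equally."""
    alloc = [0.0] * len(demands)
    remaining = list(range(len(demands)))
    while remaining:
        share = s / len(remaining)
        small = [i for i in remaining if demands[i] < share]
        if not small:
            for i in remaining:
                alloc[i] = share          # equal split among the big demanders
            break
        for i in small:
            alloc[i] = demands[i]         # small demands are filled in full
            s -= demands[i]
        remaining = [i for i in remaining if i not in small]
    return alloc

# Three investors, 90 shares, truthful demands 20, 40, 60:
# the first gets 20, the other two split the remaining 70 equally.
assert threshold_allocation([20, 40, 60], 90) == [20, 35, 35]

# Under pro rata, investor 2 gains by inflating her demand from 60 to 120...
assert pro_rata([20, 40, 120], 90)[2] > pro_rata([20, 40, 60], 90)[2]

# ...but under the threshold rule, the inflated demand changes nothing.
assert threshold_allocation([20, 40, 120], 90)[2] == \
       threshold_allocation([20, 40, 60], 90)[2]
```

Because an oversubscribed bidder’s allocation depends only on the common threshold, reporting one’s true demand is as good as any inflated report.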

An alternative is to let the market work by allowing the price to adjust, instead of fixing it in advance. Everyone should then submit demand curves: for each price, how many shares they are willing to buy. This may be too complicated for unsophisticated crowdfunding investors.

However, complexity is probably not the main reason for the inefficient allocation mechanism that invites overbidding. The crowdfunding platform wants to appear popular among investors to attract companies to raise funds on it, so wants to increase the number of oversubscribed campaigns. Rationing is a way to achieve such manipulation if the fundraisers ignore the investors’ incentives to overbid and do not compare the platform to competing ones with similar allocation mechanisms. If fundraisers are irrational in this way, then they do not choose competing platforms without overbidding incentives, because funding campaigns there seem to attract less investor interest. Competing platforms with more efficient allocation mechanisms then go out of business, which eliminates comparison possibilities.

Avoiding the Bulow and Rogoff 1988 result on the impossibility of borrowing

Bulow and Rogoff 1988 NBER working paper 2623 proves that countries cannot borrow, due to their inability to credibly commit to repay, if after default they can still buy insurance. The punishment of defaulting on debt is being excluded from future borrowing. This punishment is not severe enough to motivate a country to repay, by the following argument. A country has two reasons to borrow: it is less patient than the lenders (values current consumption or investment opportunities relatively more) and it is risk-averse (either because the utility of consumption is concave, or because good investment opportunities appear randomly). Debt can be used to smooth consumption or take advantage of temporary opportunities for high-return investment: borrow when consumption would otherwise be low, pay back when relatively wealthy.

After the impatient country has run up its debt to the maximum level the creditors are willing to tolerate, the impatience motive to borrow disappears, because the lenders do not allow more consumption to be transferred from the future to the present. Only the insurance motive to borrow remains. The punishment for default is the inability to insure via debt, because in a low-consumption or valuable-investment state of affairs, no more can be borrowed. Bulow and Rogoff assume that the country can still save or buy insurance by paying in advance, so “one-sided” risk-sharing (pay back when relatively wealthy, or when investment opportunities are unavailable) is possible. This seemingly one-sided risk-sharing becomes standard two-sided risk-sharing upon default, because the country can essentially “borrow” from itself the amount that it would have spent repaying debt. This amount can be used to consume or invest in the state of the world where these activities are attractive, or to buy insurance if consumption and investment are currently unattractive. Thus full risk-sharing is achieved.

More generally, if the country can avoid the punishment that creditors impose upon default (evade trade sanctions by smuggling, use alternate lenders if current creditors exclude it), then the country has no incentive to repay, in which case lenders have no incentive to lend.

The creditors know that once the country has run up debt to the maximum level they allow, it will default. Thus rational lenders set the maximum debt to zero. In other words, borrowing is impossible.

A way around the no-borrowing theorem of Bulow and Rogoff is to change one or more assumptions. In an infinite-horizon game, Hellwig and Lorenzoni (2009) allow the country to run a Ponzi scheme on the creditors, thus effectively “borrow from time period infinity”, which permits a positive, sometimes even infinite, level of debt.

Another assumption that could realistically be removed is that the country can buy insurance after defaulting. Restricting insurance need not be due to an explicit legal ban. The insurers are paid in advance, thus do not exclude the country out of fear of default. Instead, the country’s debt contract could allow creditors to seize the country’s financial assets abroad, specifically in creditor countries, and these assets could be defined to include insurance premiums already paid, or the payments from insurers to the country. The creditors have no effective recourse against the sovereign debtor, but they may be able to enforce claims against insurance firms outside the defaulting country.

Seizing premiums to or payments from insurers would result in negative profits to insurers or restrict the defaulter to one-sided risk-sharing, without the abovementioned possibility of making it two-sided. Seizing premiums makes insurers unwilling to insure, and seizing payments from insurers removes the country’s incentive to purchase insurance. Either way, the country’s benefit from risk-sharing after default is eliminated. This punishment would motivate loan repayment, in turn motivating lending.

Putting your money where your mouth is in policy debates

Climate change deniers should put their money where their mouth is by buying property in low-lying coastal areas or investing in drought-prone farmland. Symmetrically, those who believe the Earth is warming as a result of pollution should short sell climate-vulnerable assets. Then everyone eventually bears the financial consequences of their decisions and claimed beliefs. The sincere would be happy to bet on their beliefs, anticipating positive profit. Of course, the beliefs have to be somewhat dogmatic or the individuals in question risk-loving, otherwise the no-agreeing-to-disagree theorem would preclude speculative trade (opposite bets on a common event).

Governments tend to compensate people for widespread damage from natural disasters, because distributing aid is politically popular and there is strong lobbying for this free insurance. This insulates climate change deniers against the downside risk of buying flood- or wildfire-prone property. To prevent the cost of the damages from being passed to the taxpayers, the deniers should be required to buy insurance against disaster risk, or to sign contracts with (representatives of) the rest of society agreeing to transfer to others the amount of any government compensation they receive after flood, drought or wildfire. Similarly, those who short sell assets that lose value under a warming climate (or buy property that appreciates, like Arctic ports, under-ice mining and drilling rights) should not be compensated for the lost profit if the warming does not take place.

In general, forcing people to put their money where their mouth is would avoid wasting time on long useless debates (e.g. do high taxes reduce economic growth, does a high minimum wage raise unemployment, do tough punishments deter crime). Approximately rational people would doubt the sincerity of anyone who is not willing to bet on her or his beliefs, so one’s credibility would be tied to one’s skin in the game: a stake in the claim signals sincerity. Currently, it costs pundits almost nothing to make various claims in the media – past wrong statements are quickly forgotten, not impacting the reputation for accuracy much. 

The bets on beliefs need to be legally enforceable, so have to be made on objectively measurable events, such as the value of a publicly traded asset. By contrast, it is difficult to verify whether government funding for the arts benefits culture, or whether free public education is good for civil society, therefore bets on such claims would lead to legal battles. The lack of enforceability would reduce the penalty for making false statements, thus would not deter lying or shorten debates much.

An additional benefit from betting on (claimed) beliefs is to provide insurance to those harmed by the actions driven by these beliefs. For example, climate change deniers claim small harm from air pollution. Their purchases of property that will be damaged by a warming world allow climate change believers to short sell such assets. If the Earth then warms, then the deniers lose money and the believers gain at their expense. This at least partially compensates the believers for the damage caused by the actions of the deniers.

Volunteer work is less efficiently allocated than paid work

In my experience, the labour of volunteers and low-wage workers is frequently wasted, just like other free or cheap resources. Unlike for expensive market work, there are no price signals to guide people to the most important tasks first. If activities are not prioritised based on how productive they are, then randomly allocating labour is likely to select work of low usefulness.

Within an organisation, competent managers of volunteers may direct them to the most productive work, but even with the best leaders managing some volunteering opportunities, it remains unclear which organisations do the most good and thus should get priority in the allocation of labour. Work hours are limited, just like other resources. Even the best volunteers cannot do everything at once, so to maximise social welfare, the most helpful tasks should be done first. In market work, the employer at which a worker is most productive is generally willing to pay the most for this person’s services. Then if people follow the money, their labour gets allocated to the highest-value tasks.

Of course, markets are not perfect and the importance of some work is not accurately measured in money, but for reasonably rational agents, a noisy signal is better than no signal. Prices carry information and help efficient allocation of resources. One way to better allocate volunteer labour is to establish a pseudo-money for unpaid work: each nonprofit organisation gets a certain amount of credits initially and can spend these to “hire” voluntary workers. Credits used for one person cannot be used for another, so the organisation willing to give away the most for a given individual’s services is probably the one receiving the greatest benefit from that person. Volunteers can then use the credits offered to judge where they would be the most productive (could do the greatest amount of good).
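
A minimal sketch of such a credit scheme, with all organisation names, budgets and bids invented for illustration. Each nonprofit starts with a credit budget and bids credits for each volunteer; the volunteer goes to the highest bidder that can still pay, and spent credits cannot be reused on anyone else.

```python
def assign_volunteers(budgets, bids):
    """budgets: {org: credits}; bids: {volunteer: {org: bid}}.
    Volunteers are processed one at a time; each goes to the highest
    bidder whose remaining credits cover the bid. Returns {volunteer: org}."""
    assignment = {}
    credits = dict(budgets)  # remaining credits per organisation
    for volunteer, offers in bids.items():
        affordable = {org: b for org, b in offers.items() if b <= credits[org]}
        if not affordable:
            continue  # nobody can pay; volunteer stays unassigned
        winner = max(affordable, key=affordable.get)
        credits[winner] -= affordable[winner]  # credits are used up
        assignment[volunteer] = winner
    return assignment

budgets = {"food_bank": 100, "shelter": 100}
bids = {
    "alice": {"food_bank": 80, "shelter": 50},  # the food bank values Alice most
    "bob":   {"food_bank": 60, "shelter": 40},  # but can no longer afford Bob
}
print(assign_volunteers(budgets, bids))
# {'alice': 'food_bank', 'bob': 'shelter'}
```

Having spent 80 of its 100 credits on the first volunteer, the food bank cannot also claim the second, so the credit bids reveal which organisation values whose services the most within a fixed budget.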

Keeping an open mind and intellectual honesty

“Keep an open mind” is often used as an argument against science, or to justify ignoring evidence more broadly. Let’s distinguish two cases of keeping an open mind: before vs after the evidence comes in. It is good to keep an open mind before data is obtained – no hypothesis is ruled out. In reality, all possibilities have positive probability, no matter how great the amount and quality of information, so one should not dogmatically rule out anything even given the best evidence. However, for practical purposes a small enough probability is the same as zero. Decisions have to be made constantly (choosing not to decide is also a decision), so after enough scientific information is available, it is optimal to make up one’s mind, instead of keeping it open.

Intellectually honest people who want to keep an open mind after obtaining evidence would commit to it from the start: publicly say that no matter what the data shows in the future, they will ignore it and keep an open mind. Similarly, the intellectually honest who plan to make up their mind would also commit, in this case to a policy along the lines of “if the evidence says A, then do this, but if the evidence says B, then that”. The latter policy resembles (parts of) the scientific method.

The anti-science or just intellectually dishonest way of “keeping an open mind” is to do this if and only if the evidence disagrees with one’s prior views. In other words, favourable data is accepted, but unfavourable data ignored, with the ignoring justified by the open-mind excuse. In debates, the side that runs out of arguments and is about to lose is usually the one that recommends an open mind – and only at that late stage of the debate, conditional on its own weak position. Similarly, “agreeing to disagree” is mostly recommended, intellectually dishonestly, by the losing side of an argument, to attempt to leave the outcome uncertain. This is an almost logically contradictory use of “agreeing to disagree”, because it is mathematically proven that rational agents who put positive probability on the same events cannot agree to disagree – if their posterior beliefs are common knowledge, then these beliefs must coincide.