Monthly Archives: February 2021

Paying pharmaceutical firms for capacity is problematic

Castillo et al. 2021 (doi:10.1126/science.abg0889) make many valid points, e.g., that vaccine production should be greatly expanded using taxpayer money, because the quicker recovery from the pandemic more than pays for the expansion. Castillo et al. also suggest paying pharmaceutical manufacturers for the capacity they install instead of the quantity they produce. The authors' reasoning is that, because the penalties for delay are small, producers postpone installing capacity and delivering their promised vaccine quantities in order to save costs and to supply higher-paying buyers first. Producers refuse to sign contracts with larger penalties.

What the authors do not mention is that the same problems occur when paying for capacity. In addition, the capacity needs to be monitored, which is more difficult than checking the delivered quantity. Before large-scale production starts, how is the buyer to detect the "Potemkin capacity" of cheap production lines unsuitable for large quantities? The manufacturer may later simply claim technical glitches when the production line does not work. Effective penalties are needed, which in turn requires motivating the producer to sign a contract containing them, just as for a quantity contract.

Paying in advance for capacity before the vaccine is proven to work insures firms against the risk of failure, as Castillo et al. say. The problem is that such advance payment also attracts swindlers who promise a miracle cure and then run off with the money – there is adverse selection in who enters the government's capacity contract scheme. Capacity contracts should therefore be restricted to firms with an established good reputation. However, vaccines from innovative entrants may also be needed, which suggests continuing to use quantity contracts for at least some firms. If the law requires treating firms equally, then all of them should be offered a similar contract.

Identifying unmeasurable effort in contests

To distinguish unmeasurable effort from unmeasurable exogenous factors like talent or environmental interference in contests, assumptions are needed, even for partial identification and even when overall performance can be objectively measured (e.g., chess move quality evaluated by a computer). Combining either of the following assumptions with the additive separability of effort and the exogenous factors provides sign restrictions on coefficient estimates. Additive separability means that talent or the environment changes performance by the same amount at any effort level.

One such identifying assumption is that effort is greatest when it makes the most difference – against an equal opponent. By contrast, effort is lower against much better and much worse opponents.

A similar identifying assumption is that if there is personal conflict between some contest participants but not others, then effort is likely higher against a hated opponent than against a neutral one.
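
To make the sign restrictions concrete, one way to write the additive model – my notation, not taken from any paper – for contestant i facing opponent j is:

performance_ij = effort_ij + talent_i + noise_ij
effort_ij = base_i + b × equal_ij (or b × hated_ij), with the identifying assumption implying b > 0

Regressing measured performance on the equal-opponent or hated-opponent indicator with contestant fixed effects then estimates b, whose sign the assumption restricts.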

The performance difference of a given contestant against an equal opponent versus an unequal one is a lower bound on how much effort affects performance. Similarly, the performance gap against a hated rival versus a neutral contestant is a lower bound on the effect of effort. The lower bound is not the total influence of effort, because even against an unequal, neutral opponent, effort is still positive.
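
A minimal simulation sketch of this lower-bound logic, assuming the additive model above; the numbers and names (the 0.5 effort gap, b_equal, and so on) are illustrative assumptions, not estimates from any data.

import numpy as np

rng = np.random.default_rng(0)
n_players, n_matches = 50, 2000

# Additive model: performance = effort + talent + noise.
talent = rng.normal(0, 1, n_players)
i = rng.integers(0, n_players, n_matches)  # contestant in each match
j = rng.integers(0, n_players, n_matches)  # opponent in each match
equal = (np.abs(talent[i] - talent[j]) < 0.3).astype(float)

# True effort: baseline 1.0, plus 0.5 extra against an equal opponent,
# so effort's total effect on performance is 1.5, not 0.5.
effort = 1.0 + 0.5 * equal
perf = effort + talent[i] + rng.normal(0, 0.2, n_matches)

# Demeaning by contestant removes talent and the effort baseline
# (a contestant fixed effect), leaving only the effort gap.
perf_dm = perf - np.array([perf[i == k].mean() for k in i])
equal_dm = equal - np.array([equal[i == k].mean() for k in i])
b_equal = (equal_dm @ perf_dm) / (equal_dm @ equal_dm)
print(b_equal)  # ~0.5: the effort gap, a lower bound on the total 1.5

The regression recovers only the 0.5 gap, not the full 1.5, exactly because effort against unequal opponents is still positive.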

Computer vision training sets of photos are endogenous

In principle, every pixel could be independent of any other, so the number of possible photos is the number of colours raised to the power of the number of pixels – an astronomically large number. No training data set is large enough to cover these photo possibilities many times over, as required for statistical analysis, of which machine learning is a subfield. The problem is solved by restricting attention to a small subset of possible photos. This leaves a reasonable number of possible photos, which a reasonably large training data set can cover.
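
A back-of-the-envelope calculation (my example sizes, not the post's) shows the scale:

import math

# Hypothetical example: a tiny 100x100 greyscale image, 256 levels per pixel
pixels = 100 * 100
colours = 256
digits = int(pixels * math.log10(colours)) + 1
print(f"256^{pixels} is a number with about {digits:,} decimal digits")
# ~24,083 digits -- no training set covers a space of this size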

Useful photos on any topic usually contain just one main object, such as a face, with fewer than 100 secondary objects (furniture, clothes, equipment). There is a long right tail – some useful photos have dozens of instances of the main object, like a group photo full of faces, but I do not know of a photo with a thousand distinguishable faces. Photos of mass events may show ten thousand people, but lack the resolution to make any single face in them useful.

Only a selected set of photos is worth analysing, and only photos sufficiently similar to these are worth putting in a computer vision training dataset. The sample selection occurs on both the input and the output side: few of the astronomically many pixel arrangements actually occur as photos to be classified by machine vision, and most of the training photos are similar to those. There are thus fewer outputs to predict than a uniform random distribution would generate, and more inputs close to those outputs than uniform random input data would contain. Both speed up learning.
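
A toy sketch of this effect, under my own illustrative assumptions (five "prototype photo" centres in a two-dimensional input space, a nearest-neighbour classifier): the same model with the same number of samples learns concentrated inputs better than uniformly spread ones.

import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)
# Five well-separated "typical photo" prototypes in input space
centres = np.array([[-4, -4], [-4, 4], [4, -4], [4, 4], [0, 0]], dtype=float)

def label(X):
    # The output to predict: the index of the nearest prototype
    d = np.linalg.norm(X[:, None, :] - centres[None, :, :], axis=2)
    return d.argmin(axis=1)

def accuracy(X):
    X_tr, X_te, y_tr, y_te = train_test_split(X, label(X), random_state=0)
    return KNeighborsClassifier(5).fit(X_tr, y_tr).score(X_te, y_te)

n = 300
# Selected inputs: concentrated near the prototypes, like real photos
X_sel = centres[rng.integers(0, 5, n)] + rng.normal(0, 0.4, (n, 2))
# Unselected inputs: uniform over the space, like random pixel arrangements
X_uni = rng.uniform(-5, 5, (n, 2))

print(accuracy(X_sel), accuracy(X_uni))  # concentrated inputs score higher

The concentrated sample rarely falls near a decision boundary, so the classifier makes almost no errors; the uniform sample spends many observations in ambiguous regions.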

When photo resolution improves, more objects of interest can appear in a photo without blur destroying its usefulness. Such photos then become available in large numbers and are added to the training datasets.