Computer vision training sets of photos are endogenous

In principle, every pixel could be independent of every other, so the number of possible photos is the number of colours raised to the power of the number of pixels – an astronomically large number. No training data set is large enough to cover these photo possibilities many times over, as required for statistical analysis (of which machine learning is a subfield). The problem is solved by restricting attention to a small subset of possible photos. Within this subset, the number of possible photos is manageable and can be covered by a reasonably large training data set.
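The size of that space can be made concrete with a short calculation. A sketch, assuming a hypothetical 12-megapixel photo with 24-bit colour (the figures are illustrative choices, not from the post):

```python
import math

# Assumed example: a 12-megapixel photo, 24-bit colour
# (8 bits each for red, green, blue -> ~16.7 million colours per pixel).
pixels = 12_000_000
colours = 2 ** 24

# Each pixel can take any colour independently, so the count of distinct
# possible photos is colours ** pixels. The number itself is far too large
# to write out, so we report only its order of magnitude (base-10 digits).
digits = pixels * math.log10(colours)
print(f"about 10^{digits:,.0f} possible photos")
```

Even this modest example yields a number with tens of millions of digits, which is why exhaustively covering the space of possible photos with training data is hopeless, and restriction to a small subset is unavoidable.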

Useful photos on any topic usually contain just one main object, such as a face, plus fewer than 100 secondary objects (furniture, clothes, equipment). There is a long right tail – some useful photos contain dozens of instances of the main object, like a group photo full of faces, but I do not know of a photo with a thousand distinguishable faces. Photos of mass events may show ten thousand people, but lack the resolution to make any single face in them useful.

Only selected photos are worth analysing, and only photos sufficiently similar to these are worth putting in a computer vision training dataset. The sample selection occurs on both the input and the output side: few of the vast number of possible pixel arrangements actually occur as photos to be classified by machine vision, and most training photos are similar to those that do. There are thus fewer outputs to predict than a uniform random distribution would generate, and more inputs close to those outputs than uniform random input data would contain. Both speed learning.
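The claim that selection speeds learning can be illustrated with a toy experiment. Every detail below – the 1-nearest-neighbour classifier, the Gaussian training cloud, the labelling rule – is my own illustrative assumption, not from the post. The point is only that a classifier tested on inputs clustered near its training data does better than the same classifier tested on uniform random inputs spanning the whole space:

```python
import random

random.seed(0)
dim = 10

def true_label(x):
    # Illustrative labelling rule: the class depends on the first coordinate.
    return int(x[0] > 0)

def nearest_label(x, train):
    # 1-nearest-neighbour: copy the label of the closest training point.
    best = min(train, key=lambda t: sum((a - b) ** 2 for a, b in zip(x, t[0])))
    return best[1]

# Training data: 200 points from a standard Gaussian cloud, with labels.
train = []
for _ in range(200):
    x = [random.gauss(0, 1) for _ in range(dim)]
    train.append((x, true_label(x)))

# Test set 1: inputs near training points (like real photos, which
# resemble the photos already collected).
near = []
for x, _ in train[:100]:
    x2 = [a + random.gauss(0, 0.1) for a in x]
    near.append((x2, true_label(x2)))

# Test set 2: uniform random inputs spanning a much larger box.
uniform = []
for _ in range(100):
    x = [random.uniform(-5, 5) for _ in range(dim)]
    uniform.append((x, true_label(x)))

def accuracy(tests):
    return sum(nearest_label(x, train) == y for x, y in tests) / len(tests)

acc_near = accuracy(near)
acc_uniform = accuracy(uniform)
print("near:", acc_near, "uniform:", acc_uniform)
```

With the same training data and the same classifier, accuracy is higher on the clustered test inputs, mirroring the argument above: concentration of inputs near the training data is itself a learning aid.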

When photo resolution improves, more objects of interest can appear in a photo without its usefulness being lost to blur. Such photos then become available in large numbers and are added to the datasets.
