Monthly Archives: March 2018

Adding anticipation to automatic transmission

Some of the downsides of automatic transmission in cars are that it does not anticipate hills or overtaking, and does not respond appropriately to slippery conditions. The technology that could enable the transmission to anticipate hills or overtaking is already available and incorporated in some cars, namely GPS, maps and sensors that look ahead of the car. If the map data includes altitude, then the location and movement direction of the car on the map predict the slope that the car will be on in the near future. This information could be sent to the automatic transmission to enable it to shift gears in anticipation of a hill. A forward-looking sensor with a range of a few hundred metres can also see a hill if the road does not curve too much. The sensor data could also be sent to the transmission. Similarly, a sensor could detect that the car ahead is getting closer, and the transmission could shift to a lower gear in preparation for accelerating to overtake.
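As a rough sketch of the hill-anticipation idea (the function names, data format and thresholds below are illustrative assumptions, not any real vehicle’s interface), the grade of the road ahead can be estimated from the altitude profile along the planned route and turned into a gear hint:

```python
# Minimal sketch: anticipate an upcoming hill from map altitude data.
# All names, units and thresholds are illustrative assumptions.

def upcoming_grade(route_points, lookahead_m=300.0):
    """route_points: list of (distance_along_route_m, altitude_m) ahead of the car.
    Returns the average grade (rise over run) within the next lookahead_m metres."""
    ahead = [(d, a) for d, a in route_points if d <= lookahead_m]
    if len(ahead) < 2:
        return 0.0
    (d0, a0), (d1, a1) = ahead[0], ahead[-1]
    return (a1 - a0) / (d1 - d0) if d1 > d0 else 0.0

def gear_hint(grade, current_gear):
    """Suggest shifting down one gear before a steep climb or descent."""
    if abs(grade) > 0.06:   # steeper than about 6 %
        return max(current_gear - 1, 1)
    return current_gear

# Example: an 8 % climb over the next 300 m suggests shifting from 5th to 4th.
route = [(0, 120.0), (100, 128.0), (200, 136.0), (300, 144.0)]
print(gear_hint(upcoming_grade(route), current_gear=5))   # 4
```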
Slippery conditions can be predicted using the car’s thermometer, perhaps with the addition of a humidity sensor, or detected using a wheel slip sensor. This information could also be sent to the computer controlling the automatic transmission, to prevent it from spinning the wheels too fast when there is little grip. The GPS or the forward-looking sensor could also tell whether the car is moving relative to the landscape. Comparing that movement data with the wheel rotation speed reveals whether the wheels are slipping.
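Slip detection by comparing the two speed measurements can be sketched similarly (the wheel radius and tolerance are illustrative assumptions):

```python
# Minimal sketch: detect wheel slip by comparing the ground speed (from GPS or
# a forward-looking sensor) with the speed implied by wheel rotation.
import math

def wheel_speed(rpm, wheel_radius_m=0.32):
    """Vehicle speed in m/s implied by wheel rotation, assuming no slip."""
    return rpm / 60.0 * 2 * math.pi * wheel_radius_m

def slipping(ground_speed_ms, rpm, tolerance=0.15):
    """True if the wheels turn much faster than the car actually moves."""
    return wheel_speed(rpm) > ground_speed_ms * (1 + tolerance)

# Example: GPS says 10 m/s, but the wheels rotate as if the car did ~13 m/s.
print(slipping(ground_speed_ms=10.0, rpm=390))   # True: reduce torque, shift up
```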

Speculative etymology of the word “partner”

I was confused when I first encountered the word “partner” in the context of referring to a spouse or the person one is in a romantic relationship with. I thought: “What partner? Business partner? Tennis partner? Oh, sex partner – that’s where the reference to partner comes from.”
A similar speculative etymological derivation can be applied to the US slang phrase “hook up”, which means to start a romantic relationship. Perhaps the phrase arose to describe a situation in which a “hook” goes up something, specifically someone’s bottom.

Gender equity either on paper or by forcing people to change research fields

Suppose that a university employs roughly equal numbers of men and women overall, but the proportions of the genders differ across research fields, or across research and administration, and the university wants gender equality within each subfield. Using the narrowest definition of a research field, each field has either one or zero people, so in the smallest fields, gender equality is impossible. From now on, the focus is on fields or administrative units that have at least two people.
Gender equity can be achieved on paper by redrawing the administrative boundaries or redefining research fields. A simple algorithm (not the only possible algorithm) is to form pairs of one male and one female employee and call each such pair an administrative unit. Larger units can be formed by adding many male-female pairs together. If the numbers of men and women are not exactly equal, then some single-gender pairs will be left over, but most administrative units will have perfect gender equality.
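A minimal sketch of this pairing algorithm, with made-up employee data:

```python
# Minimal sketch of the pairing algorithm: form male-female pairs and call each
# pair an administrative unit. The employee list is invented for illustration.

def pair_into_units(employees):
    """employees: list of (name, gender) with gender 'M' or 'F'.
    Returns a list of units; leftover people of the majority gender end up in
    single-gender units at the end."""
    men = [e for e in employees if e[1] == "M"]
    women = [e for e in employees if e[1] == "F"]
    units = [[m, w] for m, w in zip(men, women)]        # perfectly balanced units
    leftovers = men[len(women):] + women[len(men):]     # only one of these is non-empty
    units += [[a, b] for a, b in zip(leftovers[::2], leftovers[1::2])]
    if len(leftovers) % 2:
        units.append([leftovers[-1]])
    return units

staff = [("A", "M"), ("B", "F"), ("C", "M"), ("D", "M"), ("E", "F")]
for unit in pair_into_units(staff):
    print(unit)
# [('A', 'M'), ('B', 'F')]
# [('C', 'M'), ('E', 'F')]
# [('D', 'M')]
```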
The same idea can be used less radically by reassigning people who do interdisciplinary work and could plausibly belong to multiple research fields. Each person who is “between” fields gets assigned to the field that has a smaller fraction of that person’s gender. This increases gender equality in both the field that the person joins and the field that the person left. The field with a bigger surplus of that person’s gender loses one of that gender, and the field with a smaller surplus gains one.
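The reassignment rule can be sketched the same way; the field names and head counts below are invented for illustration:

```python
# Minimal sketch of the reassignment rule: send an interdisciplinary person to
# the candidate field in which their gender currently has the smallest share.

def assign_field(person_gender, candidate_fields):
    """candidate_fields: dict mapping field name -> (men, women) head counts.
    Returns the field with the smallest current fraction of person_gender."""
    def own_share(counts):
        men, women = counts
        total = men + women
        own = men if person_gender == "M" else women
        return own / total if total else 0.0
    return min(candidate_fields, key=lambda f: own_share(candidate_fields[f]))

fields = {"econometrics": (8, 2), "economic history": (3, 5)}
print(assign_field("M", fields))   # 'economic history': the smaller share of men
```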
Apart from reassigning people or redrawing administrative boundaries, gender equity requires inducing people to change their research fields. To some of my colleagues, directing researchers to change their area is ideologically unacceptable. However, if equity is the goal and a change of fields is necessary to achieve it, then several methods can be used. The inducements can be softer or harder. A hard inducement is a hiring policy (and job ads) that restricts hiring to only one gender. The same can be done with promotion and retention. If this policy were explicitly stated to everyone, then it would become more effective, and also more acceptable than surprising people with it when they apply for promotion.
Soft inducements consist of hints that one gender is preferred, which are usually stated in political doublespeak like “we are committed to equity” or “we are an equal opportunity employer”. If many more candidates of one gender apply, then giving all candidates an equal opportunity of getting hired does not result in equal proportions of men and women employed. I am in favour of clear guidelines and transparency, for example of explicitly stating in job ads and promotion policies that the underrepresented gender is preferred, and which gender is currently underrepresented. Clearly telling people that switching fields is good for their career is likely to have a bigger effect than the currently used hints.
It may be easier to shift the research areas of people who are earlier in their careers. Encouraging more young people of a given gender to go to an area where their gender is underrepresented is one way of inducing them to change their field (relative to their preference).
Current equity policies focus almost exclusively on the inflow of employees. Gender balance can also be improved by managing the outflow, for example by offering early retirement schemes to one gender, or by expanding these to a wider age range for one gender. If there are gender differences in the propensity to accept certain inducements to leave, then the same inducements can be offered to both genders (seemingly gender-neutrally), with the desired result that one gender exits more.

Laplace’s principle of indifference makes history useless

Model the universe in discrete time with only one variable, which can take values 0 and 1. The history of the universe up to time t is a vector of length t consisting of zeroes and ones. A deterministic universe is a fixed sequence. A random universe is like drawing the next value (0 or 1) according to some probability distribution every period, where the probabilities can be arbitrary and depend in arbitrary ways on the past history.
The prior distribution over deterministic universes is a distribution over sequences of zeroes and ones. The prior determines which sets are generic. I will assume the prior with the maximum entropy, which is uniform (all paths of the universe are equally likely). This follows from Laplace’s principle of indifference, because there is no information about the distribution over universes that would make one universe more likely than another. The set of infinite sequences of zeroes and ones can be identified with the interval [0,1] via binary expansions (up to a countable set of sequences), so a uniform distribution on it makes sense.
After observing the history up to time t, one can reject all paths of the universe that would have led to a different history. For a uniform prior, any history is equally likely to be followed by 0 or 1. The prediction of the next value of the variable is the same after every history, so knowing the history is useless for decision-making.
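A small simulation illustrates the point: draw many universes from the uniform prior (equivalently, independent fair bits), keep those consistent with the observed history, and check how often the next value is 1. The code below is purely illustrative.

```python
# Minimal sketch: under the uniform prior over binary paths, the next value is
# 1 with probability 1/2 regardless of the observed history.
import random

def next_value_frequency(history, horizon=20, draws=100_000):
    """Draw paths uniformly, keep those matching the observed history, and
    return the empirical frequency of 1 at the next period."""
    t = len(history)
    matches, ones = 0, 0
    for _ in range(draws):
        path = [random.randint(0, 1) for _ in range(horizon)]
        if path[:t] == history:
            matches += 1
            ones += path[t]
    return ones / matches if matches else float("nan")

print(next_value_frequency([1, 1, 1, 1]))   # close to 0.5 despite a run of ones
print(next_value_frequency([0, 1, 0, 1]))   # also close to 0.5
```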
Many other priors besides the uniform one on all sequences yield the same result, for example a uniform prior restricted to the support consisting of sequences that are eventually constant. There is a countable infinity of such sequences, so this prior is an improper uniform one. A uniform distribution restricted to sequences that are eventually periodic, or that in the limit have equal frequencies of 1 and 0, also works.
Having more variables, more values of these variables, or making time continuous does not change the result. A random universe can be modelled as a deterministic one with extra variables. These extra variables can, for example, be the probabilities of drawing 1 next period after each given history.
Predicting the probability distribution of the next value of the variable is easy, because the probability of 1 is always one-half. Knowing the history is no help for this either.

Taking your own fingerprints

Some jobs, visas, etc, require a criminal record check from the FBI, which in turn requires sending the FBI your fingerprints. Depending on where you live, it may be difficult to find a professional who will take your fingerprints on an FBI-approved fingerprint card. But no problem – it is quite feasible to take your own fingerprints.
Supplies needed:
1. Scrap paper to practice on
2. Printouts of the FBI fingerprint card on standard office paper (5-10 copies, because you will make mistakes)
3. Ink pad from a stamp (with some black or blue ink in it). For most brands of stamps, the internet has instructions on how to slide the ink pad out.
4. Piece of smooth rubber a little larger than a fingertip, e.g. a piece of bicycle inner tube, a piece of unlubricated condom, or a rubber ball about 5 cm in diameter
5. Paper towels or tissues for wiping ink from fingertips
6. Stick of glue for gluing paper
7. Small sharp straight scissors, or a razorblade and a smooth cutting board, for cutting paper
8. Large envelope for mailing the fingerprint card to the FBI without folding it. Postage stamp(s).

The piece of smooth rubber is for painting the ink of the stamp pad onto the ridges of the fingerprints. Just dipping the fingers in ink or touching them to the stamp pad will put ink in the troughs of the fingerprints as well. Overly inky fingers will create a uniform splotch on the paper, not the lines and whorls of a fingerprint. Also, if the stamp pad holds the impression of the writing on the stamp, then the fingers will pick up that writing as inkless areas.
The ink should not be quick-drying, e.g. the alcohol-based ink from a marker pen, which will dry on warm fingertips before the finger reaches the paper. The ink must be liquid, not the paste from a ballpoint pen, because the paste sticks to the fingertip in small chunks. These will create uniform dark splotches on the paper, with the adjacent areas blank white. The ink used in stamp pads is good.
With inky fingers, it is easy to get ink stains on the table or anything else within reach, so the paper towels or tissues should be held ready to clean the ink up while it is still wet.
Procedure:
1. Read the FBI’s online instructions for taking legible fingerprints (https://www.fbi.gov/services/cjis/fingerprints-and-other-biometrics/recording-legible-fingerprints), the Guidelines for Preparation of Fingerprint Cards (https://ucr.fbi.gov/fingerprints_biometrics/guidelines-for-preparation-of-fingerprint-cards-and-association-criminal-history-information) and the Capturing Legible Fingerprints poster (https://www.fbi.gov/file-repository/capturing-legible-fingerprints.pdf).
2. Lightly rub the bulging side of the piece of rubber on the stamp pad to coat it with an even thin layer of ink. Then lightly paint the ink on one fingertip, the first priority being to coat all the ridges from the crease of the first joint to the nail and from one side of the nail to the other. The second priority is to avoid coating the troughs of the fingerprint. Some ink in the troughs is unavoidable, but minimising it will make the print clearer.
3. Roll the fingertip on the scrap paper from nail to nail, like the FBI instructions say. Compare the result to the Capturing Legible Fingerprints poster. The goal is to get a roughly rectangular imprint with the ridges and troughs of the fingerprint visible. Empty spots on the fingerprint indicate too little ink, inky splotches indicate too much. Lightly pressing the second joint of the finger down with the thumb and forefinger of the other hand may help, or alternatively holding the tip of the nail down with the thumb and forefinger of the other hand. Practicing repeatedly with the same finger helps. The ink on the fingertip needs renewing from time to time. If there is too much ink on the fingertip, rolling it on the scrap paper will reduce the amount.
4. Estimate how much ink on the fingertip leaves the best print. If the first roll after inking a finger always creates a splotch, estimate how many rolls on the scrap paper are needed (without renewing the ink) to make the fingerprint legible.
5. Once the fingerprints on the scrap paper are looking good, use the same amount of ink and pre-rolls to put the same finger’s print on all the FBI fingerprint cards.
6. Wipe this finger clean of ink.
7. Repeat points 2-6 with each finger. When reaching the thumbs, also put their flat impressions on the fingerprint card (flat thumb impressions require less practice). Doing one finger at a time minimises ink stains on the fingerprint card and the furniture, and gives practice for each finger just before that finger’s print is taken.
8. Ink the four fingers of one hand and practice taking the four-finger flat impression a couple of times. Then put the four-finger impression on all the fingerprint cards. Clean the fingers. Repeat with the other hand.
9. Choose the fingerprint card with the largest number of clear fingerprints. It doesn’t matter how bad the unclear prints on the card are – these will be covered in the next step.
10. For each unclear print on the chosen card, pick the clearest print of the same finger from the remaining cards. Cut out that fingerprint, following the edges of the box it is in. Glue the cutout over the unclear print on the chosen card, making sure all four corners of the cutout are glued down. Hopefully you will need to cut and glue at most 3 prints.
11. Fill out the form at the top of the fingerprint card with black or blue pen, as the instructions say. The required codes for hair and eye colour, etc, can be found in the Guidelines for Preparation of Fingerprint Cards.
12. Without folding the fingerprint card, put it in the envelope together with the printout of the email confirmation of paying the appropriate fee on the FBI website. Address the envelope correctly and mail it. From Australia, sending the A4 envelope to the FBI using regular mail cost under 3 AUD in 2018. In my case, the pdf of the criminal record check was accessible from the FBI’s website less than two weeks after mailing my fingerprint card.

How superstition grows out of science

Priests in Ancient Egypt could predict eclipses and the floods of the Nile by observing the stars and the Moon and recording their previous positions when the events of interest happened. The rest was calculation, nothing magical. Ordinary people saw the priests looking at the stars and predicting events in the future, and thought that the stars magically told priests things and that the prediction ability extended to all future events (births, deaths, outcomes of battles). The priests encouraged this belief, because it gave them more power. This is one way astrology could have developed – by distorting and exaggerating the science of astronomy. Another way is via navigators determining the latitude of a ship using the stars or the sun. People would have thought that if heavenly bodies could tell a navigator his location on the open sea, then why not other secrets?
Engineers in Ancient Rome calculated the strength of bridges and aqueducts, and estimated the amount of material needed for these works. Ordinary people saw the engineers playing with numbers and predicting the amount of stones needed for a house or a fort. Numbers “magically” told engineers about the future, and ordinary people thought this prediction ability extended to all future events. Thus the belief in numerology could have been born.
When certain plants were discovered to have medicinal properties against certain diseases, swindlers imitated doctors by claiming that other natural substances were powerful cures against all sorts of diseases. The charlatans and snake oil salesmen distorted and exaggerated medicine.
Doctors diagnosed diseases by physical examination before laboratory tests were invented. Thus a doctor could look at parts of a person’s body, tell what diseases the person had, and predict the symptoms that the person would experience in the future. Exaggerating this, palm readers claimed to predict a person’s future life course by looking at the skin of their palm.
In the 20th century, some medicines were discovered to be equally effective at somewhat lower doses than previously thought. Then homeopathy exaggerated this by claiming that medicines are effective when diluted so much that on average not a single molecule of the drug remains in the water given to the patient.
In all these cases, superstition only adds bias and noise to scientific results. Science does not know everything, but it is a sufficient statistic (https://en.wikipedia.org/wiki/Sufficient_statistic) for superstitious beliefs, in the sense that any true information contained in superstition is also contained in science. Nothing additional can be learned from superstition once the scientific results are known.

A sewing machine design

The needle is shaped like a helix and positioned horizontally. It can be held in place by four rollers around it, with axes parallel to the axis of the helix, and by magnets that prevent the helix from moving forward or back along its axis. Another way is to use two rollers above the helix and a magnet above these. The needle is rotated around the axis of the helix by the rollers, which touch it.
As the helix rotates, the corkscrew motion makes the point of the needle go into and out of cloth like a curved surgical needle. The two-roller setup is convenient if access to the other side of the cloth is restricted, e.g. when sewing the lining of a garment closed.
An advantage over current sewing machines is simplicity and the need for only one thread per seam, instead of two threads, one of which is limited in length because it is wound on a bobbin. For long seams, the single-thread design has the advantage of dispensing with the bobbin. Long seams are usually simple (hemming curtains, bedsheets), not zigzag or some fancy design, so the limited seam design options with the helical needle are not a big problem.
The problem is that towards the end of a long seam, a long thread needs to be pulled through the entire seam behind the needle, so the friction becomes large. Also, the thread does not lie relatively straight but forms a “spiral binder” seam that wraps around the cloth, which bunches it.

Scientific thinking coordination game

If most people in a society use the scientific method for decision-making, then telling stories will not persuade them – they will demand evidence. In that case, bullshit artists and storytellers will not have much influence. It is then profitable to learn to provide evidence, which is positively correlated with learning to understand and use evidence. If young people respond to incentives and want to become influential in society (get a high income and social status), then young people will learn and use the scientific method, which reinforces the demand for evidence and reduces the demand for narratives.
If most people are not scientifically minded, but believe stories, then it is profitable to learn to tell stories. The skilled storytellers will be able to manipulate people, thus will gain wealth and power. Young people who want to climb the social and income ladder will then gravitate towards narrative fields of study. They will not learn to understand and use evidence, which reinforces the low demand for evidence.
Both the scientific and the narrative society are self-reinforcing, thus there is a coordination game in which people choose to become evidence-users or storytellers. Note that using the scientific method does not mean being a scientist. Most researchers whom I have met do not use science in their everyday decisions, but believe the stories they read in the media or hear from their friends. I have met Yale PhDs in STEM fields who held beliefs that most people in the world would agree to be false.
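A minimal sketch of this coordination game, with invented payoff numbers: each young person invests in evidence or in storytelling, and a skill pays more when it matches what the majority of society rewards.

```python
# Minimal sketch of the coordination game: matching the majority is a best
# response, so all-evidence and all-stories are both self-reinforcing equilibria.
# Payoff numbers are invented for illustration.

payoff = {                      # payoff[my_skill][majority_skill]
    "evidence": {"evidence": 3, "stories": 1},
    "stories":  {"evidence": 1, "stories": 3},
}

def best_response(majority_skill):
    return max(payoff, key=lambda mine: payoff[mine][majority_skill])

for majority in ("evidence", "stories"):
    print(majority, "-> best response:", best_response(majority))
# evidence -> best response: evidence
# stories -> best response: stories
```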
One signal of not thinking scientifically is asking people what the weather is like in some place one has not visited (I don’t mean asking in order to make small talk, but asking to gain information). Weather statistics for most places in the world are available online and are much more accurate than acquaintances’ opinions of the weather. This is because weather statistics are based on a much longer time series and on physically measured temperature, rainfall, wind, etc, not on a person’s guess of these variables.

Giving oneself tenure

Senior academics tell juniors that an assistant professor does not have to get tenure at his or her current university, but “in the profession”, i.e. at some university. To extend this reasoning, one does not have to get tenure at all, just guarantee one’s ability to pay one’s living costs with as little effort as possible. Government jobs are also secure – not quite tenure, but close.
Economically, tenure is guaranteed income for life (or until a mandatory retirement age) in exchange for teaching and administrative work. The income may vary somewhat, based on research and teaching success, but there is some lower bound on salary. Many nontenured academics are obsessed with getting tenure. The main reason is probably not the prestige of being called Professor, but the income security. People with families seem especially risk-averse and motivated to secure their jobs.
Guaranteed income can be obtained by other means than tenure, e.g. by saving enough to live off the interest and dividends (becoming a rentier). Accumulating such savings is better than tenure, because there is no teaching and administration requirement. If one wishes, one can always teach for free. Similarly, research can be done in one’s free time. If expensive equipment is needed for the research, then one can pay a university or other institution for access to it. The payment may be in labour (becoming an unpaid research assistant). Becoming financially independent therefore means giving oneself more than tenure. Not many academics seem to have noticed this option, because they choose a wasteful consumerist lifestyle and do not plan their finances.
Given the scarcity of tenure-track jobs in many fields, choosing the highest-paying private-sector position (to accumulate savings) may be a quicker and more certain path to the economic equivalent of tenure than completing sequential postdocs. The option of an industry job seems risky to graduate students because, unlike in academia, one can get fired. However, the chance of layoffs should be compared to that of failing to get a second postdoc at an institution of the same or higher prestige. When one industry job ends, there are others. As in academia, moving downward is easier than moving up.
To properly compare the prospects in academia and industry, one should look at the statistics, not listen to anecdotal tales of one’s acquaintances or the promises of recruiters. If one aspires to be a researcher, then one should base one’s life decisions on properly researched facts. It is surprising how many academics do not. The relevant statistics on the percentage of graduates or postdocs who get a tenure-track job or later tenure have been published for several fields (http://www.nature.com/ncb/journal/v12/n12/full/ncb1210-1123.html, http://www.education.uw.edu/cirge/wp-content/uploads/2012/11/so-you-want-to-become-a-professor.pdf, https://www.aeaweb.org/articles?id=10.1257/jep.28.3.205). The earnings in both higher education and various industries are published as part of national labour force statistics. Objective information on job security (frequency of firing) is harder to get, but administrative data from the Nordic countries has it.
Of course, earnings are not the whole story. If one has to live in an expensive city to get a high salary, then the disposable income may be lower than with a smaller salary in a cheaper location. Non-monetary aspects of the job matter, such as a hazardous or hostile work environment, the hours and the flexibility. Junior academics normally work much longer than the 40 hours per week standard in most jobs, but the highest-paid private-sector positions may require even more time and effort than academia. The hours may be more flexible in academia, other than the teaching times. The work is probably of the same low danger level. There is no reason to suppose that the friendliness of the colleagues differs.
Besides higher salary, a benefit of industry jobs is that they can be started earlier in life, before the 6 years in graduate school and a few more in postdoc positions. Starting early helps with savings accumulation, due to compound interest. Some people have become financially independent in their early thirties this way (see mrmoneymustache.com).
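A back-of-the-envelope sketch of this arithmetic (the savings, expenses, real return and withdrawal rate below are illustrative assumptions, not financial advice):

```python
# Minimal sketch: years until savings can cover living costs, i.e. the point of
# "giving oneself tenure". All rates and amounts are illustrative assumptions.

def years_to_financial_independence(annual_savings, annual_expenses,
                                    real_return=0.05, withdrawal_rate=0.04):
    """Save a fixed real amount each year at a constant real return; stop when
    withdrawing withdrawal_rate of the savings covers annual_expenses."""
    target = annual_expenses / withdrawal_rate
    savings, years = 0.0, 0
    while savings < target:
        savings = savings * (1 + real_return) + annual_savings
        years += 1
    return years

# Example: saving 30,000 a year against 25,000 a year of expenses.
print(years_to_financial_independence(30_000, 25_000))   # about 15 years
```

Under these assumed numbers, starting the accumulation a few years earlier shortens the wait noticeably, which is the compound-interest point made above.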
If one likes all aspects of an academic job (teaching, research and service), then it is reasonable to choose an academic career. If some aspects are not inherently rewarding, then one should consider the alternative scenario in which the hours spent on those aspects are spent on paid employment instead. The rewarding parts of the job are done in one’s free time. Does this alternative scenario yield a higher salary? The non-monetary parts of this scenario seem comparable to academia.
Tenure is becoming more difficult to get, as evidenced by the lengthening PhD duration, the increasing average number of postdocs people do before getting tenure, and the lengthening tenure clocks (9 years at Carnegie Mellon vs the standard 6). Senior academics (who have guaranteed jobs) benefit from increased competition among junior academics, because then the juniors will do more work for the seniors for less money. So the senior academics have an incentive to lure young people into academia (to work in their labs as students and postdocs), even if this is not in the young people’s interest. The seniors do not fear competition from juniors, due to the aforementioned guaranteed jobs.
Graduate student and postdoc unions are lobbying universities and governments to give them more money. This has at best a limited impact, because in the end the jobs and salaries are determined by supply and demand. If the unions want to make current students and postdocs better off, then they should discourage new students from entering academia. If they want everyone to be better off, then they should encourage research-based decision-making by everyone. I do not mean presenting isolated facts that support their political agenda (like the unions do now), but promoting the use of the full set of labour force statistics available, asking people to think about their life goals and what jobs will help achieve those goals, and developing predictive models along the lines of “if you do a PhD in this field at this university, then your probable job and income at age 30, 40, etc. are …”.

On the optimal burden of proof

All claims should be considered false until proven otherwise, because lies can be invented much faster than refuted. In other words, the maker of a claim has the burden of providing high-quality scientific proof, for example by referencing previous research on the subject. Strangely enough, some people seem to believe marketing, political spin and conspiracy theories even after such claims have been proven false. One can only wish that everyone bore the consequences of their choices (so that karma worked).
Considering all claims false until proven otherwise runs into a logical problem: a claim and its opposite claim cannot be simultaneously false. The priority for falsity should be given to actively made claims, e.g. someone saying that a product or a policy works, or that there is a conspiracy behind an accident. Especially suspect are claims that benefit their maker if people believe them. A higher probability of falsity should also be attached to positive claims, e.g. that something has an effect in whatever direction (as opposed to no effect) or that an event is due to non-obvious causes, not chance. The lack of an effect should be the null hypothesis. Similarly, ignorance and carelessness, not malice, should be the default explanation for bad events.
Sometimes two opposing claims are actively made and belief in them benefits their makers, e.g. in politics or when competing products are marketed. This is the hardest case to find the truth in, but a partial and probabilistic solution is possible. Until rigorous proof is found, one should keep an open mind. Keeping an open mind creates a vulnerability to manipulation: after some claim is proven false, its proponents often try to defend it by asking its opponents to keep an open mind, i.e. ignore evidence. In such cases, the mind should be closed to the claim until its proponents provide enough counter-evidence for a neutral view to be reasonable again.
To find which opposing claim is true, the first test is logic. If a claim is logically inconsistent with itself, then it is false by syntactic reasoning alone. A broader test is whether the claim is consistent with other claims of the same person. For example, Vladimir Putin said that there were no Russian soldiers in Crimea, but a month later gave medals to some Russian soldiers, citing their successful operation in Crimea. At least one of the claims must be false, because either there were Russian soldiers in Crimea or not. The way people try to weasel out of such self-contradictions is to say that the two claims referred to different time periods, definitions or circumstances. In other words, change the interpretation of words. A difficulty for the truth-seeker is that sometimes such a change in interpretation is a legitimate clarification. Tongues do slip. Nonetheless, a contradiction is probabilistic evidence for lying.
The second test for falsity is objective evidence. If there is a streetfight and the two sides accuse each other of starting it, then sometimes a security camera video can refute one of the contradicting claims. What evidence is objective is, sadly, subject to interpretation. Videos can be photoshopped, though it is difficult and time-consuming. The objectivity of the evidence is strongly positively correlated with the scientific rigour of its collection process. “Hard” evidence is a signal of the truth, but a probabilistic signal. In this world, most signals are probabilistic.
The third test of falsity is the testimony of neutral observers, preferably several of them, because people misperceive and misremember even under the best intentions. The neutrality of observers is again up for debate and interpretation. In some cases, an observer is a statistics-gathering organisation. Just like objective evidence, testimony and statistics are probabilistic signals.
The fourth test of falsity is the testimony of interested parties, to which the above caveats apply even more strongly.
Integrating conflicting evidence should use Bayes’ rule, because it keeps probabilities consistent. Consistency helps glean information about one aspect of the question from data on other aspects. Background knowledge should be combined with the evidence, for example by ruling out physical impossibilities. If a camera shows a car disappearing behind a corner and immediately reappearing, moving in the opposite direction, then physics says that the original car couldn’t have changed direction so fast. The appearing car must be a different one. Knowledge of human interactions and psychology is part of the background information, e.g. if smaller, weaker and outnumbered people rarely attack the stronger and more numerous, then this provides probabilistic information about who started a fight. Legal theory incorporates background knowledge of human nature to get information about the crime – human nature suggests motives. Asking “Who benefits?” has a long history in law.
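A minimal worked example of such Bayesian integration, with invented probabilities and assuming the pieces of evidence are conditionally independent given the truth:

```python
# Minimal sketch: Bayes' rule combining a neutral prior with two probabilistic
# signals (e.g. a camera video and a neutral witness). All numbers are invented.

def bayes_update(prior, likelihood_if_true, likelihood_if_false):
    """Posterior probability of the claim after observing one signal."""
    joint_true = prior * likelihood_if_true
    joint_false = (1 - prior) * likelihood_if_false
    return joint_true / (joint_true + joint_false)

p = 0.5                        # neutral prior for the claim "A started the fight"
p = bayes_update(p, 0.8, 0.2)  # video: likely if A started it, unlikely otherwise
p = bayes_update(p, 0.6, 0.4)  # neutral witness: a weaker, noisier signal
print(round(p, 3))             # about 0.857 - strong evidence, not certainty
```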