Monthly Archives: November 2015

Religion is wrong with positive probability

When a person says that some god wants X, or rewards X and punishes Y, how do they know? A limited human claims knowledge of the mind of a god. When asked how they know, they say either that the god told them directly (through some revelation or sign, perhaps) or that the god told some other person (a prophet) in the past, who passed the message on in the form of a book or an oral tradition. They certainly do not have replicable experimental evidence. If some other person was told, then recall the telephone game (children whispering in each other’s ears change the message radically) and people’s general lying, misunderstanding and misremembering. In any case, at some point a god must have told a person.
Let us look at this unavoidable transmission link between a purported god and a human. Could an evil spirit not have impersonated the god to the human (if evil spirits exist in that religion)? Or could it have been just a hallucination, a dream, a false memory? Psychology shows that false memories are easy to induce (Brainerd and Reyna 2005, “The science of false memory”). How could a human tell the difference? Plenty of people in insane asylums claim not only to know a god’s will but to be a god.
If there is a method for distinguishing real revelations from gods from false ones, how do you know it works? Either a god told you directly that it works or a person told you. In both cases we arrive at the previous question: how do you know it was a god and that you (or the other person) understood and remember the message correctly? If there is a method for finding and verifying good methods of distinguishing real from fake revelations, how do you know that one works? And so on. Everything eventually relies on belief in the claim of a human. There is always a positive probability that the human imagined things that were not there or is deceiving themselves or others. Any religious claim is wrong with positive probability.
The next fallback position for advocates of religion is that even if it is wrong with positive probability, it does no harm to believe it. But how do they know? Back to the question about knowing the mind of a god. Why could a god not reward nonbelievers and punish believers? And which religion out of the multitude in the world should one believe for maximal benefit? Some religions claim that a wrong religion is worse than none (heresy worse than paganism), some the opposite. To compare the expected benefit from believing different religions and from atheism, one needs to know the size of the rewards and punishments and also their probabilities. All this reduces to (conflicting) claims by humans about the will of a god – claims that are wrong with positive probability.
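The expected-benefit comparison above is ordinary expected-value arithmetic. A minimal sketch, with all probabilities and payoffs invented purely for illustration (the post’s point is precisely that these inputs rest on fallible human claims):

```python
# Each option: (probability its claims are true, payoff to a believer if
# true, payoff to a believer if false). All numbers are hypothetical.
options = {
    "religion_A": (0.01, 100.0, 0.0),
    "religion_B": (0.02, 50.0, -1.0),   # believing has some cost if false
    "no_belief":  (1.00, 0.0, 0.0),     # baseline normalized to zero
}

def expected_benefit(p, if_true, if_false):
    """Expected payoff of committing to one option."""
    return p * if_true + (1 - p) * if_false

best = max(options, key=lambda name: expected_benefit(*options[name]))
print(best)  # religion_A: 0.01 * 100 = 1.0 beats the alternatives here
```

Changing any of the made-up inputs can flip the ranking, which is the problem: the inputs themselves are only human claims.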

Why intelligent people can be funny if they want to

What does it take to be funny spontaneously? A good memory and quick thinking – the memory to remember jokes, the quickness to find a fitting joke for the situation before others do.
Practiced funniness also uses acting skills, which are acquired with time and effort. Intelligent people gain skills faster. Confidence always helps – a shy person dares not tell the joke or act in a funny way, because that draws attention. The caveat “if they want to” in the title is due to this possible confidence issue.

Theory and data both needed for prediction

Clearly, data is required for prediction. Theory only says: “If this, then that.” It connects assumptions and conclusions. Data tells whether the assumptions are true. It allows the theory to be applied.
Theory is also required for prediction, although that is less obvious. For example, after observing a variable taking the value 1 a million times, what is the prediction for the next realization of the variable? Under the theory that the variable is constant, the next value is predicted to be 1. If the theory says there are a million 1-s followed by a million 0-s followed by a million 1-s etc, then the next value is 0. This theory may sound more complicated than the other, but prediction is concerned with correctness, not complexity. Also, the simplicity of a theory is a slippery concept – see the “grue-bleen” example in philosophy.
The constant sequence may sound like a more “natural” theory, but actually both the “natural” and the correct theory depend on where the data comes from. For example, the data may be generated by measuring whether it is day or night every millisecond. Day=1, night=0. Then a theory that a large number of 1-s are followed by a large number of 0-s, etc is more natural and correct than the theory that the sequence is constant.
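The two theories above can be written down explicitly. Both fit a million observed 1-s perfectly, yet they disagree on the next value – a sketch of why data alone does not pin down a forecast (the block length of one million is the post’s example, not a real measurement):

```python
# Two theories consistent with a million observed 1s that disagree
# on the very next value.

def constant_theory(history):
    """Predicts the sequence never changes."""
    return history[-1]

def block_theory(history, block=1_000_000):
    """Predicts alternating blocks of `block` 1s and `block` 0s
    (like the day/night measurement taken every millisecond)."""
    n = len(history)                  # index of the next observation
    return 1 if (n // block) % 2 == 0 else 0

data = [1] * 1_000_000
print(constant_theory(data))  # 1
print(block_theory(data))     # 0: a block of 0s is due to start
```

Which forecast is correct depends entirely on where the data comes from, exactly as the day/night example says.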
Sometimes the theory is so simple that it is not noticed, like when forecasting a constant sequence. Which is more important for prediction, theory or data? Both equally, because the lack of either makes prediction impossible. If the situation is simple, then theorists may not be necessary, but theory still is.

Targeting university donations more precisely

If a donation is an expression of gratitude to a university where one acquired great skills or had a good time, then why not target it more precisely? Why donate to the entire university or a particular department as opposed to the people making up the university? Some people probably contributed more than others to the excellent university experience. It would make sense to reward them more. The people who made the studies enjoyable or useful may be gone from the university, especially if they were coursemates, but the employees of universities also change jobs. Those who are gone do not benefit from a donation to the university. A gratitude-based donation should go directly to the people one wants to thank.
If a donation is for the purpose of advancing education and research, then the money should be targeted to where it does the most good. But the universities receiving the most donations are those that are already rich. It is difficult to measure the benefit to education or research that an additional unit of money generates in different universities, but diminishing marginal returns seem reasonable. In that case, do-good donations should go to the poorest regions of the world and the poorest universities.
The richest universities often spend money on fancy architecture, with stone carvings on the outside of buildings and wood carvings and paintings inside them. The money thus spent clearly does not contribute to education or research. It may even have a negative value if architecturally interesting buildings are less well suited to study and work than a standard office block (this is true in my experience).
It is not enough to donate under the condition that the university must spend the money on scholarships or salaries, not buildings. There is a crowding-out effect: if the university receives a donation for a particular purpose, it spends less of its own money on that purpose than it would have without the donation. Effectively, part of the donation still goes to buildings.
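The crowding-out effect is simple arithmetic. A stylized sketch, with all figures hypothetical:

```python
# Stylized crowding-out arithmetic (all figures hypothetical).
# The university planned to spend 10 on scholarships and 5 on buildings.
# A donor gives 4 earmarked for scholarships; the university withdraws 3
# of its own scholarship money and redirects it to buildings.

own_scholarships_before, buildings_before = 10, 5
donation = 4
crowding_out = 3  # own funds pulled out of scholarships after the donation

scholarships_after = own_scholarships_before - crowding_out + donation
buildings_after = buildings_before + crowding_out

print(scholarships_after)  # 11: scholarships rise by only 1, not 4
print(buildings_after)     # 8: effectively 3 of the 4 donated units
                           # fund buildings despite the earmark
```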

Signalling by encouraging good decision-making

Con artists pressure people into quick decisions. Marketing mentions that the offer is for a limited time only, so buy now, no time to read the small print. Date rapists try to get victims drunk or drugged. In all these cases, the goal is to prevent careful reasoning about what is happening and about the decisions to be made, and to prevent the victim from consulting others. Being pressured, confused or bullied while deciding is a danger sign, so one way for honest sellers to distinguish themselves is by encouraging good decision-making. Giving people time, referring them to neutral sources of info and asking them to think things over before deciding are all ways to make decisions more accurate.
More accurate decisions distinguish between good and bad deals better, which benefits honest sellers and harms con artists. This differential effect of information on good and bad types enables signalling by precision of information, where good types want to reveal as much as possible and bad types want to obfuscate. Information unravelling results – the best type has an incentive to reveal itself, then the second best type, then the third best etc. By not revealing, one is pooled with the average of the remaining types. In the end, the only type who does not strictly prefer to reveal itself is the worst type.
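The unravelling logic can be simulated directly. A minimal sketch, assuming five seller types with made-up qualities 1 to 5, where a silent type is valued at the average of all silent types:

```python
# Minimal information-unravelling sketch. A type that stays silent is
# valued at the average quality of all silent types; a type discloses
# whenever its true quality strictly beats that pooled average.

qualities = [1, 2, 3, 4, 5]
silent = set(qualities)

changed = True
while changed:
    changed = False
    pool = sum(silent) / len(silent)
    for q in sorted(silent, reverse=True):
        if q > pool:          # strictly prefers revealing itself
            silent.remove(q)
            changed = True
            break             # recompute the pooled average after each exit

print(sorted(silent))  # [1]: only the worst type never strictly gains
```

First the best type leaves the pool, which drags the pooled average down, which makes the next-best type leave, and so on – only the worst type remains silent, matching the unravelling argument in the text.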

Repeal regulation requiring ratings

The credit rating agencies (Moody’s, Fitch etc) have been accused of inflating the ratings of companies after their ratings underestimated the default risk during the 2008 financial crisis. First, it is strange to accept ratings expressed as letters (AAA, AA+ etc) when market participants care about the default risk and the letter codes are based (or so the rating agencies say) on the default risk. Remove the coarse letter codes and require the rating to equal the estimated probability of default over the next n years. The probability should be reported with enough significant digits and with a standard error, not as a vague claim that the default probability is somewhere between x and y. Transparency reduces the potential for rating inflation and for the later justification of wrong ratings.
A good punishment for the rating agencies that also increases transparency is to repeal any regulation requiring the use of their ratings. Currently, banks are only allowed to invest in “investment grade” bonds, where the grade is determined by the credit rating agencies. The purpose of the regulation should be to prevent banks from taking too much risk, so the variable of interest is the default probability, not the rating. Replace the requirement of an “investment grade” rating with a requirement that the predicted default probability over the next n years must be below x. The obvious question is who predicts this probability.
The restriction to investing only in bonds predicted to be unlikely to default is similar to the vague requirement of due diligence. The investing bank must be able to justify its decision later if the investment turns out badly. The bank must use all available sources of info (maybe even rating agencies) and state-of-the-art methods to predict default probabilities for bonds it intends to invest in. To prevent the bank from manufacturing a justification ex post to excuse its bad decision, the methodology it uses to predict must be provably unchanged from the time of investing. This can be achieved by sharing the methodology with the regulator.
There is a concern that business secrets leak from the regulator to competitors. This can be eliminated by encrypting the info that the bank gives the regulator, with the bank keeping the key. The encrypted info can even be publicly posted on the web. If concerns arise, the bank can later be ordered to give the key to the regulator (or even to the public), who can then verify the info received in the past. If the bank claims to have lost the key, the punishment should be the same as for the wrongdoing that the key was meant to help detect.
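The scheme above is a commit-then-reveal protocol. A simplified sketch using a hash commitment rather than full encryption (an assumption for brevity: a real scheme would encrypt the methodology so the regulator holds recoverable ciphertext, not just a digest; the methodology text here is invented):

```python
# Commit-then-reveal sketch with a hash commitment (simplification of the
# encryption scheme described in the text).
import hashlib
import secrets

# At investment time: the bank commits to its methodology document.
methodology = b"default-probability model v1 (hypothetical document)"
key = secrets.token_bytes(32)                    # kept secret by the bank
commitment = hashlib.sha256(key + methodology).hexdigest()
# `commitment` can be posted publicly; it reveals nothing by itself.

# Later, if ordered, the bank hands over `key` and `methodology`.
def verify(commitment, key, methodology):
    """Check that the revealed document matches the earlier commitment."""
    return hashlib.sha256(key + methodology).hexdigest() == commitment

print(verify(commitment, key, methodology))         # True: unchanged
print(verify(commitment, key, methodology + b"!"))  # False: was altered
```

Any change to the methodology after the commitment was posted makes verification fail, which is what makes an ex-post manufactured justification detectable.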