Tag Archives: manipulation

Exaggerating vs hiding emotions

In some cultures, it was a matter of honour not to show emotions. Native American warriors famously had stony visages. Victorian aristocracy prided themselves on a stiff upper lip and an unflappable manner. Winston Churchill describes in his memoirs how boarding school culture, enforced by physical violence, demanded that boys show no fear. In other cultures, emotions are exaggerated. Teenagers in North America from 1990 to the present are usually portrayed as drama queens, as are people in the arts. Everything is either fabulous or horrible to them, with no so-so experiences in between. I have witnessed the accuracy of this portrayal in the case of teenagers. Jane Austen’s “Northanger Abbey” depicts the teenagers of her era as exaggerating their emotions similarly to their modern-day counterparts.

In the attention economy, exaggerating emotions is profitable for getting and keeping viewers. Traditional and social media portray situations as more extreme than they really are in order to attract eyeballs and clicks. Teenagers may have a similar motivation – to get noticed by their peers – and providing drama is an effective way to do so. Attention from others may help attract sex partners or a circle of followers. People notice the strong emotions of others for evolutionary reasons: radical action is more likely to follow an extreme emotional display than neutral communication. Radical action by others requires a quick, accurate response to protect one’s health and wealth or to take advantage of the radical actor.

A child with an injury or illness may pretend to suffer more than they actually do in order to get more care and resources from the parents, especially compared to siblings. This is similar to the begging competition among bird chicks.

Exaggerating both praise and emotional punishment motivates others to do one’s bidding. Incentives are created by the difference between the consequences of different actions, so exaggerating this difference strengthens incentives, unless others see through the pretence. Teenagers may exaggerate their outward happiness or anger at what their parents do, in order to push the parents into complying with the teenager’s wishes.

On the other hand, in a zero-sum game, providing information to the other player cannot increase one’s own payoff and usually reduces it. Emotions are information about the preferences and plans of the one who shows them. In an antagonistic situation, such as negotiations or war between competing tribes, a poker face is an information security measure.
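To make this concrete, here is a toy numeric sketch (my illustration, not from the original text) using matching pennies, the canonical zero-sum game: a player whose mixing probability leaks to the opponent can only be hurt by the leak.

```python
# Matching pennies: row wins 1 if the coins match, loses 1 otherwise.
# A toy check that leaking one's plan in a zero-sum game cannot help.

def row_payoff(p: float, q: float) -> float:
    """Row's expected payoff when row plays Heads with probability p
    and column plays Heads with probability q."""
    match = p * q + (1 - p) * (1 - q)
    return match - (1 - match)

def exploited_payoff(p: float) -> float:
    """Row's payoff if column learns p and best-responds (a pure
    strategy suffices, since the payoff is linear in q)."""
    return min(row_payoff(p, q) for q in (0.0, 1.0))

print(exploited_payoff(0.5))  # 0.0: an unpredictable 50-50 mix is safe
print(exploited_payoff(0.8))  # -0.6: a leaked, predictable plan is punished
```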

In short, creating drama is an emotional blackmail method targeting those with aligned interests. An emotionless front hides both weaknesses and strengths from those with opposed interests, so that they can neither target the weaknesses nor prepare for the specific strengths.

Whether teenagers display or hide emotion is thus informative about whether they believe the surrounding people to be friends or enemies. A testable prediction is that bullied children suppress emotion and pretend not to care about anything, even when a brain scan would show that they do care, and especially when they are primed to recall the bullies. Another testable prediction is that popular or spoiled children exaggerate their emotions, especially around familiar people and when they believe a reward or punishment is imminent.

Signalling the precision of one’s information with emphatic claims

Chats both online and in person seem to consist of confident claims which are either extreme absolute statements (“vaccines don’t work at all”, “you will never catch a cold if you take this supplement”, “artificial sweeteners cause cancer”) or professions of no knowledge (“damned if I know”, “we will never know the truth”), sometimes blaming the lack of knowledge on external forces (“of course they don’t tell us the real reason”, “the security services are keeping those studies secret, of course”, “big business is hiding the truth”). Moderate statements that something may or may not be true, especially away from the all-possibilities-equal centre, and expressions of personal uncertainty (“I have not studied this enough to form an opinion”, “I have not thought this through”) are almost absent. Outside research and official reports, I seldom encounter statements of the form “these are the arguments in this direction and those are the arguments in that direction; this direction is somewhat stronger”, or “the balance of the evidence suggests x”, or “x seems more likely than not-x”. In opinion pieces in various forms of media, the author may give arguments for both sides, but in that case concludes something like “we cannot rule out this and we cannot rule out that”, “prediction is difficult, especially now in a rapidly changing world”, “anything may happen”. The conclusion of the opinion piece does not recommend a moderate course of action supported by the balance of moderate-quality evidence.

The same person confidently claims knowledge of an extreme statement on one topic and professes certainty of no knowledge at all on another. What could be the goal of making both extreme and no-knowledge statements confidently? If the person wanted to pretend to be well-informed, then confidence helps with that, but claiming no knowledge would be counterproductive. Blaming the lack of knowledge on external forces and claiming that the truth is unknowable or will never be discovered helps excuse one’s lack of knowledge. The person can then pretend to be informed to the best extent possible (a constrained maximum of knowledge) or at least know more than others (a relative maximum).

Extreme statements suggest to an approximately Bayesian audience that the claimer has received many precise signals in the direction of the extreme statement and as a result has updated the belief far from the average prior belief in society. Confident statements also suggest many precise signals to Bayesians. The audience does not need to be Bayesian to form these interpretations – updating in some way towards the signal is sufficient, as is behaviourally believing that confidence or extremity demonstrates the quality of the claimer’s information. A precisely estimated zero, such as confidently saying that x and not-x are equally likely, also signals good information. So does being confident that the truth is unknowable.
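As a minimal numeric sketch of this Bayesian logic (my illustration; the text above does not commit to a specific model), a Beta-Bernoulli updater shows that a posterior far from the uniform prior, or a precisely estimated fifty-fifty, both require many signals:

```python
# Beta-Bernoulli updating: posterior mean and standard deviation after
# binary signals, starting from a uniform Beta(1, 1) prior.

from math import sqrt

def beta_posterior(successes: int, failures: int,
                   a: float = 1.0, b: float = 1.0) -> tuple[float, float]:
    a_post, b_post = a + successes, b + failures
    n = a_post + b_post
    mean = a_post / n
    sd = sqrt(a_post * b_post / (n ** 2 * (n + 1)))
    return mean, sd

print(beta_posterior(2, 1))    # ~(0.60, 0.20): few signals, vague belief
print(beta_posterior(60, 3))   # ~(0.94, 0.03): extreme and precise
print(beta_posterior(50, 50))  # ~(0.50, 0.05): a precisely estimated fifty-fifty
```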

Being perceived as having precise information helps influence others. If people believe that the claimer is well-informed and has interests more aligned than opposed to theirs, then it is rational to follow the claimer’s recommendation. Having influence is generally profitable. This explains the scarcity of moderate-confidence statements and of admissions of personal, as opposed to collective, uncertainty.

A question that remains is why confident moderate statements are almost absent. Why not claim with certainty that 60% of the time, the drug works and 40% of the time, it doesn’t? Or confidently state that a third of the wage gap/racial bias/country development is explained by discrimination, a third by statistical discrimination or measurement error and a third by unknown factors that need further research? Confidence should still suggest precise information no matter what the statement is about.

Of course, if fools are confident and researchers honestly state their uncertainty, then the certainty of a statement shows the foolishness of the speaker. If confidence makes the audience believe the speaker is well-informed, then either the audience is irrational in a particular way or believes that the speaker’s confidence is correlated with the precision of the information in the particular dimension being talked about. If the audience has a long history of communication with the speaker, then they may have experience that the speaker is generally truthful, acts similarly across situations and expresses the correct level of confidence on unemotional topics. The audience may fail to notice when the speaker becomes a spreader of conspiracies or becomes emotionally involved in a topic and therefore is trying to persuade, not inform. If the audience is still relatively confident in the speaker’s honesty, then the speaker sways them more by confidence and extreme positions than by admitting uncertainty or a moderate viewpoint.

The communication described above may be modelled as the claimer conveying three-dimensional information with two two-dimensional signals. One dimension of the information is the extent to which the statement is true, for example how beneficial a drug is or how harmful an additive. A second dimension is how uncertain the truth value of the statement is – whether the drug helps exactly 55% of patients, or may help anywhere between 20% and 90%, with all percentages in between equally likely. A third dimension is the minimal attainable level of uncertainty – how knowable the truth is in this question. This is related to whether some agency is actively hiding the truth or researchers have determined it and are trying to educate the population about it. The second and third dimensions are correlated: the lower the minimal attainable uncertainty, the more certain the truth value of the statement can be. It cannot be more certain than the laws of physics allow.

The two dimensions of one signal (the message of the claimer) are the extent to which the statement is true and how certain the claimer is of the truth value. Confidence emphasises that the claimer is certain about the truth value, regardless of whether this value is true or false. The claim itself is the first dimension of the signal. The reason the third dimension of the information is not part of the first signal is that the claim that the truth is unknowable is itself a second claim about the world, i.e. a second two-dimensional signal saying how much some agency is hiding or publicising the truth and how certain the speaker is of the direction and extent of the agency’s activity.

Opinion expressers in (social) media usually choose an extreme value for both dimensions of both signals. They claim some statement about the world is either the ultimate truth or completely false or unknowable and exactly in the middle, not a moderate distance to one side. In the second dimension of both signals, the opinionated people express complete certainty. If the first signal says the statement is true or false, then the second signal is not sent and is not needed, because if there is complete certainty of the truth value of the statement, then the statement must be perfectly knowable. If the first signal says the statement is fifty-fifty (the speaker does not know whether true or false), then in the second signal, the speaker claims that the truth is absolutely not knowable. This excuses the speaker’s claimed lack of knowledge as due to an objective impossibility, instead of the speaker’s limited data and understanding.
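Schematically, the two signals can be encoded as follows (my formalisation; the model above is described only in words, so the class names and numeric scales are illustrative):

```python
# A schematic encoding of the two two-dimensional signals (illustrative
# scales; the names Signal and Message are mine).

from dataclasses import dataclass
from typing import Optional

@dataclass
class Signal:
    claim: float       # what is asserted, scaled to [0, 1]
    confidence: float  # stated certainty about the claim, in [0, 1]

@dataclass
class Message:
    truth: Signal                  # signal 1: is the statement true? 0 = false, 1 = true, 0.5 = fifty-fifty
    knowability: Optional[Signal]  # signal 2: how much is the truth being hidden? sent only if needed

# The corner choices described above:
absolutist = Message(Signal(claim=1.0, confidence=1.0), None)  # "the ultimate truth"; signal 2 redundant
agnostic = Message(Signal(claim=0.5, confidence=1.0),          # "fifty-fifty, certainly"
                   Signal(claim=1.0, confidence=1.0))          # "the truth is certainly being hidden"
# Rarely observed: interior values such as Message(Signal(0.7, 0.6), None).
```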

Less inspiring people in universities than in early school

A student claimed that fewer inspiring people are found in universities than in early school. Empirical checks of this would be interesting and would need a measure of inspiringness. A theoretical explanation is a tradeoff between multiple dimensions: subject matter competence, integrity, reliability, communication skills, being inspiring, etc. The tradeoff is on both the demand and the supply side. An inspiring competent person has many career options (CEO, politician, entrepreneur) besides academia, so fewer such people end up supplying their labour to the education sector.

On the demand side, a university has to prioritise dimensions on which to rank candidates and hire, given its salary budget and capacity constraints on how many job positions it has. Weighting competence more leaves less emphasis on inspiringness. Competing universities may prioritise different dimensions (be horizontally differentiated), in which case on average each institution gets candidates who have more of its preferred dimension and less of other dimensions.

As a side note, what an organisation says its priorities are may differ from its actual priorities, which are evidenced by behaviour, e.g., who it hires. It may say it values teaching with passion, but hire based on research success instead.

A constraint is a special case of a tradeoff. Suppose that given the minimum required competence, an employer wants to hire the most inspiring person. The higher this level of competence (teaching PhD courses vs kindergarten), the fewer people satisfy the constraint. At a high enough level of the constraint, there may be insufficient candidates in the world to fill all the vacant jobs. Some employers cannot fill the position, others will have just one candidate. Maximising inspiringness over an empty set, or a set of one, is unlikely to yield very inspiring people.
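A toy simulation (my illustration with made-up, independently drawn scores) shows the mechanism: raising the competence threshold shrinks the pool over which inspiringness is maximised, so the best available candidate tends to be less inspiring.

```python
# A toy of the constraint logic (made-up numbers, independence assumed):
# a higher competence threshold leaves a smaller pool to maximise over.

import random

random.seed(0)
# Hypothetical candidates with independent competence and inspiringness.
candidates = [(random.random(), random.random()) for _ in range(1000)]

for threshold in (0.2, 0.9, 0.999):
    pool = [insp for comp, insp in candidates if comp >= threshold]
    best = max(pool) if pool else None  # maximising over a possibly empty set
    print(f"competence >= {threshold}: pool of {len(pool)}, best inspiringness {best}")
```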

It may be inherently simpler to inspire with easier material, in which case even with equally inspiring people throughout all levels of education, the later stages will seem less inspiring.

Larger leaps through theory may be required as a subject gets more advanced, leaving less scope for inspiring anecdotes and real-life examples. The ivory tower is often accused of being out of touch with common experience. Departing from everyday life is partly inevitable when developing any specialised skill; otherwise the skill would be an everyday one, not a specialised one.

If inspiring people requires manipulating them, and more educated individuals resist manipulation better, then inspiring people gets more difficult with each level of education. Each stage of study selects on average the more intelligent graduates of the previous stage, so if smarter people are harder to manipulate, then those with higher levels of education are harder to inspire. On the other hand, if academics are naive and out of touch with the ways of the world, then they may be easier to manipulate and inspire than schoolchildren.

People accumulate interests and responsibilities in the first half of their lives. The more hobbies and duties they have, the less scope there is for adopting a goal proposed by some charismatic person, i.e. getting inspired by them. Later in life, many goals may have been achieved and people may have settled down to a comfortable existence. They are then less inclined to believe that they need to follow the course that an inspiring person claims is the way to reach their goals.

Preventing cheating is hopeless in online learning

Technology makes cheating easy even in in-person exams with invigilators next to the test-taker. For example, in-ear wireless headphones not visible externally can play a loop recording of the most important concepts of the tested material. A development of this idea is to use a hidden camera in the test-taker’s glasses or pen to send the exam contents to a helper who looks up the answers and transmits the spoken solutions via the headphones. Without a helper, sophisticated programming is needed: the image of the exam from the hidden camera is sent to a text-recognition (OCR) program, which pipes it to a web search or an online solver such as Wolfram Alpha, then uses a text-to-speech program to speak the results into the headphones.

A small screen on the inside of the glasses would be visible to a nearby invigilator, so it is a risky way to transmit solutions. A small projector in the glasses could in theory display a cheat sheet right into the eye. The reflection from the eye would be small and difficult to detect even when looking into the eyes of the test-taker, which are mostly pointed down at the exam.

If the testing is remote, then the test-taker could manipulate the cameras through which the invigilators watch, so that images of cheat sheets are replaced with the background and the sound of helpers saying answers is removed. The sound is easy to remove with a microphone near the mouth of the helper, whose input is subtracted from the input of the computer webcam. A more sophisticated option is an array of microphones feeding sound into small speakers near the web camera’s microphone, which can subtract a particular voice from the stereo input. The technology is the same as in noise-cancelling headphones.

Replacing parts of images is doable even if the camera and its software are provided by the examiners and completely non-manipulable. The invigilators’ camera can be pointed at a screen which displays an already-edited video of the test-taker. The editing is fast enough to make it nearly real-time. The idea of the edited video is the same as in old crime movies where a photo of an empty room is stuck in front of a stationary security camera. Then the guard sees the empty room on the monitor no matter what actually goes on in the room.

There is probably a way to make part of the scene invisible to a camera even with 19th-century technology, namely the Pepper’s Ghost illusion with a two-way mirror. The edges of the mirror would have to be hidden somehow.

All public statues should be removed

There is no benefit to spending taxpayer money on creating or sustaining personality cults. The same goes for all public art – the current (local) government should not decide on which people to popularise. No significant market failure exists in physical art objects. The government thus does not need to intervene in the market for statues (copying digital art is another matter). Private individuals can put almost any statues and art on their own property as part of free speech.

The materials of which the statues are made could be used for something beneficial instead, like public housing for the poorest members of society. Clearly the government’s goal in erecting statues is to provide a circus for the public in order to get re-elected, not to benefit society.

If the influential people whom the statues depict were asked whether the person or the idea matters more, my guess is that almost all would emphasise the idea. Most would ask for the resources to be spent on more reasonable things than statues of themselves.

If the goal of a statue is to signal the importance of the ideas of the person depicted, then there are more efficient ways to do this signalling, for example a scholarship, a charity or a public library in the person’s name.

Bar-coding videos to prevent faking

To prevent clips from being cut out of a video or inserted, add a non-repeating sequence of bar codes onto either the whole frame or the main object of the video, such as a person talking. The bar code can use subtle “watermark” shading that does not interfere with viewing – it only needs to be readable by computer. The sequence of bar codes can be recreated at a later time if the algorithm is known, so if a clip is cut out of the video or added, then the embedded sequence no longer matches the recreated one. Altering the video frame by frame also changes the bar code, although a forger can bypass this security feature by reading the original bar code, removing it before retouching and adding it back afterwards. Still, these extra steps make faking the video somewhat more difficult. The main security feature is that the length of the video cannot be changed without altering the sequence of bar codes, which is easily detected.

The maker of the video may generate the bar codes cryptographically using a private key. This enables confirming the source of the video, for example in a copyright dispute.
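A minimal sketch of such a scheme (my implementation of the idea; no specific algorithm is given above) derives the non-repeating codes from the maker’s private key with HMAC, so cutting or inserting clips breaks the expected sequence and the key confirms the source:

```python
# A sketch of key-derived bar codes (function names are hypothetical).

import hashlib
import hmac

def frame_code(key: bytes, video_id: str, frame_index: int) -> bytes:
    """Non-repeating code to embed as a subtle watermark in one frame."""
    msg = f"{video_id}:{frame_index}".encode()
    return hmac.new(key, msg, hashlib.sha256).digest()[:8]

def verify(key: bytes, video_id: str, extracted: list[bytes]) -> bool:
    """True if the codes read back from the frames form the expected
    unbroken sequence; cutting or inserting clips breaks the match."""
    return all(code == frame_code(key, video_id, i)
               for i, code in enumerate(extracted))

key = b"makers-private-key"
codes = [frame_code(key, "talk-2020", i) for i in range(100)]
assert verify(key, "talk-2020", codes)                        # intact video passes
assert not verify(key, "talk-2020", codes[:50] + codes[60:])  # cut clip detected
```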

Probably the idea of bar-coding videos has already been implemented, because watermarks and time stamps on photos have long been used. The main novelty relative to treating each frame as a photo is to link the bar codes to each other over time.

The smartest professors need not admit the smartest students

The smartest professors are likely the best at targeting admission offers to students who are the most useful for them. Other things equal, the intelligence of a student is beneficial, but there may be tradeoffs. The overall usefulness may be maximised by prioritising obedience (manipulability) over intelligence or hard work. It is an empirical question what the real admissions criteria are. Data on pre-admissions personality test results (which the admissions committee may or may not have) would allow measuring whether the admission probability increases in obedience. Measuring such effects for non-top universities is complicated by the strategic incentive to admit students who are reasonably likely to accept, i.e. unlikely to get a much better offer elsewhere. So the middle- and bottom-ranked universities might not offer a place to the highest-scoring students for reasons independent of the obedience-intelligence tradeoff.

Similarly, a firm does not necessarily hire the brightest and individually most productive workers, but rather those whom the firm expects to contribute the most to its bottom line. Working well with colleagues and following orders and procedures may in some cases be the most important characteristics. A genius who is a maverick may disrupt other workers in the organisation too much, reducing overall productivity.

Directing help-seekers to resources is playing hot potato

In several mental health first aid guidelines, one of the steps is to direct the help-seeker to resources (suggest asking friends, family, professionals for help, reading materials on how to cope with the mental condition). This can provide an excuse to play hot potato: send the help-seeker to someone else instead of providing help. For example, the therapist or counsellor suggests seeing a doctor and obtaining a prescription, and the doctor recommends meeting a therapist instead.

The hot potato game is neither limited to sufferers of mental health issues, nor to doctors and counsellors. It is very common in universities: many people “raise awareness”, “coordinate” the work of others or “mentor” them, “manage change”, “are on the team or committee”, “create an action plan” (or strategy, policy or procedure), “start a conversation” about an issue or “call attention” to it, instead of actually doing useful work. One example is extolling the virtues of recycling, as opposed to physically moving recyclable items from the garbage bin to the recycling bin, and non-recyclable waste in the other direction. Another is calling attention to mental health, instead of volunteering to visit the mentally ill at home and help them with tasks. A third is talking about supporting and mentoring early-career academics, as opposed to donating part of one’s salary to create a new postdoc position, thereby putting one’s money where one’s mouth is.

All the seeming-work activities mentioned above allow avoiding actual work and padding one’s CV. Claiming to manage and coordinate other people additionally helps with empire-building – hiring more subordinates to whom one’s own work can be outsourced.

To motivate people to do useful work, as opposed to coordinating or managing, the desirable outcomes of the work should be clearly defined, measured, and incentivised. Mere discussions, committee meetings and action plans should attract no rewards, rather the reverse, because they waste other people’s time. More generally, using more inputs for the same output should be penalised; for example, for academics, receiving more grant money should count negatively towards promotion, given the same patent and publication output.

One way to measure the usefulness of someone’s activity is to use the revealed preference of colleagues (https://sanderheinsalu.com/ajaveeb/?p=1093). Some management and coordination is beneficial, but universities tend to overdo it, so it has negative value added.

Overbidding incentives in crowdfunding

Crowdfunding campaigns on Funderbeam and other platforms fix a price for the shares or loan notes and invite investors to submit the quantity they want to buy. If demand exceeds supply, then the financial instruments are rationed pro rata, or investors requesting quantities below a threshold get what they asked for and the others receive the threshold amount plus a pro rata share of the quantity remaining after the threshold amounts are allocated. Rationing creates an incentive to oversubscribe: an investor who wants n shares and expects to be rationed to a fraction x of her demanded quantity will rationally put in an order for n/x > n shares to counteract the rationing. For a mechanism not to invite such manipulation, the amount allocated to a given bidder in the event of oversubscription should not depend on that bidder’s bid quantity. For example, everyone gets the minimum of their demanded amount and a threshold quantity, where the threshold is determined so as to equate demand and supply. If there are s shares and all m investors demand more than s/m, then each gets s/m.

If some investors demand less than s/m, then the allocation process is recursive, as follows. The i_1 investors who asked for less than s/m each get what they requested. Their total t_1 is subtracted from s to get s_1, and the number of remaining investors is reduced to m_1 = m - i_1. Then the i_2 investors asking for less than s_1/m_1 get what they demanded (t_2 in total), and the new remaining amount s_2 = s_1 - t_2 and number of investors m_2 = m_1 - i_2 are determined. Repeat until no remaining investor asks for less than s_j/m_j. Divide the remaining amount equally between the remaining investors.
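In code, the described procedure is a “water-filling” allocation (a sketch; the variable names are mine): small demands are filled exactly and the remainder is split equally, so a bidder’s allocation under oversubscription does not grow with the bid.

```python
# Water-filling allocation (a sketch): each investor gets min(demand, cap),
# with the cap implicitly chosen so that the whole supply is allocated.

def allocate(supply: float, demands: list[float]) -> list[float]:
    remaining, active = supply, set(range(len(demands)))
    alloc = [0.0] * len(demands)
    while active:
        share = remaining / len(active)
        small = {i for i in active if demands[i] <= share}
        if not small:           # everyone left wants more than the equal share:
            for i in active:    # split the remainder equally and stop
                alloc[i] = share
            break
        for i in small:         # fill small demands exactly, then recurse
            alloc[i] = demands[i]
            remaining -= demands[i]
        active -= small
    return alloc

# Bidding 400 instead of 200 would not raise the last investor's allocation.
print(allocate(100, [10, 20, 100, 200]))  # -> [10, 20, 35.0, 35.0]
```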

An alternative is to let the market work by allowing the price to adjust, instead of fixing it in advance. Everyone should then submit demand curves: for each price, how many shares are they willing to buy. This may be too complicated for the unsophisticated crowdfunding investors.
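For comparison, a sketch of the price-adjustment alternative (my illustration with hypothetical demand schedules): the clearing price is the lowest price at which total demand no longer exceeds supply.

```python
# Uniform-price clearing from submitted demand schedules (a sketch;
# the schedules below are hypothetical).

from typing import Optional

def clearing_price(supply: int, schedules: list[dict[float, int]],
                   prices: list[float]) -> Optional[float]:
    """Lowest price at which total demanded quantity fits the supply.
    schedules: for each investor, a map from price to quantity demanded."""
    for p in sorted(prices):
        total = sum(sched.get(p, 0) for sched in schedules)
        if total <= supply:
            return p
    return None  # oversubscribed even at the highest price

prices = [1.0, 1.5, 2.0]
schedules = [{1.0: 100, 1.5: 60, 2.0: 30},
             {1.0: 80, 1.5: 40, 2.0: 20}]
print(clearing_price(100, schedules, prices))  # -> 1.5
```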

However, complexity is probably not the main reason for the inefficient allocation mechanism that invites overbidding. The crowdfunding platform wants to appear popular among investors to attract companies to raise funds on it, so wants to increase the number of oversubscribed campaigns. Rationing is a way to achieve such manipulation if the fundraisers ignore the investors’ incentives to overbid and do not compare the platform to competing ones with similar allocation mechanisms. If fundraisers are irrational in this way, then they do not choose competing platforms without overbidding incentives, because funding campaigns there seem to attract less investor interest. Competing platforms with more efficient allocation mechanisms then go out of business, which eliminates comparison possibilities.

Keeping an open mind and intellectual honesty

“Keep an open mind” is often used as an argument against science, or to justify ignoring evidence more broadly. Let’s distinguish two cases of keeping an open mind: before versus after the evidence comes in. It is good to keep an open mind before data is obtained – no hypothesis is ruled out. Strictly speaking, all possibilities retain positive probability no matter how great the amount and quality of the information, so one should not dogmatically rule out anything even given the best evidence. However, for practical purposes a small enough probability is the same as zero. Decisions have to be made constantly (choosing not to decide is also a decision), so after enough scientific information is available, it is optimal to make up one’s mind instead of keeping it open.
Intellectually honest people who want to keep an open mind after obtaining evidence would commit to it from the start: publicly say that no matter what the data shows in the future, they will ignore it and keep an open mind. Similarly, the intellectually honest who plan to make up their mind would also commit, in this case to a policy along the lines of “if the evidence says A, then do this, but if the evidence says B, then that”. The latter policy resembles (parts of) the scientific method.
The anti-science, or just intellectually dishonest, way of “keeping an open mind” is to do so if and only if the evidence disagrees with one’s prior views. In other words, favourable data is accepted, but unfavourable data ignored, with the ignoring justified by the open-mind excuse. In debates, the side that runs out of arguments and is about to lose is usually the one that recommends an open mind, and only at that late stage of the debate, conditional on its own weak position. Similarly, “agreeing to disagree” is mostly recommended, intellectually dishonestly, by the losing side of an argument, in an attempt to leave the outcome uncertain. This is an almost logically contradictory use of “agreeing to disagree”, because it is mathematically proven (Aumann’s agreement theorem) that rational agents who share a common prior cannot agree to disagree – if their posterior beliefs are common knowledge, then these beliefs must coincide.