Tag Archives: research

Why research with more authors gets cited more

Empirically, articles with more authors are cited more, according to Wuchty et al. (2007). The reasons may be good or bad. A good reason is that coauthored papers may have higher quality, e.g. due to division of labour increasing the efficiency of knowledge production. I propose the following bad reasons, independent of potential quality differences between coauthored and solo articles. Suppose that researchers cite the works of their friends more frequently than warranted. A given scientist is more likely to have a friend among the authors of an article with a greater number of collaborators, which increases its probability of getting a “friendly citation”.
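The friend effect can be quantified with a toy model (my own illustration, not from Wuchty et al.): if each author of a paper is a friend of a given reader independently with probability p, the chance that an n-author paper contains at least one friend is 1 - (1 - p)^n, which is increasing in n.

```python
# Toy model: probability that a paper with n authors includes at
# least one friend of a given reader, when each author is a friend
# independently with probability p. (Illustrative assumption only.)

def friend_probability(n_authors: int, p_friend: float) -> float:
    """P(at least one friend among n authors) = 1 - (1 - p)^n."""
    return 1 - (1 - p_friend) ** n_authors

# The probability rises with the number of authors:
for n in (1, 3, 5, 10):
    print(n, round(friend_probability(n, 0.05), 3))
```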

Another reason is defensive citing, i.e. including relatively unrelated papers in the reference list before submitting to a journal, in case the referees happen to be the authors of those works. The reason for adding these unnecessary citations is the belief, warranted or not, that a referee is more likely to recommend acceptance of a paper if it cites the referee’s publications. The probability that the set of referees overlaps with the set of authors of a given prior work increases in the number of authors of that work. Thus defensive citing is more effective when targeted to collaborative instead of solo papers.

The referees may also directly ask the author to cite certain papers in the revision (I have had this experience). If the referees are more likely to request citations to their own or their coauthors’ work, then articles with more authors are again referenced more.

Valderas et al. (2007) offer some additional explanations. One is measurement error. Suppose that letters to the editor, annual reports of a learned society, its presidential inaugural addresses, and other non-research in scientific journals are counted as publications. These have both fewer authors and fewer citations than regular research articles, which creates a positive correlation between the popularity of a piece of writing and its number of authors.

If self-citations are not excluded and researchers cite their own work more frequently than that of others, then papers with more authors get cited more.

Articles with more collaborators are presented more frequently, thus their existence is more widely known. Awareness of a work is a prerequisite of citing it, so the wider circulation of multi-author publications gives them a greater likelihood of being referenced, independent of quality.

Easier combining of entertainment and work may explain increased income inequality

Many low-skill jobs (guard, driver, janitor, manual labourer) permit on-the-job consumption of forms of entertainment (listening to music or news, phoning friends) that became much cheaper and more available with the introduction of new electronic devices (first small radios, then TVs, then cellphones, smartphones). Such entertainment does not reduce productivity at the abovementioned jobs much, which is why it is allowed. On the other hand, many high-skill jobs (planning, communicating, performing surgery) are difficult to combine with any entertainment, because the distraction would decrease productivity significantly. The utility of low-skill work thus increased relatively more than that of skilled jobs when electronics spread and cheapened. The higher utility made low-skill jobs relatively more attractive, so the supply of labour at these increased relatively more. This supply rise reduced the pay relative to high-skill jobs, which increased income inequality. Another way to describe this mechanism is that as the disutility of low-skill jobs fell, so did the real wage required to compensate people for this disutility.

An empirically testable implication of this theory is that jobs of any skill level that do not allow on-the-job entertainment should have seen salaries increase more than comparable jobs which can be combined with listening to music or with personal phone calls. For example, a janitor cleaning an empty building can make personal calls, but a cleaner of a mall (or other public venue) during business hours may be more restricted. Both can listen to music on their headphones, so the salaries should not have diverged when small cassette players went mainstream, but should have diverged when cellphones with headsets became cheap. Similarly, a trucker or nightwatchman has more entertainment options than a taxi driver or mall security guard, because the latter do not want to annoy customers with personal calls or loud music. A call centre operator is more restricted from audiovisual entertainment than a receptionist.

According to the above theory, the introduction of radios and cellphones should have increased the wage inequality between areas with good and bad reception, for example between remote rural and urban regions, or between underground and aboveground mining. On the other hand, the introduction of recorded music should not have increased these inequalities as much, because the availability of records is more similar across regions than radio or phone coverage.

Heating my apartment with a gas stove

There is no built-in heating system in my Australian-standard un-insulated apartment, and the plug-in electric radiators do not have enough power to raise the temperature by a degree. In the past two winters, I used the gas stove as a heater. It is generally unwise to heat an enclosed space without purpose-built ventilation (such as a chimney) by burning something, because of the risk of CO poisoning. Even before CO becomes a problem, suffocation may occur because the CO2 concentration rises and oxygen concentration falls. Therefore, before deciding to heat with a gas stove, I looked up the research, made thorough calculations and checked them several times. I also bought a CO detector, tested it and placed it next to the gas stove. The ceiling has a smoke alarm permanently attached, but this only detects soot in the air, not gases like CO.
For the calculations, I looked up how much heat is produced by burning a cubic metre or kilogram of CH4 (natural gas), how much the temperature of the air in the apartment should rise as a result, how much CO2 the burning produces, and what the safe limits of long-term CO2 exposure are.
The energy content of CH4 is 37.2 MJ/m3, or 50-55.5 MJ/kg. A pilot light of a water heater is estimated to produce 5.3 kWh/day ≈ 19 MJ/day of heat, and a gas stove’s biggest burner turned fully on is estimated to produce 5-15 MJ/h, depending on the stove and the data source.
The chemical reaction of burning natural gas when oxygen is not a limiting factor is CH4 + 2*O2 = CO2 + 2*H2O. The molar masses of these gases are CH4 = 16 g/mol, O2 = 32 g/mol, CO2 = 44 g/mol, H2O = 18 g/mol, and air averages 29 g/mol. One stove burner on full for 1 hour uses about 0.182 kg = 0.255 m3 of CH4 and 4*0.182 = 0.728 kg of O2 (the mass ratio of O2 to CH4 is 2*32/16 = 4), which depletes about 3.2 kg = 2.6 m3 of air. The burning produces 2.75*0.182 = 0.5 kg = 0.27 m3 of CO2. The CO2 is denser than air, which is why it may remain in the apartment and displace air when the cracks around the windows are relatively high up. On the other hand, the CO2 also mixes with the air, so may escape at the same rate. Alternatively, the CO2 is hot, so may rise and escape faster than air. For safety, I want a conservative estimate, so I assume that all the CO2 remains in the apartment.
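These per-hour figures can be checked with a short script; the 10 MJ/h burner output and 55.5 MJ/kg energy density are assumptions within the ranges quoted above.

```python
# Stoichiometry of CH4 + 2 O2 -> CO2 + 2 H2O for one burner-hour.
# Assumed inputs: 10 MJ/h burner output, 55.5 MJ/kg energy density.

M_CH4, M_O2, M_CO2 = 16.0, 32.0, 44.0     # g/mol
O2_MASS_FRACTION_OF_AIR = 0.23            # approximate, by mass
AIR_DENSITY = 1.2                         # kg/m^3 at room temperature
CO2_DENSITY = 1.84                        # kg/m^3 at room temperature

ch4_kg = 10.0 / 55.5                      # ~0.18 kg of CH4 per hour
kmol_ch4 = ch4_kg / M_CH4                 # kmol, since mass is in kg
o2_kg = 2 * kmol_ch4 * M_O2               # 2 mol O2 per mol CH4
air_kg = o2_kg / O2_MASS_FRACTION_OF_AIR  # air depleted
co2_kg = kmol_ch4 * M_CO2                 # 1 mol CO2 per mol CH4

print(f"CH4 burned: {ch4_kg:.3f} kg")
print(f"O2 used:    {o2_kg:.3f} kg")
print(f"Air used:   {air_kg:.2f} kg = {air_kg / AIR_DENSITY:.2f} m^3")
print(f"CO2 made:   {co2_kg:.3f} kg = {co2_kg / CO2_DENSITY:.2f} m^3")
```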
The volume of the apartment is 6 x 5 x 2.5 m = 75 m^3. The density of air at room temperature is 1.2 kg/m^3, thus the mass of air in the apartment is 90 kg. The specific heat of air is 1.005 kJ/(kg*K) at 20C. The walls and ceiling leak heat, thus more energy is actually needed to warm the apartment by a given amount than a calculation using only the air shows. It takes 900 kJ of heat to raise the temperature of the air alone, not the walls, by 10C (from 12C to 22C). This requires 9/555 kg = 9/(16*555) kmol of CH4 at the estimated energy density of 55500 kJ/kg. Burning that CH4 also takes 9/(8*555) kmol of O2 and produces 9/(16*555) kmol = 9*11/(4*555) ≈ 9/200 kg of CO2.
The normal concentration of CO2 in outside air is 350-450 ppm. Estimate the baseline concentration in inside air to be 1/2000, i.e. 500 ppm, because of breathing and poor ventilation. The heating adds another 1/2000 by mass (9/200 kg of CO2 into 90 kg of air), so the CO2 concentration reaches about 1/1000 = 1000 ppm. This is well below the usual legal limit for long-term occupational exposure (5000 ppm in many jurisdictions).
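A short script ties the heating and concentration numbers together, using the figures above.

```python
# Heat needed to warm the apartment air by 10 C, and the CO2 added
# (as a mass fraction), using the figures from the text.

AIR_MASS = 75 * 1.2            # kg: 75 m^3 of air at 1.2 kg/m^3
CP_AIR = 1.005                 # kJ/(kg*K), specific heat of air at 20 C
CH4_ENERGY = 55_500            # kJ/kg, energy density of natural gas
CO2_PER_CH4 = 44 / 16          # = 2.75, mass ratio from the reaction

heat_kj = AIR_MASS * CP_AIR * 10          # ~900 kJ for a 10 C rise
ch4_kg = heat_kj / CH4_ENERGY             # ~9/555 kg of CH4
co2_kg = ch4_kg * CO2_PER_CH4             # ~9/200 kg of CO2
co2_ppm_added = co2_kg / AIR_MASS * 1e6   # mass fraction in ppm

print(f"Heat: {heat_kj:.0f} kJ, CH4: {ch4_kg:.4f} kg, "
      f"CO2: {co2_kg:.4f} kg = {co2_ppm_added:.0f} ppm added")
```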
CO is produced in low-oxygen burning. As long as the CO2 concentration in the air is low and the oxygen concentration high, the risk of CO poisoning is small.
For the actual heating, I first tested running the smallest burner all day while I was at home, paying attention to whether I felt sleepy and whether the air in the apartment smelled more stale than outside or in the corridor. There seemed to be no problems. For nighttime heating, I started with the smallest burner at its lowest setting, similarly checking whether the air in the morning smelled staler than usual and whether I felt any different. Because there were no problems, I gradually increased the heating from week to week. The maximum I reached was the largest burner at less than half power plus one or two smaller burners on full. Together, these produced much less heat than the largest burner on full, as could easily be checked by feel when standing next to the stove. At night, the stove prevented the temperature in the apartment from dropping by the usual 2C, but did not raise it. The CO2 produced was probably far less than the bound calculated above from an assumed 10C temperature increase. Empirically, I’m still alive after two winters of letting the gas stove run overnight.

“What if” is a manipulative question

“What if this bad event happens?” is a question used as a high-pressure sales tactic (for insurance, maintenance, upgrades and various protective measures). People suffering from anxiety or depression also tend to ask that question, which is called catastrophising. The question generates vague fears and is usually unhelpful for finding reasonable preventive or corrective measures for the bad event. Fearful people tend to jump on anything that looks like it might be a prevention or cure, which sometimes makes the problem worse (e.g. quack remedies for imagined rare disease worsen health).
A more useful question is: “What is the probability of this bad event happening?” This question directs attention to statistics and research about the event. Often, the fear-generating event is so unlikely that it is not worth worrying about. Even if it has significant probability, checking the research on it is more likely to lead to solutions than vague rumination along the lines of “what if.” Even if there are no solutions, statistics on the bad event often suggest circumstances that make it more likely, thus information on which situations or risk factors to avoid.
These points have been made before, as exemplified by the aphorisms “Prepare for what is likely and you are likely to be prepared” and “Safety is an expensive illusion.”

App to measure road quality

The accelerometers in phones can detect vibrations, such as when the car that the phone is in drives through a pothole. The GPS in the phone can detect the location and speed of the car. An app that connects the jolt, location and speed (and detects whether the phone is in a moving car based on its past speed and location) can automatically measure the quality of the road. The resulting data can be automatically uploaded to a database to create an almost real-time map of road quality. The same detection and reporting would work for bike paths.
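A minimal sketch of the detection logic might look as follows; all names, units and thresholds here are hypothetical, and a real app would read the platform’s accelerometer and location APIs.

```python
# Minimal sketch of pothole detection: flag samples where a large
# vertical jolt coincides with driving speed. Thresholds are made up.

from dataclasses import dataclass

@dataclass
class Sample:
    vertical_accel: float  # m/s^2, deviation from gravity
    speed: float           # m/s, from GPS
    lat: float
    lon: float

JOLT_THRESHOLD = 4.0   # m/s^2: jolt size that counts as a pothole
MIN_SPEED = 3.0        # m/s: below this, assume the phone is not in a car

def detect_potholes(samples):
    """Return (lat, lon) of samples whose jolt exceeds the threshold
    while the phone is moving at driving speed."""
    return [(s.lat, s.lon) for s in samples
            if s.speed >= MIN_SPEED and abs(s.vertical_accel) >= JOLT_THRESHOLD]

readings = [
    Sample(0.5, 15.0, -27.47, 153.02),   # smooth road
    Sample(6.2, 14.0, -27.48, 153.03),   # jolt while driving: a pothole
    Sample(7.0, 0.0, -27.48, 153.03),    # jolt while stationary: ignored
]
print(detect_potholes(readings))
```

The speed filter is what distinguishes a pothole from, say, the phone being dropped; the detected coordinates would then be uploaded to the shared database.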
Perhaps such an app has already been created, but if not, then it would complement map software nicely. Drivers and cyclists are interested in the quality of the roads as well as the route, time and distance of getting to the destination. Map software already provides congestion data and takes traffic density into account when predicting arrival time at a destination. Road quality data would help drivers select routes to minimise damage to vehicles (and the resulting maintenance cost) and to sensitive cargo. This would be useful to trucking and delivery companies, and ambulances.
A less direct use of data on road quality collected by the app is in evaluating the level of local public services provided (one aspect of the quality of local government). Municipalities with the same climate, soil and traffic density with worse roads are probably less well run. For developing countries where data on governance quality and spending is difficult to get, road quality may be a useful proxy. The public services are correlated with the wealth of a region, so road quality is also a proxy for poverty.

Teaching and research and division of labour

Universities usually prefer that the same person both teaches and does research. There are some purely teaching or purely research-focussed positions, but these are a minority. Both teaching and research achievements (and service as well) are required for tenure. This runs counter to Adam Smith’s argument that division of labour raises overall productivity. One possible cause is an economy of scope (synergy), meaning that teaching helps with research, or research helps to teach. In my experience, there is no such synergy, except maybe in high-level doctoral courses that focus exclusively on recent research. Revising old and basic knowledge by teaching it does not help generate novel insights about recent discoveries. Complex research does not help explain introductory ideas simply and clearly to beginners.

Another explanation is that universities try to hide their cross-subsidy going from teaching to research. The government gives money mainly for teaching, and if teachers and researchers were different people, then it would be easy for the government to check how much money was spent on each group. If, however, the same person is engaged in both activities, then the university can claim that most of the person’s time is spent teaching, or that their research is really designed to improve their teaching. In reality, teaching may be a minor side job and most of the salary may be paid for the research. This is suggested by the weight of research in hiring and tenuring.

The income of universities mostly comes from teaching, so they try to reduce competition from non-university teachers and institutions. One way is to differentiate their product by claiming that university teaching is special, complicated and research-based, so must be done by PhD holders or at least graduate students. Then schoolteachers for example would be excluded from providing this service. Actually the material up to and including first year doctorate courses is textbook-based and thus cannot consist of very recent discoveries. With the help of a textbook, many people could teach it – research is not required, only knowing the material thoroughly. For example, an undergraduate with good teaching skills who was top of the class in a course could teach that course next semester. Teaching skill is not highly correlated with research skill. The advantage someone who recently learned the material has in teaching it is that they remember which parts were difficult. A person who has known something for a long time probably does not recall how they would have preferred it taught when they first learned it.

Researchers forget the basics over time, because they rarely use these – there are more advanced methods. The foundations are learned to facilitate later learning of intermediate knowledge, which in turn helps with more complicated things and so on up to research level. Similarly in sports, musical performance, sewing, the initial exercises for learners can be quite different from the activity that is the end goal. A sports coach is rarely an Olympic athlete at the same time, so why should a teacher be a researcher simultaneously?

Research articles may have negative value

Falsified, plagiarized or plain junk research is not considered here. The effort of the author and the cost to the funders are considered sunk and similarly ignored.

After a research article is published, it may still have negative value for humanity. How is this possible if the cost of creating it is considered zero and the results are not junk? Doesn’t every discovery, however small, contribute a little to the corpus of knowledge? It does, but the cost it imposes on other researchers may outweigh this. Every publication increases the number of articles that researchers of related topics have to look at, however briefly, to write their literature review and check that their idea is not already taken. It may take a few seconds to read the title and decide that the article is irrelevant to one’s work, but this small cost is paid by many. If the publication makes a small enough contribution to knowledge, then the total cost to other academics outweighs the benefit from this contribution. The researchers whose time the article wasted could have done something useful with that time.
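A back-of-the-envelope version of the argument, with purely illustrative numbers:

```python
# Rough cost of a marginal publication to the rest of the field.
# All numbers are illustrative assumptions, not estimates from the text.

SECONDS_TO_SKIM_TITLE = 5
RESEARCHERS_IN_FIELD = 20_000      # people who must at least glance at it
HOURLY_VALUE_OF_TIME = 50.0        # dollars, assumed

cost = (SECONDS_TO_SKIM_TITLE * RESEARCHERS_IN_FIELD / 3600
        * HOURLY_VALUE_OF_TIME)
print(f"Total skimming cost: ${cost:.0f}")
# A paper whose contribution is worth less than this total has
# negative net value, even with creation costs treated as sunk.
```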

Publication delay provides incentives

From submitting a paper to a journal until getting the first referee reports takes about six months in economics. It is very rare to get accepted on the first try. Most papers are rejected, and an immediate acceptance implies having submitted to too weak a journal. Waiting for the referee reports on the revision and second revision takes another six plus a few months. This seems unnecessary (reading a paper does not take six months) and inefficient (creates delay in disseminating research results), but is used for incentives.
Delay discourages frivolous submissions. It forces authors to evaluate their own work with some honesty. If the referee reports were immediate, then everyone would start at the top journal and work their way down through every venue of publication until getting accepted. This would create a large refereeing and editing burden. Delay is a cost for the authors, because simultaneous submissions to multiple journals are not allowed. Trying for high-ranking journals is a risk, because the author may not have anything to show at the next evaluation. This reduces submissions to top journals. It may be optimal to start at the middle of the ranking where the chances of acceptance are higher.
A similar incentive to submit to the correct quality level of journal can be created by imposing a submission fee, forbidding further submissions for a period of time if rejected or requiring the author to write referee reports on others’ papers. A submission fee should be distinguished from publication fees, which are used at fake journals. The submission fee is paid no matter whether the paper is accepted, therefore does not create the incentive for the journal to lower its standards and publish more papers.
The submission fee would impose different costs on authors in different financial circumstances. Some have research funds to pay the fee, some do not. Similarly, delay has a larger effect on people whose evaluation is coming sooner. Being banned from a journal for a given amount of time after a rejection is worse for a researcher working in a single field. Interdisciplinary folk have a wider variety of venues to publish in. Writing referee reports as a price of having one’s work evaluated may lead to sloppy reviewing. Any mechanism to induce self-selection has a cost. Yet self-selection is needed.

Insurance in research

Most developed countries have programs to support research and encourage students to choose careers in it. This suggests scientists have a positive externality on the rest of their country that is not fully internalized in their income. Why not support research by paying the researchers its value, assuming the value can be measured? This would internalize the externality, leading to efficient effort provision.
A potential answer is different risk aversion of the organization supporting science and the scientists. If the institution is involved with many different projects, it is diversified and likely to be less risk averse than a researcher who only has a few projects. The arrangement optimal for both sides is then for the institution to offer insurance (at a cost). The researchers get paid a lower expected amount than the value of their work, but with a lower variance. Instead of the scientists taking loans to finance their work, becoming rich if the project succeeds and bankrupt if it fails, they avoid loans and get a fairly constant salary.
There is a tradeoff between incentives and insurance. If the salary does not depend on success, there is no incentive for effort, but perfect insurance. Having researchers take loans and get the full value of their work provides no insurance, but strong motivation. The compromise is that promotion and pay depend somewhat on research success, but not too much.
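The compromise can be written as a linear pay contract w = base + share * value, where value is the random worth of the research: share = 0 gives full insurance, share = 1 full incentives. A small numeric illustration (the distribution and contract numbers are made up):

```python
# Insurance-incentive tradeoff as a linear pay contract
# w = base + share * value. Illustrative numbers only.

project_values = [0.0, 50.0, 200.0]   # possible values of the research
probabilities = [0.5, 0.3, 0.2]       # assumed distribution

def wage_stats(base, share):
    """Mean and variance of the wage under the contract (base, share)."""
    wages = [base + share * v for v in project_values]
    mean = sum(p * w for p, w in zip(probabilities, wages))
    var = sum(p * (w - mean) ** 2 for p, w in zip(probabilities, wages))
    return mean, var

full_insurance = wage_stats(base=40, share=0.0)   # constant pay, no risk
compromise = wage_stats(base=20, share=0.5)       # pay tracks success partly
full_exposure = wage_stats(base=0, share=1.0)     # researcher bears all risk

print(full_insurance, compromise, full_exposure)
```

As share rises, both the expected wage and its variance rise: the researcher trades insurance for incentives.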

Smarter people have a comparative advantage in theory

Theory research requires figuring out the result and how to prove it, and then writing these down. Empirical research requires the same, plus running the experiment or analyzing the data in order to prove the result. This requires relatively more time and less insight. If the production function of empirics requires in its input mix more time per unit of insight than the production function of theory, then smarter people have a comparative advantage in theory. They are endowed with more insight, but everyone has the same amount of time.
The amounts of theory and empirical research produced per unit of insight need not be related in any way for the above comparative advantage result.
Based on comparative advantage, I should switch to empirical research 🙂
Some empirical research methods are quite simple, but modern theory requires complicated math. Due to this, empirical research requires more time per unit of methods knowledge in its input mix. People with a stronger methodological background (better education) thus have a comparative advantage in theory. This suggests graduates of universities with more (advanced) coursework should be more likely to do theory.
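The comparative-advantage argument can be illustrated numerically; the “recipes” and endowments below are made-up numbers chosen so that theory is insight-heavy and empirics time-heavy, as the text assumes.

```python
# Numeric illustration of comparative advantage in theory.
# Assumed Leontief "recipes": a theory paper uses more insight per
# unit of time than an empirical paper. Everyone has the same time
# budget; the smarter person has more insight. Numbers are made up.

THEORY = {"insight": 4, "time": 1}     # insight-heavy
EMPIRICS = {"insight": 1, "time": 3}   # time-heavy

def max_output(endowment, recipe):
    """Papers producible when fully specialised (Leontief technology)."""
    return min(endowment["insight"] / recipe["insight"],
               endowment["time"] / recipe["time"])

smart = {"insight": 40, "time": 12}
average = {"insight": 10, "time": 12}

for name, person in (("smart", smart), ("average", average)):
    t = max_output(person, THEORY)
    e = max_output(person, EMPIRICS)
    print(f"{name}: theory capacity {t:.1f}, empirics capacity {e:.1f}, "
          f"ratio {t / e:.2f}")
```

The smarter person’s theory-to-empirics capacity ratio is higher, so their opportunity cost of a theory paper (in forgone empirical papers) is lower: a comparative advantage in theory, even though they may hold an absolute advantage in both.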