Teaching and research and division of labour

Universities usually prefer that the same person both teaches and does research. There are some purely teaching and purely research-focussed positions, but these are a minority, and both teaching and research achievements (plus service) are required for tenure. This runs counter to Adam Smith’s argument that division of labour raises overall productivity. One possible explanation is an economy of scope (synergy): teaching helps with research, or research helps with teaching. In my experience there is no such synergy, except maybe in high-level doctoral courses that focus exclusively on recent research. Revising old and basic knowledge by teaching it does not help generate novel insights about recent discoveries, and doing complex research does not help one explain introductory ideas simply and clearly to beginners.

Another explanation is that universities try to hide the cross-subsidy from teaching to research. Government money is given mainly for teaching, and if teachers and researchers were different people, it would be easy for the government to check how much was spent on each group. If the same person is engaged in both activities, however, the university can claim that most of that person’s time is spent teaching, or that their research really serves to improve their teaching. In reality, teaching may be a minor side job, with most of the salary paid for research – as suggested by the weight given to research in hiring and tenure decisions.

The income of universities mostly comes from teaching, so they try to reduce competition from non-university teachers and institutions. One way is to differentiate their product by claiming that university teaching is special, complicated and research-based, so it must be done by PhD holders or at least graduate students. Schoolteachers, for example, would then be excluded from providing this service. In fact, the material up to and including first-year doctoral courses is textbook-based and thus cannot consist of very recent discoveries. With the help of a textbook, many people could teach it – research is not required, only thorough knowledge of the material. For example, an undergraduate with good teaching skills who was top of the class in a course could teach that course the next semester. Teaching skill is not highly correlated with research skill. The advantage of someone who recently learned the material is that they remember which parts were difficult; a person who has known something for a long time probably does not recall how they would have preferred it taught when they first learned it.

Researchers forget the basics over time, because they rarely use them – their work relies on more advanced methods. The foundations are learned to facilitate later learning of intermediate knowledge, which in turn helps with more complicated things, and so on up to the research level. Similarly, in sports, musical performance or sewing, the initial exercises for learners can be quite different from the activity that is the end goal. A sports coach is rarely an Olympic athlete at the same time, so why should a teacher be a researcher simultaneously?

Research articles may have negative value

Falsified, plagiarized or plain junk research is not considered here. The effort of the author and the cost to the funders are considered sunk and similarly ignored.

After a research article is published, it may still have negative value for humanity. How is this possible if the cost of creating it is considered zero and the results are not junk? Doesn’t every discovery, however small, contribute a little to the corpus of knowledge? It does, but the cost it imposes on other researchers may outweigh this. Every publication increases the number of articles that researchers of related topics have to look at, however briefly, to write their literature review and check that their idea is not already taken. It may take a few seconds to read the title and decide that the article is irrelevant to one’s work, but this small cost is paid by many. If the publication makes a small enough contribution to knowledge, then the total cost to other academics outweighs the benefit from this contribution. The researchers whose time the article wasted could have done something useful with that time.
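A back-of-the-envelope calculation makes this concrete. The sketch below uses invented numbers (readership, scanning time, the value of a researcher’s hour); nothing is calibrated to real data.
    # When does the screening cost an article imposes on the field
    # exceed its contribution? All numbers are illustrative assumptions.
    readers = 2000          # researchers who must at least scan the title
    seconds_per_scan = 10   # time to read the title and judge relevance
    hourly_value = 50       # assumed value of a researcher's hour, in dollars
    screening_cost = readers * seconds_per_scan / 3600 * hourly_value
    print(round(screening_cost, 2))  # 277.78
    # If the article's marginal contribution to knowledge is worth less
    # than this, its net value to the research community is negative.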

Publication delay provides incentives

In economics, it takes about six months from submitting a paper to a journal until the first referee reports arrive. Acceptance on the first try is very rare: most papers are rejected, and an immediate acceptance suggests having submitted to too weak a journal. Waiting for referee reports on the first and second revisions takes another six-plus months each. This seems unnecessary (reading a paper does not take six months) and inefficient (it delays the dissemination of research results), but the delay serves an incentive purpose.
Delay discourages frivolous submissions and forces authors to evaluate their own work with some honesty. If referee reports were immediate, everyone would start at the top journal and work their way down through every publication venue until being accepted, creating a large refereeing and editing burden. Delay is a cost to authors, because simultaneous submissions to multiple journals are not allowed. Aiming for high-ranking journals is a risk: the author may have nothing to show at the next evaluation. This reduces submissions to top journals. It may be optimal to start in the middle of the ranking, where the chances of acceptance are higher, as the sketch below illustrates.
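A minimal sketch of this tradeoff, with invented prestige values, acceptance probabilities and delay costs (none calibrated to real journals):
    # Expected payoff of submission strategies under delay: the author
    # starts at some journal and moves one step down after each rejection.
    journals = [      # (prestige value, acceptance probability) - assumed
        (10, 0.05),   # top journal
        (6, 0.25),    # mid-ranked journal
        (3, 0.60),    # lower-ranked journal
    ]
    round_cost = 1.0  # cost of one six-month refereeing round
    def expected_payoff(start):
        value, p_reach = 0.0, 1.0
        for prestige, p_accept in journals[start:]:
            value += p_reach * (p_accept * prestige - round_cost)
            p_reach *= 1 - p_accept
        return value
    for start in range(len(journals)):
        print(start, round(expected_payoff(start), 2))
    # Starting at the middle journal maximizes the expected payoff here.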
A similar incentive to submit to a journal of the appropriate quality level can be created by imposing a submission fee, by forbidding resubmission for a period of time after a rejection, or by requiring the author to write referee reports on others’ papers. A submission fee should be distinguished from a publication fee, which fake journals use: the submission fee is paid whether or not the paper is accepted, so it does not give the journal an incentive to lower its standards and publish more papers.
Each mechanism has drawbacks. A submission fee imposes different costs on authors in different financial circumstances: some have research funds to pay it, some do not. Similarly, delay weighs more heavily on people whose evaluation is coming sooner. Being banned from a journal for some time after a rejection is worse for a researcher working in a single field; interdisciplinary folk have a wider variety of venues to publish in. Writing referee reports as the price of having one’s own work evaluated may lead to sloppy reviewing. Any mechanism that induces self-selection has a cost, yet self-selection is needed.

Insurance in research

Most developed countries have programs to support research and to encourage students to choose research careers. This suggests that scientists confer a positive externality on the rest of their country that is not fully internalized in their income. Why not support research by simply paying researchers the value of their work, assuming that value can be measured? This would internalize the externality, leading to efficient effort provision.
A potential answer lies in the different risk aversion of the organization supporting science and of the scientists. An institution involved in many different projects is diversified and likely less risk averse than a researcher who has only a few. The arrangement optimal for both sides is then for the institution to offer insurance (at a price): the researchers are paid a lower expected amount than the value of their work, but with lower variance. Instead of taking loans to finance their work – becoming rich if the project succeeds and bankrupt if it fails – the scientists avoid loans and receive a fairly constant salary.
There is a tradeoff between incentives and insurance. If the salary does not depend on success, there is perfect insurance but no incentive for effort. Having researchers take loans and receive the full value of their work provides strong motivation but no insurance. The compromise is that promotion and pay depend somewhat on research success, but not too much.
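A sketch of this logic with a concave utility function (square-root utility and all numbers are invented for illustration):
    import math
    def utility(wealth):
        return math.sqrt(wealth)  # concave: the researcher is risk averse
    # A risky project: worth 400 if it succeeds (probability 0.5), 0 if not.
    expected_value = 0.5 * 400                    # 200
    expected_utility = 0.5 * utility(400)         # 10
    certainty_equivalent = expected_utility ** 2  # inverse of sqrt: 100
    print(expected_value, certainty_equivalent)   # 200.0 vs 100.0
    # A diversified, roughly risk-neutral institution can pay any sure
    # salary between 100 and 200, keep the difference as the price of
    # insurance, and leave both sides better off.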

Smarter people have a comparative advantage in theory

Theory research requires figuring out the result and how to prove it, and then writing both down. Empirical research requires the same, plus running the experiment or analyzing the data in order to prove the result. This extra step takes relatively more time and less insight. If the production function of empirics requires more time per unit of insight in its input mix than the production function of theory, then smarter people have a comparative advantage in theory: they are endowed with more insight, but everyone has the same amount of time.
The amounts of theory and empirical research produced per unit of insight need not be related in any way for the above comparative advantage result.
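A worked example with invented endowments and (Leontief) input requirements, chosen only to illustrate the comparative-advantage logic:
    # Both research types use insight and time; empirics needs more time
    # per unit of insight. All numbers below are assumptions.
    theory_mix   = {"insight": 1, "time": 1}  # inputs per theory paper
    empirics_mix = {"insight": 1, "time": 3}  # inputs per empirical paper
    def max_output(endowment, mix):
        # papers producible if all resources go to one type
        return min(endowment[r] / mix[r] for r in mix)
    smart   = {"insight": 10, "time": 10}  # more insight, same time
    average = {"insight": 4,  "time": 10}
    for name, person in (("smart", smart), ("average", average)):
        t = max_output(person, theory_mix)
        e = max_output(person, empirics_mix)
        print(name, round(t / e, 2))  # theory papers forgone per empirical paper
    # smart: 3.0, average: 1.2 - the smarter person's opportunity cost of
    # empirics is higher, so their comparative advantage is in theory.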
Based on comparative advantage, I should switch to empirical research 🙂
Some empirical research methods are quite simple, but modern theory requires complicated math. Empirical research therefore requires more time per unit of methods knowledge in its input mix, so people with a stronger methodological background (a better education) have a comparative advantage in theory. This suggests that graduates of universities with more (and more advanced) coursework should be likelier to do theory.

Eliminating for-profit academic publishing

Much has been written about the high profits academic publishers extract from the volunteer labour of their referees and editors, and about how high subscription costs reduce the funds available for actual research. The opinion pieces and blog posts I have seen do not suggest a concrete way to change the system; they only express hope that, with more researchers putting their work on the web, the for-profit publishing industry will eventually disappear. I think this disappearance can and should be hastened. The obvious way is to boycott for-profit journals as an author, referee, editor and librarian.
The obvious objection is that one’s career depends on publishing in certain journals that often happen to be for-profit, and that “service to the profession” (refereeing and editing) is one’s duty and also helps the career a bit. A moral counterargument is that while boycotting may impose some personal costs, it benefits other researchers, and the resulting increase in research benefits everyone – so, as a favour to the rest of humanity, boycotting is the right thing to do. After all, why do people become academic researchers when the private sector pays more?
Game-theoretically, the academic system (including publishing) is a coordination game, like many social norms. As long as everyone else conforms to the system, it is costly to depart from it, so self-interested people choose to conform, and the system stays stable. Individual deviations are costly, but a collective (coalitional) deviation may be costless, or at least cheaper. An example is the whole editorial board of a for-profit journal deciding to resign and start a nonprofit copy of that journal, announcing publicly that all articles researchers were planning to submit to the for-profit journal should now be submitted to the nonprofit copy. The refereeing and editing process goes on as before; only the library subscriptions to the new journal are cheaper. There should be no loss of prestige for the editors and no loss of publishing opportunity for the authors.
A journal is not complicated – it only requires an online system that lets authors upload relatively small text files, lets the editors forward these files (with author identity removed) to referees, lets referees upload their reports, and lets the editors forward these (deidentified) reports to the authors. Such programs exist, free and open-source (Open Journal Systems, for example).
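A toy sketch of the anonymization step (the field names are made up, not from any actual system):
    # The editor strips identifying fields before a submission reaches referees.
    def anonymize(submission):
        return {k: v for k, v in submission.items()
                if k not in ("author", "email")}
    submission = {"title": "On X", "author": "Jane Doe",
                  "email": "j@uni.edu", "file": "paper.pdf"}
    print(anonymize(submission))  # referees see only the title and the file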
Perhaps a proofreader could be hired for the journal and paid out of subscription fees, but the total cost of running a journal (with volunteer labour, as now) is very low.

Of rankings

Many universities advertise themselves as being among the top n in the world (or region, country, etc.) – many more than n universities, in fact, for any small enough n (1000, 500, 100 and so on). How can this be? There are many different rankings of universities; each university picks the ranking in which it places highest and advertises that. So if there are 10 rankings, each with a different university at number one, then at least 10 universities claim to be number one.
There are many reasonable ways to rank a scientist, a journal, a university or a department. For a scientist, one can count all research articles published, or only those in the last 10 or 5 years, or only those in the top 100 journals in the field, or any of the previous weighted by some measure of quality, etc. Quality can be the number of citations, or citations in the last 5 years or from papers in the top 50 journals or quality-weighted citations (for some measure of quality)…
What are the characteristics of a good ranking? That partly depends on what one cares about. A fresh PhD student looking for an advisor wants an influential person who can pull strings to get the student a job. Influence is positively related to total citations or quality-weighted publications, and older publications may be better known than newer ones. A department looking to hire a professor wants someone who is active in research, not resting on past glory, so it looks at recent publications rather than lifetime totals – or at least divides the number of publications by the number of years the author has been a researcher.
Other characteristics of a good ranking are objective. It is quality-weighted publications that matter, not just total publications; similarly for citations. Coauthored publications should matter less than solo-authored ones. The ranking should not be manipulable by splitting one’s paper into two or more, or by merging several papers into one. It should not increase if two authors with solo papers agree to add each other as coauthors, or if two coauthors with two joint papers agree to make both papers single-authored, one under each of their names. Therefore credit for coauthored papers should be split between the authors so that the shares sum to one.
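A sketch of such a credit rule: each paper carries one unit of credit, divided equally among its authors. Equal division is immune to the author-swapping manipulations just described (deterring the splitting of one paper into two additionally requires quality weighting, so that citations divide between the halves):
    def credit(papers, author):
        # each paper is a list of author names carrying one unit of credit
        return sum(1 / len(a) for a in papers if author in a)
    solo   = [["A"], ["B"]]            # each author has one solo paper
    padded = [["A", "B"], ["A", "B"]]  # they add each other to both papers
    for papers in (solo, padded):
        print(credit(papers, "A"), credit(papers, "B"))
    # 1.0 1.0 in both cases: neither adding each other as coauthors nor
    # de-coauthoring two joint papers changes anyone's total credit.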
How to measure the quality of a publication? One never knows the true impact a discovery will have over the infinite future; only noisy signals of it can be observed. There is currently no better measure than the opinion of other scientists, but how can vague opinions be transformed into numerical rankings? The process seems to start with peer review.
Peer review is not a zero-one thing that a journal either has or lacks. There is a continuum of quality levels, from top journals with very stringent requirements, to middling ones where the referees only partly read or understand the paper, to fake journals that claim to have peer review but really do not. There have been plenty of experiments in which someone submitted a clearly wrong or joke article to (ostensibly peer-reviewed) journals and got it accepted. Even top journals are not perfect, as evidenced by the corrigenda published by authors and by other researchers’ critical comments on published articles. Even fake journals are unlikely to accept a paper in which every word is “blah” – that would make their lack of review obvious and reduce revenue from other authors.
The rankings (of journals, researchers, universities) I have seen distinguish peer-reviewed journals from other publications in a zero-one way, not acknowledging the shades of grey between lack of review and competent review.
How to measure the quality of peer-review in a journal? One could look at the ranking of the researchers who are the reviewers and editors, but then how to rank the researchers? One could look at the quality-weighted citations per year to papers in the journal, but then what is the quality of a citation?
Explicit measurement of the quality of peer-review is possible: each author submitting a paper is asked to deliberately introduce a hard-to-notice mistake into the paper and to report that mistake to an independent database; the referees are asked to report all mistakes they find to the same database. The author can dispute claimed mistakes, and some editor has to have final judgement. It is then easy to compare the quality of review across journals and reviewers by the fraction of introduced mistakes they find. This is the who-watches-the-watchmen situation studied in Rahman (2012), “But who will monitor the monitor?” (http://www.aeaweb.org/articles.php?doi=10.1257/aer.102.6.2767).
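A minimal sketch of the bookkeeping such a database would need (paper names and mistakes are invented):
    # Authors register deliberately planted mistakes; referees report what
    # they find; a journal is scored by the fraction of plants caught.
    planted = {"paper1": {"eq. 3 sign", "table 2 units"},
               "paper2": {"lemma 1 case"}}
    found   = {"paper1": {"eq. 3 sign"},
               "paper2": set()}
    caught = sum(len(planted[p] & found[p]) for p in planted)
    total  = sum(len(planted[p]) for p in planted)
    print(f"detection rate: {caught}/{total}")  # 1/3 for this journal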
One could disregard the journal altogether and focus on the quality-weighted citations of the papers themselves, but the reputation of a journal contains useful information. The question is how to measure that reputation explicitly.
If there is no objective measure of a paper’s quality (does the chemical process described in it work, does the algorithm give a correct solution, does the material have the claimed properties, etc.), then a ranking of papers must be based on people’s opinions. This is like voting: each alternative voted on is a ranking of papers, or there may simply be a vote for the best paper. Arrow’s impossibility theorem applies – no overall ranking of papers that is Pareto efficient, independent of irrelevant alternatives and non-dictatorial can be derived from people’s individual rankings.
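The standard concrete illustration of the difficulty is the Condorcet cycle, sketched below: three referees rank three papers, and pairwise majority voting yields a cycle rather than a ranking.
    # Three referees' rankings of papers A, B, C (best first).
    rankings = [("A", "B", "C"), ("B", "C", "A"), ("C", "A", "B")]
    def majority_prefers(x, y):
        wins = sum(r.index(x) < r.index(y) for r in rankings)
        return wins > len(rankings) / 2
    for x, y in (("A", "B"), ("B", "C"), ("C", "A")):
        print(x, "beats", y, ":", majority_prefers(x, y))
    # All three print True: A beats B, B beats C, C beats A - a cycle,
    # so majority voting produces no ranking of the papers.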
Theorists have weakened independence of irrelevant alternatives (the requirement that the ranking of A and B not depend on preferences over other options). If preferences are cardinal (utility values have meaning beyond their ordering), then some reformulations of Arrow’s criteria can be satisfied simultaneously, and a cardinal ordering of papers may be derivable from the individual orderings.
If the weight of a person’s vote on the ranking of papers or researchers depends on the rank that person or their papers receive, then preference aggregation becomes a fixed-point problem even with truthful (nonstrategic) voting. (This is the website-relevance ranking problem addressed by Google’s PageRank and similar algorithms.) There may be multiple fixed points, i.e. multiple different rankings that weight individuals’ votes by their rank and derive their rank from everyone’s votes.
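A PageRank-style sketch of that fixed-point computation, with an invented endorsement pattern: each researcher’s score is the score-weighted sum of the votes they receive, iterated until it stabilizes.
    votes = {"A": ["B", "C"],  # who endorses whom (assumed pattern)
             "B": ["C"],
             "C": ["A"]}
    score = {n: 1 / len(votes) for n in votes}
    for _ in range(100):
        new = {n: 0.0 for n in votes}
        for voter, targets in votes.items():
            for t in targets:
                new[t] += score[voter] / len(targets)  # weight = voter's score
        score = new
    print({n: round(s, 3) for n, s in score.items()})
    # Converges to A: 0.4, B: 0.2, C: 0.4 for this voting pattern.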
For example, suppose A, B and C vote on their own ranking, and whoever is ranked top by the vote becomes a dictator who determines everyone’s ranking. A, B and C would most prefer the rankings ABC, BCA and CAB respectively. Each of these rankings is a fixed point, because each person votes themselves dictator if they are to become the dictator, and the dictator’s vote determines who becomes the dictator.
A fixed point may also fail to exist: with voters A and B, if A thinks B should be the dictator and B thinks A should, and the dictator’s vote determines who becomes the dictator, then a contradiction results whether A or B is made the dictator.
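Both examples can be checked mechanically, under the rule that a fixed point is a dictator who votes for themselves:
    # Each voter names who they think should be the dictator.
    def fixed_points(votes):
        return [d for d in votes if votes[d] == d]
    print(fixed_points({"A": "A", "B": "B", "C": "C"}))  # ['A', 'B', 'C']
    # Three fixed points when everyone votes for themselves, as above.
    print(fixed_points({"A": "B", "B": "A"}))  # [] - no fixed point when
    # A names B and B names A.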
If voting is strategic and more votes for you give a greater weight to your vote, then the situation is the one studied in Acemoglu, Egorov and Sonin (2012), “Dynamics and stability of constitutions, coalitions, and clubs” (http://www.aeaweb.org/articles.php?f=s&doi=10.1257/aer.102.4.1446). Again, multiple fixed points are possible, depending on the starting state.
Suppose now that the weights of the votes are fixed in advance (not themselves a fixed point of the voting process). An objective signal that ranks some alternatives, but not all, does not remove the problem posed by Arrow’s impossibility theorem. An objective signal that gives noisy information about the rank or quality of each alternative can be used to prevent strategic manipulation in voting, but under truthful voting it does not change the outcome much (the outcome is probably continuous in the amount of noise).