
Keeping journals honest about response times

Academic journals in economics commonly take 3-6 months after manuscript submission to send the first response (reject, accept or revise) to the author. The variance of this response time is large both within and across journals. Authors prefer to receive quick responses, even if these are rejections, because then the article can be submitted to the next journal sooner. The quicker an article gets published, the sooner the author can use it to get a raise, a grant or tenure. This creates an incentive for authors to preferentially submit to journals with short response times.
If more articles are submitted to a journal, then the journal has a larger pool of research to select from. If the selection is positively correlated with article quality, then a journal with a larger pool to select from publishes on average higher-quality articles. Higher quality raises the prestige of a journal’s editors. So there is an incentive for a journal to claim short response times to attract authors. On the other hand, procrastination by referees and editors tends to lengthen the actual response times. Many journals publish statistics about their response times on their website, but currently nothing guarantees the journals’ honesty. There are well-known tricks (other than outright lying) to shorten the reported response time, for example considering an article submitted only when it is assigned to an editor, and counting the response time from that point. In my experience, assigning a paper to an editor can take over two weeks.
To keep journals honest, authors who have submitted to a journal should be able to check whether their papers have been correctly included in the statistics. Some authors may be reluctant to have their name and paper title associated with a rejection from a journal. A rejection may be inferred from a paper being included in the submission statistics, but not being published after a few years. A way around this is to report the response time for each manuscript number. Each submission to a journal is already assigned a unique identifier (manuscript number), which does not contain any identifying details of the author. The submitter of a paper is informed of its manuscript number, so they can check whether the response time publicly reported for that manuscript number is correct.
Currently, authors can make public claims about the response time they encountered (e.g. on https://www.econjobrumors.com/journals.php), but these claims are hard to check. An author wanting to harm a journal may claim a very long response time. If the authors’ reported response times are mostly truthful, then these provide information about a journal’s actual response time. Symmetrically, if the journals’ reported response times are accurate, then an author’s truthfulness can be statistically tested, with the power of the test depending on the number of articles for which the author reports the response time.
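As a rough illustration of how such a statistical check might work, here is a minimal Python sketch (the function name, the resampling approach and all numbers are my own assumptions, not an established procedure). It asks how often a random draw of the same number of manuscripts from the journal's reported times would show a mean delay at least as long as the author claims; a very small value suggests the two sets of reports are inconsistent.

```python
import random
import statistics

def claim_p_value(journal_times, claimed_times, n_resamples=10_000, seed=0):
    """One-sided resampling check: the fraction of random samples of
    len(claimed_times) manuscripts, drawn from the journal's reported
    response times, whose mean delay is at least the author's claimed
    mean.  A small value suggests the author's claim is inconsistent
    with the journal's reports (or vice versa)."""
    rng = random.Random(seed)
    claimed_mean = statistics.mean(claimed_times)
    k = len(claimed_times)
    hits = sum(
        statistics.mean(rng.sample(journal_times, k)) >= claimed_mean
        for _ in range(n_resamples)
    )
    return hits / n_resamples
```

As the text notes, the power of such a test grows with the number of articles for which the author reports a response time: a single outlier claim proves little, while many claimed delays far above the journal's reported distribution become hard to explain away.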

Conferences and seminars as cargo cults

I wrote about the wastefulness of physically travelling to conferences or to give seminars, because one could give a presentation via a video call over the internet (http://sanderheinsalu.com/ajaveeb/?p=442). Other than habit or tradition, why would scientists organise conferences and seminars with physical attendance? One explanation I offered was that physical presence is a commitment device. Herding is another justification for any tradition. Irrationality is a third, which complements herding.

Holding conferences and seminars may be rational if top researchers are presenting and providing feedback, because there is much to learn from them. Such workshops may not be useful if the participants are not on the research frontier. Nonetheless, the low-achievers may organise such gatherings, because they want to publish like the high-achievers, and they perceive that the high-achievers benefit from the research meetings of large groups. A cargo cult means imitating someone’s behaviour to reach the same goals as them, but without understanding why their actions lead to the results they do. The lack of comprehension of the underlying mechanism leads the imitation subtly astray, so it does not obtain the desired results. The conferences and seminars of low-achievers are a cargo cult if the following hold: only the participation of high-achievers makes research meetups useful, the high-achievers do not attend low-prestige meetups, and the low-achievers do not understand the above. It is difficult to test the usefulness of any action in research, because publication success is noisy, influenced by many factors, and observed only with long delays. Thus it is difficult to test whether there is a cargo cult in the less advanced levels of the research community.

Besides improving quality, the feedback of top researchers can increase publication chances by making the research of those lower in the scientific hierarchy conform to the tastes of the top. This is a horizontal differentiation effect – matching idiosyncratic tastes. It is not vertical differentiation (improving quality). If top people as referees and editors favour a certain field of research or ideology, then presenting work to them may uncover their biases and enable an author to pander to them.

Another way that presenting helps with publication is the familiarity effect. When the referee or editor has seen the paper presented before reading it, then the content is familiar and thus easier to understand. The reader may interpret the ease of comprehension as clarity of the paper, not familiarity of the material. Clear writing and well-structured ideas are a positive signal to the referee and increase the publication chances.

If the second-best people imitate the highest-achievers, then the third-best may imitate the second-best, etc. The cargo cults may be multilayered. Such imitation of imitators is called herding. It may sometimes be individually rational, but may lead to socially suboptimal ignoring of later information in favour of imitating the decisions of those who acted based on earlier information. Herding strengthens the effects of mistaken imitation, thus worsening cargo cult effects.

Cargo cults occur widely – any time there is a fad, fashion or bubble, some people jump on the bandwagon because their role models did, without asking why the role models did so. The personal situation of the trailblazers may make it rational for them to act in a certain way, but the different circumstances of the followers may make imitation counterproductive for them. An example is creating a financial bubble to profit from it (pump-and-dump strategy). The starters profit from the amount invested by the followers. The last people to become followers lose their investment when the bubble bursts. I am not the first to compare the research community to a pyramid scheme – search „Profzi scheme” online.

Incentivising refereeing

To shorten the refereeing lag and improve report quality in economics, the natural solution is to incentivise academics to do a better and quicker job. Economists respond to incentives, but currently no salary or promotion consequences arise from good or bad refereeing, as far as I know. In http://sanderheinsalu.com/ajaveeb/?p=503, I wrote about incentives for authors not to submit careless papers (in the hope that a refereeing mistake gets them accepted). One such incentive is requiring refereeing for a journal as a precondition for submitting to that journal. If a submitted paper gets n referee reports on average, then before submitting, an author should referee n papers in a timely manner, which should balance the supply and demand of refereeing. This forced refereeing may lead to quick, but sloppy reports.

An additional incentive is needed for quality. Rahman’s 2010 paper on the question „who watches the watchmen” suggests an answer. The editor can insert a deliberate mistake in every paper and see whether a referee finds it. If not, then the refereeing of that person is likely of low quality. The mistake is corrected before publication. Alternatively, the editor can ask each author to insert a mistake and tell the editor about it. The author is not penalised for this mistake and is asked to correct it if the paper is accepted. The referees are again judged on whether they find the mistake.

The above scheme derives refereeing incentives from publication incentives, requiring minimal change to the current system. However, it is somewhat indirect. A more straightforward incentive for refereeing is to reward it directly, either paying for it or basing promotion decisions partly on it. The speed of refereeing is already slightly monetarily incentivised in the American Economic Journal: Microeconomics. If the referee sends the report before a deadline, then she or he is paid 100 dollars. If a good referee report takes about 10 hours, then the amount is certainly not enough to motivate quality provision, but it is a step in the right direction. A simple improvement on the binary „before vs after deadline” reward scheme is to reduce the payment gradually as the delay of the referee report increases.
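A graduated scheme along these lines could be as simple as a linear decay. The following sketch uses invented numbers except the 100-dollar base, which mirrors the journal's actual figure; the decay rate and the function name are hypothetical.

```python
def referee_payment(days_late, base=100.0, decay_per_day=2.0):
    """Hypothetical graduated referee reward: the full base payment at
    or before the deadline, then a fixed deduction for each day of
    delay, floored at zero.  Numbers are illustrative only."""
    if days_late <= 0:
        return base
    return max(0.0, base - decay_per_day * days_late)
```

Under these illustrative parameters, a report ten days late earns 80 dollars and one fifty or more days late earns nothing, replacing the sharp before/after-deadline cliff with a smooth incentive to be less late.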

If refereeing is incentivised, then lower-ranked journals need larger incentives to compensate for the fact that refereeing for these has less inherent prestige and the paper one has to read is of lower quality. On the other hand, lower-ranked journals are less able to motivate refereeing with the threat of not accepting submissions from those who have not refereed. There are more lower-ranked journals, and it is less important to get accepted by any particular one of them. Some of the less prestigious journals would find no referees under the system proposed above. This is good, because it removes the „peer reviewed” status of some junk journals and may force them to close. If authors know that quality journals require refereeing before submission, then they draw the obvious conclusion about a journal that does not require it.

Deflation of academic publications

The top journals publish a similar number of articles as decades ago, but there is a much larger number of researchers competing to get their work into a top journal. Correspondingly, it is more difficult over time to get a paper into a given journal. If articles are analogous to currency in the academic world, then this would be deflation: the value of the currency rises over time. If articles are like goods and services, but research effort is the currency that buys them, then there is inflation, because the amount of currency required to buy a given good rises.

The correct comparison between publications in different decades would take into account the increasing difficulty of publishing in a given journal. Instead of comparing papers in the top n journals, a better metric is papers in the top x percent of journals (accounting for the possibly expanding size of each journal). Similarly, being the number one researcher among a thousand in 1901 is less impressive than being the best among a million in 2001. Again the right comparison is by percentile rank, not by “top n” status.
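The percentile comparison is trivial to compute; a minimal sketch (the function name is mine, and the convention of rank 1 being the best is an assumption):

```python
def percentile_rank(position, population):
    """Fraction of the population ranked at or better than this
    position, with position 1 being the best.  A smaller fraction
    means a rarer, more impressive achievement."""
    return position / population
```

For example, being first among a thousand is the top 0.1 percent, while being first among a million is the top 0.0001 percent: the same „number one” status, but a far more selective percentile in the larger population.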

The norms and metrics in academia are largely made by senior, established researchers. If people do not completely account for the deflation, then the top academics benefit from the increasing difficulty of publishing in the top n journals combined with the metric that counts the top n, not the top x percent. The research of old academics that was published in the top n long ago looks the more impressive the more difficult it is nowadays to get a paper into the top n. Comparison by percentile rank would correct for this artificial advantage, so the established members of the profession would not seem as high-achieving relative to new entrants.

A similar change in difficulty has occurred in getting accepted as a student in the top n universities, or getting hired as faculty in these. The right comparison to the students or faculty decades ago would compare the top x percent of universities, with the appropriate correction if the universities have expanded their enrolment or number of jobs.

Publication delay provides incentives

From submitting a paper to a journal until getting the first referee reports takes about six months in economics. It is very rare to get accepted on the first try. Most papers are rejected, and an immediate acceptance implies having submitted to too weak a journal. Waiting for the referee reports on the first and second revisions takes another six or more months per round. This seems unnecessary (reading a paper does not take six months) and inefficient (it delays the dissemination of research results), but the delay serves an incentive purpose.
Delay discourages frivolous submissions. It forces authors to evaluate their own work with some honesty. If the referee reports were immediate, then everyone would start at the top journal and work their way down through every venue of publication until getting accepted. This would create a large refereeing and editing burden. Delay is a cost for the authors, because simultaneous submissions to multiple journals are not allowed. Trying for high-ranking journals is a risk, because the author may not have anything to show at the next evaluation. This reduces submissions to top journals. It may be optimal to start at the middle of the ranking where the chances of acceptance are higher.
A similar incentive to submit to the correct quality level of journal can be created by imposing a submission fee, forbidding further submissions for a period of time if rejected or requiring the author to write referee reports on others’ papers. A submission fee should be distinguished from publication fees, which fake journals use. The submission fee is paid whether or not the paper is accepted, and therefore does not create an incentive for the journal to lower its standards and publish more papers.
The submission fee would impose different costs on authors in different financial circumstances. Some have research funds to pay the fee, some do not. Similarly, delay has a larger effect on people whose evaluation is coming sooner. Being banned from a journal for a given amount of time after a rejection is worse for a researcher working in a single field. Interdisciplinary folk have a wider variety of venues to publish in. Writing referee reports as a price of having one’s work evaluated may lead to sloppy reviewing. Any mechanism to induce self-selection has a cost. Yet self-selection is needed.

Insurance in research

Most developed countries have programs to support research and encourage students to choose careers in it. This suggests scientists have a positive externality on the rest of their country that is not fully internalized in their income. Why not support research by paying the researchers its value, assuming the value can be measured? This would internalize the externality, leading to efficient effort provision.
A potential answer is different risk aversion of the organization supporting science and the scientists. If the institution is involved with many different projects, it is diversified and likely to be less risk averse than a researcher who only has a few projects. The arrangement optimal for both sides is then for the institution to offer insurance (at a cost). The researchers get paid a lower expected amount than the value of their work, but with a lower variance. Instead of the scientists taking loans to finance their work, becoming rich if the project succeeds and bankrupt if it fails, they avoid loans and get a fairly constant salary.
There is a tradeoff between incentives and insurance. If the salary does not depend on success, there is no incentive for effort, but perfect insurance. Having researchers take loans and get the full value of their work provides no insurance, but strong motivation. The compromise is that promotion and pay depend somewhat on research success, but not too much.
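The tradeoff can be made concrete with a stylised linear pay contract (a toy formulation of my own, not a model from any particular paper): pay = base + share × realised project value, where the share parameter simultaneously scales the effort incentive and the researcher's risk exposure.

```python
import statistics

def pay_schedule(project_values, base, share):
    """Linear contract: pay = base + share * realised project value.
    share = 0 gives full insurance (constant pay, no incentive);
    share = 1 passes through the full value (strong incentive, no
    insurance).  Intermediate shares are the compromise in the text."""
    return [base + share * v for v in project_values]

def contract_stats(project_values, base, share):
    """Mean and population variance of pay across possible outcomes."""
    pays = pay_schedule(project_values, base, share)
    return statistics.mean(pays), statistics.pvariance(pays)
```

With illustrative project outcomes, halving the share quarters the variance of pay (variance scales with the square of the share), which is exactly why a partial link between pay and research success buys a lot of insurance at a moderate cost in incentives.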

On grants, evaluation and efficiency

Getting grants counts positively in an academic’s evaluation and results in promotions and raises. But grants are supposed to be inputs for research, not outputs. Other things equal, it should be preferable to get the same output with fewer inputs (more efficiently and cheaply). Given an academic’s publications and patents, the grants they received in order to create these outputs should count negatively in their evaluation. The university administration is interested in motivating grant-getting, because they tax the grants – take a fraction of each for themselves. The motivating is done by promotions and raises. Rewarding more use of inputs inflates the cost of research and diverts effort from scientific output to getting more inputs.

A justification for rewarding grant-getting is that having current grants makes it easier to do research, thus increasing the expected scientific output in the near future. This only applies to a person’s current grants, not those already spent. Current grants may thus count positively in an evaluation, but spent ones should still have negative weight.

Once the system is in place, there may be an additional incentive to follow it: signalling obedience to rules. If academics are expected to apply for grants, then the ones that publicly do not may be considered contrarian, which may have negative consequences.

A similar reasoning applies to researchers from rich and poor institutions. If university resources are used for the work, then the person from a rich institution had more inputs for their work. The same output from a scientist in a poor university should be a more favourable signal about them.

An analogous adjustment is done in US college applications when low socioeconomic status confers an advantage. The direction of the correction is right, but its appropriate size remains to be determined.

Targeting university donations more precisely

If a donation is an expression of gratitude to a university where one acquired great skills or had a good time, then why not target it more precisely? Why donate to the entire university or a particular department as opposed to the people making up the university? Some people probably contributed more than others to the excellent university experience. It would make sense to reward them more. The people who made the studies enjoyable or useful may be gone from the university, especially if they were coursemates, but the employees of universities also change jobs. Those who are gone do not benefit from a donation to the university. A gratitude-based donation should go directly to the people one wants to thank.
If a donation is for the purpose of advancing education and research, then the money should be targeted to where it does the most good. But the universities receiving the most donations are those who are already rich. It is difficult to measure the benefit to education or research that an additional unit of money generates in different universities, but diminishing marginal returns seem reasonable. In that case, do-good donations should go to the poorest regions of the world and the poorest universities.
The richest universities often spend money on fancy architecture, with stonecarvings on the outside of buildings and woodcarvings and paintings inside them. The money thus spent clearly does not contribute to education or research. It may even have a negative value if architecturally interesting buildings are less well suited to study and work than a standard office block (this is true in my experience).
It is not enough to donate under the condition that the university must spend the money on scholarships or salaries, not buildings. There is a crowding-out effect: if the university receives a donation for a particular purpose, it spends less of its own money for that purpose than it would have without the donation. Effectively, part of the donation still goes to buildings.

Wasteful academic travel

Academics fly around the world to meet coauthors, go to conferences or present seminars. These things could easily be done by videoconferencing, saving money and travel time, sparing the environment, and avoiding the productivity lost to jet lag. An objection I have heard is that video calls are not the same thing. What other senses besides sight and hearing do people use to communicate with their colleagues? A handshake maybe. Then build a robotic arm that gives haptic feedback to imitate any person’s hand and that can be used to shake hands at a distance.
If a wall-sized screen disguised at the edges is put in a seminar room and the audience walks in together, it would be a challenge to distinguish a real speaker at the front of the room from a speaker shown on the big screen. Eye tracking software can adjust the screen image as the viewer changes position to give the impression of 3D. Or the audience can wear virtual reality glasses like Oculus Rift.
Other than habit, commitment may be a reason for physical travel. If a person has travelled to give a seminar, the audience would feel embarrassed for not attending. This embarrassment would be weaker if the presentation were via video, and a video presentation could be recorded, in which case the option to watch it later would give people an excuse to postpone watching indefinitely. If an academic travels to a conference, there are fewer distractions than at home or at work, so a greater chance of actually going to the presentations.
The proliferation of laptops, smartphones and tablets is undermining this commitment – one can attend a talk and not pay attention, checking email or surfing the web instead. Google Glass would have an even stronger effect: the eyes can be pointed towards the speaker while actually watching and listening to something else.

Eliminating for-profit academic publishing

Much has been written about the high profits academic publishers get from the volunteer labour of their referees and editors, and how high subscription costs reduce funds available for actual research. The opinion pieces and blog posts I have seen do not suggest a concrete way to change the system. They only express hope that with more researchers putting their work on the web, the for-profit publishing industry will eventually disappear. I think this disappearance can and should be hastened. The obvious way is to boycott for-profit journals as an author, referee, editor and librarian.
The obvious objection is that one’s career depends on publishing in certain journals that often happen to be for-profit, and that “service to the profession” (refereeing and editing) is one’s duty and also helps the career a bit. A moral counterargument is that while boycotting may impose some personal costs, it benefits other researchers and the increase in research benefits everyone, so as a favour to the rest of humanity, boycott is the right thing. After all, why do people become academic researchers when the private sector pays more?
Game theoretically, the academic system (including publishing) is a coordination game, like many social norms. As long as everyone else conforms to the system, it is costly to depart from it. Thus self-interested people choose to conform to the system. This keeps the system stable. Individual deviations are costly, but a collective (coalitional) deviation may be costless or at least cheaper. An example is the whole editorial board of a for-profit journal deciding to resign and start a nonprofit copy of this journal. They announce publicly that all articles that researchers were planning to submit to the for-profit journal should now be submitted to the nonprofit copy. The refereeing and editing process goes on as before, only the library subscriptions to the new journal are cheaper. There should be no loss of prestige for the editors or loss of publishing opportunity for the authors.
A journal is not complicated – it only requires an online system to let authors upload relatively small text files, let the editors forward these files (with author identity removed) to referees, referees to upload their text files and the editors to forward these (deidentified) files to authors. Such programs surely exist, free and open-source as well.
Perhaps a proofreader could be hired for the journal and paid out of subscription fees. But the total cost of running a journal (with volunteer labour like now) is very low.