Hands off arXiv!

Written by Dmytro Mishkin and Amy Tabb, June 29, 2020.

Introduction.

Recently, we have seen renewed calls that conference submissions should not be posted on arXiv prior to acceptance. These calls rest on a handful of recurring arguments, which we address in the sections below.

We will show that such arguments are often wrong, and that banning pre-acceptance arXiv preprints is harmful, especially for early career researchers (abbreviated as ECRs for the remainder of this document). Further, there are broader considerations that are missed in the holy war for double-blind review.

Who are “we”? Dmytro Mishkin is a final-year Ph.D. student at Czech Technical University (CTU) in Prague, working in computer vision and deep learning. He had no computer science background when he started his Ph.D. Amy Tabb is a research scientist, in a PI role, in the USA. She works on multidisciplinary problems in computer vision, robotics, and agriculture, and holds a Ph.D. in ECE.

The “wait 3-6 months” argument is wrong.

This argument rests on the false assumption that a paper will be accepted after one or two submissions, resulting in a 3-6 month delay.

Given acceptance rates of 20-30% at conferences such as CVPR or NeurIPS (statistics), the a priori expectation is that a paper will be rejected more than once. Moreover, the “wait 3-6 months” argument does not account for the research ‘arms race’: a paper may be scooped before acceptance.

Figure 1. Toy simulation of number of paper submissions, given paper acceptances and scooping. The source code is on Google Colab.

Let’s do a toy simulation: given an acceptance rate of 25% and a scooping rate of 10%, what is the probability that your paper will eventually be accepted? Not more than 40%. So if one waits 3-6 months to submit a completed work to arXiv, one quite likely will end up waiting much, much longer. If you would like to play with the paper acceptance model, here you go.
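The authors’ actual model lives in the linked Colab notebook; as a rough sketch of the same idea, here is a minimal Monte Carlo simulation. The per-cycle model (the paper is either accepted, or risks being scooped before resubmission), the `max_rounds` cutoff, and the default parameters are assumptions of this sketch, not the authors’ exact code, so the resulting numbers will differ from theirs.

```python
import random


def simulate_acceptance(accept_rate=0.25, scoop_rate=0.10,
                        max_rounds=10, trials=100_000, seed=0):
    """Monte Carlo estimate of the chance a paper is eventually accepted
    before being scooped or the author gives up.

    NOTE: this per-round model and its parameters are illustrative
    assumptions, not the model from the authors' Colab notebook.
    """
    rng = random.Random(seed)
    accepted = 0
    for _ in range(trials):
        for _ in range(max_rounds):
            if rng.random() < accept_rate:
                # Paper accepted in this review cycle.
                accepted += 1
                break
            if rng.random() < scoop_rate:
                # Scooped while waiting to resubmit; give up.
                break
    return accepted / trials


if __name__ == "__main__":
    p = simulate_acceptance()
    print(f"Estimated probability of eventual acceptance: {p:.2f}")
```

Even in this simplified sketch, raising the scooping rate visibly drags down the probability of ever getting the paper accepted, which is the point of the argument above.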

Reviews are a single point of failure for ideas.

Review gatekeeping is a single point of failure that is hard to pass even for experienced researchers, as famous and less famous examples show.

arXiv is monitored by many practitioners, who are not worried about whether the work has been endorsed by others, only about whether the method works well in practice.

arXiv has become much more than an archive now.

A whole ecosystem has evolved around arXiv. Services like arxiv-sanity (thank you, Andrej!), arxiv-vanity, and arxiv-daily-type Twitter accounts have made arXiv one of the default ways to stay up to date and cope with the flood of published papers. If you are not on arXiv, your work will reach far fewer people, especially if you are a young researcher.

Moreover, arXiv serves an important role in providing public access to documents for grant reports and public talks. For researchers on the job market, arXiv allows them to list all of their completed publications in an easily accessible format. Anonymized submissions are inappropriate here: an employer checking a potential employee’s C.V. cannot verify from a public anonymized version whether the candidate is in fact the author.

While having the most recent publication online might not matter much to a professor with 20+ publications already, for the early career researcher it is the step from zero to one.

One more thing: while companies finance research from their own profits, most university research is funded by taxpayers’ money. This brings, at the very least, a moral obligation to share research results in a timely manner (see the “3-6 months” argument above).

The focus of the discussion is misplaced.

The main focus of the “ban arXiv” discussion has been that the huge public-relations machines of Google, OpenAI, Facebook, and other corporations could unfairly increase their chances of acceptance. However, this argument misses the bigger picture.

ECRs, especially those from non-mainstream labs, may have difficulty writing papers that conform to the norms of the community, in terms of framing their ideas and expressing them in the established vocabulary. This is even more true for people with non-ML/CV backgrounds and novel ideas. Borrowing a phrase from Bill Freeman’s talk “How to write a good CVPR submission”, such papers could be described as “The Puppy with 6 toes.” These papers are easily rejectable.

On the other hand, famous labs have researchers who are very experienced in writing papers, huge hardware resources, and so on. Thus, someone from a hypothetical Facebook knows exactly how to write “The Cockroach paper”, a “maybe boring, but hard to kill” type of paper (see Bill Freeman again).

While arXiv and Twitter publicity could possibly make a difference in paper acceptance for famous labs, those labs have plenty of other weapons in their arsenal, such as the experienced writers and resources mentioned above. Removing arXiv would not make a huge difference for them. ECRs, by contrast, have comparatively little, and removing the option to preprint before acceptance means a much reduced ability to participate in the research community. There is a Russian saying, “Пока толстый сохнет, худой сдохнет”: “while the fat one is drying out, the thin one will have died.”

Crowdsourced review is the way of Open Science, and it might not be nice to corporations.

Let’s recall all the discussion and critique of OpenAI’s GPT-2 and GPT-3 models. GPT-2 was released first as an OpenAI technical report, with code later, while GPT-3 was released on arXiv. While opinions on the works and their impact may vary, would three standard NeurIPS reviews have produced the rich community discussion and the questions about ethics, long-term impact, and machine-learning bias that arose on Twitter and in blog posts at the time of these releases? We think not.

Concluding notes.

The world of research is complex, rich, and diverse (although not as diverse as we would like). Some laboratories may set their own policies with respect to arXiv usage. For instance, individuals or labs may feel strongly one way or the other about using a particular LaTeX template or about the timing of posting their papers. However, there is quite a difference between setting one’s own laboratory policies and dictating what others do with their own content, particularly when others are operating in accordance with the terms of arXiv and the venues to which they are submitting. What seems to be a good idea in one context might be a disaster in another.

Sincerely yours, Dmytro Mishkin and Amy Tabb.

© Amy Tabb 2020. All rights reserved. The contents of this site reflect my personal perspectives and not those of any other entity.