Abstract:
In Part 1, the author reflected on the trouble IC had when encountering a submission that was substantially similar to another submission received elsewhere - a definite no-no. The author described some tools for detecting self-plagiarism and considered a possible way to get authors to make submitted works available alongside those already published, so that we could more easily detect overlapping submissions and other violations of submission guidelines. One key requirement would be either that the submissions themselves be stored and managed securely, with the same guarantees as submissions in the review process, or that only some derived data be stored in a centralized fashion. In the latter case, we could readily tell that two manuscripts were similar without actually having access to either one; the arbitrator detecting the overlap wouldn't need to be trusted to protect the content, either. This arbitrator is an example of a broader class of neutral agents that can serve many functions on the Internet. In this article, the author focuses on a second aspect of the reviewing process - reviewer integrity - and then returns briefly to a few other real-world examples.