Introducing Registered Reports

There is a growing movement in the scientific community aimed at eradicating a number of questionable research practices that are contributing to a “publishing culture that is toxic to science” (Chambers et al., 2013). Publication bias, insufficient statistical power, cherry-picking, post hoc hypothesising, and a paucity of data sharing (Chambers, 2014) have become commonplace in scientific publishing, threatening the foundation of scientific enquiry. These problems are perpetuated by the “publish or perish” mantra that is omnipresent in the scientific community.

An elegant solution that enforces scientific integrity is offered by Registered Reports, a novel publishing format that was first introduced to the field of psychology by the journal Cortex in 2013. Scientists pre-register their research question and methodology before data collection, and acceptance is determined by the quality of these submissions. Since 2013, many journals in the field of psychology have introduced Registered Reports. Following these examples, we are excited to announce the launch of Registered Reports for student research at the Journal of European Psychology Students!

What Are the Problems Plaguing Scientific Methodology?

There are many well-established sore points that the community accepts with unjustified complacency, even though these violations threaten the integrity of psychological science. For the sake of brevity, we have listed some of the most prevalent problems below.

Publication bias/file-drawer problem. There is a bias against the publication of null results (Bishop, 2013; Sterling, 1959). This bias is evident from the systematic discrepancy between results reported in published and unpublished studies (Song, Hooper, & Loke, 2013), and it renders meta-analysis a less useful tool (cf. van Elk et al., 2015).

Replication bias. Due to the emphasis on reporting positive results and novel findings, it is all too common for unsuccessful replications to go unpublished in the current scientific climate (Nosek & Lakens, 2014).

P-hacking. Engaging in selective reporting, cherry-picking, or data-peeking allows researchers, whether knowingly or not, to “present anything as [statistically] significant” (Simmons, Nelson, & Simonsohn, 2011; John, Loewenstein, & Prelec, 2012).

Garden of forking paths. The lines between exploratory and confirmatory research are often blurred (cf. de Groot, 1954/2014; Wagenmakers et al., 2012). This is due to a hidden multiple-comparison problem: the outcome of a statistical test loses its evidential value when it is conducted in an environment in which different data would have led to different statistical tests or preprocessing steps (Gelman & Loken, 2014). Consequently, p-values retain their meaning only when the research is purely confirmatory (Wagenmakers et al., 2012).

Statistical power. Low statistical power reduces the probability that statistically significant findings reflect true effects, and it increases the probability that effect sizes are overestimated (Type M error) or point in the wrong direction (Type S error; Button et al., 2013; Gelman & Carlin, 2014; Ioannidis, 2005); the simulation sketch after this list illustrates both errors.

Lack of data sharing. The percentage of researchers sharing their data is low (Wicherts et al., 2006; Wicherts, Bakker, & Molenaar, 2011), which exacerbates the problems mentioned above.
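To make Type S and Type M errors concrete, the following minimal simulation sketch (our illustration; all parameter values are assumptions chosen for demonstration, not taken from any cited work) repeatedly runs an underpowered two-group experiment with a small true effect and then inspects only the “significant” results:

```python
# Minimal simulation of Type S and Type M errors in an underpowered
# two-group design (cf. Gelman & Carlin, 2014). All parameter values
# are illustrative assumptions.
import numpy as np
from scipy import stats

rng = np.random.default_rng(seed=1)
true_effect = 0.2    # small true standardised effect
n_per_group = 20     # underpowered sample size
n_sims = 10_000

significant = []
for _ in range(n_sims):
    control = rng.normal(0.0, 1.0, n_per_group)
    treatment = rng.normal(true_effect, 1.0, n_per_group)
    t, p = stats.ttest_ind(treatment, control)
    if p < .05:  # keep only the "publishable" results
        significant.append(treatment.mean() - control.mean())

significant = np.array(significant)
print(f"Power: {len(significant) / n_sims:.2f}")
print(f"Type S rate (wrong sign): {np.mean(significant < 0):.2f}")
print(f"Type M (exaggeration factor): "
      f"{np.mean(np.abs(significant)) / true_effect:.1f}x")
```

At such low power, the significant estimates exaggerate the true effect severalfold, and a small proportion even point in the wrong direction, precisely the pattern described by Gelman and Carlin (2014).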

How Can We Address These Issues?

The pre-registration of research shifts the focus from “publishable results” to sound hypotheses and methodology (Chambers, 2014), thereby eliminating the pressure to resort to the undesirable research practices outlined above. Specifically, researchers submit a document that details the introduction, theoretical motivation, experimental design, data pre-processing steps (e.g., outlier removal criteria), and planned statistical analyses prior to data collection. Peer review focuses solely on the merits of the proposed study and the adequacy of the statistical analyses. If the planned study has sufficient merit, the authors are guaranteed in-principle acceptance (Nosek & Lakens, 2014).

Having been granted in-principle acceptance, authors then collect and analyse the data, assured that the results will be published regardless of the study’s outcome. Editors examine any deviations from the original methodology, which are then reported in the published article along with any additional exploratory analyses. For an overview of the publishing process, see Figure 1.

Figure 1 

The publishing process for Registered Reports in the Journal of European Psychology Students.

To summarise: first, by publishing regardless of the outcome of the statistical analyses, Registered Reports eliminate publication bias. Second, by clearly stating the hypotheses and planned analyses prior to data collection, the distinction between exploratory and confirmatory studies is made apparent (de Groot, 1954/2014). Finally, pre-registration guards against “questionable research practices” (John, Loewenstein, & Prelec, 2012) such as post-hoc theorising (Kerr, 1998) and the “garden of forking paths” (Gelman & Loken, 2014).

Pre-Registered Student Research

We believe that, as a student journal, we are in a position to introduce the next generation of psychologists to sound research practices in scientific publishing. We guide students through the entire process, from the initial technical review, which flags deviations from APA style, through professional peer review of all submissions that fulfil our submission criteria, and finally (if accepted) to copy-editing.

The advantages of pre-registering one’s bachelor’s or master’s thesis are obvious. Not only does the format motivate students to engage in sound research practices; students can also benefit immensely from the peer-review process that accompanies Registered Reports. Namely, students have the added advantage of receiving critical feedback from unbiased and objective reviewers prior to data collection, feedback they might otherwise not receive.

Furthermore, in our experience, much of the student-generated research that we encounter aims to replicate and extend previous findings. Therefore, we hope to promote and facilitate replication by providing students with the support they need to improve the quality and clarity of their contributions. We echo the call of Frank and Saxe (2012) and encourage instructors of experimental methods classes to challenge students to replicate recent effects in the psychological literature.

Challenges of Registered Reports for Student Research

We recognise two critical issues in offering Registered Reports for student research: lack of time and lack of power. Time is a critical factor in student research; for example, many students are unwilling or unable to wait for reviewer feedback if the end of the semester or graduation is fast approaching. It is for this reason that JEPS is dedicated to substantially accelerating the review process, and we have already taken steps in this direction. For example, we recently developed a Word template in APA format to shorten the often lengthy technical review. In addition, we are actively recruiting peer reviewers who are passionate about the potential of Registered Reports for student research. Lastly, we encourage students to keep introductions short, with the possibility of expanding them after in-principle acceptance.

The second issue relates to statistical power. With research in psychology already being underpowered (cf. Cohen, 1962; Fraley & Vazire, 2014), how can we demand that students perform costly, high-powered research by collecting a much larger sample than is standard? We cannot, and we will not. Luckily, there are additional ways to conduct adequately powered research. For example, students should consider the expected direction of the effect and subsequently conduct a one-sided test, which markedly increases power (see the sketch below). Because the direction is specified before data collection, the pre-registration ensures that this does not qualify as ad-hoc behaviour.
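As a rough illustration of this power gain, the following sketch computes the analytic power of an independent-samples t-test with one and two tails; the effect size and sample size are illustrative assumptions, not recommendations:

```python
# Analytic power of a two-sided vs. one-sided independent-samples t-test.
# Effect size, sample size, and alpha are illustrative assumptions.
import numpy as np
from scipy import stats

def t_test_power(d, n_per_group, alpha=0.05, one_sided=False):
    """Power of an independent-samples t-test for standardised effect d."""
    df = 2 * n_per_group - 2
    ncp = d * np.sqrt(n_per_group / 2)  # noncentrality parameter
    if one_sided:
        crit = stats.t.ppf(1 - alpha, df)
        return 1 - stats.nct.cdf(crit, df, ncp)
    crit = stats.t.ppf(1 - alpha / 2, df)
    # probability of landing in either rejection region
    return (1 - stats.nct.cdf(crit, df, ncp)) + stats.nct.cdf(-crit, df, ncp)

d, n = 0.5, 30  # medium effect, 30 participants per group
print(f"Two-sided power: {t_test_power(d, n):.2f}")
print(f"One-sided power: {t_test_power(d, n, one_sided=True):.2f}")
```

With these numbers, the one-sided test achieves noticeably higher power from the very same sample.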

Additionally, instead of employing a plethora of tests and measurements, students interested in pursuing confirmatory research should focus on a small, theoretically motivated and hypothesis-driven selection, increasing the efficiency of testing or experimental sessions.

Another recommendation is to use a sequential design, in which participants are tested in batches and data collection stops once enough evidence has accumulated, for example, once the Bayes factor in favour of a hypothesis exceeds 5 (or once the student runs out of resources); a toy illustration of such a stopping rule is sketched below. The increased flexibility provided by the Bayesian framework is well suited to student research, and students and instructors are encouraged to read up on this line of work (e.g., Schönbrodt et al., in press; Wagenmakers, Morey, & Lee, in press). However, even with indecisive evidence, a sound, pre-registered study can be important fuel for future meta-analyses (for an example, see Scheibehenne, Jamil, & Wagenmakers, in press).
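The following toy sketch tests participants in batches and stops once the evidence is sufficiently strong in either direction. For simplicity it approximates the Bayes factor via the BIC rather than the default Bayes factors discussed in the work cited above; the batch size, threshold, simulated effect, and maximum sample size are all illustrative assumptions:

```python
# Toy sequential design: collect data in batches and stop once the Bayes
# factor crosses a threshold. For simplicity, the Bayes factor for a
# one-sample t-test is computed via the BIC approximation; all settings
# below are illustrative assumptions, not JEPS requirements.
import numpy as np
from scipy import stats

def bf10_bic(x):
    """BIC-approximated Bayes factor for H1 (mean != 0) over H0 (mean = 0)."""
    n = len(x)
    t = stats.ttest_1samp(x, 0.0).statistic
    bf01 = np.sqrt(n) * (1 + t**2 / (n - 1)) ** (-n / 2)
    return 1 / bf01

rng = np.random.default_rng(seed=1)
threshold, batch_size, max_n = 5.0, 10, 200
data = np.array([])

while len(data) < max_n:
    data = np.append(data, rng.normal(0.4, 1.0, batch_size))  # true d = 0.4
    bf10 = bf10_bic(data)
    if bf10 > threshold or bf10 < 1 / threshold:  # evidence for H1 or H0
        break

print(f"Stopped at n = {len(data)} with BF10 = {bf10:.2f}")
```

In practice, the stopping rule itself would be part of the pre-registration, so that the sequential procedure remains confirmatory.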

Next Steps

The editorial board at JEPS, along with a growing segment of the scientific community (Chambers et al., 2013), believes that Registered Reports are a promising development in peer-reviewed scientific publishing. We are very excited to implement this format in order to incentivise “best practice” research amongst students as well as to prepare aspiring researchers for a new era of scientific publishing.

Competing Interests

The authors declare that they have no competing interests.