Introduction

Over the last three decades, there has been a significant surge in the prevalence of neurological disorders across the United States [1]. With an estimated 100 million Americans affected by over a thousand different neurological and neurodegenerative diseases, these patients rely on high-quality clinical neurology research to improve current treatment interventions [2, 3]. These individuals not only experience challenges to their quality of life but also bear a significant financial burden – costing Americans approximately $800 billion in medical expenses [2]. Considering the severity and rapidly growing expenses associated with neurological disorders, evidence-based research has become increasingly invaluable for alleviating these burdens [2]. Despite substantial government funding [4], neurological research may not achieve its maximum potential due to poor research reporting practices in the field – including limitations in reproducibility, transparency, and selective outcome reporting [5, 6]. These limitations often result in misleading conclusions and contribute to outcome reporting that is difficult to interpret [7,8,9,10]. By addressing these shortcomings in clinical neurology research, scientific journals can ensure that only high-quality studies are published – ultimately leading to improved patient outcomes, elimination of harmful interventions, reduced research waste, and alleviation of rising healthcare costs [5, 11, 12].

One approach to improving research quality is the use of reporting guidelines (RGs) by prospective authors before publishing their work. Reporting guidelines serve as structured checklists that promote standardization of reported data in the literature [13]. Previous studies have demonstrated that correct use of RGs can improve the quality of research and reduce the risk of bias [14,15,16,17]. For instance, Moher et al. found that adherence to the Consolidated Standards of Reporting Trials (CONSORT) statement led to improved reporting of randomized controlled trials (RCTs) in the majority of the journals analyzed [14]. Likewise, Nawijn et al. found that academic journals that endorsed the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) checklist exhibited higher-quality reporting than journals that did not [16]. To improve the accessibility and discoverability of RGs, the Enhancing the Quality and Transparency of Health Research (EQUATOR) Network developed a database that compiles RGs across various study designs [18]. This database contains over 500 RGs, and the EQUATOR Network provides education and training to ensure effective RG use and awareness in clinical research [18, 19]. Despite these efforts, RG adoption across various fields of medicine remains insufficient. For instance, Innocenti et al. found that a small percentage of high-impact rehabilitation journals acknowledged the use of RGs, and an even smaller percentage properly adhered to the stated RGs [20]. Additionally, Tan et al. identified inconsistencies in RG endorsement among high-impact general surgery and vascular surgery journals, demonstrating the need for improvement [21]. Poor reporting practices can compromise research quality; identifying gaps in RG adherence is therefore a critical step toward producing high-quality research.

Another method for improving research reporting is the implementation of public registries for RCTs. Registering a trial before the study begins prevents selective outcome reporting, which improves the transparency of a study’s results and leads to more reliable research [22,23,24,25]. For these reasons, the World Health Organization (WHO) promotes trial registration, and the International Committee of Medical Journal Editors (ICMJE) requires registration of all RCTs prior to patient enrollment [26, 27]. However, promoting the adoption of rigorous research practices requires proper enforcement of trial registration. Journals may encourage implementation by requiring or recommending study registration on their Instructions for Authors webpages; however, previous studies have identified significant gaps in trial registration and its enforcement across various fields of medicine [28,29,30].

The extent to which clinical neurology journals advocate for the use of RGs and clinical trial registration is currently unclear. The purpose of this study was to evaluate the publishing policies of the leading clinical neurology journals regarding RGs and trial registration. Our aim was to understand the degree to which journals endorse these policies and to identify areas for improvement.

Methods

Study design

We conducted a cross-sectional analysis of the top neurology journals, reported in accordance with the Strengthening The Reporting of Observational Studies in Epidemiology (STROBE) checklist [31]. Data was obtained directly from each journal’s Instructions for Authors webpage.

Standard protocol approvals, registrations, and patient consents

Our study did not include human participants; therefore, Institutional Review Board oversight was not required.

Search strategy

On November 18, 2022, eligible journals were identified through consultation between one investigator (CAS) and a medical research librarian. Journal listings were supplied by the 2021 Scopus CiteScore tool using the website’s “Neurology” subject area [32]. The CiteScore for a given year is calculated by dividing the number of citations received in the previous four years by the number of publications in the same four-year period:

$$\mathrm{CiteScore}_{2021} = \frac{\text{Citations in 2018–2021}}{\text{Publications in 2018–2021}}$$

This metric provides a comprehensive measure of a journal’s citation impact, reflecting its influence and reach within the academic community. Identified journal listings were cross-checked using Google Scholar Metrics h5-index’s “Neurology” category, which confirmed the top twenty journals found by Scopus [33].
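As a worked illustration of this formula, the following R snippet (R being the language used for our analysis scripts) computes a CiteScore from citation and publication counts; both figures here are invented for demonstration, as the real values come from Scopus.

```r
# Hypothetical example of the CiteScore calculation (made-up counts).
citations_2018_2021    <- 12500  # citations received in 2018-2021
publications_2018_2021 <- 1600   # documents published in 2018-2021

citescore_2021 <- citations_2018_2021 / publications_2018_2021
citescore_2021
#> [1] 7.8125
```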

Eligibility

We evaluated the top 100 peer-reviewed academic journals in the “Neurology” subject area according to the 2021 Scopus CiteScore tool. For journals with non-English websites, we used Google Translate, which has been shown to be reliable for extracting data from non-English articles [34, 35].

Exclusion criteria

We excluded journals from our study if they met any of the following criteria: (i) had been discontinued, (ii) did not provide contact information for the editorial office on their website, as we sought to limit bias by giving editors the opportunity to elaborate on their publication policies, (iii) were academic books, as these merely summarize current research, or (iv) did not accept any of the study designs assessed in this study. When a journal was excluded during the initial screening process, the next journal identified by the Scopus CiteScore tool was added to maintain a sample size of 100. Exclusions and their rationale are provided in Fig. 1.

Investigator training

The two investigators (AVT, JKS) received instruction from CAS on the data collection process prior to the study’s initiation. After discussing the scope, rationale, and methods, both investigators extracted data from five journals that were not included in the study sample. To ensure methodological consistency and accuracy of the recorded data, this training extraction was performed in a masked, duplicate fashion. If warranted, an additional set of five journals would have been provided for further practice. Once consensus was reached during the training session, the two investigators began extracting data from the generated study sample.

Data collection process

Two investigators extracted data from the Instructions for Authors webpage of each included journal in a masked, duplicate fashion. Data was collected using a standardized Google Form – designed a priori by investigators CAS, DN, and MV. After data extraction was completed, data was reconciled in an unmasked manner. Any discrepancies that could not be resolved by the two investigators were resolved by a third investigator (ZE).

Data items

Data extracted from each journal included: whether the journal’s editorial office responded to our email, journal title, five-year impact factor, mention of the EQUATOR Network in the Instructions for Authors, mention of the ICMJE in the Instructions for Authors, and geographical zone of publication (e.g., North America, South America, Europe, Asia). Statements pertaining to study registration in databases – e.g., ClinicalTrials.gov, WHO, and PROSPERO (the International Prospective Register of Systematic Reviews) – were also extracted for each journal. From each journal’s Instructions for Authors webpage, statements regarding the use of popular RGs were extracted. A description of these RGs and their respective study designs can be found in Table 1.

Table 1 Reporting guidelines and study designs

Data points for each guideline or trial registry were recorded as “not mentioned,” “required,” or “recommended” for each journal. When the verbiage within the Instructions for Authors included words or phrases such as “required,” “must,” “need,” “mandatory,” or “studies will not be considered for publication unless…,” we recorded the item as “required.” We recorded “recommended” when words or phrases such as “recommended,” “should,” “preferred,” or “encouraged” were used. Study investigators resolved unclear verbiage during data reconciliation. If a journal referred to the EQUATOR Network as the source for proper guideline usage instead of listing specific guidelines in its Instructions for Authors webpage, we assumed that the journal endorsed the relevant RG for each study design.
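To make this coding scheme concrete, the sketch below shows how it could be expressed in R. Coding in the study itself was performed manually by two investigators; the function and its keyword lists are hypothetical simplifications of the decision rules described above.

```r
# Illustrative sketch of the verbiage-coding scheme (not the actual
# study procedure, which was manual with masked duplicate extraction).
classify_policy <- function(text) {
  required_terms    <- c("required", "must", "need", "mandatory",
                         "will not be considered for publication unless")
  recommended_terms <- c("recommended", "should", "preferred", "encouraged")
  text <- tolower(text)
  if (any(vapply(required_terms, grepl, logical(1), x = text, fixed = TRUE))) {
    "required"
  } else if (any(vapply(recommended_terms, grepl, logical(1), x = text, fixed = TRUE))) {
    "recommended"
  } else {
    "not mentioned"
  }
}

classify_policy("Authors must adhere to the CONSORT checklist.")  # "required"
classify_policy("Use of the PRISMA checklist is encouraged.")     # "recommended"
```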

To assess journals fairly with respect to study designs they do not accept for publication, a standardized email was sent to the editorial staff of each journal in our sample asking which of the study designs listed in Table 1 the journal accepted. We repeated this email once per week for three consecutive weeks to increase response rates [36]. If no response was received during that time, we assumed that all relevant study designs were accepted and examined the journal’s Instructions for Authors for all the previously mentioned data points.

Outcomes

The primary outcome was the proportion of journals that required or recommended the use of popular RGs for each evaluated study design. The secondary outcome was the proportion of journals that required or recommended the registration of RCTs.

Statistical methods

We used R (version 4.2.1) and RStudio to descriptively summarize the collected data. Descriptive statistics included: (i) the frequencies and percentages of journals requiring or recommending each RG and (ii) the frequencies and percentages of journals requiring or recommending clinical trial registration. Because this study was a direct analysis of journal webpages, analyses for risk of bias were not warranted.
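For illustration, a minimal R sketch of this descriptive summary is shown below, assuming a hypothetical data frame `journals` with one row per journal and a `consort` column coded as described above; the actual analysis scripts are available on OSF [37].

```r
library(dplyr)

# Hypothetical stand-in for the extracted data.
journals <- data.frame(
  consort = c("required", "recommended", "recommended", "not mentioned")
)

# Frequencies and percentages of CONSORT policy categories.
journals |>
  count(consort) |>
  mutate(percent = round(100 * n / sum(n), 1))
#>         consort n percent
#> 1 not mentioned 1    25.0
#> 2   recommended 2    50.0
#> 3      required 1    25.0
```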

Reproducibility

This study was conducted based upon a protocol designed a priori. To ensure transparency and reproducibility of our study, we uploaded the protocol, raw data, extraction forms, STROBE checklist, analysis scripts, and standardized email prompts to Open Science Framework (OSF) [37].

Results

Our initial screening identified 356 clinical neurology journals using the 2021 Scopus CiteScore tool. We selected the top 100 journals based on the highest five-year impact factors. Four journals were excluded: two had been discontinued, and two did not accept the study designs under investigation. Following our protocol, we included the next four journals identified by the Scopus CiteScore tool to replace those that were excluded (Fig. 1).

Fig. 1 PRISMA flow diagram of journal selection process

Our analysis consisted of 100 clinical neurology journals with five-year impact factors ranging from 2.226 to 50.844 (mean [SD], 7.82 [7.01]). Following review of the Instructions for Authors and editorial staff email responses (response rate, 60/100; 60.0%), the following RGs were excluded from proportion calculations for the journals that did not accept the corresponding study type: QUOROM (1/100; 1.0%), PRISMA (1/100; 1.0%), STARD (2/100; 2.0%), ARRIVE (4/100; 4.0%), CARE (8/100; 8.0%), CHEERS (3/100; 3.0%), SRQR (6/100; 6.0%), SQUIRE (5/100; 5.0%), SPIRIT (8/100; 8.0%), COREQ (6/100; 6.0%), TRIPOD (2/100; 2.0%), and PRISMA-P (9/100; 9.0%).

Reporting guidelines

In our sample, the EQUATOR Network was mentioned by 52 of 100 journals (52.0%). Additionally, 85 of 100 (85.0%) referenced the ICMJE Uniform Requirements for Manuscripts. Twenty-five journals (25/100; 25.0%) did not mention any RGs within their Instructions for Authors. The most frequently mentioned RG was CONSORT (64/99; 64.6%), followed by PRISMA (52/99; 52.5%) and ARRIVE (51/96; 53.1%). Of the journals that mentioned the CONSORT guideline, 22 (22/64; 34.4%) required adherence and 42 (42/64; 65.6%) recommended adherence. For PRISMA, 12 (12/52; 23.1%) journals required adherence and 40 (40/52; 76.9%) recommended adherence. For ARRIVE, 6 (6/51; 11.8%) required adherence and 45 (45/51; 88.2%) recommended adherence. The least mentioned RG was QUOROM (1/99; 1.0%), followed by MOOSE (9/100; 9.0%) and SQUIRE (17/95; 17.9%). The single journal that mentioned the QUOROM guideline recommended, but did not require, adherence. Of the journals that mentioned MOOSE, 2 (2/9; 22.2%) required adherence and 7 (7/9; 77.8%) recommended adherence. For SQUIRE, 1 (1/17; 5.9%) journal required adherence and 16 (16/17; 94.1%) recommended adherence. Independent data for all journals in our sample can be found in Supplementary Table 1.

Clinical trial registration

Of the 99 neurology journals in our sample that accepted clinical trials, 66.7% (66/99) mentioned clinical trial registration. Of those journals, 54 (54/66; 81.8%) required registration and 11 (11/66; 16.7%) recommended it. Of the 52 (52/100; 52.0%) journals that mentioned the EQUATOR Network, 43 (43/52; 82.7%) required trial registration, 5 (5/52; 9.6%) recommended trial registration, 3 (3/52; 5.8%) did not mention trial registration, and 1 (1/52; 1.9%) did not accept clinical trials.

Discussion

Our study found that among the top 100 clinical neurology journals, a quarter (25.0%) did not mention a single RG in their Instructions for Authors webpage, and a third (33.3%) did not mention any clinical trial registration policies. Our findings are consistent with previous research conducted in other medical disciplines [20, 21, 38] and further highlight the inadequate endorsement of RGs and clinical trial registration policies. The gap in RG adherence within clinical neurology impedes research quality and promotes misinformation – limiting potential advancements, contributing to poorer patient outcomes, and increasing patients’ financial burdens. Furthermore, our findings emphasize a greater need for journals to implement proactive measures that encourage authors to adhere to RGs and registration requirements, ultimately improving the quality of clinical neurology research and reducing the possibility of biased reporting [39].

Insufficient endorsement of RGs and trial registration has been studied extensively across multiple medical specialties. Our findings within clinical neurology are consistent with the current RG literature regarding inadequate reporting in clinical research [40]. For instance, a prior study examining orthopedic surgery journals found a lack of RG and trial registration requirements in that field, indicating inadequate reporting practices [41]. Sims et al. conducted a similar study and found that almost half of critical care journals did not advocate for the use of any RGs – further supporting the idea that inadequate reporting requirements are a prevalent problem across medical specialties [28]. However, Wayant et al. found that among oncology journals, only 4.8% did not endorse any RGs or trial registration requirements, indicating a higher level of support for RGs and trial registration within that field [42]. Although there is existing evidence supporting the increased incorporation of RGs and trial registration within certain specialties, our findings suggest that significant improvements are still warranted within clinical neurology. Suboptimal endorsement of RGs and trial registration policies to any degree can hinder the quality of research and promote harmful research practices.

Failure to prospectively register a clinical trial can lead to reporting bias, which compromises the integrity of evidence-based research and could potentially harm trial participants. Journal editors themselves have acknowledged that prospective trial registration is the most effective tool for promoting unbiased reporting [43]. To address this issue and encourage prospective registration, the U.S. Food and Drug Administration (FDA) introduced FDAAA 801, which requires applicable clinical trials to be registered prior to initiation – minimizing the risks associated with reporting bias [44]. Despite these proactive measures, non-compliance with registration policies and selective outcome reporting continue to be significant problems. Mathieu et al. found that fewer than half of the trials assessed were adequately registered prior to completion [38]. Additionally, over 30% of the adequately registered trials showed discrepancies between registered and published outcomes [38]. These findings highlight a concerning reality within clinical research: a lack of accountability results in diminished research quality. In our study, we found that only 54.5% (54/99) of neurology journals required trial registration, suggesting a reluctance to fully endorse sufficient registration requirements. This disinclination is concerning because journals have both a professional duty to guide authors toward conducting comprehensive research and an ethical obligation to protect the well-being of patients and trial participants. After evaluating current research and comparing it with our findings, we believe that a greater portion of this burden lies with journals than with authors.

Based on our findings, we suggest that journals encourage authors to submit a completed checklist verifying RG adherence before their work is accepted for publication. Additionally, journal editors should consider providing constructive feedback to authors whose submissions do not adequately meet the journal’s expectations. We also recommend that journals update their Instructions for Authors webpages regularly so that authors can easily locate the journal’s expectations regarding RGs, the EQUATOR Network, and clinical trial registration. Furthermore, journals that do not currently endorse RGs or registration policies should incorporate the EQUATOR Network into their Instructions for Authors to help authors and reviewers report and evaluate scientific research.

Our study had several strengths. First, we conducted all screening and data extraction in a masked, duplicate fashion, an approach recommended by the Cochrane Collaboration to mitigate the potential for bias and errors [45]. Second, we followed a protocol developed a priori to ensure adequate and clear reporting of observational studies [31, 42, 46]. Third, to promote transparency and reproducibility, our protocol, raw data, extraction forms, STROBE checklist, analysis scripts, and standardized email prompts were uploaded to OSF [37]. However, our study also has limitations. Some journal editors did not respond to our inquiries, making it difficult to confirm whether certain study designs were accepted for publication. Additionally, some webpages were outdated, leaving uncertainty as to whether particular study designs were still accepted. Notably, prospective authors may face similar difficulties when trying to understand journal expectations before submitting their articles. Although higher-impact journals might be expected to publish higher-quality articles, our study did not directly assess authors’ uptake of RGs or trial registration post-publication, nor did it include a statistical analysis correlating journal impact factor with the quality of published articles. This limitation restricts our ability to determine conclusively whether higher-impact journals achieve better rates of RG reporting and mandatory trial registration. Future research should quantitatively assess this correlation to provide more concrete evidence of the impact factor’s influence on research quality. Lastly, given the cross-sectional nature of this study, our findings may not be generalizable to other fields of medicine and should be interpreted within this context.

Conclusion

In conclusion, our study found that a majority of the top 100 clinical neurology journals required or recommended adherence to RGs and trial registration policies. However, our analysis also identified significant shortcomings in journal compliance with these standards. To address this, we recommend that journals adopt proactive publishing policies that encourage adherence to RGs and trial registration. Ultimately, promoting these policies may improve research quality in the field of clinical neurology, resulting in better outcomes for patients.