Toward Assessing Clinical Trial Publications for Reporting Transparency.
Journal of Biomedical Informatics
OBJECTIVE: To annotate a corpus of randomized controlled trial (RCT) publications with the checklist items of the CONSORT reporting guidelines and to use the corpus to develop text mining methods for RCT appraisal.
METHODS: We annotated a corpus of 50 RCT articles at the sentence level using 37 fine-grained CONSORT checklist items. A subset (31 articles) was double-annotated and adjudicated, while 19 were annotated by a single annotator and reconciled by another. We calculated inter-annotator agreement at the article and section levels using MASI (Measuring Agreement on Set-Valued Items) and at the CONSORT item level using Krippendorff's alpha. We experimented with two rule-based methods (phrase-based and section header-based) and two supervised learning approaches (support vector machine and BioBERT-based neural network classifiers) for recognizing 17 methodology-related items in the RCT Methods sections.
RESULTS: We created CONSORT-TM, consisting of 10,709 sentences, 4,845 (45%) of which were annotated with 5,246 labels. A median of 28 CONSORT items (out of a possible 37) were annotated per article. Agreement was moderate at the article and section levels (average MASI: 0.60 and 0.64, respectively) but varied considerably among individual checklist items (Krippendorff's alpha = 0.06-0.96). The model based on BioBERT performed best overall for recognizing methodology-related items (micro-precision: 0.82, micro-recall: 0.63, micro-F1: 0.71). Combining models using majority vote and label aggregation further improved precision and recall, respectively.
CONCLUSION: Our annotated corpus, CONSORT-TM, contains more fine-grained information than earlier RCT corpora. The low frequency of some CONSORT items made it difficult to train effective text mining models to recognize them. For the items commonly reported, CONSORT-TM can serve as a testbed for text mining methods that assess RCT transparency, rigor, and reliability, and can support methods for peer review and authoring assistance. Minor modifications to the annotation scheme and a larger corpus could facilitate improved text mining models. CONSORT-TM is publicly available at https://github.com/kilicogluh/CONSORT-TM.
View details for DOI 10.1016/j.jbi.2021.103717
View details for PubMedID 33647518
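The article- and section-level agreement measure named in the abstract, MASI, weights Jaccard similarity between two annotators' label sets by how closely one set subsumes the other. The sketch below is an illustrative reimplementation of the published MASI definition, not code from the paper; the CONSORT item labels in the example are hypothetical.

```python
def masi_similarity(a: set, b: set) -> float:
    """MASI (Measuring Agreement on Set-Valued Items):
    Jaccard similarity scaled by a set-monotonicity factor."""
    if not a and not b:
        return 1.0  # two empty label sets agree perfectly
    jaccard = len(a & b) / len(a | b)
    if a == b:
        m = 1.0      # identical label sets
    elif a <= b or b <= a:
        m = 2 / 3    # one set subsumes the other
    elif a & b:
        m = 1 / 3    # sets overlap, neither subsumes the other
    else:
        m = 0.0      # disjoint sets
    return jaccard * m

# Hypothetical example: two annotators' CONSORT item sets for one article
ann1 = {"3a", "3b", "4a", "7a"}
ann2 = {"3a", "4a", "7a", "8a"}
score = masi_similarity(ann1, ann2)  # Jaccard 3/5 = 0.6, overlap factor 1/3 -> 0.2
```

Averaging such pairwise scores over articles (or sections) yields summary figures like the 0.60 and 0.64 reported in the abstract.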
Analysis of single comments left for bioRxiv preprints till September 2019.
2021; 31 (2): 020201
While early commenting on studies is seen as one of the advantages of preprints, the types of such comments, and the people who post them, have not been systematically explored. We analysed comments posted between 21 May 2015 and 9 September 2019 for 1,983 bioRxiv preprints that received only one comment on the bioRxiv website. The comment types were classified by three coders independently, with all differences resolved by consensus. Our analysis showed that 69% of comments were posted by non-authors (N = 1,366) and 31% by the preprints' authors themselves (N = 617). Twelve percent of non-author comments (N = 168) were full review reports of the kind traditionally produced during journal review, while the rest most commonly contained praise (N = 577, 42%), suggestions (N = 399, 29%), or criticisms (N = 226, 17%). Authors' comments most commonly contained publication status updates (N = 354, 57%), additional study information (N = 158, 26%), or solicited feedback on the preprints (N = 65, 11%). Our results indicate that comments posted for bioRxiv preprints may benefit both the public and the scholarly community. Further research is needed to measure the direct impact of these comments on journal peer reviewers' comments, subsequent preprint versions, or journal publications.
View details for DOI 10.11613/BM.2021.020201
View details for PubMedID 33927548
View details for PubMedCentralID PMC8047782
Attitudes and practices of open data, preprinting, and peer-review: A cross-sectional study on Croatian scientists.
2021; 16 (6): e0244529
Attitudes towards open peer review, open data, and the use of preprints influence scientists' engagement with those practices, yet validated questionnaires that measure these attitudes are lacking. The goal of our study was to construct and validate such a questionnaire and use it to assess the attitudes of Croatian scientists. We first developed a 21-item questionnaire called Attitudes towards Open data sharing, preprinting, and peer-review (ATOPP), which had a reliable four-factor structure and measured attitudes towards open data, preprint servers, open peer review, and open peer review in small scientific communities. We then used the ATOPP to explore the attitudes of Croatian scientists (n = 541) towards these topics and to assess the association of their attitudes with their open science practices and demographic information. Overall, Croatian scientists' attitudes towards these topics were generally neutral, with a median (Md) score of 3.3 out of a maximum of 5. We found no gender (P = 0.995) or field (P = 0.523) differences in attitudes. However, the attitudes of scientists who had previously engaged in open peer review or preprinting were higher than those of scientists who had not (Md 3.5 vs. 3.3, P < 0.001, and Md 3.6 vs. 3.3, P < 0.001, respectively). Further research is needed to determine optimal ways of improving scientists' attitudes and increasing their engagement in open science practices.
View details for DOI 10.1371/journal.pone.0244529
View details for PubMedID 34153041
Preprint Servers' Policies, Submission Requirements, and Transparency in Reporting and Research Integrity Recommendations.
JAMA 2020; 324 (18): 1901-3
The worldwide clinical trial research response to the COVID-19 pandemic - the first 100 days.
2020; 9: 1193
Background: Never before have clinical trials drawn as much public attention as those testing interventions for COVID-19. We aimed to describe the worldwide COVID-19 clinical research response and its evolution over the first 100 days of the pandemic. Methods: Descriptive analysis of planned, ongoing, or completed trials (as of April 9, 2020) testing any intervention to treat or prevent COVID-19, systematically identified in trial registries, preprint servers, and literature databases. A survey of all trials assessed their recruitment status up to July 6, 2020. Results: Most of the 689 trials (overall target sample size 396,366) were small (median sample size 120; interquartile range [IQR] 60-300) but randomized (75.8%; n = 522), and were most often conducted in China (51.1%; n = 352) or the USA (11%; n = 76). 525 trials (76.2%) planned to include 155,571 hospitalized patients, and 25 (3.6%) planned to include 96,821 health-care workers. Treatments were evaluated in 607 trials (88.1%), most frequently antivirals (n = 144) or antimalarials (n = 112); 78 trials (11.3%) focused on prevention, including 14 vaccine trials. No trial investigated social distancing. Interventions tested in 11 trials with >5,000 participants were also tested in 169 smaller trials (median sample size 273; IQR 90-700). Hydroxychloroquine alone was investigated in 110 trials. While 414 trials (60.0%) expected completion in 2020, only 35 (4.1%; 3,071 participants) had been completed by July 6. Of 112 trials with detailed recruitment information, 55 had recruited <20% of their target sample, 27 between 20% and 50%, and 30 over 50% (median 14.8% [IQR 2.0-62.0%]). Conclusions: The size and speed of the COVID-19 clinical trial agenda are unprecedented. However, most trials were small, investigating only a small fraction of the treatment options. The feasibility of this research agenda is questionable, and many trials may end in futility, wasting research resources.
Much better coordination is needed to respond to global health threats.
View details for DOI 10.12688/f1000research.26707.1
View details for PubMedID 33082937