Objective: To realize automatic risk of bias assessment for randomized controlled trial (RCT) literature using BERT (Bidirectional Encoder Representations from Transformers) for feature representation and text classification. Methods: We first searched The Cochrane Library to obtain risk of bias assessment data and detailed information on RCTs, and constructed datasets for text classification. We assigned 80% of the dataset as the training set, 10% as the test set, and 10% as the validation set. We then used BERT to extract features, built text classification models, and classified the seven types of risk of bias judgements (high vs. low). The results were compared with those of a traditional machine learning method combining n-gram and TF-IDF features with a linear SVM classifier. Precision (P), recall (R) and F1 were used to evaluate model performance. Results: The BERT-based model achieved F1 values of 78.5% to 95.2% on the seven risk of bias assessment tasks, 14.7% higher than the traditional machine learning method. On the bias description extraction task, F1 values of 85.7% to 92.8% were obtained for the six bias types other than "other sources of bias", 18.2% higher than the traditional machine learning method. Conclusions: The BERT-based automatic risk of bias assessment model achieves higher accuracy in risk of bias assessment for RCT literature and improves the efficiency of assessment.
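The traditional baseline described above (word n-gram TF-IDF features fed to a linear SVM, scored by precision, recall and F1) can be sketched with scikit-learn as follows. This is a minimal sketch: the file name, column names, label values and the way the 80/10/10 split is produced are illustrative assumptions, not the authors' actual pipeline.

```python
# Hedged sketch of the n-gram + TF-IDF + linear SVM baseline described above.
import pandas as pd
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import LinearSVC
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import train_test_split
from sklearn.metrics import precision_recall_fscore_support

# Hypothetical dataset: one row per trial report, with the supporting text and a
# high/low label for a single bias domain (e.g. random sequence generation).
df = pd.read_csv("rob_sentences.csv")  # assumed columns: "text", "label"

# 80% train, 10% validation, 10% test (illustrative split).
X_train, X_tmp, y_train, y_tmp = train_test_split(
    df["text"], df["label"], test_size=0.2, random_state=42, stratify=df["label"])
X_val, X_test, y_val, y_test = train_test_split(
    X_tmp, y_tmp, test_size=0.5, random_state=42, stratify=y_tmp)

model = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2), min_df=2),  # unigram + bigram TF-IDF features
    LinearSVC(C=1.0),
)
model.fit(X_train, y_train)

p, r, f1, _ = precision_recall_fscore_support(
    y_test, model.predict(X_test), average="binary", pos_label="high")
print(f"precision={p:.3f} recall={r:.3f} F1={f1:.3f}")
```

The BERT-based model would replace the TF-IDF step with contextual sentence representations (e.g. a fine-tuned transformer encoder) while keeping the same split and the same precision/recall/F1 evaluation.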
RoB2 (2019 revised version), an authoritative tool for assessing the risk of bias in randomized controlled trials, has been updated and improved from the original version. This article elaborates and interprets the background and main content of RoB2 (2019 revised version), as well as the workflow of the new software. Compared with the previous version (2018 revised version), RoB2 (2019 revised version) offers richer content, more complete details, more precise questions, and simpler operation, and it is more user-friendly for researchers and beginners. It makes the risk of bias assessment of randomized controlled trials more comprehensive and accurate, and it is an authoritative, trustworthy, and widely used tool for evaluating the risk of bias in randomized controlled studies in medical practice.
The most common reason for rating up the quality of evidence is a large effect size. When methodologically rigorous observational studies show at least a 2-fold reduction or increase in risk, GRADE suggests considering rating up the quality of evidence by one level; when the risk is reduced or increased at least 5-fold, rating up by two levels may be considered. Systematic review authors and guideline developers may also consider rating up the quality of evidence when there is a dose-response gradient, when all plausible confounding or bias would reduce the apparent treatment effect, or when confounding or bias would suggest a spurious effect while the results show no effect. Other considerations include rapid onset of effect, the underlying trend of the disease (condition), and indirect evidence.
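As a rough illustration of the thresholds described above, the sketch below encodes the large-effect rule (at least a 2-fold change in risk: up one level; at least 5-fold: up two levels) together with the dose-response and plausible-confounding criteria. Treating each criterion as simply additive is a simplifying assumption for illustration only; GRADE calls for judgement rather than a mechanical tally, and the function name is hypothetical.

```python
# Hedged sketch of the GRADE rating-up thresholds described above (not official GRADE software).
def upgrade_levels(rr, dose_response=False, plausible_confounding=False):
    """Return how many levels a rigorous observational study might be rated up."""
    # Large effect: >= 2-fold change in risk -> +1 level; >= 5-fold -> +2 levels.
    fold_change = rr if rr >= 1 else 1 / rr   # treat protective effects symmetrically
    levels = 0
    if fold_change >= 5:
        levels += 2
    elif fold_change >= 2:
        levels += 1
    if dose_response:          # clear dose-response gradient
        levels += 1
    if plausible_confounding:  # residual confounding would only weaken the observed effect
        levels += 1
    return levels

print(upgrade_levels(0.4))                      # RR 0.4 = 2.5-fold risk reduction -> 1 level
print(upgrade_levels(6.0, dose_response=True))  # large effect plus gradient -> 3 levels
```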
The QUADAS-2, QUIPS, and PROBAST tools are not specific to prognostic accuracy studies, and using them to assess the risk of bias in such studies is itself prone to bias. Therefore, QUAPAS, a risk of bias assessment tool for prognostic accuracy studies, has recently been developed. The tool combines elements of QUADAS-2, QUIPS, and PROBAST, and consists of 5 domains, 18 signaling questions, 5 risk of bias questions, and 4 applicability questions. This paper introduces the content and usage of QUAPAS to provide inspiration and a reference for domestic researchers.
Selective non-reporting and publication bias of study results threaten the validity of systematic reviews and meta-analyses, thus affecting clinical decision making. Currently, there are no rigorous methods to evaluate this risk of bias in network meta-analyses. This paper introduces the main contents of ROB-MEN (risk of bias due to missing evidence in network meta-analysis), including the tool's tables, operation process and signaling questions. The pairwise comparisons table and the ROB-MEN table are the core of the tool. ROB-MEN can be applied to very large and complex networks involving many interventions without a time-consuming and labor-intensive process, and it has the advantages of clear logic, complete details and good applicability. It is the first tool for evaluating the risk of bias due to missing evidence in network meta-analysis and is useful to researchers, and thus worth popularizing and applying.
The COSMIN community updated the COSMIN-RoB checklist on reliability and measurement error in 2021. The updated checklist can be applied to the assessment of studies on all types of outcome measurement instruments, including clinician-reported outcome measures (ClinPOMs), performance-based outcome measurement instruments (PerFOMs), and laboratory values. To help readers better understand and apply the updated checklist and to provide methodological references for conducting systematic reviews of ClinPOMs, PerFOMs and laboratory values, this paper interprets the updated COSMIN-RoB checklist for reliability and measurement error studies.
This paper summarizes the methodological quality assessment tools for artificial intelligence-based diagnostic test accuracy studies, introducing QUADAS-AI and the modified QUADAS-2. It also summarizes the reporting guidelines for such studies, introducing specific reporting standards for AI-centred research and the checklist for AI in dental research.
Evidence synthesis is the process of systematically gathering, analyzing, and integrating available research evidence. The quality of evidence synthesis depends on the quality of the original studies included. Validity assessment, also known as risk of bias assessment, is an essential method for assessing the quality of these original studies. Numerous validity assessment tools are currently available, but some lack a rigorous development process and evaluation. Applying inappropriate validity assessment tools to assess the quality of original studies during evidence synthesis may compromise the accuracy of study conclusions and mislead clinical practice. To address this dilemma, the LATITUDES Network, a one-stop resource website for validity assessment tools, was established in September 2023, led by academics at the University of Bristol, UK. The Network is dedicated to collecting, curating and promoting validity assessment tools to improve the accuracy of validity assessments of original studies and to increase the robustness and reliability of the results of evidence synthesis. This study introduces the background of the establishment of the LATITUDES Network, the validity assessment tools it includes, and the training resources for using those tools, in order to help domestic scholars learn more about the Network, better select appropriate validity assessment tools for study quality assessment, and draw on it as a reference for developing validity assessment tools.
With the rapid development of artificial intelligence (AI) and machine learning technologies, the development of AI-based prediction models has become increasingly prevalent in the medical field. However, the PROBAST tool, which is used to evaluate prediction models, has shown growing limitations when assessing models built on AI technologies. Therefore, Moons and colleagues updated and expanded PROBAST to develop the PROBAST+AI tool. This tool is suitable for evaluating prediction model studies based on both artificial intelligence methods and regression methods. It covers four domains: participants and data sources, predictors, outcomes, and analysis, allowing for systematic assessment of quality in model development, risk of bias in model evaluation, and applicability. This article interprets the content and evaluation process of the PROBAST+AI tool, aiming to provide references and guidance for domestic researchers using this tool.
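As a small illustration of how the four domains named above might be recorded during a review, the sketch below stores a per-domain judgement and derives an overall rating. The aggregation rule used here (high if any domain is high, low only if all domains are low) mirrors the original PROBAST convention and is an assumption, not a quotation of the PROBAST+AI guidance; the class and model names are hypothetical.

```python
# Hedged sketch: recording per-domain PROBAST+AI judgements for a single model study.
from dataclasses import dataclass, field

DOMAINS = ("participants and data sources", "predictors", "outcomes", "analysis")

@dataclass
class ProbastAiAssessment:
    model_name: str
    ratings: dict = field(default_factory=dict)  # domain -> "low" | "high" | "unclear"

    def rate(self, domain: str, judgement: str) -> None:
        if domain not in DOMAINS:
            raise ValueError(f"unknown domain: {domain}")
        self.ratings[domain] = judgement

    def overall(self) -> str:
        # Assumed aggregation: any "high" domain -> high; all "low" -> low; else unclear.
        values = [self.ratings.get(d, "unclear") for d in DOMAINS]
        if "high" in values:
            return "high"
        return "low" if all(v == "low" for v in values) else "unclear"

assessment = ProbastAiAssessment("hypothetical sepsis prediction model")
for domain in DOMAINS:
    assessment.rate(domain, "low")
print(assessment.overall())  # -> "low"
```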
Objective: To evaluate whether and to what extent the new risk of bias (ROB) tool has been used in Cochrane systematic reviews (CSRs) on acupuncture. Methods: We searched the Cochrane Database of Systematic Reviews (CDSR), Issue 12, 2011. Two reviewers independently selected CSRs that primarily focused on acupuncture and moxibustion. Data on essential information, ROB domains (sequence generation, allocation concealment, blinding, incomplete outcome data, selective reporting and other potential sources of bias) and GRADE were then extracted and statistically analyzed. Results: In total, 41 CSRs were identified, of which 19 were updated reviews; 33 were published between 2009 and 2011. 60.98% of the reviews used the Cochrane Handbook as their ROB assessment tool. Most CSRs reported information on sequence generation, allocation concealment, blinding, and incomplete outcome data; however, about half of them (54.55%, 8/69) showed selective reporting or other potential sources of bias. Conclusion: "Risk of bias" tools have been used in most CSRs on acupuncture since 2009; however, some evaluation items are still lacking.