West China Medical Publishers
Keyword search "Credibility": 3 results
  • Credibility assessment of meta-analysis evidence on genetic association study

    Meta-analysis has become a common approach to summarizing genetic associations across the tremendous amount of published epidemiological evidence, and assessing the credibility of meta-analysis evidence on genetic association is a rapidly growing challenge. This paper illustrates how to assess such credibility using the Venice criteria. A semi-quantitative index assigns one of three grades to each of three considerations: the amount of evidence, the extent of replication, and protection from bias. The three grades are then merged by a grading scheme into one of three composite assessments: weak, moderate, or strong. Credibility assessment is necessary to estimate whether a true genetic association exists; it indicates directions for further study and is of clinical importance.

    Release date: 2018-08-14
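
The composite grading scheme described in the abstract can be sketched in code. The mapping below is an illustrative assumption, not taken verbatim from the paper: it uses one A/B/C grade per criterion, with any C yielding "weak", all A yielding "strong", and anything in between yielding "moderate".

```python
# Sketch of a Venice-style composite credibility grade (illustrative
# assumption: A = best, C = worst for each of the three criteria).

def venice_composite(amount: str, replication: str, bias_protection: str) -> str:
    """Merge three per-criterion grades into one composite assessment.

    Any C in the three grades gives "weak"; all A gives "strong";
    any other combination of A and B gives "moderate".
    """
    grades = {amount, replication, bias_protection}
    if not grades <= {"A", "B", "C"}:
        raise ValueError("each grade must be 'A', 'B', or 'C'")
    if "C" in grades:
        return "weak"
    if grades == {"A"}:
        return "strong"
    return "moderate"
```

For example, a meta-analysis with a large amount of evidence (A), consistent replication (A), but poor protection from bias (C) would be rated "weak" under this sketch.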
  • Evidence certainty grading of network meta-analysis: method update and case application

    Network meta-analysis (NMA) is a method that can compare and rank the effects of different interventions, and it plays an important role in evidence translation and evidence-based decision-making. In 2014, the GRADE working group first introduced the GRADE method for grading the certainty of NMA evidence, and its methodology has since been gradually supplemented and improved. In recent years, the GRADE working group has further refined the methods for evaluating intransitivity and imprecision in NMA and has made recommendations for the presentation and interpretation of NMA results, forming a complete six-step methodological chain for NMA evidence certainty grading and result interpretation. Our team updates the GRADE method system as applied to NMA, with specific cases, to provide a reference for relevant researchers.

  • Interpretation of credibility evaluation tools for minimal important difference in patient-reported outcomes based on anchoring methods

    The estimation of the minimal important difference (MID) in patient-reported outcomes (PRO) relies on various selection principles and statistical methodologies, so the credibility of MID studies varies; when applying their findings, it is crucial to consider how that credibility was evaluated. For the widely accepted MID studies based on the anchoring method, the credibility of a PRO MID is influenced by the choice of anchors and by the statistical methods used for estimation. Variation in the anchors used, differences in clinical trial designs, disparities in the characteristics of measurement subjects and settings, and the control of bias can all contribute to inconsistencies in reported MIDs. In response, McMaster University in Canada has developed a credibility evaluation tool specifically for MID studies of PROs. The tool comprises five core items and four additional items. The five core items ask:
    (1) Is the patient or necessary proxy responding directly to both the PRO and the anchor?
    (2) Is the anchor easily understandable and relevant for patients or necessary proxies?
    (3) Has the anchor shown good correlation with the PRO?
    (4) Is the MID precise?
    (5) Does the threshold or difference between groups on the anchor used to estimate the MID reflect a small but important difference?
    The four additional items, which concern transition-rated anchors, ask:
    (1) Is the amount of elapsed time between the baseline and follow-up measurements used for MID estimation optimal?
    (2) Does the transition item have a satisfactory correlation with the PRO score at follow-up?
    (3) Does the transition item correlate with the PRO score at baseline?
    (4) Is the correlation of the transition item with the PRO change score appreciably greater than its correlation with the PRO score at follow-up?
    Because the relative weights of the items are uncertain and depend on the setting, the items are not scored; instead, an overall judgment is made using a qualitative rating approach. This article introduces the specific items of the tool and illustrates the evaluation process through a case study, to improve its use in optimizing the presentation and interpretation of PRO results in clinical trials, reviews, assessments, and guidelines.
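
To illustrate the deliberately unscored, qualitative nature of the tool, the sketch below records yes/no/unclear answers to the five core items and lists the items that raise concern, without computing a numeric score. The item identifiers and the summary rule are hypothetical conveniences for this sketch, not part of the official McMaster tool.

```python
# Hypothetical bookkeeping for the five core credibility items; the
# official tool leaves the overall judgment to qualitative rating.

CORE_ITEMS = [
    "patient_or_proxy_responds_to_both_PRO_and_anchor",
    "anchor_understandable_and_relevant",
    "anchor_correlates_well_with_PRO",
    "MID_estimate_precise",
    "anchor_threshold_reflects_small_but_important_difference",
]

def summarise_core_items(answers: dict[str, str]) -> str:
    """List concerns qualitatively; any answer other than "yes" counts."""
    concerns = [item for item in CORE_ITEMS
                if answers.get(item, "unclear") != "yes"]
    if not concerns:
        return "no credibility concerns on core items"
    return f"{len(concerns)} core item(s) raise concern: " + ", ".join(concerns)
```

A reviewer would still weigh the flagged items by judgment rather than by count, consistent with the tool's qualitative rating approach.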
