Objective To evaluate the effect of surgical treatment of vertebral artery stenosis and to summarize the experience. Methods The clinical data of 6 patients undergoing surgical treatment from September 2018 to September 2019 were retrospectively analyzed. Results All procedures were completed successfully, with no intraoperative cerebral infarction, thoracic duct injury, or inadvertent nerve transection. Operative time ranged from 120 to 270 minutes (median, 180 minutes), and blood loss ranged from 50 to 150 mL (median, 65 mL). One patient developed Horner's syndrome after the operation, and one patient suffered a cerebral infarction 4 days postoperatively. During follow-up of 3 to 10 months, dizziness was relieved in three patients, and no anastomotic stenosis or new cerebral infarction occurred. Conclusions Surgical treatment is safe and effective for vertebral artery stenosis. Simultaneous revascularization of the carotid and vertebral arteries should be avoided.
Objective To examine the statistical performance of different methods for rare-event meta-analysis. Methods Using Monte Carlo simulation, we constructed a variety of scenarios to evaluate the performance of various rare-event meta-analysis methods. Performance measures included absolute percentage error, root mean square error, and interval coverage. Results Across scenarios, absolute percentage error and root mean square error were similar for the Bayesian logistic regression model, the generalized linear mixed-effects model, and continuity correction, but interval coverage was higher with the Bayesian logistic regression model. The performance of the Mantel-Haenszel and Peto methods was consistently suboptimal across scenarios. Conclusions The Bayesian logistic regression model may be recommended as the preferred approach for rare-event meta-analysis.
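As a hedged illustration of two of the non-Bayesian comparators named above, the sketch below pools a set of hypothetical sparse 2×2 tables with the Mantel-Haenszel estimator and with inverse-variance pooling after a 0.5 continuity correction. The toy tables and function names are illustrative assumptions, not the simulation code used in the study.

```python
import numpy as np

# Hypothetical sparse 2x2 tables: (events_trt, n_trt, events_ctl, n_ctl)
tables = [(1, 100, 0, 100), (0, 150, 2, 150), (3, 200, 1, 200)]

def mantel_haenszel_or(tables):
    """Mantel-Haenszel pooled odds ratio; tolerates zero cells without correction."""
    num = den = 0.0
    for e1, n1, e0, n0 in tables:
        a, b, c, d = e1, n1 - e1, e0, n0 - e0
        n = n1 + n0
        num += a * d / n
        den += b * c / n
    return num / den

def continuity_corrected_or(tables, cc=0.5):
    """Inverse-variance pooled OR after adding `cc` to every cell of each table."""
    logs, weights = [], []
    for e1, n1, e0, n0 in tables:
        a, b, c, d = e1 + cc, (n1 - e1) + cc, e0 + cc, (n0 - e0) + cc
        log_or = np.log(a * d / (b * c))
        var = 1 / a + 1 / b + 1 / c + 1 / d
        logs.append(log_or)
        weights.append(1 / var)
    pooled = np.average(logs, weights=weights)
    se = 1 / np.sqrt(sum(weights))
    return np.exp(pooled), (np.exp(pooled - 1.96 * se), np.exp(pooled + 1.96 * se))

print("MH OR:", mantel_haenszel_or(tables))
print("CC OR (95% CI):", continuity_corrected_or(tables))
```

A Bayesian logistic regression model, by contrast, models the event counts directly and needs no correction for zero cells, which is one reason it can achieve better interval coverage in sparse data.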
Repeatedly measured quantitative data are a common data type in clinical studies and are frequently used to assess the therapeutic effect of an intervention at a single time point in clinical trials. This study clarifies the concepts and calculation methods for sample size estimation with repeated measurement quantitative data, addressing the research question of "comparing group differences at a single time point" from three perspectives: the primary research question, the main statistical analysis method, and the definition of the primary outcome. Discrepancies in the sample sizes produced by the various methods under different correlation coefficients and numbers of repeated measurements were examined. The study found that the calculation method based on the mixed-effects model or generalized estimating equations accounts for both the correlation coefficient and the number of repeated measurements and yields the smallest estimated sample size; the method based on analysis of covariance accounts for the correlation coefficient and yields a smaller estimate than the t-test; and the t-test-based method requires an appropriate variant to be selected according to the definition of the primary outcome. Alignment among the sample size calculation method, the statistical analysis method, and the definition of the primary outcome is essential to avoid overestimating or underestimating the required sample size.
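To make the comparison concrete, the sketch below computes per-group sample sizes under one common set of normal-approximation formulas: the two-group t-test formula, the ANCOVA deflation factor (1 - ρ²) for a correlated baseline covariate, and the compound-symmetry factor (1 + (m - 1)ρ)/m for an analysis averaging m correlated post-baseline measures. These are assumed stand-ins; the exact formulas used in the paper may differ.

```python
from scipy.stats import norm

def n_per_group_ttest(delta, sigma, alpha=0.05, power=0.80):
    """Normal-approximation sample size per group for a two-sided t-test."""
    z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
    return 2 * (z * sigma / delta) ** 2

def n_per_group_ancova(delta, sigma, rho, **kw):
    """ANCOVA on the single-time-point outcome with a baseline covariate:
    the t-test size is deflated by (1 - rho^2)."""
    return n_per_group_ttest(delta, sigma, **kw) * (1 - rho ** 2)

def n_per_group_repeated(delta, sigma, rho, m, **kw):
    """Analysis averaging m correlated post-baseline measures under compound
    symmetry: deflation factor (1 + (m - 1) * rho) / m."""
    return n_per_group_ttest(delta, sigma, **kw) * (1 + (m - 1) * rho) / m

for rho in (0.3, 0.5, 0.7):
    print(rho,
          round(n_per_group_ttest(0.5, 1.0)),
          round(n_per_group_ancova(0.5, 1.0, rho)),
          round(n_per_group_repeated(0.5, 1.0, rho, m=3)))
```

The deflation factors make explicit how the correlation coefficient ρ and the number of repeated measurements m enter each method, which is the alignment issue the paper emphasizes.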
Objective To explore the use of longitudinal data in constructing prediction models for non-time-varying outcomes and to compare the impact of different modeling approaches on prediction performance. Methods Clinical predictors were selected using univariate analysis and Lasso regression. Prediction models for non-time-varying outcomes were developed based on latent class trajectory analysis, the two-stage model, and logistic regression. Internal validation was performed using bootstrap resampling, and model performance was evaluated using ROC curves, PR curves, sensitivity, specificity, and other relevant metrics. Results A total of 49 629 pregnant women were included, with a mean age of 31.42 ± 4.13 years and a mean pre-pregnancy BMI of 20.91 ± 2.62 kg/m². Fourteen predictors were incorporated into the final model. Prediction models using longitudinal data demonstrated high accuracy, with AUROC values exceeding 0.90 and PR-AUC values greater than 0.47. The two-stage model based on late-pregnancy hemoglobin data performed best, achieving an AUROC of 0.93 (95% CI 0.92 to 0.94) and a PR-AUC of 0.60 (95% CI 0.56 to 0.64). Internal validation confirmed robust performance, and calibration curves indicated good agreement between predicted and observed outcomes. Conclusions For longitudinal data, the two-stage model captures the dynamic trajectory of the measurements well; the predictive value of repeated measurement data differs across clinical outcomes.
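One common way to implement the bootstrap internal validation and the AUROC/PR-AUC evaluation described above is a Harrell-style optimism correction. The sketch below substitutes synthetic data for the cohort; the dataset, model choice, and bootstrap count are assumptions for illustration.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score, average_precision_score
from sklearn.utils import resample

# Hypothetical stand-in for the 14-predictor, rare-outcome dataset.
X, y = make_classification(n_samples=2000, n_features=14, weights=[0.95],
                           random_state=0)

model = LogisticRegression(max_iter=1000).fit(X, y)
p = model.predict_proba(X)[:, 1]
apparent_auc = roc_auc_score(y, p)

# Bootstrap optimism correction: refit on each resample, compare the
# resample AUROC with the original-sample AUROC of the refitted model.
optimism = []
rng = np.random.RandomState(0)
for b in range(200):
    idx = resample(np.arange(len(y)), random_state=rng)
    mb = LogisticRegression(max_iter=1000).fit(X[idx], y[idx])
    auc_boot = roc_auc_score(y[idx], mb.predict_proba(X[idx])[:, 1])
    auc_orig = roc_auc_score(y, mb.predict_proba(X)[:, 1])
    optimism.append(auc_boot - auc_orig)

print("apparent AUROC:", round(apparent_auc, 3))
print("optimism-corrected AUROC:", round(apparent_auc - np.mean(optimism), 3))
print("apparent PR-AUC:", round(average_precision_score(y, p), 3))
```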
With the establishment and development of regional healthcare big data platforms, regional healthcare big data is playing an increasingly important role in health policy program evaluation. Such data are usually structured hierarchically. Traditional statistical models have limitations in analyzing hierarchical data, whereas multilevel models are powerful statistical tools for this purpose. The method has been used frequently by healthcare researchers overseas but has seen little application in China. This paper introduces the multilevel model and several common application scenarios in medicine policy evaluation, aiming to provide a methodological framework for policy evaluation using regional healthcare big data or other hierarchical data.
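As a minimal sketch of the random-intercept multilevel model discussed here, the example below fits a two-level model with patients nested in hospitals using statsmodels; the variable names (cost, age, policy) and the synthetic data are hypothetical, chosen only to mirror a policy-evaluation setting.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic hierarchical data: patients nested within hospitals.
rng = np.random.default_rng(0)
n_hosp, n_per = 30, 80
hospital = np.repeat(np.arange(n_hosp), n_per)
u = rng.normal(0, 0.5, n_hosp)[hospital]        # hospital-level random intercepts
policy = rng.integers(0, 2, n_hosp)[hospital]   # policy rolled out at hospital level
age = rng.normal(50, 10, n_hosp * n_per)
cost = 8 + 0.02 * age - 0.6 * policy + u + rng.normal(0, 1, n_hosp * n_per)

df = pd.DataFrame({"cost": cost, "age": age, "policy": policy,
                   "hospital": hospital})

# Two-level random-intercept model: fixed effects for policy and age,
# plus a random intercept for each hospital.
m = smf.mixedlm("cost ~ policy + age", df, groups=df["hospital"]).fit()
print(m.summary())
```

Ignoring the hospital level and fitting ordinary regression on the same data would understate the standard error of the policy effect, which is the core limitation of traditional models that the multilevel model addresses.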
As the research framework and guidelines for real-world evidence supporting pre-market registration and post-market regulatory decisions on clinically urgent drugs and medical devices continue to improve, how to ensure the quality and standardization of real-world data, and thereby establish a basis for generating real-world evidence, is receiving increasing attention from regulatory authorities. Drawing on the experience of the Boao Hope City real-world data research model and the construction of its ophthalmic data platform, this paper discusses the "source data-database-evidence chain" generation process, data management, and data governance in real-world studies, in view of the multi-source heterogeneity of the data, the diversity of research designs, and standardized regulatory requirements, and provides a reference for the future construction of comprehensive research data platforms.
How to use patients' repeated measurement data to improve the discriminative ability of prediction models is a key methodological issue in the current development of clinical prediction models. This study investigates the statistical modeling approach of the two-stage model for developing prediction models for non-time-varying outcomes from repeated measurement data. Using the prediction of severe postpartum hemorrhage risk as a case study, it presents the implementation of the two-stage model from several perspectives, including data structure, basic principles, software use, and model evaluation, to provide methodological support for clinical investigators.
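A minimal sketch of the two-stage approach, under assumptions of my own (simulated hemoglobin-like trajectories, and per-subject least-squares summaries in stage 1, where mixed-model BLUPs are a common alternative): stage 1 reduces each patient's repeated measurements to subject-level trajectory features, and stage 2 regresses the non-time-varying outcome on those features.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n_subj, n_visits = 500, 4

# Stage 0: simulate repeated hemoglobin-like measurements per subject.
time = np.tile(np.arange(n_visits), (n_subj, 1))
true_slope = rng.normal(-0.2, 0.3, n_subj)
baseline = rng.normal(12, 1, n_subj)
hb = baseline[:, None] + true_slope[:, None] * time \
     + rng.normal(0, 0.4, (n_subj, n_visits))
# Outcome risk rises as the trajectory declines (hypothetical mechanism).
outcome = rng.binomial(1, 1 / (1 + np.exp(-(-2.5 - 3 * true_slope))))

# Stage 1: reduce each trajectory to subject-level features
# (per-subject OLS slope and intercept via polyfit).
feats = np.array([np.polyfit(time[i], hb[i], 1) for i in range(n_subj)])
slope, intercept = feats[:, 0], feats[:, 1]

# Stage 2: logistic regression of the non-time-varying outcome on the
# stage-1 trajectory features.
X = np.column_stack([intercept, slope])
clf = LogisticRegression().fit(X, outcome)
print("stage-2 coefficients (intercept feature, slope feature):", clf.coef_)
```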
With the gradual standardization and improvement of the real-world study system, real-world evidence, as a supplement to evidence from classical randomized controlled trials, is increasingly used to evaluate the effectiveness and safety of pharmaceuticals and medical devices. High-quality real-world evidence depends not only on the quality of the real-world data but also on the type of study design. As one of the important designs for pragmatic clinical trials, the Zelen design has therefore attracted much attention from investigators in recent years. Based on the published literature, this paper discusses the implementation process, design subtypes, advantages, limitations, statistical concerns, and appropriate application scenarios of the Zelen design, in order to clarify its application value and provide a reference for future research.
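One of the statistical concerns mentioned above is specific to the single-consent Zelen design: patients randomized to the experimental arm who refuse consent receive standard care, yet the analysis must compare groups as randomized, so the intention-to-treat estimate is diluted roughly in proportion to the consent rate. The toy simulation below (all parameter values hypothetical) illustrates the dilution.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 20000
true_effect, refusal_rate = 0.10, 0.30     # hypothetical values

arm = rng.integers(0, 2, n)                # 1 = randomized to experimental arm
consent = rng.random(n) > refusal_rate     # only the experimental arm is asked
treated = (arm == 1) & consent             # refusers receive standard care
p_event = 0.30 - true_effect * treated     # treatment lowers event risk
event = rng.random(n) < p_event

# As-randomized (intention-to-treat) estimate is diluted toward the null.
itt = event[arm == 0].mean() - event[arm == 1].mean()
print(f"true risk difference: {true_effect:.3f}")
print(f"ITT estimate:         {itt:.3f}  (~ true effect x consent rate)")
```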
Interrupted time series (ITS) analysis is a quasi-experimental design for evaluating the effectiveness of health interventions. By controlling for the pre-intervention time trend, ITS is often used to estimate the level change and slope change after an intervention. However, the traditional ITS modeling strategy may suffer from aggregation bias when the data are collected from different clusters. This study introduces two advanced ITS methods for handling hierarchical data, providing a methodological framework for evaluating population-level health interventions.
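A minimal sketch of the modeling problem: classic single-level segmented regression (level-change and slope-change terms) pools all clusters and can mask cluster heterogeneity, while adding a cluster-level random intercept is one standard multilevel remedy. The synthetic data and effect sizes below are assumptions; the two specific methods introduced in the paper are not reproduced here.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n_clusters, n_months, cut = 20, 48, 24

# Synthetic monthly outcomes for several clusters; intervention starts at `cut`.
rows = []
for c in range(n_clusters):
    u = rng.normal(0, 2)                       # cluster-level intercept shift
    for t in range(n_months):
        post = int(t >= cut)                   # level-change indicator
        time_since = max(0, t - cut)           # slope-change term
        y = 50 + u + 0.2 * t - 4 * post - 0.3 * time_since + rng.normal(0, 1.5)
        rows.append((c, t, post, time_since, y))
df = pd.DataFrame(rows, columns=["cluster", "time", "post", "time_since", "y"])

# Naive single-level segmented regression: pools clusters, risking
# aggregation bias.
ols = smf.ols("y ~ time + post + time_since", df).fit()

# Multilevel ITS: same segmented terms, plus a random intercept per cluster.
mlm = smf.mixedlm("y ~ time + post + time_since", df,
                  groups=df["cluster"]).fit()
print(ols.params, mlm.params, sep="\n\n")
```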
Objective Based on the requirements of the era of big medical data and of discipline development, this study aimed to enhance the clinical research capabilities of medical postgraduates by exploring and evaluating several teaching innovations. Methods A research-oriented clinical research design course was developed for postgraduate students, focusing on enhancing their clinical research abilities. Innovative teaching content and methods were implemented, and a questionnaire survey was conducted among clinical medical master's students to assess the effectiveness of the innovations. Results A total of 699 clinical medical master's students completed the questionnaire. Of these, 94% expressed satisfaction with the course, 96% believed that the knowledge covered met the requirements of clinical research, 94% felt that their research capabilities had improved after completing the course, and 99% believed that the course helped them publish academic papers and complete their master's theses. Conclusions Students recognized the teaching innovations, which stimulated their initiative and enthusiasm for learning, improved the teaching quality of the course, and enhanced their research capabilities.