Non-small cell lung cancer (NSCLC) has among the highest incidence and mortality rates of all cancers worldwide, and accurate prognostic models can guide clinical treatment planning. With the continuous advance of computing technology, deep learning, a breakthrough technology in artificial intelligence, has shown good performance and great potential in NSCLC prognostic modeling. Research applying deep learning to survival and recurrence prediction, efficacy prediction, distant metastasis prediction, and complication prediction in NSCLC has made progress and shows a trend toward joint multi-omics and multi-modal modeling. However, shortcomings remain; future work should strengthen model validation and address practical problems in clinical practice.
Objective To develop a deep learning-based neural network architecture for automatic segmentation of knee CT images, and to validate its accuracy. Methods A database of knee CT scans was established, and the bony structures were manually annotated. An independently developed deep learning neural network architecture was trained and tested on the labeled database. The Dice coefficient, average surface distance (ASD), and Hausdorff distance (HD) were calculated to evaluate the accuracy of the neural network. The time of automatic segmentation and manual segmentation was compared. Five orthopedic experts were invited to score the automatic and manual segmentation results using a Likert scale, and the scores of the two methods were compared. Results The automatic segmentation achieved high accuracy. The Dice coefficient, ASD, and HD of the femur were 0.953±0.037, (0.076±0.048) mm, and (3.101±0.726) mm, respectively; those of the tibia were 0.950±0.092, (0.083±0.101) mm, and (2.984±0.740) mm, respectively. The time of automatic segmentation was significantly shorter than that of manual segmentation [(2.46±0.45) minutes vs. (64.73±17.07) minutes; t=36.474, P<0.001]. The clinical scores of the femur were 4.3±0.3 in the automatic segmentation group and 4.4±0.2 in the manual segmentation group, and the scores of the tibia were 4.5±0.2 and 4.5±0.3, respectively, with no significant difference between the two groups (t=1.753, P=0.085; t=0.318, P=0.752). Conclusion Deep learning-based automatic segmentation of knee CT images has high accuracy and enables rapid segmentation and three-dimensional reconstruction. This method will promote the development of technology-assisted total knee arthroplasty.
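The Dice coefficient reported above measures the overlap between an automatic segmentation mask and the manual ground truth. A minimal sketch of its computation on binary masks (illustrative only; the study's own evaluation pipeline is not described at code level):

```python
import numpy as np

def dice_coefficient(pred, gt):
    """Dice similarity coefficient between two binary masks:
    2 * |pred ∩ gt| / (|pred| + |gt|); 1.0 means perfect overlap."""
    pred = np.asarray(pred, dtype=bool)
    gt = np.asarray(gt, dtype=bool)
    denom = pred.sum() + gt.sum()
    if denom == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return 2.0 * np.logical_and(pred, gt).sum() / denom

# Toy example: two partially overlapping 6x6 squares on a 10x10 grid
a = np.zeros((10, 10), dtype=bool); a[2:8, 2:8] = True
b = np.zeros((10, 10), dtype=bool); b[3:9, 3:9] = True
score = dice_coefficient(a, b)
```

ASD and HD are computed analogously from the surface point sets of the two masks (mean and maximum of nearest-neighbor surface distances, respectively).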
Glaucoma is the leading cause of irreversible blindness, but its early symptoms are subtle and easily overlooked, so early screening is particularly important. The cup-to-disc ratio is an important indicator in clinical glaucoma screening, and accurate segmentation of the optic cup and disc is the key to calculating it. In this paper, a fully convolutional neural network with residual multi-scale convolution modules was proposed for optic cup and disc segmentation. First, the fundus image was contrast-enhanced and a polar transformation was applied. Subsequently, W-Net was used as the backbone network, with the standard convolution units replaced by residual multi-scale fully convolutional modules; an image pyramid was fed to additional input ports to construct multi-scale inputs, and side output layers served as early classifiers generating local prediction outputs. Finally, a new multi-label loss function was proposed to guide network segmentation. On the REFUGE dataset, the mean intersection over union of optic cup and disc segmentation was 0.9040 and 0.9553, respectively, and the overlap error was 0.1780 and 0.0665, respectively. The results show that this method not only realizes joint segmentation of the cup and disc but also effectively improves segmentation accuracy, which could help promote large-scale early glaucoma screening.
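The two quantities in the abstract, intersection over union (IoU) and the vertical cup-to-disc ratio derived from the segmentation masks, can be sketched as follows (a simplified illustration, not the paper's implementation; the vertical extent is one common way to compute the cup-to-disc ratio):

```python
import numpy as np

def iou(pred, gt):
    """Intersection over union of two binary masks."""
    pred, gt = np.asarray(pred, dtype=bool), np.asarray(gt, dtype=bool)
    union = np.logical_or(pred, gt).sum()
    return np.logical_and(pred, gt).sum() / union if union else 1.0

def vertical_cdr(cup_mask, disc_mask):
    """Vertical cup-to-disc ratio: ratio of the vertical extents
    (heights) of the cup and disc segmentation masks."""
    cup_rows = np.where(np.asarray(cup_mask, dtype=bool).any(axis=1))[0]
    disc_rows = np.where(np.asarray(disc_mask, dtype=bool).any(axis=1))[0]
    cup_h = cup_rows.max() - cup_rows.min() + 1
    disc_h = disc_rows.max() - disc_rows.min() + 1
    return cup_h / disc_h

# Toy masks: a disc spanning 40 rows containing a cup spanning 20 rows
disc = np.zeros((80, 80), dtype=bool); disc[10:50, 20:60] = True
cup = np.zeros((80, 80), dtype=bool); cup[20:40, 30:50] = True
ratio = vertical_cdr(cup, disc)  # 20 / 40 = 0.5
```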
Objective To systematically evaluate the efficacy and safety of computer-aided detection (CADe) versus conventional colonoscopy in identifying colorectal adenomas and polyps. Methods The PubMed, Embase, Cochrane Library, Web of Science, WanFang Data, VIP, and CNKI databases were electronically searched to collect randomized controlled trials (RCTs) comparing the effectiveness and safety of CADe-assisted colonoscopy and conventional colonoscopy in detecting colorectal tumors, from 2014 to April 2023. Two reviewers independently screened the literature, extracted data, and evaluated the risk of bias of the included studies. Meta-analysis was performed with RevMan 5.3 software. Results A total of 9 RCTs were included, with a total of 6,393 patients. Compared with conventional colonoscopy, the CADe system significantly improved the adenoma detection rate (ADR) (RR=1.22, 95%CI 1.10 to 1.35, P<0.01) and polyp detection rate (PDR) (RR=1.19, 95%CI 1.04 to 1.36, P=0.01). It also reduced the adenoma miss rate (AMR) (RR=0.48, 95%CI 0.34 to 0.67, P<0.01) and the polyp miss rate (PMR) (RR=0.39, 95%CI 0.25 to 0.59, P<0.01). The PDR of proximal polyps significantly increased, and the PDR of polyps ≤5 mm slightly increased, but the PDR of polyps >10 mm and of pedunculated polyps significantly decreased. The AMR in the cecum, transverse colon, descending colon, and sigmoid colon was significantly reduced. There was no statistically significant difference in withdrawal time between the two groups. Conclusion The CADe system can increase the detection rate of adenomas and polyps and reduce the miss rate. The detection rate of polyps is related to their location, size, and shape, while the miss rate of adenomas is related to their location.
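The pooled risk ratios (RR) above combine per-study event counts. As an illustration of the idea, here is a minimal inverse-variance fixed-effect pooling on the log scale (note: RevMan typically defaults to the Mantel-Haenszel method for dichotomous outcomes, so this is a simplified stand-in, not the study's exact computation):

```python
import math

def pooled_rr(studies):
    """Fixed-effect inverse-variance pooling of risk ratios.
    Each study is (events_exposed, n_exposed, events_control, n_control).
    Pooling is done on log(RR), weighting by inverse variance."""
    num = den = 0.0
    for a, n1, c, n2 in studies:
        log_rr = math.log((a / n1) / (c / n2))
        var = 1 / a - 1 / n1 + 1 / c - 1 / n2  # approx. variance of log RR
        w = 1.0 / var
        num += w * log_rr
        den += w
    return math.exp(num / den)

# Toy data: two hypothetical trials (counts are illustrative only)
trials = [(30, 100, 20, 100), (45, 150, 33, 150)]
rr = pooled_rr(trials)
```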
With the development of artificial intelligence, machine learning has been widely used in the diagnosis of diseases. To improve diagnostic accuracy, it is crucial to conduct diagnostic test accuracy studies and to evaluate model performance rationally. For machine learning-based diagnostic test accuracy studies, this paper introduces the principles of study design with respect to target conditions, selection of participants, diagnostic tests, reference standards, and ethics.
Organoids are an in vitro model that can simulate the complex structure and function of tissues in vivo. Functions such as classification, screening, and trajectory recognition have been realized through organoid image analysis, but problems remain, such as low accuracy in classification and cell tracking. Fusing deep learning algorithms with organoid image analysis is currently the most advanced approach in this area. This paper surveys organoid image depth perception technology: it introduces the organoid culture mechanism and its role in depth perception, reviews key progress in four depth perception tasks (classification and recognition, pattern detection, image segmentation, and dynamic tracking), and compares the performance advantages of different deep models. In addition, it summarizes depth perception across various organoid image types from the perspectives of feature learning, model generalization, and multi-metric evaluation, and discusses future trends for deep learning-based organoid analysis, so as to promote the application of depth perception technology to organoid images. It provides an important reference for academic research and practical application in this field.
Objective To investigate changes in retinal blood oxygen saturation and vascular morphology in eyes with branch retinal vein occlusion (BRVO) using a deep learning-based dual-modality fundus camera. Methods A prospective study. From May to October 2020, 31 patients (31 eyes) with BRVO (BRVO group) and 20 gender- and age-matched healthy volunteers (20 eyes; control group) were included in the study. Among the 31 patients (31 eyes) in the BRVO group, 20 patients (20 eyes) had previously received one intravitreal injection of anti-vascular endothelial growth factor drugs and 11 patients (11 eyes) had not received any treatment; they were divided into a treatment group and an untreated group accordingly. Retinal images were collected with the dual-modality fundus camera; arterial and venous segments were segmented in the macular region of interest (MROI) using deep learning; the optical density ratio was used to calculate retinal blood oxygen saturation (SO2) on the affected and non-involved sides in the BRVO group and in the control group; and the diameter, curvature, fractal dimension, and density of arteries and veins in the MROI were calculated. Quantitative data were compared between groups using one-way analysis of variance. Results There was a statistically significant difference in arterial SO2 (SO2-A) in the MROI among the affected eyes, the fellow eyes in the BRVO group, and the control group (F=4.925, P<0.001), but no difference in venous SO2 (SO2-V) (F=0.607, P=0.178). Compared with the control group, SO2-A in the MROI of the affected side and the non-involved side in the untreated group was increased, and the difference was statistically significant (F=4.925, P=0.012); there was no significant difference in SO2-V (F=0.607, P=0.550). There was no significant difference in SO2-A and SO2-V in the MROI among the affected side and the non-involved side in the treatment group and the control group (F=0.159, 1.701; P=0.854, 0.197).
There was no significant difference in SO2-A and SO2-V in the MROI among the affected sides of the treatment group, the untreated group, and the control group (F=2.553, 0.265; P=0.088, 0.546). The arterial diameter, arterial curvature, arterial fractal dimension, venous fractal dimension, arterial density, and venous density differed significantly among the untreated group, the treatment group, and the control group (F=3.527, 3.322, 7.251, 26.128, 4.782, 5.612; P=0.047, 0.044, 0.002, <0.001, 0.013, 0.006); there was no significant difference in venous diameter and venous curvature (F=2.132, 1.199; P=0.143, 0.321). Conclusion Arterial SO2 in BRVO patients is higher than that in healthy eyes and decreases after anti-vascular endothelial growth factor treatment, while SO2-V is unchanged.
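The optical density ratio (ODR) method mentioned in the Methods estimates vessel SO2 from image intensities at two wavelengths: one oxygen-sensitive and one near an isosbestic point. A minimal sketch of the standard two-wavelength oximetry calculation; the wavelengths and the calibration constants a, b below are illustrative assumptions, not values from this study:

```python
import math

def so2_from_intensities(iv_sens, ib_sens, iv_ref, ib_ref, a=1.28, b=-1.24):
    """Estimate vessel SO2 (as a fraction) from the optical density ratio.
    OD = log10(I_background / I_vessel) at each wavelength;
    ODR = OD_sensitive / OD_reference; SO2 is assumed linear in ODR.
    a and b are hypothetical calibration constants for illustration."""
    od_sensitive = math.log10(ib_sens / iv_sens)   # oxygen-sensitive channel
    od_reference = math.log10(ib_ref / iv_ref)     # isosbestic reference channel
    odr = od_sensitive / od_reference
    return a + b * odr

# Toy intensities: vessel darker than background in both channels
so2 = so2_from_intensities(iv_sens=80, ib_sens=100, iv_ref=50, ib_ref=100)
```

In practice, the calibration constants are fitted against a reference oximetry measurement; the deep learning component in the study supplies the artery/vein segmentation on which the vessel and background intensities are sampled.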
Objective To develop an artificial intelligence-based three-dimensional (3D) preoperative planning system (AIHIP) for total hip arthroplasty (THA) and to verify its accuracy through preliminary clinical application. Methods A CT image database consisting of manually segmented CT image series was built to train the independently developed deep learning neural network. The deep learning neural network and the preoperative planning module were assembled within a visual interactive interface, AIHIP. Then, 60 patients (60 hips) undergoing unilateral primary THA between March 2017 and May 2020 were enrolled and divided into two groups: the AIHIP system was applied in the trial group (n=30) and traditional acetate templating in the control group (n=30). There was no significant difference in age, gender, operative side, or Association Research Circulation Osseous (ARCO) grading between the two groups (P>0.05). The coincidence rate, preoperative and postoperative leg length discrepancy, the difference of bilateral femoral offsets, and the difference of bilateral combined offsets of the two groups were compared to evaluate the accuracy and efficiency of the AIHIP system. Results For the cup, the preoperative plan was completely realized in 27 patients (90.0%) of the trial group with the AIHIP system versus 17 patients (56.7%) of the control group with acetate templating, a significant difference (P<0.05). For the stem, the preoperative plan was completely realized in 25 patients (83.3%) of the trial group versus 16 patients (53.3%) of the control group, also a significant difference (P<0.05). There was no significant difference in the difference of bilateral femoral offsets, the difference of bilateral combined offsets, or the leg length discrepancy between the two groups before operation (P>0.05).
The difference of bilateral combined offsets immediately after operation was significantly smaller in the trial group than in the control group (t=−2.070, P=0.044), but there was no significant difference in the difference of bilateral femoral offsets or the leg length discrepancy between the two groups (P>0.05). Conclusion Compared with the traditional 2D preoperative plan, the 3D preoperative plan generated by the AIHIP system is more accurate and detailed, especially in demonstrating the actual anatomical structures. This study illustrates the workflow of this artificial intelligence preoperative planning system for the first time and preliminarily applies it in THA; its potential clinical value needs to be confirmed by further research.
The widespread application of low-dose computed tomography (LDCT) has significantly increased the detection of pulmonary small nodules, while accurate prediction of their growth patterns is crucial to avoid overdiagnosis or underdiagnosis. This article reviews recent research advances in predicting pulmonary nodule growth based on CT imaging, with a focus on summarizing key factors influencing nodule growth, such as baseline morphological parameters, dynamic indicators, and clinical characteristics, traditional prediction models (exponential and Gompertzian models), and the applications and limitations of radiomics-based and deep learning models. Although existing studies have achieved certain progress in predicting nodule growth, challenges such as small sample sizes and lack of external validation persist. Future research should prioritize the development of personalized and visualized prediction models integrated with larger-scale datasets to enhance predictive accuracy and clinical applicability.
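The exponential and Gompertz models named above describe nodule volume over time in closed form: exponential growth is characterized by a constant volume doubling time, while Gompertz growth decelerates toward a plateau. A brief sketch of both (parameter values in the example are illustrative, not from any cited study):

```python
import math

def exponential_volume(v0, t, doubling_time):
    """Exponential growth: V(t) = V0 * 2^(t / Td),
    where Td is the volume doubling time (same time unit as t)."""
    return v0 * 2 ** (t / doubling_time)

def gompertz_volume(v0, t, vmax, k):
    """Gompertz growth: V(t) = Vmax * (V0 / Vmax)^exp(-k t).
    Growth slows as the volume approaches the plateau Vmax;
    k controls how quickly the growth rate decays."""
    return vmax * (v0 / vmax) ** math.exp(-k * t)

# Toy example: a 100 mm^3 nodule with a 400-day doubling time
v_exp = exponential_volume(100.0, 400.0, 400.0)      # doubles to 200 mm^3
v_gom = gompertz_volume(100.0, 400.0, 1000.0, 0.005)  # decelerating growth
```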
Magnetic resonance imaging (MRI) is an important medical imaging modality whose major limitation is the long scan time inherent to its imaging mechanism, which increases patients' cost and waiting time. Parallel imaging (PI), compressed sensing (CS), and other reconstruction technologies have been proposed to accelerate image acquisition. However, the image quality of PI and CS depends on the reconstruction algorithm, and existing algorithms remain unsatisfactory in both image quality and reconstruction speed. In recent years, image reconstruction based on generative adversarial networks (GAN) has become a research hotspot in MRI because of its excellent performance. In this review, we summarize recent developments in applying GAN to MRI reconstruction for both single- and multi-modality acceleration, hoping to provide a useful reference for interested researchers. In addition, we analyze the characteristics and limitations of existing technologies and forecast development trends in this field.
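The acceleration problem described above amounts to acquiring only a fraction of k-space and reconstructing the image from it. A toy illustration of the zero-filled baseline that PI/CS/GAN methods improve upon, assuming a Cartesian line-undersampling pattern (this is a didactic sketch, not any reviewed method):

```python
import numpy as np

def zero_filled_recon(image, keep_fraction=0.25, rng=None):
    """Simulate accelerated MRI acquisition on a 2D image:
    keep the low-frequency centre of k-space plus a random subset of
    phase-encode lines, zero the rest, and inverse-transform.
    The artifacts in the result are what learned reconstructions remove."""
    rng = rng if rng is not None else np.random.default_rng(0)
    kspace = np.fft.fftshift(np.fft.fft2(image))
    n_lines = image.shape[0]
    mask = np.zeros(n_lines, dtype=bool)
    centre = n_lines // 2
    mask[centre - 4:centre + 4] = True            # fully sample the centre
    mask |= rng.random(n_lines) < keep_fraction   # random outer lines
    undersampled = kspace * mask[:, None]         # zero out unsampled lines
    return np.abs(np.fft.ifft2(np.fft.ifftshift(undersampled)))

# Toy phantom: a smooth 2D bump
phantom = np.outer(np.hanning(64), np.hanning(64))
recon = zero_filled_recon(phantom)
```

A GAN-based reconstructor would replace the zero-filling step with a generator that maps the undersampled input to a fully sampled estimate, trained adversarially against real fully sampled images.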