West China Medical Publishers
Search results for keyword "Convolutional neural network": 23 results
  • Fetal electrocardiogram signal extraction and analysis method combining fast independent component analysis algorithm and convolutional neural network

    Fetal electrocardiogram (ECG) signals provide important clinical information for the early diagnosis of, and intervention in, fetal abnormalities. This paper proposes a new method for fetal ECG signal extraction and analysis. Firstly, an improved fast independent component analysis method is combined with a singular value decomposition algorithm to extract high-quality fetal ECG signals and solve the missing-waveform problem. Secondly, a novel convolutional neural network model is applied to identify the QRS complexes of fetal ECG signals and effectively solve the waveform-overlap problem. Together, these steps achieve high-quality extraction of fetal ECG signals and intelligent recognition of fetal QRS complexes. The proposed method was validated on data from the PhysioNet (Research Resource for Complex Physiologic Signals) Computing in Cardiology Challenge 2013 database. The results show that the average sensitivity and positive predictive value of the extraction algorithm are 98.21% and 99.52%, respectively, and those of the QRS complex recognition algorithm are 94.14% and 95.80%, respectively, both better than previously reported results. In conclusion, the proposed algorithm and model have practical significance and may provide a theoretical basis for clinical decision making in the future. (A minimal code sketch of the source-separation step follows this entry.)

    Release date: 2023-02-24 06:14
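
The abstract above combines an improved fast independent component analysis (FastICA) method with singular value decomposition to separate the fetal ECG from abdominal recordings. As a rough illustration only, the following sketch shows the generic blind-source-separation step with scikit-learn's standard FastICA on a synthetic multichannel signal; the array name, channel count and all parameters are assumptions, and the paper's improved algorithm and SVD step are not reproduced.

```python
# Generic FastICA separation step (not the authors' improved FastICA + SVD).
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(0)
# Hypothetical stand-in for a multichannel abdominal ECG recording:
# n_samples x n_channels mixture of maternal ECG, fetal ECG and noise.
abdominal_ecg = rng.standard_normal((5000, 4))

ica = FastICA(n_components=4, random_state=0)
sources = ica.fit_transform(abdominal_ecg)   # estimated independent components

# In practice one would select the component whose rate and morphology match
# a fetal QRS pattern, then hand its QRS candidates to the CNN classifier.
print(sources.shape)
```
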
  • Single-channel electroencephalogram signal for sleep state recognition based on one-dimensional wide-kernel convolutional neural networks and long short-term memory networks

    To address the problem that the imbalanced class distribution of sleep electroencephalogram (EEG) data and the poor comfort of polysomnography acquisition reduce a model's classification ability, this paper proposed a sleep state recognition method using single-channel EEG signals (WKCNN-LSTM), based on one-dimensional wide-kernel convolutional neural networks (WKCNN) and long short-term memory (LSTM) networks. Firstly, wavelet denoising and the synthetic minority over-sampling technique-Tomek link (SMOTE-Tomek) algorithm were used to preprocess the original sleep EEG signals. Secondly, the one-dimensional sleep EEG signals were used as model input, and the WKCNN was used to extract frequency-domain features and suppress high-frequency noise. Then, an LSTM layer was used to learn the time-domain features. Finally, a normalized exponential (softmax) function was applied to the fully connected layer to produce the sleep state classification. The experimental results showed that the classification accuracy of the one-dimensional WKCNN-LSTM model was 91.80%, better than that of similar studies in recent years, and that the model had good generalization ability. This study improved the classification accuracy of single-channel sleep EEG signals, which can easily be utilized in portable sleep monitoring devices. (A minimal sketch of the network structure follows this entry.)

    Release date: 2023-02-24 06:14
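
As a hedged illustration of the architecture described above, the sketch below builds a one-dimensional wide-kernel CNN followed by an LSTM and a softmax classifier in PyTorch. Kernel sizes, channel counts, the 30-second input length and the five sleep stages are assumptions for illustration, not the paper's exact configuration, and the wavelet denoising and SMOTE-Tomek preprocessing are omitted.

```python
# Wide-kernel 1D CNN + LSTM sketch in the spirit of the WKCNN-LSTM above.
import torch
import torch.nn as nn

class WKCNNLSTM(nn.Module):
    def __init__(self, n_classes=5):
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv1d(1, 32, kernel_size=50, stride=6, padding=25),  # wide kernel
            nn.BatchNorm1d(32),
            nn.ReLU(),
            nn.MaxPool1d(4),
            nn.Conv1d(32, 64, kernel_size=8, padding=4),
            nn.ReLU(),
            nn.MaxPool1d(4),
        )
        self.lstm = nn.LSTM(input_size=64, hidden_size=64, batch_first=True)
        self.fc = nn.Linear(64, n_classes)

    def forward(self, x):               # x: (batch, 1, n_samples)
        z = self.cnn(x)                 # (batch, 64, T)
        z = z.transpose(1, 2)           # (batch, T, 64) for the LSTM
        _, (h, _) = self.lstm(z)
        return self.fc(h[-1])           # class logits; softmax applied in the loss

model = WKCNNLSTM()
epoch = torch.randn(8, 1, 3000)         # e.g. 30 s of EEG at 100 Hz (assumed)
print(model(epoch).shape)               # torch.Size([8, 5])
```
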
  • A machine learning-based risk prediction model of chronic obstructive pulmonary disease with lung cancer

    Objective To establish a machine learning-based risk prediction model of chronic obstructive pulmonary disease (COPD) combined with lung cancer, so as to explore the high-risk factors for lung cancer in COPD patients and lay the foundation for early detection of lung cancer risk in COPD patients. Methods A total of 154 patients treated at the Second Hospital of Dalian Medical University from 2010 to 2021 were retrospectively analyzed, including 99 patients in the COPD group and 55 patients in the COPD with lung cancer group. The chest high resolution computed tomography (HRCT) scans and pulmonary function test results of each patient were acquired. The main analyses were as follows: (1) testing the statistical differences in basic information (such as age, body mass index and smoking index), laboratory test results, pulmonary function parameters and quantitative chest HRCT parameters between the two groups; (2) analyzing the high-risk factors for lung cancer in COPD patients using univariate and binary logistic regression (LR) methods; and (3) establishing machine learning models (LR and Gaussian process) for COPD with lung cancer patients. Results Based on the statistical analysis and LR methods, decreased body mass index, increased whole lung emphysema index, increased whole lung mean density, and increased forced vital capacity and prothrombin time percentage activity were risk factors for COPD with lung cancer. In the machine learning prediction model, an area under the receiver operating characteristic curve of 0.88 was obtained for both the LR and Gaussian process models, using prothrombin time percentage activity, whole lung emphysema index, whole lung mean density and forced vital capacity, combined with neuron-specific enolase and cytokeratin 19 fragment, as features. Conclusion The machine learning-based prediction model of COPD with lung cancer can be used for early detection of lung cancer risk in COPD patients. (A minimal classifier-evaluation sketch follows this entry.)

    Release date: 2023-04-28 02:38
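
The study above evaluates logistic regression and Gaussian process classifiers by the area under the ROC curve. The sketch below shows that evaluation pattern with scikit-learn on a synthetic feature matrix of the same sample size; the features are random stand-ins, not the clinical, spirometric, HRCT or tumor-marker variables used in the paper.

```python
# Logistic regression vs. Gaussian process, evaluated by ROC AUC (synthetic data).
from sklearn.datasets import make_classification
from sklearn.gaussian_process import GaussianProcessClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler

X, y = make_classification(n_samples=154, random_state=0)   # placeholder features
X = StandardScaler().fit_transform(X)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

for name, clf in [("LR", LogisticRegression(max_iter=1000)),
                  ("GP", GaussianProcessClassifier(random_state=0))]:
    clf.fit(X_tr, y_tr)
    auc = roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1])
    print(f"{name}: AUC = {auc:.2f}")
```
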
  • Accurate gesture recognition based on grayscale images of surface electromyogram signals and a multi-view convolutional neural network

    This study aims to address the limitations in gesture recognition caused by the susceptibility to interference of temporal- and frequency-domain features extracted from surface electromyography signals, as well as by the low recognition rates of conventional classifiers. A novel gesture recognition approach was proposed, which transformed surface electromyography signals into grayscale images and employed convolutional neural networks as classifiers. The method began by segmenting the active portions of the surface electromyography signals using an energy-threshold approach. Temporal voltage values were then processed through linear scaling and power transformations to generate grayscale images for convolutional neural network input. Subsequently, a multi-view convolutional neural network model was constructed, utilizing asymmetric convolutional kernels of sizes 1 × n and 3 × n within the same layer to enhance the representation capability of surface electromyography signals. Experimental results showed that the proposed method achieved recognition accuracies of 98.11% for 13 gestures and 98.75% for 12 multi-finger movements, significantly outperforming existing machine learning approaches. The proposed gesture recognition method, based on surface electromyography grayscale images and multi-view convolutional neural networks, is simple and efficient, substantially improves recognition accuracy, and shows strong potential for practical applications. (A sketch of the multi-view convolution layer follows this entry.)

    Release date: 2024-12-27 03:50
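
To illustrate the multi-view idea described above, the following PyTorch sketch applies asymmetric 1 × n and 3 × n kernels in parallel within one layer and concatenates the resulting feature maps. The kernel width n = 5, the channel counts and the input image size are illustrative assumptions rather than the paper's settings.

```python
# Parallel 1 x n and 3 x n convolutions over a grayscale sEMG image.
import torch
import torch.nn as nn

class MultiViewConv(nn.Module):
    def __init__(self, in_ch=1, out_ch=16, n=5):
        super().__init__()
        self.view1 = nn.Conv2d(in_ch, out_ch, kernel_size=(1, n), padding=(0, n // 2))
        self.view3 = nn.Conv2d(in_ch, out_ch, kernel_size=(3, n), padding=(1, n // 2))
        self.act = nn.ReLU()

    def forward(self, x):                       # x: (batch, 1, H, W) grayscale image
        f1 = self.act(self.view1(x))            # 1 x n view
        f3 = self.act(self.view3(x))            # 3 x n view
        return torch.cat([f1, f3], dim=1)       # stack the two views channel-wise

layer = MultiViewConv()
img = torch.randn(4, 1, 8, 64)                  # e.g. 8 electrodes x 64 time steps
print(layer(img).shape)                         # torch.Size([4, 32, 8, 64])
```
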
  • Automatic sleep staging model based on single channel electroencephalogram signal

    Sleep staging is the basis for solving sleep problems. There is an upper limit to the classification accuracy of sleep staging models based on single-channel electroencephalogram (EEG) data and features. To address this problem, this paper proposed an automatic sleep staging model that combines a deep convolutional neural network (DCNN) and a bi-directional long short-term memory network (BiLSTM). The model used the DCNN to automatically learn the time-frequency domain features of EEG signals, and used the BiLSTM to extract the temporal features between data points, fully exploiting the feature information contained in the data to improve the accuracy of automatic sleep staging. At the same time, noise reduction techniques and adaptive synthetic sampling were used to reduce the impact of signal noise and unbalanced data sets on model performance. Experiments were conducted using the Sleep-EDF (European Data Format) Database Expanded and the Shanghai Mental Health Center Sleep Database, achieving overall accuracies of 86.9% and 88.9%, respectively. The proposed model outperformed the baseline networks in all experiments, further demonstrating its validity, and it can provide a reference for the construction of home sleep monitoring systems based on single-channel EEG signals. (A sketch of the class-rebalancing step follows this entry.)

    Release date: 2023-08-23 02:45
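
The model above uses adaptive synthetic sampling (ADASYN) to counter unbalanced sleep-stage data. The sketch below shows that rebalancing step with the imbalanced-learn package on a synthetic imbalanced dataset; the class counts and features are placeholders, not the EEG features used in the paper.

```python
# ADASYN oversampling of an imbalanced multi-class dataset (placeholder data).
from collections import Counter
from imblearn.over_sampling import ADASYN
from sklearn.datasets import make_classification

X, y = make_classification(n_samples=1000, n_classes=3, n_informative=6,
                           weights=[0.7, 0.2, 0.1], random_state=0)
print("before:", Counter(y))

X_res, y_res = ADASYN(random_state=0).fit_resample(X, y)
print("after: ", Counter(y_res))
```
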
  • Multimodal high-grade glioma semantic segmentation network with multi-scale and multi-attention fusion mechanism

    Glioma is a primary brain tumor with a high incidence rate. High-grade gliomas (HGG) have the highest degree of malignancy and the lowest survival rates. Surgical resection followed by adjuvant chemoradiotherapy is commonly used in clinical treatment, so accurate segmentation of tumor-related areas is of great significance for patient care. In order to improve the segmentation accuracy of HGG, this paper proposes a multi-modal glioma semantic segmentation network with multi-scale feature extraction and a multi-attention fusion mechanism. The main contributions are as follows: (1) multi-scale residual structures were used to extract features from multi-modal glioma magnetic resonance imaging (MRI); (2) two types of attention modules were used to aggregate features along the channel and spatial dimensions; (3) to improve the segmentation performance of the whole network, a branch classifier was constructed using an ensemble learning strategy to adjust and correct the classification results of the backbone classifier. The experimental results showed that the Dice coefficients of the proposed segmentation method were 0.9097, 0.8773 and 0.8396 for whole tumor, tumor core and enhancing tumor, respectively, and the segmentation results had good boundary continuity in the three-dimensional direction. Therefore, the proposed semantic segmentation network has good segmentation performance for high-grade glioma lesions. (A Dice-coefficient sketch follows this entry.)

    Release date: 2022-08-22 03:12
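
The segmentation results above are reported as Dice coefficients. For reference, the sketch below computes the Dice coefficient between two binary masks; the random masks are placeholders for predicted and ground-truth tumor regions.

```python
# Dice coefficient between a predicted and a reference binary mask.
import numpy as np

def dice_coefficient(pred, target, eps=1e-7):
    """Dice = 2|A ∩ B| / (|A| + |B|) for binary masks."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)

rng = np.random.default_rng(0)
pred = rng.integers(0, 2, size=(64, 64, 64))     # placeholder predicted mask
target = rng.integers(0, 2, size=(64, 64, 64))   # placeholder ground truth
print(f"Dice = {dice_coefficient(pred, target):.4f}")
```
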
  • Research on arrhythmia classification algorithm based on adaptive multi-feature fusion network

    Deep learning methods can automatically analyze electrocardiogram (ECG) data and rapidly perform arrhythmia classification, which provides significant clinical value for the early screening of arrhythmias. How to select arrhythmia features effectively under limited supervision from abnormal samples is an urgent issue to address. This paper proposed an arrhythmia classification algorithm based on an adaptive multi-feature fusion network. The algorithm extracted RR-interval features from ECG signals, employed a one-dimensional convolutional neural network (1D-CNN) to extract deep time-domain features, and employed Mel-frequency cepstral coefficients (MFCC) with a two-dimensional convolutional neural network (2D-CNN) to extract deep frequency-domain features. The features were fused using an adaptive weighting strategy for arrhythmia classification. The algorithm was evaluated under the inter-patient paradigm on the arrhythmia database jointly developed by the Massachusetts Institute of Technology and Beth Israel Hospital (MIT-BIH). Experimental results demonstrated that the proposed algorithm achieved an average precision of 75.2%, an average recall of 70.1% and an average F1-score of 71.3%, showing high classification performance and providing algorithmic support for arrhythmia classification in wearable devices. (A feature-fusion sketch follows this entry.)

    Release date: 2025-02-21 03:20
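
The algorithm above fuses deep time-domain and frequency-domain features with an adaptive weighting strategy. The PyTorch sketch below shows one plausible realization, a learnable softmax weighting over the two branches; the feature dimension, class count and exact weighting mechanism are assumptions, not necessarily the paper's design.

```python
# Adaptive weighted fusion of two feature branches via learnable softmax weights.
import torch
import torch.nn as nn

class AdaptiveFusion(nn.Module):
    def __init__(self, dim=128, n_classes=5):
        super().__init__()
        self.weights = nn.Parameter(torch.zeros(2))   # one weight per branch
        self.classifier = nn.Linear(dim, n_classes)

    def forward(self, f_time, f_freq):
        w = torch.softmax(self.weights, dim=0)         # adaptive branch weights
        fused = w[0] * f_time + w[1] * f_freq
        return self.classifier(fused)

fusion = AdaptiveFusion()
f_time = torch.randn(16, 128)          # deep time-domain features (assumed size)
f_freq = torch.randn(16, 128)          # deep frequency-domain features (assumed size)
print(fusion(f_time, f_freq).shape)    # torch.Size([16, 5])
```
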
  • Enhancement algorithm for surface electromyography-based gesture recognition via real-time fusion of muscle fatigue features

    This study aims to optimize surface electromyography-based gesture recognition techniques, focusing on the impact of muscle fatigue on recognition performance. An innovative real-time analysis algorithm is proposed that extracts muscle fatigue features in real time and fuses them into the gesture recognition process. Based on self-collected data, the paper applies algorithms such as convolutional neural networks and long short-term memory networks to analyze the muscle fatigue feature extraction method in depth, and compares the impact of muscle fatigue features on the performance of surface electromyography-based gesture recognition tasks. The results show that, by fusing the muscle fatigue features in real time, the proposed algorithm improves the accuracy of gesture recognition at different fatigue levels, and the average recognition accuracy across subjects is also improved. In summary, the algorithm not only improves the adaptability and robustness of the gesture recognition system, but its research process can also provide new insights for the development of gesture recognition technology in the field of biomedical engineering. (A fatigue-indicator sketch follows this entry.)

    Release date: 2024-10-22 02:39
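
The abstract above does not specify which muscle fatigue features are extracted, so the sketch below illustrates one commonly used generic indicator, the median frequency of the sEMG power spectrum, which typically decreases as fatigue develops. The sampling rate, window length and signal here are placeholder assumptions, not the paper's features.

```python
# Median frequency of an sEMG window, a common (generic) fatigue indicator.
import numpy as np
from scipy.signal import welch

def median_frequency(emg, fs=1000):
    """Frequency that splits the sEMG power spectrum into two equal halves."""
    freqs, psd = welch(emg, fs=fs, nperseg=256)
    cumulative = np.cumsum(psd)
    return freqs[np.searchsorted(cumulative, cumulative[-1] / 2)]

rng = np.random.default_rng(0)
emg_window = rng.standard_normal(1000)           # placeholder 1 s sEMG window
print(f"median frequency ≈ {median_frequency(emg_window):.1f} Hz")
```
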
  • Establishment and test of intelligent classification method of thoracolumbar fractures based on machine vision

    Objective To develop a deep learning system for CT images to assist in the diagnosis of thoracolumbar fractures and to analyze the feasibility of its clinical application. Methods A total of 1256 CT images of thoracolumbar fractures, collected at West China Hospital of Sichuan University from January 2019 to March 2020, were annotated according to a unified standard using the LabelImg annotation tool. All CT images were classified according to the AO Spine thoracolumbar spine injury classification. The deep learning system for diagnosing type A, B and C fractures was optimized using 1039 CT images for training and validation, of which 1004 were used as the training set and 35 as the validation set; the remaining 217 CT images were used as the test set to compare the deep learning system with the clinicians' diagnoses. The deep learning system for subtyping type A fractures was optimized using 581 CT images for training and validation, of which 556 were used as the training set and 25 as the validation set; the remaining 104 CT images were used as the test set to compare the deep learning system with the clinicians' diagnoses. Results The accuracy and Kappa coefficient of the deep learning system in diagnosing type A, B and C fractures were 89.4% and 0.849 (P<0.001), respectively. The accuracy and Kappa coefficient for subtyping type A fractures were 87.5% and 0.817 (P<0.001), respectively. Conclusions The classification accuracy of the deep learning system for thoracolumbar fractures is high. This approach can be used to assist in the intelligent diagnosis of CT images of thoracolumbar fractures and improve the current manual, complex diagnostic process. (An agreement-metric sketch follows this entry.)

    Release date: 2021-11-25 03:04
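
The results above are reported as accuracy and Kappa coefficients between the system's predictions and the reference labels. The sketch below computes both metrics with scikit-learn on placeholder AO-type labels.

```python
# Accuracy and Cohen's kappa between predicted and reference AO fracture types.
from sklearn.metrics import accuracy_score, cohen_kappa_score

reference = ["A", "A", "B", "C", "B", "A", "C", "A", "B", "C"]   # reader labels
predicted = ["A", "A", "B", "C", "A", "A", "C", "A", "B", "B"]   # model output

print(f"accuracy = {accuracy_score(reference, predicted):.3f}")
print(f"kappa    = {cohen_kappa_score(reference, predicted):.3f}")
```
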
  • Research progress of breast pathology image diagnosis based on deep learning

    Breast cancer is a malignancy caused by the abnormal proliferation of breast epithelial cells, predominantly affecting female patients, and it is commonly diagnosed using histopathological images. Currently, deep learning techniques have made significant breakthroughs in medical image processing, outperforming traditional detection methods in breast cancer pathology classification tasks. This paper first reviewed the advances in applying deep learning to breast pathology images, focusing on three key areas: multi-scale feature extraction, cellular feature analysis, and classification. Next, it summarized the advantages of multimodal data fusion methods for breast pathology images. Finally, it discussed the challenges and future prospects of deep learning in breast cancer pathology image diagnosis, providing important guidance for advancing the use of deep learning in breast cancer diagnosis.
