Emotion recognition refers to the process of determining and identifying an individual's current emotional state by analyzing signals such as voice, facial expressions, and physiological indicators. Using electroencephalogram (EEG) signals and virtual reality (VR) technology for emotion recognition research helps to better understand human emotional changes, enabling applications in areas such as psychological therapy, education, and training that enhance people's quality of life. However, there is a lack of comprehensive review literature summarizing research that combines EEG signals and VR environments for emotion recognition. Therefore, this paper summarizes and synthesizes relevant research from the past five years. First, it introduces the relevant theories of VR and EEG-based emotion recognition. Second, it analyzes emotion induction, feature extraction, and classification methods for emotion recognition using EEG signals within VR environments. The article concludes by summarizing the application directions of this research and providing an outlook on future development trends, aiming to serve as a reference for researchers in related fields.
With the development of artificial intelligence (AI) technology, great progress has been made in its application in the medical field. While foreign journals have published a large number of papers on the application of AI in epilepsy, there is a dearth of such studies in domestic journals. To understand the global research progress and development trends of AI applications in epilepsy, a total of 895 papers on AI applications in epilepsy indexed in the Web of Science Core Collection and published before December 31, 2022 were selected for analysis. The annual number of papers and their citation counts, the most prolific authors, institutions and countries, and their cooperative relationships were analyzed, and the research hotspots and future trends in this field were explored using bibliometric and other methods. The results showed that before 2016 the annual number of papers on the application of AI in epilepsy increased slowly, whereas after 2017 the number of publications increased rapidly. The United States had the largest number of papers (n=273), followed by China (n=195). The institution with the largest number of papers was the University of London (n=36), and Capital Medical University in China had 23 papers. The author with the most published papers was Gregory Worrell (n=14), and the most prolific scholar in China was Guo Jiayan from Xiamen University (n=7). The application of machine learning in the diagnosis and treatment of epilepsy was an early research focus in this field, while seizure prediction models based on EEG feature extraction, the application of deep learning (especially convolutional neural networks) in epilepsy diagnosis, and the application of cloud computing in epilepsy healthcare are the current research priorities. AI-based EEG feature extraction, the application of deep learning in the diagnosis and treatment of epilepsy, and the use of the Internet of Things to solve epilepsy-related health problems are future research directions in this field.
In response to the problem that traditional lower limb rehabilitation scale assessment is time-consuming and difficult to use in exoskeleton rehabilitation training, this paper proposes a quantitative assessment method for lower limb walking ability based on lower limb exoskeleton robot training with multimodal synergistic information fusion. The method significantly improves the efficiency and reliability of the rehabilitation assessment process by introducing quantitative synergy indicators that fuse electrophysiological and kinematic information. First, electromyographic and kinematic data of the lower extremity were collected from subjects walking while wearing an exoskeleton during training. Then, based on muscle synergy theory, a synergy quantification algorithm was used to construct synergy index features of electromyography and kinematics. Finally, the electrophysiological and kinematic information was fused to build a modal feature fusion model that outputs a lower limb motor function score. The experimental results showed that the correlation coefficients of the constructed electromyographic and kinematic synergy features with the clinical scale were 0.799 and 0.825, respectively. Fusing the synergy features in a K-nearest neighbor (KNN) model yielded a higher correlation coefficient (r = 0.921, P < 0.01). This method can adjust the rehabilitation training mode of the exoskeleton robot according to the assessment results, which provides a basis for a synchronized “human in the loop” assessment-training mode and offers a potential approach for remote rehabilitation training and assessment of the lower extremity.
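The following is a minimal, illustrative sketch (not the authors' code) of the general pipeline the abstract describes: a muscle-synergy-style index is computed from non-negative matrix factorization (NMF) of EMG and kinematic envelopes, and a KNN regressor maps the fused indices to a clinical score. All data, dimensions, and the specific synergy index (variance accounted for) are assumptions for illustration only.

```python
# Illustrative sketch: NMF-based synergy index + KNN score regression.
# Synthetic stand-in data; not the paper's dataset or exact features.
import numpy as np
from scipy.stats import pearsonr
from sklearn.decomposition import NMF
from sklearn.model_selection import cross_val_predict
from sklearn.neighbors import KNeighborsRegressor

rng = np.random.default_rng(0)
n_subjects, n_emg_ch, n_kin_ch, n_samples, n_synergies = 30, 8, 6, 500, 4

def synergy_index(envelope, n_synergies):
    """Factorize a non-negative envelope (channels x time) with NMF and
    return the variance accounted for (VAF) as a simple synergy index."""
    nmf = NMF(n_components=n_synergies, init="nndsvda", max_iter=500)
    W = nmf.fit_transform(envelope)       # synergy weights
    H = nmf.components_                   # activation profiles
    recon = W @ H
    return 1.0 - np.sum((envelope - recon) ** 2) / np.sum(envelope ** 2)

# Synthetic rectified EMG envelopes, joint-angle trajectories, clinical scores.
emg = rng.random((n_subjects, n_emg_ch, n_samples))
kin = rng.random((n_subjects, n_kin_ch, n_samples))
scores = rng.uniform(0, 34, n_subjects)   # e.g. a lower-limb motor scale

# Fuse the two modality-level synergy indices into one feature vector.
X = np.array([[synergy_index(emg[i], n_synergies),
               synergy_index(kin[i], n_synergies)] for i in range(n_subjects)])

knn = KNeighborsRegressor(n_neighbors=3)
pred = cross_val_predict(knn, X, scores, cv=5)
r, p = pearsonr(pred, scores)
print(f"correlation of predicted vs. clinical score: r={r:.3f}, p={p:.3f}")
```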
China is one of the countries with the highest incidence of esophageal cancer in the world. Early detection, accurate diagnosis, and treatment of esophageal cancer are critical for improving patients’ prognosis and survival. Machine learning technology has become widely used in cancer research, benefiting from the accumulation of medical images and the advancement of artificial intelligence technology. Therefore, this review summarizes the learning models, image types, data types and application efficiency of current machine learning technology in esophageal cancer. The major challenges in medical image machine learning for esophageal cancer are identified, and solutions are proposed. Potential future directions of machine learning in esophageal cancer diagnosis and treatment are discussed, with a focus on the possibility of establishing a link between medical images and molecular mechanisms. On this foundation, the general rules of machine learning application in the medical field are summarized and forecast. By drawing on the advanced achievements of machine learning in other cancers and focusing on interdisciplinary cooperation, esophageal cancer research can be effectively promoted.
Sleep apnea causes cardiac arrest, sleep rhythm disorders, nocturnal hypoxia and abnormal blood pressure fluctuations in patients, which eventually lead to nocturnal target organ damage in hypertensive patients. The incidence of obstructive sleep apnea hypopnea syndrome (OSAHS) is extremely high, which seriously affects the physical and mental health of patients. This study attempts to extract features associated with OSAHS from 24-hour ambulatory blood pressure data and to identify OSAHS with machine learning models for the differential diagnosis of this disease. The study data were obtained from ambulatory blood pressure examinations of 339 patients collected in outpatient clinics of the Chinese PLA General Hospital from December 2018 to December 2019, including 115 patients with OSAHS diagnosed by polysomnography (PSG) and 224 patients without OSAHS. Based on the characteristic clinical changes of blood pressure in OSAHS patients, feature extraction rules were defined and algorithms were developed to extract features, and logistic regression and LightGBM models were then used to classify and predict the disease. The results showed that the identification accuracy of the LightGBM model trained in this study was 80.0%, precision was 82.9%, recall was 72.5%, and the area under the receiver operating characteristic curve (AUC) was 0.906. The defined ambulatory blood pressure features could be effectively used to identify OSAHS. This study provides a new idea and method for OSAHS screening.
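A minimal sketch of the classification step, assuming hypothetical ambulatory-blood-pressure-derived features and synthetic labels (this is not the study's data or feature set): LightGBM and logistic regression classifiers are trained and reported with the same metrics the abstract uses (accuracy, precision, recall, AUC).

```python
# Hedged sketch: LightGBM vs. logistic regression on synthetic ABP features.
import numpy as np
import lightgbm as lgb
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import (accuracy_score, precision_score,
                             recall_score, roc_auc_score)
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
n = 339
# Hypothetical features: nocturnal mean SBP, BP variability, dipping ratio,
# count of nocturnal BP surges, nocturnal heart-rate proxy.
X = rng.normal(size=(n, 5))
y = rng.integers(0, 2, size=n)            # 1 = OSAHS, 0 = non-OSAHS

X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.3, stratify=y, random_state=0)

models = {
    "logistic_regression": LogisticRegression(max_iter=1000),
    "lightgbm": lgb.LGBMClassifier(n_estimators=200, learning_rate=0.05),
}
for name, model in models.items():
    model.fit(X_tr, y_tr)
    prob = model.predict_proba(X_te)[:, 1]
    pred = (prob >= 0.5).astype(int)
    print(name,
          f"acc={accuracy_score(y_te, pred):.3f}",
          f"prec={precision_score(y_te, pred, zero_division=0):.3f}",
          f"rec={recall_score(y_te, pred, zero_division=0):.3f}",
          f"auc={roc_auc_score(y_te, prob):.3f}")
```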
Objective To explore the application of the Tsetlin Machine (TM) in heart beat classification. Methods TM was used to classify normal beats, premature ventricular contractions (PVC) and supraventricular premature beats (SPB) in the 2020 data set of the China Physiological Signal Challenge. This data set consisted of single-lead electrocardiogram data from 10 patients with arrhythmia. One patient with atrial fibrillation was excluded, and data from the remaining 9 patients were included in this study. The classification results were then analyzed. Results The average recognition accuracy of TM was 84.3%, and the basis of classification could be shown by the bit pattern interpretation diagram. Conclusion TM can explain its classification results when classifying heart beats. A reasonable interpretation of classification results can increase the reliability of the model and facilitate review and understanding.
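A hedged sketch of how such a multi-class Tsetlin Machine might be set up: TMs operate on Boolean inputs, so beat features must first be binarized (a simple thermometer encoding is used here). The pyTsetlinMachine API shown (MultiClassTsetlinMachine, fit, predict) is assumed from that package's documentation, and the data are synthetic stand-ins rather than the CPSC 2020 recordings.

```python
# Hedged sketch: thermometer-binarized beat features fed to a Tsetlin Machine.
import numpy as np
from pyTsetlinMachine.tm import MultiClassTsetlinMachine  # API assumed

rng = np.random.default_rng(1)
n_beats, n_features, n_levels = 600, 20, 8

raw = rng.normal(size=(n_beats, n_features))      # synthetic beat morphology features
labels = rng.integers(0, 3, size=n_beats).astype(np.uint32)  # 0=normal, 1=PVC, 2=SPB

def thermometer(x, n_levels):
    """Binarize each feature into n_levels threshold bits (Boolean input for TM)."""
    qs = np.linspace(0, 1, n_levels + 2)[1:-1]
    thresholds = np.quantile(x, qs, axis=0)       # (n_levels, n_features)
    bits = np.stack([x > thresholds[k] for k in range(n_levels)], axis=-1)
    return bits.reshape(x.shape[0], -1).astype(np.uint32)

X = thermometer(raw, n_levels)
split = int(0.8 * n_beats)

# Positional hyperparameters: number of clauses, threshold T, specificity s.
tm = MultiClassTsetlinMachine(400, 15, 3.9)
tm.fit(X[:split], labels[:split], epochs=50)
pred = tm.predict(X[split:])
print("accuracy:", np.mean(pred == labels[split:]))
```

The learned clauses are conjunctions over these input bits, which is what makes the per-class decision inspectable in the style of the bit pattern interpretation diagram mentioned above.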
As an interdisciplinary subject of medicine and artificial intelligence, intelligent diagnosis and treatment has received extensive attention in both academia and industry. Traditional Chinese medicine (TCM) is characterized by individualized syndrome differentiation and personalized treatment based on individual analysis, which causes common pattern-mining techniques from big data and artificial intelligence to perform poorly in TCM diagnosis and treatment studies. This article puts forward an intelligent diagnosis model of TCM, as well as its construction method. The model can not only obtain individualized diagnoses through active learning, but also integrate multiple machine learning models for training, so as to form a more accurate model for learning TCM. First, we used big data extraction techniques on different case sources to form a structured TCM database under a unified view. Then, taking a common pediatric disease, pneumonia with dyspnea and cough, as an example, experimental analysis on large-scale data verified that the TCM intelligent diagnosis model based on active learning is more accurate than existing machine learning methods, which may provide a new and effective machine learning model for studying TCM diagnosis and treatment.
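An illustrative sketch of the two ingredients the abstract combines, under assumed synthetic data and generic features (this is not the authors' model): pool-based active learning with uncertainty sampling, where the learner is an ensemble that integrates several machine learning models and the query step stands in for consulting the TCM expert.

```python
# Illustrative sketch: active learning loop with a soft-voting ensemble.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import GaussianNB

X, y = make_classification(n_samples=2000, n_features=30,
                           n_informative=10, random_state=0)
labeled = list(range(50))                     # small initially labeled case set
pool = [i for i in range(len(y)) if i not in labeled]

def make_ensemble():
    """Integrate several base learners into one soft-voting ensemble."""
    return VotingClassifier(
        estimators=[("lr", LogisticRegression(max_iter=1000)),
                    ("rf", RandomForestClassifier(n_estimators=100)),
                    ("nb", GaussianNB())],
        voting="soft")

for _ in range(10):
    model = make_ensemble().fit(X[labeled], y[labeled])
    prob = np.sort(model.predict_proba(X[pool]), axis=1)
    margin = prob[:, -1] - prob[:, -2]        # small margin = high uncertainty
    query = np.argsort(margin)[:20]           # cases to send to the expert
    newly = [pool[i] for i in query]
    labeled += newly                          # expert labels are added here
    pool = [i for i in pool if i not in newly]

print("final labeled training size:", len(labeled))
```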
Objective To establish a predictive model of surgical site infection (SSI) following colorectal surgery using machine learning. Methods Machine learning algorithms were used to analyze and model the colorectal data set from the Duke Infection Control Outreach Network Surveillance Network. The data set was first divided into two parts, with 80% as the training set and 20% as the testing set. To improve the training effect, the data set was then re-divided, with 90% as the training set and 10% as the testing set. The predictive results of the model were compared with the actual infected cases, and the sensitivity, specificity, positive predictive value, and negative predictive value of the model were calculated. The area under the receiver operating characteristic (ROC) curve was used to evaluate the predictive capacity of the model, and the odds ratio (OR) was calculated to test the validity of the evaluation, with a significance level of 0.05. Results There were 7 285 patients in the data set, registered from January 15th, 2015 to June 16th, 2016, among whom 234 were SSI cases, giving an SSI incidence of 3.21%. The predictive model was established by the random forest algorithm, trained on 90% of the data set and tested on the remaining 10%. The sensitivity, specificity, positive predictive value, and negative predictive value of the model were 76.9%, 59.2%, 3.3%, and 99.3%, respectively, and the area under the ROC curve was 0.767 [OR=4.84, 95% confidence interval (1.32, 17.74), P=0.02]. Conclusion The predictive model of SSI following colorectal surgery established by the random forest algorithm has the potential to realize semi-automatic monitoring of SSI, but more training data are needed to improve its predictive capacity before clinical application.
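A minimal sketch of this evaluation scheme on synthetic, class-imbalanced data standing in for the surveillance set (feature columns and the decision threshold are assumptions, not the study's): a random forest is trained on a 90/10 split and scored with the same statistics the abstract reports.

```python
# Hedged sketch: random forest SSI model with sensitivity/specificity/PPV/NPV/AUC/OR.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import confusion_matrix, roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(7)
n = 7285
X = rng.normal(size=(n, 12))                  # hypothetical perioperative features
y = (rng.random(n) < 0.0321).astype(int)      # ~3.21% SSI prevalence

X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.10, stratify=y, random_state=0)
clf = RandomForestClassifier(n_estimators=300, class_weight="balanced",
                             random_state=0).fit(X_tr, y_tr)

prob = clf.predict_proba(X_te)[:, 1]
thr = np.quantile(prob, 0.90)                 # screening-oriented threshold (illustrative)
pred = (prob >= thr).astype(int)

tn, fp, fn, tp = confusion_matrix(y_te, pred).ravel()
sens, spec = tp / (tp + fn), tn / (tn + fp)
ppv, npv = tp / (tp + fp), tn / (tn + fn)
auc = roc_auc_score(y_te, prob)
odds_ratio = (tp * tn) / (fp * fn) if fp * fn else float("inf")
print(f"sens={sens:.3f} spec={spec:.3f} ppv={ppv:.3f} npv={npv:.3f} "
      f"auc={auc:.3f} OR={odds_ratio:.2f}")
```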
Steady-state visual evoked potential (SSVEP) is one of the most commonly used control signals in brain-computer interface (BCI) systems. SSVEP-based BCIs have the advantages of a high information transfer rate and short training time, and have become an important branch of the BCI research field. In this review, the main progress on frequency recognition algorithms for SSVEP over the past five years is summarized from three aspects: unsupervised learning algorithms, supervised learning algorithms and deep learning algorithms. Finally, some frontier topics and potential directions are explored.
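As a concrete reference point for the unsupervised category, the sketch below shows classic canonical correlation analysis (CCA) frequency recognition: each candidate stimulus frequency gets a sine/cosine reference set, and the frequency whose references correlate most strongly with the multi-channel EEG epoch is selected. The EEG here is synthetic and the sampling parameters are assumptions.

```python
# Minimal sketch: CCA-based SSVEP frequency recognition on a synthetic epoch.
import numpy as np
from sklearn.cross_decomposition import CCA

fs, duration = 250, 2.0                       # sampling rate (Hz), epoch length (s)
t = np.arange(0, duration, 1 / fs)
stim_freqs = [8.0, 10.0, 12.0, 15.0]          # candidate flicker frequencies

def reference_signals(f, t, n_harmonics=2):
    """Sine/cosine reference set for frequency f and its harmonics."""
    refs = []
    for h in range(1, n_harmonics + 1):
        refs += [np.sin(2 * np.pi * h * f * t), np.cos(2 * np.pi * h * f * t)]
    return np.column_stack(refs)

def recognize(eeg, t, stim_freqs):
    """Return the stimulus frequency whose references best correlate with the EEG."""
    scores = []
    for f in stim_freqs:
        cca = CCA(n_components=1)
        u, v = cca.fit_transform(eeg, reference_signals(f, t))
        scores.append(np.corrcoef(u[:, 0], v[:, 0])[0, 1])
    return stim_freqs[int(np.argmax(scores))]

# Synthetic 8-channel epoch dominated by a 10 Hz response plus noise.
rng = np.random.default_rng(3)
eeg = (np.outer(np.sin(2 * np.pi * 10.0 * t), rng.random(8))
       + 0.5 * rng.normal(size=(t.size, 8)))
print("recognized frequency:", recognize(eeg, t, stim_freqs), "Hz")
```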
Objective To systematically review prediction models of small for gestational age (SGA) based on machine learning and provide references for the construction and optimization of such prediction models. Methods The PubMed, EMbase, Web of Science, CBM, WanFang Data, VIP and CNKI databases were electronically searched to collect studies on SGA prediction models from database inception to August 10, 2022. Two researchers independently screened the literature, extracted data, evaluated the risk of bias of the included studies, and conducted a systematic review. Results A total of 14 studies were included, comprising 40 prediction models constructed using 19 methods, such as logistic regression and random forest. Thirteen studies were assessed as having a high risk of bias; the area under the curve of the prediction models ranged from 0.561 to 0.953. Conclusion The overall risk of bias in the prediction models for SGA is high, and their predictive performance is average. Models built using extreme gradient boosting (XGBoost) demonstrated the best predictive performance across studies, and stacking can improve predictive performance by integrating different models. Maternal blood pressure, fetal abdominal circumference, head circumference, and estimated fetal weight were important predictors of SGA.
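A hedged sketch of the two modelling approaches the conclusion highlights: a single XGBoost classifier and a stacking ensemble that integrates several base learners. The predictor columns mirror those named in the conclusion, but the data are synthetic and this is not a reconstruction of any included model.

```python
# Illustrative sketch: XGBoost vs. a stacking ensemble for SGA prediction.
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestClassifier, StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from xgboost import XGBClassifier

rng = np.random.default_rng(0)
n = 1000
X = pd.DataFrame({
    "maternal_sbp": rng.normal(115, 12, n),              # mmHg
    "fetal_abdominal_circ": rng.normal(300, 25, n),      # mm
    "fetal_head_circ": rng.normal(310, 20, n),           # mm
    "estimated_fetal_weight": rng.normal(2500, 400, n),  # g
})
y = (rng.random(n) < 0.1).astype(int)                    # ~10% SGA prevalence (assumed)

xgb = XGBClassifier(n_estimators=200, max_depth=3, learning_rate=0.1,
                    eval_metric="logloss")
stack = StackingClassifier(
    estimators=[("xgb", xgb),
                ("rf", RandomForestClassifier(n_estimators=200))],
    final_estimator=LogisticRegression(max_iter=1000))

for name, model in [("xgboost", xgb), ("stacking", stack)]:
    auc = cross_val_score(model, X, y, cv=5, scoring="roc_auc").mean()
    print(f"{name}: mean cross-validated AUC = {auc:.3f}")
```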