Development and validation of a cranial ultrasound imaging-based deep learning model for periventricular-intraventricular haemorrhage detection and grading: a two-centre study.

Peng Y, Hu Z, Wen M, Deng Y, Zhao D, Yu Y, Liang W, Dai X, Wang Y

PubMed · Jul 29 2025
Periventricular-intraventricular haemorrhage (IVH) is the most prevalent type of neonatal intracranial haemorrhage. It is especially threatening to preterm infants, in whom it is associated with significant morbidity and mortality. Cranial ultrasound has become an important means of screening for periventricular IVH in infants. The integration of artificial intelligence with neonatal ultrasound is promising for enhancing diagnostic accuracy, reducing physician workload, and consequently improving periventricular IVH outcomes. This study investigated whether deep learning-based analysis of the cranial ultrasound images of infants could detect and grade periventricular IVH. This multicentre observational study included 1,060 cases and healthy controls from two hospitals. The retrospective modelling dataset encompassed 773 participants from January 2020 to July 2023, while the prospective two-centre validation dataset included 287 participants from August 2023 to January 2024. The periventricular IVH net model, a deep learning model incorporating the convolutional block attention module mechanism, was developed. The model's effectiveness was assessed by randomly dividing the retrospective data into training and validation sets, followed by independent validation with the prospective two-centre data. To evaluate the model, we measured its recall, precision, accuracy, F1-score, and area under the curve (AUC). The regions of interest (ROI) that influenced the deep learning model's detections were visualised in significance maps, and the t-distributed stochastic neighbour embedding (t-SNE) algorithm was used to visualise the clustering of model detection parameters. The final retrospective dataset included 773 participants (mean (standard deviation (SD)) gestational age, 32.7 (4.69) weeks; mean (SD) weight, 1,862.60 (855.49) g). For the retrospective data, the model's AUC was 0.99 (95% confidence interval (CI), 0.98-0.99), precision was 0.92 (0.89-0.95), recall was 0.93 (0.89-0.95), and F1-score was 0.93 (0.90-0.95). For the prospective two-centre validation data, the model's AUC was 0.961 (95% CI, 0.94-0.98) and accuracy was 0.89 (95% CI, 0.86-0.92). The two-centre prospective validation results of the periventricular IVH net model demonstrated its tremendous potential for paediatric clinical applications. Combining artificial intelligence with paediatric ultrasound can enhance the accuracy and efficiency of periventricular IVH diagnosis, especially in primary or community hospitals.
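
As a concrete illustration of the attention mechanism named in this abstract, the sketch below implements a generic CBAM-style block (channel attention followed by spatial attention) in PyTorch. It is not the authors' periventricular IVH net; all module names, channel counts, and tensor shapes are illustrative placeholders.

```python
# Minimal, generic CBAM-style block (channel + spatial attention) in PyTorch.
# Illustrative only; not the authors' "periventricular IVH net" implementation.
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
        )

    def forward(self, x):
        b, c, _, _ = x.shape
        avg = self.mlp(x.mean(dim=(2, 3)))   # global average pooling
        mx = self.mlp(x.amax(dim=(2, 3)))    # global max pooling
        return torch.sigmoid(avg + mx).view(b, c, 1, 1)

class SpatialAttention(nn.Module):
    def __init__(self, kernel_size: int = 7):
        super().__init__()
        self.conv = nn.Conv2d(2, 1, kernel_size, padding=kernel_size // 2)

    def forward(self, x):
        avg = x.mean(dim=1, keepdim=True)
        mx = x.amax(dim=1, keepdim=True)
        return torch.sigmoid(self.conv(torch.cat([avg, mx], dim=1)))

class CBAMBlock(nn.Module):
    def __init__(self, channels: int):
        super().__init__()
        self.ca = ChannelAttention(channels)
        self.sa = SpatialAttention()

    def forward(self, x):
        x = x * self.ca(x)   # reweight channels
        x = x * self.sa(x)   # reweight spatial locations
        return x

# Example: refine a feature map from an ultrasound-image backbone.
features = torch.randn(2, 64, 56, 56)
refined = CBAMBlock(64)(features)   # same shape as the input
```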

Diagnosis of Major Depressive Disorder Based on Multi-Granularity Brain Networks Fusion.

Zhou M, Mi R, Zhao A, Wen X, Niu Y, Wu X, Dong Y, Xu Y, Li Y, Xiang J

PubMed · Jul 29 2025
Major Depressive Disorder (MDD) is a common mental disorder, and making an early and accurate diagnosis is crucial for effective treatment. Functional Connectivity Networks (FCNs) constructed from functional Magnetic Resonance Imaging (fMRI) have demonstrated the potential to reveal the mechanisms underlying brain abnormalities. Deep learning has been widely employed to extract features from FCNs, but existing methods typically operate directly on the networks, failing to fully exploit their deep information. Although graph coarsening techniques offer certain advantages in extracting the brain's complex structure, they may also result in the loss of critical information. To address this issue, we propose the Multi-Granularity Brain Networks Fusion (MGBNF) framework. MGBNF models brain networks through multi-granularity analysis and constructs combinatorial modules to enhance feature extraction. Finally, the Constrained Attention Pooling (CAP) mechanism is employed to achieve the effective integration of multi-channel features. In the feature extraction stage, a parameter-sharing mechanism is introduced and applied to multiple channels to capture similar connectivity patterns between different channels while reducing the number of parameters. We validate the effectiveness of the MGBNF model on multiple classification tasks and various brain atlases. The results demonstrate that MGBNF outperforms baseline models in terms of classification performance. Ablation experiments further validate its effectiveness. In addition, we conducted a thorough analysis of the variability of different subtypes of MDD through multiple classification tasks, and the results support further clinical applications.
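
For readers unfamiliar with the input representation that frameworks like MGBNF operate on, the snippet below shows how a functional connectivity network is commonly built from ROI-averaged fMRI time series via Pearson correlation, plus a crude thresholded "coarser" view. It is a generic sketch with synthetic data, not the authors' pipeline.

```python
# Illustrative sketch: build a functional connectivity network (FCN) from
# ROI-averaged fMRI time series via Pearson correlation. Shapes are made up.
import numpy as np

rng = np.random.default_rng(0)
n_rois, n_timepoints = 90, 200                # hypothetical atlas size and scan length
timeseries = rng.standard_normal((n_rois, n_timepoints))

fcn = np.corrcoef(timeseries)                 # (n_rois, n_rois) correlation matrix
np.fill_diagonal(fcn, 0.0)                    # drop self-connections

# A crude "coarser granularity" view: keep only the strongest connections,
# mimicking the idea of analysing the network at multiple sparsity levels.
threshold = np.quantile(np.abs(fcn), 0.9)
coarse_fcn = np.where(np.abs(fcn) >= threshold, fcn, 0.0)

print(fcn.shape, np.count_nonzero(coarse_fcn))
```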

Radiomics, machine learning, and deep learning for hippocampal sclerosis identification: a systematic review and diagnostic meta-analysis.

Baptista JM, Brenner LO, Koga JV, Ohannesian VA, Ito LA, Nabarro PH, Santos LP, Henrique A, de Oliveira Almeida G, Berbet LU, Paranhos T, Nespoli V, Bertani R

PubMed · Jul 29 2025
Hippocampal sclerosis (HS) is the primary pathological finding in temporal lobe epilepsy (TLE) and a common cause of refractory seizures. Conventional diagnostic methods, such as EEG and MRI, have limitations. Artificial intelligence (AI) and radiomics, utilizing machine learning and deep learning, offer a non-invasive approach to enhance diagnostic accuracy. This study synthesized recent AI and radiomics research to improve HS detection in TLE. PubMed/Medline, Embase, and Web of Science were systematically searched following PRISMA-DTA guidelines until May 2024. Statistical analysis was conducted using STATA 14. A bivariate model was used to pool sensitivity (SEN) and specificity (SPE) for HS detection, with I² assessing heterogeneity. Six studies were included. The pooled sensitivity and specificity of AI-based models for HS detection in medial temporal lobe epilepsy (MTLE) were 0.91 (95% CI: 0.83-0.96; I² = 71.48%) and 0.90 (95% CI: 0.83-0.94; I² = 69.62%), with an AUC of 0.96. AI alone showed higher sensitivity (0.92) and specificity (0.93) than AI combined with radiomics (sensitivity: 0.88; specificity: 0.90). Among algorithms, support vector machine (SVM) had the highest performance (SEN: 0.92; SPE: 0.95), followed by convolutional neural networks (CNN) and logistic regression (LR). AI models, particularly SVM, demonstrate high accuracy in detecting HS, with AI alone outperforming its combination with radiomics. These findings support the integration of AI into non-invasive diagnostic workflows, potentially enabling earlier detection and more personalized clinical decision-making in epilepsy care, ultimately contributing to improved patient outcomes and behavioral management.
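
The pooling arithmetic behind such diagnostic meta-analyses can be sketched as follows. The study uses a bivariate random-effects model, which jointly models sensitivity and specificity; the simplified univariate example below only illustrates inverse-variance pooling of sensitivity on the logit scale, with made-up 2x2 counts.

```python
# Simplified illustration of pooling sensitivity on the logit scale with
# inverse-variance weights. Hypothetical per-study counts, not data from the review.
import numpy as np

tp = np.array([45, 30, 60, 25, 80, 38])   # true positives per study (made up)
fn = np.array([5, 4, 7, 2, 9, 3])         # false negatives per study (made up)

sens = tp / (tp + fn)
logit = np.log(sens / (1 - sens))
var_logit = 1 / tp + 1 / fn               # approximate variance of logit(sensitivity)

weights = 1 / var_logit                   # fixed-effect inverse-variance weights
pooled_logit = np.sum(weights * logit) / np.sum(weights)
pooled_sens = 1 / (1 + np.exp(-pooled_logit))

print(f"pooled sensitivity: {pooled_sens:.3f}")
```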

Machine learning-based MRI imaging for prostate cancer diagnosis: systematic review and meta-analysis.

Zhao Y, Zhang L, Zhang S, Li J, Shi K, Yao D, Li Q, Zhang T, Xu L, Geng L, Sun Y, Wan J

PubMed · Jul 28 2025
This study aims to evaluate the diagnostic value of machine learning-based MRI imaging in differentiating benign and malignant prostate cancer and detecting clinically significant prostate cancer (csPCa, defined as Gleason score ≥7) using systematic review and meta-analysis methods. Electronic databases (PubMed, Web of Science, Cochrane Library, and Embase) were systematically searched for predictive studies using machine learning-based MRI imaging for prostate cancer diagnosis. Sensitivity, specificity, and area under the curve (AUC) were used to assess the diagnostic accuracy of machine learning-based MRI imaging for both benign/malignant prostate cancer and csPCa. A total of 12 studies met the inclusion criteria, with 3474 patients included in the meta-analysis. Machine learning-based MRI imaging demonstrated good diagnostic value for both benign/malignant prostate cancer and csPCa. The pooled sensitivity and specificity for diagnosing benign/malignant prostate cancer were 0.92 (95% CI: 0.83-0.97) and 0.90 (95% CI: 0.68-0.97), respectively, with a combined AUC of 0.96 (95% CI: 0.94-0.98). For csPCa diagnosis, the pooled sensitivity and specificity were 0.83 (95% CI: 0.77-0.87) and 0.73 (95% CI: 0.65-0.81), respectively, with a combined AUC of 0.86 (95% CI: 0.83-0.89). Machine learning-based MRI imaging shows good diagnostic accuracy for both benign/malignant prostate cancer and csPCa. Further in-depth studies are needed to validate these findings.
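
For context, the per-study metrics that feed such a meta-analysis (sensitivity, specificity, AUC) are computed from a classifier's predictions roughly as sketched below, here with synthetic data and a plain logistic regression standing in for an MRI-based model.

```python
# How sensitivity, specificity, and AUC are typically computed for a single
# diagnostic model. Synthetic data; not any study's actual classifier.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import confusion_matrix, roc_auc_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=400, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
probs = clf.predict_proba(X_te)[:, 1]
preds = (probs >= 0.5).astype(int)

tn, fp, fn, tp = confusion_matrix(y_te, preds).ravel()
sensitivity = tp / (tp + fn)
specificity = tn / (tn + fp)
auc = roc_auc_score(y_te, probs)
print(f"sens={sensitivity:.2f} spec={specificity:.2f} AUC={auc:.2f}")
```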

Continual learning in medical image analysis: A comprehensive review of recent advancements and future prospects.

Kumari P, Chauhan J, Bozorgpour A, Huang B, Azad R, Merhof D

PubMed · Jul 28 2025
Medical image analysis has witnessed remarkable advancements, even surpassing human-level performance in recent years, driven by the rapid development of advanced deep-learning algorithms. However, when the inference dataset slightly differs from what the model has seen during one-time training, the model performance is greatly compromised. This situation requires restarting the training process using both the old and the new data, which is computationally costly, does not align with the human learning process, and imposes storage constraints and raises privacy concerns. Alternatively, continual learning has emerged as a crucial approach for developing unified and sustainable deep models to deal with new classes, tasks, and the drifting nature of data in non-stationary environments for various application areas. Continual learning techniques enable models to adapt and accumulate knowledge over time, which is essential for maintaining performance on evolving datasets and novel tasks. Owing to its popularity and promising performance, continual learning is an active and emerging research topic in the medical field and hence demands a survey and taxonomy to clarify the current research landscape of continual learning in medical image analysis. This systematic review paper provides a comprehensive overview of the state-of-the-art in continual learning techniques applied to medical image analysis. We present an extensive survey of existing research, covering topics including catastrophic forgetting, data drifts, stability, and plasticity requirements. Further, an in-depth discussion of key components of a continual learning framework, such as continual learning scenarios, techniques, evaluation schemes, and metrics, is provided. Continual learning techniques encompass various categories, including rehearsal, regularization, architectural, and hybrid strategies. We assess the popularity and applicability of continual learning categories in various medical sub-fields like radiology and histopathology. Our exploration considers unique challenges in the medical domain, including costly data annotation, temporal drift, and the crucial need for benchmarking datasets to ensure consistent model evaluation. The paper also addresses current challenges and looks ahead to potential future research directions.
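
Of the continual learning categories surveyed (rehearsal, regularization, architectural, hybrid), the rehearsal strategy is the simplest to sketch: keep a small reservoir-sampled buffer of past examples and replay them alongside new data. The snippet below is a generic illustration, not tied to any specific method in the review.

```python
# Minimal sketch of the rehearsal (replay) strategy: a reservoir-sampled buffer
# of past examples mixed into each new task's training batches.
import random

class ReplayBuffer:
    def __init__(self, capacity: int):
        self.capacity = capacity
        self.data = []          # stored (example, label) pairs
        self.seen = 0           # total examples observed so far

    def add(self, example, label):
        """Reservoir sampling: every seen example has equal retention probability."""
        self.seen += 1
        if len(self.data) < self.capacity:
            self.data.append((example, label))
        else:
            j = random.randrange(self.seen)
            if j < self.capacity:
                self.data[j] = (example, label)

    def sample(self, k: int):
        return random.sample(self.data, min(k, len(self.data)))

buffer = ReplayBuffer(capacity=200)
for i in range(1000):                    # stream from "task 1"
    buffer.add(f"image_{i}", i % 2)
replay_batch = buffer.sample(16)         # mixed into training on "task 2"
print(len(buffer.data), len(replay_batch))
```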

Constructing a predictive model for children with autism spectrum disorder based on whole brain magnetic resonance radiomics: a machine learning study.

Chen X, Peng J, Zhang Z, Song Q, Li D, Zhai G, Fu W, Shu Z

PubMed · Jul 28 2025
Autism spectrum disorder (ASD) diagnosis remains challenging and could benefit from objective imaging-based approaches. This study aimed to construct a prediction model using whole-brain imaging radiomics and machine learning to identify children with ASD. We analyzed 223 subjects (120 with ASD) from the ABIDE database, randomly divided into training and test sets (7:3 ratio), and an independent external test set of 87 participants from Georgetown University and University of Miami. Radiomics features were extracted from white matter, gray matter, and cerebrospinal fluid from whole-brain MR images. After feature dimensionality reduction, we screened clinical predictors using multivariate logistic regression and combined them with radiomics signatures to build machine learning models. Model performance was evaluated using ROC curves and by stratifying subjects into risk subgroups. Radiomics markers achieved AUCs of 0.78, 0.75, and 0.74 in training, test, and external test sets, respectively. Verbal intelligence quotient (VIQ) emerged as a significant ASD predictor. The decision tree algorithm with radiomics markers performed best, with AUCs of 0.87, 0.84, and 0.83; sensitivities of 0.89, 0.84, and 0.86; and specificities of 0.70, 0.63, and 0.66 in the three datasets, respectively. Risk stratification using a cut-off value of 0.4285 showed significant differences in ASD prevalence between subgroups across all datasets (training: χ²=21.325; test: χ²=5.379; external test: χ²=21.52, P<0.05). A radiomics signature based on whole-brain MRI features can effectively identify ASD, with performance enhanced by incorporating VIQ data and using a decision tree algorithm, providing a potential adaptive strategy for clinical practice. ASD = Autism Spectrum Disorder; MRI = Magnetic Resonance Imaging; SVM = support vector machine; KNN = K-nearest neighbor; VIQ = Verbal intelligence quotient; FIQ = Full-Scale intelligence quotient; ROC = Receiver Operating Characteristic; AUC = Area Under the Curve.
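
The two analysis steps described here, a decision-tree classifier followed by chi-square comparison of risk subgroups split at a probability cut-off, can be sketched generically as below. The data are synthetic; only the 0.4285 cut-off is taken from the abstract.

```python
# Generic sketch: (1) decision-tree classifier on radiomics-style features,
# (2) chi-square comparison of outcome prevalence between risk subgroups.
import numpy as np
from scipy.stats import chi2_contingency
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=300, n_features=30, random_state=1)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=1)

tree = DecisionTreeClassifier(max_depth=4, random_state=1).fit(X_tr, y_tr)
risk = tree.predict_proba(X_te)[:, 1]

high = risk >= 0.4285                     # cut-off value reported in the abstract
table = np.array([
    [np.sum(y_te[high] == 1), np.sum(y_te[high] == 0)],     # high-risk group
    [np.sum(y_te[~high] == 1), np.sum(y_te[~high] == 0)],   # low-risk group
])
chi2, p, _, _ = chi2_contingency(table)
print(f"chi2={chi2:.2f}, p={p:.4f}")
```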

Harnessing infrared thermography and multi-convolutional neural networks for early breast cancer detection.

Attallah O

PubMed · Jul 28 2025
Breast cancer is a relatively common carcinoma among women worldwide and remains a considerable public health concern. Consequently, the prompt identification of cancer is crucial, as research indicates that 96% of cancers are treatable if diagnosed prior to metastasis. Despite being considered the gold standard for breast cancer evaluation, conventional mammography possesses inherent drawbacks, including accessibility issues, especially in rural regions, and discomfort associated with the procedure. Therefore, there has been a surge in interest in non-invasive, radiation-free alternative diagnostic techniques, such as thermal imaging (thermography). Thermography employs infrared thermal sensors to capture and assess temperature maps of human breasts for the identification of potential tumours based on areas of thermal irregularity. This study proposes an advanced computer-aided diagnosis (CAD) system called Thermo-CAD to assess early breast cancer detection using thermal imaging, aimed at assisting radiologists. The CAD system employs a variety of deep learning techniques, specifically incorporating multiple convolutional neural networks (CNNs) to enhance diagnostic accuracy and reliability. To effectively integrate multiple deep features and diminish the dimensionality of features derived from each CNN, feature transformation and selection methods, including non-negative matrix factorization and Relief-F, are used, leading to a reduction in classification complexity. The Thermo-CAD system is assessed utilising two datasets: the DMR-IR (Database for Mastology Research Infrared Images) for distinguishing between normal and abnormal breast tissue, and a novel thermography dataset for distinguishing abnormal instances as benign or malignant. Thermo-CAD has proven to be an outstanding CAD system for thermographic breast cancer detection, attaining 100% accuracy on the DMR-IR dataset (normal versus abnormal breast tissue) using CSVM and MGSVM classifiers, and lower accuracy using LSVM and QSVM classifiers. However, it showed a lower ability to distinguish benign from malignant cases (the second dataset), achieving an accuracy of 79.3% using CSVM. Yet, it remains a promising tool for early-stage cancer detection, especially in resource-constrained environments.
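
The feature-reduction and classification stage described above can be sketched as follows: non-negative matrix factorisation compresses the fused deep features before an SVM classifies them. Random non-negative values stand in for CNN activations, and the Relief-F step is omitted since it requires a third-party package; this is not the Thermo-CAD implementation.

```python
# Sketch of NMF dimensionality reduction followed by SVM classification.
# Random non-negative "deep features" stand in for fused CNN activations.
import numpy as np
from sklearn.decomposition import NMF
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

rng = np.random.default_rng(0)
deep_features = rng.random((200, 512))        # fused CNN features (non-negative)
labels = rng.integers(0, 2, size=200)         # 0 = normal, 1 = abnormal (random here)

X_tr, X_te, y_tr, y_te = train_test_split(
    deep_features, labels, test_size=0.3, random_state=0)

nmf = NMF(n_components=32, init="nndsvda", max_iter=500, random_state=0)
X_tr_red = nmf.fit_transform(X_tr)            # compressed training features
X_te_red = nmf.transform(X_te)                # compressed test features

svm = SVC(kernel="rbf").fit(X_tr_red, y_tr)   # the paper's CSVM/MGSVM/LSVM/QSVM are SVM kernel variants
print("test accuracy:", svm.score(X_te_red, y_te))
```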

Harnessing deep learning to optimize induction chemotherapy choices in nasopharyngeal carcinoma.

Chen ZH, Han X, Lin L, Lin GY, Li B, Kou J, Wu CF, Ai XL, Zhou GQ, Gao MY, Lu LJ, Sun Y

PubMed · Jul 28 2025
Currently, there is no guidance for personalized choice of induction chemotherapy (IC) regimens (TPF, docetaxel + cisplatin + 5-Fu; or GP, gemcitabine + cisplatin) for locoregionally advanced nasopharyngeal carcinoma (LA-NPC). This study aimed to develop deep learning models for IC response prediction in LA-NPC. For 1438 LA-NPC patients, pretreatment magnetic resonance imaging (MRI) scans and complete biological response (cBR) information after 3 cycles of IC were collected from two centers. All models were trained in 969 patients (TPF: 548, GP: 421), and internally validated in 243 patients (TPF: 138, GP: 105), then tested on an internal dataset of 226 patients (TPF: 125, GP: 101). MRI models for the TPF and GP cohorts were constructed to predict cBR from MRI using radiomics and a graph convolutional network (GCN). The MRI-Clinical models were built based on both MRI and clinical parameters. The MRI models and MRI-Clinical models achieved high discriminative accuracy in both the TPF cohort (MRI model: AUC, 0.835; MRI-Clinical model: AUC, 0.838) and the GP cohort (MRI model: AUC, 0.764; MRI-Clinical model: AUC, 0.777). The MRI-Clinical models also showed good performance in risk stratification. Survival curves revealed that the 3-year disease-free survival of the high-sensitivity group was better than that of the low-sensitivity group in both the TPF and GP cohorts. An online tool guiding personalized choice of IC regimen was developed based on the MRI-Clinical models. Our radiomics and GCN-based IC response prediction tool has robust predictive performance and may provide guidance for personalized treatment.
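
A minimal example of the graph-convolution building block referenced here (a single Kipf & Welling style layer with a symmetrically normalised adjacency matrix) is sketched below in PyTorch. It is generic and not the authors' architecture; the graph and feature sizes are made up.

```python
# Minimal single graph-convolution layer with symmetric normalisation,
# D^-1/2 (A + I) D^-1/2 X W, as used generically by GCN models.
import torch
import torch.nn as nn

class GCNLayer(nn.Module):
    def __init__(self, in_dim: int, out_dim: int):
        super().__init__()
        self.linear = nn.Linear(in_dim, out_dim)

    def forward(self, x, adj):
        # x: (n_nodes, in_dim), adj: (n_nodes, n_nodes) binary adjacency
        a_hat = adj + torch.eye(adj.size(0))          # add self-loops
        deg = a_hat.sum(dim=1)
        d_inv_sqrt = torch.diag(deg.pow(-0.5))
        norm_adj = d_inv_sqrt @ a_hat @ d_inv_sqrt    # symmetric normalisation
        return torch.relu(self.linear(norm_adj @ x))

# Example: 8 nodes (e.g. image regions) with 16-dimensional features each.
adj = (torch.rand(8, 8) > 0.6).float()
adj = ((adj + adj.t()) > 0).float()                   # make the graph undirected
x = torch.randn(8, 16)
out = GCNLayer(16, 32)(x, adj)
print(out.shape)                                      # torch.Size([8, 32])
```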

Prediction of 1p/19q state in glioma by integrated deep learning method based on MRI radiomics.

Li F, Li Z, Xu H, Kong G, Zhang Z, Cheng K, Gu L, Hua L

PubMed · Jul 28 2025
To predict the 1p/19q molecular status of lower-grade glioma (LGG) patients nondestructively, this study developed a deep learning (DL) approach using radiomics to provide a potential decision aid for clinical determination of molecular stratification of LGG. The study retrospectively collected images and clinical data of 218 patients diagnosed with LGG between July 2018 and July 2022, including 155 cases from The Cancer Imaging Archive (TCIA) database and 63 cases from a regional medical centre. Patients' clinical data and MRI images were collected, including contrast-enhanced T1-weighted images and T2-weighted images. After pre-processing the image data, tumour regions of interest (ROI) were segmented by two senior neurosurgeons. In this study, an Ensemble Convolutional Neural Network (ECNN) was proposed to predict the 1p/19q status. This method, consisting of a Variational Autoencoder (VAE), Information Gain (IG) and a Convolutional Neural Network (CNN), was compared with four machine learning algorithms (Random Forest, Decision Tree, K-Nearest Neighbour, Gaussian Naïve Bayes). Fivefold cross-validation was used to evaluate and calibrate the model. Precision, recall, accuracy, F1 score and area under the curve (AUC) were calculated to assess model performance. Our cohort comprised 118 patients diagnosed with 1p/19q codeletion and 100 patients diagnosed with 1p/19q non-codeletion. The study findings indicate that the ECNN method demonstrates excellent predictive performance on the validation dataset. Our model achieved an average precision of 0.981, average recall of 0.980, average F1-score of 0.981, and average accuracy of 0.981. The average area under the curve (AUC) for our model is 0.994, surpassing that of the other four traditional machine learning algorithms (AUC: 0.523-0.702). This suggests that the model based on the ECNN algorithm performs well in distinguishing the 1p/19q molecular status of LGG patients. The deep learning model based on conventional MRI radiomics integrates VAE and IG methods. Compared with traditional machine learning algorithms, it shows the best performance in the prediction of 1p/19q molecular co-deletion status. It may become an effective tool for non-invasively identifying molecular features of lower-grade glioma in the future, providing an important reference for clinicians to formulate individualized diagnosis and treatment plans.
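
The information gain (IG) component of the ECNN can be illustrated in isolation with a mutual-information feature ranking, as below. Synthetic features are used, and the VAE and CNN stages are not shown; this is not the authors' implementation.

```python
# Sketch of an information-gain-style feature-selection step, using mutual
# information as the ranking criterion via scikit-learn. Synthetic features only.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.feature_selection import mutual_info_classif

X, y = make_classification(
    n_samples=218, n_features=100, n_informative=10, random_state=0)

ig_scores = mutual_info_classif(X, y, random_state=0)   # one score per feature
top_k = 20
selected = np.argsort(ig_scores)[::-1][:top_k]           # indices of the top-k features
X_selected = X[:, selected]                               # reduced feature matrix

print("top-5 feature indices:", selected[:5])
print("reduced shape:", X_selected.shape)
```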

A radiomics-based interpretable model integrating delayed-phase CT and clinical features for predicting the pathological grade of appendiceal pseudomyxoma peritonei.

Bai D, Shi G, Liang Y, Li F, Zheng Z, Wang Z

PubMed · Jul 28 2025
This study aimed to develop an interpretable machine learning model integrating delayed-phase contrast-enhanced CT radiomics with clinical features for noninvasive prediction of pathological grading in appendiceal pseudomyxoma peritonei (PMP), using Shapley Additive Explanations (SHAP) for model interpretation. This retrospective study analyzed 158 pathologically confirmed PMP cases (85 low-grade, 73 high-grade) from January 4, 2015 to April 30, 2024. Comprehensive clinical data including demographic characteristics, serum tumor markers (CEA, CA19-9, CA125, D-dimer, CA-724, CA-242), and CT-peritoneal cancer index (CT-PCI) were collected. Radiomics features were extracted from preoperative contrast-enhanced CT scans using standardized protocols. After rigorous feature selection and five-fold cross-validation, we developed three predictive models: clinical-only, radiomics-only, and a combined clinical-radiomics model using logistic regression. Model performance was evaluated through ROC analysis (AUC), the Delong test, decision curve analysis (DCA), and the Brier score, with SHAP values providing interpretability. The combined model demonstrated superior performance, achieving AUCs of 0.91 (95% CI: 0.86-0.95) and 0.88 (95% CI: 0.82-0.93) in the training and testing sets respectively, significantly outperforming the standalone models (P < 0.05). DCA confirmed greater clinical utility across most threshold probabilities, with favorable Brier scores (training: 0.124; testing: 0.142) indicating excellent calibration. SHAP analysis identified the top predictive features: wavelet-LHH_glcm_InverseVariance (radiomics), original_shape_Elongation (radiomics), and CA19-9 (clinical). Our SHAP-interpretable combined model provides an accurate, noninvasive tool for PMP grading, facilitating personalized treatment decisions. The integration of radiomics and clinical data demonstrates superior predictive performance compared to conventional approaches, with potential to improve patient outcomes.
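
The SHAP interpretation step applied to the combined logistic-regression model can be sketched as below, assuming the third-party shap package is installed. Data and feature names are placeholders that loosely echo the reported top features, not the study's actual variables.

```python
# Sketch of SHAP interpretation for a logistic-regression model. Synthetic data
# and placeholder feature names; assumes the `shap` package is available.
import numpy as np
import pandas as pd
import shap
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 158
X = pd.DataFrame({
    "wavelet_glcm_feature": rng.normal(size=n),   # radiomics feature (placeholder)
    "shape_elongation": rng.normal(size=n),       # radiomics feature (placeholder)
    "ca19_9": rng.normal(size=n),                 # clinical marker (placeholder)
})
y = (0.8 * X["wavelet_glcm_feature"] - 0.5 * X["ca19_9"]
     + rng.normal(scale=0.5, size=n) > 0).astype(int)

model = LogisticRegression().fit(X, y)

explainer = shap.Explainer(model, X)     # dispatches to a linear explainer here
shap_values = explainer(X)

# Mean absolute SHAP value per feature gives a global importance ranking.
importance = np.abs(shap_values.values).mean(axis=0)
for name, val in sorted(zip(X.columns, importance), key=lambda t: -t[1]):
    print(f"{name}: {val:.3f}")
```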