
Predicting pulmonary hemodynamics in pediatric pulmonary arterial hypertension using cardiac magnetic resonance imaging and machine learning: an exploratory pilot study.

Chu H, Ferreira RJ, Lokhorst C, Douwes JM, Haarman MG, Willems TP, Berger RMF, Ploegstra MJ

pubmed · Jun 14 2025
Pulmonary arterial hypertension (PAH) significantly affects the pulmonary vasculature, requiring accurate estimation of mean pulmonary arterial pressure (mPAP) and pulmonary vascular resistance index (PVRi). Although cardiac catheterization is the gold standard for these measurements, it poses risks, especially in children. This pilot study explored how machine learning (ML) can predict pulmonary hemodynamics from non-invasive cardiac magnetic resonance (CMR) cine images in pediatric PAH patients. A retrospective analysis of 40 CMR studies from children with PAH using a four-fold stratified group cross-validation was conducted. The endpoints were severity profiles of mPAP and PVRi, categorised as 'low', 'high', and 'extreme'. Deep learning (DL) and traditional ML models were optimized through hyperparameter tuning. Receiver operating characteristic curves and area under the curve (AUC) were used as the primary evaluation metrics. DL models utilizing CMR cine imaging showed the best potential for predicting mPAP and PVRi severity profiles on test folds (AUC<sub>mPAP</sub>=0.82 and AUC<sub>PVRi</sub>=0.73). True positive rates (TPR) for predicting low, high, and extreme mPAP were 5/10, 11/16, and 11/14, respectively. TPR for predicting low, high, and extreme PVRi were 5/13, 14/15, and 7/12, respectively. Optimal DL models only used spatial patterns from consecutive CMR cine frames to maximize prediction performance. This exploratory pilot study demonstrates the potential of DL leveraging CMR imaging for non-invasive prediction of mPAP and PVRi in pediatric PAH. While preliminary, these findings may lay the groundwork for future advancements in CMR imaging in pediatric PAH, offering a pathway to safer disease monitoring and reduced reliance on invasive cardiac catheterization.
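As a rough illustration of the evaluation scheme described above (four-fold stratified group cross-validation with AUC as the primary metric), the sketch below uses scikit-learn on placeholder data; the feature matrix, severity labels, and patient grouping are invented stand-ins rather than the study's CMR-derived inputs, and a random-forest classifier is used only as a generic example in place of the authors' DL models.

```python
# Illustrative sketch (not the authors' code): four-fold stratified group
# cross-validation with one-vs-rest AUC over three severity classes, assuming
# tabular features have already been extracted from CMR cine frames.
import numpy as np
from sklearn.model_selection import StratifiedGroupKFold
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
X = rng.normal(size=(40, 64))        # hypothetical per-study features
y = rng.integers(0, 3, size=40)      # 0 = low, 1 = high, 2 = extreme severity
groups = np.arange(40)               # one patient per study in this toy setup

cv = StratifiedGroupKFold(n_splits=4, shuffle=True, random_state=0)
aucs = []
for train_idx, test_idx in cv.split(X, y, groups=groups):
    clf = RandomForestClassifier(n_estimators=200, random_state=0)
    clf.fit(X[train_idx], y[train_idx])
    proba = clf.predict_proba(X[test_idx])
    # One-vs-rest macro AUC over the three severity classes
    aucs.append(roc_auc_score(y[test_idx], proba, multi_class="ovr", labels=[0, 1, 2]))

print(f"mean test-fold AUC: {np.mean(aucs):.2f}")
```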

Automated quantification of T1 and T2 relaxation times in liver mpMRI using deep learning: a sequence-adaptive approach.

Zbinden L, Erb S, Catucci D, Doorenbos L, Hulbert L, Berzigotti A, Brönimann M, Ebner L, Christe A, Obmann VC, Sznitman R, Huber AT

pubmed · Jun 14 2025
To evaluate a deep learning sequence-adaptive liver multiparametric MRI (mpMRI) assessment with validation in different populations using total and segmental T1 and T2 relaxation time maps. A neural network was trained to label liver segmental parenchyma and its vessels on noncontrast T1-weighted gradient-echo Dixon in-phase acquisitions on 200 liver mpMRI examinations. Then, 120 unseen liver mpMRI examinations of patients with primary sclerosing cholangitis or healthy controls were assessed by coregistering the labels to noncontrast and contrast-enhanced T1 and T2 relaxation time maps for optimization and internal testing. The algorithm was externally tested in a segmental and total liver analysis of 65 previously unseen patients with biopsy-proven liver fibrosis and 25 healthy volunteers. Measured relaxation times were compared to manual measurements using the intraclass correlation coefficient (ICC) and the Wilcoxon test. Agreement between manual and deep learning-generated segmental areas on different T1 and T2 maps was excellent for segmental (ICC = 0.95 ± 0.1; p < 0.001) and total liver assessment (ICC = 0.97 ± 0.02; p < 0.001). The resulting median of the differences between automated and manual measurements among all testing populations and liver segments was 1.8 ms for noncontrast T1 (median 835 versus 842 ms), 2.0 ms for contrast-enhanced T1 (median 518 versus 519 ms), and 0.3 ms for T2 (median 37 versus 37 ms). Automated quantification of liver mpMRI is highly effective across different patient populations, offering excellent reliability for total and segmental T1 and T2 maps. Its scalable, sequence-adaptive design could foster comprehensive clinical decision-making. The proposed automated, sequence-adaptive algorithm for total and segmental analysis of liver mpMRI may be co-registered to any combination of parametric sequences, enabling comprehensive quantitative analysis of liver mpMRI without sequence-specific training. A deep learning-based algorithm automatically quantified segmental T1 and T2 relaxation times in liver mpMRI. The two-step approach of segmentation and co-registration allowed assessment of arbitrary sequences. The algorithm showed high agreement with manual reader quantification. No additional sequence-specific training is required to assess other parametric sequences. The DL algorithm has the potential to enhance individual liver phenotyping.
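The agreement analysis reported above (median paired differences and a Wilcoxon test between automated and manual relaxation times) can be sketched as follows on synthetic values; the measurement vectors are placeholders and the statistics will not reproduce the published numbers.

```python
# Illustrative sketch (not the study's code): comparing automated vs. manual
# relaxation-time measurements with the median of paired differences and a
# Wilcoxon signed-rank test, as described in the abstract. Values are synthetic.
import numpy as np
from scipy.stats import wilcoxon

rng = np.random.default_rng(1)
manual_t1 = rng.normal(842, 60, size=90)             # hypothetical noncontrast T1 (ms)
auto_t1 = manual_t1 + rng.normal(-1.8, 5, size=90)    # small systematic + random offset

diff = auto_t1 - manual_t1
stat, p = wilcoxon(auto_t1, manual_t1)
print(f"median automated T1: {np.median(auto_t1):.0f} ms, "
      f"median manual T1: {np.median(manual_t1):.0f} ms")
print(f"median paired difference: {np.median(diff):.1f} ms (Wilcoxon p = {p:.3f})")
```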

Optimizing stroke detection with genetic algorithm-based feature selection in deep learning models.

Nayak GS, Mallick PK, Sahu DP, Kathi A, Reddy R, Viyyapu J, Pabbisetti N, Udayana SP, Sanapathi H

pubmed · Jun 14 2025
Brain stroke is a leading cause of disability and mortality worldwide, necessitating the development of accurate and efficient diagnostic models. In this study, we explore the integration of Genetic Algorithm (GA)-based feature selection with three state-of-the-art deep learning architectures (InceptionV3, VGG19, and MobileNetV2) to enhance stroke detection from neuroimaging data. GA is employed to optimize feature selection, reducing redundancy and improving model performance. The selected features are subsequently fed into the respective deep learning models for classification. The dataset used in this study comprises neuroimages categorized into "Normal" and "Stroke" classes. Experimental results demonstrate that incorporating GA improves classification accuracy while reducing computational complexity. A comparative analysis of the three architectures reveals their effectiveness in stroke detection, with MobileNetV2 achieving the highest accuracy of 97.21%. Notably, the integration of Genetic Algorithms with MobileNetV2 for feature selection represents a novel contribution, setting this study apart from prior approaches that rely solely on traditional CNN pipelines. Owing to its lightweight design and low computational demands, MobileNetV2 also offers significant advantages for real-time clinical deployment, making it highly applicable for use in emergency care settings where rapid diagnosis is critical. Additionally, performance metrics such as precision, recall, F1-score, and Receiver Operating Characteristic (ROC) curves are evaluated to provide comprehensive insights into model efficacy. This research underscores the potential of genetic algorithm-driven optimization in enhancing deep learning-based medical image classification, paving the way for more efficient and reliable stroke diagnosis.
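To make the GA-based selection step concrete, here is a minimal sketch of a genetic algorithm searching over binary feature masks, assuming deep features have already been extracted by a backbone such as MobileNetV2; the feature matrix, labels, GA settings, and the logistic-regression fitness classifier are illustrative choices, not the authors' pipeline.

```python
# Minimal sketch of genetic-algorithm feature selection (not the authors' code).
# It assumes a feature matrix X has already been extracted by a CNN backbone;
# here X and y are random placeholders.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 128))          # hypothetical deep features
y = rng.integers(0, 2, size=200)         # 0 = Normal, 1 = Stroke (toy labels)

def fitness(mask):
    """Cross-validated accuracy of a classifier on the selected features."""
    if mask.sum() == 0:
        return 0.0
    clf = LogisticRegression(max_iter=1000)
    return cross_val_score(clf, X[:, mask.astype(bool)], y, cv=3).mean()

pop_size, n_gen, mut_rate = 20, 10, 0.02
population = rng.integers(0, 2, size=(pop_size, X.shape[1]))

for _ in range(n_gen):
    scores = np.array([fitness(ind) for ind in population])
    # Tournament selection: keep the better of two randomly drawn individuals
    parents = np.array([
        population[max(rng.choice(pop_size, 2, replace=False), key=lambda i: scores[i])]
        for _ in range(pop_size)
    ])
    # One-point crossover between consecutive parent pairs
    children = parents.copy()
    for i in range(0, pop_size - 1, 2):
        cut = rng.integers(1, X.shape[1])
        children[i, cut:], children[i + 1, cut:] = (
            parents[i + 1, cut:].copy(), parents[i, cut:].copy())
    # Bit-flip mutation
    flips = rng.random(children.shape) < mut_rate
    population = np.where(flips, 1 - children, children)

best = population[np.argmax([fitness(ind) for ind in population])]
print(f"selected {best.sum()} of {X.shape[1]} features")
```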

Application of Machine Learning to Breast MR Imaging.

Lo Gullo R, van Veldhuizen V, Roa T, Kapetas P, Teuwen J, Pinker K

pubmed · Jun 14 2025
The demand for breast imaging services continues to grow, driven by expanding indications in breast cancer diagnosis and treatment. This increasing demand underscores the potential of artificial intelligence (AI) to enhance workflow efficiency and to further unlock the abundant imaging data, achieving improvements along the breast cancer pathway. Although AI has made significant advancements in mammography and digital breast tomosynthesis, with commercially available computer-aided detection (CAD) systems widely used for breast cancer screening and detection, its adoption in breast MRI has been slower. This lag is primarily attributed to the inherent complexity of breast MRI examinations and the consequently limited availability of large, well-annotated, publicly available breast MRI datasets. Despite these challenges, interest in AI implementation in breast MRI remains strong, fueled by the expanding use and indications for breast MRI. This article explores the implementation of AI in breast MRI across the breast cancer care pathway, highlighting its potential to revolutionize the way we detect and manage breast cancer. By addressing current challenges and examining emerging AI applications, we aim to provide a comprehensive overview of how AI is reshaping breast MRI and improving outcomes for patients.

FFLUNet: Feature Fused Lightweight UNet for brain tumor segmentation.

Kundu S, Dutta S, Mukhopadhyay J, Chakravorty N

pubmed · Jun 14 2025
Brain tumors, particularly glioblastoma multiforme, are considered one of the most threatening types of tumors in neuro-oncology. Segmenting brain tumors is a crucial part of medical imaging. It plays a key role in diagnosing conditions, planning treatments, and keeping track of patients' progress. This paper presents a novel lightweight deep convolutional neural network (CNN) model specifically designed for accurate and efficient brain tumor segmentation from magnetic resonance imaging (MRI) scans. Our model leverages a streamlined architecture that reduces computational complexity while maintaining high segmentation accuracy. We have introduced several novel approaches, including optimized convolutional layers that capture both local and global features with minimal parameters. A layerwise adaptive weighting feature fusion technique is implemented that enhances comprehensive feature representation. By incorporating shifted windowing, the model achieves better generalization across data variations. Dynamic weighting is introduced in skip connections that allows backpropagation to determine the ideal balance between semantic and positional features. To evaluate our approach, we conducted experiments on publicly available MRI datasets and compared our model against state-of-the-art segmentation methods. Our lightweight model has an efficient architecture with 1.45 million parameters - 95% fewer than nnUNet (30.78M), 91% fewer than standard UNet (16.21M), and 85% fewer than a lightweight hybrid CNN-transformer network (Liu et al., 2024) (9.9M). Coupled with a 4.9× faster GPU inference time (0.904 ± 0.002 s vs. nnUNet's 4.416 ± 0.004 s), the design enables real-time deployment on resource-constrained devices while maintaining competitive segmentation accuracy. Code is available at: FFLUNet.
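One of the ideas described above, skip connections whose balance between semantic (decoder) and positional (encoder) features is learned by backpropagation, can be sketched in PyTorch as below; this is a hypothetical module for illustration, not the FFLUNet implementation.

```python
# Minimal PyTorch sketch (not the FFLUNet code) of a skip connection whose
# balance between encoder ("positional") and decoder ("semantic") features is
# a learnable weight tuned by backpropagation.
import torch
import torch.nn as nn

class WeightedSkipFusion(nn.Module):
    def __init__(self, channels: int):
        super().__init__()
        self.alpha = nn.Parameter(torch.tensor(0.0))   # learnable fusion weight
        self.conv = nn.Conv2d(channels, channels, kernel_size=3, padding=1)

    def forward(self, encoder_feat: torch.Tensor, decoder_feat: torch.Tensor):
        w = torch.sigmoid(self.alpha)                   # keep the weight in (0, 1)
        fused = w * encoder_feat + (1.0 - w) * decoder_feat
        return self.conv(fused)

# Toy usage with matching feature-map shapes
enc = torch.randn(1, 32, 64, 64)
dec = torch.randn(1, 32, 64, 64)
out = WeightedSkipFusion(32)(enc, dec)
print(out.shape)   # torch.Size([1, 32, 64, 64])
```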

Utility of Thin-slice Single-shot T2-weighted MR Imaging with Deep Learning Reconstruction as a Protocol for Evaluating Pancreatic Cystic Lesions.

Ozaki K, Hasegawa H, Kwon J, Katsumata Y, Yoneyama M, Ishida S, Iyoda T, Sakamoto M, Aramaki S, Tanahashi Y, Goshima S

pubmed · Jun 14 2025
To assess the effects of industry-developed deep learning reconstruction with super resolution (DLR-SR) on single-shot turbo spin-echo (SshTSE) images with a thickness of 2 mm with DLR (SshTSE<sup>2mm</sup>) relative to those of images with a thickness of 5 mm with DLR (SshTSE<sup>5mm</sup>) in patients with pancreatic cystic lesions. Thirty consecutive patients who underwent abdominal MRI examinations for pancreatic cystic lesions under observation between June 2024 and July 2024 were enrolled. We qualitatively and quantitatively evaluated the image quality of SshTSE<sup>2mm</sup> and SshTSE<sup>5mm</sup> with and without DLR-SR. The SNRs of the pancreas, spleen, paraspinal muscle, peripancreatic fat, and pancreatic cystic lesions on SshTSE<sup>2mm</sup> with and without DLR-SR did not decrease compared with those on SshTSE<sup>5mm</sup> with and without DLR-SR. There were no significant differences in the contrast-to-noise ratios (CNRs) of the pancreas to cystic lesions and to fat among the four image types. SshTSE<sup>2mm</sup> with DLR-SR had the highest image quality for pancreas edge sharpness, perceived coarseness, pancreatic duct clarity, noise, artifacts, overall image quality, and diagnostic confidence of cystic lesions, followed by SshTSE<sup>2mm</sup> without DLR-SR and SshTSE<sup>5mm</sup> with and without DLR-SR (P < 0.0001). SshTSE<sup>2mm</sup> images with DLR-SR had better quality than the other images, without decreased SNRs or CNRs. Thin-slice SshTSE with DLR-SR may be feasible and clinically useful for the evaluation of patients with pancreatic cystic lesions.
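As background on the quantitative endpoints mentioned above (SNR and CNR from regions of interest), a minimal sketch follows; the ROI values are synthetic and the exact SNR/CNR definitions used in the study may differ.

```python
# Illustrative sketch of ROI-based SNR and CNR computation (not the study's
# analysis). ROI samples are placeholders drawn from normal distributions.
import numpy as np

rng = np.random.default_rng(2)
pancreas_roi = rng.normal(300, 15, size=500)    # hypothetical signal samples
cyst_roi = rng.normal(700, 20, size=500)
background_roi = rng.normal(0, 10, size=500)    # noise estimate from background air

noise_sd = background_roi.std()
snr_pancreas = pancreas_roi.mean() / noise_sd
cnr_pancreas_cyst = abs(pancreas_roi.mean() - cyst_roi.mean()) / noise_sd

print(f"SNR (pancreas): {snr_pancreas:.1f}")
print(f"CNR (pancreas vs. cyst): {cnr_pancreas_cyst:.1f}")
```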

Hierarchical Deep Feature Fusion and Ensemble Learning for Enhanced Brain Tumor MRI Classification

Zahid Ullah, Jihie Kim

arxiv preprint · Jun 14 2025
Accurate brain tumor classification is crucial in medical imaging to ensure reliable diagnosis and effective treatment planning. This study introduces a novel double ensembling framework that synergistically combines pre-trained deep learning (DL) models for feature extraction with optimized machine learning (ML) classifiers for robust classification. The framework incorporates comprehensive preprocessing and data augmentation of brain magnetic resonance imaging (MRI) scans, followed by deep feature extraction using transfer learning with pre-trained Vision Transformer (ViT) networks. The novelty lies in the dual-level ensembling strategy: feature-level ensembling, which integrates deep features from the top-performing ViT models, and classifier-level ensembling, which aggregates predictions from hyperparameter-optimized ML classifiers. Experiments on two public Kaggle MRI brain tumor datasets demonstrate that this approach significantly surpasses state-of-the-art methods, underscoring the importance of feature and classifier fusion. The proposed methodology also highlights the critical roles of hyperparameter optimization (HPO) and advanced preprocessing techniques in improving diagnostic accuracy and reliability, advancing the integration of DL and ML for clinically relevant medical image analysis.
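A conceptual sketch of the dual-level ensembling strategy is given below: deep features from two backbones are concatenated (feature-level ensembling) and predictions from hyperparameter-tuned classifiers are combined by soft voting (classifier-level ensembling). The random "ViT embeddings", the classifier choices, and the search grids are assumptions for illustration only, not the authors' pipeline.

```python
# Conceptual sketch of dual-level ensembling on placeholder data.
import numpy as np
from sklearn.ensemble import VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.svm import SVC

rng = np.random.default_rng(0)
feat_vit_a = rng.normal(size=(300, 192))     # hypothetical embeddings, ViT model A
feat_vit_b = rng.normal(size=(300, 192))     # hypothetical embeddings, ViT model B
X = np.concatenate([feat_vit_a, feat_vit_b], axis=1)   # feature-level ensembling
y = rng.integers(0, 4, size=300)             # toy 4-class tumor labels

X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

# Classifier-level ensembling: hyperparameter-tuned members combined by soft voting
svc = GridSearchCV(SVC(probability=True), {"C": [0.1, 1, 10]}, cv=3)
lr = GridSearchCV(LogisticRegression(max_iter=1000), {"C": [0.1, 1, 10]}, cv=3)
ensemble = VotingClassifier([("svc", svc), ("lr", lr)], voting="soft")
ensemble.fit(X_tr, y_tr)
print(f"toy test accuracy: {ensemble.score(X_te, y_te):.2f}")
```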

Radiomic Analysis of Molecular Magnetic Resonance Imaging of Aortic Atherosclerosis in Rabbits.

Lee H

pubmed · Jun 13 2025
Atherosclerosis involves not only the narrowing of blood vessels and plaque accumulation but also changes in plaque composition and stability, all of which are critical for disease progression. Conventional imaging techniques such as magnetic resonance angiography (MRA) and digital subtraction angiography (DSA) primarily assess luminal narrowing and plaque size, but have limited capability in identifying plaque instability and inflammation within the vascular muscle wall. This study aimed to develop and evaluate a novel imaging approach using ligand-modified nanomagnetic contrast (lmNMC) nanoprobes in combination with molecular magnetic resonance imaging (mMRI) to visualize and quantify vascular inflammation and plaque characteristics in a rabbit model of atherosclerosis. A rabbit model of atherosclerosis was established and underwent mMRI before and after administration of lmNMC nanoprobes. Radiomic features were extracted from segmented images using discrete wavelet transform (DWT) to assess spatial frequency changes and gray-level co-occurrence matrix (GLCM) analysis to evaluate textural properties. Further radiomic analysis was performed using neural network-based regression and clustering, including the application of self-organizing maps (SOMs) to validate the consistency of radiomic patterns between training and testing data. Radiomic analysis revealed significant changes in spatial frequency between pre- and post-contrast images in both the horizontal and vertical directions. GLCM analysis showed an increase in contrast from 0.08463 to 0.1021 and a slight decrease in homogeneity from 0.9593 to 0.9540. Energy values declined from 0.2256 to 0.2019, while correlation increased marginally from 0.9659 to 0.9708. Neural network regression demonstrated strong convergence between target and output coordinates. Additionally, SOM clustering revealed consistent weight locations and neighbor distances across datasets, supporting the reliability of the radiomic validation. The integration of lmNMC nanoprobes with mMRI enables detailed visualization of atherosclerotic plaques and surrounding vascular inflammation in a preclinical model. This method shows promise for enhancing the characterization of unstable plaques and may facilitate early detection of high-risk atherosclerotic lesions, potentially improving diagnostic and therapeutic strategies.
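The GLCM texture measures reported above (contrast, homogeneity, energy, correlation) can be computed with scikit-image as in the sketch below; the input image is a synthetic placeholder rather than the study's mMRI data.

```python
# Illustrative sketch of GLCM texture features with scikit-image
# (not the study's analysis pipeline).
import numpy as np
from skimage.feature import graycomatrix, graycoprops

rng = np.random.default_rng(0)
image = rng.integers(0, 256, size=(128, 128), dtype=np.uint8)  # placeholder ROI

glcm = graycomatrix(image, distances=[1], angles=[0], levels=256,
                    symmetric=True, normed=True)
for prop in ("contrast", "homogeneity", "energy", "correlation"):
    print(prop, float(graycoprops(glcm, prop)[0, 0]))
```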

Prediction of NIHSS Scores and Acute Ischemic Stroke Severity Using a Cross-attention Vision Transformer Model with Multimodal MRI.

Tuxunjiang P, Huang C, Zhou Z, Zhao W, Han B, Tan W, Wang J, Kukun H, Zhao W, Xu R, Aihemaiti A, Subi Y, Zou J, Xie C, Chang Y, Wang Y

pubmed · Jun 13 2025
This study aimed to develop and evaluate models for classifying the severity of neurological impairment in acute ischemic stroke (AIS) patients using multimodal MRI data. A retrospective cohort of 1227 AIS patients was collected and categorized into mild (NIHSS<5) and moderate-to-severe (NIHSS≥5) stroke groups based on NIHSS scores. Eight baseline models were constructed for performance comparison, including a clinical model, radiomics models using DWI or multiple MRI sequences, and deep learning (DL) models with varying fusion strategies (early fusion, late fusion, full cross-fusion, and DWI-centered cross-fusion). All DL models were based on the Vision Transformer (ViT) framework. Model performance was evaluated using metrics such as AUC and ACC, and robustness was assessed through subgroup analyses and visualization using Grad-CAM. Among the eight models, the DL model using DWI as the primary sequence with cross-fusion of other MRI sequences (Model 8) achieved the best performance. In the test cohort, Model 8 demonstrated an AUC of 0.914, ACC of 0.830, and high specificity (0.818) and sensitivity (0.853). Subgroup analysis showed that Model 8 was robust in most subgroups, with no significant difference in predictive performance (p > 0.05) and AUC values consistently above 0.900. A significant predictive difference was observed in the BMI group (p < 0.001). External validation showed that the AUC values of Model 8 at centers 2 and 3 reached 0.910 and 0.912, respectively. Visualization using Grad-CAM emphasized the infarct core as the most critical region contributing to predictions, with consistent feature attention across DWI, T1WI, T2WI, and FLAIR sequences, further validating the interpretability of the model. A ViT-based DL model with cross-modal fusion strategies provides a non-invasive and efficient tool for classifying AIS severity. Its robust performance across subgroups and interpretability make it a promising tool for personalized management and decision-making in clinical practice.
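The DWI-centered cross-fusion described above can be sketched as a cross-attention block in which DWI tokens act as queries over tokens from the other sequences; the module below is a hypothetical, simplified illustration rather than the authors' ViT model, and all dimensions are assumed.

```python
# Minimal PyTorch sketch of DWI-centered cross-attention (not the authors' model).
import torch
import torch.nn as nn

class DWICrossFusion(nn.Module):
    def __init__(self, dim: int = 256, heads: int = 8):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, dwi_tokens, other_tokens):
        # dwi_tokens: (B, N, dim); other_tokens: (B, M, dim) from T1WI/T2WI/FLAIR
        fused, _ = self.attn(query=dwi_tokens, key=other_tokens, value=other_tokens)
        return self.norm(dwi_tokens + fused)   # residual connection on the DWI stream

# Toy usage: 197 DWI patch tokens attending to 3 x 197 tokens from other sequences
dwi = torch.randn(2, 197, 256)
others = torch.randn(2, 3 * 197, 256)
print(DWICrossFusion()(dwi, others).shape)   # torch.Size([2, 197, 256])
```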

Does restrictive anorexia nervosa impact brain aging? A machine learning approach to estimate age based on brain structure.

Gupta Y, de la Cruz F, Rieger K, di Giuliano M, Gaser C, Cole J, Breithaupt L, Holsen LM, Eddy KT, Thomas JJ, Cetin-Karayumak S, Kubicki M, Lawson EA, Miller KK, Misra M, Schumann A, Bär KJ

pubmed · Jun 13 2025
Anorexia nervosa (AN), a severe eating disorder marked by extreme weight loss and malnutrition, leads to significant alterations in brain structure. This study used machine learning (ML) to estimate brain age from structural MRI scans and investigated brain-predicted age difference (brain-PAD) as a potential biomarker in AN. Structural MRI scans were collected from female participants aged 10-40 years across two institutions (Boston, USA, and Jena, Germany), including acute AN (acAN; n=113), weight-restored AN (wrAN; n=35), and age-matched healthy controls (HC; n=90). The ML model was trained on 3487 healthy female participants (ages 5-45 years) from ten datasets, using 377 neuroanatomical features extracted from T1-weighted MRI scans. The model achieved strong performance with a mean absolute error (MAE) of 1.93 years and a correlation of r = 0.88 in HCs. In acAN patients, brain age was overestimated by an average of +2.25 years, suggesting advanced brain aging. In contrast, wrAN participants showed significantly lower brain-PAD than acAN (+0.26 years, p=0.0026) and did not differ from HC (p=0.98), suggesting normalization of brain age estimates following weight restoration. A significant group-by-age interaction effect on predicted brain age (p<0.001) indicated that brain age deviations were most pronounced in younger acAN participants. Brain-PAD in acAN was significantly negatively associated with BMI (r = -0.291, p<sub>fdr</sub> = 0.005), but not in wrAN or HC groups. Importantly, no significant associations were found between brain-PAD and clinical symptom severity. These findings suggest that AN is linked to advanced brain aging during the acute stage, which may partially normalize following weight restoration.
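For readers unfamiliar with brain-PAD, the sketch below shows the general recipe: a regressor trained on neuroanatomical features, with brain-PAD defined as predicted minus chronological age. The features, ages, and model choice are synthetic placeholders, not the study's data or method.

```python
# Conceptual sketch of brain-age prediction and brain-PAD on synthetic data.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.metrics import mean_absolute_error
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
features = rng.normal(size=(500, 377))            # 377 neuroanatomical features
age = rng.uniform(5, 45, size=500)                # chronological age (years)
# Inject a weak age signal so the toy model has something to learn
features[:, 0] = age / 45 + rng.normal(0, 0.1, size=500)

X_tr, X_te, a_tr, a_te = train_test_split(features, age, random_state=0)
model = GradientBoostingRegressor(random_state=0).fit(X_tr, a_tr)

predicted_age = model.predict(X_te)
brain_pad = predicted_age - a_te                  # positive = "older-looking" brain
print(f"MAE: {mean_absolute_error(a_te, predicted_age):.2f} years, "
      f"mean brain-PAD: {brain_pad.mean():+.2f} years")
```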