Page 151 of 3993984 results

AI-Powered Segmentation and Prognosis with Missing MRI in Pediatric Brain Tumors

Chrysochoou, D., Gandhi, D., Adib, S., Familiar, A., Khalili, N., Khalili, N., Ware, J. B., Tu, W., Jain, P., Anderson, H., Haldar, S., Storm, P. B., Franson, A., Prados, M., Kline, C., Mueller, S., Resnick, A., Vossough, A., Davatzikos, C., Nabavizadeh, A., Fathi Kazerooni, A.

medRxiv preprint · Jul 16 2025
Importance: Brain MRI is the main imaging modality for pediatric brain tumors (PBTs); however, incomplete MRI exams are common in pediatric neuro-oncology settings and pose a barrier to the development and application of deep learning (DL) models, such as tumor segmentation and prognostic risk estimation.
Objective: To evaluate DL-based strategies (image-dropout training and generative image synthesis) and heuristic imputation approaches for handling missing MRI sequences in PBT imaging from clinical acquisition protocols, and to determine their impact on segmentation accuracy and prognostic risk estimation.
Design: This cohort study included 715 patients from the Children's Brain Tumor Network (CBTN) and BraTS-PEDs, and 43 patients with longitudinal MRI (157 timepoints) from the PNOC003/007 clinical trials. We developed a dropout-trained nnU-Net tumor segmentation model that randomly omitted FLAIR and/or T1w (non-contrast) sequences during training to simulate missing inputs. We compared this against three imputation approaches: a generative model for image synthesis, copy-substitution heuristics, and zeroed missing inputs. Model-generated tumor volumes from each segmentation method were compared and evaluated against ground truth (expert manual segmentations) and incorporated into time-varying Cox regression models for survival analysis.
Setting: Multi-institutional PBT datasets and longitudinal clinical trial cohorts.
Participants: All patients had multi-parametric MRI and expert manual segmentations. The PNOC cohort had a median of three imaging timepoints and associated clinical data.
Main Outcomes and Measures: Segmentation accuracy (Dice scores), image quality metrics for synthesized scans (SSIM, PSNR, MSE), and survival discrimination (C-index, hazard ratios).
Results: The dropout model achieved robust segmentation under missing MRI, with a Dice drop of ≤0.04 and a stable C-index of 0.65 compared to complete-input performance. DL-based MRI synthesis achieved high image quality (SSIM > 0.90) and removed artifacts, benefiting visual interpretability. Performance was consistent across cohorts and missing-data scenarios.
Conclusions and Relevance: Modality-dropout training yields robust segmentation and risk stratification on incomplete pediatric MRI without the computational and clinical complexity of synthesis approaches. Image synthesis, though less effective for these tasks, provides complementary benefits for artifact removal and qualitative assessment of missing or corrupted MRI scans. Together, these approaches can facilitate broader deployment of AI tools in real-world pediatric neuro-oncology settings.
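The modality-dropout idea described above can be sketched in a few lines: during training, whole input channels are zeroed at random so the network learns to cope with any subset of sequences. This is an illustrative sketch, not the authors' code; the channel layout, the `p_drop` rate, and the keep-at-least-one rule are assumptions.

```python
import numpy as np

def modality_dropout(volume, p_drop=0.3, rng=None):
    """Randomly zero whole MRI modality channels during training.

    volume: array of shape (n_modalities, D, H, W).
    At least one modality is always kept so the input is never empty.
    Returns the masked volume and the boolean keep-mask per modality.
    """
    rng = rng if rng is not None else np.random.default_rng()
    n = volume.shape[0]
    keep = rng.random(n) >= p_drop
    if not keep.any():                 # guarantee one surviving modality
        keep[rng.integers(n)] = True
    out = volume.copy()
    out[~keep] = 0.0
    return out, keep
```

At inference time, a missing sequence is simply supplied as a zero channel, matching what the network saw during training.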

Utilizing machine learning to predict MRI signal outputs from iron oxide nanoparticles through the PSLG algorithm.

Hataminia F, Azinfar A

PubMed paper · Jul 16 2025
In this research, we predict the output signal generated by iron oxide-based nanoparticles in Magnetic Resonance Imaging (MRI) using the physical properties of the nanoparticles and the MRI machine. The parameters considered include the size of the magnetic core of the nanoparticles, their magnetic saturation (Ms), the concentration of the nanoparticles (C), and the magnetic field (MF) strength of the MRI device. These parameters serve as input variables for the model, while the relaxation rate R<sub>2</sub> (s<sup>-1</sup>) is taken as the output variable. To develop this model, we employed a machine learning approach based on a neural network known as SA-LOOCV-GRBF (SLG). We compared two different random selection patterns: SLG disperse random selection (DSLG) and SLG parallel random selection (PSLG). Sensitivity to the number of hidden-layer neurons, evaluated by mean square error (MSE), was more pronounced for DSLG than for the PSLG pattern. The PSLG method demonstrated strong performance while remaining less sensitive to increasing neuron counts, and was therefore selected for predicting MRI behavior.
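To illustrate the Gaussian RBF regression core that a GRBF-style network builds on, here is a minimal ridge-regularized fit mapping nanoparticle features to a scalar response. The SA-LOOCV selection logic and the DSLG/PSLG sampling patterns from the paper are not reproduced; all function names and parameters here are hypothetical.

```python
import numpy as np

def rbf_design(X, centers, gamma):
    """Gaussian RBF features: phi[i, j] = exp(-gamma * ||x_i - c_j||^2)."""
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def fit_rbf(X, y, centers, gamma, ridge=1e-6):
    """Solve the ridge-regularized least-squares weights for the RBF layer."""
    Phi = rbf_design(X, centers, gamma)
    A = Phi.T @ Phi + ridge * np.eye(len(centers))
    return np.linalg.solve(A, Phi.T @ y)

def predict_rbf(X, centers, gamma, w):
    return rbf_design(X, centers, gamma) @ w
```

With centers placed at the training points and a localized kernel, the fit nearly interpolates the training targets; cross-validation (as in the paper's LOOCV step) would then choose the width and neuron count.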

An end-to-end interpretable machine-learning-based framework for early-stage diagnosis of gallbladder cancer using multi-modality medical data.

Zhao H, Miao C, Zhu Y, Shu Y, Wu X, Yin Z, Deng X, Gong W, Yang Z, Zou W

PubMed paper · Jul 16 2025
The accurate early-stage diagnosis of gallbladder cancer (GBC) is regarded as one of the major challenges in the field of oncology. However, few studies have focused on the comprehensive classification of GBC based on multiple modalities. This study aims to develop a comprehensive diagnostic framework for GBC based on both imaging and non-imaging medical data. This retrospective study reviewed 298 patients with gallbladder disease or volunteers, imaged on two devices. A novel end-to-end interpretable diagnostic framework for GBC is proposed to handle multiple medical modalities, including CT imaging, demographics, tumor markers, coagulation function tests, and routine blood tests. To achieve better feature extraction and fusion of the imaging modality, a novel global-hybrid-local network, namely GHL-Net, has also been developed. An ensemble learning strategy is employed to fuse multi-modality data and obtain the final classification result. In addition, two interpretable methods are applied to help clinicians understand the model-based decisions. Model performance was evaluated through accuracy, precision, specificity, sensitivity, F1-score, area under the curve (AUC), and Matthews correlation coefficient (MCC). In both binary and multi-class classification scenarios, the proposed method outperformed competing methods on both datasets. In the binary classification scenario in particular, it achieved the highest accuracy, sensitivity, specificity, precision, F1-score, ROC-AUC, PR-AUC, and MCC of 95.24%, 93.55%, 96.87%, 96.67%, 95.08%, 0.9591, 0.9636, and 0.9051, respectively. The visualization results obtained with the interpretable methods also demonstrated high clinical relevance of the intermediate decision-making processes. Ablation studies then provided an in-depth understanding of the methodology.
The machine learning-based framework can effectively improve the accuracy of GBC diagnosis and is expected to have a more significant impact in other cancer diagnosis scenarios.
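The scalar metrics reported above (accuracy, sensitivity, specificity, precision, F1, MCC) all derive from the four confusion-matrix counts; a minimal sketch:

```python
import math

def binary_metrics(tp, fp, tn, fn):
    """Standard binary-classification metrics from confusion-matrix counts."""
    acc  = (tp + tn) / (tp + fp + tn + fn)
    sens = tp / (tp + fn)                    # recall / sensitivity
    spec = tn / (tn + fp)
    prec = tp / (tp + fp)
    f1   = 2 * prec * sens / (prec + sens)
    den  = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    mcc  = (tp * tn - fp * fn) / den if den else 0.0
    return {"accuracy": acc, "sensitivity": sens, "specificity": spec,
            "precision": prec, "f1": f1, "mcc": mcc}
```

MCC is the most demanding of these: it stays near zero for a trivial classifier even on imbalanced data, which is why it is often reported alongside accuracy.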

Automatic segmentation of liver structures in multi-phase MRI using variants of nnU-Net and Swin UNETR.

Raab F, Strotzer Q, Stroszczynski C, Fellner C, Einspieler I, Haimerl M, Lang EW

PubMed paper · Jul 16 2025
Accurate segmentation of the liver parenchyma, portal veins, hepatic veins, and lesions from MRI is important for hepatic disease monitoring and treatment. Multi-phase contrast-enhanced imaging is superior in distinguishing hepatic structures compared to single-phase approaches, but automated approaches for detailed segmentation of hepatic structures are lacking. This study evaluates deep learning architectures for segmenting liver structures from multi-phase Gd-EOB-DTPA-enhanced T1-weighted VIBE MRI scans. We utilized 458 T1-weighted VIBE scans of pathological livers, with 78 manually labeled for liver parenchyma, hepatic and portal veins, aorta, lesions, and ascites. An additional dataset of 47 labeled subjects was used for cross-scanner evaluation. Three models were evaluated using nested cross-validation: the conventional nnU-Net, the ResEnc nnU-Net, and the Swin UNETR. The late arterial phase was identified as the optimal fixed phase for co-registration. Both nnU-Net variants outperformed Swin UNETR across most tasks. The conventional nnU-Net achieved the highest segmentation performance for liver parenchyma (DSC: 0.97; 95% CI 0.97, 0.98), portal vein (DSC: 0.83; 95% CI 0.80, 0.87), and hepatic vein (DSC: 0.78; 95% CI 0.77, 0.80). Lesion and ascites segmentation proved challenging for all models, with the conventional nnU-Net performing best. This study demonstrates the effectiveness of deep learning, particularly nnU-Net variants, for detailed liver structure segmentation from multi-phase MRI. The developed models and preprocessing pipeline offer potential for improved liver disease assessment and surgical planning in clinical practice.
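The Dice similarity coefficient (DSC) used to score these segmentations is twice the overlap divided by the total foreground of the two masks; a minimal sketch:

```python
import numpy as np

def dice(pred, truth, eps=1e-8):
    """Dice similarity coefficient between two binary masks.

    1.0 means perfect overlap, 0.0 means no overlap; eps guards the
    degenerate case where both masks are empty.
    """
    pred = np.asarray(pred, dtype=bool)
    truth = np.asarray(truth, dtype=bool)
    inter = np.logical_and(pred, truth).sum()
    return 2.0 * inter / (pred.sum() + truth.sum() + eps)
```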

Artificial intelligence-based diabetes risk prediction from longitudinal DXA bone measurements.

Khan S, Shah Z

PubMed paper · Jul 16 2025
Diabetes mellitus (DM) is a serious global health concern that poses a significant threat to human life. Beyond its direct impact, diabetes substantially increases the risk of developing severe complications such as hypertension, cardiovascular disease, and musculoskeletal disorders like arthritis and osteoporosis. The field of diabetes classification has advanced significantly with the use of diverse data modalities and sophisticated tools to identify individuals or groups as diabetic. However, the task of predicting diabetes prior to its onset, particularly through the use of longitudinal multi-modal data, remains relatively underexplored. To better understand the risk factors associated with diabetes development among Qatari adults, this longitudinal research investigates dual-energy X-ray absorptiometry (DXA)-derived whole-body and regional bone composition measures as potential predictors of diabetes onset. We conducted a retrospective case-control study of 1,382 participants: 725 male (cases: 146, controls: 579) and 657 female (cases: 133, controls: 524). Participants with incomplete data points were excluded. To handle class imbalance, we augmented the data using the Synthetic Minority Over-sampling Technique (SMOTE) and SMOTEENN (SMOTE with Edited Nearest Neighbors), and to further investigate the association between bone features and diabetes status, we applied ANOVA. For diabetes onset prediction, we employed both conventional and deep learning (DL) models to predict risk factors associated with diabetes in Qatari adults. We used SHAP and probabilistic methods to investigate the association of identified risk factors with diabetes. In the experimental analysis, we found that bone mineral density (BMD) and bone mineral content (BMC) in the hip, femoral neck, trochanteric area, and lumbar spine showed an upward trend in diabetic patients with [Formula: see text].
Meanwhile, we found that patients with abnormal glucose metabolism had increased Ward's area BMD and BMC with low Z-scores compared to healthy participants. This suggests that the diabetic group in this cohort exhibited better bone health indices than the control group, with higher BMD, muscle mass, and bone area across most body regions. Moreover, in the age-group analysis, the prediction rate was higher among healthy participants in the younger age group (20-40 years), but as the age range increased, the model's predictions became more accurate for diabetic participants, especially in the older age group (56-69 years). Male participants also demonstrated a higher susceptibility to diabetes onset than female participants. Shallow models outperformed the DL models, achieving better accuracy (91.08%), AUROC (96%), and recall (91%). This approach, utilizing DXA scans, highlights significant potential for rapid and minimally invasive early detection of diabetes.
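SMOTE, as used above for class balancing, synthesizes new minority samples by interpolating between a minority point and one of its nearest minority neighbours. A pure-NumPy sketch of that core step (the ENN cleaning stage of SMOTEENN is omitted, and `k` and the array shapes are assumptions):

```python
import numpy as np

def smote_oversample(X_min, n_new, k=5, rng=None):
    """Generate n_new synthetic minority samples, SMOTE-style.

    Each synthetic point lies on the segment between a random minority
    sample and one of its k nearest minority neighbours.
    """
    rng = rng if rng is not None else np.random.default_rng()
    n = len(X_min)
    d2 = ((X_min[:, None, :] - X_min[None, :, :]) ** 2).sum(-1)
    np.fill_diagonal(d2, np.inf)                 # exclude self-matches
    nn = np.argsort(d2, axis=1)[:, :k]           # k nearest neighbours
    base = rng.integers(n, size=n_new)
    pick = nn[base, rng.integers(k, size=n_new)]
    lam = rng.random((n_new, 1))                 # interpolation weight
    return X_min[base] + lam * (X_min[pick] - X_min[base])
```

In practice one would use the `imbalanced-learn` library, which also implements the SMOTEENN combination mentioned in the abstract.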

CT-ScanGaze: A Dataset and Baselines for 3D Volumetric Scanpath Modeling

Trong-Thang Pham, Akash Awasthi, Saba Khan, Esteban Duran Marti, Tien-Phat Nguyen, Khoa Vo, Minh Tran, Ngoc Son Nguyen, Cuong Tran Van, Yuki Ikebe, Anh Totti Nguyen, Anh Nguyen, Zhigang Deng, Carol C. Wu, Hien Van Nguyen, Ngan Le

arXiv preprint · Jul 16 2025
Understanding radiologists' eye movement during Computed Tomography (CT) reading is crucial for developing effective interpretable computer-aided diagnosis systems. However, CT research in this area has been limited by the lack of publicly available eye-tracking datasets and the three-dimensional complexity of CT volumes. To address these challenges, we present the first publicly available eye gaze dataset on CT, called CT-ScanGaze. Then, we introduce CT-Searcher, a novel 3D scanpath predictor designed specifically to process CT volumes and generate radiologist-like 3D fixation sequences, overcoming the limitations of current scanpath predictors that only handle 2D inputs. Since deep learning models benefit from a pretraining step, we develop a pipeline that converts existing 2D gaze datasets into 3D gaze data to pretrain CT-Searcher. Through both qualitative and quantitative evaluations on CT-ScanGaze, we demonstrate the effectiveness of our approach and provide a comprehensive assessment framework for 3D scanpath prediction in medical imaging.

Distinguishing symptomatic and asymptomatic trigeminal nerves through radiomics and deep learning: A microstructural study in idiopathic TN patients and asymptomatic control group.

Cüce F, Tulum G, Karadaş Ö, Işik Mİ, Dur İnce M, Nematzadeh S, Jalili M, Baş N, Özcan B, Osman O

PubMed paper · Jul 16 2025
The relationship between mild neurovascular conflict (NVC) and trigeminal neuralgia (TN) remains ill-defined, especially as mild NVC is often seen in the asymptomatic population without any facial pain. We aim to analyze the trigeminal nerve microstructure using artificial intelligence (AI) to distinguish symptomatic and asymptomatic nerves between idiopathic TN (iTN) patients and an asymptomatic control group with incidental grade-1 NVC. Seventy-eight symptomatic trigeminal nerves with grade-1 NVC in iTN patients, and an asymptomatic control group consisting of Bell's palsy patients free from facial pain (91 grade-1 NVC and 91 grade-0 NVC), were included in the study. Three hundred seventy-eight radiomic features were extracted from the original MRI images and from images processed with Laplacian-of-Gaussian filters. The dataset was split into 80% training/validation and 20% testing. Nested cross-validation was employed on the training/validation set for feature selection and model optimization. Furthermore, two customized deep learning models incorporating atrous spatial pyramid pooling (ASPP) blocks, DenseASPP-201 and MobileASPPV2, were trained using the same pipeline approach. Performance was assessed over ten and five runs for the radiomics-based and deep learning-based models, respectively. Subspace Discriminant Ensemble Learning (SDEL) attained an accuracy of 78.8% ± 7.13%, Support Vector Machines (SVM) reached 74.8% ± 9.2%, and K-nearest neighbors (KNN) achieved 79% ± 6.55%. Meanwhile, DenseASPP-201 recorded an accuracy of 82.0 ± 8.4%, and MobileASPPV2 achieved 73.2 ± 5.59%. The AI models effectively distinguished symptomatic and asymptomatic nerves with grade-1 NVC. Further studies are required to fully elucidate the impact of vascular and nonvascular etiologies that may lead to iTN.

Deep learning-assisted comparison of different models for predicting maxillary canine impaction on panoramic radiography.

Zhang C, Zhu H, Long H, Shi Y, Guo J, You M

PubMed paper · Jul 16 2025
The panoramic radiograph is the most commonly used imaging modality for predicting maxillary canine impaction. Several prediction models have been constructed based on panoramic radiographs. This study aimed to compare the prediction accuracy of existing models in an external validation facilitated by an automatic landmark detection system based on deep learning. Patients aged 7-14 years who underwent panoramic radiographic examinations and received a diagnosis of impacted canines were included in the study. An automatic landmark localization system was employed to assist the measurement of geometric parameters on the panoramic radiographs, followed by the calculated prediction of the canine impaction. Three prediction models constructed by Arnautska, Alqerban et al, and Margot et al were evaluated. The metrics of accuracy, sensitivity, specificity, precision, and area under the receiver operating characteristic curve (AUC) were used to compare the performance of different models. A total of 102 panoramic radiographs with 102 impacted canines and 102 nonimpacted canines were analyzed in this study. The prediction outcomes indicated that the model by Margot et al achieved the highest performance, with a sensitivity of 95% and a specificity of 86% (AUC, 0.97), followed by the model by Arnautska, with a sensitivity of 93% and a specificity of 71% (AUC, 0.94). The model by Alqerban et al showed poor performance with an AUC of only 0.20. Two of the existing predictive models exhibited good diagnostic accuracy, whereas the third model demonstrated suboptimal performance. Nonetheless, even the most effective model is constrained by several limitations, such as logical and computational challenges, which necessitate further refinement.
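The AUC values used to compare the three models have a simple rank interpretation: the probability that a randomly chosen impacted case scores higher than a randomly chosen non-impacted one (the Mann-Whitney statistic, with ties counting half). A minimal sketch:

```python
import numpy as np

def auc_from_scores(scores_pos, scores_neg):
    """AUC via the Mann-Whitney U statistic.

    Compares every positive score against every negative score;
    equivalent to the area under the ROC curve.
    """
    sp = np.asarray(scores_pos, dtype=float)[:, None]
    sn = np.asarray(scores_neg, dtype=float)[None, :]
    wins = (sp > sn).sum() + 0.5 * (sp == sn).sum()
    return wins / (sp.size * sn.size)
```

An AUC of 0.20, as reported for one of the models, means the model ranks cases mostly backwards; inverting its output would yield 0.80.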

Automated CAD-RADS scoring from multiplanar CCTA images using radiomics-driven machine learning.

Corti A, Ronchetti F, Lo Iacono F, Chiesa M, Colombo G, Annoni A, Baggiano A, Carerj ML, Del Torto A, Fazzari F, Formenti A, Junod D, Mancini ME, Maragna R, Marchetti F, Sbordone FP, Tassetti L, Volpe A, Mushtaq S, Corino VDA, Pontone G

PubMed paper · Jul 16 2025
Coronary Artery Disease-Reporting and Data System (CAD-RADS) scoring, a standardized report of stenosis severity from coronary computed tomography angiography (CCTA), is performed manually by expert radiologists and is therefore time-consuming and prone to interobserver variability. While deep learning methods automating CAD-RADS scoring have been proposed, radiomics-based machine-learning approaches are lacking, despite their improved interpretability. This study aims to introduce a novel radiomics-based machine-learning approach for automating CAD-RADS scoring from CCTA images with multiplanar reconstruction. This retrospective monocentric study included 251 patients (70% male; mean age 60.5 ± 12.7 years) who underwent CCTA in 2016-2018 for clinical evaluation of CAD. Images were automatically segmented, and radiomic features were extracted. Clinical characteristics were collected. The image dataset was partitioned into training and test sets (90%-10%). The training phase encompassed feature scaling and selection, data balancing, and model training within a 5-fold cross-validation. A cascade pipeline was implemented for both 6-class CAD-RADS scoring and 4-class therapy-oriented classification (0-1, 2, 3-4, 5), through consecutive sub-tasks. For each classification task the cascade pipeline was applied to develop clinical, radiomic, and combined models. The radiomic, combined, and clinical models yielded AUC = 0.88 [0.86-0.88], AUC = 0.90 [0.88-0.90], and AUC = 0.66 [0.66-0.67] for CAD-RADS scoring, and AUC = 0.93 [0.91-0.93], AUC = 0.97 [0.96-0.97], and AUC = 0.79 [0.78-0.79] for the therapy-oriented classification. The radiomic and combined models significantly outperformed (DeLong p-value < 0.05) the clinical one in classes 1 and 2 (CAD-RADS cascade) and class 2 (therapy-oriented cascade). This study presents the first radiomic model for CAD-RADS classification, guaranteeing higher explainability and providing a promising support system for coronary artery stenosis assessment.
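A cascade pipeline of the kind described, where consecutive binary sub-tasks peel off one class at a time until only a fallback remains, can be sketched as follows. The thresholds and the percent-stenosis score are illustrative stand-ins, not the study's trained classifiers; only the therapy-oriented grouping (0-1, 2, 3-4, 5) comes from the abstract.

```python
def cascade_predict(x, stages, fallback):
    """Run consecutive binary sub-tasks in order.

    stages: list of (label, predicate) pairs. The first stage whose
    predicate accepts x assigns its class; anything unresolved falls
    through to the fallback class.
    """
    for label, predicate in stages:
        if predicate(x):
            return label
    return fallback

# Hypothetical percent-stenosis thresholds for the 4-class grouping.
stages = [
    ("CAD-RADS 0-1", lambda s: s < 25),
    ("CAD-RADS 2",   lambda s: s < 50),
    ("CAD-RADS 3-4", lambda s: s < 100),
]
```

In the study each predicate would be a trained binary classifier over radiomic and/or clinical features rather than a fixed threshold; the cascade structure is the same.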

Evaluating Artificial Intelligence-Assisted Prostate Biparametric MRI Interpretation: An International Multireader Study.

Gelikman DG, Yilmaz EC, Harmon SA, Huang EP, An JY, Azamat S, Law YM, Margolis DJA, Marko J, Panebianco V, Esengur OT, Lin Y, Belue MJ, Gaur S, Bicchetti M, Xu Z, Tetreault J, Yang D, Xu D, Lay NS, Gurram S, Shih JH, Merino MJ, Lis R, Choyke PL, Wood BJ, Pinto PA, Turkbey B

PubMed paper · Jul 16 2025
<b>Background:</b> Variability in prostate biparametric MRI (bpMRI) interpretation limits diagnostic reliability for prostate cancer (PCa). Artificial intelligence (AI) has potential to reduce this variability and improve diagnostic accuracy. <b>Objective:</b> The objective of this study was to evaluate the impact of a deep learning AI model on lesion- and patient-level clinically significant PCa (csPCa) and PCa detection rates and interreader agreement in bpMRI interpretations. <b>Methods:</b> This retrospective, multireader, multicenter study used a balanced incomplete block design for MRI randomization. Six radiologists of varying experience interpreted bpMRI scans with and without AI assistance in alternating sessions. The reference standard for lesion-level detection for cases was whole-mount pathology after radical prostatectomy; for control patients, negative 12-core systematic biopsies. In all, 180 patients (120 in the case group, 60 in the control group) who underwent mpMRI and prostate biopsy or radical prostatectomy between January 2013 and December 2022 were included. Lesion-level sensitivity, PPV, patient-level AUC for csPCa and PCa detection, and interreader agreement in lesion-level PI-RADS scores and size measurements were assessed. <b>Results:</b> AI assistance improved lesion-level PPV (PI-RADS ≥ 3: 77.2% [95% CI, 71.0-83.1%] vs 67.2% [61.1-72.2%] for csPCa; 80.9% [75.2-85.7%] vs 69.4% [63.4-74.1%] for PCa; both p < .001), reduced lesion-level sensitivity (PI-RADS ≥ 3: 44.4% [38.6-50.5%] vs 48.0% [42.0-54.2%] for csPCa, p = .01; 41.7% [37.0-47.4%] vs 44.9% [40.5-50.2%] for PCa, p = .01), and showed no difference in patient-level AUC (0.822 [95% CI, 0.768-0.866] vs 0.832 [0.787-0.868] for csPCa, p = .61; 0.833 [0.782-0.874] vs 0.835 [0.792-0.871] for PCa, p = .91).
AI assistance improved interreader agreement for lesion-level PI-RADS scores (κ = 0.748 [95% CI, 0.701-0.796] vs 0.336 [0.288-0.381], p < .001), lesion size measurements (coverage probability of 0.397 [0.376-0.419] vs 0.367 [0.349-0.383], p < .001), and patient-level PI-RADS scores (κ = 0.704 [0.627-0.767] vs 0.507 [0.421-0.584], p < .001). <b>Conclusion:</b> AI improved lesion-level PPV and interreader agreement, with slightly lower lesion-level sensitivity. <b>Clinical Impact:</b> AI may enhance consistency and reduce false positives in bpMRI interpretations. Further optimization is required to improve sensitivity without compromising specificity.
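The interreader-agreement statistic reported above, Cohen's kappa, corrects observed agreement for the agreement expected by chance; a minimal two-rater sketch:

```python
import numpy as np

def cohens_kappa(r1, r2, labels):
    """Cohen's kappa for two raters' categorical scores.

    kappa = (p_o - p_e) / (1 - p_e), where p_o is the observed
    agreement rate and p_e the chance agreement implied by each
    rater's marginal label frequencies.
    """
    r1, r2 = np.asarray(r1), np.asarray(r2)
    po = (r1 == r2).mean()
    pe = sum((r1 == c).mean() * (r2 == c).mean() for c in labels)
    return (po - pe) / (1 - pe)
```

A kappa near 0.3 (as without AI) is conventionally read as fair agreement, while 0.7+ (with AI) indicates substantial agreement.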
