Page 57 of 3523515 results

Automated detection of lacunes in brain MR images using SAM with robust prompts using self-distillation and anatomy-informed priors.

Deepika P, Shanker G, Narayanan R, Sundaresan V

pubmed | papers | Aug 4 2025
Lacunes, which are small fluid-filled cavities in the brain, are signs of cerebral small vessel disease and have been clinically associated with various neurodegenerative and cerebrovascular diseases. Hence, accurate detection of lacunes is crucial and is one of the initial steps for the precise diagnosis of these diseases. However, developing a robust and consistently reliable method for detecting lacunes is challenging because of the heterogeneity in their appearance, contrast, shape, and size. In this study, we propose a lacune detection method using the Segment Anything Model (SAM), guided by point prompts from a candidate prompt generator. The prompt generator initially detects potential lacunes with high sensitivity using a composite loss function. The true lacunes are then selected using SAM by discriminating their characteristics from mimics such as sulci and enlarged perivascular spaces, imitating the clinicians' strategy of examining potential lacunes along all three axes. False positives are further reduced by adaptive thresholds based on the region-wise prevalence of lacunes. We evaluated our method on two diverse, multi-centric MRI datasets, VALDO and ISLES, comprising only FLAIR sequences. Despite diverse imaging conditions and significant variations in slice thickness (0.5-6 mm), our method achieved sensitivities of 84% and 92%, with average false positive rates of 0.05 and 0.06 per slice on the ISLES and VALDO datasets, respectively. The proposed method performed robustly across varied imaging conditions and outperformed state-of-the-art methods, demonstrating its effectiveness in lacune detection and quantification.
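The abstract's region-wise adaptive thresholding can be illustrated with a minimal sketch. The region names, prevalence values, and scoring scheme below are illustrative assumptions, not the authors' actual implementation: candidates with a confidence score clear a lower bar in regions where lacunes are more prevalent.

```python
# Hedged sketch of region-wise adaptive thresholding for false-positive
# reduction. All region names and prevalence values are hypothetical.

# Assumed prior prevalence of lacunes per brain region (illustrative values).
REGION_PREVALENCE = {"basal_ganglia": 0.40, "thalamus": 0.25, "white_matter": 0.10}

def adaptive_threshold(region: str, base: float = 0.5, scale: float = 0.5) -> float:
    """Lower the acceptance threshold in regions where lacunes are more prevalent."""
    prevalence = REGION_PREVALENCE.get(region, 0.0)
    return base - scale * prevalence

def filter_candidates(candidates):
    """Keep candidates whose confidence clears their region's adaptive threshold.

    Each candidate is a (region, confidence) pair.
    """
    return [c for c in candidates if c[1] >= adaptive_threshold(c[0])]

cands = [("basal_ganglia", 0.35), ("white_matter", 0.35), ("thalamus", 0.40)]
kept = filter_candidates(cands)
```

With these toy numbers, the same 0.35 confidence survives in the basal ganglia (threshold 0.30) but is rejected in white matter (threshold 0.45), mirroring the idea of prevalence-informed filtering.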

Glioblastoma Overall Survival Prediction With Vision Transformers

Yin Lin, Riccardo Barbieri, Domenico Aquino, Giuseppe Lauria, Marina Grisoli, Elena De Momi, Alberto Redaelli, Simona Ferrante

arxiv | preprint | Aug 4 2025
Glioblastoma is one of the most aggressive and common brain tumors, with a median survival of 10-15 months. Predicting Overall Survival (OS) is critical for personalizing treatment strategies and aligning clinical decisions with patient outcomes. In this study, we propose a novel Artificial Intelligence (AI) approach for OS prediction using Magnetic Resonance Imaging (MRI) images, exploiting Vision Transformers (ViTs) to extract hidden features directly from MRI images, eliminating the need for tumor segmentation. Unlike traditional approaches, our method simplifies the workflow and reduces computational resource requirements. The proposed model was evaluated on the BRATS dataset, reaching an accuracy of 62.5% on the test set, comparable to the top-performing methods. Additionally, it demonstrated balanced performance across precision, recall, and F1 score, outperforming the best model on these metrics. The dataset size limits the generalization of the ViT, which typically requires larger datasets than convolutional neural networks; this limitation in generalization is observed across all the cited studies. This work highlights the applicability of ViTs for downsampled medical imaging tasks and establishes a foundation for OS prediction models that are computationally efficient and do not rely on segmentation.
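The segmentation-free ViT pipeline starts by tokenizing the image. A minimal sketch of that patch-tokenization step is shown below; the sizes are toy values, and a real ViT additionally applies a learned linear projection and positional embeddings before the transformer encoder.

```python
# Minimal sketch: split a 2D slice into non-overlapping patches and flatten
# each patch into a token, as a Vision Transformer does before encoding.
# Image and patch sizes are illustrative.

def image_to_patches(img, patch):
    """Split an H x W image (list of lists) into flattened patch tokens."""
    h, w = len(img), len(img[0])
    assert h % patch == 0 and w % patch == 0
    tokens = []
    for r in range(0, h, patch):
        for c in range(0, w, patch):
            tokens.append([img[r + i][c + j] for i in range(patch) for j in range(patch)])
    return tokens

img = [[r * 4 + c for c in range(4)] for r in range(4)]  # 4x4 toy "slice"
tokens = image_to_patches(img, 2)  # 4 tokens, each a flattened 2x2 patch
```

Because the whole image becomes tokens, no tumor mask is needed, which is the workflow simplification the abstract emphasizes.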

Function of <sup>18</sup>F-FDG PET/CT radiomics in the detection of checkpoint inhibitor-induced liver injury (CHILI).

Huigen CMC, Coukos A, Latifyan S, Nicod Lalonde M, Schaefer N, Abler D, Depeursinge A, Prior JO, Fraga M, Jreige M

pubmed | papers | Aug 4 2025
In the last decade, immunotherapy, particularly immune checkpoint inhibitors, has revolutionized cancer treatment and improved prognosis. However, severe checkpoint inhibitor-induced liver injury (CHILI), which can lead to treatment discontinuation or death, occurs in up to 18% of patients. The aim of this study was to evaluate the value of PET/CT radiomics analysis for the detection of CHILI. Patients with CHILI grade 2 or higher who underwent liver function tests and liver biopsy were retrospectively included. Minors, patients with cognitive impairments, and patients with viral infections were excluded. The patients' liver and spleen were contoured on the anonymized PET/CT imaging data, followed by radiomics feature extraction. Principal component analysis (PCA) and Bonferroni correction were used for statistical analysis and exploration of radiomics features related to CHILI. Sixteen patients were included, and 110 radiomics features were extracted from PET images. Liver PCA-5 and one associated feature showed significance but did not remain significant after Bonferroni correction. Spleen PCA-5 differed significantly between CHILI and non-CHILI patients even after Bonferroni correction, possibly linked to the higher metabolic function of the spleen in autoimmune diseases due to the recruitment of immune cells. This pilot study identified statistically significant differences in PET-derived radiomics features of the spleen and observable changes in the liver on PET/CT scans before and after the onset of CHILI. Identifying these features could aid in diagnosing or predicting CHILI, potentially enabling personalized treatment. Larger multicenter prospective studies are needed to confirm these findings and develop automated detection methods.
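The Bonferroni correction mentioned above is simple to state: with m tests at family-wise level alpha, each individual p-value must fall below alpha / m. A short sketch with made-up p-values:

```python
# Hedged sketch of the Bonferroni correction: each raw p-value is compared
# against alpha divided by the number of tests. The p-values below are
# illustrative, not the study's data.

def bonferroni_significant(p_values, alpha=0.05):
    """Return which tests remain significant after Bonferroni correction."""
    m = len(p_values)
    return [p < alpha / m for p in p_values]

# e.g. 5 tested components at alpha = 0.05 -> per-test cutoff 0.01
flags = bonferroni_significant([0.004, 0.03, 0.20, 0.009, 0.06])
```

This illustrates how a feature that is nominally significant (p = 0.03 < 0.05) can fail to survive correction, as happened for the liver features in the study.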

Accurate and Interpretable Postmenstrual Age Prediction via Multimodal Large Language Model

Qifan Chen, Jin Cui, Cindy Duan, Yushuo Han, Yifei Shi

arxiv | preprint | Aug 4 2025
Accurate estimation of postmenstrual age (PMA) at scan is crucial for assessing neonatal development and health. While deep learning models have achieved high accuracy in predicting PMA from brain MRI, they often function as black boxes, offering limited transparency and interpretability in clinical decision support. In this work, we address the dual challenge of accuracy and interpretability by adapting a multimodal large language model (MLLM) to perform both precise PMA prediction and clinically relevant explanation generation. We introduce a parameter-efficient fine-tuning (PEFT) strategy using instruction tuning and Low-Rank Adaptation (LoRA) applied to the Qwen2.5-VL-7B model. The model is trained on four 2D cortical surface projection maps derived from neonatal MRI scans. By employing distinct prompts for training and inference, our approach enables the MLLM to handle a regression task during training and generate clinically relevant explanations during inference. The fine-tuned model achieves a low prediction error with a 95 percent confidence interval of 0.78 to 1.52 weeks, while producing interpretable outputs grounded in developmental features, marking a significant step toward transparent and trustworthy AI systems in perinatal neuroscience.
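The core idea of LoRA, used for the parameter-efficient fine-tuning above, is that the frozen weight matrix W is augmented by a trainable low-rank product, so only r * (d_in + d_out) parameters are updated. A small numeric sketch (toy shapes, plain Python, no claim about the authors' actual configuration):

```python
# Minimal numeric sketch of a Low-Rank Adaptation (LoRA) update:
# y = x @ (W + scale * A @ B), with A (d_in x r) and B (r x d_out) trainable
# and W frozen. Shapes and values are toy examples.

def matmul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)] for row in A]

def lora_forward(x, W, A, B, scale=1.0):
    """Apply the base weight plus the low-rank correction without merging them."""
    base = matmul(x, W)
    low = matmul(matmul(x, A), B)  # x -> rank-r bottleneck -> output dim
    return [[b + scale * l for b, l in zip(rb, rl)] for rb, rl in zip(base, low)]

W = [[1, 0], [0, 1]]   # frozen 2x2 weight
A = [[1], [1]]         # down-projection to rank 1
B = [[0.5, 0.5]]       # up-projection back to 2 outputs
y = lora_forward([[2, 3]], W, A, B)
```

Only A and B (here 4 numbers) would be trained, while W stays fixed, which is what makes fine-tuning a 7B-parameter model tractable.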

Multimodal deep learning model for prognostic prediction in cervical cancer receiving definitive radiotherapy: a multi-center study.

Wang W, Yang G, Liu Y, Wei L, Xu X, Zhang C, Pan Z, Liang Y, Yang B, Qiu J, Zhang F, Hou X, Hu K, Liang X

pubmed | papers | Aug 4 2025
For patients with locally advanced cervical cancer (LACC), precise survival prediction models could guide personalized treatment. We developed and validated CerviPro, a deep learning-based multimodal prognostic model, to predict disease-free survival (DFS) in 1018 patients with LACC receiving definitive radiotherapy. The model integrates pre- and post-treatment CT imaging, handcrafted radiomic features, and clinical variables. CerviPro demonstrated robust predictive performance in the internal validation cohort (C-index 0.81) and in the external validation cohorts (C-indexes 0.70 and 0.66), significantly stratifying patients into distinct high- and low-risk DFS groups. Multimodal feature fusion consistently outperformed models based on single feature categories (clinical data, imaging, or radiomics alone), highlighting the synergistic value of integrating diverse data sources. By integrating multimodal data to predict DFS and recurrence risk, CerviPro provides a clinically valuable prognostic tool for LACC, offering the potential to guide personalized treatment strategies.
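The C-index reported above measures, over all comparable patient pairs, how often the model's risk ordering agrees with the observed survival ordering. A minimal sketch of Harrell's C on illustrative data (not the study's):

```python
# Hedged sketch of Harrell's concordance index (C-index): the fraction of
# comparable pairs where the patient who failed earlier was assigned the
# higher predicted risk. Times, risks, and event flags are synthetic.

def c_index(times, risks, events):
    """C-index on (time, risk, event) triples; event=1 means the event was observed."""
    concordant, ties, comparable = 0, 0, 0
    n = len(times)
    for i in range(n):
        for j in range(n):
            # pair (i, j) is comparable if i had an observed event before time j
            if events[i] and times[i] < times[j]:
                comparable += 1
                if risks[i] > risks[j]:
                    concordant += 1
                elif risks[i] == risks[j]:
                    ties += 1
    return (concordant + 0.5 * ties) / comparable

score = c_index(times=[2, 4, 6, 8], risks=[0.9, 0.7, 0.6, 0.2], events=[1, 1, 0, 1])
```

A value of 0.5 corresponds to random ordering and 1.0 to perfect ordering, which is the scale on which CerviPro's 0.81 / 0.70 / 0.66 should be read.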

Retrospective evaluation of interval breast cancer screening mammograms by radiologists and AI.

Subelack J, Morant R, Blum M, Gräwingholt A, Vogel J, Geissler A, Ehlig D

pubmed | papers | Aug 4 2025
To determine whether an AI system can identify breast cancer risk in interval breast cancer (IBC) screening mammograms. IBC screening mammograms from a Swiss screening program were retrospectively analyzed by radiologists and an AI system. Radiologists determined whether the IBC mammogram showed human-visible signs of breast cancer (potentially missed IBCs) or not (IBCs without retrospective abnormalities). The AI system provided a case score and a prognostic risk category per mammogram. 119 IBC cases (mean age 57.3 ± 5.4) were available with complete retrospective evaluations by radiologists and the AI system. 82 (68.9%) were classified as IBCs without retrospective abnormalities and 37 (31.1%) as potentially missed IBCs. 46.2% of all IBCs received a case score ≥ 25, 25.2% ≥ 50, and 13.4% ≥ 75. Of the 25.2% of IBCs with a case score ≥ 50 (vs. 13.4% in a population without breast cancer), 45.2% had not been discussed during a consensus conference, representing 11.4% of all IBC cases. The potentially missed IBCs received significantly higher case scores and risk classifications than IBCs without retrospective abnormalities (case score mean: 54.1 vs. 23.1; high risk: 48.7% vs. 14.7%; p < 0.05). 13.4% of the IBCs without retrospective abnormalities received a case score ≥ 50, of which 62.5% had not been discussed during a consensus conference. An AI system can identify IBC screening mammograms with a higher risk for breast cancer, particularly among potentially missed IBCs but also among some IBCs without retrospective abnormalities where radiologists did not see anything, indicating its ability to improve mammography screening quality.
Question: AI presents a promising opportunity to enhance breast cancer screening in general, but evidence is missing regarding its ability to reduce interval breast cancers.
Findings: The AI system detected a high risk of breast cancer in most interval breast cancer screening mammograms where radiologists retrospectively detected abnormalities.
Clinical relevance: Utilization of an AI system in mammography screening programs can identify breast cancer risk in many interval breast cancer screening mammograms and thus potentially reduce the number of interval breast cancers.
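The case-score threshold analysis reported above (fractions of cases at or above each cutoff) reduces to a simple computation; the scores below are synthetic, not study data:

```python
# Hedged sketch of the case-score threshold analysis: for each cutoff, the
# fraction of mammograms whose AI case score meets or exceeds it.
# The score values are illustrative.

def fraction_at_or_above(scores, cutoff):
    """Fraction of cases with score >= cutoff."""
    return sum(s >= cutoff for s in scores) / len(scores)

scores = [10, 30, 55, 80, 20, 60, 90, 5]  # synthetic AI case scores
fractions = {c: fraction_at_or_above(scores, c) for c in (25, 50, 75)}
```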

Machine Learning and MRI-Based Whole-Organ Magnetic Resonance Imaging Score (WORMS): A Novel Approach to Enhancing Genicular Artery Embolization Outcomes in Knee Osteoarthritis.

Dablan A, Özgül H, Arslan MF, Türksayar O, Cingöz M, Mutlu IN, Erdim C, Guzelbey T, Kılıckesmez O

pubmed | papers | Aug 4 2025
To evaluate the feasibility of machine learning (ML) models using the preprocedural MRI-based Whole-Organ Magnetic Resonance Imaging Score (WORMS) and clinical parameters to predict treatment response after genicular artery embolization (GAE) in patients with knee osteoarthritis (OA). This retrospective study included 66 patients (72 knees) who underwent GAE between December 2022 and June 2024. Preprocedural assessments included WORMS and Kellgren-Lawrence grading. Clinical response was defined as a ≥ 50% reduction in Visual Analog Scale (VAS) score. Feature selection was performed using recursive feature elimination and correlation analysis. Multiple ML algorithms (Random Forest, Support Vector Machine, Logistic Regression) were trained using stratified fivefold cross-validation. Conventional statistical analyses assessed group differences and correlations. Of 72 knees, 33 (45.8%) achieved a clinically significant response. Responders showed significantly lower WORMS values for cartilage, bone marrow, and total joint damage (p < 0.05). The Random Forest model demonstrated the best performance, with an accuracy of 81.8%, AUC-ROC of 86.2%, sensitivity of 90%, and specificity of 75%. Key predictive features included total WORMS, ligament score, and baseline VAS. Bone marrow score showed the strongest correlation with VAS reduction (r = -0.430, p < 0.001). ML models integrating WORMS and clinical data suggest that greater cartilage loss, bone marrow edema, joint damage, and higher baseline VAS scores may help to identify patients less likely to respond to GAE for knee OA.
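The stratified fivefold cross-validation mentioned above keeps the responder/non-responder ratio roughly constant in every fold. A simplified stand-in for e.g. scikit-learn's StratifiedKFold, with synthetic labels:

```python
# Minimal sketch of a stratified k-fold split: indices of each class are
# dealt round-robin into folds so class balance is preserved. Labels are
# synthetic; real pipelines would also shuffle before dealing.

def stratified_folds(labels, k):
    """Return k folds of indices, each with near-equal class proportions."""
    folds = [[] for _ in range(k)]
    by_class = {}
    for idx, y in enumerate(labels):
        by_class.setdefault(y, []).append(idx)
    for idxs in by_class.values():
        for pos, idx in enumerate(idxs):
            folds[pos % k].append(idx)
    return folds

labels = [1, 1, 1, 1, 1, 0, 0, 0, 0, 0]   # 5 responders, 5 non-responders
folds = stratified_folds(labels, 5)        # each fold gets one of each class
```

With only 72 knees and a 45.8% response rate, this balance matters: an unstratified split could easily leave a fold with almost no responders.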

A Novel Deep Learning Radiomics Nomogram Integrating B-Mode Ultrasound and Contrast-Enhanced Ultrasound for Preoperative Prediction of Lymphovascular Invasion in Invasive Breast Cancer.

Niu R, Chen Z, Li Y, Fang Y, Gao J, Li J, Li S, Huang S, Zou X, Fu N, Jin Z, Shao Y, Li M, Kang Y, Wang Z

pubmed | papers | Aug 4 2025
This study aimed to develop a deep learning radiomics nomogram (DLRN) that integrates B-mode ultrasound (BMUS) and contrast-enhanced ultrasound (CEUS) images for preoperative lymphovascular invasion (LVI) prediction in invasive breast cancer (IBC). A total of 981 patients with IBC from three hospitals were retrospectively enrolled. Of the 834 patients recruited from Hospital I, 688 were designated as the training cohort and 146 as the internal test cohort, whereas 147 patients from Hospitals II and III constituted the external test cohort. Deep learning and handcrafted radiomics features of BMUS and CEUS images were extracted from breast cancer lesions to construct a deep learning radiomics (DLR) signature. The DLRN was developed by integrating the DLR signature and independent clinicopathological parameters. The performance of the DLRN was evaluated with respect to discrimination, calibration, and clinical benefit. The DLRN exhibited good performance in predicting LVI, with areas under the receiver operating characteristic curves (AUCs) of 0.885 (95% confidence interval [CI], 0.858-0.912), 0.914 (95% CI, 0.868-0.960), and 0.914 (95% CI, 0.867-0.960) in the training, internal test, and external test cohorts, respectively. The DLRN exhibited good stability and clinical practicability, as demonstrated by the calibration curve and decision curve analysis. In addition, the DLRN outperformed the traditional clinical model and the DLR signature for LVI prediction in the internal and external test cohorts (all p < 0.05). The DLRN thus represents a non-invasive approach to preoperatively determining LVI status in IBC.
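The AUCs reported above have a simple probabilistic reading: the AUC equals the probability that a randomly chosen LVI-positive case scores higher than a randomly chosen LVI-negative case (the Mann-Whitney formulation). A minimal sketch with illustrative scores and labels:

```python
# Hedged sketch of AUC as the Mann-Whitney statistic: count the pairwise
# "wins" of positive-case scores over negative-case scores (ties count half)
# and normalize. Scores and labels are illustrative.

def auc(scores, labels):
    """AUC over (score, label) pairs with labels in {0, 1}."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

value = auc(scores=[0.9, 0.8, 0.4, 0.3, 0.2], labels=[1, 1, 0, 1, 0])
```

On this scale, the DLRN's 0.914 means the model ranks a random LVI-positive patient above a random LVI-negative patient about 91% of the time.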

CT-Based 3D Super-Resolution Radiomics for the Differential Diagnosis of Brucella <i>vs.</i> Tuberculous Spondylitis using Deep Learning.

Wang K, Qi L, Li J, Zhang M, Du H

pubmed | papers | Aug 4 2025
This study aims to improve the accuracy of distinguishing tuberculous spondylitis (TBS) from Brucella spondylitis (BS) by developing radiomics models using deep learning and CT images enhanced with super-resolution (SR). A total of 94 patients diagnosed with BS or TBS were randomly divided into training (n=65) and validation (n=29) groups in a 7:3 ratio. In the training set, there were 40 BS and 25 TBS patients, with a mean age of 58.34 ± 12.53 years. In the validation set, there were 17 BS and 12 TBS patients, with a mean age of 58.48 ± 12.29 years. Standard CT images were enhanced using SR, improving spatial resolution and image quality. The lesion regions of interest (ROIs) were manually segmented, and radiomics features were extracted. ResNet18 and ResNet34 were used for deep learning feature extraction and model training. Four multi-layer perceptron (MLP) models were developed: clinical, radiomics (Rad), deep learning (DL), and a combined model. Model performance was assessed using five-fold cross-validation, ROC analysis, and decision curve analysis (DCA). Key clinical and imaging features showed significant differences between TBS and BS (e.g., gender, p=0.0038; parrot beak appearance, p<0.001; dead bone, p<0.001; deformities of the spinal posterior process, p=0.0044; psoas abscess, p<0.001). The combined model outperformed the others, achieving the highest AUC (0.952), with ResNet34 and SR-enhanced images further boosting performance. Sensitivity reached 0.909 and specificity 0.941. DCA confirmed clinical applicability. The integration of SR-enhanced CT imaging and deep learning radiomics appears to improve diagnostic differentiation between BS and TBS. The combined model, especially when using ResNet34 and GAN-based super-resolution, demonstrated better predictive performance. High-resolution imaging may facilitate better lesion delineation and more robust feature extraction. Nevertheless, further validation with larger, multicenter cohorts is needed to confirm generalizability and reduce potential bias from the retrospective design and imaging heterogeneity. This study suggests that integrating deep learning radiomics with super-resolution may improve the differentiation between TBS and BS compared to standard CT imaging. However, prospective multi-center studies are necessary to validate its clinical applicability.
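The SR preprocessing step amounts to upsampling each slice before segmentation and feature extraction. The study uses a learned GAN-based model; plain nearest-neighbor upscaling is shown here only to illustrate the resizing step itself, not the authors' method:

```python
# Minimal sketch of the upsampling step behind super-resolution
# preprocessing. Real SR uses a learned model; nearest-neighbor
# replication is a deliberately simple stand-in. Sizes are toy values.

def upscale_nearest(img, factor):
    """Nearest-neighbor upsampling of a 2D list-of-lists image."""
    out = []
    for row in img:
        wide = [v for v in row for _ in range(factor)]   # repeat each column
        out.extend([list(wide) for _ in range(factor)])  # repeat each row
    return out

hi = upscale_nearest([[1, 2], [3, 4]], 2)  # 2x2 -> 4x4
```

A learned SR model would instead predict plausible high-frequency detail rather than replicating pixels, which is why it can improve lesion delineation.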

Enhanced detection of ovarian cancer using AI-optimized 3D CNNs for PET/CT scan analysis.

Sadeghi MH, Sina S, Faghihi R, Alavi M, Giammarile F, Omidi H

pubmed | papers | Aug 4 2025
This study investigates how deep learning (DL) can enhance ovarian cancer diagnosis and staging using large imaging datasets. Specifically, we compare six conventional convolutional neural network (CNN) architectures (ResNet, DenseNet, GoogLeNet, U-Net, VGG, and AlexNet) with OCDA-Net, an enhanced model designed for [<sup>18</sup>F]FDG PET image analysis. OCDA-Net, an advancement on the ResNet architecture, was evaluated using randomly split training (80%), validation (10%), and test (10%) sets. Trained over 100 epochs, OCDA-Net achieved superior diagnostic classification with an accuracy of 92% and staging accuracy of 94%, supported by robust precision, recall, and F-measure metrics. Grad-CAM++ heat maps confirmed that the network attends to hyper-metabolic lesions, supporting clinical interpretability. Our findings show that OCDA-Net outperforms the existing CNN models and has strong potential to transform ovarian cancer diagnosis and staging. The study suggests that implementing these DL models in clinical practice could ultimately improve patient prognoses. Future research should expand datasets, enhance model interpretability, and validate these models in clinical settings.
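The 80/10/10 train/validation/test split described above can be sketched with a seeded shuffle so the partition is reproducible; the dataset size and seed here are illustrative:

```python
# Hedged sketch of a reproducible random 80/10/10 split of dataset indices.
# n and the seed are illustrative, not the study's values.

import random

def split_indices(n, seed=0, train=0.8, val=0.1):
    """Shuffle indices 0..n-1 deterministically and cut into three parts."""
    idx = list(range(n))
    random.Random(seed).shuffle(idx)
    n_train, n_val = int(n * train), int(n * val)
    return idx[:n_train], idx[n_train:n_train + n_val], idx[n_train + n_val:]

tr, va, te = split_indices(100)  # 80 / 10 / 10 disjoint index sets
```

Seeding the shuffle makes the comparison across the seven architectures fair, since every model sees exactly the same partition.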