
Deep transfer learning radiomics combined with explainable machine learning for preoperative thymoma risk prediction based on CT.

Wu S, Fan L, Wu Y, Xu J, Guo Y, Zhang H, Xu Z

PubMed · Jun 26 2025
To develop and validate a computerized tomography (CT)-based deep transfer learning radiomics model combined with explainable machine learning for preoperative risk prediction of thymoma. This retrospective study included 173 pathologically confirmed thymoma patients from our institution in the training group and 93 patients from two external centers in the external validation group. Tumors were classified according to the World Health Organization simplified criteria as low-risk types (A, AB, and B1) or high-risk types (B2 and B3). Radiomics features and deep transfer learning features were extracted from venous-phase contrast-enhanced CT images using a modified Inception V3 network. Principal component analysis and least absolute shrinkage and selection operator regression identified 20 key predictors. Six classifiers (decision tree, gradient boosting machine, k-nearest neighbors, naïve Bayes, random forest (RF), and support vector machine) were trained on five feature sets: CT imaging model, radiomics feature model, deep transfer learning feature model, combined feature model, and combined model. Interpretability was assessed with SHapley Additive exPlanations (SHAP), and an interactive web application was developed for real-time individualized risk prediction and visualization. In the external validation group, the RF classifier achieved the highest area under the receiver operating characteristic curve (AUC) value of 0.956. In the training group, the AUC values for the CT imaging model, radiomics feature model, deep transfer learning feature model, combined feature model, and combined model were 0.684, 0.831, 0.815, 0.893, and 0.910, respectively. The corresponding AUC values in the external validation group were 0.604, 0.865, 0.880, 0.934, and 0.956, respectively. SHAP visualizations revealed the relative contribution of each feature, while the web application provided real-time individual prediction probabilities with interpretative outputs.
We developed a CT‑based deep transfer learning radiomics model combined with explainable machine learning and an interactive web application; this model achieved high accuracy and transparency for preoperative thymoma risk stratification, facilitating personalized clinical decision‑making.
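As a rough illustration of the pipeline this abstract describes (dimensionality reduction, LASSO-based selection of 20 predictors, and an RF classifier evaluated by AUC), the following sketch runs on synthetic data; all dimensions and names are illustrative, not the authors' implementation.

```python
# Hypothetical sketch of the PCA -> LASSO -> random-forest pipeline
# described above, on synthetic stand-in data (not the authors' data).
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LassoCV
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(173, 200))      # stand-in radiomics + deep features
y = rng.integers(0, 2, size=173)     # low-risk (0) vs high-risk (1)

# Step 1: PCA to decorrelate, then LASSO to rank a sparse predictor subset.
X_pca = PCA(n_components=50, random_state=0).fit_transform(X)
lasso = LassoCV(cv=5, random_state=0).fit(X_pca, y)
keep = np.argsort(np.abs(lasso.coef_))[::-1][:20]   # 20 key predictors

# Step 2: random-forest classifier on the selected features, scored by AUC.
Xtr, Xte, ytr, yte = train_test_split(
    X_pca[:, keep], y, random_state=0, stratify=y)
rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(Xtr, ytr)
auc = roc_auc_score(yte, rf.predict_proba(Xte)[:, 1])
```

On random noise the AUC is near chance; the point is the shape of the pipeline, not the score.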

Design and Optimization of an automatic deep learning-based cerebral reperfusion scoring (TICI) using thrombus localization.

Folcher A, Piters J, Wallach D, Guillard G, Ognard J, Gentric JC

PubMed · Jun 26 2025
The Thrombolysis in Cerebral Infarction (TICI) scale is widely used to assess angiographic outcomes of mechanical thrombectomy despite significant variability. Our objective was to create and optimize an artificial intelligence (AI)-based classification model for digital subtraction angiography (DSA) TICI scoring. Using a monocentric DSA dataset of thrombectomies and a platform for medical image analysis, independent readers labeled each series according to TICI score and marked each thrombus. A convolutional neural network (CNN) classification model was created to classify TICI scores into 2 groups (TICI 0, 1, or 2a versus TICI 2b, 2c, or 3) and 3 groups (TICI 0, 1, or 2a versus TICI 2b versus TICI 2c or 3). The algorithm was first tested alone; thrombi positions were then introduced to the algorithm, first by manual placement and then using a thrombus detection module. A total of 422 patients were enrolled in the study, and 2492 thrombi were annotated on the TICI-labeled series. The model was trained on a total of 1609 DSA series. The two-class classification model had a specificity of 0.97 ±0.01 and a sensitivity of 0.86 ±0.01. The 3-class models showed insufficient performance, even when combined with the true thrombi positions, with F1 scores for TICI 2b classification of 0.50 and 0.55 ±0.07, respectively. The automatic thrombus detection module did not enhance the performance of the 3-class model, with an F1 score for the TICI 2b class of 0.50 ±0.07. The AI model provided a reproducible 2-class (TICI 0, 1, or 2a versus 2b, 2c, or 3) classification according to the TICI scale. Its performance in distinguishing three classes (TICI 0, 1, or 2a versus 2b versus 2c or 3) remains insufficient for clinical practice. Automatic thrombus detection did not improve the model's performance.
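The two headline metrics above, sensitivity and specificity, reduce to ratios over a confusion matrix. A minimal sketch, with hypothetical counts chosen only to echo the reported order of magnitude:

```python
# Sensitivity and specificity from a hypothetical confusion matrix for the
# 2-class split (TICI >= 2b as "positive"); counts are illustrative only.
def sensitivity_specificity(tp, fn, tn, fp):
    """Sensitivity = TP/(TP+FN); specificity = TN/(TN+FP)."""
    return tp / (tp + fn), tn / (tn + fp)

sens, spec = sensitivity_specificity(tp=86, fn=14, tn=97, fp=3)
print(sens, spec)  # 0.86 and 0.97
```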

Predicting brain metastases in EGFR-positive lung adenocarcinoma patients using pre-treatment CT lung imaging data.

He X, Guan C, Chen T, Wu H, Su L, Zhao M, Guo L

PubMed · Jun 26 2025
This study aims to establish a dual-feature fusion model integrating radiomic features with deep learning features, utilizing single-modality pre-treatment lung CT image data to achieve early warning of brain metastasis (BM) risk within 2 years in EGFR-positive lung adenocarcinoma. After rigorous screening of 362 EGFR-positive lung adenocarcinoma patients with pre-treatment lung CT images, 173 eligible participants were ultimately enrolled in this study, including 93 patients with BM and 80 without BM. Radiomic features were extracted from manually segmented lung nodule regions, and a selection of features was used to develop radiomics models. For deep learning, ROI-level CT images were processed using several deep learning networks, including the novel vision mamba, which was applied for the first time in this context. A feature-level fusion model was developed by combining radiomic and deep learning features. Model performance was assessed using receiver operating characteristic (ROC) curves and decision curve analysis (DCA), with statistical comparisons of area under the curve (AUC) values using the DeLong test. Among the models evaluated, the fused vision mamba model demonstrated the best classification performance, achieving an AUC of 0.86 (95% CI: 0.82-0.90), with a recall of 0.88, F1-score of 0.70, and accuracy of 0.76. This fusion model outperformed both radiomics-only and deep learning-only models, highlighting its superior predictive accuracy for early BM risk detection in EGFR-positive lung adenocarcinoma patients. The fused vision mamba model, utilizing single CT imaging data, significantly enhances the prediction of brain metastasis within two years in EGFR-positive lung adenocarcinoma patients. This novel approach, combining radiomic and deep learning features, offers promising clinical value for early detection and personalized treatment.
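The "feature-level fusion" step described above amounts to concatenating the radiomic and deep feature vectors before a single classifier. A minimal sketch on synthetic data (dimensions are illustrative assumptions, not the paper's):

```python
# Hypothetical sketch of feature-level fusion: radiomic and deep-learning
# feature vectors are concatenated, then one classifier is trained.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)
radiomic = rng.normal(size=(173, 50))   # stand-in radiomic features
deep = rng.normal(size=(173, 128))      # stand-in deep (e.g., Mamba) features
y = rng.integers(0, 2, size=173)        # BM within 2 years (1) or not (0)

fused = np.concatenate([radiomic, deep], axis=1)   # feature-level fusion
clf = LogisticRegression(max_iter=1000).fit(fused, y)
auc = roc_auc_score(y, clf.predict_proba(fused)[:, 1])
```

In practice the classifier and fusion width would be tuned; this only shows where the fusion sits in the pipeline.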

Generalizable Neural Electromagnetic Inverse Scattering

Yizhe Cheng, Chunxun Tian, Haoru Wang, Wentao Zhu, Xiaoxuan Ma, Yizhou Wang

arXiv preprint · Jun 26 2025
Solving Electromagnetic Inverse Scattering Problems (EISP) is fundamental in applications such as medical imaging, where the goal is to reconstruct the relative permittivity from the scattered electromagnetic field. This inverse process is inherently ill-posed and highly nonlinear, making it particularly challenging. A recent machine learning-based approach, Img-Interiors, shows promising results by leveraging continuous implicit functions. However, it requires case-specific optimization, lacks generalization to unseen data, and fails under sparse transmitter setups (e.g., with only one transmitter). To address these limitations, we revisit EISP from a physics-informed perspective, reformulating it as a two-stage inverse transmission-scattering process. This formulation reveals the induced current as a generalizable intermediate representation, effectively decoupling the nonlinear scattering process from the ill-posed inverse problem. Built on this insight, we propose the first generalizable physics-driven framework for EISP, comprising a current estimator and a permittivity solver, working in an end-to-end manner. The current estimator explicitly learns the induced current as a physical bridge between the incident and scattered fields, while the permittivity solver computes the relative permittivity directly from the estimated induced current. This design enables data-driven training and generalizable feed-forward prediction of relative permittivity on unseen data while maintaining strong robustness to transmitter sparsity. Extensive experiments show that our method outperforms state-of-the-art approaches in reconstruction accuracy, generalization, and robustness. This work offers a fundamentally new perspective on electromagnetic inverse scattering and represents a major step toward cost-effective practical solutions for electromagnetic imaging.
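The key decoupling claim, that the scattered field is linear in the induced current while the permittivity enters only afterwards, can be seen in a toy discretized model. The sketch below is a drastic simplification (real-valued, noiseless, random stand-in Green's operator) and not the authors' network-based method:

```python
# Toy two-stage view: scattered field E_s = G @ J is linear in the induced
# current J, so J can be estimated first; the contrast chi then follows from
# J = chi * E_tot per cell. All operators and fields here are synthetic.
import numpy as np

rng = np.random.default_rng(0)
n_cells, n_receivers = 64, 128
G = rng.normal(size=(n_receivers, n_cells))      # stand-in Green's operator
E_tot = rng.normal(size=n_cells) + 2.0           # total field in the domain
chi_true = rng.uniform(0.0, 1.0, size=n_cells)   # permittivity contrast
J_true = chi_true * E_tot                        # induced current
E_s = G @ J_true                                 # scattered field (noiseless)

# Stage 1: estimate the induced current from the scattered field.
J_est, *_ = np.linalg.lstsq(G, E_s, rcond=None)
# Stage 2: recover the contrast from the estimated current.
chi_est = J_est / E_tot
```

In the noiseless, overdetermined case the least-squares stage recovers the current exactly; the paper's contribution is learning this stage so it generalizes under realistic, ill-posed conditions.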

Aneurysm Analysis Using Deep Learning

Bagheri Rajeoni, A., Pederson, B., Lessner, S. M., Valafar, H.

medRxiv preprint · Jun 25 2025
Precise aneurysm volume measurement offers a transformative edge for risk assessment and treatment planning in clinical settings. Currently, clinical assessments rely heavily on manual review of medical imaging, a process that is time-consuming and prone to inter-observer variability. The widely accepted standard-of-care primarily focuses on measuring aneurysm diameter at its widest point, providing a limited perspective on aneurysm morphology and lacking efficient methods to measure aneurysm volumes. Yet, volume measurement can offer deeper insight into aneurysm progression and severity. In this study, we propose an automated approach that leverages the strengths of pre-trained neural networks and expert systems to delineate aneurysm boundaries and compute volumes on an unannotated dataset from 60 patients. The dataset includes slice-level start/end annotations for aneurysm but no pixel-wise aorta segmentations. Our method utilizes a pre-trained UNet to automatically locate the aorta, employs SAM2 to track the aorta through vascular irregularities such as aneurysms down to the iliac bifurcation, and finally uses a Long Short-Term Memory (LSTM) network or expert system to identify the beginning and end points of the aneurysm within the aorta. Despite no manual aorta segmentation, our approach achieves promising accuracy, predicting the aneurysm start point with an R2 score of 71%, the end point with an R2 score of 76%, and the volume with an R2 score of 92%. This technique has the potential to facilitate large-scale aneurysm analysis and improve clinical decision-making by reducing dependence on annotated datasets.
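Once the segmentation and start/end detection stages above have delimited the aneurysm, the volume computation itself is simple: voxel count times per-voxel volume. A minimal sketch with a synthetic mask and assumed voxel spacing:

```python
# Volume from a binary segmentation mask: count voxels, multiply by the
# physical volume of one voxel. Mask and spacings are synthetic placeholders.
import numpy as np

mask = np.zeros((10, 64, 64), dtype=bool)   # (slices, rows, cols)
mask[3:7, 20:40, 20:40] = True              # hypothetical aneurysm region

spacing_mm = (2.0, 0.5, 0.5)                # slice thickness, row, col (mm)
voxel_mm3 = float(np.prod(spacing_mm))      # 0.5 mm^3 per voxel
volume_ml = mask.sum() * voxel_mm3 / 1000.0 # 1 mL = 1000 mm^3
print(volume_ml)  # 1600 voxels * 0.5 mm^3 = 800 mm^3 = 0.8 mL
```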

The Current State of Artificial Intelligence on Detecting Pulmonary Embolism via Computerised Tomography Pulmonary Angiogram: A Systematic Review.

Hassan MSTA, Elhotiby MAM, Shah V, Rocha H, Rad AA, Miller G, Malawana J

PubMed · Jun 25 2025
<b>Aims/Background</b> Pulmonary embolism (PE) is a life-threatening condition with significant diagnostic challenges due to high rates of missed or delayed detection. Computed tomography pulmonary angiography (CTPA) is the current standard for diagnosing PE; however, demand for imaging places strain on healthcare systems and increases error rates. This systematic review aims to assess the diagnostic accuracy and clinical applicability of artificial intelligence (AI)-based models for PE detection on CTPA, exploring their potential to enhance diagnostic reliability and efficiency across clinical settings. <b>Methods</b> A systematic review was conducted in accordance with Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines. Excerpta Medica Database (EMBASE), Medical Literature Analysis and Retrieval System Online (MEDLINE), Cochrane, PubMed, and Google Scholar were searched for original articles from inception to September 2024. Articles were included if they reported successful AI integration, whether partial or full, alongside CTPA scans for PE detection in patients. <b>Results</b> The literature search identified 919 articles, with 745 remaining after duplicate removal. Following rigorous screening and appraisal aligned with inclusion and exclusion criteria, 12 studies were included in the final analysis. A total of three primary AI modalities emerged: convolutional neural networks (CNNs), segmentation models, and natural language processing (NLP), collectively used in the analysis of 341,112 radiographic images. CNNs were the most frequently applied modality in this review. Models such as AdaBoost and EmbNet have demonstrated high sensitivity, with EmbNet achieving 88-90.9% per scan and reducing false positives to 0.45 per scan. <b>Conclusion</b> AI shows significant promise as a diagnostic tool for identifying PE on CTPA scans, particularly when combined with other forms of clinical data.
However, challenges remain, including ensuring generalisability, addressing potential bias, and conducting rigorous external validation. Variability in study methodologies and the lack of standardised reporting of key metrics complicate comparisons. Future research must focus on refining models, improving peripheral emboli detection, and validating performance across diverse settings to realise AI's potential fully.

Application Value of Deep Learning-Based AI Model in the Classification of Breast Nodules.

Zhi S, Cai X, Zhou W, Qian P

PubMed · Jun 25 2025
<b>Aims/Background</b> Breast nodules are highly prevalent among women, and ultrasound is a widely used screening tool. However, single ultrasound examinations often result in high false-positive rates, leading to unnecessary biopsies. Artificial intelligence (AI) has demonstrated the potential to improve diagnostic accuracy, reducing misdiagnosis and minimising inter-observer variability. This study developed a deep learning-based AI model to evaluate its clinical utility in assisting sonographers with the Breast Imaging Reporting and Data System (BI-RADS) classification of breast nodules. <b>Methods</b> A retrospective analysis was conducted on 558 patients with breast nodules classified as BI-RADS categories 3 to 5, confirmed through pathological examination at The People's Hospital of Pingyang County between December 2019 and December 2023. The image dataset was divided into a training set, validation set, and test set, and a convolutional neural network (CNN) was used to construct a deep learning-based AI model. Patients underwent ultrasound examination and AI-assisted diagnosis. The receiver operating characteristic (ROC) curve was used to analyse the performance of the AI model, physician adjudication results, and the diagnostic efficacy of physicians before and after AI model assistance. Cohen's weighted Kappa coefficient was used to assess the consistency of BI-RADS classification among five ultrasound physicians before and after AI model assistance. Additionally, statistical analyses were performed to evaluate changes in BI-RADS classification results before and after AI model assistance for each physician. <b>Results</b> According to pathological examination, 765 of the 1026 breast nodules were benign, while 261 were malignant. The sensitivity, specificity, and accuracy of routine ultrasonography in diagnosing benign and malignant nodules were 80.85%, 91.59%, and 88.31%, respectively. 
In comparison, the AI system achieved a sensitivity of 89.36%, specificity of 92.52%, and accuracy of 91.56%. Furthermore, AI model assistance significantly improved the consistency of physicians' BI-RADS classification (<i>p</i> < 0.001). <b>Conclusion</b> A deep learning-based AI model constructed using ultrasound images can enhance the differentiation between benign and malignant breast nodules and improve classification accuracy, thereby reducing the incidence of missed diagnoses and misdiagnoses.
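The two evaluation steps above, diagnostic metrics from counts and weighted Cohen's kappa for BI-RADS agreement, can be sketched as follows; all counts and ratings here are synthetic, not the study's data:

```python
# Hypothetical sketch: sensitivity/specificity/accuracy from counts, plus
# Cohen's weighted kappa for ordinal BI-RADS category agreement.
from sklearn.metrics import cohen_kappa_score

def diagnostic_metrics(tp, fn, tn, fp):
    sens = tp / (tp + fn)
    spec = tn / (tn + fp)
    acc = (tp + tn) / (tp + fn + tn + fp)
    return sens, spec, acc

# Illustrative counts for 261 malignant and 765 benign nodules.
sens, spec, acc = diagnostic_metrics(tp=233, fn=28, tn=708, fp=57)

# Hypothetical BI-RADS 3-5 ratings by one physician before vs. after AI
# assistance; linear weights penalize near-misses less than large disagreements.
before = [3, 4, 4, 5, 3, 4, 5, 3]
after  = [3, 4, 5, 5, 3, 4, 5, 4]
kappa = cohen_kappa_score(before, after, weights="linear")
```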

BronchoGAN: anatomically consistent and domain-agnostic image-to-image translation for video bronchoscopy.

Soliman A, Keuth R, Himstedt M

PubMed · Jun 25 2025
Purpose The limited availability of bronchoscopy images makes image synthesis particularly interesting for training deep learning models. Robust image translation across different domains (virtual bronchoscopy, phantom, as well as in vivo and ex vivo image data) is pivotal for clinical applications. Methods This paper proposes BronchoGAN, which introduces anatomical constraints for image-to-image translation integrated into a conditional GAN. In particular, we force bronchial orifices to match across input and output images. We further propose using foundation model-generated depth images as an intermediate representation, ensuring robustness across a variety of input domains and establishing models with substantially less reliance on individual training datasets. Moreover, our intermediate depth image representation makes it easy to construct paired image data for training. Results Our experiments showed that input images from different domains (e.g., virtual bronchoscopy, phantoms) can be successfully translated to images mimicking realistic human airway appearance. We demonstrated that anatomical structures (i.e., bronchial orifices) can be robustly preserved with our approach, which is shown qualitatively and quantitatively by means of improved FID, SSIM, and Dice coefficient scores. Our anatomical constraints enabled an improvement in the Dice coefficient of up to 0.43 for synthetic images. Conclusion Through foundation models for intermediate depth representations and bronchial orifice segmentation integrated as anatomical constraints into conditional GANs, we are able to robustly translate images from different bronchoscopy input domains. BronchoGAN allows public CT scan data (virtual bronchoscopy) to be incorporated in order to generate large-scale bronchoscopy image datasets with realistic appearance. BronchoGAN thus helps bridge the gap left by the scarcity of public bronchoscopy images.
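The Dice coefficient used above to quantify orifice preservation is a simple overlap ratio between two binary masks. A minimal sketch with synthetic masks:

```python
# Dice coefficient between two boolean masks: 2|A ∩ B| / (|A| + |B|).
import numpy as np

def dice(a: np.ndarray, b: np.ndarray) -> float:
    inter = np.logical_and(a, b).sum()
    total = a.sum() + b.sum()
    return 2.0 * inter / total if total else 1.0

a = np.zeros((8, 8), dtype=bool); a[2:6, 2:6] = True   # 16 px
b = np.zeros((8, 8), dtype=bool); b[3:7, 3:7] = True   # 16 px, shifted
print(dice(a, b))  # 9 overlapping px -> 2*9/32 = 0.5625
```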

Association of peripheral immune markers with brain age and dementia risk estimated using deep learning methods.

Huang X, Yuan S, Ling Y, Tan S, Bai Z, Xu Y, Shen S, Lyu J, Wang H

PubMed · Jun 25 2025
The peripheral immune system is essential for maintaining central nervous system homeostasis. This study investigates the effects of peripheral immune markers on accelerated brain aging and dementia using the brain-predicted age difference based on neuroimaging. Leveraging data from the UK Biobank, Cox regression was used to explore the relationship between peripheral immune markers and dementia, and multivariate linear regression to assess associations between peripheral immune biomarkers and brain structure. Additionally, we established a brain age prediction model using the Simple Fully Convolutional Network (SFCN) deep learning architecture. Analysis of the resulting brain-Predicted Age Difference (PAD) revealed relationships between accelerated brain aging, peripheral immune markers, and dementia. During the median follow-up period of 14.3 years, 4,277 dementia cases were observed among 322,761 participants. Both innate and adaptive immune markers correlated with dementia risk. The neutrophil-to-lymphocyte ratio (NLR) showed the strongest association with dementia risk (HR = 1.14; 95% CI: 1.11-1.18, P<0.001). Multivariate linear regression revealed significant associations between peripheral immune markers and brain regional structural indices. The deep learning-based SFCN model estimated brain age in dementia subjects (MAE = 5.63, r2 = -0.46, R = 0.22). PAD showed significant correlation with dementia risk and certain peripheral immune markers, particularly in individuals with positive brain age increment. This study employs brain age as a quantitative marker of accelerated brain aging to investigate its potential associations with peripheral immunity and dementia, highlighting the importance of early intervention targeting peripheral immune markers to delay brain aging and prevent dementia.
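The brain-age metrics above reduce to simple arithmetic: PAD is predicted minus chronological age, and MAE summarizes the prediction error. A minimal sketch with synthetic ages (not UK Biobank data):

```python
# PAD (brain-Predicted Age Difference) and MAE from synthetic placeholder
# ages; in the study the predictions come from the SFCN model.
import numpy as np

chron_age = np.array([60.0, 65.0, 70.0, 75.0])
pred_age  = np.array([63.0, 64.0, 76.0, 74.0])   # stand-in model outputs

pad = pred_age - chron_age            # positive PAD = accelerated aging
mae = np.mean(np.abs(pad))
print(pad, mae)  # [ 3. -1.  6. -1.] 2.75
```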

Regional free-water diffusion is more strongly related to neuroinflammation than neurodegeneration.

Sumra V, Hadian M, Dilliott AA, Farhan SMK, Frank AR, Lang AE, Roberts AC, Troyer A, Arnott SR, Marras C, Tang-Wai DF, Finger E, Rogaeva E, Orange JB, Ramirez J, Zinman L, Binns M, Borrie M, Freedman M, Ozzoude M, Bartha R, Swartz RH, Munoz D, Masellis M, Black SE, Dixon RA, Dowlatshahi D, Grimes D, Hassan A, Hegele RA, Kumar S, Pasternak S, Pollock B, Rajji T, Sahlas D, Saposnik G, Tartaglia MC

PubMed · Jun 25 2025
Recent research has suggested that neuroinflammation may be important in the pathogenesis of neurodegenerative diseases. Free-water diffusion (FWD) has been proposed as a non-invasive neuroimaging-based biomarker for neuroinflammation. Free-water maps were generated using diffusion MRI data in 367 patients from the Ontario Neurodegenerative Disease Research Initiative (108 Alzheimer's Disease/Mild Cognitive Impairment, 42 Frontotemporal Dementia, 37 Amyotrophic Lateral Sclerosis, 123 Parkinson's Disease, and 58 vascular disease-related Cognitive Impairment). The ability of FWD to predict neuroinflammation and neurodegeneration from biofluids was estimated using plasma glial fibrillary acidic protein (GFAP) and neurofilament light chain (NfL), respectively. Recursive Feature Elimination (RFE) performed best among the feature-selection algorithms tested and revealed regional specificity for areas that are the most important features for predicting GFAP over NfL concentration. Deep learning models using selected features and demographic information revealed better prediction of GFAP over NfL. Based on feature selection and deep learning methods, FWD was found to be more strongly related to GFAP concentration (a measure of astrogliosis) than to NfL (a measure of neuro-axonal damage) across neurodegenerative disease groups, in terms of predictive performance. Non-invasive markers of neurodegeneration, such as structural MRI, already exist, while non-invasive markers of neuroinflammation are not available. Our results support the use of FWD as a non-invasive neuroimaging-based biomarker for neuroinflammation.
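The RFE step above repeatedly discards the weakest features of a model predicting the biofluid target. A minimal sketch on synthetic data, with "regions" 2, 7, and 11 planted as the informative ones (a stand-in for GFAP-predictive regions, not the study's findings):

```python
# Hypothetical sketch of Recursive Feature Elimination for predicting a
# biofluid marker from regional free-water values (all data synthetic).
import numpy as np
from sklearn.feature_selection import RFE
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(367, 30))            # 30 stand-in regional FWD values
w = np.zeros(30); w[[2, 7, 11]] = [1.5, -2.0, 1.0]
y = X @ w + 0.1 * rng.normal(size=367)    # synthetic "GFAP" target

# RFE drops the lowest-weight feature each round until 3 remain.
rfe = RFE(LinearRegression(), n_features_to_select=3).fit(X, y)
selected = np.flatnonzero(rfe.support_)
print(selected)  # should recover the planted informative regions
```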
