
Deep Learning MRI Models for the Differential Diagnosis of Tumefactive Demyelination versus <i>IDH</i> Wild-Type Glioblastoma.

Conte GM, Moassefi M, Decker PA, Kosel ML, McCarthy CB, Sagen JA, Nikanpour Y, Fereidan-Esfahani M, Ruff MW, Guido FS, Pump HK, Burns TC, Jenkins RB, Erickson BJ, Lachance DH, Tobin WO, Eckel-Passow JE

PubMed | Jun 26, 2025
Diagnosis of tumefactive demyelination can be challenging. The diagnosis of indeterminate brain lesions on MRI often requires tissue confirmation via brain biopsy. Noninvasive methods for accurate diagnosis of tumor and nontumor etiologies allow for tailored therapy, optimal tumor control, and a reduced risk of iatrogenic morbidity and mortality. Tumefactive demyelination has imaging features that mimic <i>isocitrate dehydrogenase</i> wild-type glioblastoma (<i>IDH</i>wt GBM). We hypothesized that deep learning applied to postcontrast T1-weighted (T1C) and T2-weighted (T2) MRI can discriminate tumefactive demyelination from <i>IDH</i>wt GBM. Patients with tumefactive demyelination (<i>n</i> = 144) and <i>IDH</i>wt GBM (<i>n</i> = 455) were identified via clinical registries. A 3D DenseNet121 architecture was used to develop models to differentiate tumefactive demyelination and <i>IDH</i>wt GBM using both T1C and T2 MRI, as well as T1C only and T2 only. A 3-stage design was used: 1) model development and internal validation via 5-fold cross-validation using a sex-, age-, and MRI technology-matched set of tumefactive demyelination and <i>IDH</i>wt GBM; 2) validation of model specificity on independent <i>IDH</i>wt GBM; and 3) prospective validation on tumefactive demyelination and <i>IDH</i>wt GBM. Stratified areas under the receiver operating characteristic curve (AUROCs) were used to evaluate model performance by sex, age at diagnosis, MRI scanner field strength, and MRI acquisition. The deep learning model developed using both T1C and T2 images had a prospective validation AUROC of 0.88 (95% CI: 0.82-0.95). In the prospective validation stage, a model score threshold of 0.28 resulted in 91% sensitivity (correctly classifying tumefactive demyelination) and 80% specificity (correctly classifying <i>IDH</i>wt GBM). Stratified AUROCs demonstrated that model performance may be improved if thresholds are chosen stratified by age and MRI acquisition.
MRI can provide the basis for applying deep learning models to aid in the differential diagnosis of brain lesions. Further validation is needed to evaluate how well the model generalizes across institutions, patient populations, and technology, and to evaluate optimal thresholds for classification. Next steps also should incorporate additional tumor etiologies such as CNS lymphoma and brain metastases.
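The operating point reported above (a score cut-off of 0.28) trades sensitivity against specificity. A minimal sketch of how such a threshold is evaluated, using illustrative scores and labels rather than study data:

```python
def sens_spec_at_threshold(scores, labels, threshold):
    """Sensitivity and specificity when scores >= threshold are called positive.

    labels: 1 = positive class (here, tumefactive demyelination),
            0 = negative class (IDH wild-type GBM).
    """
    tp = sum(s >= threshold for s, y in zip(scores, labels) if y == 1)
    fn = sum(s < threshold for s, y in zip(scores, labels) if y == 1)
    tn = sum(s < threshold for s, y in zip(scores, labels) if y == 0)
    fp = sum(s >= threshold for s, y in zip(scores, labels) if y == 0)
    sensitivity = tp / (tp + fn)  # fraction of true positives detected
    specificity = tn / (tn + fp)  # fraction of true negatives rejected
    return sensitivity, specificity
```

Sweeping the threshold over all observed scores traces out the ROC curve from which the reported AUROC is computed.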

Epicardial adipose tissue, myocardial remodelling and adverse outcomes in asymptomatic aortic stenosis: a post hoc analysis of a randomised controlled trial.

Geers J, Manral N, Razipour A, Park C, Tomasino GF, Xing E, Grodecki K, Kwiecinski J, Pawade T, Doris MK, Bing R, White AC, Droogmans S, Cosyns B, Slomka PJ, Newby DE, Dweck MR, Dey D

PubMed | Jun 26, 2025
Epicardial adipose tissue represents a metabolically active visceral fat depot that is in direct contact with the left ventricular myocardium. While it is associated with coronary artery disease, little is known regarding its role in aortic stenosis. We sought to investigate the association of epicardial adipose tissue with aortic stenosis severity and progression, myocardial remodelling and function, and mortality in asymptomatic patients with aortic stenosis. In a post hoc analysis of 124 patients with asymptomatic mild-to-severe aortic stenosis participating in a prospective clinical trial, baseline epicardial adipose tissue was quantified on CT angiography using fully automated deep learning-enabled software. Aortic stenosis disease severity was assessed at baseline and 1 year. The primary endpoint was all-cause mortality. Neither epicardial adipose tissue volume nor attenuation correlated with aortic stenosis severity or subsequent disease progression as assessed by echocardiography or CT (p>0.05 for all). Epicardial adipose tissue volume correlated with plasma cardiac troponin concentration (r=0.23, p=0.009), left ventricular mass (r=0.46, p<0.001), ejection fraction (r=-0.28, p=0.002), global longitudinal strain (r=0.28, p=0.017), and left atrial volume (r=0.39, p<0.001). During the median follow-up of 48 (IQR 26-73) months, a total of 23 (18%) patients died. In multivariable analysis, both epicardial adipose tissue volume (HR 1.82, 95% CI 1.10 to 3.03; p=0.021) and plasma cardiac troponin concentration (HR 1.47, 95% CI 1.13 to 1.90; p=0.004) were associated with all-cause mortality, after adjustment for age, body mass index and left ventricular ejection fraction. Patients with epicardial adipose tissue volume >90 mm<sup>3</sup> had a 3-4 times higher risk of death (adjusted HR 3.74, 95% CI 1.08 to 12.96; p=0.037).
Epicardial adipose tissue volume does not associate with aortic stenosis severity or its progression but does correlate with blood and imaging biomarkers of impaired myocardial health. The latter may explain the association of epicardial adipose tissue volume with an increased risk of all-cause mortality in patients with asymptomatic aortic stenosis. Trial registration: ClinicalTrials.gov (NCT02132026).
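The r values reported above (e.g., r=0.46 for left ventricular mass) are Pearson correlation coefficients. A minimal self-contained sketch, with toy data rather than patient measurements:

```python
import math

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)
```

Values near +1 or -1 indicate a strong linear relationship; the modest magnitudes in the abstract (0.23 to 0.46) are typical of clinical biomarker correlations.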

Enhancing cancer diagnostics through a novel deep learning-based semantic segmentation algorithm: A low-cost, high-speed, and accurate approach.

Benabbou T, Sahel A, Badri A, Mourabit IE

PubMed | Jun 26, 2025
Deep learning-based semantic segmentation approaches provide an efficient and automated means for cancer diagnosis and monitoring, which is important in clinical applications. However, deploying these approaches outside the experimental environment and using them in real-world applications requires powerful hardware resources that are not available in most hospitals, especially in low- and middle-income countries. Consequently, most of these algorithms see little or no clinical adoption. Approaches that reduce computational cost have been proposed, but their performance has fallen short of satisfactory results. Finding a method that overcomes these limitations without sacrificing performance is therefore highly challenging. To address this challenge, our study proposes a novel, optimized convolutional neural network-based approach for medical image segmentation that consists of multiple synthesis and analysis paths connected through a series of long skip connections. The design leverages multi-scale convolution, multi-scale feature extraction, downsampling strategies, and feature map fusion methods, all of which have proven effective in enhancing performance. This framework was extensively evaluated against current state-of-the-art architectures on various medical image segmentation tasks, including lung tumors, spleen, and pancreatic tumors. The results of these experiments demonstrate that the proposed approach outperforms existing state-of-the-art methods across multiple evaluation metrics, while also reducing computational complexity and parameter count, resulting in faster processing and easier deployment alongside greater segmentation accuracy.
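Segmentation studies of this kind are commonly scored with the Dice coefficient (the abstract does not name its exact metrics, so this is an assumption). A minimal sketch over flat binary masks:

```python
def dice_coefficient(pred, truth):
    """Dice similarity between two flat binary masks (lists of 0/1).

    Ranges from 0 (no overlap) to 1 (identical masks).
    """
    intersection = sum(p & t for p, t in zip(pred, truth))
    total = sum(pred) + sum(truth)
    # both masks empty: conventionally treated as perfect agreement
    return 1.0 if total == 0 else 2.0 * intersection / total
```

In practice the masks are flattened 3D label volumes, and Dice is averaged per organ or tumor class across the test set.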

Constructing high-quality enhanced 4D-MRI with personalized modeling for liver cancer radiotherapy.

Yao Y, Chen B, Wang K, Cao Y, Zuo L, Zhang K, Chen X, Kuo M, Dai J

PubMed | Jun 26, 2025
For magnetic resonance imaging (MRI), a short acquisition time and good image quality are difficult to achieve simultaneously. Thus, reconstructing time-resolved volumetric MRI (4D-MRI) to delineate and monitor thoracic and upper abdominal tumor movements is a challenge, and existing MRI sequences have limited applicability to 4D-MRI. A method is proposed for reconstructing high-quality personalized enhanced 4D-MR images: low-quality 4D-MR images are acquired first and then enhanced by deep learning-based personalization. High-speed multiphase 3D fast spoiled gradient recalled echo (FSPGR) sequences were used to generate low-quality enhanced free-breathing 4D-MR images and paired low-/high-quality breath-holding 4D-MR images for 58 liver cancer patients. A personalized model guided by the paired breath-holding 4D-MR images was then developed for each patient to cope with patient heterogeneity. The 4D-MR images generated by the personalized model were of much higher quality than the low-quality images obtained by conventional scanning, as demonstrated by significant improvements in peak signal-to-noise ratio, structural similarity, normalized root mean square error, and cumulative probability of blur detection. The introduction of individualized information helped the personalized model achieve a statistically significant improvement over the general model (p < 0.001). The proposed method can quickly reconstruct high-quality 4D-MR images and is potentially applicable to radiotherapy for liver cancer.
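Peak signal-to-noise ratio, one of the image-quality measures cited above, can be sketched as follows (flat pixel lists stand in for image volumes; the study's actual implementation is not given):

```python
import math

def psnr(reference, test, max_value=255.0):
    """Peak signal-to-noise ratio (dB) between two same-size images.

    Higher is better; identical images give infinity.
    """
    mse = sum((r - t) ** 2 for r, t in zip(reference, test)) / len(reference)
    if mse == 0:
        return float("inf")
    return 10.0 * math.log10(max_value ** 2 / mse)
```

Structural similarity (SSIM) and normalized RMSE follow the same pattern: a scalar computed between the generated 4D-MR volume and a high-quality reference.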

The Current State of Artificial Intelligence on Detecting Pulmonary Embolism via Computerised Tomography Pulmonary Angiogram: A Systematic Review.

Hassan MSTA, Elhotiby MAM, Shah V, Rocha H, Rad AA, Miller G, Malawana J

PubMed | Jun 25, 2025
<b>Aims/Background</b> Pulmonary embolism (PE) is a life-threatening condition with significant diagnostic challenges due to high rates of missed or delayed detection. Computed tomography pulmonary angiography (CTPA) is the current standard for diagnosing PE; however, demand for imaging places strain on healthcare systems and increases error rates. This systematic review aims to assess the diagnostic accuracy and clinical applicability of artificial intelligence (AI)-based models for PE detection on CTPA, exploring their potential to enhance diagnostic reliability and efficiency across clinical settings. <b>Methods</b> A systematic review was conducted in accordance with Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines. Excerpta Medica Database (EMBASE), Medical Literature Analysis and Retrieval System Online (MEDLINE), Cochrane, PubMed, and Google Scholar were searched for original articles from inception to September 2024. Articles were included if they reported successful AI integration, whether partial or full, alongside CTPA scans for PE detection in patients. <b>Results</b> The literature search identified 919 articles, with 745 remaining after duplicate removal. Following rigorous screening and appraisal against the inclusion and exclusion criteria, 12 studies were included in the final analysis. Three primary AI modalities emerged: convolutional neural networks (CNNs), segmentation models, and natural language processing (NLP), collectively used in the analysis of 341,112 radiographic images. CNNs were the most frequently applied modality in this review. Models such as AdaBoost and EmbNet have demonstrated high sensitivity, with EmbNet achieving a per-scan sensitivity of 88-90.9% and reducing false positives to 0.45 per scan. <b>Conclusion</b> AI shows significant promise as a diagnostic tool for identifying PE on CTPA scans, particularly when combined with other forms of clinical data.
However, challenges remain, including ensuring generalisability, addressing potential bias, and conducting rigorous external validation. Variability in study methodologies and the lack of standardised reporting of key metrics complicate comparisons. Future research must focus on refining models, improving peripheral emboli detection, and validating performance across diverse settings to realise AI's potential fully.
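Per-scan sensitivity and false positives per scan, the metrics quoted for EmbNet above, can be computed from scan-level results as in this illustrative sketch (the tuple layout and data are hypothetical, not from any reviewed study):

```python
def per_scan_metrics(scan_results):
    """Aggregate scan-level detection results.

    scan_results: list of (pe_present, detected, false_positive_count) tuples,
                  one per CTPA scan; assumes at least one PE-positive scan.
    Returns (per-scan sensitivity on PE-positive scans, mean FPs per scan).
    """
    positives = [r for r in scan_results if r[0]]
    sensitivity = sum(1 for r in positives if r[1]) / len(positives)
    mean_fp = sum(r[2] for r in scan_results) / len(scan_results)
    return sensitivity, mean_fp
```

Reporting both together matters: a detector can reach high sensitivity trivially by flagging everything, so the FP-per-scan rate anchors the comparison.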

Application Value of Deep Learning-Based AI Model in the Classification of Breast Nodules.

Zhi S, Cai X, Zhou W, Qian P

PubMed | Jun 25, 2025
<b>Aims/Background</b> Breast nodules are highly prevalent among women, and ultrasound is a widely used screening tool. However, single ultrasound examinations often result in high false-positive rates, leading to unnecessary biopsies. Artificial intelligence (AI) has demonstrated the potential to improve diagnostic accuracy, reducing misdiagnosis and minimising inter-observer variability. This study developed a deep learning-based AI model to evaluate its clinical utility in assisting sonographers with the Breast Imaging Reporting and Data System (BI-RADS) classification of breast nodules. <b>Methods</b> A retrospective analysis was conducted on 558 patients with breast nodules classified as BI-RADS categories 3 to 5, confirmed through pathological examination at The People's Hospital of Pingyang County between December 2019 and December 2023. The image dataset was divided into a training set, validation set, and test set, and a convolutional neural network (CNN) was used to construct a deep learning-based AI model. Patients underwent ultrasound examination and AI-assisted diagnosis. The receiver operating characteristic (ROC) curve was used to analyse the performance of the AI model, physician adjudication results, and the diagnostic efficacy of physicians before and after AI model assistance. Cohen's weighted Kappa coefficient was used to assess the consistency of BI-RADS classification among five ultrasound physicians before and after AI model assistance. Additionally, statistical analyses were performed to evaluate changes in BI-RADS classification results before and after AI model assistance for each physician. <b>Results</b> According to pathological examination, 765 of the 1026 breast nodules were benign, while 261 were malignant. The sensitivity, specificity, and accuracy of routine ultrasonography in diagnosing benign and malignant nodules were 80.85%, 91.59%, and 88.31%, respectively. 
In comparison, the AI system achieved a sensitivity of 89.36%, specificity of 92.52%, and accuracy of 91.56%. Furthermore, AI model assistance significantly improved the consistency of physicians' BI-RADS classification (<i>p</i> < 0.001). <b>Conclusion</b> A deep learning-based AI model constructed using ultrasound images can enhance the differentiation between benign and malignant breast nodules and improve classification accuracy, thereby reducing the incidence of missed diagnoses and misdiagnoses.
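Cohen's weighted kappa, used above to assess inter-reader BI-RADS consistency, can be sketched in plain Python. The quadratic weighting shown is one common choice for ordinal scales and is an assumption here, as the abstract does not specify the weighting scheme:

```python
def weighted_kappa(r1, r2, categories, quadratic=True):
    """Cohen's weighted kappa for two raters over ordered categories."""
    k = len(categories)
    index = {c: i for i, c in enumerate(categories)}
    n = len(r1)
    # observed joint distribution and per-rater marginals
    obs = [[0.0] * k for _ in range(k)]
    for a, b in zip(r1, r2):
        obs[index[a]][index[b]] += 1.0 / n
    p1 = [sum(obs[i][j] for j in range(k)) for i in range(k)]
    p2 = [sum(obs[i][j] for i in range(k)) for j in range(k)]

    def w(i, j):
        # disagreement weight: 0 on the diagonal, growing with distance
        d = abs(i - j) / (k - 1)
        return d * d if quadratic else d

    observed = sum(w(i, j) * obs[i][j] for i in range(k) for j in range(k))
    expected = sum(w(i, j) * p1[i] * p2[j] for i in range(k) for j in range(k))
    return 1.0 - observed / expected
```

For BI-RADS categories 3-5, near-miss disagreements (3 vs. 4) are penalised far less than distant ones (3 vs. 5), which is why weighted kappa suits ordinal classifications.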

BronchoGAN: anatomically consistent and domain-agnostic image-to-image translation for video bronchoscopy.

Soliman A, Keuth R, Himstedt M

PubMed | Jun 25, 2025
<b>Purpose</b> The limited availability of bronchoscopy images makes image synthesis particularly interesting for training deep learning models. Robust image translation across different domains (virtual bronchoscopy, phantom, as well as in vivo and ex vivo image data) is pivotal for clinical applications. <b>Methods</b> This paper proposes BronchoGAN, which introduces anatomical constraints for image-to-image translation integrated into a conditional GAN. In particular, we force bronchial orifices to match across input and output images. We further propose using foundation model-generated depth images as an intermediate representation, ensuring robustness across a variety of input domains and establishing models with substantially less reliance on individual training datasets. Moreover, our intermediate depth image representation makes it easy to construct paired image data for training. <b>Results</b> Our experiments showed that input images from different domains (e.g., virtual bronchoscopy, phantoms) can be successfully translated to images mimicking realistic human airway appearance. We demonstrated that anatomical structures (i.e., bronchial orifices) can be robustly preserved with our approach, as shown qualitatively and quantitatively by improved FID, SSIM, and Dice coefficient scores. Our anatomical constraints enabled an improvement in the Dice coefficient of up to 0.43 for synthetic images. <b>Conclusion</b> Through foundation models for intermediate depth representations and bronchial orifice segmentation integrated as anatomical constraints into conditional GANs, we are able to robustly translate images from different bronchoscopy input domains. BronchoGAN allows the incorporation of public CT scan data (virtual bronchoscopy) to generate large-scale bronchoscopy image datasets with realistic appearance, helping to bridge the gap created by the scarcity of public bronchoscopy images.
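The abstract does not give BronchoGAN's loss formulation. As a purely hypothetical sketch, an orifice-matching constraint of the kind described could be scored as mask overlap, e.g., a (1 - IoU) penalty between orifice segmentations of the input and translated images:

```python
def orifice_overlap_penalty(mask_in, mask_out):
    """Penalty (1 - IoU) between orifice masks of input and translated images.

    Flat binary masks (lists of 0/1); 0.0 when the orifices coincide
    exactly, 1.0 when they are disjoint.
    """
    inter = sum(a & b for a, b in zip(mask_in, mask_out))
    union = sum(a | b for a, b in zip(mask_in, mask_out))
    # no orifice in either image: nothing to constrain
    return 0.0 if union == 0 else 1.0 - inter / union
```

In a conditional GAN, a term like this would be added to the generator loss so that translations that move or erase bronchial orifices are penalised.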

Association of peripheral immune markers with brain age and dementia risk estimated using deep learning methods.

Huang X, Yuan S, Ling Y, Tan S, Bai Z, Xu Y, Shen S, Lyu J, Wang H

PubMed | Jun 25, 2025
The peripheral immune system is essential for maintaining central nervous system homeostasis. This study investigates the effects of peripheral immune markers on accelerated brain aging and dementia using the brain-predicted age difference based on neuroimaging. Leveraging data from the UK Biobank, Cox regression was used to explore the relationship between peripheral immune markers and dementia, and multivariate linear regression to assess associations between peripheral immune biomarkers and brain structure. Additionally, we established a brain age prediction model using the Simple Fully Convolutional Network (SFCN) deep learning architecture. Analysis of the resulting brain-predicted age difference (PAD) revealed relationships between accelerated brain aging, peripheral immune markers, and dementia. During the median follow-up period of 14.3 years, 4277 dementia cases were observed among 322,761 participants. Both innate and adaptive immune markers correlated with dementia risk. The neutrophil-to-lymphocyte ratio (NLR) showed the strongest association with dementia risk (HR = 1.14; 95% CI: 1.11-1.18, P < 0.001). Multivariate linear regression revealed significant associations between peripheral immune markers and regional brain structural indices. Using the deep learning-based SFCN model, the estimated brain age of dementia subjects (MAE = 5.63, r² = -0.46, R = 0.22) was determined. PAD showed significant correlation with dementia risk and certain peripheral immune markers, particularly in individuals with a positive brain age increment. This study employs brain age as a quantitative marker of accelerated brain aging to investigate its potential associations with peripheral immunity and dementia, highlighting the importance of early intervention targeting peripheral immune markers to delay brain aging and prevent dementia.
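The two derived quantities central to this analysis, the brain-predicted age difference (PAD) and the neutrophil-to-lymphocyte ratio (NLR), are simple to compute once model predictions and cell counts are available; a minimal sketch with illustrative numbers:

```python
def brain_pad(predicted_age, chronological_age):
    """Brain-predicted age difference in years.

    Positive values indicate a brain that appears older than the
    participant's chronological age (accelerated aging).
    """
    return predicted_age - chronological_age

def nlr(neutrophils, lymphocytes):
    """Neutrophil-to-lymphocyte ratio from absolute blood cell counts."""
    return neutrophils / lymphocytes
```

In the study design, PAD and markers such as NLR then enter Cox models as covariates, yielding hazard ratios like the HR = 1.14 per-unit NLR effect quoted above.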

Regional free-water diffusion is more strongly related to neuroinflammation than neurodegeneration.

Sumra V, Hadian M, Dilliott AA, Farhan SMK, Frank AR, Lang AE, Roberts AC, Troyer A, Arnott SR, Marras C, Tang-Wai DF, Finger E, Rogaeva E, Orange JB, Ramirez J, Zinman L, Binns M, Borrie M, Freedman M, Ozzoude M, Bartha R, Swartz RH, Munoz D, Masellis M, Black SE, Dixon RA, Dowlatshahi D, Grimes D, Hassan A, Hegele RA, Kumar S, Pasternak S, Pollock B, Rajji T, Sahlas D, Saposnik G, Tartaglia MC

PubMed | Jun 25, 2025
Recent research has suggested that neuroinflammation may be important in the pathogenesis of neurodegenerative diseases. Free-water diffusion (FWD) has been proposed as a non-invasive neuroimaging-based biomarker for neuroinflammation. Free-water maps were generated using diffusion MRI data in 367 patients from the Ontario Neurodegenerative Disease Research Initiative (108 Alzheimer's Disease/Mild Cognitive Impairment, 42 Frontotemporal Dementia, 37 Amyotrophic Lateral Sclerosis, 123 Parkinson's Disease, and 58 vascular disease-related Cognitive Impairment). The ability of FWD to predict neuroinflammation and neurodegeneration from biofluids was estimated using plasma glial fibrillary acidic protein (GFAP) and neurofilament light chain (NfL), respectively. Recursive Feature Elimination (RFE) performed best among the feature selection algorithms used and revealed regional specificity in the areas that were the most important features for predicting GFAP over NfL concentration. Deep learning models using the selected features and demographic information predicted GFAP better than NfL. Based on feature selection and deep learning methods, FWD was found to be more strongly related to GFAP concentration (a measure of astrogliosis) than to NfL (a measure of neuro-axonal damage) across neurodegenerative disease groups, in terms of predictive performance. Non-invasive markers of neurodegeneration, such as structural MRI, already exist, whereas non-invasive markers of neuroinflammation are not available. Our results support the use of FWD as a non-invasive neuroimaging-based biomarker for neuroinflammation.
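True RFE refits an estimator after each elimination step; as a simplified illustration of the elimination loop only, this toy version scores regional features by univariate correlation with the target (an assumption for illustration, not the study's estimator):

```python
def pearson_abs(x, y):
    """Absolute Pearson correlation; 0.0 for constant inputs."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x)
    vy = sum((b - my) ** 2 for b in y)
    return abs(cov) / ((vx * vy) ** 0.5) if vx and vy else 0.0

def recursive_feature_elimination(features, target, n_keep):
    """Eliminate the weakest feature one at a time until n_keep remain.

    features: dict mapping feature name (e.g., a brain region's FWD
    value) to a list of per-subject values; target: biomarker levels.
    """
    kept = dict(features)
    while len(kept) > n_keep:
        weakest = min(kept, key=lambda k: pearson_abs(kept[k], target))
        del kept[weakest]
    return sorted(kept)
```

The surviving feature names play the role of the regionally specific FWD measures that best predicted plasma GFAP in the study.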

Interventional Radiology Reporting Standards and Checklist for Artificial Intelligence Research Evaluation (iCARE).

Anibal JT, Huth HB, Boeken T, Daye D, Gichoya J, Muñoz FG, Chapiro J, Wood BJ, Sze DY, Hausegger K

PubMed | Jun 25, 2025
As artificial intelligence (AI) becomes increasingly prevalent within interventional radiology (IR) research and clinical practice, steps must be taken to ensure the robustness of novel technological systems presented in peer-reviewed journals. This report introduces comprehensive standards and an evaluation checklist (iCARE) that covers the application of modern AI methods in IR-specific contexts. The iCARE checklist encompasses the full "code-to-clinic" pipeline of AI development, including dataset curation, pre-training, task-specific training, explainability, privacy protection, bias mitigation, reproducibility, and model deployment. The iCARE checklist aims to support the development of safe, generalizable technologies for enhancing IR workflows, the delivery of care, and patient outcomes.
