Page 72 of 235 (2345 results)

Aneurysm Analysis Using Deep Learning

Bagheri Rajeoni, A., Pederson, B., Lessner, S. M., Valafar, H.

medrxiv logopreprintJun 25 2025
Precise aneurysm volume measurement offers a transformative edge for risk assessment and treatment planning in clinical settings. Currently, clinical assessments rely heavily on manual review of medical imaging, a process that is time-consuming and prone to inter-observer variability. The widely accepted standard of care focuses primarily on measuring aneurysm diameter at its widest point, providing a limited perspective on aneurysm morphology and lacking efficient methods for measuring aneurysm volume. Yet volume measurement can offer deeper insight into aneurysm progression and severity. In this study, we propose an automated approach that leverages the strengths of pre-trained neural networks and expert systems to delineate aneurysm boundaries and compute volumes on an unannotated dataset from 60 patients. The dataset includes slice-level start/end annotations for the aneurysm but no pixel-wise aorta segmentations. Our method uses a pre-trained UNet to automatically locate the aorta, employs SAM2 to track the aorta through vascular irregularities such as aneurysms down to the iliac bifurcation, and finally uses a Long Short-Term Memory (LSTM) network or an expert system to identify the beginning and end points of the aneurysm within the aorta. Despite the absence of manual aorta segmentation, our approach achieves promising accuracy, predicting the aneurysm start point with an R² score of 71%, the end point with an R² score of 76%, and the volume with an R² score of 92%. This technique has the potential to facilitate large-scale aneurysm analysis and improve clinical decision-making by reducing dependence on annotated datasets.
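The volume computation described above reduces to integrating segmented cross-sections between the predicted start and end slices. A minimal sketch of that final step, assuming binary per-slice masks and isotropic in-plane spacing (the function name and inputs are illustrative, not taken from the paper):

```python
def aneurysm_volume_ml(slice_masks, pixel_spacing_mm, slice_thickness_mm,
                       start_idx, end_idx):
    """Integrate binary aneurysm masks over the predicted slice range.

    slice_masks: list of 2D 0/1 lists, one per axial slice.
    Returns the volume in millilitres.
    """
    voxel_mm3 = pixel_spacing_mm ** 2 * slice_thickness_mm
    total_voxels = 0
    for mask in slice_masks[start_idx:end_idx + 1]:
        total_voxels += sum(sum(row) for row in mask)  # foreground pixel count
    return total_voxels * voxel_mm3 / 1000.0  # mm^3 -> mL
```

In the paper's pipeline the masks would come from the SAM2 tracking step and the slice range from the LSTM or expert system; here all inputs are supplied directly.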

The Current State of Artificial Intelligence on Detecting Pulmonary Embolism via Computerised Tomography Pulmonary Angiogram: A Systematic Review.

Hassan MSTA, Elhotiby MAM, Shah V, Rocha H, Rad AA, Miller G, Malawana J

pubmed logopapersJun 25 2025
<b>Aims/Background</b> Pulmonary embolism (PE) is a life-threatening condition that poses significant diagnostic challenges due to high rates of missed or delayed detection. Computed tomography pulmonary angiography (CTPA) is the current standard for diagnosing PE; however, the demand for imaging places strain on healthcare systems and increases error rates. This systematic review assesses the diagnostic accuracy and clinical applicability of artificial intelligence (AI)-based models for PE detection on CTPA, exploring their potential to enhance diagnostic reliability and efficiency across clinical settings. <b>Methods</b> A systematic review was conducted in accordance with the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines. The Excerpta Medica Database (EMBASE), Medical Literature Analysis and Retrieval System Online (MEDLINE), Cochrane, PubMed, and Google Scholar were searched for original articles from inception to September 2024. Articles were included if they reported successful AI integration, whether partial or full, alongside CTPA scans for PE detection in patients. <b>Results</b> The literature search identified 919 articles, with 745 remaining after duplicate removal. Following rigorous screening and appraisal against the inclusion and exclusion criteria, 12 studies were included in the final analysis. Three primary AI modalities emerged: convolutional neural networks (CNNs), segmentation models, and natural language processing (NLP), collectively applied to the analysis of 341,112 radiographic images. CNNs were the most frequently applied modality. Models such as AdaBoost and EmbNet have demonstrated high sensitivity, with EmbNet achieving 88-90.9% per scan and reducing false positives to 0.45 per scan. <b>Conclusion</b> AI shows significant promise as a diagnostic tool for identifying PE on CTPA scans, particularly when combined with other forms of clinical data. However, challenges remain, including ensuring generalisability, addressing potential bias, and conducting rigorous external validation. Variability in study methodologies and the lack of standardised reporting of key metrics complicate comparisons. Future research must focus on refining models, improving peripheral emboli detection, and validating performance across diverse settings to fully realise AI's potential.

Application Value of Deep Learning-Based AI Model in the Classification of Breast Nodules.

Zhi S, Cai X, Zhou W, Qian P

pubmed logopapersJun 25 2025
<b>Aims/Background</b> Breast nodules are highly prevalent among women, and ultrasound is a widely used screening tool. However, single ultrasound examinations often result in high false-positive rates, leading to unnecessary biopsies. Artificial intelligence (AI) has demonstrated the potential to improve diagnostic accuracy, reducing misdiagnosis and minimising inter-observer variability. This study developed a deep learning-based AI model to evaluate its clinical utility in assisting sonographers with the Breast Imaging Reporting and Data System (BI-RADS) classification of breast nodules. <b>Methods</b> A retrospective analysis was conducted on 558 patients with breast nodules classified as BI-RADS categories 3 to 5, confirmed through pathological examination at The People's Hospital of Pingyang County between December 2019 and December 2023. The image dataset was divided into training, validation, and test sets, and a convolutional neural network (CNN) was used to construct a deep learning-based AI model. Patients underwent ultrasound examination and AI-assisted diagnosis. Receiver operating characteristic (ROC) curves were used to analyse the performance of the AI model, the physician adjudication results, and the diagnostic efficacy of physicians before and after AI model assistance. Cohen's weighted kappa coefficient was used to assess the consistency of BI-RADS classification among five ultrasound physicians before and after AI model assistance. Additionally, statistical analyses were performed to evaluate changes in each physician's BI-RADS classification results before and after AI model assistance. <b>Results</b> According to pathological examination, 765 of the 1026 breast nodules were benign, while 261 were malignant. The sensitivity, specificity, and accuracy of routine ultrasonography in diagnosing benign and malignant nodules were 80.85%, 91.59%, and 88.31%, respectively. In comparison, the AI system achieved a sensitivity of 89.36%, specificity of 92.52%, and accuracy of 91.56%. Furthermore, AI model assistance significantly improved the consistency of physicians' BI-RADS classification (<i>p</i> < 0.001). <b>Conclusion</b> A deep learning-based AI model constructed using ultrasound images can enhance the differentiation between benign and malignant breast nodules and improve classification accuracy, thereby reducing the incidence of missed diagnoses and misdiagnoses.
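Cohen's weighted kappa, used here to quantify inter-reader BI-RADS agreement, penalizes disagreements by their distance on the ordinal scale. A minimal pure-Python sketch with quadratic weights (the function name and interface are illustrative, not from the study):

```python
from collections import Counter

def weighted_kappa(rater1, rater2, categories, weights="quadratic"):
    """Cohen's weighted kappa for ordinal labels (e.g., BI-RADS 3-5)."""
    k = len(categories)
    idx = {c: i for i, c in enumerate(categories)}
    n = len(rater1)
    obs = Counter(zip(rater1, rater2))          # observed label-pair counts
    m1, m2 = Counter(rater1), Counter(rater2)   # per-rater marginal counts
    observed = expected = 0.0
    for a in categories:
        for b in categories:
            d = abs(idx[a] - idx[b]) / (k - 1)          # normalized distance
            w = d * d if weights == "quadratic" else d  # disagreement weight
            observed += w * obs.get((a, b), 0) / n
            expected += w * (m1[a] / n) * (m2[b] / n)
    return 1.0 - observed / expected
```

Perfect agreement yields 1.0; large ordinal disagreements (e.g., BI-RADS 3 vs. 5) are weighted more heavily than adjacent-category disagreements.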

BronchoGAN: anatomically consistent and domain-agnostic image-to-image translation for video bronchoscopy.

Soliman A, Keuth R, Himstedt M

pubmed logopapersJun 25 2025
Purpose The limited availability of bronchoscopy images makes image synthesis particularly interesting for training deep learning models. Robust image translation across different domains (virtual bronchoscopy, phantom, as well as in vivo and ex vivo image data) is pivotal for clinical applications. Methods This paper proposes BronchoGAN, which introduces anatomical constraints for image-to-image translation integrated into a conditional GAN. In particular, we force bronchial orifices to match across input and output images. We further propose using foundation-model-generated depth images as an intermediate representation, ensuring robustness across a variety of input domains and establishing models with substantially less reliance on individual training datasets. Moreover, our intermediate depth-image representation makes it easy to construct paired image data for training. Results Our experiments showed that input images from different domains (e.g., virtual bronchoscopy, phantoms) can be successfully translated into images mimicking realistic human airway appearance. We demonstrated that anatomical structures (i.e., bronchial orifices) can be robustly preserved with our approach, shown qualitatively and quantitatively by means of improved FID, SSIM, and Dice coefficient scores. Our anatomical constraints enabled an improvement in the Dice coefficient of up to 0.43 for synthetic images. Conclusion Through foundation models for intermediate depth representations and bronchial orifice segmentation integrated as anatomical constraints into conditional GANs, we are able to robustly translate images from different bronchoscopy input domains. BronchoGAN allows public CT scan data (virtual bronchoscopy) to be incorporated in order to generate large-scale bronchoscopy image datasets with realistic appearance. BronchoGAN thus helps bridge the gap left by the scarcity of public bronchoscopy images.
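The Dice coefficient used above to quantify orifice preservation is the standard overlap measure between two binary masks. A minimal sketch (pure Python; the interface is illustrative):

```python
def dice(mask_a, mask_b):
    """Dice overlap between two equally-shaped binary masks (nested 0/1 lists)."""
    inter = sum(a * b for ra, rb in zip(mask_a, mask_b) for a, b in zip(ra, rb))
    size_a = sum(sum(row) for row in mask_a)
    size_b = sum(sum(row) for row in mask_b)
    if size_a + size_b == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return 2.0 * inter / (size_a + size_b)
```

In the paper's setting, one mask would be the orifice segmentation of the input image and the other that of the translated output, so Dice directly measures how well the anatomical constraint held.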

Association of peripheral immune markers with brain age and dementia risk estimated using deep learning methods.

Huang X, Yuan S, Ling Y, Tan S, Bai Z, Xu Y, Shen S, Lyu J, Wang H

pubmed logopapersJun 25 2025
The peripheral immune system is essential for maintaining central nervous system homeostasis. This study investigates the effects of peripheral immune markers on accelerated brain aging and dementia using the brain-predicted age difference derived from neuroimaging. Leveraging data from the UK Biobank, we used Cox regression to explore the relationship between peripheral immune markers and dementia, and multivariate linear regression to assess associations between peripheral immune biomarkers and brain structure. Additionally, we established a brain age prediction model using the Simple Fully Convolutional Network (SFCN) deep learning architecture. Analysis of the resulting brain-predicted age difference (PAD) revealed relationships between accelerated brain aging, peripheral immune markers, and dementia. During the median follow-up period of 14.3 years, 4,277 dementia cases were observed among 322,761 participants. Both innate and adaptive immune markers correlated with dementia risk. The neutrophil-to-lymphocyte ratio (NLR) showed the strongest association with dementia risk (HR = 1.14; 95% CI: 1.11-1.18, p < 0.001). Multivariate linear regression revealed significant associations between peripheral immune markers and regional brain structural indices. Using the deep learning-based SFCN model, the estimated brain age of dementia subjects (MAE = 5.63, r² = -0.46, R = 0.22) was determined. PAD showed significant correlations with dementia risk and certain peripheral immune markers, particularly in individuals with a positive brain age increment. This study employs brain age as a quantitative marker of accelerated brain aging to investigate its potential associations with peripheral immunity and dementia, highlighting the importance of early intervention targeting peripheral immune markers to delay brain aging and prevent dementia.
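Two quantities central to the abstract are simple to state in code: brain-PAD (model-predicted age minus chronological age) and the NLR immune marker, assumed here to be the standard neutrophil-to-lymphocyte ratio from a differential blood count. Both function names are illustrative:

```python
def brain_pad(predicted_age_years, chronological_age_years):
    """Brain-predicted age difference; positive values suggest accelerated aging."""
    return predicted_age_years - chronological_age_years

def neutrophil_lymphocyte_ratio(neutrophils, lymphocytes):
    """NLR from an absolute differential count (e.g., in 10^9 cells/L)."""
    return neutrophils / lymphocytes
```

In the study's design, the predicted age would come from the SFCN model, and PAD would then enter the association analyses alongside markers like NLR.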

Regional free-water diffusion is more strongly related to neuroinflammation than neurodegeneration.

Sumra V, Hadian M, Dilliott AA, Farhan SMK, Frank AR, Lang AE, Roberts AC, Troyer A, Arnott SR, Marras C, Tang-Wai DF, Finger E, Rogaeva E, Orange JB, Ramirez J, Zinman L, Binns M, Borrie M, Freedman M, Ozzoude M, Bartha R, Swartz RH, Munoz D, Masellis M, Black SE, Dixon RA, Dowlatshahi D, Grimes D, Hassan A, Hegele RA, Kumar S, Pasternak S, Pollock B, Rajji T, Sahlas D, Saposnik G, Tartaglia MC

pubmed logopapersJun 25 2025
Recent research has suggested that neuroinflammation may be important in the pathogenesis of neurodegenerative diseases. Free-water diffusion (FWD) has been proposed as a non-invasive neuroimaging-based biomarker for neuroinflammation. Free-water maps were generated using diffusion MRI data in 367 patients from the Ontario Neurodegenerative Disease Research Initiative (108 Alzheimer's Disease/Mild Cognitive Impairment, 42 Frontotemporal Dementia, 37 Amyotrophic Lateral Sclerosis, 123 Parkinson's Disease, and 58 vascular disease-related Cognitive Impairment). The ability of FWD to predict neuroinflammation and neurodegeneration from biofluids was estimated using plasma glial fibrillary acidic protein (GFAP) and neurofilament light chain (NfL), respectively. Recursive Feature Elimination (RFE) performed best of all the feature selection algorithms used and revealed regional specificity: particular areas were the most important features for predicting GFAP rather than NfL concentration. Deep learning models using the selected features and demographic information predicted GFAP better than NfL. Based on feature selection and deep learning methods, FWD was found to be more strongly related to GFAP concentration (a measure of astrogliosis) than to NfL (a measure of neuro-axonal damage) across neurodegenerative disease groups, in terms of predictive performance. Non-invasive markers of neurodegeneration, such as structural MRI, already exist, while non-invasive markers of neuroinflammation are not available. Our results support the use of FWD as a non-invasive neuroimaging-based biomarker for neuroinflammation.
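RFE works by iteratively discarding the least important feature according to some importance score. As a toy pure-Python stand-in, the sketch below uses absolute Pearson correlation with the target as that score; note this is only illustrative, since true RFE re-fits a multivariate model (e.g., a linear model or SVM) after every elimination, and all names here are hypothetical:

```python
import statistics

def rfe_by_correlation(X, y, n_keep):
    """Toy RFE-style loop: repeatedly drop the feature (column of X) whose
    absolute Pearson correlation with the target y is weakest."""
    def abs_corr(col):
        xs = [row[col] for row in X]
        mx, my = statistics.mean(xs), statistics.mean(y)
        cov = sum((a - mx) * (b - my) for a, b in zip(xs, y))
        vx = sum((a - mx) ** 2 for a in xs)
        vy = sum((b - my) ** 2 for b in y)
        return 0.0 if vx == 0 or vy == 0 else abs(cov) / (vx * vy) ** 0.5

    remaining = list(range(len(X[0])))  # surviving feature indices
    while len(remaining) > n_keep:
        remaining.remove(min(remaining, key=abs_corr))
    return remaining
```

In the study's setting, the features would be regional free-water values and the targets plasma GFAP or NfL; the surviving indices indicate the regions most predictive of each biomarker.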

[AI-enabled clinical decision support systems: challenges and opportunities].

Tschochohei M, Adams LC, Bressem KK, Lammert J

pubmed logopapersJun 25 2025
Clinical decision-making is inherently complex, time-sensitive, and prone to error. AI-enabled clinical decision support systems (CDSS) offer promising solutions by leveraging large datasets to provide evidence-based recommendations. These systems range from rule-based and knowledge-based to increasingly AI-driven approaches. However, key challenges persist, particularly concerning data quality, seamless integration into clinical workflows, and clinician trust and acceptance. Ethical and legal considerations, especially data privacy, are also paramount. AI-CDSS have demonstrated success in fields like radiology (e.g., pulmonary nodule detection, mammography interpretation) and cardiology, where they enhance diagnostic accuracy and improve patient outcomes. Looking ahead, chat and voice interfaces powered by large language models (LLMs) could support shared decision-making (SDM) by fostering better patient engagement and understanding. To fully realize the potential of AI-CDSS in advancing efficient, patient-centered care, it is essential to ensure their responsible development. This includes grounding AI models in domain-specific data, anonymizing user inputs, and implementing rigorous validation of AI-generated outputs before presentation. Thoughtful design and ethical oversight will be critical to integrating AI safely and effectively into clinical practice.

Accuracy and Efficiency of Artificial Intelligence and Manual Virtual Segmentation for Generation of 3D Printed Tooth Replicas.

Pedrinaci I, Nasseri A, Calatrava J, Couso-Queiruga E, Giannobile WV, Gallucci GO, Sanz M

pubmed logopapersJun 25 2025
The primary aim of this in vitro study was to compare methods for generating 3D-printed replicas through virtual segmentation, utilizing artificial intelligence (AI) or manual processes, by assessing accuracy in terms of volumetric and linear discrepancies. The secondary aims were the assessment of time efficiency with both segmentation methods and the effect of post-processing on 3D-printed replicas. Thirty teeth were scanned through cone beam computed tomography (CBCT), capturing the region of interest from human subjects. DICOM files underwent virtual segmentation through both AI and manual methods. Replicas were fabricated with a stereolithography 3D printer. After surface scanning of the pre-processed replicas and extracted teeth, STL files were superimposed to compare linear and volumetric differences using the extracted teeth as the reference. Post-processed replicas were scanned to assess the effect of post-processing on linear and volumetric changes. AI-driven segmentation resulted in statistically significant mean linear and volumetric differences of -0.709 mm (SD 0.491, p < 0.001) and -4.70%, respectively. Manual segmentation showed smaller mean linear and volumetric differences of -0.463 mm (SD 0.335, p < 0.001) and -1.20%, respectively. Comparing manual and AI-driven segmentations, AI-driven segmentation displayed mean linear and volumetric differences of -0.329 mm (SD 0.566, p = 0.003) and -2.23%, respectively. Additionally, AI segmentation reduced the mean segmentation time by 21.8 minutes. When comparing post-processed to pre-processed replicas, there was a volumetric reduction of -4.53% and a mean linear difference of -0.151 mm (SD 0.564, p = 0.042). Both segmentation methods achieved acceptable accuracy, with manual segmentation slightly more accurate but AI-driven segmentation more time-efficient. Continuous improvement in AI offers the potential for increased accuracy, efficiency, and broader application in the future.
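The volumetric discrepancies quoted above are signed percentage differences between the replica and the extracted-tooth reference. A one-line sketch of that calculation (names illustrative):

```python
def volumetric_difference_pct(replica_volume_mm3, reference_volume_mm3):
    """Signed percentage volume discrepancy of a replica vs. its reference tooth.

    Negative values mean the replica is smaller than the reference.
    """
    return 100.0 * (replica_volume_mm3 - reference_volume_mm3) / reference_volume_mm3
```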

Comparative Analysis of Automated vs. Expert-Designed Machine Learning Models in Age-Related Macular Degeneration Detection and Classification.

Durmaz Engin C, Beşenk U, Özizmirliler D, Selver MA

pubmed logopapersJun 25 2025
To compare the effectiveness of expert-designed machine learning models and code-free automated machine learning (AutoML) models in classifying optical coherence tomography (OCT) images for detecting age-related macular degeneration (AMD) and distinguishing between its dry and wet forms. Custom models were developed by an artificial intelligence expert using the EfficientNet V2 architecture, while AutoML models were created by an ophthalmologist utilizing LobeAI with transfer learning via ResNet-50 V2. Both models were designed to differentiate normal OCT images from AMD and to also distinguish between dry and wet AMD. The models were trained and tested using an 80:20 split, with each diagnostic group containing 500 OCT images. Performance metrics, including sensitivity, specificity, accuracy, and F1 scores, were calculated and compared. The expert-designed model achieved an overall accuracy of 99.67% for classifying all images, with F1 scores of 0.99 or higher across all binary class comparisons. In contrast, the AutoML model achieved an overall accuracy of 89.00%, with F1 scores ranging from 0.86 to 0.90 in binary comparisons. Notably lower recall was observed for dry AMD vs. normal (0.85) in the AutoML model, indicating challenges in correctly identifying dry AMD. While the AutoML models demonstrated acceptable performance in identifying and classifying AMD cases, the expert-designed models significantly outperformed them. The use of advanced neural network architectures and rigorous optimization in the expert-developed models underscores the continued necessity of expert involvement in the development of high-precision diagnostic tools for medical image classification.
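The F1 scores compared above are the harmonic mean of precision and recall, computable directly from confusion-matrix counts. A minimal sketch (interface illustrative):

```python
def f1_score(true_pos, false_pos, false_neg):
    """F1 from confusion-matrix counts: harmonic mean of precision and recall."""
    precision = true_pos / (true_pos + false_pos)
    recall = true_pos / (true_pos + false_neg)
    return 2 * precision * recall / (precision + recall)
```

For a multi-class problem like normal vs. dry vs. wet AMD, this per-class F1 would typically be averaged (macro or weighted) across classes.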

High-performance Open-source AI for Breast Cancer Detection and Localization in MRI.

Hirsch L, Sutton EJ, Huang Y, Kayis B, Hughes M, Martinez D, Makse HA, Parra LC

pubmed logopapersJun 25 2025
<i>"Just Accepted" papers have undergone full peer review and have been accepted for publication in <i>Radiology: Artificial Intelligence</i>. This article will undergo copyediting, layout, and proof review before it is published in its final version. Please note that during production of the final copyedited article, errors may be discovered which could affect the content.</i> Purpose To develop and evaluate an open-source deep learning model for detection and localization of breast cancer on MRI. Materials and Methods In this retrospective study, a deep learning model for breast cancer detection and localization was trained on the largest breast MRI dataset to date. Data included all breast MRIs conducted at a tertiary cancer center in the United States between 2002 and 2019. The model was validated on sagittal MRIs from the primary site (<i>n</i> = 6,615 breasts). Generalizability was assessed by evaluating model performance on axial data from the primary site (<i>n</i> = 7,058 breasts) and a second clinical site (<i>n</i> = 1,840 breasts). Results The primary site dataset included 30,672 sagittal MRI examinations (52,598 breasts) from 9,986 female patients (mean [SD] age, 53 [11] years). The model achieved an area under the receiver operating characteristic curve (AUC) of 0.95 for detecting cancer in the primary site. At 90% specificity (5717/6353), model sensitivity was 83% (217/262), which was comparable to historical performance data for radiologists. The model generalized well to axial examinations, achieving an AUC of 0.92 on data from the same clinical site and 0.92 on data from a secondary site. The model accurately located the tumor in 88.5% (232/262) of sagittal images, 92.8% (272/293) of axial images from the primary site, and 87.7% (807/920) of secondary site axial images. Conclusion The model demonstrated state-of-the-art performance on breast cancer detection. Code and weights are openly available to stimulate further development and validation. 
©RSNA, 2025.
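The operating point quoted above (sensitivity at 90% specificity) comes from sweeping the decision threshold along the ROC curve. A minimal sketch, assuming higher scores indicate cancer (the interface is illustrative, not the paper's code):

```python
def sensitivity_at_specificity(neg_scores, pos_scores, target_specificity):
    """Sweep thresholds (classify as positive when score >= t) and return
    the sensitivity at the lowest threshold meeting the target specificity."""
    for t in sorted(set(neg_scores + pos_scores)):
        specificity = sum(s < t for s in neg_scores) / len(neg_scores)
        if specificity >= target_specificity:
            return sum(s >= t for s in pos_scores) / len(pos_scores)
    return 0.0  # no threshold reaches the target specificity
```

Choosing the lowest qualifying threshold maximizes sensitivity subject to the specificity constraint, which mirrors how fixed-specificity operating points are usually reported.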
