Page 177 of 3973969 results

Dynamic abdominal MRI image generation using cGANs: A generalized model for various breathing patterns with extensive evaluation.

Cordón-Avila A, Ballı ÖF, Damme K, Abayazid M

pubmed paper · Jul 7 2025
Organ motion is a limiting factor during the treatment of abdominal tumors. During abdominal interventions, medical images are acquired to provide guidance; however, this increases operative time and radiation exposure. In this paper, conditional generative adversarial networks are implemented to generate dynamic magnetic resonance images using external abdominal motion as a surrogate signal. The generator was trained to account for breathing variability, and different models were investigated to improve motion quality. Additionally, objective and subjective studies were conducted to assess image and motion quality. The objective study included different metrics, such as the structural similarity index measure (SSIM) and mean absolute error (MAE). In the subjective study, 32 clinical experts evaluated the generated images by completing different tasks. The tasks involved identifying images and videos as real or fake via a questionnaire, allowing experts to assess the realism of static images and dynamic sequences. The best-performing model achieved an SSIM of 0.73 ± 0.13, and the MAE was below 4.5 and 1.8 mm for the superior-inferior and anterior-posterior directions of motion, respectively. The proposed framework was compared to a related method that utilized a set of convolutional neural networks combined with recurrent layers. In the subjective study, more than 50% of the generated images and dynamic sequences were classified as real, except for one task. Synthetic images have the potential to reduce the need for acquiring intraoperative images, decreasing time and radiation exposure. A video summary can be found in the supplementary material.
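The objective evaluation above rests on SSIM and MAE. As a rough illustration of what those metrics measure (not the study's actual evaluation code), a minimal sketch might look like the following; note the SSIM here is a simplified single-window variant, whereas the standard metric averages over local sliding windows:

```python
import numpy as np

def global_ssim(x, y, data_range=1.0):
    """Simplified single-window SSIM for illustration only; the
    standard metric averages this statistic over local windows."""
    x = np.asarray(x, float)
    y = np.asarray(y, float)
    c1 = (0.01 * data_range) ** 2
    c2 = (0.03 * data_range) ** 2
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / (
        (mx ** 2 + my ** 2 + c1) * (vx + vy + c2)
    )

def mae_mm(pred_disp, true_disp):
    """Mean absolute error between predicted and reference organ
    displacements, in whatever unit the inputs use (e.g. mm)."""
    pred = np.asarray(pred_disp, float)
    true = np.asarray(true_disp, float)
    return float(np.mean(np.abs(pred - true)))
```

Identical images yield an SSIM of 1.0, and the MAE is reported per motion direction (superior-inferior, anterior-posterior) in the paper.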

X-ray transferable polyrepresentation learning

Weronika Hryniewska-Guzik, Przemyslaw Biecek

arxiv preprint · Jul 7 2025
The success of machine learning algorithms is inherently related to the extraction of meaningful features, as they play a pivotal role in the performance of these algorithms. Central to this challenge is the quality of data representation. However, the ability to generalize and extract these features effectively from unseen datasets is also crucial. In light of this, we introduce a novel concept: the polyrepresentation. Polyrepresentation integrates multiple representations of the same modality extracted from distinct sources, for example, vector embeddings from the Siamese Network, self-supervised models, and interpretable radiomic features. This approach yields better performance metrics compared to relying on a single representation. Additionally, in the context of X-ray images, we demonstrate the transferability of the created polyrepresentation to a smaller dataset, underscoring its potential as a pragmatic and resource-efficient approach in various image-related solutions. It is worth noting that the concept of polyrepresentation, demonstrated here on medical data, can also be applied to other domains, showcasing its versatility and broad potential impact.
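The core idea, fusing several embeddings of the same image into one vector, can be sketched as below. The per-source L2 normalisation is an assumption for illustration (the abstract does not specify the fusion details); it keeps sources with different scales from dominating the fused vector:

```python
import numpy as np

def polyrepresentation(*representations):
    """Fuse several embeddings of the same image (e.g. Siamese,
    self-supervised, radiomic) by L2-normalising each source and
    concatenating. Normalisation is an illustrative choice, not
    necessarily the paper's exact procedure."""
    parts = []
    for rep in representations:
        v = np.asarray(rep, dtype=float).ravel()
        norm = np.linalg.norm(v)
        parts.append(v / norm if norm > 0 else v)
    return np.concatenate(parts)
```

A downstream classifier would then be trained on the concatenated vector rather than on any single representation.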

Sequential Attention-based Sampling for Histopathological Analysis

Tarun G, Naman Malpani, Gugan Thoppe, Sridharan Devarajan

arxiv preprint · Jul 7 2025
Deep neural networks are increasingly applied for automated histopathology. Yet, whole-slide images (WSIs) are often acquired at gigapixel sizes, rendering it computationally infeasible to analyze them entirely at high resolution. Diagnostic labels are largely available only at the slide-level, because expert annotation of images at a finer (patch) level is both laborious and expensive. Moreover, regions with diagnostic information typically occupy only a small fraction of the WSI, making it inefficient to examine the entire slide at full resolution. Here, we propose SASHA (Sequential Attention-based Sampling for Histopathological Analysis), a deep reinforcement learning approach for efficient analysis of histopathological images. First, SASHA learns informative features with a lightweight hierarchical, attention-based multiple instance learning (MIL) model. Second, SASHA samples intelligently and zooms selectively into a small fraction (10-20%) of high-resolution patches, to achieve reliable diagnosis. We show that SASHA matches state-of-the-art methods that analyze the WSI fully at high-resolution, albeit at a fraction of their computational and memory costs. In addition, it significantly outperforms competing, sparse sampling methods. We propose SASHA as an intelligent sampling model for medical imaging challenges that involve automated diagnosis with exceptionally large images containing sparsely informative features.
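The attention-based MIL component described in the first stage typically scores each patch embedding, softmaxes the scores across the slide, and pools a weighted sum into one slide-level feature. A minimal numpy sketch of that pooling step, with illustrative parameters `w` and `v` standing in for learned weights (not SASHA's actual architecture), might be:

```python
import numpy as np

def attention_mil_pool(patch_feats, w, v):
    """Attention pooling over patch embeddings: score each patch with
    a small tanh network (a_k = w^T tanh(V h_k)), softmax the scores
    across patches, and return the attention weights plus the weighted
    sum as the slide-level representation."""
    h = np.asarray(patch_feats, float)                 # (n_patches, d)
    scores = np.tanh(h @ np.asarray(v, float).T) @ np.asarray(w, float)
    e = np.exp(scores - scores.max())                  # stable softmax
    attn = e / e.sum()
    return attn, attn @ h                              # weights, pooled feature
```

The attention weights double as the "informativeness" signal a sampler can use to decide which regions deserve a high-resolution zoom.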

Automated Deep Learning-Based 3D-to-2D Segmentation of Geographic Atrophy in Optical Coherence Tomography Data

Al-khersan, H., Oakley, J. D., Russakoff, D. B., Cao, J. A., Saju, S. M., Zhou, A., Sodhi, S. K., Pattathil, N., Choudhry, N., Boyer, D. S., Wykoff, C. C.

medrxiv preprint · Jul 7 2025
Purpose: We report on a deep learning-based approach to the segmentation of geographic atrophy (GA) in patients with advanced age-related macular degeneration (AMD). Method: Three-dimensional (3D) optical coherence tomography (OCT) data were collected from two instruments at two different retina practices, totaling 367 and 348 volumes, respectively, of routinely collected clinical data. For all data, the accuracy of a 3D-to-2D segmentation model was assessed relative to ground-truth manual labeling. Results: Dice Similarity Scores (DSC) averaged 0.824 and 0.826 for the two data sets. Correlations (r2) between manual and automated areas were 0.883 and 0.906, respectively. The inclusion of near-infrared imagery as an additional information channel did not notably improve performance. Conclusion: Accurate assessment of GA in real-world clinical OCT data can be achieved using deep learning. With the advent of therapeutics to slow the rate of GA progression, reliable, automated assessment is a clinical objective, and this work validates one such method.
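For reference, the Dice Similarity Score used here compares a predicted segmentation against the manual ground truth; a compact implementation (a generic sketch, not the study's code) is:

```python
import numpy as np

def dice_score(pred, truth):
    """Dice Similarity Coefficient between two binary masks:
    2|A ∩ B| / (|A| + |B|); 1.0 means perfect overlap."""
    p = np.asarray(pred, bool)
    t = np.asarray(truth, bool)
    denom = p.sum() + t.sum()
    if denom == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return 2.0 * np.logical_and(p, t).sum() / denom
```

Scores around 0.82-0.83, as reported above, indicate strong but not pixel-perfect agreement with the manual labels.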

Development and International Validation of a Deep Learning Model for Predicting Acute Pancreatitis Severity from CT Scans

Xu, Y., Teutsch, B., Zeng, W., Hu, Y., Rastogi, S., Hu, E. Y., DeGregorio, I. M., Fung, C. W., Richter, B. I., Cummings, R., Goldberg, J. E., Mathieu, E., Appiah Asare, B., Hegedus, P., Gurza, K.-B., Szabo, I. V., Tarjan, H., Szentesi, A., Borbely, R., Molnar, D., Faluhelyi, N., Vincze, A., Marta, K., Hegyi, P., Lei, Q., Gonda, T., Huang, C., Shen, Y.

medrxiv preprint · Jul 7 2025
Background and aims: Acute pancreatitis (AP) is a common gastrointestinal disease with rising global incidence. While most cases are mild, severe AP (SAP) carries high mortality. Early and accurate severity prediction is crucial for optimal management. However, existing severity prediction models, such as BISAP and mCTSI, have modest accuracy and often rely on data unavailable at admission. This study proposes a deep learning (DL) model to predict AP severity using abdominal contrast-enhanced CT (CECT) scans acquired within 24 hours of admission. Methods: We collected 10,130 studies from 8,335 patients across a multi-site U.S. health system. The model was trained in two stages: (1) self-supervised pretraining on large-scale unlabeled CT studies and (2) fine-tuning on 550 labeled studies. Performance was evaluated against mCTSI and BISAP on a hold-out internal test set (n=100 patients) and externally validated on a Hungarian AP registry (n=518 patients). Results: On the internal test set, the model achieved AUROCs of 0.888 (95% CI: 0.800-0.960) for SAP and 0.888 (95% CI: 0.819-0.946) for mild AP (MAP), outperforming mCTSI (p = 0.002). External validation showed robust AUROCs of 0.887 (95% CI: 0.825-0.941) for SAP and 0.858 (95% CI: 0.826-0.888) for MAP, surpassing mCTSI (p = 0.024) and BISAP (p = 0.002). Retrospective simulation suggested the model's potential to support admission triage and serve as a second reader during CECT interpretation. Conclusions: The proposed DL model outperformed standard scoring systems for AP severity prediction, generalized well to external data, and shows promise for providing early clinical decision support and improving resource allocation.
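The AUROC figures reported here have a simple probabilistic reading: the chance that a randomly chosen severe case outscores a randomly chosen non-severe one. A dependency-free sketch of that rank-based computation (equivalent to the Mann-Whitney U statistic; illustrative, not the study's evaluation code):

```python
def auroc(labels, scores):
    """AUROC as the probability that a randomly chosen positive
    receives a higher score than a randomly chosen negative,
    counting ties as one half."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))
```

An AUROC near 0.89, as above, means the model ranks a severe case above a non-severe one in roughly nine of ten random pairings.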

RADAI: A Deep Learning-Based Classification of Lung Abnormalities in Chest X-Rays.

Aljuaid H, Albalahad H, Alshuaibi W, Almutairi S, Aljohani TH, Hussain N, Mohammad F

pubmed paper · Jul 7 2025
Background: Chest X-rays are among the most widely used diagnostic tools, as recognized by the World Health Organization (WHO). However, interpreting chest X-rays can be demanding and time-consuming, even for experienced radiologists, leading to potential misinterpretations and delays in treatment. Method: The purpose of this research is the development of the RadAI model, which can accurately detect four types of lung abnormalities in chest X-rays and generate a report on each identified abnormality. Deep learning algorithms, particularly convolutional neural networks (CNNs), have demonstrated remarkable potential in automating medical image analysis, including chest X-rays. This work addresses the challenge of chest X-ray interpretation by fine-tuning the following three advanced deep learning models: Feature-selective and Spatial Receptive Fields Network (FSRFNet50), ResNext50, and ResNet50. These models are compared based on accuracy, precision, recall, and F1-score. Results: The strong performance of RadAI shows its potential to assist radiologists in interpreting detected chest abnormalities accurately. Conclusions: RadAI enhances the accuracy and efficiency of chest X-ray interpretation, ultimately supporting the timely and reliable diagnosis of lung abnormalities.
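For a four-class abnormality classifier like this, the comparison metrics are usually derived per class from a confusion matrix and then macro-averaged. A small generic sketch (not the paper's code) of macro-F1 from a confusion matrix:

```python
import numpy as np

def macro_f1(confusion):
    """Macro-averaged F1 from a square confusion matrix
    (rows = true class, columns = predicted class): compute
    per-class precision and recall, then average the F1 scores."""
    cm = np.asarray(confusion, float)
    f1s = []
    for c in range(cm.shape[0]):
        tp = cm[c, c]
        col, row = cm[:, c].sum(), cm[c].sum()
        precision = tp / col if col else 0.0
        recall = tp / row if row else 0.0
        f1s.append(2 * precision * recall / (precision + recall)
                   if precision + recall else 0.0)
    return float(np.mean(f1s))
```

Macro-averaging weights each abnormality class equally, which matters when some abnormalities are much rarer than others.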

Multi-Stage Cascaded Deep Learning-Based Model for Acute Aortic Syndrome Detection: A Multisite Validation Study.

Chang J, Lee KJ, Wang TH, Chen CM

pubmed paper · Jul 7 2025
Background: Acute Aortic Syndrome (AAS), encompassing aortic dissection (AD), intramural hematoma (IMH), and penetrating atherosclerotic ulcer (PAU), presents diagnostic challenges due to its varied manifestations and the critical need for rapid assessment. Methods: We developed a multi-stage deep learning model trained on chest computed tomography angiography (CTA) scans. The model utilizes a U-Net architecture for aortic segmentation, followed by a cascaded classification approach for detecting AD and IMH, and a multiscale CNN for identifying PAU. External validation was conducted on 260 anonymized CTA scans from 14 U.S. clinical sites, encompassing data from four different CT manufacturers. Performance metrics, including sensitivity, specificity, and area under the receiver operating characteristic curve (AUC), were calculated with 95% confidence intervals (CIs) using Wilson's method. Model performance was compared against predefined benchmarks. Results: The model achieved a sensitivity of 0.94 (95% CI: 0.88-0.97), specificity of 0.93 (95% CI: 0.89-0.97), and an AUC of 0.96 (95% CI: 0.94-0.98) for overall AAS detection, with p-values < 0.001 when compared to the 0.80 benchmark. Subgroup analyses demonstrated consistent performance across different patient demographics, CT manufacturers, slice thicknesses, and anatomical locations. Conclusions: This deep learning model effectively detects the full spectrum of AAS across diverse populations and imaging platforms, suggesting its potential utility in clinical settings to enable faster triage and expedite patient management.
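The study computes its CIs with Wilson's method, which behaves better than the normal approximation for proportions near 0 or 1. A compact implementation (the exact case counts below are an assumption for illustration, chosen to be consistent with the reported sensitivity of 0.94):

```python
import math

def wilson_ci(k, n, z=1.96):
    """Wilson score confidence interval for a proportion k/n at
    confidence level implied by z (z=1.96 for 95%)."""
    p = k / n
    denom = 1 + z * z / n
    center = (p + z * z / (2 * n)) / denom
    half = z * math.sqrt(p * (1 - p) / n + z * z / (4 * n * n)) / denom
    return center - half, center + half

# Hypothetical count: 94 detected of 100 positive cases (assumed,
# not stated in the abstract) reproduces a 0.88-0.97 interval.
lo, hi = wilson_ci(94, 100)
```

Unlike the Wald interval, the Wilson interval never extends outside [0, 1], which is why it is preferred for high sensitivities like these.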

A Deep Learning Model Integrating Clinical and MRI Features Improves Risk Stratification and Reduces Unnecessary Biopsies in Men with Suspected Prostate Cancer.

Bacchetti E, De Nardin A, Giannarini G, Cereser L, Zuiani C, Crestani A, Girometti R, Foresti GL

pubmed paper · Jul 7 2025
Background: Accurate upfront risk stratification in suspected clinically significant prostate cancer (csPCa) may reduce unnecessary prostate biopsies. Integrating clinical and magnetic resonance imaging (MRI) variables using deep learning could improve prediction. Methods: We retrospectively analysed 538 men who underwent MRI and biopsy between April 2019 and September 2024. A fully connected neural network was trained using 5-fold cross-validation. Model 1 included clinical features (age, prostate-specific antigen [PSA], PSA density, digital rectal examination, family history, prior negative biopsy, and ongoing therapy). Model 2 used MRI-derived Prostate Imaging Reporting and Data System (PI-RADS) categories. Model 3 used all previous variables as well as lesion size, location, and prostate volume as determined on MRI. Results: Model 3 achieved the highest area under the receiver operating characteristic curve (AUC = 0.822), followed by Model 2 (AUC = 0.778) and Model 1 (AUC = 0.716). Sensitivities for detecting csPCa were 87.4%, 91.6%, and 86.8% for Models 1, 2, and 3, respectively. Although Model 3 had slightly lower sensitivity than Model 2, it showed higher specificity, reducing false positives and avoiding 43.4% and 21.2% more biopsies compared to Models 1 and 2. Decision curve analysis showed Model 2 had the highest net benefit at risk thresholds ≤ 20%, while Model 3 was superior above 20%. Conclusions: Model 3 improved csPCa risk stratification, particularly in biopsy-averse settings, while Model 2 was more effective in cancer-averse scenarios. These models support personalized, context-sensitive biopsy decisions.
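The decision curve analysis mentioned above compares models by net benefit at each risk threshold: the value of true positives minus false positives weighted by the threshold odds. A minimal sketch of the standard formula (generic, not the study's analysis code):

```python
def net_benefit(tp, fp, n, pt):
    """Net benefit of a biopsy policy at risk threshold pt:
    (TP/n) - (FP/n) * pt/(1-pt). The odds term converts the harm
    of an unnecessary biopsy into true-positive-equivalent units."""
    return tp / n - (fp / n) * (pt / (1 - pt))
```

Plotting net benefit across thresholds is what reveals the crossover reported here: one model dominating below a 20% risk threshold and the other above it.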

Artificial Intelligence-Assisted Standard Plane Detection in Hip Ultrasound for Developmental Dysplasia of the Hip: A Novel Real-Time Deep Learning Approach.

Darilmaz MF, Demirel M, Altun HO, Adiyaman MC, Bilgili F, Durmaz H, Sağlam Y

pubmed paper · Jul 6 2025
Developmental dysplasia of the hip (DDH) includes a range of conditions caused by inadequate hip joint development. Early diagnosis is essential to prevent long-term complications. Ultrasound, particularly the Graf method, is commonly used for DDH screening, but its interpretation is highly operator-dependent and lacks standardization, especially in identifying the correct standard plane. This variability often leads to misdiagnosis, particularly among less experienced users. This study presents AI-SPS, AI-based software for real-time standard plane detection in hip ultrasound. Using 2,737 annotated frames, comprising 1,737 standard and 1,000 non-standard examples extracted from 45 clinical ultrasound videos, we trained and evaluated two object detection models: SSD-MobileNet V2 and YOLOv11n. The software was further validated on an independent set of 934 additional frames (347 standard and 587 non-standard) from the same video sources. YOLOv11n achieved an accuracy of 86.3%, a precision of 0.78, a recall of 0.88, and an F1-score of 0.83, outperforming SSD-MobileNet V2, which reached an accuracy of 75.2%. These results indicate that AI-SPS can detect the standard plane with expert-level performance and improve consistency in DDH screening. By reducing operator variability, the software supports more reliable ultrasound assessments. Integration with live systems and Graf typing may enable a fully automated DDH diagnostic workflow. Level of Evidence: Level III, diagnostic study.
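As a sanity check on metrics like those reported for YOLOv11n, F1 is simply the harmonic mean of precision and recall:

```python
def f1_score(precision, recall):
    """Harmonic mean of precision and recall; returns 0.0 when
    both inputs are zero to avoid division by zero."""
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)
```

Plugging in the paper's precision (0.78) and recall (0.88) recovers the reported F1 of 0.83.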

A CT-Based Deep Learning Radiomics Nomogram for Early Recurrence Prediction in Pancreatic Cancer: A Multicenter Study.

Guan X, Liu J, Xu L, Jiang W, Wang C

pubmed paper · Jul 6 2025
Early recurrence (ER) following curative-intent surgery remains a major obstacle to improving long-term outcomes in patients with pancreatic cancer (PC). The accurate preoperative prediction of ER could significantly aid clinical decision-making and guide postoperative management. A retrospective cohort of 493 patients with histologically confirmed PC who underwent resection was analyzed. Contrast-enhanced computed tomography (CT) images were used for tumor segmentation, followed by radiomics and deep learning feature extraction. In total, four distinct feature selection algorithms were employed. Predictive models were constructed using random forest (RF) and support vector machine (SVM) classifiers. The model performance was evaluated by the area under the receiver operating characteristic curve (AUC). A comprehensive nomogram integrating feature scores and clinical factors was developed and validated. Among all of the constructed models, the Inte-SVM demonstrated superior classification performance. The nomogram, incorporating the Inte-feature score, CT-assessed lymph node status, and carbohydrate antigen 19-9 (CA19-9), yielded excellent predictive accuracy in the validation cohort (AUC = 0.920). Calibration curves showed strong agreement between predicted and observed outcomes, and decision curve analysis confirmed the clinical utility of the nomogram. A CT-based deep learning radiomics nomogram enabled the accurate preoperative prediction of early recurrence in patients with pancreatic cancer. This model may serve as a valuable tool to assist clinicians in tailoring postoperative strategies and promoting personalized therapeutic approaches.
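A nomogram of this kind typically combines its inputs (here the Inte-feature score, CT-assessed lymph node status, and CA19-9) through a fitted logistic model. The sketch below is purely illustrative: the coefficients and intercept are placeholders, not the values fitted in the study:

```python
import math

def nomogram_risk(feature_score, ln_positive, ca199_elevated,
                  weights=(2.0, 1.2, 0.9), intercept=-3.0):
    """Logistic combination of three nomogram inputs into an early
    recurrence probability. All coefficients here are hypothetical
    placeholders for illustration, not the study's fitted values."""
    z = (intercept
         + weights[0] * feature_score
         + weights[1] * ln_positive      # 1 if lymph nodes positive on CT
         + weights[2] * ca199_elevated)  # 1 if CA19-9 elevated
    return 1.0 / (1.0 + math.exp(-z))
```

The nomogram presentation just maps each term's contribution to a point scale so the same sum can be read off on paper.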
