
Deep Learning-Based Cascade 3D Kidney Segmentation Method.

Hao Z, Chapman BE

PubMed · Aug 7, 2025
Renal tumors require early diagnosis and precise localization for effective treatment. This study aims to automate renal tumor analysis in abdominal CT images using a cascade 3D U-Net architecture for semantic kidney segmentation. To address challenges like edge detection and small object segmentation, the framework incorporates residual blocks to enhance convergence and efficiency. Comprehensive training configurations, preprocessing, and postprocessing strategies were employed to ensure accurate results. Tested on KiTS2019 data, the method ranked 23rd on the leaderboard (Nov 2024), demonstrating the enhanced cascade 3D U-Net's effectiveness in improving segmentation precision.
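Segmentation challenges such as KiTS2019 are typically scored with the Dice similarity coefficient. A minimal NumPy sketch with toy masks (a generic illustration, not the official KiTS evaluation code):

```python
import numpy as np

def dice_score(pred: np.ndarray, target: np.ndarray) -> float:
    """Dice similarity coefficient between two binary masks."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    denom = pred.sum() + target.sum()
    if denom == 0:
        return 1.0  # both masks empty: perfect agreement by convention
    return 2.0 * intersection / denom

# Toy 3D volumes standing in for kidney masks
a = np.zeros((4, 4, 4), dtype=np.uint8)
b = np.zeros((4, 4, 4), dtype=np.uint8)
a[1:3, 1:3, 1:3] = 1   # 8 voxels
b[1:3, 1:3, 1:4] = 1   # 12 voxels, 8 overlapping
print(dice_score(a, b))  # 2*8 / (8+12) = 0.8
```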

AIMR-MediTell: Attention-Infused Mask RNN for Medical Image Interpretation and Report Generation.

Chen L, Yang L, Bedir O

PubMed · Aug 7, 2025
Medical diagnostics often rely on the interpretation of complex medical images. However, manual analysis and report generation by medical practitioners are time-consuming, and the inherent ambiguity in chest X-rays presents significant challenges for automated systems in producing interpretable results. To address this, we propose Attention-Infused Mask Recurrent Neural Network (AIMR-MediTell), a deep learning framework integrating instance segmentation using Mask RCNN with attention-based feature extraction to identify and highlight abnormal regions in chest X-rays. This framework also incorporates an encoder-decoder structure with pretrained BioWordVec embeddings to generate explanatory reports based on augmented images. We evaluated AIMR-MediTell on the Open-I dataset, achieving a BLEU-4 score of 0.415, outperforming existing models. Our results demonstrate the effectiveness of the proposed model, showing that incorporating masked regions enhances report accuracy and interpretability. By identifying malfunction areas and automating report generation for X-ray images, our approach has the potential to significantly improve the efficiency and accuracy of medical image analysis.
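The BLEU-4 figure quoted above can be illustrated with a bare-bones sentence-level implementation (single reference, crude smoothing; real evaluations usually use corpus-level BLEU from a library such as NLTK or sacrebleu):

```python
import math
from collections import Counter

def ngrams(tokens, n):
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

def bleu4(candidate, reference):
    """Sentence-level BLEU-4 with brevity penalty (single reference)."""
    precisions = []
    for n in range(1, 5):
        cand = Counter(ngrams(candidate, n))
        ref = Counter(ngrams(reference, n))
        overlap = sum(min(c, ref[g]) for g, c in cand.items())
        total = max(sum(cand.values()), 1)
        precisions.append(max(overlap, 1e-9) / total)  # crude smoothing
    bp = 1.0 if len(candidate) > len(reference) \
        else math.exp(1 - len(reference) / max(len(candidate), 1))
    return bp * math.exp(sum(math.log(p) for p in precisions) / 4)

# Invented report snippets, not Open-I data
cand = "no acute cardiopulmonary abnormality is seen".split()
ref = "no acute cardiopulmonary abnormality is identified".split()
print(round(bleu4(cand, ref), 3))
```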

Towards Real-Time Detection of Fatty Liver Disease in Ultrasound Imaging: Challenges and Opportunities.

Alshagathrh FM, Schneider J, Househ MS

PubMed · Aug 7, 2025
This study presents an AI framework for real-time NAFLD detection using ultrasound imaging, addressing operator dependency, imaging variability, and class imbalance. It integrates CNNs with machine learning classifiers and applies preprocessing techniques, including normalization and GAN-based augmentation, to enhance prediction for underrepresented disease stages. Grad-CAM provides visual explanations to support clinical interpretation. Trained on 10,352 annotated images from multiple Saudi centers, the framework achieved 98.9% accuracy and an AUC of 0.99, outperforming baseline CNNs by 12.4% and improving sensitivity for advanced fibrosis and subtle features. Future work will extend multi-class classification, validate performance across settings, and integrate with clinical systems.
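The reported AUC of 0.99 corresponds to the probability that a randomly chosen positive case outscores a randomly chosen negative one; a minimal rank-based sketch with made-up scores:

```python
def roc_auc(scores, labels):
    """AUC via the Mann-Whitney U statistic: the probability that a random
    positive is scored above a random negative; ties get half credit.
    Sketch only; assumes binary labels."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Toy scores standing in for predicted NAFLD probabilities
scores = [0.95, 0.90, 0.60, 0.40, 0.20]
labels = [1, 1, 0, 1, 0]
print(roc_auc(scores, labels))
```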

Sparse transformer and multipath decision tree: a novel approach for efficient brain tumor classification.

Li P, Jin Y, Wang M, Liu F

PubMed · Aug 7, 2025
Early classification of brain tumors is the key to effective treatment. With advances in medical imaging technology, automated classification algorithms face challenges due to tumor diversity. Although Swin Transformer is effective in handling high-resolution images, it encounters difficulties with small datasets and high computational complexity. This study introduces SparseSwinMDT, a novel model that combines sparse token representation with multipath decision trees. Experimental results show that SparseSwinMDT achieves an accuracy of 99.47% in brain tumor classification, significantly outperforming existing methods while reducing computation time, making it particularly suitable for resource-constrained medical environments.
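The idea of a sparse token representation (keeping only the most informative tokens before later stages) can be sketched as a top-k selection; the scoring function here is a placeholder, since the paper's actual selection mechanism is not detailed above:

```python
import numpy as np

def sparse_tokens(tokens: np.ndarray, scores: np.ndarray, k: int) -> np.ndarray:
    """Keep only the k highest-scoring tokens, preserving their order.
    A toy illustration of sparse token selection, not SparseSwinMDT itself."""
    keep = np.sort(np.argpartition(scores, -k)[-k:])
    return tokens[keep]

rng = np.random.default_rng(0)
tokens = rng.normal(size=(6, 4))   # 6 tokens, 4-dim embeddings
scores = np.array([0.1, 0.9, 0.3, 0.8, 0.2, 0.7])  # hypothetical importances
print(sparse_tokens(tokens, scores, 3).shape)      # (3, 4)
```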

Patient Preferences for Artificial Intelligence in Medical Imaging: A Single-Center Cross-Sectional Survey.

McGhee KN, Barrett DJ, Safarini O, Elkassem AA, Eddins JT, Smith AD, Rothenberg SA

PubMed · Aug 7, 2025
Artificial intelligence (AI) is rapidly being implemented in clinical practice to improve diagnostic accuracy and reduce provider burnout. However, patients' self-perceived knowledge and perceptions of AI's role in their care remain unclear. This study explores the preferences of patients undergoing cross-sectional imaging exams regarding the use and communication of AI in their care. In this single-center cross-sectional study, a structured questionnaire was administered to patients undergoing outpatient CT or MRI examinations between June and July 2024, assessing baseline self-perceived knowledge of AI, perspectives on AI in clinical care, preferences regarding AI-generated results, and economic considerations related to AI, using Likert scales and categorical questions. A total of 226 participants (143 females; mean age 53 years) were surveyed. Of these, 67.4% (151/224) reported minimal to no knowledge of AI in medicine, with lower knowledge levels associated with lower socioeconomic status (p < .001); 90.3% (204/226) believed they should be informed about the use of AI in their care, and 91.1% (204/224) supported the right to opt out. Likewise, 91.1% (204/224) expressed a strong preference for being informed when AI was involved in interpreting their medical images, and 65.6% (143/218) would not accept a screening imaging exam interpreted exclusively by an AI algorithm. Finally, 91.1% (204/224) wanted disclosure when AI was used, and 89.1% (196/220) felt such disclosure and clarification of discrepancies should be standard care. To align AI adoption with patient preferences and expectations, radiology practices must prioritize disclosure, patient engagement, and standardized documentation of AI use without overly burdening the diagnostic workflow.
Patients prefer transparency about AI use in their care, and our study highlights the discrepancy between patient preferences and current clinical practice. Patients are not expected to determine the technical aspects of an imaging examination, such as acquisition parameters or reconstruction kernel, and must trust their providers to act in their best interest. Clear communication of how AI is being used in their care should be provided in ways that do not overly burden the radiologist.

Quantum annealing feature selection on light-weight medical image datasets.

Nau MA, Nutricati LA, Camino B, Warburton PA, Maier AK

PubMed · Aug 7, 2025
We investigate the use of quantum computing algorithms on real quantum hardware to tackle the computationally intensive task of feature selection for light-weight medical image datasets. Feature selection is often formulated as a k of n selection problem, where the complexity grows binomially with increasing k and n. Quantum computers, particularly quantum annealers, are well-suited for such problems, which may offer advantages under certain problem formulations. We present a method to solve larger feature selection instances than previously demonstrated on commercial quantum annealers. Our approach combines a linear Ising penalty mechanism with subsampling and thresholding techniques to enhance scalability. The method is tested in a toy problem where feature selection identifies pixel masks used to reconstruct small-scale medical images. We compare our approach against a range of feature selection strategies, including randomized baselines, classical supervised and unsupervised methods, combinatorial optimization via classical and quantum solvers, and learning-based feature representations. The results indicate that quantum annealing-based feature selection is effective for this simplified use case, demonstrating its potential in high-dimensional optimization tasks. However, its applicability to broader, real-world problems remains uncertain, given the current limitations of quantum computing hardware. While learned feature representations such as autoencoders achieve superior reconstruction performance, they do not offer the same level of interpretability or direct control over input feature selection as our approach.
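The k-of-n feature-selection problem maps naturally to a QUBO. The toy brute-force solver below uses the conventional quadratic cardinality penalty; the paper's contribution replaces it with a linear Ising penalty to save annealer resources, a refinement this sketch omits. The importance and redundancy numbers are invented:

```python
import itertools
import numpy as np

def qubo_feature_select(importance, redundancy, k, penalty=5.0):
    """Brute-force a k-of-n feature-selection QUBO on a tiny instance:
    maximize importance, minimize pairwise redundancy, and enforce
    exactly k selected features via a quadratic penalty term."""
    n = len(importance)
    best, best_e = None, float("inf")
    for bits in itertools.product([0, 1], repeat=n):
        x = np.array(bits)
        e = (-importance @ x
             + x @ redundancy @ x
             + penalty * (x.sum() - k) ** 2)
        if e < best_e:
            best, best_e = x, e
    return best

importance = np.array([3.0, 1.0, 2.5, 0.5])
redundancy = np.array([[0.0, 2.0, 0.0, 0.0],
                       [2.0, 0.0, 0.0, 0.0],
                       [0.0, 0.0, 0.0, 0.1],
                       [0.0, 0.0, 0.1, 0.0]])
print(qubo_feature_select(importance, redundancy, k=2))  # [1 0 1 0]
```

Brute force is only viable for toy sizes; the binomial growth with k and n is exactly why the study targets quantum annealers.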

Longitudinal structural MRI-based deep learning and radiomics features for predicting Alzheimer's disease progression.

Aghajanian S, Mohammadifard F, Mohammadi I, Rajai Firouzabadi S, Baradaran Bagheri A, Moases Ghaffary E, Mirmosayyeb O

PubMed · Aug 7, 2025
Alzheimer's disease (AD) is the principal cause of dementia, making early identification of people with mild cognitive impairment (MCI) at high risk of progression essential. Early diagnosis is imperative for optimizing clinical management and selecting proper therapeutic interventions. Structural magnetic resonance imaging (MRI) markers have been widely investigated for predicting the conversion of MCI to AD, and recent advances in deep learning (DL) methods offer enhanced capabilities for identifying subtle neurodegenerative changes over time. We selected 228 MCI participants from the Alzheimer's Disease Neuroimaging Initiative (ADNI) who had at least three T1-weighted MRI scans within 18 months of baseline. MRI volumes underwent bias correction, segmentation, and radiomics feature extraction. A 3D residual network (ResNet3D) was trained using a pairwise ranking loss to produce single-timepoint risk scores. Longitudinal analyses were performed by extracting deep convolutional neural network (CNN) embeddings and gray matter radiomics for each scan, which were fed into a time-aware long short-term memory (LSTM) model with an attention mechanism. A single-timepoint ResNet3D model achieved modest performance (c-index ~ 0.70). Incorporating longitudinal MRI data or downstream survival models led to a pronounced prognostic improvement (c-index 0.80-0.90); performance was not further improved by longitudinal radiomics data. Time-specific classification within two-, three-, and five-year windows after the last MRI acquisition showed high accuracy (AUC > 0.85). Several radiomics features, including gray matter surface-to-volume ratio and elongation, emerged as the most predictive. Each standard-deviation change in gray matter surface-to-volume ratio at the last visit was associated with an increased risk of developing AD (HR: 1.50; 95% CI: 1.25-1.79). These findings emphasize the value of structural MRI within advanced DL architectures for predicting MCI-to-AD conversion.
The approach may enable earlier risk stratification and targeted interventions for individuals most likely to progress. Limitations in sample size and computational resources warrant larger, more diverse studies to confirm these observations and explore additional improvements.
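The c-index quoted above is Harrell's concordance index; a small pure-Python sketch on invented survival data:

```python
def c_index(times, events, risks):
    """Harrell's concordance index for right-censored survival data.
    Counts comparable pairs where the subject who progressed earlier also
    received the higher predicted risk; risk ties get half credit.
    Sketch only (no handling of tied event times)."""
    concordant, comparable = 0.0, 0
    n = len(times)
    for i in range(n):
        for j in range(n):
            # comparable if i had the event strictly before j's time
            if events[i] == 1 and times[i] < times[j]:
                comparable += 1
                if risks[i] > risks[j]:
                    concordant += 1.0
                elif risks[i] == risks[j]:
                    concordant += 0.5
    return concordant / comparable

# Toy cohort: months to AD conversion (or censoring) and model risk scores
times = [6, 12, 18, 24]
events = [1, 1, 0, 0]      # 1 = converted to AD, 0 = censored
risks = [0.9, 0.3, 0.4, 0.2]
print(c_index(times, events, risks))  # 4 of 5 comparable pairs concordant
```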

Novel radiotherapy target definition using AI-driven predictions of glioblastoma recurrence from metabolic and diffusion MRI.

Tran N, Luks TL, Li Y, Jakary A, Ellison J, Liu B, Adegbite O, Nair D, Kakhandiki P, Molinaro AM, Villanueva-Meyer JE, Butowski N, Clarke JL, Chang SM, Braunstein SE, Morin O, Lin H, Lupo JM

PubMed · Aug 7, 2025
The current standard-of-care (SOC) practice for defining the clinical target volume (CTV) for radiation therapy (RT) in patients with glioblastoma still employs an isotropic 1-2 cm expansion of the T2-hyperintensity lesion, without considering the heterogeneous infiltrative nature of these tumors. This study aims to improve RT CTV definition in patients with glioblastoma by incorporating biologically relevant metabolic and physiologic imaging acquired before RT along with a deep learning model that can predict regions of subsequent tumor progression by either the presence of contrast-enhancement or T2-hyperintensity. The results were compared against two standard CTV definitions. Our multi-parametric deep learning model significantly outperformed the uniform 2 cm expansion of the T2-lesion CTV in terms of specificity (0.89 ± 0.05 vs 0.79 ± 0.11; p = 0.004), while also achieving comparable sensitivity (0.92 ± 0.11 vs 0.95 ± 0.08; p = 0.10), sparing more normal brain. Model performance was significantly enhanced by incorporating lesion size-weighted loss functions during training and including metabolic images as inputs.
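The sensitivity/specificity comparison between a candidate CTV and the region of later progression is a voxel-wise overlap computation; a toy 2D sketch with invented masks:

```python
import numpy as np

def sens_spec(pred_ctv: np.ndarray, recurrence: np.ndarray):
    """Voxel-wise sensitivity and specificity of a candidate CTV against
    the region of subsequent tumor progression (toy masks, not patient data)."""
    pred = pred_ctv.astype(bool)
    rec = recurrence.astype(bool)
    tp = np.logical_and(pred, rec).sum()     # progression voxels covered
    tn = np.logical_and(~pred, ~rec).sum()   # normal tissue spared
    return tp / rec.sum(), tn / (~rec).sum()

recurrence = np.zeros((10, 10), dtype=bool)
recurrence[4:7, 4:7] = True    # 9 voxels of later progression
ctv = np.zeros((10, 10), dtype=bool)
ctv[3:8, 3:8] = True           # candidate target volume
print(sens_spec(ctv, recurrence))
```

A tighter CTV that still covers the progression region raises specificity (more spared tissue) at unchanged sensitivity, which is the trade-off the model optimizes.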

An evaluation of rectum contours generated by artificial intelligence automatic contouring software using geometry, dosimetry and predicted toxicity.

Mc Laughlin O, Gholami F, Osman S, O'Sullivan JM, McMahon SJ, Jain S, McGarry CK

PubMed · Aug 7, 2025
Objective: This study assesses rectum contours generated using a commercial deep learning auto-contouring model and compares them to clinician contours using geometry, changes in dosimetry, and toxicity modelling.
Approach: This retrospective study involved 308 prostate cancer patients treated with 3D-conformal radiotherapy. Computed tomography images were input into Limbus Contour (v1.8.0b3) to generate auto-contour structures for each patient; auto-contours were not edited after generation. Rectum auto-contours were compared to clinician contours geometrically and dosimetrically. Dice similarity coefficient (DSC), mean Hausdorff distance (HD), and volume difference were assessed. Dose-volume histogram (DVH) constraints (V41%-V100%) were compared, and a Wilcoxon signed-rank test was used to evaluate the statistical significance of differences. Toxicity modelling to compare contours was carried out using equivalent uniform dose (EUD) and the clinical factors of abdominal surgery and atrial fibrillation. Trained models were tested (80:20 split) on their prediction of grade 1 late rectal bleeding (n = 124) using the area under the receiver operating characteristic curve (AUC).
Main results: Median DSC (interquartile range, IQR) was 0.85 (0.09), median HD was 1.38 mm (0.60 mm), and median volume difference was -1.73 cc (14.58 cc). Median DVH differences between contours were small (<1.5%) for all constraints, although auto-contour values were systematically larger than those from clinician contours (p<0.05); for individual patients, however, an IQR of up to 8.0% was seen across all dose constraints. Models using EUD alone derived from clinician or auto-contours had AUCs of 0.60 (0.10) and 0.60 (0.09). AUCs for models combining clinical factors and dosimetry were 0.65 (0.09) and 0.66 (0.09) when using clinician contours and auto-contours, respectively.
Significance: Although median DVH metrics were similar, the variation for individual patients highlights the importance of clinician review. Rectal bleeding prediction accuracy did not depend on the contour method for this cohort. The auto-contouring model used in this study shows promise in a supervised workflow.
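The mean Hausdorff distance used above to compare contours can be sketched for two point-sampled contours (a generic geometric implementation, not the Limbus or study code):

```python
import numpy as np

def mean_hausdorff(a: np.ndarray, b: np.ndarray) -> float:
    """Symmetric mean Hausdorff distance between two contours given as
    (N, 3) point arrays in mm: average each point's distance to the
    nearest point on the other contour, in both directions."""
    # pairwise Euclidean distances, shape (len(a), len(b))
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)
    return 0.5 * (d.min(axis=1).mean() + d.min(axis=0).mean())

# Toy contour points; b is a shifted by 1 mm along y
a = np.array([[0.0, 0.0, 0.0], [10.0, 0.0, 0.0], [10.0, 10.0, 0.0]])
b = np.array([[0.0, 1.0, 0.0], [10.0, 1.0, 0.0], [10.0, 11.0, 0.0]])
print(mean_hausdorff(a, b))  # 1.0
```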

Memory-enhanced and multi-domain learning-based deep unrolling network for medical image reconstruction.

Jiang H, Zhang Q, Hu Y, Jin Y, Liu H, Chen Z, Yumo Z, Fan W, Zheng HR, Liang D, Hu Z

PubMed · Aug 7, 2025
Objective: Reconstructing high-quality images from corrupted measurements remains a fundamental challenge in medical imaging. Recently, deep unrolling (DUN) methods have emerged as a promising solution, combining the interpretability of traditional iterative algorithms with the powerful representation capabilities of deep learning. However, their performance is often limited by weak information flow between iterative stages and a constrained ability to capture global features across stages, limitations that tend to worsen as the number of iterations increases.
Approach: In this work, we propose a memory-enhanced and multi-domain learning-based deep unrolling network for interpretable, high-fidelity medical image reconstruction. First, a memory-enhanced module is designed to adaptively integrate historical outputs across stages, reducing information loss. Second, we introduce a cross-stage spatial-domain learning transformer (CS-SLFormer) to extract both local and non-local features within and across stages, improving reconstruction performance. Finally, a frequency-domain consistency learning (FDCL) module ensures alignment between reconstructed and ground truth images in the frequency domain, recovering fine image details.
Main results: Comprehensive experiments on three representative medical imaging modalities (PET, MRI, and CT) show that the proposed method consistently outperforms state-of-the-art (SOTA) approaches in both quantitative metrics and visual quality. Specifically, our method achieved a PSNR of 37.835 dB and an SSIM of 0.970 in 1% dose PET reconstruction.
Significance: This study expands the use of model-driven deep learning in medical imaging, demonstrating the potential of memory-enhanced deep unrolling frameworks for high-quality reconstructions.
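The deep-unrolling structure (iterative data-consistency steps interleaved with learned modules, plus a memory term blending in earlier iterates) can be caricatured in NumPy with fixed scalars standing in for the learned components:

```python
import numpy as np

def unrolled_recon(A, y, stages=10, step=0.1, memory=0.5):
    """Minimal deep-unrolling skeleton: each 'stage' is one gradient step
    on ||Ax - y||^2, blended with the previous estimate via a memory
    weight. In a trained DUN, the step sizes, memory weights, and a
    learned regularizer replace these fixed scalars; this sketch only
    shows the iterative structure."""
    x = np.zeros(A.shape[1])
    prev = x.copy()
    for _ in range(stages):
        grad = A.T @ (A @ x - y)            # data-consistency gradient
        new = x - step * grad
        new = memory * new + (1 - memory) * prev  # memory-enhanced update
        prev, x = x, new
    return x

A = np.array([[1.0, 0.0], [0.0, 2.0]])       # toy forward operator
y = A @ np.array([3.0, -1.0])                # measurements of ground truth
print(unrolled_recon(A, y, stages=200))      # approaches [3, -1]
```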
