
Multiparameter MRI-based model integrating radiomics and deep learning for preoperative staging of laryngeal squamous cell carcinoma.

Xie K, Jiang H, Chen X, Ning Y, Yu Q, Lv F, Liu R, Zhou Y, Xu L, Yue Q, Peng J

PubMed | May 9, 2025
The accurate preoperative staging of laryngeal squamous cell carcinoma (LSCC) provides valuable guidance for clinical decision-making. The objective of this study was to establish a multiparametric MRI model using radiomics and deep learning (DL) to preoperatively distinguish between Stages I-II and III-IV of LSCC. Data from 401 histologically confirmed LSCC patients were collected from two centers (training set: 213; internal test set: 91; external test set: 97). Radiomics features were extracted from the MRI images, and seven radiomics models based on single and combined sequences were developed via random forest (RF). A DL model was constructed with ResNet-18, and DL features were extracted from its final fully connected layer. These features were fused with key radiomics features to create a combined model. The performance of the models was assessed using the area under the receiver operating characteristic (ROC) curve (AUC) and compared with radiologists' performance. The predictive capability of the combined model for progression-free survival (PFS) was evaluated via Kaplan-Meier survival analysis and Harrell's concordance index (C-index). In the external test set, the combined model had an AUC of 0.877 (95% CI 0.807-0.946), outperforming the DL model (AUC: 0.811) and the optimal radiomics model (AUC: 0.835). The combined model significantly outperformed both the DL model (p = 0.017) and the optimal radiomics model (p = 0.039), as well as the radiologists (both p < 0.05). Moreover, the combined model demonstrated prognostic value in patients with LSCC, achieving a C-index of 0.624 for PFS. This combined model enhances preoperative LSCC staging, supporting more informed clinical decisions.
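A minimal sketch of the fusion step described above — deep features from a ResNet-18 backbone concatenated with selected radiomics features and classified with a random forest. This is not the authors' code; the feature counts, placeholder data, and random-forest settings are all assumptions.

```python
import numpy as np
import torch
import torch.nn as nn
from torchvision.models import resnet18
from sklearn.ensemble import RandomForestClassifier

# Placeholder inputs: a batch of MRI slices plus precomputed radiomics features.
images = torch.randn(8, 3, 224, 224)          # stand-in for preprocessed MRI
radiomics_feats = np.random.rand(8, 20)       # e.g., 20 selected radiomics features
labels = np.random.randint(0, 2, size=8)      # 0 = Stage I-II, 1 = Stage III-IV

# ResNet-18 as the deep-feature extractor; replacing the final FC layer with an
# identity exposes the 512-d features that feed it.
backbone = resnet18(weights=None)
backbone.fc = nn.Identity()
backbone.eval()

with torch.no_grad():
    dl_feats = backbone(images).numpy()       # shape (8, 512)

# Fuse deep and radiomics features by concatenation, then classify with a random forest.
fused = np.concatenate([dl_feats, radiomics_feats], axis=1)
clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(fused, labels)
stage_probs = clf.predict_proba(fused)[:, 1]  # probability of Stage III-IV
```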

Application of Artificial Intelligence in Cardio-Oncology Imaging for Cancer Therapy-Related Cardiovascular Toxicity: Systematic Review.

Mushcab H, Al Ramis M, AlRujaib A, Eskandarani R, Sunbul T, AlOtaibi A, Obaidan M, Al Harbi R, Aljabri D

PubMed | May 9, 2025
Artificial intelligence (AI) is a revolutionary tool yet to be fully integrated into several health care sectors, including medical imaging. AI can transform how medical imaging is conducted and interpreted, especially in cardio-oncology. This study aims to systematically review the available literature on the use of AI in cardio-oncology imaging to predict cardiotoxicity and to describe the improvements to different imaging modalities that could be achieved if AI were successfully deployed in routine practice. We conducted a database search in PubMed, Ovid MEDLINE, Cochrane Library, CINAHL, and Google Scholar from inception to 2023 using the AI research assistant tool Elicit to search for original studies reporting AI outcomes in adult patients diagnosed with any cancer and undergoing cardiotoxicity assessment. Outcomes included incidence of cardiotoxicity, left ventricular ejection fraction, risk factors associated with cardiotoxicity, heart failure, myocardial dysfunction, signs of cancer therapy-related cardiovascular toxicity, echocardiography, and cardiac magnetic resonance imaging. Descriptive information about each study was recorded, including imaging technique, AI model, outcomes, and limitations. The systematic search yielded 7 studies conducted between 2018 and 2023, which are included in this review. Most of these studies were conducted in the United States (71%), included patients with breast cancer (86%), and used magnetic resonance imaging as the imaging modality (57%). Quality assessment of the studies averaged 86% compliance across all sections of the appraisal tool. In conclusion, this systematic review demonstrates the potential of AI to enhance cardio-oncology imaging for predicting cardiotoxicity in patients with cancer. Our findings suggest that AI can improve the accuracy and efficiency of cardiotoxicity assessments. However, further research through larger, multicenter trials is needed to validate these applications and refine AI technologies for routine use, paving the way for improved patient outcomes in cancer survivors at risk of cardiotoxicity.

Shortcut learning leads to sex bias in deep learning models for photoacoustic tomography.

Knopp M, Bender CJ, Holzwarth N, Li Y, Kempf J, Caranovic M, Knieling F, Lang W, Rother U, Seitel A, Maier-Hein L, Dreher KK

PubMed | May 9, 2025
Shortcut learning has been identified as a source of algorithmic unfairness in medical imaging artificial intelligence (AI), but its impact on photoacoustic tomography (PAT), particularly concerning sex bias, remains underexplored. This study investigates this issue using peripheral artery disease (PAD) diagnosis as a specific clinical application. To examine the potential for sex bias due to shortcut learning in convolutional neural networks (CNNs) and assess how such biases might affect diagnostic predictions, we created training and test datasets with varying PAD prevalence between sexes. Using these datasets, we explored (1) whether CNNs can classify sex from imaging data, (2) how sex-specific prevalence shifts impact PAD diagnosis performance and the underdiagnosis disparity between sexes, and (3) how similarly CNNs encode sex and PAD features. Our study with 147 individuals demonstrates that CNNs can classify sex from calf muscle PAT images, achieving an AUROC of 0.75. For PAD diagnosis, models trained on data with imbalanced sex-specific disease prevalence experienced significant performance drops (up to 0.21 AUROC) when applied to balanced test sets. Additionally, greater imbalances in sex-specific prevalence within the training data exacerbated underdiagnosis disparities between sexes. Finally, we identify evidence of shortcut learning by demonstrating the effective reuse of learned feature representations between PAD diagnosis and sex classification tasks. CNN-based models trained on PAT data may engage in shortcut learning by leveraging sex-related features, leading to biased and unreliable diagnostic predictions. Addressing demographic-specific prevalence imbalances and preventing shortcut learning is critical for developing models in the medical field that are both accurate and equitable across diverse patient populations.
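One way to probe the feature reuse reported above is a linear probe: freeze a backbone trained for PAD diagnosis and fit only a linear head on the sex label; a high AUROC indicates the diagnostic features also encode sex. A rough sketch with placeholder data and a stand-in backbone (all names and shapes are assumptions):

```python
import torch
import torch.nn as nn
from sklearn.metrics import roc_auc_score

# Stand-in backbone: assume it was already trained for PAD diagnosis.
pad_backbone = nn.Sequential(
    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
)
for p in pad_backbone.parameters():
    p.requires_grad = False                    # freeze: we probe feature reuse, not retrain

probe = nn.Linear(16, 1)                       # linear probe on the sex label
opt = torch.optim.Adam(probe.parameters(), lr=1e-3)

images = torch.randn(64, 1, 128, 128)          # placeholder PAT images
sex = torch.randint(0, 2, (64, 1)).float()     # placeholder sex labels

for _ in range(100):
    feats = pad_backbone(images)               # frozen PAD-trained features
    loss = nn.functional.binary_cross_entropy_with_logits(probe(feats), sex)
    opt.zero_grad(); loss.backward(); opt.step()

with torch.no_grad():
    scores = torch.sigmoid(probe(pad_backbone(images)))
# A high AUROC here indicates the PAD features also encode sex, i.e. shortcut risk.
auroc = roc_auc_score(sex.numpy().ravel(), scores.numpy().ravel())
print(f"sex-from-PAD-features AUROC: {auroc:.2f}")
```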

APD-FFNet: A Novel Explainable Deep Feature Fusion Network for Automated Periodontitis Diagnosis on Dental Panoramic Radiography.

Resul ES, Senirkentli GB, Bostanci E, Oduncuoglu BF

PubMed | May 9, 2025
This study introduces APD-FFNet, a novel, explainable deep learning architecture for automated periodontitis diagnosis using panoramic radiographs. A total of 337 panoramic radiographs, annotated by a periodontist, served as the dataset. APD-FFNet combines custom convolutional and transformer-based layers within a deep feature fusion framework that captures both local and global contextual features. Performance was evaluated using accuracy, the F1 score, the area under the receiver operating characteristic curve, the Jaccard similarity coefficient, and the Matthews correlation coefficient. McNemar's test confirmed statistical significance, and SHapley Additive exPlanations provided interpretability insights. APD-FFNet achieved 94% accuracy, a 93.88% F1 score, a 93.47% area under the receiver operating characteristic curve, an 88.47% Jaccard similarity coefficient, and an 88.46% Matthews correlation coefficient, surpassing comparable approaches. McNemar's test validated these findings (p < 0.05). Explanations generated by SHapley Additive exPlanations highlighted important regions in each radiograph, supporting clinical applicability. By merging convolutional and transformer-based layers, APD-FFNet establishes a new benchmark in automated, interpretable periodontitis diagnosis, with low hyperparameter sensitivity facilitating its integration into routine dental practice. Its adaptable design suggests broader relevance to other medical imaging domains. This is the first feature fusion method specifically devised for periodontitis diagnosis, supported by an expert-curated dataset and advanced explainable artificial intelligence. Its robust accuracy and transparent outputs set a new standard for automated periodontal analysis.
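The core idea — fusing local convolutional features with global transformer features before classification — might look like the following toy sketch. The module layout and dimensions are illustrative assumptions, not the APD-FFNet architecture itself.

```python
import torch
import torch.nn as nn

class ToyFeatureFusionNet(nn.Module):
    """Illustrative CNN + transformer feature fusion for binary classification."""
    def __init__(self, dim=64):
        super().__init__()
        # Local branch: convolutional feature extraction.
        self.cnn = nn.Sequential(
            nn.Conv2d(1, dim, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(dim, dim, 3, stride=2, padding=1), nn.ReLU(),
        )
        # Global branch: transformer encoder over flattened spatial tokens.
        self.encoder = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(d_model=dim, nhead=4, batch_first=True),
            num_layers=2,
        )
        self.head = nn.Linear(2 * dim, 2)   # fused features -> healthy vs. periodontitis

    def forward(self, x):
        feat = self.cnn(x)                          # (B, dim, H', W')
        tokens = feat.flatten(2).transpose(1, 2)    # (B, H'*W', dim)
        global_feat = self.encoder(tokens).mean(1)  # (B, dim) pooled transformer features
        local_feat = feat.mean(dim=(2, 3))          # (B, dim) pooled CNN features
        return self.head(torch.cat([local_feat, global_feat], dim=1))

logits = ToyFeatureFusionNet()(torch.randn(2, 1, 64, 64))  # placeholder radiograph batch
```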

The present and future of lung cancer screening: latest evidence.

Gutiérrez Alliende J, Kazerooni EA, Crosbie PAJ, Xie X, Sharma A, Reis J

PubMed | May 9, 2025
Lung cancer is the leading cause of cancer-related mortality worldwide. Early detection reduces lung cancer-related mortality and improves survival. This report summarizes presentations and panel discussions from a webinar, "The Present and Future of Lung Cancer Screening: Latest Evidence and AI Perspectives." The webinar provided the perspectives of experts from the United States, United Kingdom, and China on evidence-based recommendations and management in lung cancer screening (LCS), barriers to implementation, and the role of artificial intelligence (AI). With several countries now incorporating AI into their screening programs, AI offers potential solutions to some of the challenges associated with LCS.

Harnessing Advanced Machine Learning Techniques for Microscopic Vessel Segmentation in Pulmonary Fibrosis Using Novel Hierarchical Phase-Contrast Tomography Images.

Vasudev P, Azimbagirad M, Aslani S, Xu M, Wang Y, Chapman R, Coleman H, Werlein C, Walsh C, Lee P, Tafforeau P, Jacob J

PubMed | May 9, 2025
Fibrotic lung disease is a progressive illness that causes scarring and ultimately respiratory failure, with irreversible damage present by the time it is diagnosed on computed tomography imaging. Recent research postulates a role for the lung vasculature in the pathogenesis of the disease. With the recent development of high-resolution hierarchical phase-contrast tomography (HiP-CT), we have the potential to understand and detect changes in the lungs long before they are visible on conventional imaging. However, to gain quantitative insight into vascular changes, the vessels must first be segmented before any downstream analysis can be conducted. Moreover, HiP-CT generates large-volume, high-resolution data that are time-consuming and expensive to label. This project aims to qualitatively assess the latest machine learning methods for vessel segmentation in HiP-CT data to enable label propagation as the first step toward imaging biomarker discovery, with the goal of identifying early-stage interstitial lung disease that is amenable to treatment, before fibrosis begins. Semisupervised learning (SSL) has become an increasingly popular approach to sparsely labeled datasets because it leverages unlabeled data. In this study, we compare two SSL methods, SegPL (based on pseudo-labeling) and MisMatch (based on consistency regularization), against the state-of-the-art supervised learning method nnU-Net for vessel segmentation in sparsely labeled lung HiP-CT data. In initial experiments, both MisMatch and SegPL showed promising performance on qualitative review. Compared with supervised learning, both MisMatch and SegPL showed better out-of-distribution performance within the same sample (vessels with different morphology and texture), though supervised learning provided more consistent segmentations for well-represented labels in the limited annotations. Further quantitative research is required to better assess the generalizability of these findings, though they represent promising first steps toward leveraging this novel data to tackle fibrotic lung disease.
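SegPL-style pseudo-labeling, one of the SSL methods compared above, follows a simple loop: train on the sparse labels, generate pseudo-labels for confident predictions on unlabeled data, and include those in the loss. A schematic sketch with a toy model and placeholder data (thresholds and the loss weighting are assumed):

```python
import torch
import torch.nn as nn

# Placeholder segmentation model and data; real HiP-CT volumes are 3D and far larger.
model = nn.Sequential(nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(), nn.Conv2d(8, 1, 1))
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

labeled_x = torch.randn(4, 1, 64, 64)
labeled_y = (torch.rand(4, 1, 64, 64) > 0.9).float()   # sparse vessel annotations
unlabeled_x = torch.randn(16, 1, 64, 64)

for step in range(200):
    # Supervised loss on the sparse labels.
    sup_loss = bce(model(labeled_x), labeled_y)

    # Pseudo-label the unlabeled data; no gradient flows through the targets.
    with torch.no_grad():
        probs = torch.sigmoid(model(unlabeled_x))
        pseudo = (probs > 0.5).float()
        confident = ((probs > 0.9) | (probs < 0.1)).float()  # keep only confident pixels

    unsup_loss = nn.functional.binary_cross_entropy_with_logits(
        model(unlabeled_x), pseudo, weight=confident)

    loss = sup_loss + 0.5 * unsup_loss   # unsupervised weight is an assumed hyperparameter
    opt.zero_grad(); loss.backward(); opt.step()
```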

Adapting a Segmentation Foundation Model for Medical Image Classification

Pengfei Gu, Haoteng Tang, Islam A. Ebeid, Jose A. Nunez, Fabian Vazquez, Diego Adame, Marcus Zhan, Huimin Li, Bin Fu, Danny Z. Chen

arXiv preprint | May 9, 2025
Recent advancements in foundation models, such as the Segment Anything Model (SAM), have shown strong performance in various vision tasks, particularly image segmentation, due to their impressive zero-shot segmentation capabilities. However, effectively adapting such models for medical image classification remains a less explored topic. In this paper, we introduce a new framework to adapt SAM for medical image classification. First, we utilize the SAM image encoder as a feature extractor to capture segmentation-based features that convey important spatial and contextual details of the image, while freezing its weights to avoid unnecessary overhead during training. Next, we propose a novel Spatially Localized Channel Attention (SLCA) mechanism to compute spatially localized attention weights for the feature maps. The features extracted from SAM's image encoder are processed through SLCA, and the resulting attention weights are integrated into deep learning classification models to enhance their focus on spatially relevant or meaningful regions of the image, thus improving classification performance. Experimental results on three public medical image classification datasets demonstrate the effectiveness and data-efficiency of our approach.
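The described pipeline — a frozen segmentation encoder, spatially localized channel attention over its feature maps, and a trainable classifier — can be sketched roughly as below. A tiny convolutional stand-in replaces the real SAM ViT encoder, and the attention module is an illustrative per-pixel channel gate, not the paper's exact SLCA definition.

```python
import torch
import torch.nn as nn

class ToySpatialChannelAttention(nn.Module):
    """Channel attention computed independently at each spatial location (illustrative)."""
    def __init__(self, channels):
        super().__init__()
        self.gate = nn.Conv2d(channels, channels, kernel_size=1)  # 1x1 conv: per-pixel channel weights

    def forward(self, feat):
        weights = torch.sigmoid(self.gate(feat))  # (B, C, H, W) attention in [0, 1]
        return feat * weights                     # locally reweight channels

# Stand-in for the frozen SAM image encoder (the real one is a ViT producing feature maps).
encoder = nn.Sequential(nn.Conv2d(3, 32, 3, stride=4, padding=1), nn.ReLU())
for p in encoder.parameters():
    p.requires_grad = False

attn = ToySpatialChannelAttention(32)
classifier = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, 3))

x = torch.randn(2, 3, 256, 256)          # placeholder medical images
with torch.no_grad():
    feat = encoder(x)                    # frozen segmentation-based features
logits = classifier(attn(feat))          # only the attention and classifier are trained
```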

Noise-Consistent Siamese-Diffusion for Medical Image Synthesis and Segmentation

Kunpeng Qiu, Zhiqiang Gao, Zhiying Zhou, Mingjie Sun, Yongxin Guo

arXiv preprint | May 9, 2025
Deep learning has revolutionized medical image segmentation, yet its full potential remains constrained by the paucity of annotated datasets. While diffusion models have emerged as a promising approach for generating synthetic image-mask pairs to augment these datasets, they paradoxically suffer from the same data scarcity challenges they aim to mitigate. Traditional mask-only models frequently yield low-fidelity images due to their inability to adequately capture morphological intricacies, which can critically compromise the robustness and reliability of segmentation models. To alleviate this limitation, we introduce Siamese-Diffusion, a novel dual-component model comprising Mask-Diffusion and Image-Diffusion. During training, a Noise Consistency Loss is introduced between these components to enhance the morphological fidelity of Mask-Diffusion in the parameter space. During sampling, only Mask-Diffusion is used, ensuring diversity and scalability. Comprehensive experiments demonstrate the superiority of our method. Siamese-Diffusion boosts SANet's mDice and mIoU by 3.6% and 4.4% on the Polyps dataset, while UNet improves by 1.52% and 1.64% on ISIC2018. Code is available on GitHub.
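The training-time coupling is a consistency term between the two branches' noise predictions. The abstract describes the loss as acting in parameter space; the simplified output-space rendering below, with toy denoisers in place of real diffusion U-Nets, is only meant to convey the shape of the objective:

```python
import torch
import torch.nn as nn

# Toy stand-ins for the two denoisers; real diffusion U-Nets are far larger.
mask_denoiser = nn.Conv2d(1, 1, 3, padding=1)    # Mask-Diffusion branch
image_denoiser = nn.Conv2d(1, 1, 3, padding=1)   # Image-Diffusion branch

x_noisy = torch.randn(4, 1, 64, 64)              # noised input at some timestep
noise = torch.randn_like(x_noisy)                # true noise (standard DDPM target)

eps_mask = mask_denoiser(x_noisy)                # each branch predicts the noise
eps_image = image_denoiser(x_noisy)

denoise_loss = nn.functional.mse_loss(eps_mask, noise) + \
               nn.functional.mse_loss(eps_image, noise)
# Consistency term: pull the two branches' noise predictions together.
consistency_loss = nn.functional.mse_loss(eps_mask, eps_image)

loss = denoise_loss + 0.1 * consistency_loss     # weighting is an assumed hyperparameter
```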

DFEN: Dual Feature Equalization Network for Medical Image Segmentation

Jianjian Yin, Yi Chen, Chengyu Li, Zhichao Zheng, Yanhui Gu, Junsheng Zhou

arXiv preprint | May 9, 2025
Current methods for medical image segmentation primarily focus on extracting contextual feature information from the perspective of the whole image. While these methods have shown effective performance, none of them account for the fact that pixels at class boundaries, and pixels in classes with few members, absorb more contextual feature information from other classes; this unequal contextual information leads to pixel misclassification. In this paper, we propose a dual feature equalization network based on a hybrid Swin Transformer and Convolutional Neural Network architecture, aiming to augment pixel feature representations with image-level and class-level equalization feature information. First, an image-level feature equalization module is designed to equalize the contextual information of pixels within the image. Second, regions of the same class are aggregated to equalize the pixel feature representations of the corresponding class via a class-level feature equalization module. Finally, the pixel feature representations are enhanced by learning weights for the image-level and class-level equalization feature information. In addition, Swin Transformer is utilized as both the encoder and decoder, thereby bolstering the ability of the model to capture long-range dependencies and spatial correlations. We conducted extensive experiments on the Breast Ultrasound Images (BUSI), International Skin Imaging Collaboration (ISIC2017), Automated Cardiac Diagnosis Challenge (ACDC), and PH$^2$ datasets. The experimental results demonstrate that our method has achieved state-of-the-art performance. Our code is publicly available at https://github.com/JianJianYin/DFEN.
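Class-level equalization, as described, aggregates the features of all pixels in a class and blends that class mean back into each pixel's representation. A minimal sketch of that aggregation step (shapes, the use of predicted class maps, and the blending weight are assumptions):

```python
import torch

def class_level_equalization(feats, class_map, num_classes, alpha=0.5):
    """Blend each pixel's feature with the mean feature of its (predicted) class.

    feats:     (B, C, H, W) pixel feature maps
    class_map: (B, H, W) integer class assignments
    alpha:     assumed blending weight between pixel and class-mean features
    """
    B, C, H, W = feats.shape
    flat = feats.permute(0, 2, 3, 1).reshape(-1, C)       # (B*H*W, C)
    ids = class_map.reshape(-1)                           # (B*H*W,)

    # Per-class mean features via scatter-add.
    sums = torch.zeros(num_classes, C).index_add_(0, ids, flat)
    counts = torch.zeros(num_classes).index_add_(0, ids, torch.ones_like(ids, dtype=torch.float))
    means = sums / counts.clamp(min=1).unsqueeze(1)       # (num_classes, C)

    equalized = alpha * flat + (1 - alpha) * means[ids]   # blend pixel and class-mean features
    return equalized.reshape(B, H, W, C).permute(0, 3, 1, 2)

out = class_level_equalization(torch.randn(2, 16, 8, 8),
                               torch.randint(0, 3, (2, 8, 8)), num_classes=3)
```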

Hybrid Learning: A Novel Combination of Self-Supervised and Supervised Learning for MRI Reconstruction without High-Quality Training Reference

Haoyang Pei, Ding Xia, Xiang Xu, William Moore, Yao Wang, Hersh Chandarana, Li Feng

arXiv preprint | May 9, 2025
Purpose: Deep learning has demonstrated strong potential for MRI reconstruction, but conventional supervised learning methods require high-quality reference images, which are often unavailable in practice. Self-supervised learning offers an alternative, yet its performance degrades at high acceleration rates. To overcome these limitations, we propose hybrid learning, a novel two-stage training framework that combines self-supervised and supervised learning for robust image reconstruction. Methods: Hybrid learning is implemented in two sequential stages. In the first stage, self-supervised learning is employed to generate improved images from noisy or undersampled reference data. These enhanced images then serve as pseudo-ground truths for the second stage, which uses supervised learning to refine reconstruction performance and support higher acceleration rates. We evaluated hybrid learning in two representative applications: (1) accelerated 0.55T spiral-UTE lung MRI using noisy reference data, and (2) 3D T1 mapping of the brain without access to fully sampled ground truth. Results: For spiral-UTE lung MRI, hybrid learning consistently improved image quality over both self-supervised and conventional supervised methods across different acceleration rates, as measured by SSIM and NMSE. For 3D T1 mapping, hybrid learning achieved superior T1 quantification accuracy across a wide dynamic range, outperforming self-supervised learning in all tested conditions. Conclusions: Hybrid learning provides a practical and effective solution for training deep MRI reconstruction networks when only low-quality or incomplete reference data are available. It enables improved image quality and accurate quantitative mapping across different applications and field strengths, representing a promising technique toward broader clinical deployment of deep learning-based MRI.
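The two-stage recipe reads: train self-supervisedly on the noisy or undersampled references, then reuse that network's outputs as pseudo-ground truths for a conventional supervised stage. A high-level sketch with placeholder models and data; the stage-1 self-supervised loss shown is a toy stand-in, since the actual loss is method- and application-specific:

```python
import torch
import torch.nn as nn

recon_net = nn.Conv2d(2, 2, 3, padding=1)     # placeholder reconstruction network
opt = torch.optim.Adam(recon_net.parameters(), lr=1e-3)

undersampled = torch.randn(8, 2, 64, 64)      # placeholder k-space-derived inputs (real/imag)

# Stage 1: self-supervised training. Real methods predict one held-out subset of the
# measured data from another; the split below is a toy stand-in for that idea.
for _ in range(100):
    split_a, split_b = undersampled, undersampled.flip(-1)   # illustrative only
    loss = nn.functional.mse_loss(recon_net(split_a), split_b)
    opt.zero_grad(); loss.backward(); opt.step()

# Stage 1 outputs become the pseudo-ground truths for stage 2.
with torch.no_grad():
    pseudo_gt = recon_net(undersampled)

# Stage 2: conventional supervised training against the pseudo-ground truths.
refine_net = nn.Conv2d(2, 2, 3, padding=1)
opt2 = torch.optim.Adam(refine_net.parameters(), lr=1e-3)
for _ in range(100):
    loss = nn.functional.mse_loss(refine_net(undersampled), pseudo_gt)
    opt2.zero_grad(); loss.backward(); opt2.step()
```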