Page 19 of 2982974 results

Vision-Language Model-Based Semantic-Guided Imaging Biomarker for Lung Nodule Malignancy Prediction.

Zhuang L, Tabatabaei SMH, Salehi-Rad R, Tran LM, Aberle DR, Prosper AE, Hsu W

pubmed logopapers · Aug 8 2025
Machine learning models have used semantic features, deep features, or both to assess lung nodule malignancy. However, their reliance on manual annotation during inference, limited interpretability, and sensitivity to imaging variations hinder their application in real-world clinical settings. This research therefore integrates semantic features derived from radiologists' assessments of nodules, guiding the model to learn clinically relevant, robust, and explainable imaging features for predicting lung cancer. We obtained 938 low-dose CT scans from the National Lung Screening Trial (NLST) with 1,246 nodules and associated semantic features. The Lung Image Database Consortium dataset contributed an additional 1,018 CT scans, with 2,625 lesions annotated for nodule characteristics. Three external datasets were obtained from UCLA Health, the LUNGx Challenge, and the Duke Lung Cancer Screening. We fine-tuned a pretrained Contrastive Language-Image Pretraining (CLIP) model with a parameter-efficient fine-tuning approach to align imaging and semantic text features and predict one-year lung cancer diagnosis. Our model outperformed state-of-the-art (SOTA) models on the NLST test set with an AUROC of 0.901 and an AUPRC of 0.776, and remained robust on the external datasets. Using CLIP, we also obtained zero-shot predictions of semantic features such as nodule margin (AUROC: 0.812), nodule consistency (0.812), and pleural attachment (0.840). Our approach surpasses SOTA models in predicting lung cancer across datasets collected from diverse clinical settings and provides explainable outputs that help clinicians interpret model predictions. This approach also prevents the model from learning shortcuts and generalizes across clinical settings. The code is available at https://github.com/luotingzhuang/CLIP_nodule.
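The core mechanism in the abstract above — aligning imaging features with radiologists' semantic descriptions via CLIP — rests on a symmetric contrastive (InfoNCE) objective. Below is a minimal NumPy sketch of that loss, purely illustrative and not the authors' released code; in practice this would wrap pretrained image and text encoders, and the `temperature` value here is an assumed default.

```python
import numpy as np

def l2_normalize(x, axis=-1):
    return x / np.linalg.norm(x, axis=axis, keepdims=True)

def clip_contrastive_loss(img_emb, txt_emb, temperature=0.07):
    """Symmetric InfoNCE loss over a batch of paired image/text embeddings."""
    img = l2_normalize(img_emb)
    txt = l2_normalize(txt_emb)
    logits = img @ txt.T / temperature           # (N, N) scaled cosine similarities
    labels = np.arange(len(logits))              # matched pairs sit on the diagonal
    def xent(lg):
        lg = lg - lg.max(axis=1, keepdims=True)  # numerical stability
        logp = lg - np.log(np.exp(lg).sum(axis=1, keepdims=True))
        return -logp[labels, labels].mean()
    # average the image->text and text->image directions
    return (xent(logits) + xent(logits.T)) / 2
```

With matched pairs the loss is near zero; shuffling the text embeddings breaks the pairing and the loss rises, which is what drives the encoders into a shared semantic space.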

BM3D filtering with Ensemble Hilbert-Huang Transform and spiking neural networks for cardiomegaly detection in chest radiographs.

Patel RK

pubmed logopapers · Aug 8 2025
Cardiomyopathy is a life-threatening condition associated with heart failure, arrhythmias, thromboembolism, and sudden cardiac death, contributing significantly to worldwide morbidity and mortality. Cardiomegaly, usually the initial radiologic sign, may reflect the progression of a known heart disease or an undiagnosed cardiac condition. Chest radiography is the most frequently used imaging method for detecting heart enlargement, and prompt, accurate diagnosis is essential for early intervention and appropriate treatment planning to prevent disease progression and improve patient outcomes. The current work provides a new methodology for automated cardiomegaly diagnosis from chest X-ray images through the fusion of Block-Matching and 3D Filtering (BM3D) with the Ensemble Hilbert-Huang Transform (EHHT), pretrained convolutional neural networks (VGG16, ResNet50, InceptionV3, DenseNet169), spiking neural networks (SNNs), and classifiers. BM3D is first used for noise reduction with edge retention, after which EHHT is applied to extract informative features from the X-ray images. The extracted features are then processed by an SNN, which simulates neural processes at a biological level and offers a biologically plausible classification solution. Gradient-weighted Class Activation Mapping (Grad-CAM) highlighted the important regions that influenced model predictions. The SNN performed best among all models tested, with 97.6% accuracy, 96.3% sensitivity, and 98.2% specificity. These findings show the SNN's high potential for facilitating accurate and efficient cardiomegaly diagnosis, supporting clinical decision-making and improving patient outcomes.
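At the heart of the Hilbert-Huang step above is the analytic signal, whose magnitude gives an instantaneous amplitude envelope from which texture features can be summarized. The sketch below implements an FFT-based Hilbert transform in plain NumPy and a hypothetical `envelope_features` helper; it assumes BM3D denoising has already been applied by an external library and is not the paper's actual pipeline (a full EHHT would first decompose the signal into intrinsic mode functions).

```python
import numpy as np

def analytic_signal(x):
    """FFT-based Hilbert transform: complex analytic signal of a 1-D array."""
    n = len(x)
    X = np.fft.fft(x)
    h = np.zeros(n)
    h[0] = 1.0
    if n % 2 == 0:
        h[n // 2] = 1.0
        h[1:n // 2] = 2.0   # double positive frequencies, zero negative ones
    else:
        h[1:(n + 1) // 2] = 2.0
    return np.fft.ifft(X * h)

def envelope_features(row):
    """Instantaneous-amplitude summary statistics for one image row (or IMF)."""
    env = np.abs(analytic_signal(row - row.mean()))
    return np.array([env.mean(), env.std(), env.max()])
```

For a pure cosine spanning whole periods, the envelope is constant at 1, which makes the helper easy to sanity-check before pointing it at real radiograph rows.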

CT-based Radiomics Signature of Visceral Adipose Tissue for Prediction of Early Recurrence in Patients With NMIBC: a Multicentre Cohort Study.

Yu N, Li J, Cao D, Chen X, Yang D, Jiang N, Wu J, Zhao C, Zheng Y, Chen Y, Jin X

pubmed logopapers · Aug 7 2025
This study investigates the ability of abdominal fat features derived from computed tomography (CT) to predict early recurrence within one year of the initial transurethral resection of bladder tumor (TURBT) in patients with non-muscle-invasive bladder cancer (NMIBC), and constructs a predictive model combining these features with clinical factors to aid in evaluating the risk of early recurrence after initial TURBT. This retrospective study enrolled 325 NMIBC patients from three centers. Machine-learning-based visceral adipose tissue radiomics models (VAT-RM) and subcutaneous adipose tissue radiomics models (SAT-RM) were constructed to identify patients with early recurrence, and a combined model integrating the VAT-RM and clinical factors was established. The predictive performance of each variable and model was analyzed using the area under the receiver operating characteristic curve (AUC), net benefit was assessed through decision curve analysis (DCA), and calibration was evaluated with the Hosmer-Lemeshow test. The VAT-RM demonstrated satisfactory performance in the training cohort (AUC = 0.853, 95% CI 0.768-0.937), test cohort 1 (AUC = 0.823, 95% CI 0.730-0.916), and test cohort 2 (AUC = 0.808, 95% CI 0.681-0.935). Across all cohorts, the AUC values of the VAT-RM were higher than those of the SAT-RM (P < 0.001), and the DCA curves confirmed that the clinical net benefit of the VAT-RM was superior to that of the SAT-RM. In multivariate logistic regression analysis, the VAT-RM emerged as the most significant independent predictor (odds ratio [OR] = 0.295, 95% CI 0.141-0.508, P < 0.001). The combined model achieved excellent AUC values of 0.938, 0.909, and 0.905 across the three cohorts, surpassing traditional risk assessment frameworks in both predictive efficacy and clinical net benefit.
VAT serves as a crucial factor in early postoperative recurrence in NMIBC patients. The VAT-RM can accurately identify high-risk patients with early postoperative recurrence, offering significant advantages over SAT-RM. The new predictive model constructed by integrating the VAT-RM and clinical factors exhibits excellent predictive performance, clinical net benefits, and calibration accuracy.
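The decision curve analysis used to compare the VAT-RM and SAT-RM reduces to a simple net-benefit formula at each threshold probability pt: NB = TP/n − (FP/n) · pt/(1 − pt). A minimal NumPy sketch (not the study's code) is below, including the "treat everyone" reference line that DCA plots alongside each model.

```python
import numpy as np

def net_benefit(y_true, y_prob, threshold):
    """Net benefit of a risk model at one threshold probability (decision curve analysis)."""
    y_true = np.asarray(y_true, dtype=bool)
    pred_pos = np.asarray(y_prob) >= threshold
    n = len(y_true)
    tp = np.sum(pred_pos & y_true)   # treated patients who would recur
    fp = np.sum(pred_pos & ~y_true)  # treated patients who would not
    return tp / n - fp / n * threshold / (1 - threshold)

def treat_all_benefit(y_true, threshold):
    """'Treat everyone' reference line: prevalence - (1 - prevalence) * pt / (1 - pt)."""
    prev = np.mean(y_true)
    return prev - (1 - prev) * threshold / (1 - threshold)
```

A model is clinically useful at a threshold only if its net benefit exceeds both this reference line and zero ("treat no one"), which is exactly the comparison the study's DCA curves encode.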

Gastrointestinal bleeding detection on digital subtraction angiography using convolutional neural networks with and without temporal information.

Smetanick D, Naidu S, Wallace A, Knuttinen MG, Patel I, Alzubaidi S

pubmed logopapers · Aug 7 2025
Digital subtraction angiography (DSA) offers a real-time approach to locating lower gastrointestinal (GI) bleeding. However, many sources of bleeding are not easily visible on angiograms. This investigation aims to develop a machine learning tool that can locate GI bleeding on DSA prior to transarterial embolization. All mesenteric artery angiograms and arterial embolization DSA images obtained in the interventional radiology department between January 1, 2007, and December 31, 2021, were analyzed. These images were acquired using fluoroscopy imaging systems (Siemens Healthineers, USA). Thirty-nine unique series of bleeding images were augmented to train two-dimensional (2D) and three-dimensional (3D) residual neural networks (ResUNet++) for image segmentation. The 2D ResUNet++ network was trained on 3,548 images and tested on 394 images, whereas the 3D ResUNet++ network was trained on 316 3D objects and tested on 35 objects. For each case, both manually cropped images focused on the GI bleed and uncropped images were evaluated, with a superimposition post-processing (SIPP) technique applied to both image types. Based on both quantitative and qualitative analyses, the 2D ResUNet++ network significantly outperformed the 3D ResUNet++ model. In the qualitative evaluation, the 2D ResUNet++ model achieved the highest accuracy at both 128 × 128 and 256 × 256 input resolutions when enhanced with the SIPP technique, reaching accuracy rates between 95% and 97%. However, despite the improved detection consistency provided by SIPP, a reduction in Dice similarity coefficients was observed compared with models without post-processing: the 2D ResUNet++ model combined with SIPP achieved a Dice coefficient of only 80%. This decline is primarily attributed to an increase in false-positive predictions introduced by the temporal propagation of segmentation masks across frames.
Both 2D and 3D ResUNet++ networks can be trained to locate GI bleeding on DSA images prior to transarterial embolization. However, further research and refinement are needed before this technology can be implemented in DSA for real-time prediction. Automated detection of GI bleeding in DSA may reduce time to embolization, thereby improving patient outcomes.
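The SIPP technique is not specified in detail in the abstract; a plausible reading is that per-frame segmentation masks are accumulated (OR-ed) forward through the DSA series, which keeps a detected bleed visible in later frames but inflates false positives — exactly the Dice drop the authors report. The sketch below is a hedged illustration under that assumption, not the study's implementation.

```python
import numpy as np

def dice(pred, gt):
    """Dice similarity coefficient between two binary masks."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    denom = pred.sum() + gt.sum()
    return 1.0 if denom == 0 else 2.0 * (pred & gt).sum() / denom

def superimpose(masks):
    """Propagate detections across a DSA series by OR-ing per-frame masks forward."""
    acc = np.zeros_like(masks[0], dtype=bool)
    out = []
    for m in masks:
        acc |= m.astype(bool)
        out.append(acc.copy())   # frame k now carries all detections up to k
    return out
```

If the bleed moves between frames, the superimposed mask at a later frame contains stale pixels from earlier frames: detection consistency improves, but the per-frame Dice against ground truth falls, mirroring the reported trade-off.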

Multimodal Deep Learning Approaches for Early Detection of Alzheimer's Disease: A Comprehensive Systematic Review of Image Processing Techniques.

Amine JM, Mourad M

pubmed logopapers · Aug 7 2025
Alzheimer's disease (AD) is the most common form of dementia, and it is important to diagnose the disease at an early stage to help people with the condition and their families. Recently, artificial intelligence, especially deep learning approaches applied to medical imaging, has shown potential in enhancing AD diagnosis. This comprehensive review investigates the current state of the art in multimodal deep learning for the early diagnosis of Alzheimer's disease using image processing. The research underpinning this review spanned several months. Numerous deep learning architectures are examined, including CNNs, transfer learning methods, and combined models that use different imaging modalities, such as structural MRI, functional MRI, and amyloid PET. The latest work on explainable AI (XAI) is also reviewed to improve the understandability of the models and identify the particular regions of the brain related to AD pathology. The results indicate that multimodal approaches generally outperform single-modality methods, and three-dimensional (volumetric) data provides a better form of representation compared to two-dimensional images. Current challenges are also discussed, including insufficient and/or poorly prepared datasets, computational expense, and the lack of integration with clinical practice. The findings highlight the potential of applying deep learning approaches for early AD diagnosis and for directing future research pathways. The integration of multimodal imaging with deep learning techniques presents an exciting direction for developing improved AD diagnostic tools. However, significant challenges remain in achieving accurate, reliable, and understandable clinical applications.

Response Assessment in Hepatocellular Carcinoma: A Primer for Radiologists.

Mroueh N, Cao J, Srinivas Rao S, Ghosh S, Song OK, Kongboonvijit S, Shenoy-Bhangle A, Kambadakone A

pubmed logopapers · Aug 7 2025
Hepatocellular carcinoma (HCC) is the third leading cause of cancer-related deaths worldwide, necessitating accurate, early diagnosis to guide therapy, along with assessment of treatment response. Response assessment criteria have evolved from traditional morphologic approaches, such as the WHO criteria and the Response Evaluation Criteria in Solid Tumors (RECIST), to more recent methods focused on evaluating viable tumor burden, including the European Association for the Study of the Liver (EASL) criteria, modified RECIST (mRECIST), and the Liver Imaging Reporting and Data System (LI-RADS) Treatment Response (LI-TR) algorithm. This shift reflects the complex and evolving landscape of HCC treatment in the context of emerging systemic and locoregional therapies. Each of these criteria has its own nuanced strengths and limitations in capturing the detailed characteristics of HCC treatment and response assessment. The emergence of functional imaging techniques, including dual-energy CT and perfusion imaging, and the rising use of radiomics are enhancing the capabilities of response assessment. Growth in artificial intelligence and machine learning models provides an opportunity to refine the precision of response assessment by facilitating analysis of complex imaging data patterns. This review article provides a comprehensive overview of existing criteria, discusses functional and emerging imaging techniques, and outlines future directions for advancing HCC tumor response assessment.
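For readers unfamiliar with the morphologic criteria the primer contrasts, RECIST 1.1 target-lesion response comes down to a few thresholds on the sum of lesion diameters. The sketch below is a simplified illustration of those thresholds only (it omits new lesions and non-target assessment; mRECIST and EASL apply analogous logic to viable, enhancing tumor rather than whole-lesion diameters).

```python
def recist_response(baseline_sum, nadir_sum, current_sum):
    """Simplified RECIST 1.1 target-lesion response from sums of diameters (mm).

    baseline_sum: sum of diameters at baseline (must be > 0)
    nadir_sum:    smallest sum recorded so far
    current_sum:  sum at the current time point
    """
    if current_sum == 0:
        return "CR"  # complete response: all target lesions resolved
    # progressive disease: >=20% increase from nadir AND >=5 mm absolute increase
    if nadir_sum > 0 and current_sum - nadir_sum >= 5 \
            and (current_sum - nadir_sum) / nadir_sum >= 0.20:
        return "PD"
    # partial response: >=30% decrease from baseline
    if (baseline_sum - current_sum) / baseline_sum >= 0.30:
        return "PR"
    return "SD"  # stable disease: neither PR nor PD
```

Note that progression is judged against the nadir while response is judged against baseline, so a lesion that shrank and then regrew can be PD even while still 30% below baseline.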

Artificial Intelligence for the Detection of Fetal Ultrasound Findings Concerning for Major Congenital Heart Defects.

Zelop CM, Lam-Rachlin J, Arunamata A, Punn R, Behera SK, Lachaud M, David N, DeVore GR, Rebarber A, Fox NS, Gayanilo M, Garmel S, Boukobza P, Uzan P, Joly H, Girardot R, Cohen L, Stos B, De Boisredon M, Askinazi E, Thorey V, Gardella C, Levy M, Geiger M

pubmed logopapers · Aug 7 2025
To evaluate the performance of artificial intelligence (AI)-based software for identifying second-trimester fetal ultrasound examinations suspicious for congenital heart defects. The software analyzes all grayscale two-dimensional ultrasound cine clips of an examination to evaluate eight morphologic findings associated with severe congenital heart defects. A data set of 877 examinations was retrospectively collected from 11 centers. The presence of suspicious findings was determined by a panel of expert pediatric cardiologists, who found that 311 examinations had at least one of the eight suspicious findings. The AI software processed each examination, labeling each finding as present, absent, or inconclusive. Of the 280 examinations with known severe congenital heart defects, 278 (sensitivity 0.993, 95% CI, 0.974-0.998) had at least one of the eight suspicious findings present as determined by the fetal cardiologists, highlighting the relevance of these eight findings. The AI software identified at least one finding as present in 271 of the 280 examinations with severe congenital heart defects, labeled all eight findings absent in five, and was inconclusive in four, yielding a sensitivity of 0.968 (95% CI, 0.940-0.983) for severe congenital heart defects. When compared with the fetal cardiologists' determinations, AI detection of any finding had a sensitivity of 0.987 (95% CI, 0.967-0.995) and a specificity of 0.977 (95% CI, 0.961-0.986) after exclusion of inconclusive examinations. The AI rendered a decision for any finding (either present or absent) in 98.7% of examinations. The AI-based software demonstrated high accuracy in identifying suspicious findings associated with severe congenital heart defects, yielding a high sensitivity for detecting severe congenital heart defects. 
These results show that AI has potential to improve antenatal congenital heart defect detection.
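The reported confidence intervals are consistent with a Wilson score interval on the observed proportions; for example, 278/280 gives 0.993 with an interval of 0.974-0.998. The sketch below reproduces that computation (the interval method itself is an assumption on my part; the abstract does not state which one the authors used).

```python
import numpy as np

def wilson_ci(successes, n, z=1.96):
    """Wilson score interval for a binomial proportion (95% two-sided by default)."""
    p = successes / n
    center = p + z * z / (2 * n)
    margin = z * np.sqrt(p * (1 - p) / n + z * z / (4 * n * n))
    denom = 1 + z * z / n
    return (center - margin) / denom, (center + margin) / denom
```

Unlike the naive Wald interval, the Wilson interval behaves sensibly for proportions near 0 or 1, which matters here because the observed sensitivities are so close to 1.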

MLAgg-UNet: Advancing Medical Image Segmentation with Efficient Transformer and Mamba-Inspired Multi-Scale Sequence.

Jiang J, Lei S, Li H, Sun Y

pubmed logopapers · Aug 7 2025
Transformers and state space sequence models (SSMs) have attracted interest in biomedical image segmentation for their ability to capture long-range dependencies. However, traditional visual state space (VSS) methods suffer from the incompatibility of image tokens with the autoregressive assumption. Although Transformer attention does not require this assumption, its high computational cost limits effective channel-wise information utilization. To overcome these limitations, we propose the Mamba-Like Aggregated UNet (MLAgg-UNet), which introduces a Mamba-inspired mechanism to enrich Transformer channel representations and exploit the implicit autoregressive characteristic within a U-shaped architecture. To establish dependencies among image tokens at a single scale, the Mamba-Like Aggregated Attention (MLAgg) block is designed to balance representational ability and computational efficiency. Inspired by the human foveal vision system, the Mamba macro-structure, and differential attention, the MLAgg block can slide its focus over each image token, suppress irrelevant tokens, and simultaneously strengthen channel-wise information utilization. Moreover, leveraging causal relationships between consecutive low-level and high-level features in the U-shaped architecture, we propose the Multi-Scale Mamba Module with Implicit Causality (MSMM) to optimize complementary information across scales. Embedded within skip connections, this module enhances semantic consistency between encoder and decoder features. Extensive experiments on four benchmark datasets (AbdomenMRI, ACDC, BTCV, and EndoVis17), covering MRI, CT, and endoscopy modalities, demonstrate that the proposed MLAgg-UNet consistently outperforms state-of-the-art CNN-based, Transformer-based, and Mamba-based methods, achieving improvements of at least 1.24%, 0.20%, 0.33%, and 0.39% in DSC on these datasets, respectively. 
These results highlight the model's ability to effectively capture feature correlations and integrate complementary multi-scale information, providing a robust solution for medical image segmentation. The implementation is publicly available at https://github.com/aticejiang/MLAgg-UNet.
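The differential-attention idea the abstract draws on — subtracting a second softmax attention map to cancel attention paid to irrelevant tokens — can be sketched in a few lines of NumPy. This illustrates only the general mechanism, not the MLAgg block itself (which adds Mamba-style gating and channel aggregation); all weight matrices and the `lam` scale are hypothetical.

```python
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def differential_attention(x, Wq1, Wk1, Wq2, Wk2, Wv, lam=0.5):
    """Differential attention: subtract a second softmax map to suppress noise tokens."""
    d = Wq1.shape[1]
    a1 = softmax((x @ Wq1) @ (x @ Wk1).T / np.sqrt(d))
    a2 = softmax((x @ Wq2) @ (x @ Wk2).T / np.sqrt(d))
    attn = a1 - lam * a2   # each row now sums to 1 - lam; common-mode attention cancels
    return attn @ (x @ Wv), attn
```

Tokens that both maps attend to (background, repeated anatomy) are damped by the subtraction, while tokens only the first map highlights survive, which is the "suppress irrelevant tokens" behavior the abstract describes.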

Enhancing Domain Generalization in Medical Image Segmentation With Global and Local Prompts.

Zhao C, Li X

pubmed logopapers · Aug 7 2025
Enhancing domain generalization (DG) is a crucial and compelling research pursuit within the field of medical image segmentation, owing to the inherent heterogeneity observed in medical images. The recent success with large-scale pre-trained vision models (PVMs), such as Vision Transformer (ViT), inspires us to explore their application in this specific area. While a straightforward strategy involves fine-tuning the PVM using supervised signals from the source domains, this approach overlooks the domain shift issue and neglects the rich knowledge inherent in the instances themselves. To overcome these limitations, we introduce a novel framework enhanced by global and local prompts (GLPs). Specifically, to adapt PVM in the medical DG scenario, we explicitly separate domain-shared and domain-specific knowledge in the form of GLPs. Furthermore, we develop an individualized domain adapter to intricately investigate the relationship between each target domain sample and the source domains. To harness the inherent knowledge within instances, we devise two innovative regularization terms from both the consistency and anatomy perspectives, encouraging the model to preserve instance discriminability and organ position invariance. Extensive experiments and in-depth discussions in both vanilla and semi-supervised DG scenarios deriving from five diverse medical datasets consistently demonstrate the superior segmentation performance achieved by GLP. Our code and datasets are publicly available at https://github.com/xmed-lab/GLP.

Best Machine Learning Model for Predicting Axial Symptoms After Unilateral Laminoplasty: Based on C2 Spinous Process Muscle Radiomics Features and Sagittal Parameters.

Zheng B, Zhu Z, Liang Y, Liu H

pubmed logopapers · Aug 7 2025
Study Design: Retrospective study. Objective: To develop a machine learning model for predicting axial symptoms (AS) after unilateral laminoplasty by integrating C2 spinous process muscle radiomics features and cervical sagittal parameters. Methods: In this retrospective study of 96 cervical myelopathy patients (30 with AS, 66 without) who underwent unilateral laminoplasty between 2018 and 2022, we extracted radiomics features from preoperative MRI of the C2 spinous muscles using PyRadiomics. Clinical data, including C2-C7 Cobb angle, cervical sagittal vertical axis (cSVA), T1 slope (T1S), and C2 muscle fat infiltration, were collected for clinical model construction. After LASSO regression feature selection, we constructed six machine learning models (SVM, KNN, Random Forest, ExtraTrees, XGBoost, and LightGBM) and evaluated their performance using ROC curves and AUC. Results: The AS group demonstrated significantly lower preoperative C2-C7 Cobb angles (12.80° ± 7.49° vs 18.02° ± 8.59°, P = .006), higher cSVA (3.01 ± 0.87 cm vs 2.46 ± 1.19 cm, P = .026), higher T1S (26.68° ± 5.12° vs 23.66° ± 7.58°, P = .025), and higher C2 muscle fat infiltration (23.73 ± 7.78 vs 20.62 ± 6.93, P = .026). Key radiomics features included local binary pattern texture features and wavelet transform characteristics. The combined model integrating radiomics and clinical parameters achieved the best performance, with a test AUC of 0.881, sensitivity of 0.833, and specificity of 0.786. Conclusion: The machine learning model based on C2 spinous process muscle radiomics features and clinical parameters (C2-C7 Cobb angle, cSVA, T1S, and C2 muscle fat infiltration) effectively predicts AS occurrence after unilateral laminoplasty, providing clinicians with a valuable tool for preoperative risk assessment and personalized treatment planning.
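The modeling pipeline described — LASSO selection over radiomics features, then a classifier combining the selected features with the four clinical parameters — can be sketched with scikit-learn. All data below are simulated stand-ins (the planted effect sizes are assumptions loosely mirroring the reported group differences), so the resulting AUC is illustrative only, not the study's result.

```python
import numpy as np
from sklearn.linear_model import Lasso
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
n, n_radiomics = 96, 50                      # cohort size from the abstract; feature count assumed
y = rng.integers(0, 2, size=n)               # 1 = axial symptoms after laminoplasty (simulated)
X_rad = rng.normal(size=(n, n_radiomics))
X_rad[:, :5] += 1.5 * y[:, None]             # plant signal in five "true" radiomics features

# Simulated stand-ins for C2-C7 Cobb angle, cSVA, T1S, and C2 muscle fat infiltration
clin = np.column_stack([
    18.0 - 5.0 * y + rng.normal(scale=8.0, size=n),
    2.5 + 0.5 * y + rng.normal(scale=1.0, size=n),
    24.0 + 3.0 * y + rng.normal(scale=6.0, size=n),
    21.0 + 3.0 * y + rng.normal(scale=7.0, size=n),
])

# LASSO keeps the radiomics features with non-zero coefficients
sel = np.flatnonzero(Lasso(alpha=0.02).fit(X_rad, y).coef_)

# Combined model: selected radiomics features + clinical parameters
X = np.hstack([X_rad[:, sel], clin])
Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.3, random_state=0, stratify=y)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(Xtr, ytr)
auc = roc_auc_score(yte, clf.predict_proba(Xte)[:, 1])
```

Swapping the RandomForestClassifier for SVM, KNN, ExtraTrees, XGBoost, or LightGBM reproduces the study's model-comparison design; the LASSO step is what keeps the radiomics feature count manageable relative to the small cohort.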
