Page 27 of 6046038 results

Yang RH, Fan WX, Zhong Y, Lin ZP, Chen JP, Jiang GH, Dai HY

pubmed · Oct 15 2025
Predicting the pathological response of esophageal cancer (EC) to neoadjuvant therapy (NAT) is of significant clinical importance. This study aimed to evaluate the pathological response to NAT in EC patients using multiple machine learning algorithms based on magnetic resonance imaging (MRI) radiomics. This retrospective study included 132 patients with pathologically confirmed EC, who were randomly divided into a training cohort (<i>n</i> = 92) and a validation cohort (<i>n</i> = 40) in a 7:3 ratio. All patients underwent a preoperative MRI scan from the neck to the abdomen. High-throughput quantitative radiomics features were extracted from T2-weighted imaging (T2WI). Radiomics signatures were selected using minimal redundancy maximal relevance and the least absolute shrinkage and selection operator (LASSO). Nine classification algorithms were used to build the models, and the diagnostic performance of each model was evaluated using the area under the curve (AUC), sensitivity (SEN), and specificity (SPE). A total of 1834 features were extracted. Following feature dimension reduction, ten radiomics features were selected to construct the radiomics signature. Among the nine classification algorithms, the ExtraTrees algorithm demonstrated the best diagnostic performance in both the training (AUC: 0.932; SEN: 0.906; SPE: 0.817) and validation cohorts (AUC: 0.900; SEN: 0.667; SPE: 0.700). The DeLong test showed no significant difference in diagnostic efficiency among these models (<i>P</i> > 0.05). T2WI radiomics may aid in determining the pathological response to NAT in EC patients, serving as a noninvasive and quantitative tool to support personalized treatment planning.
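The LASSO feature-selection step this abstract describes can be sketched as follows. This is a toy illustration on synthetic data, not the authors' pipeline: the feature matrix, outcome, and cohort size are fabricated to mirror the abstract's numbers only.

```python
# Toy sketch of LASSO-based radiomics feature selection (synthetic data,
# illustrative only): a sparse linear model keeps a small subset of the
# extracted features before classifier training.
import numpy as np
from sklearn.linear_model import LassoCV
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n_patients, n_features = 92, 200          # training-cohort-sized toy matrix
X = rng.normal(size=(n_patients, n_features))
# The outcome depends on only a few features, mimicking a sparse signal
y = (X[:, 0] - 0.8 * X[:, 1] + 0.5 * X[:, 2]
     + rng.normal(scale=0.3, size=n_patients) > 0).astype(float)

X_std = StandardScaler().fit_transform(X)
lasso = LassoCV(cv=5, random_state=0).fit(X_std, y)
selected = np.flatnonzero(lasso.coef_)     # indices of retained features
print(len(selected), "of", n_features, "features retained")
```

In the actual study, the retained signature (ten features) would then feed the nine candidate classifiers.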

Ren M, Yang Z, Fu Y, Chen Z, Shi Y, Lv Y

pubmed · Oct 15 2025
<p> Introduction: Ultrasound is routinely used for thyroid nodule diagnosis, yet distinguishing benign from malignant TI-RADS category 4 nodules remains challenging. This study integrated two-dimensional ultrasound, shear wave elastography (SWE), and contrast-enhanced ultrasound (CEUS) features via machine learning to improve diagnostic accuracy for these nodules. </p> <p> Methods: A total of 117 TI-RADS 4 thyroid nodules from 108 patients were included and classified as benign or malignant based on pathological results. Two-dimensional ultrasound, CEUS, and SWE features were compared between the benign and malignant groups. Predictive features were selected using LASSO regression, and feature importance was further validated using Random Forest, SVM, and XGBoost algorithms. A logistic regression model was constructed and visualized as a nomogram. Model performance was assessed using receiver operating characteristic (ROC) analysis, calibration curves, and decision curve analysis (DCA). </p> <p> Results: Malignant nodules exhibited significantly elevated serum FT3, FT4, FT3/FT4, TSH, and TI-RADS scores compared to benign lesions. Key imaging discriminators included unclear boundaries, aspect ratio ≥ 1, low internal echo, and microcalcifications on ultrasound; enhancement degree, circumferential enhancement, and excretion on CEUS; as well as elevated SWE values (Emax, Emean, Esd, etc.) and altered CEUS quantitative parameters (PE, WiR, WoR, etc.) (all P < 0.05). A nomogram integrating four optimal predictors (Emax, FT4, TI-RADS, and ΔPE) demonstrated robust predictive performance upon validation by ROC, calibration, and DCA analysis. </p> <p> Discussion: The nomogram incorporating Emax, FT4, TI-RADS, and ΔPE showed high predictive accuracy, particularly for papillary carcinoma in TI-RADS 4 nodules. Its applicability may, however, be constrained by the single-center retrospective design and limited pathological coverage. 
</p> <p> Conclusion: The multimodal ultrasound-based machine learning model effectively predicted malignancy in TI-RADS category 4 thyroid nodules. </p>
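A nomogram is just a visual rendering of a logistic model: each predictor contributes to a linear score that maps to a probability. The sketch below uses made-up coefficients for the four predictors named above (Emax, FT4, TI-RADS, ΔPE); the numeric values are illustrative assumptions, not the published model.

```python
# Minimal sketch of the logistic model behind a nomogram. Coefficients and
# intercept are invented for illustration; only the predictor names come
# from the abstract.
import math

def malignancy_probability(emax, ft4, tirads, d_pe,
                           coefs=(0.04, 0.15, 0.60, 0.02), intercept=-6.0):
    """Logistic model: p = 1 / (1 + exp(-(b0 + b.x)))."""
    score = intercept + sum(c * x for c, x in zip(coefs, (emax, ft4, tirads, d_pe)))
    return 1.0 / (1.0 + math.exp(-score))

p = malignancy_probability(emax=45.0, ft4=16.0, tirads=4, d_pe=20.0)
```

On a printed nomogram, each term of the linear score becomes a points axis, and the total points axis maps to this probability.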

Balasaranya K, Ezhumalai P, Shanker NR

pubmed · Oct 15 2025
<p> Introduction: Intracranial hemorrhage (IH) can lead to dementia and Alzheimer's disease in later stages. Accurate early detection of IH, its prognosis, and therapeutic intervention remain challenging. Objective: A Multimodal Joint Fusion Sentiment Analysis (MJFSA) framework is proposed for the early detection and classification of IH, together with sentiment analysis to support prognosis and therapeutic report generation. </p> <p> Methodology: MJFSA integrates radiological images and radiological clinical narrative reports (RCNRs). In the proposed MJFSA model, MRI brain images are enhanced using the modified Contrast Limited Adaptive Histogram Equalization (M-CLAHE) algorithm. Enhanced images are processed with the proposed Tuned Temporal-GAN (Tuned-T-GAN) algorithm to generate temporal images. RCNRs are generated for the temporal images using the Microsoft Phi-2 language model. Temporal images are processed with the Tuned Vision Image Transformer (T-ViT) model to extract image features, while the Bio-Bidirectional Encoder Representation Transformer (Bio-BERT) processes the RCNR texts for text feature extraction. Temporal image and RCNR text features are used to classify IH into intracerebral hemorrhage (ICH), epidural hemorrhage (EDH), subdural hemorrhage (SDH), and intraventricular hemorrhage (IVH), and to drive sentiment analysis for prognosis and therapeutic reports. </p> <p> Results: The MJFSA model achieved an accuracy of 96.5% in prognosis sentiment analysis and 94.5% in therapeutic sentiment analysis. </p> <p> Discussion: The MJFSA framework detects IH and classifies it using sentiment analysis for prognosis and therapeutic report generation. </p> <p> Conclusion: The MJFSA model's prognosis and therapeutic sentiment analysis reports aim to support the early identification and management of risk factors associated with dementia and Alzheimer's disease. </p>

Haotian Feng, Ke Sheng

arxiv preprint · Oct 15 2025
We develop and validate a novel spherical radiomics framework for predicting key molecular biomarkers using multiparametric MRI. Conventional Cartesian radiomics extracts tumor features on orthogonal grids, which do not fully capture the tumor's radial growth patterns and can be insensitive to evolving molecular signatures. In this study, we analyzed GBM radiomic features on concentric 2D shells, which were then mapped onto 2D planes for radiomics analysis. Radiomic features were extracted using PyRadiomics from four regions of the GBM. Feature selection was performed using ANOVA F-statistics, and classification was conducted with multiple machine-learning models. Model interpretability was evaluated through SHAP analysis, clustering analysis, feature significance profiling, and comparison between radiomic patterns and underlying biological processes. Spherical radiomics consistently outperformed conventional 2D and 3D Cartesian radiomics across all prediction tasks. The best framework reached an AUC of 0.85 for MGMT, 0.80 for EGFR, 0.80 for PTEN, and 0.83 for survival prediction. GLCM-derived features were identified as the most informative predictors. Radial transition analysis using the Mann-Whitney U-test demonstrated that transition slopes between T1-weighted contrast-enhancing and T2/FLAIR hyperintense lesion regions, as well as between the T2 hyperintense lesion and a 2 cm peritumoral expansion region, were significantly associated with biomarker status. Furthermore, the observed radiomic changes along the radial direction closely reflected known biological characteristics. Radiomic features extracted on spherical surfaces at varying radial distances from the GBM tumor centroid are better correlated with important tumor molecular markers and patient survival than conventional Cartesian analysis.
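The core geometric idea, binning voxels into concentric shells around the tumor centroid so features can be computed per shell rather than on a Cartesian grid, can be sketched in a few lines. The array, shell count, and per-shell statistic here are arbitrary stand-ins, not the paper's implementation.

```python
# Toy sketch of concentric-shell binning: each voxel is assigned a shell by
# its radial distance from the centroid, and a per-shell statistic (here the
# mean intensity) stands in for per-shell radiomics features.
import numpy as np

volume = np.random.default_rng(1).random((32, 32, 32))   # stand-in tumor ROI
centroid = np.array(volume.shape) / 2.0
zyx = np.indices(volume.shape).reshape(3, -1).T           # voxel coordinates
radii = np.linalg.norm(zyx - centroid, axis=1)

n_shells = 8
edges = np.linspace(0.0, radii.max() + 1e-9, n_shells + 1)
shell_index = np.digitize(radii, edges) - 1               # shell id per voxel

shell_means = [volume.ravel()[shell_index == s].mean() for s in range(n_shells)]
```

In the paper's framework, each shell would instead be unwrapped onto a 2D plane and fed to a texture-feature extractor such as PyRadiomics.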

BARON, M., Nguyen, Q., Kovacina, B., van Eeden, C., Langs, G.

medrxiv preprint · Oct 15 2025
Background: Artificial intelligence can analyse high-resolution CT (HRCT) lung scans in various interstitial lung diseases (ILD), including systemic sclerosis (SSc). Older HRCT lung scans may have been saved as small DICOM file sets consisting of non-contiguous slices, which are not amenable to AI analysis. Objectives: Our aim was to develop and test a method of rebuilding small non-contiguous sets of HRCT lung slices into larger sets of contiguous slices that could be analysed by AI programs. Methods: From 14 large DICOM file set scans from SSc patients, we deleted sets of DICOM files and were left with scans of about 30 equidistant non-contiguous slices. We then inserted copies of slices between each pair of remaining slices to create a large DICOM file set similar in size to the original scan. Both the original scan and the rebuilt large DICOM file set scan were analysed by Contextflow ADVANCE Chest CT. We recorded the values for honeycombing (HC), reticular pattern (RP), ground glass opacities (GGO), and total ILD. We analysed agreement between the original scan and the rebuilt large file set scan using the intraclass correlation coefficient (ICC), Lin's concordance correlation coefficient (CCC), Bland-Altman limits-of-agreement (LOA) plots, and the Bradley-Blackwood p value. Results: ICC, CCC, Bradley-Blackwood p values, and Bland-Altman plots showed excellent agreement between scans for HC, RP, GGO, and total ILD, except for the Bradley-Blackwood p value for RP. Conclusions: Small non-contiguous HRCT lung scans in SSc can be manipulated to allow analysis by AI.
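Two of the agreement statistics used above have compact closed forms: Lin's CCC and the Bland-Altman limits of agreement. The toy arrays below stand in for the original-scan versus rebuilt-scan measurements; they are not the study's data.

```python
# Agreement statistics in compact form (toy data, illustrative only):
# Lin's concordance correlation coefficient and Bland-Altman limits.
import numpy as np

def lins_ccc(x, y):
    """Lin's CCC: 2*cov(x,y) / (var(x) + var(y) + (mean(x) - mean(y))**2)."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    cov = np.cov(x, y, ddof=1)[0, 1]
    return 2 * cov / (x.var(ddof=1) + y.var(ddof=1) + (x.mean() - y.mean()) ** 2)

def bland_altman_loa(x, y):
    """Mean difference and 95% limits of agreement: mean_diff +/- 1.96*sd."""
    d = np.asarray(x, float) - np.asarray(y, float)
    return d.mean(), d.mean() - 1.96 * d.std(ddof=1), d.mean() + 1.96 * d.std(ddof=1)

original = np.array([10.2, 5.1, 7.8, 3.3, 12.0])   # made-up % values
rebuilt  = np.array([10.0, 5.3, 7.7, 3.5, 11.8])
ccc = lins_ccc(original, rebuilt)
bias, lo, hi = bland_altman_loa(original, rebuilt)
```

Unlike plain Pearson correlation, the CCC penalizes both scale and location shifts, which is why it is preferred for method-agreement studies like this one.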

Nejati, S. F., Sadabad, F. E., Ren, R., Huang, Y., Bini, J.

medrxiv preprint · Oct 15 2025
Objective: To determine whether combining PET-derived beta-cell mass (BCM) estimates with MRI-based morphology metrics improves the prediction of beta-cell functional mass in type 2 diabetes (T2D). Methods: We performed a retrospective analysis of 40 participants (19 with T2D, 16 healthy obese volunteers (HOV), and 5 with prediabetes) who underwent [18F]FP-(+)-DTBZ PET to quantify vesicular monoamine transporter type 2 (VMAT2) density (SUVR-1), T1-weighted MRI for 3D morphology metric analysis, and an arginine stimulus test to measure acute (AIRarg) and maximum (AIRargMAX) insulin responses. Lasso regression models identified the optimal combination of PET, MRI, and clinical variables to predict beta-cell function for the whole pancreas and its subregions. Results: Compared to HOV, individuals with T2D exhibited significantly reduced AIRarg and AIRargMAX. Only pancreas body volume was significantly smaller in the T2D cohort. For the whole pancreas, a model including PET-derived SUVR-1 and a subset of clinical covariates best predicted acute beta-cell function (AIRarg). Predicting maximum functional reserve (AIRargMAX), however, required the addition of MRI-based morphology metrics to SUVR-1 and the clinical covariates. Conclusion: We combined PET imaging of BCM and MRI morphology metrics with a robust machine learning-based variable selection method to extract useful PET- and MRI-based metrics for predicting functional and not-fully-functional BCM. This synergistic approach offers a novel combination of biomarkers for staging disease and evaluating therapeutic interventions.

Asif S, Ou D, Hadi F, Yan Y, Wang E, Zhang Y, Xu D

pubmed · Oct 14 2025
Despite advances in deep learning (DL) and computer vision, breast cancer (BC) detection via ultrasound remains challenging. Existing methods often focus on single tasks using complex pipelines and publicly available datasets, limiting clinical applicability. To address this, we propose BreastUS-Net, a novel architecture for hierarchical BC classification using diverse datasets. Our approach uses a dual-branch MobileNet architecture with fine-tuned and frozen layers to capture both task-specific and general features, eliminating manual feature extraction. These features are fused to create a comprehensive representation, which is then aggregated and refined. The aggregation step merges the outputs from both branches, while the refinement module reduces complexity, highlights relevant patterns, and mitigates overfitting to improve generalization. Additionally, we integrate a multihead self-attention (MHSA) block to highlight diagnostically significant regions in ultrasound images, enhancing both accuracy and robustness. Finally, the orthogonal softmax layer (OSL) boosts discriminative power by enforcing orthogonality among weight vectors, reducing parameter co-adaptation and enabling more effective optimization. We used six diverse datasets from multiple centers: a large Zhejiang Cancer Hospital set (2,171 images), the public BUSI dataset (780 images), external test sets from Yunnan Cancer Hospital (351 images) and Sir Run Run Shaw Hospital (365 images), a fibroadenoma (FA) vs. phyllodes tumor (PT) classification set, and a PT grading dataset. We use explainable AI (XAI) techniques (Grad-CAM, SHAP, and saliency maps) to enhance trust in breast ultrasound predictions. Our model achieves state-of-the-art performance, with accuracies of 94.48% on a clinical dataset and 94.23% on the BUSI dataset, highlighting its potential to improve BC diagnosis and personalized treatment.
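The orthogonality idea behind an orthogonal softmax layer can be illustrated with a simple penalty: push the Gram matrix of the class weight vectors toward a diagonal. This NumPy sketch is a generic stand-in for the paper's OSL, not its actual formulation.

```python
# Minimal sketch of a weight-orthogonality penalty: the off-diagonal entries
# of W @ W.T measure how non-orthogonal the class weight vectors are.
import numpy as np

def orthogonality_penalty(W):
    """Frobenius norm of the off-diagonal part of W @ W.T."""
    gram = W @ W.T
    off_diag = gram - np.diag(np.diag(gram))
    return float(np.linalg.norm(off_diag))

orthogonal_W = np.eye(3)                                  # orthogonal rows
random_W = np.random.default_rng(0).normal(size=(3, 8))   # generic weights
```

During training, such a penalty would be added to the classification loss, discouraging the co-adaptation of weight vectors that the abstract mentions.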

Khairy P, Fuentes Rojas S, Hermann Honfo S

pubmed · Oct 14 2025
Sudden cardiac death (SCD) remains a feared and difficult-to-predict outcome in patients with congenital heart disease (CHD). This review examines the latest evidence in risk stratification, with a focus on limitations of existing models and the mechanistic and statistical complexities that hinder individualized decision-making. New multivariable risk scores for repaired tetralogy of Fallot and systemic right ventricle have improved prognostic resolution. Artificial intelligence-enabled ECG algorithms have shown promise in early identification of high-risk individuals with repaired tetralogy of Fallot. In parallel, three-dimensional cardiac magnetic resonance imaging has been leveraged to delineate arrhythmogenic isthmuses, enhancing substrate-guided interventions. While these tools enhance risk estimation, they require validation specific to the prediction of shockable terminal rhythms, improved interpretability, and integration into individualized decision frameworks. SCD risk prediction in CHD is evolving toward a multimodal, individualized approach that emphasizes probabilistic reasoning, shared decision-making, and epistemic humility. Although new models and technologies offer incremental gains, they do not eliminate the uncertainty inherent in predicting rare events. The application of population-based tools to individual patients must be interpreted cautiously, recognizing that SCD represents a final common pathway for diverse pathophysiological processes, and that decisions about ICD implantation entail complex trade-offs.

Lin H, Song Y, Su Y, Ma Y

pubmed · Oct 14 2025
Deformable image registration aims to achieve nonlinear alignment of image spaces by estimating dense displacement fields. It is widely used in clinical tasks such as surgical planning, assisted diagnosis, and surgical navigation. While efficient, deep learning registration methods often struggle with large, complex displacements. Pyramid-based approaches address this with a coarse-to-fine strategy, but their single-feature processing can lead to error accumulation. In this paper, we introduce a dense Mixture of Experts (MoE) pyramid registration model that uses routing schemes and multiple heterogeneous experts to increase the width and flexibility of feature processing within a single layer. The collaboration among heterogeneous experts enables the model to retain more precise details and maintain greater feature freedom when dealing with complex displacements. Only deformation fields are passed between pyramid levels, and their interaction across layers encourages the model to focus on the feature-location matching process and to register in the correct direction. We do not use any complex mechanisms such as attention or ViT, keeping the model in its simplest form. Its powerful deformable capability allows the model to perform volume registration directly and accurately without the need for affine pre-registration. Experimental results show that the model achieves outstanding performance across four public datasets, covering brain registration, lung registration, and abdominal multi-modal registration. The code will be published at https://github.com/Darlinglinlinlin/MOE_Morph.
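The mixture-of-experts mechanism at the heart of this design can be sketched generically: a softmax router weights several expert functions and combines their outputs. NumPy and toy expert functions stand in here for the paper's learned convolutional branches; nothing below reflects the actual MOE_Morph code.

```python
# Toy sketch of dense MoE combination: softmax routing weights over several
# heterogeneous experts, followed by a weighted sum of their outputs.
import numpy as np

def softmax(z):
    z = z - z.max()                      # numerical stability
    e = np.exp(z)
    return e / e.sum()

def moe_combine(x, experts, gate_logits):
    """Weighted sum of expert outputs under softmax routing weights."""
    weights = softmax(np.asarray(gate_logits, float))
    outputs = np.stack([f(x) for f in experts])
    return np.tensordot(weights, outputs, axes=1)

x = np.arange(6.0).reshape(2, 3)         # stand-in feature map
experts = [lambda a: a, lambda a: 2 * a, lambda a: a ** 2]
y = moe_combine(x, experts, gate_logits=[0.0, 0.0, 0.0])  # equal routing
```

With equal logits each expert gets weight 1/3, so the output is the plain average of the expert outputs; a learned router would instead sharpen the weights per input.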

Dole L, Mattos CT, Bianchi J, Oh H, Evangelista K, Valladares Neto J, Mota-Júnior SL, Cevidanes L, Prieto JC

pubmed · Oct 14 2025
Enlarged adenoids that obstruct nasal breathing can cause significant health complications, including cognitive deficits, cardiovascular risks, and developmental delays. Early and accurate diagnosis is critical for effective treatment planning, but current diagnostic methods, such as polysomnography and clinical visual inspection, are either time-consuming, expensive, or insufficiently accurate. As cone-beam computed tomography (CBCT) scans are frequently available for these patients and may complement diagnosis, we propose an open-source, automated deep learning tool for quantitative airway obstruction assessment. Our method leverages CBCT scans, which are automatically segmented and processed to extract 3D airway morphology. Our approach combines two advanced techniques for 3D shape analysis, multi-view and point cloud representations, to capture both global and local airway features, enhancing classification and regression performance. Our model achieves an accuracy of 81.88% in classifying the presence or absence of adenoid hypertrophy and demonstrates improved performance in predicting the nasopharynx airway obstruction ratio. While the model performs well in detecting severe cases, further refinement is needed to improve classification and regression across all severity levels. This tool has the potential to enhance clinical workflows by providing rapid, quantitative, and reproducible assessments of airway obstruction, offering a promising solution for improving diagnostic efficiency and patient outcomes in clinical practice.
