Page 42 of 3993982 results

Diagnostic Performance of Imaging-Based Artificial Intelligence Models for Preoperative Detection of Cervical Lymph Node Metastasis in Clinically Node-Negative Papillary Thyroid Carcinoma: A Systematic Review and Meta-Analysis.

Li B, Cheng G, Mo Y, Dai J, Cheng S, Gong S, Li H, Liu Y

PubMed · Aug 4, 2025
This systematic review and meta-analysis evaluated the performance of imaging-based artificial intelligence (AI) models in diagnosing preoperative cervical lymph node metastasis (LNM) in clinically node-negative (cN0) papillary thyroid carcinoma (PTC). We conducted a literature search in PubMed, Embase, and Web of Science through February 25, 2025, selecting studies that focused on imaging-based AI models for predicting cervical LNM in cN0 PTC. Diagnostic performance metrics were analyzed using a bivariate random-effects model, and study quality was assessed with the QUADAS-2 tool. From 671 articles, 11 studies involving 3366 patients were included. Ultrasound (US)-based AI models showed a pooled sensitivity of 0.79 and specificity of 0.82, significantly higher than those of radiologists (p < 0.001). CT-based AI models demonstrated a sensitivity of 0.78 and specificity of 0.89. Imaging-based AI models, particularly US-based AI, show promising diagnostic performance, but further multicenter prospective studies are needed for validation. PROSPERO registration: CRD420251063416.
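A bivariate random-effects model pools sensitivity and specificity jointly; as a simplified, hypothetical illustration of the pooling idea only, the sketch below applies univariate DerSimonian-Laird pooling to logit-transformed per-study sensitivities (the study counts are invented, not taken from the review):

```python
import numpy as np

def pool_logit_dl(tp, fn):
    """DerSimonian-Laird random-effects pooling of per-study
    sensitivities on the logit scale (with continuity correction)."""
    tp = np.asarray(tp, float) + 0.5          # continuity correction
    fn = np.asarray(fn, float) + 0.5
    logit = np.log(tp / fn)                   # logit(sensitivity) per study
    var = 1.0 / tp + 1.0 / fn                 # within-study variance
    w = 1.0 / var                             # fixed-effect weights
    mu_fe = np.sum(w * logit) / np.sum(w)
    q = np.sum(w * (logit - mu_fe) ** 2)      # Cochran's Q
    df = len(logit) - 1
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (q - df) / c)             # between-study variance
    w_re = 1.0 / (var + tau2)                 # random-effects weights
    mu_re = np.sum(w_re * logit) / np.sum(w_re)
    return 1.0 / (1.0 + np.exp(-mu_re))      # back-transform to a proportion

# three hypothetical studies: true-positive / false-negative counts
pooled = pool_logit_dl([40, 55, 30], [10, 15, 9])
print(round(pooled, 3))
```

This is not the bivariate Reitsma model used in the paper (which also models the sensitivity-specificity correlation), but it shows how per-study counts are weighted into a pooled estimate.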

A Novel Dual-Output Deep Learning Model Based on InceptionV3 for Radiographic Bone Age and Gender Assessment.

Rayed B, Amasya H, Sezdi M

PubMed · Aug 4, 2025
Hand-wrist radiographs are used in bone age prediction. Computer-assisted clinical decision support systems offer solutions to the limitations of radiographic bone age assessment methods. In this study, a multi-output prediction model was designed to predict bone age and gender from digital hand-wrist radiographs. The InceptionV3 architecture was used as the backbone, and the model was trained and tested on the open-access dataset of the 2017 RSNA Pediatric Bone Age Challenge. A total of 14,048 samples were divided into training, validation, and testing subsets at a ratio of 7:2:1, and additional specialized convolutional neural network layers, such as a Squeeze-and-Excitation block, were implemented for robust feature management. The proposed model achieved a mean squared error of approximately 25 and a mean absolute error of 3.1 for predicting bone age. In gender classification, an accuracy of 95% and an area under the curve of 97% were achieved. The intra-class correlation coefficient for the continuous bone age predictions was 0.997, while Cohen's κ coefficient for the gender predictions was 0.898 (p < 0.001). The proposed model aims to increase model efficiency by identifying common and discrete features. Based on the results, the proposed algorithm is promising; however, its mid- to high-end hardware requirement may limit its use on local machines in the clinic. Future studies may consider increasing the dataset size and simplifying the algorithms.
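The gender head is evaluated with Cohen's κ, the chance-corrected agreement between predicted and true labels; a minimal sketch of how that statistic is computed (the paired labels below are toy data, not the study's):

```python
import numpy as np

def cohens_kappa(y_true, y_pred):
    """Cohen's kappa: observed agreement corrected for the agreement
    expected by chance from the two marginal label distributions."""
    y_true = np.asarray(y_true)
    y_pred = np.asarray(y_pred)
    po = np.mean(y_true == y_pred)            # observed agreement
    pe = 0.0
    for c in np.unique(np.concatenate([y_true, y_pred])):
        pe += np.mean(y_true == c) * np.mean(y_pred == c)  # chance agreement
    return (po - pe) / (1 - pe)

# toy check: 9/10 agreement on a balanced binary label
truth = [0, 0, 0, 0, 0, 1, 1, 1, 1, 1]
pred  = [0, 0, 0, 0, 1, 1, 1, 1, 1, 1]
print(round(cohens_kappa(truth, pred), 2))  # 0.8
```

A κ of 0.898, as reported, indicates near-perfect agreement after discounting chance.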

Development and Validation of an Explainable MRI-Based Habitat Radiomics Model for Predicting p53-Abnormal Endometrial Cancer: A Multicentre Feasibility Study.

Jin W, Zhang H, Ning Y, Chen X, Zhang G, Li H, Zhang H

PubMed · Aug 4, 2025
We developed an MRI-based habitat radiomics model (HRM) to predict the p53-abnormal (p53abn) molecular subtype of endometrial cancer (EC). Patients with pathologically confirmed EC were retrospectively enrolled from three hospitals and categorized into a training cohort (n = 270), test cohort 1 (n = 70), and test cohort 2 (n = 154). The tumour was divided into habitat sub-regions by applying the K-means algorithm to diffusion-weighted imaging (DWI) and contrast-enhanced (CE) images. Radiomics features were extracted from T1-weighted imaging (T1WI), T2-weighted imaging (T2WI), DWI, and CE images. Three machine learning classifiers (logistic regression, support vector machines, and random forests) were applied to develop predictive models for p53abn EC. Model performance was validated using receiver operating characteristic (ROC) curves, and the model with the best predictive performance was selected as the HRM. A whole-region radiomics model (WRM) was also constructed, and a clinical model (CM) with five clinical features was developed. The SHapley Additive exPlanations (SHAP) method was used to explain the model outputs, and DeLong's test was used to compare performance across the cohorts. A total of 1920 habitat radiomics features were considered. Eight features were selected for the HRM, ten for the WRM, and three clinical features for the CM. The HRM achieved the highest AUCs: 0.855 (training), 0.769 (test 1), and 0.766 (test 2). The AUCs of the WRM were 0.707 (training), 0.703 (test 1), and 0.738 (test 2); those of the CM were 0.709 (training), 0.641 (test 1), and 0.665 (test 2). The MRI-based HRM successfully predicted p53abn EC. The results indicate that habitat analysis combined with machine learning, radiomics, and SHAP can effectively predict p53abn EC, providing clinicians with intuitive, interpretable insight into the impact of risk factors in the model.
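The habitat step assigns each tumour voxel to a sub-region by clustering its multi-parametric intensities; a minimal sketch with scikit-learn's K-means on simulated DWI/CE voxel pairs (the cluster count, intensity values, and sub-region interpretations are illustrative, not from the paper):

```python
import numpy as np
from sklearn.cluster import KMeans

# Each tumour voxel carries one DWI and one CE intensity; K-means groups
# voxels with similar signal profiles into habitat sub-regions.
rng = np.random.default_rng(0)
voxels = np.vstack([
    rng.normal([1.0, 0.2], 0.05, (100, 2)),   # e.g. cellular, poorly enhanced
    rng.normal([0.4, 0.9], 0.05, (100, 2)),   # e.g. well-perfused rim
    rng.normal([0.1, 0.1], 0.05, (100, 2)),   # e.g. necrotic core
])
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(voxels)
print(len(np.unique(labels)))  # 3 habitat sub-regions
```

Radiomics features are then extracted per sub-region rather than from the whole tumour, which is what distinguishes the HRM from the whole-region WRM.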

Deep Learning Reconstruction for T2 Weighted Turbo-Spin-Echo Imaging of the Pelvis: Prospective Comparison With Standard T2-Weighted TSE Imaging With Respect to Image Quality, Lesion Depiction, and Acquisition Time.

Sussman MS, Cui L, Tan SBM, Prasla S, Wah-Kahn T, Nickel D, Jhaveri KS

PubMed · Aug 4, 2025
In pelvic MRI, turbo spin echo (TSE) pulse sequences are used for T2-weighted imaging; however, their lengthy acquisition time increases the potential for artifacts. Deep learning (DL) reconstruction achieves reduced scan times without the image-quality degradation associated with other accelerated techniques. Unfortunately, a comprehensive assessment of DL reconstruction in pelvic MRI has not been performed. The objective of this prospective study was to compare the performance of DL-TSE and conventional TSE pulse sequences across a broad spectrum of pelvic MRI indications. Fifty-five subjects (33 females and 22 males) were scanned at 3 T using DL-TSE and conventional TSE sequences in axial and/or oblique acquisition planes. Two radiologists independently assessed image quality in six categories: edge definition, vessel margin sharpness, T2 contrast dynamic range, artifacts, overall image quality, and lesion features. The contrast ratio was calculated for quantitative assessment, and a two-tailed sign test was used for statistical comparison. The two readers found DL-TSE to deliver image quality equal or superior to conventional TSE in most cases; in only 3 of 24 instances was conventional TSE scored as providing better image quality. Readers agreed on DL-TSE superiority, inferiority, or equivalence in 67% of categories in the axial plane and 75% in the oblique plane. DL-TSE also demonstrated a better contrast ratio in 75% of cases and reduced scan time by approximately 50%. Overall, DL-accelerated TSE sequences generally provide equal or better image quality in pelvic MRI than standard TSE, with significantly reduced acquisition times.
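The two-tailed sign test used here reduces each paired reading to a preference and tests the win/loss counts against a fair coin, discarding ties; a sketch with SciPy's exact binomial test (the preference counts below are invented, not the study's data):

```python
from scipy.stats import binomtest

# Paired reader preferences per case: +1 when DL-TSE scored higher,
# -1 when conventional TSE scored higher, 0 for a tie (dropped).
prefs = [+1] * 18 + [-1] * 3 + [0] * 4        # 25 hypothetical paired ratings
wins = sum(p > 0 for p in prefs)
losses = sum(p < 0 for p in prefs)

# Two-tailed sign test: are 18 wins out of 21 decisive pairs compatible
# with a 50/50 split?
result = binomtest(wins, wins + losses, p=0.5, alternative="two-sided")
print(round(result.pvalue, 4))
```

A small p-value indicates one sequence is systematically preferred across cases, without assuming anything about the rating scale.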

Incorporating Artificial Intelligence into Fracture Risk Assessment: Using Clinical Imaging to Predict the Unpredictable.

Kong SH

PubMed · Aug 4, 2025
Artificial intelligence (AI) is increasingly being explored as a complementary tool to traditional fracture risk assessment methods. Conventional approaches, such as bone mineral density measurement and established clinical risk calculators, provide population-level stratification but often fail to capture the structural nuances of bone fragility. Recent advances in AI, particularly deep learning techniques applied to imaging, enable opportunistic screening and individualized risk estimation using routinely acquired radiographs and computed tomography (CT) data. These models demonstrate improved discrimination for osteoporotic fracture detection and risk prediction, supporting applications such as time-to-event modeling and short-term prognosis. CT- and radiograph-based models have shown superiority over conventional metrics in diverse cohorts, while innovations like multitask learning and survival plots contribute to enhanced interpretability and patient-centered communication. Nevertheless, challenges related to model generalizability, data bias, and automation bias persist. Successful clinical integration will require rigorous external validation, transparent reporting, and seamless embedding into electronic medical systems. This review summarizes recent advances in AI-driven fracture assessment, critically evaluates their clinical promise, and outlines a roadmap for translation into real-world practice.

Glioblastoma Overall Survival Prediction With Vision Transformers

Yin Lin, Riccardo Barbieri, Domenico Aquino, Giuseppe Lauria, Marina Grisoli, Elena De Momi, Alberto Redaelli, Simona Ferrante

arXiv preprint · Aug 4, 2025
Glioblastoma is one of the most aggressive and common brain tumors, with a median survival of 10-15 months. Predicting overall survival (OS) is critical for personalizing treatment strategies and aligning clinical decisions with patient outcomes. In this study, we propose a novel artificial intelligence (AI) approach for OS prediction from magnetic resonance imaging (MRI), exploiting Vision Transformers (ViTs) to extract hidden features directly from MRI images, eliminating the need for tumor segmentation. Unlike traditional approaches, our method simplifies the workflow and reduces computational resource requirements. The proposed model was evaluated on the BraTS dataset, reaching an accuracy of 62.5% on the test set, comparable to the top-performing methods, and demonstrated balanced performance across precision, recall, and F1 score, outperforming the best model on these metrics. The dataset size limits the generalization of the ViT, which typically requires larger datasets than convolutional neural networks; this limitation is observed across all the cited studies. This work highlights the applicability of ViTs to downsampled medical imaging tasks and establishes a foundation for OS prediction models that are computationally efficient and do not rely on segmentation.
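A ViT consumes the whole image as a sequence of flattened patches, which is why no tumor segmentation mask is needed; a minimal NumPy sketch of the patch-tokenisation and linear embedding step (image size, patch size, and embedding width are illustrative, not the paper's configuration):

```python
import numpy as np

def patchify(img, p):
    """Split an HxW image into non-overlapping pxp patches and flatten
    each patch into one token, the first step of a ViT encoder."""
    h, w = img.shape
    return (img.reshape(h // p, p, w // p, p)
               .transpose(0, 2, 1, 3)          # (rows, cols, p, p)
               .reshape(-1, p * p))            # one flat token per patch

rng = np.random.default_rng(0)
mri_slice = rng.normal(size=(224, 224))        # toy single-channel MRI slice
tokens = patchify(mri_slice, 16)               # 14 x 14 = 196 tokens of 256 values
embed = tokens @ rng.normal(size=(256, 128))   # toy linear patch embedding
print(tokens.shape, embed.shape)               # (196, 256) (196, 128)
```

The transformer's self-attention then operates over all 196 tokens, letting tumor and non-tumor regions interact without an explicit delineation step.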

Function of ¹⁸F-FDG PET/CT radiomics in the detection of checkpoint inhibitor-induced liver injury (CHILI).

Huigen CMC, Coukos A, Latifyan S, Nicod Lalonde M, Schaefer N, Abler D, Depeursinge A, Prior JO, Fraga M, Jreige M

PubMed · Aug 4, 2025
In the last decade, immunotherapy, particularly immune checkpoint inhibitors, has revolutionized cancer treatment and improved prognosis. However, severe checkpoint inhibitor-induced liver injury (CHILI), which can lead to treatment discontinuation or death, occurs in up to 18% of patients. The aim of this study was to evaluate the value of PET/CT radiomics analysis for the detection of CHILI. Patients with CHILI grade 2 or higher who underwent liver function tests and liver biopsy were retrospectively included; minors, patients with cognitive impairments, and patients with viral infections were excluded. The liver and spleen were contoured on the anonymized PET/CT imaging data, followed by radiomics feature extraction. Principal component analysis (PCA) and Bonferroni correction were used for statistical analysis and exploration of radiomics features related to CHILI. Sixteen patients were included, and 110 radiomics features were extracted from PET images. Liver PCA-5 and one associated feature showed significance but did not remain significant after Bonferroni correction. Spleen PCA-5 differed significantly between CHILI and non-CHILI patients even after Bonferroni correction, possibly reflecting the spleen's higher metabolic activity in autoimmune conditions due to the recruitment of immune cells. This pilot study identified statistically significant differences in PET-derived radiomics features of the spleen, and observable changes in the liver, on PET/CT scans before and after the onset of CHILI. Identifying these features could aid in diagnosing or predicting CHILI, potentially enabling personalized treatment. Larger multicenter prospective studies are needed to confirm these findings and to develop automated detection methods.
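The analysis projects the 110 radiomics features onto principal components and tests each component between CHILI and non-CHILI groups under a Bonferroni-corrected threshold; a simulated sketch of that pipeline (the cohort split, feature values, and injected group effect are all invented for illustration):

```python
import numpy as np
from scipy.stats import ttest_ind
from sklearn.decomposition import PCA

# Hypothetical 110 features for 16 patients, matching the abstract's
# dimensions; a group difference is injected into the first 10 features.
rng = np.random.default_rng(1)
X = rng.normal(size=(16, 110))
chili = np.array([1] * 8 + [0] * 8)            # toy CHILI / non-CHILI labels
X[chili == 1, :10] += 3.0                      # simulated group effect

scores = PCA(n_components=5).fit_transform(X)  # PCA-1 ... PCA-5 scores
pvals = [ttest_ind(scores[chili == 1, i], scores[chili == 0, i]).pvalue
         for i in range(5)]
alpha = 0.05 / len(pvals)                      # Bonferroni over 5 components
print([p < alpha for p in pvals])
```

Testing a handful of components instead of all 110 raw features keeps the Bonferroni penalty manageable in such a small cohort.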

Accurate and Interpretable Postmenstrual Age Prediction via Multimodal Large Language Model

Qifan Chen, Jin Cui, Cindy Duan, Yushuo Han, Yifei Shi

arXiv preprint · Aug 4, 2025
Accurate estimation of postmenstrual age (PMA) at scan is crucial for assessing neonatal development and health. While deep learning models have achieved high accuracy in predicting PMA from brain MRI, they often function as black boxes, offering limited transparency and interpretability in clinical decision support. In this work, we address the dual challenge of accuracy and interpretability by adapting a multimodal large language model (MLLM) to perform both precise PMA prediction and clinically relevant explanation generation. We introduce a parameter-efficient fine-tuning (PEFT) strategy using instruction tuning and Low-Rank Adaptation (LoRA) applied to the Qwen2.5-VL-7B model. The model is trained on four 2D cortical surface projection maps derived from neonatal MRI scans. By employing distinct prompts for training and inference, our approach enables the MLLM to handle a regression task during training and generate clinically relevant explanations during inference. The fine-tuned model achieves a low prediction error with a 95 percent confidence interval of 0.78 to 1.52 weeks, while producing interpretable outputs grounded in developmental features, marking a significant step toward transparent and trustworthy AI systems in perinatal neuroscience.
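LoRA freezes the pretrained weight and learns only a low-rank update, so the adapted layer computes W + (α/r)·BA with trainable factors A and B; a minimal NumPy sketch (dimensions and scaling are illustrative, not Qwen2.5-VL's) showing why the standard zero initialization of B makes the adapter a no-op before training:

```python
import numpy as np

d_in, d_out, r, alpha = 64, 32, 4, 8           # toy layer and LoRA rank
rng = np.random.default_rng(0)
W = rng.normal(size=(d_out, d_in))             # frozen pretrained weight
A = rng.normal(size=(r, d_in)) * 0.01          # trainable, small random init
B = np.zeros((d_out, r))                       # trainable, zero init

x = rng.normal(size=(d_in,))
base = W @ x
adapted = (W + (alpha / r) * B @ A) @ x        # W plus the low-rank update
print(np.allclose(base, adapted))              # True: adapter starts as identity
```

Only A and B (r·(d_in + d_out) parameters) are updated during fine-tuning, which is what makes the approach parameter-efficient relative to full fine-tuning of W.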

A Dual Radiomic and Dosiomic Filtering Technique for Locoregional Radiation Pneumonitis Prediction in Breast Cancer Patients

Zhenyu Yang, Qian Chen, Rihui Zhang, Manju Liu, Fengqiu Guo, Minjie Yang, Min Tang, Lina Zhou, Chunhao Wang, Minbin Chen, Fang-Fang Yin

arXiv preprint · Aug 4, 2025
Purpose: Radiation pneumonitis (RP) is a serious complication of intensity-modulated radiation therapy (IMRT) for breast cancer patients, underscoring the need for precise and explainable predictive models. This study presents an Explainable Dual-Omics Filtering (EDOF) model that integrates spatially localized dosiomic and radiomic features for voxel-level RP prediction. Methods: A retrospective cohort of 72 breast cancer patients treated with IMRT was analyzed, including 28 who developed RP. The EDOF model consists of two components: (1) dosiomic filtering, which extracts local dose intensity and spatial distribution features from planning dose maps, and (2) radiomic filtering, which captures texture-based features from pre-treatment CT scans. These features are jointly analyzed using the Explainable Boosting Machine (EBM), a transparent machine learning model that enables feature-specific risk evaluation. Model performance was assessed using five-fold cross-validation, reporting area under the curve (AUC), sensitivity, and specificity. Feature importance was quantified by mean absolute scores, and Partial Dependence Plots (PDPs) were used to visualize nonlinear relationships between RP risk and dual-omic features. Results: The EDOF model achieved strong predictive performance (AUC = 0.95 ± 0.01; sensitivity = 0.81 ± 0.05). The most influential features included dosiomic Intensity Mean, dosiomic Intensity Mean Absolute Deviation, and radiomic SRLGLE (short-run low gray-level emphasis). PDPs revealed that RP risk increases beyond 5 Gy and rises sharply between 10 and 30 Gy, consistent with clinical dose thresholds. SRLGLE also captured structural heterogeneity linked to RP in specific lung regions. Conclusion: The EDOF framework enables spatially resolved, explainable RP prediction and may support personalized radiation planning to mitigate pulmonary toxicity.

S-RRG-Bench: Structured Radiology Report Generation with Fine-Grained Evaluation Framework

Yingshu Li, Yunyi Liu, Zhanyu Wang, Xinyu Liang, Lingqiao Liu, Lei Wang, Luping Zhou

arXiv preprint · Aug 4, 2025
Radiology report generation (RRG) for diagnostic images, such as chest X-rays, plays a pivotal role in both clinical practice and AI. Traditional free-text reports suffer from redundancy and inconsistent language, complicating the extraction of critical clinical details. Structured radiology report generation (S-RRG) offers a promising solution by organizing information into standardized, concise formats. However, existing approaches often rely on classification or visual question answering (VQA) pipelines that require predefined label sets and produce only fragmented outputs. Template-based approaches, which generate reports by replacing keywords within fixed sentence patterns, further compromise expressiveness and often omit clinically important details. In this work, we present a novel approach to S-RRG that includes dataset construction, model training, and the introduction of a new evaluation framework. We first create a robust chest X-ray dataset (MIMIC-STRUC) that includes disease names, severity levels, probabilities, and anatomical locations, ensuring that the dataset is both clinically relevant and well-structured. We train an LLM-based model to generate standardized, high-quality reports. To assess the generated reports, we propose a specialized evaluation metric (S-Score) that not only measures disease prediction accuracy but also evaluates the precision of disease-specific details, thus offering a clinically meaningful metric for report quality that focuses on elements critical to clinical decision-making and demonstrates a stronger alignment with human assessments. Our approach highlights the effectiveness of structured reports and the importance of a tailored evaluation metric for S-RRG, providing a more clinically relevant measure of report quality.
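The S-Score itself is defined in the paper; as a hedged illustration of the general idea of scoring structured reports, the sketch below gives full credit for detecting a disease and partial credit for matching its attributes (the scoring scheme, field names, and example findings are invented, not the paper's metric):

```python
def field_score(pred, ref, attrs=("severity", "location")):
    """Toy structured-report score: 1 point per reference disease
    detected, plus up to 1 point for matching its attributes,
    normalised to [0, 1]."""
    score = 0.0
    for disease, ref_fields in ref.items():
        if disease not in pred:
            continue                               # missed finding: no credit
        score += 1.0                               # disease detected
        matched = sum(pred[disease].get(a) == ref_fields.get(a) for a in attrs)
        score += matched / len(attrs)              # attribute-level credit
    return score / (2 * len(ref))

ref = {"effusion": {"severity": "mild", "location": "left"},
       "edema":    {"severity": "moderate", "location": "bilateral"}}
pred = {"effusion": {"severity": "mild", "location": "right"}}
print(field_score(pred, ref))  # 0.375
```

Separating disease detection from attribute accuracy is what lets such a metric penalise a report that names the right disease but misstates its severity or location, which plain text overlap metrics miss.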