Page 7 of 99986 results

Variational autoencoder-based deep learning and radiomics for predicting pathologic complete response to neoadjuvant chemoimmunotherapy in locally advanced esophageal squamous cell carcinoma.

Gu Q, Chen S, Dekker A, Wee L, Kalendralis P, Yan M, Wang J, Yuan J, Jiang Y

PubMed · Sep 25 2025
Neoadjuvant chemoimmunotherapy (nCIT) is becoming an important treatment strategy for patients with locally advanced esophageal squamous cell carcinoma (LA-ESCC). This study aimed to predict pathological complete response (pCR) in these patients using variational autoencoder (VAE)-based deep learning and radiomics. A total of 253 LA-ESCC patients treated with nCIT who underwent enhanced CT at our hospital between July 2019 and July 2023 were included in the training cohort. VAE-based deep learning and radiomics were used to construct deep learning (DL) and deep learning radiomics (DLR) models, which were trained and validated via 5-fold cross-validation among the 253 patients. Forty patients recruited from our institution between August 2023 and August 2024 formed the test cohort. The AUCs of the DL and DLR models were 0.935 (95% confidence interval [CI]: 0.786-0.992) and 0.949 (95% CI: 0.910-0.986) in the validation cohort, and 0.839 (95% CI: 0.726-0.853) and 0.926 (95% CI: 0.886-0.934) in the test cohort. The gap between precision and recall was smaller for the DLR model than for the DL model. The F1 scores of the DL and DLR models were 0.726 (95% CI: 0.476-0.842) and 0.766 (95% CI: 0.625-0.842) in the validation cohort, and 0.727 (95% CI: 0.645-0.811) and 0.836 (95% CI: 0.820-0.850) in the test cohort. We thus constructed a VAE-based DLR model for predicting pCR of LA-ESCC after nCIT, which outperformed the DL-only model.
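The reported AUCs and F1 scores follow from standard binary-classification definitions. As a reference, here is a minimal from-scratch sketch of those metrics; the labels and scores below are illustrative, not study data:

```python
def precision_recall_f1(y_true, y_pred):
    """Binary precision, recall and F1 from hard 0/1 predictions."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1


def auc_score(y_true, y_score):
    """AUC via the rank-sum (Mann-Whitney U) formulation; tied scores get
    mid-ranks. Assumes both classes appear in y_true."""
    pairs = sorted(zip(y_score, y_true))
    ranks = [0.0] * len(pairs)
    i = 0
    while i < len(pairs):                 # assign average rank to each tie group
        j = i
        while j < len(pairs) and pairs[j][0] == pairs[i][0]:
            j += 1
        mid = (i + j + 1) / 2             # 1-based average rank of the group
        for k in range(i, j):
            ranks[k] = mid
        i = j
    pos = sum(t for _, t in pairs)
    neg = len(pairs) - pos
    rank_sum = sum(r for r, (_, t) in zip(ranks, pairs) if t == 1)
    return (rank_sum - pos * (pos + 1) / 2) / (pos * neg)
```

With perfectly separating scores this yields AUC 1.0; with scores identical across classes it yields 0.5, matching the usual chance baseline.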

Artificial intelligence applications in thyroid cancer care.

Pozdeyev N, White SL, Bell CC, Haugen BR, Thomas J

PubMed · Sep 25 2025
Artificial intelligence (AI) has created tremendous opportunities to improve thyroid cancer care. We used the "artificial intelligence thyroid cancer" query to search the PubMed database until May 31, 2025. We highlight a set of high-impact publications selected based on technical innovation, large generalizable training datasets, and independent and/or prospective validation of AI. We review the key applications of AI for diagnosing and managing thyroid cancer. Our primary focus is on using computer vision to evaluate thyroid nodules on thyroid ultrasound, an area of thyroid AI that has gained the most attention from researchers and will likely have a significant clinical impact. We also highlight AI for detecting and predicting thyroid cancer neck lymph node metastases, digital cyto- and histopathology, large language models for unstructured data analysis, patient education, and other clinical applications. We discuss how thyroid AI technology has evolved and cite the most impactful research studies. Finally, we balance our excitement about the potential of AI to improve clinical care for thyroid cancer with current limitations, such as the lack of high-quality, independent prospective validation of AI in clinical trials, the uncertain added value of AI software, unknown performance on non-papillary thyroid cancer types, and the complexity of clinical implementation. AI promises to improve thyroid cancer diagnosis, reduce healthcare costs and enable personalized management. High-quality, independent prospective validation of AI in clinical trials is lacking and is necessary for the clinical community's broad adoption of this technology.

3D gadolinium-enhanced high-resolution near-isotropic pancreatic imaging at 3.0-T MR using deep-learning reconstruction.

Guan S, Poujol J, Gouhier E, Touloupas C, Delpla A, Boulay-Coletta I, Zins M

PubMed · Sep 24 2025
To compare overall image quality, lesion conspicuity, and lesion detectability on 3D-T1w-GRE arterial-phase high-resolution MR images with deep learning reconstruction (3D-DLR) against standard-of-care reconstruction (SOC-Recon) in patients with suspected pancreatic disease. Patients who underwent a pancreatic MR exam with a high-resolution 3D-T1w-GRE arterial-phase acquisition on a 3.0-T MR system between December 2021 and June 2022 at our center were retrospectively included. A new deep learning-based reconstruction algorithm (3D-DLR) was used to additionally reconstruct the arterial-phase images. Two radiologists blinded to the reconstruction type assessed image quality, artifacts, and lesion conspicuity on a Likert scale and counted the lesions. Signal-to-noise ratio (SNR) and lesion contrast-to-noise ratio (CNR) were calculated for each reconstruction. Quantitative data were evaluated using paired t-tests; ordinal data (image quality, artifacts, and lesion conspicuity) were analyzed using paired Wilcoxon tests. Interobserver agreement for image quality and artifact assessment was evaluated using Cohen's kappa. Thirty-two patients (mean age, 62 ± 12 years; 16 female) were included. 3D-DLR significantly improved SNR for each pancreatic segment and lesion CNR compared to SOC-Recon (p < 0.01), and demonstrated a significantly higher average image quality score (3.34 vs 2.68, p < 0.01). 3D-DLR also significantly reduced artifacts compared to SOC-Recon (p < 0.01) for one radiologist. 3D-DLR exhibited significantly higher average lesion conspicuity (2.30 vs 1.85, p < 0.01). Sensitivity increased with 3D-DLR compared to SOC-Recon for both reader 1 and reader 2 (1.00 vs 0.88 and 0.88 vs 0.83), though neither difference was statistically significant (p = 0.62 for both). 3D-DLR images demonstrated higher overall image quality, leading to better lesion conspicuity.
3D deep learning reconstruction can be applied to gadolinium-enhanced pancreatic 3D-T1w arterial-phase high-resolution images without additional acquisition time to further improve image quality and lesion conspicuity. 3D-DLR has not previously been applied to high-resolution pancreatic MRI sequences. The method improves SNR, CNR, and overall 3D-T1w arterial-phase pancreatic image quality, and the enhanced lesion conspicuity may improve pancreatic lesion detectability.
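SNR and CNR here are ROI-based statistics. The abstract does not give the exact operational definitions, so the sketch below assumes one common convention (mean signal intensity over the standard deviation of a background-noise ROI, and absolute lesion-to-tissue contrast over the same noise SD); the pixel values are placeholders:

```python
from statistics import mean, stdev

def snr(signal_roi, noise_roi):
    """ROI-based SNR: mean signal intensity / SD of background noise."""
    return mean(signal_roi) / stdev(noise_roi)

def cnr(lesion_roi, tissue_roi, noise_roi):
    """ROI-based CNR: |mean lesion - mean tissue| / SD of background noise."""
    return abs(mean(lesion_roi) - mean(tissue_roi)) / stdev(noise_roi)

# A denoising reconstruction such as 3D-DLR shrinks the noise SD in the
# background ROI, which raises both SNR and CNR for the same signal levels.
```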

Automated Resectability Classification of Pancreatic Cancer CT Reports with Privacy-Preserving Open-Weight Large Language Models: A Multicenter Study.

Lee JH, Min JH, Gu K, Han S, Hwang JA, Choi SY, Song KD, Lee JE, Lee J, Moon JE, Adetyan H, Yang JD

PubMed · Sep 24 2025
Objective: To evaluate the effectiveness of open-weight large language models (LLMs) in extracting key radiological features and determining National Comprehensive Cancer Network (NCCN) resectability status from free-text radiology reports for pancreatic ductal adenocarcinoma (PDAC). Methods: Prompts were developed using 30 fictitious reports, internally validated on 100 additional fictitious reports, and tested on 200 real reports from two institutions (January 2022 to December 2023). Two radiologists established ground truth for 18 key features and resectability status. The Gemma-2-27b-it and Llama-3-70b-instruct models were evaluated using recall, precision, F1-score, extraction accuracy, and overall resectability accuracy. Statistical analyses included McNemar's test and mixed-effects logistic regression. Results: In internal validation, Llama had significantly higher recall than Gemma (99% vs. 95%, p < 0.01), slightly higher extraction accuracy (98% vs. 97%), and higher overall resectability accuracy (93% vs. 91%). In the internal test set, both models achieved 96% recall and 96% extraction accuracy; overall resectability accuracy was 95% for Llama and 93% for Gemma. In the external test set, both models had 93% recall; extraction accuracy was 93% for Llama and 95% for Gemma. Gemma achieved higher overall resectability accuracy (89% vs. 83%), but the difference was not statistically significant (p > 0.05). Conclusion: Open-weight models accurately extracted key radiological features and determined NCCN resectability status from free-text PDAC reports. While performance on the internal dataset was robust, performance on external data decreased, highlighting the need for institution-specific optimization.
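Extraction performance of this kind is typically scored per (report, feature) pair against the radiologist ground truth. A micro-averaged sketch of that scoring follows; the feature names and values are hypothetical, not from the study, and the convention of counting a wrongly valued feature as both a false positive and a false negative is one common choice:

```python
def extraction_metrics(truth, extracted):
    """Micro-averaged recall/precision/F1 over (report, feature) pairs.

    truth, extracted: parallel lists of dicts mapping feature name -> value.
    A correctly valued feature is a true positive; a missing feature is a
    false negative; a wrongly valued feature counts as both a spurious
    extraction (FP) and a miss (FN).
    """
    tp = fp = fn = 0
    for gt, ex in zip(truth, extracted):
        for feat, val in gt.items():
            if feat in ex and ex[feat] == val:
                tp += 1
            else:
                fn += 1
                if feat in ex:        # extracted, but with the wrong value
                    fp += 1
    recall = tp / (tp + fn) if tp + fn else 0.0
    precision = tp / (tp + fp) if tp + fp else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return recall, precision, f1

# Hypothetical example: one report, two key features, one extracted wrongly.
truth = [{"tumor_size": "3.0 cm", "SMA_contact": "abutment"}]
extracted = [{"tumor_size": "3.0 cm", "SMA_contact": "encasement"}]
```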

Radiomics-based artificial intelligence (AI) models in colorectal cancer (CRC) diagnosis, metastasis detection, prognosis, and treatment response prediction.

Elahi R, Karami P, Amjadzadeh M, Nazari M

PubMed · Sep 24 2025
Colorectal cancer (CRC) is the third most common cause of cancer-related morbidity and mortality worldwide. Radiomics and radiogenomics enable high-throughput quantification of features from medical images, providing non-invasive means to characterize cancer heterogeneity and gain insight into the underlying biology. Radiomics-based artificial intelligence (AI) methods have demonstrated great potential to improve the accuracy of CRC diagnosis and staging, distinguish between benign and malignant lesions, aid in the detection of lymph node and hepatic metastases, and predict treatment response and prognosis. This review presents the latest evidence on the clinical applications of radiomics models based on different imaging modalities in CRC. We also discuss the challenges facing clinical translation, including differences in image acquisition, issues related to reproducibility, a lack of standardization, and limited external validation. Given the progress of machine learning (ML) and deep learning (DL) algorithms, radiomics is expected to have an important effect on the personalized treatment of CRC and to contribute to more accurate and individualized clinical decision-making in the future.

From texture analysis to artificial intelligence: global research landscape and evolutionary trajectory of radiomics in hepatocellular carcinoma.

Teng X, Luo QN, Chen YD, Peng T

PubMed · Sep 24 2025
Hepatocellular carcinoma (HCC) poses a substantial global health burden with high morbidity and mortality rates. Radiomics, which extracts quantitative features from medical images to develop predictive models, has emerged as a promising non-invasive approach for HCC diagnosis and management. However, comprehensive analysis of research trends in this field remains limited. We conducted a systematic bibliometric analysis of radiomics applications in HCC using literature from the Web of Science Core Collection (January 2006-April 2025). Publications were analyzed using CiteSpace, VOSviewer, R, and Python scripts to evaluate publication patterns, citation metrics, institutional contributions, keyword evolution, and collaboration networks. Among 906 included publications, we observed exponential growth, particularly accelerating after 2019. A global landscape analysis revealed China as the leader in publication volume, while the USA acted as the primary international collaboration hub. Countries like South Korea and the UK demonstrated higher average citation impact. Sun Yat-sen University was the most productive institution. Research themes evolved from fundamental texture analysis and CT/MRI applications toward predicting microvascular invasion, assessing treatment response (especially TACE), and prognostic modeling, driven recently by the deep integration of artificial intelligence (AI) and deep learning. Co-citation analysis revealed core knowledge clusters spanning radiomics methodology, clinical management, and landmark applications, demonstrating the field's interdisciplinary nature. Radiomics in HCC represents a rapidly expanding, AI-driven field characterized by extensive multidisciplinary collaboration. Future priorities should emphasize standardization, large-scale multicenter validation, enhanced international cooperation, and clinical translation to maximize radiomics' potential in precision HCC oncology.

FetalDenseNet: multi-scale deep learning for enhanced early detection of fetal anatomical planes in prenatal ultrasound.

Dey SK, Howlader A, Haider MS, Saha T, Setu DM, Islam T, Siddiqi UR, Rahman MM

PubMed · Sep 24 2025
The study aims to improve the classification of fetal anatomical planes using deep learning (DL) methods to enhance the accuracy of fetal ultrasound interpretation. Five Convolutional Neural Network (CNN) architectures (VGG16, ResNet50, InceptionV3, DenseNet169, and MobileNetV2) are evaluated on a large-scale, clinically validated dataset of 12,400 ultrasound images from 1,792 patients. Preprocessing methods, including scaling, normalization, label encoding, and augmentation, are applied, and the dataset is split into 80% for training and 20% for testing. Each model is fine-tuned and evaluated on its classification accuracy. DenseNet169 achieved the highest classification accuracy, 92%, among all tested models. The study shows that CNN-based models, particularly DenseNet169, significantly improve diagnostic accuracy in fetal ultrasound interpretation. This advancement reduces error rates and supports clinical decision-making in prenatal care.
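The label encoding and 80/20 split from the preprocessing pipeline can be sketched as below. The abstract does not say whether the split was stratified by class; stratification is shown here as a common choice, and the labels are placeholders:

```python
import random
from collections import defaultdict

def encode_labels(labels):
    """Label encoding: map class names to integer indices."""
    index = {c: i for i, c in enumerate(sorted(set(labels)))}
    return [index[lab] for lab in labels], index

def stratified_split(labels, test_frac=0.2, seed=42):
    """Return (train_idx, test_idx), preserving per-class proportions."""
    rng = random.Random(seed)
    by_class = defaultdict(list)
    for i, lab in enumerate(labels):
        by_class[lab].append(i)
    train_idx, test_idx = [], []
    for idxs in by_class.values():
        rng.shuffle(idxs)
        cut = int(len(idxs) * test_frac)   # 20% of each class held out
        test_idx.extend(idxs[:cut])
        train_idx.extend(idxs[cut:])
    return train_idx, test_idx
```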

Detection and classification of medical images using deep learning for chronic kidney disease.

Anoch B, Parthiban L

PubMed · Sep 24 2025
Chronic kidney disease (CKD) is a progressive disease that significantly impacts global healthcare; early detection and prompt treatment are required to prevent its advancement to end-stage renal disease. Conventional diagnostic methods tend to be invasive, lengthy, and costly, creating a demand for automated, precise, and efficient solutions. This study proposes a novel technique for identifying and classifying CKD from medical images using a Convolutional Neural Network based Crow Search (CNN-based CS) algorithm. The method employs sophisticated pre-processing techniques, including Z-score standardization, min-max normalization, and robust scaling, to improve the quality of the input data. Feature selection is carried out using the chi-square test, and the Crow Search Algorithm (CSA) further optimizes the feature set to improve classification accuracy and effectiveness. The CNN architecture captures complex patterns in the images to accurately classify CKD. The model was optimized and evaluated on an open-access kidney CT scan dataset, achieving 99.05% accuracy, 99.03% area under the receiver operating characteristic curve (AUC-ROC), and 99.01% area under the precision-recall curve (PR-AUC), along with high precision (99.04%), recall (99.02%), and F1-score (99.00%). The results show that the CNN-based CS method delivers high accuracy and improved diagnostic precision compared with conventional machine learning techniques. By incorporating CSA for feature optimization, the approach minimizes redundancy and improves model interpretability, making it a promising tool for automated CKD diagnosis and a scalable solution for early detection and management of CKD.
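A minimal sketch of the preprocessing and feature-scoring steps named above (Z-score standardization, min-max normalization, and a chi-square statistic for feature selection); this is a generic illustration, not the study's implementation:

```python
from collections import Counter
from statistics import mean, pstdev

def z_score(xs):
    """Z-score standardization: zero mean, unit (population) SD."""
    m, s = mean(xs), pstdev(xs)
    return [(x - m) / s for x in xs]

def min_max(xs, lo=0.0, hi=1.0):
    """Min-max normalization of a feature into [lo, hi]."""
    x_min, x_max = min(xs), max(xs)
    return [lo + (x - x_min) * (hi - lo) / (x_max - x_min) for x in xs]

def chi2_stat(feature, label):
    """Pearson chi-square statistic between a categorical feature and the
    class label; higher values suggest a more informative feature."""
    n = len(feature)
    obs = Counter(zip(feature, label))       # observed joint counts
    fx, fy = Counter(feature), Counter(label)
    return sum((obs.get((x, y), 0) - fx[x] * fy[y] / n) ** 2 / (fx[x] * fy[y] / n)
               for x in fx for y in fy)
```

A chi-square-based selector would rank features by `chi2_stat` and keep the top-scoring subset before the CSA refinement step.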

SHMoAReg: Spark Deformable Image Registration via Spatial Heterogeneous Mixture of Experts and Attention Heads

Yuxi Zheng, Jianhui Feng, Tianran Li, Marius Staring, Yuchuan Qiao

arXiv preprint · Sep 24 2025
Encoder-decoder architectures are widely used in deep learning-based Deformable Image Registration (DIR), where the encoder extracts multi-scale features and the decoder predicts deformation fields by recovering spatial locations. However, current methods lack specialized extraction of features useful for registration and predict deformation jointly and homogeneously in all three directions. In this paper, we propose a novel expert-guided DIR network, named SHMoAReg, with a Mixture of Experts (MoE) mechanism applied in both the encoder and decoder. Specifically, we incorporate Mixture of Attention heads (MoA) into the encoder layers and Spatial Heterogeneous Mixture of Experts (SHMoE) into the decoder layers. The MoA enhances the specialization of feature extraction by dynamically selecting the optimal combination of attention heads for each image token, while the SHMoE predicts deformation fields heterogeneously in the three directions for each voxel using experts with varying kernel sizes. Extensive experiments on two publicly available datasets show consistent improvements over various methods, with a notable increase from 60.58% to 65.58% in Dice score on the abdominal CT dataset. Furthermore, SHMoAReg enhances model interpretability by differentiating experts' utilities across and within different resolution layers. To the best of our knowledge, we are the first to introduce the MoE mechanism into DIR tasks. The code will be released soon.
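The Dice score used here to quantify registration quality measures label overlap after warping. A minimal sketch over sets of foreground voxel indices:

```python
def dice(pred_voxels, ref_voxels):
    """Dice similarity coefficient between two binary masks, each given as
    a set (or iterable) of foreground voxel indices."""
    pred_voxels, ref_voxels = set(pred_voxels), set(ref_voxels)
    if not pred_voxels and not ref_voxels:
        return 1.0   # two empty masks overlap perfectly by convention
    return 2 * len(pred_voxels & ref_voxels) / (len(pred_voxels) + len(ref_voxels))
```

In a DIR evaluation, `pred_voxels` would be the warped moving-image segmentation and `ref_voxels` the fixed-image segmentation of the same organ.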

Role of artificial intelligence in screening and medical imaging of precancerous gastric diseases.

Kotelevets SM

PubMed · Sep 24 2025
Serological screening, endoscopic imaging, and morphological (histological) verification of precancerous gastric diseases and changes in the gastric mucosa are the main stages of early detection, accurate diagnosis, and preventive treatment of gastric precancer. Serological, endoscopic, and histological diagnostics are carried out by medical laboratory technicians, endoscopists, and histologists, and these human-performed steps carry a large degree of subjectivity. Endoscopists and histologists follow a descriptive principle when formulating imaging conclusions, and their diagnostic reports often contain contradictory or mutually exclusive conclusions. Erroneous results from diagnosticians and clinicians have fatal consequences, such as late diagnosis of gastric cancer and high patient mortality. Effective population-level serological screening is possible only with machine processing of laboratory test results. It is now possible to replace the subjective, imprecise description of endoscopic and histological images by a diagnostician with objective, highly sensitive, and highly specific visual recognition using convolutional neural networks with deep machine learning. Many machine learning models with predictive capabilities are available; based on such predictive models, it is necessary to stratify patients by their predicted risk of gastric cancer and reliably identify those at very high risk.