
Quantification of Optical Coherence Tomography Features in >3500 Patients with Inherited Retinal Disease Reveals Novel Genotype-Phenotype Associations

Woof, W. A., de Guimaraes, T. A. C., Al-Khuzaei, S., Daich Varela, M., Shah, M., Naik, G., Sen, S., Bagga, P., Chan, Y. W., Mendes, B. S., Lin, S., Ghoshal, B., Liefers, B., Fu, D. J., Georgiou, M., da Silva, A. S., Nguyen, Q., Liu, Y., Fujinami-Yokokawa, Y., Sumodhee, D., Furman, J., Patel, P. J., Moghul, I., Moosajee, M., Sallum, J., De Silva, S. R., Lorenz, B., Herrmann, P., Holz, F. G., Fujinami, K., Webster, A. R., Mahroo, O. A., Downes, S. M., Madhusudhan, S., Balaskas, K., Michaelides, M., Pontikos, N.

medRxiv preprint · Jul 3, 2025
Purpose: To quantify spectral-domain optical coherence tomography (SD-OCT) images cross-sectionally and longitudinally in a large cohort of molecularly characterized patients with inherited retinal disease (IRD) from the UK. Design: Retrospective study of imaging data. Participants: Patients with a clinically and molecularly confirmed diagnosis of IRD who underwent macular SD-OCT imaging at Moorfields Eye Hospital (MEH) between 2011 and 2019. We retrospectively identified 4,240 IRD patients from the MEH database (198 distinct IRD genes), including 69,664 SD-OCT macular volumes. Methods: Eight features of interest were defined: retina, fovea, intraretinal cystic spaces (ICS), subretinal fluid (SRF), subretinal hyper-reflective material (SHRM), pigment epithelium detachment (PED), ellipsoid zone loss (EZ-loss) and retinal pigment epithelium loss (RPE-loss). Manual annotations of five b-scans per SD-OCT volume were performed for these features by four graders following a defined grading protocol. A total of 1,749 b-scans from 360 SD-OCT volumes across 275 patients were annotated for the eight retinal features to train and test a neural-network-based segmentation model, AIRDetect-OCT, which was then applied to the entire imaging dataset. Main Outcome Measures: Performance of AIRDetect-OCT, evaluated against inter-grader agreement using the Dice score on a held-out dataset. Feature prevalence, volume and area were analysed cross-sectionally and longitudinally. Results: The inter-grader Dice score for manual segmentation was ≥90% for retina, ICS, SRF, SHRM and PED, and >77% for both EZ-loss and RPE-loss. Model-grader agreement was >80% for segmentation of retina, ICS, SRF, SHRM, and PED, and >68% for both EZ-loss and RPE-loss. Automatic segmentation was applied to 272,168 b-scans across 7,405 SD-OCT volumes from 3,534 patients encompassing 176 unique genes. Accounting for age, male patients exhibited significantly more EZ-loss (19.6 mm² vs 17.9 mm², p < 2.8×10⁻⁴) and RPE-loss (7.79 mm² vs 6.15 mm², p < 3.2×10⁻⁶) than female patients. RPE-loss was significantly higher in Asian patients than in those of other ethnicities (9.37 mm² vs 7.29 mm², p < 0.03). ICS average total volume was largest in RS1 (0.47 mm³) and NR2E3 (0.25 mm³), SRF in BEST1 (0.21 mm³), and PED in EFEMP1 (0.34 mm³). BEST1 and PROM1 showed significantly different patterns of EZ-loss (p < 10⁻⁴) and RPE-loss (p < 0.02) when comparing the dominant with the recessive forms. Sectoral analysis revealed significantly increased EZ-loss in the inferior quadrant compared with the superior quadrant for RHO (Δ = −0.414 mm², p = 0.036) and EYS (Δ = −0.908 mm², p = 1.5×10⁻⁴). In ABCA4 retinopathy, more severe genotypes (group A) were associated with faster progression of EZ-loss (2.80 ± 0.62 mm²/yr), whilst the p.(Gly1961Glu) variant (group D) was associated with slower progression (0.56 ± 0.18 mm²/yr). There were also sex differences within groups, with males in group A experiencing significantly faster progression of RPE-loss (2.48 ± 1.40 mm²/yr vs 0.87 ± 0.62 mm²/yr, p = 0.047) but lower rates in groups B, C, and D. Conclusions: AIRDetect-OCT, a novel deep learning algorithm, enables large-scale OCT feature quantification in IRD patients, uncovering cross-sectional and longitudinal phenotype correlations with demographic and genotypic parameters.
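
The agreement figures quoted above are Dice scores. For readers unfamiliar with the metric, here is a minimal, self-contained sketch of how Dice overlap between two graders' binary masks can be computed; the mask shapes and annotations are invented for illustration and have no relation to the study's data:

```python
import numpy as np

def dice_score(mask_a: np.ndarray, mask_b: np.ndarray) -> float:
    """Dice coefficient between two binary masks: 2|A ∩ B| / (|A| + |B|)."""
    a, b = mask_a.astype(bool), mask_b.astype(bool)
    denom = a.sum() + b.sum()
    if denom == 0:
        return 1.0  # both masks empty: perfect agreement by convention
    return 2.0 * np.logical_and(a, b).sum() / denom

# Toy example: two graders' annotations of one feature on a single b-scan
grader_1 = np.zeros((496, 512), dtype=bool)
grader_2 = np.zeros((496, 512), dtype=bool)
grader_1[200:300, 100:400] = True   # hypothetical region marked by grader 1
grader_2[210:305, 110:405] = True   # slightly shifted region from grader 2
print(f"Inter-grader Dice: {dice_score(grader_1, grader_2):.3f}")
```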

A novel few-shot learning framework for supervised diffeomorphic image registration network.

Chen K, Han H, Wei J, Zhang Y

PubMed · Jul 2, 2025
Image registration is a key technique in image processing and analysis. Due to its high complexity, traditional registration frameworks often fail to meet real-time demands in practice. To address this, several deep learning registration networks have been proposed, both supervised and unsupervised. Unsupervised networks rely on large amounts of training data to minimize specific loss functions, but the lack of physical constraints results in lower accuracy than supervised networks. Supervised networks for medical image registration, however, face two major challenges: physical mesh folding and the scarcity of labeled training data. To address these two challenges, we propose a novel few-shot learning framework for image registration. The framework contains two parts: a random diffeomorphism generator (RDG) and a supervised few-shot learning network for image registration. By randomly generating a complex vector field, the RDG produces a series of diffeomorphisms. With the help of the diffeomorphisms generated by the RDG, only a few images (theoretically, a single image suffices) are needed to generate a series of labels for training the supervised few-shot learning network. To eliminate physical mesh folding, the loss function of the proposed network only needs to ensure the smoothness of the deformation; no other mesh-folding control is necessary. Experimental results indicate that the proposed method outperforms other existing learning-based methods in eliminating physical mesh folding. Our code is available at https://github.com/weijunping111/RDG-TMI.git.
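
The abstract does not spell out how an RDG might work internally, but a standard way to synthesize random diffeomorphisms is to sample a smooth random velocity field and integrate it via scaling-and-squaring. The 2D sketch below illustrates that general technique under invented parameters (grid size, smoothing sigma, amplitude); it is not the authors' implementation, which is available at the linked repository:

```python
import numpy as np
from scipy.ndimage import gaussian_filter, map_coordinates

def random_diffeomorphism(shape=(128, 128), sigma=8.0, amplitude=2.0,
                          steps=6, seed=0):
    """Sample a smooth random velocity field and integrate it by
    scaling-and-squaring, yielding an (approximately) folding-free
    displacement field."""
    rng = np.random.default_rng(seed)
    # Smooth random velocity field in pixels, one component per spatial axis
    v = np.stack([gaussian_filter(rng.standard_normal(shape), sigma)
                  for _ in range(2)])
    v *= amplitude / (np.abs(v).max() + 1e-8)
    # Scaling and squaring: start from a small step v / 2^K, then compose
    # the map with itself K times: phi <- phi + phi(x + phi(x))
    phi = v / (2 ** steps)
    grid = np.stack(np.meshgrid(*[np.arange(s) for s in shape], indexing="ij"))
    for _ in range(steps):
        coords = grid + phi
        phi = phi + np.stack([map_coordinates(phi[d], coords, order=1,
                                              mode="nearest")
                              for d in range(2)])
    return phi  # warped(x) = image(x + phi(x))

disp = random_diffeomorphism()
print("max |displacement| (px):", np.abs(disp).max())
```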

A Multi-Centric Anthropomorphic 3D CT Phantom-Based Benchmark Dataset for Harmonization

Mohammadreza Amirian, Michael Bach, Oscar Jimenez-del-Toro, Christoph Aberle, Roger Schaer, Vincent Andrearczyk, Jean-Félix Maestrati, Maria Martin Asiain, Kyriakos Flouris, Markus Obmann, Clarisse Dromain, Benoît Dufour, Pierre-Alexandre Alois Poletti, Hendrik von Tengg-Kobligk, Rolf Hügli, Martin Kretzschmar, Hatem Alkadhi, Ender Konukoglu, Henning Müller, Bram Stieltjes, Adrien Depeursinge

arXiv preprint · Jul 2, 2025
Artificial intelligence (AI) has introduced numerous opportunities for human assistance and task automation in medicine. However, it suffers from poor generalization in the presence of shifts in the data distribution. In the context of AI-based computed tomography (CT) analysis, significant data distribution shifts can be caused by changes in scanner manufacturer, reconstruction technique or dose. AI harmonization techniques can address this problem by reducing distribution shifts caused by varying acquisition settings. This paper presents an open-source benchmark dataset containing CT scans of an anthropomorphic phantom acquired with various scanners and settings, whose purpose is to foster the development of AI harmonization techniques. Using a phantom eliminates variability attributable to inter- and intra-patient differences. The dataset includes 1378 image series acquired with 13 scanners from 4 manufacturers across 8 institutions using a harmonized protocol, as well as several acquisition doses. Additionally, we present a methodology, baseline results and open-source code to assess image- and feature-level stability and liver tissue classification, promoting the development of AI harmonization strategies.
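
Feature-level stability on phantom data can be scored in many ways; one simple, commonly used option is the coefficient of variation (CV) of each feature across scanners. The sketch below applies that idea to a synthetic feature table — both the metric choice and the data are assumptions for illustration, not the paper's methodology:

```python
import numpy as np

# Hypothetical table: one row per (scanner, repeat acquisition), columns are
# image features extracted from the same phantom ROI (mean HU, texture, ...).
rng = np.random.default_rng(42)
n_scanners, n_repeats, n_features = 13, 4, 5
features = rng.normal(loc=50.0, scale=3.0,
                      size=(n_scanners, n_repeats, n_features))

# Per-feature stability: coefficient of variation, across scanners, of the
# scanner-wise mean feature value; lower CV = better harmonized feature.
scanner_means = features.mean(axis=1)               # (scanners, features)
cv = scanner_means.std(axis=0) / np.abs(scanner_means.mean(axis=0))
for i, c in enumerate(cv):
    print(f"feature {i}: inter-scanner CV = {100 * c:.1f}%")
```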

3D MedDiffusion: A 3D Medical Latent Diffusion Model for Controllable and High-quality Medical Image Generation.

Wang H, Liu Z, Sun K, Wang X, Shen D, Cui Z

PubMed · Jul 2, 2025
The generation of medical images presents significant challenges due to their high-resolution and three-dimensional nature. Existing methods often yield suboptimal performance in generating high-quality 3D medical images, and there is currently no universal generative framework for medical imaging. In this paper, we introduce a 3D Medical Latent Diffusion (3D MedDiffusion) model for controllable, high-quality 3D medical image generation. 3D MedDiffusion incorporates a novel, highly efficient Patch-Volume Autoencoder that compresses medical images into latent space through patch-wise encoding and recovers them back into image space through volume-wise decoding. Additionally, we design a new noise estimator to capture both local details and global structural information during the diffusion denoising process. 3D MedDiffusion can generate fine-detailed, high-resolution images (up to 512×512×512) and effectively adapts to various downstream tasks, as it is trained on large-scale datasets covering CT and MRI modalities and different anatomical regions (from head to leg). Experimental results demonstrate that 3D MedDiffusion surpasses state-of-the-art methods in generative quality and exhibits strong generalizability across tasks such as sparse-view CT reconstruction, fast MRI reconstruction, and data augmentation for segmentation and classification. Source code and checkpoints are available at https://github.com/ShanghaiTech-IMPACT/3D-MedDiffusion.
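
As a rough illustration of the patch-wise-encode / volume-wise-decode idea behind a Patch-Volume Autoencoder, the toy PyTorch module below encodes non-overlapping 32³ patches independently, reassembles their latents into a single grid, and decodes the whole volume in one pass. All layer and dimension choices here are invented; this is not the 3D MedDiffusion architecture:

```python
import torch
import torch.nn as nn

class PatchVolumeAE(nn.Module):
    """Toy sketch: encode a 3D volume patch-by-patch into a latent grid,
    then decode the full latent grid in one volume-wise pass."""
    def __init__(self, patch=32, latent=8):
        super().__init__()
        self.patch = patch
        self.enc = nn.Sequential(                       # 32^3 patch -> 4^3 latent
            nn.Conv3d(1, 16, 4, stride=2, padding=1), nn.GELU(),
            nn.Conv3d(16, 32, 4, stride=2, padding=1), nn.GELU(),
            nn.Conv3d(32, latent, 4, stride=2, padding=1),
        )
        self.dec = nn.Sequential(                       # volume-wise decoding
            nn.ConvTranspose3d(latent, 32, 4, stride=2, padding=1), nn.GELU(),
            nn.ConvTranspose3d(32, 16, 4, stride=2, padding=1), nn.GELU(),
            nn.ConvTranspose3d(16, 1, 4, stride=2, padding=1),
        )

    def forward(self, x):                               # x: (B, 1, D, H, W)
        p = self.patch                                  # D, H, W divisible by p
        B, _, D, H, W = x.shape
        # Split into non-overlapping patches, encode each independently
        patches = x.unfold(2, p, p).unfold(3, p, p).unfold(4, p, p)
        patches = patches.reshape(B, 1, -1, p, p, p).permute(0, 2, 1, 3, 4, 5)
        z = self.enc(patches.reshape(-1, 1, p, p, p))
        # Reassemble patch latents into one grid, decode whole volume at once
        g = p // 8                                      # 3 stride-2 convs => /8
        z = z.reshape(B, D // p, H // p, W // p, -1, g, g, g)
        z = z.permute(0, 4, 1, 5, 2, 6, 3, 7).reshape(
            B, -1, D // p * g, H // p * g, W // p * g)
        return self.dec(z)

vol = torch.randn(1, 1, 64, 64, 64)
print(PatchVolumeAE()(vol).shape)  # torch.Size([1, 1, 64, 64, 64])
```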

PanTS: The Pancreatic Tumor Segmentation Dataset

Wenxuan Li, Xinze Zhou, Qi Chen, Tianyu Lin, Pedro R. A. S. Bassi, Szymon Plotka, Jaroslaw B. Cwikla, Xiaoxi Chen, Chen Ye, Zheren Zhu, Kai Ding, Heng Li, Kang Wang, Yang Yang, Yucheng Tang, Daguang Xu, Alan L. Yuille, Zongwei Zhou

arXiv preprint · Jul 2, 2025
PanTS is a large-scale, multi-institutional dataset curated to advance research in pancreatic CT analysis. It contains 36,390 CT scans from 145 medical centers, with expert-validated, voxel-wise annotations of over 993,000 anatomical structures, covering pancreatic tumors; the pancreas head, body, and tail; and 24 surrounding anatomical structures such as vascular/skeletal structures and abdominal/thoracic organs. Each scan includes metadata such as patient age, sex, diagnosis, contrast phase, in-plane spacing, and slice thickness. AI models trained on PanTS achieve significantly better performance in pancreatic tumor detection, localization, and segmentation than those trained on existing public datasets. Our analysis indicates that these gains are directly attributable to the 16× larger-scale tumor annotations and indirectly supported by the 24 additional surrounding anatomical structures. As the largest and most comprehensive resource of its kind, PanTS offers a new benchmark for developing and evaluating AI models in pancreatic CT analysis.

Habitat-Derived Radiomic Features of Planning Target Volume to Determine the Local Recurrence After Radiotherapy in Patients with Gliomas: A Feasibility Study.

Wang Y, Lin L, Hu Z, Wang H

PubMed · Jul 2, 2025
To develop a machine learning-based predictive model for local recurrence after radiotherapy in patients with gliomas, with interpretability enhanced through SHapley Additive exPlanations (SHAP). We retrospectively enrolled 145 patients with pathologically confirmed gliomas who underwent brain radiotherapy (training:validation = 102:43). Physiological and structural magnetic resonance imaging (MRI) were used to define habitat regions. A total of 2153 radiomic features were extracted from each MRI sequence in each habitat region. Relief and Recursive Feature Elimination were used for radiomic feature selection. Support vector machine (SVM) and random forest models incorporating clinical and radiomic features were constructed for each habitat region. The SHAP method was used to explain the predictive models. In the training and validation cohorts, the Physiological_Habitat1 (e-THRIVE) radiomic SVM model demonstrated the best AUCs of 0.703 (95% CI 0.569-0.836) and 0.670 (95% CI 0.623-0.717), respectively, among the radiomic models. The SHAP summary plot and SHAP force plot were used to interpret this best-performing model. Radiomic features derived from the Physiological_Habitat1 (e-THRIVE) region were predictive of local recurrence in glioma patients following radiotherapy, and the SHAP method provided insights into how the tumor microenvironment might influence the effectiveness of radiotherapy in postoperative gliomas.
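
A hedged sketch of the modeling recipe named in this abstract — an SVM on selected radiomic features, explained post hoc with Kernel SHAP — is shown below. The feature matrices and cohort sizes are synthetic placeholders, not the study's data:

```python
import numpy as np
import shap
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Hypothetical stand-ins for selected habitat radiomic features
rng = np.random.default_rng(0)
X_train, X_val = rng.normal(size=(102, 10)), rng.normal(size=(43, 10))
y_train = rng.integers(0, 2, size=102)   # 1 = local recurrence

model = make_pipeline(StandardScaler(), SVC(kernel="rbf", probability=True))
model.fit(X_train, y_train)

# Kernel SHAP treats the fitted model as a black box; a small background
# sample keeps the explainer tractable.
explainer = shap.KernelExplainer(lambda x: model.predict_proba(x)[:, 1],
                                 X_train[:20])
shap_values = explainer.shap_values(X_val[:5])
print(np.round(shap_values, 3))  # per-feature contributions for 5 cases
```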

Heterogeneity Habitats-Derived Radiomics of Gd-EOB-DTPA-Enhanced MRI for Predicting Proliferation of Hepatocellular Carcinoma.

Sun S, Yu Y, Xiao S, He Q, Jiang Z, Fan Y

PubMed · Jul 2, 2025
To construct and validate the optimal model for preoperative prediction of proliferative HCC based on habitat-derived radiomic features of Gd-EOB-DTPA-enhanced MRI. A total of 187 patients who underwent Gd-EOB-DTPA-enhanced MRI before curative partial hepatectomy were divided into a training cohort (n=130; 50 proliferative and 80 nonproliferative HCC) and a validation cohort (n=57; 25 proliferative and 32 nonproliferative HCC). Habitat subregions were generated using Gaussian Mixture Model (GMM) clustering of all pixels to identify similar subregions within the tumor. Radiomic features were extracted from each tumor subregion in the arterial phase (AP) and hepatobiliary phase (HBP). Independent-samples t tests, the Pearson correlation coefficient, and the Least Absolute Shrinkage and Selection Operator (LASSO) algorithm were used to select the optimal subregion features. After feature integration and selection, machine-learning classification models were constructed using the scikit-learn library. Receiver operating characteristic (ROC) curves and the DeLong test were used to compare the performance of these models in predicting proliferative HCC. The optimal number of clusters was determined to be 3 based on the silhouette coefficient. Twenty, 12, and 23 features were retained from the AP, HBP, and combined AP and HBP habitat (subregions 1, 2, 3) radiomic features, respectively, and three models were constructed with these selected features. ROC analysis and the DeLong test showed that the Naive Bayes model of AP and HBP habitat radiomics (AP-HBP-Hab-Rad) achieved the best performance. Finally, the combined model using the Light Gradient Boosting Machine (LightGBM) algorithm, incorporating AP-HBP-Hab-Rad, age, and AFP (alpha-fetoprotein), was identified as the optimal model for predicting proliferative HCC. For the training and validation cohorts, its accuracy, sensitivity, specificity, and AUC were 0.923, 0.880, 0.950, 0.966 (95% CI: 0.937-0.994) and 0.825, 0.680, 0.937, 0.877 (95% CI: 0.786-0.969), respectively. In the validation cohort, the AUC of the combined model was significantly higher than that of the other models (P<0.01). A combined model including AP-HBP-Hab-Rad, serum AFP, and age, built with the LightGBM algorithm, can satisfactorily predict proliferative HCC preoperatively.
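
The habitat-generation step described above (GMM clustering of tumor pixels, with the cluster count chosen by silhouette coefficient) can be sketched in a few lines of scikit-learn. The per-pixel feature vectors here are synthetic stand-ins for the AP/HBP intensities, not the study's data:

```python
import numpy as np
from sklearn.mixture import GaussianMixture
from sklearn.metrics import silhouette_score

# Hypothetical per-pixel feature vectors inside one tumor mask:
# columns could be the AP and HBP intensities of each voxel.
rng = np.random.default_rng(1)
pixels = np.vstack([rng.normal(m, 0.5, size=(300, 2)) for m in (0.0, 2.0, 4.0)])

# Choose the number of habitats by silhouette coefficient
for k in (2, 3, 4):
    labels = GaussianMixture(n_components=k, random_state=0).fit_predict(pixels)
    print(f"k={k}: silhouette = {silhouette_score(pixels, labels):.3f}")

# Final habitat map with the chosen k (here k=3, as in the study)
habitats = GaussianMixture(n_components=3, random_state=0).fit_predict(pixels)
print("habitat sizes:", np.bincount(habitats))
```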

Multi-modal models using fMRI, urine and serum biomarkers for classification and risk prognosis in diabetic kidney disease.

Shao X, Xu H, Chen L, Bai P, Sun H, Yang Q, Chen R, Lin Q, Wang L, Li Y, Lin Y, Yu P

PubMed · Jul 2, 2025
Functional magnetic resonance imaging (fMRI) is a powerful tool for the non-invasive evaluation of micro-changes in the kidneys. This study aims to develop classification and prognostic models based on multi-modal data. A total of 172 participants were included, and high-resolution multi-parameter fMRI was employed to obtain T2-weighted imaging (T2WI), blood oxygen level dependent (BOLD), and diffusion tensor imaging (DTI) sequence images. Based on clinical indicators, fMRI markers, and serum and urine biomarkers (CD300LF, CST4, MMRN2, SERPINA1, l-glutamic acid dimethyl ester and phosphatidylcholine), machine learning algorithms were applied to establish and validate classification diagnosis models (Models 1-6) and risk-prognostic models (Models A-E). Accuracy, sensitivity, specificity, precision, area under the curve (AUC) and recall were used to evaluate the predictive performance of the models. Six classification models were established in total. Model 5 (fMRI + clinical indicators) exhibited superior performance, with an accuracy of 0.833 (95% confidence interval [CI]: 0.653-0.944). Notably, the multi-modal model incorporating imaging, serum and urine multi-omics and clinical indicators (Model 6) demonstrated higher predictive performance, achieving an accuracy of 0.923 (95% CI: 0.749-0.991). Furthermore, five prognostic models at 2-year and 3-year follow-up were established. Model E exhibited superior performance, achieving AUC values of 0.975 at the 2-year follow-up and 0.932 at the 3-year follow-up, and can identify patients with a high-risk prognosis. In clinical practice, the multi-modal models presented in this study demonstrate potential to enhance clinical decision-making regarding patient classification and prognosis prediction.
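
The abstract does not describe how the modalities are fused; a common baseline, shown in the hedged sketch below, is feature-level fusion by simple concatenation before a single classifier. All arrays are synthetic placeholders, and the classifier choice is an assumption rather than the study's method:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score, roc_auc_score
from sklearn.model_selection import train_test_split

# Hypothetical feature blocks per participant: fMRI markers (BOLD/DTI-derived
# values), serum/urine biomarker levels, and clinical indicators.
rng = np.random.default_rng(7)
n = 172
fmri = rng.normal(size=(n, 6))
omics = rng.normal(size=(n, 6))
clinical = rng.normal(size=(n, 4))
y = rng.integers(0, 2, size=n)          # 1 = diabetic kidney disease

# Multi-modal model: feature-level fusion by concatenation
X = np.hstack([fmri, omics, clinical])
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = RandomForestClassifier(n_estimators=300, random_state=0).fit(X_tr, y_tr)
print("accuracy:", round(accuracy_score(y_te, clf.predict(X_te)), 3))
print("AUC:", round(roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1]), 3))
```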

Diagnostic performance of artificial intelligence based on contrast-enhanced computed tomography in pancreatic ductal adenocarcinoma: a systematic review and meta-analysis.

Yan G, Chen X, Wang Y

PubMed · Jul 2, 2025
This meta-analysis systematically evaluated the diagnostic performance of artificial intelligence (AI) based on contrast-enhanced computed tomography (CECT) in detecting pancreatic ductal adenocarcinoma (PDAC). Following the Preferred Reporting Items for Systematic Reviews and Meta-Analyses for Diagnostic Test Accuracy (PRISMA-DTA) guidelines, a comprehensive literature search was conducted across PubMed, Embase, and Web of Science from inception to March 2025. Bivariate random-effects models pooled sensitivity, specificity, and area under the curve (AUC). Heterogeneity was quantified via I² statistics, with subgroup analyses examining sources of variability, including AI methodologies, model architectures, sample sizes, geographic distributions, control groups and tumor stages. Nineteen studies involving 5,986 patients in internal validation cohorts and 2,069 patients in external validation cohorts were included. AI models demonstrated robust diagnostic accuracy in internal validation, with pooled sensitivity of 0.94 (95% CI 0.89-0.96), specificity of 0.93 (95% CI 0.90-0.96), and AUC of 0.98 (95% CI 0.96-0.99). External validation revealed moderately reduced sensitivity (0.84; 95% CI 0.78-0.89) and AUC (0.94; 95% CI 0.92-0.96), while specificity remained comparable (0.93; 95% CI 0.87-0.96). Substantial heterogeneity (I² > 85%) was observed, predominantly attributed to methodological variations in AI architectures and disparities in cohort sizes. AI demonstrates excellent diagnostic performance for PDAC on CECT, achieving high sensitivity and specificity across validation scenarios. However, its efficacy varies significantly with clinical context and tumor stage. Therefore, prospective multicenter trials that utilize standardized protocols and diverse cohorts, including early-stage tumors and complex benign conditions, are essential to validate the clinical utility of AI.
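
A full bivariate random-effects meta-analysis jointly models sensitivity and specificity. As a simplified univariate illustration of the pooling logic, the sketch below applies DerSimonian-Laird random-effects pooling to logit-transformed sensitivities from hypothetical study counts; it is an approximation of, not a substitute for, the bivariate model used here:

```python
import numpy as np

def dersimonian_laird_pool(tp, fn):
    """Pool per-study sensitivities on the logit scale with a
    DerSimonian-Laird random-effects model (univariate simplification)."""
    tp, fn = np.asarray(tp, float), np.asarray(fn, float)
    p = (tp + 0.5) / (tp + fn + 1.0)           # continuity-corrected sensitivity
    y = np.log(p / (1 - p))                    # logit transform
    v = 1 / (tp + 0.5) + 1 / (fn + 0.5)        # within-study variance of logit
    w = 1 / v
    y_fixed = (w * y).sum() / w.sum()          # fixed-effect estimate
    q = (w * (y - y_fixed) ** 2).sum()         # Cochran's Q
    tau2 = max(0.0, (q - (len(y) - 1)) / (w.sum() - (w ** 2).sum() / w.sum()))
    w_re = 1 / (v + tau2)                      # random-effects weights
    pooled_logit = (w_re * y).sum() / w_re.sum()
    return 1 / (1 + np.exp(-pooled_logit)), tau2

# Hypothetical per-study true-positive / false-negative counts
sens, tau2 = dersimonian_laird_pool(tp=[90, 45, 120, 60], fn=[6, 5, 10, 3])
print(f"pooled sensitivity = {sens:.3f}, tau^2 = {tau2:.3f}")
```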

Multimodal Generative Artificial Intelligence Model for Creating Radiology Reports for Chest Radiographs in Patients Undergoing Tuberculosis Screening.

Hong EK, Kim HW, Song OK, Lee KC, Kim DK, Cho JB, Kim J, Lee S, Bae W, Roh B

PubMed · Jul 2, 2025
Background: Chest radiographs play a crucial role in tuberculosis screening in high-prevalence regions, although widespread radiographic screening requires expertise that may be unavailable in settings with limited medical resources. Objectives: To evaluate a multimodal generative artificial intelligence (AI) model for detecting tuberculosis-associated abnormalities on chest radiography in patients undergoing tuberculosis screening. Methods: This retrospective study evaluated 800 chest radiographs obtained from two public datasets originating from tuberculosis screening programs. A generative AI model was used to create free-text reports for the radiographs. AI-generated reports were classified in terms of the presence versus absence and the laterality of tuberculosis-related abnormalities. Two radiologists independently reviewed the radiographs for tuberculosis presence and laterality in separate sessions, without and with use of the AI-generated reports, and recorded whether they would accept each report without modification. Two additional radiologists reviewed the radiographs and the datasets' clinical readings to determine the reference standard. Results: By the reference standard, 422/800 radiographs were positive for tuberculosis-related abnormalities. For detection of tuberculosis-related abnormalities, sensitivity, specificity, and accuracy were 95.2%, 86.7%, and 90.8% for AI-generated reports; 93.1%, 93.6%, and 93.4% for reader 1 without AI-generated reports; 93.1%, 95.0%, and 94.1% for reader 1 with AI-generated reports; 95.8%, 87.2%, and 91.3% for reader 2 without AI-generated reports; and 95.8%, 91.5%, and 93.5% for reader 2 with AI-generated reports. Accuracy was significantly lower for AI-generated reports than for both readers alone (p<.001) and did not differ significantly with versus without AI-generated reports for either reader (reader 1: p=.47; reader 2: p=.47). Localization performance was significantly lower (p<.001) for AI-generated reports (63.3%) than for reader 1 (79.9%) and reader 2 (77.9%) without AI-generated reports, and did not change significantly for either reader with AI-generated reports (reader 1: 78.7%, p=.71; reader 2: 81.5%, p=.23). Among normal and abnormal radiographs, reader 1 accepted 91.7% and 52.4% of AI-generated reports, while reader 2 accepted 83.2% and 37.0%, respectively. Conclusion: While AI-generated reports may augment radiologists' diagnostic assessments, the current model requires human oversight given its inferior standalone performance. Clinical Impact: The generative AI model could have potential application in aiding tuberculosis screening programs in medically underserved regions, although technical improvements are still required.
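
For paired reader comparisons like this one (the same reader, with versus without AI-generated reports, on the same radiographs), McNemar's test is the standard significance test. The sketch below shows the mechanics on invented discordant-pair counts, not the study's data:

```python
import numpy as np
from statsmodels.stats.contingency_tables import mcnemar

# Hypothetical paired outcomes for one reader on the same 800 radiographs:
# rows = correct without AI-generated reports, cols = correct with them.
#                 with: correct   with: wrong
table = np.array([[720,            15],          # without: correct
                  [ 28,            37]])         # without: wrong
result = mcnemar(table, exact=True)              # exact binomial McNemar test
print(f"McNemar p = {result.pvalue:.3f} "
      f"(discordant pairs: {table[0, 1]} vs {table[1, 0]})")
```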