
Liu B, Jiang P, Wang Z, Wang X, Wang Z, Peng C, Liu Z, Lu C, Pan D, Shan X

PubMed · Sep 3 2025
Consistent, automated assessment is needed for CT-T staging of gastric cancer. This study aimed to construct an end-to-end CT-based deep learning (DL) model for tumor T-staging in advanced gastric cancer. A retrospective study was conducted on 460 patients with advanced gastric cancer who underwent presurgical CT between 2011 and 2024. A three-dimensional (3D) Conv-UNet-based automatic segmentation model was employed to segment tumors, and a SmallFocusNet-based ternary classification model was built for CT-T staging; the two models were then integrated into an end-to-end DL model. The segmentation model's performance was assessed using the Dice similarity coefficient (DSC), Intersection over Union (IoU), and 95% Hausdorff Distance (HD_95), while the classification model's performance was measured with the area under the receiver operating characteristic curve (AUC), sensitivity, specificity, and F1-score. Finally, the end-to-end DL model was compared with a radiologist using the McNemar test. The data were divided into Dataset 1 (423 cases for the training and test sets; mean age, 65.0 years ± 9.46 [SD]) and Dataset 2 (37 cases for the independent validation set; mean age, 68.8 years ± 9.28 [SD]). For the segmentation task, the model achieved a DSC of 0.860 ± 0.065 and an IoU of 0.760 ± 0.096 on the test set of Dataset 1, and a DSC of 0.870 ± 0.164 and an IoU of 0.793 ± 0.168 on Dataset 2. For the classification task, the model demonstrated a macro-average AUC of 0.882 (95% CI 0.812-0.926) and an average sensitivity of 76.9% (95% CI 67.6%-85.3%) on the test set of Dataset 1, and a macro-average AUC of 0.862 (95% CI 0.723-0.942) and an average sensitivity of 76.3% (95% CI 59.8%-90.0%) on Dataset 2. The DL model also outperformed the radiologist (accuracy 91.9% vs 82.1%, P = 0.007). The end-to-end DL model for CT-T staging is highly accurate and consistent in the pre-treatment staging of advanced gastric cancer.
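A minimal illustrative sketch (not the authors' code) of the two overlap metrics used to evaluate the segmentation model, assuming binary NumPy masks for the predicted and reference tumor volumes:

```python
import numpy as np

def dice_and_iou(pred: np.ndarray, ref: np.ndarray, eps: float = 1e-8):
    """Return (DSC, IoU) for two binary masks of identical shape."""
    pred, ref = pred.astype(bool), ref.astype(bool)
    inter = np.logical_and(pred, ref).sum()
    dsc = 2.0 * inter / (pred.sum() + ref.sum() + eps)
    iou = inter / (np.logical_or(pred, ref).sum() + eps)
    return float(dsc), float(iou)
```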

Ichikawa K, Ronen S, Bishay R, Krishnan S, Benzing T, Kianoush S, Aldana-Bitar J, Cainzos-Achirica M, Feldman T, Fialkow J, Budoff MJ, Nasir K

PubMed · Sep 3 2025
Coronary computed tomography angiography (CTA)-derived plaque burden is associated with the risk of cardiovascular events and is expected to be used in clinical practice. Understanding the normative values of computed tomography-based quantitative plaque volume in the general population is clinically important for determining patient management. This study aimed to investigate the distribution of plaque volume in the general population and to develop nomograms using MiHEART (Miami Heart Study) at Baptist Health South Florida, a large community-based cohort study. The study included 2,301 asymptomatic subjects without cardiovascular disease enrolled in MiHEART. Quantitative assessment of plaque volume was performed by using artificial intelligence-guided quantitative coronary computed tomography angiography (AI-QCT) analysis. The percentiles of the plaque distribution were estimated with nonparametric techniques. Mean age of the participants was 53.5 years, and 50.4% were male. The median total plaque volume was 54 mm³ (Q1-Q3: 16-126 mm³) and increased with age. Male subjects had greater median total plaque volume than female subjects (80 mm³ [Q1-Q3: 31-181 mm³] vs 34 mm³ [Q1-Q3: 9-85 mm³]; P < 0.001); there was no difference according to race/ethnicity (Hispanic 53 mm³ [Q1-Q3: 14-119 mm³] vs non-Hispanic 54 mm³ [Q1-Q3: 17-127 mm³]; P = 0.756). The prevalence of subjects with total plaque volume ≥20 mm³ was 81.5% in male subjects and 61.9% in female subjects. Younger individuals had a greater percentage of noncalcified plaque. The large majority of study subjects had plaque detected by using AI-QCT. Furthermore, age- and sex-specific nomograms provided information on the plaque volume distribution in an asymptomatic population. (Miami Heart Study [MiHEART] at Baptist Health South Florida; NCT02508454).
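A minimal sketch of the nonparametric (empirical) percentile estimation described above, assuming a hypothetical pandas DataFrame with columns "age", "sex", and "tpv" (total plaque volume, mm³); this is not the MiHEART analysis code:

```python
import pandas as pd

def plaque_percentiles(df: pd.DataFrame, age_bins=(40, 50, 60, 70, 80)):
    """Empirical percentiles of total plaque volume by sex and age band."""
    df = df.assign(age_band=pd.cut(df["age"], bins=age_bins))
    qs = [0.25, 0.50, 0.75, 0.90]
    table = (df.groupby(["sex", "age_band"], observed=True)["tpv"]
               .quantile(qs)
               .unstack())
    return table.rename(columns={q: f"p{int(q * 100)}" for q in qs})
```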

Milani OH, Mills L, Nikho A, Tliba M, Ayyildiz H, Allareddy V, Ansari R, Cetin AE, Elnagar MH

PubMed · Sep 2 2025
The aim of this study was to develop, test, and validate automated, interpretable deep learning algorithms for the assessment and classification of spheno-occipital synchondrosis (SOS) fusion stages from cone beam computed tomography (CBCT) scans. The sample consisted of 723 CBCT scans of orthodontic patients from private practices in the midwestern United States. The SOS fusion stages were classified by two orthodontists and an oral and maxillofacial radiologist. The advanced deep learning models employed were ResNet, EfficientNet, and ConvNeXt. Additionally, a new attention-based model, ConvNeXt + Conv Attention, was developed to enhance classification accuracy by integrating attention mechanisms that capture subtle medical imaging features. Lastly, YOLOv11 was integrated for fully automated region detection and segmentation. ConvNeXt + Conv Attention outperformed the other models, achieving 88.94% accuracy with manual cropping and 82.49% accuracy in a fully automated workflow. This study introduces a novel artificial intelligence-based pipeline that reliably automates the classification of SOS fusion stages using advanced deep learning models, with the highest accuracy achieved by ConvNeXt + Conv Attention. These models enhance the efficiency, scalability and consistency of SOS staging while minimising manual intervention from the clinician, underscoring the potential for AI-driven solutions in orthodontics and clinical workflows.
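A speculative sketch of a lightweight convolutional attention head of the kind the abstract calls "Conv Attention" (the actual ConvNeXt + Conv Attention design is not specified here): it reweights backbone feature maps spatially before a three-class SOS-stage classifier, assuming PyTorch and 2D feature tensors:

```python
import torch
import torch.nn as nn

class ConvAttentionHead(nn.Module):
    """Illustrative spatial-attention classification head on top of CNN features."""
    def __init__(self, in_ch: int, n_classes: int = 3):
        super().__init__()
        # 1x1 conv produces a spatial attention map over the backbone features
        self.attn = nn.Sequential(nn.Conv2d(in_ch, 1, kernel_size=1), nn.Sigmoid())
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.fc = nn.Linear(in_ch, n_classes)

    def forward(self, feats: torch.Tensor) -> torch.Tensor:
        feats = feats * self.attn(feats)      # emphasize informative regions
        feats = self.pool(feats).flatten(1)   # global average pooling
        return self.fc(feats)                 # SOS fusion-stage logits
```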

Zhang M, Liu C, Zhang Y, Wei H

PubMed · Sep 2 2025
Quantitative susceptibility mapping (QSM) is a useful magnetic resonance imaging technique. We aim to propose a deep learning (DL)-based method for QSM reconstruction that is robust to data perturbations. We developed Diffusion-QSM, a diffusion model-based method with a time-travel and resampling refinement module for high-quality QSM reconstruction. First, the diffusion prior is trained unconditionally on high-quality QSM images, without requiring explicit information about the measured tissue phase, thereby enhancing generalization performance. Subsequently, during inference, the physical constraints from the QSM forward model and the measurement are integrated into the output of the diffusion model to guide the sampling process toward realistic image representations. In addition, a time-travel and resampling module is employed during the later sampling stages to refine image quality, yielding an improved reconstruction without significantly prolonging the reconstruction time. Experimental results show that Diffusion-QSM outperforms traditional and unsupervised DL methods for QSM reconstruction on simulated, in vivo, and ex vivo data, and shows better generalization than supervised DL methods when processing out-of-distribution data. Diffusion-QSM successfully unifies data-driven diffusion priors and subject-specific physics constraints, enabling generalizable, high-quality QSM reconstruction under diverse perturbations, including changes in image contrast, resolution, and scan direction. This work advances QSM reconstruction by bridging the generalization gap in deep learning. The excellent quality and generalization capability underscore its potential for a range of realistic applications.
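An illustrative sketch, not the Diffusion-QSM implementation, of how a physics (data-consistency) constraint can be injected between diffusion sampling steps: the current susceptibility estimate is nudged toward agreement with the QSM forward (dipole) model. The names chi, phi, dipole_kernel, and lam are assumptions for illustration:

```python
import torch

def data_consistency_step(chi: torch.Tensor, phi: torch.Tensor,
                          dipole_kernel: torch.Tensor, lam: float = 0.1) -> torch.Tensor:
    """One gradient step on ||F^-1(D * F(chi)) - phi||^2, the dipole model applied in k-space."""
    chi = chi.detach().requires_grad_(True)
    pred_phase = torch.fft.ifftn(dipole_kernel * torch.fft.fftn(chi)).real
    loss = torch.sum((pred_phase - phi) ** 2)
    grad, = torch.autograd.grad(loss, chi)
    return (chi - lam * grad).detach()
```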

Huang X, Li W, Wang Y, Wu Q, Li P, Xu K, Huang Y

PubMed · Sep 2 2025
This study aimed to develop a deep learning (DL) framework using registration-guided generative adversarial networks (RegGAN) to synthesize contrast-enhanced CT (Syn-CECT) from non-contrast CT (NCCT), enabling iodine-free esophageal cancer (EC) T-staging. A retrospective multicenter analysis included 1,092 EC patients (2013-2024) divided into training (N = 313), internal (N = 117), and external test cohorts (N = 116 and N = 546). RegGAN synthesized Syn-CECT by integrating registration and adversarial training to address NCCT-CECT misalignment. Tumor segmentation used CSSNet with hierarchical feature fusion, while T-staging employed a dual-path DL model combining radiomic features (from NCCT/Syn-CECT) and Vision Transformer-derived deep features. Performance was validated via quantitative metrics (NMAE, PSNR, SSIM), Dice scores, AUC, and reader studies comparing six clinicians with/without model assistance. RegGAN achieved Syn-CECT quality comparable to real CECT (NMAE = 0.1903, SSIM = 0.7723; visual scores: p ≥ 0.12). CSSNet produced accurate tumor segmentation (Dice = 0.89, 95% HD = 2.27 in external tests). The DL staging model outperformed machine learning (AUC = 0.7893-0.8360 vs. ≤ 0.8323), surpassing early-career clinicians (AUC = 0.641-0.757) and matching experts (AUC = 0.840). Syn-CECT-assisted clinicians improved diagnostic accuracy (AUC increase: ~ 0.1, p < 0.01), with decision curve analysis confirming clinical utility at > 35% risk threshold. The RegGAN-based framework eliminates contrast agents while maintaining diagnostic accuracy for EC segmentation (Dice > 0.88) and T-staging (AUC > 0.78). It offers a safe, cost-effective alternative for patients with iodine allergies or renal impairment and enhances diagnostic consistency across clinician experience levels. This approach addresses limitations of invasive staging and repeated contrast exposure, demonstrating transformative potential for resource-limited settings.
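A minimal sketch (assumed, not the authors' pipeline) of the synthesis-quality metrics reported above, computed with scikit-image on a paired synthetic and real CECT image:

```python
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def syn_cect_quality(syn: np.ndarray, real: np.ndarray):
    """Return (NMAE, PSNR, SSIM) for a synthetic vs. reference image pair."""
    rng = float(real.max() - real.min())
    nmae = float(np.abs(syn - real).mean() / (rng + 1e-8))  # normalized mean absolute error
    psnr = peak_signal_noise_ratio(real, syn, data_range=rng)
    ssim = structural_similarity(real, syn, data_range=rng)
    return nmae, psnr, ssim
```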

Almuqhim F, Saeed F

PubMed · Sep 2 2025
Harmonizing multisite functional magnetic resonance imaging (fMRI) data is crucial for eliminating site-specific variability that hinders the generalizability of machine learning (ML) models. Traditional harmonization techniques, such as ComBat, depend on additive and multiplicative factors and may struggle to capture the non-linear interactions between scanner hardware, acquisition protocols, and signal variations across imaging sites. In addition, these statistical techniques require data from all sites during model training, which can introduce data leakage for ML models trained on the harmonized data; such models may show low reliability and reproducibility when tested on unseen datasets, limiting their applicability for general clinical use. In this study, we propose autoencoders (AEs) as an alternative for harmonizing multisite fMRI data. Our framework leverages the non-linear representation learning capabilities of AEs to reduce site-specific effects while preserving biologically meaningful features. Our evaluation on the Autism Brain Imaging Data Exchange I (ABIDE-I) dataset, containing 1,035 subjects collected from 17 centers, demonstrates statistically significant improvements in leave-one-site-out (LOSO) cross-validation. All AE variants (AE, SAE, TAE, and DAE) significantly outperformed the baseline model (p < 0.01), with mean accuracy improvements ranging from 3.41% to 5.04%. Our findings demonstrate the potential of AEs to harmonize multisite neuroimaging data effectively, enabling robust downstream analyses across neuroscience applications while reducing data leakage and preserving neurobiological features. Our open-source code is made available at https://github.com/pcdslab/Autoencoder-fMRI-Harmonization .
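A minimal sketch of the leave-one-site-out (LOSO) evaluation described above, assuming a feature matrix X, diagnostic labels y, and per-subject site IDs (all hypothetical variables); the harmonizing autoencoder itself is omitted:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import LeaveOneGroupOut

def loso_accuracy(X: np.ndarray, y: np.ndarray, sites: np.ndarray) -> float:
    """Mean classification accuracy with each site held out in turn."""
    accs = []
    for train_idx, test_idx in LeaveOneGroupOut().split(X, y, groups=sites):
        clf = LogisticRegression(max_iter=1000).fit(X[train_idx], y[train_idx])
        accs.append(clf.score(X[test_idx], y[test_idx]))
    return float(np.mean(accs))
```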

Heng Y, Khan FG, Yinghua M, Khan A, Ali F, Khan N, Kwak D

PubMed · Sep 2 2025
Despite the widespread success of convolutional neural networks (CNNs) in general computer vision tasks, their application to complex medical image analysis faces persistent challenges. These include limited labeled data availability, which restricts model generalization; class imbalance, where minority classes are underrepresented and lead to biased predictions; and inadequate feature representation, since conventional CNNs often struggle to capture subtle patterns and intricate dependencies characteristic of medical imaging. To address these limitations, we propose CTNGAN, a unified framework that integrates generative modeling with Generative Adversarial Networks (GANs), contrastive learning, and Transformer architectures to enhance the robustness and accuracy of medical image analysis. Each component is designed to tackle a specific challenge: the GAN model mitigates data scarcity and imbalance, contrastive learning strengthens feature robustness against domain shifts, and the Transformer captures long-range spatial patterns. This tripartite integration not only overcomes the limitations of conventional CNNs but also achieves superior generalizability, as demonstrated by classification experiments on benchmark medical imaging datasets, with up to 98.5% accuracy and an F1-score of 0.968, outperforming existing methods. The framework's ability to jointly optimize data generation, feature discrimination, and contextual modeling establishes a new paradigm for accurate and reliable medical image diagnosis.
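A minimal sketch of the NT-Xent loss commonly used for contrastive feature learning of the kind the abstract describes (CTNGAN's exact contrastive objective is not given here), assuming PyTorch embeddings of two augmented views of the same batch:

```python
import torch
import torch.nn.functional as F

def nt_xent(z1: torch.Tensor, z2: torch.Tensor, tau: float = 0.1) -> torch.Tensor:
    """z1, z2: (N, d) embeddings of two views of the same N images."""
    z = F.normalize(torch.cat([z1, z2], dim=0), dim=1)   # 2N x d, unit-norm
    sim = z @ z.t() / tau                                # scaled cosine similarities
    sim.fill_diagonal_(float("-inf"))                    # exclude self-similarity
    n = z1.size(0)
    idx = torch.arange(n, device=z1.device)
    targets = torch.cat([idx + n, idx])                  # each view's positive partner
    return F.cross_entropy(sim, targets)
```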

Shaikh IS, Milshteyn E, Chulsky S, Maclellan CJ, Soman S

PubMed · Sep 2 2025
2D T2 FSE is an essential routine spine MRI sequence, allowing assessment of fractures, soft tissues, and pathology. Fat suppression using a DIXON-type approach (2D FLEX) improves water/fat separation. Recently, a deep learning (DL) reconstruction (AIR™ Recon DL, GE HealthCare) became available for 2D FLEX, offering increased signal-to-noise ratio (SNR), reduced artifacts, and sharper images. This study aimed to compare DL-reconstructed versus non-DL-reconstructed spine 2D T2 FLEX images for diagnostic image quality and quantitative metrics at 1.5T. Forty-one patients with clinically indicated cervical or lumbar spine MRI were scanned between May and August 2023 on a 1.5T Voyager (GE HealthCare). A 2D T2 FLEX sequence was acquired, and DL-based reconstruction (noise reduction strength: 75%) was applied. Raw data were also reconstructed without DL. Three readers (CAQ-neuroradiologist, PGY-6 neuroradiology fellow, PGY-2 radiology resident) rated diagnostic preference (0 = non-DL, 1 = DL, 2 = equivalent) for 39 cases. Quantitative measures (SNR, total variation [TV], number of edges, and fat fraction [FF]) were compared using paired t-tests with significance set at p < .05. Among evaluations, 79.5% preferred DL, 11% found images equivalent, and 9.4% favored non-DL, with strong inter-rater agreement (p < .001, Fleiss' Kappa = 0.99). DL images had higher SNR, lower TV, and fewer edges (p < .001), indicating effective noise reduction. FF remained statistically unchanged in subcutaneous fat (p = .25) but differed slightly in vertebral bodies (1.4% difference, p = .01). DL reconstruction notably improved image quality by enhancing SNR and reducing noise without clinically meaningful changes in fat quantification. These findings support the use of DL-enhanced 2D T2 FLEX in routine spine imaging at 1.5T. Incorporating DL-based reconstruction into standard spine MRI protocols can increase diagnostic confidence and workflow efficiency. Further studies with larger cohorts and diverse pathologies are warranted to refine this approach and explore potential benefits for clinical decision-making.
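A minimal sketch (assumed, not the study's analysis code) of the statistics reported above: a paired t-test on a per-case quantitative metric and Fleiss' kappa on the three readers' preference ratings:

```python
import numpy as np
from scipy.stats import ttest_rel
from statsmodels.stats.inter_rater import aggregate_raters, fleiss_kappa

def reader_study_stats(metric_dl, metric_nondl, ratings):
    """metric_dl/metric_nondl: per-case arrays (e.g., SNR); ratings: (cases, readers) ints in {0, 1, 2}."""
    _, p = ttest_rel(metric_dl, metric_nondl)           # paired t-test, DL vs. non-DL
    counts, _ = aggregate_raters(np.asarray(ratings))   # cases x categories count table
    kappa = fleiss_kappa(counts)                        # inter-rater agreement
    return p, kappa
```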

Lu Z, Hu T, Oda M, Fuse Y, Saito R, Jinzaki M, Mori K

PubMed · Sep 2 2025
In this paper, we propose a novel generative model to produce high-quality subarachnoid hemorrhage (SAH) samples, enhancing SAH CT detection performance on imbalanced datasets. Previous methods, such as cost-sensitive learning and earlier diffusion models, suffer from overfitting or noise-induced distortion, limiting their effectiveness; accurate SAH sample generation is crucial for better detection. We propose the Worley-Perlin Diffusion Model (WPDM), which leverages Worley-Perlin noise to synthesize diverse, high-quality SAH images. WPDM addresses the limitations of Gaussian noise (homogeneity) and Simplex noise (distortion), enhancing robustness for generating SAH images. Additionally, WPDM_Fast optimizes generation speed without compromising quality. WPDM effectively improved classification accuracy on datasets with varying imbalance ratios. Notably, a classifier trained with WPDM-generated samples achieved an F1-score of 0.857 at a 1:36 imbalance ratio, surpassing the state of the art by 2.3 percentage points. WPDM overcomes the limitations of Gaussian and Simplex noise-based models, generating high-quality, realistic SAH images. It significantly enhances classification performance in imbalanced settings, providing a robust solution for SAH CT detection.
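A minimal 2D Worley-noise sketch to illustrate the cellular, feature-point noise the model is built on (WPDM itself combines Worley and Perlin noise within a diffusion model, which is not reproduced here):

```python
import numpy as np

def worley_noise(h: int, w: int, n_points: int = 16, seed: int = 0) -> np.ndarray:
    """Normalized distance-to-nearest-feature-point map (basic Worley noise)."""
    rng = np.random.default_rng(seed)
    pts = rng.uniform(0.0, 1.0, size=(n_points, 2)) * np.array([h, w])  # random feature points
    ys, xs = np.mgrid[0:h, 0:w]
    grid = np.stack([ys, xs], axis=-1).astype(float)                    # (h, w, 2) pixel coords
    dists = np.linalg.norm(grid[:, :, None, :] - pts[None, None], axis=-1)
    noise = dists.min(axis=-1)                                          # distance to nearest point
    return (noise - noise.min()) / (noise.max() - noise.min() + 1e-8)
```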

Li C, Li J, Zhang J, Solomon E, Dimov AV, Spincemaille P, Nguyen TD, Prince MR, Wang Y

PubMed · Sep 2 2025
To develop multiparametric, free-breathing, three-dimensional (3D) whole-liver quantitative maps of water T₁, water T₂, fat fraction (FF), and R₂*. A multi-echo 3D stack-of-spiral gradient-echo sequence with inversion-recovery and T₂-prep magnetization preparations was implemented for multiparametric MRI. Fingerprinting and a neural network based on implicit neural representation (FINR) were developed to simultaneously reconstruct the motion deformation fields and static images, perform water-fat separation, and generate T₁, T₂, R₂*, and FF maps. FINR performance was evaluated in 10 healthy subjects by comparison with quantitative maps generated using conventional breath-holding imaging. FINR consistently generated sharp images free of motion artifacts in all subjects. FINR showed minimal bias and narrow 95% limits of agreement for T₁, T₂, R₂*, and FF values in the liver compared with conventional imaging. FINR training took about 3 h per subject, and FINR inference took less than 1 min to produce static images and motion deformation fields. FINR is a promising approach for 3D whole-liver T₁, T₂, R₂*, and FF mapping in a single free-breathing continuous scan.
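A speculative sketch of the implicit neural representation idea FINR builds on: an MLP with random Fourier positional encoding mapping spatio-temporal coordinates to signal intensity (the actual FINR network, outputs, and training are considerably more elaborate). PyTorch assumed:

```python
import torch
import torch.nn as nn

class FourierINR(nn.Module):
    """Coordinate MLP: (x, y, z, t) -> intensity, with random Fourier features."""
    def __init__(self, in_dim: int = 4, n_freq: int = 64, width: int = 256):
        super().__init__()
        self.register_buffer("B", torch.randn(in_dim, n_freq) * 10.0)  # fixed random basis
        self.net = nn.Sequential(
            nn.Linear(2 * n_freq, width), nn.ReLU(),
            nn.Linear(width, width), nn.ReLU(),
            nn.Linear(width, 1),
        )

    def forward(self, coords: torch.Tensor) -> torch.Tensor:
        proj = coords @ self.B                                          # (N, n_freq)
        enc = torch.cat([torch.sin(proj), torch.cos(proj)], dim=-1)     # Fourier encoding
        return self.net(enc)
```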