Page 17 of 99990 results

Navigator motion-resolved MR fingerprinting using implicit neural representation: Feasibility for free-breathing three-dimensional whole-liver multiparametric mapping.

Li C, Li J, Zhang J, Solomon E, Dimov AV, Spincemaille P, Nguyen TD, Prince MR, Wang Y

pubmed logopapers | Sep 2 2025
To develop free-breathing, three-dimensional, whole-liver multiparametric quantitative mapping of water T<sub>1</sub>, water T<sub>2</sub>, fat fraction (FF), and R<sub>2</sub>*. A multi-echo 3D stack-of-spiral gradient-echo sequence with inversion recovery and T<sub>2</sub>-prep magnetization preparations was implemented for multiparametric MRI. Fingerprinting with an implicit neural representation network (FINR) was developed to simultaneously reconstruct the motion deformation fields and static images, perform water-fat separation, and generate T<sub>1</sub>, T<sub>2</sub>, R<sub>2</sub>*, and FF maps. FINR performance was evaluated in 10 healthy subjects by comparison with quantitative maps generated using conventional breath-holding imaging. FINR consistently generated sharp images in all subjects, free of motion artifacts. FINR showed minimal bias and narrow 95% limits of agreement for T<sub>1</sub>, T<sub>2</sub>, R<sub>2</sub>*, and FF values in the liver compared with conventional imaging. FINR training took about 3 h per subject, and FINR inference took less than 1 min to produce static images and motion deformation fields. FINR is a promising approach for 3D whole-liver T<sub>1</sub>, T<sub>2</sub>, R<sub>2</sub>*, and FF mapping in a single free-breathing continuous scan.
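The agreement analysis reported above (minimal bias, narrow 95% limits of agreement) is a Bland-Altman comparison. A minimal sketch of how bias and limits are computed, using hypothetical paired liver T<sub>1</sub> values rather than the study's data:

```python
import numpy as np

def bland_altman(a, b):
    """Bias and 95% limits of agreement between paired measurements a and b."""
    diff = np.asarray(a, float) - np.asarray(b, float)
    bias = diff.mean()
    sd = diff.std(ddof=1)  # sample SD of the differences
    # 95% limits of agreement: bias +/- 1.96 * SD
    return bias, bias - 1.96 * sd, bias + 1.96 * sd

# Hypothetical paired liver water-T1 values (ms): free-breathing vs. breath-hold
free_breathing = [812, 798, 805, 820, 790]
breath_hold = [810, 800, 803, 818, 793]
bias, loa_low, loa_high = bland_altman(free_breathing, breath_hold)
```

A narrow interval between `loa_low` and `loa_high` around a near-zero bias is what the abstract describes as good agreement.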

RegGAN-based contrast-free CT enhances esophageal cancer assessment: multicenter validation of automated tumor segmentation and T-staging.

Huang X, Li W, Wang Y, Wu Q, Li P, Xu K, Huang Y

pubmed logopapers | Sep 2 2025
This study aimed to develop a deep learning (DL) framework using registration-guided generative adversarial networks (RegGAN) to synthesize contrast-enhanced CT (Syn-CECT) from non-contrast CT (NCCT), enabling iodine-free esophageal cancer (EC) T-staging. A retrospective multicenter analysis included 1,092 EC patients (2013-2024) divided into training (N = 313), internal (N = 117), and external test cohorts (N = 116 and N = 546). RegGAN synthesized Syn-CECT by integrating registration and adversarial training to address NCCT-CECT misalignment. Tumor segmentation used CSSNet with hierarchical feature fusion, while T-staging employed a dual-path DL model combining radiomic features (from NCCT/Syn-CECT) and Vision Transformer-derived deep features. Performance was validated via quantitative metrics (NMAE, PSNR, SSIM), Dice scores, AUC, and reader studies comparing six clinicians with/without model assistance. RegGAN achieved Syn-CECT quality comparable to real CECT (NMAE = 0.1903, SSIM = 0.7723; visual scores: p ≥ 0.12). CSSNet produced accurate tumor segmentation (Dice = 0.89, 95% HD = 2.27 in external tests). The DL staging model outperformed machine learning (AUC = 0.7893-0.8360 vs. ≤ 0.8323), surpassing early-career clinicians (AUC = 0.641-0.757) and matching experts (AUC = 0.840). Syn-CECT-assisted clinicians improved diagnostic accuracy (AUC increase: ~ 0.1, p < 0.01), with decision curve analysis confirming clinical utility at > 35% risk threshold. The RegGAN-based framework eliminates contrast agents while maintaining diagnostic accuracy for EC segmentation (Dice > 0.88) and T-staging (AUC > 0.78). It offers a safe, cost-effective alternative for patients with iodine allergies or renal impairment and enhances diagnostic consistency across clinician experience levels. This approach addresses limitations of invasive staging and repeated contrast exposure, demonstrating transformative potential for resource-limited settings.
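The image-quality metrics above (NMAE, PSNR) can be sketched in a few lines; note that NMAE definitions vary, and normalization by the reference dynamic range is only one common choice (SSIM is omitted here as it needs windowed statistics). The arrays below are toy values, not study data:

```python
import numpy as np

def nmae(ref, img):
    """MAE normalized by the reference dynamic range (one common NMAE convention)."""
    ref, img = np.asarray(ref, float), np.asarray(img, float)
    return np.abs(ref - img).mean() / (ref.max() - ref.min())

def psnr(ref, img):
    """Peak signal-to-noise ratio in dB, using the reference range as the peak."""
    ref, img = np.asarray(ref, float), np.asarray(img, float)
    data_range = ref.max() - ref.min()
    mse = ((ref - img) ** 2).mean()
    return 10.0 * np.log10(data_range ** 2 / mse)

ref = np.array([[0.0, 100.0], [200.0, 255.0]])
img = ref + 1.0            # toy image uniformly off by one gray level
err = nmae(ref, img)       # 1/255
quality = psnr(ref, img)   # about 48.13 dB
```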

An MRI-pathology foundation model for noninvasive diagnosis and grading of prostate cancer.

Shao L, Liang C, Yan Y, Zhu H, Jiang X, Bao M, Zang P, Huang X, Zhou H, Nie P, Wang L, Li J, Zhang S, Ren S

pubmed logopapers | Sep 2 2025
Prostate cancer is a leading health concern for men, yet current clinical assessments of tumor aggressiveness rely on invasive procedures that often lead to inconsistencies. There remains a critical need for accurate, noninvasive diagnosis and grading methods. Here we developed a foundation model trained on multiparametric magnetic resonance imaging (MRI) and paired pathology data for noninvasive diagnosis and grading of prostate cancer. Our model, MRI-based Predicted Transformer for Prostate Cancer (MRI-PTPCa), was trained under contrastive learning on nearly 1.3 million image-pathology pairs from over 5,500 patients in discovery, modeling, external and prospective cohorts. During real-world testing, prediction of MRI-PTPCa demonstrated consistency with pathology and superior performance (area under the curve above 0.978; grading accuracy 89.1%) compared with clinical measures and other prediction models. This work introduces a scalable, noninvasive approach to prostate cancer diagnosis and grading, offering a robust tool to support clinical decision-making while reducing reliance on biopsies.

Decoding Fibrosis: Transcriptomic and Clinical Insights via AI-Derived Collagen Deposition Phenotypes in MASLD

Wojciechowska, M. K., Thing, M., Hu, Y., Mazzoni, G., Harder, L. M., Werge, M. P., Kimer, N., Das, V., Moreno Martinez, J., Prada-Medina, C. A., Vyberg, M., Goldin, R., Serizawa, R., Tomlinson, J., Douglas Gaalsgard, E., Woodcock, D. J., Hvid, H., Pfister, D. R., Jurtz, V. I., Gluud, L.-L., Rittscher, J.

medrxiv logopreprint | Sep 2 2025
Histological assessment is foundational to multi-omics studies of liver disease, yet conventional fibrosis staging lacks resolution, and quantitative metrics like collagen proportionate area (CPA) fail to capture tissue architecture. While recent AI-driven approaches offer improved precision, they are proprietary and not accessible to academic research. Here, we present a novel, interpretable AI-based framework for characterising liver fibrosis from picrosirius red (PSR)-stained slides. By identifying distinct data-driven collagen deposition phenotypes (CDPs) which capture distinct morphologies, our method substantially improves the sensitivity and specificity of downstream transcriptomic and proteomic analyses compared to CPA and traditional fibrosis scores. Pathway analysis reveals that CDPs 4 and 5 are associated with active extracellular matrix remodelling, while phenotype correlates highlight links to liver functional status. Importantly, we demonstrate that selected CDPs can predict clinical outcomes with similar accuracy to established fibrosis metrics. All models and tools are made freely available to support transparent and reproducible multi-omics pathology research.

Highlights
- We present a set of data-driven collagen deposition phenotypes for analysing PSR-stained liver biopsies, offering a spatially informed alternative to conventional fibrosis staging and CPA, available as open-source code.
- The identified collagen deposition phenotypes enhance transcriptomic and proteomic signal detection, revealing active ECM remodelling and distinct functional tissue states.
- Selected phenotypes predict clinical outcomes with performance comparable to fibrosis stage and CPA, highlighting their potential as candidate quantitative indicators of fibrosis severity.
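For readers unfamiliar with collagen proportionate area (CPA), the baseline metric the phenotypes are compared against, a minimal sketch on toy boolean masks (the stain thresholding that would produce such masks from PSR slides is outside this sketch):

```python
import numpy as np

def collagen_proportionate_area(collagen_mask, tissue_mask):
    """CPA: fraction of tissue pixels that stain as collagen (boolean masks)."""
    collagen = np.logical_and(collagen_mask, tissue_mask)
    return collagen.sum() / tissue_mask.sum()

tissue = np.ones((4, 4), dtype=bool)      # toy slide: all 16 pixels are tissue
collagen = np.zeros((4, 4), dtype=bool)
collagen[0, :] = True                     # 4 pixels stain positive
cpa = collagen_proportionate_area(collagen, tissue)   # 4 / 16 = 0.25
```

CPA reduces the whole slide to one scalar, which is exactly the loss of spatial architecture the CDP approach is designed to address.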

Deep learning model for predicting lymph node metastasis around rectal cancer based on rectal tumor core area and mesangial imaging features.

Guo L, Fu K, Wang W, Zhou L, Chen L, Jiang M

pubmed logopapers | Sep 1 2025
Assessing lymph node metastasis (LNM) involvement in patients with rectal cancer (RC) is fundamental in disease management. In this study, we used artificial intelligence (AI) technology to develop a segmentation model that automatically segments the tumor core area and mesangial tissue from magnetic resonance T2-weighted imaging (T2WI) and apparent diffusion coefficient (ADC) images collected from 122 RC patients to improve the accuracy of LNM prediction, after which radiomics machine-learning modeling was performed on the segmented ROIs. An automatic segmentation model was developed using nn-UNet. This pipeline integrates deep learning (DL), specifically 3D U-Net, for semantic segmentation with image processing techniques such as resampling, normalization, connected component analysis, and image registration, and with radiomics features coupled with machine learning. The results showed that the DL segmentation method could effectively segment the tumor and mesangial areas from MR sequences (median Dice coefficient: tumor segmentation, 0.90 ± 0.08; mesorectum segmentation, 0.85 ± 0.36), and the radiological characteristics of rectal and mesangial tissues in T2WI and ADC images could help distinguish RC treatments. The nn-UNet model demonstrated promising preliminary results, achieving the highest area under the curve (AUC) values in various scenarios. In the evaluation encompassing both tumor lesions and mesorectum involvement, the model exhibited an AUC of 0.743, highlighting its discriminatory ability to predict a combined outcome involving both elements. Specifically targeting tumor lesions, the model achieved an AUC of 0.731, emphasizing its effectiveness in distinguishing between positive and negative cases of tumor lesions. In assessing the prediction of mesorectum involvement, the model displayed moderate predictive utility with an AUC of 0.753.
The nn-UNet model demonstrated impressive performance across all evaluated scenarios, including combined tumor lesions and mesorectum involvement, tumor lesions alone, and mesorectum involvement alone. The online version contains supplementary material available at 10.1186/s12880-025-01878-9.
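The Dice coefficient used throughout these segmentation results can be sketched as follows, on toy masks rather than the study's MR data:

```python
import numpy as np

def dice(seg, ref):
    """Dice similarity coefficient between two binary masks."""
    seg, ref = np.asarray(seg, bool), np.asarray(ref, bool)
    inter = np.logical_and(seg, ref).sum()
    denom = seg.sum() + ref.sum()
    return 2.0 * inter / denom if denom else 1.0

a = np.zeros((8, 8), dtype=bool); a[2:6, 2:6] = True   # 16-pixel square
b = np.zeros((8, 8), dtype=bool); b[3:7, 3:7] = True   # shifted square, overlap 3x3
score = dice(a, b)   # 2 * 9 / (16 + 16) = 0.5625
```

Dice ranges from 0 (no overlap) to 1 (identical masks), so the reported 0.90 for tumor segmentation indicates substantial voxel-level agreement.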

Deep learning-based automated assessment of hepatic fibrosis via magnetic resonance images and nonimage data.

Li W, Zhu Y, Zhao G, Chen X, Zhao X, Xu H, Che Y, Chen Y, Ye Y, Dou X, Wang H, Cheng J, Xie Q, Chen K

pubmed logopapers | Sep 1 2025
Accurate staging of hepatic fibrosis is critical for prognostication and management among patients with chronic liver disease, and noninvasive, efficient alternatives to biopsy are urgently needed. This study aimed to evaluate the performance of an automated deep learning (DL) algorithm for fibrosis staging and for differentiating patients with hepatic fibrosis from healthy individuals via magnetic resonance (MR) images with and without additional clinical data. A total of 500 patients from two medical centers were retrospectively analyzed. DL models were developed based on delayed-phase MR images to predict fibrosis stages. Additional models were constructed by integrating the DL algorithm with nonimaging variables, including serologic biomarkers [aminotransferase-to-platelet ratio index (APRI) and fibrosis index based on four factors (FIB-4)], viral status (hepatitis B and C), and MR scanner parameters. Diagnostic performance was assessed via the area under the receiver operating characteristic curve (AUROC), and comparisons were made using the DeLong test. Sensitivity and specificity of the DL and full models (DL plus all clinical features) were compared with those of experienced radiologists and serologic biomarkers via the McNemar test. In the test set, the full model achieved AUROC values of 0.99 [95% confidence interval (CI): 0.94-1.00], 0.98 (95% CI: 0.93-0.99), 0.90 (95% CI: 0.83-0.95), 0.81 (95% CI: 0.73-0.88), and 0.84 (95% CI: 0.76-0.90) for staging F0-4, F1-4, F2-4, F3-4, and F4, respectively. This model significantly outperformed the DL model in early-stage classification (F0-4 and F1-4). Compared with expert radiologists, it showed superior specificity for F0-4 and higher sensitivity across the other four classification tasks. Both the DL and full models showed significantly greater specificity than did the biomarkers for staging advanced fibrosis (F3-4 and F4).
The proposed DL algorithm provides a noninvasive method for hepatic fibrosis staging and screening, outperforming both radiologists and conventional biomarkers, and may facilitate improved clinical decision-making.
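The serologic indices used as nonimaging inputs above follow standard formulas: APRI = (AST / upper limit of normal) × 100 / platelet count, and FIB-4 = age × AST / (platelets × √ALT). A minimal sketch with a hypothetical patient:

```python
import math

def apri(ast_u_l, ast_uln_u_l, platelets_1e9_l):
    """APRI = (AST / upper limit of normal) * 100 / platelets (10^9/L)."""
    return (ast_u_l / ast_uln_u_l) * 100 / platelets_1e9_l

def fib4(age_years, ast_u_l, alt_u_l, platelets_1e9_l):
    """FIB-4 = age * AST / (platelets * sqrt(ALT))."""
    return age_years * ast_u_l / (platelets_1e9_l * math.sqrt(alt_u_l))

# Hypothetical patient: AST 80 U/L (ULN 40 U/L), ALT 64 U/L,
# platelets 100e9/L, age 50 years
a = apri(80, 40, 100)        # 2.0
f = fib4(50, 80, 64, 100)    # 50 * 80 / (100 * 8) = 5.0
```

Both indices rise with AST and fall with platelet count, which is why they serve as cheap fibrosis surrogates against which the DL model is benchmarked.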

Artificial intelligence-enhanced ultrasound imaging for thyroid nodule detection and malignancy classification: a study on YOLOv11.

Yang J, Luo Z, Wen Y, Zhang J

pubmed logopapers | Sep 1 2025
Thyroid nodules are a common clinical concern, with accurate diagnosis being critical for effective treatment and improved patient outcomes. Traditional ultrasound examinations rely heavily on the physician's experience, which can lead to diagnostic variability. The integration of artificial intelligence (AI) into medical imaging offers a promising solution for enhancing diagnostic accuracy and efficiency. This study aimed to evaluate the effectiveness of the You Only Look Once v. 11 (YOLOv11) model in detecting and classifying thyroid nodules through ultrasound images, with the goal of supporting real-time clinical decision-making and improving diagnostic workflows. We used the YOLOv11 model to analyze a dataset of 1,503 thyroid ultrasound images, divided into training (1,203 images), validation (150 images), and test (150 images) sets, comprising 742 benign and 778 malignant nodules. Advanced data augmentation and transfer learning techniques were applied to optimize model performance. Comparative analysis was conducted with other YOLO variants (YOLOv3 to YOLOv10) and residual network 50 (ResNet50) to assess their diagnostic capabilities. The YOLOv11 model exhibited superior performance in thyroid nodule detection compared with the other YOLO variants (YOLOv3 to YOLOv10) and ResNet50. At an intersection over union (IoU) of 0.5, YOLOv11 achieved a precision (P) of 0.841 and recall (R) of 0.823, outperforming ResNet50's P of 0.8333 and R of 0.8025. Among the YOLO variants, YOLOv11 consistently achieved the highest P and R values. For benign nodules, YOLOv11 obtained a P of 0.835 and an R of 0.833, while for malignant nodules, it reached a P of 0.846 and an R of 0.813. Within the YOLOv11 model itself, performance varied across different IoU thresholds (0.25, 0.5, 0.7, and 0.9). Lower IoU thresholds generally resulted in better performance metrics, with P and R values decreasing as the IoU threshold increased.
YOLOv11 proved to be a powerful tool for thyroid nodule detection and malignancy classification, offering high P and real-time performance. These attributes are vital for dynamic ultrasound examinations and enhancing diagnostic efficiency. Future research will focus on expanding datasets and validating the model's clinical utility in real-time settings.
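Intersection over union (IoU), the matching criterion varied across thresholds above, can be sketched for axis-aligned boxes. A detection counts as a true positive only when its IoU with a ground-truth box meets the threshold, which is why precision and recall fall as the threshold rises. The coordinates below are toy values:

```python
def iou(box_a, box_b):
    """Intersection-over-union of two axis-aligned boxes given as (x1, y1, x2, y2)."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    # Overlap rectangle; clamp to zero when the boxes are disjoint
    inter = max(0, min(ax2, bx2) - max(ax1, bx1)) * max(0, min(ay2, by2) - max(ay1, by1))
    area_a = (ax2 - ax1) * (ay2 - ay1)
    area_b = (bx2 - bx1) * (by2 - by1)
    return inter / (area_a + area_b - inter)

v = iou((0, 0, 10, 10), (5, 0, 15, 10))   # intersection 50, union 150 -> 1/3
```

At a 0.25 threshold this toy detection would match; at 0.5 or above it would be scored a miss.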

Deep Learning-Based Multimodal Prediction of NAC Response in LARC by Integrating MRI and Proteomics.

Li Y, Ding J, Du F, Wang Z, Liu Z, Liu Y, Zhou Y, Zhang Q

pubmed logopapers | Sep 1 2025
Locally advanced rectal cancer (LARC) exhibits significant heterogeneity in response to neoadjuvant chemotherapy (NAC), with poor responders facing delayed treatment and unnecessary toxicity. Although MRI provides spatial pathophysiological information and proteomics reveals molecular mechanisms, current single-modal approaches cannot integrate these complementary perspectives, resulting in limited predictive accuracy and biological insight. This retrospective study developed a multimodal deep learning framework using a cohort of 274 LARC patients treated with NAC (2012-2021). Graph neural networks analyzed proteomic profiles from FFPE tissues, incorporating KEGG/GO pathways and PPI networks, while a spatially enhanced 3D ResNet152 processed T2WI. A LightGBM classifier integrated both modalities with clinical features using zero-imputation for missing data. Model performance was assessed through AUC-ROC, decision curve analysis, and interpretability techniques (SHAP and Grad-CAM). The integrated model achieved superior NAC response prediction (test AUC 0.828, sensitivity 0.875, specificity 0.750), significantly outperforming single-modal approaches (MRI ΔAUC +0.109; proteomics ΔAUC +0.125). SHAP analysis revealed MRI-derived features contributed 57.7% of predictive power, primarily through peritumoral stromal heterogeneity quantification. Proteomics identified 10 key chemoresistance proteins: CYBA, GUSB, ATP6AP2, DYNC1I2, DAD1, ACOX1, COPG1, FBP1, DHRS7, and SSR3. Decision curve analysis confirmed clinical utility across threshold probabilities (0-0.75). Our study established a novel MRI-proteomics integration framework for NAC response prediction, with MRI defining spatial resistance patterns and proteomics deciphering molecular drivers, enabling early organ preservation strategies. The zero-imputation design ensured deployability in diverse clinical settings.

Multidisciplinary Consensus Prostate Contours on Magnetic Resonance Imaging: Educational Atlas and Reference Standard for Artificial Intelligence Benchmarking.

Song Y, Dornisch AM, Dess RT, Margolis DJA, Weinberg EP, Barrett T, Cornell M, Fan RE, Harisinghani M, Kamran SC, Lee JH, Li CX, Liss MA, Rusu M, Santos J, Sonn GA, Vidic I, Woolen SA, Dale AM, Seibert TM

pubmed logopapers | Sep 1 2025
Evaluation of artificial intelligence (AI) algorithms for prostate segmentation is challenging because ground truth is lacking. We aimed to: (1) create a reference standard data set with precise prostate contours by expert consensus, and (2) evaluate various AI tools against this standard. We obtained prostate magnetic resonance imaging cases from six institutions from the Qualitative Prostate Imaging Consortium. A panel of 4 experts (2 genitourinary radiologists and 2 prostate radiation oncologists) meticulously developed consensus prostate segmentations on axial T<sub>2</sub>-weighted series. We evaluated the performance of 6 AI tools (3 commercially available and 3 academic) using Dice scores, distance from reference contour, and volume error. The panel achieved consensus prostate segmentation on each slice of all 68 patient cases included in the reference data set. We present 2 patient examples to serve as contouring guides. Depending on the AI tool, median Dice scores (across patients) ranged from 0.80 to 0.94 for whole prostate segmentation. For a typical (median) patient, AI tools had a mean error over the prostate surface ranging from 1.3 to 2.4 mm. They maximally deviated 3.0 to 9.4 mm outside the prostate and 3.0 to 8.5 mm inside the prostate for a typical patient. Error in prostate volume measurement for a typical patient ranged from 4.3% to 31.4%. We established an expert consensus benchmark for prostate segmentation. The best-performing AI tools have typical accuracy greater than that reported for radiation oncologists using computed tomography scans (the most common clinical approach for radiation therapy planning). Physician review remains essential to detect occasional major errors.
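The volume-error metric reported above (4.3% to 31.4% for a typical patient) is straightforward to sketch on toy voxel masks; the voxel volume parameter here is a hypothetical placeholder, not a value from the study:

```python
import numpy as np

def volume_error_pct(seg, ref, voxel_volume_mm3=1.0):
    """Absolute segmentation volume error as a percentage of reference volume."""
    v_seg = np.count_nonzero(seg) * voxel_volume_mm3
    v_ref = np.count_nonzero(ref) * voxel_volume_mm3
    return abs(v_seg - v_ref) / v_ref * 100.0

ref = np.ones(100, dtype=bool)        # toy reference prostate: 100 voxels
seg = np.ones(100, dtype=bool)
seg[:8] = False                       # AI contour misses 8 of those voxels
err = volume_error_pct(seg, ref)      # 8% volume error
```

Volume error is insensitive to where the contour deviates, which is why the panel also reports surface-distance metrics alongside Dice.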

Combining curriculum learning and weakly supervised attention for enhanced thyroid nodule assessment in ultrasound imaging.

Keatmanee C, Songsaeng D, Klabwong S, Nakaguro Y, Kunapinun A, Ekpanyapong M, Dailey MN

pubmed logopapers | Sep 1 2025
The accurate assessment of thyroid nodules, which are increasingly common with age and lifestyle factors, is essential for early malignancy detection. Ultrasound imaging, the primary diagnostic tool for this purpose, holds promise when paired with deep learning. However, challenges persist with small datasets, where conventional data augmentation can introduce noise and obscure essential diagnostic features. To address dataset imbalance and enhance model generalization, this study integrates curriculum learning with a weakly supervised attention network and attention-guided data augmentation to improve diagnostic accuracy for thyroid nodule classification. Using verified datasets from Siriraj Hospital, the model was trained progressively, beginning with simpler images and gradually incorporating more complex cases. This structured learning approach is designed to enhance the model's diagnostic accuracy by refining its ability to distinguish benign from malignant nodules. Among the curriculum learning schemes tested, scheme IV achieved the best results, with a precision of 100% for benign and 70% for malignant nodules, a recall of 82% for benign and 100% for malignant, and F1-scores of 90% and 83%, respectively. This structured approach improved the model's diagnostic sensitivity and robustness. These findings suggest that automated thyroid nodule assessment, supported by curriculum learning, has the potential to complement radiologists in clinical practice, enhancing diagnostic accuracy and aiding in more reliable malignancy detection.
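The staged easy-to-hard training described above can be sketched generically. The difficulty scores below are hypothetical, since the abstract does not detail the paper's actual ordering criterion:

```python
def curriculum_stages(samples, difficulty, n_stages=3):
    """Yield growing training pools, easiest samples first (curriculum learning)."""
    ordered = [s for _, s in sorted(zip(difficulty, samples))]
    step = -(-len(ordered) // n_stages)  # ceiling division
    for stage in range(1, n_stages + 1):
        yield ordered[: min(stage * step, len(ordered))]

images = ["img_a", "img_b", "img_c", "img_d", "img_e", "img_f"]
scores = [0.9, 0.1, 0.5, 0.3, 0.7, 0.2]   # hypothetical: higher = harder
pools = list(curriculum_stages(images, scores))
# pools[0] holds the 2 easiest images; pools[-1] holds all 6
```

Each pool would be used for one training phase, so the model sees clean, easy cases before ambiguous ones, which is the mechanism the study credits for its robustness on a small dataset.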