A fully automated AI-based method for tumour detection and quantification on [18F]PSMA-1007 PET-CT images in prostate cancer.

Trägårdh E, Ulén J, Enqvist O, Larsson M, Valind K, Minarik D, Edenbrandt L

PubMed · Aug 20, 2025
In this study, we further developed an artificial intelligence (AI)-based method for the detection and quantification of tumours in the prostate, lymph nodes and bone in prostate-specific membrane antigen (PSMA)-targeting positron emission tomography with computed tomography (PET-CT) images. A total of 1064 [18F]PSMA-1007 PET-CT scans were used (approximately twice as many as for our previous AI model), of which 120 served as the test set. Suspected lesions were manually annotated and used as ground truth. A convolutional neural network was developed and trained. Sensitivity and positive predictive value (PPV) were calculated using two sets of manual segmentations as reference, and results were compared to our previously developed AI method. The correlations between manual and AI-based calculations of total lesion volume (TLV) and total lesion uptake (TLU) were also calculated. The sensitivities of the AI method were 85% for prostate tumour/recurrence, 91% for lymph node metastases and 61% for bone metastases (82%, 86% and 70% for manual readings and 66%, 88% and 71% for the old AI method). The PPVs of the AI method were 85%, 83% and 58%, respectively (63%, 86% and 39% for manual readings, and 69%, 70% and 39% for the old AI method). The correlations between manual and AI-based calculations of TLV and TLU ranged from r = 0.62 to r = 0.96. The performance of the newly developed and fully automated AI-based method for detecting and quantifying prostate tumour and suspected lymph node and bone metastases increased significantly, especially the PPV. The AI method is freely available to other researchers (www.recomia.org).
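
TLV and TLU are reported here without an explicit formula; under the usual definitions (TLV as the summed volume of lesion voxels, TLU as the SUV integrated over that volume, i.e. SUVmean × TLV), a minimal NumPy sketch of the quantification step might look like the following. The array names and voxel size are illustrative, not from the paper.

```python
import numpy as np

def lesion_burden(suv: np.ndarray, lesion_mask: np.ndarray,
                  voxel_volume_ml: float) -> tuple[float, float]:
    """Compute total lesion volume (TLV, mL) and total lesion uptake
    (TLU, SUV*mL) from a binary lesion mask and an SUV volume."""
    n_voxels = int(lesion_mask.sum())
    tlv = n_voxels * voxel_volume_ml                            # summed lesion-voxel volume
    tlu = float(suv[lesion_mask > 0].sum()) * voxel_volume_ml   # SUV integrated over that volume
    return tlv, tlu

# Toy example: 2 mm isotropic voxels -> 0.008 mL per voxel.
suv = np.random.rand(64, 64, 64) * 10
mask = (suv > 9).astype(np.uint8)
tlv, tlu = lesion_burden(suv, mask, voxel_volume_ml=0.008)
print(f"TLV = {tlv:.2f} mL, TLU = {tlu:.2f} SUV*mL")
```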

[Preoperative discrimination of colorectal mucinous adenocarcinoma using enhanced CT-based radiomics and deep learning fusion model].

Wang BZ, Zhang X, Wang YL, Wang XY, Wang QG, Luo Z, Xu SL, Huang C

PubMed · Aug 20, 2025
Objective: To develop a preoperative differentiation model for colorectal mucinous adenocarcinoma and non-mucinous adenocarcinoma using a combination of contrast-enhanced CT radiomics and deep learning methods. Methods: This is a retrospective case series study. Clinical data of colorectal cancer patients confirmed by postoperative pathological examination were retrospectively collected from January 2016 to December 2023 at Shanghai General Hospital Affiliated to Shanghai Jiao Tong University School of Medicine (Center 1, n=220) and the First Affiliated Hospital of Bengbu Medical University (Center 2, n=51). Among them, 108 patients were diagnosed with mucinous adenocarcinoma (55 males and 53 females; age (68.4±12.2) years, range: 38 to 96 years) and 163 patients with non-mucinous adenocarcinoma (96 males and 67 females; age (67.9±11.0) years, range: 43 to 94 years). The cases from Center 1 were divided into a training set (n=156) and an internal validation set (n=64) using stratified random sampling in a 7:3 ratio, and the cases from Center 2 were used as an independent external validation set (n=51). The three-dimensional tumor volume of interest was manually segmented on venous-phase contrast-enhanced CT images. Radiomics features were extracted using PyRadiomics, and deep learning features were extracted using the ResNet-18 network; the two sets of features were then combined to form a joint feature set. The consistency of manual segmentation was assessed using the intraclass correlation coefficient. Feature dimensionality reduction was performed using the Mann-Whitney U test and least absolute shrinkage and selection operator (LASSO) regression. Six machine learning algorithms were used to construct models based on radiomics features, deep learning features, and combined features: support vector machine, logistic regression, random forest, extreme gradient boosting, k-nearest neighbors, and decision tree. The discriminative performance of each model was evaluated using receiver operating characteristic curves, the area under the curve (AUC), the DeLong test, and decision curve analysis. Results: After feature selection, 22 features with the most discriminative value were retained, of which 12 were traditional radiomics features and 10 were deep learning features. In the internal validation set, the random forest model based on the combined features achieved the best performance (AUC=0.938, 95% CI: 0.875 to 0.984), superior to the single-modality radiomics feature model (AUC=0.817, 95% CI: 0.702 to 0.913, P=0.048) and the deep learning feature model (AUC=0.832, 95% CI: 0.727 to 0.926, P=0.087); in the independent external validation set, the random forest model with the combined features maintained the highest discriminative performance (AUC=0.891, 95% CI: 0.791 to 0.969), superior to the single-modality radiomics feature model (AUC=0.770, 95% CI: 0.636 to 0.890, P=0.045) and the deep learning feature model (AUC=0.799, 95% CI: 0.652 to 0.911, P=0.169). Conclusion: The combined model based on radiomics and deep learning features from venous-phase enhanced CT demonstrates good performance in the preoperative differentiation of colorectal mucinous from non-mucinous adenocarcinoma.
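
As a rough illustration of the fusion pipeline the Methods describe (a joint radiomics + ResNet-18 feature set, Mann-Whitney U filtering, LASSO selection, then a machine-learning classifier), here is a hedged scikit-learn sketch on synthetic feature matrices; in the actual study the columns would come from PyRadiomics and a ResNet-18 applied to segmented venous-phase CT.

```python
import numpy as np
from scipy.stats import mannwhitneyu
from sklearn.linear_model import LassoCV
from sklearn.ensemble import RandomForestClassifier
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X_rad = rng.normal(size=(220, 107))   # stand-in for PyRadiomics features
X_dl = rng.normal(size=(220, 512))    # stand-in for ResNet-18 penultimate-layer features
y = rng.integers(0, 2, size=220)      # 1 = mucinous, 0 = non-mucinous (synthetic)

X = StandardScaler().fit_transform(np.hstack([X_rad, X_dl]))  # joint feature set

# Step 1: univariate filter (Mann-Whitney U test, p < 0.05)
keep = [j for j in range(X.shape[1])
        if mannwhitneyu(X[y == 1, j], X[y == 0, j]).pvalue < 0.05]
X_f = X[:, keep]

# Step 2: LASSO keeps features with non-zero coefficients
lasso = LassoCV(cv=5).fit(X_f, y)
sel = lasso.coef_ != 0
X_sel = X_f[:, sel] if sel.any() else X_f  # fall back if LASSO zeroes everything

# Step 3: random forest on the selected joint features
rf = RandomForestClassifier(n_estimators=500, random_state=0)
print("CV AUC:", cross_val_score(rf, X_sel, y, cv=5, scoring="roc_auc").mean())
```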

[Digital and intelligent medicine empowering precision abdominal surgery: today and the future].

Dong Q, Wang JM, Xiu WL

PubMed · Aug 20, 2025
The complex anatomical structure of the abdominal organs demands high precision in surgical procedures and increases the risk of postoperative complications. Advances in digital medicine have created new opportunities for precision surgery. This article summarizes the current applications of digital intelligence in precision abdominal surgery: medical image processing and real-time monitoring technologies provide powerful tools for accurate diagnosis and treatment, while the big-data analysis and precise classification capabilities of artificial intelligence further enhance diagnostic efficiency and safety. The article also analyzes the advantages and limitations of digital intelligence in empowering precision abdominal surgery and explores future directions for its development.

From Slices to Structures: Unsupervised 3D Reconstruction of Female Pelvic Anatomy from Freehand Transvaginal Ultrasound

Max Krähenmann, Sergio Tascon-Morales, Fabian Laumer, Julia E. Vogt, Ece Ozkan

arXiv preprint · Aug 20, 2025
Volumetric ultrasound has the potential to significantly improve diagnostic accuracy and clinical decision-making, yet its widespread adoption remains limited by dependence on specialized hardware and restrictive acquisition protocols. In this work, we present a novel unsupervised framework for reconstructing 3D anatomical structures from freehand 2D transvaginal ultrasound (TVS) sweeps, without requiring external tracking or learned pose estimators. Our method adapts the principles of Gaussian Splatting to the domain of ultrasound, introducing a slice-aware, differentiable rasterizer tailored to the unique physics and geometry of ultrasound imaging. We model anatomy as a collection of anisotropic 3D Gaussians and optimize their parameters directly from image-level supervision, leveraging sensorless probe motion estimation and domain-specific geometric priors. The result is a compact, flexible, and memory-efficient volumetric representation that captures anatomical detail with high spatial fidelity. This work demonstrates that accurate 3D reconstruction from 2D ultrasound images can be achieved through purely computational means, offering a scalable alternative to conventional 3D systems and enabling new opportunities for AI-assisted analysis and diagnosis.
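
The core representation, anatomy as a sum of anisotropic 3D Gaussians evaluated on a 2D image plane, can be sketched as a toy forward model. This shows only the rendering idea, not the paper's slice-aware differentiable rasterizer or its ultrasound-specific physics; all geometry below is invented for illustration.

```python
import numpy as np

def render_slice(mus, Sigmas, amps, origin, u, v, h, w, spacing):
    """Render a 2D frame by evaluating summed anisotropic 3D Gaussian
    densities on the plane spanned by in-plane unit axes u and v,
    anchored at the 3D frame-corner position `origin`."""
    img = np.zeros((h, w))
    ys, xs = np.mgrid[0:h, 0:w]
    # World coordinates of every pixel on the slice plane: (h, w, 3)
    pts = (origin[None, None] + ys[..., None] * spacing * v[None, None]
           + xs[..., None] * spacing * u[None, None])
    for mu, Sigma, a in zip(mus, Sigmas, amps):
        d = pts - mu                                          # offsets to Gaussian mean
        m = np.einsum("hwi,ij,hwj->hw", d, np.linalg.inv(Sigma), d)
        img += a * np.exp(-0.5 * m)                           # Gaussian density on the plane
    return img

mus = [np.array([0.0, 0.0, 0.0]), np.array([5.0, 2.0, 1.0])]
Sigmas = [np.diag([4.0, 1.0, 1.0]), np.diag([1.0, 3.0, 2.0])]  # anisotropic covariances
frame = render_slice(mus, Sigmas, amps=[1.0, 0.7],
                     origin=np.array([-10.0, -10.0, 0.0]),
                     u=np.array([1.0, 0.0, 0.0]), v=np.array([0.0, 1.0, 0.0]),
                     h=128, w=128, spacing=0.16)
print(frame.shape, frame.max())
```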

Temporal footprint reduction via neural network denoising in 177Lu radioligand therapy.

Nzatsi MC, Varmenot N, Sarrut D, Delpon G, Cherel M, Rousseau C, Ferrer L

PubMed · Aug 20, 2025
Internal vectorised therapies, particularly with [177Lu]-labelled agents, are increasingly used for metastatic prostate cancer and neuroendocrine tumours. However, routine dosimetry for organs-at-risk and tumours remains limited due to the complexity and time requirements of current protocols. We developed a Generative Adversarial Network (GAN) to transform rapid 6 s SPECT projections into synthetic 30 s-equivalent projections. SPECT data from twenty patients and phantom acquisitions were collected at multiple time-points. The GAN accurately predicted 30 s projections, enabling estimation of time-integrated activities in kidneys and liver with maximum errors below 6% and 1%, respectively, compared to standard acquisitions. For tumours and phantom spheres, results were more variable. On phantom data, GAN-inferred reconstructions showed lower biases for spheres of 20, 8, and 1 mL (8.2%, 6.9%, and 21.7%) compared to direct 6 s acquisitions (12.4%, 20.4%, and 24.0%). However, in patient lesions, 37 segmented tumours showed higher median discrepancies in cumulated activity for the GAN (15.4%) than for the 6 s approach (4.1%). Our preliminary results indicate that the GAN can provide reliable dosimetry for organs-at-risk, but further optimisation is needed for small lesion quantification. This approach could reduce SPECT acquisition time from 45 to 9 min for standard three-bed studies, potentially facilitating wider adoption of dosimetry in nuclear medicine and addressing challenges related to toxicity and cumulative absorbed doses in personalised radiopharmaceutical therapy.
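
A paired image-to-image GAN of the kind described (fast 6 s projection in, synthetic 30 s-equivalent projection out) can be sketched in a few lines of PyTorch. This pix2pix-style setup, with an L1 reconstruction term plus an adversarial term, is an assumption about the training recipe; the paper's actual architecture and losses may differ.

```python
import torch
import torch.nn as nn

# Tiny stand-in networks; real generators/discriminators would be deeper.
G = nn.Sequential(nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
                  nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
                  nn.Conv2d(32, 1, 3, padding=1))
D = nn.Sequential(nn.Conv2d(2, 32, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
                  nn.Conv2d(32, 1, 4, stride=2, padding=1))  # PatchGAN-like output map

opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce, l1 = nn.BCEWithLogitsLoss(), nn.L1Loss()

fast = torch.rand(4, 1, 128, 128)   # 6 s projection (synthetic)
slow = torch.rand(4, 1, 128, 128)   # paired 30 s projection (synthetic)

# Discriminator step: real (fast, slow) pairs vs fake (fast, G(fast)) pairs.
fake = G(fast)
d_real = D(torch.cat([fast, slow], 1))
d_fake = D(torch.cat([fast, fake.detach()], 1))
loss_d = bce(d_real, torch.ones_like(d_real)) + bce(d_fake, torch.zeros_like(d_fake))
opt_d.zero_grad(); loss_d.backward(); opt_d.step()

# Generator step: fool D while staying close to the true 30 s projection (L1).
d_fake = D(torch.cat([fast, fake], 1))
loss_g = bce(d_fake, torch.ones_like(d_fake)) + 100.0 * l1(fake, slow)
opt_g.zero_grad(); loss_g.backward(); opt_g.step()
print(float(loss_d), float(loss_g))
```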

Detection of neonatal pneumoperitoneum on radiographs using deep multi-task learning.

Park C, Choi J, Hwang J, Jeong H, Kim PH, Cho YA, Lee BS, Jung E, Kwon SH, Kim M, Jun H, Nam Y, Kim N, Yoon HM

PubMed · Aug 20, 2025
Neonatal pneumoperitoneum is a life-threatening condition requiring prompt diagnosis, yet its subtle radiographic signs pose diagnostic challenges, especially in emergency settings. The aim was to develop and validate a deep multi-task learning model for diagnosing neonatal pneumoperitoneum on radiographs and to assess its clinical utility across clinicians of varying experience levels. This was a retrospective diagnostic study using internal and external datasets, in tertiary hospital and multicenter validation settings. Internal data were collected between January 1995 and August 2018, while external data were sourced from 11 neonatal intensive care units. The internal dataset comprised 204 neonates (546 radiographs); the external dataset comprised 378 radiographs (125 pneumoperitoneum cases, 214 non-pneumoperitoneum cases). Radiographs were reviewed by two pediatric radiologists, and a reader study involved 4 physicians with varying experience levels. The model was a deep multi-task learning network combining classification and segmentation tasks for pneumoperitoneum detection. The primary outcomes included diagnostic accuracy, area under the receiver operating characteristic curve (AUC), and inter-reader agreement; AI-assisted and unassisted reader performance metrics were compared. The AI model achieved an AUC of 0.98 (95% CI, 0.94-1.00) and accuracy of 94% (95% CI, 85.1-99.6) in internal validation, and an AUC of 0.89 (95% CI, 0.85-0.92) with accuracy of 84.1% (95% CI, 80.4-87.8) in external validation. AI assistance improved reader accuracy from 82.5% to 86.6% (p < .001) and inter-reader agreement (kappa ranges improved from 0.33-0.71 to 0.54-0.86). The multi-task learning model demonstrated excellent diagnostic performance and improved clinicians' diagnostic accuracy and agreement, suggesting its potential to enhance care in neonatal intensive care settings. All code is available at https://github.com/brody9512/NEC_MTL.
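
A multi-task design of this kind typically shares an encoder between an image-level classification head and a pixel-level segmentation head, trained with a weighted joint loss. The sketch below shows that pattern on toy tensors; the published model's backbone, heads, and loss weights are not given in the abstract, so everything here is illustrative.

```python
import torch
import torch.nn as nn

class MultiTaskNet(nn.Module):
    """Shared encoder feeding a classification head and a segmentation head."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU())
        self.cls_head = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                                      nn.Linear(32, 1))   # image-level pneumoperitoneum logit
        self.seg_head = nn.Conv2d(32, 1, 1)               # per-pixel free-air logit

    def forward(self, x):
        z = self.encoder(x)
        return self.cls_head(z), self.seg_head(z)

net = MultiTaskNet()
x = torch.rand(2, 1, 256, 256)                        # radiographs (synthetic)
y_cls = torch.tensor([[1.0], [0.0]])                  # image-level labels
y_seg = (torch.rand(2, 1, 256, 256) > 0.99).float()   # sparse pixel-level masks

cls_logit, seg_logit = net(x)
bce = nn.BCEWithLogitsLoss()
loss = bce(cls_logit, y_cls) + 0.5 * bce(seg_logit, y_seg)  # weighted joint objective
loss.backward()
print(float(loss))
```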

Machine learning-assisted radiogenomic analysis for miR-15a expression prediction in renal cell carcinoma.

Mytsyk Y, Kowal P, Kobilnyk Y, Lesny M, Skrzypczyk M, Stroj D, Dosenko V, Kucheruk O

PubMed · Aug 20, 2025
Renal cell carcinoma (RCC) is a prevalent malignancy with highly variable outcomes. MicroRNA-15a (miR-15a) has emerged as a promising prognostic biomarker in RCC, linked to angiogenesis, apoptosis, and proliferation. Radiogenomics integrates radiological features with molecular data to non-invasively predict biomarkers, offering valuable insights for precision medicine. This study aimed to develop a machine learning-assisted radiogenomic model to predict miR-15a expression in RCC. A retrospective analysis was conducted on 64 RCC patients who underwent preoperative multiphase contrast-enhanced CT or MRI. Radiological features, including tumor size, necrosis, and nodular enhancement, were evaluated. MiR-15a expression was quantified using real-time qPCR from archived tissue samples. Polynomial regression and Random Forest models were employed for prediction, and hierarchical clustering with K-means analysis was used for phenotypic stratification. Statistical significance was assessed using non-parametric tests and machine learning performance metrics. Tumor size was the strongest radiological predictor of miR-15a expression (adjusted R² = 0.8281, p < 0.001). High miR-15a levels correlated with aggressive features, including necrosis and nodular enhancement (p < 0.05), while lower levels were associated with cystic components and macroscopic fat. The Random Forest regression model explained 65.8% of the variance in miR-15a expression (R² = 0.658). For classification, the Random Forest classifier demonstrated exceptional performance, achieving an AUC of 1.0, a precision of 1.0, a recall of 0.9, and an F1-score of 0.95. Hierarchical clustering effectively segregated tumors into aggressive and indolent phenotypes, consistent with clinical expectations. Radiogenomic analysis using machine learning provides a robust, non-invasive approach to predicting miR-15a expression, enabling enhanced tumor stratification and personalized RCC management. These findings underscore the clinical utility of integrating radiological and molecular data, paving the way for broader adoption of precision medicine in oncology.
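
The two model roles described, a Random Forest regressor for continuous miR-15a expression and a Random Forest classifier for high- versus low-expression stratification, map directly onto scikit-learn. A hedged sketch on synthetic stand-ins for the radiological features:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor, RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score, roc_auc_score, f1_score

rng = np.random.default_rng(1)
# Placeholder features: tumor size (cm) plus binary flags for necrosis,
# nodular enhancement, cystic component, macroscopic fat -- all synthetic.
X = np.column_stack([rng.uniform(1, 12, 64), rng.integers(0, 2, (64, 4))])
y_expr = 0.8 * X[:, 0] + rng.normal(0, 1, 64)       # synthetic miR-15a expression
y_high = (y_expr > np.median(y_expr)).astype(int)   # high vs low expression label

Xtr, Xte, ytr, yte, ctr, cte = train_test_split(X, y_expr, y_high,
                                                test_size=0.25, random_state=1)

reg = RandomForestRegressor(n_estimators=500, random_state=1).fit(Xtr, ytr)
clf = RandomForestClassifier(n_estimators=500, random_state=1).fit(Xtr, ctr)

print("R^2:", r2_score(yte, reg.predict(Xte)))
print("AUC:", roc_auc_score(cte, clf.predict_proba(Xte)[:, 1]),
      "F1:", f1_score(cte, clf.predict(Xte)))
```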

A machine learning-based decision support tool for standardizing intracavitary versus interstitial brachytherapy technique selection in high-dose-rate cervical cancer.

Kajikawa T, Masui K, Sakai K, Takenaka T, Suzuki G, Yoshino Y, Nemoto H, Yamazaki H, Yamada K

PubMed · Aug 20, 2025
To develop and evaluate a machine-learning (ML) decision-support tool that standardizes selection of intracavitary brachytherapy (ICBT) versus hybrid intracavitary/interstitial brachytherapy (IC/ISBT) in high-dose-rate (HDR) cervical cancer. We retrospectively analyzed 159 HDR brachytherapy plans from 50 consecutive patients treated between April 2022 and June 2024. Brachytherapy techniques (ICBT or IC/ISBT) were determined by an experienced radiation oncologist using CT/MRI-based 3-D image-guided brachytherapy. For each plan, 144 shape- and distance-based geometric features describing the high-risk clinical target volume (HR-CTV), bladder, rectum, and applicator were extracted. Nested five-fold cross-validation combined minimum-redundancy-maximum-relevance feature selection with five classifiers (k-nearest neighbors, logistic regression, naïve Bayes, random forest, support-vector classifier) and two voting ensembles (hard and soft voting). Model performance was benchmarked against single-factor rules (HR-CTV > 30 cm³; maximum lateral HR-CTV-tandem distance > 25 mm). Logistic regression achieved the highest test accuracy (0.849 ± 0.023) and a mean area under the curve (AUC) of 0.903 ± 0.033, outperforming the volume rule and matching the distance rule's AUC (0.907 ± 0.057) while providing greater accuracy (0.849 vs 0.805 ± 0.114). These differences were not statistically significant. Feature-importance analysis showed that the maximum HR-CTV-tandem lateral distance and the bladder's minimal short-axis length consistently dominated model decisions. Conclusions: A compact ML tool using two readily measurable geometric features can reliably assist clinicians in choosing between ICBT and IC/ISBT, thereby reducing inter-physician variability and promoting standardized HDR cervical brachytherapy technique selection.
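
Nested cross-validation with fold-internal feature selection and a soft-voting ensemble of the five listed classifiers can be sketched with scikit-learn. Since scikit-learn has no built-in mRMR selector, an ANOVA F-score filter stands in for it below; the features and labels are synthetic.

```python
import numpy as np
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.model_selection import GridSearchCV, cross_val_score, StratifiedKFold

rng = np.random.default_rng(2)
X = rng.normal(size=(159, 144))    # 144 geometric features per plan (synthetic)
y = rng.integers(0, 2, size=159)   # 0 = ICBT, 1 = IC/ISBT (synthetic)

# Soft-voting ensemble over the five classifier families named in the abstract.
voter = VotingClassifier([("knn", KNeighborsClassifier()),
                          ("lr", LogisticRegression(max_iter=1000)),
                          ("nb", GaussianNB()),
                          ("rf", RandomForestClassifier(random_state=2)),
                          ("svc", SVC(probability=True, random_state=2))],
                         voting="soft")
pipe = Pipeline([("scale", StandardScaler()),
                 ("select", SelectKBest(f_classif)),  # stand-in for mRMR
                 ("clf", voter)])

# Inner loop tunes the number of selected features; outer loop estimates
# generalization -- selection happens inside each fold, avoiding leakage.
inner = GridSearchCV(pipe, {"select__k": [2, 5, 10]},
                     cv=StratifiedKFold(5), scoring="roc_auc")
outer_auc = cross_val_score(inner, X, y, cv=StratifiedKFold(5), scoring="roc_auc")
print("nested-CV AUC: %.3f ± %.3f" % (outer_auc.mean(), outer_auc.std()))
```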

Advanced liver fibrosis detection using a two-stage deep learning approach on standard T2-weighted MRI.

Gupta P, Singh S, Gulati A, Dutta N, Aggarwal Y, Kalra N, Premkumar M, Taneja S, Verma N, De A, Duseja A

PubMed · Aug 19, 2025
To develop and validate a deep learning model for automated detection of advanced liver fibrosis using standard T2-weighted MRI. We utilized two datasets: the public CirrMRI600+ dataset (n = 374) containing T2-weighted MRI scans from patients with cirrhosis (n = 318) and healthy subjects (n = 56), and an in-house dataset of chronic liver disease patients (n = 187). A two-stage deep learning pipeline was developed: first, an automated liver segmentation model using the nnU-Net architecture, trained on CirrMRI600+ and then applied to segment livers in our in-house dataset; second, a Masked Attention ResNet classification model. For classification model training, patients with liver stiffness measurement (LSM) > 12 kPa were classified as advanced fibrosis (n = 104), while healthy subjects from CirrMRI600+ and patients with LSM ≤ 12 kPa were classified as non-advanced fibrosis (n = 116). Model validation was performed exclusively on a separate test set of 23 patients with histopathological confirmation of the degree of fibrosis (METAVIR ≥ F3 indicating advanced fibrosis). We additionally compared our two-stage approach with direct classification without segmentation, and evaluated alternative architectures including DenseNet121 and SwinTransformer. The liver segmentation model performed excellently on the test set (mean Dice score: 0.960 ± 0.009; IoU: 0.923 ± 0.016). On the pathologically confirmed independent test set (n = 23), our two-stage model achieved strong diagnostic performance (sensitivity: 0.778, specificity: 0.800, AUC: 0.811, accuracy: 0.783), significantly outperforming direct classification without segmentation (AUC: 0.743). Classification performance was highly dependent on segmentation quality: cases with excellent segmentation (score 1) showed higher accuracy (0.818) than those with poor segmentation (score 3, accuracy: 0.625). Alternative architectures with masked attention showed comparable but slightly lower performance (DenseNet121: AUC 0.795; SwinTransformer: AUC 0.782). Our fully automated deep learning pipeline effectively detects advanced liver fibrosis using standard non-contrast T2-weighted MRI, potentially offering a non-invasive alternative to current diagnostic approaches. The segmentation-first approach provides significant performance gains over direct classification.
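
The second stage, classification restricted to the segmented liver, can be approximated by masking the input with the stage-one liver mask before a ResNet. This input masking is a simple stand-in for the paper's masked-attention mechanism, whose exact form the abstract does not specify; the tensors below are synthetic.

```python
import torch
import torch.nn as nn
from torchvision.models import resnet18

# Adapt a ResNet-18 to single-channel MRI and a single fibrosis logit.
backbone = resnet18(weights=None)
backbone.conv1 = nn.Conv2d(1, 64, 7, stride=2, padding=3, bias=False)  # 1-channel input
backbone.fc = nn.Linear(backbone.fc.in_features, 1)                    # advanced-fibrosis logit

t2_slice = torch.rand(4, 1, 224, 224)                     # T2-weighted slices (synthetic)
liver_mask = (torch.rand(4, 1, 224, 224) > 0.5).float()   # stage-1 segmentation output (synthetic)

logit = backbone(t2_slice * liver_mask)   # zero out non-liver pixels before classifying
prob = torch.sigmoid(logit)
print(prob.squeeze(1))
```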

A Multimodal Large Language Model as an End-to-End Classifier of Thyroid Nodule Malignancy Risk: Usability Study.

Sng GGR, Xiang Y, Lim DYZ, Tung JYM, Tan JH, Chng CL

PubMed · Aug 19, 2025
Thyroid nodules are common, with ultrasound imaging as the primary modality for their assessment. Risk stratification systems like the American College of Radiology Thyroid Imaging Reporting and Data System (ACR TI-RADS) have been developed but suffer from interobserver variability and low specificity. Artificial intelligence, particularly large language models (LLMs) with multimodal capabilities, presents opportunities for efficient end-to-end diagnostic processes; however, their clinical utility remains uncertain. This study evaluates the accuracy and consistency of multimodal LLMs for thyroid nodule risk stratification using the ACR TI-RADS system, examining the effects of model fine-tuning, image annotation, and prompt engineering, and comparing open-source versus commercial models. In total, 3 multimodal vision-language models were evaluated: Microsoft's open-source Large Language and Visual Assistant (LLaVA) model, its medically fine-tuned variant (Large Language and Vision Assistant for bioMedicine [LLaVA-Med]), and OpenAI's commercial o3 model. A total of 192 thyroid nodules from publicly available ultrasound image datasets were assessed. Each model was evaluated using 2 prompts (basic and modified) and 2 image scenarios (unlabeled vs radiologist-annotated), yielding 6912 responses. Model outputs were compared with expert ratings for accuracy and consistency. Statistical comparisons included chi-square tests, Mann-Whitney U tests, and Fleiss' kappa for interrater reliability. Overall, 88.4% (6110/6912) of responses were valid, with the o3 model producing the highest validity rate (2273/2304, 98.6%), followed by LLaVA (2108/2304, 91.5%) and LLaVA-Med (1729/2304, 75%; P<.001). The o3 model demonstrated the highest accuracy overall, achieving up to 57.3% accuracy in TI-RADS classification, though still suboptimal. Labeled images improved accuracy marginally in nodule margin assessment only when evaluating LLaVA models (407/768, 53% to 447/768, 58.2%; P=.04). Prompt engineering improved accuracy for composition (649/1152, 56.3% vs 483/1152, 41.9%; P<.001) but significantly reduced accuracy for shape, margins, and overall classification. Consistency was highest with the o3 model (up to 85.4%), was comparable for LLaVA, and significantly improved with image labeling and modified prompts across multiple TI-RADS categories (P<.001). Subgroup analysis for o3 alone showed that prompt engineering did not affect accuracy significantly but markedly improved consistency across all TI-RADS categories (up to 97.1% for shape, P<.001). Interrater reliability was consistently poor across all combinations (Fleiss' kappa < 0.60). The study demonstrates the comparative advantages and limitations of multimodal LLMs for thyroid nodule risk stratification. While the commercial model (o3) consistently outperformed open-source models in accuracy and consistency, even the best-performing model's outputs remained suboptimal for direct clinical deployment. Prompt engineering significantly enhanced output consistency, particularly in the commercial model. These findings underline the importance of strategic model optimization techniques and highlight areas requiring further development before multimodal LLMs can be reliably used in clinical thyroid imaging workflows.
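
The reported interrater reliability (Fleiss' kappa < 0.60) is straightforward to reproduce as a computation: treat repeated model runs (or readers) as raters over the 192 nodules and apply Fleiss' kappa. A sketch with synthetic ratings using statsmodels:

```python
import numpy as np
from statsmodels.stats.inter_rater import aggregate_raters, fleiss_kappa

rng = np.random.default_rng(3)
# Rows are nodules, columns are independent runs/raters; entries are
# TI-RADS levels 1-5. These ratings are synthetic, for illustration only.
ratings = rng.integers(1, 6, size=(192, 3))

table, _ = aggregate_raters(ratings)        # per-nodule counts of each category
kappa = fleiss_kappa(table)
print("Fleiss' kappa:", kappa)              # the study treats < 0.60 as poor agreement
```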