
Multicenter Validation of Automated Segmentation and Composition Analysis of Lumbar Paraspinal Muscles Using Multisequence MRI.

Zhang Z, Hides JA, De Martino E, Millner J, Tuxworth G

pubmed paper · Aug 20 2025
<i>"Just Accepted" papers have undergone full peer review and have been accepted for publication in <i>Radiology: Artificial Intelligence</i>. This article will undergo copyediting, layout, and proof review before it is published in its final version. Please note that during production of the final copyedited article, errors may be discovered which could affect the content</i>. Chronic low back pain is a global health issue with considerable socioeconomic burdens and is associated with changes in lumbar paraspinal muscles (LPM). In this retrospective study, a deep learning method was trained and externally validated for automated LPM segmentation, muscle volume quantification, and fatty infiltration assessment across multisequence MRIs. A total of 1,302 MRIs from 641 participants across five centers were included. Data from two centers were used for model training and tuning, while data from the remaining three centers were used for external testing. Model segmentation performance was evaluated against manual segmentation using the Dice similarity coefficient (DSC), and measurement accuracy was assessed using two one-sided tests and Intraclass Correlation Coefficients (ICCs). The model achieved global DSC values of 0.98 on the internal test set and 0.93 to 0.97 on external test sets. Statistical equivalence between automated and manual measurements of muscle volume and fat ratio was confirmed in most regions (<i>P</i> < .05). Agreement between automated and manual measurements was high (ICCs > 0.92). In conclusion, the proposed automated method accurately segmented LPM and demonstrated statistical equivalence to manual measurements of muscle volume and fatty infiltration ratio across multisequence, multicenter MRIs. ©RSNA, 2025.

Characterizing the Impact of Training Data on Generalizability: Application in Deep Learning to Estimate Lung Nodule Malignancy Risk.

Obreja B, Bosma J, Venkadesh KV, Saghir Z, Prokop M, Jacobs C

pubmed paper · Aug 20 2025
<i>"Just Accepted" papers have undergone full peer review and have been accepted for publication in <i>Radiology: Artificial Intelligence</i>. This article will undergo copyediting, layout, and proof review before it is published in its final version. Please note that during production of the final copyedited article, errors may be discovered which could affect the content</i>. Purpose To investigate the relationship between training data volume and performance of a deep learning AI algorithm developed to assess the malignancy risk of pulmonary nodules detected on low-dose CT scans in lung cancer screening. Materials and Methods This retrospective study used a dataset of 16077 annotated nodules (1249 malignant, 14828 benign) from the National Lung Screening Trial (NLST) to systematically train an AI algorithm for pulmonary nodule malignancy risk prediction across various stratified subsets ranging from 1.25% to the full dataset. External testing was conducted using data from the Danish Lung Cancer Screening Trial (DLCST) to determine the amount of training data at which the performance of the AI was statistically non-inferior to the AI trained on the full NLST cohort. A size-matched cancer-enriched subset of DLCST, where each malignant nodule had been paired in diameter with the closest two benign nodules, was used to investigate the amount of training data at which the performance of the AI algorithm was statistically non-inferior to the average performance of 11 clinicians. Results The external testing set included 599 participants (mean age 57.65 (SD 4.84) for females and mean age 59.03 (SD 4.94) for males) with 883 nodules (65 malignant, 818 benign). The AI achieved a mean AUC of 0.92 [95% CI: 0.88, 0.96] on the DLCST cohort when trained on the full NLST dataset. Training with 80% of NLST data resulted in non-inferior performance (mean AUC 0.92 [95%CI: 0.89, 0.96], <i>P</i> = .005). On the size-matched DLCST subset (59 malignant, 118 benign), the AI reached non-inferior clinician-level performance (mean AUC 0.82 [95% CI: 0.77, 0.86]) with 20% of the training data (<i>P</i> = .02). Conclusion The deep learning AI algorithm demonstrated excellent performance in assessing pulmonary nodule malignancy risk, achieving clinical level performance with a fraction of the training data and reaching peak performance before utilizing the full dataset. ©RSNA, 2025.

Sarcopenia Assessment Using Fully Automated Deep Learning Predicts Cardiac Allograft Survival in Heart Transplant Recipients.

Lang FM, Liu J, Clerkin KJ, Driggin EA, Einstein AJ, Sayer GT, Takeda K, Uriel N, Summers RM, Topkara VK

pubmed paper · Aug 20 2025
Sarcopenia is associated with adverse outcomes in patients with end-stage heart failure. Muscle mass can be quantified via manual segmentation of computed tomography images, but this approach is time-consuming and subject to interobserver variability. We sought to determine whether fully automated assessment of radiographic sarcopenia by deep learning would predict heart transplantation outcomes. This retrospective study included 164 adult patients who underwent heart transplantation between January 2013 and December 2022. A deep learning-based tool was utilized to automatically calculate cross-sectional skeletal muscle area at the T11, T12, and L1 levels on chest computed tomography. Radiographic sarcopenia was defined as a skeletal muscle index (skeletal muscle area divided by height squared) in the lowest sex-specific quartile. The study population had a mean age of 53±14 years and was predominantly male (75%) with a nonischemic cause (73%). Mean skeletal muscle index was 28.3±7.6 cm²/m² for females versus 33.1±8.1 cm²/m² for males (P<0.001). Cardiac allograft survival was significantly lower in heart transplant recipients with versus without radiographic sarcopenia at T11 (90% versus 98% at 1 year, 83% versus 97% at 3 years, log-rank P=0.02). After multivariable adjustment, radiographic sarcopenia at T11 was associated with an increased risk of cardiac allograft loss or death (hazard ratio, 3.86 [95% CI, 1.35-11.0]; P=0.01). Patients with radiographic sarcopenia also had a significantly increased hospital length of stay (28 [interquartile range, 19-33] versus 20 [interquartile range, 16-31] days; P=0.046). Fully automated quantification of radiographic sarcopenia using pretransplant chest computed tomography successfully predicts cardiac allograft survival. By avoiding interobserver variability and accelerating computation, this approach has the potential to improve candidate selection and outcomes in heart transplantation.
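
The sarcopenia definition used here, skeletal muscle index in the lowest sex-specific quartile, is straightforward to compute once the muscle areas are available; below is a minimal sketch with made-up values (column names and the tiny cohort are illustrative assumptions).

```python
import pandas as pd

# Illustrative cohort; the flag follows the abstract's definition.
df = pd.DataFrame({
    "sex": ["F", "F", "F", "F", "M", "M", "M", "M"],
    "muscle_area_cm2": [62.0, 75.5, 81.2, 90.1, 95.0, 104.3, 118.7, 126.2],  # e.g., at T11
    "height_m": [1.60, 1.65, 1.62, 1.70, 1.75, 1.80, 1.78, 1.83],
})
# Skeletal muscle index: area divided by height squared (cm^2/m^2).
df["smi"] = df["muscle_area_cm2"] / df["height_m"] ** 2

# Radiographic sarcopenia: SMI in the lowest sex-specific quartile.
q1 = df.groupby("sex")["smi"].transform(lambda s: s.quantile(0.25))
df["sarcopenic"] = df["smi"] <= q1
print(df[["sex", "smi", "sarcopenic"]])
```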

Physician-in-the-Loop Active Learning in Radiology Artificial Intelligence Workflows: Opportunities, Challenges, and Future Directions.

Luo M, Yousefirizi F, Rouzrokh P, Jin W, Alberts I, Gowdy C, Bouchareb Y, Hamarneh G, Klyuzhin I, Rahmim A

pubmed paper · Aug 20 2025
Artificial intelligence (AI) is being explored for a growing range of applications in radiology, including image reconstruction, image segmentation, synthetic image generation, disease classification, worklist triage, and examination scheduling. However, training accurate AI models typically requires substantial amounts of expert-labeled data, which can be time-consuming and expensive to obtain. Active learning offers a potential strategy for mitigating the impacts of such labeling requirements. In contrast with other machine-learning approaches used for data-limited situations, active learning aims to produce labeled datasets by identifying the most informative or uncertain data for human annotation, thereby reducing labeling burden to improve model performance under constrained datasets. This Review explores the application of active learning to radiology AI, focusing on the role of active learning in reducing the resources needed to train radiology AI models while enhancing physician-AI interaction and collaboration. We discuss how active learning can be incorporated into radiology workflows to promote physician-in-the-loop AI systems, presenting key active learning concepts and use cases for radiology-based tasks, including through literature-based examples. Finally, we provide summary recommendations for the integration of active learning in radiology workflows while highlighting relevant opportunities, challenges, and future directions.
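
As a concrete illustration of the uncertainty-sampling idea the Review builds on, the toy loop below queries the least confident predictions for labeling; the classifier, pool sizes, and query budget are arbitrary stand-ins rather than a radiology pipeline.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=2000, n_features=10, random_state=0)
# Seed pool: 10 labeled examples per class; the rest is the unlabeled pool.
labeled = np.concatenate([np.flatnonzero(y == 0)[:10], np.flatnonzero(y == 1)[:10]])
unlabeled = np.setdiff1d(np.arange(len(y)), labeled)

model = LogisticRegression(max_iter=1000)
for r in range(5):
    model.fit(X[labeled], y[labeled])
    # Least-confidence sampling: query where the top class probability is lowest.
    conf = model.predict_proba(X[unlabeled]).max(axis=1)
    query = unlabeled[np.argsort(conf)[:10]]     # 10 most uncertain cases
    labeled = np.concatenate([labeled, query])   # simulate expert annotation
    unlabeled = np.setdiff1d(unlabeled, query)
    print(f"round {r}: {labeled.size} labels, accuracy = {model.score(X, y):.3f}")
```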

MedImg: An Integrated Database for Public Medical Image.

Zhong B, Fan R, Ma Y, Ji X, Cui Q, Cui C

pubmed paper · Aug 20 2025
The advancements in deep learning algorithms for medical image analysis have garnered significant attention in recent years. While several studies show promising results, with models achieving or even surpassing human performance, translating these advancements into clinical practice is still accompanied by various challenges. A primary obstacle lies in the availability of large-scale, well-characterized datasets for validating the generalization of approaches. To address this challenge, we curated a diverse collection of medical image datasets from multiple public sources, containing 105 datasets and a total of 1,995,671 images. These images span 14 modalities, including X-ray, computed tomography, magnetic resonance imaging, optical coherence tomography, ultrasound, and endoscopy, and originate from 13 organs, such as the lung, brain, eye, and heart. Subsequently, we constructed an online database, MedImg, which incorporates and systematically organizes these medical images to facilitate data accessibility. MedImg serves as an intuitive and open-access platform for facilitating research in deep learning-based medical image analysis, accessible at https://www.cuilab.cn/medimg/.

[Effectiveness of the Generative Pre-treatment Tool of Skeletal Pathology in functional lumbar spine radiographic analysis].

Yilihamu Y, Zhao K, Zhong H, Feng SQ

pubmed paper · Aug 20 2025
Objective: To investigate the effectiveness of the artificial intelligence (AI)-based Generative Pre-treatment Tool of Skeletal Pathology (GPTSP) in measuring functional lumbar radiographic examinations. Methods: This retrospective case series reviewed the clinical and imaging data of 34 patients who underwent lumbar dynamic X-ray radiography at the Department of Orthopedics, the Second Hospital of Shandong University, from September 2021 to June 2023. Thirteen patients were male and 21 were female, with an age of 68.0±8.0 years (range: 55 to 88 years). The AI model of the GPTSP system was built on the YOLOv8 model with a multi-dimensional constrained loss function, incorporating Kullback-Leibler divergence to quantify the anatomical distribution deviation of lumbar intervertebral space detection boxes, along with a global dynamic attention mechanism. The model identifies lumbar vertebral body edge points and measures the lumbar intervertebral spaces. The spondylolisthesis index, lumbar index, and lumbar intervertebral angles were measured using three methods: manual measurement by physicians, predefined annotated measurement, and AI-assisted measurement. Consistency between the physicians and the AI model was analyzed using the intraclass correlation coefficient (ICC) and the Kappa coefficient. Results: AI-assisted measurement time was 1.5±0.1 seconds (range: 1.3 to 1.7 seconds), shorter than both manual measurement (2,064.4±108.2 seconds; range: 1,768.3 to 2,217.6 seconds) and predefined annotated measurement (602.0±48.9 seconds; range: 503.9 to 694.4 seconds). Kappa values between physicians' and the AI model's diagnoses (based on the GPTSP platform) for the spondylolisthesis index, lumbar index, and intervertebral angles across the three methods were 0.95, 0.92, and 0.82 (all P<0.01), with ICC values consistently exceeding 0.90, indicating high consistency. Using physicians' manual measurements as the reference, the mean annotation error decreased from 2.52 mm (range: 0.01 to 6.78 mm) with predefined annotated measurement to 1.47 mm (range: 0 to 5.03 mm) with AI assistance. Conclusions: The GPTSP system enhanced the efficiency of functional lumbar analysis, and the AI model demonstrated high consistency in annotation and measurement results, showing strong potential as a reliable clinical auxiliary tool.
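
The abstract does not spell out the loss, so the following is only a hypothetical sketch of how a Kullback-Leibler term could penalize deviation between the detector's intervertebral-space box-center distribution and an expected anatomical one; the histogram binning and the kl_distribution_penalty helper are invented for illustration, not the authors' formulation.

```python
import torch
import torch.nn.functional as F

def kl_distribution_penalty(pred_centers: torch.Tensor,
                            ref_centers: torch.Tensor,
                            n_bins: int = 16) -> torch.Tensor:
    """KL(ref || pred) between histograms of normalized box-center heights.
    Note: torch.histc is not differentiable; a trainable loss would need
    soft binning, omitted here for brevity."""
    pred_hist = torch.histc(pred_centers, bins=n_bins, min=0.0, max=1.0) + 1e-6
    ref_hist = torch.histc(ref_centers, bins=n_bins, min=0.0, max=1.0) + 1e-6
    pred_hist = pred_hist / pred_hist.sum()
    ref_hist = ref_hist / ref_hist.sum()
    # F.kl_div expects log-probabilities as input and probabilities as target.
    return F.kl_div(pred_hist.log(), ref_hist, reduction="sum")

pred = torch.rand(50)  # predicted box-center heights, normalized to [0, 1]
ref = torch.rand(50)   # expected anatomical distribution (illustrative)
print(kl_distribution_penalty(pred, ref))
```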

[Preoperative discrimination of colorectal mucinous adenocarcinoma using a fusion model of contrast-enhanced CT-based radiomics and deep learning].

Wang BZ, Zhang X, Wang YL, Wang XY, Wang QG, Luo Z, Xu SL, Huang C

pubmed paper · Aug 20 2025
Objective: To develop a preoperative model differentiating colorectal mucinous adenocarcinoma from non-mucinous adenocarcinoma by combining contrast-enhanced CT radiomics and deep learning. Methods: This retrospective case series collected clinical data of colorectal cancer patients confirmed by postoperative pathological examination between January 2016 and December 2023 at Shanghai General Hospital, Shanghai Jiao Tong University School of Medicine (Center 1, n=220) and the First Affiliated Hospital of Bengbu Medical University (Center 2, n=51). There were 108 patients with mucinous adenocarcinoma (55 males, 53 females; age 68.4±12.2 years, range: 38 to 96 years) and 163 patients with non-mucinous adenocarcinoma (96 males, 67 females; age 67.9±11.0 years, range: 43 to 94 years). The cases from Center 1 were divided into a training set (n=156) and an internal validation set (n=64) by stratified random sampling at a 7:3 ratio, and the cases from Center 2 served as an independent external validation set (n=51). The three-dimensional tumor volume of interest was manually segmented on venous-phase contrast-enhanced CT images. Radiomics features were extracted using PyRadiomics, and deep learning features were extracted using the ResNet-18 network; the two sets of features were combined into a joint feature set. The consistency of manual segmentation was assessed using the intraclass correlation coefficient. Feature dimensionality reduction was performed using the Mann-Whitney U test and least absolute shrinkage and selection operator (LASSO) regression. Six machine learning algorithms were used to construct models based on radiomics features, deep learning features, and combined features: support vector machine, logistic regression, random forest, extreme gradient boosting, k-nearest neighbors, and decision tree. The discriminative performance of each model was evaluated using receiver operating characteristic curves, the area under the curve (AUC), the DeLong test, and decision curve analysis. Results: After feature selection, 22 features with the most discriminative value were retained, of which 12 were traditional radiomics features and 10 were deep learning features. In the internal validation set, the random forest model based on the combined features achieved the best performance (AUC=0.938, 95% CI: 0.875 to 0.984), superior to the single-modality radiomics model (AUC=0.817, 95% CI: 0.702 to 0.913, P=0.048) and the deep learning model (AUC=0.832, 95% CI: 0.727 to 0.926, P=0.087); in the independent external validation set, the random forest model with the combined features maintained the highest discriminative performance (AUC=0.891, 95% CI: 0.791 to 0.969), superior to the single-modality radiomics model (AUC=0.770, 95% CI: 0.636 to 0.890, P=0.045) and the deep learning model (AUC=0.799, 95% CI: 0.652 to 0.911, P=0.169). Conclusion: The combined model based on radiomics and deep learning features from venous-phase contrast-enhanced CT performs well in the preoperative differentiation of colorectal mucinous from non-mucinous adenocarcinoma.
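
The fusion pipeline described, concatenating radiomics and deep features, selecting with LASSO, and classifying with a random forest, can be sketched as below; the feature matrices are random placeholders, and the dimensionalities (100 radiomics features, a 512-dimensional ResNet-18 embedding) are assumptions for illustration.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LassoCV
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n = 220
radiomics = rng.normal(size=(n, 100))   # stand-in for PyRadiomics features
deep = rng.normal(size=(n, 512))        # stand-in for a ResNet-18 tumor embedding
y = rng.integers(0, 2, size=n)          # 1 = mucinous, 0 = non-mucinous (toy labels)

X = StandardScaler().fit_transform(np.hstack([radiomics, deep]))  # joint feature set

# LASSO-based selection: keep features with non-zero coefficients.
lasso = LassoCV(cv=5, random_state=0).fit(X, y)
keep = np.flatnonzero(lasso.coef_)
if keep.size == 0:  # random toy labels may zero everything out; fall back
    keep = np.arange(22)
print(f"{keep.size} features retained")

clf = RandomForestClassifier(n_estimators=300, random_state=0).fit(X[:, keep], y)
print(f"training accuracy: {clf.score(X[:, keep], y):.2f}")
```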

From Slices to Structures: Unsupervised 3D Reconstruction of Female Pelvic Anatomy from Freehand Transvaginal Ultrasound

Max Krähenmann, Sergio Tascon-Morales, Fabian Laumer, Julia E. Vogt, Ece Ozkan

arxiv preprint · Aug 20 2025
Volumetric ultrasound has the potential to significantly improve diagnostic accuracy and clinical decision-making, yet its widespread adoption remains limited by dependence on specialized hardware and restrictive acquisition protocols. In this work, we present a novel unsupervised framework for reconstructing 3D anatomical structures from freehand 2D transvaginal ultrasound (TVS) sweeps, without requiring external tracking or learned pose estimators. Our method adapts the principles of Gaussian Splatting to the domain of ultrasound, introducing a slice-aware, differentiable rasterizer tailored to the unique physics and geometry of ultrasound imaging. We model anatomy as a collection of anisotropic 3D Gaussians and optimize their parameters directly from image-level supervision, leveraging sensorless probe motion estimation and domain-specific geometric priors. The result is a compact, flexible, and memory-efficient volumetric representation that captures anatomical detail with high spatial fidelity. This work demonstrates that accurate 3D reconstruction from 2D ultrasound images can be achieved through purely computational means, offering a scalable alternative to conventional 3D systems and enabling new opportunities for AI-assisted analysis and diagnosis.
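
At the core of such a representation is evaluating anisotropic 3D Gaussians where a 2D frame slices the volume; below is a minimal numpy sketch of that evaluation, with one Gaussian, an axis-aligned slice plane, and arbitrary scales, all chosen for illustration rather than taken from the paper.

```python
import numpy as np

# One anisotropic 3D Gaussian: mean, covariance (from scales + rotation), amplitude.
mean = np.array([0.0, 0.0, 0.0])
scales = np.array([2.0, 1.0, 0.5])       # per-axis standard deviations
R = np.eye(3)                            # rotation (identity for simplicity)
cov = R @ np.diag(scales**2) @ R.T
cov_inv = np.linalg.inv(cov)
amplitude = 1.0

# Pixel positions on a slice plane z = 0.3, standing in for one TVS frame.
xs, ys = np.meshgrid(np.linspace(-4, 4, 128), np.linspace(-4, 4, 128))
pts = np.stack([xs, ys, np.full_like(xs, 0.3)], axis=-1)

# Gaussian density evaluated at each pixel of the slice.
d = pts - mean
intensity = amplitude * np.exp(-0.5 * np.einsum("...i,ij,...j->...", d, cov_inv, d))
print(intensity.shape, intensity.max())
```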

TCFNet: Bidirectional face-bone transformation via a Transformer-based coarse-to-fine point movement network

Runshi Zhang, Bimeng Jie, Yang He, Junchen Wang

arxiv preprint · Aug 20 2025
Computer-aided surgical simulation is a critical component of orthognathic surgical planning, where accurately simulating face-bone shape transformations is significant. Traditional biomechanical simulation methods are limited by long computation times, labor-intensive data processing, and low accuracy. Recently, deep learning-based simulation methods have been proposed that view this problem as a point-to-point transformation between skeletal and facial point clouds. However, these approaches cannot process large-scale points, have limited receptive fields that lead to noisy points, and employ complex registration-based preprocessing and postprocessing operations. These shortcomings limit the performance and widespread applicability of such methods. Therefore, we propose a Transformer-based coarse-to-fine point movement network (TCFNet) to learn unique, complicated correspondences at the patch and point levels for dense face-bone point cloud transformations. This end-to-end framework adopts a Transformer-based network and a local information aggregation network (LIA-Net) in the first and second stages, respectively, which reinforce each other to generate precise point movement paths. LIA-Net can effectively compensate for the neighborhood precision loss of the Transformer-based network by modeling local geometric structures (edges, orientations and relative position features). The previous global features are employed to guide the local displacement using a gated recurrent unit. Inspired by deformable medical image registration, we propose an auxiliary loss that can utilize expert knowledge for reconstructing critical organs. Compared with existing state-of-the-art (SOTA) methods on gathered datasets, TCFNet achieves outstanding evaluation metrics and visualization results. The code is available at https://github.com/Runshi-Zhang/TCFNet.
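
The abstract's description of global features guiding local displacement through a gated recurrent unit suggests something like the following hypothetical PyTorch sketch; the module name, feature dimensions, and the use of the global code as the GRU hidden state are assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

class GlobalGuidedDisplacement(nn.Module):
    """Hypothetical sketch: a GRU cell fuses a global shape code with
    per-point local features to regress a 3D displacement per point."""
    def __init__(self, local_dim: int = 64, global_dim: int = 128):
        super().__init__()
        self.gru = nn.GRUCell(input_size=local_dim, hidden_size=global_dim)
        self.head = nn.Linear(global_dim, 3)

    def forward(self, local_feats: torch.Tensor, global_feat: torch.Tensor) -> torch.Tensor:
        # local_feats: (N, local_dim); global_feat: (global_dim,)
        n = local_feats.shape[0]
        h = global_feat.unsqueeze(0).expand(n, -1).contiguous()  # global code as hidden state
        h = self.gru(local_feats, h)          # gate local evidence against the global code
        return self.head(h)                   # per-point displacement, shape (N, 3)

model = GlobalGuidedDisplacement()
disp = model(torch.randn(1024, 64), torch.randn(128))
print(disp.shape)  # torch.Size([1024, 3])
```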

Temporal footprint reduction via neural network denoising in 177Lu radioligand therapy.

Nzatsi MC, Varmenot N, Sarrut D, Delpon G, Cherel M, Rousseau C, Ferrer L

pubmed paper · Aug 20 2025
Internal vectorised therapies, particularly with [177Lu]-labelled agents, are increasingly used for metastatic prostate cancer and neuroendocrine tumours. However, routine dosimetry for organs at risk and tumours remains limited due to the complexity and time requirements of current protocols. We developed a generative adversarial network (GAN) to transform rapid 6 s SPECT projections into synthetic 30 s-equivalent projections. SPECT data from twenty patients and phantom acquisitions were collected at multiple time points. The GAN accurately predicted 30 s projections, enabling estimation of time-integrated activities in the kidneys and liver with maximum errors below 6% and 1%, respectively, compared with standard acquisitions. For tumours and phantom spheres, results were more variable. On phantom data, GAN-inferred reconstructions showed lower biases for spheres of 20, 8, and 1 mL (8.2%, 6.9%, and 21.7%) than direct 6 s acquisitions (12.4%, 20.4%, and 24.0%). However, in patient lesions, 37 segmented tumours showed a higher median discrepancy in cumulated activity for the GAN (15.4%) than for the 6 s approach (4.1%). Our preliminary results indicate that the GAN can provide reliable dosimetry for organs at risk, but further optimisation is needed for small-lesion quantification. This approach could reduce SPECT acquisition time from 45 to 9 min for standard three-bed studies, potentially facilitating wider adoption of dosimetry in nuclear medicine and addressing challenges related to toxicity and cumulative absorbed doses in personalised radiopharmaceutical therapy.
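
The dosimetry step downstream of the GAN, estimating time-integrated activity from a few time points, can be sketched as trapezoidal integration plus an analytic tail; the mono-exponential tail governed purely by 177Lu physical decay is a simplifying assumption, as is the example time-activity data.

```python
import numpy as np

LU177_LAMBDA = np.log(2) / 159.5  # 177Lu decay constant in 1/h (half-life ≈ 6.65 d)

def time_integrated_activity(t_h: np.ndarray, a_mbq: np.ndarray) -> float:
    """Trapezoid over the sampled window plus a physical-decay tail (assumption)."""
    area = float(np.sum((t_h[1:] - t_h[:-1]) * (a_mbq[1:] + a_mbq[:-1]) / 2.0))
    tail = float(a_mbq[-1] / LU177_LAMBDA)  # integral of A_last * exp(-λt), t ≥ t_last
    return area + tail

t = np.array([4.0, 24.0, 96.0, 168.0])   # hours post-injection (illustrative)
a = np.array([120.0, 95.0, 40.0, 15.0])  # kidney activity in MBq (illustrative)
print(f"time-integrated activity ≈ {time_integrated_activity(t, a):.0f} MBq·h")
```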