
Identifying signatures of image phenotypes to track treatment response in liver disease.

Perkonigg M, Bastati N, Ba-Ssalamah A, Mesenbrink P, Goehler A, Martic M, Zhou X, Trauner M, Langs G

PubMed · Jul 21 2025
Quantifiable image patterns associated with disease progression and treatment response are critical tools for guiding individual treatment and for developing novel therapies. Here, we show that unsupervised machine learning can identify a pattern vocabulary of liver tissue in magnetic resonance images that quantifies treatment response in diffuse liver disease. Deep clustering networks simultaneously encode and cluster patches of medical images into a low-dimensional latent space to establish a tissue vocabulary. The resulting tissue types capture differential tissue change and its location in the liver associated with treatment response. We demonstrate the utility of the vocabulary in a randomized controlled trial cohort of patients with nonalcoholic steatohepatitis. First, we use the vocabulary to compare longitudinal liver change in a placebo and a treatment cohort. Results show that the method identifies specific liver tissue change pathways associated with treatment and enables a better separation between treatment groups than established non-imaging measures. Moreover, we show that the vocabulary can predict biopsy-derived features from non-invasive imaging data. Finally, we validate the method in a separate replication cohort to demonstrate its applicability.
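As a concrete illustration of the approach described in this abstract, the sketch below shows a minimal deep-clustering network in PyTorch that jointly encodes image patches into a low-dimensional latent space and soft-assigns them to learnable cluster centroids (the "tissue vocabulary"). Patch size, latent dimension, cluster count, and the DEC-style Student-t assignment are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch (PyTorch): deep clustering of image patches into a tissue
# vocabulary. An encoder maps patches to a low-dimensional latent space while
# learnable centroids define cluster (tissue-type) assignments. All sizes are
# illustrative choices, not the values used in the paper.
import torch
import torch.nn as nn

class DeepClusteringNet(nn.Module):
    def __init__(self, n_clusters=32, latent_dim=16):
        super().__init__()
        self.encoder = nn.Sequential(          # 32x32 single-channel patches
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Flatten(),
            nn.Linear(32 * 8 * 8, latent_dim),
        )
        self.centroids = nn.Parameter(torch.randn(n_clusters, latent_dim))

    def forward(self, patches):
        z = self.encoder(patches)                       # (B, latent_dim)
        # Student-t soft assignment to centroids (DEC-style clustering)
        d2 = torch.cdist(z, self.centroids).pow(2)      # (B, n_clusters)
        q = (1.0 + d2).reciprocal()
        return z, q / q.sum(dim=1, keepdim=True)

model = DeepClusteringNet()
patches = torch.randn(8, 1, 32, 32)                     # dummy MRI patches
z, q = model(patches)
tissue_word = q.argmax(dim=1)                           # vocabulary index per patch
```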

Artificial intelligence-generated apparent diffusion coefficient (AI-ADC) maps for prostate gland assessment: a multi-reader study.

Ozyoruk KB, Harmon SA, Yilmaz EC, Huang EP, Gelikman DG, Gaur S, Giganti F, Law YM, Margolis DJ, Jadda PK, Raavi S, Gurram S, Wood BJ, Pinto PA, Choyke PL, Turkbey B

PubMed · Jul 21 2025
To compare the quality of AI-ADC maps and standard ADC maps in a multi-reader study. This multi-reader study included 74 consecutive patients (median age = 66 years [IQR = 57.25-71.75 years]; median PSA = 4.30 ng/mL [IQR = 1.33-7.75 ng/mL]) with suspected or confirmed PCa who underwent mpMRI between October 2023 and January 2024. The study was conducted in two rounds, separated by a 4-week wash-out period. In each round, four readers evaluated T2W-MRI and standard or AI-generated ADC (AI-ADC) maps. Fleiss' kappa and quadratic-weighted Cohen's kappa statistics were used to assess inter-reader agreement. Linear mixed-effects models were employed to compare the quality ratings of standard versus AI-ADC maps. AI-ADC maps exhibited significantly higher image quality than standard ADC maps, with higher ratings for windowing ease (β = 0.67 [95% CI 0.30-1.04], p < 0.05) and prostate boundary delineation (β = 1.38 [95% CI 1.03-1.73], p < 0.001), and reductions in distortion (β = 1.68 [95% CI 1.30-2.05], p < 0.001) and noise (β = 0.56 [95% CI 0.24-0.88], p < 0.001). AI-ADC maps reduced reacquisition requirements for all readers (β = 2.23 [95% CI 1.69-2.76], p < 0.001), supporting potential workflow efficiency gains. No differences in inter-reader agreement were observed between AI-ADC and standard ADC maps. Our multi-reader study demonstrated that AI-ADC maps improved prostate boundary delineation and had lower image noise, fewer distortions, and higher overall image quality than standard ADC maps. Question Can we synthesize apparent diffusion coefficient (ADC) maps with AI to achieve higher quality maps? Findings On average, readers rated quality factors of AI-ADC maps higher than ADC maps in 34.80% of cases, compared with 5.07% for ADC (p < 0.01). Clinical relevance AI-ADC maps may serve as a reliable diagnostic support tool thanks to their high quality, particularly when the acquired ADC maps contain artifacts.
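For readers unfamiliar with the agreement statistic reported here, the sketch below shows how Fleiss' kappa for a multi-reader rating table can be computed with statsmodels; the rating matrix is synthetic and the four-reader, Likert-style setup is only an assumption mirroring the study design.

```python
# Minimal sketch: Fleiss' kappa for multi-reader agreement. Each row is a
# case, each column one of four readers, values are ordinal quality scores
# (1-5). The data are synthetic, not the study's ratings.
import numpy as np
from statsmodels.stats.inter_rater import aggregate_raters, fleiss_kappa

ratings = np.array([
    [4, 4, 5, 4],
    [3, 3, 3, 2],
    [5, 5, 4, 5],
    [2, 3, 2, 2],
])  # shape: (n_cases, n_readers)

# aggregate_raters turns subject-by-rater scores into subject-by-category counts
counts, categories = aggregate_raters(ratings)
print("Fleiss' kappa:", fleiss_kappa(counts))
```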

Regularized Low-Rank Adaptation for Few-Shot Organ Segmentation

Ghassen Baklouti, Julio Silva-Rodríguez, Jose Dolz, Houda Bahig, Ismail Ben Ayed

arXiv preprint · Jul 21 2025
Parameter-efficient fine-tuning (PEFT) of pre-trained foundation models is attracting increasing interest in medical imaging due to its effectiveness and computational efficiency. Among these methods, Low-Rank Adaptation (LoRA) is a notable approach based on the assumption that the adaptation inherently occurs in a low-dimensional subspace. While it has shown good performance, its implementation requires a fixed, unalterable rank, which can be challenging to select given the unique complexities and requirements of each medical imaging downstream task. Inspired by advancements in natural image processing, we introduce a novel approach for medical image segmentation that dynamically adjusts the intrinsic rank during adaptation. Viewing the low-rank representation of the trainable weight matrices as a singular value decomposition, we add an l_1 sparsity regularizer to the loss function and tackle it with a proximal optimizer. The regularizer can be viewed as a penalty on the decomposition rank, so minimizing it finds task-adapted ranks automatically. Our method is evaluated in a realistic few-shot fine-tuning setting, where we compare it first to standard LoRA and then to several other PEFT methods across two distinct tasks: base organs and novel organs. Our extensive experiments demonstrate the significant performance improvements driven by our method, highlighting its efficiency and robustness against suboptimal rank initialization. Our code is publicly available: https://github.com/ghassenbaklouti/ARENA
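A minimal sketch of the mechanism the abstract describes is given below: a LoRA-style update written as B diag(s) A with an l_1 penalty on the gating vector s, handled by a proximal soft-thresholding step after each gradient update so that the effective rank can adapt during training. Dimensions, hyperparameters, and the toy loss are assumptions; consult the linked ARENA repository for the authors' actual implementation.

```python
# Minimal sketch (PyTorch): SVD-style LoRA with an l1 penalty on the gating
# vector s, optimized with a proximal (soft-thresholding) step. Entries of s
# that shrink to zero remove rank from the update. Not the authors' code.
import torch
import torch.nn as nn

class SVDLoRALinear(nn.Module):
    def __init__(self, base: nn.Linear, r_max=16):
        super().__init__()
        self.base = base.requires_grad_(False)          # frozen pre-trained layer
        d_out, d_in = base.weight.shape
        self.A = nn.Parameter(torch.randn(r_max, d_in) * 0.01)
        self.B = nn.Parameter(torch.zeros(d_out, r_max))
        self.s = nn.Parameter(torch.ones(r_max))        # gated "singular values"

    def forward(self, x):
        return self.base(x) + (x @ self.A.t()) * self.s @ self.B.t()

def prox_l1_(s: torch.Tensor, lam_lr: float):
    """In-place soft-thresholding: prox of lam*||s||_1 after a gradient step."""
    with torch.no_grad():
        s.copy_(torch.sign(s) * torch.clamp(s.abs() - lam_lr, min=0.0))

layer = SVDLoRALinear(nn.Linear(64, 64))
opt = torch.optim.SGD([layer.A, layer.B, layer.s], lr=1e-2)
for _ in range(100):
    x = torch.randn(8, 64)
    loss = layer(x).pow(2).mean()                       # stand-in task loss
    opt.zero_grad(); loss.backward(); opt.step()
    prox_l1_(layer.s, lam_lr=1e-3)                      # proximal step on s
print("non-zeroed singular values:", int((layer.s.abs() > 0).sum()))
```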

A Study of Anatomical Priors for Deep Learning-Based Segmentation of Pheochromocytoma in Abdominal CT

Tanjin Taher Toma, Tejas Sudharshan Mathai, Bikash Santra, Pritam Mukherjee, Jianfei Liu, Wesley Jong, Darwish Alabyad, Vivek Batheja, Abhishek Jha, Mayank Patel, Darko Pucar, Jayadira del Rivero, Karel Pacak, Ronald M. Summers

arXiv preprint · Jul 21 2025
Accurate segmentation of pheochromocytoma (PCC) in abdominal CT scans is essential for tumor burden estimation, prognosis, and treatment planning. It may also help infer genetic clusters, reducing reliance on expensive testing. This study systematically evaluates anatomical priors to identify configurations that improve deep learning-based PCC segmentation. We employed the nnU-Net framework to evaluate eleven annotation strategies for accurate 3D segmentation of pheochromocytoma, introducing a set of novel multi-class schemes based on organ-specific anatomical priors. These priors were derived from adjacent organs commonly surrounding adrenal tumors (e.g., liver, spleen, kidney, aorta, adrenal gland, and pancreas), and were compared against a broad body-region prior used in previous work. The framework was trained and tested on 105 contrast-enhanced CT scans from 91 patients at the NIH Clinical Center. Performance was measured using Dice Similarity Coefficient (DSC), Normalized Surface Distance (NSD), and instance-wise F1 score. Among all strategies, the Tumor + Kidney + Aorta (TKA) annotation achieved the highest segmentation accuracy, significantly outperforming the previously used Tumor + Body (TB) annotation across DSC (p = 0.0097), NSD (p = 0.0110), and F1 score (25.84% improvement at an IoU threshold of 0.5), measured on a 70-30 train-test split. The TKA model also showed superior tumor burden quantification (R^2 = 0.968) and strong segmentation across all genetic subtypes. In five-fold cross-validation, TKA consistently outperformed TB across IoU thresholds (0.1 to 0.5), reinforcing its robustness and generalizability. These findings highlight the value of incorporating relevant anatomical context in deep learning models to achieve precise PCC segmentation, supporting clinical assessment and longitudinal monitoring.
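The sketch below illustrates two of the evaluation metrics named in the abstract, the Dice Similarity Coefficient and the IoU threshold used for instance-wise true-positive decisions, on synthetic 3D masks; it is not the study's evaluation code.

```python
# Minimal sketch: Dice Similarity Coefficient (DSC) on binary 3D masks and
# the IoU check used to decide whether a predicted instance counts as a true
# positive at a 0.5 threshold. Arrays are synthetic.
import numpy as np

def dice(pred: np.ndarray, gt: np.ndarray, eps: float = 1e-8) -> float:
    pred, gt = pred.astype(bool), gt.astype(bool)
    return 2.0 * (pred & gt).sum() / (pred.sum() + gt.sum() + eps)

def iou(pred: np.ndarray, gt: np.ndarray, eps: float = 1e-8) -> float:
    pred, gt = pred.astype(bool), gt.astype(bool)
    return (pred & gt).sum() / ((pred | gt).sum() + eps)

gt = np.zeros((64, 64, 64), dtype=np.uint8); gt[20:40, 20:40, 20:40] = 1
pred = np.zeros_like(gt);                    pred[22:42, 20:40, 20:40] = 1

print(f"DSC = {dice(pred, gt):.3f}")
print(f"true positive at IoU>=0.5: {iou(pred, gt) >= 0.5}")
```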

Advances in IPMN imaging: deep learning-enhanced HASTE improves lesion assessment.

Kolck J, Pivetta F, Hosse C, Cao H, Fehrenbach U, Malinka T, Wagner M, Walter-Rittel T, Geisel D

PubMed · Jul 21 2025
The prevalence of asymptomatic pancreatic cysts is increasing due to advances in imaging techniques. Among these, intraductal papillary mucinous neoplasms (IPMNs) are the most common and have potential for malignant transformation, often necessitating close follow-up. This study evaluates novel MRI techniques for the assessment of IPMN. From May to December 2023, 59 patients undergoing abdominal MRI were retrospectively enrolled. Examinations were conducted on 3-Tesla scanners using a Deep-Learning Accelerated Half-Fourier Single-Shot Turbo Spin-Echo (HASTE<sub>DL</sub>) and a standard HASTE (HASTE<sub>S</sub>) sequence. Two readers assessed minimum detectable lesion size and lesion-to-parenchyma contrast quantitatively, and qualitative assessments focused on image quality. Statistical analyses included the Wilcoxon signed-rank and chi-squared tests. HASTE<sub>DL</sub> demonstrated superior overall image quality (p < 0.001), with higher sharpness and contrast ratings (p < 0.001 and p = 0.112, respectively). HASTE<sub>DL</sub> showed enhanced conspicuity of IPMN (p < 0.001) and lymph nodes (p < 0.001), with more frequent visualization of IPMN communication with the pancreatic duct (p < 0.001). Visualization of complex features (dilated pancreatic duct, septa, and mural nodules) was superior with HASTE<sub>DL</sub> (p < 0.001). The minimum detectable cyst size was significantly smaller for HASTE<sub>DL</sub> (4.17 ± 3.00 mm vs. 5.51 ± 4.75 mm; p < 0.001). Inter-reader agreement was κ = 0.936 for HASTE<sub>DL</sub> and slightly lower (κ = 0.885) for HASTE<sub>S</sub>. HASTE<sub>DL</sub> in IPMN imaging provides superior image quality and significantly reduced scan times. Given the increasing prevalence of IPMN and the ensuing clinical need for fast and precise imaging, HASTE<sub>DL</sub> improves the availability and quality of patient care. Question Are there advantages to deep-learning-accelerated MRI in imaging and assessing intraductal papillary mucinous neoplasms (IPMN)? Findings Deep-Learning Accelerated Half-Fourier Single-Shot Turbo Spin-Echo (HASTE<sub>DL</sub>) demonstrated superior image quality, improved conspicuity of "worrisome features", and detection of smaller cysts, with significantly reduced scan times. Clinical relevance HASTE<sub>DL</sub> provides faster, high-quality MR imaging, enabling improved diagnostic accuracy and timely risk stratification for IPMN, potentially enhancing patient care and addressing the growing clinical demand for efficient imaging of IPMN.
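The paired, non-parametric comparison mentioned in the abstract can be reproduced in outline as below: a Wilcoxon signed-rank test on per-patient quality scores for the two sequences. The ratings are synthetic and only mimic the study design.

```python
# Minimal sketch: Wilcoxon signed-rank test on paired per-patient image
# quality scores for the DL-accelerated vs. standard sequence. Synthetic
# Likert ratings, not data from the study.
import numpy as np
from scipy.stats import wilcoxon

rng = np.random.default_rng(0)
haste_s  = rng.integers(2, 5, size=59)                           # standard sequence ratings
haste_dl = np.clip(haste_s + rng.integers(0, 2, size=59), 1, 5)  # DL-accelerated ratings

stat, p = wilcoxon(haste_dl, haste_s, zero_method="wilcox")
print(f"Wilcoxon statistic = {stat:.1f}, p = {p:.4g}")
```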

Latent Space Synergy: Text-Guided Data Augmentation for Direct Diffusion Biomedical Segmentation

Muhammad Aqeel, Maham Nazir, Zanxi Ruan, Francesco Setti

arXiv preprint · Jul 21 2025
Medical image segmentation suffers from data scarcity, particularly in polyp detection, where annotation requires specialized expertise. We present SynDiff, a framework combining text-guided synthetic data generation with efficient diffusion-based segmentation. Our approach employs latent diffusion models to generate clinically realistic synthetic polyps through text-conditioned inpainting, augmenting limited training data with semantically diverse samples. Unlike traditional diffusion methods requiring iterative denoising, we introduce direct latent estimation enabling single-step inference with a T× computational speedup. On CVC-ClinicDB, SynDiff achieves 96.0% Dice and 92.9% IoU while maintaining real-time capability suitable for clinical deployment. The framework demonstrates that controlled synthetic augmentation improves segmentation robustness without distribution shift. SynDiff bridges the gap between data-hungry deep learning models and clinical constraints, offering an efficient solution for deployment in resource-limited medical settings.
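A minimal sketch of the single-step idea described above follows: a network trained to regress the clean latent directly from a noised latent, so inference needs one forward pass rather than T denoising iterations. The toy network, latent shapes, and the omission of timestep conditioning are assumptions, not the SynDiff architecture.

```python
# Minimal sketch (PyTorch): direct latent estimation. The network regresses
# the clean latent x0 from a noised latent x_t, so inference is a single
# forward pass instead of T denoising steps. Timestep conditioning is
# omitted for brevity; shapes and the toy network are assumptions.
import torch
import torch.nn as nn

T = 1000
betas = torch.linspace(1e-4, 0.02, T)
alpha_bar = torch.cumprod(1.0 - betas, dim=0)           # standard DDPM schedule

net = nn.Sequential(nn.Conv2d(4, 64, 3, padding=1), nn.SiLU(),
                    nn.Conv2d(64, 4, 3, padding=1))     # predicts x0 from x_t

def train_step(x0, opt):
    t = torch.randint(0, T, (x0.shape[0],))
    a = alpha_bar[t].view(-1, 1, 1, 1)
    x_t = a.sqrt() * x0 + (1 - a).sqrt() * torch.randn_like(x0)  # forward noising
    loss = (net(x_t) - x0).pow(2).mean()                # regress the clean latent
    opt.zero_grad(); loss.backward(); opt.step()
    return loss.item()

opt = torch.optim.Adam(net.parameters(), lr=1e-4)
latents = torch.randn(2, 4, 32, 32)                     # dummy segmentation latents
train_step(latents, opt)

# Single-step inference: one pass instead of T denoising iterations.
with torch.no_grad():
    x0_hat = net(torch.randn(1, 4, 32, 32))
```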

Ultra-low dose imaging in a standard axial field-of-view PET.

Lima T, Gomes CV, Fargier P, Strobel K, Leimgruber A

PubMed · Jul 21 2025
Though ultra-low dose (ULD) imaging offers notable benefits, its widespread clinical adoption faces challenges. Long-axial field-of-view (LAFOV) PET/CT systems are expensive and scarce, while artificial intelligence (AI) shows great potential but remains largely limited to specific systems and is not yet widely used in clinical practice. However, integrating AI techniques and technological advancements into ULD imaging is helping bridge the gap between standard axial field-of-view (SAFOV) and LAFOV PET/CT systems. This paper offers an initial evaluation of ULD capabilities using one of the latest SAFOV PET/CT devices. A patient injected with 16.4 MBq <sup>18</sup>F-FDG underwent a local protocol consisting of a dynamic acquisition (first 30 min) of the abdominal section and a static whole-body acquisition 74 min post-injection on a GE Omni PET/CT. From the acquired images, we computed dosimetry and compared kidney function and brain uptake against a kidney model and normal databases, respectively. The effective PET dose for this patient was 0.27 ± 0.01 mSv, and the absorbed doses to the brain, heart, and kidneys were 0.56 mGy, 0.89 mGy, and 0.20 mGy, respectively. The recorded kidney concentration closely followed the kidney model, matching the increase and decrease in activity concentration over time. Normal z-scores were observed for brain uptake, indicating typical brain function and activity patterns consistent with healthy individuals. The signal-to-noise ratio obtained in this study (13.1) was comparable to values reported for LAFOV systems. This study shows promising ultra-low-dose capabilities on a SAFOV PET device, at dose levels previously deemed unattainable with SAFOV PET imaging.
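As a rough orientation to the quantities reported here, the sketch below computes an effective dose from the injected activity using a generic adult 18F-FDG dose coefficient, and a liver-ROI signal-to-noise ratio defined as mean uptake over its standard deviation. Both the coefficient and the SNR definition are assumptions and will not reproduce the paper's patient-specific dosimetry.

```python
# Minimal sketch under stated assumptions: (1) effective dose from injected
# activity using a generic ~0.019 mSv/MBq 18F-FDG coefficient (assumed, not
# the paper's patient-specific dosimetry), and (2) a liver-ROI SNR defined as
# mean/std of ROI values (one common PET noise metric, not necessarily the
# study's exact definition). ROI samples are synthetic.
import numpy as np

injected_activity_MBq = 16.4
dose_coeff_mSv_per_MBq = 0.019                          # assumed generic coefficient
print(f"approx. effective dose: {injected_activity_MBq * dose_coeff_mSv_per_MBq:.2f} mSv")

liver_roi = np.random.default_rng(1).normal(loc=2.5, scale=0.19, size=1000)  # dummy SUV samples
snr = liver_roi.mean() / liver_roi.std()
print(f"liver SNR: {snr:.1f}")
```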

[A multi-feature fusion-based model for fetal orientation classification from intrapartum ultrasound videos].

Zheng Z, Yang X, Wu S, Zhang S, Lyu G, Liu P, Wang J, He S

PubMed · Jul 20 2025
To construct an intelligent analysis model for classifying fetal orientation in intrapartum ultrasound videos based on multi-feature fusion. The proposed model consists of Input, Backbone Network, and Classification Head modules. The Input module carries out data augmentation to improve sample quality and the generalization ability of the model. The Backbone Network performs feature extraction based on YOLOv8 combined with the CBAM, ECA, and PSA attention mechanisms and the AIFI feature interaction module. The Classification Head consists of a convolutional layer and a softmax function that outputs the final probability of each class. Images of the key structures (the eyes, face, head, thalamus, and spine) were annotated with bounding boxes by physicians for model training to improve the classification accuracy of the occiput anterior, occiput posterior, and occiput transverse positions. The experimental results showed that the proposed model performed excellently in the fetal orientation classification task, with a classification accuracy of 0.984, an area under the PR curve (average precision) of 0.993, an area under the ROC curve of 0.984, and a kappa consistency score of 0.974. The predictions of the deep learning model were highly consistent with the actual classifications. The multi-feature fusion model proposed in this study can efficiently and accurately classify fetal orientation in intrapartum ultrasound videos.
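A minimal PyTorch sketch of a classification head of the kind the abstract describes (a convolutional layer followed by a softmax over the fetal-position classes) is shown below; the channel count, pooling, and three-class setup are illustrative assumptions rather than the authors' network.

```python
# Minimal sketch (PyTorch): classification head over backbone feature maps.
# A 1x1 convolution produces per-class maps, global pooling reduces them to
# logits, and softmax yields class probabilities. Sizes are illustrative.
import torch
import torch.nn as nn

class ClassificationHead(nn.Module):
    def __init__(self, in_channels=256, n_classes=3):   # occiput anterior/posterior/transverse
        super().__init__()
        self.conv = nn.Conv2d(in_channels, n_classes, kernel_size=1)
        self.pool = nn.AdaptiveAvgPool2d(1)

    def forward(self, feats):                           # feats: (B, C, H, W)
        logits = self.pool(self.conv(feats)).flatten(1) # (B, n_classes)
        return torch.softmax(logits, dim=1)             # per-class probabilities

head = ClassificationHead()
probs = head(torch.randn(2, 256, 20, 20))               # dummy backbone features
print(probs.sum(dim=1))                                 # each row sums to 1
```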

2.5D Deep Learning-Based Prediction of Pathological Grading of Clear Cell Renal Cell Carcinoma Using Contrast-Enhanced CT: A Multicenter Study.

Yang Z, Jiang H, Shan S, Wang X, Kou Q, Wang C, Jin P, Xu Y, Liu X, Zhang Y, Zhang Y

PubMed · Jul 19 2025
To develop and validate a deep learning model based on arterial phase-enhanced CT for predicting the pathological grading of clear cell renal cell carcinoma (ccRCC). Data from 564 patients diagnosed with ccRCC at five distinct hospitals were retrospectively analyzed. Patients from centers 1 and 2 were randomly divided into a training set (n=283) and an internal test set (n=122). Patients from centers 3, 4, and 5 served as external validation sets 1 (n=60), 2 (n=38), and 3 (n=61), respectively. A 2D model, a 2.5D model (three-slice input), and a radiomics-based multi-layer perceptron (MLP) model were developed. Model performance was evaluated using the area under the curve (AUC), accuracy, and sensitivity. The 2.5D model outperformed the 2D and MLP models. Its AUCs were 0.959 (95% CI: 0.9438-0.9738) for the training set, 0.879 (95% CI: 0.8401-0.9180) for the internal test set, and 0.870 (95% CI: 0.8076-0.9334), 0.862 (95% CI: 0.7581-0.9658), and 0.849 (95% CI: 0.7766-0.9216) for the three external validation sets, respectively. The corresponding accuracy values were 0.895, 0.836, 0.827, 0.825, and 0.839. Compared to the MLP model, the 2.5D model achieved significantly higher AUCs (increases of 0.150 [p<0.05], 0.112 [p<0.05], and 0.088 [p<0.05]) and accuracies (increases of 0.077 [p<0.05], 0.075 [p<0.05], and 0.101 [p<0.05]) in the external validation sets. The 2.5D model with three-slice CT input demonstrated improved predictive performance for the WHO/ISUP grading of ccRCC.
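The 2.5D input construction implied by the "three-slice input" description can be sketched as below: three adjacent arterial-phase slices stacked as channels so a 2D network sees local through-plane context. The windowing and slice handling are assumptions, not the study's preprocessing.

```python
# Minimal sketch: build a 2.5D (three-slice) input from a CT volume by
# stacking the slice of interest with its neighbors as channels.
# Windowing values are assumed, not the study's preprocessing.
import numpy as np

def make_25d_input(volume: np.ndarray, center_idx: int) -> np.ndarray:
    """volume: (n_slices, H, W) CT array; returns a (3, H, W) channel stack."""
    lo = max(center_idx - 1, 0)
    hi = min(center_idx + 1, volume.shape[0] - 1)
    stack = np.stack([volume[lo], volume[center_idx], volume[hi]], axis=0)
    # clip to an assumed soft-tissue window and scale to [0, 1]
    stack = np.clip(stack, -100, 300)
    return (stack + 100) / 400.0

ct = np.random.randint(-1000, 1000, size=(40, 128, 128)).astype(np.float32)  # dummy volume
x = make_25d_input(ct, center_idx=20)
print(x.shape)  # (3, 128, 128)
```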

Artificial intelligence-based models for quantification of intra-pancreatic fat deposition and their clinical relevance: a systematic review of imaging studies.

Joshi T, Virostko J, Petrov MS

PubMed · Jul 19 2025
High intra-pancreatic fat deposition (IPFD) plays an important role in diseases of the pancreas. The intricate anatomy of the pancreas and the surrounding structures has historically made accurate quantification of IPFD on radiological images challenging. To take on the challenge, automated IPFD quantification methods using artificial intelligence (AI) have recently been deployed. The aim was to benchmark the current knowledge on the use of AI-based models to measure IPFD automatically. The search was conducted in the MEDLINE, Embase, Scopus, and IEEE Xplore databases. Studies were eligible if they used AI for both segmentation of the pancreas and quantification of IPFD. The ground truth was manual segmentation by radiologists. When possible, data were pooled statistically using a random-effects model. A total of 12 studies (10 cross-sectional and 2 longitudinal) encompassing more than 50,000 people were included. Eight of the 12 studies used MRI, whereas four employed CT. The U-Net and nnU-Net models were the most frequently used AI-based models. The pooled Dice similarity coefficient of AI-based models in quantifying IPFD was 82.3% (95% confidence interval, 73.5 to 91.1%). The clinical application of AI-based models showed the relevance of high IPFD to acute pancreatitis, pancreatic cancer, and type 2 diabetes mellitus. Current AI-based models for IPFD quantification are suboptimal, as the dissimilarity between AI-based and manual quantification of IPFD is not negligible. Future advancements in fully automated measurement of IPFD will accelerate the accumulation of robust, large-scale evidence on the role of high IPFD in pancreatic diseases. KEY POINTS: Question What is the current evidence on the performance and clinical applicability of artificial intelligence-based models for automated quantification of intra-pancreatic fat deposition? Findings The nnU-Net model achieved the highest Dice similarity coefficient among MRI-based studies, whereas the nnTransfer model demonstrated the highest Dice similarity coefficient in CT-based studies. Clinical relevance Standardisation of reporting on artificial intelligence-based models for the quantification of intra-pancreatic fat deposition will be essential to enhancing the clinical applicability and reliability of artificial intelligence in imaging patients with diseases of the pancreas.
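The pooling referenced in the abstract (a random-effects meta-analysis of per-study Dice coefficients) can be sketched with the DerSimonian-Laird estimator as below; the study values and standard errors are synthetic, not the review's data.

```python
# Minimal sketch: DerSimonian-Laird random-effects pooling of per-study Dice
# coefficients. Values and standard errors are synthetic placeholders.
import numpy as np

dice = np.array([0.88, 0.79, 0.85, 0.76, 0.83])         # per-study Dice (synthetic)
se   = np.array([0.03, 0.05, 0.04, 0.06, 0.03])         # per-study standard errors

w_fixed = 1.0 / se**2
theta_fixed = np.sum(w_fixed * dice) / np.sum(w_fixed)
Q = np.sum(w_fixed * (dice - theta_fixed) ** 2)         # Cochran's Q
df = len(dice) - 1
C = np.sum(w_fixed) - np.sum(w_fixed**2) / np.sum(w_fixed)
tau2 = max(0.0, (Q - df) / C)                           # between-study variance

w_re = 1.0 / (se**2 + tau2)                             # random-effects weights
pooled = np.sum(w_re * dice) / np.sum(w_re)
ci = 1.96 / np.sqrt(np.sum(w_re))
print(f"pooled Dice = {pooled:.3f} (95% CI {pooled - ci:.3f} to {pooled + ci:.3f})")
```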