Page 15 of 73728 results

DualSwinUnet++: An enhanced Swin-Unet architecture with dual decoders for PTMC segmentation.

Dialameh M, Rajabzadeh H, Sadeghi-Goughari M, Sim JS, Kwon HJ

PubMed · Jul 22 2025
Precise segmentation of papillary thyroid microcarcinoma (PTMC) during ultrasound-guided radiofrequency ablation (RFA) is critical for effective treatment but remains challenging due to acoustic artifacts, small lesion size, and anatomical variability. In this study, we propose DualSwinUnet++, a dual-decoder transformer-based architecture designed to enhance PTMC segmentation by incorporating thyroid gland context. DualSwinUnet++ employs independent linear projection heads for each decoder and a residual information flow mechanism that passes intermediate features from the first (thyroid) decoder to the second (PTMC) decoder via concatenation and transformation. These design choices allow the model to condition tumor prediction explicitly on gland morphology without shared gradient interference. Trained on a clinical ultrasound dataset with 691 annotated RFA images and evaluated against state-of-the-art models, DualSwinUnet++ achieves superior Dice and Jaccard scores while maintaining sub-200ms inference latency. The results demonstrate the model's suitability for near real-time surgical assistance and its effectiveness in improving segmentation accuracy in challenging PTMC cases.
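
As a rough illustration of the dual-decoder idea described above, the PyTorch sketch below passes intermediate features from a gland decoder to a tumor decoder via concatenation and a 1×1 transformation, with independent prediction heads for the two branches. All module names, layer sizes, and the detach trick are assumptions for illustration; this is not the published DualSwinUnet++ code.

```python
import torch
import torch.nn as nn

class DualDecoderSketch(nn.Module):
    """Toy dual-decoder segmenter: the gland branch feeds the tumor branch."""

    def __init__(self, in_ch: int = 1, feat: int = 32):
        super().__init__()
        self.encoder = nn.Sequential(nn.Conv2d(in_ch, feat, 3, padding=1), nn.ReLU())
        self.thyroid_dec = nn.Sequential(nn.Conv2d(feat, feat, 3, padding=1), nn.ReLU())
        # Residual information flow: concatenate gland-decoder features with the
        # encoder features, then transform before the tumor decoder.
        self.fuse = nn.Conv2d(feat * 2, feat, kernel_size=1)
        self.ptmc_dec = nn.Sequential(nn.Conv2d(feat, feat, 3, padding=1), nn.ReLU())
        # Independent projection heads for the two decoders.
        self.thyroid_head = nn.Conv2d(feat, 1, kernel_size=1)
        self.ptmc_head = nn.Conv2d(feat, 1, kernel_size=1)

    def forward(self, x):
        enc = self.encoder(x)
        thy_feat = self.thyroid_dec(enc)
        gland_logits = self.thyroid_head(thy_feat)
        # Detaching is one simple way to keep gland supervision from
        # back-propagating through the tumor branch (an assumption here).
        fused = self.fuse(torch.cat([enc, thy_feat.detach()], dim=1))
        tumor_logits = self.ptmc_head(self.ptmc_dec(fused))
        return gland_logits, tumor_logits

gland, tumor = DualDecoderSketch()(torch.randn(1, 1, 256, 256))
print(gland.shape, tumor.shape)  # both torch.Size([1, 1, 256, 256])
```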

MAN-GAN: a mask-adaptive normalization based generative adversarial networks for liver multi-phase CT image generation.

Zhao W, Chen W, Fan L, Shang Y, Wang Y, Situ W, Li W, Liu T, Yuan Y, Liu J

PubMed · Jul 22 2025
Liver multiphase enhanced computed tomography (MPECT) is vital in clinical practice, but its utility is limited by various factors. We aimed to develop a deep learning network capable of automatically generating MPECT images from standard non-contrast CT scans. Dataset 1 included 374 patients and was divided into a training set, a validation set, and a test set. Dataset 2 included 144 patients with one specific liver disease and was used as an internal test dataset. We further collected another dataset comprising 83 patients for external validation. We then propose a Mask-Adaptive Normalization-based Generative Adversarial Network with Cycle-Consistency Loss (MAN-GAN) to achieve non-contrast CT to MPECT translation. To assess the performance of MAN-GAN, we conducted a comparative analysis with state-of-the-art methods commonly employed in diverse medical image synthesis tasks. Moreover, two subjective radiologist evaluation studies were performed to verify the clinical usefulness of the generated images. MAN-GAN outperformed the baseline network and other state-of-the-art methods in generating all three phases, and these results were verified in both the internal and external datasets. According to the radiological evaluation, the image quality of the generated images in all three phases was above average, and the similarity between real and generated images in all three phases was satisfactory. MAN-GAN demonstrates the feasibility of liver MPECT image translation from non-contrast images and achieves state-of-the-art performance via the subtraction strategy. It has great potential for resolving the dilemma of liver contrast-enhanced CT scanning and for supporting further liver-related clinical applications.
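
The sketch below shows one way a mask-adaptive normalization block of the kind named above can be built: per-pixel scale and shift parameters are predicted from a mask and applied to instance-normalized generator features. Layer sizes, the single-channel mask, and the module name MaskAdaptiveNorm are illustrative assumptions, not the MAN-GAN implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MaskAdaptiveNorm(nn.Module):
    """SPADE-style normalization whose scale/shift are predicted from a mask."""

    def __init__(self, channels: int, mask_ch: int = 1, hidden: int = 64):
        super().__init__()
        self.norm = nn.InstanceNorm2d(channels, affine=False)
        self.shared = nn.Sequential(nn.Conv2d(mask_ch, hidden, 3, padding=1), nn.ReLU())
        self.to_gamma = nn.Conv2d(hidden, channels, 3, padding=1)
        self.to_beta = nn.Conv2d(hidden, channels, 3, padding=1)

    def forward(self, feat, mask):
        # Resize the mask to the feature resolution, predict per-pixel
        # modulation parameters, and apply them to the normalized features.
        mask = F.interpolate(mask, size=feat.shape[-2:], mode="nearest")
        h = self.shared(mask)
        return self.norm(feat) * (1 + self.to_gamma(h)) + self.to_beta(h)

feat = torch.randn(1, 128, 32, 32)   # intermediate generator features
mask = torch.rand(1, 1, 256, 256)    # e.g. a liver mask from the non-contrast CT
print(MaskAdaptiveNorm(128)(feat, mask).shape)  # torch.Size([1, 128, 32, 32])
```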

Identifying signatures of image phenotypes to track treatment response in liver disease.

Perkonigg M, Bastati N, Ba-Ssalamah A, Mesenbrink P, Goehler A, Martic M, Zhou X, Trauner M, Langs G

PubMed · Jul 21 2025
Quantifiable image patterns associated with disease progression and treatment response are critical tools for guiding individual treatment and for developing novel therapies. Here, we show that unsupervised machine learning can identify a pattern vocabulary of liver tissue in magnetic resonance images that quantifies treatment response in diffuse liver disease. Deep clustering networks simultaneously encode and cluster patches of medical images into a low-dimensional latent space to establish a tissue vocabulary. The resulting tissue types capture differential tissue change and its location in the liver associated with treatment response. We demonstrate the utility of the vocabulary in a randomized controlled trial cohort of patients with nonalcoholic steatohepatitis. First, we use the vocabulary to compare longitudinal liver change between a placebo and a treatment cohort. Results show that the method identifies specific liver tissue change pathways associated with treatment and enables a better separation between treatment groups than established non-imaging measures. Moreover, we show that the vocabulary can predict biopsy-derived features from non-invasive imaging data. Finally, we validate the approach in a separate replication cohort to demonstrate its applicability.
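
A minimal sketch of the tissue-vocabulary workflow described above: patches are encoded into a low-dimensional latent space, the codes are clustered, and each exam is summarized as a histogram of cluster (tissue-type) frequencies. A real deep clustering network trains the encoder and cluster assignments jointly; here an untrained encoder and k-means merely show the data flow, and all sizes are assumptions.

```python
import numpy as np
import torch
import torch.nn as nn
from sklearn.cluster import KMeans

torch.manual_seed(0)
encoder = nn.Sequential(nn.Flatten(), nn.Linear(32 * 32, 16))  # patch -> 16-D code
patches = torch.rand(500, 1, 32, 32)            # e.g. patches from one MRI exam

with torch.no_grad():
    codes = encoder(patches).numpy()

n_types = 8                                      # size of the tissue vocabulary
labels = KMeans(n_clusters=n_types, n_init=10, random_state=0).fit_predict(codes)

# Per-exam signature: fraction of patches assigned to each tissue type.
signature = np.bincount(labels, minlength=n_types) / len(labels)
print(signature.round(3))
```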

Advances in IPMN imaging: deep learning-enhanced HASTE improves lesion assessment.

Kolck J, Pivetta F, Hosse C, Cao H, Fehrenbach U, Malinka T, Wagner M, Walter-Rittel T, Geisel D

PubMed · Jul 21 2025
The prevalence of asymptomatic pancreatic cysts is increasing due to advances in imaging techniques. Among these, intraductal papillary mucinous neoplasms (IPMNs) are most common and carry a potential for malignant transformation, often necessitating close follow-up. This study evaluates novel MRI techniques for the assessment of IPMN. From May to December 2023, 59 patients undergoing abdominal MRI were retrospectively enrolled. Examinations were conducted on 3-Tesla scanners using a deep-learning-accelerated Half-Fourier Single-Shot Turbo Spin-Echo sequence (HASTE-DL) and a standard HASTE sequence (HASTE-S). Two readers assessed the minimum detectable lesion size and lesion-to-parenchyma contrast quantitatively; qualitative assessments focused on image quality. Statistical analyses included the Wilcoxon signed-rank and chi-squared tests. HASTE-DL demonstrated superior overall image quality (p < 0.001), with higher sharpness and contrast ratings (p < 0.001 and p = 0.112, respectively). HASTE-DL showed enhanced conspicuity of IPMN (p < 0.001) and lymph nodes (p < 0.001), with more frequent visualization of IPMN communication with the pancreatic duct (p < 0.001). Visualization of complex features (dilated pancreatic duct, septa, and mural nodules) was superior with HASTE-DL (p < 0.001). The minimum detectable cyst size was significantly smaller for HASTE-DL (4.17 ± 3.00 mm vs. 5.51 ± 4.75 mm; p < 0.001). Inter-reader agreement was high for HASTE-DL (κ = 0.936) and slightly lower for HASTE-S (κ = 0.885). HASTE-DL in IPMN imaging provides superior image quality and significantly reduced scan times. Given the increasing prevalence of IPMN and the ensuing clinical need for fast and precise imaging, HASTE-DL improves the availability and quality of patient care. Question: Are there advantages of deep-learning-accelerated MRI in imaging and assessing intraductal papillary mucinous neoplasms (IPMN)? Findings: HASTE-DL demonstrated superior image quality, improved conspicuity of "worrisome features," and detection of smaller cysts, with significantly reduced scan times. Clinical relevance: HASTE-DL provides faster, high-quality MRI imaging, enabling improved diagnostic accuracy and timely risk stratification for IPMN, potentially enhancing patient care and addressing the growing clinical demand for efficient imaging of IPMN.
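
For readers unfamiliar with the statistics cited above, the snippet below shows the kind of paired comparison and agreement analysis involved: a Wilcoxon signed-rank test on per-patient quality ratings for the two sequences and a quadratic-weighted kappa between readers. The ratings are simulated purely to demonstrate the calls; they are not study data.

```python
import numpy as np
from scipy.stats import wilcoxon
from sklearn.metrics import cohen_kappa_score

rng = np.random.default_rng(0)
haste_dl = rng.integers(3, 6, size=59)                       # 5-point ratings, reader 1
haste_s = np.clip(haste_dl - rng.integers(0, 2, size=59), 1, 5)

# Paired, non-parametric comparison of the two sequences.
stat, p = wilcoxon(haste_dl, haste_s, zero_method="wilcox")
print(f"Wilcoxon W={stat:.1f}, p={p:.4f}")

# Agreement between two readers on the same sequence.
reader2 = np.clip(haste_dl + rng.integers(-1, 2, size=59), 1, 5)
print("weighted kappa:", cohen_kappa_score(haste_dl, reader2, weights="quadratic"))
```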

Artificial intelligence-generated apparent diffusion coefficient (AI-ADC) maps for prostate gland assessment: a multi-reader study.

Ozyoruk KB, Harmon SA, Yilmaz EC, Huang EP, Gelikman DG, Gaur S, Giganti F, Law YM, Margolis DJ, Jadda PK, Raavi S, Gurram S, Wood BJ, Pinto PA, Choyke PL, Turkbey B

PubMed · Jul 21 2025
To compare the quality of AI-ADC maps and standard ADC maps in a multi-reader study. The multi-reader study included 74 consecutive patients (median age, 66 years [IQR, 57.25-71.75 years]; median PSA, 4.30 ng/mL [IQR, 1.33-7.75 ng/mL]) with suspected or confirmed PCa who underwent mpMRI between October 2023 and January 2024. The study was conducted in two rounds separated by a 4-week wash-out period. In each round, four readers evaluated T2W-MRI together with either standard ADC or AI-generated ADC (AI-ADC) maps. Fleiss' kappa and quadratic-weighted Cohen's kappa statistics were used to assess inter-reader agreement. Linear mixed-effects models were employed to compare the quality ratings of standard versus AI-ADC maps. AI-ADC maps exhibited significantly higher image quality than standard ADC maps, with higher ratings for windowing ease (β = 0.67 [95% CI 0.30-1.04], p < 0.05) and prostate boundary delineation (β = 1.38 [95% CI 1.03-1.73], p < 0.001), and with reductions in distortion (β = 1.68 [95% CI 1.30-2.05], p < 0.001) and noise (β = 0.56 [95% CI 0.24-0.88], p < 0.001). AI-ADC maps reduced reacquisition requirements for all readers (β = 2.23 [95% CI 1.69-2.76], p < 0.001), supporting potential workflow efficiency gains. No differences in inter-reader agreement were observed between AI-ADC and standard ADC maps. Our multi-reader study demonstrated that AI-ADC maps improved prostate boundary delineation and had lower image noise, fewer distortions, and higher overall image quality compared with standard ADC maps. Question: Can we synthesize apparent diffusion coefficient (ADC) maps with AI to achieve higher-quality maps? Findings: On average, readers rated quality factors of AI-ADC maps higher than those of standard ADC maps in 34.80% of cases, compared to 5.07% favoring standard ADC (p < 0.01). Clinical relevance: AI-ADC maps may serve as a reliable diagnostic support tool thanks to their high quality, particularly when the acquired ADC maps include artifacts.
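
The sketch below illustrates the type of linear mixed-effects comparison mentioned above: quality rating as the response, map type as a fixed effect, and a random intercept per reader, fitted with statsmodels. The data frame is simulated only to show the model call; the β values reported in the abstract come from the study's own, richer model.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
readers = {"R1": -0.2, "R2": 0.0, "R3": 0.1, "R4": 0.2}   # per-reader bias (simulated)
rows = []
for reader, bias in readers.items():
    for case in range(74):
        for map_type, shift in [("standard", 0.0), ("ai_adc", 0.7)]:
            rows.append({"reader": reader, "case": case, "map": map_type,
                         "rating": 3.0 + bias + shift + rng.normal(0, 0.5)})
df = pd.DataFrame(rows)

# Fixed effect: map type; random intercept: reader.
model = smf.mixedlm("rating ~ map", df, groups=df["reader"]).fit()
print(model.summary())   # the 'map[T.standard]' coefficient comes out near -0.7 here
```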

Ultra-low dose imaging in a standard axial field-of-view PET.

Lima T, Gomes CV, Fargier P, Strobel K, Leimgruber A

PubMed · Jul 21 2025
Though ultra-low dose (ULD) imaging offers notable benefits, its widespread clinical adoption faces challenges. Long-axial field-of-view (LAFOV) PET/CT systems are expensive and scarce, while artificial intelligence (AI) shows great potential but remains largely limited to specific systems and is not yet widely used in clinical practice. However, integrating AI techniques and technological advancements into ULD imaging is helping bridge the gap between standard axial field-of-view (SAFOV) and LAFOV PET/CT systems. This paper offers an initial evaluation of ULD capabilities using one of the latest SAFOV PET/CT devices. A patient injected with 16.4 MBq of 18F-FDG underwent a local protocol consisting of a dynamic acquisition of the abdominal section (first 30 min) and a static whole-body acquisition 74 min post-injection on a GE Omni PET/CT. From the acquired images we computed dosimetry and compared the clinical output, kidney function and brain uptake, against a kidney model and normal databases, respectively. The effective PET dose for this patient was 0.27 ± 0.01 mSv, and the absorbed doses were 0.56 mGy, 0.89 mGy, and 0.20 mGy to the brain, heart, and kidneys, respectively. The recorded kidney activity concentration closely followed the kidney model, matching the increase and decrease in activity concentration over time. Normal z-score values were observed for brain uptake, indicating typical brain function and activity patterns consistent with healthy individuals. The signal-to-noise ratio obtained in this study (13.1) was comparable to values reported for LAFOV systems. This study shows promising ultra-low-dose imaging capabilities on a SAFOV PET device, a performance level previously deemed unattainable with SAFOV PET imaging.
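
Two of the reported quantities can be sanity-checked with back-of-the-envelope arithmetic, sketched below: effective dose as injected activity times a per-MBq dose coefficient (the generic adult 18F-FDG coefficient of about 0.019 mSv/MBq is an assumption here, not the study's patient-specific dosimetry), and image SNR as mean over standard deviation in a uniform liver region of interest.

```python
import numpy as np

# Effective dose estimate: activity (MBq) x dose coefficient (mSv/MBq).
injected_mbq = 16.4
dose_coeff_msv_per_mbq = 0.019            # assumed generic 18F-FDG adult coefficient
print(f"effective dose ~ {injected_mbq * dose_coeff_msv_per_mbq:.2f} mSv")

# SNR estimate from a (simulated) uniform liver ROI: mean / standard deviation.
rng = np.random.default_rng(0)
liver_roi = rng.normal(loc=2.5, scale=0.19, size=5000)   # made-up SUV samples
print(f"SNR ~ {liver_roi.mean() / liver_roi.std():.1f}")
```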

Regularized Low-Rank Adaptation for Few-Shot Organ Segmentation

Ghassen Baklouti, Julio Silva-Rodríguez, Jose Dolz, Houda Bahig, Ismail Ben Ayed

arXiv preprint · Jul 21 2025
Parameter-efficient fine-tuning (PEFT) of pre-trained foundation models is increasingly attracting interest in medical imaging due to its effectiveness and computational efficiency. Among these methods, Low-Rank Adaptation (LoRA) is a notable approach based on the assumption that the adaptation inherently occurs in a low-dimensional subspace. While it has shown good performance, its implementation requires a fixed and unalterable rank, which might be challenging to select given the unique complexities and requirements of each medical imaging downstream task. Inspired by advancements in natural image processing, we introduce a novel approach for medical image segmentation that dynamically adjusts the intrinsic rank during adaptation. Viewing the low-rank representation of the trainable weight matrices as a singular value decomposition, we add an l_1 sparsity regularizer to the loss function and tackle it with a proximal optimizer. The regularizer could be viewed as a penalty on the decomposition rank; hence, its minimization enables task-adapted ranks to be found automatically. Our method is evaluated in a realistic few-shot fine-tuning setting, where we compare it first to the standard LoRA and then to several other PEFT methods across two distinct tasks: base organs and novel organs. Our extensive experiments demonstrate the significant performance improvements driven by our method, highlighting its efficiency and robustness against suboptimal rank initialization. Our code is publicly available: https://github.com/ghassenbaklouti/ARENA
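
The core mechanism, viewing the low-rank update through its singular values and applying the proximal operator of the l_1 penalty (soft-thresholding), can be sketched in a few lines, as below. This is a conceptual illustration applied once to a fixed matrix, not the authors' training code (which is available at the repository linked above); during fine-tuning the prox step would follow each gradient update.

```python
import torch

def soft_threshold(x: torch.Tensor, tau: float) -> torch.Tensor:
    """Proximal operator of tau * ||x||_1: shrink toward zero, clip at zero."""
    return torch.sign(x) * torch.clamp(x.abs() - tau, min=0.0)

torch.manual_seed(0)
# A LoRA-style update of nominal rank 16 whose energy is concentrated in a few
# directions (simulated here by geometrically decaying singular values).
U, _ = torch.linalg.qr(torch.randn(64, 16))
V, _ = torch.linalg.qr(torch.randn(64, 16))
s = torch.tensor([2.0 ** (-i) for i in range(16)])        # 1, 0.5, 0.25, ...
delta_w = U @ torch.diag(s) @ V.T

# Prox step with threshold tau: singular values below tau vanish,
# which is what prunes the effective rank automatically.
tau = 0.05
s_pruned = soft_threshold(s, tau)
delta_w_pruned = U @ torch.diag(s_pruned) @ V.T

print("rank before:", int((s > 0).sum()), "| rank after:", int((s_pruned > 0).sum()))
print("relative change in the update:",
      (delta_w - delta_w_pruned).norm().item() / delta_w.norm().item())
```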

Latent Space Synergy: Text-Guided Data Augmentation for Direct Diffusion Biomedical Segmentation

Muhammad Aqeel, Maham Nazir, Zanxi Ruan, Francesco Setti

arXiv preprint · Jul 21 2025
Medical image segmentation suffers from data scarcity, particularly in polyp detection, where annotation requires specialized expertise. We present SynDiff, a framework combining text-guided synthetic data generation with efficient diffusion-based segmentation. Our approach employs latent diffusion models to generate clinically realistic synthetic polyps through text-conditioned inpainting, augmenting limited training data with semantically diverse samples. Unlike traditional diffusion methods requiring iterative denoising, we introduce direct latent estimation, enabling single-step inference with a T× computational speedup (where T is the number of denoising steps). On CVC-ClinicDB, SynDiff achieves 96.0% Dice and 92.9% IoU while maintaining real-time capability suitable for clinical deployment. The framework demonstrates that controlled synthetic augmentation improves segmentation robustness without distribution shift. SynDiff bridges the gap between data-hungry deep learning models and clinical constraints, offering an efficient solution for deployment in resource-limited medical settings.
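
The claimed speedup comes from replacing a T-step denoising loop with a single network evaluation. The toy sketch below makes that concrete with a stand-in convolutional "denoiser"; it is not the SynDiff model, and the update rule inside the loop is only a placeholder.

```python
import time
import torch
import torch.nn as nn

denoiser = nn.Sequential(nn.Conv2d(4, 64, 3, padding=1), nn.ReLU(),
                         nn.Conv2d(64, 4, 3, padding=1))
latent = torch.randn(1, 4, 64, 64)
T = 50

with torch.no_grad():
    t0 = time.perf_counter()
    x = latent
    for _ in range(T):                 # conventional T-step denoising loop
        x = x - 0.1 * denoiser(x)
    iterative = time.perf_counter() - t0

    t0 = time.perf_counter()
    x0_hat = denoiser(latent)          # direct single-step latent estimate
    direct = time.perf_counter() - t0

print(f"network calls: {T} vs 1, wall-clock speedup ~ {iterative / direct:.0f}x")
```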

A Study of Anatomical Priors for Deep Learning-Based Segmentation of Pheochromocytoma in Abdominal CT

Tanjin Taher Toma, Tejas Sudharshan Mathai, Bikash Santra, Pritam Mukherjee, Jianfei Liu, Wesley Jong, Darwish Alabyad, Vivek Batheja, Abhishek Jha, Mayank Patel, Darko Pucar, Jayadira del Rivero, Karel Pacak, Ronald M. Summers

arXiv preprint · Jul 21 2025
Accurate segmentation of pheochromocytoma (PCC) in abdominal CT scans is essential for tumor burden estimation, prognosis, and treatment planning. It may also help infer genetic clusters, reducing reliance on expensive testing. This study systematically evaluates anatomical priors to identify configurations that improve deep learning-based PCC segmentation. We employed the nnU-Net framework to evaluate eleven annotation strategies for accurate 3D segmentation of pheochromocytoma, introducing a set of novel multi-class schemes based on organ-specific anatomical priors. These priors were derived from adjacent organs commonly surrounding adrenal tumors (e.g., liver, spleen, kidney, aorta, adrenal gland, and pancreas), and were compared against a broad body-region prior used in previous work. The framework was trained and tested on 105 contrast-enhanced CT scans from 91 patients at the NIH Clinical Center. Performance was measured using Dice Similarity Coefficient (DSC), Normalized Surface Distance (NSD), and instance-wise F1 score. Among all strategies, the Tumor + Kidney + Aorta (TKA) annotation achieved the highest segmentation accuracy, significantly outperforming the previously used Tumor + Body (TB) annotation across DSC (p = 0.0097), NSD (p = 0.0110), and F1 score (25.84% improvement at an IoU threshold of 0.5), measured on a 70-30 train-test split. The TKA model also showed superior tumor burden quantification (R^2 = 0.968) and strong segmentation across all genetic subtypes. In five-fold cross-validation, TKA consistently outperformed TB across IoU thresholds (0.1 to 0.5), reinforcing its robustness and generalizability. These findings highlight the value of incorporating relevant anatomical context in deep learning models to achieve precise PCC segmentation, supporting clinical assessment and longitudinal monitoring.
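
The sketch below shows how a multi-class "Tumor + Kidney + Aorta" (TKA) training label could be assembled from individual binary organ masks before training a framework such as nnU-Net, together with a per-class Dice check. The array shapes, label ordering, and toy masks are illustrative assumptions, not the study's data pipeline.

```python
import numpy as np

def dice(pred: np.ndarray, gt: np.ndarray) -> float:
    """Dice Similarity Coefficient between two binary masks."""
    inter = np.logical_and(pred, gt).sum()
    return 2.0 * inter / (pred.sum() + gt.sum() + 1e-8)

shape = (64, 128, 128)                           # z, y, x of a CT volume
tumor = np.zeros(shape, bool);  tumor[30:36, 60:70, 60:70] = True
kidney = np.zeros(shape, bool); kidney[25:45, 40:60, 40:60] = True
aorta = np.zeros(shape, bool);  aorta[:, 62:68, 90:96] = True

# Multi-class label map: background 0, tumor 1, kidney 2, aorta 3.
# The tumor is written last so other organs never overwrite tumor voxels.
label = np.zeros(shape, np.uint8)
for value, mask in [(3, aorta), (2, kidney), (1, tumor)]:
    label[mask] = value

pred_tumor = np.roll(tumor, shift=1, axis=1)     # a fake prediction, one voxel off
print(f"tumor Dice: {dice(pred_tumor, label == 1):.3f}")
```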
