Page 36 of 74733 results

Interstitial-guided automatic clinical tumor volume segmentation network for cervical cancer brachytherapy.

Tan S, He J, Cui M, Gao Y, Sun D, Xie Y, Cai J, Zaki N, Qin W

PubMed | Jul 1 2025
Automatic clinical tumor volume (CTV) delineation is pivotal to improving outcomes of interstitial brachytherapy for cervical cancer. However, the pronounced differences in gray values caused by the interstitial needles pose a great challenge for deep-learning-based segmentation models. In this study, we propose a novel interstitial-guided segmentation network, termed the advance reverse guided network (ARGNet), for cervical tumor segmentation in interstitial brachytherapy. First, the location information of the interstitial needles is integrated into the deep learning framework via multi-task learning, using cross-stitch units to share encoder feature learning. Second, a spatial reverse attention mechanism is introduced to mitigate the distracting effect of the needles on tumor segmentation. Furthermore, an uncertainty-area module is embedded between the skip connections and the encoder of the tumor segmentation task to enhance the model's capability to discern ambiguous boundaries between the tumor and the surrounding tissue. Comprehensive retrospective experiments were conducted on 191 CT scans acquired under multi-course interstitial brachytherapy. The results demonstrate that exploiting the characteristics of the interstitial needles enhances segmentation, achieving state-of-the-art performance that is anticipated to benefit radiotherapy planning.
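The cross-stitch feature sharing described above admits a minimal NumPy sketch (the 2x2 mixing matrix, feature shapes, and 10% cross-task leak below are illustrative assumptions, not values from the paper):

```python
import numpy as np

def cross_stitch(feat_a, feat_b, alpha):
    """Linearly mix two task-specific feature maps.

    alpha is a 2x2 matrix: row i gives the weights that task i's
    output places on [feat_a, feat_b].
    """
    out_a = alpha[0, 0] * feat_a + alpha[0, 1] * feat_b
    out_b = alpha[1, 0] * feat_a + alpha[1, 1] * feat_b
    return out_a, out_b

# Toy features from the tumor and needle branches (channels x H x W).
tumor_feat = np.ones((4, 8, 8))
needle_feat = np.full((4, 8, 8), 2.0)

# Mostly keep each branch's own features; leak 10% across tasks.
alpha = np.array([[0.9, 0.1],
                  [0.1, 0.9]])
mixed_tumor, mixed_needle = cross_stitch(tumor_feat, needle_feat, alpha)
```

In a trained network, alpha would be a learned parameter, letting the model decide how much needle-branch information the tumor branch should absorb.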

Thin-slice T<sub>2</sub>-weighted images and deep-learning-based super-resolution reconstruction: improved preoperative assessment of vascular invasion for pancreatic ductal adenocarcinoma.

Zhou X, Wu Y, Qin Y, Song C, Wang M, Cai H, Zhao Q, Liu J, Wang J, Dong Z, Luo Y, Peng Z, Feng ST

PubMed | Jun 30 2025
To evaluate the efficacy of thin-slice T<sub>2</sub>-weighted imaging (T<sub>2</sub>WI) and super-resolution reconstruction (SRR) for preoperative assessment of vascular invasion in pancreatic ductal adenocarcinoma (PDAC). Ninety-five PDACs with preoperative MRI were retrospectively enrolled as a training set, with non-reconstructed T<sub>2</sub>WI (NRT<sub>2</sub>) at different slice thicknesses (NRT<sub>2</sub>-3, 3 mm; NRT<sub>2</sub>-5, ≥ 5 mm). A prospective test set was collected with NRT<sub>2</sub>-5 (n = 125) only. A deep-learning network was employed to generate reconstructed super-resolution T<sub>2</sub>WI (SRT<sub>2</sub>) at different slice thicknesses (SRT<sub>2</sub>-3, 3 mm; SRT<sub>2</sub>-5, ≥ 5 mm). Image quality was assessed, including the signal-to-noise ratio (SNR), contrast-to-noise ratio (CNR), and signal-intensity ratio (SIR<sub>t/p</sub>, tumor/pancreas; SIR<sub>t/b</sub>, tumor/background). Diagnostic efficacy for vascular invasion was evaluated using the area under the curve (AUC) and compared across slice thicknesses before and after reconstruction. SRT<sub>2</sub>-5 demonstrated higher SNR and SIR<sub>t/p</sub> than NRT<sub>2</sub>-5 (74.18 vs 72.46; 1.42 vs 1.30; p < 0.05). SRT<sub>2</sub>-3 showed increased SIR<sub>t/p</sub> and SIR<sub>t/b</sub> over NRT<sub>2</sub>-3 (1.35 vs 1.31; 2.73 vs 2.58; p < 0.05). SRT<sub>2</sub>-5 showed higher CNR, SIR<sub>t/p</sub>, and SIR<sub>t/b</sub> than NRT<sub>2</sub>-3 (p < 0.05). NRT<sub>2</sub>-3 outperformed NRT<sub>2</sub>-5 in evaluating venous invasion (AUC: 0.732 vs 0.597, p = 0.021). SRR improved venous assessment (AUC: NRT<sub>2</sub>-3, 0.927 vs 0.732; NRT<sub>2</sub>-5, 0.823 vs 0.597; p < 0.05), and SRT<sub>2</sub>-5 exhibited efficacy comparable to NRT<sub>2</sub>-3 in venous assessment (AUC: 0.823 vs 0.732, p = 0.162). Thin-slice T<sub>2</sub>WI and SRR effectively improve image quality and diagnostic efficacy for assessing venous invasion in PDAC.
Thick-slice T<sub>2</sub>WI with SRR is a potential alternative to thin-slice T<sub>2</sub>WI. Both thin-slice T<sub>2</sub>WI and SRR effectively improve image quality and diagnostic performance, providing valuable options for optimizing preoperative vascular assessment in PDAC. Non-invasive and accurate assessment of vascular invasion supports treatment planning and helps avoid futile surgery. Vascular invasion evaluation is critical to determining surgical eligibility in PDAC. SRR improved image quality and vascular assessment on T<sub>2</sub>WI. Using thin-slice T<sub>2</sub>WI and SRR aids clinical decision-making for PDAC.
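The image-quality metrics reported here (SNR, CNR, SIR) are commonly computed from ROI statistics; a minimal sketch under typical definitions (the synthetic ROI intensities and the use of a background ROI as the noise estimate are assumptions; the study's exact formulas may differ):

```python
import numpy as np

def snr(roi, noise):
    """Signal-to-noise ratio: mean signal over noise standard deviation."""
    return roi.mean() / noise.std()

def cnr(roi_a, roi_b, noise):
    """Contrast-to-noise ratio between two tissues."""
    return abs(roi_a.mean() - roi_b.mean()) / noise.std()

def sir(roi_a, roi_b):
    """Signal-intensity ratio, e.g. SIR_t/p = tumor mean / pancreas mean."""
    return roi_a.mean() / roi_b.mean()

rng = np.random.default_rng(0)
tumor = rng.normal(140, 5, 1000)       # hypothetical tumor ROI intensities
pancreas = rng.normal(100, 5, 1000)    # hypothetical pancreas ROI
background = rng.normal(0, 10, 1000)   # background ROI used as noise estimate

print(snr(tumor, background), cnr(tumor, pancreas, background),
      sir(tumor, pancreas))
```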

In-silico CT simulations of deep learning generated heterogeneous phantoms.

Salinas CS, Magudia K, Sangal A, Ren L, Segars PW

PubMed | Jun 30 2025
Current virtual imaging phantoms primarily emphasize geometric accuracy of anatomical structures. However, to enhance realism, it is also important to incorporate intra-organ detail. Because biological tissues are heterogeneous in composition, virtual phantoms should reflect this by including realistic intra-organ texture and material variation. We propose training two 3D Double U-Net conditional generative adversarial networks (3D DUC-GAN) to generate sixteen unique textures that encompass organs found within the torso. The model was trained on 378 CT image-segmentation pairs taken from a publicly available dataset, with 18 additional pairs reserved for testing. Textured phantoms were generated and imaged using DukeSim, a virtual CT simulation platform. Results showed that the deep learning model was able to synthesize realistic heterogeneous phantoms from a set of homogeneous phantoms. These phantoms were compared with original CT scans and had a mean absolute difference of 46.15 ± 1.06 HU. The structural similarity index (SSIM) and peak signal-to-noise ratio (PSNR) were 0.86 ± 0.004 and 28.62 ± 0.14, respectively. The maximum mean discrepancy between the generated and actual distributions was 0.0016. These metrics marked improvements of 27%, 5.9%, 6.2%, and 28%, respectively, compared to current homogeneous texture methods. The generated phantoms that underwent a virtual CT scan bore a closer visual resemblance to the true CT scan than those of the previous method. The resulting heterogeneous phantoms offer a significant step toward more realistic in silico trials, enabling enhanced simulation of imaging procedures with greater fidelity to true anatomical variation.
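Two of the fidelity metrics quoted, MAE (in HU) and PSNR, have standard closed forms; a minimal sketch (the 2000 HU data range is an illustrative assumption):

```python
import numpy as np

def mae(ref, test):
    """Mean absolute error between two images (in HU for CT)."""
    return np.abs(ref - test).mean()

def psnr(ref, test, data_range):
    """Peak signal-to-noise ratio in dB for a given intensity range."""
    mse = np.mean((ref - test) ** 2)
    return 10 * np.log10(data_range ** 2 / mse)

ref = np.zeros((16, 16))
test = ref + 10.0  # uniform 10 HU offset

print(mae(ref, test))           # MAE is exactly the 10 HU offset
print(psnr(ref, test, 2000.0))  # 10*log10(2000^2 / 100) ≈ 46.02 dB
```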

Deep Learning-Based Semantic Segmentation for Real-Time Kidney Imaging and Measurements with Augmented Reality-Assisted Ultrasound

Gijs Luijten, Roberto Maria Scardigno, Lisle Faray de Paiva, Peter Hoyer, Jens Kleesiek, Domenico Buongiorno, Vitoantonio Bevilacqua, Jan Egger

arXiv preprint | Jun 30 2025
Ultrasound (US) is widely accessible and radiation-free but has a steep learning curve due to its dynamic nature and non-standard imaging planes. Additionally, the constant need to shift focus between the US screen and the patient poses a challenge. To address these issues, we integrate deep learning (DL)-based semantic segmentation for real-time (RT) automated kidney volumetric measurements, which are essential for clinical assessment but are traditionally time-consuming and prone to fatigue. This automation allows clinicians to concentrate on image interpretation rather than manual measurements. Complementing DL, augmented reality (AR) enhances the usability of US by projecting the display directly into the clinician's field of view, improving ergonomics and reducing the cognitive load associated with screen-to-patient transitions. Two AR-DL-assisted US pipelines on HoloLens-2 are proposed: one streams directly via the application programming interface for a wireless setup, while the other supports any US device with video output for broader accessibility. We evaluate RT feasibility and accuracy using the Open Kidney Dataset and open-source segmentation models (nnU-Net, Segmenter, YOLO with MedSAM and LiteMedSAM). Our open-source GitHub pipeline includes model implementations, measurement algorithms, and a Wi-Fi-based streaming solution, enhancing US training and diagnostics, especially in point-of-care settings.
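The automated volumetric measurement step ultimately reduces to counting segmented voxels and scaling by the voxel spacing; a minimal sketch (the spacing values are illustrative):

```python
import numpy as np

def mask_volume_ml(mask, spacing_mm):
    """Volume of a binary segmentation mask in millilitres.

    spacing_mm gives the per-axis voxel spacing; 1 mL = 1000 mm^3.
    """
    voxel_mm3 = float(np.prod(spacing_mm))
    return mask.sum() * voxel_mm3 / 1000.0

mask = np.zeros((50, 50, 50), dtype=bool)
mask[10:40, 10:40, 10:40] = True  # 30^3 = 27,000 voxels

print(mask_volume_ml(mask, (1.0, 1.0, 2.0)))  # 27,000 * 2 mm^3 = 54.0 mL
```

For freehand 2D ultrasound, clinical practice often approximates kidney volume from orthogonal-plane diameters instead (e.g. an ellipsoid formula); the voxel-counting form above applies once a 3D mask is available.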

GUSL: A Novel and Efficient Machine Learning Model for Prostate Segmentation on MRI

Jiaxin Yang, Vasileios Magoulianitis, Catherine Aurelia Christie Alexander, Jintang Xue, Masatomo Kaneko, Giovanni Cacciamani, Andre Abreu, Vinay Duddalwar, C. -C. Jay Kuo, Inderbir S. Gill, Chrysostomos Nikias

arXiv preprint | Jun 30 2025
Prostate and zonal segmentation is a crucial step in the clinical diagnosis of prostate cancer (PCa). Computer-aided diagnosis tools for prostate segmentation are based on the deep learning (DL) paradigm. However, deep neural networks are perceived as "black-box" solutions by physicians, making them less practical for deployment in clinical settings. In this paper, we introduce a feed-forward machine learning model, named Green U-shaped Learning (GUSL), suitable for medical image segmentation without backpropagation. GUSL introduces a multi-layer regression scheme for coarse-to-fine segmentation. Its feature extraction is based on a linear model, which enables seamless interpretability. GUSL also introduces an attention mechanism for the prostate boundaries, an error-prone region, by employing regression to refine the predictions through residue correction. In addition, a two-step pipeline is used to mitigate class imbalance, an issue inherent in medical imaging problems. In experiments on two publicly available datasets and one private dataset, covering both prostate gland and zonal segmentation tasks, GUSL achieves state-of-the-art performance compared with DL-based models. Notably, GUSL features a very energy-efficient pipeline, with a model size several times smaller and lower complexity than the competing solutions. On all datasets, GUSL achieved a Dice Similarity Coefficient (DSC) greater than $0.9$ for gland segmentation. Considering also its lightweight model size and transparency in feature extraction, it offers a competitive and practical package for medical imaging applications.
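The Dice Similarity Coefficient used to report GUSL's performance is the standard overlap metric between a predicted and a ground-truth mask; a minimal sketch for binary masks:

```python
import numpy as np

def dice(pred, gt):
    """Dice Similarity Coefficient: 2|P ∩ G| / (|P| + |G|)."""
    inter = np.logical_and(pred, gt).sum()
    return 2.0 * inter / (pred.sum() + gt.sum())

gt = np.zeros((10, 10), dtype=bool)
gt[2:8, 2:8] = True    # 36-pixel ground-truth square

pred = np.zeros((10, 10), dtype=bool)
pred[3:8, 2:8] = True  # 30-pixel prediction, fully inside gt

print(dice(pred, gt))  # 2*30 / (30 + 36) ≈ 0.909
```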

Enhanced abdominal multi-organ segmentation with 3D UNet and UNet++ deep neural networks utilizing the MONAI framework.

Tejashwini PS, Thriveni J, Venugopal KR

PubMed | Jun 30 2025
Accurate segmentation of abdominal organs is a primary requirement for medical analysis and treatment planning. In this study, we propose an approach based on 3D UNet and UNet++ architectures implemented in the MONAI framework to address challenges arising from anatomical variability, the complex shapes of organs, and noise in CT/MRI scans. The models analyze volumetric data in three dimensions, make use of skip and dense connections, and optimize their parameters using Secretary Bird Optimization (SBO), which together enable better feature extraction and boundary delineation across multi-organ structures. The models' performance was evaluated on multiple datasets, including Pancreas-CT, Liver-CT, and BTCV. On the Pancreas-CT dataset, a DSC of 94.54% was achieved for 3D UNet, while a slightly higher DSC of 95.62% was achieved for 3D UNet++. Both models performed well on the Liver-CT dataset, with 3D UNet achieving a DSC of 95.67% and 3D UNet++ a DSC of 97.36%. On the BTCV dataset, both models had DSC values ranging from 93.42% to 95.31%. These results demonstrate the robustness and efficiency of the presented models for clinical applications and medical research in multi-organ segmentation, underpinning their accuracy in medical imaging and creating avenues for scalable solutions to complex abdominal-imaging tasks.
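The multi-organ DSC figures above are obtained by scoring each organ label separately on an integer labelmap; a minimal sketch (the toy labelmaps are illustrative):

```python
import numpy as np

def per_label_dice(pred, gt, labels):
    """Dice score per organ label in a multi-organ integer labelmap."""
    scores = {}
    for lab in labels:
        p, g = pred == lab, gt == lab
        denom = p.sum() + g.sum()
        # Convention: empty-vs-empty counts as a perfect match.
        scores[lab] = 2.0 * np.logical_and(p, g).sum() / denom if denom else 1.0
    return scores

# Toy 3x3 labelmaps: 0 = background, 1 and 2 = two organs.
gt = np.array([[1, 1, 2],
               [1, 2, 2],
               [0, 0, 2]])
pred = np.array([[1, 1, 2],
                 [1, 1, 2],
                 [0, 0, 2]])

scores = per_label_dice(pred, gt, labels=[1, 2])
print(scores)  # one misassigned pixel lowers both organ scores to 6/7
```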

Diffusion Model-based Data Augmentation Method for Fetal Head Ultrasound Segmentation

Fangyijie Wang, Kevin Whelan, Félix Balado, Guénolé Silvestre, Kathleen M. Curran

arXiv preprint | Jun 30 2025
Medical image data is less accessible than in other domains due to privacy and regulatory constraints. In addition, labeling requires costly, time-intensive manual image annotation by clinical experts. To overcome these challenges, synthetic medical data generation offers a promising solution. Generative AI (GenAI), employing generative deep learning models, has proven effective at producing realistic synthetic images. This study proposes a novel mask-guided GenAI approach using diffusion models to generate synthetic fetal head ultrasound images paired with segmentation masks. These synthetic pairs augment real datasets for supervised fine-tuning of the Segment Anything Model (SAM). Our results show that the synthetic data captures real image features effectively, and this approach reaches state-of-the-art fetal head segmentation, especially when trained with a limited number of real image-mask pairs. In particular, the segmentation reaches Dice Scores of 94.66\% and 94.38\% using a handful of ultrasound images from the Spanish and African cohorts, respectively. Our code, models, and data are available on GitHub.

Enhancing weakly supervised data augmentation networks for thyroid nodule assessment using traditional and Doppler ultrasound images.

Keatmanee C, Songsaeng D, Klabwong S, Nakaguro Y, Kunapinun A, Ekpanyapong M, Dailey MN

PubMed | Jun 30 2025
Thyroid ultrasound (US) is an essential tool for detecting and characterizing thyroid nodules. In this study, we propose an innovative approach to enhance thyroid nodule assessment by integrating Doppler US images with grayscale US images through weakly supervised data augmentation networks (WSDAN). Our method reduces background noise by replacing inefficient augmentation strategies, such as random cropping, with an advanced technique guided by bounding boxes derived from Doppler US images. This targeted augmentation significantly improves model performance in both classification and localization of thyroid nodules. The training dataset comprises 1288 paired grayscale and Doppler US images, with an additional 190 pairs used for three-fold cross-validation. To evaluate the model's efficacy, we tested it on a separate set of 190 grayscale US images. Compared to five state-of-the-art models and the original WSDAN, our Enhanced WSDAN model achieved superior performance. For classification, it reached an accuracy of 91%. For localization, it achieved Dice and Jaccard indices of 75% and 87%, respectively, demonstrating its potential as a valuable clinical tool.
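The bounding-box-guided augmentation described above can be sketched as a crop of the grayscale image around the Doppler-derived box, replacing a fully random crop; a minimal illustration (the (x0, y0, x1, y1) box format and the margin parameter are assumptions, not details from the paper):

```python
import numpy as np

def bbox_crop(image, bbox, margin=8):
    """Crop an image around an (x0, y0, x1, y1) box with a safety margin,
    clamped to the image bounds."""
    h, w = image.shape[:2]
    x0, y0, x1, y1 = bbox
    x0, y0 = max(0, x0 - margin), max(0, y0 - margin)
    x1, y1 = min(w, x1 + margin), min(h, y1 + margin)
    return image[y0:y1, x0:x1]

img = np.arange(100 * 100).reshape(100, 100)  # stand-in grayscale US frame
crop = bbox_crop(img, (30, 40, 60, 70))       # box as if derived from Doppler
print(crop.shape)  # 30x30 box grown by the 8-pixel margin on each side
```

Centering crops on the Doppler-derived box keeps the nodule in view, which is what reduces the background noise that random cropping would otherwise admit.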

Leveraging Representation Learning for Bi-parametric Prostate MRI to Disambiguate PI-RADS 3 and Improve Biopsy Decision Strategies.

Umapathy L, Johnson PM, Dutt T, Tong A, Chopra S, Sodickson DK, Chandarana H

PubMed | Jun 30 2025
Despite its high negative predictive value (NPV) for clinically significant prostate cancer (csPCa), MRI suffers from a substantial number of false positives, especially for intermediate-risk cases. In this work, we determine whether a deep learning model trained with PI-RADS-guided representation learning can disambiguate the PI-RADS 3 classification, detect csPCa from bi-parametric prostate MR images, and avoid unnecessary benign biopsies. This study included 28,263 MR examinations and radiology reports from 21,938 men imaged for known or suspected prostate cancer between 2015 and 2023 at our institution (21 imaging locations with 34 readers), with 6352 subsequent biopsies. We trained a deep learning model, a representation learner (RL), to learn how radiologists interpret conventionally acquired T2-weighted and diffusion-weighted MR images, using exams in which the radiologists are confident in their risk assessments (PI-RADS 1 and 2 for the absence of csPCa vs. PI-RADS 4 and 5 for the presence of csPCa, n=21,465). We then trained biopsy-decision models to detect csPCa (Gleason score ≥7) using these learned image representations, and compared them to the performance of radiologists, and of models trained on other clinical variables (age, prostate volume, PSA, and PSA density) for treatment-naïve test cohorts consisting of only PI-RADS 3 (n=253, csPCa=103) and all PI-RADS (n=531, csPCa=300) cases. On the 2 test cohorts (PI-RADS-3-only, all-PI-RADS), RL-based biopsy-decision models consistently yielded higher AUCs in detecting csPCa (AUC=0.73 [0.66, 0.79], 0.88 [0.85, 0.91]) compared with radiologists (equivocal, AUC=0.79 [0.75, 0.83]) and the clinical model (AUCs=0.69 [0.62, 0.75], 0.78 [0.74, 0.82]). 
In the PI-RADS-3-only cohort, all of whom would be biopsied under our institution's standard of care, the RL decision model avoided 41% (62/150) of benign biopsies, compared with the clinical model (26%, P<0.001), and improved biopsy yield by 10% compared with the PI-RADS ≥3 decision strategy (0.50 vs. 0.40). Furthermore, in the all-PI-RADS cohort, the RL decision model avoided 27% of additional benign biopsies (138/231) compared to radiologists (33%, P<0.001), with comparable sensitivity (93% vs. 92%), higher NPV (0.87 vs. 0.77), and higher biopsy yield (0.75 vs. 0.64). The combination of the clinical and RL decision models avoided further benign biopsies (46% in PI-RADS-3-only and 62% in all-PI-RADS) while improving NPV (0.82, 0.88) and biopsy yields (0.52, 0.76) across the two test cohorts. Our PI-RADS-guided deep learning RL model learns summary representations from bi-parametric prostate MR images that provide additional information to disambiguate intermediate-risk PI-RADS 3 assessments. The resulting RL-based biopsy-decision models also outperformed radiologists in avoiding benign biopsies while maintaining comparable sensitivity to csPCa in the all-PI-RADS cohort. Such AI models can easily be integrated into clinical practice to supplement radiologists' reads and improve biopsy yield for equivocal decisions.
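The decision metrics reported here (sensitivity, NPV, biopsy yield, benign biopsies avoided) all follow from a 2x2 confusion table for the rule "biopsy when the model is positive"; a minimal sketch with illustrative counts, not the study's data:

```python
def biopsy_metrics(tp, fp, tn, fn):
    """Summarize a biopsy-decision rule: biopsy when the model is positive.

    tp/fp: biopsied cases that were csPCa / benign
    tn/fn: deferred cases that were benign / csPCa
    """
    yield_ = tp / (tp + fp)          # csPCa found per biopsy performed
    npv = tn / (tn + fn)             # confidence in a deferred biopsy
    sens = tp / (tp + fn)            # fraction of csPCa still caught
    benign_avoided = tn / (tn + fp)  # fraction of benign cases spared biopsy
    return yield_, npv, sens, benign_avoided

# Hypothetical cohort of 250 patients.
y, npv, sens, avoided = biopsy_metrics(tp=90, fp=60, tn=90, fn=10)
print(round(y, 2), round(npv, 2), round(sens, 2), round(avoided, 2))
```

The trade-off the abstract highlights is visible in these formulas: raising the decision threshold moves benign cases from fp to tn (more biopsies avoided, higher yield) at the risk of moving csPCa cases from tp to fn (lower sensitivity).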

Cost-effectiveness analysis of artificial intelligence (AI) in earlier detection of liver lesions in cirrhotic patients at risk of hepatocellular carcinoma in Italy.

Maas L, Contreras-Meca C, Ghezzo S, Belmans F, Corsi A, Cant J, Vos W, Bobowicz M, Rygusik M, Laski DK, Annemans L, Hiligsmann M

PubMed | Jun 30 2025
Hepatocellular carcinoma (HCC) is the fifth most common cancer worldwide and the third most common cause of cancer-related death. Cirrhosis is a major contributing factor, accounting for over 90% of HCC cases. Given the high mortality rate of HCC, earlier detection is critical. When added to magnetic resonance imaging (MRI), artificial intelligence (AI) has been shown to improve HCC detection. Nonetheless, to date, no cost-effectiveness analyses have been conducted for an AI tool that enhances earlier HCC detection. This study reports on the cost-effectiveness of detecting liver lesions with AI-improved MRI in HCC surveillance of patients with a cirrhotic liver, compared to usual care (UC). The model structure comprised a decision tree followed by a state-transition Markov model, taking an Italian healthcare perspective. Lifetime costs and quality-adjusted life years (QALYs) were simulated in cirrhotic patients at risk of HCC. One-way and two-way sensitivity analyses were performed. Results were presented as incremental cost-effectiveness ratios (ICERs). For patients receiving UC, the average lifetime costs per 1,000 patients were €16,604,800, compared to €16,610,250 for patients receiving the AI approach. With a QALY gain of 0.55 and incremental costs of €5,000 for every 1,000 patients, the ICER was €9,888 per QALY gained, indicating cost-effectiveness at the willingness-to-pay threshold of €33,000/QALY gained. The main drivers of cost-effectiveness were the cost and performance (sensitivity and specificity) of the AI tool. This study suggests that an AI-based approach to earlier detection of HCC in cirrhotic patients can be cost-effective. Incorporating cost-effective AI-based approaches into clinical practice improves patient outcomes and healthcare efficiency.
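The ICER arithmetic is straightforward to reproduce; a minimal sketch using the reported lifetime cost totals (dividing those totals directly gives roughly €9,909/QALY, so the published €9,888 presumably reflects unrounded inputs):

```python
def icer(cost_new, cost_old, qaly_gain):
    """Incremental cost-effectiveness ratio: extra cost per QALY gained."""
    return (cost_new - cost_old) / qaly_gain

# Reported lifetime costs per 1,000 patients and QALY gain from the abstract.
value = icer(cost_new=16_610_250, cost_old=16_604_800, qaly_gain=0.55)
print(round(value))  # ≈ 9909 €/QALY from the rounded totals

threshold = 33_000  # stated willingness-to-pay threshold, €/QALY
print(value < threshold)  # True: cost-effective at this threshold
```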
