Deep Learning-Accelerated Prostate MRI: Improving Speed, Accuracy, and Sustainability.

Reschke P, Koch V, Gruenewald LD, Bachir AA, Gotta J, Booz C, Alrahmoun MA, Strecker R, Nickel D, D'Angelo T, Dahm DM, Konrad P, Solim LA, Holzer M, Al-Saleh S, Scholtz JE, Sommer CM, Hammerstingl RM, Eichler K, Vogl TJ, Leistner DM, Haberkorn SM, Mahmoudi S

PubMed · Jul 14 2025
This study aims to evaluate the effectiveness of a deep learning (DL)-enhanced four-fold parallel acquisition technique (P4) in improving prostate MR image quality while optimizing scan efficiency compared to the traditional two-fold parallel acquisition technique (P2). Patients undergoing prostate MRI with DL-enhanced acquisitions were analyzed from January 2024 to July 2024. The participants prospectively received T2-weighted sequences in all imaging planes using both P2 and P4. Three independent readers assessed image quality, signal-to-noise ratio (SNR), and contrast-to-noise ratio (CNR). Significant differences in contrast and gray-level properties between P2 and P4 were identified through radiomics analysis (p <.05). A total of 51 participants (mean age 69.4 ± 10.5 years) underwent P2 and P4 imaging. P4 demonstrated higher CNR and SNR values compared to P2 (p <.001). P4 was consistently rated superior to P2, demonstrating enhanced image quality and greater diagnostic precision across all evaluated categories (p <.001). Furthermore, radiomics analysis confirmed that P4 significantly altered structural and textural differentiation in comparison to P2. The P4 protocol reduced T2-weighted scan times by 50.8%, from 11:48 min to 5:48 min (p <.001). In conclusion, P4 imaging enhances diagnostic quality and reduces scan times, improving workflow efficiency and potentially contributing to a more patient-centered and sustainable radiology practice.
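
For reference, the two image-quality metrics reported here have simple ROI-based definitions. Below is a minimal Python sketch of how SNR and CNR are commonly computed from region statistics; the arrays and values are illustrative stand-ins, not data from the study.

    import numpy as np

    def snr(signal_roi: np.ndarray, noise_roi: np.ndarray) -> float:
        """Signal-to-noise ratio: mean signal over the noise standard deviation."""
        return signal_roi.mean() / noise_roi.std()

    def cnr(tissue_a: np.ndarray, tissue_b: np.ndarray, noise_roi: np.ndarray) -> float:
        """Contrast-to-noise ratio: absolute mean tissue difference over noise SD."""
        return abs(tissue_a.mean() - tissue_b.mean()) / noise_roi.std()

    # Illustrative random patches standing in for ROI pixel values
    rng = np.random.default_rng(0)
    lesion = rng.normal(120, 8, 500)
    background = rng.normal(80, 8, 500)
    air = rng.normal(0, 5, 500)
    print(f"SNR = {snr(lesion, air):.1f}, CNR = {cnr(lesion, background, air):.1f}")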

A Lightweight and Robust Framework for Real-Time Colorectal Polyp Detection Using LOF-Based Preprocessing and YOLO-v11n

Saadat Behzadi, Danial Sharifrazi, Bita Mesbahzadeh, Javad Hassannataj Joloudari, Roohallah Alizadehsani

arXiv preprint · Jul 14 2025
Objectives: Timely and accurate detection of colorectal polyps plays a crucial role in diagnosing and preventing colorectal cancer, a major cause of mortality worldwide. This study introduces a new, lightweight, and efficient framework for polyp detection that combines the Local Outlier Factor (LOF) algorithm for filtering noisy data with the YOLO-v11n deep learning model. Study design: An experimental study leveraging deep learning and outlier removal techniques across multiple public datasets. Methods: The proposed approach was tested on five diverse and publicly available datasets: CVC-ColonDB, CVC-ClinicDB, Kvasir-SEG, ETIS, and EndoScene. Since these datasets originally lacked bounding box annotations, we converted their segmentation masks into suitable detection labels. To enhance the robustness and generalizability of our model, we apply 5-fold cross-validation and remove anomalous samples using the LOF method configured with 30 neighbors and a contamination ratio of 5%. Cleaned data are then fed into YOLO-v11n, a fast and resource-efficient object detection architecture optimized for real-time applications. We train the model using a combination of modern augmentation strategies to improve detection accuracy under diverse conditions. Results: Our approach significantly improves polyp localization performance, achieving a precision of 95.83%, recall of 91.85%, F1-score of 93.48%, mAP@0.5 of 96.48%, and mAP@0.5:0.95 of 77.75%. Compared to previous YOLO-based methods, our model demonstrates enhanced accuracy and efficiency. Conclusions: These results suggest that the proposed method is well-suited for real-time colonoscopy support in clinical settings. Overall, the study underscores how crucial data preprocessing and model efficiency are when designing effective AI systems for medical imaging.
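
The preprocessing step described above maps directly onto standard tooling. The following is a minimal Python sketch of LOF filtering with the paper's stated configuration (30 neighbors, 5% contamination) using scikit-learn; the feature files, paths, and the ultralytics training call in the closing comment are assumptions for illustration, not the authors' code.

    import numpy as np
    from sklearn.neighbors import LocalOutlierFactor

    # Assume each training image has been reduced to a feature vector
    # (e.g., downsampled pixels or embeddings); shape: (n_images, n_features)
    features = np.load("polyp_features.npy")                # hypothetical file
    image_paths = np.load("polyp_paths.npy", allow_pickle=True)

    # LOF configured as in the paper: 30 neighbors, 5% contamination
    lof = LocalOutlierFactor(n_neighbors=30, contamination=0.05)
    labels = lof.fit_predict(features)                      # -1 = outlier, 1 = inlier

    clean_paths = image_paths[labels == 1]
    print(f"kept {labels.tolist().count(1)} / {len(labels)} images")

    # The cleaned list would then be written into a YOLO dataset YAML and trained,
    # e.g. with ultralytics: YOLO("yolo11n.pt").train(data="polyps.yaml", ...)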

Impact of three-dimensional prostate models during robot-assisted radical prostatectomy on surgical margins and functional outcomes.

Khan N, Prezzi D, Raison N, Shepherd A, Antonelli M, Byrne N, Heath M, Bunton C, Seneci C, Hyde E, Diaz-Pinto A, Macaskill F, Challacombe B, Noel J, Brown C, Jaffer A, Cathcart P, Ciabattini M, Stabile A, Briganti A, Gandaglia G, Montorsi F, Ourselin S, Dasgupta P, Granados A

PubMed · Jul 13 2025
Robot-assisted radical prostatectomy (RARP) is the standard surgical procedure for the treatment of prostate cancer. RARP requires a trade-off between performing a wider resection in order to reduce the risk of positive surgical margins (PSMs) and performing minimal resection of the nerve bundles that determine functional outcomes, such as incontinence and potency, which affect patients' quality of life. In order to achieve favourable outcomes, a precise understanding of the three-dimensional (3D) anatomy of the prostate, nerve bundles and tumour lesion is needed. This is the protocol for a single-centre feasibility study including a prospective two-arm interventional group (a 3D virtual and a 3D printed prostate model), and a prospective control group. The primary endpoint will be PSM status and the secondary endpoint will be functional outcomes, including incontinence and sexual function. The study will consist of a total of 270 patients: 54 patients will be included in each of the interventional groups (3D virtual, 3D printed models), 54 in the retrospective control group and 108 in the prospective control group. Automated segmentation of prostate gland and lesions will be conducted on multiparametric magnetic resonance imaging (mpMRI) using 'AutoProstate' and 'AutoLesion' deep learning approaches, while manual annotation of the neurovascular bundles, urethra and external sphincter will be conducted on mpMRI by a radiologist. This will result in masks that will be post-processed to generate 3D printed/virtual models. Patients will be allocated to either interventional arm and the surgeon will be given either a 3D printed or a 3D virtual model at the start of the RARP procedure. At the 6-week follow-up, the surgeon will meet with the patient to present PSM status and capture functional outcomes from the patient via questionnaires. We will capture these measures as endpoints for analysis. These questionnaires will be re-administered at 3, 6 and 12 months postoperatively.
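
The mask-to-model step (post-processing segmentation masks into printable or virtual 3D models) is typically done by extracting a surface mesh from the binary mask. A minimal Python sketch using scikit-image's marching cubes follows; the file name and voxel spacing are hypothetical, and this is a generic illustration rather than the study's actual pipeline.

    import numpy as np
    from skimage import measure

    # Hypothetical binary prostate mask (z, y, x) from automated segmentation
    mask = np.load("prostate_mask.npy").astype(np.uint8)

    # Extract a triangular surface mesh; spacing = voxel size in mm (assumed)
    verts, faces, normals, _ = measure.marching_cubes(
        mask, level=0.5, spacing=(3.0, 0.5, 0.5))

    # Write a simple ASCII STL suitable for 3D printing or virtual rendering
    with open("prostate_model.stl", "w") as f:
        f.write("solid prostate\n")
        for tri in faces:
            f.write(" facet normal 0 0 0\n  outer loop\n")
            for v in verts[tri]:
                f.write(f"   vertex {v[0]:.3f} {v[1]:.3f} {v[2]:.3f}\n")
            f.write("  endloop\n endfacet\n")
        f.write("endsolid prostate\n")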

Enhanced Detection of Prostate Cancer Lesions on Biparametric MRI Using Artificial Intelligence: A Multicenter, Fully-crossed, Multi-reader Multi-case Trial.

Xing Z, Chen J, Pan L, Huang D, Qiu Y, Sheng C, Zhang Y, Wang Q, Cheng R, Xing W, Ding J

PubMed · Jul 11 2025
To assess artificial intelligence (AI)'s added value in detecting prostate cancer lesions on MRI by comparing radiologists' performance with and without AI assistance. A fully-crossed multi-reader multi-case clinical trial was conducted across three institutions with 10 non-expert radiologists. Biparametric MRI cases comprising T2WI, diffusion-weighted images, and apparent diffusion coefficient maps were retrospectively collected. Three reading modes were evaluated: AI alone, radiologists alone (unaided), and radiologists with AI (aided). Aided and unaided readings were compared using the Dorfman-Berbaum-Metz method. Reference standards were established by senior radiologists based on pathological reports. Performance was quantified via sensitivity, specificity, and the area under the alternative free-response receiver operating characteristic curve (AFROC-AUC). Among 407 eligible male patients (69.5 ± 9.3 years), aided reading significantly improved lesion-level sensitivity from 67.3% (95% confidence interval [CI]: 58.8%, 75.8%) to 85.5% (95% CI: 81.3%, 89.7%), a difference of 18.2% (95% CI: 10.7%, 25.7%, p<0.001). Case-level specificity increased from 75.9% (95% CI: 68.7%, 83.1%) to 79.5% (95% CI: 74.1%, 84.8%), demonstrating non-inferiority (p<0.001). AFROC-AUC was also higher for aided than unaided reading (86.9% vs 76.1%, p<0.001). AI alone achieved robust performance (AFROC-AUC = 83.1%, 95% CI: 79.7%, 86.6%), with lesion-level sensitivity of 88.4% (95% CI: 84.0%, 92.0%) and case-level specificity of 77.8% (95% CI: 71.5%, 83.3%). Subgroup analysis revealed improved detection for lesions of smaller size and lower Prostate Imaging Reporting and Data System (PI-RADS) scores. AI-aided reading significantly enhances lesion detection compared to unaided reading, while AI alone also demonstrates high diagnostic accuracy.
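
As a worked example of the lesion-level metrics quoted above, the sketch below computes sensitivity, specificity, and Wilson score 95% CIs from raw counts. The counts are made up for illustration; the trial's actual reader comparisons used the Dorfman-Berbaum-Metz method, which is not reproduced here.

    import math

    def wilson_ci(successes: int, n: int, z: float = 1.96) -> tuple[float, float]:
        """Wilson score 95% confidence interval for a proportion."""
        p = successes / n
        denom = 1 + z**2 / n
        centre = (p + z**2 / (2 * n)) / denom
        half = z * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / denom
        return centre - half, centre + half

    # Hypothetical counts: detected / total lesions, true negatives / negative cases
    tp, n_lesions = 171, 200        # lesion-level sensitivity
    tn, n_negatives = 159, 200      # case-level specificity

    for name, k, n in [("sensitivity", tp, n_lesions), ("specificity", tn, n_negatives)]:
        lo, hi = wilson_ci(k, n)
        print(f"{name}: {k/n:.1%} (95% CI {lo:.1%}-{hi:.1%})")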

Effect of data-driven motion correction for respiratory movement on lesion detectability in PET-CT: a phantom study.

de Winter MA, Gevers R, Lavalaye J, Habraken JBA, Maspero M

PubMed · Jul 11 2025
While data-driven motion correction (DDMC) techniques have been shown to enhance the visibility of lesions affected by motion, their impact on overall detectability remains unclear. This study investigates whether DDMC improves lesion detectability in 18F-FDG PET-CT. A moving platform simulated respiratory motion in a NEMA-IEC body phantom with varying amplitudes (0, 7, 10, 20, 30 mm) and target-to-background ratios (2, 5, 10.5). Scans were reconstructed with and without DDMC, and the spherical targets' maximal and mean recovery coefficients (RC) and contrast-to-noise ratios (CNR) were measured. DDMC results in higher RC values in the target spheres. CNR values increase for small targets affected by large motion but decrease for larger spheres at smaller amplitudes. A sub-analysis shows that DDMC increases the contrast of the sphere along with a 36% increase in background noise. While DDMC significantly enhances contrast (RC), its impact on detectability (CNR) is less pronounced due to the increased background noise. CNR improves for small targets with high motion amplitude, potentially enhancing the detectability of low-uptake lesions. Given that the increased background noise may reduce detectability for targets unaffected by motion, we suggest that DDMC reconstructions are best used in addition to non-DDMC reconstructions.
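
The two phantom metrics have standard definitions: RC is the measured activity concentration divided by the true concentration, and CNR is the sphere-background contrast divided by background noise. A minimal Python sketch with illustrative numbers (chosen to mirror the reported ~36% noise increase, not taken from the study):

    def recovery_coefficient(measured: float, true_activity: float) -> float:
        """RC: measured sphere activity concentration over the true concentration."""
        return measured / true_activity

    def cnr(sphere_mean: float, bkg_mean: float, bkg_sd: float) -> float:
        """CNR: sphere-background contrast over background noise."""
        return (sphere_mean - bkg_mean) / bkg_sd

    # Illustrative values: DDMC raises contrast but also background noise (~36% here)
    no_ddmc = cnr(sphere_mean=18.0, bkg_mean=4.0, bkg_sd=0.50)
    ddmc = cnr(sphere_mean=24.0, bkg_mean=4.0, bkg_sd=0.68)   # noise up ~36%
    print(f"CNR without DDMC: {no_ddmc:.1f}, with DDMC: {ddmc:.1f}")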

Diffusion-weighted imaging in rectal cancer MRI: from theory to practice.

Mayumi Takamune D, Miranda J, Mariussi M, Reif de Paula T, Mazaheri Y, Younus E, Jethwa KR, Knudsen CC, Bizinoto V, Cardoso D, de Arimateia Batista Araujo-Filho J, Sparapan Marques CF, Higa Nomura C, Horvat N

PubMed · Jul 11 2025
Diffusion-weighted imaging (DWI) has become a cornerstone of high-resolution rectal MRI, providing critical functional information that complements T2-weighted imaging (T2WI) throughout the management of rectal cancer. From baseline staging to restaging after neoadjuvant therapy and longitudinal surveillance during nonoperative management or post-surgical follow-up, DWI improves tumor detection, characterizes treatment response, and facilitates early identification of tumor regrowth or recurrence. This review offers a comprehensive overview of DWI in rectal cancer, emphasizing its technical characteristics, optimal acquisition strategies, and integration with qualitative and quantitative interpretive frameworks. The manuscript also addresses interpretive pitfalls, highlights emerging techniques such as intravoxel incoherent motion (IVIM), diffusion kurtosis imaging (DKI), and small field-of-view DWI, and explores the growing role of radiomics and artificial intelligence in advancing precision imaging. DWI, when rigorously implemented and interpreted, enhances the accuracy, reproducibility, and clinical utility of rectal MRI.
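
As a quantitative anchor for the techniques discussed, the basic monoexponential DWI model is S(b) = S0·exp(-b·ADC), which IVIM and DKI extend. A minimal Python sketch of a log-linear ADC fit with illustrative b-values and signals:

    import numpy as np

    # Monoexponential DWI model: S(b) = S0 * exp(-b * ADC)
    # Log-linear least-squares fit over illustrative b-values (s/mm^2)
    b_values = np.array([0.0, 400.0, 800.0])
    signals = np.array([1000.0, 520.0, 270.0])    # hypothetical ROI mean signals

    slope, intercept = np.polyfit(b_values, np.log(signals), 1)
    adc = -slope                                  # in mm^2/s
    print(f"ADC = {adc:.2e} mm^2/s, S0 = {np.exp(intercept):.0f}")

    # IVIM adds a perfusion fraction f and pseudo-diffusion coefficient D*:
    # S(b)/S0 = f * exp(-b * D_star) + (1 - f) * exp(-b * D)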

Interpretable MRI Subregional Radiomics-Deep Learning Model for Preoperative Lymphovascular Invasion Prediction in Rectal Cancer: A Dual-Center Study.

Huang T, Zeng Y, Jiang R, Zhou Q, Wu G, Zhong J

PubMed · Jul 11 2025
To develop a fusion model based on explainable machine learning, combining multiparametric MRI subregional radiomics and deep learning, to preoperatively predict lymphovascular invasion (LVI) status in rectal cancer. We collected data from rectal cancer (RC) patients with histopathological confirmation from two medical centers, with 301 patients used as a training set and 75 patients as an external validation set. Using K-means clustering, we divided the tumor areas into multiple subregions and extracted radiomic features from them. Additionally, we employed a Vision Transformer (ViT) deep learning model to extract features. These features were integrated to construct the SubViT model. To better understand the model's decision-making process, we used the SHapley Additive exPlanations (SHAP) tool to evaluate its interpretability. Finally, we comprehensively assessed the performance of the SubViT model through receiver operating characteristic (ROC) curves, decision curve analysis (DCA), and the DeLong test, comparing it with other models. The SubViT model demonstrated outstanding predictive performance in the training set, achieving an area under the curve (AUC) of 0.934 (95% confidence interval: 0.9074 to 0.9603). It also performed well in the external validation set, with an AUC of 0.884 (95% confidence interval: 0.8055 to 0.9616), outperforming both the subregion radiomics and imaging-based models. Furthermore, DCA indicated that the SubViT model provides higher clinical utility than the other models. As a composite model, the SubViT model demonstrated its efficiency in the non-invasive assessment of LVI in rectal cancer.
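
The subregion (habitat) step corresponds to standard K-means clustering of voxel features inside the tumor mask. A minimal Python sketch follows; the array files, number of clusters, and feature choice are assumptions for illustration, not the authors' exact configuration.

    import numpy as np
    from sklearn.cluster import KMeans

    # Hypothetical inputs: multiparametric MRI volume (channels, z, y, x)
    # and a binary tumor mask of shape (z, y, x)
    mpmri = np.load("mpmri.npy")
    tumor_mask = np.load("tumor_mask.npy").astype(bool)

    # One feature vector per tumor voxel: its intensity across MRI sequences
    voxels = mpmri[:, tumor_mask].T                  # (n_voxels, n_channels)

    kmeans = KMeans(n_clusters=3, n_init=10, random_state=0)  # 3 subregions assumed
    labels = kmeans.fit_predict(voxels)

    # Rebuild a subregion label map; radiomics features would then be
    # extracted separately from each labeled habitat
    subregions = np.zeros(tumor_mask.shape, dtype=np.int8)
    subregions[tumor_mask] = labels + 1              # 0 = background, 1..3 = habitats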

Intelligent quality assessment of ultrasound images for fetal nuchal translucency measurement during the first trimester of pregnancy based on deep learning models.

Liu L, Wang T, Zhu W, Zhang H, Tian H, Li Y, Cai W, Yang P

PubMed · Jul 10 2025
As increased nuchal translucency (NT) thickness is notably associated with fetal chromosomal abnormalities, structural defects, and genetic syndromes, accurate measurement of NT thickness is crucial for the screening of fetal abnormalities during the first trimester. We aimed to develop a model for quality assessment of ultrasound images for precise measurement of fetal NT thickness. We collected 2140 ultrasound images of midsagittal sections of the fetal face between 11 and 14 weeks of gestation. Several image segmentation models were trained, and the one exhibiting the best DSC and HD95 was chosen to automatically segment the ROI. Radiomics features and deep transfer learning (DTL) features were extracted and selected to construct radiomics and DTL models. Feature screening was conducted using the t-test, Mann-Whitney U-test, Spearman’s rank correlation analysis, and LASSO. We also developed early fusion and late fusion models to integrate the advantages of the radiomics and DTL models. The optimal model was compared with junior radiologists. We used SHapley Additive exPlanations (SHAP) to investigate the model’s interpretability. The DeepLabV3 ResNet achieved the best segmentation performance (DSC: 98.07 ± 0.02%, HD95: 0.75 ± 0.15 mm). The feature fusion model demonstrated the optimal performance (AUC: 0.978, 95% CI: 0.965–0.990; accuracy: 93.2%; sensitivity: 93.1%; specificity: 93.4%; PPV: 93.5%; NPV: 93.0%; precision: 93.5%). This model performed more reliably than the junior radiologists and significantly improved their capabilities. The SHAP summary plot showed that DTL features were the most important features for the feature fusion model. The proposed models bridge gaps left by previous studies, achieving intelligent quality assessment of ultrasound images for NT measurement and highly accurate automatic segmentation of ROIs. These models are potential tools to enhance quality control for fetal ultrasound examinations, streamline clinical workflows, and improve the professional skills of less-experienced radiologists. The online version contains supplementary material available at 10.1186/s12884-025-07863-y.
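
For reference, the DSC reported for segmentation performance has a simple set-overlap definition. A minimal Python sketch on binary masks (the masks here are synthetic stand-ins, not NT segmentations):

    import numpy as np

    def dice(pred: np.ndarray, truth: np.ndarray) -> float:
        """Dice similarity coefficient between two binary masks."""
        pred, truth = pred.astype(bool), truth.astype(bool)
        intersection = np.logical_and(pred, truth).sum()
        return 2.0 * intersection / (pred.sum() + truth.sum())

    # Illustrative 2D masks standing in for a segmentation and its ground truth
    truth = np.zeros((64, 64), dtype=bool)
    truth[20:40, 10:50] = True
    pred = np.zeros_like(truth)
    pred[22:40, 12:50] = True
    print(f"DSC = {dice(pred, truth):.4f}")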

MeD-3D: A Multimodal Deep Learning Framework for Precise Recurrence Prediction in Clear Cell Renal Cell Carcinoma (ccRCC)

Hasaan Maqsood, Saif Ur Rehman Khan

arXiv preprint · Jul 10 2025
Accurate prediction of recurrence in clear cell renal cell carcinoma (ccRCC) remains a major clinical challenge due to the disease's complex molecular, pathological, and clinical heterogeneity. Traditional prognostic models, which rely on single data modalities such as radiology, histopathology, or genomics, often fail to capture the full spectrum of disease complexity, resulting in suboptimal predictive accuracy. This study aims to overcome these limitations by proposing a deep learning (DL) framework that integrates multimodal data, including CT, MRI, histopathology whole slide images (WSI), clinical data, and genomic profiles, to improve the prediction of ccRCC recurrence and enhance clinical decision-making. The proposed framework utilizes a comprehensive dataset curated from multiple publicly available sources, including TCGA, TCIA, and CPTAC. To process the diverse modalities, domain-specific models are employed: CLAM, a ResNet50-based model, is used for histopathology WSIs, while MeD-3D, a pre-trained 3D-ResNet18 model, processes CT and MRI images. For structured clinical and genomic data, a multi-layer perceptron (MLP) is used. These models are designed to extract deep feature embeddings from each modality, which are then fused through an early and late integration architecture. This fusion strategy enables the model to combine complementary information from multiple sources. Additionally, the framework is designed to handle incomplete data, a common challenge in clinical settings, by enabling inference even when certain modalities are missing.
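
The missing-modality handling is the most transferable design point. One common way to realize it is masked late fusion, where per-modality embeddings are averaged over only the modalities present for each patient. A minimal PyTorch sketch under that assumption (dimensions, layer sizes, and the averaging rule are illustrative, not the authors' architecture):

    import torch
    import torch.nn as nn

    class MaskedLateFusion(nn.Module):
        """Average per-modality embeddings, skipping modalities absent for a patient."""
        def __init__(self, embed_dim: int = 256, n_classes: int = 2):
            super().__init__()
            self.head = nn.Linear(embed_dim, n_classes)

        def forward(self, embeddings: torch.Tensor, present: torch.Tensor) -> torch.Tensor:
            # embeddings: (batch, n_modalities, embed_dim); present: (batch, n_modalities)
            w = present / present.sum(dim=1, keepdim=True).clamp(min=1)
            fused = (embeddings * w.unsqueeze(-1)).sum(dim=1)
            return self.head(fused)

    # Illustrative batch: 4 patients, 5 modalities (CT, MRI, WSI, clinical, genomic)
    emb = torch.randn(4, 5, 256)
    mask = torch.tensor([[1, 1, 1, 1, 1],
                         [1, 0, 1, 1, 0],
                         [0, 1, 0, 1, 1],
                         [1, 1, 0, 0, 1]], dtype=torch.float)
    logits = MaskedLateFusion()(emb, mask)           # (4, 2) recurrence logits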

Data Extraction and Curation from Radiology Reports for Pancreatic Cyst Surveillance Using Large Language Models.

Choubey AP, Eguia E, Hollingsworth A, Chatterjee S, D'Angelica MI, Jarnagin WR, Wei AC, Schattner MA, Do RKG, Soares KC

PubMed · Jul 10 2025
Manual curation of radiographic features in pancreatic cyst registries for data abstraction and longitudinal evaluation is time-consuming and limits widespread implementation. We examined the feasibility and accuracy of using large language models (LLMs) to extract clinical variables from radiology reports. A single-center retrospective study included patients under surveillance for pancreatic cysts. Nine radiographic elements used to monitor cyst progression were included: cyst size and main pancreatic duct (MPD) size (continuous); number of lesions (multi-class); and MPD dilation ≥5 mm, branch duct dilation, presence of a solid component, calcific lesion, pancreatic atrophy, and pancreatitis (categorical). GPT-4 (OpenAI) was employed to extract the elements of interest with a zero-shot approach, using prompting to facilitate annotation without any training data. A manually annotated institutional cyst database was used as the ground truth (GT) for comparison. Overall, 3198 longitudinal scans from 991 patients were included. GPT successfully extracted the selected radiographic elements with high accuracy. Among categorical variables, accuracy ranged from 97% for solid component to 99% for calcific lesions. Among the continuous variables, accuracy varied from 92% for cyst size to 97% for MPD size, although Cohen's kappa was higher for cyst size (0.92) than for MPD size (0.82). The lowest accuracy (81%) was noted for the multi-class variable, number of cysts. LLMs can accurately extract and curate data from radiology reports for pancreatic cyst surveillance and can be reliably used to assemble longitudinal databases. Future applications of this work may potentiate the development of artificial intelligence-based surveillance models.
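
The zero-shot extraction pattern is straightforward to sketch with the OpenAI Python client. In the example below, the prompt wording, model string, field names, and JSON schema are illustrative assumptions, not the study's actual prompts:

    import json
    from openai import OpenAI

    client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

    PROMPT = """Extract the following from the radiology report as JSON:
    cyst_size_cm (number or null), mpd_size_mm (number or null),
    num_lesions (integer or "multiple"), mpd_dilation_ge_5mm (true/false),
    branch_duct_dilation (true/false), solid_component (true/false),
    calcific_lesion (true/false), pancreatic_atrophy (true/false),
    pancreatitis (true/false).
    Report:
    """

    def extract_cyst_features(report_text: str) -> dict:
        response = client.chat.completions.create(
            model="gpt-4o",                           # illustrative model name
            messages=[{"role": "user", "content": PROMPT + report_text}],
            response_format={"type": "json_object"},  # request parseable JSON
            temperature=0,                            # deterministic extraction
        )
        return json.loads(response.choices[0].message.content)

    # Usage: features = extract_cyst_features(open("report.txt").read())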