Page 184 of 3363359 results

Semi-automatic segmentation of elongated interventional instruments for online calibration of C-arm imaging system.

Chabi N, Illanes A, Beuing O, Behme D, Preim B, Saalfeld S

pubmed · Jun 26 2025
The C-arm biplane imaging system, designed for cerebral angiography, detects pathologies such as aneurysms using dual rotating detectors for high-precision, real-time vascular imaging. However, accuracy can be affected by source-detector trajectory deviations caused by gravitational artifacts and mechanical instabilities. This study addresses these calibration challenges by leveraging interventional devices with radio-opaque markers to optimize the C-arm geometry. We propose an online calibration method using image-specific features derived from interventional devices such as guidewires and catheters (in the remainder of this paper, the term "catheter" refers to both catheters and guidewires). The process begins with gantry-recorded data, refined through iterative nonlinear optimization. A machine learning approach detects and segments elongated devices by identifying candidates via thresholding on a weighted sum of curvature, derivative, and high-frequency indicators. An ensemble classifier segments these regions, followed by post-processing that removes false positives by integrating vessel maps, manual correction, and identification markers. An interpolation step then fills gaps along the catheter. Among the optimized ensemble classifiers, the one trained on the first frames achieved the best performance, with a specificity of 99.43% and a precision of 86.41%. The calibration method was evaluated on three clinical datasets and four phantom angiogram pairs, reducing the mean backprojection error from 4.11 ± 2.61 mm to 0.15 ± 0.01 mm. Additionally, 3D accuracy analysis showed an average root mean square error of 3.47% relative to the true marker distance. This study explores the use of interventional tools with radio-opaque markers for C-arm self-calibration. The proposed method significantly reduces the 2D backprojection error and 3D RMSE, enabling accurate 3D vascular reconstruction.
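The candidate-detection step described above — thresholding a weighted sum of curvature, derivative, and high-frequency indicators — can be sketched on a 1D intensity profile. This is an illustrative reading, not the authors' implementation; the finite-difference indicators, the weights, and the normalized threshold are all assumed demonstration values.

```python
# Illustrative sketch (assumed, not the authors' code): each pixel along a
# 1D intensity profile is scored by a weighted sum of curvature, derivative,
# and high-frequency indicators, then the normalized score is thresholded.

def candidate_mask(profile, w_curv=0.5, w_deriv=0.3, w_hf=0.2, thresh=0.6):
    n = len(profile)
    scores = [0.0] * n
    for i in range(1, n - 1):
        deriv = abs(profile[i + 1] - profile[i - 1]) / 2.0               # first difference
        curv = abs(profile[i + 1] - 2 * profile[i] + profile[i - 1])     # second difference
        hf = abs(profile[i] - (profile[i - 1] + profile[i + 1]) / 2.0)   # residual vs local mean
        scores[i] = w_curv * curv + w_deriv * deriv + w_hf * hf
    peak = max(scores) or 1.0  # avoid division by zero on flat profiles
    return [s / peak >= thresh for s in scores]
```

A sharp radio-opaque marker produces a local spike whose score survives the threshold, while smooth background does not.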

Improving Clinical Utility of Fetal Cine CMR Using Deep Learning Super-Resolution.

Vollbrecht TM, Hart C, Katemann C, Isaak A, Voigt MB, Pieper CC, Kuetting D, Geipel A, Strizek B, Luetkens JA

pubmed · Jun 26 2025
Fetal cardiovascular magnetic resonance is an emerging tool for prenatal congenital heart disease assessment, but long acquisition times and fetal movements limit its clinical use. This study evaluates the clinical utility of deep learning super-resolution reconstructions for rapidly acquired, low-resolution fetal cardiovascular magnetic resonance. This prospective study included participants with fetal congenital heart disease undergoing fetal cardiovascular magnetic resonance in the third trimester of pregnancy, with axial cine images acquired at normal resolution and low resolution. Low-resolution cine data were subsequently reconstructed using a deep learning super-resolution framework (cine<sub>DL</sub>). Acquisition times, apparent signal-to-noise ratio, contrast-to-noise ratio, and edge rise distance were assessed. Volumetry and functional analysis were performed. Qualitative image scores were rated on a 5-point Likert scale. Cardiovascular structures and pathological findings visible only in cine<sub>DL</sub> images were assessed. Statistical analysis included the paired Student <i>t</i> test and the Wilcoxon test. A total of 42 participants were included (median gestational age, 35.9 weeks [interquartile range (IQR), 35.1-36.4]). Cine<sub>DL</sub> acquisition was faster than normal-resolution cine acquisition (134±9.6 s versus 252±8.8 s; <i>P</i><0.001). Quantitative image quality metrics and image quality scores for cine<sub>DL</sub> were higher than or comparable with those of normal-resolution cine images (eg, fetal motion, 4.0 [IQR, 4.0-5.0] versus 4.0 [IQR, 3.0-4.0]; <i>P</i><0.001). Non-patient-related artifacts (eg, backfolding) were more pronounced in cine<sub>DL</sub> than in normal-resolution cine images (4.0 [IQR, 4.0-5.0] versus 5.0 [IQR, 3.0-4.0]; <i>P</i><0.001). Volumetry and functional results were comparable. Cine<sub>DL</sub> revealed additional structures in 10 of 42 fetuses (24%) and additional pathologies in 5 of 42 fetuses (12%), including partial anomalous pulmonary venous connection. Deep learning super-resolution reconstructions of low-resolution acquisitions shorten acquisition times and achieve diagnostic quality comparable with standard images, while being less sensitive to fetal bulk movements, leading to additional diagnostic findings. Deep learning super-resolution may therefore improve the clinical utility of fetal cardiovascular magnetic resonance for accurate prenatal assessment of congenital heart disease.

Harnessing Generative AI for Lung Nodule Spiculation Characterization.

Wang Y, Patel C, Tchoua R, Furst J, Raicu D

pubmed · Jun 26 2025
Spiculation, characterized by irregular, spike-like projections from nodule margins, serves as a crucial radiological biomarker for malignancy assessment and early cancer detection. These distinctive stellate patterns strongly correlate with tumor invasiveness and are vital for accurate diagnosis and treatment planning. Traditional computer-aided diagnosis (CAD) systems are limited in their ability to capture and use these patterns, given their subtlety, the difficulty of quantifying them, and the small datasets available to learn them. To address these challenges, we propose a novel framework leveraging variational autoencoders (VAEs) to discover, extract, and vary disentangled latent representations of lung nodule images. By gradually varying the latent representations of non-spiculated nodule images, we generate augmented datasets containing spiculated nodule variations that, we hypothesize, can improve the diagnostic classification of lung nodules. Using the National Institutes of Health/National Cancer Institute Lung Image Database Consortium (LIDC) dataset, our results show that incorporating these spiculated image variations into the classification pipeline improves spiculation detection performance by up to 7.53%. Notably, this enhancement in spiculation detection is achieved while preserving classification performance on non-spiculated cases. The approach effectively addresses class imbalance and enhances overall classification outcomes. The gradual attenuation of spiculation characteristics demonstrates our model's ability to both capture and generate clinically relevant semantic features in an algorithmic manner. These findings suggest that integrating semantic-based latent representations into CAD models not only enhances diagnostic accuracy but also provides insights into the underlying morphological progression of spiculated nodules, enabling more informed and clinically meaningful AI-driven support systems.
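The augmentation idea above — gradually shifting a disentangled latent dimension of a trained VAE to morph non-spiculated nodules toward spiculated variants — reduces, at its core, to decoding a sequence of shifted latent codes. A minimal sketch under that assumption; the stand-in linear `decode`, the chosen `dim`, and the delta schedule are all illustrative, not the authors' implementation.

```python
# Illustrative sketch (assumed, not the authors' code) of latent traversal:
# augment a dataset by decoding copies of a latent code with one
# disentangled dimension gradually shifted. `decode` is a stand-in linear
# map; in practice it would be the trained VAE decoder network.

def traverse_latent(z, dim, deltas, decode):
    """Decode copies of latent code z with z[dim] shifted by each delta."""
    out = []
    for d in deltas:
        z_shift = list(z)
        z_shift[dim] += d
        out.append(decode(z_shift))
    return out

# toy "decoder": a weighted sum of latent components
decode = lambda z: sum(w * v for w, v in zip([1.0, 2.0], z))
images = traverse_latent([0.5, 0.0], dim=1, deltas=[0.0, 0.5, 1.0], decode=decode)
```

Each successive decode output would serve as one augmented training image with progressively stronger spiculation.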

Development, deployment, and feature interpretability of a three-class prediction model for pulmonary diseases.

Cao Z, Xu G, Gao Y, Xu J, Tian F, Shi H, Yang D, Xie Z, Wang J

pubmed · Jun 26 2025
To develop a high-performance machine learning model for predicting and interpreting features of pulmonary diseases. This retrospective study analyzed clinical and imaging data from patients with non-small cell lung cancer (NSCLC), granulomatous inflammation, and benign tumors, collected across multiple centers from January 2015 to October 2023. Data from two hospitals in Anhui Province were split into a development set (n = 1696) and a test set (n = 424) in an 8:2 ratio, with an external validation set (n = 909) from Zhejiang Province. Features with p < 0.05 in univariate analyses were selected using the Boruta algorithm as input to Random Forest (RF) and XGBoost models. Model efficacy was assessed using receiver operating characteristic (ROC) analysis. A total of 3030 patients were included: 2269 with NSCLC, 529 with granulomatous inflammation, and 232 with benign tumors. The Obuchowski indices for RF and XGBoost in the test set were 0.7193 (95% CI: 0.6567-0.7812) and 0.8282 (95% CI: 0.7883-0.8650), respectively. In the external validation set, the indices were 0.7932 (95% CI: 0.7572-0.8250) for RF and 0.8074 (95% CI: 0.7740-0.8387) for XGBoost. XGBoost achieved better accuracy in both the test (0.81) and external validation (0.79) sets. Calibration curves and decision curve analysis (DCA) showed that XGBoost offered higher net clinical benefit. The XGBoost model outperforms RF in the three-class classification of lung diseases, surpassing Random Forest on CT imaging data in accurately classifying NSCLC, granulomatous inflammation, and benign tumors, and offering superior clinical utility across multicenter data. The model has broad clinical applicability and can be deployed as a web tool for clinicians.
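The Obuchowski index reported above generalizes the two-class AUC to multiclass problems. One simplified reading, for a model emitting a single ordinal score per case, is the unweighted mean of all pairwise AUCs; the sketch below makes that assumption (the published index additionally weights class pairs and handles per-class probability outputs).

```python
from itertools import combinations

# Simplified, assumed reduction of an Obuchowski-style multiclass index:
# the unweighted mean of all pairwise AUCs for a single ordinal score.

def auc_pairwise(scores, labels, pos, neg):
    """Probability that a `pos`-class case outscores a `neg`-class case
    (ties count 0.5) -- the Mann-Whitney reading of the AUC."""
    pos_s = [s for s, y in zip(scores, labels) if y == pos]
    neg_s = [s for s, y in zip(scores, labels) if y == neg]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos_s for n in neg_s)
    return wins / (len(pos_s) * len(neg_s))

def mean_pairwise_auc(scores, labels):
    """Average pairwise AUC over all class pairs (higher class as positive)."""
    classes = sorted(set(labels))
    aucs = [auc_pairwise(scores, labels, hi, lo)
            for lo, hi in combinations(classes, 2)]
    return sum(aucs) / len(aucs)
```

With three classes this averages the NSCLC-vs-granuloma, NSCLC-vs-benign, and granuloma-vs-benign discriminations into one summary value.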

Automated breast ultrasound features associated with diagnostic performance of Multiview convolutional neural network according to radiologists' experience.

Choi EJ, Wang Y, Choi H, Youk JH, Byon JH, Choi S, Ko S, Jin GY

pubmed · Jun 26 2025
To investigate which automated breast ultrasound (ABUS) features affect the use of a Multiview convolutional neural network (CNN) for breast lesions according to radiologists' experience. A total of 656 breast lesions (152 malignant and 504 benign) were included and reviewed by six radiologists for background echotexture, glandular tissue component (GTC), and lesion type and size, both without and with the Multiview CNN. Sensitivity, specificity, and the area under the receiver operating characteristic curve (AUC) for ABUS features were compared between the two sessions according to radiologists' experience. Radiology residents showed significant AUC improvement with the Multiview CNN for mass (0.81 to 0.91, P=0.003) and non-mass lesions (0.56 to 0.90, P=0.007), all background echotextures (homogeneous-fat: 0.84 to 0.94, P=0.04; homogeneous-fibroglandular: 0.85 to 0.93, P=0.01; heterogeneous: 0.68 to 0.88, P=0.002), all GTC levels (minimal: 0.86 to 0.93, P=0.001; mild: 0.82 to 0.94, P=0.003; moderate: 0.75 to 0.88, P=0.01; marked: 0.68 to 0.89, P<0.001), and lesions ≤10 mm (≤5 mm: 0.69 to 0.86, P<0.001; 6-10 mm: 0.83 to 0.92, P<0.001). Breast specialists showed significant AUC improvement with the Multiview CNN for heterogeneous echotexture (0.90 to 0.95, P=0.03), marked GTC (0.88 to 0.95, P<0.001), and lesions ≤10 mm (≤5 mm: 0.89 to 0.93, P=0.02; 6-10 mm: 0.95 to 0.98, P=0.01). With the Multiview CNN, radiology residents' performance with ABUS improved regardless of lesion type, background echotexture, or GTC. For breast lesions 10 mm or smaller, both radiology residents and breast specialists performed better with ABUS.

Dose-aware denoising diffusion model for low-dose CT.

Kim S, Kim BJ, Baek J

pubmed · Jun 26 2025
Low-dose computed tomography (LDCT) denoising plays an important role in medical imaging by reducing the radiation dose to patients. Recently, various data-driven and diffusion-based deep learning (DL) methods have been developed and have shown promising results in LDCT denoising. However, challenges remain in ensuring generalizability to different datasets and in mitigating the uncertainty introduced by stochastic sampling. In this paper, we introduce a novel dose-aware diffusion model that effectively reduces CT image noise while maintaining structural fidelity and generalizing to different dose levels. Approach: Our approach employs a physics-based forward process with continuous timesteps, enabling flexible representation of diverse noise levels. We incorporate a computationally efficient noise calibration module into our diffusion framework that resolves misalignment between intermediate results and their corresponding timesteps. Furthermore, we present a simple yet effective method for estimating appropriate timesteps for unseen LDCT images, allowing generalization to unknown, arbitrary dose levels. Main results: Both qualitative and quantitative evaluations on Mayo Clinic datasets show that the proposed method outperforms existing denoising methods in preserving noise texture and restoring anatomical structures. The proposed method also shows consistent results across different dose levels and on an unseen dataset. Significance: We propose a novel dose-aware diffusion model for LDCT denoising that addresses the generalization and uncertainty issues of existing diffusion-based DL methods. Our experimental results demonstrate the effectiveness of the proposed method across different dose levels. We expect that our approach can provide a clinically practical solution for LDCT denoising, given its high structural fidelity and computational efficiency.
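The core of a dose-aware forward process with continuous timesteps can be sketched as a noise schedule that ties the corruption level to dose: any dose level maps onto some t in [0, 1]. This is a hedged sketch, not the paper's model; the linear schedule, `sigma_max`, and the clamped dose-to-timestep mapping are all illustrative assumptions.

```python
import random

# Hedged sketch of a dose-aware forward process with continuous timesteps:
# the noise standard deviation is tied to dose through sigma(t). The linear
# schedule and sigma_max are illustrative assumptions, not the paper's.

def sigma(t, sigma_max=0.2):
    """Noise standard deviation at continuous timestep t in [0, 1]."""
    return sigma_max * t

def forward_process(image, t, rng):
    """Add timestep-dependent Gaussian noise to a flattened image."""
    return [x + rng.gauss(0.0, sigma(t)) for x in image]

def timestep_for_dose(noise_frac):
    """Map an estimated noise level (as a fraction of sigma_max) to a
    timestep, clamped so arbitrary unseen dose levels land in [0, 1]."""
    return min(max(noise_frac, 0.0), 1.0)

noisy = forward_process([1.0, 2.0, 3.0], t=0.5, rng=random.Random(7))
```

Denoising then amounts to starting the reverse process from the timestep estimated for the unseen image rather than from a fixed one.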

A machine learning model integrating clinical-radiomics-deep learning features accurately predicts postoperative recurrence and metastasis of primary gastrointestinal stromal tumors.

Xie W, Zhang Z, Sun Z, Wan X, Li J, Jiang J, Liu Q, Yang G, Fu Y

pubmed · Jun 26 2025
Post-surgical prediction of recurrence or metastasis in primary gastrointestinal stromal tumors (GISTs) remains challenging. We aim to support individualized clinical follow-up strategies for primary GIST patients, such as shortening follow-up intervals or extending drug administration, based on a clinical deep learning radiomics model (CDLRM). Clinical information on primary GISTs was collected from two independent centers, with postoperative recurrence or metastasis defined as the study endpoint. A total of nine machine learning models were established based on the selected features, and their performance was assessed by the area under the curve (AUC). The CDLRM with the best predictive performance was constructed, and decision curve analysis (DCA) and calibration curves were analyzed separately. Finally, the model was applied to the high-malignant-potential versus low-malignant-potential subgroups, and its optimal clinical application scenarios were explored by comparing DCA performance between the two. A total of 526 patients (260 men and 266 women; mean age, 62 years) were enrolled. CDLRM performed excellently, with AUC values of 0.999, 0.963, and 0.995 for the training, external validation, and aggregated sets, respectively. The calibration curve indicated good agreement between predicted and observed probabilities in the validation cohort. DCA performance across subgroups showed that the model was more clinically valuable in the high-malignant-potential population. CDLRM could support the development of personalized treatment and improved follow-up for patients with a high probability of recurrence or metastasis. 
The model uses imaging features extracted from CT scans (both radiomic and deep features) together with clinical data to predict postoperative recurrence and metastasis in patients with primary GISTs, serving as an auxiliary aid in clinical decision-making. We developed and validated a model to predict recurrence or metastasis in patients taking oral imatinib after GIST surgery, demonstrated that CT image features were associated with recurrence or metastasis, and showed that the model offers good predictive performance and clinical benefit.

Deep learning-based contour propagation in magnetic resonance imaging-guided radiotherapy of lung cancer patients.

Wei C, Eze C, Klaar R, Thorwarth D, Warda C, Taugner J, Hörner-Rieber J, Regnery S, Jaekel O, Weykamp F, Palacios MA, Marschner S, Corradini S, Belka C, Kurz C, Landry G, Rabe M

pubmed · Jun 26 2025
Fast and accurate organ-at-risk (OAR) and gross tumor volume (GTV) contour propagation methods are needed to improve the efficiency of magnetic resonance (MR) imaging-guided radiotherapy. We trained deformable image registration networks to accurately propagate contours from planning to fraction MR images. Approach: Data from 140 stage 1-2 lung cancer patients treated at a 0.35T MR-Linac were split 102/17/21 for training/validation/testing. Additionally, 18 central lung tumor patients, treated at an external 0.35T MR-Linac, and 14 stage 3 lung cancer patients from a phase 1 clinical trial, treated at 0.35T or 1.5T MR-Linacs at three institutions, were used for external testing. Planning and fraction images were paired (490 pairs) for training. Two hybrid transformer-convolutional neural network TransMorph models, one trained with mean squared error (MSE), Dice similarity coefficient (DSC), and regularization losses (TM_{MSE+Dice}) and one with MSE and regularization losses (TM_{MSE}), were trained to deformably register planning to fraction images. The TransMorph models predicted diffeomorphic dense displacement fields. Multi-label images including seven thoracic OARs and the GTV were propagated to generate fraction segmentations. Model predictions were compared with contours obtained through B-spline and vendor registration and the auto-segmentation method nnU-Net. Evaluation metrics included the DSC and Hausdorff distance percentiles (50th and 95th) against clinical contours. Main results: TM_{MSE+Dice} and TM_{MSE} achieved mean OAR/GTV DSCs of 0.90/0.82 and 0.90/0.79 on the internal test data and 0.84/0.77 and 0.85/0.76 on the central lung tumor external test data. On stage 3 data, TM_{MSE+Dice} achieved mean OAR/GTV DSCs of 0.87/0.79 and 0.83/0.78 for the 0.35T MR-Linac datasets, and 0.87/0.75 for the 1.5T MR-Linac dataset. TM_{MSE+Dice} and TM_{MSE} had significantly higher geometric accuracy than the other methods on external data; no significant difference between TM_{MSE+Dice} and TM_{MSE} was found. Significance: The TransMorph models achieved time-efficient segmentation of fraction MRIs with high geometric accuracy and accurately segmented images obtained at different field strengths.
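The evaluation metrics named above — DSC and Hausdorff distance percentiles against clinical contours — can be sketched on point-set masks. A minimal pure-Python sketch for illustration; real pipelines compute these on voxel grids with surface extraction and spacing-aware distances.

```python
# Minimal illustrative versions of the two evaluation metrics: the Dice
# similarity coefficient on voxel sets, and a percentile of the symmetric
# point-to-set distances (e.g. q=95 for the 95th-percentile Hausdorff).

def dice(a, b):
    """Dice similarity coefficient between two voxel sets."""
    a, b = set(a), set(b)
    return 2 * len(a & b) / (len(a) + len(b))

def hausdorff_percentile(a, b, q):
    """q-th percentile of the symmetric point-to-set distances."""
    def dists(src, dst):
        return [min(((x1 - x2) ** 2 + (y1 - y2) ** 2) ** 0.5 for x2, y2 in dst)
                for x1, y1 in src]
    d = sorted(dists(a, b) + dists(b, a))
    idx = min(int(round(q / 100 * (len(d) - 1))), len(d) - 1)
    return d[idx]
```

Using the 95th percentile rather than the maximum makes the distance metric robust to single outlier contour points.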

Deep Learning MRI Models for the Differential Diagnosis of Tumefactive Demyelination versus <i>IDH</i> Wild-Type Glioblastoma.

Conte GM, Moassefi M, Decker PA, Kosel ML, McCarthy CB, Sagen JA, Nikanpour Y, Fereidan-Esfahani M, Ruff MW, Guido FS, Pump HK, Burns TC, Jenkins RB, Erickson BJ, Lachance DH, Tobin WO, Eckel-Passow JE

pubmed · Jun 26 2025
Diagnosis of tumefactive demyelination can be challenging, and diagnosis of indeterminate brain lesions on MRI often requires tissue confirmation via brain biopsy. Noninvasive methods for accurate diagnosis of tumor and nontumor etiologies allow for tailored therapy, optimal tumor control, and a reduced risk of iatrogenic morbidity and mortality. Tumefactive demyelination has imaging features that mimic <i>isocitrate dehydrogenase</i> wild-type glioblastoma (<i>IDH</i>wt GBM). We hypothesized that deep learning applied to postcontrast T1-weighted (T1C) and T2-weighted (T2) MRI can discriminate tumefactive demyelination from <i>IDH</i>wt GBM. Patients with tumefactive demyelination (<i>n</i> = 144) and <i>IDH</i>wt GBM (<i>n</i> = 455) were identified through clinical registries. A 3D DenseNet121 architecture was used to develop models differentiating tumefactive demyelination from <i>IDH</i>wt GBM using both T1C and T2 MRI, as well as only T1C and only T2 images. A 3-stage design was used: 1) model development and internal validation via 5-fold cross-validation using a sex-, age-, and MRI technology-matched set of tumefactive demyelination and <i>IDH</i>wt GBM cases; 2) validation of model specificity on independent <i>IDH</i>wt GBM; and 3) prospective validation on tumefactive demyelination and <i>IDH</i>wt GBM. Stratified areas under the receiver operating characteristic curve (AUROCs) were used to evaluate model performance stratified by sex, age at diagnosis, MRI scanner strength, and MRI acquisition. The deep learning model developed using both T1C and T2 images had a prospective validation AUROC of 0.88 (95% CI: 0.82-0.95). In the prospective validation stage, a model score threshold of 0.28 yielded 91% sensitivity for correctly classifying tumefactive demyelination and 80% specificity for correctly classifying <i>IDH</i>wt GBM. Stratified AUROCs demonstrated that model performance may be improved if thresholds are chosen stratified by age and MRI acquisition.
MRI can provide the basis for applying deep learning models to aid in the differential diagnosis of brain lesions. Further validation is needed to evaluate how well the model generalizes across institutions, patient populations, and technology, and to evaluate optimal thresholds for classification. Next steps also should incorporate additional tumor etiologies such as CNS lymphoma and brain metastases.
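The threshold analysis above — a score cutoff of 0.28 trading sensitivity against specificity — follows directly from how a threshold partitions cases into predicted positives and negatives. A generic sketch, not the study's code; the convention that scores at or above the threshold are called positive (here, tumefactive demyelination) is an assumption.

```python
# Generic sketch of threshold-based operating-point evaluation: cases with
# scores at or above the threshold are called positive. Not the study's
# code; the >= convention and label encoding are assumptions.

def sens_spec(scores, labels, threshold, positive=1):
    """Return (sensitivity, specificity) at a given score threshold."""
    tp = sum(s >= threshold and y == positive for s, y in zip(scores, labels))
    fn = sum(s < threshold and y == positive for s, y in zip(scores, labels))
    tn = sum(s < threshold and y != positive for s, y in zip(scores, labels))
    fp = sum(s >= threshold and y != positive for s, y in zip(scores, labels))
    return tp / (tp + fn), tn / (tn + fp)
```

Sweeping the threshold over all observed scores traces out the ROC curve whose area the AUROC summarizes; choosing per-stratum thresholds is the refinement the authors suggest.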

Epicardial adipose tissue, myocardial remodelling and adverse outcomes in asymptomatic aortic stenosis: a post hoc analysis of a randomised controlled trial.

Geers J, Manral N, Razipour A, Park C, Tomasino GF, Xing E, Grodecki K, Kwiecinski J, Pawade T, Doris MK, Bing R, White AC, Droogmans S, Cosyns B, Slomka PJ, Newby DE, Dweck MR, Dey D

pubmed · Jun 26 2025
Epicardial adipose tissue represents a metabolically active visceral fat depot that is in direct contact with the left ventricular myocardium. While it is associated with coronary artery disease, little is known regarding its role in aortic stenosis. We sought to investigate the association of epicardial adipose tissue with aortic stenosis severity and progression, myocardial remodelling and function, and mortality in asymptomatic patients with aortic stenosis. In a post hoc analysis of 124 patients with asymptomatic mild-to-severe aortic stenosis participating in a prospective clinical trial, baseline epicardial adipose tissue was quantified on CT angiography using fully automated deep learning-enabled software. Aortic stenosis disease severity was assessed at baseline and 1 year. The primary endpoint was all-cause mortality. Neither epicardial adipose tissue volume nor attenuation correlated with aortic stenosis severity or subsequent disease progression as assessed by echocardiography or CT (p>0.05 for all). Epicardial adipose tissue volume correlated with plasma cardiac troponin concentration (r=0.23, p=0.009), left ventricular mass (r=0.46, p<0.001), ejection fraction (r=-0.28, p=0.002), global longitudinal strain (r=0.28, p=0.017), and left atrial volume (r=0.39, p<0.001). During the median follow-up of 48 (IQR 26-73) months, a total of 23 (18%) patients died. In multivariable analysis, both epicardial adipose tissue volume (HR 1.82, 95% CI 1.10 to 3.03; p=0.021) and plasma cardiac troponin concentration (HR 1.47, 95% CI 1.13 to 1.90; p=0.004) were associated with all-cause mortality, after adjustment for age, body mass index and left ventricular ejection fraction. Patients with epicardial adipose tissue volume >90 mm<sup>3</sup> had 3-4 times higher risk of death (adjusted HR 3.74, 95% CI 1.08 to 12.96; p=0.037). 
Epicardial adipose tissue volume does not associate with aortic stenosis severity or its progression but does correlate with blood and imaging biomarkers of impaired myocardial health. The latter may explain the association of epicardial adipose tissue volume with an increased risk of all-cause mortality in patients with asymptomatic aortic stenosis. Trial registration: ClinicalTrials.gov (NCT02132026).
