Page 8 of 30297 results

The utility of low-dose pre-operative CT of ovarian tumor with artificial intelligence iterative reconstruction for diagnosing peritoneal invasion, lymph node and hepatic metastasis.

Cai X, Han J, Zhou W, Yang F, Liu J, Wang Q, Li R

PubMed · May 13, 2025
Diagnosis of peritoneal invasion, lymph node metastasis, and hepatic metastasis is crucial in the decision-making process of ovarian tumor treatment. This study aimed to test the feasibility of low-dose abdominopelvic CT with artificial intelligence iterative reconstruction (AIIR) for diagnosing peritoneal invasion, lymph node metastasis, and hepatic metastasis in pre-operative imaging of ovarian tumors. The study prospectively enrolled 88 patients with pathology-confirmed ovarian tumors, in whom routine-dose CT at the portal venous phase (120 kVp/ref. 200 mAs) with hybrid iterative reconstruction (HIR) was followed by a low-dose scan (120 kVp/ref. 40 mAs) with AIIR. The performance in diagnosing peritoneal invasion and lymph node metastasis was assessed using receiver operating characteristic (ROC) analysis, with pathological results serving as the reference. Hepatic parenchymal metastases were diagnosed, and their signal-to-noise ratio (SNR) and contrast-to-noise ratio (CNR) were measured. The perihepatic structures were also scored on the clarity of the porta hepatis, gallbladder fossa, and intersegmental fissure. The effective dose of the low-dose CT was 79.8% lower than that of the routine-dose scan (2.64 ± 0.46 vs. 13.04 ± 2.25 mSv, p < 0.001). The low-dose AIIR showed an area under the ROC curve (AUC) similar to routine-dose HIR for diagnosing both peritoneal invasion (0.961 vs. 0.960, p = 0.734) and lymph node metastasis (0.711 vs. 0.715, p = 0.355). All 10 hepatic parenchymal metastases were accurately diagnosed on both image sets. The low-dose AIIR exhibited higher SNR and CNR for hepatic parenchymal metastases and superior clarity for perihepatic structures. In low-dose pre-operative CT of ovarian tumors, AIIR delivers diagnostic accuracy for peritoneal invasion, lymph node metastasis, and hepatic metastasis similar to that of routine-dose abdominopelvic CT.
It is feasible and diagnostically safe to apply up to 80% dose reduction in CT imaging of ovarian tumor by using AIIR.
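As a back-of-the-envelope check, the reported ~79.8% dose reduction follows directly from the two mean effective doses, and SNR/CNR can be computed with their standard ROI definitions. A minimal Python sketch; the ROI statistics below are made-up stand-ins, not the study's measurements:

```python
import numpy as np

# Mean effective doses reported in the abstract: 2.64 mSv (low-dose AIIR)
# vs. 13.04 mSv (routine-dose HIR).
reduction = (1 - 2.64 / 13.04) * 100
print(f"Effective dose reduction: {reduction:.1f}%")  # ≈ 79.8%

# Standard SNR/CNR definitions applied to hypothetical ROI statistics
# (the paper's actual ROI placement is not specified here).
def snr(roi):
    return roi.mean() / roi.std(ddof=1)

def cnr(lesion, background):
    return abs(lesion.mean() - background.mean()) / background.std(ddof=1)

rng = np.random.default_rng(0)
lesion = rng.normal(90, 10, 500)      # hypothetical lesion HU samples
background = rng.normal(60, 10, 500)  # hypothetical liver background HU samples
print(f"SNR = {snr(lesion):.1f}, CNR = {cnr(lesion, background):.1f}")
```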

Segmentation of renal vessels on non-enhanced CT images using deep learning models.

Zhong H, Zhao Y, Zhang Y

PubMed · May 13, 2025
To evaluate the possibility of performing renal vessel reconstruction on non-enhanced CT images using deep learning models. CT scans of 177 patients, covering the non-enhanced, arterial, and venous phases, were collected and randomly divided into a training set (n = 120), a validation set (n = 20), and a test set (n = 37). In the training and validation sets, a radiologist annotated the right renal arteries and veins on the non-enhanced images using the contrast phases as references. The trained deep learning models were then evaluated on the test set. For comparison, a radiologist performed renal vessel reconstruction on the test set without the contrast-phase reference. Reconstruction using the arterial and venous phases served as the gold standard. Without the contrast-phase reference, both the radiologist and the model could accurately identify the artery and vein main trunks: accuracy was 91.9% vs. 97.3% (model vs. radiologist) for arteries and 91.9% vs. 100% for veins, with no statistically significant differences. The model had difficulty identifying accessory arteries, with accuracy significantly lower than the radiologist's (44.4% vs. 77.8%, p = 0.044). It was also less accurate for accessory veins, though the difference was not significant (64.3% vs. 85.7%, p = 0.094). Deep learning models could accurately recognize the right renal artery and vein main trunks, with accuracy comparable to that of radiologists. Although the current model still has difficulty recognizing small accessory vessels, further training and model optimization may address this.
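If each reported main-trunk percentage is taken over the 37-case test set (an assumption; the abstract does not give raw counts), the accuracies correspond to 34/37 and 36/37 correct identifications:

```python
# Sanity check of the reported main-trunk artery accuracies, assuming each
# percentage is computed over the 37-patient test set (an assumption; the
# abstract does not report the raw counts).
test_n = 37
model_correct = 34        # hypothetical count consistent with 91.9%
radiologist_correct = 36  # hypothetical count consistent with 97.3%

model_acc = 100 * model_correct / test_n
radiologist_acc = 100 * radiologist_correct / test_n
print(f"model {model_acc:.1f}% vs radiologist {radiologist_acc:.1f}%")
```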

Evaluation of an artificial intelligence noise reduction tool for conventional X-ray imaging - a visual grading study of pediatric chest examinations at different radiation dose levels using anthropomorphic phantoms.

Hultenmo M, Pernbro J, Ahlin J, Bonnier M, Båth M

PubMed · May 13, 2025
Noise reduction tools developed with artificial intelligence (AI) may be implemented to improve image quality and reduce radiation dose, which is of special interest in the more radiosensitive pediatric population. The aim of the present study was to examine the effect of the AI-based intelligent noise reduction (INR) on image quality at different dose levels in pediatric chest radiography. Anteroposterior and lateral images of two anthropomorphic phantoms were acquired with both standard noise reduction and INR at different dose levels. In total, 300 anteroposterior and 420 lateral images were included. Image quality was evaluated by three experienced pediatric radiologists. Gradings were analyzed with visual grading characteristics (VGC) resulting in area under the VGC curve (AUC<sub>VGC</sub>) values and associated confidence intervals (CI). Image quality of different anatomical structures and overall clinical image quality were statistically significantly better in the anteroposterior INR images than in the corresponding standard noise reduced images at each dose level. Compared with reference anteroposterior images at a dose level of 100% with standard noise reduction, the image quality of the anteroposterior INR images was graded as significantly better at dose levels of ≥ 80%. Statistical significance was also achieved at lower dose levels for some structures. The assessments of the lateral images showed similar trends but with fewer significant results. The results of the present study indicate that the AI-based INR may potentially be used to improve image quality at a specific dose level or to reduce dose and maintain the image quality in pediatric chest radiography.
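The AUC<sub>VGC</sub> used above can be estimated nonparametrically as the probability that a randomly chosen rating from one condition exceeds one from the other, counting ties as half. A minimal sketch with hypothetical 5-point gradings (not the study's data):

```python
from itertools import product

def auc_vgc(ratings_a, ratings_b):
    """Nonparametric AUC_VGC estimate: P(a > b) + 0.5 * P(a == b),
    computed over all pairs of gradings from the two conditions."""
    wins = sum((a > b) + 0.5 * (a == b) for a, b in product(ratings_a, ratings_b))
    return wins / (len(ratings_a) * len(ratings_b))

# Hypothetical visual gradings on a 5-point scale (not the study's data).
inr_ratings = [3, 4, 5]
standard_ratings = [2, 3, 4]
print(auc_vgc(inr_ratings, standard_ratings))  # 7/9 ≈ 0.778, i.e. INR favored
```

A value of 0.5 would indicate no preference between the two noise-reduction conditions.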

Development of a deep learning method for phase retrieval image enhancement in phase contrast microcomputed tomography.

Ding XF, Duan X, Li N, Khoz Z, Wu FX, Chen X, Zhu N

PubMed · May 13, 2025
Propagation-based imaging (one method of X-ray phase contrast imaging) with microcomputed tomography (PBI-µCT) offers the potential to visualise low-density materials, such as soft tissues and hydrogel constructs, which are difficult to identify with conventional absorption-based contrast µCT. Conventional µCT reconstruction produces edge-enhanced contrast (EEC) images that preserve sharp boundaries but are susceptible to noise and do not provide a consistent grey-value representation for the same material. Meanwhile, phase retrieval (PR) algorithms can convert edge-enhanced contrast to area contrast, improving the signal-to-noise ratio (SNR) and contrast-to-noise ratio (CNR), but usually result in over-smoothing, creating inaccuracies in quantitative analysis. To alleviate these problems, this study developed a deep learning-based method called edge view enhanced phase retrieval (EVEPR), which strategically integrates the complementary spatial features of denoised EEC and PR images, and further applied this method to segment hydrogel constructs in vitro and ex vivo. EVEPR used paired denoised EEC and PR images to train a deep convolutional neural network (CNN) on a dataset-to-dataset basis. The CNN was trained to retain important high-frequency details, such as edges and boundaries, from the EEC images and area contrast from the PR images. The CNN's predictions showed enhanced area contrast beyond conventional PR algorithms while improving SNR and CNR. In particular, the enhanced CNR allowed images to be segmented more efficiently. EVEPR was applied to in vitro and ex vivo PBI-µCT images of low-density hydrogel constructs. The enhanced visibility and consistency of the hydrogel constructs was essential for segmenting such materials, which usually exhibit extremely poor contrast. The EVEPR images allowed for more accurate segmentation with reduced manual adjustment.
The efficiency in segmentation allowed for the generation of a sizeable database of segmented hydrogel scaffolds which were used in conventional data-driven segmentation applications. EVEPR was demonstrated to be a robust post-image processing method capable of significantly enhancing image quality by training a CNN on paired denoised EEC and PR images. This method not only addressed the common issues of over-smoothing and noise susceptibility in conventional PBI-µCT image processing but also allowed for efficient and accurate in vitro and ex vivo image processing applications of low-density materials.

Deep Learning-Derived Cardiac Chamber Volumes and Mass From PET/CT Attenuation Scans: Associations With Myocardial Flow Reserve and Heart Failure.

Hijazi W, Shanbhag A, Miller RJH, Kavanagh PB, Killekar A, Lemley M, Wopperer S, Knight S, Le VT, Mason S, Acampa W, Rosamond T, Dey D, Berman DS, Chareonthaitawee P, Di Carli MF, Slomka PJ

PubMed · May 13, 2025
Computed tomography (CT) attenuation correction scans are an intrinsic part of positron emission tomography (PET) myocardial perfusion imaging using PET/CT, but anatomic information is rarely derived from these ultralow-dose CT scans. We aimed to assess the association between deep learning-derived cardiac chamber volumes (right atrial, right ventricular, left ventricular, and left atrial) and mass (left ventricular) from these scans with myocardial flow reserve and heart failure hospitalization. We included 18 079 patients with consecutive cardiac PET/CT from 6 sites. A deep learning model estimated cardiac chamber volumes and left ventricular mass from computed tomography attenuation correction imaging. Associations between deep learning-derived CT mass and volumes with heart failure hospitalization and reduced myocardial flow reserve were assessed in a multivariable analysis. During a median follow-up of 4.3 years, 1721 (9.5%) patients experienced heart failure hospitalization. Patients with 3 or 4 abnormal chamber volumes were 7× more likely to be hospitalized for heart failure compared with patients with normal volumes. In adjusted analyses, left atrial volume (hazard ratio [HR], 1.25 [95% CI, 1.19-1.30]), right atrial volume (HR, 1.29 [95% CI, 1.23-1.35]), right ventricular volume (HR, 1.25 [95% CI, 1.20-1.31]), left ventricular volume (HR, 1.27 [95% CI, 1.23-1.35]), and left ventricular mass (HR, 1.25 [95% CI, 1.18-1.32]) were independently associated with heart failure hospitalization. In multivariable analyses, left atrial volume (odds ratio, 1.14 [95% CI, 1.0-1.19]) and ventricular mass (odds ratio, 1.12 [95% CI, 1.6-1.17]) were independent predictors of reduced myocardial flow reserve. Deep learning-derived chamber volumes and left ventricular mass from computed tomography attenuation correction were predictive of heart failure hospitalization and reduced myocardial flow reserve in patients undergoing cardiac PET perfusion imaging. 
This anatomic data can be routinely reported along with other PET/CT parameters to improve risk prediction.

Calibration and Uncertainty for multiRater Volume Assessment in multiorgan Segmentation (CURVAS) challenge results

Meritxell Riera-Marin, Sikha O K, Julia Rodriguez-Comas, Matthias Stefan May, Zhaohong Pan, Xiang Zhou, Xiaokun Liang, Franciskus Xaverius Erick, Andrea Prenner, Cedric Hemon, Valentin Boussot, Jean-Louis Dillenseger, Jean-Claude Nunes, Abdul Qayyum, Moona Mazher, Steven A Niederer, Kaisar Kushibar, Carlos Martin-Isla, Petia Radeva, Karim Lekadir, Theodore Barfoot, Luis C. Garcia Peraza Herrera, Ben Glocker, Tom Vercauteren, Lucas Gago, Justin Englemann, Joy-Marie Kleiss, Anton Aubanell, Andreu Antolin, Javier Garcia-Lopez, Miguel A. Gonzalez Ballester, Adrian Galdran

arXiv preprint · May 13, 2025
Deep learning (DL) has become the dominant approach for medical image segmentation, yet ensuring the reliability and clinical applicability of these models requires addressing key challenges such as annotation variability, calibration, and uncertainty estimation. This is why we created the Calibration and Uncertainty for multiRater Volume Assessment in multiorgan Segmentation (CURVAS) challenge, which highlights the critical role of multiple annotators in establishing a more comprehensive ground truth, emphasizing that segmentation is inherently subjective and that leveraging inter-annotator variability is essential for robust model evaluation. Seven teams participated in the challenge, submitting a variety of DL models evaluated using metrics such as the Dice Similarity Coefficient (DSC), Expected Calibration Error (ECE), and Continuous Ranked Probability Score (CRPS). By incorporating consensus and dissensus ground truth, we assess how DL models handle uncertainty and whether their confidence estimates align with true segmentation performance. Our findings reinforce the importance of well-calibrated models, as better calibration is strongly correlated with the quality of the results. Furthermore, we demonstrate that segmentation models trained on diverse datasets and enriched with pre-trained knowledge exhibit greater robustness, particularly in cases deviating from standard anatomical structures. Notably, the best-performing models achieved high DSC and well-calibrated uncertainty estimates. This work underscores the need for multi-annotator ground truth, thorough calibration assessments, and uncertainty-aware evaluations to develop trustworthy and clinically reliable DL-based medical image segmentation models.
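Of the metrics named above, ECE is the least standardized in practice; one common binned formulation can be sketched as follows, with toy confidences and correctness flags standing in for challenge data:

```python
import numpy as np

def expected_calibration_error(conf, correct, n_bins=10):
    """Binned ECE: sum over bins of (bin weight) * |accuracy - mean confidence|."""
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (conf > lo) & (conf <= hi)
        if mask.any():
            gap = abs(correct[mask].mean() - conf[mask].mean())
            ece += mask.mean() * gap
    return ece

# Toy per-voxel confidences and correctness flags (not challenge data).
conf = np.array([0.93, 0.87, 0.82, 0.61, 0.56])
correct = np.array([1, 1, 0, 1, 0])
print(f"ECE = {expected_calibration_error(conf, correct):.3f}")
```

A perfectly calibrated model (confidence matching empirical accuracy in every bin) yields an ECE of 0.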

Congenital Heart Disease recognition using Deep Learning/Transformer models

Aidar Amangeldi, Vladislav Yarovenko, Angsar Taigonyrov

arXiv preprint · May 13, 2025
Congenital Heart Disease (CHD) remains a leading cause of infant morbidity and mortality, yet non-invasive screening methods often yield false negatives. Deep learning models, with their ability to automatically extract features, can assist doctors in detecting CHD more effectively. In this work, we investigate the use of dual-modality (sound and image) deep learning methods for CHD diagnosis. We achieve 73.9% accuracy on the ZCHSound dataset and 80.72% accuracy on the DICOM Chest X-ray dataset.
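The abstract does not describe how the sound and image modalities are combined; one common, minimal approach is late fusion of the per-modality class probabilities, sketched here with hypothetical softmax outputs:

```python
import numpy as np

def late_fusion(p_sound, p_image, w=0.5):
    """Weighted average of per-modality class probability vectors."""
    return w * p_sound + (1 - w) * p_image

# Hypothetical softmax outputs for classes [normal, CHD]
# (made-up values; not the paper's fusion scheme or data).
p_sound = np.array([0.30, 0.70])
p_image = np.array([0.60, 0.40])
fused = late_fusion(p_sound, p_image)
print(fused, "-> predicted class:", fused.argmax())
```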

Unsupervised Out-of-Distribution Detection in Medical Imaging Using Multi-Exit Class Activation Maps and Feature Masking

Yu-Jen Chen, Xueyang Li, Yiyu Shi, Tsung-Yi Ho

arXiv preprint · May 13, 2025
Out-of-distribution (OOD) detection is essential for ensuring the reliability of deep learning models in medical imaging applications. This work is motivated by the observation that class activation maps (CAMs) for in-distribution (ID) data typically emphasize regions that are highly relevant to the model's predictions, whereas OOD data often lacks such focused activations. By masking input images with inverted CAMs, the feature representations of ID data undergo more substantial changes compared to those of OOD data, offering a robust criterion for differentiation. In this paper, we introduce a novel unsupervised OOD detection framework, Multi-Exit Class Activation Map (MECAM), which leverages multi-exit CAMs and feature masking. By utilizing multi-exit networks that combine CAMs from varying resolutions and depths, our method captures both global and local feature representations, thereby enhancing the robustness of OOD detection. We evaluate MECAM on multiple ID datasets, including ISIC19 and PathMNIST, and test its performance against three medical OOD datasets, RSNA Pneumonia, COVID-19, and HeadCT, and one natural image OOD dataset, iSUN. Comprehensive comparisons with state-of-the-art OOD detection methods validate the effectiveness of our approach. Our findings emphasize the potential of multi-exit networks and feature masking for advancing unsupervised OOD detection in medical imaging, paving the way for more reliable and interpretable models in clinical practice.
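The core criterion can be illustrated with a toy sketch: mask the input with the inverted CAM and measure how much a feature representation shifts. The feature extractor and CAMs below are made-up stand-ins (the paper uses CAMs drawn from multiple exits of a trained CNN):

```python
import numpy as np

def feature_change_score(image, cam, feature_fn):
    """Simplified MECAM-style criterion: mask the image with the inverted
    CAM (assumed already in [0, 1]) and measure how much the feature
    representation shifts. When the CAM covers the evidence the features
    rely on (the ID case), masking shifts the features strongly."""
    masked = image * (1.0 - cam)
    return float(np.linalg.norm(feature_fn(image) - feature_fn(masked)))

# Toy feature extractor standing in for a CNN's penultimate layer:
# it depends heavily on the top-left 3x3 "evidence" region.
def toy_features(x):
    return np.array([x[0:3, 0:3].mean(), x.mean()])

rng = np.random.default_rng(0)
image = rng.random((8, 8))

cam_on_evidence = np.zeros((8, 8)); cam_on_evidence[0:3, 0:3] = 1.0  # ID-like
cam_elsewhere = np.zeros((8, 8)); cam_elsewhere[5:8, 5:8] = 1.0      # OOD-like

id_score = feature_change_score(image, cam_on_evidence, toy_features)
ood_score = feature_change_score(image, cam_elsewhere, toy_features)
print(f"ID-like score {id_score:.3f} > OOD-like score {ood_score:.3f}")
```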

A Deep Learning-Driven Framework for Inhalation Injury Grading Using Bronchoscopy Images

Yifan Li, Alan W Pang, Jo Woon Chong

arXiv preprint · May 13, 2025
Clinical diagnosis and grading of inhalation injuries is challenging due to the limitations of traditional methods, such as the Abbreviated Injury Score (AIS), which rely on subjective assessments and show weak correlations with clinical outcomes. This study introduces a novel deep learning-based framework for grading inhalation injuries from bronchoscopy images, using the duration of mechanical ventilation as an objective metric. To address the scarcity of medical imaging data, we propose enhanced StarGAN, a generative model that integrates Patch Loss and SSIM Loss to improve the quality and clinical relevance of synthetic images. The augmented dataset generated by enhanced StarGAN significantly improved classification performance when evaluated using the Swin Transformer, achieving an accuracy of 77.78%, an 11.11% improvement over the original dataset. Image quality was assessed using the Fréchet Inception Distance (FID), where enhanced StarGAN achieved the lowest FID of 30.06, outperforming baseline models. Burn surgeons confirmed the realism and clinical relevance of the generated images, particularly the preservation of bronchial structures and color distribution. These results highlight the potential of enhanced StarGAN in addressing data limitations and improving classification accuracy for inhalation injury grading.
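The FID metric used above has a closed form once Gaussians are fitted to the two feature distributions. A minimal sketch with toy 4-dimensional features (real FID uses 2048-d Inception-v3 activations):

```python
import numpy as np
from scipy.linalg import sqrtm

def fid(mu1, sigma1, mu2, sigma2):
    """Fréchet distance between two Gaussians fitted to feature sets:
    ||mu1 - mu2||^2 + Tr(S1 + S2 - 2 * (S1 @ S2)^{1/2})."""
    covmean = sqrtm(sigma1 @ sigma2)
    if np.iscomplexobj(covmean):  # discard numerical imaginary residue
        covmean = covmean.real
    diff = mu1 - mu2
    return float(diff @ diff + np.trace(sigma1 + sigma2 - 2 * covmean))

# Toy feature statistics (stand-ins; not the paper's Inception features).
rng = np.random.default_rng(0)
real = rng.normal(0.0, 1.0, (500, 4))
fake = rng.normal(0.3, 1.0, (500, 4))
mu_r, s_r = real.mean(0), np.cov(real, rowvar=False)
mu_f, s_f = fake.mean(0), np.cov(fake, rowvar=False)
print(f"FID(real, real) = {fid(mu_r, s_r, mu_r, s_r):.4f}")  # ≈ 0
print(f"FID(real, fake) = {fid(mu_r, s_r, mu_f, s_f):.4f}")
```

Lower is better: identical distributions give an FID of 0, so enhanced StarGAN's 30.06 being the lowest among the compared models indicates its synthetic images are statistically closest to the real ones.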

An incremental algorithm for non-convex AI-enhanced medical image processing

Elena Morotti

arXiv preprint · May 13, 2025
Solving non-convex regularized inverse problems is challenging due to their complex optimization landscapes and multiple local minima. However, these models remain widely studied as they often yield high-quality, task-oriented solutions, particularly in medical imaging, where the goal is to enhance clinically relevant features rather than merely minimizing global error. We propose incDG, a hybrid framework that integrates deep learning with incremental model-based optimization to efficiently approximate the $\ell_0$-optimal solution of imaging inverse problems. Built on the Deep Guess strategy, incDG exploits a deep neural network to generate effective initializations for a non-convex variational solver, which refines the reconstruction through regularized incremental iterations. This design combines the efficiency of Artificial Intelligence (AI) tools with the theoretical guarantees of model-based optimization, ensuring robustness and stability. We validate incDG on TpV-regularized optimization tasks, demonstrating its effectiveness in medical image deblurring and tomographic reconstruction across diverse datasets, including synthetic images, brain CT slices, and chest-abdomen scans. Results show that incDG outperforms both conventional iterative solvers and deep learning-based methods, achieving superior accuracy and stability. Moreover, we confirm that training incDG without ground truth does not significantly degrade performance, making it a practical and powerful tool for solving non-convex inverse problems in imaging and beyond.
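The specific ingredients of incDG (the trained Deep Guess initializer and the TpV-regularized solver) are not reproduced here; the general pattern of warm-starting an iterative solver for an ℓ0-constrained least-squares problem can be sketched with iterative hard thresholding, using a crude backprojection in place of the network's initialization:

```python
import numpy as np

def hard_threshold(x, s):
    """Keep the s largest-magnitude entries (projection onto s-sparse vectors)."""
    out = np.zeros_like(x)
    idx = np.argsort(np.abs(x))[-s:]
    out[idx] = x[idx]
    return out

def iht(A, y, x0, s, step, iters=300):
    """Iterative hard thresholding, warm-started at x0:
    gradient step on ||y - A x||^2 followed by the l0 projection."""
    x = x0.copy()
    for _ in range(iters):
        x = hard_threshold(x + step * A.T @ (y - A @ x), s)
    return x

rng = np.random.default_rng(0)
m, n, s = 40, 80, 4
A = rng.normal(size=(m, n)) / np.sqrt(m)
x_true = np.zeros(n)
x_true[rng.choice(n, s, replace=False)] = rng.normal(size=s)
y = A @ x_true

# Stand-in for the "deep guess": a thresholded backprojection
# (an assumption; incDG uses a trained DNN for this initialization).
x_init = hard_threshold(A.T @ y, s)
step = 0.9 / np.linalg.norm(A, 2) ** 2  # conservative step for stability
x_hat = iht(A, y, x_init, s, step)
print("relative error:", np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))
```

The warm start matters because the non-convex projection makes the iteration sensitive to its basin of attraction; a good initializer steers it toward the desired sparse solution.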