
Automation in tibial implant loosening detection using deep-learning segmentation.

Magg C, Ter Wee MA, Buijs GS, Kievit AJ, Schafroth MU, Dobbe JGG, Streekstra GJ, Sánchez CI, Blankevoort L

PubMed · Jun 27, 2025
Patients with recurrent complaints after total knee arthroplasty may suffer from aseptic implant loosening. Current imaging modalities do not quantify looseness of knee arthroplasty components. A recently developed and validated workflow quantifies the tibial component displacement relative to the bone from CT scans acquired under valgus and varus load. The 3D analysis approach includes segmentation and registration of the tibial component and bone. In the current approach, the semi-automatic segmentation requires user interaction, adding complexity to the analysis. The research question is whether the segmentation step can be fully automated while keeping the outcomes unchanged. In this study, different deep-learning (DL) models for fully automatic segmentation are proposed and evaluated. For this, we employ three datasets: 20 cadaveric CT pairs and 10 cadaveric CT scans for model development, and 72 patient CT pairs for evaluation. Based on the performance on the development dataset, the final model was selected, and its predictions replaced the semi-automatic segmentation in the current approach. Implant displacement was quantified by the rotation about the screw axis, maximum total point motion, and mean target registration error. The displacement parameters of the proposed approach showed a statistically significant difference between fixed and loose samples in a cadaver dataset, as well as between asymptomatic and loose samples in a patient dataset, similar to the outcomes of the current approach. The methodological error calculated on a reproducibility dataset was not statistically significantly different between the two approaches. The results of the proposed and current approaches showed excellent reliability for one and three operators on two datasets. The conclusion is that full automation of knee implant displacement assessment is feasible by utilizing a DL-based segmentation model while maintaining the capability to distinguish between fixed and loose implants.
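
The displacement metrics used here (rotation about the screw axis, maximum total point motion, and mean target registration error) are standard rigid-body quantities. As a hedged illustration, independent of the authors' workflow, the NumPy sketch below computes mean target registration error and maximum total point motion from a hypothetical 4×4 rigid transform applied to placeholder surface points.

```python
import numpy as np

def apply_rigid(points, T):
    """Apply a 4x4 homogeneous rigid transform to an (N, 3) point cloud."""
    homog = np.hstack([points, np.ones((points.shape[0], 1))])
    return (homog @ T.T)[:, :3]

def displacement_metrics(points, T_loaded):
    """Mean target registration error (TRE) and maximum total point motion (MTPM).

    points   : (N, 3) surface points of the tibial component in the reference pose.
    T_loaded : 4x4 transform mapping the reference pose to the pose under load,
               expressed in the bone coordinate frame (hypothetical input).
    """
    moved = apply_rigid(points, T_loaded)
    per_point = np.linalg.norm(moved - points, axis=1)  # displacement per point (mm)
    return per_point.mean(), per_point.max()            # mean TRE, MTPM

# Hypothetical example: 1 mm translation plus a 0.5-degree rotation about z.
theta = np.deg2rad(0.5)
T = np.array([[np.cos(theta), -np.sin(theta), 0, 1.0],
              [np.sin(theta),  np.cos(theta), 0, 0.0],
              [0,              0,             1, 0.0],
              [0,              0,             0, 1.0]])
pts = np.random.default_rng(0).uniform(-30, 30, size=(500, 3))
mean_tre, mtpm = displacement_metrics(pts, T)
print(f"mean TRE = {mean_tre:.2f} mm, MTPM = {mtpm:.2f} mm")
```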

Improving radiology reporting accuracy: use of GPT-4 to reduce errors in reports.

Mayes CJ, Reyes C, Truman ME, Dodoo CA, Adler CR, Banerjee I, Khandelwal A, Alexander LF, Sheedy SP, Thompson CP, Varner JA, Zulfiqar M, Tan N

PubMed · Jun 27, 2025
Radiology reports are essential for communicating imaging findings to guide diagnosis and treatment. Although most radiology reports are accurate, errors can occur in the final reports due to high workloads, use of dictation software, and human error. Advanced artificial intelligence models, such as GPT-4, show potential as tools to improve report accuracy. This retrospective study evaluated how GPT-4 performed in detecting and correcting errors in finalized abdominopelvic computed tomography (CT) reports in a real-world setting. We evaluated finalized abdominopelvic CT reports from a tertiary health system by using GPT-4 with zero-shot learning techniques. Six radiologists each reviewed 100 of their finalized reports (randomly selected), evaluating GPT-4's suggested revisions for agreement, acceptance, and clinical impact. The radiologists' responses were compared by years in practice and sex. GPT-4 identified issues and suggested revisions for 91% of the 600 reports; most revisions addressed grammar (74%). The radiologists agreed with 27% of the revisions and accepted 23%. Most revisions were rated as having no (44%) or low (46%) clinical impact. Potential harm was rare (8%), with only 2 cases of potentially severe harm. Radiologists with less experience (≤ 7 years of practice) were more likely to agree with the revisions suggested by GPT-4 than those with more experience (34% vs. 20%, P = .003) and accepted a greater percentage of the revisions (32% vs. 15%, P = .003). Although GPT-4 showed promise in identifying errors and improving the clarity of finalized radiology reports, most errors were categorized as minor, with no or low clinical impact. Collectively, the radiologists accepted 23% of the suggested revisions in their finalized reports. This study highlights the potential of GPT-4 as a prospective tool for radiology reporting, with further refinement needed for consistent use in clinical practice.
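
The study applies GPT-4 to finalized reports in a zero-shot setting. A minimal sketch of that kind of call with the OpenAI Python client is shown below; the prompt wording, model name, and error taxonomy are illustrative assumptions, not the authors' protocol.

```python
from openai import OpenAI  # assumes the `openai` package (v1+) is installed

client = OpenAI()  # reads OPENAI_API_KEY from the environment

SYSTEM_PROMPT = (
    "You are a radiology proofreading assistant. Identify errors "
    "(grammar, laterality, measurement, omission) in the report and "
    "suggest corrected wording. If the report is error-free, say so."
)

def suggest_revisions(report_text: str, model: str = "gpt-4") -> str:
    """Zero-shot request for suggested revisions to one finalized report."""
    response = client.chat.completions.create(
        model=model,
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": report_text},
        ],
        temperature=0,  # deterministic output for auditability
    )
    return response.choices[0].message.content

# Example call (hypothetical report text):
# print(suggest_revisions("CT abdomen/pelvis: No acute abnormality ..."))
```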

Artificial intelligence in coronary CT angiography: transforming the diagnosis and risk stratification of atherosclerosis.

Irannejad K, Mafi M, Krishnan S, Budoff MJ

PubMed · Jun 27, 2025
Coronary CT Angiography (CCTA) is essential for assessing atherosclerosis and coronary artery disease, aiding in early detection, risk prediction, and clinical assessment. However, traditional CCTA interpretation is limited by observer variability, time inefficiency, and inconsistent plaque characterization. AI has emerged as a transformative tool, enhancing diagnostic accuracy, workflow efficiency, and risk prediction for major adverse cardiovascular events (MACE). Studies show that AI improves stenosis detection by 27%, inter-reader agreement by 30%, and reduces reporting times by 40%, thereby addressing key limitations of manual interpretation. Integrating AI with multimodal imaging (e.g., FFR-CT, PET-CT) further enhances ischemia detection by 28% and lesion classification by 35%, providing a more comprehensive cardiovascular evaluation. This review synthesizes recent advancements in CCTA-AI automation, risk stratification, and precision diagnostics while critically analyzing data quality, generalizability, ethics, and regulation challenges. Future directions, including real-time AI-assisted triage, cloud-based diagnostics, and AI-driven personalized medicine, are explored for their potential to revolutionize clinical workflows and optimize patient outcomes.

3D Auto-segmentation of pancreas cancer and surrounding anatomical structures for surgical planning.

Rhu J, Oh N, Choi GS, Kim JM, Choi SY, Lee JE, Lee J, Jeong WK, Min JH

PubMed · Jun 27, 2025
This multicenter study aimed to develop a deep learning-based autosegmentation model for pancreatic cancer and surrounding anatomical structures using computed tomography (CT) to enhance surgical planning. We included patients with pancreatic cancer who underwent pancreatic surgery at three tertiary referral hospitals. A hierarchical Swin Transformer V2 model was implemented to segment the pancreas, pancreatic cancers, and peripancreatic structures from preoperative contrast-enhanced CT scans. Data were divided into training and internal validation sets at a 3:1 ratio (from one tertiary institution), with a separately prepared external validation set (from two separate institutions). Segmentation performance was quantitatively assessed using the Dice similarity coefficient (DSC) and qualitatively evaluated (complete vs partial vs absent). A total of 275 patients (51.6% male, mean age 65.8 ± 9.5 years) were included (176 training group, 59 internal validation group, and 40 external validation group). No significant differences in baseline characteristics were observed between the groups. The model achieved an overall mean DSC of 75.4 ± 6.0 and 75.6 ± 4.8 in the internal and external validation groups, respectively. It showed particularly high accuracy for the pancreas parenchyma (84.8 ± 5.3 and 86.1 ± 4.1) and lower accuracy for pancreatic cancer (57.0 ± 28.7 and 54.5 ± 23.5). The DSC scores for pancreatic cancer tended to increase with larger tumor sizes. Moreover, the qualitative assessments revealed high accuracy for the superior mesenteric artery (complete segmentation, 87.5%-100%), portal and superior mesenteric vein (97.5%-100%), and pancreas parenchyma (83.1%-87.5%), but lower accuracy for cancers (62.7%-65.0%). The deep learning-based autosegmentation model for 3D visualization of pancreatic cancer and peripancreatic structures showed robust performance. Further improvement will enable many promising applications in clinical research.
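
Segmentation quality is reported as the Dice similarity coefficient (DSC). For reference, a minimal NumPy implementation of the DSC for binary masks is sketched below; the toy volumes are placeholders, not the study's data, and the metric is independent of the Swin Transformer V2 model itself.

```python
import numpy as np

def dice_similarity(pred: np.ndarray, truth: np.ndarray) -> float:
    """Dice similarity coefficient between two binary masks of any shape."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    denom = pred.sum() + truth.sum()
    if denom == 0:          # both masks empty: define DSC as 1.0
        return 1.0
    return 2.0 * intersection / denom

# Hypothetical toy volumes standing in for pancreas masks.
rng = np.random.default_rng(0)
gt = rng.integers(0, 2, size=(64, 64, 64))
pr = gt.copy()
pr[:8] = 0  # simulate an under-segmented region
print(f"DSC = {dice_similarity(pr, gt):.3f}")
```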

Automated Sella-Turcica Annotation and Mesh Alignment of 3D Stereophotographs for Craniosynostosis Patients Using a PCA-FFNN Based Approach.

Bielevelt F, Chargi N, van Aalst J, Nienhuijs M, Maal T, Delye H, de Jong G

PubMed · Jun 27, 2025
Craniosynostosis, characterized by the premature fusion of cranial sutures, can lead to significant neurological and developmental complications, necessitating early diagnosis and precise treatment. Traditional cranial morphologic assessment has relied on CT scans, which expose infants to ionizing radiation. Recently, 3D stereophotogrammetry has emerged as a noninvasive alternative, but accurately aligning 3D photographs within standardized reference frames, such as the Sella-turcica-Nasion (S-N) frame, remains a challenge. This study proposes a novel method for predicting the Sella turcica (ST) coordinate from 3D cranial surface models using Principal Component Analysis (PCA) combined with a Feedforward Neural Network (FFNN). The accuracy of this method is compared with the conventional Computed Cranial Focal Point (CCFP) method, which has limitations, especially in cases of asymmetric cranial deformations such as plagiocephaly. A dataset of 153 CT scans, including 68 craniosynostosis subjects, was used to train and test the PCA-FFNN model. The results demonstrate that the PCA-FFNN approach outperforms CCFP, achieving significantly lower deviations in ST coordinate predictions (3.61 vs. 8.38 mm, P < 0.001), particularly along the y- and z-axes. In addition, mesh realignment within the S-N reference frame showed improved accuracy with the PCA-FFNN method, evidenced by lower mean deviations and reduced dispersion in distance maps. These findings highlight the potential of the PCA-FFNN approach to provide a more reliable, noninvasive solution for cranial assessment, improving craniosynostosis follow-up and enhancing clinical outcomes.
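
The PCA-FFNN idea is to compress each cranial surface into a low-dimensional shape vector and regress the ST coordinate from it. A hedged scikit-learn sketch of that pipeline (PCA followed by a small feedforward regressor) is below; the vertex count, number of components, and network size are placeholder assumptions, not the paper's settings.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline

# Hypothetical training data: each mesh flattened to a fixed-length vector of
# corresponding vertex coordinates; targets are Sella turcica (x, y, z) in mm.
rng = np.random.default_rng(42)
n_subjects, n_vertices = 153, 2000
X = rng.normal(size=(n_subjects, n_vertices * 3))   # flattened meshes
y = rng.normal(size=(n_subjects, 3))                # ST coordinates

# PCA compresses each mesh to a few shape modes; the FFNN maps the
# shape modes to the ST coordinate.
model = make_pipeline(
    PCA(n_components=30),
    MLPRegressor(hidden_layer_sizes=(64, 32), max_iter=2000, random_state=0),
)
model.fit(X, y)

pred = model.predict(X[:1])
error_mm = np.linalg.norm(pred - y[:1])
print(f"prediction error on one training sample: {error_mm:.2f} mm")
```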

Early prediction of adverse outcomes in liver cirrhosis using a CT-based multimodal deep learning model.

Xie N, Liang Y, Luo Z, Hu J, Ge R, Wan X, Wang C, Zou G, Guo F, Jiang Y

PubMed · Jun 27, 2025
Early-stage cirrhosis frequently presents without symptoms, making timely identification of high-risk patients challenging. We aimed to develop a deep learning-based triple-modal fusion liver cirrhosis network (TMF-LCNet) for the prediction of adverse outcomes, offering a promising tool to enhance early risk assessment and improve clinical management strategies. This retrospective study included 243 patients with early-stage cirrhosis across two centers. Adverse outcomes were defined as the development of severe complications such as ascites, hepatic encephalopathy, and variceal bleeding. TMF-LCNet was developed by integrating three types of data: non-contrast abdominal CT images, radiomic features extracted from the liver and spleen, and clinical text detailing laboratory parameters and adipose tissue composition measurements. TMF-LCNet was compared with conventional methods on the same dataset, and single-modality versions of TMF-LCNet were tested to determine the impact of each data type. Model effectiveness was measured using the area under the receiver operating characteristic curve (AUC) for discrimination, calibration curves for model fit, and decision curve analysis (DCA) for clinical utility. TMF-LCNet demonstrated superior predictive performance compared to conventional image-based, radiomics-based, and multimodal methods, achieving an AUC of 0.797 in the training cohort (n = 184) and 0.747 in the external test cohort (n = 59). Only TMF-LCNet exhibited robust model calibration in both cohorts. Of the three data types, the imaging modality contributed the most, as the image-only version of TMF-LCNet achieved performance closest to the complete version (AUC = 0.723 and 0.716, respectively; p > 0.05). This was followed by the text modality, with radiomics contributing the least, a pattern consistent with the clinical utility trends observed in DCA. TMF-LCNet represents an accurate and robust tool for predicting adverse outcomes in early-stage cirrhosis by integrating multiple data types. It holds potential for early identification of high-risk patients, guiding timely interventions, and ultimately improving patient prognosis.
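
TMF-LCNet fuses three inputs: CT image features, liver and spleen radiomics, and clinical text features. A minimal PyTorch sketch of one common way to do this, late fusion by concatenating per-modality embeddings, is shown below; the feature dimensions and fusion head are illustrative assumptions, not the published architecture.

```python
import torch
import torch.nn as nn

class TripleModalFusion(nn.Module):
    """Toy late-fusion head: image, radiomics, and text embeddings are
    projected, concatenated, and classified for adverse outcome."""

    def __init__(self, img_dim=512, rad_dim=100, txt_dim=64):
        super().__init__()
        self.img_proj = nn.Sequential(nn.Linear(img_dim, 128), nn.ReLU())
        self.rad_proj = nn.Sequential(nn.Linear(rad_dim, 32), nn.ReLU())
        self.txt_proj = nn.Sequential(nn.Linear(txt_dim, 32), nn.ReLU())
        self.head = nn.Sequential(
            nn.Linear(128 + 32 + 32, 64), nn.ReLU(),
            nn.Linear(64, 1),  # logit for adverse outcome
        )

    def forward(self, img_feat, rad_feat, txt_feat):
        fused = torch.cat(
            [self.img_proj(img_feat),
             self.rad_proj(rad_feat),
             self.txt_proj(txt_feat)], dim=1)
        return self.head(fused)

# Hypothetical batch of 4 patients with pre-extracted features.
model = TripleModalFusion()
logits = model(torch.randn(4, 512), torch.randn(4, 100), torch.randn(4, 64))
probs = torch.sigmoid(logits)
print(probs.shape)  # torch.Size([4, 1])
```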

A multi-view CNN model to predict resolving of new lung nodules on follow-up low-dose chest CT.

Wang J, Zhang X, Tang W, van Tuinen M, Vliegenthart R, van Ooijen P

PubMed · Jun 27, 2025
New, intermediate-sized nodules in lung cancer screening undergo follow-up CT, but some of these will resolve. We evaluated the performance of a multi-view convolutional neural network (CNN) in distinguishing resolving and non-resolving new, intermediate-sized lung nodules. This retrospective study utilized data on 344 intermediate-sized nodules (50-500 mm³) in 250 participants from the NELSON (Dutch-Belgian Randomized Lung Cancer Screening) trial. We implemented four-fold cross-validation for model training and testing. A multi-view CNN model was developed by combining three two-dimensional (2D) CNN models and one three-dimensional (3D) CNN model. We used 2D, 2.5D, and 3D models for comparison. The models' performance was evaluated using sensitivity, specificity, and area under the ROC curve (AUC). Specificity, indicating the percentage of non-resolving nodules requiring follow-up that can be correctly predicted, was maximized. Among all nodules, 18.3% (63) were resolving. The multi-view CNN model achieved an AUC of 0.81, with a mean sensitivity of 0.63 (SD, 0.15) and a mean specificity of 0.93 (SD, 0.02). The model significantly improved performance compared to the 2D, 2.5D, and 3D models (p < 0.05). At a specificity greater than 90% (meaning < 10% of non-resolving nodules are incorrectly identified as resolving), follow-up CT in 14% of individuals could be prevented. The multi-view CNN model achieved high specificity in discriminating new intermediate nodules that would need follow-up CT by identifying non-resolving nodules. After further validation and optimization, this model may assist with decision-making when new intermediate nodules are found in lung cancer screening. The multi-view CNN-based model has the potential to reduce unnecessary follow-up scans when new nodules are detected, aiding radiologists in making earlier, more informed decisions. Predicting the resolution of new intermediate lung nodules in lung cancer screening CT is a challenge. Our multi-view CNN model showed an AUC of 0.81, a specificity of 0.93, and a sensitivity of 0.63 at the nodule level. The multi-view model demonstrated a significant improvement in AUC compared to the three 2D models, one 2.5D model, and one 3D model.
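
The multi-view model combines three 2D CNN branches with one 3D CNN branch. A hedged PyTorch sketch of that idea is below, fusing the branches by averaging their logits; the tiny branch architectures, input sizes, and fusion rule are assumptions for illustration, not the authors' design.

```python
import torch
import torch.nn as nn

class Branch2D(nn.Module):
    """Tiny 2D CNN branch for one orthogonal nodule view (assumed design)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
            nn.Flatten(), nn.Linear(8, 1))
    def forward(self, x):
        return self.net(x)

class Branch3D(nn.Module):
    """Tiny 3D CNN branch for the full nodule volume (assumed design)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv3d(1, 8, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool3d(1),
            nn.Flatten(), nn.Linear(8, 1))
    def forward(self, x):
        return self.net(x)

class MultiViewNoduleNet(nn.Module):
    """Average the logits of three 2D views and one 3D view."""
    def __init__(self):
        super().__init__()
        self.views2d = nn.ModuleList([Branch2D() for _ in range(3)])
        self.view3d = Branch3D()
    def forward(self, axial, coronal, sagittal, volume):
        logits = torch.stack(
            [b(v) for b, v in zip(self.views2d, (axial, coronal, sagittal))]
            + [self.view3d(volume)], dim=0)
        return logits.mean(dim=0)  # fused resolving-vs-non-resolving logit

model = MultiViewNoduleNet()
out = model(torch.randn(2, 1, 48, 48), torch.randn(2, 1, 48, 48),
            torch.randn(2, 1, 48, 48), torch.randn(2, 1, 48, 48, 48))
print(out.shape)  # torch.Size([2, 1])
```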

A two-step automatic identification of contrast phases for abdominal CT images based on residual networks.

Liu Q, Jiang J, Wu K, Zhang Y, Sun N, Luo J, Ba T, Lv A, Liu C, Yin Y, Yang Z, Xu H

PubMed · Jun 27, 2025
To develop a deep learning model based on Residual Networks (ResNet) for the automated and accurate identification of contrast phases in abdominal CT images. A dataset of 1175 abdominal contrast-enhanced CT scans was retrospectively collected for model development, and another independent dataset of 215 scans from five hospitals was collected for external testing. Each contrast phase was independently annotated by two radiologists. A ResNet-based model was developed to automatically classify phases into the early arterial phase (EAP) or late arterial phase (LAP), portal venous phase (PVP), and delayed phase (DP). Strategy A identified EAP or LAP, PVP, and DP in one step. Strategy B used a two-step approach: first classifying images as arterial phase (AP), PVP, and DP, then further classifying AP images into EAP or LAP. Model performance and strategy comparison were evaluated. In the internal test set, the overall accuracy of the two-step strategy was 98.3% (283/288), significantly higher than that of the one-step strategy (91.7%, 264/288; p < 0.001). In the external test set, the two-step model achieved an overall accuracy of 99.1% (639/645), with sensitivities of 95.1% (EAP), 99.4% (LAP), 99.5% (PVP), and 99.5% (DP). The proposed two-step ResNet-based model provides highly accurate and robust identification of contrast phases in abdominal CT images, outperforming the conventional one-step strategy. Automated and accurate identification of contrast phases in abdominal CT images provides a robust tool for improving image quality control and establishes a strong foundation for AI-driven applications, particularly those leveraging contrast-enhanced abdominal imaging data. Accurate identification of contrast phases is crucial in abdominal CT imaging. The two-step ResNet-based model achieved superior accuracy across internal and external datasets. Automated phase classification strengthens imaging quality control and supports precision AI applications.
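
Strategy B is a two-step cascade: one classifier separates AP, PVP, and DP, and a second classifier splits AP into EAP and LAP. The sketch below illustrates that inference logic with torchvision ResNet-18 backbones; the backbone choice, class ordering, and preprocessing are assumptions, not the paper's exact configuration.

```python
import torch
import torch.nn as nn
from torchvision import models

def make_resnet(num_classes: int) -> nn.Module:
    """ResNet-18 with its final layer resized (architecture choice assumed)."""
    net = models.resnet18(weights=None)
    net.fc = nn.Linear(net.fc.in_features, num_classes)
    return net

# Step 1: arterial (AP) vs portal venous (PVP) vs delayed (DP).
step1 = make_resnet(num_classes=3)
# Step 2: early arterial (EAP) vs late arterial (LAP), applied only to AP.
step2 = make_resnet(num_classes=2)

STEP1_LABELS = ["AP", "PVP", "DP"]
STEP2_LABELS = ["EAP", "LAP"]

@torch.no_grad()
def classify_phase(image: torch.Tensor) -> str:
    """Two-step phase prediction for one preprocessed CT image (1, 3, H, W)."""
    step1.eval(); step2.eval()
    coarse = STEP1_LABELS[step1(image).argmax(dim=1).item()]
    if coarse != "AP":
        return coarse
    return STEP2_LABELS[step2(image).argmax(dim=1).item()]

print(classify_phase(torch.randn(1, 3, 224, 224)))  # untrained: arbitrary label
```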

Photon-counting micro-CT scanner for deep learning-enabled small animal perfusion imaging.

Allphin AJ, Nadkarni R, Clark DP, Badea CT

PubMed · Jun 27, 2025
In this work, we introduce a benchtop, turntable photon-counting (PC) micro-CT scanner and highlight its application for dynamic small animal perfusion imaging. Approach: Built on recently published hardware, the system now features a CdTe-based photon-counting detector (PCD). We validated its static spectral PC micro-CT imaging using conventional phantoms and assessed dynamic performance with a custom flow-configurable dual-compartment perfusion phantom. The phantom was scanned under varied flow conditions during injections of a low-molecular-weight iodinated contrast agent. In vivo mouse studies with identical injection settings demonstrated potential applications. A pretrained denoising CNN processed large multi-energy, temporal datasets (20 timepoints × 4 energies × 3 spatial dimensions), reconstructed via weighted filtered back projection. A separate CNN, trained on simulated data, performed gamma variate-based 2D perfusion mapping, evaluated qualitatively in phantom and in vivo tests. Main Results: Full five-dimensional reconstructions were denoised using a CNN in ~3% of the time of iterative reconstruction, reducing noise in water at the highest energy threshold from 1206 HU to 86 HU. Decomposed iodine maps, which improved contrast-to-noise ratio from 16.4 (in the lowest energy CT images) to 29.4 (in the iodine maps), were used for perfusion analysis. The perfusion CNN outperformed pixelwise gamma variate fitting by ~33%, with a test set error of 0.04 vs. 0.06 in blood flow index (BFI) maps, and quantified linear BFI changes in the phantom with a coefficient of determination of 0.98. Significance: This work underscores the PC micro-CT scanner's utility for high-throughput small animal perfusion imaging, leveraging spectral PC micro-CT and iodine decomposition. It provides a versatile platform for preclinical vascular research and advanced, time-resolved studies of disease models and therapeutic interventions.
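
The perfusion mapping is based on gamma-variate kinetics, with the CNN replacing conventional pixelwise curve fitting. For reference, a hedged SciPy sketch of that conventional fit is below, using the standard parameterization C(t) = K·(t − t0)^α·exp(−(t − t0)/β); the synthetic time-attenuation curve and the blood-flow-index surrogate are placeholders, not the paper's definitions.

```python
import numpy as np
from scipy.optimize import curve_fit

def gamma_variate(t, K, t0, alpha, beta):
    """Standard gamma-variate time-attenuation model."""
    dt = np.clip(t - t0, 0, None)
    return K * dt**alpha * np.exp(-dt / beta)

# Hypothetical time-attenuation curve for one pixel (20 timepoints).
t = np.linspace(0, 19, 20)                        # seconds
true_params = (50.0, 2.0, 2.5, 1.8)               # K, t0, alpha, beta
signal = gamma_variate(t, *true_params)
signal += np.random.default_rng(0).normal(0, 2, t.shape)  # noise (HU)

# Pixelwise fit with a reasonable initial guess.
popt, _ = curve_fit(gamma_variate, t, signal,
                    p0=(30.0, 1.0, 2.0, 2.0), maxfev=10000)
K, t0, alpha, beta = popt

# A simple enhancement-peak surrogate: the gamma variate peaks at t0 + alpha*beta
# (the paper's exact blood flow index definition may differ).
t_peak = t0 + alpha * beta
peak = gamma_variate(t_peak, *popt)
print(f"fitted peak = {peak:.1f} HU at t = {t_peak:.1f} s")
```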

Cardiovascular disease classification using radiomics and geometric features from cardiac CT

Ajay Mittal, Raghav Mehta, Omar Todd, Philipp Seeböck, Georg Langs, Ben Glocker

arXiv preprint · Jun 27, 2025
Automatic detection and classification of cardiovascular disease (CVD) from computed tomography (CT) images play an important part in facilitating better-informed clinical decisions. However, most of the recent deep learning-based methods either work directly on raw CT data or use it paired with anatomical cardiac structure segmentation by training an end-to-end classifier. As such, these approaches become much more difficult to interpret from a clinical perspective. To address this challenge, in this work, we break down the CVD classification pipeline into three components: (i) image segmentation, (ii) image registration, and (iii) downstream CVD classification. Specifically, we utilize the Atlas-ISTN framework and recent segmentation foundation models to generate anatomical structure segmentation and a normative healthy atlas. These are further utilized to extract clinically interpretable radiomic features as well as deformation field-based geometric features (through atlas registration) for CVD classification. Our experiments on the publicly available ASOCA dataset show that utilizing these features leads to better CVD classification accuracy (87.50%) when compared against a classification model trained directly on raw CT images (67.50%). Our code is publicly available: https://github.com/biomedia-mira/grc-net
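
The interpretable pipeline classifies CVD from pre-extracted radiomic and deformation-based geometric features rather than from raw CT. A hedged scikit-learn sketch of that downstream step is below; the feature counts, random-forest classifier, and synthetic labels are illustrative assumptions, not the paper's setup.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(7)

# Hypothetical pre-extracted features for 40 ASOCA-like cases:
# radiomics from segmented cardiac structures plus geometric features
# derived from the atlas-registration deformation field.
radiomic = rng.normal(size=(40, 100))
geometric = rng.normal(size=(40, 30))
X = np.hstack([radiomic, geometric])
y = rng.integers(0, 2, size=40)        # 0 = normal, 1 = CVD (placeholder labels)

clf = RandomForestClassifier(n_estimators=200, random_state=0)
scores = cross_val_score(clf, X, y, cv=5, scoring="accuracy")
print(f"cross-validated accuracy: {scores.mean():.2f} ± {scores.std():.2f}")
```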
