Automatic recognition and differentiation of pulmonary contusion and bacterial pneumonia based on deep learning and radiomics.

Deng T, Feng J, Le X, Xia Y, Shi F, Yu F, Zhan Y, Liu X, Li C

PubMed · Jul 1, 2025
In clinical practice, pulmonary contusion (PC) is difficult to distinguish from bacterial pneumonia (BP) on CT images by visual inspection alone when the trauma history is unknown. Artificial intelligence is widely used in medical imaging, but its diagnostic performance for pulmonary contusion is unclear. In this study, artificial intelligence was used for the first time to differentiate pulmonary contusion from bacterial pneumonia, and its diagnostic performance was compared with that of radiologists. In this retrospective study, 2179 patients treated between April 2016 and July 2022 at two hospitals were collected and divided into a training set, an internal validation set, and an external validation set. PC and BP were automatically recognized and segmented using VB-Net, and radiomics features were automatically extracted. Four machine learning algorithms, including decision trees, logistic regression, random forests, and support vector machines (SVM), were used to build the models. The DeLong test was used to compare performance among the models. The best-performing model and four radiologists then diagnosed the external validation set to compare the diagnostic efficacy of human readers and artificial intelligence. VB-Net automatically detected and segmented PC and BP. Among the four machine learning models, the DeLong test showed that the SVM model performed best, with AUC, accuracy, sensitivity, and specificity of 0.998 (95% CI: 0.995-1), 0.980, 0.979, and 0.982 in the training set; 0.891 (95% CI: 0.854-0.928), 0.979, 0.750, and 0.860 in the internal validation set; and 0.885 (95% CI: 0.850-0.920), 0.903, 0.976, and 0.794 in the external validation set. The diagnostic ability of the SVM model was superior to that of the radiologists (P < 0.05). Our VB-Net automatically recognizes and segments PC and BP on chest CT images, and the SVM model based on radiomics features can quickly and accurately differentiate between them with higher accuracy than experienced radiologists.
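
For readers who want the shape of such a pipeline, here is a minimal sketch of an RBF-kernel SVM trained on a radiomics feature table. The CSV path and column names are hypothetical, the DeLong model comparison is omitted, and this is a generic scikit-learn workflow rather than the study's actual code.

```python
# Minimal sketch: SVM on a radiomics feature table (hypothetical columns).
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.metrics import roc_auc_score

df = pd.read_csv("radiomics_features.csv")       # hypothetical extracted features
X = df.drop(columns=["label"]).values            # label: 0 = contusion, 1 = pneumonia
y = df["label"].values

X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=42)

# Radiomics features vary widely in scale, so standardize before the SVM.
model = make_pipeline(StandardScaler(), SVC(kernel="rbf", probability=True))
model.fit(X_tr, y_tr)

auc = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])
print(f"Validation AUC: {auc:.3f}")
```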

Deep learning-based time-of-flight (ToF) enhancement of non-ToF PET scans for different radiotracers.

Mehranian A, Wollenweber SD, Bradley KM, Fielding PA, Huellner M, Iagaru A, Dedja M, Colwell T, Kotasidis F, Johnsen R, Jansen FP, McGowan DR

PubMed · Jul 1, 2025
To evaluate a deep learning-based time-of-flight (DLToF) model trained to enhance the image quality of non-ToF PET images, reconstructed using the BSREM algorithm, towards ToF image quality for different tracers. A 3D residual U-Net model was trained using 8 different tracers (FDG: 75%; non-FDG: 25%) from 11 sites in the US, Europe, and Asia. A total of 309 training and 33 validation datasets scanned on GE Discovery MI (DMI) ToF scanners were used to develop DLToF models of three strengths: low (L), medium (M), and high (H). The training and validation pairs consisted of target ToF and input non-ToF BSREM reconstructions using site-preferred regularisation parameters (beta values). The contrast and noise properties of each model were defined by adjusting the beta value of the target ToF images. A total of 60 DMI datasets, consisting of 4 tracers (¹⁸F-FDG, ¹⁸F-PSMA, ⁶⁸Ga-PSMA, ⁶⁸Ga-DOTATATE) with 15 exams each, were collected for testing and quantitative analysis of the models based on standardized uptake values (SUV) in regions of interest (ROI) placed in lesions, lungs, and liver. Each dataset includes 5 image series: ToF and non-ToF BSREM and three DLToF images. The image series (300 in total) were blind-scored on a 5-point Likert scale by 4 readers based on lesion detectability, diagnostic confidence, and image noise/quality. In lesion SUVmax quantification with respect to ToF BSREM, DLToF-H achieved the best results among the three models, reducing the non-ToF BSREM errors from -39% to -6% for ¹⁸F-FDG (38 lesions); from -42% to -7% for ¹⁸F-PSMA (35 lesions); from -34% to -4% for ⁶⁸Ga-PSMA (23 lesions); and from -34% to -12% for ⁶⁸Ga-DOTATATE (32 lesions). Quantification results in liver and lung also showed ToF-like performance of the DLToF models. Clinical reader results showed that DLToF-H improved lesion detectability on average for all four radiotracers, whereas DLToF-L achieved the highest scores for image quality (noise level). DLToF-M offered the best trade-off between lesion detection and noise level, and hence achieved the highest score for diagnostic confidence on average across radiotracers. This study demonstrated that the DLToF models are suitable for both FDG and non-FDG tracers and could be utilized on digital BGO PET/CT scanners to provide image quality and lesion detectability close to that of ToF.
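
As a rough illustration of the building block named in the abstract, the sketch below shows a 3D residual convolution block of the kind used in residual U-Nets. The channel sizes and layer layout are assumptions for illustration, not the paper's actual architecture.

```python
# Minimal sketch of a 3D residual conv block (assumed layout, PyTorch).
import torch
import torch.nn as nn

class ResBlock3D(nn.Module):
    def __init__(self, ch: int):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv3d(ch, ch, kernel_size=3, padding=1),
            nn.BatchNorm3d(ch),
            nn.ReLU(inplace=True),
            nn.Conv3d(ch, ch, kernel_size=3, padding=1),
            nn.BatchNorm3d(ch),
        )
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        # The residual connection lets the network learn a ToF-like
        # "correction" on top of the non-ToF input rather than the full image.
        return self.act(self.body(x) + x)

x = torch.randn(1, 16, 32, 32, 32)   # (batch, channels, D, H, W)
print(ResBlock3D(16)(x).shape)       # torch.Size([1, 16, 32, 32, 32])
```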

The impact of updated imaging software on the performance of machine learning models for breast cancer diagnosis: a multi-center, retrospective study.

Cai L, Golatta M, Sidey-Gibbons C, Barr RG, Pfob A

PubMed · Jul 1, 2025
Artificial intelligence models based on medical (imaging) data are increasingly being developed. However, the imaging software with which the original data are generated is frequently updated, and the impact of updated imaging software on the performance of AI models is unclear. We aimed to develop machine learning models using shear wave elastography (SWE) data to identify malignant breast lesions and to test the models' generalizability by validating them on external data generated by both the original and updated software versions. We developed and validated different machine learning models (GLM, MARS, XGBoost, SVM) using multicenter, international SWE data (NCT02638935) with tenfold cross-validation. Findings were compared to the histopathologic evaluation of the biopsy specimen or 2-year follow-up. The outcome measure was the area under the receiver operating characteristic curve (AUROC). We included 1288 cases in the development set, generated using the original imaging software, and 385 cases in the validation set, generated using both the original and updated software. In the external validation set, the GLM and XGBoost models showed better performance with the updated software data than with the original software data (AUROC 0.941 vs. 0.902, p < 0.001, and 0.934 vs. 0.872, p < 0.001). The MARS model showed worse performance with the updated software data (0.847 vs. 0.894, p = 0.045). The SVM was not calibrated. In this multicenter study using SWE data, some machine learning models demonstrated great potential to bridge the gap between original and updated software, whereas others exhibited weak generalizability.
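
A hedged sketch of the evaluation loop described here: tenfold cross-validated AUROC for two of the named model families, with scikit-learn's GradientBoostingClassifier standing in for XGBoost and logistic regression for the GLM. The data are synthetic stand-ins; the SWE feature set is not reproduced here.

```python
# Sketch: tenfold cross-validated AUROC over two model families.
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score, StratifiedKFold
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import GradientBoostingClassifier

X, y = make_classification(n_samples=500, n_features=12, random_state=0)  # stand-in for SWE features
cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)

for name, est in [("GLM", LogisticRegression(max_iter=1000)),
                  ("Boosting", GradientBoostingClassifier())]:
    aucs = cross_val_score(est, X, y, cv=cv, scoring="roc_auc")
    print(f"{name}: AUROC {aucs.mean():.3f} ± {aucs.std():.3f}")
```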

Deformation registration based on reconstruction of brain MRI images with pathologies.

Lian L, Chang Q

PubMed · Jul 1, 2025
Deformable registration between brain tumor images and a brain atlas is an important tool for pathological analysis. However, registration of images with tumors is challenging due to absent correspondences induced by the tumor. Furthermore, tumor growth may displace surrounding tissue, causing larger deformations than those observed in healthy brains. We therefore propose a new reconstruction-driven cascade feature warping (RCFW) network for brain tumor images. We first introduce a symmetric-constrained feature reasoning (SFR) module that reconstructs the missing normal appearance within tumor regions, allowing a dense spatial correspondence between the reconstructed quasi-normal appearance and the atlas. A dilated multi-receptive feature fusion module is further introduced, which collects long-range features across dimensions to facilitate tumor-region reconstruction, especially for large tumors. The reconstructed tumor images and the atlas are then jointly fed into the multi-stage feature warping module (MFW) to progressively predict spatial transformations. The method was evaluated on the Multimodal Brain Tumor Segmentation (BraTS) 2021 challenge database and compared with six existing methods. Experimental results showed that the proposed method effectively handles brain tumor image registration, maintaining smooth deformation of the tumor region while maximizing image similarity in normal regions.
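
To make the warping step concrete, here is a sketch of the basic operation underlying deformable registration: applying a dense displacement field to a moving image via grid sampling. This illustrates the mechanism only, not the paper's RCFW network; the 2D setting and pixel-unit displacement convention are simplifying assumptions.

```python
# Sketch: warping a moving image with a dense displacement field (PyTorch).
import torch
import torch.nn.functional as F

def warp(moving: torch.Tensor, disp: torch.Tensor) -> torch.Tensor:
    """moving: (B,1,H,W); disp: (B,2,H,W) displacement in pixels (x, y)."""
    B, _, H, W = moving.shape
    # Identity sampling grid in normalized [-1, 1] coordinates.
    ys, xs = torch.meshgrid(torch.linspace(-1, 1, H),
                            torch.linspace(-1, 1, W), indexing="ij")
    grid = torch.stack((xs, ys), dim=-1).unsqueeze(0).expand(B, H, W, 2)
    # Convert pixel displacements to normalized offsets and add to the grid.
    offset = torch.stack((disp[:, 0] * 2 / (W - 1),
                          disp[:, 1] * 2 / (H - 1)), dim=-1)
    return F.grid_sample(moving, grid + offset, align_corners=True)

moving = torch.randn(1, 1, 64, 64)
disp = torch.zeros(1, 2, 64, 64)   # zero field -> identity warp
print(torch.allclose(warp(moving, disp), moving, atol=1e-5))  # True
```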

A Workflow-Efficient Approach to Pre- and Post-Operative Assessment of Weight-Bearing Three-Dimensional Knee Kinematics.

Banks SA, Yildirim G, Jachode G, Cox J, Anderson O, Jensen A, Cole JD, Kessler O

PubMed · Jul 1, 2025
Knee kinematics during daily activities reflect disease severity preoperatively and are associated with clinical outcomes after total knee arthroplasty (TKA). It is widely believed that measured kinematics would be useful for preoperative planning and postoperative assessment. Despite decades-long interest in measuring three-dimensional (3D) knee kinematics, no methods are available for routine, practical clinical examinations. We report a clinically practical method utilizing machine-learning-enhanced software and upgraded C-arm fluoroscopy for the accurate and time-efficient measurement of pre-TKA and post-TKA 3D dynamic knee kinematics. Using a common C-arm with an upgraded detector and software, we performed an 8-s horizontal sweeping pulsed fluoroscopic scan of the weight-bearing knee joint. The patient's knee was then imaged using pulsed C-arm fluoroscopy while the patient performed standing, kneeling, squatting, stair, chair, and gait activities. We used limited-arc cone-beam reconstruction methods to create 3D models of the femur and tibia/fibula bones, including implants, which can then be used for model-image registration to quantify the 3D knee kinematics. The proposed protocol can be accomplished by a single radiology technician in ten minutes and requires no equipment beyond a step and a stool. Image analysis can be performed on a computer onboard the upgraded C-arm or in the cloud before the examination results are loaded into the Picture Archiving and Communication System and Electronic Medical Record systems. Weight-bearing kinematics affect knee function both pre- and post-TKA, yet making such measurements has long been exclusively the domain of researchers. We present an approach that leverages common, digitally upgraded imaging hardware and software to implement an efficient examination protocol for accurately assessing 3D knee kinematics. With these capabilities, it will be possible to include dynamic 3D knee kinematics as a component of the routine clinical workup for patients who have diseased or replaced knees.
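
The output of such model-image registration is a relative femur-tibia pose per fluoroscopy frame. Below is a hedged sketch of how a registered rotation could be decomposed into clinically reported knee angles with SciPy; the Euler-sequence convention shown is an illustrative assumption, not the authors' protocol.

```python
# Sketch: decomposing a femur-to-tibia rotation into clinical knee angles.
from scipy.spatial.transform import Rotation as R

# Hypothetical relative pose of femur w.r.t. tibia from one fluoroscopy frame.
rel = R.from_euler("zxy", [30.0, 2.0, 5.0], degrees=True)

# Assumed mapping of the "zxy" sequence to flexion / adduction / rotation.
flexion, adduction, internal_rot = rel.as_euler("zxy", degrees=True)
print(f"flexion {flexion:.1f}, adduction {adduction:.1f}, rotation {internal_rot:.1f} (deg)")
```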

A novel deep learning framework for retinal disease detection leveraging contextual and local features cues from retinal images.

Khan SD, Basalamah S, Lbath A

PubMed · Jul 1, 2025
Retinal diseases are a serious global threat to human vision, and early identification is essential for effective prevention and treatment. However, current diagnostic methods rely on manual analysis of fundus images, which depends heavily on the expertise of ophthalmologists. This manual process is time-consuming and labor-intensive and can sometimes lead to missed diagnoses. With advancements in computer vision technology, several automated models have been proposed to improve diagnostic accuracy for retinal diseases and medical imaging in general. However, these methods face challenges in accurately detecting specific diseases within images due to issues inherent to fundus images, including inter-class similarities, intra-class variations, limited local information, insufficient contextual understanding, and class imbalances within datasets. To address these challenges, we propose a novel deep learning framework for accurate retinal disease classification, designed to achieve high accuracy across retinal diseases while overcoming these inherent challenges. The framework consists of three main modules. The first is a Densely Connected Multidilated Convolutional Neural Network (DCM-CNN) that extracts global contextual information by integrating novel Causal Dilated Dense Convolutional Blocks (CDDCBs). The second, a Local-Patch-based Convolutional Neural Network (LP-CNN), utilizes the Class Activation Map (CAM) obtained from DCM-CNN to extract local, fine-grained information. Third, to identify the correct class and minimize error, a synergic network takes the feature maps of both DCM-CNN and LP-CNN and connects them in a fully connected fashion. The framework is evaluated through a comprehensive set of experiments, both quantitative and qualitative, on two publicly available benchmark datasets: RFMiD and ODIR-5K. Our experimental results demonstrate the effectiveness of the proposed framework, which achieves higher performance on the RFMiD and ODIR-5K datasets than reference methods.
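
A minimal sketch of CAM extraction, the mechanism by which the global network reportedly hands regions of interest to the local-patch network. A stock ResNet-18 stands in for DCM-CNN, and the input is a random tensor rather than a fundus image; both are assumptions for illustration.

```python
# Sketch: Class Activation Map from the last conv stage of a CNN (PyTorch).
import torch
import torch.nn.functional as F
from torchvision.models import resnet18

model = resnet18(weights=None).eval()   # stand-in backbone, not DCM-CNN
feats = {}
model.layer4.register_forward_hook(lambda m, i, o: feats.update(out=o))

x = torch.randn(1, 3, 224, 224)          # stand-in fundus image
logits = model(x)
cls = logits.argmax(dim=1).item()

# CAM = class weights from the final FC layer applied to the last conv maps.
w = model.fc.weight[cls]                  # (512,)
cam = torch.einsum("c,bchw->bhw", w, feats["out"])
cam = F.interpolate(cam.unsqueeze(1), size=(224, 224), mode="bilinear")
print(cam.shape)                          # torch.Size([1, 1, 224, 224])
```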

Intraindividual Comparison of Image Quality Between Low-Dose and Ultra-Low-Dose Abdominal CT With Deep Learning Reconstruction and Standard-Dose Abdominal CT Using Dual-Split Scan.

Lee TY, Yoon JH, Park JY, Park SH, Kim H, Lee CM, Choi Y, Lee JM

PubMed · Jul 1, 2025
The aim of this study was to intraindividually compare the conspicuity of focal liver lesions (FLLs) between low- and ultra-low-dose computed tomography (CT) with deep learning reconstruction (DLR) and standard-dose CT with model-based iterative reconstruction (MBIR) from a single CT using a dual-split scan in patients with suspected liver metastasis, via a noninferiority design. This prospective study enrolled participants who met the eligibility criteria at 2 tertiary hospitals in South Korea from June 2022 to January 2023. The criteria included (a) being aged between 20 and 85 years and (b) having suspected or known liver metastases. Dual-source CT scans were conducted, with the standard radiation dose divided in a 2:1 ratio between tubes A and B (67% and 33%, respectively). Voltage settings of 100/120 kVp were selected based on the participant's body mass index (<30 vs ≥30 kg/m²). For image reconstruction, MBIR was utilized for standard-dose (100%) images, whereas DLR was employed for both low-dose (67%) and ultra-low-dose (33%) images. Three radiologists independently evaluated FLL conspicuity, the probability of metastasis, and subjective image quality using a 5-point Likert scale, in addition to quantitative signal-to-noise and contrast-to-noise ratios. The noninferiority margins were set at -0.5 for conspicuity and -0.1 for detection. One hundred thirty-three participants (male = 58, mean body mass index = 23.0 ± 3.4 kg/m²) were included in the analysis. The low- and ultra-low-dose scans had a lower radiation dose than the standard-dose scan (median CT dose index volume: 3.75 and 1.87 vs 5.62 mGy, respectively, in the arterial phase; 3.89 and 1.95 vs 5.84 mGy in the portal venous phase; P < 0.001 for all). Median FLL conspicuity was lower in the low- and ultra-low-dose scans than in the standard-dose scan (3.0 [interquartile range, IQR: 2.0, 4.0] and 3.0 [IQR: 1.0, 4.0] vs 3.0 [IQR: 2.0, 4.0] in the arterial phase; 4.0 [IQR: 1.0, 5.0] and 3.0 [IQR: 1.0, 4.0] vs 4.0 [IQR: 2.0, 5.0] in the portal venous phase), yet within the noninferiority margin (P < 0.001 for all). FLL detection was also lower but remained within the margin (lesion detection rate: 0.772 [95% confidence interval, CI: 0.727, 0.812] and 0.754 [95% CI: 0.708, 0.795], respectively) compared with the standard dose (0.810 [95% CI: 0.770, 0.844]). Sensitivity for liver metastasis differed between the standard dose (80.6% [95% CI: 76.0, 84.5]) and the low and ultra-low doses (75.7% [95% CI: 70.2, 80.5] and 73.7% [95% CI: 68.3, 78.5], respectively; P < 0.001 for both), whereas specificity was similar (P > 0.05). Low- and ultra-low-dose CT with DLR showed noninferior FLL conspicuity and detection compared with standard-dose CT with MBIR. Caution is needed due to a potential decrease in sensitivity for metastasis (clinicaltrials.gov/NCT05324046).
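
To illustrate the noninferiority logic with the -0.1 detection margin, here is a hedged bootstrap sketch: noninferiority is concluded if the lower confidence bound of the rate difference stays above the margin. The per-lesion data are simulated, and the study's actual statistical procedure may differ.

```python
# Sketch: bootstrap noninferiority check for a detection-rate difference.
import numpy as np

rng = np.random.default_rng(0)
std = rng.binomial(1, 0.81, size=500)      # simulated per-lesion detection, standard dose
low = rng.binomial(1, 0.77, size=500)      # simulated per-lesion detection, low dose
margin = -0.10                             # noninferiority margin from the abstract

diffs = []
for _ in range(2000):
    idx = rng.integers(0, 500, size=500)   # paired resampling of lesions
    diffs.append(low[idx].mean() - std[idx].mean())

lo, hi = np.percentile(diffs, [2.5, 97.5])
print(f"95% CI for difference: ({lo:.3f}, {hi:.3f}); noninferior: {lo > margin}")
```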

Identifying Primary Sites of Spinal Metastases: Expert-Derived Features vs. ResNet50 Model Using Nonenhanced MRI.

Liu K, Ning J, Qin S, Xu J, Hao D, Lang N

PubMed · Jul 1, 2025
The spinal column is a frequent site for metastases, affecting over 30% of solid tumor patients. Identifying the primary tumor is essential for guiding clinical decisions but often requires resource-intensive diagnostics. To develop and validate artificial intelligence (AI) models using noncontrast MRI to identify primary sites of spinal metastases, aiming to enhance diagnostic efficiency. Retrospective. A total of 514 patients with pathologically confirmed spinal metastases (mean age, 59.3 ± 11.2 years; 294 males) were included, split into a development set (n = 360) and a test set (n = 154). Noncontrast sagittal MRI sequences (T1-weighted, T2-weighted, and fat-suppressed T2) were acquired using 1.5 T and 3 T scanners. Two models were evaluated for identifying primary sites of spinal metastases: an expert-derived features (EDF) model using radiologist-identified imaging features and a ResNet50-based deep learning (DL) model trained on noncontrast MRI. Performance was assessed using accuracy, precision, recall, F1 score, and the area under the receiver operating characteristic curve (ROC-AUC) for top-1, top-2, and top-3 indicators. Statistical analyses included Shapiro-Wilk tests, t tests, Mann-Whitney U tests, and chi-squared tests. ROC-AUCs were compared via DeLong tests, with 95% confidence intervals from 1000 bootstrap replications and significance at P < 0.05. The EDF model outperformed the DL model in top-3 accuracy (0.88 vs. 0.69) and AUC (0.80 vs. 0.71). Subgroup analysis showed superior EDF performance for common sites such as lung and kidney (e.g., kidney F1: 0.94 vs. 0.76), while the DL model had higher recall for rare sites such as thyroid (0.80 vs. 0.20). SHapley Additive exPlanations (SHAP) analysis identified sex (SHAP: -0.57 to 0.68), age (-0.48 to 0.98), T1WI signal intensity (-0.29 to 0.72), and pathological fractures (-0.76 to 0.25) as key features. AI techniques using noncontrast MRI improve diagnostic efficiency for spinal metastases. The EDF model outperformed the DL model, showing greater clinical potential. Spinal metastases, or cancer spreading to the spine, are common in patients with advanced cancer, often requiring extensive tests to determine the original tumor site. Our study explored whether artificial intelligence could make this process faster and more accurate using noncontrast MRI scans. We tested two methods: one based on radiologists' expertise in identifying imaging features and another using a deep learning model trained to analyze MRI images. The expert-based method was more reliable, correctly identifying the tumor site in 88% of cases when considering the top three likely diagnoses. This approach may help doctors reduce diagnostic time and improve patient care. Evidence Level: 3. Technical Efficacy: Stage 2.
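
The SHAP analysis reported here ranks features of the EDF-style (tabular) model. Below is a hedged sketch of that step on synthetic stand-in features, using a random forest as an assumed classifier; feature names mirror those mentioned in the abstract but the data and model are illustrative only.

```python
# Sketch: SHAP feature ranking for a tabular (EDF-style) classifier.
import numpy as np
import pandas as pd
import shap
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(1)
X = pd.DataFrame({
    "age": rng.normal(59, 11, 300),
    "sex": rng.integers(0, 2, 300),
    "t1_signal": rng.normal(0, 1, 300),
    "path_fracture": rng.integers(0, 2, 300),
})
y = rng.integers(0, 2, 300)               # stand-in (binary) primary-site label

clf = RandomForestClassifier(random_state=0).fit(X, y)
sv = shap.TreeExplainer(clf).shap_values(X)
sv = sv[1] if isinstance(sv, list) else sv[..., 1]   # per-class layout varies by shap version

# Mean |SHAP| per feature gives a global importance ranking.
print(pd.Series(np.abs(sv).mean(axis=0), index=X.columns).sort_values())
```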

Effect of artificial intelligence-aided differentiation of adenomatous and non-adenomatous colorectal polyps at CT colonography on radiologists' therapy management.

Grosu S, Fabritius MP, Winkelmann M, Puhr-Westerheide D, Ingenerf M, Maurus S, Graser A, Schulz C, Knösel T, Cyran CC, Ricke J, Kazmierczak PM, Ingrisch M, Wesp P

PubMed · Jul 1, 2025
Adenomatous colorectal polyps require endoscopic resection, as opposed to non-adenomatous hyperplastic colorectal polyps. This study evaluates the effect of artificial intelligence (AI)-assisted differentiation of adenomatous and non-adenomatous colorectal polyps at CT colonography on radiologists' therapy management. Five board-certified radiologists retrospectively evaluated CT colonography images of colorectal polyps of all sizes and morphologies and decided whether the depicted polyps required endoscopic resection. After a primary unassisted reading based on current guidelines, a second reading was performed with access to the classification of a radiomics-based random-forest AI model labelling each polyp as "non-adenomatous" or "adenomatous". Performance was evaluated using polyp histopathology as the reference standard. 77 polyps in 59 patients, comprising 118 polyp image series (47% supine position, 53% prone position), were evaluated unassisted and AI-assisted by five independent board-certified radiologists, resulting in a total of 1180 readings (subsequent polypectomy: yes or no). AI-assisted readings had higher accuracy (84% ± 1% vs. 76% ± 1%), sensitivity (85% ± 1% vs. 78% ± 6%), and specificity (82% ± 2% vs. 73% ± 8%) in selecting polyps eligible for polypectomy (p < 0.001). Inter-reader agreement also improved in the AI-assisted readings (Fleiss' kappa 0.92 vs. 0.69). AI-based characterisation of colorectal polyps at CT colonography as a second reader might enable a more precise selection of polyps eligible for subsequent endoscopic resection. However, further studies are needed to confirm this finding, and histopathologic polyp evaluation remains mandatory. Question This is the first study evaluating the impact of AI-based polyp classification in CT colonography on radiologists' therapy management. Findings Compared with unassisted reading, AI-assisted reading had higher accuracy, sensitivity, and specificity in selecting polyps eligible for polypectomy. Clinical relevance Integrating an AI tool for colorectal polyp classification in CT colonography could further improve radiologists' therapy recommendations.
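
For the inter-reader agreement statistic reported above, here is a minimal sketch of computing Fleiss' kappa with statsmodels. The reader votes are randomly generated stand-ins, not the study's data.

```python
# Sketch: Fleiss' kappa for multi-reader binary decisions (statsmodels).
import numpy as np
from statsmodels.stats.inter_rater import aggregate_raters, fleiss_kappa

# Rows = 118 polyp image series, columns = 5 readers, values in {0, 1}.
rng = np.random.default_rng(0)
ratings = rng.integers(0, 2, size=(118, 5))   # stand-in resect/no-resect votes

table, _ = aggregate_raters(ratings)          # counts per category per subject
print(f"Fleiss' kappa: {fleiss_kappa(table):.2f}")
```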

Machine-learning model based on ultrasomics for non-invasive evaluation of fibrosis in IgA nephropathy.

Huang Q, Huang F, Chen C, Xiao P, Liu J, Gao Y

PubMed · Jul 1, 2025
To develop and validate an ultrasomics-based machine-learning (ML) model for non-invasive assessment of interstitial fibrosis and tubular atrophy (IF/TA) in patients with IgA nephropathy (IgAN). In this multi-center retrospective study, 471 patients with primary IgA nephropathy from four institutions were included (training, n = 275; internal testing, n = 69; external testing, n = 127). Least absolute shrinkage and selection operator (LASSO) logistic regression with tenfold cross-validation was used to identify the most relevant features. The ML models were constructed based on ultrasomics, and SHapley Additive exPlanations (SHAP) was used to explore the interpretability of the models. Logistic regression analysis was employed to combine ultrasomics, clinical data, and ultrasound imaging characteristics into a comprehensive model. Receiver operating characteristic curves, calibration, decision curve, and clinical impact curve analyses were used to evaluate prediction performance. To differentiate between mild and moderate-to-severe IF/TA, three prediction models were developed: the Rad_SVM_Model, Clinic_LR_Model, and Rad_Clinic_Model. The areas under the curve of these three models were 0.861, 0.884, and 0.913 in the training cohort; 0.760, 0.860, and 0.894 in the internal validation cohort; and 0.794, 0.865, and 0.904 in the external validation cohort. SHAP identified the contribution of individual radiomics features, and difference analysis showed significant differences in radiomics features across fibrosis grades. The comprehensive model was superior to the individual indicators and performed well. We developed and validated an ML model combining ultrasomics, clinical data, and clinical ultrasonic characteristics to assess the extent of fibrosis in IgAN. Question Currently, there is a lack of a comprehensive ultrasomics-based machine-learning model for non-invasive assessment of the extent of Immunoglobulin A nephropathy (IgAN) fibrosis. Findings We have developed and validated a robust and interpretable machine-learning model based on ultrasomics for assessing the degree of fibrosis in IgAN. Clinical relevance The ultrasomics-based comprehensive model has potential for non-invasive assessment of fibrosis in IgAN, helping to evaluate disease progression.
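
A hedged sketch of the feature-selection step described here: L1-penalized (LASSO-style) logistic regression with tenfold cross-validation over an ultrasomics feature table. The feature matrix is synthetic; real ultrasomics extraction happens upstream and is not reproduced.

```python
# Sketch: LASSO-style feature selection via L1 logistic regression with 10-fold CV.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegressionCV
from sklearn.preprocessing import StandardScaler

X, y = make_classification(n_samples=275, n_features=100, n_informative=8,
                           random_state=0)            # stand-in ultrasomics table
X = StandardScaler().fit_transform(X)                 # L1 penalty is scale-sensitive

lasso = LogisticRegressionCV(Cs=10, cv=10, penalty="l1", solver="liblinear",
                             scoring="roc_auc", max_iter=5000).fit(X, y)
selected = np.flatnonzero(lasso.coef_.ravel())        # features with nonzero weights
print(f"{selected.size} features retained: {selected[:10]}...")
```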