Utilizing Pseudo Color Image to Improve the Performance of Deep Transfer Learning-Based Computer-Aided Diagnosis Schemes in Breast Mass Classification.

Jones MA, Zhang K, Faiz R, Islam W, Jo J, Zheng B, Qiu Y

PubMed · Jun 1, 2025
The purpose of this study is to investigate the impact of using morphological information in classifying suspicious breast lesions. The widespread use of deep transfer learning can significantly improve the performance of mammogram-based CADx schemes. However, digital mammograms are grayscale images, whereas deep learning models are typically optimized on natural images containing three channels; the grayscale mammograms must therefore be converted into three-channel images before being fed to deep transfer models. This study aims to develop a novel pseudo color image generation method that utilizes mass contour information to enhance classification performance. A total of 830 cases were retrospectively collected, comprising 310 benign and 520 malignant cases. For each case, four regions of interest (ROIs) were extracted from the grayscale images captured for the CC and MLO views of both breasts. Seven pseudo color image sets were then generated as input to the deep learning models, created by combining the original grayscale image, a histogram-equalized image, a bilaterally filtered image, and a segmented mass. The output features from four identical pre-trained deep learning models were concatenated and processed by a support vector machine-based classifier to generate the final benign/malignant labels. The performance of each image set was evaluated and compared. The results demonstrate that the pseudo color sets containing the manually segmented mass performed significantly better than all other pseudo color sets, achieving an AUC (area under the ROC curve) of up to 0.889 ± 0.012 and an overall accuracy of up to 0.816 ± 0.020. At the same time, the performance improvement depends on the accuracy of the mass segmentation. These results support our hypothesis that adding accurately segmented mass contours provides complementary information, thereby enhancing the performance of the deep transfer model in classifying suspicious breast lesions.
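
As an illustration of the channel-stacking idea described in this abstract, here is a minimal sketch assuming OpenCV; the function name, filter parameters, and the masking variant are illustrative assumptions, not the authors' code.

```python
# Illustrative sketch: stack the original grayscale ROI, a histogram-equalized
# version, and a bilaterally filtered version into the three channels an
# RGB-pretrained backbone expects. Parameter values are placeholders.
import cv2
import numpy as np

def make_pseudo_color(roi_gray, mask=None):
    """roi_gray: 2-D uint8 mammogram ROI; mask: optional binary mass segmentation."""
    equalized = cv2.equalizeHist(roi_gray)
    filtered = cv2.bilateralFilter(roi_gray, d=9, sigmaColor=75, sigmaSpace=75)
    channels = [roi_gray, equalized, filtered]
    if mask is not None:
        # One variant in the paper incorporates the segmented mass; here we
        # substitute it for the third channel as an illustrative assumption.
        channels[2] = cv2.bitwise_and(roi_gray, roi_gray, mask=mask.astype(np.uint8))
    return np.stack(channels, axis=-1)  # H x W x 3, ready for a pretrained model
```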

Deep Learning-Based Estimation of Radiographic Position to Automatically Set Up the X-Ray Prime Factors.

Del Cerro CF, Giménez RC, García-Blas J, Sosenko K, Ortega JM, Desco M, Abella M

PubMed · Jun 1, 2025
Radiation dose and image quality in radiology are influenced by the X-ray prime factors: kVp, mAs, and source-detector distance. These parameters are set by the X-ray technician prior to acquisition, based on the radiographic position. An incorrect setting may result in exposure errors, forcing the examination to be repeated and increasing the radiation dose delivered to the patient. This work presents a novel deep learning-based approach that automatically estimates the radiographic position from a photograph captured prior to X-ray exposure, which can then be used to select the optimal prime factors. We created a database covering 66 radiographic positions commonly used in clinical settings, prospectively acquired during 2022 from 75 volunteers in two different X-ray facilities. The architecture for radiographic position classification was a lightweight version of ConvNeXt trained with fine-tuning, discriminative learning rates, and a one-cycle policy scheduler. The resulting model achieved an accuracy of 93.17% for radiographic position classification, rising to 95.58% when considering correct selection of the prime factors, since half of the errors involved positions with the same kVp and mAs values. Most errors occurred for radiographic positions with a similar patient pose in the photograph. The results suggest that the method is feasible for streamlining the acquisition workflow, reducing the occurrence of exposure errors and preventing unnecessary radiation dose to patients.
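
A hedged sketch of the training recipe named in this abstract (fine-tuning, discriminative learning rates, one-cycle schedule), using torchvision's ConvNeXt-Tiny as one plausible "lightweight ConvNeXt"; all hyperparameter values are illustrative assumptions, not the authors' settings.

```python
import torch
from torch import nn
from torchvision import models

NUM_POSITIONS = 66  # radiographic positions in the study's database

model = models.convnext_tiny(weights=models.ConvNeXt_Tiny_Weights.DEFAULT)
model.classifier[-1] = nn.Linear(model.classifier[-1].in_features, NUM_POSITIONS)

# Discriminative learning rates: smaller LR for pretrained features,
# larger LR for the freshly initialized classification head.
optimizer = torch.optim.AdamW([
    {"params": model.features.parameters(), "lr": 1e-5},
    {"params": model.classifier.parameters(), "lr": 1e-3},
])
steps_per_epoch, epochs = 100, 20  # placeholder schedule
scheduler = torch.optim.lr_scheduler.OneCycleLR(
    optimizer, max_lr=[1e-5, 1e-3], steps_per_epoch=steps_per_epoch, epochs=epochs
)
criterion = nn.CrossEntropyLoss()
```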

SDS-Net: A Synchronized Dual-Stage Network for Predicting Patients Within 4.5-h Thrombolytic Treatment Window Using MRI.

Zhang X, Luan Y, Cui Y, Zhang Y, Lu C, Zhou Y, Zhang Y, Li H, Ju S, Tang T

PubMed · Jun 1, 2025
Timely and precise identification of acute ischemic stroke (AIS) within 4.5 h is imperative for effective treatment decision-making. This study aims to construct a novel network that utilizes limited datasets to recognize AIS patients within this critical window. We conducted a retrospective analysis of 265 AIS patients who underwent both fluid-attenuated inversion recovery (FLAIR) and diffusion-weighted imaging (DWI) within 24 h of symptom onset. Patients were categorized by time since stroke onset (TSS) into two groups: TSS ≤ 4.5 h and TSS > 4.5 h, with TSS calculated as the time from stroke onset to MRI completion. We propose a synchronized dual-stage network (SDS-Net) and a sequential dual-stage network (Dual-stage Net), each comprising an infarct voxel identification stage and a TSS classification stage. The models were trained and validated on 181 patients and tested on an independent external cohort of 84 patients, using area under the curve (AUC), sensitivity, specificity, and accuracy as metrics. A DeLong test was used to statistically compare the performance of the two models. SDS-Net achieved an accuracy of 0.844 and an AUC of 0.914 on the validation dataset, outperforming the Dual-stage Net (accuracy 0.822, AUC 0.846). On the external test dataset, SDS-Net again demonstrated superior performance, with an accuracy of 0.800 and an AUC of 0.879, compared to an accuracy of 0.694 and an AUC of 0.744 for the Dual-stage Net (P = 0.049). SDS-Net is a robust and reliable tool for identifying AIS patients within the 4.5-h treatment window using MRI. This model can assist clinicians in making timely treatment decisions, potentially improving patient outcomes.
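
A schematic sketch of the two-stage idea (infarct voxel identification feeding a TSS classifier). The real SDS-Net architecture is not reproduced in this abstract, so every layer, shape, and the gating mechanism below are illustrative assumptions only.

```python
import torch
from torch import nn

class DualStageSketch(nn.Module):
    def __init__(self):
        super().__init__()
        # Stage 1: voxel-wise infarct probability from stacked FLAIR + DWI.
        self.voxel_head = nn.Sequential(
            nn.Conv3d(2, 16, 3, padding=1), nn.ReLU(),
            nn.Conv3d(16, 1, 1), nn.Sigmoid(),
        )
        # Stage 2: classify TSS <= 4.5 h vs > 4.5 h from the gated volume.
        self.classifier = nn.Sequential(
            nn.Conv3d(2, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool3d(1), nn.Flatten(), nn.Linear(16, 2),
        )

    def forward(self, flair_dwi):            # (B, 2, D, H, W)
        voxel_probs = self.voxel_head(flair_dwi)
        gated = flair_dwi * voxel_probs      # focus the classifier on infarct voxels
        return voxel_probs, self.classifier(gated)
```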

Integrating VAI-Assisted Quantified CXRs and Multimodal Data to Assess the Risk of Mortality.

Chen YC, Fang WH, Lin CS, Tsai DJ, Hsiang CW, Chang CK, Ko KH, Huang GS, Lee YT, Lin C

PubMed · Jun 1, 2025
To address the unmet need for a widely available examination for mortality prediction, this study developed a foundation visual artificial intelligence (VAI) model to enhance mortality risk stratification using chest X-rays (CXRs). The VAI employs deep learning to extract CXR features and a Cox proportional hazards model to generate a hazard score ("CXR-risk"). We retrospectively collected CXRs from patients who visited the outpatient department and the physical examination center, then reviewed mortality and morbidity outcomes from electronic medical records. The dataset consisted of 41,945, 10,492, 31,707, and 4,441 patients in the training, validation, internal test, and external test sets, respectively. Over a median follow-up of 3.2 (IQR, 1.2-6.1) years in both the internal and external test sets, the CXR-risk score demonstrated C-indexes of 0.859 (95% confidence interval (CI), 0.851-0.867) and 0.870 (95% CI, 0.844-0.896), respectively. Patients with a high CXR-risk score (above the 85th percentile) had a significantly higher risk of mortality than those with a low score (below the 50th percentile). Adding clinical and laboratory data and radiographic reports further improved predictive accuracy, yielding C-indexes of 0.888 and 0.900. The VAI can provide accurate predictions of mortality and morbidity outcomes from a single CXR and can complement other risk prediction indicators to help physicians assess patient risk more effectively.
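
A minimal sketch of turning extracted CXR features into a hazard score with a Cox proportional hazards model, using the lifelines library; the column names, toy data, and penalizer value are assumptions for illustration, not the study's pipeline.

```python
import pandas as pd
from lifelines import CoxPHFitter

# One row per patient: deep-feature columns plus follow-up time and event flag.
df = pd.DataFrame({
    "feat_0": [0.12, 0.85, 0.44, 0.91, 0.30, 0.77],
    "feat_1": [1.3, 0.2, 0.7, 1.1, 0.9, 0.4],
    "years_followed": [3.2, 1.5, 6.1, 2.4, 4.0, 0.8],
    "died": [0, 1, 0, 1, 0, 1],
})
cph = CoxPHFitter(penalizer=0.1)  # small penalty for stability on toy data
cph.fit(df, duration_col="years_followed", event_col="died")

# The fitted linear predictor plays the role of a per-patient hazard score.
df["cxr_risk"] = cph.predict_partial_hazard(df)
print(cph.concordance_index_)  # C-index on the fitted data
```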

A Machine Learning Model Based on Global Mammographic Radiomic Features Can Predict Which Normal Mammographic Cases Radiology Trainees Find Most Difficult.

Siviengphanom S, Brennan PC, Lewis SJ, Trieu PD, Gandomkar Z

PubMed · Jun 1, 2025
This study investigates whether global mammographic radiomic features (GMRFs) can distinguish the hardest- from the easiest-to-interpret normal cases for radiology trainees (RTs). Data from 137 RTs were analysed, with each trainee interpreting seven educational self-assessment test sets of 60 cases (40 normal and 20 cancer). Only the normal cases were examined. Difficulty scores were computed as the percentage of readers who incorrectly classified each case; cases were labelled hardest-to-interpret if their difficulty score fell at or above the 75th percentile and easiest-to-interpret if it fell at or below the 25th percentile, resulting in 140 cases in total (59 low-density and 81 high-density). Thirty-four GMRFs were extracted for each case. A random forest machine learning model was trained to differentiate between hardest- and easiest-to-interpret normal cases and validated using a leave-one-out cross-validation approach, with performance evaluated by the area under the receiver operating characteristic curve (AUC). Significant features were identified through feature importance analysis. Differences between hardest- and easiest-to-interpret cases across the 34 GMRFs, and in difficulty level between low- and high-density cases, were tested using the Kruskal-Wallis test. The model achieved an AUC of 0.75, with cluster prominence and range emerging as the most useful features. Fifteen GMRFs differed significantly (p < 0.05) between hardest- and easiest-to-interpret cases. Difficulty level did not differ significantly between low- and high-density cases (p = 0.12). GMRFs can predict the hardest-to-interpret normal cases for RTs, underscoring their value in identifying the most difficult normal cases and in facilitating customised training programmes tailored to trainees' learning needs.
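
A sketch of the evaluation setup this abstract describes: a random forest on 34 GMRFs, validated with leave-one-out cross-validation and inspected via feature importances. The feature matrix and labels below are synthetic placeholders.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import LeaveOneOut, cross_val_predict
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
X = rng.normal(size=(140, 34))       # placeholder for the 34 GMRFs per case
y = rng.integers(0, 2, size=140)     # placeholder hardest (1) / easiest (0) labels

clf = RandomForestClassifier(n_estimators=500, random_state=0)
probs = cross_val_predict(clf, X, y, cv=LeaveOneOut(), method="predict_proba")[:, 1]
print("LOOCV AUC:", roc_auc_score(y, probs))

# Feature importance analysis, as used to surface cluster prominence and range.
clf.fit(X, y)
top = np.argsort(clf.feature_importances_)[::-1][:5]
print("Top feature indices:", top)
```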

Prediction of Malignancy and Pathological Types of Solid Lung Nodules on CT Scans Using a Volumetric SWIN Transformer.

Chen H, Wen Y, Wu W, Zhang Y, Pan X, Guan Y, Qin D

PubMed · Jun 1, 2025
Lung adenocarcinoma and squamous cell carcinoma are the two most common pathological lung cancer subtypes, and accurate diagnosis and pathological subtyping are crucial for lung cancer treatment. Solitary solid lung nodules with lobulation and spiculation signs are often indicative of lung cancer; in some cases, however, postoperative pathology reveals benign solid lung nodules. Accurately identifying such nodules before surgery is therefore critical, yet traditional diagnostic imaging is prone to misdiagnosis and studies on artificial intelligence-assisted diagnosis are few. We introduce a volumetric Swin Transformer-based method: a multi-scale, multi-task, and highly interpretable model for distinguishing between benign solid lung nodules with lobulation and spiculation signs, lung adenocarcinomas, and lung squamous cell carcinomas. The method uses 3-dimensional (3D) computed tomography (CT) volumes rather than conventional 2-dimensional (2D) images in order to exploit as much information as possible. The model was trained on 352 of the 441 CT image sequences and validated on the rest. The experimental results show that our model can accurately differentiate between the three classes: on the test set it achieved an accuracy of 0.9888, a precision of 0.9892, a recall of 0.9888, and an F1-score of 0.9888, complemented by class activation mapping (CAM) visualizations of the 3D model. Our method could therefore serve as a preoperative tool to assist in accurately diagnosing solitary solid lung nodules with lobulation and spiculation signs and provide a basis for developing appropriate clinical diagnosis and treatment plans.
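
As one plausible way to instantiate a volumetric Swin Transformer classifier, here is a hedged sketch using torchvision's video Swin3D-T repurposed for the three nodule classes; this is an assumed stand-in backbone and input size, not the authors' exact model.

```python
import torch
from torch import nn
from torchvision.models.video import swin3d_t

model = swin3d_t(weights=None)          # train from scratch, or load pretrained weights
model.head = nn.Linear(model.head.in_features, 3)  # benign / adeno / squamous

# A CT volume cropped around the nodule, replicated to the 3 expected channels.
volume = torch.randn(1, 1, 32, 96, 96)  # (B, C, D, H, W); placeholder crop size
logits = model(volume.repeat(1, 3, 1, 1, 1))
print(logits.shape)                     # torch.Size([1, 3])
```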

Using Machine Learning on MRI Radiomics to Diagnose Parotid Tumours Before Comparing Performance with Radiologists: A Pilot Study.

Ammari S, Quillent A, Elvira V, Bidault F, Garcia GCTE, Hartl DM, Balleyguier C, Lassau N, Chouzenoux É

PubMed · Jun 1, 2025
The parotid glands are the largest of the major salivary glands and can harbour both benign and malignant tumours. Preoperative work-up relies on MR images and fine needle aspiration biopsy, but these diagnostic tools have low sensitivity and specificity, often leading to surgery for diagnostic purposes. The aims of this paper are (1) to develop a machine learning algorithm based on MR image characteristics to automatically classify parotid gland tumours and (2) to compare its results with the diagnoses of junior and senior radiologists in order to evaluate its utility in routine practice. While automatic algorithms for parotid tumour classification have been developed in the past, we believe our study is one of the first to leverage four different MRI sequences and to propose a comparison with clinicians. We use data from a cohort of 134 patients treated for benign or malignant parotid tumours. Using radiomics extracted from MR images of the gland, we train a random forest and a logistic regression to predict the corresponding histopathological subtypes. On the test set, the best results are obtained by the random forest: 0.720 accuracy, 0.860 specificity, and 0.720 sensitivity over all histopathological subtypes, with an average AUC of 0.838. For the discrimination between benign and malignant tumours, the algorithm achieves 0.760 accuracy and 0.769 AUC, both on the test set. Moreover, the clinical experiment shows that our model helps improve the diagnostic abilities of junior radiologists, whose sensitivity and accuracy rose by 6% when using the proposed method. The algorithm may thus be useful for training physicians. Radiomics combined with a machine learning algorithm may help improve discrimination between benign and malignant parotid tumours, decreasing the need for diagnostic surgery. Further studies are warranted to validate the algorithm for routine use.
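
An illustrative head-to-head comparison of the two classifiers this abstract reports, on the benign-vs-malignant task; the radiomic feature matrix, labels, and split are synthetic stand-ins, not the study's data.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score, accuracy_score

rng = np.random.default_rng(1)
X = rng.normal(size=(134, 50))       # placeholder radiomic feature matrix
y = rng.integers(0, 2, size=134)     # 0 = benign, 1 = malignant (placeholder)
X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.3, random_state=1, stratify=y
)

for name, clf in [("random forest", RandomForestClassifier(random_state=1)),
                  ("logistic regression", LogisticRegression(max_iter=1000))]:
    clf.fit(X_tr, y_tr)
    p = clf.predict_proba(X_te)[:, 1]
    print(name, "accuracy:", accuracy_score(y_te, p > 0.5),
          "AUC:", roc_auc_score(y_te, p))
```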

Deep Conformal Supervision: Leveraging Intermediate Features for Robust Uncertainty Quantification.

Vahdani AM, Faghani S

PubMed · Jun 1, 2025
Trustworthiness is crucial for artificial intelligence (AI) models in clinical settings, and a fundamental aspect of trustworthy AI is uncertainty quantification (UQ). Conformal prediction, a robust UQ framework, has been receiving increasing attention as a valuable tool for improving model trustworthiness, and the method of non-conformity score calculation for conformal prediction is an area of active research. We propose deep conformal supervision (DCS), which leverages the intermediate outputs of deep supervision for non-conformity score calculation via weighted averaging, with each stage weighted by the inverse of its mean calibration error. We benchmarked our method on two publicly available medical image classification datasets: a pneumonia chest radiography dataset and a preprocessed version of the 2019 RSNA Intracranial Hemorrhage dataset. Our method achieved mean coverage errors of 16e-4 (CI: 1e-4, 41e-4) and 5e-4 (CI: 1e-4, 10e-4), compared to baseline mean coverage errors of 28e-4 (CI: 2e-4, 64e-4) and 21e-4 (CI: 8e-4, 3e-4) on the two datasets, respectively (p < 0.001 on both datasets). The baseline conformal prediction results already exhibit small coverage errors; however, our method yields a significant further improvement, particularly noticeable with smaller datasets or smaller acceptable error levels, both of which are crucial when developing UQ frameworks for healthcare AI applications.
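
A numerical sketch of the DCS combination step: per-stage non-conformity scores are averaged with weights proportional to the inverse of each stage's mean calibration error, then the standard conformal quantile is taken. All arrays below are synthetic, and the exact score definition in the paper may differ.

```python
import numpy as np

rng = np.random.default_rng(0)
n_cal, n_stages = 500, 4
# Per-stage non-conformity scores on a calibration set (e.g. 1 - p_true).
stage_scores = rng.uniform(size=(n_stages, n_cal))
mean_cal_error = np.array([0.08, 0.05, 0.03, 0.02])  # placeholder per-stage errors

weights = 1.0 / mean_cal_error
weights /= weights.sum()
combined = weights @ stage_scores            # weighted-average non-conformity score

alpha = 0.1                                  # target 90% coverage
k = int(np.ceil((n_cal + 1) * (1 - alpha)))  # standard conformal quantile index
threshold = np.sort(combined)[k - 1]
print("conformal threshold:", threshold)
```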

A Robust [<sup>18</sup>F]-PSMA-1007 Radiomics Ensemble Model for Prostate Cancer Risk Stratification.

Pasini G, Stefano A, Mantarro C, Richiusa S, Comelli A, Russo GI, Sabini MG, Cosentino S, Ippolito M, Russo G

PubMed · Jun 1, 2025
The aim of this study is to investigate the role of [<sup>18</sup>F]-PSMA-1007 PET in differentiating high- and low-risk prostate cancer (PCa) through a robust radiomics ensemble model. This retrospective study included 143 PCa patients who underwent [<sup>18</sup>F]-PSMA-1007 PET/CT imaging. PCa areas were manually contoured on the PET images, and 1781 Image Biomarker Standardization Initiative (IBSI)-compliant radiomics features were extracted. A preliminary analysis pipeline, iterated 30 times and comprising the least absolute shrinkage and selection operator (LASSO) for feature selection and fivefold cross-validation for model optimization, was adopted to identify the features most robust to dataset variations, select candidate models for ensemble modelling, and optimize hyperparameters. Thirteen subsets of selected features were used to train the model ensemble: 11 generated from the preliminary analysis plus two additional subsets, the first combining robust and fine-tuning features and the second containing fine-tuning features only. Accuracy, area under the curve (AUC), sensitivity, specificity, precision, and F-score were calculated to assess model performance. The Friedman test, followed by post hoc tests with Dunn-Sidak correction for multiple comparisons, was used to check for statistically significant differences among the ensemble models over the 30 iterations. The model ensemble trained with the combination of robust and fine-tuning features obtained the highest average accuracy (79.52%), AUC (85.75%), specificity (84.29%), precision (82.85%), and F-score (78.26%). Statistically significant differences (p < 0.05) were found for some performance metrics. These findings support the role of [<sup>18</sup>F]-PSMA-1007 PET radiomics in improving risk stratification for PCa by reducing dependence on biopsies.
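
A sketch of the iterated robustness analysis: LASSO feature selection is repeated over resampled training splits, and features selected in most iterations are kept as "robust". The matrix shapes, split fraction, and selection threshold are illustrative assumptions, not the study's exact protocol.

```python
import numpy as np
from sklearn.linear_model import LassoCV
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = rng.normal(size=(143, 1781))     # placeholder IBSI radiomics matrix
y = rng.integers(0, 2, size=143)     # 0 = low risk, 1 = high risk (placeholder)

selection_counts = np.zeros(X.shape[1])
for it in range(30):                 # 30 iterations, as in the study
    X_tr, _, y_tr, _ = train_test_split(
        X, y, test_size=0.3, random_state=it, stratify=y
    )
    X_tr = StandardScaler().fit_transform(X_tr)
    lasso = LassoCV(cv=5, random_state=it).fit(X_tr, y_tr)  # fivefold CV inside
    selection_counts += lasso.coef_ != 0

robust_features = np.where(selection_counts >= 25)[0]  # kept in >= 25/30 runs
print(len(robust_features), "robust features")
```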

Leveraging Ensemble Models and Follow-up Data for Accurate Prediction of mRS Scores from Radiomic Features of DSC-PWI Images.

Yassin MM, Zaman A, Lu J, Yang H, Cao A, Hassan H, Han T, Miao X, Shi Y, Guo Y, Luo Y, Kang Y

PubMed · Jun 1, 2025
Predicting long-term clinical outcomes from an early DSC-PWI MRI scan is valuable for prognostication, resource management, clinical trials, and setting patient expectations. Current methods require subjective decisions about which imaging features to assess and may involve time-consuming post-processing. The goal of this study was to predict the multilabel 90-day modified Rankin Scale (mRS) score in acute ischemic stroke patients by combining ensemble models with different configurations of radiomic features generated from dynamic susceptibility contrast perfusion-weighted imaging (DSC-PWI). In the follow-up study, 70 acute ischemic stroke (AIS) patients underwent magnetic resonance imaging within 24 hours post-stroke and had a follow-up scan; in the single-scan study, 150 DSC-PWI scans from AIS patients were used. Radiomic features were extracted from the DSC-PWI scans, the LASSO algorithm was applied for feature selection, and new features were generated from the initial and follow-up scans. Different ensemble models were then applied to classify patients into three classes: normal outcome (mRS 0-1), moderate outcome (mRS 2-4), and severe outcome (mRS 5-6). ANOVA and post-hoc Tukey HSD tests confirmed significant differences in model performance across studies and classification techniques. Stacking models consistently outperformed the others on average, achieving an accuracy of 0.68 ± 0.15, a precision of 0.68 ± 0.17, a recall of 0.65 ± 0.14, and an F1-score of 0.63 ± 0.15 in the follow-up time study. Techniques such as Bo_Smote showed significantly higher recall and F1-scores, highlighting their robustness and effectiveness in handling imbalanced data. Ensemble models, particularly bagging and stacking, demonstrated superior performance under follow-up conditions, reaching nearly 0.93 accuracy, 0.95 precision, 0.94 recall, and 0.94 F1, significantly outperforming single models. Ensemble models based on radiomics from combined initial and follow-up scans can thus predict multilabel 90-day stroke outcomes with reduced subjectivity and user burden.
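
A sketch of a stacking ensemble for the three mRS outcome classes described above; the base learners, meta-learner, and synthetic feature matrix are illustrative choices, not the study's exact model set.

```python
import numpy as np
from sklearn.ensemble import (StackingClassifier, RandomForestClassifier,
                              GradientBoostingClassifier)
from sklearn.svm import SVC
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(70, 40))    # placeholder radiomic features (initial + follow-up)
y = rng.integers(0, 3, size=70)  # 0 = normal, 1 = moderate, 2 = severe outcome

stack = StackingClassifier(
    estimators=[
        ("rf", RandomForestClassifier(random_state=0)),
        ("gb", GradientBoostingClassifier(random_state=0)),
        ("svm", SVC(probability=True, random_state=0)),
    ],
    final_estimator=LogisticRegression(max_iter=1000),
    cv=5,  # out-of-fold predictions feed the meta-learner
)
scores = cross_val_score(stack, X, y, cv=5, scoring="f1_macro")
print("macro F1:", scores.mean())
```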