
The impacts of artificial intelligence on the workload of diagnostic radiology services: A rapid review and stakeholder contextualisation

Sutton, C., Prowse, J., Elshehaly, M., Randell, R.

medRxiv preprint, Jul 24 2025
Background: Advancements in imaging technology, alongside increasing longevity and co-morbidities, have led to heightened demand for diagnostic radiology services. However, there is a shortfall in radiology and radiography staff to acquire, read and report on such imaging examinations. Artificial intelligence (AI) has been identified, notably by AI developers, as a potential solution to positively impact the workload of diagnostic radiology services and address this staffing shortfall. Methods: A rapid review, complemented with data from interviews with UK radiology service stakeholders, was undertaken. ArXiv, Cochrane Library, Embase, Medline and Scopus databases were searched for publications in English published between 2007 and 2022. Following screening, 110 full texts were included. Interviews with 15 radiology service managers, clinicians and academics were carried out between May and September 2022. Results: Most literature was published in 2021 and 2022, with a distinct focus on AI for diagnostics of lung and chest disease (n = 25), notably COVID-19 and respiratory system cancers, closely followed by AI for breast screening (n = 23). AI contributions to streamlining the workload of radiology services were categorised as autonomous, augmentative and assistive. However, percentage estimates of workload reduction varied considerably, with the most significant reduction identified in national screening programmes. AI was also recognised as aiding radiology services by providing a second opinion, assisting in prioritisation of images for reading and improving quantification in diagnostics. Stakeholders saw AI as having the potential to remove some of the laborious work and contribute to service resilience. Conclusions: This review has shown there is limited data on real-world experiences of implementing AI in clinical production within radiology services.
Autonomous, augmentative and assistive AI can, as noted in the literature, decrease workload and aid reading and reporting; however, the governance surrounding these advancements lags behind.

Enhancing InceptionResNet to Diagnose COVID-19 from Medical Images.

Aljawarneh S, Ray I

PubMed, Jul 24 2025
This investigation addresses the diagnosis of COVID-19 from X-ray images using an effective deep learning model. Current methods for assessing COVID-19 diagnosis models tend to focus on the accuracy rate alone, while neglecting several significant assessment parameters, including precision, sensitivity, specificity, F1-score, and ROC-AUC, which strongly influence the performance level of the model. In this paper, we improve InceptionResNet with restructured parameters, termed the "Enhanced InceptionResNet," which incorporates depth-wise separable convolutions to enhance the efficiency of feature extraction and minimize the consumption of computational resources. For this investigation, three residual network models, namely ResNet, InceptionResNet, and the Enhanced InceptionResNet with restructured parameters, were employed for a medical image classification task. The performance of each model was evaluated on a balanced dataset of 2600 X-ray images. The models were assessed for accuracy and loss, as well as subjected to a confusion matrix analysis. The Enhanced InceptionResNet consistently outperformed ResNet and InceptionResNet in validation and testing accuracy (99.0% and 98.35%, respectively), recall, precision, F1-score, and ROC-AUC, demonstrating a superior capacity for identifying pertinent information in the data and suggesting enhanced feature extraction capabilities. The Enhanced InceptionResNet excelled in COVID-19 diagnosis from chest X-rays, surpassing ResNet and the default InceptionResNet in accuracy, precision, and sensitivity.
Despite its computational demands, it shows promise for medical image classification. Future work should leverage larger datasets, cloud platforms, and hyperparameter optimisation to improve performance, especially for distinguishing normal and pneumonia cases.
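The efficiency claim above rests on depth-wise separable convolutions. The paper's code is not provided; as an illustration only, here is a minimal PyTorch sketch of such a block (the exact layer layout is an assumption), showing the parameter savings over a standard 3x3 convolution:

```python
import torch
import torch.nn as nn

class DepthwiseSeparableConv(nn.Module):
    """Depthwise 3x3 convolution followed by a pointwise 1x1 convolution."""
    def __init__(self, in_ch, out_ch, stride=1):
        super().__init__()
        # groups=in_ch makes the 3x3 conv operate per channel (depthwise)
        self.depthwise = nn.Conv2d(in_ch, in_ch, 3, stride=stride,
                                   padding=1, groups=in_ch, bias=False)
        # 1x1 pointwise conv mixes information across channels
        self.pointwise = nn.Conv2d(in_ch, out_ch, 1, bias=False)
        self.bn = nn.BatchNorm2d(out_ch)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        return self.act(self.bn(self.pointwise(self.depthwise(x))))

block = DepthwiseSeparableConv(64, 128)
x = torch.randn(2, 64, 32, 32)
y = block(x)  # same spatial size, 128 output channels

# Parameter comparison with a standard 3x3 convolution
std = nn.Conv2d(64, 128, 3, padding=1, bias=False)
n_std = sum(p.numel() for p in std.parameters())    # 64*128*9 = 73728
n_sep = sum(p.numel() for p in block.parameters())  # 576 + 8192 + 256 = 9024
print(y.shape, n_std, n_sep)
```

The roughly 8x reduction in parameters is the usual motivation for this substitution in efficiency-oriented backbones.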

Analyzing pediatric forearm X-rays for fracture analysis using machine learning.

Lam V, Parida A, Dance S, Tabaie S, Cleary K, Anwar SM

PubMed, Jul 24 2025
Forearm fractures constitute a significant proportion of emergency department presentations in the pediatric population. The treatment goal is to restore length and alignment between the distal and proximal bone fragments. While immobilization through splinting or casting is sufficient for non-displaced and minimally displaced fractures, moderately or severely displaced fractures often require reduction for realignment. Delivering appropriate treatment is challenging in current practice because of a lack of resources for specialized pediatric care, leading to delayed and unnecessary transfers between medical centers that can create treatment complications and burdens. The purpose of this study is to build a machine learning model for analyzing forearm fractures to assist clinical centers that lack surgical expertise in pediatric orthopedics. X-ray scans from 1250 children were curated, preprocessed, and manually annotated at our clinical center. Several machine learning models were fine-tuned using a pretraining strategy leveraging a self-supervised learning model with a vision transformer backbone. We further employed strategies to identify the region most relevant to fractures within the forearm X-ray. Model performance was evaluated with and without region of interest (ROI) detection to find an optimal model for forearm fracture analysis. Our proposed strategy leverages self-supervised pretraining (without labels) followed by supervised fine-tuning (with labels). The fine-tuned model using regions cropped with ROI identification achieved the highest classification performance, with a true-positive rate (TPR) of 0.79, true-negative rate (TNR) of 0.74, AUROC of 0.81, and AUPR of 0.86 on the testing data. The results showed the feasibility of using machine learning models to predict the appropriate treatment for forearm fractures in pediatric cases.
With further improvement, the algorithm could potentially be used as a tool to assist non-specialized orthopedic providers in diagnosing and providing treatment.
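The reported TPR and TNR follow directly from the confusion matrix. As a small self-contained sketch (toy labels, not study data) of how these rates are computed:

```python
import numpy as np

def tpr_tnr(y_true, y_pred):
    """True-positive rate (sensitivity) and true-negative rate (specificity)."""
    y_true = np.asarray(y_true)
    y_pred = np.asarray(y_pred)
    tp = np.sum((y_true == 1) & (y_pred == 1))
    fn = np.sum((y_true == 1) & (y_pred == 0))
    tn = np.sum((y_true == 0) & (y_pred == 0))
    fp = np.sum((y_true == 0) & (y_pred == 1))
    return tp / (tp + fn), tn / (tn + fp)

# fracture = 1, no fracture = 0 (illustrative labels only)
y_true = [1, 1, 1, 1, 0, 0, 0, 0]
y_pred = [1, 1, 1, 0, 0, 0, 1, 0]
tpr, tnr = tpr_tnr(y_true, y_pred)
print(tpr, tnr)  # 0.75 0.75
```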

Patient Perspectives on Artificial Intelligence in Health Care: Focus Group Study for Diagnostic Communication and Tool Implementation.

Foresman G, Biro J, Tran A, MacRae K, Kazi S, Schubel L, Visconti A, Gallagher W, Smith KM, Giardina T, Haskell H, Miller K

PubMed, Jul 24 2025
Artificial intelligence (AI) is rapidly transforming health care, offering potential benefits in diagnosis, treatment, and workflow efficiency. However, limited research explores patient perspectives on AI, especially in its role in diagnosis and communication. This study examines patient perceptions of various AI applications, focusing on the diagnostic process and communication. This study aimed to examine patient perspectives on AI use in health care, particularly in diagnostic processes and communication, identifying key concerns, expectations, and opportunities to guide the development and implementation of AI tools. This study used a qualitative focus group methodology with co-design principles to explore patient and family member perspectives on AI in clinical practice. A single 2-hour session was conducted with 17 adult participants. The session included interactive activities and breakout sessions focused on five specific AI scenarios relevant to diagnosis and communication: (1) portal messaging, (2) radiology review, (3) digital scribe, (4) virtual human, and (5) decision support. The session was audio-recorded and transcribed, with facilitator notes and demographic questionnaires collected. Data were analyzed using inductive thematic analysis by 2 independent researchers (GF and JB), with discrepancies resolved via consensus. Participants reported varying comfort levels with AI applications contingent on the level of patient interaction, with digital scribe (average 4.24, range 2-5) and radiology review (average 4.00, range 2-5) being the highest, and virtual human (average 1.68, range 1-4) being the lowest. In total, five cross-cutting themes emerged: (1) validation (concerns about model reliability), (2) usability (impact on diagnostic processes), (3) transparency (expectations for disclosing AI usage), (4) opportunities (potential for AI to improve care), and (5) privacy (concerns about data security). 
Participants valued the co-design session and felt they had a significant say in the discussions. This study highlights the importance of incorporating patient perspectives in the design and implementation of AI tools in health care. Transparency, human oversight, clear communication, and data privacy are crucial for patient trust and acceptance of AI in diagnostic processes. These findings inform strategies for individual clinicians, health care organizations, and policy makers to ensure responsible and patient-centered AI deployment in health care.

Deep Learning to Differentiate Parkinsonian Syndromes Using Multimodal Magnetic Resonance Imaging: A Proof-of-Concept Study.

Mattia GM, Chougar L, Foubert-Samier A, Meissner WG, Fabbri M, Pavy-Le Traon A, Rascol O, Grabli D, Degos B, Pyatigorskaya N, Faucher A, Vidailhet M, Corvol JC, Lehéricy S, Péran P

PubMed, Jul 24 2025
The differentiation between multiple system atrophy (MSA) and Parkinson's disease (PD) based on clinical diagnostic criteria can be challenging, especially at an early stage. Leveraging deep learning methods and magnetic resonance imaging (MRI) data has shown great potential in aiding automatic diagnosis. The aim was to determine the feasibility of a three-dimensional convolutional neural network (3D CNN)-based approach using multimodal, multicentric MRI data for differentiating MSA and its variants from PD. MRI data were retrospectively collected from three MSA French reference centers. We computed quantitative maps of gray matter density (GD) from a T1-weighted sequence and mean diffusivity (MD) from diffusion tensor imaging. These maps were used as input to a 3D CNN, either individually ("monomodal," "GD" or "MD") or in combination ("bimodal," "GD-MD"). Classification tasks included the differentiation of PD and MSA patients. Model interpretability was investigated by analyzing misclassified patients and providing a visual interpretation of the most activated regions in CNN predictions. The study population included 92 patients with MSA (50 with MSA-P, parkinsonian variant; 33 with MSA-C, cerebellar variant; 9 with MSA-PC, mixed variant) and 64 with PD. The best accuracies were obtained for the PD/MSA (0.88 ± 0.03 with GD-MD), PD/MSA-C&PC (0.84 ± 0.08 with MD), and PD/MSA-P (0.78 ± 0.09 with GD) tasks. Patients misclassified by the CNN exhibited fewer and milder image alterations, as found using an image-based z score analysis. Activation maps highlighted regions involved in MSA pathophysiology, namely the putamen and cerebellum. Our findings hold promise for developing an efficient, MRI-based, and user-independent diagnostic tool suitable for differentiating parkinsonian syndromes in clinical practice. © 2025 The Author(s). Movement Disorders published by Wiley Periodicals LLC on behalf of International Parkinson and Movement Disorder Society.
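As a rough illustration of the bimodal input described above (GD and MD maps combined as input to one 3D CNN), here is a toy PyTorch sketch that stacks the two maps as channels; the architecture and layer sizes are assumptions, not the authors' network:

```python
import torch
import torch.nn as nn

class Bimodal3DCNN(nn.Module):
    """Toy 3D CNN taking GD and MD maps stacked as two input channels."""
    def __init__(self, in_ch=2, n_classes=2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(in_ch, 16, 3, padding=1), nn.ReLU(),
            nn.MaxPool3d(2),
            nn.Conv3d(16, 32, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool3d(1),  # global pooling to one feature vector
        )
        self.classifier = nn.Linear(32, n_classes)  # e.g. PD vs. MSA

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

# "bimodal" input: channel 0 = gray-matter density, channel 1 = mean diffusivity
x = torch.randn(1, 2, 32, 32, 32)
logits = Bimodal3DCNN()(x)
print(logits.shape)  # torch.Size([1, 2])
```

Dropping one channel recovers the "monomodal" GD or MD configurations the study compares against.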

Back to the Future-Cardiovascular Imaging From 1966 to Today and Tomorrow.

Wintersperger BJ, Alkadhi H, Wildberger JE

PubMed, Jul 23 2025
This article, on the 60th anniversary of the journal Investigative Radiology, a journal dedicated to cutting-edge imaging technology, discusses key historical milestones in CT and MRI technology, as well as the ongoing advancement of contrast agent development for cardiovascular imaging over the past decades. It specifically highlights recent developments and the current state-of-the-art technology, including photon-counting detector CT and artificial intelligence, which will further push the boundaries of cardiovascular imaging. What were once ideas and visions have become today's clinical reality for the benefit of patients, and imaging technology will continue to evolve and transform modern medicine.

Deep Learning-Based Prediction of Microvascular Invasion and Survival Outcomes in Hepatocellular Carcinoma Using Dual-phase CT Imaging of Tumors and Lesser Omental Adipose: A Multicenter Study.

Miao S, Sun M, Li X, Wang M, Jiang Y, Liu Z, Wang Q, Ding X, Wang R

PubMed, Jul 23 2025
Accurate preoperative prediction of microvascular invasion (MVI) in hepatocellular carcinoma (HCC) remains challenging, and current imaging biomarkers show limited predictive performance. This study aimed to develop a deep learning model based on preoperative multiphase CT images of tumors and lesser omental adipose tissue (LOAT) to predict MVI status and to analyze associated survival outcomes. This retrospective study included pathologically confirmed HCC patients from two medical centers between 2016 and 2023. A dual-branch feature fusion model based on ResNet18 was constructed, which extracted fused features from dual-phase CT images of both tumors and LOAT. The model's performance was evaluated on both internal and external test sets. Logistic regression was used to identify independent predictors of MVI. Based on MVI status, patients in the training, internal test, and external test cohorts were stratified into high- and low-risk groups, and overall survival differences were analyzed. The model incorporating LOAT features outperformed the tumor-only model, achieving an AUC of 0.889 (95% CI: [0.882, 0.962], P=0.004) in the internal test set and 0.826 (95% CI: [0.793, 0.872], P=0.006) in the external test set. Both results surpassed the independent diagnoses of three radiologists (average AUC=0.772). Multivariate logistic regression confirmed that maximum tumor diameter and LOAT area were independent predictors of MVI. Further Cox regression analysis showed that MVI-positive patients had significantly increased mortality risks in both the internal test set (Hazard Ratio [HR]=2.246, 95% CI: [1.088, 4.637], P=0.029) and the external test set (HR=3.797, 95% CI: [1.262, 11.422], P=0.018). This study is the first to use a deep learning framework integrating LOAT and tumor imaging features, improving preoperative MVI risk stratification accuracy.
The independent prognostic value of LOAT was validated in multicenter cohorts, highlighting its potential to guide personalized surgical planning.
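A minimal sketch of the dual-branch fusion idea described above (one branch per region, features concatenated before classification). Tiny CNN encoders stand in for the paper's ResNet18 backbones, and all layer sizes are assumptions:

```python
import torch
import torch.nn as nn

def make_branch(out_dim=64):
    """Small 2D CNN encoder; a stand-in for a ResNet18 backbone."""
    return nn.Sequential(
        nn.Conv2d(2, 16, 3, padding=1), nn.ReLU(),  # 2 channels: dual-phase CT
        nn.MaxPool2d(2),
        nn.Conv2d(16, out_dim, 3, padding=1), nn.ReLU(),
        nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    )

class DualBranchFusion(nn.Module):
    """Fuses features from tumor and LOAT crops for MVI classification."""
    def __init__(self, feat_dim=64, n_classes=2):
        super().__init__()
        self.tumor_branch = make_branch(feat_dim)
        self.loat_branch = make_branch(feat_dim)
        self.head = nn.Linear(feat_dim * 2, n_classes)

    def forward(self, tumor, loat):
        # concatenate the two region-specific feature vectors
        f = torch.cat([self.tumor_branch(tumor), self.loat_branch(loat)], dim=1)
        return self.head(f)

tumor = torch.randn(4, 2, 64, 64)  # dual-phase tumor crops
loat = torch.randn(4, 2, 64, 64)   # dual-phase lesser-omental-adipose crops
out = DualBranchFusion()(tumor, loat)
print(out.shape)  # torch.Size([4, 2])
```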

To Compare the Application Value of Different Deep Learning Models Based on CT in Predicting Visceral Pleural Invasion of Non-small Cell Lung Cancer: A Retrospective, Multicenter Study.

Zhu X, Yang Y, Yan C, Xie Z, Shi H, Ji H, He L, Yang T, Wang J

PubMed, Jul 23 2025
Visceral pleural invasion (VPI) indicates poor prognosis in non-small cell lung cancer (NSCLC) and upgrades the T classification of NSCLC from T1 to T2. This study aimed to develop and validate deep learning models for accurate prediction of VPI in patients with NSCLC, and to compare the performance of two-dimensional (2D), three-dimensional (3D), and hybrid 3D models. This retrospective study included consecutive patients with pathologically confirmed lung tumors between June 2017 and September 2022. The clinical data and preoperative imaging features of these patients were investigated and their relationships with VPI statistically compared. Elastic fiber staining results were the gold standard for diagnosis of VPI. The data of non-VPI and VPI patients were randomly divided into training and validation cohorts at ratios of 8:2 and 6:4, respectively. The EfficientNet-B0_2D model and the Double-head Res2Net/_F6/_F24 models were constructed, optimized, and verified using two convolutional neural network architectures, EfficientNet-B0 and Res2Net, by extracting features from the original CT images and combining specific clinical-CT features. The receiver operating characteristic curve, the area under the curve (AUC), and the confusion matrix were used to assess the diagnostic efficiency of the models, and the DeLong test was used to compare performance between models. A total of 1931 patients with NSCLC were evaluated. By univariate analysis, 20 clinical-CT features were identified as risk predictors of VPI. Comparison of diagnostic efficacy among the EfficientNet-B0_2D, Double-head Res2Net, Res2Net_F6, and Res2Net_F24 combined models revealed that the Double-head Res2Net_F6 model achieved the highest AUC of 0.941, followed by Double-head Res2Net (AUC=0.879), Double-head Res2Net_F24 (AUC=0.876), and EfficientNet-B0_2D (AUC=0.785).
The three 3D-based models showed comparable predictive performance in the validation cohort and all outperformed the 2D model (EfficientNet-B0_2D, all P<0.05). Predicting VPI in NSCLC with deep learning models is feasible, and the Double-head Res2Net_F6 model, fused with six clinical-CT features, showed the greatest diagnostic efficacy.
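The "_F6" variant fuses image features with six clinical-CT features. A toy sketch of that late-fusion pattern follows; it is not the authors' Res2Net, whose internals are not given here, and every layer size is an assumption:

```python
import torch
import torch.nn as nn

class ImagePlusClinical(nn.Module):
    """Toy late-fusion model: 3D image features concatenated with a
    6-dimensional clinical-CT feature vector before classification."""
    def __init__(self, n_clinical=6, n_classes=2):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv3d(1, 8, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool3d(1), nn.Flatten(),  # -> 8-d image feature
        )
        self.head = nn.Linear(8 + n_clinical, n_classes)

    def forward(self, ct, clinical):
        # fuse learned image features with tabular clinical-CT features
        return self.head(torch.cat([self.encoder(ct), clinical], dim=1))

ct = torch.randn(3, 1, 16, 64, 64)  # single-channel CT volume patches
clinical = torch.randn(3, 6)        # six clinical-CT features per patient
out = ImagePlusClinical()(ct, clinical)
print(out.shape)  # torch.Size([3, 2])
```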

CT-based intratumoral and peritumoral radiomics to predict the treatment response to hepatic arterial infusion chemotherapy plus lenvatinib and PD-1 in high-risk hepatocellular carcinoma cases: a multi-center study.

Liu Z, Li X, Huang Y, Chang X, Zhang H, Wu X, Diao Y, He F, Sun J, Feng B, Liang H

PubMed, Jul 23 2025
Noninvasive and precise tools are lacking for estimating treatment response in patients with high-risk hepatocellular carcinoma (HCC) who could benefit from hepatic arterial infusion chemotherapy (HAIC) plus lenvatinib and humanized programmed death receptor-1 inhibitors (PD-1) (HAIC-LEN-PD1). This study aimed to evaluate the predictive potential of intratumoral and peritumoral radiomics for preoperative treatment response assessment to HAIC-LEN-PD1 in high-risk HCC cases. In total, 630 high-risk HCC cases administered HAIC-LEN-PD1 at three institutions were retrospectively identified and assigned to training, validation, and external test sets. A total of 1834 radiomic features were obtained from each of the intratumoral and peritumoral regions, and radiomics models were established using five classifiers. Based on the optimal model, a nomogram was developed and evaluated using areas under the curves (AUCs), calibration curves, and decision curve analysis (DCA). Overall survival (OS) and progression-free survival (PFS) were assessed by Kaplan-Meier curves. The Intratumoral + Peritumoral 10 mm (Intra + Peri10) radiomics models were superior to the intratumoral-only and peritumoral-only models, with AUCs of 0.919 (95%CI 0.889-0.949) in the training set, 0.874 (95%CI 0.812-0.936) in the validation set, and 0.893 (95%CI 0.839-0.948) in the external test set. The nomogram had good calibration ability and clinical value, with AUCs of 0.936 (95%CI 0.907-0.965) in the training set, 0.878 (95%CI 0.916-0.940) in the validation set, and 0.902 (95%CI 0.848-0.957) in the external test set. Kaplan-Meier analysis showed that high-score patients had significantly shorter OS and PFS than low-score patients (median OS: 11.7 vs. 29.6 months, whole set, p < 0.001; median PFS: 6.0 vs. 12.0 months, whole set, p < 0.001). The Intra + Peri10 model can effectively predict the treatment response of high-risk HCC cases administered HAIC-LEN-PD1.
The nomogram could provide an effective tool for evaluating treatment response and risk stratification.
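A peritumoral region like Peri10 is commonly derived by dilating the tumor mask outward and subtracting the tumor itself. A hedged sketch using SciPy follows; the study's exact margin handling (spacing, 3D structuring element) is an assumption:

```python
import numpy as np
from scipy.ndimage import binary_dilation

def peritumoral_mask(tumor_mask, margin_mm, spacing_mm):
    """Ring of tissue within `margin_mm` of the tumor, excluding the tumor."""
    iters = int(round(margin_mm / spacing_mm))      # margin in voxels
    dilated = binary_dilation(tumor_mask, iterations=iters)
    return dilated & ~tumor_mask                    # keep only the ring

# toy 2D example: a 4x4 "tumor" with a 10 mm margin at 2 mm voxel spacing
tumor = np.zeros((20, 20), dtype=bool)
tumor[8:12, 8:12] = True
ring = peritumoral_mask(tumor, margin_mm=10, spacing_mm=2)
print(tumor.sum(), ring.sum())
```

Radiomic features would then be extracted separately from `tumor` and `ring`, matching the intratumoral/peritumoral split the abstract describes.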

Development of a deep learning model for T1N0 gastric cancer diagnosis using 2.5D radiomic data in preoperative CT images.

He J, Xu J, Chen W, Cao M, Zhang J, Yang Q, Li E, Zhang R, Tong Y, Zhang Y, Gao C, Zhao Q, Xu Z, Wang L, Cheng X, Zheng G, Pan S, Hu C

PubMed, Jul 23 2025
Early detection and precise preoperative staging of early gastric cancer (EGC) are critical. This study therefore aimed to develop a deep learning model using portal venous phase CT images to accurately identify EGC without lymph node metastasis. The study included 3164 patients with gastric cancer (GC) who underwent radical surgery at two medical centers in China from 2006 to 2019, and applied two novel approaches: 2.5D radiomic data and multi-instance learning (MIL). With feature selection based on the 2.5D radiomic data and MIL, a ResNet101 model combined with an XGBoost model showed satisfactory performance in diagnosing pT1N0 GC, and the 2.5D MIL-based model demonstrated markedly superior predictive performance compared with traditional radiomics models and clinical models. We constructed the first deep learning prediction model based on 2.5D radiomics and MIL for effectively diagnosing pT1N0 GC, which provides valuable information for individualized treatment selection.
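A toy sketch of the 2.5D multi-instance idea (adjacent slices stacked as channels, a shared encoder applied per instance, pooling to a bag-level prediction). This stands in for, and does not reproduce, the paper's ResNet101 + XGBoost pipeline:

```python
import torch
import torch.nn as nn

class Slice25DMIL(nn.Module):
    """Toy 2.5D MIL model: each instance is a stack of 3 adjacent CT
    slices; a shared 2D encoder embeds every instance, and max-pooling
    over instances yields one bag-level (patient-level) prediction."""
    def __init__(self, feat_dim=32, n_classes=2):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),  # 3 slices as channels
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(16, feat_dim),
        )
        self.head = nn.Linear(feat_dim, n_classes)

    def forward(self, bag):                  # bag: (n_instances, 3, H, W)
        feats = self.encoder(bag)            # per-instance features
        pooled = feats.max(dim=0).values     # max-pool across instances
        return self.head(pooled)             # bag-level logits

bag = torch.randn(7, 3, 64, 64)  # one patient: 7 instances of 3 stacked slices
logits = Slice25DMIL()(bag)
print(logits.shape)  # torch.Size([2])
```

In the paper's setup the pooled deep features feed a gradient-boosted classifier (XGBoost) rather than a linear head; the pooling pattern is the part this sketch illustrates.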