Page 44 of 2352345 results

Improving lung cancer detection with enhanced convolutional sequential networks.

Haziq U, Uddin J, Rahman S, Yaseen M, Khan I, Khan J, Jung Y

pubmed · Sep 1 2025
Lung cancer is the most common cause of cancer-related deaths worldwide, and early detection is extremely important for improving survival. According to the National Institute of Health Sciences, lung cancer has the highest rate of cancer mortality. Medical professionals usually rely on clinical imaging methods such as MRI, X-ray, biopsy, ultrasound, and CT scans. However, these techniques often face challenges including false positives, false negatives, and limited sensitivity. Deep learning approaches, particularly convolutional neural networks (CNNs), have arisen to tackle these issues. However, traditional CNN models often suffer from high computational complexity, slow inference times, and overfitting on real-world clinical data. To overcome these limitations, we propose an optimized sequential convolutional neural network (SCNN) that maintains high classification accuracy while reducing processing time and computational load. The SCNN model consists of three convolutional layers, three max-pooling layers, a flattening layer, and dense layers, allowing for efficient and accurate classification. The histological imaging dataset comprises three categories of lung tissue: adenocarcinoma, benign, and squamous cell carcinoma. Our SCNN achieves an average accuracy of 95.34%, a precision of 95.66%, a recall of 95.33%, and a comparable F1 score over 60 training epochs within 1000 seconds. These results surpass traditional CNN, R-CNN, and custom Inception classifiers, indicating superior speed and robustness in histological image classification. SCNN therefore offers a practical and scalable solution for improving lung cancer detection in clinical practice.
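The layer stack described above (three convolutional layers, three max-pooling layers, then flatten and dense layers) can be sketched as simple shape bookkeeping. The 224×224 input size, 3×3 "same"-padded kernels, and filter counts below are illustrative assumptions, since the abstract does not specify them:

```python
# Shape trace through the described SCNN stack (3 conv + 3 max-pool + flatten
# + dense). Kernel sizes, padding, filter counts, and the 224x224 input are
# assumptions for illustration; the paper does not specify them here.

def conv2d_out(size: int, kernel: int = 3, stride: int = 1, pad: int = 1) -> int:
    """Spatial output size of a conv layer ('same' padding for 3x3 by default)."""
    return (size + 2 * pad - kernel) // stride + 1

def maxpool_out(size: int, kernel: int = 2, stride: int = 2) -> int:
    """Spatial output size of a max-pooling layer."""
    return (size - kernel) // stride + 1

def scnn_flatten_size(input_size: int = 224, filters=(32, 64, 128)) -> int:
    """Trace the feature-map size through three conv+pool blocks."""
    size = input_size
    for _ in filters:
        size = conv2d_out(size)   # 3x3 conv, 'same' padding: size unchanged
        size = maxpool_out(size)  # 2x2 pool: size halved
    return size * size * filters[-1]  # units fed into the flatten/dense layers

print(scnn_flatten_size())  # 224 -> 112 -> 56 -> 28; 28*28*128 = 100352
```

Tracing the shapes this way shows why the design keeps computation modest: each pooling step quarters the spatial area before the dense layers.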

Explainable self-supervised learning for medical image diagnosis based on DINO V2 model and semantic search.

Hussien A, Elkhateb A, Saeed M, Elsabawy NM, Elnakeeb AE, Elrashidy N

pubmed · Sep 1 2025
Medical images have become indispensable for decision-making and significantly affect treatment planning. However, the growth of medical imaging has widened the gap between the volume of images and the number of available radiologists, leading to delays and diagnostic errors. Recent studies highlight the potential of deep learning (DL) in medical image diagnosis; however, its reliance on labelled data limits its applicability in various clinical settings. As a result, recent studies explore the role of self-supervised learning (SSL) in overcoming these challenges. Our study addresses these challenges by examining the performance of SSL on diverse medical image datasets and comparing it with traditional pre-trained supervised learning (SL) models. Unlike prior SSL methods that focus solely on classification, our framework leverages DINOv2's embeddings to enable semantic search in medical databases (via Qdrant), allowing clinicians to retrieve similar cases efficiently and addressing a critical gap in clinical workflows where rapid case retrieval is needed. The results affirmed SSL's ability, especially that of DINOv2, to overcome the challenges associated with labelling data and to provide diagnoses more accurate than traditional SL. DINOv2 achieved classification accuracies of 100%, 99%, 99%, 100%, and 95% on the lung cancer, brain tumour, leukaemia, and eye retina disease datasets, respectively. While existing SSL models (e.g., BYOL, SimCLR) lack interpretability, we uniquely combine DINOv2 with ViT-CX, a causal explanation method tailored for transformers.
This provides clinically actionable heatmaps revealing how the model localizes tumours and cellular patterns, a feature absent from prior SSL medical imaging studies. Furthermore, our research explores the impact of semantic search in the medical imaging domain and how it can revolutionize the querying process, returning semantically similar results alongside the SSL prediction: the Qdrant vector database stores the embeddings of the developed model after training, and cosine similarity measures the distance between the query image's embedding and the stored embeddings. Our study aims to enhance the efficiency and accuracy of medical image analysis, ultimately improving the decision-making process.
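The retrieval step described above can be sketched with plain cosine similarity over stored embeddings. In the paper this is backed by Qdrant; here a numpy matrix stands in for the vector store, and the toy embeddings are assumptions:

```python
import numpy as np

# Minimal sketch of embedding-based semantic search with cosine similarity.
# A plain numpy matrix stands in for the vector database; the embeddings are
# made-up toy vectors, not model outputs.

def cosine_search(query: np.ndarray, database: np.ndarray, top_k: int = 3):
    """Return indices of the top_k stored embeddings most similar to the query."""
    q = query / np.linalg.norm(query)
    db = database / np.linalg.norm(database, axis=1, keepdims=True)
    sims = db @ q                      # cosine similarity of query vs. every row
    return np.argsort(-sims)[:top_k], sims

db = np.array([[1.0, 0.0], [0.0, 1.0], [0.9, 0.1]])  # stored case embeddings
idx, sims = cosine_search(np.array([1.0, 0.05]), db, top_k=2)
print(idx)  # indices of the two stored cases closest in embedding space
```

A production system would delegate the normalization and ranking to the vector database, but the distance computation is the same.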

Machine learning to predict high-risk coronary artery disease on CT in the SCOT-HEART trial.

Williams MC, Guimaraes ARM, Jiang M, Kwieciński J, Weir-McCall JR, Adamson PD, Mills NL, Roditi GH, van Beek EJR, Nicol E, Berman DS, Slomka PJ, Dweck MR, Newby DE, Dey D

pubmed · Sep 1 2025
Machine learning based on clinical characteristics has the potential to predict coronary CT angiography (CCTA) findings and help guide resource utilisation. From the SCOT-HEART (Scottish Computed Tomography of the HEART) trial, data from 1769 patients were used to train and test machine learning models (XGBoost, 10-fold cross-validation, grid-search hyperparameter selection). Two models were generated separately to predict the presence of coronary artery disease (CAD) and an increased burden of low-attenuation coronary artery plaque (LAP) using symptoms, demographic and clinical characteristics, electrocardiography and exercise tolerance testing (ETT). Machine learning predicted the presence of CAD on CCTA (area under the curve (AUC) 0.80, 95% CI 0.74 to 0.85) better than the 10-year cardiovascular risk score alone (AUC 0.75, 95% CI 0.70 to 0.81, p=0.004). The most important features in this model were the 10-year cardiovascular risk score, age, sex, total cholesterol and an abnormal ETT. In contrast, the second model, used to predict an increased LAP burden, performed similarly to the 10-year cardiovascular risk score (AUC 0.75, 95% CI 0.70 to 0.80 vs AUC 0.72, 95% CI 0.66 to 0.77, p=0.08), with the most important features being the 10-year cardiovascular risk score, age, body mass index and total and high-density lipoprotein cholesterol concentrations. Machine learning models can improve prediction of the presence of CAD on CCTA over the standard cardiovascular risk score. However, it was not possible to improve the prediction of an increased LAP burden based on clinical factors alone.
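The model-selection setup described above (10-fold cross-validation with a grid search, scored by AUC) can be sketched as follows. sklearn's GradientBoostingClassifier stands in for XGBoost so the example stays self-contained, and the grid values and synthetic data are illustrative, not the trial's:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import GridSearchCV

# Grid search over boosting hyperparameters with 10-fold CV, selecting by AUC.
# Synthetic tabular data stands in for the clinical features.
X, y = make_classification(n_samples=300, n_features=8, random_state=0)

search = GridSearchCV(
    GradientBoostingClassifier(random_state=0),
    param_grid={"n_estimators": [50, 100], "max_depth": [2, 3]},
    cv=10,                 # 10-fold cross-validation, as in the study
    scoring="roc_auc",     # model selection by AUC
)
search.fit(X, y)
print(search.best_params_, round(search.best_score_, 3))
```

With real trial data one would additionally hold out a test set so the reported AUC is not biased by the hyperparameter search itself.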

CT-based deep learning radiomics model for predicting proliferative hepatocellular carcinoma: application in transarterial chemoembolization and radiofrequency ablation.

Zhang H, Zhang Z, Zhang K, Gao Z, Shen Z, Shen W

pubmed · Sep 1 2025
Proliferative hepatocellular carcinoma (HCC) is an aggressive tumor with varying prognosis depending on disease stage and subsequent treatment. This study aims to develop and validate a deep learning radiomics (DLR) model based on contrast-enhanced CT to predict proliferative HCC and to implement risk prediction in patients treated with transarterial chemoembolization (TACE) and radiofrequency ablation (RFA). A total of 312 patients (mean age, 58 years ± 10 [SD]; 261 men and 51 women) with HCC who underwent surgery at two medical centers were included and divided into a training set (<i>n</i> = 182), an internal test set (<i>n</i> = 46) and an external test set (<i>n</i> = 84). DLR features were extracted from preoperative contrast-enhanced CT images. Multiple machine learning algorithms were used to develop and validate proliferative HCC prediction models in the training and test sets. Subsequently, patients from two independent new sets (RFA and TACE sets) were divided into high- and low-risk groups using the DLR score generated by the optimal model. The risk prediction value of the DLR score for recurrence-free survival (RFS) and time to progression (TTP) was examined separately in the RFA and TACE sets. The DLR proliferative HCC prediction model demonstrated excellent predictive performance, with an AUC of 0.906 (95% CI 0.861–0.952) in the training set, 0.901 (95% CI 0.779–1.000) in the internal test set and 0.837 (95% CI 0.746–0.928) in the external test set. The DLR score effectively enabled risk prediction for patients in the RFA and TACE sets. For the RFA set, the low-risk group had significantly longer RFS than the high-risk group (<i>P</i> = 0.037). Similarly, in the TACE set, the low-risk group showed a longer TTP than the high-risk group (<i>P</i> = 0.034). The DLR-based contrast-enhanced CT model enables non-invasive prediction of proliferative HCC.
Furthermore, the DLR risk prediction helps identify high-risk patients undergoing RFA or TACE, providing prognostic insights for personalized management. The online version contains supplementary material available at 10.1186/s12880-025-01913-9.
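The risk-grouping step can be sketched as a simple threshold on the model's score. The median threshold and toy scores below are assumptions; the paper derives its cut-off from the optimal model:

```python
import numpy as np

# Sketch of risk stratification by thresholding a model score: patients at or
# above the cut-off form the high-risk group. Scores and the median cut-off
# are illustrative, not the study's.

def stratify(dlr_scores: np.ndarray, threshold: float):
    """Return boolean masks for the high- and low-risk groups."""
    high = dlr_scores >= threshold
    return high, ~high

scores = np.array([0.12, 0.80, 0.45, 0.91, 0.30, 0.66])
threshold = np.median(scores)          # 0.555 for this toy cohort
high, low = stratify(scores, threshold)
print(high.sum(), low.sum())  # 3 patients in each group
```

In the study, survival in the two groups is then compared (e.g. RFS and TTP), which a survival-analysis library would handle on real follow-up data.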

Feasibility of fully automatic assessment of cervical canal stenosis using MRI via deep learning.

Feng X, Zhang Y, Lu M, Ma C, Miao X, Yang J, Lin L, Zhang Y, Zhang K, Zhang N, Kang Y, Luo Y, Cao K

pubmed · Sep 1 2025
Currently, there is no fully automated tool available for evaluating the degree of cervical spinal stenosis. The aim of this study was to develop and validate artificial intelligence (AI) algorithms for the assessment of cervical spinal stenosis. In this retrospective multi-center study, cervical spine magnetic resonance imaging (MRI) scans obtained from July 2020 to June 2023 were included. Studies of patients with spinal instrumentation or with suboptimal image quality were excluded. Sagittal T2-weighted images were used. The training data from the Fourth People's Hospital of Shanghai (Hos. 1) and Shanghai Changzheng Hospital (Hos. 2) were annotated by two musculoskeletal (MSK) radiologists following Kang's system as the standard reference. First, a convolutional neural network (CNN) was trained to detect the region of interest (ROI); a second, Transformer-based model then performed classification. The performance of the deep learning (DL) model was assessed on an internal test set from Hos. 2 and an external test set from Shanghai Changhai Hospital (Hos. 3), and compared with that of six readers. Metrics such as detection precision, interrater agreement, sensitivity (SEN), and specificity (SPE) were calculated. Overall, 795 patients were analyzed (mean age ± standard deviation, 55±14 years; 346 female), with 589 in the training (75%) and validation (25%) sets, 206 in the internal test set, and 95 in the external test set. Four tasks with different clinical application scenarios were trained, with accuracy (ACC) ranging from 0.8993 to 0.9532.
When a Kang system score of ≥2 was used as the threshold for diagnosing central cervical canal stenosis, the algorithm and the six readers achieved similar areas under the receiver operating characteristic curve (AUCs) in the internal test set (0.936 [95% confidence interval (CI): 0.916-0.955]), with a SEN of 90.3% and a SPE of 93.8%; in the external test set, the AUC of the DL model was 0.931 (95% CI: 0.917-0.946), with a SEN of 100% and a SPE of 86.3%. Correlation analysis comparing the DL method, the six readers, and the MRI reports against the reference standard showed moderate correlations, with R values ranging from 0.589 to 0.668. The DL model produced approximately the same rates of upgrades (9.2%) and downgrades (5.1%) as the six readers. The DL model can fully automatically and reliably assess cervical canal stenosis on MRI scans.
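The reported metrics (SEN, SPE, and AUC at a fixed diagnostic threshold) can be computed as follows; the labels and scores are toy values, not the study's data:

```python
import numpy as np
from sklearn.metrics import confusion_matrix, roc_auc_score

# Sensitivity and specificity at a fixed threshold, plus AUC from continuous
# scores. Here 1 = stenosis (Kang score >= 2) and the 0.5 cut-off is arbitrary.
y_true  = np.array([0, 0, 0, 0, 1, 1, 1, 1])
y_score = np.array([0.1, 0.3, 0.2, 0.6, 0.7, 0.8, 0.4, 0.9])
y_pred  = (y_score >= 0.5).astype(int)

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
sen = tp / (tp + fn)   # sensitivity: detected stenoses / all stenoses
spe = tn / (tn + fp)   # specificity: correct negatives / all non-stenotic
auc = roc_auc_score(y_true, y_score)
print(sen, spe, auc)   # 0.75 0.75 0.9375
```

The AUC summarizes performance over all thresholds, while SEN/SPE describe the single operating point actually used in reporting.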

Automated coronary analysis in ultrahigh-spatial resolution photon-counting detector CT angiography: Clinical validation and intra-individual comparison with energy-integrating detector CT.

Kravchenko D, Hagar MT, Varga-Szemes A, Schoepf UJ, Schoebinger M, O'Doherty J, Gülsün MA, Laghi A, Laux GS, Vecsey-Nagy M, Emrich T, Tremamunno G

pubmed · Sep 1 2025
To evaluate a deep-learning algorithm for automated coronary artery analysis on ultrahigh-resolution photon-counting detector coronary computed tomography (CT) angiography and to compare its performance with that of expert readers, using invasive coronary angiography as the reference. Thirty-two patients (mean age 68.6 years; 81% male) underwent both energy-integrating detector and ultrahigh-resolution photon-counting detector CT within 30 days. Expert readers scored each image using the Coronary Artery Disease-Reporting and Data System classification, and results were compared with invasive angiography. After a three-month wash-out, one reader reanalyzed the photon-counting detector CT images assisted by the algorithm. Sensitivity, specificity, accuracy, inter-reader agreement, and reading times were recorded for each method. On 401 arterial segments, inter-reader agreement improved from substantial (κ = 0.75) on energy-integrating detector CT to near-perfect (κ = 0.86) on photon-counting detector CT. The algorithm alone achieved 85% sensitivity, 91% specificity, and 90% accuracy on energy-integrating detector CT, and 85%, 96%, and 95% on photon-counting detector CT. Compared with invasive angiography on photon-counting detector CT, manual and automated reads had similar sensitivity (67%), but manual assessment slightly outperformed in specificity (85% vs. 79%) and accuracy (84% vs. 78%). When the reader was assisted by the algorithm, specificity rose to 97% (p < 0.001), accuracy to 95%, and reading time decreased by 54% (p < 0.001). This deep-learning algorithm demonstrates high agreement with experts and improved diagnostic performance on photon-counting detector CT. Expert review augmented by the algorithm further increases specificity and dramatically reduces interpretation time.
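Inter-reader agreement of the kind reported above (κ on per-segment scores) can be sketched with Cohen's kappa; the two label lists below are hypothetical readings, not the study's:

```python
from sklearn.metrics import cohen_kappa_score

# Cohen's kappa measures agreement between two readers beyond chance.
# Toy per-segment CAD-RADS-style categories for two readers:
reader_1 = [0, 0, 1, 1, 2, 2]
reader_2 = [0, 0, 1, 2, 2, 2]   # second reader disagrees on one segment

kappa = cohen_kappa_score(reader_1, reader_2)
print(kappa)  # 0.75: substantial agreement on this toy data
```

Kappa near 0.75 is conventionally read as "substantial" agreement and above 0.80 as "near-perfect", which is the scale the abstract uses.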

Left ventricular ejection fraction assessment: artificial intelligence compared to echocardiography expert and cardiac magnetic resonance measurements.

Mołek-Dziadosz P, Woźniak A, Furman-Niedziejko A, Pieszko K, Szachowicz-Jaworska J, Miszalski-Jamka T, Krupiński M, Dweck MR, Nessler J, Gackowski A

pubmed · Sep 1 2025
Cardiac magnetic resonance (CMR) is the gold standard for assessing left ventricular ejection fraction (LVEF). Artificial intelligence (AI)-based echocardiographic analysis is increasingly utilized in clinical practice. This study compares LVEF measurements between echocardiography (ECHO) assessed by experts and automated AI, with CMR as the reference standard. We retrospectively analyzed 118 patients who underwent both CMR and ECHO within 7 days. LVEF measured by CMR was compared with results from AI-based software that automatically analyzed all stored DICOM loops (multi-loop AI analysis). The AI analysis was then repeated using only the single best-quality loop for each of the 2-chamber and 4-chamber views (one-loop AI analysis). These results were further compared with standard ECHO analysis performed by two independent experts. Agreement was investigated using Pearson's correlation and Bland-Altman analysis, as well as Cohen's kappa and concordance for categorization of LVEF into subgroups (≤30%, 31-40%, 41-50%, 51-70%, and >70%). Both experts demonstrated strong inter-reader agreement (R = 0.88, κ = 0.77) and correlated well with CMR LVEF (expert 1: R = 0.86, κ = 0.74; expert 2: R = 0.85, κ = 0.68). Multi-loop AI analysis correlated strongly with CMR (R = 0.87, κ = 0.68) and with the experts (R = 0.88-0.90). One-loop AI analysis demonstrated numerically higher concordance with CMR LVEF (R = 0.89, κ = 0.75) than multi-loop AI analysis and the experts. AI-based analysis showed LVEF assessment similar to that of human experts when compared against CMR. AI-based ECHO analysis is promising, but the results should be interpreted with caution.
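The Bland-Altman agreement analysis used above can be sketched as follows; the paired LVEF values are toy numbers, not the study's measurements:

```python
import numpy as np

# Bland-Altman analysis: bias (mean paired difference) and 95% limits of
# agreement between two measurement methods of the same quantity.

def bland_altman(a: np.ndarray, b: np.ndarray):
    """Return bias and (lower, upper) 95% limits of agreement."""
    diff = a - b
    bias = diff.mean()
    sd = diff.std(ddof=1)          # sample SD of the paired differences
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)

ai  = np.array([55.0, 48.0, 62.0, 35.0])   # LVEF (%) by method A (e.g. AI)
cmr = np.array([54.0, 49.0, 61.0, 36.0])   # LVEF (%) by method B (e.g. CMR)
bias, (lo, hi) = bland_altman(ai, cmr)
print(bias, lo, hi)
```

Unlike a correlation coefficient, the limits of agreement express in LVEF percentage points how far two methods may plausibly disagree on an individual patient.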

Temporal Representation Learning for Real-Time Ultrasound Analysis

Yves Stebler, Thomas M. Sutter, Ece Ozkan, Julia E. Vogt

arxiv preprint · Sep 1 2025
Ultrasound (US) imaging is a critical tool in medical diagnostics, offering real-time visualization of physiological processes. One of its major advantages is its ability to capture temporal dynamics, which is essential for assessing motion patterns in applications such as cardiac monitoring, fetal development, and vascular imaging. Despite its importance, current deep learning models often overlook the temporal continuity of ultrasound sequences, analyzing frames independently and missing key temporal dependencies. To address this gap, we propose a method for learning effective temporal representations from ultrasound videos, with a focus on echocardiography-based ejection fraction (EF) estimation. EF prediction serves as an ideal case study to demonstrate the necessity of temporal learning, as it requires capturing the rhythmic contraction and relaxation of the heart. Our approach leverages temporally consistent masking and contrastive learning to enforce temporal coherence across video frames, enhancing the model's ability to represent motion patterns. Evaluated on the EchoNet-Dynamic dataset, our method achieves a substantial improvement in EF prediction accuracy, highlighting the importance of temporally-aware representation learning for real-time ultrasound analysis.
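The temporal contrastive objective can be illustrated with a minimal numpy InfoNCE-style loss over embeddings of adjacent frames. This is a generic sketch of the technique, not the authors' implementation; the embeddings and temperature are made up:

```python
import numpy as np

# InfoNCE-style contrastive loss: each anchor frame's positive is the matching
# row (its temporally adjacent frame); the other rows act as negatives.

def info_nce(anchors: np.ndarray, positives: np.ndarray, temp: float = 0.1) -> float:
    a = anchors / np.linalg.norm(anchors, axis=1, keepdims=True)
    p = positives / np.linalg.norm(positives, axis=1, keepdims=True)
    logits = a @ p.T / temp                      # pairwise cosine similarities
    logits -= logits.max(axis=1, keepdims=True)  # for numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return float(-log_probs.diagonal().mean())   # cross-entropy on the diagonal

rng = np.random.default_rng(0)
frames_t  = rng.normal(size=(8, 16))                     # embeddings at time t
frames_t1 = frames_t + 0.05 * rng.normal(size=(8, 16))   # adjacent frames, t+1
print(info_nce(frames_t, frames_t1))  # small loss: positives stay close
```

Minimizing this loss pushes embeddings of neighbouring frames together and unrelated frames apart, which is the sense in which the model is forced to encode temporal coherence.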

Development of an age estimation method for the coxal bone and lumbar vertebrae obtained from post-mortem computed tomography images using a convolutional neural network.

Imaizumi K, Usui S, Nagata T, Hayakawa H, Shiotani S

pubmed · Sep 1 2025
Age estimation plays a major role in the identification of unknown dead bodies, including skeletal remains. We present a novel age estimation method developed by applying a deep-learning network to the coxal bone and lumbar vertebrae on post-mortem computed tomography (PMCT) images. Volume-rendered images of these bones from 1,229 individuals were captured and input to a convolutional neural network based on the Visual Geometry Group 16 (VGG16) network. A transfer learning strategy was employed. The predictive capabilities of the age estimation models were assessed by a 10-fold cross-validation procedure, with the mean absolute error (MAE) and correlation coefficients between chronological and estimated ages calculated for validation. In addition, gradient-weighted class activation mapping (Grad-CAM) was conducted to visualize the regions of interest in learning. The estimation models showed low MAEs (range, 6.44-7.27 years) and high correlation coefficients (range, 0.84-0.91) in the validation. Aging-induced shape changes were grossly observed at the vertebral body, coxal bone surface, and other sites, and the Grad-CAM results identified these as regions of interest in learning. The present method has the potential to become an age estimation tool routinely applied in the examination of unknown dead bodies, including skeletal remains.
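The validation metrics used above (MAE and the Pearson correlation between chronological and estimated ages) can be sketched as follows; the age values are toy numbers, not cross-validation output:

```python
import numpy as np

# Mean absolute error and Pearson correlation between true and predicted ages,
# the two summary metrics reported for the cross-validation folds.

def mae(y_true: np.ndarray, y_pred: np.ndarray) -> float:
    return float(np.abs(y_true - y_pred).mean())

chronological = np.array([25.0, 40.0, 55.0, 70.0, 85.0])
estimated     = np.array([30.0, 38.0, 60.0, 68.0, 80.0])

print(mae(chronological, estimated))                # 3.8 years
print(np.corrcoef(chronological, estimated)[0, 1])  # strong linear correlation
```

Reporting both matters: correlation captures whether the ranking of ages is right, while MAE expresses the typical error in years, which is what a forensic examiner acts on.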

Deep learning-based automated assessment of hepatic fibrosis via magnetic resonance images and nonimage data.

Li W, Zhu Y, Zhao G, Chen X, Zhao X, Xu H, Che Y, Chen Y, Ye Y, Dou X, Wang H, Cheng J, Xie Q, Chen K

pubmed · Sep 1 2025
Accurate staging of hepatic fibrosis is critical for prognostication and management among patients with chronic liver disease, and noninvasive, efficient alternatives to biopsy are urgently needed. This study aimed to evaluate the performance of an automated deep learning (DL) algorithm for fibrosis staging and for differentiating patients with hepatic fibrosis from healthy individuals via magnetic resonance (MR) images, with and without additional clinical data. A total of 500 patients from two medical centers were retrospectively analyzed. DL models were developed based on delayed-phase MR images to predict fibrosis stages. Additional models were constructed by integrating the DL algorithm with nonimaging variables, including serologic biomarkers [aminotransferase-to-platelet ratio index (APRI) and fibrosis index based on four factors (FIB-4)], viral status (hepatitis B and C), and MR scanner parameters. Diagnostic performance was assessed via the area under the receiver operating characteristic curve (AUROC), and comparisons were performed with the DeLong test. The sensitivity and specificity of the DL and full models (DL plus all clinical features) were compared with those of experienced radiologists and serologic biomarkers via the McNemar test. In the test set, the full model achieved AUROC values of 0.99 [95% confidence interval (CI): 0.94-1.00], 0.98 (95% CI: 0.93-0.99), 0.90 (95% CI: 0.83-0.95), 0.81 (95% CI: 0.73-0.88), and 0.84 (95% CI: 0.76-0.90) for staging F0-4, F1-4, F2-4, F3-4, and F4, respectively. This model significantly outperformed the DL model in early-stage classification (F0-4 and F1-4). Compared with expert radiologists, it showed superior specificity for F0-4 and higher sensitivity across the other four classification tasks. Both the DL and full models showed significantly greater specificity than the biomarkers for staging advanced fibrosis (F3-4 and F4).
The proposed DL algorithm provides a noninvasive method for hepatic fibrosis staging and screening, outperforming both radiologists and conventional biomarkers, and may facilitate improved clinical decision-making.
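The paired McNemar comparison used above for sensitivity and specificity can be sketched with the continuity-corrected chi-square form; the discordant-pair counts below are made up for illustration:

```python
# McNemar test for paired classifiers on the same cases: only the discordant
# pairs matter. The continuity-corrected statistic is compared with the 3.841
# critical value (chi-square, df = 1, alpha = 0.05).

def mcnemar_chi2(b: int, c: int) -> float:
    """b = cases only method 1 got right, c = cases only method 2 got right."""
    if b + c == 0:
        return 0.0
    return (abs(b - c) - 1) ** 2 / (b + c)

stat = mcnemar_chi2(b=25, c=8)
print(stat, stat > 3.841)  # significant disagreement on this toy data
```

Because both methods score the same patients, this paired test is more appropriate than comparing two independent proportions.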
