Page 23 of 343 (3,422 results)

Live(r) Die: Predicting Survival in Colorectal Liver Metastasis

Muhammad Alberb, Helen Cheung, Anne Martel

arXiv preprint · Sep 10, 2025
Colorectal cancer frequently metastasizes to the liver, significantly reducing long-term survival. While surgical resection is the only potentially curative treatment for colorectal liver metastasis (CRLM), patient outcomes vary widely depending on tumor characteristics along with clinical and genomic factors. Current prognostic models, often based on limited clinical or molecular features, lack sufficient predictive power, especially in multifocal CRLM cases. We present a fully automated framework for surgical outcome prediction from pre- and post-contrast MRI acquired before surgery. Our framework consists of a segmentation pipeline and a radiomics pipeline. The segmentation pipeline learns to segment the liver, tumors, and spleen from partially annotated data by leveraging promptable foundation models to complete missing labels. Also, we propose SAMONAI, a novel zero-shot 3D prompt propagation algorithm that leverages the Segment Anything Model to segment 3D regions of interest from a single point prompt, significantly improving our segmentation pipeline's accuracy and efficiency. The predicted pre- and post-contrast segmentations are then fed into our radiomics pipeline, which extracts features from each tumor and predicts survival using SurvAMINN, a novel autoencoder-based multiple instance neural network for survival analysis. SurvAMINN jointly learns dimensionality reduction and hazard prediction from right-censored survival data, focusing on the most aggressive tumors. Extensive evaluation on an institutional dataset comprising 227 patients demonstrates that our framework surpasses existing clinical and genomic biomarkers, delivering a C-index improvement exceeding 10%. Our results demonstrate the potential of integrating automated segmentation algorithms and radiomics-based survival analysis to deliver accurate, annotation-efficient, and interpretable outcome prediction in CRLM.
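The C-index improvement the authors report can be made concrete with a minimal Harrell's concordance estimator. This is a generic sketch of the metric itself, not the SurvAMINN evaluation code, and the toy survival data are invented for illustration:

```python
def concordance_index(times, events, risk_scores):
    """Harrell's C-index for right-censored survival data.

    A pair (i, j) is comparable when the subject with the shorter
    follow-up time experienced the event; the pair is concordant when
    that subject also has the higher predicted risk. Ties in risk
    count as half-concordant.
    """
    concordant, comparable = 0.0, 0
    n = len(times)
    for i in range(n):
        for j in range(n):
            if times[i] < times[j] and events[i]:
                comparable += 1
                if risk_scores[i] > risk_scores[j]:
                    concordant += 1.0
                elif risk_scores[i] == risk_scores[j]:
                    concordant += 0.5
    return concordant / comparable

# Toy cohort: shorter survival paired with higher predicted risk.
times = [5, 10, 15, 20]    # follow-up in months
events = [1, 1, 0, 1]      # 1 = event observed, 0 = censored
scores = [0.9, 0.7, 0.5, 0.1]
print(concordance_index(times, events, scores))  # 1.0 (perfectly concordant)
```

A C-index of 0.5 corresponds to random ranking and 1.0 to perfect risk ordering, so a gain "exceeding 10%" over clinical and genomic baselines is a substantial shift on this scale.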

Deep-Learning System for Automatic Measurement of the Femorotibial Rotational Angle on Lower-Extremity Computed Tomography.

Lee SW, Lee GP, Yoon I, Kim YJ, Kim KG

PubMed · Sep 10, 2025
To develop and validate a deep-learning-based algorithm for automatic identification of anatomical landmarks and calculation of femoral and tibial version angles (FTT angles) on lower-extremity CT scans. In this IRB-approved, retrospective study, lower-extremity CT scans from 270 adult patients (median age, 69 years; female-to-male ratio, 235:35) were analyzed. CT data were preprocessed using contrast-limited adaptive histogram equalization and RGB superposition to enhance tissue boundary distinction. An Attention U-Net model was trained against the gold standard of manual labeling and landmark drawing, enabling it to segment bones, detect landmarks, construct reference lines, and automatically measure the femoral version and tibial torsion angles. The model's performance was validated against manual segmentations by a musculoskeletal radiologist on a test dataset. The segmentation model demonstrated a sensitivity of 92.16% ± 0.02, a specificity of 99.96% ± <0.01, and an HD95 of 2.14 ± 2.39, with a Dice similarity coefficient (DSC) of 93.12% ± 0.01. Automatic measurements of femoral and tibial torsion angles correlated well with radiologists' measurements, with correlation coefficients of 0.64 for femoral and 0.54 for tibial angles (p < 0.05). Automated segmentation significantly reduced the measurement time per leg compared with manual methods (57.5 ± 8.3 s vs. 79.6 ± 15.9 s, p < 0.05). We developed a method to automate the measurement of femorotibial rotation on continuous axial CT scans of patients with osteoarthritis (OA) using a deep-learning approach. This method has the potential to expedite the analysis of patient data in busy clinical settings.
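The Dice similarity coefficient reported above is a straightforward overlap measure between two binary masks. A generic sketch (not the authors' validation code), with small toy masks standing in for bone segmentations:

```python
import numpy as np

def dice_coefficient(pred, target):
    """Dice similarity coefficient between two binary masks:
    2 * |A ∩ B| / (|A| + |B|)."""
    pred = np.asarray(pred).astype(bool)
    target = np.asarray(target).astype(bool)
    intersection = np.logical_and(pred, target).sum()
    total = pred.sum() + target.sum()
    if total == 0:
        return 1.0  # both masks empty: define as perfect agreement
    return 2.0 * intersection / total

# Two overlapping 4x4 toy masks: 4 and 6 foreground pixels, 4 shared.
a = np.zeros((4, 4)); a[1:3, 1:3] = 1
b = np.zeros((4, 4)); b[1:3, 1:4] = 1
print(dice_coefficient(a, b))  # 0.8
```

The same formula applies voxel-wise in 3D, which is how segmentation studies such as this one typically report DSC.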

Few-shot learning for highly accelerated 3D time-of-flight MRA reconstruction.

Li H, Chiew M, Dragonu I, Jezzard P, Okell TW

PubMed · Sep 10, 2025
To develop a deep learning-based reconstruction method for highly accelerated 3D time-of-flight MRA (TOF-MRA) that achieves high-quality reconstruction with robust generalization using extremely limited acquired raw data, addressing the challenge of time-consuming acquisition of high-resolution, whole-head angiograms. A novel few-shot learning-based reconstruction framework is proposed, featuring a 3D variational network specifically designed for 3D TOF-MRA that is pre-trained on simulated complex-valued, multi-coil raw k-space datasets synthesized from diverse open-source magnitude images and fine-tuned using only two single-slab experimentally acquired datasets. The proposed approach was evaluated against existing methods on acquired retrospectively undersampled in vivo k-space data from five healthy volunteers and on prospectively undersampled data from two additional subjects. The proposed method achieved superior reconstruction performance on experimentally acquired in vivo data over comparison methods, preserving most fine vessels with minimal artifacts with up to eight-fold acceleration. Compared to other simulation techniques, the proposed method generated more realistic raw k-space data for 3D TOF-MRA. Consistently high-quality reconstructions were also observed on prospectively undersampled data. By leveraging few-shot learning, the proposed method enabled highly accelerated 3D TOF-MRA relying on minimal experimentally acquired data, achieving promising results on both retrospective and prospective in vivo data while outperforming existing methods. Given the challenges of acquiring and sharing large raw k-space datasets, this holds significant promise for advancing research and clinical applications in high-resolution, whole-head 3D TOF-MRA imaging.
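The retrospective evaluation relies on undersampling fully sampled k-space after acquisition. A minimal 1D line-selection mask illustrates the idea; the `center_fraction` and random line-selection scheme here are assumptions for illustration, not the authors' actual sampling pattern:

```python
import numpy as np

rng = np.random.default_rng(0)

def undersample_kspace(kspace, accel=8, center_fraction=0.04):
    """Retrospectively undersample a 2D k-space plane.

    Keeps a fully sampled low-frequency band (autocalibration region)
    and randomly selects remaining phase-encode lines so the overall
    sampling rate is roughly 1/accel.
    """
    n_pe = kspace.shape[0]
    mask = np.zeros(n_pe, dtype=bool)
    n_center = max(1, int(n_pe * center_fraction))
    start = (n_pe - n_center) // 2
    mask[start:start + n_center] = True          # fully sampled center
    n_target = n_pe // accel
    n_random = max(0, n_target - n_center)
    candidates = np.flatnonzero(~mask)
    mask[rng.choice(candidates, size=n_random, replace=False)] = True
    return kspace * mask[:, None], mask

# Synthetic complex k-space plane, 256 phase-encode lines.
kspace = rng.standard_normal((256, 256)) + 1j * rng.standard_normal((256, 256))
under, mask = undersample_kspace(kspace, accel=8)
print(mask.sum())  # 32 of 256 lines kept (8-fold acceleration)
```

A reconstruction network is then trained to recover the fully sampled image from the masked data; the paper's contribution is doing this with simulated multi-coil k-space plus only two acquired fine-tuning datasets.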

An Explainable Deep Learning Model for Focal Liver Lesion Diagnosis Using Multiparametric MRI.

Shen Z, Chen L, Wang L, Dong S, Wang F, Pan Y, Zhou J, Wang Y, Xu X, Chong H, Lin H, Li W, Li R, Ma H, Ma J, Yu Y, Du L, Wang X, Zhang S, Yan F

PubMed · Sep 10, 2025
Purpose To assess the effectiveness of an explainable deep learning (DL) model, developed using multiparametric MRI (mpMRI) features, in improving the diagnostic accuracy and efficiency of radiologists for classification of focal liver lesions (FLLs). Materials and Methods FLLs ≥ 1 cm in diameter at mpMRI were included in the study. nnU-Net and Liver Imaging Feature Transformer (LIFT) models were developed using retrospective data from one hospital (January 2018-August 2023). nnU-Net was used for lesion segmentation and LIFT for FLL classification. External testing was performed on data from three hospitals (January 2018-December 2023), with a prospective test set obtained from January 2024 to April 2024. Model performance was compared with radiologists, and the impact of model assistance on junior and senior radiologist performance was assessed. Evaluation metrics included the Dice similarity coefficient (DSC) and accuracy. Results A total of 2131 individuals with FLLs (mean age, 56 years ± 12 [SD]; 1476 female) were included in the training, internal test, external test, and prospective test sets. Average DSC values for liver and tumor segmentation across the three test sets were 0.98 and 0.96, respectively. Average accuracies for feature and lesion classification across the three test sets were 93% and 97%, respectively. LIFT-assisted readings improved diagnostic accuracy (average 5.3% increase, P < .001), reduced reading time (average 34.5-second decrease, P < .001), and enhanced confidence (average 0.3-point increase, P < .001) of junior radiologists.
Conclusion The proposed DL model accurately detected and classified FLLs, improving diagnostic accuracy and efficiency of junior radiologists. ©RSNA, 2025.

Fixed point method for PET reconstruction with learned plug-and-play regularization.

Savanier M, Comtat C, Sureau F

PubMed · Sep 10, 2025
Objective: Deep learning has shown great promise for improving medical image reconstruction, including PET. However, concerns remain about the stability and robustness of these methods, especially when trained on limited data. This work aims to explore the use of the Plug-and-Play (PnP) framework in PET reconstruction to address these concerns.

Approach: We propose a convergent PnP algorithm for low-count PET reconstruction based on the Douglas-Rachford splitting method. We consider several denoisers trained to satisfy fixed-point conditions, with convergence properties ensured either during training or by design, including a spectrally normalized network and a deep equilibrium model. We evaluate the bias-standard deviation tradeoff across clinically relevant regions and an unseen pathological case in a synthetic experiment and a real study. Comparisons are made with model-based iterative reconstruction, post-reconstruction denoising, a deep end-to-end unfolded network, and PnP with a Gaussian denoiser.

Main Results: Our method achieves lower bias than post-reconstruction processing and reduced standard deviation at matched bias compared to model-based iterative reconstruction. While spectral normalization underperforms in generalization, the deep equilibrium model remains competitive with convolutional networks for plug-and-play reconstruction and generalizes better to the unseen pathology. Compared to the end-to-end unfolded network, it also generalizes more consistently.

Significance: This study demonstrates the potential of the PnP framework to improve image quality and quantification accuracy in PET reconstruction. It also highlights the importance of how convergence conditions are imposed on the denoising network to ensure robust and generalizable performance.
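The Douglas-Rachford PnP iteration can be sketched on a toy problem. Here a simple quadratic data term replaces the Poisson PET likelihood and soft-thresholding stands in for the learned denoiser, so this illustrates only the splitting structure, not the paper's trained networks:

```python
import numpy as np

def prox_data(v, y, gamma):
    """Proximal operator of the quadratic data term 0.5 * ||x - y||^2."""
    return (v + gamma * y) / (1.0 + gamma)

def denoiser(v, tau=0.1):
    """Stand-in for a learned denoiser: soft-thresholding.

    In the paper's setting this would be a trained network constrained
    to satisfy fixed-point conditions (e.g. spectrally normalized or a
    deep equilibrium model) so the iteration provably converges.
    """
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def pnp_drs(y, gamma=1.0, iters=100):
    """Plug-and-Play Douglas-Rachford splitting (generic sketch)."""
    z = np.zeros_like(y)
    for _ in range(iters):
        x = prox_data(z, y, gamma)       # data-consistency step
        z = z + denoiser(2 * x - z) - x  # reflected denoising step
    return prox_data(z, y, gamma)

y = np.array([1.0, -0.05, 0.5, 0.02])   # toy noisy "measurements"
x_hat = pnp_drs(y)
print(np.round(x_hat, 3))  # converges to soft(y, 0.1) = [0.9, 0, 0.4, 0]
```

With this convex stand-in denoiser the iteration provably converges; the paper's point is that the same guarantee can be retained with a neural denoiser when fixed-point conditions are imposed during training or by architecture.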

Comparison of DLIR and ASIR-V algorithms for virtual monoenergetic imaging in carotid CTA under a triple-low protocol.

Long J, Wang C, Yu M, Liu X, Xu W, Liu Z, Wang C, Wu Y, Sun A, Zhang S, Hu C, Xu K, Meng Y

PubMed · Sep 9, 2025
Stroke, frequently associated with carotid artery disease, is evaluated using carotid computed tomography angiography (CTA). Dual-energy CTA (DE-CTA) enhances imaging quality but presents challenges in maintaining high image clarity with low-dose scans. To compare the image quality of 50 keV virtual monoenergetic images (VMI) generated using Deep Learning Image Reconstruction (DLIR) and Adaptive Statistical Iterative Reconstruction-V (ASIR-V) algorithms under a triple-low scanning protocol in carotid CTA. A prospective study was conducted with 120 patients undergoing DE-CTA. The control group (Group 1), with a noise index (NI) of 4.0 and a contrast agent dose of 0.5 mL/kg, used the ASIR-V algorithm. The experimental group was divided into four subgroups: Group 2 (ASIR-V 50%), Group 3 (DLIR-L), Group 4 (DLIR-M), and Group 5 (DLIR-H), with a higher NI of 13.0 and a reduced contrast agent dose of 0.4 mL/kg. Objective image quality was assessed through signal-to-noise ratio (SNR), contrast-to-noise ratio (CNR), and standard deviation (SD), while subjective quality was evaluated using a 5-point Likert scale. Radiation dose and contrast agent volume were also measured. The triple-low scanning protocol reduced radiation exposure by 53.2%, contrast agent volume by 19.7%, and injection rate by 19.8%. The DLIR-H setting outperformed ASIR-V, demonstrating superior image quality, better noise suppression, and improved contrast in small vessels. VMI at 50 keV showed enhanced diagnostic clarity with minimal radiation and contrast agent usage. The DLIR algorithm, particularly at high settings, significantly enhances image quality in DE-CTA VMI under a triple-low scanning protocol, offering a better balance between radiation dose reduction and image clarity.
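The SNR and CNR figures of merit used in this comparison are simple ROI statistics. A minimal sketch with hypothetical HU values (not measurements from the study):

```python
def snr(roi_mean, bg_sd):
    """Signal-to-noise ratio: mean ROI attenuation over noise SD."""
    return roi_mean / bg_sd

def cnr(roi_mean, ref_mean, bg_sd):
    """Contrast-to-noise ratio: attenuation difference between the
    target ROI and a reference tissue, over noise SD."""
    return abs(roi_mean - ref_mean) / bg_sd

# Hypothetical HU measurements from a carotid CTA 50 keV VMI series.
vessel_hu, muscle_hu, noise_sd = 450.0, 60.0, 15.0
print(snr(vessel_hu, noise_sd))             # 30.0
print(cnr(vessel_hu, muscle_hu, noise_sd))  # 26.0
```

Stronger noise suppression (lower SD at the same attenuation) raises both metrics, which is why the DLIR-H reconstructions score higher than ASIR-V under the same triple-low acquisition.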

Transposing intensive care innovation from modern warfare to other resource-limited settings.

Jarrassier A, de Rocquigny G, Delagarde C, Ezanno AC, Josse F, Dubost C, Duranteau O, Boussen S, Pasquier P

PubMed · Sep 9, 2025
Delivering intensive care in conflict zones and other resource-limited settings presents unique clinical, logistical, and ethical challenges. These contexts, characterized by disrupted infrastructure, limited personnel, and prolonged field care, require adapted strategies to ensure critical care delivery. This scoping review aims to identify and characterize medical innovations developed or implemented in recent conflicts that may be relevant and transposable to intensive care units operating in other resource-limited settings. A scoping review was conducted following the Preferred Reporting Items for Systematic Reviews and Meta-Analyses Extension for Scoping Reviews (PRISMA-ScR) guidelines. Five major databases were searched for English-language publications from 2014 to 2025. Studies describing innovations applicable to intensive care in modern warfare or resource-limited settings were included. While many studies relied on experimental or simulated models, a subset described real-world applications in resource-limited environments, including ultrasound-guided regional analgesia, resuscitative endovascular balloon occlusion of the aorta, portable blood transfusion platforms, and artificial intelligence-supported monitoring of traumatic brain injury. Training strategies such as teleconsultation/telementoring and low-cost simulation were also emphasized. Few of these intensive care innovations have been validated in real-life wartime conditions. Innovations from modern warfare offer pragmatic and potentially transposable solutions for intensive care in resource-limited settings. Successfully adapting them requires validation and contextual adaptation, as well as the implementation of concrete collaborative strategies, including tailored training programs, joint simulation exercises, and structured knowledge translation initiatives, to ensure effective and sustainable integration.

Early Detection of Lung Metastases in Breast Cancer Using YOLOv10 and Transfer Learning: A Diagnostic Accuracy Study.

Taş HG, Taş MBH, Yildiz E, Aydin S

PubMed · Sep 9, 2025
BACKGROUND This study used CT imaging analyzed with deep learning techniques to assess the diagnostic accuracy of lung metastasis detection in patients with breast cancer. The aim of the research was to create and verify a system that uses YOLOv10 and transfer learning to detect malignant and metastatic lung lesions. MATERIAL AND METHODS From January 2023 to 2024, CT scans of 16 patients with breast cancer who had confirmed lung metastases were gathered retrospectively from Erzincan Mengücek Gazi Training and Research Hospital. The YOLOv10 deep learning system was used to assess a labeled dataset of 1264 enhanced CT images. RESULTS A total of 1264 labeled images from 16 patients were included. With an accuracy of 96.4%, sensitivity of 94.1%, specificity of 97.1%, and precision of 90.3%, the ResNet-50 model performed best. The robustness of the model was shown by its area under the curve (AUC) of 0.96. After dataset tuning, the GoogLeNet model's accuracy was 97.3%. These results highlight our approach's improved diagnostic capabilities over current approaches. CONCLUSIONS This study shows how YOLOv10 and transfer learning can be used to improve the diagnostic precision of pulmonary metastases in patients with breast cancer. The model's effectiveness is demonstrated by the excellent performance metrics attained, opening the door for its application in clinical settings. The suggested approach supports prompt and efficient treatment decisions by lowering radiologists' workload and improving the early diagnosis of metastatic lesions.
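The accuracy, sensitivity, specificity, and precision reported above all derive from a single confusion matrix. A quick sketch with hypothetical counts (chosen only to land near the reported values; this is not the study's actual confusion matrix):

```python
def binary_metrics(tp, fp, tn, fn):
    """Standard diagnostic-accuracy metrics from a confusion matrix."""
    return {
        "accuracy":    (tp + tn) / (tp + fp + tn + fn),
        "sensitivity": tp / (tp + fn),   # recall on metastatic lesions
        "specificity": tn / (tn + fp),
        "precision":   tp / (tp + fp),
    }

# Hypothetical counts: 94 true positives, 10 false positives,
# 340 true negatives, 6 false negatives.
m = binary_metrics(tp=94, fp=10, tn=340, fn=6)
print({k: round(v, 3) for k, v in m.items()})
# {'accuracy': 0.964, 'sensitivity': 0.94, 'specificity': 0.971, 'precision': 0.904}
```

Reporting all four together matters here because the dataset is imbalanced (far more non-metastatic regions than metastases), so accuracy alone would overstate performance.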

Use of artificial intelligence for classification of fractures around the elbow in adults according to the 2018 AO/OTA classification system.

Pettersson A, Axenhus M, Stukan T, Ljungberg O, Nåsell H, Razavian AS, Gordon M

PubMed · Sep 9, 2025
This study evaluates the accuracy of an artificial intelligence (AI) system, specifically a convolutional neural network (CNN), in classifying elbow fractures using the detailed 2018 AO/OTA fracture classification system. A retrospective analysis of 5,367 radiographic exams visualizing the elbow in adult patients (2002-2016) was conducted using a deep neural network. Radiographs were manually categorized according to the 2018 AO/OTA system by orthopedic surgeons. A pretrained EfficientNet-B4 network with squeeze-and-excitation layers was fine-tuned. Performance was assessed against a test set of 208 radiographs reviewed independently by four orthopedic surgeons, with disagreements resolved via consensus. The study evaluated 54 distinct fracture types, each with a minimum of 10 cases, ensuring adequate dataset representation. Overall fracture detection achieved an AUC of 0.88 (95% CI 0.83-0.93). The weighted mean AUC was 0.80 for proximal radius fractures, 0.86 for proximal ulna fractures, and 0.85 for distal humerus fractures. These results underscore the AI system's ability to accurately detect and classify a broad spectrum of elbow fractures. AI systems such as CNNs can enhance clinicians' ability to identify and classify elbow fractures, offering a complementary tool to improve diagnostic accuracy and optimize treatment decisions. The findings suggest AI can reduce the risk of undiagnosed fractures, improving clinical outcomes and radiologic evaluation.
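The AUC values reported here are equivalent to the Mann-Whitney U statistic: the probability that a randomly chosen fracture case receives a higher model score than a randomly chosen non-fracture case. A minimal sketch with invented scores:

```python
def auc_from_scores(labels, scores):
    """AUC as the probability that a positive case outranks a negative
    case (Mann-Whitney U formulation), with ties counted as 0.5."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum(
        1.0 if p > n else 0.5 if p == n else 0.0
        for p in pos for n in neg
    )
    return wins / (len(pos) * len(neg))

# Toy example: 3 fracture cases (1) and 3 normals (0) with model scores.
labels = [1, 1, 1, 0, 0, 0]
scores = [0.9, 0.8, 0.4, 0.7, 0.3, 0.2]
print(auc_from_scores(labels, scores))  # 8/9 ≈ 0.889
```

This pairwise-ranking view explains why AUC is threshold-free, which makes it a natural summary metric when, as here, 54 fracture classes are evaluated at once.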

Prediction of double expression status of primary CNS lymphoma using multiparametric MRI radiomics combined with habitat radiomics: a double-center study.

Zhao J, Liang L, Li J, Li Q, Li F, Niu L, Xue C, Fu W, Liu Y, Song S, Liu X

PubMed · Sep 9, 2025
Double-expression lymphoma (DEL) is an independent high-risk prognostic factor for primary CNS lymphoma (PCNSL), and its diagnosis currently relies on invasive methods. This study is the first to integrate radiomics and habitat radiomics features to enhance preoperative DEL status prediction models via intratumoral heterogeneity analysis. Clinical, pathological, and MRI imaging data of 139 PCNSL patients from two independent centers were collected. Radiomics, habitat radiomics, and combined models were constructed using machine learning classifiers, including KNN, DT, LR, and SVM. The AUC in the test set was used to select the optimal predictive model. DCA and calibration curves were employed to evaluate the predictive performance of the models. SHAP analysis was utilized to visualize the contribution of each feature in the optimal model. Among the radiomics-based models, the Combined radiomics model constructed with LR performed best, with an AUC of 0.8779 (95% CI: 0.8171-0.9386) in the training set and 0.7166 (95% CI: 0.497-0.9361) in the test set. The Habitat radiomics model (SVM) based on T1-CE showed an AUC of 0.7446 (95% CI: 0.6503-0.8388) in the training set and 0.7433 (95% CI: 0.5322-0.9545) in the test set. Finally, the Combined all model exhibited the highest predictive performance: LR achieved AUC values of 0.8962 (95% CI: 0.8299-0.9625) and 0.8289 (95% CI: 0.6785-0.9793) in the training and test sets, respectively. The Combined all model developed in this study can provide effective reference value in predicting the DEL status of PCNSL, and habitat radiomics significantly enhances the predictive efficacy.
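The "Combined all" strategy, concatenating conventional radiomics and habitat radiomics feature vectors before an LR classifier, can be sketched as follows. All features and labels below are synthetic, and the minimal gradient-descent logistic regression stands in for the study's pipeline:

```python
import numpy as np

rng = np.random.default_rng(42)

def train_logreg(X, y, lr=0.1, epochs=500):
    """Minimal logistic regression fitted by full-batch gradient descent."""
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))  # predicted probabilities
        w -= lr * (X.T @ (p - y)) / len(y)      # gradient of log-loss w.r.t. w
        b -= lr * (p - y).mean()                # gradient w.r.t. bias
    return w, b

# Synthetic per-patient features: 5 conventional radiomics features and
# 3 habitat radiomics features, concatenated ("Combined all" idea).
n = 100
radiomics = rng.standard_normal((n, 5))
habitat = rng.standard_normal((n, 3))
X = np.hstack([radiomics, habitat])
# Synthetic DEL label driven by one feature from each block, plus noise.
y = (X[:, 0] + 0.5 * X[:, 5] + 0.3 * rng.standard_normal(n) > 0).astype(float)

w, b = train_logreg(X, y)
p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
acc = ((p > 0.5) == y).mean()
print(round(acc, 2))  # high training accuracy on this separable toy data
```

The point of the concatenation is that habitat features summarize intratumoral subregions the whole-tumor features miss, and the classifier weights (inspected via SHAP in the study) show how much each block contributes.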