Page 135 of 152 (1519 results)

Multimodal integration of longitudinal noninvasive diagnostics for survival prediction in immunotherapy using deep learning.

Yeghaian M, Bodalal Z, van den Broek D, Haanen JBAG, Beets-Tan RGH, Trebeschi S, van Gerven MAJ

PubMed · May 26, 2025
Immunotherapies have revolutionized the landscape of cancer treatments. However, our understanding of response patterns in advanced cancers treated with immunotherapy remains limited. By leveraging routinely collected noninvasive longitudinal and multimodal data with artificial intelligence, we could unlock the potential to transform immunotherapy for cancer patients, paving the way for personalized treatment approaches. In this study, we developed a novel artificial neural network architecture, the multimodal transformer-based simple temporal attention (MMTSimTA) network, building upon a combination of recent successful developments. We integrated pre- and on-treatment blood measurements, prescribed medications, and CT-based volumes of organs from a large pan-cancer cohort of 694 patients treated with immunotherapy to predict mortality at 3, 6, 9, and 12 months. Different variants of our extended MMTSimTA network were implemented and compared to baseline methods incorporating intermediate and late fusion-based integration. The strongest prognostic performance was demonstrated using a variant of the MMTSimTA model, with areas under the curve (AUCs) of 0.84 ± 0.04, 0.83 ± 0.02, 0.82 ± 0.02, and 0.81 ± 0.03 for 3-, 6-, 9-, and 12-month survival prediction, respectively. Our findings show that integrating noninvasive longitudinal data using our novel architecture yields improved multimodal prognostic performance, especially in short-term survival prediction. Our study demonstrates that multimodal longitudinal integration of noninvasive data using deep learning may offer a promising approach for personalized prognostication in immunotherapy-treated cancer patients.
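The "simple temporal attention" idea named in the abstract can be illustrated with a minimal sketch: score each timepoint's feature vector, softmax the scores into attention weights, and pool across visits. Everything here (feature values, scoring weights, dimensionality) is hypothetical and is not the authors' implementation.

```python
import math

def softmax(xs):
    m = max(xs)
    es = [math.exp(x - m) for x in xs]
    s = sum(es)
    return [e / s for e in es]

def temporal_attention_pool(visits, score_w):
    """Pool a sequence of per-timepoint feature vectors into one vector.

    visits:  list of feature vectors (one per blood draw / scan).
    score_w: scoring weights (learned in practice; fixed here for illustration).
    """
    scores = [sum(w * f for w, f in zip(score_w, v)) for v in visits]
    attn = softmax(scores)                      # one weight per timepoint
    dim = len(visits[0])
    return [sum(a * v[i] for a, v in zip(attn, visits)) for i in range(dim)]

# Three longitudinal visits, two features each (hypothetical values).
visits = [[0.2, 1.0], [0.5, 0.8], [0.9, 0.1]]
pooled = temporal_attention_pool(visits, score_w=[1.0, 0.0])
print(pooled)  # weighted average, tilted toward the highest-scoring visit
```

In the full model the attention scores would themselves be learned, and each modality (blood values, medications, organ volumes) would contribute its own embedding before pooling.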

Predicting Surgical Versus Nonsurgical Management of Acute Isolated Distal Radius Fractures in Patients Under Age 60 Using a Convolutional Neural Network.

Hsu D, Persitz J, Noori A, Zhang H, Mashouri P, Shah R, Chan A, Madani A, Paul R

PubMed · May 26, 2025
Distal radius fractures (DRFs) represent up to 20% of the fractures in the emergency department. Delays to surgery of more than 14 days are associated with poorer functional outcomes and increased health care utilization/costs. At our institution, the average time to surgery is more than 19 days because of the separation of surgical and nonsurgical care pathways and a lengthy referral process. To address this challenge, we aimed to create a convolutional neural network (CNN) capable of automating DRF x-ray analysis and triaging. We hypothesized that this model would accurately predict whether an acute isolated DRF in a patient under the age of 60 years would be treated surgically or nonsurgically at our institution based on the radiographic input. We included 163 patients under the age of 60 years who presented to the emergency department between 2018 and 2023 with an acute isolated DRF and who were referred for clinical follow-up. Radiographs taken within 4 weeks of injury were collected in posterior-anterior and lateral views and then preprocessed for model training. The surgeons' decision to treat surgically or nonsurgically at our institution was the reference standard for assessing the model's prediction accuracy. We included 723 radiographic posterior-anterior and lateral pairs (385 surgical and 338 nonsurgical) for model training. The best-performing model (seven CNN layers, one fully connected layer, an image input size of 256 × 256 pixels, and a 1.5× weighting for volarly displaced fractures) achieved 88% accuracy and 100% sensitivity. Values for true positive (100%), true negative (72.7%), false positive (27.3%), and false negative (0%) rates were calculated. After training based on institution-specific indications, a CNN-based algorithm can predict with 88% accuracy whether an acute isolated DRF in a patient under the age of 60 years will be treated surgically or nonsurgically.
By promptly identifying patients who would benefit from expedited surgical treatment pathways, this model can reduce referral times.
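The reported accuracy, sensitivity, and true-negative rate are standard confusion-matrix quantities; a minimal sketch of how they are computed, using hypothetical counts chosen only to roughly mirror the reported rates:

```python
def triage_metrics(tp, fp, tn, fn):
    """Standard confusion-matrix metrics for a binary surgical-triage model."""
    total = tp + fp + tn + fn
    return {
        "accuracy":    (tp + tn) / total,
        "sensitivity": tp / (tp + fn),   # true-positive rate: surgical cases caught
        "specificity": tn / (tn + fp),   # true-negative rate: nonsurgical kept out
    }

# Hypothetical held-out counts: a model that misses no surgical cases
# (sensitivity 1.0) but over-calls some nonsurgical ones.
m = triage_metrics(tp=50, fp=9, tn=24, fn=0)
print(m)
```

For a triage tool, a zero false-negative count is the operating point that matters: no patient who needs surgery is routed to the slower nonsurgical pathway, at the cost of some extra surgical referrals.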

DeepInverse: A Python package for solving imaging inverse problems with deep learning

Julián Tachella, Matthieu Terris, Samuel Hurault, Andrew Wang, Dongdong Chen, Minh-Hai Nguyen, Maxime Song, Thomas Davies, Leo Davy, Jonathan Dong, Paul Escande, Johannes Hertrich, Zhiyuan Hu, Tobías I. Liaudat, Nils Laurent, Brett Levac, Mathurin Massias, Thomas Moreau, Thibaut Modrzyk, Brayan Monroy, Sebastian Neumayer, Jérémy Scanvic, Florian Sarron, Victor Sechaud, Georg Schramm, Chao Tang, Romain Vo, Pierre Weiss

arXiv preprint · May 26, 2025
DeepInverse is an open-source PyTorch-based library for solving imaging inverse problems. The library covers all crucial steps in image reconstruction, from the efficient implementation of forward operators (e.g., optics, MRI, tomography) to the definition and resolution of variational problems and the design and training of advanced neural network architectures. In this paper, we describe the main functionality of the library and discuss the main design choices.
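The variational problems the library targets can be illustrated independently of its API: a minimal plain-Python sketch of gradient descent on a Tikhonov-regularized least-squares objective, min over x of ||Ax - y||^2 + lam ||x||^2, with a hypothetical 2x2 forward operator standing in for blur/MRI/tomography. This is not DeepInverse's API, just the underlying idea.

```python
def matvec(A, x):
    return [sum(a * xi for a, xi in zip(row, x)) for row in A]

def grad(A, x, y, lam):
    # gradient of ||Ax - y||^2 + lam * ||x||^2  ->  2 A^T (Ax - y) + 2 lam x
    r = [ai - yi for ai, yi in zip(matvec(A, x), y)]
    At_r = [sum(A[i][j] * r[i] for i in range(len(A))) for j in range(len(x))]
    return [2 * g + 2 * lam * xi for g, xi in zip(At_r, x)]

def reconstruct(A, y, lam=0.1, lr=0.05, steps=500):
    """Recover x from measurement y = A x (plus noise) by gradient descent."""
    x = [0.0] * len(A[0])
    for _ in range(steps):
        x = [xi - lr * gi for xi, gi in zip(x, grad(A, x, y, lam))]
    return x

# Toy forward operator and measurement (hypothetical values).
A = [[2.0, 1.0], [1.0, 3.0]]
y = [3.0, 5.0]
x_hat = reconstruct(A, y)
print(x_hat)
```

In the library itself, the forward operator, the regularizer (often a learned denoiser or a trained network), and the solver would each be swapped for far richer components; the objective's shape stays the same.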

Advancements in Medical Image Classification through Fine-Tuning Natural Domain Foundation Models

Mobina Mansoori, Sajjad Shahabodini, Farnoush Bayatmakou, Jamshid Abouei, Konstantinos N. Plataniotis, Arash Mohammadi

arXiv preprint · May 26, 2025
Foundation models are large-scale models, pre-trained on massive datasets, that perform a wide range of tasks. These models have shown consistently improved results with the introduction of new methods. It is crucial to analyze how these trends impact the medical field and determine whether these advancements can drive meaningful change. This study investigates the application of recent state-of-the-art foundation models, DINOv2, MAE, VMamba, CoCa, SAM2, and AIMv2, for medical image classification. We explore their effectiveness on datasets including CBIS-DDSM for mammography, ISIC2019 for skin lesions, APTOS2019 for diabetic retinopathy, and CheXpert for chest radiographs. By fine-tuning these models and evaluating their configurations, we aim to understand the potential of these advancements in medical image classification. The results indicate that these advanced models significantly enhance classification outcomes, demonstrating robust performance despite limited labeled data. Based on our results, the AIMv2, DINOv2, and SAM2 models outperformed the others, demonstrating that progress in natural-domain training has positively impacted the medical domain and improved classification outcomes. Our code is publicly available at: https://github.com/sajjad-sh33/Medical-Transfer-Learning.
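Fine-tuning a pretrained foundation model for classification often reduces to training a small head on backbone features; a minimal sketch, with the frozen backbone stood in by a fixed nonlinear feature map and all data hypothetical (not the study's pipeline):

```python
import math

def backbone(x):
    # Stand-in for a frozen pretrained encoder: a fixed nonlinear feature map.
    return [x[0], x[1], x[0] * x[1]]

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def train_head(data, lr=0.5, epochs=200):
    """Train a logistic classification head on frozen backbone features."""
    w, b = [0.0, 0.0, 0.0], 0.0
    for _ in range(epochs):
        for x, label in data:
            f = backbone(x)
            p = sigmoid(sum(wi * fi for wi, fi in zip(w, f)) + b)
            err = p - label                    # gradient of the log-loss
            w = [wi - lr * err * fi for wi, fi in zip(w, f)]
            b -= lr * err
    return w, b

# Hypothetical "images" reduced to 2-D inputs, binary labels.
data = [([0.0, 0.0], 0), ([0.0, 1.0], 0), ([1.0, 0.0], 1), ([1.0, 1.0], 1)]
w, b = train_head(data)
preds = [round(sigmoid(sum(wi * fi for wi, fi in zip(w, backbone(x))) + b))
         for x, _ in data]
print(preds)  # matches the labels after training
```

Full fine-tuning additionally updates the backbone weights at a small learning rate; the head-only variant above is the cheaper baseline usually tried first when labeled data is scarce.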

Evolution of deep learning tooth segmentation from CT/CBCT images: a systematic review and meta-analysis.

Kot WY, Au Yeung SY, Leung YY, Leung PH, Yang WF

PubMed · May 26, 2025
Deep learning has been utilized to segment teeth from computed tomography (CT) or cone-beam CT (CBCT). However, the performance of deep learning is unknown due to multiple models and diverse evaluation metrics. This systematic review and meta-analysis aims to evaluate the evolution and performance of deep learning in tooth segmentation. We systematically searched PubMed, Web of Science, Scopus, IEEE Xplore, arXiv.org, and ACM for studies investigating deep learning in human tooth segmentation from CT/CBCT. Included studies were assessed using the Quality Assessment of Diagnostic Accuracy Studies (QUADAS-2) tool. Data were extracted for meta-analyses by random-effects models. A total of 30 studies were included in the systematic review, and 28 of them were included for meta-analyses. Various deep learning algorithms were categorized according to the backbone network, encompassing single-stage convolutional models, convolutional models with U-Net architecture, Transformer models, convolutional models with attention mechanisms, and combinations of multiple models. Convolutional models with U-Net architecture were the most commonly used deep learning algorithms. The integration of attention mechanisms within convolutional models has become a new research focus. Twenty-nine evaluation metrics were identified, with the Dice Similarity Coefficient (DSC) being the most popular. The pooled results were 0.93 [0.93, 0.93] for DSC, 0.86 [0.85, 0.87] for Intersection over Union (IoU), 0.22 [0.19, 0.24] for Average Symmetric Surface Distance (ASSD), 0.92 [0.90, 0.94] for sensitivity, 0.71 [0.26, 1.17] for 95% Hausdorff distance, and 0.96 [0.93, 0.98] for precision. No significant difference was observed in the segmentation of single-rooted or multi-rooted teeth. No obvious correlation between sample size and segmentation performance was observed.
Multiple deep learning algorithms have been successfully applied to tooth segmentation from CT/CBCT, and their evolution has been well summarized and categorized according to their backbone structures. In the future, studies with standardized protocols and openly labelled datasets are needed.
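The random-effects pooling used in such meta-analyses can be sketched with the DerSimonian-Laird estimator; the per-study DSC values and standard errors below are hypothetical, not the review's data:

```python
def dersimonian_laird(effects, ses):
    """Pool study effects under a random-effects model (DerSimonian-Laird)."""
    k = len(effects)
    w = [1.0 / se**2 for se in ses]                     # fixed-effect weights
    fixed = sum(wi * yi for wi, yi in zip(w, effects)) / sum(w)
    q = sum(wi * (yi - fixed) ** 2 for wi, yi in zip(w, effects))
    c = sum(w) - sum(wi**2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - (k - 1)) / c)                  # between-study variance
    w_star = [1.0 / (se**2 + tau2) for se in ses]       # random-effects weights
    pooled = sum(wi * yi for wi, yi in zip(w_star, effects)) / sum(w_star)
    se_pooled = (1.0 / sum(w_star)) ** 0.5
    return pooled, se_pooled, tau2

# Hypothetical per-study Dice coefficients and standard errors.
dsc = [0.91, 0.94, 0.92, 0.95, 0.93]
se = [0.010, 0.008, 0.012, 0.009, 0.011]
pooled, se_p, tau2 = dersimonian_laird(dsc, se)
print(round(pooled, 3), round(se_p, 3))
```

When between-study heterogeneity (tau squared) is zero, the estimate collapses to the fixed-effect pooled mean; when it is large, studies are weighted more equally.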

tUbe net: a generalisable deep learning tool for 3D vessel segmentation

Holroyd, N. A., Li, Z., Walsh, C., Brown, E. E., Shipley, R. J., Walker-Samuel, S.

bioRxiv preprint · May 26, 2025
Deep learning has become an invaluable tool for bioimage analysis but, while open-source cell annotation software such as Cellpose is widely used, an equivalent tool for three-dimensional (3D) vascular annotation does not exist. With the vascular system being directly impacted by a broad range of diseases, there is significant medical interest in quantitative analysis for vascular imaging. However, existing deep learning approaches for this task are specialised to particular tissue types or imaging modalities. We present a new deep learning model for segmentation of vasculature that is generalisable across tissues, modalities, scales and pathologies. To create a generalisable model, a 3D convolutional neural network was trained using data from multiple modalities including optical imaging, computed tomography and photoacoustic imaging. Through this varied training set, the model was forced to learn features of vessels common across modalities and scales. Following this, the general model was fine-tuned to different applications with a minimal amount of manually labelled ground truth data. It was found that the general model could be specialised to segment new datasets, with a high degree of accuracy, using as little as 0.3% of the volume of that dataset for fine-tuning. As such, this model enables users to produce accurate segmentations of 3D vascular networks without the need to label large amounts of training data.
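Segmentation accuracy in this setting is commonly scored with the Dice similarity coefficient; a minimal sketch on tiny hypothetical binary volumes:

```python
def dice_3d(pred, truth):
    """Dice similarity coefficient between two binary 3D volumes (nested lists)."""
    inter = fp = fn = 0
    for ps, ts in zip(pred, truth):
        for pr, tr in zip(ps, ts):
            for p, t in zip(pr, tr):
                inter += p & t           # voxels both masks mark as vessel
                fp += p & (1 - t)        # predicted vessel, actually background
                fn += (1 - p) & t        # missed vessel voxels
    return 2 * inter / (2 * inter + fp + fn)

# Two tiny 2x2x2 binary volumes (hypothetical vessel masks).
truth = [[[1, 1], [0, 0]], [[1, 0], [0, 0]]]
pred  = [[[1, 1], [0, 0]], [[0, 0], [0, 0]]]
print(dice_3d(pred, truth))  # 2*2 / (2*2 + 0 + 1) = 0.8
```

Because vessels occupy a tiny fraction of most volumes, an overlap measure like Dice is far more informative than voxel accuracy, which a model could inflate by predicting background everywhere.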

Radiomics based on dual-energy CT for noninvasive prediction of cervical lymph node metastases in patients with nasopharyngeal carcinoma.

Li L, Yang D, Wu Y, Sun R, Qin Y, Kang M, Deng X, Bu M, Li Z, Zeng Z, Zeng X, Jiang M, Chen BT

PubMed · May 26, 2025
To develop and validate a machine learning model based on dual-energy computed tomography (DECT) for predicting cervical lymph node metastases (CLNM) in patients diagnosed with nasopharyngeal carcinoma (NPC). This prospective single-center study enrolled patients with NPC; assessments included both DECT and 18F-fluorodeoxyglucose positron emission tomography/computed tomography (18F-FDG PET/CT). Radiomics features were extracted from each region of interest (ROI) for cervical lymph nodes using arterial and venous phase images at 100 keV and 150 keV, either individually as non-fusion models or combined as fusion models on the DECT images. The performance of the random forest (RF) models, combined with radiomics features, was evaluated by area under the receiver operating characteristic curve (AUC) analysis. DeLong's test was employed to compare model performances, while decision curve analysis (DCA) assessed the clinical utility of the predictive models. Sixty-six patients with NPC were included for analysis and were divided into a training set (n = 42) and a validation set (n = 22). A total of 13 radiomic models were constructed (4 non-fusion models and 9 fusion models). In the non-fusion models, when the threshold value exceeded 0.4, the venous phase at 100 keV (V100) model (AUC, 0.9667; 95 % confidence interval [95 % CI], 0.9363-0.9901) exhibited a higher net benefit than the other non-fusion models. The V100 + V150 fusion model achieved the best performance, with an AUC of 0.9697 (95 % CI, 0.9393-0.9907). DECT-based radiomics effectively diagnosed CLNM in patients with NPC and may potentially be a valuable tool for clinical decision-making. This study improved pre-operative evaluation, treatment strategy selection, and prognostic evaluation for patients with nasopharyngeal carcinoma by combining DECT and radiomics to predict cervical lymph node status prior to treatment.
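The AUC used to evaluate these models has a direct probabilistic reading: the chance that a randomly chosen positive case is scored above a randomly chosen negative one (the Mann-Whitney formulation). A minimal sketch with hypothetical scores:

```python
def auc(scores_pos, scores_neg):
    """AUC as the probability a positive outranks a negative (Mann-Whitney U)."""
    wins = 0.0
    for p in scores_pos:
        for n in scores_neg:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5              # ties count as half a win
    return wins / (len(scores_pos) * len(scores_neg))

# Hypothetical model scores for node-positive vs node-negative cases.
pos = [0.90, 0.80, 0.75, 0.60]
neg = [0.70, 0.40, 0.30]
print(auc(pos, neg))  # 11 of 12 positive/negative pairs correctly ranked
```

This pairwise view also explains why AUC is threshold-free, which is why the abstract additionally uses decision curve analysis to judge net benefit at specific threshold probabilities.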

Clinical, radiological, and radiomics feature-based explainable machine learning models for prediction of neurological deterioration and 90-day outcomes in mild intracerebral hemorrhage.

Zeng W, Chen J, Shen L, Xia G, Xie J, Zheng S, He Z, Deng L, Guo Y, Yang J, Lv Y, Qin G, Chen W, Yin J, Wu Q

PubMed · May 26, 2025
The risks and prognosis of patients with mild intracerebral hemorrhage (ICH) are easily overlooked by clinicians. Our goal was to use machine learning (ML) methods to predict neurological deterioration (ND) and 90-day prognosis in patients with mild ICH. This prospective study recruited 257 patients with mild ICH. After exclusions, 148 patients were included in the ND study and 144 patients in the 90-day prognosis study. We trained five ML models using filtered data, including clinical, traditional imaging, and radiomics indicators based on non-contrast computed tomography (NCCT). Additionally, we incorporated the Shapley Additive Explanations (SHAP) method to display key features and visualize the decision-making process of the model for each individual. A total of 21 (14.2%) patients with mild ICH developed ND, and 35 (24.3%) had a 90-day poor prognosis. In the validation set, the support vector machine (SVM) models achieved an AUC of 0.846 (95% confidence interval (CI), 0.627-1.000) and an F1-score of 0.667 for predicting ND, and an AUC of 0.970 (95% CI, 0.928-1.000) and an F1-score of 0.846 for predicting 90-day prognosis. The SHAP analysis indicated that several clinical features, the island sign, and the radiomics features of the hematoma were of significant value in predicting ND and 90-day prognosis. The ML models, constructed using clinical, traditional imaging, and radiomics indicators, demonstrated good classification performance in predicting ND and 90-day prognosis in patients with mild ICH, and have the potential to serve as an effective tool in clinical practice.
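SHAP attributes a prediction additively to its input features via Shapley values; for a model with only a few features they can be computed exactly by enumerating feature subsets. A minimal sketch on a hypothetical linear risk model, where the attribution reduces to weight times feature deviation from baseline:

```python
from itertools import combinations
from math import factorial

def shapley(model, x, baseline):
    """Exact Shapley values; features absent from a subset take baseline values."""
    n = len(x)
    phi = [0.0] * n
    for i in range(n):
        others = [j for j in range(n) if j != i]
        for k in range(n):
            for s in combinations(others, k):
                weight = factorial(k) * factorial(n - k - 1) / factorial(n)
                with_i = [x[j] if j in s or j == i else baseline[j]
                          for j in range(n)]
                without = [x[j] if j in s else baseline[j] for j in range(n)]
                phi[i] += weight * (model(with_i) - model(without))
    return phi

# Toy risk score over hypothetical hematoma features (volume, island sign, age z).
model = lambda z: 0.5 * z[0] + 2.0 * z[1] - 1.0 * z[2]
x, base = [4.0, 1.0, 2.0], [0.0, 0.0, 0.0]
phi = shapley(model, x, base)
print(phi)  # linear model: [0.5*4, 2.0*1, -1.0*2]
```

The values satisfy the efficiency property: they sum exactly to the gap between the model's output at x and at the baseline, which is what makes per-patient SHAP plots additive.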

Can intraoperative improvement of radial endobronchial ultrasound imaging enhance the diagnostic yield in peripheral pulmonary lesions?

Nishida K, Ito T, Iwano S, Okachi S, Nakamura S, Chrétien B, Chen-Yoshikawa TF, Ishii M

PubMed · May 26, 2025
Data regarding the diagnostic efficacy of radial endobronchial ultrasound (R-EBUS) findings obtained via transbronchial needle aspiration (TBNA)/biopsy (TBB) with endobronchial ultrasonography with a guide sheath (EBUS-GS) for peripheral pulmonary lesions (PPLs) are lacking. We evaluated whether intraoperative probe repositioning improves R-EBUS imaging and affects diagnostic yield and safety of EBUS-guided sampling for PPLs. We retrospectively studied 363 patients with PPLs who underwent TBNA/TBB (83 lesions) or TBB (280 lesions) using EBUS-GS. Based on the R-EBUS findings before and after these procedures, patients were categorized into three groups: the improved R-EBUS image (n = 52), unimproved R-EBUS image (n = 69), and initial within-lesion groups (n = 242). The impact of improved R-EBUS findings on diagnostic yield and complications was assessed using multivariable logistic regression, adjusting for lesion size, lesion location, and the presence of a bronchus leading to the lesion on CT. A separate exploratory random-forest model with SHAP analysis was used to explore factors associated with successful repositioning in lesions not initially "within." The diagnostic yield in the improved R-EBUS group was significantly higher than that in the unimproved R-EBUS group (76.9% vs. 46.4%, p = 0.001). The regression model revealed that the improvement in intraoperative R-EBUS findings was associated with a high diagnostic yield (odds ratio: 3.55, 95% confidence interval, 1.57-8.06, p = 0.002). Machine learning analysis indicated that inner lesion location and radiographic visibility were the most influential predictors of successful repositioning. The complication rates were similar across all groups (total complications: 5.8% vs. 4.3% vs. 6.2%, p = 0.943). Improved R-EBUS findings during TBNA/TBB or TBB with EBUS-GS were associated with a high diagnostic yield without an increase in complications, even when the initial R-EBUS findings were inadequate. 
This suggests that repeated intraoperative probe repositioning can safely improve diagnostic yield.
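The reported odds ratio and confidence interval map directly to the logistic-regression coefficient scale; a minimal sketch that round-trips the abstract's figures, assuming a standard Wald 95% CI:

```python
import math

def or_to_coef(odds_ratio, ci_low, ci_high):
    """Recover the logistic coefficient and its SE from an OR with a 95% CI."""
    beta = math.log(odds_ratio)
    se = (math.log(ci_high) - math.log(ci_low)) / (2 * 1.96)
    return beta, se

def coef_to_or(beta, se):
    """Forward direction: coefficient scale back to an OR with a 95% CI."""
    return (math.exp(beta),
            math.exp(beta - 1.96 * se),
            math.exp(beta + 1.96 * se))

beta, se = or_to_coef(3.55, 1.57, 8.06)
print(round(beta, 3), round(se, 3))
print(tuple(round(v, 2) for v in coef_to_or(beta, se)))  # approximately recovers the reported OR and CI
```

The asymmetry of the reported interval (1.57-8.06 around 3.55) is exactly what a symmetric interval on the log-odds scale produces once exponentiated.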