
Machine learning-based model to predict long-term tumor control and additional interventions following pituitary surgery for Cushing's disease.

Shinya Y, Ghaith AK, Hong S, Erickson D, Bancos I, Herndon JS, Davidge-Pitts CJ, Nguyen RT, Bon Nieves A, Sáez Alegre M, Morshed RA, Pinheiro Neto CD, Peris Celda M, Pollock BE, Meyer FB, Atkinson JLD, Van Gompel JJ

PubMed · Jul 1, 2025
In this study, the authors aimed to establish a supervised machine learning (ML) model based on multiple tree-based algorithms to predict long-term biochemical outcomes and intervention-free survival (IFS) after endonasal transsphenoidal surgery (ETS) in patients with Cushing's disease (CD). The medical records of patients who underwent ETS for CD between 2013 and 2023 were reviewed. Data were collected on the patients' baseline characteristics, intervention details, histopathology, surgical outcomes, and postoperative endocrine functions. The study's primary outcome was IFS, and the therapeutic outcomes were labeled as "under control" or "treatment failure," depending on whether additional therapeutic interventions after primary ETS were required. Decision tree and random forest classifiers were trained and then tested on unseen data to predict long-term IFS, using an 80/20 cohort split. Data from 150 patients, with a median follow-up period of 56 months, were extracted. In the cohort, 42 (28%) patients required additional intervention for persistent or recurrent CD. Consequently, the IFS rates following ETS alone were 83% at 3 years and 78% at 5 years. Multivariable Cox proportional hazards analysis demonstrated that a smaller diameter of MRI-detectable tumor (hazard ratio 0.95, 95% CI 0.90-0.99; p = 0.047) was significantly associated with longer IFS, whereas lack of tumor detection on MRI was a poor prognostic factor. The ML-based model using a decision tree displayed 91% accuracy (95% CI 0.70-0.94, sensitivity 87.0%, specificity 89.0%) in predicting IFS in the unseen test dataset. Random forest analysis revealed that tumor size (mean minimal depth 1.67), Knosp grade (1.75), patient age (1.80), and BMI (1.99) were the four most significant predictors of long-term IFS. The ML algorithm could predict long-term postoperative endocrinological remission in CD with high accuracy, indicating that prognosis may vary not only with previously reported factors such as tumor size, Knosp grade, gross-total resection, and patient age but also with BMI. The decision tree flowchart could potentially stratify patients with CD before ETS, allowing for the selection of personalized treatment options and thereby assisting in determining treatment plans for these patients. This ML model may lead to a deeper understanding of the complex mechanisms of CD by uncovering patterns embedded within the data.
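
A minimal sketch of the tree-based workflow described above, using scikit-learn. The dataframe, column names, and hyperparameters are hypothetical stand-ins (the columns mirror the four reported top predictors), not the authors' actual pipeline.

```python
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score, recall_score

df = pd.read_csv("cushings_cohort.csv")  # hypothetical file
X = df[["tumor_size_mm", "knosp_grade", "age", "bmi"]]  # reported top predictors
y = df["treatment_failure"]  # 1 = additional intervention required

# 80/20 cohort split, mirroring the paper's train/test protocol
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=0)

for model in (DecisionTreeClassifier(max_depth=4, random_state=0),
              RandomForestClassifier(n_estimators=500, random_state=0)):
    model.fit(X_train, y_train)
    pred = model.predict(X_test)
    print(type(model).__name__,
          "accuracy:", accuracy_score(y_test, pred),
          "sensitivity:", recall_score(y_test, pred))
```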

Enhanced diagnostic and prognostic assessment of cardiac amyloidosis using combined <sup>11</sup>C-PiB PET/CT and <sup>99m</sup>Tc-DPD scintigraphy.

Hong Z, Spielvogel CP, Xue S, Calabretta R, Jiang Z, Yu J, Kluge K, Haberl D, Nitsche C, Grünert S, Hacker M, Li X

PubMed · Jul 1, 2025
Cardiac amyloidosis (CA) is a severe condition characterized by amyloid fibril deposition in the myocardium, leading to restrictive cardiomyopathy and heart failure. Differentiating between amyloidosis subtypes is crucial due to distinct treatment strategies, and individual conventional diagnostic methods lack the accuracy needed for effective subtype identification. This study aimed to evaluate the efficacy of combining <sup>11</sup>C-PiB PET/CT and <sup>99m</sup>Tc-DPD scintigraphy in detecting CA and distinguishing between its main subtypes, light chain (AL) and transthyretin (ATTR) amyloidosis, while assessing the association of imaging findings with patient prognosis. We retrospectively evaluated the diagnostic efficacy of combining <sup>11</sup>C-PiB PET/CT and <sup>99m</sup>Tc-DPD scintigraphy in a cohort of 50 patients with clinical suspicion of CA. Semi-quantitative imaging markers were extracted from the images. Diagnostic performance was calculated against biopsy results or genetic testing. Both machine learning models and a rationale-based model were developed to detect CA and classify subtypes. Survival prediction over five years was assessed using a random survival forest model. Prognostic value was assessed using Kaplan-Meier estimators and Cox proportional hazards models. The combined imaging approach significantly improved diagnostic accuracy, with <sup>11</sup>C-PiB PET and <sup>99m</sup>Tc-DPD scintigraphy showing complementary strengths in detecting AL and ATTR, respectively. The machine learning model achieved an AUC of 0.94 (95% CI 0.93-0.95) for CA subtype differentiation, while the rationale-based model demonstrated strong diagnostic ability with AUCs of 0.95 (95% CI 0.88-1.00) for ATTR and 0.88 (95% CI 0.77-0.96) for AL. Survival prediction models identified key prognostic markers, with significant stratification of overall mortality based on predicted survival (p = 0.006; adjusted HR 2.43, 95% CI 1.03-5.71). The integration of <sup>11</sup>C-PiB PET/CT and <sup>99m</sup>Tc-DPD scintigraphy, supported by both machine learning and rationale-based models, enhances the diagnostic accuracy and prognostic assessment of cardiac amyloidosis, with significant implications for clinical practice.
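
A hedged sketch of the survival side of such an analysis (Kaplan-Meier stratification plus a Cox model on a predicted risk group), using the lifelines package. The file and column names are illustrative assumptions, not the authors' variables.

```python
import pandas as pd
from lifelines import KaplanMeierFitter, CoxPHFitter

df = pd.read_csv("ca_cohort.csv")  # hypothetical: time_months, event, risk_group

# Kaplan-Meier curves per model-predicted risk group
kmf = KaplanMeierFitter()
for group, sub in df.groupby("risk_group"):
    kmf.fit(sub["time_months"], sub["event"], label=f"predicted {group}")
    print(group, "median survival:", kmf.median_survival_time_)

# Cox proportional hazards model: HR with 95% CI for the risk group
cph = CoxPHFitter()
cph.fit(df[["time_months", "event", "risk_group"]],
        duration_col="time_months", event_col="event")
cph.print_summary()
```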

Regression modeling with convolutional neural network for predicting extent of resection from preoperative MRI in giant pituitary adenomas: a pilot study.

Patel BK, Tariciotti L, DiRocco L, Mandile A, Lohana S, Rodas A, Zohdy YM, Maldonado J, Vergara SM, De Andrade EJ, Revuelta Barbero JM, Reyes C, Solares CA, Garzon-Muvdi T, Pradilla G

PubMed · Jul 1, 2025
Giant pituitary adenomas (GPAs) are challenging skull base tumors due to their size and proximity to critical neurovascular structures. Achieving gross-total resection (GTR) can be difficult, and residual tumor burden is commonly reported. This study evaluated the ability of convolutional neural networks (CNNs) to predict the extent of resection (EOR) from preoperative MRI, with the goals of enhancing surgical planning, improving preoperative patient counseling, and strengthening multidisciplinary postoperative coordination of care. A retrospective study of 100 consecutive patients with GPAs was conducted. Patients underwent surgery via the endoscopic endonasal transsphenoidal approach. CNN models were trained on DICOM images from preoperative MR images to predict EOR, using a split of 80 patients for training and 20 for validation. The models included different architectural modules to refine image selection and predict EOR based on tumor-containing images in various anatomical planes. Model design, training, and validation were conducted in a local environment in Python using the TensorFlow machine learning framework. The median preoperative tumor volume was 19.4 cm³. The median EOR was 94.5%, with GTR achieved in 49% of cases. The CNN model showed high predictive accuracy, especially when analyzing images from the coronal plane, with a root mean square error of 2.9916 and a mean absolute error of 2.6225. The coefficient of determination (R²) was 0.9823, indicating excellent model performance. CNN-based models may effectively predict the EOR for GPAs from preoperative MRI scans, offering a promising tool for presurgical assessment and patient counseling. Confirmatory studies with large patient samples are needed to definitively validate these findings.
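
The abstract confirms Python and TensorFlow but not the architecture, so the following is only a compact Keras sketch of the kind of CNN regression described: one slice in, a continuous EOR percentage out. The input size, depth, and training settings are assumptions.

```python
import tensorflow as tf
from tensorflow.keras import layers

model = tf.keras.Sequential([
    tf.keras.Input(shape=(256, 256, 1)),     # one coronal slice; size assumed
    layers.Conv2D(16, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Conv2D(32, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Conv2D(64, 3, activation="relu"),
    layers.GlobalAveragePooling2D(),
    layers.Dense(64, activation="relu"),
    layers.Dense(1),                         # EOR as a percentage (regression head)
])
model.compile(optimizer="adam", loss="mse",
              metrics=[tf.keras.metrics.RootMeanSquaredError(),
                       tf.keras.metrics.MeanAbsoluteError()])
# model.fit(train_slices, train_eor, validation_data=(val_slices, val_eor))
```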

Deep learning image enhancement algorithms in PET/CT imaging: a phantom and sarcoma patient radiomic evaluation.

Bonney LM, Kalisvaart GM, van Velden FHP, Bradley KM, Hassan AB, Grootjans W, McGowan DR

PubMed · Jul 1, 2025
PET/CT imaging data contain a wealth of quantitative information that can provide valuable contributions to characterising tumours. A growing body of work focuses on the use of deep-learning (DL) techniques for denoising PET data. These models are clinically evaluated prior to use; however, quantitative image assessment offers scope for further evaluation. This work uses radiomic features to compare two manufacturer DL image enhancement algorithms, one of which has been commercialised, against 'gold-standard' image reconstruction techniques in phantom data and a sarcoma patient data set (N=20). All studies in the retrospective sarcoma clinical [<sup>18</sup>F]FDG dataset were acquired on either a GE Discovery 690 or 710 PET/CT scanner, with volumes segmented by an experienced nuclear medicine radiologist. The modular heterogeneous imaging phantom used in this work was filled with [<sup>18</sup>F]FDG, and five repeat acquisitions of the phantom were acquired on a GE Discovery 710 PET/CT scanner. The DL-enhanced images were compared to the 'gold-standard' images the algorithms were trained to emulate and to the algorithms' input images. Differences between image sets were tested for significance across 93 Image Biomarker Standardisation Initiative (IBSI) standardised radiomic features. Comparing DL-enhanced images to the 'gold-standard', 4.0% and 9.7% of radiomic features measured significantly different (p<sub>critical</sub> < 0.0005) in the phantom and patient data respectively (averaged over the two DL algorithms). Larger differences were observed comparing DL-enhanced images to algorithm input images, with 29.8% and 43.0% of radiomic features measuring significantly different in the phantom and patient data respectively (averaged over the two DL algorithms). DL-enhanced images were found to be similar to images generated using the 'gold-standard' target image reconstruction method, with more than 80% of radiomic features not significantly different in all comparisons across unseen phantom and sarcoma patient data. This result offers insight into the performance of the DL algorithms and demonstrates potential applications for DL algorithms in radiomics harmonisation and in the quantitative evaluation of DL algorithms.
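
The abstract does not name the statistical test, so the sketch below uses a paired Wilcoxon signed-rank test per feature, one common choice, with a Bonferroni-style threshold (0.05/93 ≈ 0.0005, roughly matching the quoted p-critical). The feature arrays are synthetic stand-ins shaped (n_lesions, 93).

```python
import numpy as np
from scipy.stats import wilcoxon

rng = np.random.default_rng(0)
features_gold = rng.normal(size=(20, 93))                      # stand-in data
features_dl = features_gold + rng.normal(scale=0.05, size=(20, 93))

p_critical = 0.05 / 93  # multiple-comparison correction over 93 IBSI features
n_different = sum(
    wilcoxon(features_gold[:, i], features_dl[:, i]).pvalue < p_critical
    for i in range(93))
print(f"{100 * n_different / 93:.1f}% of features significantly different")
```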

A Review of the Opportunities and Challenges with Large Language Models in Radiology: The Road Ahead.

Soni N, Ora M, Agarwal A, Yang T, Bathla G

PubMed · Jul 1, 2025
In recent years, generative artificial intelligence (AI), particularly large language models (LLMs) and their multimodal counterparts such as vision language models, has generated considerable interest in the global AI discourse. LLMs, or pre-trained language models (such as ChatGPT, Med-PaLM, LLaMA), are neural network architectures trained on extensive text data, excelling in language comprehension and generation. Multimodal LLMs, a subset of foundation models, are trained on multimodal data sets that integrate text with another modality, such as images, to better learn universal representations akin to human cognition. This versatility enables them to excel in tasks like chatbots, translation, and creative writing while facilitating knowledge sharing through transfer learning, federated learning, and synthetic data creation. Several of these models have potentially appealing applications in the medical domain, including, but not limited to, enhancing patient care by processing patient data; summarizing reports and relevant literature; providing diagnostic, treatment, and follow-up recommendations; and ancillary tasks like coding and billing. As radiologists enter this promising but uncharted territory, it is imperative that they be familiar with the basic terminology and processes of LLMs. Herein, we present an overview of LLMs and their potential applications and challenges in the imaging domain.

Deep learning-based segmentation of T1 and T2 cardiac MRI maps for automated disease detection

Andreea Bianca Popescu, Andreas Seitz, Heiko Mahrholdt, Jens Wetzl, Athira Jacob, Lucian Mihai Itu, Constantin Suciu, Teodora Chitiboi

arXiv preprint · Jul 1, 2025
Objectives: Parametric tissue mapping enables quantitative cardiac tissue characterization but is limited by inter-observer variability during manual delineation. Traditional approaches relying on average relaxation values and single cutoffs may oversimplify myocardial complexity. This study evaluates whether deep learning (DL) can achieve segmentation accuracy comparable to inter-observer variability, explores the utility of statistical features beyond mean T1/T2 values, and assesses whether machine learning (ML) combining multiple features enhances disease detection.

Materials & Methods: T1 and T2 maps were manually segmented. The test subset was independently annotated by two observers, and inter-observer variability was assessed. A DL model was trained to segment the left ventricular blood pool and myocardium. The average (A), lower quartile (LQ), median (M), and upper quartile (UQ) were computed for the myocardial pixels and employed in classification, either by applying cutoffs or in ML. The Dice similarity coefficient (DICE) and mean absolute percentage error evaluated segmentation performance. Bland-Altman plots assessed inter-user and model-observer agreement. Receiver operating characteristic analysis determined optimal cutoffs. Pearson correlation compared features from model and manual segmentations. F1-score, precision, and recall evaluated classification performance. The Wilcoxon test assessed differences between classification methods, with p < 0.05 considered statistically significant.

Results: 144 subjects were split into training (100), validation (15), and evaluation (29) subsets. The segmentation model achieved a DICE of 85.4%, surpassing inter-observer agreement. A random forest applied to all features increased the F1-score (92.7%, p < 0.001).

Conclusion: DL facilitates segmentation of T1/T2 maps. Combining multiple features with ML improves disease detection.
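
A hedged sketch of the feature-extraction step described above: per-map statistics (A, LQ, M, UQ) over myocardial pixels from a predicted segmentation, pooled into a random forest. Array names and label encoding are assumptions, not the authors' code.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def myocardial_stats(t_map: np.ndarray, myo_mask: np.ndarray) -> list[float]:
    """A, LQ, M, UQ of relaxation times inside the myocardium mask."""
    pixels = t_map[myo_mask > 0]
    return [pixels.mean(), *np.percentile(pixels, [25, 50, 75])]

# One feature row per subject: statistics from the T1 map plus the T2 map
# X = np.array([myocardial_stats(t1, m1) + myocardial_stats(t2, m2)
#               for (t1, m1), (t2, m2) in zip(t1_cases, t2_cases)])
# clf = RandomForestClassifier(n_estimators=300, random_state=0).fit(X, y)
```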

Bridging Classical and Learning-based Iterative Registration through Deep Equilibrium Models

Yi Zhang, Yidong Zhao, Qian Tao

arXiv preprint · Jul 1, 2025
Deformable medical image registration is traditionally formulated as an optimization problem. While classical methods solve this problem iteratively, recent learning-based approaches use recurrent neural networks (RNNs) to mimic this process by unrolling the prediction of deformation fields in a fixed number of steps. However, classical methods typically converge after sufficient iterations, but learning-based unrolling methods lack a theoretical convergence guarantee and show instability empirically. In addition, unrolling methods have a practical bottleneck at training time: GPU memory usage grows linearly with the unrolling steps due to backpropagation through time (BPTT). To address both theoretical and practical challenges, we propose DEQReg, a novel registration framework based on Deep Equilibrium Models (DEQ), which formulates registration as an equilibrium-seeking problem, establishing a natural connection between classical optimization and learning-based unrolling methods. DEQReg maintains constant memory usage, enabling theoretically unlimited iteration steps. Through extensive evaluation on the public brain MRI and lung CT datasets, we show that DEQReg can achieve competitive registration performance, while substantially reducing memory consumption compared to state-of-the-art unrolling methods. We also reveal an intriguing phenomenon: the performance of existing unrolling methods first increases slightly then degrades irreversibly when the inference steps go beyond the training configuration. In contrast, DEQReg achieves stable convergence with its inbuilt equilibrium-seeking mechanism, bridging the gap between classical optimization-based and modern learning-based registration methods.
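
To make the equilibrium idea concrete, here is a toy fixed-point iteration: rather than unrolling a fixed number of steps, iterate a single update map until the output stops changing, with memory use independent of step count. The contractive map below is a stand-in for the registration network, and a real DEQ would also use implicit differentiation for training; neither is the paper's implementation.

```python
import numpy as np

def fixed_point(f, x0, tol=1e-6, max_iter=500):
    """Picard iteration to an equilibrium; memory is constant in step count."""
    x = x0
    for i in range(max_iter):
        x_next = f(x)
        if np.linalg.norm(x_next - x) < tol * (1 + np.linalg.norm(x)):
            return x_next, i + 1
        x = x_next
    return x, max_iter

# Contractive toy update standing in for the network: fixed point at x* = 2
f = lambda x: 0.5 * x + 1.0
x_star, n_steps = fixed_point(f, np.zeros(3))
print(x_star, "reached in", n_steps, "steps")
```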

ADAptation: Reconstruction-based Unsupervised Active Learning for Breast Ultrasound Diagnosis

Yaofei Duan, Yuhao Huang, Xin Yang, Luyi Han, Xinyu Xie, Zhiyuan Zhu, Ping He, Ka-Hou Chan, Ligang Cui, Sio-Kei Im, Dong Ni, Tao Tan

arXiv preprint · Jul 1, 2025
Deep learning-based diagnostic models often suffer performance drops due to distribution shifts between training (source) and test (target) domains. Collecting and labeling sufficient target-domain data for model retraining represents an optimal solution, yet is limited by time and scarce resources. Active learning (AL) offers an efficient approach to reduce annotation costs while maintaining performance, but it struggles to handle the challenge posed by distribution variations across different datasets. In this study, we propose a novel unsupervised Active learning framework for Domain Adaptation, named ADAptation, which efficiently selects informative samples from multi-domain data pools under a limited annotation budget. As a fundamental step, our method first utilizes the distribution homogenization capabilities of diffusion models to bridge cross-dataset gaps by translating target images into the source-domain style. We then introduce two key innovations: (a) a hypersphere-constrained contrastive learning network for compact feature clustering, and (b) a dual-scoring mechanism that quantifies and balances sample uncertainty and representativeness. Extensive experiments on four breast ultrasound datasets (three public and one in-house/multi-center) across five common deep classifiers demonstrate that our method surpasses existing strong AL-based competitors, validating its effectiveness and generalization for clinical domain adaptation. The code is available at the anonymized link: https://github.com/miccai25-966/ADAptation.
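
A sketch of a dual-scoring rule in the spirit of the mechanism described: rank unlabeled samples by a weighted sum of predictive entropy (uncertainty) and closeness to a feature-space cluster centre (representativeness). The k-means proxy and the equal weighting are assumptions, not the paper's hypersphere-constrained formulation.

```python
import numpy as np
from scipy.stats import entropy
from sklearn.cluster import KMeans

def dual_scores(probs, feats, n_clusters=8, alpha=0.5):
    """probs: (n, n_classes) softmax outputs; feats: (n, d) embeddings."""
    uncertainty = entropy(probs.T)                     # per-sample entropy
    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit(feats)
    dists = np.linalg.norm(feats - km.cluster_centers_[km.labels_], axis=1)
    representativeness = 1.0 / (1.0 + dists)           # nearer centre = higher
    # normalise both terms to [0, 1] before mixing
    u = (uncertainty - uncertainty.min()) / (np.ptp(uncertainty) + 1e-9)
    r = (representativeness - representativeness.min()) / (np.ptp(representativeness) + 1e-9)
    return alpha * u + (1 - alpha) * r

# query_idx = np.argsort(-dual_scores(probs, feats))[:budget]
```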

Mind the Detail: Uncovering Clinically Relevant Image Details in Accelerated MRI with Semantically Diverse Reconstructions

Jan Nikolas Morshuis, Christian Schlarmann, Thomas Küstner, Christian F. Baumgartner, Matthias Hein

arXiv preprint · Jul 1, 2025
In recent years, accelerated MRI reconstruction based on deep learning has led to significant improvements in image quality, with impressive results for high acceleration factors. However, from a clinical perspective image quality is only secondary; much more important is that all clinically relevant information is preserved in the reconstruction from heavily undersampled data. In this paper, we show that existing techniques, even when considering resampling for diffusion-based reconstruction, can fail to reconstruct small and rare pathologies, thus leading to potentially wrong diagnosis decisions (false negatives). To uncover the potentially missing clinical information we propose "Semantically Diverse Reconstructions" (SDR), a method which, given an original reconstruction, generates novel reconstructions with enhanced semantic variability while all of them are fully consistent with the measured data. To evaluate SDR automatically we train an object detector on the fastMRI+ dataset. We show that SDR significantly reduces the chance of false-negative diagnoses (higher recall) and improves mean average precision compared to the original reconstructions. The code is available at https://github.com/NikolasMorshuis/SDR
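
One way to keep every generated reconstruction "fully consistent with the measured data" is a hard data-consistency step: re-impose the acquired k-space samples on each candidate image. The sketch below illustrates that step only; the mask convention is an assumption, and SDR's diffusion sampling is not shown.

```python
import numpy as np

def data_consistency(image, measured_kspace, mask):
    """Replace sampled k-space locations with the measured values."""
    k = np.fft.fft2(image)
    k = np.where(mask, measured_kspace, k)  # model k-space kept only where unsampled
    return np.fft.ifft2(k)

# image_dc = data_consistency(candidate_image, y_measured, sampling_mask)
```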

Medical Image Segmentation Using Advanced Unet: VMSE-Unet and VM-Unet CBAM+

Sayandeep Kanrar, Raja Piyush, Qaiser Razi, Debanshi Chakraborty, Vikas Hassija, GSS Chalapathi

arXiv preprint · Jul 1, 2025
In this paper, we present VMSE-Unet and VM-Unet CBAM+, two cutting-edge deep learning architectures designed to enhance medical image segmentation. Our approach integrates Squeeze-and-Excitation (SE) and Convolutional Block Attention Module (CBAM) techniques into the traditional VM-Unet framework, significantly improving segmentation accuracy, feature localization, and computational efficiency. Both models show superior performance compared to the baseline VM-Unet across multiple datasets. Notably, VMSE-Unet achieves the highest accuracy, IoU, precision, and recall while maintaining low loss values. It also exhibits exceptional computational efficiency, with faster inference times and lower memory usage on both GPU and CPU. Overall, the study suggests that the enhanced VMSE-Unet architecture is a valuable tool for medical image analysis. These findings highlight its potential for real-world clinical applications, emphasizing the importance of further research to optimize accuracy, robustness, and computational efficiency.
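
A minimal Keras sketch of a Squeeze-and-Excitation block of the kind integrated into these U-Net variants; the reduction ratio and placement follow the original SE paper's conventions, not necessarily the authors' settings.

```python
import tensorflow as tf
from tensorflow.keras import layers

def se_block(x, ratio=16):
    """Channel recalibration: squeeze (global pooling), then excite (two FCs)."""
    channels = x.shape[-1]
    s = layers.GlobalAveragePooling2D()(x)               # squeeze to (B, C)
    s = layers.Dense(channels // ratio, activation="relu")(s)
    s = layers.Dense(channels, activation="sigmoid")(s)  # per-channel gates
    return layers.multiply([x, layers.Reshape((1, 1, channels))(s)])
```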
