
Construction and validation of a urinary stone composition prediction model based on machine learning.

Guo J, Zhang J, Zhang J, Xu C, Wang X, Liu C

PubMed · Aug 11, 2025
The composition of urinary calculi serves as a critical determinant for personalized surgical strategies; however, such compositional data are often unavailable preoperatively. This study aims to develop a machine learning-based preoperative prediction model for stone composition and evaluate its clinical utility. A retrospective cohort study design was employed, including patients with urinary calculi admitted to the Department of Urology at the Second Affiliated Hospital of Zhengzhou University from 2019 to 2024. Feature selection was performed using least absolute shrinkage and selection operator (LASSO) regression combined with multivariate logistic regression, and binary prediction models for stone composition were subsequently constructed. Model validation was conducted using metrics such as the area under the curve (AUC), while Shapley Additive Explanations (SHAP) values were applied to interpret the predictive outcomes. Among 708 eligible patients, distinct prediction models were established for four stone types. Calcium oxalate stones: logistic regression achieved optimal performance (AUC = 0.845), with maximum stone CT value, 24-hour urinary oxalate, and stone size as top predictors (SHAP-ranked). Infection stones: logistic regression (AUC = 0.864) prioritized stone size, urinary pH, and recurrence history. Uric acid stones: a LASSO-ridge-elastic net model demonstrated exceptional accuracy (AUC = 0.961), driven by maximum CT value, 24-hour oxalate, and urinary calcium. Calcium-containing stones: logistic regression attained strong prediction (AUC = 0.953), relying on CT value, 24-hour calcium, and stone size. This study developed a machine learning prediction model based on multi-algorithm integration, achieving accurate preoperative discrimination of urinary stone composition. The integration of key imaging features with metabolic indicators enhanced the model's predictive performance.
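As a rough illustration of this kind of pipeline (LASSO-based feature selection, a logistic classifier, AUC validation, and SHAP interpretation), here is a minimal sketch using scikit-learn and shap on synthetic data; feature names are invented for illustration and this is not the authors' implementation.

```python
import numpy as np
import pandas as pd
import shap
from sklearn.linear_model import LassoCV, LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in for the 708-patient cohort; feature names are invented.
rng = np.random.default_rng(0)
X = pd.DataFrame(rng.normal(size=(708, 5)),
                 columns=["max_ct_value", "stone_size_mm", "urine_ph",
                          "urine_oxalate_24h", "urine_calcium_24h"])
y = rng.integers(0, 2, size=708)  # e.g. 1 = uric acid stone (binary target)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# Step 1: LASSO keeps features with non-zero coefficients.
lasso = LassoCV(cv=5).fit(X_tr, y_tr)
kept = X.columns[np.abs(lasso.coef_) > 1e-6]
if len(kept) == 0:
    kept = X.columns  # fall back if LASSO zeroes everything (toy data)

# Step 2: multivariate logistic regression on the selected features.
clf = LogisticRegression(max_iter=1000).fit(X_tr[kept], y_tr)
print("AUC:", roc_auc_score(y_te, clf.predict_proba(X_te[kept])[:, 1]))

# Step 3: SHAP values rank each predictor's contribution to the prediction.
explainer = shap.LinearExplainer(clf, X_tr[kept])
shap_values = explainer.shap_values(X_te[kept])
```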

A Physics-Driven Neural Network with Parameter Embedding for Generating Quantitative MR Maps from Weighted Images

Lingjing Chen, Chengxiu Zhang, Yinqiao Yi, Yida Wang, Yang Song, Xu Yan, Shengfang Xu, Dalin Zhu, Mengqiu Cao, Yan Zhou, Chenglong Wang, Guang Yang

arXiv preprint · Aug 11, 2025
We propose a deep learning-based approach that integrates MRI sequence parameters to improve the accuracy and generalizability of quantitative image synthesis from clinical weighted MRI. Our physics-driven neural network embeds MRI sequence parameters -- repetition time (TR), echo time (TE), and inversion time (TI) -- directly into the model via parameter embedding, enabling the network to learn the underlying physical principles of MRI signal formation. The model takes conventional T1-weighted, T2-weighted, and T2-FLAIR images as input and synthesizes T1, T2, and proton density (PD) quantitative maps. Trained on healthy brain MR images, it was evaluated on both internal and external test datasets. The proposed method achieved high performance with PSNR values exceeding 34 dB and SSIM values above 0.92 for all synthesized parameter maps. It outperformed conventional deep learning models in accuracy and robustness, including data with previously unseen brain structures and lesions. Notably, our model accurately synthesized quantitative maps for these unseen pathological regions, highlighting its superior generalization capability. Incorporating MRI sequence parameters via parameter embedding allows the neural network to better learn the physical characteristics of MR signals, significantly enhancing the performance and reliability of quantitative MRI synthesis. This method shows great potential for accelerating qMRI and improving its clinical utility.
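A minimal sketch of the parameter-embedding idea, assuming a PyTorch model in which the TR/TE/TI scalars are passed through a small MLP and the resulting vector is broadcast and concatenated with image features; the authors' actual architecture is not specified here, so these details are assumptions.

```python
import torch
import torch.nn as nn

class ParamEmbeddedSynthesizer(nn.Module):
    def __init__(self, in_ch=3, out_ch=3, feat=32):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(in_ch, feat, 3, padding=1), nn.ReLU(),
            nn.Conv2d(feat, feat, 3, padding=1), nn.ReLU())
        # Embed the 3 scalar sequence parameters into a feature vector.
        self.param_mlp = nn.Sequential(
            nn.Linear(3, feat), nn.ReLU(), nn.Linear(feat, feat))
        self.decoder = nn.Conv2d(feat * 2, out_ch, 3, padding=1)

    def forward(self, images, params):
        f = self.encoder(images)                       # (B, feat, H, W)
        p = self.param_mlp(params)                     # (B, feat)
        p = p[:, :, None, None].expand(-1, -1, f.shape[2], f.shape[3])
        return self.decoder(torch.cat([f, p], dim=1))  # T1/T2/PD maps

x = torch.randn(2, 3, 64, 64)  # stacked T1w, T2w, T2-FLAIR inputs
seq = torch.tensor([[500., 15., 0.], [2000., 90., 2500.]])  # TR, TE, TI (ms)
maps = ParamEmbeddedSynthesizer()(x, seq)
print(maps.shape)  # torch.Size([2, 3, 64, 64])
```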

Neonatal neuroimaging: from research to bedside practice.

Cizmeci MN, El-Dib M, de Vries LS

PubMed · Aug 11, 2025
Neonatal neuroimaging is essential in research and clinical practice, offering important insights into brain development and neurologic injury mechanisms. Visualizing the brain enables researchers and clinicians to improve neonatal care and parental counselling through better diagnosis and prognostication of disease. Common neuroimaging modalities used in the neonatal intensive care unit (NICU) are cranial ultrasonography (cUS) and magnetic resonance imaging (MRI). Between these modalities, conventional MRI provides the optimal image resolution and detail about the developing brain, while advanced MRI techniques allow for the evaluation of tissue microstructure and functional networks. Over the last two decades, medical imaging techniques using brain MRI have rapidly progressed, and these advances have facilitated high-quality extraction of quantitative features as well as the implementation of novel devices for use in neurological disorders. Major advancements encompass the use of low-field dedicated MRI systems within the NICU and trials of ultralow-field portable MRI systems at the bedside. Additionally, higher-field magnets are utilized to enhance image quality, and ultrafast brain MRI is employed to decrease image acquisition time. Furthermore, the implementation of advanced MRI sequences, the application of machine learning algorithms, multimodal neuroimaging techniques, motion correction techniques, and novel modalities are used to visualize pathologies that are not visible to the human eye. In this narrative review, we will discuss the fundamentals of these neuroimaging modalities, and their clinical applications to explore the present landscape of neonatal neuroimaging from bench to bedside.

C⁵-Net: Cross-organ cross-modality CSwin-Transformer coupled convolutional network for dual-task transfer learning in lymph node segmentation and classification.

Wang M, Chen H, Mao L, Jiao W, Han H, Zhang Q

PubMed · Aug 11, 2025
Deep learning has made notable strides in the ultrasonic diagnosis of lymph nodes, yet it faces three primary challenges: a limited number of lymph node images and a scarcity of annotated data; difficulty in comprehensively learning both local and global semantic information; and obstacles in collaborative learning for both image segmentation and classification to achieve accurate diagnosis. To address these issues, we propose the Cross-organ Cross-modality Cswin-transformer Coupled Convolutional Network (C⁵-Net). First, we design a cross-organ and cross-modality transfer learning strategy to leverage skin lesion dermoscopic images, which have abundant annotations and share similarities in fields of view and morphology with lymph node ultrasound images. Second, we couple a Transformer and a convolutional network to comprehensively learn both local details and global information. Third, the encoder weights in the C⁵-Net are shared between the segmentation and classification tasks to exploit their synergistic knowledge, enhancing overall performance in ultrasound lymph node diagnosis. Our study leverages 690 lymph node ultrasound images and 1000 skin lesion dermoscopic images. Experimental results show that our C⁵-Net achieves the best segmentation and classification performance for lymph nodes among advanced methods, with a segmentation Dice coefficient of 0.854 and a classification accuracy of 0.874. Our method has consistently shown accuracy and robustness in the segmentation and classification of lymph nodes, contributing to the early and accurate detection of lymph node malignancy, which is potentially essential for effective treatment planning in clinical oncology.
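The dual-task weight sharing described above can be pictured as a shared encoder feeding separate segmentation and classification heads, trained with a combined loss. The toy PyTorch example below is an illustration under those assumptions, not the C⁵-Net code.

```python
import torch
import torch.nn as nn

class DualTaskNet(nn.Module):
    def __init__(self, feat=32, n_classes=2):
        super().__init__()
        self.encoder = nn.Sequential(  # shared between both tasks
            nn.Conv2d(1, feat, 3, padding=1), nn.ReLU(),
            nn.Conv2d(feat, feat, 3, padding=1), nn.ReLU())
        self.seg_head = nn.Conv2d(feat, 1, 1)  # pixel-wise mask logits
        self.cls_head = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(feat, n_classes))

    def forward(self, x):
        f = self.encoder(x)
        return self.seg_head(f), self.cls_head(f)

model = DualTaskNet()
img = torch.randn(4, 1, 128, 128)  # toy ultrasound patches
mask_gt = torch.randint(0, 2, (4, 1, 128, 128)).float()
label_gt = torch.randint(0, 2, (4,))

seg_logits, cls_logits = model(img)
loss = (nn.functional.binary_cross_entropy_with_logits(seg_logits, mask_gt)
        + nn.functional.cross_entropy(cls_logits, label_gt))
loss.backward()  # gradients from both tasks update the shared encoder
```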

Deep learning and radiomics fusion for predicting the invasiveness of lung adenocarcinoma within ground glass nodules.

Sun Q, Yu L, Song Z, Wang C, Li W, Chen W, Xu J, Han S

PubMed · Aug 11, 2025
Microinvasive adenocarcinoma (MIA) and invasive adenocarcinoma (IAC) require distinct treatment strategies and are associated with different prognoses, underscoring the importance of accurate differentiation. This study aims to develop a predictive model that combines radiomics and deep learning to effectively distinguish between MIA and IAC. In this retrospective study, 252 pathologically confirmed cases of ground-glass nodules (GGNs) were included, with 177 allocated to the training set and 75 to the testing set. Radiomics, 2D deep learning, and 3D deep learning models were constructed based on CT images. In addition, two fusion strategies were employed to integrate these modalities: early fusion, which concatenates features from all modalities prior to classification, and late fusion, which ensembles the output probabilities of the individual models. The predictive performance of all five models was evaluated using the area under the receiver operating characteristic curve (AUC), and DeLong's test was performed to compare differences in AUC between models. The radiomics model achieved an AUC of 0.794 (95% CI: 0.684-0.898), while the 2D and 3D deep learning models achieved AUCs of 0.754 (95% CI: 0.594-0.882) and 0.847 (95% CI: 0.724-0.945), respectively, in the testing set. Among the fusion models, the late fusion strategy demonstrated the highest predictive performance, with an AUC of 0.898 (95% CI: 0.784-0.962), outperforming the early fusion model, which achieved an AUC of 0.857 (95% CI: 0.731-0.936). Although the differences were not statistically significant, the late fusion model yielded the highest numerical values for diagnostic accuracy, sensitivity, and specificity across all models. The fusion of radiomics and deep learning features shows potential in improving the differentiation of MIA and IAC in GGNs. The late fusion strategy demonstrated promising results, warranting further validation in larger, multicenter studies.
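The two fusion strategies compared in the abstract translate directly into code: early fusion concatenates per-modality features before a single classifier, while late fusion ensembles per-model output probabilities. The sketch below uses scikit-learn on synthetic features and is purely illustrative of the general technique.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 252
radiomics = rng.normal(size=(n, 20))  # handcrafted radiomics features
deep2d = rng.normal(size=(n, 64))     # 2D deep features
deep3d = rng.normal(size=(n, 64))     # 3D deep features
y = rng.integers(0, 2, size=n)        # 0 = MIA, 1 = IAC
tr, te = slice(0, 177), slice(177, n)

# Early fusion: one model on the concatenated feature vector.
X_early = np.hstack([radiomics, deep2d, deep3d])
early = LogisticRegression(max_iter=1000).fit(X_early[tr], y[tr])
p_early = early.predict_proba(X_early[te])[:, 1]

# Late fusion: ensemble (here a simple mean) of per-modality probabilities.
p_late = np.mean([
    LogisticRegression(max_iter=1000).fit(m[tr], y[tr]).predict_proba(m[te])[:, 1]
    for m in (radiomics, deep2d, deep3d)], axis=0)

print("early AUC:", roc_auc_score(y[te], p_early))
print("late  AUC:", roc_auc_score(y[te], p_late))
```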

Diffusing the Blind Spot: Uterine MRI Synthesis with Diffusion Models

Johanna P. Müller, Anika Knupfer, Pedro Blöss, Edoardo Berardi Vittur, Bernhard Kainz, Jana Hutter

arXiv preprint · Aug 11, 2025
Despite significant progress in generative modelling, existing diffusion models often struggle to produce anatomically precise female pelvic images, limiting their application in gynaecological imaging, where data scarcity and patient privacy concerns are critical. To overcome these barriers, we introduce a novel diffusion-based framework for uterine MRI synthesis, integrating both unconditional and conditional Denoising Diffusion Probabilistic Models (DDPMs) and Latent Diffusion Models (LDMs) in 2D and 3D. Our approach generates anatomically coherent, high-fidelity synthetic images that closely mimic real scans and provide valuable resources for training robust diagnostic models. We evaluate generative quality using advanced perceptual and distributional metrics, benchmarking against standard reconstruction methods, and demonstrate substantial gains in diagnostic accuracy on a key classification task. A blinded expert evaluation further validates the clinical realism of our synthetic images. We release our models with privacy safeguards and a comprehensive synthetic uterine MRI dataset to support reproducible research and advance equitable AI in gynaecology.
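For context, the core DDPM training step follows the standard formulation of Ho et al. (2020): noise an image with the closed-form forward process and train a network to predict that noise. The sketch below is that generic recipe, not the authors' framework; timestep conditioning is omitted for brevity.

```python
import torch
import torch.nn as nn

T = 1000
betas = torch.linspace(1e-4, 0.02, T)           # linear noise schedule
alphas_bar = torch.cumprod(1.0 - betas, dim=0)

eps_model = nn.Sequential(                      # stand-in for the denoising
    nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),  # U-Net; timestep conditioning
    nn.Conv2d(32, 1, 3, padding=1))             # omitted for brevity

x0 = torch.randn(8, 1, 64, 64)                  # batch of (synthetic) slices
t = torch.randint(0, T, (8,))
eps = torch.randn_like(x0)
ab = alphas_bar[t].view(-1, 1, 1, 1)
x_t = ab.sqrt() * x0 + (1 - ab).sqrt() * eps    # q(x_t | x_0) in closed form

loss = nn.functional.mse_loss(eps_model(x_t), eps)  # "simple" DDPM objective
loss.backward()
```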

Adapting Biomedical Foundation Models for Predicting Outcomes of Anti-Seizure Medications

Pham, D. K., Mehta, D., Jiang, Y., Thom, D., Chang, R. S.-k., Foster, E., Fazio, T., Holper, S., Verspoor, K., Liu, J., Nhu, D., Barnard, S., O'Brien, T., Chen, Z., French, J., Kwan, P., Ge, Z.

medRxiv preprint · Aug 11, 2025
Epilepsy affects over 50 million people worldwide, with anti-seizure medications (ASMs) as the primary treatment for seizure control. However, ASM selection remains a "trial and error" process due to the lack of reliable predictors of effectiveness and tolerability. While machine learning approaches have been explored, existing models are limited to predicting outcomes only for ASMs encountered during training and have not leveraged recent biomedical foundation models for this task. This work investigates ASM outcome prediction using only patient MRI scans and reports. Specifically, we leverage biomedical vision-language foundation models and introduce a novel contextualized instruction-tuning framework that integrates expert-built knowledge trees of MRI entities to enhance their performance. Additionally, by training only on the four most commonly prescribed ASMs, our framework enables generalization to predicting outcomes and effectiveness for unseen ASMs not present during training. We evaluate our instruction-tuning framework on two retrospective epilepsy patient datasets, achieving an average AUC of 71.39 and 63.03 in predicting outcomes for four primary ASMs and three completely unseen ASMs, respectively. Our approach improves the AUC by 5.53 and 3.51 compared to standard report-based instruction tuning for seen and unseen ASMs, respectively. Our code, MRI knowledge tree, prompting templates, and TREE-TUNE generated instruction-answer tuning dataset are available at the link.
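The contextualized instruction-tuning idea can be pictured as retrieving knowledge-tree descriptions for entities mentioned in a report and prepending them to the prompt. The sketch below is purely hypothetical: the tree contents, entity matching, and prompt format are all invented for illustration and are not taken from the paper.

```python
# Toy knowledge tree mapping MRI entities to expert descriptions (invented).
knowledge_tree = {
    "hippocampal sclerosis": "Hippocampal volume loss with increased T2/FLAIR signal.",
    "focal cortical dysplasia": "Cortical thickening and blurred grey-white junction.",
}

def build_instruction(report: str, asm: str) -> str:
    # Attach context for any known entity that appears in the report.
    context = [desc for entity, desc in knowledge_tree.items()
               if entity in report.lower()]
    return ("Context: " + " ".join(context) + "\n"
            f"Report: {report}\n"
            f"Question: Will this patient respond to {asm}? Answer yes or no.")

print(build_instruction("MRI shows left hippocampal sclerosis.", "levetiracetam"))
```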

Improving discriminative ability in mammographic microcalcification classification using deep learning: a novel double transfer learning approach validated with an explainable artificial intelligence technique

Arlan, K., Bjornstrom, M., Makela, T., Meretoja, T. J., Hukkinen, K.

medRxiv preprint · Aug 11, 2025
Background: Breast microcalcification diagnostics are challenging due to their subtle presentation, overlap with benign findings, and high inter-reader variability, often leading to unnecessary biopsies. While deep learning (DL) models - particularly deep convolutional neural networks (DCNNs) - have shown potential to improve diagnostic accuracy, their clinical application remains limited by the need for large annotated datasets and the "black box" nature of their decision-making. Purpose: To develop and validate a deep learning model (DCNN) using a double transfer learning (d-TL) strategy for classifying suspected mammographic microcalcifications, with explainable AI (XAI) techniques to support model interpretability. Material and methods: A retrospective dataset of 396 annotated regions of interest (ROIs) from full-field digital mammography (FFDM) images of 194 patients who underwent stereotactic vacuum-assisted biopsy at the Women's Hospital radiological department, Helsinki University Hospital, was collected. The dataset was randomly split into training and test sets (24% test set, balanced for benign and malignant cases). A ResNeXt-based DCNN was developed using a d-TL approach: first pretrained on ImageNet, then adapted using an intermediate mammography dataset before fine-tuning on the target microcalcification data. Saliency maps were generated using Gradient-weighted Class Activation Mapping (Grad-CAM) to evaluate the visual relevance of model predictions. Diagnostic performance was compared to a radiologist's BI-RADS-based assessment, using final histopathology as the reference standard. Results: The ensemble DCNN achieved an area under the ROC curve (AUC) of 0.76, with 65% sensitivity, 83% specificity, 79% positive predictive value (PPV), and 70% accuracy. The radiologist achieved an AUC of 0.65 with 100% sensitivity but lower specificity (30%) and PPV (59%). Grad-CAM visualizations showed consistent activation of the correct ROIs, even in misclassified cases where confidence scores fell below the threshold. Conclusion: The DCNN model utilizing d-TL achieved performance comparable to the radiologist, with higher specificity and PPV than BI-RADS. The approach addresses data limitation issues and may help reduce additional imaging and unnecessary biopsies.
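The d-TL strategy amounts to staged fine-tuning of the same backbone. A minimal sketch assuming torchvision's ResNeXt-50 variant; the intermediate and target data loaders are hypothetical placeholders, not the authors' pipeline.

```python
import torch.nn as nn
from torchvision import models

def fine_tune(model, loader, epochs):
    # Placeholder training loop; a real one would compute a loss and step
    # an optimizer for each batch.
    for _ in range(epochs):
        for images, labels in loader:
            ...
    return model

# Stage 1: ResNeXt backbone pretrained on ImageNet.
model = models.resnext50_32x4d(weights="IMAGENET1K_V1")
model.fc = nn.Linear(model.fc.in_features, 2)  # benign vs malignant head

# Stage 2: adapt on an intermediate mammography dataset (hypothetical loader).
# model = fine_tune(model, mammography_loader, epochs=10)

# Stage 3: fine-tune on the target microcalcification ROIs (hypothetical loader).
# model = fine_tune(model, microcalcification_loader, epochs=10)
```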

Artificial Intelligence-Driven Body Composition Analysis Enhances Chemotherapy Toxicity Prediction in Colorectal Cancer.

Liu YZ, Su PF, Tai AS, Shen MR, Tsai YS

PubMed · Aug 11, 2025
Body surface area (BSA)-based chemotherapy dosing remains standard despite its limitations in predicting toxicity. Variations in body composition, particularly skeletal muscle and adipose tissue, influence drug metabolism and toxicity risk. This study aims to investigate the mediating role of body composition in the relationship between BSA-based dosing and dose-limiting toxicities (DLTs) in colorectal cancer patients receiving oxaliplatin-based chemotherapy. We retrospectively analyzed 483 stage III colorectal cancer patients treated at National Cheng Kung University Hospital (2013-2021). An artificial intelligence (AI)-driven algorithm quantified skeletal muscle and adipose tissue compartments from computed tomography (CT) scans at the third lumbar vertebra (L3) level. Mediation analysis evaluated body composition's role in chemotherapy-related toxicities. Among the cohort, 18.2% (n = 88) experienced DLTs. While BSA alone was not significantly associated with DLTs (OR = 0.473, p = 0.376), increased intramuscular adipose tissue (IMAT) significantly predicted higher DLT risk (OR = 1.047, p = 0.038), whereas skeletal muscle area was protective. Mediation analysis confirmed that IMAT partially mediated the relationship between BSA and DLTs (indirect effect: 0.05, p = 0.040), highlighting adipose infiltration's role in chemotherapy toxicity. BSA-based dosing inadequately accounts for interindividual variations in chemotherapy tolerance. AI-assisted body composition analysis provides a precision oncology framework for identifying high-risk patients and optimizing chemotherapy regimens. Prospective validation is warranted to integrate body composition into routine clinical decision-making.
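The reported indirect effect corresponds to a standard product-of-coefficients mediation estimate (path a: exposure to mediator; path b: mediator to outcome, adjusting for exposure). A toy sketch with statsmodels on simulated data, not the study's actual analysis:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 483
bsa = rng.normal(1.7, 0.2, n)                    # exposure: body surface area
imat = 5 + 2.0 * bsa + rng.normal(0, 1, n)       # mediator: IMAT area
logit = -3 + 0.05 * imat + 0.1 * bsa
dlt = rng.binomial(1, 1 / (1 + np.exp(-logit)))  # outcome: DLT yes/no

# Path a: exposure -> mediator (linear model).
a = sm.OLS(imat, sm.add_constant(bsa)).fit().params[1]

# Path b: mediator -> outcome, adjusting for exposure (logistic model).
X = sm.add_constant(np.column_stack([bsa, imat]))
b = sm.Logit(dlt, X).fit(disp=0).params[2]

print("indirect effect (a*b):", a * b)  # significance via bootstrap in practice
```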

Automated Prediction of Bone Volume Removed in Mastoidectomy.

Nagururu NV, Ishida H, Ding AS, Ishii M, Unberath M, Taylor RH, Munawar A, Sahu M, Creighton FX

PubMed · Aug 11, 2025
The bone volume drilled by surgeons during mastoidectomy is determined by the need to localize the relevant anatomy, optimize the view, and reach the surgical endpoint while avoiding critical structures. Predicting the volume of bone removed before an operation can significantly enhance surgical training by providing precise, patient-specific guidance and enable the development of more effective computer-assisted and robotic surgical interventions. Study design: single-institution, cross-sectional; setting: virtual reality (VR) simulation. We developed a deep learning pipeline to automate the prediction of bone volume removed during mastoidectomy using data from virtual reality mastoidectomy simulations. The dataset included 15 deidentified temporal bone computed tomography scans. The network was evaluated using fivefold cross-validation, comparing predicted and actual bone removal with metrics such as the Dice score (DSC) and Hausdorff distance (HD). Our method achieved a median DSC of 0.775 (interquartile range [IQR]: 0.725-0.810) and a median HD of 0.492 mm (IQR: 0.298-0.757 mm). Predictions reached the mastoidectomy endpoint of visualizing the horizontal canal and incus in 80% (12/15) of temporal bones. Qualitative analysis indicated that predictions typically produced realistic mastoidectomy endpoints, though some cases showed excessive or insufficient bone removal, particularly at the temporal bone cortex and tegmen mastoideum. This study establishes a foundational step in using deep learning to predict bone volume removal during mastoidectomy. The results indicate that learning-based methods can reasonably approximate the surgical endpoint of mastoidectomy. Further refinement with larger, more diverse datasets and improved model architectures will be essential for enhancing prediction accuracy.
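The two reported metrics, DSC and HD, are standard and easy to reproduce; the sketch below computes both on binary volumes with NumPy/SciPy. Voxel spacing and directed/undirected distance conventions vary between papers, so treat it as illustrative rather than the study's exact evaluation code.

```python
import numpy as np
from scipy.spatial.distance import directed_hausdorff

def dice(pred: np.ndarray, gt: np.ndarray) -> float:
    inter = np.logical_and(pred, gt).sum()
    return 2.0 * inter / (pred.sum() + gt.sum())

def hausdorff(pred: np.ndarray, gt: np.ndarray) -> float:
    p = np.argwhere(pred)  # voxel coordinates of the predicted removal mask
    g = np.argwhere(gt)
    return max(directed_hausdorff(p, g)[0], directed_hausdorff(g, p)[0])

pred = np.zeros((32, 32, 32), bool); pred[8:20, 8:20, 8:20] = True
gt = np.zeros((32, 32, 32), bool);   gt[10:22, 10:22, 10:22] = True
print("DSC:", round(dice(pred, gt), 3), "HD (voxels):", hausdorff(pred, gt))
```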
