
Dennstädt F, Fauser S, Cihoric N, Schmerder M, Lombardo P, Cereghetti GM, von Däniken S, Minder T, Meyer J, Chiang L, Gaio R, Lerch L, Filchenko I, Reichenpfader D, Denecke K, Vojvodic C, Tatalovic I, Sander A, Hastings J, Aebersold DM, von Tengg-Kobligk H, Nairz K

pubmed papers · Sep 10 2025
Large language models (LLMs) have been successfully used for data extraction from free-text radiology reports. Most studies to date were conducted with LLMs accessed via an application programming interface (API). We evaluated the feasibility of using open-source LLMs deployed on limited local hardware resources for data extraction from free-text mammography reports, using a common data element (CDE)-based structure. Seventy-nine CDEs were defined by an interdisciplinary expert panel, reflecting real-world reporting practice. Sixty-one reports were classified by two independent researchers to establish ground truth. Five open-source LLMs deployable on a single GPU were used for data extraction with the general-classifier Python package. Extractions were performed with five different prompting approaches, and overall accuracy, micro-recall, and micro-F1 were calculated. Additional analyses applied thresholds to the relative probability of classifications. High inter-rater agreement was observed between the manual classifiers (Cohen's kappa 0.83). Using default prompts, the LLMs achieved accuracies of 59.2-72.9%. Chain-of-thought prompting yielded mixed results, while few-shot prompting decreased accuracy. Adapting the default prompts to precisely define the classification tasks improved performance for all models, with accuracies of 64.7-85.3%. Setting certainty thresholds further improved accuracies to > 90% but reduced the coverage rate to < 50%. Locally deployed open-source LLMs can effectively extract information from mammography reports while remaining compatible with limited computational resources. Selection and evaluation of the model and prompting strategy are critical, and clear, task-specific instructions appear crucial for high performance. A CDE-based framework provides clear semantics and structure for the data extraction.
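The certainty-threshold analysis above trades coverage for accuracy: a classification is kept only when its relative probability clears a cutoff. A minimal sketch of that trade-off (the `threshold_classifications` helper and the example probabilities are illustrative, not the general-classifier package's API):

```python
import numpy as np

def threshold_classifications(probs, truth, threshold=0.9):
    """probs: (n_reports, n_classes) relative class probabilities for one CDE;
    truth: (n_reports,) ground-truth class indices."""
    probs = np.asarray(probs, dtype=float)
    truth = np.asarray(truth)
    predicted = probs.argmax(axis=1)
    kept = probs.max(axis=1) >= threshold          # keep only confident calls
    accuracy = (predicted[kept] == truth[kept]).mean() if kept.any() else float("nan")
    coverage = kept.mean()                         # fraction of reports still classified
    return accuracy, coverage

# Raising the threshold pushes accuracy up while coverage drops,
# mirroring the >90% accuracy / <50% coverage pattern reported above.
acc, cov = threshold_classifications(
    [[0.95, 0.03, 0.02], [0.50, 0.30, 0.20], [0.91, 0.05, 0.04]],
    [0, 1, 2], threshold=0.9)
```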

Li H, Chiew M, Dragonu I, Jezzard P, Okell TW

pubmed papers · Sep 10 2025
To develop a deep learning-based reconstruction method for highly accelerated 3D time-of-flight MRA (TOF-MRA) that achieves high-quality reconstruction with robust generalization from extremely limited acquired raw data, addressing the challenge of time-consuming acquisition of high-resolution, whole-head angiograms. A novel few-shot learning-based reconstruction framework is proposed, featuring a 3D variational network specifically designed for 3D TOF-MRA that is pre-trained on simulated complex-valued, multi-coil raw k-space datasets synthesized from diverse open-source magnitude images and fine-tuned using only two experimentally acquired single-slab datasets. The proposed approach was evaluated against existing methods on retrospectively undersampled in vivo k-space data acquired from five healthy volunteers and on prospectively undersampled data from two additional subjects. The proposed method achieved superior reconstruction performance on experimentally acquired in vivo data over comparison methods, preserving most fine vessels with minimal artifacts at up to eight-fold acceleration. Compared to other simulation techniques, the proposed method generated more realistic raw k-space data for 3D TOF-MRA. Consistently high-quality reconstructions were also observed on prospectively undersampled data. By leveraging few-shot learning, the proposed method enabled highly accelerated 3D TOF-MRA from minimal experimentally acquired data, achieving promising results on both retrospective and prospective in vivo data while outperforming existing methods. Given the challenges of acquiring and sharing large raw k-space datasets, this holds significant promise for advancing research and clinical applications in high-resolution, whole-head 3D TOF-MRA imaging.
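A minimal 2D sketch of the kind of raw-data simulation the pre-training relies on: synthesizing complex-valued, multi-coil k-space from an open-source magnitude image (the synthetic phase, noise level, and 2D setting are simplifying assumptions; the paper works with 3D multi-slab data):

```python
import numpy as np

def simulate_multicoil_kspace(magnitude, sens_maps, mask, noise_std=0.01):
    """magnitude: (H, W) real image; sens_maps: (C, H, W) complex coil
    sensitivities; mask: (H, W) binary undersampling pattern."""
    H, W = magnitude.shape
    # Impose a smooth synthetic phase so the simulated data are complex-valued
    yy, xx = np.mgrid[0:H, 0:W] / max(H, W)
    image = magnitude * np.exp(1j * 2 * np.pi * 0.1 * (xx ** 2 + yy))
    coil_images = sens_maps * image[None]                       # (C, H, W)
    kspace = np.fft.fftshift(
        np.fft.fft2(np.fft.ifftshift(coil_images, axes=(-2, -1))),
        axes=(-2, -1))
    kspace += noise_std * (np.random.randn(*kspace.shape)
                           + 1j * np.random.randn(*kspace.shape))
    return kspace * mask[None]                                  # keep sampled points only
```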

Lee SW, Lee GP, Yoon I, Kim YJ, Kim KG

pubmed papers · Sep 10 2025
To develop and validate a deep-learning-based algorithm for automatically identifying anatomical landmarks and calculating femoral version and tibial torsion (FTT) angles on lower-extremity CT scans. In this IRB-approved, retrospective study, lower-extremity CT scans from 270 adult patients (median age, 69 years; 235 female, 35 male) were analyzed. CT data were preprocessed using contrast-limited adaptive histogram equalization and RGB superposition to enhance tissue boundary distinction. The Attention U-Net model was trained against the gold standard of manual labeling and landmark drawing, enabling it to segment bones, detect landmarks, construct reference lines, and automatically measure femoral version and tibial torsion angles. The model's performance was validated against manual segmentations by a musculoskeletal radiologist on a test dataset. The segmentation model demonstrated a sensitivity of 92.16% ± 0.02, a specificity of 99.96% ± <0.01, an HD95 of 2.14 ± 2.39, and a Dice similarity coefficient (DSC) of 93.12% ± 0.01. Automatic measurements of femoral version and tibial torsion angles correlated well with radiologists' measurements, with correlation coefficients of 0.64 for femoral and 0.54 for tibial angles (p < 0.05). Automated segmentation significantly reduced the measurement time per leg compared to manual methods (57.5 ± 8.3 s vs. 79.6 ± 15.9 s, p < 0.05). We developed a deep-learning method to automate the measurement of femorotibial rotation on continuous axial CT scans of patients with osteoarthritis (OA). This method has the potential to expedite the analysis of patient data in busy clinical settings.
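Once the landmarks are detected, a version or torsion angle reduces to the signed angle between two landmark-defined axes projected into the axial plane. A small illustrative helper (the landmark coordinates below are made up, not from the study):

```python
import numpy as np

def axial_angle(p1, p2, q1, q2):
    """Signed angle in degrees between axes p1->p2 and q1->q2,
    projected onto the axial (x, y) plane. Points are (x, y, z)."""
    a = np.asarray(p2, float)[:2] - np.asarray(p1, float)[:2]
    b = np.asarray(q2, float)[:2] - np.asarray(q1, float)[:2]
    cross = a[0] * b[1] - a[1] * b[0]        # z-component of the 2D cross product
    return np.degrees(np.arctan2(cross, np.dot(a, b)))

# Femoral version: neck axis vs. posterior condylar axis (illustrative points)
version = axial_angle((120, 80, 40), (150, 95, 40),
                      (110, 200, 300), (190, 205, 300))
print(f"femoral version: {version:.1f} deg")
```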

Savanier M, Comtat C, Sureau F

pubmed papers · Sep 10 2025
Objective: Deep learning has shown great promise for improving medical image reconstruction, including PET. However, concerns remain about the stability and robustness of these methods, especially when trained on limited data. This work aims to explore the use of the Plug-and-Play (PnP) framework in PET reconstruction to address these concerns.

Approach: We propose a convergent PnP algorithm for low-count PET reconstruction based on the Douglas-Rachford splitting method. We consider several denoisers trained to satisfy fixed-point conditions, with convergence properties ensured either during training or by design, including a spectrally normalized network and a deep equilibrium model. We evaluate the bias-standard deviation tradeoff across clinically relevant regions and an unseen pathological case in a synthetic experiment and a real study. Comparisons are made with model-based iterative reconstruction, post-reconstruction denoising, a deep end-to-end unfolded network, and PnP with a Gaussian denoiser.

Main Results: Our method achieves lower bias than post-reconstruction processing and reduced standard deviation at matched bias compared to model-based iterative reconstruction. While spectral normalization underperforms in generalization, the deep equilibrium model remains competitive with convolutional networks for plug-and-play reconstruction and generalizes better to the unseen pathology. Compared to the end-to-end unfolded network, it also generalizes more consistently.

Significance: This study demonstrates the potential of the PnP framework to improve image quality and quantification accuracy in PET reconstruction. It also highlights the importance of how convergence conditions are imposed on the denoising network to ensure robust and generalizable performance.
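The Douglas-Rachford structure behind the proposed algorithm is compact. A generic sketch, with the data-fidelity proximal step and the trained denoiser passed in as callables (both are stand-ins; the paper's Poisson-likelihood operator and convergence conditions are not reproduced here):

```python
def pnp_douglas_rachford(prox_data, denoiser, x0, n_iter=100):
    """Plug-and-Play Douglas-Rachford splitting: the data-fidelity prox
    alternates with a learned denoiser standing in for the prior's prox."""
    z = x0.copy()
    for _ in range(n_iter):
        x = prox_data(z)            # enforce consistency with measured counts
        y = denoiser(2 * x - z)     # denoiser plays the role of the prior
        z = z + y - x               # reflection-style Douglas-Rachford update
    return prox_data(z)
```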

Wang H, Zhang X, Li S, Zheng X, Zhang Y, Xie Q, Jin Z

pubmed papers · Sep 10 2025
Total hip arthroplasty (THA) is the standard surgical treatment for end-stage hip osteoarthritis, and its success depends on precise preoperative planning, which in turn relies on accurate three-dimensional segmentation and reconstruction of the periarticular bone of the hip joint. However, patients with hip osteoarthritis often exhibit pathological changes such as joint space narrowing, femoroacetabular impingement, osteophyte formation, and joint deformity, which pose significant challenges for traditional manual or semi-automatic segmentation methods. To address these challenges, this study proposed a novel 3D UNet-based multi-task network for rapid and accurate segmentation and reconstruction of the periarticular bone in hip osteoarthritis patients. The main bone-segmentation network incorporates a Transformer module in the encoder to effectively capture spatial anatomical features, while a boundary-optimization branch addresses segmentation challenges at the acetabular-femoral interface. The two branches were jointly optimized through a multi-task loss function, with an oversampling strategy introduced to enhance the network's feature learning for complex structures. Experimental results showed that the proposed method achieved excellent performance on a test set with hip osteoarthritis. The average Dice coefficient was 96.09% (96.98% for the femur, 95.20% for the hip), with an overall precision of 96.66% and recall of 97.32%. For boundary matching, the average surface distance (ASD) and the 95% Hausdorff distance (HD95) were 0.40 mm and 1.78 mm, respectively. These metrics show that the proposed automatic segmentation network achieves high accuracy in segmenting the periarticular bone of the hip joint, generating reliable 2D masks and 3D models, and thus has significant potential for supporting THA surgical planning.
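The joint optimization of the two branches comes down to a weighted multi-task loss. A minimal PyTorch sketch combining a Dice term for the main segmentation output with a boundary term for the interface branch (the weighting and loss choices are illustrative, not the authors' exact formulation):

```python
import torch
import torch.nn.functional as F

def dice_loss(probs, target_onehot, eps=1e-6):
    # probs, target_onehot: (N, C, D, H, W)
    dims = (0, 2, 3, 4)
    inter = (probs * target_onehot).sum(dims)
    denom = probs.sum(dims) + target_onehot.sum(dims)
    return 1 - ((2 * inter + eps) / (denom + eps)).mean()

def multitask_loss(seg_logits, seg_onehot, bnd_logits, bnd_target, w_bnd=0.5):
    """Dice on the bone segmentation plus BCE on the boundary branch
    targeting the acetabular-femoral interface."""
    seg = dice_loss(torch.softmax(seg_logits, dim=1), seg_onehot)
    bnd = F.binary_cross_entropy_with_logits(bnd_logits, bnd_target)
    return seg + w_bnd * bnd
```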

Andrew Bell, Yan Kit Choi, Steffen Peterson, Andrew King, Muhummad Sohaib Nazir, Alistair Young

arxiv preprint · Sep 10 2025
Automatic quantification of intramyocardial motion and strain from tagging MRI remains an important but challenging task. We propose a method using implicit neural representations (INRs), conditioned on learned latent codes, to predict continuous left ventricular (LV) displacement without requiring inference-time optimisation. Evaluated on 452 UK Biobank test cases, our method achieved the best tracking accuracy (2.14 mm RMSE) and the lowest combined error in global circumferential (2.86%) and radial (6.42%) strain compared to three deep learning baselines. In addition, our method is ~380× faster than the most accurate baseline. These results highlight the suitability of INR-based models for accurate and scalable analysis of myocardial strain in large CMR datasets.
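The core of such an approach is an MLP that maps a spatio-temporal coordinate plus a learned per-case latent code to a displacement vector. A minimal PyTorch sketch (layer sizes and the four-dimensional coordinate are assumptions for illustration, not the paper's architecture):

```python
import torch
import torch.nn as nn

class DisplacementINR(nn.Module):
    """MLP mapping a spatio-temporal coordinate plus a per-case latent
    code to a continuous 3D LV displacement vector."""
    def __init__(self, latent_dim=64, hidden=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(4 + latent_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 3),                     # (dx, dy, dz)
        )

    def forward(self, coords, latent):
        # coords: (N, 4) = (x, y, z, t); latent: (latent_dim,) learned per case
        z = latent.expand(coords.shape[0], -1)
        return self.net(torch.cat([coords, z], dim=-1))

# Displacement can be queried at arbitrary continuous points in a single
# forward pass, i.e. without inference-time optimisation.
model = DisplacementINR()
disp = model(torch.rand(1000, 4), torch.zeros(64))
```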

Binxu Li, Wei Peng, Mingjie Li, Ehsan Adeli, Kilian M. Pohl

arxiv preprint · Sep 10 2025
3D brain MRI studies often examine subtle morphometric differences between cohorts that are hard to detect visually. Given the high cost of MRI acquisition, these studies could greatly benefit from image synthesis, particularly counterfactual image generation, as seen in other domains such as computer vision. However, counterfactual models struggle to produce anatomically plausible MRIs due to the lack of explicit inductive biases to preserve fine-grained anatomical details. This shortcoming arises because the models are trained to optimize the overall appearance of the images (e.g., via cross-entropy) rather than to preserve subtle, yet medically relevant, local variations across subjects. To preserve subtle variations, we propose to explicitly integrate voxel-level anatomical constraints as a prior into a generative diffusion framework. Called the Probabilistic Causal Graph Model (PCGM), the approach captures anatomical constraints via a probabilistic graph module and translates those constraints into spatial binary masks of regions where subtle variations occur. The masks (encoded by a 3D extension of ControlNet) constrain a novel counterfactual denoising UNet, whose encodings are then transferred into high-quality brain MRIs via our 3D diffusion decoder. Extensive experiments on multiple datasets demonstrate that PCGM generates structural brain MRIs of higher quality than several baseline approaches. Furthermore, we show for the first time that brain measurements extracted from counterfactuals (generated by PCGM) replicate the subtle effects of a disease on cortical brain regions previously reported in the neuroscience literature. This achievement is an important milestone in the use of synthetic MRIs in studies investigating subtle morphological differences.
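A heavily simplified stand-in for the mask conditioning: the binary mask of regions where subtle variations occur enters the denoiser as an extra input channel. The paper instead encodes the mask with a 3D ControlNet extension; this sketch only illustrates the idea of spatially constraining a denoising network:

```python
import torch
import torch.nn as nn

class MaskConditionedDenoiser(nn.Module):
    """Toy 3D denoiser whose prediction is conditioned on a spatial binary
    mask by channel concatenation (simplification of PCGM's ControlNet path)."""
    def __init__(self, channels=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv3d(2, channels, 3, padding=1), nn.SiLU(),
            nn.Conv3d(channels, channels, 3, padding=1), nn.SiLU(),
            nn.Conv3d(channels, 1, 3, padding=1),
        )

    def forward(self, noisy_mri, mask):
        # noisy_mri, mask: (N, 1, D, H, W); predict the noise residual
        return self.net(torch.cat([noisy_mri, mask], dim=1))
```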

Muhammad Alberb, Helen Cheung, Anne Martel

arxiv preprint · Sep 10 2025
Colorectal cancer frequently metastasizes to the liver, significantly reducing long-term survival. While surgical resection is the only potentially curative treatment for colorectal liver metastasis (CRLM), patient outcomes vary widely depending on tumor characteristics along with clinical and genomic factors. Current prognostic models, often based on limited clinical or molecular features, lack sufficient predictive power, especially in multifocal CRLM cases. We present a fully automated framework for surgical outcome prediction from pre- and post-contrast MRI acquired before surgery. Our framework consists of a segmentation pipeline and a radiomics pipeline. The segmentation pipeline learns to segment the liver, tumors, and spleen from partially annotated data by leveraging promptable foundation models to complete missing labels. Also, we propose SAMONAI, a novel zero-shot 3D prompt propagation algorithm that leverages the Segment Anything Model to segment 3D regions of interest from a single point prompt, significantly improving our segmentation pipeline's accuracy and efficiency. The predicted pre- and post-contrast segmentations are then fed into our radiomics pipeline, which extracts features from each tumor and predicts survival using SurvAMINN, a novel autoencoder-based multiple instance neural network for survival analysis. SurvAMINN jointly learns dimensionality reduction and hazard prediction from right-censored survival data, focusing on the most aggressive tumors. Extensive evaluation on an institutional dataset comprising 227 patients demonstrates that our framework surpasses existing clinical and genomic biomarkers, delivering a C-index improvement exceeding 10%. Our results demonstrate the potential of integrating automated segmentation algorithms and radiomics-based survival analysis to deliver accurate, annotation-efficient, and interpretable outcome prediction in CRLM.
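The abstract leaves SAMONAI's internals open, but the general idea of propagating a single point prompt through a 3D volume can be sketched as slice-to-slice centroid hand-off; `segment_slice` below is an assumed callable wrapping a promptable 2D segmenter such as SAM, and the hand-off rule is illustrative:

```python
import numpy as np

def propagate_point_prompt(volume, seed_point, segment_slice, max_gap=2):
    """volume: (Z, H, W) array; seed_point: (z, y, x);
    segment_slice(image2d, (y, x)) -> binary mask (assumed callable)."""
    z0, y0, x0 = seed_point
    masks = {z0: segment_slice(volume[z0], (y0, x0))}
    for direction in (1, -1):                     # sweep up and down the stack
        z, prompt, misses = z0, (y0, x0), 0
        while 0 <= z + direction < volume.shape[0] and misses <= max_gap:
            z += direction
            mask = segment_slice(volume[z], prompt)
            if mask.sum() == 0:                   # tolerate a few empty slices
                misses += 1
                continue
            ys, xs = np.nonzero(mask)
            prompt = (int(ys.mean()), int(xs.mean()))  # centroid seeds next slice
            masks[z] = mask
            misses = 0
    return masks
```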

Felipe Álvarez Barrientos, Tomás Banduc, Isabeau Sirven, Francisco Sahli Costabal

arxiv preprint · Sep 10 2025
The contractile motion of the heart is strongly determined by the distribution of the fibers that constitute cardiac tissue. Strain analysis informed by fiber orientation makes it possible to describe several pathologies that are typically associated with impaired mechanics of the myocardium, such as cardiovascular disease. Several methods have been developed to estimate strain-derived metrics from traditional imaging techniques. However, the physical models underlying these methods do not include fiber mechanics, restricting their capacity to accurately explain cardiac function. In this work, we introduce WarpPINN-fibers, a physics-informed neural network framework to accurately obtain cardiac motion and strains enhanced by fiber information. We train our neural network to satisfy a hyper-elastic model and promote fiber contraction with the goal of predicting the deformation field of the heart from cine magnetic resonance images. For this purpose, we build a loss function composed of three terms: a data-similarity loss between the reference and the warped template images, a regularizer enforcing near-incompressibility of cardiac tissue, and a fiber-stretch penalization that controls strain in the direction of synthetically produced fibers. We show that our neural network improves on the earlier WarpPINN model and effectively controls fiber stretch in a synthetic phantom experiment. We then demonstrate that WarpPINN-fibers outperforms alternative methodologies in landmark tracking and strain curve prediction on a cine-MRI benchmark with a cohort of 15 healthy volunteers. We expect that our method will enable more precise quantification of cardiac strains through accurate deformation fields that are consistent with fiber physiology, without requiring imaging techniques more sophisticated than MRI.
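The three-term loss can be sketched compactly. Weights and the exact penalty forms below are assumptions for illustration; the fiber term uses the stretch λ_f = |F f| = sqrt(fᵀCf) along the fiber direction, with C = FᵀF:

```python
import torch

def warpinn_fibers_loss(ref, warped, F, fibers, w_inc=1.0, w_fib=0.1):
    """ref, warped: image tensors; F: (N, 3, 3) deformation gradients at
    sample points; fibers: (N, 3) unit fiber directions. Weights illustrative."""
    similarity = torch.mean((ref - warped) ** 2)           # data fidelity
    J = torch.linalg.det(F)
    incompressibility = torch.mean((J - 1.0) ** 2)         # near-incompressible tissue
    Ff = torch.einsum('nij,nj->ni', F, fibers)
    stretch = torch.linalg.norm(Ff, dim=-1)                # fiber stretch sqrt(f^T C f)
    fiber_penalty = torch.mean((stretch - 1.0) ** 2)       # control strain along fibers
    return similarity + w_inc * incompressibility + w_fib * fiber_penalty
```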

Zhou Y, Xu H, Jiang W, Zhang J, Chen S, Yang S, Xiang H, Hu W, Qiao X

pubmed papers · Sep 10 2025
High-intensity focused ultrasound (HIFU) is a non-invasive technique for treating uterine fibroids, and accurate prediction of its therapeutic efficacy depends on precise quantification of intratumoral heterogeneity. However, existing methods remain limited in characterizing intratumoral heterogeneity, which restricts the accuracy of efficacy prediction. To this end, this study proposes a deep learning model with parallel ResNet and ViT branches (Res-ViT) to test whether the synergistic characterization of local texture and global spatial features can improve the accuracy of HIFU efficacy prediction. The study enrolled patients with uterine fibroids who underwent HIFU treatment at Center A (training set: N = 272; internal validation set: N = 92) and Center B (external test set: N = 125). Preoperative T2-weighted magnetic resonance images were used to develop the Res-ViT model for predicting an immediate post-treatment non-perfused volume ratio (NPVR) ≥ 80%. Model performance was evaluated using the area under the receiver operating characteristic curve (AUC) and compared against independent radiomics, ResNet-18, and ViT models. The Res-ViT model outperformed all standalone models on both the internal (AUC = 0.895, 95% CI: 0.857-0.987) and external (AUC = 0.853, 95% CI: 0.776-0.921) test sets. SHAP analysis identified the ResNet branch as the predominant decision-making component (feature contribution: 55.4%). Gradient-weighted Class Activation Mapping (Grad-CAM) visualizations show that the key regions attended to by Res-ViT overlap closely with postoperative non-ablated fibroid tissue. The proposed Res-ViT model demonstrates that fusing local and global features is an effective way to quantify uterine fibroid heterogeneity, significantly enhancing the accuracy of HIFU efficacy prediction.
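A minimal PyTorch sketch of the parallel local/global design: a ResNet branch for texture features and a small Transformer branch over patch embeddings for global context, fused into a single NPVR ≥ 80% logit. All sizes, depths, and the fusion-by-concatenation are assumptions, not the authors' configuration:

```python
import torch
import torch.nn as nn
from torchvision.models import resnet18

class ResViT(nn.Module):
    """Parallel local (ResNet) and global (Transformer) branches with
    concatenation fusion; sizes are illustrative."""
    def __init__(self, img_size=224, patch=16, dim=256):
        super().__init__()
        self.cnn = resnet18(weights=None)
        self.cnn.fc = nn.Identity()                     # 512-d local features
        self.patch_embed = nn.Conv2d(3, dim, patch, stride=patch)
        n_patches = (img_size // patch) ** 2
        self.pos = nn.Parameter(torch.zeros(1, n_patches, dim))
        layer = nn.TransformerEncoderLayer(dim, nhead=8, batch_first=True)
        self.vit = nn.TransformerEncoder(layer, num_layers=4)
        self.head = nn.Linear(512 + dim, 1)             # fused binary logit

    def forward(self, x):
        local = self.cnn(x)                             # (N, 512)
        tokens = self.patch_embed(x).flatten(2).transpose(1, 2) + self.pos
        global_feat = self.vit(tokens).mean(dim=1)      # (N, dim)
        return self.head(torch.cat([local, global_feat], dim=1))
```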