Page 22 of 343 · 3422 results

Few-shot learning for highly accelerated 3D time-of-flight MRA reconstruction.

Li H, Chiew M, Dragonu I, Jezzard P, Okell TW

PubMed · Sep 10 2025
To develop a deep learning-based reconstruction method for highly accelerated 3D time-of-flight MRA (TOF-MRA) that achieves high-quality reconstruction with robust generalization using extremely limited acquired raw data, addressing the challenge of time-consuming acquisition of high-resolution, whole-head angiograms. A novel few-shot learning-based reconstruction framework is proposed, featuring a 3D variational network specifically designed for 3D TOF-MRA that is pre-trained on simulated complex-valued, multi-coil raw k-space datasets synthesized from diverse open-source magnitude images and fine-tuned using only two single-slab experimentally acquired datasets. The proposed approach was evaluated against existing methods on acquired retrospectively undersampled in vivo k-space data from five healthy volunteers and on prospectively undersampled data from two additional subjects. The proposed method achieved superior reconstruction performance on experimentally acquired in vivo data over comparison methods, preserving most fine vessels with minimal artifacts at up to eight-fold acceleration. Compared to other simulation techniques, the proposed method generated more realistic raw k-space data for 3D TOF-MRA. Consistently high-quality reconstructions were also observed on prospectively undersampled data. By leveraging few-shot learning, the proposed method enabled highly accelerated 3D TOF-MRA relying on minimal experimentally acquired data, achieving promising results on both retrospective and prospective in vivo data while outperforming existing methods. Given the challenges of acquiring and sharing large raw k-space datasets, this holds significant promise for advancing research and clinical applications in high-resolution, whole-head 3D TOF-MRA imaging.
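The simulation step described above, synthesizing multi-coil raw k-space from magnitude images, can be sketched in miniature. This is a toy 1D version under assumed coil sensitivity profiles and a two-fold sampling mask; the naive DFT stands in for the FFT applied to real 3D data, and none of the values below come from the paper:

```python
import cmath

def dft(x):
    """Naive 1D discrete Fourier transform (stand-in for the FFT on real data)."""
    n = len(x)
    return [sum(x[t] * cmath.exp(-2j * cmath.pi * k * t / n) for t in range(n))
            for k in range(n)]

def simulate_multicoil_kspace(magnitude, sensitivities, mask):
    """Synthesize undersampled multi-coil k-space from a magnitude profile.

    magnitude:     real voxel intensities along one spatial line
    sensitivities: per-coil complex sensitivity profiles (same length)
    mask:          1/0 sampling mask applied to each coil's k-space
    """
    kspace = []
    for sens in sensitivities:
        coil_img = [m * s for m, s in zip(magnitude, sens)]   # weight by coil
        coil_k = dft(coil_img)                                # go to k-space
        kspace.append([k * m for k, m in zip(coil_k, mask)])  # undersample
    return kspace

# Toy example: 8-voxel "image", 2 hypothetical coils, 2x undersampling.
magnitude = [0, 1, 2, 4, 4, 2, 1, 0]
sens = [[1.0] * 8, [0.5 + 0.5j] * 8]
mask = [1, 0, 1, 0, 1, 0, 1, 0]
kspace = simulate_multicoil_kspace(magnitude, sens, mask)
```

A reconstruction network is then trained to invert this forward model; the few-shot step fine-tunes on the small amount of truly acquired k-space.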

Deep-Learning System for Automatic Measurement of the Femorotibial Rotational Angle on Lower-Extremity Computed Tomography.

Lee SW, Lee GP, Yoon I, Kim YJ, Kim KG

PubMed · Sep 10 2025
To develop and validate a deep-learning-based algorithm for automatic identification of anatomical landmarks and calculation of femoral and tibial version angles (FTT angles) on lower-extremity CT scans. In this IRB-approved, retrospective study, lower-extremity CT scans from 270 adult patients (median age, 69 years; female to male ratio, 235:35) were analyzed. CT data were preprocessed using contrast-limited adaptive histogram equalization and RGB superposition to enhance tissue boundary distinction. The Attention U-Net model was trained using the gold standard of manual labeling and landmark drawing, enabling it to segment bones, detect landmarks, create lines, and automatically measure the femoral version and tibial torsion angles. The model's performance was validated against manual segmentations by a musculoskeletal radiologist using a test dataset. The segmentation model demonstrated 92.16% ± 0.02 sensitivity, 99.96% ± <0.01 specificity, and 2.14 ± 2.39 HD95, with a Dice similarity coefficient (DSC) of 93.12% ± 0.01. Automatic measurements of femoral and tibial torsion angles showed good correlation with radiologists' measurements, with correlation coefficients of 0.64 for femoral and 0.54 for tibial angles (p < 0.05). Automated segmentation significantly reduced the measurement time per leg compared to manual methods (57.5 ± 8.3 s vs. 79.6 ± 15.9 s, p < 0.05). We developed a method to automate the measurement of femorotibial rotation in continuous axial CT scans of patients with osteoarthritis (OA) using a deep-learning approach. This method has the potential to expedite the analysis of patient data in busy clinical settings.
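The final measurement step, turning detected landmarks into a version/torsion angle, reduces to the acute angle between two landmark-defined axes in the axial plane. A minimal sketch with hypothetical pixel coordinates (not from the study):

```python
import math

def axis_angle_deg(p1, p2, q1, q2):
    """Acute angle in degrees between the axis p1->p2 and the axis q1->q2."""
    a = math.atan2(p2[1] - p1[1], p2[0] - p1[0])
    b = math.atan2(q2[1] - q1[1], q2[0] - q1[0])
    deg = abs(math.degrees(a - b)) % 180.0
    return min(deg, 180.0 - deg)  # axes have no direction, so report <= 90 deg

# Hypothetical landmarks for a femoral neck axis and a posterior condylar line:
neck = ((10.0, 20.0), (40.0, 35.0))
condyles = ((5.0, 60.0), (55.0, 60.0))
version = axis_angle_deg(*neck, *condyles)
```

In the paper's pipeline the landmark coordinates come from the Attention U-Net rather than being supplied by hand.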

Implementing a Resource-Light and Low-Code Large Language Model System for Information Extraction from Mammography Reports: A Pilot Study.

Dennstädt F, Fauser S, Cihoric N, Schmerder M, Lombardo P, Cereghetti GM, von Däniken S, Minder T, Meyer J, Chiang L, Gaio R, Lerch L, Filchenko I, Reichenpfader D, Denecke K, Vojvodic C, Tatalovic I, Sander A, Hastings J, Aebersold DM, von Tengg-Kobligk H, Nairz K

PubMed · Sep 10 2025
Large language models (LLMs) have been successfully used for data extraction from free-text radiology reports. Most current studies were conducted with LLMs accessed via an application programming interface (API). We evaluated the feasibility of using open-source LLMs, deployed on limited local hardware resources for data extraction from free-text mammography reports, using a common data element (CDE)-based structure. Seventy-nine CDEs were defined by an interdisciplinary expert panel, reflecting real-world reporting practice. Sixty-one reports were classified by two independent researchers to establish ground truth. Five different open-source LLMs deployable on a single GPU were used for data extraction using the general-classifier Python package. Extractions were performed for five different prompt approaches with calculation of overall accuracy, micro-recall and micro-F1. Additional analyses were conducted using thresholds for the relative probability of classifications. High inter-rater agreement was observed between manual classifiers (Cohen's kappa 0.83). Using default prompts, the LLMs achieved accuracies of 59.2-72.9%. Chain-of-thought prompting yielded mixed results, while few-shot prompting led to decreased accuracy. Adaptation of the default prompts to precisely define classification tasks improved performance for all models, with accuracies of 64.7-85.3%. Setting certainty thresholds further improved accuracies to > 90% but reduced the coverage rate to < 50%. Locally deployed open-source LLMs can effectively extract information from mammography reports, maintaining compatibility with limited computational resources. Selection and evaluation of the model and prompting strategy are critical. Clear, task-specific instructions appear crucial for high performance. Using a CDE-based framework provides clear semantics and structure for the data extraction.
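The certainty-threshold analysis above is a plain accuracy/coverage trade-off: discard classifications whose probability is below the threshold, then score only what remains. A minimal sketch with made-up report classifications and probabilities:

```python
def threshold_metrics(predictions, threshold):
    """predictions: list of (predicted_label, true_label, probability).
    Keep classifications whose probability clears the threshold, then report
    accuracy on the kept subset and the coverage rate."""
    kept = [(p, t) for p, t, prob in predictions if prob >= threshold]
    coverage = len(kept) / len(predictions)
    accuracy = sum(p == t for p, t in kept) / len(kept) if kept else 0.0
    return accuracy, coverage

# Toy extraction results: (LLM answer, ground truth, classifier probability)
preds = [("BI-RADS 4", "BI-RADS 4", 0.97),
         ("left", "left", 0.92),
         ("mass", "calcification", 0.55),
         ("right", "right", 0.61)]
acc_all, cov_all = threshold_metrics(preds, 0.0)  # no threshold
acc_hi, cov_hi = threshold_metrics(preds, 0.9)    # certainty threshold
```

This mirrors the paper's observation: a high threshold pushes accuracy up while coverage drops, so uncovered items must fall back to manual review.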

Non-invasive prediction of invasive lung adenocarcinoma and high-risk histopathological characteristics in resectable early-stage adenocarcinoma by [18F]FDG PET/CT radiomics-based machine learning models: a prospective cohort study.

Cao X, Lv Z, Li Y, Li M, Hu Y, Liang M, Deng J, Tan X, Wang S, Geng W, Xu J, Luo P, Zhou M, Xiao W, Guo M, Liu J, Huang Q, Hu S, Sun Y, Lan X, Jin Y

PubMed · Sep 10 2025
Precise preoperative discrimination of invasive lung adenocarcinoma (IA) from preinvasive lesions (adenocarcinoma in situ [AIS]/minimally invasive adenocarcinoma [MIA]) and prediction of high-risk histopathological features are critical for optimizing resection strategies in early-stage lung adenocarcinoma (LUAD). In this multicenter study, 813 LUAD patients (tumors ≤3 cm) formed the training cohort. A total of 1,709 radiomic features were extracted from the PET/CT images. Feature selection was performed using the max-relevance and min-redundancy (mRMR) algorithm and least absolute shrinkage and selection operator (LASSO). Hybrid machine learning models integrating [18F]FDG PET/CT radiomics and clinical-radiological features were developed using H2O.ai AutoML. Models were validated in a prospective internal cohort (N = 256, 2021-2022) and external multicenter cohort (N = 418). Performance was assessed via AUC, calibration, decision curve analysis (DCA) and survival assessment. The hybrid model achieved AUCs of 0.93 (95% CI: 0.90-0.96) for distinguishing IA from AIS/MIA (internal test) and 0.92 (0.90-0.95) in external testing. For predicting high-risk histopathological features (grade-III, lymphatic/pleural/vascular/nerve invasion, STAS), AUCs were 0.82 (0.77-0.88) and 0.85 (0.81-0.89) in internal/external sets. DCA confirmed superior net benefit over the CT model. The model stratified progression-free (P = 0.002) and overall survival (P = 0.017) in the TCIA cohort. PET/CT radiomics-based models enable accurate non-invasive prediction of invasiveness and high-risk pathology in early-stage LUAD, guiding optimal surgical resection.
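The mRMR step can be sketched as greedy relevance-minus-redundancy selection. This toy version uses Pearson correlation on invented feature vectors; the study's actual implementation, feature values, and scoring function are not shown here:

```python
def pearson(x, y):
    """Pearson correlation of two equal-length numeric sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x) ** 0.5
    vy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (vx * vy) if vx and vy else 0.0

def mrmr_select(features, target, k):
    """Greedy mRMR: at each step pick the feature maximizing
    relevance(feature, target) minus mean redundancy with selected features."""
    selected, remaining = [], list(range(len(features)))
    while remaining and len(selected) < k:
        def score(i):
            rel = abs(pearson(features[i], target))
            red = (sum(abs(pearson(features[i], features[j])) for j in selected)
                   / len(selected)) if selected else 0.0
            return rel - red
        best = max(remaining, key=score)
        selected.append(best)
        remaining.remove(best)
    return selected

# Toy radiomic features: f1 duplicates f0; f2 is weaker but non-redundant.
target = [0, 0, 1, 1]
feats = [[1, 2, 3, 4],
         [1, 2, 3, 4],
         [0.6, 0.2, 0.8, 0.4]]
picked = mrmr_select(feats, target, 2)
```

The duplicate feature is skipped despite its high relevance, which is exactly the redundancy penalty mRMR is built to apply; LASSO would then shrink the surviving set further.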

CLAPS: A CLIP-Unified Auto-Prompt Segmentation for Multi-Modal Retinal Imaging

Zhihao Zhao, Yinzheng Zhao, Junjie Yang, Xiangtong Yao, Quanmin Liang, Shahrooz Faghihroohi, Kai Huang, Nassir Navab, M. Ali Nasseri

arxiv logopreprintSep 10 2025
Recent advancements in foundation models, such as the Segment Anything Model (SAM), have significantly impacted medical image segmentation, especially in retinal imaging, where precise segmentation is vital for diagnosis. Despite this progress, current methods face critical challenges: 1) modality ambiguity in textual disease descriptions, 2) a continued reliance on manual prompting for SAM-based workflows, and 3) a lack of a unified framework, with most methods being modality- and task-specific. To overcome these hurdles, we propose CLIP-unified Auto-Prompt Segmentation (CLAPS), a novel method for unified segmentation across diverse tasks and modalities in retinal imaging. Our approach begins by pre-training a CLIP-based image encoder on a large, multi-modal retinal dataset to handle data scarcity and distribution imbalance. We then leverage GroundingDINO to automatically generate spatial bounding box prompts by detecting local lesions. To unify tasks and resolve ambiguity, we use text prompts enhanced with a unique "modality signature" for each imaging modality. Ultimately, these automated textual and spatial prompts guide SAM to execute precise segmentation, creating a fully automated and unified pipeline. Extensive experiments on 12 diverse datasets across 11 critical segmentation categories show that CLAPS achieves performance on par with specialized expert models while surpassing existing benchmarks across most metrics, demonstrating its broad generalizability as a foundation model.
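The "modality signature" idea amounts to prefixing each text prompt with a modality tag so that identical disease terms are disambiguated across imaging modalities. A minimal sketch with hypothetical signature strings (the paper's exact tokens are not reproduced here):

```python
# Hypothetical modality signatures; CLAPS's actual tokens may differ.
MODALITY_SIGNATURES = {
    "fundus": "[modality: color fundus photography]",
    "oct": "[modality: optical coherence tomography]",
}

def build_text_prompt(modality, finding):
    """Prefix the finding description with a modality signature so the same
    text label (e.g. 'drusen') means one thing per imaging modality."""
    sig = MODALITY_SIGNATURES[modality]
    return f"{sig} segment {finding}"

prompt = build_text_prompt("oct", "drusen")
```

In the full pipeline this text prompt is paired with the GroundingDINO bounding boxes before both are handed to SAM.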

RoentMod: A Synthetic Chest X-Ray Modification Model to Identify and Correct Image Interpretation Model Shortcuts

Lauren H. Cooke, Matthias Jung, Jan M. Brendel, Nora M. Kerkovits, Borek Foldyna, Michael T. Lu, Vineet K. Raghu

arXiv preprint · Sep 10 2025
Chest radiographs (CXRs) are among the most common tests in medicine. Automated image interpretation may reduce radiologists' workload and expand access to diagnostic expertise. Deep learning multi-task and foundation models have shown strong performance for CXR interpretation but are vulnerable to shortcut learning, where models rely on spurious and off-target correlations rather than clinically relevant features to make decisions. We introduce RoentMod, a counterfactual image editing framework that generates anatomically realistic CXRs with user-specified, synthetic pathology while preserving unrelated anatomical features of the original scan. RoentMod combines an open-source medical image generator (RoentGen) with an image-to-image modification model without requiring retraining. In reader studies with board-certified radiologists and radiology residents, RoentMod-produced images appeared realistic in 93% of cases, correctly incorporated the specified finding in 89-99% of cases, and preserved native anatomy comparable to real follow-up CXRs. Using RoentMod, we demonstrate that state-of-the-art multi-task and foundation models frequently exploit off-target pathology as shortcuts, limiting their specificity. Incorporating RoentMod-generated counterfactual images during training mitigated this vulnerability, improving model discrimination across multiple pathologies by 3-19% AUC in internal validation and by 1-11% for 5 out of 6 tested pathologies in external testing. These findings establish RoentMod as a broadly applicable tool for probing and correcting shortcut learning in medical AI. By enabling controlled counterfactual interventions, RoentMod enhances the robustness and interpretability of CXR interpretation models and provides a generalizable strategy for improving foundation models in medical imaging.
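Probing shortcut reliance with counterfactuals can be sketched as measuring how a model's score for one finding moves when an unrelated finding is synthetically added and nothing else changes. The stand-in "model" below is a toy built to leak effusion evidence into its cardiomegaly score; it is not RoentMod's evaluation code:

```python
def shortcut_shift(model, originals, counterfactuals, target_finding):
    """Mean change in the model's probability for `target_finding` when an
    off-target pathology is synthetically added to each image. A large shift
    suggests the model uses the off-target finding as a shortcut."""
    shifts = [model(cf)[target_finding] - model(orig)[target_finding]
              for orig, cf in zip(originals, counterfactuals)]
    return sum(shifts) / len(shifts)

# Toy model that (wrongly) raises its cardiomegaly score when effusion appears.
def toy_model(image):
    return {"cardiomegaly": 0.3 + 0.4 * image["has_effusion"]}

originals = [{"has_effusion": 0}, {"has_effusion": 0}]
edited = [{"has_effusion": 1}, {"has_effusion": 1}]  # effusion added, heart unchanged
shift = shortcut_shift(toy_model, originals, edited, "cardiomegaly")
```

A well-behaved model would show a shift near zero; the nonzero shift here is the signature of a shortcut, and the paper's remedy is to mix such counterfactual images into training.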

Calibration and Uncertainty for multiRater Volume Assessment in multiorgan Segmentation (CURVAS) challenge results.

Riera-Marín M, O K S, Rodríguez-Comas J, May MS, Pan Z, Zhou X, Liang X, Erick FX, Prenner A, Hémon C, Boussot V, Dillenseger JL, Nunes JC, Qayyum A, Mazher M, Niederer SA, Kushibar K, Martín-Isla C, Radeva P, Lekadir K, Barfoot T, Garcia Peraza Herrera LC, Glocker B, Vercauteren T, Gago L, Englemann J, Kleiss JM, Aubanell A, Antolin A, García-López J, González Ballester MA, Galdrán A

PubMed · Sep 10 2025
Deep learning (DL) has become the dominant approach for medical image segmentation, yet ensuring the reliability and clinical applicability of these models requires addressing key challenges such as annotation variability, calibration, and uncertainty estimation. This is why we created the Calibration and Uncertainty for multiRater Volume Assessment in multiorgan Segmentation (CURVAS) challenge, which highlights the critical role of multiple annotators in establishing a more comprehensive ground truth, emphasizing that segmentation is inherently subjective and that leveraging inter-annotator variability is essential for robust model evaluation. Seven teams participated in the challenge, submitting a variety of DL models evaluated using metrics such as Dice Similarity Coefficient (DSC), Expected Calibration Error (ECE), and Continuous Ranked Probability Score (CRPS). By incorporating consensus and dissensus ground truth, we assess how DL models handle uncertainty and whether their confidence estimates align with true segmentation performance. Our findings reinforce the importance of well-calibrated models, as better calibration is strongly correlated with the quality of the results. Furthermore, we demonstrate that segmentation models trained on diverse datasets and enriched with pre-trained knowledge exhibit greater robustness, particularly in cases deviating from standard anatomical structures. Notably, the best-performing models achieved high DSC and well-calibrated uncertainty estimates. This work underscores the need for multi-annotator ground truth, thorough calibration assessments, and uncertainty-aware evaluations to develop trustworthy and clinically reliable DL-based medical image segmentation models.
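Expected Calibration Error, one of the challenge metrics, is conventionally computed by binning predictions by confidence and taking the coverage-weighted gap between accuracy and mean confidence per bin. A minimal sketch on toy predictions (the bin count and data are illustrative, not the challenge's settings):

```python
def expected_calibration_error(confidences, correct, n_bins=10):
    """Binned ECE: weighted mean |accuracy - mean confidence| over bins."""
    n = len(confidences)
    ece = 0.0
    for b in range(n_bins):
        lo, hi = b / n_bins, (b + 1) / n_bins
        idx = [i for i, c in enumerate(confidences)
               if (lo < c <= hi) or (b == 0 and c == 0.0)]
        if not idx:
            continue  # empty bins contribute nothing
        acc = sum(correct[i] for i in idx) / len(idx)
        conf = sum(confidences[i] for i in idx) / len(idx)
        ece += (len(idx) / n) * abs(acc - conf)
    return ece

# Toy voxel-level predictions: confidence of the predicted class + correctness.
conf = [0.95, 0.95, 0.65, 0.65]
hit = [1, 1, 1, 0]
ece = expected_calibration_error(conf, hit, n_bins=10)
```

A perfectly calibrated model would land every bin's accuracy on its mean confidence, driving the ECE to zero; the correlation the challenge reports is between this gap and segmentation quality.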

Implicit Neural Representations of Intramyocardial Motion and Strain

Andrew Bell, Yan Kit Choi, Steffen E Petersen, Andrew King, Muhummad Sohaib Nazir, Alistair A Young

arXiv preprint · Sep 10 2025
Automatic quantification of intramyocardial motion and strain from tagging MRI remains an important but challenging task. We propose a method using implicit neural representations (INRs), conditioned on learned latent codes, to predict continuous left ventricular (LV) displacement -- without requiring inference-time optimisation. Evaluated on 452 UK Biobank test cases, our method achieved the best tracking accuracy (2.14 mm RMSE) and the lowest combined error in global circumferential (2.86%) and radial (6.42%) strain compared to three deep learning baselines. In addition, our method is ~380× faster than the most accurate baseline. These results highlight the suitability of INR-based models for accurate and scalable analysis of myocardial strain in large CMR datasets.
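Global circumferential strain, one of the reported quantities, can be illustrated as the percent change in myocardial contour length between a reference frame and a tracked frame. This toy sketch supplies the displacements by hand (a uniform contraction); in the paper they come from the INR:

```python
import math

def contour_length(points):
    """Perimeter of a closed contour given as (x, y) points."""
    return sum(math.dist(points[i], points[(i + 1) % len(points)])
               for i in range(len(points)))

def global_circumferential_strain(ref_points, tracked_points):
    """Percent change in contour length relative to the reference frame."""
    l0 = contour_length(ref_points)
    l1 = contour_length(tracked_points)
    return 100.0 * (l1 - l0) / l0

# Toy mid-ventricular contour: a 30 mm circle contracting uniformly by 20%.
ref = [(30 * math.cos(a), 30 * math.sin(a))
       for a in (2 * math.pi * i / 72 for i in range(72))]
systole = [(0.8 * x, 0.8 * y) for x, y in ref]
gcs = global_circumferential_strain(ref, systole)
```

Healthy systolic contraction shortens the contour, so the strain comes out negative; the paper's 2.86% figure is the error in this quantity against reference tracking, not the strain itself.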

A Lightweight CNN Approach for Hand Gesture Recognition via GAF Encoding of A-mode Ultrasound Signals.

Shangguan Q, Lian Y, Liao Z, Chen J, Song Y, Yao L, Jiang C, Lu Z, Lin Z

PubMed · Sep 10 2025
Hand gesture recognition (HGR) is a key technology in human-computer interaction and human communication. This paper presents a lightweight, parameter-free attention convolutional neural network (LPA-CNN) approach leveraging Gramian Angular Field (GAF) transformation of A-mode ultrasound signals for HGR. First, this paper maps 1-dimensional (1D) A-mode ultrasound signals, collected from the forearm muscles of 10 healthy participants, into 2-dimensional (2D) images. Second, GAF is selected owing to its higher sensitivity against Markov Transition Field (MTF) and Recurrence Plot (RP) in HGR. Third, a novel LPA-CNN consisting of four components, i.e., a convolution-pooling block, an attention mechanism, an inverted residual block, and a classification block, is proposed. Among them, the convolution-pooling block consists of convolutional and pooling layers, the attention mechanism is applied to generate 3D weights, the inverted residual block consists of multiple channel shuffling units, and the classification block is performed through fully connected layers. Fourth, comparative experiments were conducted on GoogLeNet, MobileNet, and LPA-CNN to validate the effectiveness of the proposed method. Experimental results show that compared to GoogLeNet and MobileNet, LPA-CNN has a smaller model size and better recognition performance, achieving a classification accuracy of 0.98 ± 0.02. This paper achieves efficient and high-accuracy HGR by encoding A-mode ultrasound signals into 2D images and integrating the LPA-CNN model, providing a new technological approach for HGR based on ultrasonic signals.
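The GAF encoding itself is compact: rescale the 1D signal to [-1, 1], map each sample to an angle via arccos, and form the Gramian of angular sums (the summation variant, GASF). A minimal sketch on a made-up A-mode envelope:

```python
import math

def gramian_angular_field(signal):
    """Gramian Angular Summation Field: rescale to [-1, 1], map samples to
    angles phi = arccos(x), then G[i][j] = cos(phi_i + phi_j)."""
    lo, hi = min(signal), max(signal)
    scaled = [2.0 * (v - lo) / (hi - lo) - 1.0 for v in signal]
    phi = [math.acos(max(-1.0, min(1.0, v))) for v in scaled]  # clamp rounding
    return [[math.cos(pi_ + pj) for pj in phi] for pi_ in phi]

# Toy A-mode echo envelope (amplitudes along depth) -> 2D image for the CNN.
signal = [0.0, 0.3, 1.0, 0.6, 0.2]
gaf = gramian_angular_field(signal)
```

The resulting symmetric matrix preserves temporal order along its diagonal, which is what lets a 2D CNN pick up patterns a 1D model would have to learn differently.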

RetiGen: Framework leveraging domain generalization and test-time adaptation for multi-view retinal diagnostics.

Zhang G, Chen Z, Huo J, do Rio JN, Komninos C, Liu Y, Sparks R, Ourselin S, Bergeles C, Jackson TL

PubMed · Sep 10 2025
Domain generalization techniques involve training a model on one set of domains and evaluating its performance on different, unseen domains. In contrast, test-time adaptation optimizes the model specifically for the target domain during inference. Both approaches improve diagnostic accuracy in medical imaging models. However, no research to date has leveraged the advantages of both approaches in an end-to-end fashion. Our paper introduces RetiGen, a test-time optimization framework designed to be integrated with existing domain generalization approaches. With an emphasis on the ophthalmic imaging domain, RetiGen leverages unlabeled multi-view color fundus photographs-a critical optical technology in retinal diagnostics. By utilizing information from multiple viewing angles, our approach significantly enhances the robustness and accuracy of machine learning models when applied across different domains. By integrating class balancing, test-time adaptation, and a multi-view optimization strategy, RetiGen effectively addresses the persistent issue of domain shift, which often hinders the performance of imaging models. Experimental results demonstrate that our method outperforms state-of-the-art techniques in both domain generalization and test-time optimization. Specifically, RetiGen improves generalizability on the MFIDDR dataset, raising the AUC from 0.751 to 0.872, a 0.121 improvement. Similarly, on the DRTiD dataset, the AUC increased from 0.794 to 0.879, a 0.085 improvement. The code for RetiGen is publicly available at https://github.com/RViMLab/RetiGen.
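The simplest piece of a multi-view strategy can be sketched as averaging per-view class probabilities into one prediction per eye. RetiGen's actual optimization is more involved (class balancing plus test-time adaptation); this is only an illustrative stand-in:

```python
def fuse_multiview(view_probs):
    """Average per-view class probabilities into one prediction per eye."""
    n_views = len(view_probs)
    n_classes = len(view_probs[0])
    fused = [sum(v[c] for v in view_probs) / n_views for c in range(n_classes)]
    return fused, max(range(n_classes), key=fused.__getitem__)

# Toy case: three fundus views of the same eye; two views favor class 1.
views = [[0.2, 0.8], [0.3, 0.7], [0.6, 0.4]]
probs, label = fuse_multiview(views)
```

Fusing views lets a confident majority override a single poor-quality view, which is one reason multi-view photographs help under domain shift.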