Page 5 of 19183 results

CUTE-MRI: Conformalized Uncertainty-based framework for Time-adaptivE MRI

Paul Fischer, Jan Nikolas Morshuis, Thomas Küstner, Christian Baumgartner

arXiv preprint · Aug 20, 2025
Magnetic Resonance Imaging (MRI) offers unparalleled soft-tissue contrast but is fundamentally limited by long acquisition times. While deep learning-based accelerated MRI can dramatically shorten scan times, reconstruction from undersampled data introduces ambiguity: the inverse problem is ill-posed, with infinitely many possible solutions, and this uncertainty propagates to downstream clinical tasks. This uncertainty is usually ignored during the acquisition process, as acceleration factors are often fixed a priori, resulting in scans that are either unnecessarily long or of insufficient quality for a given clinical endpoint. This work introduces a dynamic, uncertainty-aware acquisition framework that adjusts scan time on a per-subject basis. Our method leverages a probabilistic reconstruction model to estimate image uncertainty, which is then propagated through a full analysis pipeline to a quantitative metric of interest (e.g., patellar cartilage volume or cardiac ejection fraction). We use conformal prediction to transform this uncertainty into a rigorous, calibrated confidence interval for the metric. During acquisition, the system iteratively samples k-space, updates the reconstruction, and evaluates the confidence interval. The scan terminates automatically once the uncertainty meets a user-predefined precision target. We validate our framework on both knee and cardiac MRI datasets. Our results demonstrate that this adaptive approach reduces scan times compared to fixed protocols while providing formal statistical guarantees on the precision of the final image. This framework moves beyond fixed acceleration factors, enabling patient-specific acquisitions that balance scan efficiency with diagnostic confidence, a critical step towards personalized and resource-efficient MRI.
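The stopping rule sketched in this abstract (calibrate an interval with conformal prediction, scan until it is tight enough) can be illustrated in a few lines. This is a minimal split-conformal sketch, not the authors' code; the function names and the use of absolute metric errors as conformity scores are illustrative assumptions:

```python
import numpy as np

def conformal_halfwidth(cal_errors, alpha=0.1):
    """Split-conformal half-width from calibration-set |metric errors|.

    cal_errors: absolute errors of the metric estimate on a held-out
    calibration set. Returns the half-width giving (1 - alpha)
    marginal coverage for a future subject's metric."""
    n = len(cal_errors)
    # finite-sample-corrected quantile level ceil((n+1)(1-alpha))/n
    q = np.ceil((n + 1) * (1 - alpha)) / n
    return np.quantile(np.sort(cal_errors), min(q, 1.0))

def should_stop(halfwidth, precision_target):
    """Terminate acquisition once the calibrated interval is tight enough."""
    return 2.0 * halfwidth <= precision_target
```

In the paper's setting, `cal_errors` would come from calibration subjects with reference metric values, and the k-space sampling loop would call `should_stop` after each reconstruction update.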

Development and validation of 3D super-resolution convolutional neural network for ¹⁸F-FDG-PET images.

Endo H, Hirata K, Magota K, Yoshimura T, Katoh C, Kudo K

PubMed · Aug 19, 2025
Positron emission tomography (PET) is a valuable tool for cancer diagnosis but generally has a lower spatial resolution compared to computed tomography (CT) or magnetic resonance imaging (MRI). High-resolution PET scanners that use silicon photomultipliers and time-of-flight measurements are expensive. Therefore, cost-effective software-based super-resolution methods are required. This study proposes a novel approach for enhancing whole-body PET image resolution by applying a 2.5-dimensional Super-Resolution Convolutional Neural Network (2.5D-SRCNN) combined with logarithmic transformation preprocessing. This method aims to improve image quality and maintain quantitative accuracy, particularly for standardized uptake value measurements, while addressing the challenges of providing a memory-efficient alternative to full three-dimensional processing and managing the wide dynamic range of tracer uptake in PET images. We analyzed data from 90 patients who underwent whole-body FDG-PET/CT examinations and reconstructed low-resolution slices with a voxel size of 4 × 4 × 4 mm and corresponding high-resolution (HR) slices with a voxel size of 2 × 2 × 2 mm. The proposed 2.5D-SRCNN model, based on the conventional 2D-SRCNN structure, incorporates information from adjacent slices to generate a high-resolution output. Logarithmic transformation of the voxel values was applied to manage the large dynamic range caused by physiological tracer accumulation in the bladder. Performance was assessed using the peak signal-to-noise ratio (PSNR) and structural similarity index (SSIM). The quantitative accuracy of standardized uptake values (SUV) was validated using a phantom study. The results demonstrated that the 2.5D-SRCNN with logarithmic transformation significantly outperformed the conventional 2D-SRCNN in terms of PSNR and SSIM (p < 0.0001). The proposed method also showed an improved depiction of small spheres in the phantom while maintaining the accuracy of the SUV.
Our proposed method for whole-body PET images using a super-resolution model with the 2.5D approach and logarithmic transformation may be effective in generating super-resolution images with a lower spatial error and better quantitative accuracy. The online version contains supplementary material available at 10.1186/s40658-025-00791-y.
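The logarithmic preprocessing and 2.5D slice stacking described above are simple to reproduce in outline. The sketch below is illustrative (the `eps` scaling constant and function names are assumptions, not taken from the paper):

```python
import numpy as np

def log_compress(volume, eps=1e-3):
    """Compress PET dynamic range (e.g. bladder hot spots) before training."""
    return np.log1p(volume / eps)

def log_expand(compressed, eps=1e-3):
    """Invert the compression on network output to recover SUV-scale values."""
    return eps * np.expm1(compressed)

def make_25d_inputs(volume):
    """Stack each axial slice with its two neighbours -> (n-2, 3, H, W),
    giving the network limited through-plane context at 2D memory cost."""
    return np.stack([volume[i - 1:i + 2] for i in range(1, volume.shape[0] - 1)])
```

The round trip `log_expand(log_compress(v))` is exact up to floating point, which is what keeps SUV quantification intact after super-resolution.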

Reproducible meningioma grading across multi-center MRI protocols via hybrid radiomic and deep learning features.

Saadh MJ, Albadr RJ, Sur D, Yadav A, Roopashree R, Sangwan G, Krithiga T, Aminov Z, Taher WM, Alwan M, Jawad MJ, Al-Nuaimi AMA, Farhood B

PubMed · Aug 18, 2025
This study aimed to create a reliable method for preoperative grading of meningiomas by combining radiomic features and deep learning-based features extracted using a 3D autoencoder. The goal was to utilize the strengths of both handcrafted radiomic features and deep learning features to improve accuracy and reproducibility across different MRI protocols. The study included 3,523 patients with histologically confirmed meningiomas, consisting of 1,900 low-grade (Grade I) and 1,623 high-grade (Grades II and III) cases. Radiomic features were extracted from T1-contrast-enhanced and T2-weighted MRI scans using the Standardized Environment for Radiomics Analysis (SERA). Deep learning features were obtained from the bottleneck layer of a 3D autoencoder integrated with attention mechanisms. Feature selection was performed using Principal Component Analysis (PCA) and Analysis of Variance (ANOVA). Classification was done using machine learning models like XGBoost, CatBoost, and stacking ensembles. Reproducibility was evaluated using the Intraclass Correlation Coefficient (ICC), and batch effects were harmonized with the ComBat method. Performance was assessed based on accuracy, sensitivity, and the area under the receiver operating characteristic curve (AUC). For T1-contrast-enhanced images, combining radiomic and deep learning features provided the highest AUC of 95.85% and accuracy of 95.18%, outperforming models using either feature type alone. T2-weighted images showed slightly lower performance, with the best AUC of 94.12% and accuracy of 93.14%. Deep learning features performed better than radiomic features alone, demonstrating their strength in capturing complex spatial patterns. The end-to-end 3D autoencoder with T1-contrast images achieved an AUC of 92.15%, accuracy of 91.14%, and sensitivity of 92.48%, surpassing T2-weighted imaging models. 
Reproducibility analysis showed high reliability (ICC > 0.75) for 127 out of 215 features, ensuring consistent performance across multi-center datasets. The proposed framework effectively integrates radiomic and deep learning features to provide a robust, non-invasive, and reproducible approach for meningioma grading. Future research should validate this framework in real-world clinical settings and explore adding clinical parameters to enhance its prognostic value.
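The fusion step (concatenating handcrafted radiomic features with autoencoder bottleneck features, then reducing dimensionality) can be sketched as follows. This is a hedged illustration using plain PCA via SVD; the paper's actual pipeline (SERA features, PCA plus ANOVA selection, ComBat harmonization) is more involved:

```python
import numpy as np

def fuse_and_reduce(radiomic, deep, n_components=8):
    """Concatenate handcrafted radiomic and autoencoder bottleneck features,
    z-score each column, and project onto the top principal components."""
    X = np.hstack([radiomic, deep])
    X = (X - X.mean(axis=0)) / (X.std(axis=0) + 1e-8)
    # PCA via SVD of the standardized matrix; rows of Vt are components
    U, S, Vt = np.linalg.svd(X, full_matrices=False)
    return X @ Vt[:n_components].T
```

The returned scores have mutually orthogonal columns, which is one reason such projections feed cleanly into gradient-boosted classifiers like XGBoost or CatBoost.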

Balancing Speed and Sensitivity: Echo-Planar Accelerated MRI for ARIA-H Screening in Anti-Aβ Therapeutics.

Hagiwara A

PubMed · Aug 18, 2025
The recent advent of anti-amyloid-β monoclonal antibodies has introduced new demands for MRI-based screening of amyloid-related imaging abnormalities, particularly the hemorrhage subtype (ARIA-H). In this editorial, we discuss the study by Loftus and colleagues, which evaluates the diagnostic performance of echo-planar accelerated gradient-recalled echo (GRE) and susceptibility-weighted imaging (SWI) sequences for ARIA-H screening. Their results demonstrate that significant scan time reductions of up to 86% can be achieved without substantial loss in diagnostic accuracy, particularly for accelerated GRE. These findings align with recently issued MRI guidelines and offer practical solutions for improving workflow efficiency in Alzheimer's care. However, challenges remain in terms of inter-rater variability and image quality, especially with accelerated SWI. We also highlight the emerging role of artificial intelligence-assisted analysis and the importance of reproducibility and data sharing in advancing clinical implementation. Balancing speed and sensitivity remains a central theme in optimizing imaging strategies for anti-amyloid therapeutic protocols.

PAINT: Prior-aided Alternate Iterative NeTwork for Ultra-low-dose CT Imaging Using Diffusion Model-restored Sinogram.

Chen K, Zhang W, Deng Z, Zhou Y, Zhao J

PubMed · Aug 18, 2025
Obtaining multiple CT scans from the same patient is required in many clinical scenarios, such as lung nodule screening and image-guided radiation therapy. Repeated scans would expose patients to higher radiation dose and increase the risk of cancer. In this study, we aim to achieve ultra-low-dose imaging for subsequent scans by collecting an extremely undersampled sinogram via regional few-view scanning, while preserving image quality by utilizing the preceding fully-sampled scan as a prior. To fully exploit prior information, we propose a two-stage framework consisting of diffusion model-based sinogram restoration and deep learning-based unrolled iterative reconstruction. Specifically, the undersampled sinogram is first restored by a conditional diffusion model with sinogram-domain prior guidance. Then, we formulate the undersampled data reconstruction problem as an optimization problem combining fidelity terms for both undersampled and restored data, along with a regularization term based on an image-domain prior. Next, we propose the Prior-aided Alternate Iterative NeTwork (PAINT) to solve the optimization problem. PAINT alternately updates the undersampled or restored data fidelity term, and unrolls the iterations to integrate neural network-based prior regularization. In simulated-data experiments with a 112 mm field of view, our proposed framework achieved superior performance in terms of CT value accuracy and image detail preservation. Clinical data experiments also demonstrated that our proposed framework outperformed the comparison methods in artifact reduction and structure recovery.
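PAINT's core idea of alternating updates between the two data-fidelity terms can be illustrated on a toy linear problem. Below, the learned prior network is replaced by a simple quadratic anchor to `x_prior`; everything here is a sketch, not the authors' implementation:

```python
import numpy as np

def alternating_reconstruction(A_u, y_u, A_r, y_r, x_prior,
                               lam=0.0, step=0.1, iters=200):
    """Alternate gradient steps on the undersampled-data and restored-data
    fidelity terms, plus a prior-anchoring regularizer (a stand-in for
    PAINT's learned, network-based prior)."""
    x = x_prior.astype(float).copy()
    for k in range(iters):
        if k % 2 == 0:   # undersampled-data fidelity step
            grad = A_u.T @ (A_u @ x - y_u)
        else:            # restored-data fidelity step
            grad = A_r.T @ (A_r @ x - y_r)
        grad += lam * (x - x_prior)
        x -= step * grad
    return x
```

In the unrolled network, each alternation would additionally pass `x` through a learned regularization module instead of the fixed quadratic term.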

Prospective validation of an artificial intelligence assessment in a cohort of applicants seeking financial compensation for asbestosis (PROSBEST).

Smesseim I, Lipman KBWG, Trebeschi S, Stuiver MM, Tissier R, Burgers JA, de Gooijer CJ

PubMed · Aug 15, 2025
Asbestosis, a rare pneumoconiosis marked by diffuse pulmonary fibrosis, arises from prolonged asbestos exposure. Its diagnosis, guided by the Helsinki criteria, relies on exposure history, clinical findings, radiology, and lung function. However, interobserver variability complicates diagnoses and financial compensation. This study prospectively validated the sensitivity of an AI-driven assessment for asbestosis compensation in the Netherlands. Secondary objectives included evaluating specificity, accuracy, predictive values, area under the curve of the receiver operating characteristic (ROC-AUC), area under the precision-recall curve (PR-AUC), and interobserver variability. Between September 2020 and July 2022, 92 adult compensation applicants were assessed using both AI models and pulmonologists' reviews based on Dutch Health Council criteria. The AI model assigned an asbestosis probability score: negative (< 35), uncertain (35-66), or positive (≥ 66). Uncertain cases underwent additional reviews for a final determination. The AI assessment demonstrated sensitivity of 0.86 (95% confidence interval: 0.77-0.95), specificity of 0.85 (0.76-0.97), accuracy of 0.87 (0.79-0.93), ROC-AUC of 0.92 (0.84-0.97), and PR-AUC of 0.95 (0.89-0.99). Despite strong metrics, the sensitivity target of 98% was unmet. Pulmonologist reviews showed moderate to substantial interobserver variability. The AI-driven approach demonstrated robust accuracy but insufficient sensitivity for validation. Addressing interobserver variability and incorporating objective fibrosis measurements could enhance future reliability in clinical and compensation settings. The AI-driven assessment for financial compensation of asbestosis showed adequate accuracy but did not meet the required sensitivity for validation. We prospectively assessed the sensitivity of an AI-driven assessment procedure for financial compensation of asbestosis. 
The AI-driven asbestosis probability score underperformed across all metrics compared to internal testing. The AI-driven assessment procedure achieved a sensitivity of 0.86 (95% confidence interval: 0.77-0.95). It did not meet the predefined sensitivity target.
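The three-way decision rule stated in the abstract (negative < 35, uncertain 35-66, positive ≥ 66, with uncertain cases routed to additional review) maps directly to a small triage function; the name and signature are illustrative:

```python
def triage(score, low=35, high=66):
    """Map the model's asbestosis probability score to a decision category."""
    if score < low:
        return "negative"
    if score < high:
        return "uncertain"  # routed to additional expert review
    return "positive"
```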

A Convergent Generalized Krylov Subspace Method for Compressed Sensing MRI Reconstruction with Gradient-Driven Denoisers

Tao Hong, Umberto Villa, Jeffrey A. Fessler

arXiv preprint · Aug 15, 2025
Model-based reconstruction plays a key role in compressed sensing (CS) MRI, as it incorporates effective image regularizers to improve the quality of reconstruction. The Plug-and-Play and Regularization-by-Denoising frameworks leverage advanced denoisers (e.g., convolutional neural network (CNN)-based denoisers) and have demonstrated strong empirical performance. However, their theoretical guarantees remain limited, as practical CNNs often violate key assumptions. In contrast, gradient-driven denoisers achieve competitive performance, and the assumptions required for theoretical analysis are easily satisfied. Yet solving the associated optimization problem remains computationally demanding. To address this challenge, we propose a generalized Krylov subspace method (GKSM) to solve the optimization problem efficiently. Moreover, we establish rigorous convergence guarantees for GKSM in nonconvex settings. Numerical experiments on CS MRI reconstruction with spiral and radial acquisitions validate both the computational efficiency of GKSM and the accuracy of the theoretical predictions. The proposed optimization method is applicable to any linear inverse problem.
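The flavor of a Krylov-type subspace method, growing a basis from successive gradients and solving a reduced problem restricted to that subspace, can be sketched for a plain least-squares objective. This is a simplified illustration only, not GKSM itself, which handles the nonconvex gradient-driven-denoiser objective:

```python
import numpy as np

def krylov_subspace_solve(A, y, iters=8):
    """Minimal subspace-method sketch for min 0.5 * ||Ax - y||^2:
    grow an orthonormal basis from successive gradients and solve the
    reduced least-squares problem restricted to that subspace."""
    n = A.shape[1]
    x = np.zeros(n)
    V = np.zeros((n, 0))
    for _ in range(iters):
        g = A.T @ (A @ x - y)          # gradient at current iterate
        g -= V @ (V.T @ g)             # Gram-Schmidt against current basis
        norm = np.linalg.norm(g)
        if norm < 1e-12:               # gradient in span(V): x is optimal
            break
        V = np.hstack([V, (g / norm)[:, None]])
        # solve the small reduced problem min_c ||A V c - y||^2
        c, *_ = np.linalg.lstsq(A @ V, y, rcond=None)
        x = V @ c
    return x
```

For a quadratic objective this recovers the exact least-squares solution in at most `n` basis vectors; GKSM's contribution is making an analogous construction work, with convergence guarantees, in the nonconvex regularized setting.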

A Case Study on Colposcopy-Based Cervical Cancer Staging Reveals an Alarming Lack of Data Sharing Hindering the Adoption of Machine Learning in Clinical Practice

Schulz, M., Leha, A.

medRxiv preprint · Aug 15, 2025
Background: The inbuilt ability to adapt existing models to new applications has been one of the key drivers of the success of deep learning models. Sharing trained models is therefore crucial for their adaptation to different populations and domains. Not sharing models prevents validation and any subsequent translation into clinical practice, and hinders scientific progress. In this paper we examine the current state of data and model sharing in the medical field, using cervical cancer staging on colposcopy images as a case example.
Methods: We conducted a comprehensive literature search in PubMed to identify studies employing machine learning techniques in the analysis of colposcopy images. For studies where raw data was not directly accessible, we systematically inquired about access to the pre-trained model weights and/or raw colposcopy image data by contacting the authors through various channels.
Results: We included 46 studies and one publicly available dataset in our study. We retrieved the data of the latter and inquired about data access for the 46 studies by contacting a total of 92 authors. We received 15 responses related to 14 studies (30%); the remaining 32 studies were unresponsive (70%). Of the 15 responses received, two redirected our inquiry to other authors, two were initially pending, and 11 declined data sharing. Despite our follow-up efforts on all responses received, none of the inquiries led to actual data sharing (0%). The only available data source remained the publicly available dataset.
Conclusions: Despite the long-standing demands for reproducible research and efforts to incentivize data sharing, such as the requirement of data availability statements, our case study reveals a persistent lack of data-sharing culture. Reasons identified in this case study include a lack of resources to provide the data, data privacy concerns, ongoing trial registrations, and low response rates to inquiries.
Potential routes for improvement could include comprehensive data availability statements required by journals, data preparation and deposition in a repository as part of the publication process, an automatic maximal embargo time after which data will become openly accessible and data sharing rules set by funders.

SimAQ: Mitigating Experimental Artifacts in Soft X-Ray Tomography using Simulated Acquisitions

Jacob Egebjerg, Daniel Wüstner

arXiv preprint · Aug 14, 2025
Soft X-ray tomography (SXT) provides detailed structural insight into whole cells but is hindered by experimental artifacts such as the missing wedge and by the limited availability of annotated datasets. We present SimAQ, a simulation pipeline that generates realistic cellular phantoms and applies synthetic artifacts to produce paired noisy volumes, sinograms, and reconstructions. We validate our approach by training a neural network primarily on synthetic data and demonstrate effective few-shot and zero-shot transfer learning on real SXT tomograms. Our model delivers accurate segmentations, enabling quantitative analysis of noisy tomograms without relying on large labeled datasets or complex reconstruction methods.
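One of the synthetic artifacts mentioned, the missing wedge, amounts to discarding projections outside the achievable tilt range. A minimal sketch (function name and masking convention are illustrative assumptions, not SimAQ's API):

```python
import numpy as np

def apply_missing_wedge(sinogram, angles_deg, max_tilt=60.0):
    """Zero out projections beyond the achievable tilt range, mimicking
    the missing-wedge artifact of soft X-ray tomography acquisitions.

    sinogram: array of shape (n_angles, n_detectors);
    angles_deg: projection angles matching the first axis."""
    mask = np.abs(angles_deg) <= max_tilt
    out = sinogram.copy()
    out[~mask] = 0.0
    return out, mask
```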

KonfAI: A Modular and Fully Configurable Framework for Deep Learning in Medical Imaging

Valentin Boussot, Jean-Louis Dillenseger

arXiv preprint · Aug 13, 2025
KonfAI is a modular, extensible, and fully configurable deep learning framework specifically designed for medical imaging tasks. It enables users to define complete training, inference, and evaluation workflows through structured YAML configuration files, without modifying the underlying code. This declarative approach enhances reproducibility, transparency, and experimental traceability while reducing development time. Beyond the capabilities of standard pipelines, KonfAI provides native abstractions for advanced strategies including patch-based learning, test-time augmentation, model ensembling, and direct access to intermediate feature representations for deep supervision. It also supports complex multi-model training setups such as generative adversarial architectures. Thanks to its modular and extensible architecture, KonfAI can easily accommodate custom models, loss functions, and data processing components. The framework has been successfully applied to segmentation, registration, and image synthesis tasks, and has contributed to top-ranking results in several international medical imaging challenges. KonfAI is open source and available at https://github.com/vboussot/KonfAI.