Page 1 of 14136 results

Robust evaluation of tissue-specific radiomic features for classifying breast tissue density grades.

Dong V, Mankowski W, Silva Filho TM, McCarthy AM, Kontos D, Maidment ADA, Barufaldi B

pubmed papers · Nov 1 2025
Breast cancer risk depends on an accurate assessment of breast density due to lesion masking. Although governed by standardized guidelines, radiologist assessment of breast density is still highly variable. Automated breast density assessment tools leverage deep learning but are limited by model robustness and interpretability. We assessed the robustness of a feature selection methodology (RFE-SHAP) for classifying breast density grades using tissue-specific radiomic features extracted from raw central projections of digital breast tomosynthesis screenings (n_I = 651, n_II = 100). RFE-SHAP leverages traditional and explainable AI methods to identify highly predictive and influential features. A simple logistic regression (LR) classifier was used to assess classification performance, and unsupervised clustering was employed to investigate the intrinsic separability of density grade classes.
LR classifiers yielded cross-validated areas under the receiver operating characteristic curve (AUCs) per density grade of [A: 0.909 ± 0.032, B: 0.858 ± 0.027, C: 0.927 ± 0.013, D: 0.890 ± 0.089] and an AUC of 0.936 ± 0.016 for classifying patients as nondense or dense. In external validation, we observed per density grade AUCs of [A: 0.880, B: 0.779, C: 0.878, D: 0.673] and a nondense/dense AUC of 0.823. Unsupervised clustering highlighted the ability of these features to characterize different density grades. Our RFE-SHAP feature selection methodology for classifying breast tissue density generalized well to validation datasets after accounting for natural class imbalance, and the identified radiomic features properly captured the progression of density grades.
Our results potentiate future research into correlating selected radiomic features with clinical descriptors of breast tissue density.
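The selection-then-classification pipeline described above can be sketched with standard tooling. This is a hedged illustration, not the authors' code: scikit-learn's `RFE` ranks features by logistic-regression coefficients, whereas the paper ranks by SHAP values, and the data below are random stand-ins for radiomic features.

```python
# Sketch of recursive feature elimination feeding a logistic-regression
# classifier scored by cross-validated AUC. Data and feature count are
# synthetic placeholders; the SHAP-based ranking step is not reproduced.
import numpy as np
from sklearn.feature_selection import RFE
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 30))            # 200 cases, 30 "radiomic" features
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=200) > 0).astype(int)

# RFE prunes features by LR coefficient magnitude until 5 remain.
selector = RFE(LogisticRegression(max_iter=1000), n_features_to_select=5)
selector.fit(X, y)
X_sel = X[:, selector.support_]

auc = cross_val_score(LogisticRegression(max_iter=1000), X_sel, y,
                      cv=5, scoring="roc_auc")
print(f"cross-validated AUC: {auc.mean():.3f} +/- {auc.std():.3f}")
```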

SimAQ: Mitigating Experimental Artifacts in Soft X-Ray Tomography using Simulated Acquisitions

Jacob Egebjerg, Daniel Wüstner

arxiv preprint · Aug 14 2025
Soft X-ray tomography (SXT) provides detailed structural insight into whole cells but is hindered by experimental artifacts such as the missing wedge and by the limited availability of annotated datasets. We present SimAQ, a simulation pipeline that generates realistic cellular phantoms and applies synthetic artifacts to produce paired noisy volumes, sinograms, and reconstructions. We validate our approach by training a neural network primarily on synthetic data and demonstrate effective few-shot and zero-shot transfer learning on real SXT tomograms. Our model delivers accurate segmentations, enabling quantitative analysis of noisy tomograms without relying on large labeled datasets or complex reconstruction methods.
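One of the artifacts such a pipeline must simulate is the missing wedge: a limited tilt range leaves a wedge of Fourier space unmeasured. A minimal sketch, with an illustrative phantom and tilt range rather than the paper's actual parameters, zeroes that wedge and inverts the transform:

```python
# Hedged sketch of missing-wedge corruption: zero all Fourier components
# whose orientation lies outside a +/-60 degree "measured" range, then
# invert. The phantom and angle are illustrative assumptions.
import numpy as np

def missing_wedge(img, half_angle_deg=60.0):
    """Apply a missing-wedge mask in 2D Fourier space."""
    n = img.shape[0]
    f = np.fft.fftshift(np.fft.fft2(img))
    ky, kx = np.mgrid[-n // 2:n // 2, -n // 2:n // 2]
    theta = np.degrees(np.arctan2(ky, kx))        # orientation of each frequency
    # keep only orientations inside the simulated tilt range
    keep = (np.abs(theta) <= half_angle_deg) | (np.abs(theta) >= 180 - half_angle_deg)
    return np.real(np.fft.ifft2(np.fft.ifftshift(f * keep)))

phantom = np.zeros((64, 64))
phantom[24:40, 24:40] = 1.0                        # a toy "cell"
corrupted = missing_wedge(phantom)
print("residual std ratio:", corrupted.std() / phantom.std())
```

The elongation and streaking this produces are the artifacts a segmentation network trained on such synthetic pairs learns to tolerate.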

KonfAI: A Modular and Fully Configurable Framework for Deep Learning in Medical Imaging

Valentin Boussot, Jean-Louis Dillenseger

arxiv preprint · Aug 13 2025
KonfAI is a modular, extensible, and fully configurable deep learning framework specifically designed for medical imaging tasks. It enables users to define complete training, inference, and evaluation workflows through structured YAML configuration files, without modifying the underlying code. This declarative approach enhances reproducibility, transparency, and experimental traceability while reducing development time. Beyond the capabilities of standard pipelines, KonfAI provides native abstractions for advanced strategies including patch-based learning, test-time augmentation, model ensembling, and direct access to intermediate feature representations for deep supervision. It also supports complex multi-model training setups such as generative adversarial architectures. Thanks to its modular and extensible architecture, KonfAI can easily accommodate custom models, loss functions, and data processing components. The framework has been successfully applied to segmentation, registration, and image synthesis tasks, and has contributed to top-ranking results in several international medical imaging challenges. KonfAI is open source and available at https://github.com/vboussot/KonfAI.
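A declarative workflow of the kind described might look like the following. This is an illustrative sketch only; KonfAI's actual schema is defined in its repository, and every key name below is an assumption, not the framework's documented format:

```yaml
# Hypothetical YAML workflow sketch (field names are NOT KonfAI's schema)
training:
  model: UNet
  loss: DiceLoss
  epochs: 300
  patch:
    size: [96, 96, 96]      # patch-based learning
inference:
  tta: true                  # test-time augmentation
  ensemble:                  # model ensembling across folds
    - fold0.pt
    - fold1.pt
```

The point of such a file is that swapping a model, loss, or augmentation strategy becomes a config edit rather than a code change.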

Comparative evaluation of CAM methods for enhancing explainability in veterinary radiography.

Dusza P, Banzato T, Burti S, Bendazzoli M, Müller H, Wodzinski M

pubmed papers · Aug 13 2025
Explainable Artificial Intelligence (XAI) encompasses a broad spectrum of methods that aim to enhance the transparency of deep learning models, with Class Activation Mapping (CAM) methods widely used for visual interpretability. However, systematic evaluations of these methods in veterinary radiography remain scarce. This study presents a comparative analysis of eleven CAM methods, including GradCAM, XGradCAM, ScoreCAM, and EigenCAM, on a dataset of 7362 canine and feline X-ray images. A ResNet18 model was chosen based on the specificity of the dataset and preliminary results in which it outperformed other models. Quantitative and qualitative evaluations were performed to determine how well each CAM method produced interpretable heatmaps relevant to clinical decision-making. Among the techniques evaluated, EigenGradCAM achieved the highest mean score of 2.571 (SD = 1.256), closely followed by EigenCAM at 2.519 (SD = 1.228) and GradCAM++ at 2.512 (SD = 1.277), while FullGrad and XGradCAM achieved the lowest scores of 2.000 (SD = 1.300) and 1.858 (SD = 1.198), respectively. Despite variations in saliency visualization, no single method universally improved veterinarians' diagnostic confidence: while certain CAM methods provided better visual cues for some pathologies, they generally offered limited explainability.
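The Grad-CAM computation that most of the compared methods build on is short: channel weights are the global-average-pooled gradients of the class score, and the map is a ReLU of the weighted activation sum. A minimal sketch, using random arrays as stand-ins for a real network's activations and gradients:

```python
# Hedged Grad-CAM sketch: alpha_k = GAP of gradients, CAM = ReLU(sum_k
# alpha_k * A_k), normalised to [0, 1]. Inputs are synthetic placeholders.
import numpy as np

def grad_cam(activations, gradients):
    """activations, gradients: (C, H, W) arrays from the target conv layer."""
    weights = gradients.mean(axis=(1, 2))                # alpha_k per channel
    cam = np.maximum((weights[:, None, None] * activations).sum(axis=0), 0)
    return cam / cam.max() if cam.max() > 0 else cam     # normalise

rng = np.random.default_rng(0)
A = rng.normal(size=(8, 7, 7))      # fake conv activations
dA = rng.normal(size=(8, 7, 7))     # fake gradients of the class score
heatmap = grad_cam(A, dA)
```

Variants such as XGradCAM and EigenCAM differ mainly in how the channel weights (or the map itself) are derived from the same activations.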

A Physics-Driven Neural Network with Parameter Embedding for Generating Quantitative MR Maps from Weighted Images

Lingjing Chen, Chengxiu Zhang, Yinqiao Yi, Yida Wang, Yang Song, Xu Yan, Shengfang Xu, Dalin Zhu, Mengqiu Cao, Yan Zhou, Chenglong Wang, Guang Yang

arxiv preprint · Aug 11 2025
We propose a deep learning-based approach that integrates MRI sequence parameters to improve the accuracy and generalizability of quantitative image synthesis from clinical weighted MRI. Our physics-driven neural network embeds MRI sequence parameters -- repetition time (TR), echo time (TE), and inversion time (TI) -- directly into the model via parameter embedding, enabling the network to learn the underlying physical principles of MRI signal formation. The model takes conventional T1-weighted, T2-weighted, and T2-FLAIR images as input and synthesizes T1, T2, and proton density (PD) quantitative maps. Trained on healthy brain MR images, it was evaluated on both internal and external test datasets. The proposed method achieved high performance with PSNR values exceeding 34 dB and SSIM values above 0.92 for all synthesized parameter maps. It outperformed conventional deep learning models in accuracy and robustness, including data with previously unseen brain structures and lesions. Notably, our model accurately synthesized quantitative maps for these unseen pathological regions, highlighting its superior generalization capability. Incorporating MRI sequence parameters via parameter embedding allows the neural network to better learn the physical characteristics of MR signals, significantly enhancing the performance and reliability of quantitative MRI synthesis. This method shows great potential for accelerating qMRI and improving its clinical utility.
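The parameter-embedding idea can be sketched as mapping the scalar sequence parameters through a small MLP and fusing the result with the image features. The sizes and the fusion-by-addition choice below are assumptions for illustration, not the authors' architecture:

```python
# Hedged sketch of parameter embedding: TR/TE/TI -> small MLP -> per-channel
# embedding added to feature maps, so the network can condition on
# acquisition physics. All dimensions are illustrative.
import numpy as np

def embed_params(params, w1, w2):
    """Two-layer MLP: (3,) sequence parameters -> (C,) embedding."""
    h = np.tanh(params @ w1)
    return h @ w2

rng = np.random.default_rng(0)
C = 16
w1, w2 = rng.normal(size=(3, 32)), rng.normal(size=(32, C))

params = np.array([2000.0, 90.0, 500.0]) / 1000.0    # TR, TE, TI (scaled)
features = rng.normal(size=(C, 8, 8))                 # image feature maps
emb = embed_params(params, w1, w2)
conditioned = features + emb[:, None, None]           # broadcast add per channel
```

Because the embedding depends on TR/TE/TI, the same image features yield different conditioned representations for different acquisitions, which is what lets one network generalize across sequence settings.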

Post-deployment Monitoring of AI Performance in Intracranial Hemorrhage Detection by ChatGPT.

Rohren E, Ahmadzade M, Colella S, Kottler N, Krishnan S, Poff J, Rastogi N, Wiggins W, Yee J, Zuluaga C, Ramis P, Ghasemi-Rad M

pubmed papers · Aug 11 2025
To evaluate the post-deployment performance of an artificial intelligence (AI) system (Aidoc) for intracranial hemorrhage (ICH) detection and assess the utility of ChatGPT-4 Turbo for automated AI monitoring. This retrospective study evaluated 332,809 head CT examinations from 37 radiology practices across the United States (December 2023-May 2024). Of these, 13,569 cases were flagged as positive for ICH by the Aidoc AI system. A HIPAA (Health Insurance Portability and Accountability Act)-compliant version of ChatGPT-4 Turbo was used to extract data from radiology reports. Ground truth was established through radiologists' review of 200 randomly selected cases. Performance metrics were calculated for ChatGPT, Aidoc, and radiologists. ChatGPT-4 Turbo demonstrated high diagnostic accuracy in identifying ICH from radiology reports, with a positive predictive value of 1.000 and a negative predictive value of 0.988 (AUC: 0.996). Aidoc's false positive classifications were influenced by scanner manufacturer, midline shift, mass effect, artifacts, and neurologic symptoms. Multivariate analysis identified Philips scanners (OR: 6.97, p=0.003) and artifacts (OR: 3.79, p=0.029) as significant contributors to false positives, while midline shift (OR: 0.08, p=0.021) and mass effect (OR: 0.18, p=0.021) were associated with a reduced false positive rate. Aidoc-assisted radiologists achieved a sensitivity of 0.936 and a specificity of 1.000. This study underscores the importance of continuous performance monitoring for AI systems in clinical practice. The integration of LLMs offers a scalable solution for evaluating AI performance, ensuring reliable deployment and enhancing diagnostic workflows.
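Once the LLM's report labels are compared against radiologist ground truth, the reported metrics follow from the confusion counts. The counts below are illustrative placeholders chosen to yield values like those reported, not the study's actual numbers:

```python
# Hedged sketch of the monitoring arithmetic: PPV = TP/(TP+FP),
# NPV = TN/(TN+FN). Counts are invented for illustration.
def ppv_npv(tp, fp, tn, fn):
    ppv = tp / (tp + fp) if tp + fp else float("nan")
    npv = tn / (tn + fn) if tn + fn else float("nan")
    return ppv, npv

ppv, npv = ppv_npv(tp=120, fp=0, tn=83, fn=1)
print(f"PPV={ppv:.3f}, NPV={npv:.3f}")   # prints PPV=1.000, NPV=0.988
```

Tracking these values over rolling windows of flagged cases is what turns the one-off audit into continuous post-deployment monitoring.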

Fourier Optics and Deep Learning Methods for Fast 3D Reconstruction in Digital Holography

Justin London

arxiv preprint · Aug 8 2025
Computer-generated holography (CGH) is a promising method that modulates user-defined waveforms with digital holograms. An efficient and fast pipeline framework is proposed to synthesize CGH from initial point-cloud and MRI data. The input data are reconstructed into volumetric objects that are then fed to non-convex Fourier-optics optimization algorithms for phase-only hologram (POH) and complex hologram (CH) generation using alternating projection, SGD, and quasi-Newton methods. Reconstruction performance of these algorithms, measured by MSE, RMSE, and PSNR, is analyzed and compared with HoloNet deep-learning CGH. Performance metrics are shown to improve when 2D median filtering is used to remove artifacts and speckle noise during optimization.
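The alternating-projection approach for POH generation is Gerchberg-Saxton style: iterate FFTs between the hologram plane (amplitude constrained to 1, phase-only) and the image plane (amplitude constrained to the target). A minimal sketch under assumed target and iteration count, not the paper's pipeline:

```python
# Hedged sketch of alternating projection for a phase-only hologram:
# alternately impose unit amplitude in the hologram plane and the target
# amplitude in the image plane. Target and iteration count are illustrative.
import numpy as np

def alternating_projection(target_amp, iters=50):
    rng = np.random.default_rng(0)
    field = np.exp(1j * rng.uniform(0, 2 * np.pi, target_amp.shape))
    for _ in range(iters):
        img = np.fft.fft2(field)                          # to image plane
        img = target_amp * np.exp(1j * np.angle(img))     # impose target amplitude
        field = np.fft.ifft2(img)                         # back to hologram plane
        field = np.exp(1j * np.angle(field))              # impose phase-only constraint
    return np.angle(field)                                # the POH

target = np.zeros((32, 32))
target[12:20, 12:20] = 1.0
poh = alternating_projection(target)
recon = np.abs(np.fft.fft2(np.exp(1j * poh)))             # check reconstruction
```

SGD and quasi-Newton variants replace the hard projections with gradient steps on a smooth reconstruction loss, which is where the non-convex optimization framing comes in.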

GAN-MRI enhanced multi-organ MRI segmentation: a deep learning perspective.

Channarayapatna Srinivasa A, Bhat SS, Baduwal D, Sim ZTJ, Patil SS, Amarapur A, Prakash KNB

pubmed papers · Aug 8 2025
Clinical magnetic resonance imaging (MRI) is a high-resolution tool widely used for detailed anatomical imaging. However, prolonged scan times often lead to motion artefacts and patient discomfort. Fast acquisition techniques can reduce scan times but often produce noisy, low-contrast images, compromising the segmentation accuracy essential for diagnosis and treatment planning. To address these limitations, we developed an end-to-end framework that incorporates a BIDS-based data organiser and anonymizer, a GAN-based MR image enhancement model (GAN-MRI), AssemblyNet for brain region segmentation, and an attention-residual U-Net with Guided loss for abdominal and thigh segmentation. Thirty brain scans (5,400 slices), 32 abdominal scans (1,920 slices), and 55 thigh scans (2,200 slices) acquired from multiple MRI scanners (GE, Siemens, Toshiba) underwent evaluation. Image quality improved significantly, with SNR and CNR for brain scans increasing from 28.44 to 42.92 (p < 0.001) and 11.88 to 18.03 (p < 0.001), respectively. Abdominal scans exhibited SNR increases from 35.30 to 50.24 (p < 0.001) and CNR from 10,290.93 to 93,767.22 (p < 0.001). Double-blind evaluations highlighted improved visualisations of anatomical structures and bias field correction. Segmentation performance improved substantially in the thigh (muscle: + 21%, IMAT: + 9%) and abdominal regions (SSAT: + 1%, DSAT: + 2%, VAT: + 12%), while brain segmentation metrics remained largely stable, reflecting the robustness of the baseline model. The proposed framework is designed to handle data from multiple anatomies, with variation across MRI scanners and centres, by enhancing MRI scans and improving segmentation accuracy, diagnostic precision, and treatment planning while reducing scan times and maintaining patient comfort.
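The SNR and CNR figures reported above follow common ROI-based definitions: mean signal over noise standard deviation, and absolute mean difference between two tissue ROIs over noise standard deviation. Exact ROI choices vary by paper, so this is a sketch of the usual forms, not necessarily the authors' exact ones:

```python
# Hedged sketch of ROI-based SNR/CNR. ROIs here are synthetic samples.
import numpy as np

def snr(signal_roi, noise_roi):
    return signal_roi.mean() / noise_roi.std()

def cnr(roi_a, roi_b, noise_roi):
    return abs(roi_a.mean() - roi_b.mean()) / noise_roi.std()

rng = np.random.default_rng(0)
tissue = 100 + rng.normal(scale=5, size=500)      # bright tissue ROI
fat = 60 + rng.normal(scale=5, size=500)          # darker tissue ROI
background = rng.normal(scale=5, size=500)        # air/background ROI

print(f"SNR={snr(tissue, background):.1f}, CNR={cnr(tissue, fat, background):.1f}")
```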

MAISI-v2: Accelerated 3D High-Resolution Medical Image Synthesis with Rectified Flow and Region-specific Contrastive Loss

Can Zhao, Pengfei Guo, Dong Yang, Yucheng Tang, Yufan He, Benjamin Simon, Mason Belue, Stephanie Harmon, Baris Turkbey, Daguang Xu

arxiv preprint · Aug 7 2025
Medical image synthesis is an important topic for both clinical and research applications. Recently, diffusion models have become a leading approach in this area. Despite their strengths, many existing methods struggle with (1) limited generalizability, working only for specific body regions or voxel spacings, (2) slow inference, a common issue for diffusion models, and (3) weak alignment with input conditions, a critical issue for medical imaging. MAISI, a previously proposed framework, addresses the generalizability issues but still suffers from slow inference and limited condition consistency. In this work, we present MAISI-v2, the first accelerated 3D medical image synthesis framework that integrates rectified flow to enable fast and high-quality generation. To further enhance condition fidelity, we introduce a novel region-specific contrastive loss that improves sensitivity to regions of interest. Our experiments show that MAISI-v2 can achieve state-of-the-art image quality with 33× acceleration over the latent diffusion model. We also conducted a downstream segmentation experiment to show that the synthetic images can be used for data augmentation. We release our code, training details, model weights, and a GUI demo to facilitate reproducibility and promote further development within the community.
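The acceleration comes from rectified flow's (near-)straight sampling paths x_t = (1-t)·x0 + t·x1: when the learned velocity field is close to constant along the path, a handful of Euler steps suffice where a diffusion sampler needs hundreds. A toy sketch in which the exact velocity x1 - x0 stands in for the trained network:

```python
# Hedged toy sketch of few-step rectified-flow sampling. The "model" here
# is the ideal constant velocity x1 - x0, not a trained network.
import numpy as np

rng = np.random.default_rng(0)
x0 = rng.normal(size=(4, 4))          # noise sample
x1 = np.ones((4, 4))                  # toy "data" sample

def velocity(x_t, t):
    return x1 - x0                    # ideal straight-path velocity

steps = 4
x = x0.copy()
for i in range(steps):                # few-step Euler integration
    t = i / steps
    x = x + velocity(x, t) / steps

print("max error vs data:", np.abs(x - x1).max())
```

With a perfectly straight path the Euler steps land exactly on the data sample; a trained velocity field only approximates this, which is why some residual step count is still needed in practice.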

A novel approach for CT image smoothing: Quaternion Bilateral Filtering for kernel conversion.

Nasr M, Piórkowski A, Brzostowski K, El-Samie FEA

pubmed papers · Aug 7 2025
Denoising reconstructed Computed Tomography (CT) images without access to raw projection data remains a significant difficulty in medical imaging, particularly when utilizing sharp or medium reconstruction kernels that generate high-frequency noise. This work introduces an innovative method that integrates quaternion mathematics with bilateral filtering to resolve this issue. The proposed Quaternion Bilateral Filter (QBF) effectively maintains anatomical structures and mitigates noise caused by the kernel by expressing CT scans in quaternion form, with the red, green, and blue channels encoded together. Compared to conventional methods that depend on raw data or grayscale filtering, our approach functions directly on reconstructed sharp kernel images. It converts them to mimic the quality of soft-kernel outputs, obtained with kernels such as B30f, using paired data from the same patients. The efficacy of the QBF is evidenced by both full-reference metrics (Structural Similarity Index Measure (SSIM), Peak Signal-to-Noise Ratio (PSNR), Mean Absolute Error (MAE), and Root Mean Squared Error (RMSE)) and no-reference perceptual metrics (Naturalness Image Quality Evaluator (NIQE), Blind Referenceless Image Spatial Quality Evaluator (BRISQUE), and Perception-based Image Quality Evaluator (PIQE)). The results indicate that the QBF demonstrates improved denoising efficacy compared to traditional Bilateral Filter (BF), Non-Local Means (NLM), wavelet, and Convolutional Neural Network (CNN)-based processing, achieving an SSIM of 0.96 and a PSNR of 36.3 on B50f reconstructions. Additionally, segmentation-based visual validation verifies that QBF-filtered outputs maintain essential structural details necessary for subsequent diagnostic tasks. This study emphasizes the importance of quaternion-based filtering as a lightweight, interpretable, and efficient substitute for deep learning models in post-reconstruction CT image enhancement.
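The bilateral-filter core that the QBF generalises weights each neighbour by spatial closeness times intensity similarity. The grayscale sketch below shows that core only; it omits the quaternion (joint-RGB) representation that is the paper's actual contribution, and the parameters are illustrative:

```python
# Hedged sketch of a grayscale bilateral filter: per-pixel weighted average
# with weights = Gaussian(spatial distance) * Gaussian(intensity difference).
import numpy as np

def bilateral(img, radius=2, sigma_s=1.5, sigma_r=0.2):
    pad = np.pad(img, radius, mode="edge")
    out = np.zeros_like(img)
    ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    spatial = np.exp(-(ys**2 + xs**2) / (2 * sigma_s**2))   # fixed spatial kernel
    h, w = img.shape
    for i in range(h):
        for j in range(w):
            patch = pad[i:i + 2 * radius + 1, j:j + 2 * radius + 1]
            range_w = np.exp(-((patch - img[i, j])**2) / (2 * sigma_r**2))
            wgt = spatial * range_w                         # edge-preserving weights
            out[i, j] = (wgt * patch).sum() / wgt.sum()
    return out

noisy = np.clip(0.5 + 0.1 * np.random.default_rng(0).normal(size=(16, 16)), 0, 1)
smooth = bilateral(noisy)
```

The quaternion extension replaces the scalar intensity difference with a distance between quaternion-encoded RGB values, so all channels are filtered jointly rather than independently.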