Page 54 of 1691682 results

ToolCAP: Novel Tools to improve management of paediatric Community-Acquired Pneumonia - a randomized controlled trial- Statistical Analysis Plan

Cicconi, S., Glass, T., Du Toit, J., Bresser, M., Dhalla, F., Faye, P. M., Lal, L., Langet, H., Manji, K., Moser, A., Ndao, M. A., Palmer, M., Tine, J. A. D., Van Hoving, N., Keitel, K.

medRxiv preprint · Jun 30 2025
The ToolCAP cohort study is a prospective, observational, multi-site platform study designed to collect harmonized, high-quality clinical, imaging, and biological data on children with IMCI-defined pneumonia in low- and middle-income countries (LMICs). The primary objective is to inform the development and validation of diagnostic and prognostic tools, including lung ultrasound (LUS), point-of-care biomarkers, and AI-based models, to improve pneumonia diagnosis, management, and antimicrobial stewardship. This statistical analysis plan (SAP) outlines the analytic strategy for describing the study population, assessing the performance of candidate diagnostic tools, and enabling data sharing in support of secondary research questions and AI model development. Children under 12 years presenting with suspected pneumonia are enrolled within 24 hours of presentation and undergo clinical assessment, digital auscultation, LUS, and optional biological sampling. Follow-up occurs on Day 8 and Day 29 to assess outcomes including recovery, treatment response, and complications. The SAP details variable definitions, data management strategies, and pre-specified analyses, including descriptive summaries, sensitivity and specificity of diagnostic tools against clinical reference standards, and exploratory subgroup analyses.
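The SAP's diagnostic-accuracy analyses reduce to standard contingency-table arithmetic against the clinical reference standard. A minimal sketch (all data illustrative; `lus_calls` is a hypothetical set of lung-ultrasound reads, not study data):

```python
def sensitivity_specificity(predictions, reference):
    """Sensitivity and specificity of a binary diagnostic tool
    against a binary clinical reference standard (1 = disease)."""
    tp = sum(1 for p, r in zip(predictions, reference) if p == 1 and r == 1)
    tn = sum(1 for p, r in zip(predictions, reference) if p == 0 and r == 0)
    fn = sum(1 for p, r in zip(predictions, reference) if p == 0 and r == 1)
    fp = sum(1 for p, r in zip(predictions, reference) if p == 1 and r == 0)
    return tp / (tp + fn), tn / (tn + fp)

# Illustrative only: 1 = pneumonia per the reference standard.
reference = [1, 1, 1, 1, 0, 0, 0, 0, 0, 0]
lus_calls = [1, 1, 1, 0, 0, 0, 0, 1, 0, 0]  # hypothetical LUS reads
sens, spec = sensitivity_specificity(lus_calls, reference)
print(f"sensitivity={sens:.2f} specificity={spec:.2f}")  # 0.75 / 0.83
```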

Genetically Optimized Modular Neural Networks for Precision Lung Cancer Diagnosis

Agrawal, V. L., Agrawal, T.

medRxiv preprint · Jun 30 2025
Lung cancer remains one of the leading causes of cancer mortality, and while low-dose CT screening reduces mortality, radiological detection is challenging given the growing shortage of radiologists. Artificial intelligence can significantly improve the procedure and decrease the overall workload of the healthcare department. Building upon existing work applying genetic algorithms, this study aims to create a novel algorithm for lung cancer diagnosis with utmost precision. We included a total of 156 CT scans of patients divided into two databases, followed by feature extraction using image statistics, histograms, and 2D transforms (FFT, DCT, WHT). Optimal feature vectors were formed and organized into Excel-based knowledge bases. Genetically trained classifiers such as MLP, GFF-NN, MNN, and SVM were then optimized, experimenting with different combinations of parameters, activation functions, and data-partitioning percentages. Evaluation metrics included classification accuracy, mean squared error (MSE), area under the receiver operating characteristic (ROC) curve, and computational efficiency. Computer simulations demonstrated that the MNN (Topology II) classifier, specifically when trained with FFT coefficients and a momentum learning rule, consistently achieved 100% average classification accuracy on the cross-validation dataset for both Database I and Database II, outperforming MLP-based classifiers. This genetically optimized and trained MNN (Topology II) classifier is therefore recommended as the optimal solution for lung cancer diagnosis from CT scan images.
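The 2D-transform feature extraction step can be sketched as below; the 64x64 disc phantom and the block size `k` are illustrative assumptions, not the study's actual pipeline:

```python
import numpy as np

def fft_features(image, k=8):
    """Low-frequency 2D-FFT magnitude coefficients as a feature vector.
    The k x k top-left block of the (unshifted) spectrum captures the
    coarse structure of the slice."""
    spectrum = np.abs(np.fft.fft2(image))
    return spectrum[:k, :k].flatten()

# Hypothetical 64x64 "CT slice": a bright disc on a dark background.
yy, xx = np.mgrid[:64, :64]
slice_ = ((xx - 32) ** 2 + (yy - 32) ** 2 < 15 ** 2).astype(float)
vec = fft_features(slice_, k=8)
print(vec.shape)  # (64,)
```

The DC coefficient `vec[0]` equals the total image intensity; the remaining entries summarize low-frequency shape information that a classifier can consume directly.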

CMT-FFNet: A CMT-based feature-fusion network for predicting TACE treatment response in hepatocellular carcinoma.

Wang S, Zhao Y, Cai X, Wang N, Zhang Q, Qi S, Yu Z, Liu A, Yao Y

PubMed · Jun 30 2025
Accurately and preoperatively predicting tumor response to transarterial chemoembolization (TACE) treatment is crucial for individualized treatment decision-making in hepatocellular carcinoma (HCC). In this study, we propose a novel feature-fusion network based on the Convolutional Neural Networks Meet Vision Transformers (CMT) architecture, termed CMT-FFNet, to predict TACE efficacy using preoperative multiphase Magnetic Resonance Imaging (MRI) scans. The CMT-FFNet combines local feature extraction with global dependency modeling through attention mechanisms, enabling the extraction of complementary information from multiphase MRI data. Additionally, we introduce an orthogonality loss to optimize the fusion of imaging and clinical features, further enhancing the complementarity of cross-modal features. Moreover, visualization techniques were employed to highlight key regions contributing to model decisions. Extensive experiments were conducted to evaluate the effectiveness of the proposed modules and network architecture. Experimental results demonstrate that our model effectively captures latent correlations among features extracted from multiphase MRI data and multimodal inputs, significantly improving the prediction performance of TACE treatment response in HCC patients.
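The abstract does not give the exact form of the orthogonality loss; one common formulation penalizes the squared cosine similarity between the two branches' features so they carry complementary information. A hedged numpy sketch of that stand-in formulation:

```python
import numpy as np

def orthogonality_loss(img_feats, clin_feats):
    """Penalize overlap between L2-normalized imaging and clinical
    feature vectors: mean squared cosine similarity over the batch."""
    img = img_feats / np.linalg.norm(img_feats, axis=1, keepdims=True)
    clin = clin_feats / np.linalg.norm(clin_feats, axis=1, keepdims=True)
    cos = np.sum(img * clin, axis=1)
    return float(np.mean(cos ** 2))

# Orthogonal pair -> zero loss; identical pair -> maximal loss of 1.
a = np.array([[1.0, 0.0], [0.0, 2.0]])
b = np.array([[0.0, 3.0], [4.0, 0.0]])
print(orthogonality_loss(a, b))  # 0.0
print(orthogonality_loss(a, a))  # 1.0
```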

Multicenter Evaluation of Interpretable AI for Coronary Artery Disease Diagnosis from PET Biomarkers

Zhang, W., Kwiecinski, J., Shanbhag, A., Miller, R. J., Ramirez, G., Yi, J., Han, D., Dey, D., Grodecka, D., Grodecki, K., Lemley, M., Kavanagh, P., Liang, J. X., Zhou, J., Builoff, V., Hainer, J., Carre, S., Barrett, L., Einstein, A. J., Knight, S., Mason, S., Le, V., Acampa, W., Wopperer, S., Chareonthaitawee, P., Berman, D. S., Di Carli, M. F., Slomka, P.

medRxiv preprint · Jun 30 2025
Background: Positron emission tomography (PET)/CT for myocardial perfusion imaging (MPI) provides multiple imaging biomarkers, often evaluated separately. We developed an artificial intelligence (AI) model integrating key clinical PET MPI parameters to improve the diagnosis of obstructive coronary artery disease (CAD). Methods: From 17,348 patients undergoing cardiac PET/CT across four sites, we retrospectively enrolled 1,664 subjects who had invasive coronary angiography within 180 days and no prior CAD. Deep learning was used to derive coronary artery calcium score (CAC) from CT attenuation correction maps. An XGBoost machine learning model was developed using data from one site to detect CAD, defined as left main stenosis ≥50% or ≥70% in other arteries. The model utilized 10 image-derived parameters from clinical practice: CAC, stress/rest left ventricle ejection fraction, stress myocardial blood flow (MBF), myocardial flow reserve (MFR), ischemic and stress total perfusion deficit (TPD), transient ischemic dilation ratio, rate pressure product, and sex. Generalizability was evaluated in the remaining three sites, chosen to maximize testing power and capture inter-site variability, and model performance was compared with quantitative analyses using the area under the receiver operating characteristic curve (AUC). Patient-specific predictions were explained using Shapley additive explanations (SHAP). Results: There was a 61% and 53% CAD prevalence in the training (n=386) and external testing (n=1,278) sets, respectively. In the external evaluation, the AI model achieved a higher AUC (0.83 [95% confidence interval (CI): 0.81-0.85]) compared to the clinical score by experienced physicians (0.80 [0.77-0.82], p=0.02), ischemic TPD (0.79 [0.77-0.82], p<0.001), MFR (0.75 [0.72-0.78], p<0.001), and CAC (0.69 [0.66-0.72], p<0.001). The model's performance was consistent across sex, body mass index, and age groups.
The top features driving the prediction were stress/ischemic TPD, CAC, and MFR. Conclusion: AI integrating perfusion, flow, and CAC scoring improves PET MPI diagnostic accuracy, offering automated and interpretable predictions for CAD diagnosis.
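The AUC comparisons above can be computed nonparametrically via the Mann-Whitney statistic: the probability that a random positive case outscores a random negative one. A self-contained sketch with hypothetical model scores and angiography labels:

```python
def auc_mann_whitney(scores, labels):
    """AUC as the probability that a randomly chosen positive case
    scores higher than a randomly chosen negative one (ties = 0.5)."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Hypothetical scores: 1 = obstructive CAD on angiography.
labels = [1, 1, 1, 0, 0, 0]
scores = [0.9, 0.8, 0.4, 0.6, 0.3, 0.2]
print(auc_mann_whitney(scores, labels))  # 8/9 ~ 0.889
```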

Exposing and Mitigating Calibration Biases and Demographic Unfairness in MLLM Few-Shot In-Context Learning for Medical Image Classification

Xing Shen, Justin Szeto, Mingyang Li, Hengguan Huang, Tal Arbel

arxiv logopreprintJun 29 2025
Multimodal large language models (MLLMs) have enormous potential to perform few-shot in-context learning in the context of medical image analysis. However, safe deployment of these models into real-world clinical practice requires an in-depth analysis of the accuracies of their predictions, and their associated calibration errors, particularly across different demographic subgroups. In this work, we present the first investigation into the calibration biases and demographic unfairness of MLLMs' predictions and confidence scores in few-shot in-context learning for medical image classification. We introduce CALIN, an inference-time calibration method designed to mitigate the associated biases. Specifically, CALIN estimates the amount of calibration needed, represented by calibration matrices, using a bi-level procedure: progressing from the population level to the subgroup level prior to inference. It then applies this estimation to calibrate the predicted confidence scores during inference. Experimental results on three medical imaging datasets (PAPILA for fundus image classification, HAM10000 for skin cancer classification, and MIMIC-CXR for chest X-ray classification) demonstrate CALIN's effectiveness at ensuring fair confidence calibration, improving overall prediction accuracy, and exhibiting a minimal fairness-utility trade-off.
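CALIN's calibration matrices are not fully specified in the abstract; as a minimal illustration, applying any such matrix to predicted confidence scores amounts to a matrix product followed by renormalization (the matrix values below are invented for illustration, and a bi-level scheme would compose a population-level matrix with a subgroup-level one):

```python
import numpy as np

def calibrate(probs, calib_matrix):
    """Apply a calibration matrix to predicted class probabilities,
    then renormalize rows so each prediction sums to 1."""
    adjusted = probs @ calib_matrix
    return adjusted / adjusted.sum(axis=1, keepdims=True)

# Hypothetical 2-class probabilities and a matrix that tempers
# over-confidence in class 0 (values are illustrative only).
probs = np.array([[0.9, 0.1], [0.6, 0.4]])
M = np.array([[0.8, 0.2], [0.0, 1.0]])
print(calibrate(probs, M))
```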

Physics informed guided diffusion for accelerated multi-parametric MRI reconstruction

Perla Mayo, Carolin M. Pirkl, Alin Achim, Bjoern Menze, Mohammad Golbabaee

arXiv preprint · Jun 29 2025
We introduce MRF-DiPh, a novel physics-informed denoising diffusion approach for multiparametric tissue mapping from highly accelerated, transient-state quantitative MRI acquisitions such as Magnetic Resonance Fingerprinting (MRF). Our method is derived from a proximal splitting formulation, incorporating a pretrained denoising diffusion model as an effective image prior to regularize the MRF inverse problem. During reconstruction it simultaneously enforces two key physical constraints: (1) k-space measurement consistency and (2) adherence to the Bloch response model. Numerical experiments on in-vivo brain scan data show that MRF-DiPh outperforms deep learning and compressed sensing MRF baselines, providing more accurate parameter maps while better preserving measurement fidelity and physical model consistency, which is critical for reliably solving inverse problems in medical imaging.
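A toy version of the proximal-splitting recipe, with a small linear forward model standing in for the MRF acquisition and a mild shrinkage standing in for the diffusion prior (both are stand-in assumptions, not the paper's operators):

```python
import numpy as np

def reconstruct(A, y, denoise, step=0.2, iters=200):
    """Proximal-splitting sketch: alternate a gradient step on the
    data-consistency term ||Ax - y||^2 with a plug-in denoiser that
    stands in for the learned prior."""
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        x = x - step * A.T @ (A @ x - y)  # measurement consistency
        x = denoise(x)                    # prior (here: mild shrinkage)
    return x

A = np.array([[1.0, 0.0], [0.0, 2.0], [1.0, 1.0]])
x_true = np.array([1.0, -0.5])
y = A @ x_true
x_hat = reconstruct(A, y, denoise=lambda z: 0.999 * z)
print(np.round(x_hat, 2))  # approximately [1.0, -0.5]
```

The shrinkage denoiser biases the fixed point slightly toward zero, which mirrors how a strong prior trades a little data fidelity for regularity.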

Federated Breast Cancer Detection Enhanced by Synthetic Ultrasound Image Augmentation

Hongyi Pan, Ziliang Hong, Gorkem Durak, Ziyue Xu, Ulas Bagci

arXiv preprint · Jun 29 2025
Federated learning (FL) has emerged as a promising paradigm for collaboratively training deep learning models across institutions without exchanging sensitive medical data. However, its effectiveness is often hindered by limited data availability and non-independent and identically distributed (non-IID) data across participating clients, which can degrade model performance and generalization. To address these challenges, we propose a generative-AI-based data augmentation framework that integrates synthetic image sharing into the federated training process for breast cancer diagnosis via ultrasound images. Specifically, we train two simple class-specific Deep Convolutional Generative Adversarial Networks: one for benign and one for malignant lesions. We then simulate a realistic FL setting using three publicly available breast ultrasound image datasets: BUSI, BUS-BRA, and UDIAT. FedAvg and FedProx are adopted as baseline FL algorithms. Experimental results show that incorporating a suitable number of synthetic images improved the average AUC from 0.9206 to 0.9237 for FedAvg and from 0.9429 to 0.9538 for FedProx. We also note that excessive use of synthetic data reduced performance, underscoring the importance of maintaining a balanced ratio of real and synthetic samples. Our findings highlight the potential of generative-AI-based data augmentation to enhance FL results in the breast ultrasound image classification task.
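The FedAvg baseline aggregates client models by sample-weighted parameter averaging; adding synthetic images simply enlarges a client's effective sample count. A minimal sketch with invented single-layer weights for the three sites:

```python
import numpy as np

def fedavg(client_weights, client_sizes):
    """FedAvg: average client model parameters weighted by the number
    of (real + synthetic) training samples each client holds."""
    total = sum(client_sizes)
    return [
        sum(n / total * w[i] for w, n in zip(client_weights, client_sizes))
        for i in range(len(client_weights[0]))
    ]

# Hypothetical single-parameter-tensor models from three sites
# (e.g. BUSI, BUS-BRA, UDIAT) with different local dataset sizes.
w1 = [np.array([1.0, 1.0])]
w2 = [np.array([3.0, 1.0])]
w3 = [np.array([2.0, 4.0])]
global_w = fedavg([w1, w2, w3], client_sizes=[100, 100, 200])
print(global_w[0])  # weighted average: [2.0, 2.5]
```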

Hierarchical Corpus-View-Category Refinement for Carotid Plaque Risk Grading in Ultrasound

Zhiyuan Zhu, Jian Wang, Yong Jiang, Tong Han, Yuhao Huang, Ang Zhang, Kaiwen Yang, Mingyuan Luo, Zhe Liu, Yaofei Duan, Dong Ni, Tianhong Tang, Xin Yang

arXiv preprint · Jun 29 2025
Accurate carotid plaque grading (CPG) is vital to assess the risk of cardiovascular and cerebrovascular diseases. Due to the small size and high intra-class variability of plaque, CPG is commonly evaluated using a combination of transverse and longitudinal ultrasound views in clinical practice. However, most existing deep learning-based multi-view classification methods focus on feature fusion across different views, neglecting the importance of representation learning and the difference in class features. To address these issues, we propose a novel Corpus-View-Category Refinement Framework (CVC-RF) that processes information at the Corpus, View, and Category levels, enhancing model performance. Our contribution is four-fold. First, to the best of our knowledge, ours is the first deep learning-based method for CPG to follow the latest Carotid Plaque-RADS guidelines. Second, we propose a novel center-memory contrastive loss, which enhances the network's global modeling capability by comparing with representative cluster centers and diverse negative samples at the Corpus level. Third, we design a cascaded down-sampling attention module to fuse multi-scale information and achieve implicit feature interaction at the View level. Finally, a parameter-free mixture-of-experts weighting strategy is introduced to leverage class clustering knowledge to weight different experts, enabling feature decoupling at the Category level. Experimental results indicate that CVC-RF effectively models global features via multi-level refinement, achieving state-of-the-art performance in the challenging CPG task.
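The exact center-memory contrastive loss is not given in the abstract; an InfoNCE-style stand-in that contrasts an embedding against memorized class centers illustrates the idea (centers, embedding, and temperature are all hypothetical):

```python
import numpy as np

def center_contrastive_loss(z, centers, label, tau=0.1):
    """Pull embedding z toward its class's memorized cluster center and
    push it from the others: cross-entropy over center similarities."""
    z = z / np.linalg.norm(z)
    c = centers / np.linalg.norm(centers, axis=1, keepdims=True)
    logits = c @ z / tau
    logp = logits - np.log(np.sum(np.exp(logits)))
    return float(-logp[label])

centers = np.array([[1.0, 0.0], [0.0, 1.0], [-1.0, 0.0]])
z = np.array([0.9, 0.1])
loss_correct = center_contrastive_loss(z, centers, label=0)
loss_wrong = center_contrastive_loss(z, centers, label=2)
print(loss_correct < loss_wrong)  # True
```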

MedRegion-CT: Region-Focused Multimodal LLM for Comprehensive 3D CT Report Generation

Sunggu Kyung, Jinyoung Seo, Hyunseok Lim, Dongyeong Kim, Hyungbin Park, Jimin Sung, Jihyun Kim, Wooyoung Jo, Yoojin Nam, Namkug Kim

arXiv preprint · Jun 29 2025
The recent release of RadGenome-Chest CT has significantly advanced CT-based report generation. However, existing methods primarily focus on global features, making it challenging to capture region-specific details, which may cause certain abnormalities to go unnoticed. To address this, we propose MedRegion-CT, a region-focused Multi-Modal Large Language Model (MLLM) framework, featuring three key innovations. First, we introduce Region Representative ($R^2$) Token Pooling, which utilizes a 2D-wise pretrained vision model to efficiently extract 3D CT features. This approach generates global tokens representing overall slice features and region tokens highlighting target areas, enabling the MLLM to process comprehensive information effectively. Second, a universal segmentation model generates pseudo-masks, which are then processed by a mask encoder to extract region-centric features. This allows the MLLM to focus on clinically relevant regions, using six predefined region masks. Third, we leverage segmentation results to extract patient-specific attributions, including organ size, diameter, and locations. These are converted into text prompts, enriching the MLLM's understanding of patient-specific contexts. To ensure rigorous evaluation, we conducted benchmark experiments on report generation using the RadGenome-Chest CT. MedRegion-CT achieved state-of-the-art performance, outperforming existing methods in natural language generation quality and clinical relevance while maintaining interpretability. The code for our framework is publicly available.
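Region-centric pooling of the kind described, reducing a feature map to one token per segmentation mask, can be sketched as mean-pooling under boolean masks (the feature map and "left/right lung" mask layout are invented for illustration):

```python
import numpy as np

def region_tokens(feature_map, masks):
    """Mean-pool an (H, W, C) feature map inside each boolean region
    mask, yielding one region token per mask (skips empty masks)."""
    return np.stack([feature_map[m].mean(axis=0) for m in masks if m.any()])

# Hypothetical 4x4 feature map with C=2 channels and two region masks
# (e.g. "left lung" / "right lung" from a segmentation model).
fm = np.arange(32, dtype=float).reshape(4, 4, 2)
left = np.zeros((4, 4), bool); left[:, :2] = True
right = np.zeros((4, 4), bool); right[:, 2:] = True
tokens = region_tokens(fm, [left, right])
print(tokens.shape)  # (2, 2): one token per region
```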
