
de Almeida JG, Alberich LC, Tsakou G, Marias K, Tsiknakis M, Lekadir K, Marti-Bonmati L, Papanikolaou N

pubmed logopapersAug 6 2025
Foundation models are large models trained on big data that can be adapted to downstream tasks. In radiology, these models can potentially address several gaps in fairness and generalization, as they can be trained on massive datasets without labelled data and adapted to tasks for which only a small number of annotated examples are available. This eases one of the limiting bottlenecks in clinical model construction, data annotation, as these models can be trained through a variety of techniques that require little more than radiological images, with or without their corresponding radiological reports. However, foundation models may be insufficient on their own, as they are affected, albeit to a smaller extent than traditional supervised learning approaches, by the same issues that lead to underperforming models, such as a lack of transparency/explainability and biases. To address these issues, we advocate that the development of foundation models should not only be pursued but also be accompanied by a decentralized clinical validation and continuous training framework. This does not guarantee the resolution of the problems associated with foundation models, but it enables developers, clinicians and patients to know when, how and why models should be updated, creating a clinical AI ecosystem that is better able to serve all stakeholders.
CRITICAL RELEVANCE STATEMENT: Foundation models may mitigate issues like bias and poor generalization in radiology AI, but challenges persist. We propose a decentralized, cross-institutional framework for continuous validation and training to enhance model reliability, safety, and clinical utility.
KEY POINTS: Foundation models trained on large datasets reduce annotation burdens and improve fairness and generalization in radiology. Despite improvements, they still face challenges like limited transparency, explainability, and residual biases. A decentralized, cross-institutional framework for clinical validation and continuous training can strengthen reliability and inclusivity in clinical AI.

MD Shaikh Rahman, Feiroz Humayara, Syed Maudud E Rabbi, Muhammad Mahbubur Rashid

arxiv logopreprintAug 6 2025
Content-based mammographic image retrieval systems require exact BIRADS categorical matching across five distinct classes, a significantly harder problem than the binary classification tasks commonly addressed in the literature. Current medical image retrieval studies suffer from methodological limitations, including inadequate sample sizes, improper data splitting, and insufficient statistical validation, that hinder clinical translation. We developed a comprehensive evaluation framework systematically comparing CNN architectures (DenseNet121, ResNet50, VGG16) with advanced training strategies including sophisticated fine-tuning, metric learning, and super-ensemble optimization. Our evaluation employed rigorous stratified data splitting (50%/20%/30% train/validation/test), 602 test queries, and systematic validation using bootstrap confidence intervals with 1,000 samples. Advanced fine-tuning with differential learning rates achieved substantial improvements: DenseNet121 (34.79% precision@10, a 19.64% improvement) and ResNet50 (34.54%, a 19.58% improvement). Super-ensemble optimization combining complementary architectures achieved 36.33% precision@10 (95% CI: [34.78%, 37.88%]), representing a 24.93% improvement over baseline and providing 3.6 relevant cases per query. Statistical analysis revealed significant performance differences between optimization strategies (p<0.001) with large effect sizes (Cohen's d>0.8), while maintaining practical search efficiency (2.8 milliseconds). Performance significantly exceeds realistic expectations for 5-class medical retrieval tasks, where the literature suggests that 20-25% precision@10 represents achievable performance for exact BIRADS matching. Our framework establishes new performance benchmarks while providing evidence-based architecture selection guidelines for clinical deployment in diagnostic support and quality assurance applications.
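The evaluation protocol above (exact-match precision@10 with a percentile bootstrap confidence interval over per-query scores) can be sketched as follows; this is an illustrative reconstruction, not the authors' code, and all names and toy data are assumptions:

```python
import random

def precision_at_10(retrieved, relevant):
    """Fraction of the top-10 retrieved items whose BIRADS class matches the query's."""
    return sum(1 for item in retrieved[:10] if item in relevant) / 10

def bootstrap_ci(per_query_scores, n_boot=1000, alpha=0.05, seed=0):
    """Percentile bootstrap confidence interval over per-query precision@10 scores."""
    rng = random.Random(seed)
    n = len(per_query_scores)
    means = sorted(
        sum(rng.choice(per_query_scores) for _ in range(n)) / n
        for _ in range(n_boot)
    )
    lo = means[int(n_boot * alpha / 2)]
    hi = means[int(n_boot * (1 - alpha / 2)) - 1]
    return lo, hi
```

Averaging `precision_at_10` over all 602 test queries and resampling those per-query scores 1,000 times would yield a 95% CI of the kind reported above.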

Romario Gualdrón-Hurtado, Roman Jacome, Leon Suarez, Laura Galvis, Henry Arguello

arxiv logopreprintAug 6 2025
Imaging inverse problems are commonly addressed by minimizing measurement-consistency and signal-prior terms. While considerable attention has been devoted to developing high-performance priors, even the most advanced signal prior may lose its effectiveness when paired with an ill-conditioned sensing matrix that hinders convergence and degrades reconstruction quality. In optimization theory, preconditioners improve an algorithm's convergence by transforming the gradient update. Traditional linear preconditioning techniques enhance convergence, but their performance remains limited by their dependence on the structure of the sensing matrix. Learning-based linear preconditioners have been proposed, but they are optimized only for the data-fidelity term, which may lead to solutions in the null-space of the sensing matrix. This paper employs knowledge distillation to design a nonlinear preconditioning operator. In our method, a teacher algorithm using a better-conditioned (synthetic) sensing matrix guides the student algorithm, which uses an ill-conditioned sensing matrix, through gradient matching via a preconditioning neural network. We validate our nonlinear preconditioner for plug-and-play FISTA in single-pixel, magnetic resonance, and super-resolution imaging tasks, showing consistent performance improvements and better empirical convergence.
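The effect of preconditioning on an ill-conditioned problem can be illustrated with a toy diagonal least-squares example; this sketch uses a fixed linear (diagonal) preconditioner rather than the learned nonlinear operator the paper proposes, and every name in it is an assumption:

```python
# Toy least-squares problem f(x) = 0.5 * ||A x - y||^2 with a diagonal,
# ill-conditioned sensing matrix A (condition number 100); solution x* = (1, 1).
A = [1.0, 0.01]   # diagonal entries of A
y = [1.0, 0.01]

def grad(x):
    # Gradient A^T (A x - y); for diagonal A this is a_i * (a_i * x_i - y_i).
    return [A[i] * (A[i] * x[i] - y[i]) for i in range(2)]

def run(steps, precond):
    """Gradient descent with a diagonal preconditioner (step size 1)."""
    x = [0.0, 0.0]
    for _ in range(steps):
        g = grad(x)
        x = [x[i] - precond[i] * g[i] for i in range(2)]
    return x
```

With the identity "preconditioner" `[1, 1]` the badly scaled coordinate crawls toward the solution, whereas with `[1, 10000]` (the inverse Hessian diag(1/a_i^2) of this quadratic) both coordinates converge in one step; approximating this effect for general sensing matrices is what a learned preconditioner aims at.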

Nicola Casali, Alessandro Brusaferri, Giuseppe Baselli, Stefano Fumagalli, Edoardo Micotti, Gianluigi Forloni, Riaz Hussein, Giovanna Rizzo, Alfonso Mastropietro

arxiv logopreprintAug 6 2025
Accurate estimation of intravoxel incoherent motion (IVIM) parameters from diffusion-weighted MRI remains challenging due to the ill-posed nature of the inverse problem and high sensitivity to noise, particularly in the perfusion compartment. In this work, we propose a probabilistic deep learning framework based on Deep Ensembles (DE) of Mixture Density Networks (MDNs), enabling estimation of total predictive uncertainty and its decomposition into aleatoric (AU) and epistemic (EU) components. The method was benchmarked against non-probabilistic neural networks, a Bayesian fitting approach, and a probabilistic network with a single-Gaussian parametrization. Supervised training was performed on synthetic data, and evaluation was conducted on both simulated data and an in vivo dataset. The reliability of the quantified uncertainties was assessed using calibration curves, output distribution sharpness, and the Continuous Ranked Probability Score (CRPS). MDNs produced more calibrated and sharper predictive distributions for the diffusion coefficient D and fraction f, although slight overconfidence was observed in the pseudo-diffusion coefficient D*. The Robust Coefficient of Variation (RCV) indicated smoother in vivo estimates of D* with MDNs compared with the single-Gaussian model. Despite the training data covering the expected physiological range, elevated EU in vivo suggests a mismatch with real acquisition conditions, highlighting the importance of incorporating EU, which the DE framework makes possible. Overall, we present a comprehensive framework for IVIM fitting with uncertainty quantification, which enables the identification and interpretation of unreliable estimates. The proposed approach can also be adopted for fitting other physical models through appropriate architectural and simulation adjustments.
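The AU/EU decomposition that Deep Ensembles enable follows the law of total variance: aleatoric uncertainty is the average of the members' predicted variances and epistemic uncertainty is the variance of the members' predicted means. A minimal sketch (names are illustrative, not from the paper):

```python
def decompose_uncertainty(member_means, member_vars):
    """Law-of-total-variance split for an ensemble of probabilistic predictors:
    aleatoric = mean of the members' predicted variances,
    epistemic = variance of the members' predicted means."""
    m = len(member_means)
    mu = sum(member_means) / m
    aleatoric = sum(member_vars) / m
    epistemic = sum((mm - mu) ** 2 for mm in member_means) / m
    return aleatoric, epistemic, aleatoric + epistemic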

Yari A, Fasih P, Kamali Hakim L, Asadi A

pubmed logopapersAug 6 2025
The aim of this study was to evaluate the performance of the YOLOv8 deep learning model for detecting zygomatic fractures. Computed tomography scans with zygomatic fractures were collected, with all slices annotated to identify fracture lines across seven categories: zygomaticomaxillary suture, zygomatic arch, zygomaticofrontal suture, sphenozygomatic suture, orbital floor, zygomatic body, and maxillary sinus wall. The images were divided into training, validation, and test datasets in a 6:2:2 ratio. Performance metrics were calculated for each category. A total of 13,988 axial and 14,107 coronal slices were retrieved. The trained algorithm achieved an accuracy of 94.2-97.9%. Recall exceeded 90% across all categories, with sphenozygomatic suture fractures having the highest value (96.6%). Average precision was highest for zygomatic arch fractures (0.827) and lowest for zygomatic body fractures (0.692). The highest F1 score was 96.7%, for zygomaticomaxillary suture fractures, and the lowest was 82.1%, for zygomatic body fractures. Area under the curve (AUC) values were likewise highest for the zygomaticomaxillary suture (0.943) and lowest for zygomatic body fractures (0.876). The YOLOv8 model demonstrated promising results in the automated detection of zygomatic fractures, achieving the highest performance in identifying fractures of the zygomaticomaxillary suture and zygomatic arch.

Callahan SJ, Scholand MB, Kalra A, Muelly M, Reicher JJ

pubmed logopapersAug 6 2025
Interstitial lung disease (ILD) prognostication incorporates clinical history, pulmonary function testing (PFTs), and chest CT pattern classifications. The machine learning classifier Fibresolve includes a model to help detect CT patterns associated with idiopathic pulmonary fibrosis (IPF). We developed and tested new Fibresolve software to predict outcomes in patients with ILD. Fibresolve uses a vision transformer (ViT) algorithm to analyze CT imaging and additionally embeds PFTs, age, and sex to produce an overall risk score. The model was trained via Cox proportional hazards to optimize the risk score in a dataset of 602 subjects designed to maximize predictive performance. Validation was performed on a first hazard ratio assessment dataset, and the model was then tested on a second dataset. In the validation set, 61% of the 220 subjects died during the study period, whereas 40% of the 407 subjects in the second dataset died. The validation dataset's mortality hazard ratios (HR) were 3.66 (95% CI: 2.09-6.42) and 4.66 (CI: 2.47-8.77) for the moderate- and high-risk groups. In the second dataset, Fibresolve was a predictor of mortality at the initial visit, with HRs of 2.79 (1.73-4.49) and 5.82 (3.53-9.60) in the moderate- and high-risk groups. Similar predictive performance was seen at follow-up visits, as well as with changes in Fibresolve scores over sequential visits. Fibresolve predicts mortality by automatically combining CT, PFTs, age, and sex in a ViT model. The new software algorithm affords accurate prognostication and demonstrates the ability to detect clinical changes over time.

Peng Z, Wang Y, Qi Y, Hu H, Fu Y, Li J, Li W, Li Z, Guo W, Shen C, Jiang J, Yang B

pubmed logopapersAug 6 2025
To establish and validate the utility of computed tomography (CT) radiomics for the prognosis of patients with non-small cell lung cancer (NSCLC). Overall, 215 patients with a pathologic diagnosis of NSCLC were included; chest CT images and clinical data were collected before treatment, and follow-up was conducted to assess brain metastasis and survival. Radiomics characteristics were extracted from the chest CT lung-window images of each patient, key characteristics were screened, the radiomics score (Radscore) was calculated, and radiomics, clinical, and combined models were constructed using clinically independent predictive factors. A nomogram was constructed based on the final combined model to visualize prediction results. Predictive efficacy was evaluated using the concordance index (C-index), and survival (Kaplan-Meier) and calibration curves were drawn to further evaluate predictive efficacy. The training set included 151 patients (43 with brain metastasis and 108 without), and the validation set included 64 patients (18 with brain metastasis and 46 without). Multivariate analysis revealed that lymph node metastasis, lymphocyte percentage, and neuron-specific enolase (NSE) were independent predictors of brain metastasis in patients with NSCLC. The areas under the curve (AUC) of the three models were 0.733, 0.836, and 0.849, respectively, in the training set, and 0.739, 0.779, and 0.816, respectively, in the validation set. Multivariate Cox regression analysis revealed that the number of brain metastases, distant metastases elsewhere, and C-reactive protein levels were independent predictors of postoperative survival in patients with brain metastases (P < 0.05). The calibration curve showed that the predicted values of the prognostic prediction model agreed well with the actual values.
The model based on CT radiomics characteristics can effectively predict NSCLC brain metastasis and its prognosis and can guide individualized treatment of NSCLC patients.
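A Radscore of the kind described above is conventionally a weighted linear combination of the screened radiomics features; a hypothetical sketch, where the feature names, weights, and intercept are assumptions rather than values from this study:

```python
def radscore(features, weights, intercept=0.0):
    """Radiomics score as a weighted linear combination of screened features.
    The feature names and weights used here are hypothetical."""
    return intercept + sum(w * features[name] for name, w in weights.items())
```

In practice the weights would come from a regularized regression over the training set, and the resulting score would enter the nomogram alongside the clinical predictors.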

Abdel-Salam M, Houssein EH, Emam MM, Samee NA, Gharehchopogh FS, Bacanin N

pubmed logopapersAug 6 2025
Intracerebral hemorrhage (ICH) is a life-threatening condition caused by bleeding in the brain, with high mortality rates, particularly in the acute phase. Accurate diagnosis through medical image segmentation plays a crucial role in early intervention and treatment. However, existing segmentation methods, such as region-growing, clustering, and deep learning, face significant limitations when applied to complex images like ICH, especially in multi-threshold image segmentation (MTIS). As the number of thresholds increases, these methods often become computationally expensive and exhibit degraded segmentation performance. To address these challenges, this paper proposes an Elite-Adaptive-Turbulent Hiking Optimization Algorithm (EATHOA), an enhanced version of the Hiking Optimization Algorithm (HOA), specifically designed for high-dimensional and multimodal optimization problems like ICH image segmentation. EATHOA integrates three novel strategies: Elite Opposition-Based Learning (EOBL), which improves population diversity and exploration; Adaptive k-Average-Best Mutation (AKAB), which dynamically balances exploration and exploitation; and a Turbulent Operator (TO), which escapes local optima and enhances the convergence rate. Extensive experiments were conducted on the CEC2017 and CEC2022 benchmark functions to evaluate EATHOA's global optimization performance, where it consistently outperformed other state-of-the-art algorithms. The proposed EATHOA was then applied to solve the MTIS problem in ICH images at six different threshold levels. EATHOA achieved peak values of PSNR (34.4671), FSIM (0.9710), and SSIM (0.8816), outperforming recent methods in segmentation accuracy and computational efficiency. These results demonstrate the superior performance of EATHOA and its potential as a powerful tool for medical image analysis, offering an effective and computationally efficient solution for the complex challenges of ICH image segmentation.
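Multi-threshold segmentation of the kind EATHOA addresses typically searches for a threshold vector maximizing an Otsu-style between-class variance over the image histogram; a minimal sketch of that objective (illustrative only, not the paper's fitness function):

```python
def between_class_variance(hist, thresholds):
    """Otsu-style objective for multi-threshold segmentation: a metaheuristic
    searches for the threshold vector that maximizes the variance between the
    intensity classes the thresholds induce."""
    total = sum(hist)
    mean_all = sum(i * h for i, h in enumerate(hist)) / total
    bounds = [0] + sorted(thresholds) + [len(hist)]
    var = 0.0
    for lo, hi in zip(bounds, bounds[1:]):
        weight = sum(hist[lo:hi])
        if weight == 0:
            continue
        mu = sum(i * hist[i] for i in range(lo, hi)) / weight
        var += (weight / total) * (mu - mean_all) ** 2
    return var
```

As the number of thresholds grows, exhaustively scoring every candidate vector becomes intractable, which is what motivates metaheuristics such as EATHOA.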

Ratcliffe, C., Taylor, P. N., de Bezenac, C., Das, K., Biswas, S., Marson, A., Keller, S. S.

medrxiv logopreprintAug 6 2025
Introduction: Structural neuroimaging analyses require research-quality images acquired with costly MRI acquisitions. Isotropic (3D-T1) images are desirable for quantitative analyses; however, a routine compromise in the clinical setting is to acquire anisotropic (2D-T1) analogues for qualitative visual inspection. Machine learning (ML)-based software has shown promise in addressing some of the limitations of 2D-T1 scans in research applications, yet its efficacy in quantitative research is generally poorly understood. Pathology-related abnormalities of the subcortical structures, overlooked on visual inspection, have previously been identified in idiopathic generalised epilepsy (IGE) through quantitative morphometric analyses. As such, IGE biomarkers present a suitable model in which to evaluate the applicability of image preprocessing methods. This study therefore explores subcortical structural biomarkers of IGE, first in our silver-standard 3D-T1 scans, then in 2D-T1 scans that were either untransformed, resampled using a classical interpolation approach, or synthesised with a resolution- and contrast-agnostic ML model (the latter of which is compared to a separate model).
Methods: 2D-T1 and 3D-T1 MRI scans were acquired during the same scanning session for 33 individuals with drug-responsive IGE (mean age 32.16 ± SD 14.20, male n = 14) and 42 individuals with drug-resistant IGE (31.76 ± 11.12, 17), all diagnosed at the Walton Centre NHS Foundation Trust Liverpool, alongside 39 age- and sex-matched healthy controls (32.32 ± 8.65, 16). The untransformed 2D-T1 scans were resampled into isotropic images using NiBabel (res-T1) and preprocessed into synthetic isotropic images using SynthSR (syn-T1). For the 3D-T1, 2D-T1, res-T1, and syn-T1 images, the recon-all command from FreeSurfer 8.0.0 was used to create parcellations of 174 anatomical regions (equivalent to the 174 regional parcellations provided as part of the DL+DiReCT pipeline), defined by the aseg and Destrieux atlases, and FSL run_first_all was used to segment subcortical surface shapes. The new ML FreeSurfer pipeline, recon-all-clinical, was also tested on the 2D-T1, 3D-T1, and res-T1 images. As a model comparison for SynthSR, the DL+DiReCT pipeline was used to provide segmentations of the 2D-T1 and res-T1 images, including estimates of regional volume and thickness. Spatial overlap and intraclass correlations between the morphometrics of the eight resulting parcellations were first determined; subcortical surface shape abnormalities associated with IGE were then identified by comparing the FSL run_first_all outputs of patients with those of controls.
Results: When standardised to the metrics derived from the 3D-T1 scans, cortical volume and thickness estimates trended lower for the 2D-T1, res-T1, syn-T1, and DL+DiReCT outputs, whereas subcortical volume estimates were more coherent. Dice coefficients revealed an acceptable spatial similarity between the cortices of the 3D-T1 scans and the other images overall, and similarity was higher in the subcortical structures. Intraclass correlation coefficients were consistently lowest when metrics were computed for model-derived inputs, and estimates of thickness were less similar to the ground truth than those of volume. For the people with epilepsy, the 3D-T1 scans showed significant surface deflations across various subcortical structures when compared to healthy controls. Analysis of the 2D-T1 scans enabled the reliable detection of a subset of subcortical abnormalities, whereas analyses of the res-T1 and syn-T1 images were more prone to false-positive results.
Conclusions: Resampling and ML image-synthesis methods do not currently attenuate the partial volume effects resulting from low through-plane resolution in anisotropic MRI scans; quantitative analyses using 2D-T1 scans should instead be interpreted with caution, and researchers should consider the potential implications of preprocessing. The recon-all-clinical pipeline is promising but requires further evaluation, especially when considered as an alternative to the classical pipeline.
Key Points:
- Surface deviations indicative of regional atrophy and hypertrophy were identified in people with idiopathic generalised epilepsy.
- Partial volume effects are likely to attenuate subtle morphometric abnormalities, increasing the likelihood of erroneous inference.
- Priors in synthetic image creation models may render them insensitive to subtle biomarkers.
- Resampling and machine-learning-based image synthesis are not currently replacements for research-quality acquisitions in quantitative MRI research.
- The results of studies using synthetic images should be interpreted in a separate context to those using untransformed data.
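The Dice coefficient used above to quantify spatial overlap between parcellations can be sketched for flat binary masks as follows (illustrative only, not the study's pipeline):

```python
def dice(mask_a, mask_b):
    """Dice coefficient between two binary masks given as flat 0/1 sequences:
    twice the intersection divided by the sum of the mask sizes."""
    inter = sum(1 for a, b in zip(mask_a, mask_b) if a and b)
    size = sum(mask_a) + sum(mask_b)
    return 2.0 * inter / size if size else 1.0
```

For a labelled parcellation, the same computation is applied per region by binarizing each label in turn and comparing the result against the corresponding 3D-T1 region.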

Hou B, Du H

pubmed logopapersAug 6 2025
Magnetic Resonance Imaging (MRI) is widely utilized in medical imaging due to its high resolution and non-invasive nature. However, the prolonged acquisition time significantly limits its clinical applicability. Although traditional compressed sensing (CS) techniques can accelerate MRI acquisition, they often lead to degraded reconstruction quality under high undersampling rates. Deep learning-based methods, including CNN- and GAN-based approaches, have improved reconstruction performance, yet are limited by their local receptive fields, making it challenging to effectively capture long-range dependencies. Moreover, these models typically exhibit high computational complexity, which hinders their efficient deployment in practical scenarios. To address these challenges, we propose a lightweight Multi-scale Context-Aware Generative Adversarial Network (MCA-GAN), which enhances MRI reconstruction through dual-domain generators that collaboratively optimize both k-space and image-domain representations. MCA-GAN integrates several lightweight modules, including Depthwise Separable Local Attention (DWLA) for efficient local feature extraction, Adaptive Group Rearrangement Block (AGRB) for dynamic inter-group feature optimization, Multi-Scale Spatial Context Modulation Bridge (MSCMB) for multi-scale feature fusion in skip connections, and Channel-Spatial Multi-Scale Self-Attention (CSMS) for improved global context modeling. Extensive experiments conducted on the IXI, MICCAI 2013, and MRNet knee datasets demonstrate that MCA-GAN consistently outperforms existing methods in terms of PSNR and SSIM. Compared to SepGAN, the latest lightweight model, MCA-GAN achieves a 27.3% reduction in parameter size and a 19.6% reduction in computational complexity, while attaining the shortest reconstruction time among all compared methods. Furthermore, MCA-GAN exhibits robust performance across various undersampling masks and acceleration rates. 
Cross-dataset generalization experiments further confirm its ability to maintain competitive reconstruction quality, underscoring its strong generalization potential. Overall, MCA-GAN improves MRI reconstruction quality while significantly reducing computational cost through a lightweight architecture and multi-scale feature fusion, offering an efficient and accurate solution for accelerated MRI.
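PSNR, one of the reconstruction metrics reported above, can be sketched for flattened images as follows (an illustrative definition, not code from the paper):

```python
import math

def psnr(ref, recon, max_val=1.0):
    """Peak signal-to-noise ratio (dB) between a flattened reference image
    and its reconstruction: 10 * log10(max_val^2 / MSE)."""
    mse = sum((r - t) ** 2 for r, t in zip(ref, recon)) / len(ref)
    if mse == 0:
        return float("inf")
    return 10.0 * math.log10(max_val ** 2 / mse)
```

Higher values indicate a reconstruction closer to the fully sampled reference, which is why PSNR (together with SSIM) is the standard yardstick for accelerated-MRI reconstruction.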