
Deep learning model for malignancy prediction of TI-RADS 4 thyroid nodules with high-risk characteristics using multimodal ultrasound: A multicentre study.

Chu X, Wang T, Chen M, Li J, Wang L, Wang C, Wang H, Wong ST, Chen Y, Li H

PubMed · May 26 2025
The automatic screening of thyroid nodules using computer-aided diagnosis holds great promise for reducing missed diagnoses and misdiagnoses in clinical practice. However, most current research focuses on single-modality images and does not fully leverage the complementary information in multimodal medical images, limiting model performance. To enhance screening accuracy, this study uses a deep learning framework that integrates high-dimensional convolutions of B-mode ultrasound (BMUS) and strain elastography (SE) images to predict the malignancy of TI-RADS 4 thyroid nodules with high-risk features. First, we extract nodule regions from the images and expand the boundary areas. Then, adaptive particle swarm optimization (APSO) and contrast limited adaptive histogram equalization (CLAHE) algorithms are applied to enhance ultrasound image contrast. Finally, deep learning techniques are used to extract and fuse high-dimensional features from both ultrasound modalities to classify benign and malignant thyroid nodules. The proposed model achieved an AUC of 0.937 (95% CI 0.917-0.949) and 0.927 (95% CI 0.907-0.948) in the test and external validation sets, respectively, demonstrating strong generalization ability. The model significantly outperformed three groups of radiologists, and with the model's assistance, all three groups showed improved diagnostic performance. Furthermore, heatmaps generated by the model align closely with radiologists' expertise, further supporting its credibility. The results indicate that our model can assist in clinical thyroid nodule diagnosis, reducing the risk of missed diagnoses and misdiagnoses, particularly in high-risk populations, and holds significant clinical value.
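The abstract names CLAHE as its contrast-enhancement step but does not release implementation details. Below is a minimal sketch of that stage using OpenCV; the clip limit and tile grid are placeholders standing in for the APSO-tuned values, which are not reported.

```python
import cv2
import numpy as np

def enhance_ultrasound(img: np.ndarray, clip_limit: float = 2.0,
                       tile_grid: tuple = (8, 8)) -> np.ndarray:
    """Apply CLAHE to a uint8 grayscale ultrasound crop.

    clip_limit and tile_grid are assumptions; in the paper these
    parameters would be selected by the APSO search.
    """
    clahe = cv2.createCLAHE(clipLimit=clip_limit, tileGridSize=tile_grid)
    return clahe.apply(img)

# Usage on a stand-in BMUS nodule crop:
nodule = (np.random.rand(224, 224) * 255).astype(np.uint8)
enhanced = enhance_ultrasound(nodule)
```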

Research-based clinical deployment of artificial intelligence algorithm for prostate MRI.

Harmon SA, Tetreault J, Esengur OT, Qin M, Yilmaz EC, Chang V, Yang D, Xu Z, Cohen G, Plum J, Sherif T, Levin R, Schmidt-Richberg A, Thompson S, Coons S, Chen T, Choyke PL, Xu D, Gurram S, Wood BJ, Pinto PA, Turkbey B

PubMed · May 26 2025
A critical barrier to deployment and utilization of Artificial Intelligence (AI) algorithms in radiology practice is integrating algorithms directly into the clinical Picture Archiving and Communication System (PACS). Here, we sought to integrate an AI-based pipeline for prostate organ and intraprostatic lesion segmentation within a clinical PACS environment to enable point-of-care utilization under a prospective clinical trial scenario. A previously trained, publicly available AI model for segmentation of intraprostatic findings on multiparametric magnetic resonance imaging (mpMRI) was converted into a containerized environment compatible with MONAI Deploy Express. An inference server and dedicated clinical PACS workflow were established within our institution to evaluate real-time use of the AI algorithm. PACS-based deployment was prospectively evaluated in two phases: first, in a consecutive cohort of patients undergoing diagnostic imaging at our institution, and second, in a consecutive cohort of patients undergoing biopsy based on mpMRI findings. The AI pipeline was executed from within the PACS environment by the radiologist, and AI findings were imported into clinical biopsy planning software for target definition. Metrics of deployment success, timing, and detection performance were recorded and summarized. In phase one, clinical PACS deployment was successfully executed in 57/58 cases, with results obtained within one minute of activation (median 33 s [range 21-50 s]). Comparison with expert radiologist annotation demonstrated model performance consistent with independent validation studies. In phase two, 40/40 cases were successfully executed via PACS deployment and results were imported for biopsy targeting. Prostate cancer detection rates were 82.1% for ROI targets detected by both AI and radiologist, 47.8% for targets proposed by AI and accepted by the radiologist, and 33.3% for targets identified by the radiologist alone. Integration of novel AI algorithms requiring multiparametric input into a clinical PACS environment is feasible, and model outputs can be used for downstream clinical tasks.
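The deployment pattern described, a PACS-side trigger calling a containerized inference server and returning results within about a minute, can be illustrated with a generic client sketch. The endpoint, payload, and job-polling interface below are hypothetical assumptions, not the authors' actual MONAI Deploy Express interface.

```python
import time
import requests

INFERENCE_URL = "http://inference-server:8000/v1/segment"  # hypothetical endpoint

def run_ai_segmentation(study_uid: str, timeout_s: float = 60.0) -> dict:
    """Submit an mpMRI study reference and poll for the segmentation result.

    The 60 s budget mirrors the reported latency (median 33 s, range 21-50 s).
    """
    resp = requests.post(INFERENCE_URL, json={"study_uid": study_uid}, timeout=10)
    resp.raise_for_status()
    job_id = resp.json()["job_id"]  # assumed response schema
    start = time.monotonic()
    while time.monotonic() - start < timeout_s:
        status = requests.get(f"{INFERENCE_URL}/{job_id}", timeout=10).json()
        if status["state"] == "done":
            return status["result"]  # e.g., a DICOM SEG reference for biopsy planning
        time.sleep(2)
    raise TimeoutError(f"Segmentation for {study_uid} exceeded {timeout_s}s")
```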

Diffusion based multi-domain neuroimaging harmonization method with preservation of anatomical details.

Lan H, Varghese BA, Sheikh-Bahaei N, Sepehrband F, Toga AW, Choupan J

PubMed · May 26 2025
In multi-center neuroimaging studies, technical variability caused by batch differences can hinder the ability to aggregate data across sites and negatively impact the reliability of study-level results. Recent efforts in neuroimaging harmonization have aimed to minimize these technical gaps and reduce variability across batches. While generative adversarial networks (GANs) have been a prominent method for harmonization tasks, GAN-harmonized images suffer from artifacts or anatomical distortions. Given the advances in denoising diffusion probabilistic models, which produce high-fidelity images, we assessed the efficacy of the diffusion model for neuroimaging harmonization. Whereas GAN-based methods intrinsically transform imaging styles between two domains per model, we demonstrate the diffusion model's superior capability to harmonize images across multiple domains with a single model. Our experiments highlight that the learned domain-invariant anatomical condition drives the model to preserve anatomical details accurately while disentangling batch differences at each diffusion step. Our proposed method was tested on T1-weighted MRI images from two public neuroimaging datasets, ADNI1 and ABIDE II, yielding harmonization results with consistent anatomy preservation and a superior FID score compared to the GAN-based methods. We conducted multiple analyses, including extensive quantitative and qualitative evaluations against the baseline models, an ablation study showcasing the benefits of the learned domain-invariant condition, and demonstrations of improved consistency in perivascular space segmentation and volumetric analysis after harmonization.
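The core mechanism, a reverse diffusion step conditioned on a domain-invariant anatomical signal, can be sketched in standard DDPM form. The code below assumes the condition is concatenated to the noisy image as an extra channel and uses the textbook DDPM update, which may differ from the paper's exact variant; the noise-prediction network is a placeholder.

```python
import torch

def ddpm_reverse_step(x_t, t, cond, eps_model, betas):
    """One conditional DDPM reverse step: x_t -> x_{t-1}.

    cond is the domain-invariant anatomical condition (assumed concatenated
    channel-wise); eps_model stands in for the trained noise predictor.
    """
    alphas = 1.0 - betas
    alpha_bar = torch.cumprod(alphas, dim=0)  # recomputed per call for clarity
    eps = eps_model(torch.cat([x_t, cond], dim=1), t)
    coef = (1 - alphas[t]) / torch.sqrt(1 - alpha_bar[t])
    mean = (x_t - coef * eps) / torch.sqrt(alphas[t])
    if t > 0:
        return mean + torch.sqrt(betas[t]) * torch.randn_like(x_t)
    return mean

# Toy usage with a stand-in predictor:
betas = torch.linspace(1e-4, 0.02, 1000)
x = torch.randn(1, 1, 64, 64)          # noisy T1 slice at step t
anatomy = torch.randn(1, 1, 64, 64)    # anatomical condition (stand-in)
toy_eps = lambda inp, t: torch.zeros_like(inp[:, :1])
x_prev = ddpm_reverse_step(x, 999, anatomy, toy_eps, betas)
```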

Advancements in Medical Image Classification through Fine-Tuning Natural Domain Foundation Models

Mobina Mansoori, Sajjad Shahabodini, Farnoush Bayatmakou, Jamshid Abouei, Konstantinos N. Plataniotis, Arash Mohammadi

arXiv preprint · May 26 2025
Foundation models are large-scale models, pre-trained on massive datasets, that perform a wide range of tasks. These models have shown consistently improved results as new methods are introduced. It is crucial to analyze how these trends impact the medical field and determine whether these advancements can drive meaningful change. This study investigates the application of recent state-of-the-art foundation models, DINOv2, MAE, VMamba, CoCa, SAM2, and AIMv2, to medical image classification. We explore their effectiveness on datasets including CBIS-DDSM for mammography, ISIC2019 for skin lesions, APTOS2019 for diabetic retinopathy, and CHEXPERT for chest radiographs. By fine-tuning these models and evaluating their configurations, we aim to understand the potential of these advancements in medical image classification. The results indicate that these advanced models significantly enhance classification outcomes, demonstrating robust performance despite limited labeled data. Based on our results, the AIMv2, DINOv2, and SAM2 models outperformed the others, demonstrating that progress in natural-domain training has positively impacted the medical domain and improved classification outcomes. Our code is publicly available at: https://github.com/sajjad-sh33/Medical-Transfer-Learning.
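As a minimal illustration of the fine-tuning setup, the sketch below loads DINOv2 (one of the backbones evaluated) via torch.hub and attaches a linear classification head. The head size, learning rate, and the choice of 5 classes (suggestive of APTOS2019 grading) are illustrative assumptions, not the authors' exact configuration; the repository linked above has the actual code.

```python
import torch
import torch.nn as nn

# Downloads pretrained weights; ViT-S/14 emits 384-d CLS embeddings.
backbone = torch.hub.load("facebookresearch/dinov2", "dinov2_vits14")
head = nn.Linear(384, 5)  # assumed 5-class head, e.g. retinopathy grades

params = list(backbone.parameters()) + list(head.parameters())
optimizer = torch.optim.AdamW(params, lr=1e-5)
criterion = nn.CrossEntropyLoss()

def train_step(images: torch.Tensor, labels: torch.Tensor) -> float:
    """images: (B, 3, 224, 224), sides must be multiples of the 14-px patch."""
    optimizer.zero_grad()
    feats = backbone(images)            # (B, 384) CLS embeddings
    loss = criterion(head(feats), labels)
    loss.backward()
    optimizer.step()
    return loss.item()
```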

Advancing Limited-Angle CT Reconstruction Through Diffusion-Based Sinogram Completion

Jiaqi Guo, Santiago Lopez-Tapia, Aggelos K. Katsaggelos

arXiv preprint · May 26 2025
Limited Angle Computed Tomography (LACT) faces significant challenges due to missing angular information. Unlike previous methods that operate in the image domain, we propose a new method that focuses on sinogram inpainting. We leverage MR-SDEs, a variant of diffusion models that characterize the diffusion process with mean-reverting stochastic differential equations, to fill in missing angular data at the projection level. Furthermore, by combining distillation with a constraint on the model output via the pseudo-inverse of the inpainting matrix, the diffusion process is accelerated and completed in a single step, enabling efficient and accurate sinogram completion. A subsequent post-processing module back-projects the inpainted sinogram into the image domain and further refines the reconstruction, effectively suppressing artifacts while preserving critical structural details. Quantitative experiments demonstrate that the proposed method achieves state-of-the-art performance in both perceptual and fidelity quality, offering a promising solution for LACT reconstruction in scientific and clinical applications.
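A mean-reverting SDE has the form dx = θ(μ − x)dt + σ dW, where the process drifts toward a mean state μ, here playing the role of the degraded, angle-truncated sinogram. The Euler-Maruyama simulation below is a generic sketch of that forward process; the parameter values and noise schedule are placeholders, and the paper's MR-SDE parameterization may differ.

```python
import numpy as np

def mean_reverting_forward(x0, mu, theta=1.5, sigma=0.3,
                           n_steps=100, dt=0.01, seed=0):
    """Euler-Maruyama simulation of dx = theta*(mu - x)dt + sigma dW.

    x0: clean sinogram; mu: degraded sinogram the process reverts toward.
    """
    rng = np.random.default_rng(seed)
    x = x0.astype(float)
    for _ in range(n_steps):
        dw = rng.normal(0.0, np.sqrt(dt), size=x.shape)
        x = x + theta * (mu - x) * dt + sigma * dw
    return x

# Stand-in sinogram with the last third of the angles missing:
sino = np.ones((180, 256))
degraded = sino.copy()
degraded[120:] = 0.0
x_T = mean_reverting_forward(sino, degraded)  # ends near the degraded state
```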

Distinct brain age gradients across the adult lifespan reflect diverse neurobiological hierarchies.

Riccardi N, Teghipco A, Newman-Norlund S, Newman-Norlund R, Rangus I, Rorden C, Fridriksson J, Bonilha L

PubMed · May 25 2025
'Brain age' is a biological clock typically used to describe brain health with one number, but its relationship with established gradients of cortical organization remains unclear. We address this gap by leveraging a data-driven, region-specific brain age approach in 335 neurologically intact adults, using a convolutional neural network (volBrain) to estimate regional brain ages directly from structural MRI without a predefined set of morphometric properties. Six distinct gradients of brain aging are replicated in two independent cohorts. Spatial patterns of accelerated brain aging in older adults quantitatively align with the archetypal sensorimotor-to-association axis of cortical organization. Other brain aging gradients reflect neurobiological hierarchies such as gene expression and externopyramidization. Participant-level correspondences to brain age gradients are associated with cognitive and sensorimotor performance and explained behavioral variance more effectively than global brain age. These results suggest that regional brain age patterns reflect fundamental principles of cortical organization and behavior.
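The claim that participant-level gradient scores explain behavioral variance better than a single global brain age can be illustrated with a toy variance-explained comparison. The data below are synthetic stand-ins, not the study's measurements, and the one-number summary is a crude assumption.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
n = 335                                         # matches the study's sample size
gradients = rng.normal(size=(n, 6))             # six brain-age gradient scores
behavior = gradients @ rng.normal(size=6) + rng.normal(scale=1.0, size=n)
global_gap = gradients.mean(axis=1, keepdims=True)  # crude "global brain age" proxy

r2_gradients = LinearRegression().fit(gradients, behavior).score(gradients, behavior)
r2_global = LinearRegression().fit(global_gap, behavior).score(global_gap, behavior)
print(f"R^2 six gradients={r2_gradients:.2f} vs global summary={r2_global:.2f}")
```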

Noninvasive prediction of failure of the conservative treatment in lateral epicondylitis by clinicoradiological features and elbow MRI radiomics based on interpretable machine learning: a multicenter cohort study.

Cui J, Wang P, Zhang X, Zhang P, Yin Y, Bai R

PubMed · May 24 2025
To develop and validate an interpretable machine learning model, based on clinicoradiological features and magnetic resonance imaging (MRI) radiomic features, to predict the failure of conservative treatment in lateral epicondylitis (LE). This retrospective study included 420 patients with LE from three hospitals, divided into a training cohort (n = 245), an internal validation cohort (n = 115), and an external validation cohort (n = 60). Patients were categorized into conservative treatment failure (n = 133) and conservative treatment success (n = 287) groups based on the outcome of conservative treatment. We developed two predictive models: one utilizing clinicoradiological features, and another integrating clinicoradiological and radiomic features. Seven machine learning algorithms were evaluated to determine the optimal model for predicting the failure of conservative treatment. Model performance was assessed using receiver operating characteristic (ROC) analysis, and model interpretability was examined using SHapley Additive exPlanations (SHAP). The LightGBM algorithm was selected as the optimal model because of its superior performance. The combined model demonstrated enhanced predictive accuracy, with an area under the ROC curve (AUC) of 0.96 (95% CI: 0.91, 0.99) in the external validation cohort. SHAP analysis identified the radiological feature "CET coronal tear size" and the radiomic feature "AX_log-sigma-1-0-mm-3D_glszm_SmallAreaEmphasis" as key predictors of conservative treatment failure. We developed and validated an interpretable LightGBM machine learning model that integrates clinicoradiological and radiomic features to predict the failure of conservative treatment in LE. The model demonstrates high predictive accuracy and offers valuable insights into key prognostic factors.
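The modeling pattern here, a LightGBM classifier on tabular features with SHAP-based interpretation, is standard and can be sketched end-to-end. The feature matrix below is a synthetic stand-in for the combined clinicoradiological + radiomic table; hyperparameters are illustrative, not the authors' tuned values.

```python
import numpy as np
import lightgbm as lgb
import shap
from sklearn.datasets import make_classification
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in: 420 "patients", 30 mixed features.
X, y = make_classification(n_samples=420, n_features=30, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0)

model = lgb.LGBMClassifier(n_estimators=300, learning_rate=0.05, random_state=0)
model.fit(X_train, y_train)
print("AUC:", roc_auc_score(y_test, model.predict_proba(X_test)[:, 1]))

# SHAP ranks feature contributions, mirroring how the paper surfaced
# "CET coronal tear size" and the GLSZM small-area-emphasis feature.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X_test)
if isinstance(shap_values, list):  # some shap versions return per-class lists
    shap_values = shap_values[1]
top = np.abs(shap_values).mean(axis=0).argsort()[::-1][:5]
print("top feature indices by mean |SHAP|:", top)
```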

TK-Mamba: Marrying KAN with Mamba for Text-Driven 3D Medical Image Segmentation

Haoyu Yang, Yuxiang Cai, Jintao Chen, Xuhong Zhang, Wenhui Lei, Xiaoming Shi, Jianwei Yin, Yankai Jiang

arXiv preprint · May 24 2025
3D medical image segmentation is vital for clinical diagnosis and treatment but is challenged by high-dimensional data and complex spatial dependencies. Traditional single-modality networks, such as CNNs and Transformers, are often limited by computational inefficiency and constrained contextual modeling in 3D settings. We introduce a novel multimodal framework that leverages Mamba and Kolmogorov-Arnold Networks (KAN) as an efficient backbone for long-sequence modeling. Our approach features three key innovations. First, an EGSC (Enhanced Gated Spatial Convolution) module captures spatial information when unfolding 3D images into 1D sequences. Second, we extend Group-Rational KAN (GR-KAN), a KAN variant with rational basis functions, into 3D-Group-Rational KAN (3D-GR-KAN) for 3D medical imaging (its first application in this domain), enabling superior feature representation tailored to volumetric data. Third, a dual-branch text-driven strategy leverages CLIP's text embeddings: one branch swaps one-hot labels for semantic vectors to preserve inter-organ semantic relationships, while the other aligns images with detailed organ descriptions to enhance semantic alignment. Experiments on the Medical Segmentation Decathlon (MSD) and KiTS23 datasets show our method achieving state-of-the-art performance, surpassing existing approaches in accuracy and efficiency. This work highlights the power of combining advanced sequence modeling, extended network architectures, and vision-language synergy to push 3D medical image segmentation forward, delivering a scalable solution for clinical use. The source code is openly available at https://github.com/yhy-whu/TK-Mamba.
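The first text-driven branch, replacing one-hot organ labels with CLIP text embeddings so the label space carries inter-organ semantics, can be sketched with Hugging Face's CLIP text encoder. The model checkpoint and prompt templates below are illustrative assumptions; the paper's prompt design may differ, and the linked repository contains the actual implementation.

```python
import torch
from transformers import CLIPTokenizer, CLIPTextModel

tokenizer = CLIPTokenizer.from_pretrained("openai/clip-vit-base-patch32")
text_encoder = CLIPTextModel.from_pretrained("openai/clip-vit-base-patch32")

organs = ["liver", "spleen", "left kidney", "right kidney"]  # example classes
prompts = [f"a CT scan of the {o}" for o in organs]          # assumed template

with torch.no_grad():
    tokens = tokenizer(prompts, padding=True, return_tensors="pt")
    # Pooled EOS-token embedding per prompt: (n_organs, 512).
    label_vectors = text_encoder(**tokens).pooler_output

# label_vectors can now replace one-hot targets: e.g. score each voxel's
# projected 512-d feature against every organ vector by cosine similarity,
# so semantically related organs (left/right kidney) stay close in label space.
```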

Quantitative image quality metrics enable resource-efficient quality control of clinically applied AI-based reconstructions in MRI.

White OA, Shur J, Castagnoli F, Charles-Edwards G, Whitcher B, Collins DJ, Cashmore MTD, Hall MG, Thomas SA, Thompson A, Harrison CA, Hopkinson G, Koh DM, Winfield JM

PubMed · May 24 2025
AI-based MRI reconstruction techniques improve efficiency by reducing acquisition times whilst maintaining or improving image quality. Recent recommendations from professional bodies suggest centres should perform quality assessments on AI tools. However, monitoring long-term performance presents challenges due to model drift or system updates. Radiologist-based assessments are resource-intensive and may be subjective, highlighting the need for efficient quality control (QC) measures. This study explores using image quality metrics (IQMs) to assess AI-based reconstructions. 58 patients undergoing standard-of-care rectal MRI were imaged using AI-based and conventional T2-weighted sequences. Paired and unpaired IQMs were calculated. Sensitivity of IQMs to retrospective perturbations in AI-based reconstructions was assessed using control charts, and statistical comparisons between the four MR systems in the evaluation were performed. Two radiologists evaluated the image quality of the perturbed images, giving an indication of their clinical relevance. Paired IQMs demonstrated sensitivity to changes in AI-reconstruction settings, identifying deviations outside ±2 standard deviations of the reference dataset. Unpaired metrics showed less sensitivity. Paired IQMs showed no difference in performance between 1.5 T and 3 T systems (p > 0.99), whilst minor but significant (p < 0.0379) differences were noted for unpaired IQMs. IQMs are effective for QC of AI-based MR reconstructions, offering resource-efficient alternatives to repeated radiologist evaluations. Future work should expand this to other imaging applications and assess additional measures.
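The paired-IQM control-chart idea can be sketched simply: compute a metric such as SSIM between the AI and conventional reconstructions, then flag scans falling outside ±2 standard deviations of a reference dataset. The choice of SSIM and the thresholding details below are assumptions; the study evaluates several paired and unpaired metrics.

```python
import numpy as np
from skimage.metrics import structural_similarity

def paired_iqm(ai_img: np.ndarray, conv_img: np.ndarray) -> float:
    """SSIM between AI-based and conventional reconstructions (one paired IQM)."""
    return structural_similarity(
        ai_img, conv_img, data_range=float(conv_img.max() - conv_img.min()))

def flag_outliers(new_scores, reference_scores):
    """Control-chart rule: flag scores outside mean ± 2 SD of the reference set."""
    mu, sd = np.mean(reference_scores), np.std(reference_scores)
    lower, upper = mu - 2 * sd, mu + 2 * sd
    return [not (lower <= s <= upper) for s in new_scores]

# Usage: reference_scores from the accepted baseline cohort, new_scores from
# ongoing QC; a True flag would prompt radiologist review of that scan.
```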

Evaluation of locoregional invasiveness of early lung adenocarcinoma manifesting as ground-glass nodules via [⁶⁸Ga]Ga-FAPI-46 PET/CT imaging.

Ruan D, Shi S, Guo W, Pang Y, Yu L, Cai J, Wu Z, Wu H, Sun L, Zhao L, Chen H

PubMed · May 24 2025
Accurate differentiation of the histologic invasiveness of early-stage lung adenocarcinoma is crucial for determining surgical strategies. This study aimed to investigate the potential of [⁶⁸Ga]Ga-FAPI-46 PET/CT in assessing the invasiveness of early lung adenocarcinoma presenting as ground-glass nodules (GGNs) and to identify imaging features with strong predictive potential. This prospective study (NCT04588064) was conducted between July 2020 and July 2022, focusing on GGNs confirmed postoperatively as invasive adenocarcinoma (IAC), minimally invasive adenocarcinoma (MIA), or precursor glandular lesions (PGL). A total of 45 patients with 53 pulmonary GGNs were included: 19 GGNs associated with PGL-MIA and 34 with IAC. Lung nodules were segmented using the Segment Anything Model in Medical Images (MedSAM) and the PET Tumor Segmentation Extension. Clinical characteristics, along with conventional and high-throughput radiomics features from high-resolution CT (HRCT) and PET scans, were analysed. The predictive performance of these features in differentiating between PGL or MIA (PGL-MIA) and IAC was assessed using 5-fold cross-validation across six machine learning algorithms. Model validation was performed on an independent external test set (n = 11). The Chi-squared, Fisher's exact, and DeLong tests were employed to compare the performance of the models. The maximum standardised uptake value (SUVmax) derived from [⁶⁸Ga]Ga-FAPI-46 PET was identified as an independent predictor of IAC. A cut-off value of 1.82 yielded a sensitivity of 94% (32/34), specificity of 84% (16/19), and an overall accuracy of 91% (48/53) in the training set, while achieving 100% (12/12) accuracy in the external test set. Radiomics-based classification further improved diagnostic performance, achieving a sensitivity of 97% (33/34), specificity of 89% (17/19), accuracy of 94% (50/53), and an area under the receiver operating characteristic curve (AUC) of 0.97 [95% CI: 0.93-1.00]. Compared with the CT-based radiomics model and the PET-based model, the combined PET/CT radiomics model did not show significant improvement in predictive performance. The key predictive feature was [⁶⁸Ga]Ga-FAPI-46 PET log-sigma-7-mm-3D_firstorder_RootMeanSquared. The SUVmax derived from [⁶⁸Ga]Ga-FAPI-46 PET/CT can effectively differentiate the invasiveness of early-stage lung adenocarcinoma manifesting as GGNs, and integrating high-throughput features from [⁶⁸Ga]Ga-FAPI-46 PET/CT images can considerably enhance classification accuracy. NCT04588064; URL: https://clinicaltrials.gov/study/NCT04588064.
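As a quick worked check of the reported cut-off arithmetic (calling IAC when SUVmax ≥ 1.82), the training-set counts from the abstract reproduce the stated percentages:

```python
# Counts taken directly from the abstract's training-set results.
sensitivity = 32 / 34        # true IAC called IAC        -> 94%
specificity = 16 / 19        # true PGL-MIA called benign -> 84%
accuracy = (32 + 16) / 53    # all correct calls / nodules -> 91%
print(f"sensitivity={sensitivity:.0%}, "
      f"specificity={specificity:.0%}, accuracy={accuracy:.0%}")
```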