
HNOSeg-XS: Extremely Small Hartley Neural Operator for Efficient and Resolution-Robust 3D Image Segmentation.

Wong KCL, Wang H, Syeda-Mahmood T

PubMed | Jul 11, 2025
In medical image segmentation, convolutional neural networks (CNNs) and transformers are dominant. For CNNs, given the local receptive fields of convolutional layers, long-range spatial correlations are captured through consecutive convolutions and pooling. However, as the computational cost and memory footprint can be prohibitively large, 3D models can afford fewer layers than 2D models, with reduced receptive fields and abstraction levels. For transformers, although long-range correlations can be captured by multi-head attention, their quadratic complexity with respect to input size is computationally demanding. Therefore, either type of model may require input size reduction to allow more filters and layers for better segmentation. Nevertheless, given their discrete nature, models trained with patch-wise training or image downsampling may produce suboptimal results when applied at higher resolutions. To address this issue, we propose the resolution-robust HNOSeg-XS architecture. We model image segmentation by learnable partial differential equations through the Fourier neural operator, which has the zero-shot super-resolution property. By replacing the Fourier transform with the Hartley transform and reformulating the problem in the frequency domain, we created the HNOSeg-XS model, which is resolution robust, fast, memory efficient, and extremely parameter efficient. When tested on the BraTS'23, KiTS'23, and MVSeg'23 datasets with a Tesla V100 GPU, HNOSeg-XS showed superior resolution robustness with fewer than 34.7k model parameters. It also achieved the best overall inference time (< 0.24 s) and memory efficiency (< 1.8 GiB) among the tested CNN and transformer models.
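As an aside on the core trick: the discrete Hartley transform is a real-valued relative of the DFT and can be computed directly from an FFT as H = Re(F) - Im(F). A minimal NumPy sketch of that relationship (illustrative only; the authors' model applies the transform inside learned 3D neural-operator layers):

```python
import numpy as np

def dht(x):
    # Discrete Hartley transform via the FFT:
    # H[k] = sum_n x[n] * cas(2*pi*k*n/N), with cas(t) = cos(t) + sin(t),
    # which equals Re(F[k]) - Im(F[k]) for the standard DFT F.
    F = np.fft.fft(x)
    return F.real - F.imag

def idht(H):
    # Up to a factor of 1/N, the DHT is its own inverse.
    return dht(H) / len(H)
```

Because the transform stays real-valued end to end, it avoids the complex arithmetic and storage of an FFT-based operator, which is one plausible source of the reported memory and parameter savings.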

Automated MRI protocoling in neuroradiology in the era of large language models.

Reiner LN, Chelbi M, Fetscher L, Stöckel JC, Csapó-Schmidt C, Guseynova S, Al Mohamad F, Bressem KK, Nawabi J, Siebert E, Wattjes MP, Scheel M, Meddeb A

PubMed | Jul 11, 2025
This study investigates the automation of MRI protocoling, a routine task in radiology, using large language models (LLMs), comparing an open-source model (Llama 3.1 405B) and a proprietary model (GPT-4o) with and without retrieval-augmented generation (RAG), a method for incorporating domain-specific knowledge. This retrospective study included MRI studies conducted between January and December 2023, along with institution-specific protocol assignment guidelines. Clinical questions were extracted, and a neuroradiologist established the gold standard protocol. LLMs were tasked with assigning MRI protocols and contrast medium administration with and without RAG. The results were compared to protocols selected by four radiologists. Token-based symmetric accuracy, the Wilcoxon signed-rank test, and the McNemar test were used for evaluation. Data from 100 neuroradiology reports (mean age, 54.2 ± 18.41 years; 50% women) were included. RAG integration significantly improved accuracy in sequence and contrast media prediction for Llama 3.1 (sequences: 38% vs. 70%, P < .001; contrast media: 77% vs. 94%, P < .001) and GPT-4o (sequences: 43% vs. 81%, P < .001; contrast media: 79% vs. 92%, P = .006). GPT-4o outperformed Llama 3.1 in MRI sequence prediction (81% vs. 70%, P < .001), with accuracy comparable to the radiologists (81% ± 0.21, P = .43). Both models equaled radiologists in predicting contrast media administration (Llama 3.1 RAG: 94% vs. 91% ± 0.2, P = .37; GPT-4o RAG: 92% vs. 91% ± 0.24, P = .48). Large language models show great potential as decision-support tools for MRI protocoling, with performance similar to radiologists. RAG enhances the ability of LLMs to provide accurate, institution-specific protocol recommendations.

Semi-supervised Medical Image Segmentation Using Heterogeneous Complementary Correction Network and Confidence Contrastive Learning.

Li L, Xue M, Li S, Dong Z, Liao T, Li P

PubMed | Jul 11, 2025
Semi-supervised medical image segmentation techniques have demonstrated significant potential and effectiveness in clinical diagnosis. The prevailing approaches using the mean-teacher (MT) framework achieve promising segmentation results. However, due to the unreliability of the pseudo labels generated by the teacher model, existing methods still have inherent limitations that must be addressed. In this paper, we propose an innovative semi-supervised method for medical image segmentation that combines a heterogeneous complementary correction network with confidence contrastive learning (HC-CCL). Specifically, we develop a triple-branch framework by integrating a heterogeneous complementary correction (HCC) network into the MT framework. HCC serves as an auxiliary branch that corrects prediction errors in the student model and provides complementary information. To improve the feature learning capacity of our model, we introduce a confidence contrastive learning (CCL) approach with a novel sampling strategy. Furthermore, we develop a momentum style transfer (MST) method to narrow the gap between labeled and unlabeled data distributions. In addition, we introduce a Cutout-style augmentation for unsupervised learning to enhance performance. Three medical image datasets (the left atrial (LA) dataset, the NIH pancreas dataset, and the BraTS 2019 dataset) were employed to rigorously evaluate HC-CCL. Quantitative results demonstrate significant performance advantages over existing approaches, achieving state-of-the-art performance across all metrics. The implementation will be released at https://github.com/xxmmss/HC-CCL .
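For context, the mean-teacher framework referenced above maintains the teacher's weights as an exponential moving average (EMA) of the student's, and the smoothed teacher supplies the pseudo labels. A toy sketch of that update with parameters as plain arrays (the smoothing factor `alpha` is illustrative, not taken from the paper):

```python
import numpy as np

def ema_update(teacher, student, alpha=0.99):
    # Mean-teacher rule: theta_teacher <- alpha * theta_teacher
    #                                   + (1 - alpha) * theta_student,
    # applied parameter-wise after each student optimisation step.
    return {k: alpha * teacher[k] + (1 - alpha) * student[k] for k in teacher}
```

Because the teacher averages the student over many steps, its pseudo labels are less noisy than any single student snapshot, which is exactly the property the HCC branch above is designed to further correct.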

A novel artificial intelligence-based model for automated Lenke classification in adolescent idiopathic scoliosis.

Xie K, Zhu S, Lin J, Li Y, Huang J, Lei W, Yan Y

PubMed | Jul 11, 2025
To develop an artificial intelligence (AI)-driven model for automatic Lenke classification of adolescent idiopathic scoliosis (AIS) and assess its performance. This retrospective study utilized 860 spinal radiographs from 215 AIS patients with four views each, split into 161 training and 54 testing cases. Additionally, 1220 spinal radiographs from 610 patients with only anterior-posterior (AP) and lateral (LAT) views were collected for training. The model was designed to perform keypoint detection, pedicle segmentation, and AIS classification based on a custom classification strategy. Its performance was evaluated against the gold standard using metrics such as mean absolute difference (MAD), intraclass correlation coefficient (ICC), Bland-Altman plots, Cohen's kappa, and the confusion matrix. In comparison to the gold standard, the MAD for all predicted angles was 2.29°, with an excellent ICC. Bland-Altman analysis revealed minimal differences between the methods. For Lenke classification, the model exhibited exceptional consistency in curve type, lumbar modifier, and thoracic sagittal profile, with average kappa values of 0.866, 0.845, and 0.827, respectively, and corresponding accuracy rates of 87.07%, 92.59%, and 92.59%. Subgroup analysis further confirmed the model's high consistency, with kappa values ranging from 0.635 to 0.930, 0.672 to 0.926, and 0.815 to 0.847, and accuracy rates of 90.7-98.1%, 92.6-98.3%, and 92.6-98.1%, respectively. This novel AI system enables rapid and accurate automatic Lenke classification, offering potential assistance to spinal surgeons.
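Cohen's kappa, used throughout the evaluation above, corrects observed agreement for the agreement expected by chance from the two raters' marginal label frequencies. A small sketch of the standard formula (not the authors' code):

```python
import numpy as np

def cohens_kappa(y1, y2):
    # kappa = (p_o - p_e) / (1 - p_e), where p_o is observed agreement and
    # p_e is chance agreement from the product of the marginal frequencies.
    y1, y2 = np.asarray(y1), np.asarray(y2)
    labels = np.union1d(y1, y2)
    p_o = np.mean(y1 == y2)
    p_e = sum(np.mean(y1 == c) * np.mean(y2 == c) for c in labels)
    return (p_o - p_e) / (1 - p_e)
```

A kappa of 0.866, as reported for curve type, therefore indicates agreement well beyond what the class frequencies alone would produce.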

Effect of data-driven motion correction for respiratory movement on lesion detectability in PET-CT: a phantom study.

de Winter MA, Gevers R, Lavalaye J, Habraken JBA, Maspero M

PubMed | Jul 11, 2025
While data-driven motion correction (DDMC) techniques have proven to enhance the visibility of lesions affected by motion, their impact on overall detectability remains unclear. This study investigates whether DDMC improves lesion detectability in [18F]FDG PET-CT. A moving platform simulated respiratory motion in a NEMA-IEC body phantom with varying amplitudes (0, 7, 10, 20, and 30 mm) and target-to-background ratios (2, 5, and 10.5). Scans were reconstructed with and without DDMC, and the spherical targets' maximal and mean recovery coefficients (RC) and contrast-to-noise ratios (CNR) were measured. DDMC resulted in higher RC values in the target spheres. CNR values increased for small, strongly motion-affected targets but decreased for larger spheres at smaller amplitudes. A sub-analysis showed that DDMC increased the contrast of the spheres along with a 36% increase in background noise. While DDMC significantly enhances contrast (RC), its impact on detectability (CNR) is less profound due to the increased background noise. CNR improves for small targets with high motion amplitude, potentially enhancing the detectability of low-uptake lesions. Given that the increased background noise may reduce detectability for targets unaffected by motion, we suggest that DDMC reconstructions are best used in addition to non-DDMC reconstructions.
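The two figures of merit here have simple definitions; the sketch below uses their common forms (RC as the measured-to-true activity ratio, CNR as contrast normalised by background noise), noting that exact NEMA-style definitions vary between reports:

```python
import numpy as np

def recovery_coefficient(sphere_mean, true_activity):
    # RC: fraction of the true activity concentration recovered in the VOI;
    # partial-volume and motion blur both push it below 1.
    return sphere_mean / true_activity

def contrast_to_noise_ratio(sphere_mean, bg_mean, bg_std):
    # CNR: sphere-to-background contrast normalised by background noise,
    # so any DDMC-induced rise in bg_std directly lowers detectability.
    return (sphere_mean - bg_mean) / bg_std
```

The study's central tension is visible in the second formula: DDMC raises the numerator (contrast) but, via the reported 36% noise increase, also raises the denominator, so CNR can go either way.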

Diffusion-weighted imaging in rectal cancer MRI: from theory to practice.

Mayumi Takamune D, Miranda J, Mariussi M, Reif de Paula T, Mazaheri Y, Younus E, Jethwa KR, Knudsen CC, Bizinoto V, Cardoso D, de Arimateia Batista Araujo-Filho J, Sparapan Marques CF, Higa Nomura C, Horvat N

PubMed | Jul 11, 2025
Diffusion-weighted imaging (DWI) has become a cornerstone of high-resolution rectal MRI, providing critical functional information that complements T2-weighted imaging (T2WI) throughout the management of rectal cancer. From baseline staging to restaging after neoadjuvant therapy and longitudinal surveillance during nonoperative management or post-surgical follow-up, DWI improves tumor detection, characterizes treatment response, and facilitates early identification of tumor regrowth or recurrence. This review offers a comprehensive overview of DWI in rectal cancer, emphasizing its technical characteristics, optimal acquisition strategies, and integration with qualitative and quantitative interpretive frameworks. The manuscript also addresses interpretive pitfalls, highlights emerging techniques such as intravoxel incoherent motion (IVIM), diffusion kurtosis imaging (DKI), and small field-of-view DWI, and explores the growing role of radiomics and artificial intelligence in advancing precision imaging. DWI, when rigorously implemented and interpreted, enhances the accuracy, reproducibility, and clinical utility of rectal MRI.
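On the quantitative side, DWI is usually summarised by the apparent diffusion coefficient (ADC), obtained from the mono-exponential signal decay across b-values. A minimal two-point estimate (illustrative only; clinical ADC maps are typically fitted voxel-wise over several b-values, and IVIM/DKI mentioned above deliberately depart from this mono-exponential model):

```python
import numpy as np

def adc_two_point(s_low, s_high, b_low, b_high):
    # Mono-exponential DWI model S(b) = S0 * exp(-b * ADC), solved from two
    # b-values; ADC comes out in mm^2/s when b is given in s/mm^2.
    return np.log(s_low / s_high) / (b_high - b_low)
```

Restricted diffusion in cellular tumor lowers the ADC, which is why viable rectal tumor typically appears bright on high-b-value images and dark on the ADC map.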

[MP-MRI in the evaluation of non-operative treatment response, for residual and recurrent tumor detection in head and neck cancer].

Gődény M

PubMed | Jul 11, 2025
As non-surgical therapies gain acceptance in head and neck tumors, the importance of imaging has increased. New therapeutic methods (in radiation therapy, targeted biological therapy, and immunotherapy) require better tumor characterization and prognostic information along with accurate anatomy. Magnetic resonance imaging (MRI) has become the gold standard in head and neck cancer evaluation, not only for staging but also for assessing tumor response, posttreatment status, and complications, as well as for detecting residual or recurrent tumor. Multiparametric anatomical and functional MRI (MP-MRI) is a true cancer imaging biomarker: in addition to high-resolution tumor anatomy, it provides molecular and functional, qualitative and quantitative data through diffusion-weighted MRI (DW-MRI) and perfusion dynamic contrast-enhanced MRI (P-DCE-MRI), which can improve the assessment of the biological target volume and determine treatment response. DW-MRI provides information at the cellular level about cell density and the integrity of the plasma membrane, based on water movement. P-DCE-MRI provides useful hemodynamic information about tissue vascularity and vascular permeability. Recent studies have shown promising results using radiomics features, and MP-MRI has opened new perspectives in oncologic imaging, realizing the latest technological advances with the help of artificial intelligence.

RadientFusion-XR: A Hybrid LBP-HOG Model for COVID-19 Detection Using Machine Learning.

K V G, Gripsy JV

PubMed | Jul 11, 2025
The rapid and accurate detection of COVID-19 (coronavirus disease 2019) from normal and pneumonia chest x-ray images is essential for timely diagnosis and treatment. Overlapping features in radiology images make it challenging for radiologists to distinguish COVID-19 cases. This research investigates the effectiveness of combining local binary pattern (LBP) and histogram of oriented gradients (HOG) features with machine learning algorithms to differentiate COVID-19 from normal and pneumonia cases using chest x-rays. The proposed hybrid fusion model, RadientFusion-XR, combines LBP and HOG features with shallow learning algorithms. This fusion provides a comprehensive feature representation, enabling more precise differentiation among the three classes. Using an ensemble classifier, the hybrid model achieved an exceptional accuracy of 99% for the binary task (COVID-19 vs. normal) and 97% for the multi-class task (COVID-19, normal, pneumonia). These results demonstrate the efficacy of the hybrid approach in enhancing feature representation and achieving superior classification accuracy, presenting a promising and efficient tool for early COVID-19 and pneumonia diagnosis in clinical settings, with potential integration into automated diagnostic systems. The interpretable nature of RadientFusion-XR, alongside its effectiveness and explainability, makes it a valuable tool for clinical applications, fostering trust and enabling informed decision-making by healthcare professionals.
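For context, the LBP half of the feature pair thresholds each pixel's eight neighbours against the centre pixel and packs the results into one byte, while HOG plays the complementary role of encoding local gradient orientations. A minimal interior-pixel LBP sketch (illustrative; published work typically uses library implementations such as scikit-image's `local_binary_pattern`):

```python
import numpy as np

def lbp_3x3(img):
    # 8-neighbour local binary pattern for interior pixels: each neighbour
    # >= centre contributes one bit, ordered clockwise from the top-left.
    offs = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
            (1, 1), (1, 0), (1, -1), (0, -1)]
    h, w = img.shape
    centre = img[1:-1, 1:-1]
    out = np.zeros((h - 2, w - 2), dtype=np.uint8)
    for bit, (dy, dx) in enumerate(offs):
        neigh = img[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx]
        out |= (neigh >= centre).astype(np.uint8) << bit
    return out
```

A histogram of these byte codes over an image (or image patch) then serves as the texture feature vector fed to the shallow classifier.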

An integrated strategy based on radiomics and quantum machine learning: diagnosis and clinical interpretation of pulmonary ground-glass nodules.

Huang X, Xu F, Zhu W, Yao L, He J, Su J, Zhao W, Hu H

PubMed | Jul 11, 2025
Accurate classification of pulmonary pure ground-glass nodules (pGGNs) is essential for distinguishing invasive adenocarcinoma (IVA) from adenocarcinoma in situ (AIS) and minimally invasive adenocarcinoma (MIA), which significantly influences treatment decisions. This study aims to develop a high-precision integrated strategy combining radiomics-based feature extraction, quantum machine learning (QML) models, and SHapley Additive exPlanations (SHAP) analysis to improve diagnostic accuracy and interpretability in pGGN classification. A total of 322 pGGNs from 275 patients were retrospectively analyzed. The CT images were randomly divided into training and testing cohorts (80:20), with radiomic features extracted from the training cohort. Three QML models - Quantum Support Vector Classifier (QSVC), Pegasos QSVC, and Quantum Neural Network (QNN) - were developed and compared with a classical Support Vector Machine (SVM). SHAP analysis was applied to interpret the contribution of radiomic features to the models' predictions. All three QML models outperformed the classical SVM, with the QNN model achieving the highest classification metrics: accuracy of 89.23% (95% CI: 81.54%-95.38%), sensitivity of 96.55% (95% CI: 89.66%-100.00%), specificity of 83.33% (95% CI: 69.44%-94.44%), and area under the curve (AUC) of 0.937 (95% CI: 0.871-0.983). SHAP analysis identified Low Gray Level Run Emphasis (LGLRE), Gray Level Non-uniformity (GLN), and Size Zone Non-uniformity (SZN) as the most critical features influencing classification. This study demonstrates that the proposed integrated strategy, combining radiomics, QML models, and SHAP analysis, significantly enhances the accuracy and interpretability of pGGN classification, particularly on small-sample datasets. It offers a promising tool for early, non-invasive lung cancer diagnosis and helps clinicians make more informed treatment decisions.
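The SHAP attributions reported above have a closed form in the simplest case: for a linear model with independent features, feature i's exact Shapley value is w_i (x_i - E[x_i]), and the attributions sum to the prediction minus the base value. A sketch of that special case (the study's QML models require the general, sampling-based SHAP machinery instead):

```python
import numpy as np

def linear_shap(w, x, X_background):
    # Exact Shapley values for a linear model f(x) = w @ x + b under
    # feature independence: phi_i = w_i * (x_i - E[x_i]).
    # Local accuracy: phi.sum() + f(E[x]) == f(x).
    mu = X_background.mean(axis=0)
    return w * (x - mu)
```

This additivity (attributions summing exactly to the prediction offset from the baseline) is the property that makes rankings like LGLRE > GLN > SZN directly comparable across features.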

Interpretable MRI Subregional Radiomics-Deep Learning Model for Preoperative Lymphovascular Invasion Prediction in Rectal Cancer: A Dual-Center Study.

Huang T, Zeng Y, Jiang R, Zhou Q, Wu G, Zhong J

PubMed | Jul 11, 2025
To develop a fusion model based on explainable machine learning, combining multiparametric MRI subregional radiomics and deep learning, to preoperatively predict lymphovascular invasion (LVI) status in rectal cancer (RC). We collected data from RC patients with histopathological confirmation from two medical centers, with 301 patients used as the training set and 75 patients as the external validation set. Using K-means clustering, we divided the tumor areas into multiple subregions and extracted key radiomic features from them. We also employed a Vision Transformer (ViT) deep learning model to extract features. These features were integrated to construct the SubViT model. To better understand the model's decision-making process, we used the SHapley Additive exPlanations (SHAP) tool to evaluate its interpretability. Finally, we comprehensively assessed the performance of the SubViT model through receiver operating characteristic (ROC) curves, decision curve analysis (DCA), and the DeLong test, comparing it with other models. The SubViT model demonstrated outstanding predictive performance in the training set, achieving an area under the curve (AUC) of 0.934 (95% confidence interval: 0.9074 to 0.9603). It also performed well in the external validation set, with an AUC of 0.884 (95% confidence interval: 0.8055 to 0.9616), outperforming both the subregional radiomics and imaging-based models. Furthermore, DCA indicated that the SubViT model provides higher clinical utility than the other models. As an advanced composite model, SubViT demonstrated its efficiency in the non-invasive assessment of LVI in rectal cancer.
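The subregioning step above can be illustrated with a bare-bones K-means on voxel intensities (Lloyd's algorithm; the abstract does not specify the feature space or implementation beyond "K-means", so this one-dimensional version is a generic sketch):

```python
import numpy as np

def kmeans_1d(values, k, iters=50, seed=0):
    # Lloyd's algorithm on scalar voxel intensities: repeatedly assign each
    # voxel to its nearest centroid, then move centroids to cluster means.
    # The resulting labels define k habitat-style tumor subregions.
    rng = np.random.default_rng(seed)
    centroids = rng.choice(values, size=k, replace=False)
    for _ in range(iters):
        labels = np.argmin(np.abs(values[:, None] - centroids[None, :]), axis=1)
        for j in range(k):
            if np.any(labels == j):  # guard against an emptied cluster
                centroids[j] = values[labels == j].mean()
    return labels, centroids
```

Radiomic features are then computed per subregion label rather than over the whole tumor mask, which is what distinguishes subregional from conventional radiomics.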
