Page 116 of 220 · 2194 results

SPCF-YOLO: An Efficient Feature Optimization Model for Real-Time Lung Nodule Detection.

Ren Y, Shi C, Zhu D, Zhou C

PubMed · Jun 2 2025
Accurate pulmonary nodule detection in CT imaging remains challenging due to fragmented feature integration in conventional deep learning models. This paper proposes SPCF-YOLO, a real-time detection framework that synergizes hierarchical feature fusion with anatomical context modeling. First, the space-to-depth convolution (SPDConv) module preserves fine-grained features in low-resolution images through spatial dimension reorganization. Second, the shared feature pyramid convolution (SFPConv) module dynamically extracts multi-scale contextual information using multi-dilation-rate convolutional layers. Third, a small object detection layer is incorporated to improve sensitivity to small nodules, in combination with an improved pyramid squeeze attention (PSA) module and an improved contextual transformer (CoTB) module, which enhance global channel dependencies and reduce feature loss. The model achieves 82.8% mean average precision (mAP) and 82.9% F1 score on LUNA16 at 151 frames per second (representing improvements of 17.5% and 82.9% over YOLOv8 respectively), demonstrating real-time clinical viability. Cross-modality validation on SIIM-COVID-19 shows a 1.5% improvement, confirming robust generalization.
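Detection metrics such as the mAP reported above are built on intersection-over-union (IoU) matching between predicted and ground-truth boxes. As a minimal, paper-independent sketch (the `(x1, y1, x2, y2)` box format is an assumption for illustration, not a detail from the abstract):

```python
def iou(box_a, box_b):
    """Intersection-over-union of two axis-aligned boxes given as (x1, y1, x2, y2)."""
    # Corners of the intersection rectangle.
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0
```

A prediction is typically counted as a true positive when its IoU with a ground-truth box exceeds a threshold (commonly 0.5); mAP then averages precision over recall levels and classes.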

MRI radiomics based on paraspinal muscle for predicting postoperative outcomes in lumbar degenerative spondylolisthesis.

Yu Y, Xu W, Li X, Zeng X, Su Z, Wang Q, Li S, Liu C, Wang Z, Wang S, Liao L, Zhang J

PubMed · Jun 2 2025
This study aims to develop a paraspinal muscle-based radiomics model using a machine learning approach and to assess its utility in predicting postoperative outcomes among patients with lumbar degenerative spondylolisthesis (LDS). This retrospective study included a total of 155 patients diagnosed with LDS who underwent single-level posterior lumbar interbody fusion (PLIF) surgery between January 2021 and October 2023. The patients were divided into training and test cohorts in a ratio of 8:2. Radiomics features were extracted from axial T2-weighted lumbar MRI, and seven machine learning models were developed after selecting the most relevant radiomic features using the t-test, Pearson correlation, and LASSO. A combined model was then created by integrating both clinical and radiomics features. The performance of the models was evaluated through receiver operating characteristic (ROC) analysis, sensitivity, and specificity, while their clinical utility was assessed using the area under the curve (AUC) and decision curve analysis (DCA). The logistic regression (LR) model demonstrated robust predictive performance compared with the other machine learning models evaluated in the study. The combined model, integrating both clinical and radiomic features, exhibited an AUC of 0.822 (95% CI, 0.761-0.883) in the training cohort and 0.826 (95% CI, 0.766-0.886) in the test cohort, indicating substantial predictive capability. Moreover, the combined model showed superior clinical benefit and increased classification accuracy compared with the radiomics model alone. These findings suggest that the combined model holds promise for accurately predicting postoperative outcomes in patients with LDS and could be valuable in guiding treatment strategies and informing clinical decisions.
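One step in pipelines like the one above is pruning redundant radiomic features by pairwise Pearson correlation before LASSO. A stdlib-only sketch of that filtering step (the 0.9 threshold and greedy keep-first strategy are illustrative assumptions; the study does not report its cutoffs):

```python
import math


def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length numeric sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)


def filter_redundant(features, threshold=0.9):
    """Greedily keep features not highly correlated (|r| > threshold)
    with any feature already kept. `features` maps name -> value list."""
    kept = []
    for name, values in features.items():
        if all(abs(pearson_r(values, features[k])) <= threshold for k in kept):
            kept.append(name)
    return kept
```

In practice this runs after a univariate filter (e.g. the t-test) and before LASSO, which handles the remaining multicollinearity with L1 shrinkage.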

Slim UNETR++: A lightweight 3D medical image segmentation network for medical image analysis.

Jin J, Yang S, Tong J, Zhang K, Wang Z

PubMed · Jun 2 2025
Convolutional neural network (CNN) models, such as U-Net, V-Net, and DeepLab, have achieved remarkable results across various medical imaging modalities, including ultrasound. Additionally, hybrid Transformer-based segmentation methods have shown great potential in medical image analysis. Despite the breakthroughs in feature extraction enabled by self-attention mechanisms, these methods are computationally intensive, especially for three-dimensional medical imaging, posing significant challenges to graphics processing unit (GPU) hardware. Consequently, the demand for lightweight models is increasing. To address this issue, we designed a high-accuracy yet lightweight model that combines the strengths of CNNs and Transformers. We introduce Slim UNEt TRansformers++ (Slim UNETR++), which builds upon Slim UNETR by incorporating Medical ConvNeXt (MedNeXt), Spatial-Channel Attention (SCA), and Efficient Paired-Attention (EPA) modules. This integration leverages the advantages of both CNN and Transformer architectures to enhance model accuracy. The core component of Slim UNETR++ is the Slim UNETR++ block, which facilitates efficient information exchange through a sparse self-attention mechanism and low-cost representation aggregation. We also introduce throughput as a performance metric to quantify data processing speed. Experimental results demonstrate that Slim UNETR++ outperforms other models in terms of accuracy and model size. On the BraTS2021 dataset, Slim UNETR++ achieved a Dice accuracy of 93.12% and a 95% Hausdorff distance (HD95) of 4.23 mm, significantly surpassing mainstream methods such as Swin UNETR.
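The Dice accuracy quoted above is the standard overlap measure for segmentation masks. A minimal sketch over flattened binary masks (the flat 0/1 list representation is an assumption for illustration; real pipelines operate on 3D voxel arrays):

```python
def dice(mask_a, mask_b):
    """Dice similarity coefficient between two binary masks
    given as flat 0/1 sequences of equal length."""
    inter = sum(a * b for a, b in zip(mask_a, mask_b))
    total = sum(mask_a) + sum(mask_b)
    # Convention: two empty masks are a perfect match.
    return 2 * inter / total if total else 1.0
```

HD95, by contrast, is a boundary-distance metric (the 95th percentile of surface-to-surface distances), so the two numbers capture complementary aspects of segmentation quality.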

Robust Uncertainty-Informed Glaucoma Classification Under Data Shift.

Rashidisabet H, Chan RVP, Leiderman YI, Vajaranant TS, Yi D

PubMed · Jun 2 2025
Standard deep learning (DL) models often suffer significant performance degradation on out-of-distribution (OOD) data, where test data differs from training data, a common challenge in medical imaging due to real-world variations. We propose a unified self-censorship framework as an alternative to the standard DL models for glaucoma classification using deep evidential uncertainty quantification. Our approach detects OOD samples at both the dataset and image levels. Dataset-level self-censorship enables users to accept or reject predictions for an entire new dataset based on model uncertainty, whereas image-level self-censorship refrains from making predictions on individual OOD images rather than risking incorrect classifications. We validated our approach across diverse datasets. Our dataset-level self-censorship method outperforms the standard DL model in OOD detection, achieving an average 11.93% higher area under the curve (AUC) across 14 OOD datasets. Similarly, our image-level self-censorship model improves glaucoma classification accuracy by an average of 17.22% across 4 external glaucoma datasets against baselines while censoring 28.25% more data. Our approach addresses the challenge of generalization in standard DL models for glaucoma classification across diverse datasets by selectively withholding predictions when the model is uncertain. This method reduces misclassification errors compared to state-of-the-art baselines, particularly for OOD cases. This study introduces a tunable framework that explores the trade-off between prediction accuracy and data retention in glaucoma prediction. By managing uncertainty in model outputs, the approach lays a foundation for future decision support tools aimed at improving the reliability of automated glaucoma diagnosis.
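The image-level self-censorship described above is an instance of selective prediction: withhold outputs whose uncertainty exceeds a threshold and report accuracy only on the retained subset. A minimal sketch (the threshold and data are illustrative; the paper's evidential uncertainty estimator is not reproduced here):

```python
def censor_predictions(preds, uncertainties, labels, threshold):
    """Withhold predictions whose uncertainty exceeds `threshold`; return
    coverage (fraction of cases retained) and accuracy on the retained subset."""
    retained = [(p, y) for p, u, y in zip(preds, uncertainties, labels)
                if u <= threshold]
    coverage = len(retained) / len(preds)
    # Accuracy is undefined when everything is censored.
    accuracy = (sum(p == y for p, y in retained) / len(retained)) if retained else None
    return coverage, accuracy
```

Sweeping the threshold traces out the accuracy-coverage trade-off the abstract describes: stricter censorship retains less data but tends to raise accuracy on what remains.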

Impact of Optic Nerve Tortuosity, Globe Proptosis, and Size on Retinal Ganglion Cell Thickness Across General, Glaucoma, and Myopic Populations.

Chiang CYN, Wang X, Gardiner SK, Buist M, Girard MJA

PubMed · Jun 2 2025
The purpose of this study was to investigate the impact of optic nerve tortuosity (ONT), and of the interaction of globe proptosis and size, on retinal ganglion cell (RGC) thickness, using retinal nerve fiber layer (RNFL) thickness, across general, glaucoma, and myopic populations. This study analyzed 17,940 eyes from the UK Biobank cohort (ID 76442), including 72 glaucomatous and 2475 myopic eyes. Artificial intelligence models were developed to derive RNFL thickness corrected for ocular magnification from 3D optical coherence tomography scans and orbit features from 3D magnetic resonance images, including ONT, globe proptosis, axial length, and a novel feature: the interzygomatic line-to-posterior pole (ILPP) distance, a composite marker of globe proptosis and size. Generalized estimating equation (GEE) models evaluated associations between orbital and retinal features. RNFL thickness was positively correlated with ONT and ILPP distance (r = 0.065, P < 0.001 and r = 0.206, P < 0.001, respectively) in the general population. Similar trends were observed in glaucoma (r = 0.040, P = 0.74 and r = 0.224, P = 0.059), although these did not reach statistical significance, and in myopia (r = 0.069, P < 0.001 and r = 0.100, P < 0.001). GEE models revealed that straighter optic nerves and shorter ILPP distance were predictive of thinner RNFL in all populations. Straighter optic nerves and decreased ILPP distance could cause RNFL thinning, possibly due to greater traction forces. ILPP distance emerged as a potential biomarker of axonal health. These findings underscore the importance of orbit structures in RGC axonal health and warrant further research into orbit biomechanics.

A Deep Learning-Based Artificial Intelligence Model Assisting Thyroid Nodule Diagnosis and Management: Pilot Results for Evaluating Thyroid Malignancy in Pediatric Cohorts.

Ha EJ, Lee JH, Mak N, Duh AK, Tong E, Yeom KW, Meister KD

PubMed · Jun 2 2025
<b><i>Purpose:</i></b> Artificial intelligence (AI) models have shown promise in predicting malignant thyroid nodules in adults; however, research on deep learning (DL) for pediatric cases is limited. We evaluated the applicability of a DL-based model for assessing thyroid nodules in children. <b><i>Methods:</i></b> We retrospectively identified two pediatric cohorts (<i>n</i> = 128; mean age 15.5 ± 2.4 years; 103 girls) who had thyroid nodule ultrasonography (US) with histological confirmation at two institutions. The AI-Thyroid DL model, originally trained on adult data, was tested on pediatric nodules in three scenarios: axial US images, longitudinal US images, and both. We conducted a subgroup analysis based on the two pediatric cohorts and age groups (≥14 years vs. <14 years) and compared the model's performance with radiologist interpretations using the Thyroid Imaging Reporting and Data System (TIRADS). <b><i>Results:</i></b> Of 156 nodules analyzed, 47 (30.1%) were malignant. AI-Thyroid demonstrated area under the receiver operating characteristic curve (AUROC), sensitivity, and specificity values of 0.913-0.929, 78.7-89.4%, and 79.8-91.7%, respectively. The AUROC values did not significantly differ across the image planes (all <i>p</i> > 0.05) or between the two pediatric cohorts (<i>p</i> = 0.804). No significant differences were observed between age groups in terms of sensitivity and specificity (all <i>p</i> > 0.05), while the AUROC values were higher for patients aged <14 years than for those aged ≥14 years (all <i>p</i> < 0.01). AI-Thyroid yielded the highest AUROC values, followed by ACR-TIRADS and K-TIRADS (<i>p</i> = 0.016 and <i>p</i> < 0.001, respectively). <b><i>Conclusion:</i></b> AI-Thyroid demonstrated high performance in diagnosing pediatric thyroid cancer. Future research should focus on optimizing AI-Thyroid for pediatric use and exploring its role alongside tissue sampling in clinical practice.
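The AUROC values reported above can be computed without fitting any curve: AUROC equals the probability that a randomly chosen positive case outscores a randomly chosen negative one (the Mann-Whitney U interpretation). A stdlib-only sketch (toy scores and labels, not data from the study):

```python
def auroc(scores, labels):
    """AUROC via the Mann-Whitney statistic: the fraction of
    (positive, negative) pairs in which the positive outscores
    the negative, counting ties as half a win."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))
```

This pairwise form is O(n²) but makes the metric's meaning transparent; production code would use a rank-based O(n log n) implementation.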

Harnessing Artificial Intelligence to Predict Spontaneous Stone Passage: Development and Testing of a Machine Learning-Based Calculator.

Gupta K, Ricapito A, Lundon D, Khargi R, Connors C, Yaghoubian AJ, Gallante B, Atallah WM, Gupta M

PubMed · Jun 2 2025
<b><i>Objective:</i></b> We sought to use artificial intelligence (AI) to develop and test calculators that predict spontaneous stone passage (SSP) using radiographical and clinical data. <b><i>Methods:</i></b> Consecutive patients with solitary ureteral stones ≤10 mm on CT were prospectively enrolled and managed according to American Urological Association guidelines. The first 70% of patients formed the "training group" used to develop the calculators; the latter 30% formed the "testing group" used to externally validate them. Exclusion criteria included contraindication to a trial of SSP, ureteral stent, and anatomical anomaly. Demographic, clinical, and radiographical data were obtained and fed into machine learning (ML) platforms. SSP was defined as passage of the stone without intervention. Calculators were derived from the data using multivariate logistic regression. Discrimination, calibration, and clinical utility/net benefit of the developed models were assessed in the validation cohort. Receiver operating characteristic curves were constructed to measure their discriminative ability. <b><i>Results:</i></b> Fifty-one percent of 131 "training" patients spontaneously passed their stones. Passed stones were significantly closer to the bladder (8.6 <i>vs</i> 11.8 cm, p = 0.01) and smaller in length, width, and height. Two ML calculators were developed, one using supervised machine learning (SML) and the other unsupervised machine learning (USML), and compared with an existing tool, the Multi-centre Cohort Study Evaluating the Role of Inflammatory Markers In Patients Presenting with Acute Ureteric Colic (MIMIC) calculator. The SML calculator included maximum stone width (MSW), ureteral diameter above the stone (UDA), and distance from the ureterovesical junction to the bottom of the stone, and had an area under the curve (AUC) of 0.737 upon external validation in 58 "test" patients. Parameters selected by USML included MSW, UDA, and use of an anticholinergic, and it had an AUC of 0.706. The MIMIC calculator's AUC was 0.588 (0.489-0.686). <b><i>Conclusion:</i></b> We used AI to develop calculators that outperformed an existing tool and can help providers and patients make better-informed decisions for the treatment of ureteral stones.
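A logistic-regression calculator like the SML model above reduces, at inference time, to a weighted sum of predictors passed through a sigmoid. The coefficients below are hypothetical placeholders chosen only to illustrate the shape of such a calculator; the published weights are not given in the abstract:

```python
import math

# Hypothetical coefficients for illustration only; signs reflect the
# abstract's finding that larger/more proximal stones pass less often.
COEFS = {"intercept": 1.0, "msw_mm": -0.45, "uda_mm": -0.30, "uvj_dist_cm": -0.15}


def ssp_probability(msw_mm, uda_mm, uvj_dist_cm):
    """Logistic-regression-style estimate of spontaneous stone passage
    probability from maximum stone width, ureteral diameter above the
    stone, and distance from the ureterovesical junction to the stone."""
    z = (COEFS["intercept"]
         + COEFS["msw_mm"] * msw_mm
         + COEFS["uda_mm"] * uda_mm
         + COEFS["uvj_dist_cm"] * uvj_dist_cm)
    return 1 / (1 + math.exp(-z))
```

With real fitted coefficients, the same two lines of arithmetic are all a deployed calculator needs, which is why such models translate readily into bedside tools.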

Multi-organ metabolic profiling with [<sup>18</sup>F]FDG PET/CT predicts pathological response to neoadjuvant immunochemotherapy in resectable NSCLC.

Ma Q, Yang J, Guo X, Mu W, Tang Y, Li J, Hu S

PubMed · Jun 2 2025
To develop and validate a novel nomogram combining multi-organ PET metabolic metrics for major pathological response (MPR) prediction in resectable non-small cell lung cancer (rNSCLC) patients receiving neoadjuvant immunochemotherapy. This retrospective cohort included rNSCLC patients who underwent baseline [<sup>18</sup>F]FDG PET/CT prior to neoadjuvant immunochemotherapy at Xiangya Hospital from April 2020 to April 2024. Patients were randomly stratified into training (70%) and validation (30%) cohorts. Using deep learning-based automated segmentation, we quantified metabolic parameters (SUV<sub>mean</sub>, SUV<sub>max</sub>, SUV<sub>peak</sub>, MTV, TLG) and their ratios to the liver metabolic parameters for the primary tumor and nine key organs. Feature selection employed a tripartite approach: univariate analysis, LASSO regression, and random forest optimization. The final multivariable model was translated into a clinically interpretable nomogram, with validation assessing discrimination, calibration, and clinical utility. Among 115 patients (MPR rate: 63.5%, n = 73), five metabolic parameters emerged as predictive biomarkers for MPR: Spleen_SUV<sub>mean</sub>, Colon_SUV<sub>peak</sub>, Spine_TLG, Lesion_TLG, and the spleen-to-liver SUV<sub>max</sub> ratio. The nomogram demonstrated consistent performance across cohorts (training AUC = 0.78 [95%CI 0.67-0.88]; validation AUC = 0.78 [95%CI 0.62-0.94]), with robust calibration and enhanced clinical net benefit on decision curve analysis. Compared to tumor-only parameters, the multi-organ model showed higher specificity (100% vs. 92%) and positive predictive value (100% vs. 90%) in the validation set, maintaining 76% overall accuracy. This first-reported multi-organ metabolic nomogram noninvasively predicts MPR in rNSCLC patients receiving neoadjuvant immunochemotherapy, outperforming conventional tumor-centric approaches. By quantifying systemic host-tumor metabolic crosstalk, this tool could help guide personalized therapeutic decisions while mitigating treatment-related risks, representing a paradigm shift towards precision immuno-oncology management.

Fine-tuned large language model for extracting newly identified acute brain infarcts based on computed tomography or magnetic resonance imaging reports.

Fujita N, Yasaka K, Kiryu S, Abe O

PubMed · Jun 2 2025
This study aimed to develop an automated early warning system using a large language model (LLM) to identify acute to subacute brain infarction from free-text computed tomography (CT) or magnetic resonance imaging (MRI) radiology reports. In this retrospective study, 5,573, 1,883, and 834 patients were included in the training (mean age, 67.5 ± 17.2 years; 2,831 males), validation (mean age, 61.5 ± 18.3 years; 994 males), and test (mean age, 66.5 ± 16.1 years; 488 males) datasets. An LLM (Japanese Bidirectional Encoder Representations from Transformers model) was fine-tuned to classify the CT and MRI reports into three groups (group 0, newly identified acute to subacute infarction; group 1, known acute to subacute infarction or old infarction; group 2, without infarction). The training and validation processes were repeated 15 times, and the best-performing model on the validation dataset was selected to further evaluate its performance on the test dataset. The best fine-tuned model exhibited sensitivities of 0.891, 0.905, and 0.959 for groups 0, 1, and 2, respectively, in the test dataset. The macrosensitivity (the average of sensitivity for all groups) and accuracy were 0.918 and 0.923, respectively. The model's performance in extracting newly identified acute brain infarcts was high, with an area under the receiver operating characteristic curve of 0.979 (95% confidence interval, 0.956-1.000). The average prediction time was 0.115 ± 0.037 s per patient. A fine-tuned LLM could extract newly identified acute to subacute brain infarcts based on CT or MRI findings with high performance.
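The macrosensitivity reported above is simply the unweighted mean of the per-group sensitivities, computable directly from a confusion matrix. A minimal sketch (the toy matrix is illustrative, not the study's results):

```python
def per_class_sensitivity(confusion):
    """Sensitivity (recall) per class from a square confusion matrix
    indexed as confusion[true_class][predicted_class]."""
    return [row[i] / sum(row) for i, row in enumerate(confusion)]


def macrosensitivity(confusion):
    """Unweighted mean of per-class sensitivities (the paper's metric)."""
    sens = per_class_sensitivity(confusion)
    return sum(sens) / len(sens)
```

Because each class contributes equally regardless of its prevalence, macrosensitivity avoids letting the large "without infarction" group dominate the headline number.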

Accelerating 3D radial MPnRAGE using a self-supervised deep factor model.

Chen Y, Kecskemeti SR, Holmes JH, Corum CA, Yaghoobi N, Magnotta VA, Jacob M

PubMed · Jun 2 2025
To develop a self-supervised and memory-efficient deep learning image reconstruction method for 4D non-Cartesian MRI with high resolution and a large parametric dimension. The deep factor model (DFM) represents a parametric series of 3D multicontrast images using a neural network conditioned on the inversion time, with efficient zero-filled reconstructions as input estimates. The model parameters are learned in a single-shot learning (SSL) fashion from the k-space data of each acquisition. A compatible transfer learning (TL) approach using previously acquired data is also developed to reduce reconstruction time. The DFM is compared to subspace methods with different regularization strategies in a series of phantom and in vivo experiments using the MPnRAGE acquisition for multicontrast T<sub>1</sub> imaging and quantitative T<sub>1</sub> estimation. DFM-SSL improved image quality and reduced bias and variance in quantitative T<sub>1</sub> estimates in both phantom and in vivo studies, outperforming all other tested methods. DFM-TL reduced the inference time while maintaining performance comparable to DFM-SSL and outperforming subspace methods with multiple regularization techniques. The proposed DFM offers a superior representation of the multicontrast images compared with subspace models, especially in the highly accelerated MPnRAGE setting. The self-supervised training is ideal for methods with both high resolution and a large parametric dimension, where training neural networks can become computationally demanding without a dedicated high-end GPU array.
