Jin J, Yang S, Tong J, Zhang K, Wang Z

PubMed · Jun 2, 2025
Convolutional neural network (CNN) models, such as U-Net, V-Net, and DeepLab, have achieved remarkable results across various medical imaging modalities, including ultrasound. Additionally, hybrid Transformer-based segmentation methods have shown great potential in medical image analysis. Despite the breakthroughs in feature extraction enabled by self-attention mechanisms, these methods are computationally intensive, especially for three-dimensional medical imaging, posing significant challenges to graphics processing unit (GPU) hardware. Consequently, the demand for lightweight models is increasing. To address this issue, we designed a high-accuracy yet lightweight model that combines the strengths of CNNs and Transformers. We introduce Slim UNEt TRansformers++ (Slim UNETR++), which builds upon Slim UNETR by incorporating Medical ConvNeXt (MedNeXt), Spatial-Channel Attention (SCA), and Efficient Paired-Attention (EPA) modules. This integration leverages the advantages of both CNN and Transformer architectures to enhance model accuracy. The core component of Slim UNETR++ is the Slim UNETR++ block, which facilitates efficient information exchange through a sparse self-attention mechanism and low-cost representation aggregation. We also introduced throughput as a performance metric to quantify data processing speed. Experimental results demonstrate that Slim UNETR++ outperforms other models in terms of accuracy and model size. On the BraTS2021 dataset, Slim UNETR++ achieved a Dice accuracy of 93.12% and a 95% Hausdorff distance (HD95) of 4.23 mm, significantly surpassing mainstream methods such as Swin UNETR.
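The Dice accuracy reported above measures the overlap between a predicted and a ground-truth segmentation mask. As a minimal, dependency-free sketch (not the authors' code), the metric can be computed on flat binary masks like this:

```python
def dice_score(pred, target):
    """Dice coefficient for flat binary masks (sequences of 0/1)."""
    intersection = sum(p * t for p, t in zip(pred, target))
    total = sum(pred) + sum(target)
    if total == 0:
        return 1.0  # both masks empty: perfect agreement by convention
    return 2.0 * intersection / total

# Toy 8-voxel masks: 3 overlapping foreground voxels out of 4 + 4.
pred   = [0, 1, 1, 1, 0, 0, 1, 0]
target = [0, 1, 1, 0, 0, 1, 1, 0]
print(dice_score(pred, target))  # 0.75
```

HD95 additionally measures boundary distance and is usually computed with a dedicated medical-imaging library rather than by hand.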

Rashidisabet H, Chan RVP, Leiderman YI, Vajaranant TS, Yi D

PubMed · Jun 2, 2025
Standard deep learning (DL) models often suffer significant performance degradation on out-of-distribution (OOD) data, where test data differs from training data, a common challenge in medical imaging due to real-world variations. We propose a unified self-censorship framework as an alternative to the standard DL models for glaucoma classification using deep evidential uncertainty quantification. Our approach detects OOD samples at both the dataset and image levels. Dataset-level self-censorship enables users to accept or reject predictions for an entire new dataset based on model uncertainty, whereas image-level self-censorship refrains from making predictions on individual OOD images rather than risking incorrect classifications. We validated our approach across diverse datasets. Our dataset-level self-censorship method outperforms the standard DL model in OOD detection, achieving an average 11.93% higher area under the curve (AUC) across 14 OOD datasets. Similarly, our image-level self-censorship model improves glaucoma classification accuracy by an average of 17.22% across 4 external glaucoma datasets against baselines while censoring 28.25% more data. Our approach addresses the challenge of generalization in standard DL models for glaucoma classification across diverse datasets by selectively withholding predictions when the model is uncertain. This method reduces misclassification errors compared to state-of-the-art baselines, particularly for OOD cases. This study introduces a tunable framework that explores the trade-off between prediction accuracy and data retention in glaucoma prediction. By managing uncertainty in model outputs, the approach lays a foundation for future decision support tools aimed at improving the reliability of automated glaucoma diagnosis.
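The image-level self-censorship idea, abstaining instead of predicting when the model is uncertain, can be sketched as a reject-option classifier. This toy version is an assumption for illustration: it uses maximum softmax probability as the confidence score rather than the paper's deep evidential uncertainty, and all probabilities and labels are hypothetical.

```python
def censored_predict(probs, threshold):
    """Return the predicted class index, or None (abstain) when the
    model's confidence falls below the censorship threshold."""
    conf = max(probs)
    if conf < threshold:
        return None
    return probs.index(conf)

def evaluate(prob_list, labels, threshold):
    """Accuracy on retained predictions plus the fraction of data retained."""
    kept = correct = 0
    for probs, y in zip(prob_list, labels):
        pred = censored_predict(probs, threshold)
        if pred is None:
            continue  # censored: no prediction issued for this image
        kept += 1
        correct += (pred == y)
    accuracy = correct / kept if kept else float("nan")
    return accuracy, kept / len(labels)

# Hypothetical class probabilities (class 1 = glaucoma) and true labels.
probs = [[0.9, 0.1], [0.55, 0.45], [0.2, 0.8]]
labels = [0, 1, 1]
acc, retention = evaluate(probs, labels, threshold=0.6)
```

Raising the threshold trades data retention for accuracy on the retained set, which is exactly the tunable trade-off the framework explores.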

Chiang CYN, Wang X, Gardiner SK, Buist M, Girard MJA

PubMed · Jun 2, 2025
The purpose of this study was to investigate the impact of optic nerve tortuosity (ONT), and the interaction of globe proptosis and size on retinal ganglion cell (RGC) thickness, using retinal nerve fiber layer (RNFL) thickness, across general, glaucoma, and myopic populations. This study analyzed 17,940 eyes from the UKBiobank cohort (ID 76442), including 72 glaucomatous and 2475 myopic eyes. Artificial intelligence models were developed to derive RNFL thickness corrected for ocular magnification from 3D optical coherence tomography scans and orbit features from 3D magnetic resonance images, including ONT, globe proptosis, axial length, and a novel feature: the interzygomatic line-to-posterior pole (ILPP) distance - a composite marker of globe proptosis and size. Generalized estimating equation (GEE) models evaluated associations between orbital and retinal features. RNFL thickness was positively correlated with ONT and ILPP distance (r = 0.065, P < 0.001 and r = 0.206, P < 0.001, respectively) in the general population. The same was true for glaucoma (r = 0.040, P = 0.74 and r = 0.224, P = 0.059), and for myopia (r = 0.069, P < 0.001 and r = 0.100, P < 0.001). GEE models revealed that straighter optic nerves and shorter ILPP distance were predictive of thinner RNFL in all populations. Straighter optic nerves and decreased ILPP distance could cause RNFL thinning, possibly due to greater traction forces. ILPP distance emerged as a potential biomarker of axonal health. These findings underscore the importance of orbit structures in RGC axonal health and warrant further research into orbit biomechanics.
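The r values quoted above are Pearson correlation coefficients between orbital and retinal measures. For reference, a dependency-free sketch of the computation on paired measurements (the data here are toy values, not the UK Biobank features):

```python
import math

def pearson_r(x, y):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# A perfectly linear pair correlates at r ≈ 1.0.
print(pearson_r([1, 2, 3, 4], [2, 4, 6, 8]))
```

Note the GEE models in the study go beyond simple correlation by accounting for the within-subject correlation of paired eyes.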

Ha EJ, Lee JH, Mak N, Duh AK, Tong E, Yeom KW, Meister KD

PubMed · Jun 2, 2025
<b><i>Purpose:</i></b> Artificial intelligence (AI) models have shown promise in predicting malignant thyroid nodules in adults; however, research on deep learning (DL) for pediatric cases is limited. We evaluated the applicability of a DL-based model for assessing thyroid nodules in children. <b><i>Methods:</i></b> We retrospectively identified two pediatric cohorts (<i>n</i> = 128; mean age 15.5 ± 2.4 years; 103 girls) who had thyroid nodule ultrasonography (US) with histological confirmation at two institutions. The AI-Thyroid DL model, originally trained on adult data, was tested on pediatric nodules in three scenarios: axial US images, longitudinal US images, and both. We conducted a subgroup analysis based on the two pediatric cohorts and age groups (≥14 years vs. <14 years) and compared the model's performance with radiologist interpretations using the Thyroid Imaging Reporting and Data System (TIRADS). <b><i>Results:</i></b> Of 156 nodules analyzed, 47 (30.1%) were malignant. AI-Thyroid demonstrated area under the receiver operating characteristic curve (AUROC), sensitivity, and specificity values of 0.913-0.929, 78.7-89.4%, and 79.8-91.7%, respectively. The AUROC values did not differ significantly across the image planes (all <i>p</i> > 0.05) or between the two pediatric cohorts (<i>p</i> = 0.804). No significant differences were observed between age groups in sensitivity and specificity (all <i>p</i> > 0.05), while the AUROC values were higher for patients aged <14 years than for those aged ≥14 years (all <i>p</i> < 0.01). AI-Thyroid yielded the highest AUROC values, followed by ACR-TIRADS and K-TIRADS (<i>p</i> = 0.016 and <i>p</i> < 0.001, respectively). <b><i>Conclusion:</i></b> AI-Thyroid demonstrated high performance in diagnosing pediatric thyroid cancer. Future research should focus on optimizing AI-Thyroid for pediatric use and exploring its role alongside tissue sampling in clinical practice.
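The sensitivity and specificity figures above come from a standard 2 × 2 confusion matrix. A minimal sketch with hypothetical binary labels (1 = malignant), not the study's data:

```python
def sens_spec(y_true, y_pred):
    """Sensitivity (TP rate) and specificity (TN rate) for binary labels."""
    tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))
    fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))
    tn = sum(t == 0 and p == 0 for t, p in zip(y_true, y_pred))
    fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))
    return tp / (tp + fn), tn / (tn + fp)

# Five toy nodules: 2 of 3 malignant caught, 1 of 2 benign called benign.
sens, spec = sens_spec([1, 1, 1, 0, 0], [1, 1, 0, 0, 1])
```

Unlike AUROC, both values depend on the operating threshold chosen for the classifier, which is why the abstract reports them as ranges across image planes.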

Gupta K, Ricapito A, Lundon D, Khargi R, Connors C, Yaghoubian AJ, Gallante B, Atallah WM, Gupta M

PubMed · Jun 2, 2025
<b><i>Objective:</i></b> We sought to use artificial intelligence (AI) to develop and test calculators that predict spontaneous stone passage (SSP) from radiographical and clinical data. <b><i>Methods:</i></b> Consecutive patients with solitary ureteral stones ≤10 mm on CT were prospectively enrolled and managed according to American Urological Association guidelines. The first 70% of patients were placed in the "training group" and used to develop the calculators. The latter 30% were enrolled in the "testing group" to externally validate the calculators. Exclusion criteria included contraindication to a trial of SSP, ureteral stent, and anatomical anomaly. Demographic, clinical, and radiographical data were obtained and fed into machine learning (ML) platforms. SSP was defined as passage of the stone without intervention. Calculators were derived from the data using multivariate logistic regression. Discrimination, calibration, and clinical utility/net benefit of the developed models were assessed in the validation cohort. Receiver operating characteristic curves were constructed to measure their discriminative ability. <b><i>Results:</i></b> Fifty-one percent of 131 "training" patients spontaneously passed their stones. Passed stones were significantly closer to the bladder (8.6 <i>vs</i> 11.8 cm, p = 0.01) and smaller in length, width, and height. Two ML calculators were developed, one using supervised machine learning (SML) and the other unsupervised machine learning (USML), and compared with an existing tool, the Multi-centre Cohort Study Evaluating the role of Inflammatory Markers In Patients Presenting with Acute Ureteric Colic (MIMIC) calculator. The SML calculator included maximum stone width (MSW), ureteral diameter above the stone (UDA), and distance from the ureterovesical junction to the bottom of the stone, and had an area under the curve (AUC) of 0.737 upon external validation in 58 "test" patients. Parameters selected by USML included MSW, UDA, and use of an anticholinergic, and it had an AUC of 0.706. The MIMIC calculator's AUC was 0.588 (0.489-0.686). <b><i>Conclusion:</i></b> We used AI to develop calculators that outperformed an existing tool and can help providers and patients make better-informed decisions about the treatment of ureteral stones.
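The AUC values used to compare the calculators can be computed without tracing a ROC curve, via the equivalent rank-sum (Mann-Whitney) formulation: the probability that a randomly chosen passer scores higher than a randomly chosen non-passer. A small, dependency-free sketch with toy scores:

```python
def auc(scores, labels):
    """AUC via the Mann-Whitney formulation; tied scores get half credit."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = 0.0
    for p in pos:
        for n in neg:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(pos) * len(neg))

# Toy calculator outputs: 3 of 4 positive/negative pairs ranked correctly.
print(auc([0.9, 0.2, 0.8, 0.3], [1, 0, 0, 1]))  # 0.75
```

The O(n²) pair loop is fine for cohort-sized data such as the 58-patient test group; large datasets would use a rank-based O(n log n) variant instead.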

Ma Q, Yang J, Guo X, Mu W, Tang Y, Li J, Hu S

PubMed · Jun 2, 2025
To develop and validate a novel nomogram combining multi-organ PET metabolic metrics for major pathological response (MPR) prediction in resectable non-small cell lung cancer (rNSCLC) patients receiving neoadjuvant immunochemotherapy. This retrospective cohort included rNSCLC patients who underwent baseline [<sup>18</sup>F]FDG PET/CT prior to neoadjuvant immunochemotherapy at Xiangya Hospital from April 2020 to April 2024. Patients were randomly stratified into training (70%) and validation (30%) cohorts. Using deep learning-based automated segmentation, we quantified metabolic parameters (SUV<sub>mean</sub>, SUV<sub>max</sub>, SUV<sub>peak</sub>, MTV, TLG) and their ratios to the liver metabolic parameters for the primary tumor and nine key organs. Feature selection employed a tripartite approach: univariate analysis, LASSO regression, and random forest optimization. The final multivariable model was translated into a clinically interpretable nomogram, with validation assessing discrimination, calibration, and clinical utility. Among 115 patients (MPR rate: 63.5%, n = 73), five metabolic parameters emerged as predictive biomarkers for MPR: Spleen_SUV<sub>mean</sub>, Colon_SUV<sub>peak</sub>, Spine_TLG, Lesion_TLG, and the spleen-to-liver SUV<sub>max</sub> ratio. The nomogram demonstrated consistent performance across cohorts (training AUC = 0.78 [95% CI 0.67-0.88]; validation AUC = 0.78 [95% CI 0.62-0.94]), with robust calibration and enhanced clinical net benefit on decision curve analysis. Compared with tumor-only parameters, the multi-organ model showed higher specificity (100% vs. 92%) and positive predictive value (100% vs. 90%) in the validation set, while maintaining 76% overall accuracy. This first-reported multi-organ metabolic nomogram noninvasively predicts MPR in rNSCLC patients receiving neoadjuvant immunochemotherapy, outperforming conventional tumor-centric approaches. By quantifying systemic host-tumor metabolic crosstalk, this tool could help guide personalized therapeutic decisions while mitigating treatment-related risks, representing a paradigm shift toward precision immuno-oncology management.
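The organ-to-liver ratio features described above (e.g., the spleen-to-liver SUV<sub>max</sub> ratio) reduce to element-wise division of per-organ metric dictionaries. A tiny sketch with hypothetical SUV values, purely for illustration:

```python
def liver_ratios(organ_suv, liver_suv):
    """Organ-to-liver ratio for every metric present in both dictionaries."""
    return {k: organ_suv[k] / liver_suv[k] for k in organ_suv if k in liver_suv}

# Hypothetical spleen and liver metrics from automated segmentation.
spleen = {"SUVmax": 8.0, "SUVmean": 4.0}
liver = {"SUVmax": 2.0, "SUVmean": 2.0}
print(liver_ratios(spleen, liver))  # {'SUVmax': 4.0, 'SUVmean': 2.0}
```

Normalizing to the liver in this way is a common strategy for reducing inter-scan variability in PET uptake values.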

Fujita N, Yasaka K, Kiryu S, Abe O

PubMed · Jun 2, 2025
This study aimed to develop an automated early warning system using a large language model (LLM) to identify acute to subacute brain infarction from free-text computed tomography (CT) or magnetic resonance imaging (MRI) radiology reports. In this retrospective study, 5,573, 1,883, and 834 patients were included in the training (mean age, 67.5 ± 17.2 years; 2,831 males), validation (mean age, 61.5 ± 18.3 years; 994 males), and test (mean age, 66.5 ± 16.1 years; 488 males) datasets. An LLM (Japanese Bidirectional Encoder Representations from Transformers model) was fine-tuned to classify the CT and MRI reports into three groups (group 0, newly identified acute to subacute infarction; group 1, known acute to subacute infarction or old infarction; group 2, without infarction). The training and validation processes were repeated 15 times, and the best-performing model on the validation dataset was selected to further evaluate its performance on the test dataset. The best fine-tuned model exhibited sensitivities of 0.891, 0.905, and 0.959 for groups 0, 1, and 2, respectively, in the test dataset. The macrosensitivity (the average of sensitivity for all groups) and accuracy were 0.918 and 0.923, respectively. The model's performance in extracting newly identified acute brain infarcts was high, with an area under the receiver operating characteristic curve of 0.979 (95% confidence interval, 0.956-1.000). The average prediction time was 0.115 ± 0.037 s per patient. A fine-tuned LLM could extract newly identified acute to subacute brain infarcts based on CT or MRI findings with high performance.
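Macrosensitivity, defined in the abstract as the average of the per-group sensitivities, can be checked in a few lines. The three-class labels below are hypothetical, standing in for groups 0-2 of the report classifier:

```python
def per_class_sensitivity(y_true, y_pred, classes):
    """Recall (sensitivity) for each class on flat label sequences."""
    sens = {}
    for c in classes:
        tp = sum(1 for t, p in zip(y_true, y_pred) if t == c and p == c)
        total = sum(1 for t in y_true if t == c)
        sens[c] = tp / total
    return sens

def macrosensitivity(y_true, y_pred, classes):
    """Unweighted mean of per-class sensitivities (macro-averaged recall)."""
    s = per_class_sensitivity(y_true, y_pred, classes)
    return sum(s.values()) / len(classes)

# Toy report labels: class 0 and 2 each half right, class 1 fully right.
y_true = [0, 0, 1, 1, 2, 2]
y_pred = [0, 1, 1, 1, 2, 0]
print(macrosensitivity(y_true, y_pred, [0, 1, 2]))
```

Unlike plain accuracy, the macro average weights each class equally, which matters when the "newly identified infarct" group is much rarer than the "no infarct" group.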

Chen Y, Kecskemeti SR, Holmes JH, Corum CA, Yaghoobi N, Magnotta VA, Jacob M

PubMed · Jun 2, 2025
To develop a self-supervised and memory-efficient deep learning image reconstruction method for 4D non-Cartesian MRI with high resolution and a large parametric dimension. The deep factor model (DFM) represents a parametric series of 3D multicontrast images using a neural network conditioned on the inversion time, with efficient zero-filled reconstructions as input estimates. The model parameters are learned in a single-shot learning (SSL) fashion from the k-space data of each acquisition. A compatible transfer learning (TL) approach using previously acquired data is also developed to reduce reconstruction time. The DFM is compared to subspace methods with different regularization strategies in a series of phantom and in vivo experiments using the MPnRAGE acquisition for multicontrast T<sub>1</sub> imaging and quantitative T<sub>1</sub> estimation. DFM-SSL improved the image quality and reduced bias and variance in quantitative T<sub>1</sub> estimates in both phantom and in vivo studies, outperforming all other tested methods. DFM-TL reduced the inference time while maintaining performance comparable to DFM-SSL and outperforming subspace methods with multiple regularization techniques. The proposed DFM offers a superior representation of the multicontrast images compared to subspace models, especially in the highly accelerated MPnRAGE setting. The self-supervised training is ideal for methods with both high resolution and a large parametric dimension, where training neural networks can become computationally demanding without a dedicated high-end GPU array.

Aali A, Arvinte M, Kumar S, Arefeen YI, Tamir JI

PubMed · Jun 2, 2025
To examine the effect of incorporating self-supervised denoising as a pre-processing step for training deep learning (DL) based reconstruction methods on data corrupted by Gaussian noise. K-space data employed for training are typically multi-coil and inherently noisy. Although DL-based reconstruction methods trained on fully sampled data can enable high reconstruction quality, obtaining large, noise-free datasets is impractical. We leverage Generalized Stein's Unbiased Risk Estimate (GSURE) for denoising. We evaluate two DL-based reconstruction methods: Diffusion Probabilistic Models (DPMs) and Model-Based Deep Learning (MoDL). We evaluate the impact of denoising on the performance of these DL-based methods in solving accelerated multi-coil magnetic resonance imaging (MRI) reconstruction. The experiments were carried out on T2-weighted brain and fat-suppressed proton-density knee scans. We observed that self-supervised denoising enhances the quality and efficiency of MRI reconstructions across various scenarios. Specifically, employing denoised images rather than noisy counterparts when training DL networks results in lower normalized root mean squared error (NRMSE), higher structural similarity index measure (SSIM) and peak signal-to-noise ratio (PSNR) across different SNR levels, including 32, 22, and 12 dB for T2-weighted brain data, and 24, 14, and 4 dB for fat-suppressed knee data. We showed that denoising is an essential pre-processing technique capable of improving the efficacy of DL-based MRI reconstruction methods under diverse conditions. By refining the quality of input data, denoising enables training more effective DL networks, potentially bypassing the need for noise-free reference MRI scans.
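NRMSE and PSNR, two of the reconstruction-quality metrics above, are quick to compute on flattened images. A minimal sketch, assuming intensities scaled to a peak value of 1.0 (the toy vectors below are illustrative, not MRI data):

```python
import math

def nrmse(ref, est):
    """Normalized root mean squared error: ||est - ref|| / ||ref||."""
    num = math.sqrt(sum((e - r) ** 2 for r, e in zip(ref, est)))
    den = math.sqrt(sum(r ** 2 for r in ref))
    return num / den

def psnr(ref, est, peak=1.0):
    """Peak signal-to-noise ratio in dB, relative to the given peak value."""
    mse = sum((e - r) ** 2 for r, e in zip(ref, est)) / len(ref)
    return 10 * math.log10(peak ** 2 / mse)

# One of four voxels reconstructed wrong: low PSNR, high NRMSE.
ref = [1.0, 0.0, 1.0, 0.0]
est = [1.0, 0.0, 0.0, 0.0]
print(nrmse(ref, est), psnr(ref, est))
```

Lower NRMSE and higher PSNR both indicate a reconstruction closer to the reference, which is the direction of improvement the denoising pre-processing produced. SSIM is more involved (local means, variances, and covariances over sliding windows) and is typically taken from an image-processing library.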

Qiu J, Chen C, Li M, Hong J, Dong B, Xu S, Lin Y

PubMed · Jun 2, 2025
In medical images, lymph nodes (LNs) have fuzzy boundaries, diverse shapes and sizes, and structures similar to surrounding tissues. To automatically segment uterine LNs from sagittal magnetic resonance imaging (MRI) scans, we combined T2-weighted imaging (T2WI) and diffusion-weighted imaging (DWI) images and tested the final results in our proposed model. This study used a dataset of 158 MRI images from patients with FIGO-staged disease and pathologically confirmed LNs. To improve the robustness of the model, data augmentation was applied to expand the dataset. The training data were manually annotated by two experienced radiologists. The DWI and T2 images were fused and input into U-Net. The efficient channel attention (ECA) module was added to U-Net, and a residual network was added to the encoding-decoding stage, yielding the Efficient Residual U-Net (ERU-Net), which produced the final segmentation results, evaluated by the mean intersection-over-union (mIoU). The experimental results demonstrated that the ERU-Net network showed strong segmentation performance, significantly better than other segmentation networks. The mIoU reached 0.83, and the average pixel accuracy was 0.91. In addition, the precision was 0.90, and the corresponding recall was 0.91. In this study, ERU-Net successfully achieved the segmentation of LNs in uterine MRI images. Compared with other segmentation networks, our network achieved the best segmentation performance on uterine LNs. This provides a valuable reference for doctors developing more effective and efficient treatment plans.
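The mIoU used to evaluate ERU-Net averages the per-class intersection-over-union. On flat label arrays the computation is straightforward (the labels below are toy values, not the study's data):

```python
def miou(y_true, y_pred, classes):
    """Mean intersection-over-union over classes, on flat label sequences."""
    ious = []
    for c in classes:
        inter = sum(1 for t, p in zip(y_true, y_pred) if t == c and p == c)
        union = sum(1 for t, p in zip(y_true, y_pred) if t == c or p == c)
        if union:  # skip classes absent from both prediction and truth
            ious.append(inter / union)
    return sum(ious) / len(ious)

# Toy 4-pixel image, background = 0 and LN = 1: IoUs of 0.5 and 2/3.
print(miou([0, 0, 1, 1], [0, 1, 1, 1], [0, 1]))
```

Note that IoU penalizes the same errors more heavily than Dice (IoU = Dice / (2 - Dice)), so an mIoU of 0.83 corresponds to a higher Dice value on the same masks.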