
Multi-Organ metabolic profiling with [¹⁸F]F-FDG PET/CT predicts pathological response to neoadjuvant immunochemotherapy in resectable NSCLC.

Ma Q, Yang J, Guo X, Mu W, Tang Y, Li J, Hu S

PubMed · Jun 2 2025
To develop and validate a novel nomogram combining multi-organ PET metabolic metrics for major pathological response (MPR) prediction in resectable non-small cell lung cancer (rNSCLC) patients receiving neoadjuvant immunochemotherapy. This retrospective cohort included rNSCLC patients who underwent baseline [¹⁸F]F-FDG PET/CT prior to neoadjuvant immunochemotherapy at Xiangya Hospital from April 2020 to April 2024. Patients were randomly stratified into training (70%) and validation (30%) cohorts. Using deep learning-based automated segmentation, we quantified metabolic parameters (SUVmean, SUVmax, SUVpeak, MTV, TLG) and their ratios to the corresponding liver parameters for the primary tumor and nine key organs. Feature selection employed a tripartite approach: univariate analysis, LASSO regression, and random forest optimization. The final multivariable model was translated into a clinically interpretable nomogram, with validation assessing discrimination, calibration, and clinical utility. Among 115 patients (MPR rate: 63.5%, n = 73), five metabolic parameters emerged as predictive biomarkers for MPR: Spleen_SUVmean, Colon_SUVpeak, Spine_TLG, Lesion_TLG, and the spleen-to-liver SUVmax ratio. The nomogram demonstrated consistent performance across cohorts (training AUC = 0.78 [95% CI 0.67-0.88]; validation AUC = 0.78 [95% CI 0.62-0.94]), with robust calibration and enhanced clinical net benefit on decision curve analysis. Compared with tumor-only parameters, the multi-organ model showed higher specificity (100% vs. 92%) and positive predictive value (100% vs. 90%) in the validation set, while maintaining 76% overall accuracy. This first-reported multi-organ metabolic nomogram noninvasively predicts MPR in rNSCLC patients receiving neoadjuvant immunochemotherapy, outperforming conventional tumor-centric approaches. By quantifying systemic host-tumor metabolic crosstalk, this tool could help guide personalized therapeutic decisions while mitigating treatment-related risks, representing a paradigm shift towards precision immuno-oncology management.
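The tripartite feature selection described above can be sketched with scikit-learn and SciPy; this is a hedged illustration in which the variable names, thresholds, and specific choices (a Mann-Whitney U test for the univariate step, L1-penalized logistic regression as the LASSO step) are our assumptions, not the authors' code.

```python
# Hedged sketch: univariate screening -> LASSO -> random-forest ranking.
# X: (n_patients, n_features) array of PET metrics and liver ratios;
# y: binary MPR labels as a NumPy array. All thresholds are illustrative.
import numpy as np
from scipy.stats import mannwhitneyu
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import StandardScaler

def tripartite_selection(X, y, feature_names, p_thresh=0.05, top_k=5):
    X = StandardScaler().fit_transform(X)

    # Step 1: univariate screening with a Mann-Whitney U test per feature.
    keep = [j for j in range(X.shape[1])
            if mannwhitneyu(X[y == 1, j], X[y == 0, j]).pvalue < p_thresh]

    # Step 2: L1-penalized (LASSO-style) logistic regression zeroes out
    # uninformative coefficients.
    lasso = LogisticRegression(penalty="l1", solver="liblinear").fit(X[:, keep], y)
    keep = [keep[j] for j, c in enumerate(lasso.coef_[0]) if c != 0]

    # Step 3: a random forest ranks the survivors; retain the top_k.
    rf = RandomForestClassifier(n_estimators=500, random_state=0).fit(X[:, keep], y)
    top = np.argsort(rf.feature_importances_)[::-1][:top_k]
    return [feature_names[keep[j]] for j in top]
```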

Fine-tuned large language model for extracting newly identified acute brain infarcts based on computed tomography or magnetic resonance imaging reports.

Fujita N, Yasaka K, Kiryu S, Abe O

PubMed · Jun 2 2025
This study aimed to develop an automated early warning system using a large language model (LLM) to identify acute to subacute brain infarction from free-text computed tomography (CT) or magnetic resonance imaging (MRI) radiology reports. In this retrospective study, 5,573, 1,883, and 834 patients were included in the training (mean age, 67.5 ± 17.2 years; 2,831 males), validation (mean age, 61.5 ± 18.3 years; 994 males), and test (mean age, 66.5 ± 16.1 years; 488 males) datasets, respectively. An LLM (a Japanese Bidirectional Encoder Representations from Transformers [BERT] model) was fine-tuned to classify the CT and MRI reports into three groups (group 0, newly identified acute to subacute infarction; group 1, known acute to subacute infarction or old infarction; group 2, no infarction). The training and validation processes were repeated 15 times, and the model performing best on the validation dataset was selected for further evaluation on the test dataset. The best fine-tuned model exhibited sensitivities of 0.891, 0.905, and 0.959 for groups 0, 1, and 2, respectively, on the test dataset. The macro-sensitivity (the average of the per-group sensitivities) and accuracy were 0.918 and 0.923, respectively. The model's performance in extracting newly identified acute brain infarcts was high, with an area under the receiver operating characteristic curve of 0.979 (95% confidence interval, 0.956-1.000). The average prediction time was 0.115 ± 0.037 s per patient. A fine-tuned LLM could extract newly identified acute to subacute brain infarcts from CT or MRI reports with high performance.
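A minimal fine-tuning loop for a three-class report classifier of this kind could look like the following Hugging Face transformers sketch; the checkpoint name, hyperparameters, and dataset wrapper are illustrative assumptions, not the paper's implementation.

```python
# Sketch: fine-tune a Japanese BERT checkpoint for 3-class report labels.
import torch
from transformers import (AutoTokenizer, AutoModelForSequenceClassification,
                          Trainer, TrainingArguments)

checkpoint = "cl-tohoku/bert-base-japanese"  # assumed Japanese BERT checkpoint
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForSequenceClassification.from_pretrained(checkpoint, num_labels=3)

class ReportDataset(torch.utils.data.Dataset):
    """Wraps free-text reports and labels 0/1/2 as defined above."""
    def __init__(self, reports, labels):
        self.enc = tokenizer(reports, truncation=True, padding=True, max_length=512)
        self.labels = labels
    def __len__(self):
        return len(self.labels)
    def __getitem__(self, i):
        item = {k: torch.tensor(v[i]) for k, v in self.enc.items()}
        item["labels"] = torch.tensor(self.labels[i])
        return item

args = TrainingArguments(output_dir="infarct_clf", num_train_epochs=3,
                         per_device_train_batch_size=16)
# With train/validation texts and labels in hand (placeholders here):
# trainer = Trainer(model=model, args=args,
#                   train_dataset=ReportDataset(train_texts, train_labels),
#                   eval_dataset=ReportDataset(val_texts, val_labels))
# trainer.train()
```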

Accelerating 3D radial MPnRAGE using a self-supervised deep factor model.

Chen Y, Kecskemeti SR, Holmes JH, Corum CA, Yaghoobi N, Magnotta VA, Jacob M

PubMed · Jun 2 2025
To develop a self-supervised and memory-efficient deep learning image reconstruction method for 4D non-Cartesian MRI with high resolution and a large parametric dimension. The deep factor model (DFM) represents a parametric series of 3D multicontrast images using a neural network conditioned on the inversion time, with efficient zero-filled reconstructions as input estimates. The model parameters are learned in a single-shot learning (SSL) fashion from the k-space data of each acquisition. A compatible transfer learning (TL) approach using previously acquired data is also developed to reduce reconstruction time. The DFM is compared to subspace methods with different regularization strategies in a series of phantom and in vivo experiments using the MPnRAGE acquisition for multicontrast T₁ imaging and quantitative T₁ estimation. DFM-SSL improved the image quality and reduced bias and variance in quantitative T₁ estimates in both phantom and in vivo studies, outperforming all other tested methods. DFM-TL reduced the inference time while maintaining performance comparable to DFM-SSL and outperforming subspace methods with multiple regularization techniques. The proposed DFM offers a superior representation of the multicontrast images compared to subspace models, especially in the highly accelerated MPnRAGE setting. The self-supervised training is ideal for methods with both high resolution and a large parametric dimension, where training neural networks can become computationally demanding without a dedicated high-end GPU array.
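As a structural illustration only (the paper's actual architecture is not reproduced here), a network that maps a zero-filled estimate plus an inversion time to a contrast-specific image could be sketched in PyTorch as follows; layer sizes and the conditioning mechanism are our assumptions.

```python
# Sketch: a small conditioning MLP (inversion time TI -> per-channel scales)
# modulates a 3D CNN that refines zero-filled reconstructions.
import torch
import torch.nn as nn

class DeepFactorModel(nn.Module):
    def __init__(self, channels=16):
        super().__init__()
        # Conditioning branch: TI -> per-channel scale factors.
        self.cond = nn.Sequential(nn.Linear(1, 32), nn.ReLU(), nn.Linear(32, channels))
        # Image branch: refine the zero-filled input estimate.
        self.enc = nn.Conv3d(1, channels, 3, padding=1)
        self.dec = nn.Conv3d(channels, 1, 3, padding=1)

    def forward(self, zero_filled, ti):
        # zero_filled: (B, 1, D, H, W); ti: (B, 1) inversion times.
        feats = torch.relu(self.enc(zero_filled))
        scale = self.cond(ti).view(ti.shape[0], -1, 1, 1, 1)
        return self.dec(feats * scale)  # contrast-specific image estimate
```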

Robust multi-coil MRI reconstruction via self-supervised denoising.

Aali A, Arvinte M, Kumar S, Arefeen YI, Tamir JI

PubMed · Jun 2 2025
To examine the effect of incorporating self-supervised denoising as a pre-processing step for training deep learning (DL)-based reconstruction methods on data corrupted by Gaussian noise. K-space data employed for training are typically multi-coil and inherently noisy. Although DL-based reconstruction methods trained on fully sampled data can enable high reconstruction quality, obtaining large, noise-free datasets is impractical. We leverage the Generalized Stein's Unbiased Risk Estimate (GSURE) for denoising. We evaluate two DL-based reconstruction methods: diffusion probabilistic models (DPMs) and model-based deep learning (MoDL). We evaluate the impact of denoising on the performance of these DL-based methods in solving accelerated multi-coil magnetic resonance imaging (MRI) reconstruction. The experiments were carried out on T2-weighted brain and fat-suppressed proton-density knee scans. We observed that self-supervised denoising enhances the quality and efficiency of MRI reconstructions across various scenarios. Specifically, employing denoised images rather than their noisy counterparts when training DL networks results in lower normalized root mean squared error (NRMSE) and higher structural similarity index measure (SSIM) and peak signal-to-noise ratio (PSNR) across different SNR levels, including 32, 22, and 12 dB for T2-weighted brain data, and 24, 14, and 4 dB for fat-suppressed knee data. We showed that denoising is an essential pre-processing technique capable of improving the efficacy of DL-based MRI reconstruction methods under diverse conditions. By refining the quality of input data, denoising enables training more effective DL networks, potentially bypassing the need for noise-free reference MRI scans.
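The reported quality metrics map directly onto scikit-image's metrics module; a small sketch with placeholder arrays (the real evaluation would use reconstructed and reference MRI magnitude images):

```python
# Sketch: compute NRMSE, PSNR, and SSIM between a reference image and a
# reconstruction, as reported in the abstract above.
import numpy as np
from skimage.metrics import (normalized_root_mse, peak_signal_noise_ratio,
                             structural_similarity)

def evaluate(reference, reconstruction):
    """reference/reconstruction: 2D magnitude images scaled to [0, 1]."""
    return {
        "NRMSE": normalized_root_mse(reference, reconstruction),
        "PSNR": peak_signal_noise_ratio(reference, reconstruction, data_range=1.0),
        "SSIM": structural_similarity(reference, reconstruction, data_range=1.0),
    }

# Placeholder demo with synthetic data.
rng = np.random.default_rng(0)
ref = rng.random((256, 256))
rec = np.clip(ref + 0.05 * rng.standard_normal(ref.shape), 0, 1)
print(evaluate(ref, rec))
```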

Attention-enhanced residual U-Net: lymph node segmentation method with bimodal MRI images.

Qiu J, Chen C, Li M, Hong J, Dong B, Xu S, Lin Y

PubMed · Jun 2 2025
In medical images, lymph nodes (LNs) have fuzzy boundaries, diverse shapes and sizes, and structures similar to surrounding tissues. To automatically segment uterine LNs from sagittal magnetic resonance imaging (MRI) scans, we combined T2-weighted imaging (T2WI) and diffusion-weighted imaging (DWI) images and evaluated the results with our proposed model. This study used a dataset of 158 MRI images of patients with pathologically confirmed, FIGO-staged LNs. To improve the robustness of the model, data augmentation was applied to expand the dataset. The training data were manually annotated by two experienced radiologists. The DWI and T2 images were fused and input into a U-Net, to which an efficient channel attention (ECA) module was added. A residual network was added at the encoding-decoding stage, yielding the efficient residual U-Net (ERU-Net), which produced the final segmentation results used to compute the mean intersection-over-union (mIoU). The experimental results demonstrated that ERU-Net showed strong segmentation performance, significantly better than the other segmentation networks tested. The mIoU reached 0.83, and the average pixel accuracy was 0.91. In addition, the precision was 0.90, and the corresponding recall was 0.91. In this study, ERU-Net successfully achieved segmentation of LNs in uterine MRI images. Compared with other segmentation networks, our network has the best segmentation effect on uterine LNs. This provides a valuable reference for doctors to develop more effective and efficient treatment plans.
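The efficient channel attention (ECA) block added to the U-Net has a well-known minimal form: global average pooling, a 1D convolution across channels, and a sigmoid gate. A PyTorch sketch, with the kernel size as an assumed hyperparameter:

```python
# Sketch of an ECA block: channel descriptors are pooled, mixed by a cheap
# 1D convolution, and used to reweight the input feature map.
import torch
import torch.nn as nn

class ECA(nn.Module):
    def __init__(self, k_size=3):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.conv = nn.Conv1d(1, 1, kernel_size=k_size,
                              padding=k_size // 2, bias=False)
        self.sigmoid = nn.Sigmoid()

    def forward(self, x):                      # x: (B, C, H, W)
        y = self.pool(x)                       # (B, C, 1, 1)
        y = y.squeeze(-1).transpose(-1, -2)    # (B, 1, C)
        y = self.conv(y)                       # 1D conv across channels
        y = self.sigmoid(y).transpose(-1, -2).unsqueeze(-1)  # (B, C, 1, 1)
        return x * y                           # channel-wise reweighting
```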

Decision support using machine learning for predicting adequate bladder filling in prostate radiotherapy: a feasibility study.

Saiyo N, Assawanuwat K, Janthawanno P, Paduka S, Prempetch K, Chanphol T, Sakchatchawan B, Thongsawad S

PubMed · Jun 2 2025
This study aimed to develop a model for predicting the bladder volume ratio between daily CBCT and planning CT to determine adequate bladder filling in patients undergoing treatment for prostate cancer with external beam radiation therapy (EBRT). The model was trained using 465 datasets obtained from 34 prostate cancer patients. A total of 16 features were collected as input data, including basic patient information, patient health status, blood examination laboratory results, and specific radiation therapy information. The ratio of the bladder volume between daily CBCT (dCBCT) and planning CT (pCT) was used as the model response. The model was trained using a bootstrap aggregation (bagging) algorithm with two machine learning (ML) approaches: classification and regression. The model accuracy was validated using a further 93 datasets. For the regression approach, the accuracy of the model was evaluated using the root mean square error (RMSE) and mean absolute error (MAE); the performance of the classification approach was assessed using sensitivity, specificity, and accuracy. The ML model showed promising results in predicting the bladder volume ratio between dCBCT and pCT, with an RMSE of 0.244 and an MAE of 0.172 for the regression approach, and a sensitivity of 95.24%, specificity of 92.16%, and accuracy of 93.55% for the classification approach. The prediction model could help the radiological technologist determine whether the bladder is full before treatment, thereby reducing the need for repeat CBCT scans.
HIGHLIGHTS:
- The bagging model demonstrates strong performance in predicting optimal bladder filling.
- The model achieves promising results with 95.24% sensitivity and 92.16% specificity.
- It supports therapists in assessing bladder fullness prior to treatment.
- It helps reduce the risk of requiring repeat CBCT scans.
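Both arms of the described bagging setup are available off the shelf in scikit-learn (the `estimator=` keyword assumes the ≥1.2 API); a hedged sketch in which the feature matrix, targets, and base estimators are placeholders, not the study's configuration:

```python
# Sketch: bagging for the regression target (dCBCT/pCT volume ratio) and for
# a binary adequate-filling label assumed to be derived from that ratio.
from sklearn.ensemble import BaggingClassifier, BaggingRegressor
from sklearn.tree import DecisionTreeClassifier, DecisionTreeRegressor

# Regression: predict the bladder volume ratio directly.
reg = BaggingRegressor(estimator=DecisionTreeRegressor(),
                       n_estimators=100, random_state=0)
# reg.fit(X_train, ratio_train)  # X: (n_fractions, 16 features)
# RMSE/MAE would be computed on the 93 held-out datasets.

# Classification: adequate vs. inadequate filling (assumed binary label).
clf = BaggingClassifier(estimator=DecisionTreeClassifier(),
                        n_estimators=100, random_state=0)
# clf.fit(X_train, adequate_train)
# Sensitivity/specificity/accuracy follow from the held-out confusion matrix.
```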

Performance Comparison of Machine Learning Using Radiomic Features and CNN-Based Deep Learning in Benign and Malignant Classification of Vertebral Compression Fractures Using CT Scans.

Yeom JC, Park SH, Kim YJ, Ahn TR, Kim KG

PubMed · Jun 2 2025
Distinguishing benign from malignant vertebral compression fractures (VCFs) is critical for clinical management but remains challenging on contrast-enhanced abdominal CT, which lacks the soft-tissue contrast of MRI. This study evaluates and compares radiomic feature-based machine learning (ML) and convolutional neural network (CNN)-based deep learning (DL) models for classifying VCFs using abdominal CT. A retrospective cohort of 447 VCFs (196 benign, 251 malignant) from 286 patients was analyzed. Radiomic features were extracted using PyRadiomics, with recursive feature elimination selecting six key texture-based features (e.g., Run Variance, Dependence Non-Uniformity Normalized), highlighting textural heterogeneity as a malignancy marker. ML models (XGBoost, SVM, KNN, Random Forest) and a 3D CNN were trained on CT data, with performance assessed via precision, recall, F1 score, accuracy, and AUC. The DL model achieved marginally superior overall performance, with a statistically significantly higher AUC (77.66% vs. 75.91%, p < 0.05) and better precision, F1 score, and accuracy compared with the top-performing ML model (XGBoost). The DL model's attention maps localized diagnostically relevant regions, mimicking radiologists' focus, whereas radiomics lacked spatial interpretability despite offering quantifiable biomarkers. This study underscores the complementary strengths of the two approaches: radiomics provides interpretable features tied to tumor heterogeneity, while DL autonomously extracts high-dimensional patterns with spatial explainability. Integrating both approaches could enhance diagnostic accuracy and clinician trust in abdominal CT-based VCF assessment. Limitations include retrospective single-center data and potential selection bias. Future multi-center studies with diverse protocols and histopathological validation are warranted to generalize these findings.
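The radiomics arm as described (PyRadiomics extraction, recursive feature elimination down to six features, XGBoost) could be wired together as follows; file paths, hyperparameters, and the use of XGBoost as the RFE estimator are our assumptions for illustration:

```python
# Sketch: per-ROI radiomic feature extraction, then RFE + XGBoost.
from radiomics import featureextractor
from sklearn.feature_selection import RFE
from xgboost import XGBClassifier

extractor = featureextractor.RadiomicsFeatureExtractor()

def extract_features(image_path, mask_path):
    # Returns radiomic features for one vertebra ROI, dropping the
    # diagnostic metadata PyRadiomics prepends to its output.
    result = extractor.execute(image_path, mask_path)
    return {k: v for k, v in result.items() if not k.startswith("diagnostics")}

# X: (n_fractures, n_features) matrix assembled from extract_features();
# y: benign (0) vs. malignant (1) labels.
model = XGBClassifier(n_estimators=200, eval_metric="logloss")
selector = RFE(estimator=model, n_features_to_select=6)
# selector.fit(X, y)
# model.fit(selector.transform(X), y)
```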

Current AI technologies in cancer diagnostics and treatment.

Tiwari A, Mishra S, Kuo TR

PubMed · Jun 2 2025
Cancer continues to be a significant international health issue, which demands the invention of new methods for early detection, precise diagnoses, and personalized treatments. Artificial intelligence (AI) has rapidly become a groundbreaking component in the modern era of oncology, offering sophisticated tools across the range of cancer care. In this review, we performed a systematic survey of the current status of AI technologies used for cancer diagnoses and therapeutic approaches. We discuss AI-facilitated imaging diagnostics using a range of modalities such as computed tomography, magnetic resonance imaging, positron emission tomography, ultrasound, and digital pathology, highlighting the growing role of deep learning in detecting early-stage cancers. We also explore applications of AI in genomics and biomarker discovery, liquid biopsies, and non-invasive diagnoses. In therapeutic interventions, AI-based clinical decision support systems, individualized treatment planning, and AI-facilitated drug discovery are transforming precision cancer therapies. The review also evaluates the effects of AI on radiation therapy, robotic surgery, and patient management, including survival predictions, remote monitoring, and AI-facilitated clinical trials. Finally, we discuss important challenges such as data privacy, interpretability, and regulatory issues, and recommend future directions that involve the use of federated learning, synthetic biology, and quantum-boosted AI. This review highlights the groundbreaking potential of AI to revolutionize cancer care by making diagnostics, treatments, and patient management more precise, efficient, and personalized.

MobileTurkerNeXt: investigating the detection of Bankart and SLAP lesions using magnetic resonance images.

Gurger M, Esmez O, Key S, Hafeez-Baig A, Dogan S, Tuncer T

PubMed · Jun 2 2025
The landscape of computer vision is predominantly shaped by two groundbreaking methodologies: transformers and convolutional neural networks (CNNs). In this study, we introduce an innovative mobile CNN architecture designed for orthopedic imaging that efficiently identifies both Bankart and SLAP lesions. Our approach involved the collection of two distinct magnetic resonance (MR) image datasets, with the primary goal of automating the detection of Bankart and SLAP lesions. A novel mobile CNN, dubbed MobileTurkerNeXt, forms the cornerstone of this research. This newly developed model, comprising roughly 1 million trainable parameters, unfolds across four principal stages: the stem, main, downsampling, and output phases. The stem phase incorporates three convolutional layers to initiate feature extraction. In the main phase, we introduce an innovative block drawing inspiration from the ConvNeXt, EfficientNet, and ResNet architectures. The downsampling phase utilizes patchify average pooling and pixel-wise convolution to effectively reduce spatial dimensions, while the output phase is engineered to yield the classification outcomes. Our experimentation with MobileTurkerNeXt spanned three comparative scenarios: Bankart versus normal, SLAP versus normal, and a tripartite comparison of Bankart, SLAP, and normal cases. The model demonstrated exemplary performance, achieving test classification accuracies exceeding 96% across these scenarios. The empirical results underscore MobileTurkerNeXt's superior classification performance in differentiating among Bankart, SLAP, and normal conditions in orthopedic imaging, and highlight the potential of our proposed mobile CNN to advance diagnostic capabilities and contribute significantly to the field of medical image analysis.
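One plausible reading of the downsampling phase ("patchify average pooling and pixel-wise convolution") is non-overlapping average pooling followed by a 1x1 convolution; this PyTorch sketch is our interpretation, not the published implementation:

```python
# Sketch: patchify-style downsampling. Non-overlapping average pooling
# halves the spatial size; a 1x1 (pixel-wise) convolution sets the channels.
import torch.nn as nn

class Downsample(nn.Module):
    def __init__(self, in_ch, out_ch, patch=2):
        super().__init__()
        self.pool = nn.AvgPool2d(kernel_size=patch, stride=patch)  # patchify
        self.pw = nn.Conv2d(in_ch, out_ch, kernel_size=1)          # pixel-wise
    def forward(self, x):
        return self.pw(self.pool(x))
```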

Robust Detection of Out-of-Distribution Shifts in Chest X-ray Imaging.

Karimi F, Farnia F, Bae KT

PubMed · Jun 2 2025
This study addresses the critical challenge of detecting out-of-distribution (OOD) chest X-rays, where subtle view differences between lateral and frontal radiographs can lead to diagnostic errors. We develop a GAN-based framework that learns the inherent feature distribution of frontal views from the MIMIC-CXR dataset through latent space optimization and Kolmogorov-Smirnov statistical testing. Our approach generates similarity scores to reliably identify OOD cases, achieving exceptional performance with 100% precision and 97.5% accuracy in detecting lateral views. The method demonstrates consistent reliability across operating conditions, maintaining accuracy above 92.5% and precision exceeding 93% under varying detection thresholds. These results provide both theoretical insights and practical solutions for OOD detection in medical imaging, demonstrating how GANs can establish feature representations for identifying distributional shifts. By significantly improving model reliability when encountering view-based anomalies, our framework enhances the clinical applicability of deep learning systems, ultimately contributing to improved diagnostic safety and patient outcomes.
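The statistical-testing step reduces to a two-sample Kolmogorov-Smirnov comparison between a candidate image's similarity scores and the in-distribution reference scores; a SciPy sketch with synthetic placeholder scores and an assumed decision threshold:

```python
# Sketch: flag an image as OOD when its similarity scores differ
# significantly from the frontal-view reference distribution.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)
frontal_scores = rng.normal(0.9, 0.03, size=500)   # in-distribution reference
test_scores = rng.normal(0.6, 0.05, size=50)       # e.g., a lateral view

stat, p_value = ks_2samp(test_scores, frontal_scores)
is_ood = p_value < 0.01   # assumed decision threshold
print(f"KS statistic={stat:.3f}, p={p_value:.3g}, OOD={is_ood}")
```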