Habitat Radiomics Based on MRI for Predicting Metachronous Liver Metastasis in Locally Advanced Rectal Cancer: a Two‑center Study.

Shi S, Jiang T, Liu H, Wu Y, Singh A, Wang Y, Xie J, Li X

PubMed · Jun 1, 2025
This study aimed to explore the feasibility of using habitat radiomics based on magnetic resonance imaging (MRI) to predict metachronous liver metastasis (MLM) in locally advanced rectal cancer (LARC) patients. A nomogram was developed by integrating multiple factors to enhance predictive accuracy. Retrospective data from 385 LARC patients across two centers were gathered. The data from Center 1 were split into a training set of 203 patients and an internal validation set of 87 patients, while Center 2 provided an external test set of 95 patients. K-means clustering was applied to T2-weighted images, and the region of interest was extended at different thicknesses. After feature extraction and selection, four machine-learning algorithms were used to build radiomics models. A nomogram was created by combining habitat radiomics, conventional radiomics, and independent clinical predictors. Model performance was evaluated by the area under the curve (AUC), and clinical utility was assessed with calibration curves and decision curve analysis (DCA). Habitat radiomics outperformed the other single models in predicting MLM, with AUCs of 0.926, 0.864, and 0.851 in the training, internal validation, and external test sets, respectively. The integrated nomogram achieved even higher AUCs of 0.959, 0.925, and 0.889, and DCA and calibration curve analysis showed high net benefit and good calibration. MRI-based habitat radiomics can effectively predict MLM in LARC patients, and the integrated nomogram achieved the best predictive performance and significantly improved model accuracy.
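
For readers who want to prototype the habitat step, the sketch below clusters T2-weighted voxel intensities inside a tumor mask with K-means, which is the general idea the abstract describes; the number of habitats, the use of raw intensity as the only clustering feature, and the SimpleITK-based I/O are assumptions rather than details from the paper.

```python
# Sketch: intratumoral habitat map via K-means on T2-weighted voxel intensities.
# n_habitats and the intensity-only feature set are assumptions, not paper settings.
import numpy as np
import SimpleITK as sitk
from sklearn.cluster import KMeans

def habitat_map(t2w_path: str, mask_path: str, n_habitats: int = 3) -> sitk.Image:
    img = sitk.ReadImage(t2w_path)
    mask = sitk.ReadImage(mask_path)
    arr = sitk.GetArrayFromImage(img).astype(np.float32)
    roi = sitk.GetArrayFromImage(mask) > 0

    voxels = arr[roi].reshape(-1, 1)          # intensities inside the tumor ROI
    labels = KMeans(n_clusters=n_habitats, n_init=10, random_state=0).fit_predict(voxels)

    out = np.zeros_like(arr, dtype=np.int16)
    out[roi] = labels + 1                     # habitats labeled 1..n, background stays 0
    habitat = sitk.GetImageFromArray(out)
    habitat.CopyInformation(img)              # keep spacing/origin/direction of the T2w image
    return habitat
```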

Generating Synthetic T2*-Weighted Gradient Echo Images of the Knee with an Open-source Deep Learning Model.

Vrettos K, Vassalou EE, Vamvakerou G, Karantanas AH, Klontzas ME

PubMed · Jun 1, 2025
Routine knee MRI protocols for 1.5 T and 3 T scanners do not include T2*-weighted gradient echo (T2*W) images, which are useful in several clinical scenarios such as the assessment of cartilage, synovial blooming (hemosiderin deposition), chondrocalcinosis, and the evaluation of the physis in pediatric patients. Herein, we aimed to develop an open-source deep learning model that creates synthetic T2*W images of the knee from fat-suppressed intermediate-weighted images. A CycleGAN model was trained with 12,118 sagittal knee MR images and tested on an independent set of 2996 images. Diagnostic interchangeability of synthetic T2*W images was assessed against a series of findings. Voxel intensity of four tissues was evaluated with Bland-Altman plots. Image quality was assessed using the normalized root mean squared error (NRMSE), structural similarity index (SSIM), and peak signal-to-noise ratio (PSNR). Code, the model, and a standalone executable file are provided on GitHub. The model achieved a median NRMSE, PSNR, and SSIM of 0.5, 17.4, and 0.5, respectively. Images were found interchangeable, with an intraclass correlation coefficient >0.95 for all findings. Mean voxel intensity was equal between synthetic and conventional images. Four types of artifacts were identified: geometric distortion (86/163 cases), object insertion/omission (11/163 cases), a wrap-around-like artifact (26/163 cases), and an incomplete fat-suppression artifact (120/163 cases), with a median impact score of 0 (no impact) on diagnosis. In conclusion, the developed open-source GAN model creates synthetic T2*W images of the knee of high diagnostic value and quality. The identified artifacts had no or minor effect on the diagnostic value of the images.
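
A CycleGAN learns the mapping without paired T2*W/intermediate-weighted examples by combining an adversarial term with a cycle-consistency term. The sketch below shows one direction of that objective under an assumed LSGAN-style adversarial loss and an assumed weight of 10 on the cycle term; the paper's actual architectures and loss weights are not given in the abstract.

```python
# Sketch: one direction of a CycleGAN generator objective (IW -> synthetic T2*W).
# LSGAN-style adversarial loss and lambda_cyc = 10 are assumptions; a full CycleGAN
# also trains the reverse direction and the discriminators.
import torch
import torch.nn.functional as F

def generator_loss(G_iw2t2s, G_t2s2iw, D_t2s, real_iw, lambda_cyc=10.0):
    fake_t2s = G_iw2t2s(real_iw)              # intermediate-weighted -> synthetic T2*W
    rec_iw = G_t2s2iw(fake_t2s)               # map back to close the cycle
    d_out = D_t2s(fake_t2s)
    adv = F.mse_loss(d_out, torch.ones_like(d_out))   # fool the T2*W discriminator
    cyc = F.l1_loss(rec_iw, real_iw)                  # cycle-consistency penalty
    return adv + lambda_cyc * cyc
```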

Multivariate Classification of Adolescent Major Depressive Disorder Using Whole-brain Functional Connectivity.

Li Z, Shen Y, Zhang M, Li X, Wu B

PubMed · Jun 1, 2025
Adolescent major depressive disorder (MDD) is a serious mental health condition that has been linked to abnormal functional connectivity (FC) patterns within the brain. However, whether FC could be used as a potential biomarker for the diagnosis of adolescent MDD remains unclear. The aim of our study was to investigate the potential diagnostic value of whole-brain FC in adolescent MDD. Resting-state functional magnetic resonance imaging data were obtained from 94 adolescents with MDD and 78 healthy adolescents. The whole brain was segmented into 90 regions of interest (ROIs) using the automated anatomical labeling atlas. FC was assessed by calculating the Pearson correlation coefficient of the average time series between each pair of ROIs. A multivariate pattern analysis was employed to classify patients from controls using the whole-brain FC as input features. The linear support vector machine classifier achieved an accuracy of 69.18% using the optimal functional connection features. The consensus functional connections were mainly located within and between large-scale brain networks. The top 10 nodes with the highest weights in the classification model were mainly located in the default mode, salience, auditory, and sensorimotor networks. Our findings highlight the importance of functional network connectivity in the neurobiology of adolescent MDD and suggest that altered FC and high-weight regions may serve as complementary diagnostic markers in adolescents with depression.
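
The pipeline the abstract describes (pairwise Pearson correlations over 90 AAL ROIs fed to a linear SVM) can be sketched as below; the Fisher z-transform and 5-fold cross-validation shown here are common conventions assumed for illustration, not settings reported in the abstract.

```python
# Sketch: whole-brain FC features (upper triangle of a 90x90 Pearson matrix) fed to a
# linear SVM. Fisher z-transform and 5-fold CV are illustrative conventions.
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.model_selection import cross_val_score

def fc_features(ts: np.ndarray) -> np.ndarray:
    """ts: (n_timepoints, 90) ROI-averaged time series for one subject."""
    r = np.corrcoef(ts.T)                     # 90 x 90 correlation matrix
    iu = np.triu_indices_from(r, k=1)         # 4005 unique pairwise connections
    return np.arctanh(r[iu])                  # Fisher z-transform

def classify(subject_ts, labels):
    X = np.vstack([fc_features(ts) for ts in subject_ts])
    clf = LinearSVC(C=1.0, max_iter=10000)
    return cross_val_score(clf, X, labels, cv=5)   # per-fold accuracy
```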

Prediction Model and Nomogram for Amyloid Positivity Using Clinical and MRI Features in Individuals With Subjective Cognitive Decline.

Li Q, Cui L, Guan Y, Li Y, Xie F, Guo Q

PubMed · Jun 1, 2025
There is an urgent need for the precise prediction of cerebral amyloidosis using noninvasive and accessible indicators to facilitate the early diagnosis of individuals in the preclinical stage of Alzheimer's disease (AD). Two hundred and four individuals with subjective cognitive decline (SCD) were enrolled in this study. All subjects completed neuropsychological assessments and underwent 18F-florbetapir PET, structural MRI, and functional MRI. A total of 315 features were extracted from the MRI, demographics, and neuropsychological scales and selected using the least absolute shrinkage and selection operator (LASSO). A machine learning-based logistic regression (LR) model was trained to classify SCD individuals as either β-amyloid (Aβ) positive or negative. A nomogram was established using a multivariate LR model to predict the risk of Aβ positivity. The performance of the prediction model and nomogram was assessed with the area under the curve (AUC) and calibration. The final model was based on the thickness of the right rostral anterior cingulate, the grey matter volume of the right inferior temporal region, the regional homogeneity (ReHo) of the left posterior cingulate gyrus and right superior temporal gyrus, as well as the MoCA-B and AVLT-R scores. In the training set, the model achieved a good AUC of 0.78 for predicting Aβ positivity, with an accuracy of 0.72. Validation of the model also yielded favorable discriminatory ability, with an AUC of 0.88 and an accuracy of 0.83. We have established and validated a model based on cognitive, sMRI, and fMRI data that exhibits adequate discrimination. This model has the potential to predict amyloid status in the SCD group and provides a noninvasive, cost-effective approach that might facilitate early screening, clinical diagnosis, and drug clinical trials.
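
The two-stage feature-selection-plus-classifier design (LASSO over the 315 candidate features, then logistic regression on the retained subset) might look roughly like the following; the standardization step, the cross-validated penalty, and the AUC evaluation are assumptions for illustration rather than reported settings.

```python
# Sketch: LASSO feature selection followed by logistic regression on the retained
# features. Preprocessing and penalty selection are illustrative assumptions.
import numpy as np
from sklearn.linear_model import LassoCV, LogisticRegression
from sklearn.preprocessing import StandardScaler
from sklearn.metrics import roc_auc_score

def lasso_then_lr(X_train, y_train, X_val, y_val):
    scaler = StandardScaler().fit(X_train)
    Xtr, Xva = scaler.transform(X_train), scaler.transform(X_val)

    lasso = LassoCV(cv=5, random_state=0).fit(Xtr, y_train)
    keep = np.flatnonzero(lasso.coef_)        # indices of features with nonzero weights

    lr = LogisticRegression(max_iter=1000).fit(Xtr[:, keep], y_train)
    return roc_auc_score(y_val, lr.predict_proba(Xva[:, keep])[:, 1])
```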

3-D contour-aware U-Net for efficient rectal tumor segmentation in magnetic resonance imaging.

Lu Y, Dang J, Chen J, Wang Y, Zhang T, Bai X

PubMed · Jun 1, 2025
Magnetic resonance imaging (MRI), as a non-invasive detection method, is crucial for the clinical diagnosis and treatment planning of rectal cancer. However, because rectal tumors show low signal contrast on MRI, segmentation is often inaccurate. In this paper, we propose a new three-dimensional rectal tumor segmentation method, CAU-Net, based on T2-weighted MRI images. The method adopts a convolutional neural network to extract multi-scale features from MRI images and uses a Contour-Aware decoder and attention fusion block (AFB) for contour enhancement. We also introduce an adversarial constraint to improve augmentation performance. Furthermore, we construct a dataset of 108 T2-weighted MRI volumes for the segmentation of locally advanced rectal cancer. Finally, CAU-Net achieved a DSC of 0.7112 and an ASD of 2.4707, outperforming other state-of-the-art methods. Various experiments on this dataset show that CAU-Net has high accuracy and efficiency in rectal tumor segmentation. In summary, the proposed method has important clinical application value and can provide strong support for medical image analysis and clinical treatment of rectal cancer. With further development and application, this method has the potential to improve the accuracy of rectal cancer diagnosis and treatment.
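
The DSC reported above is the standard volumetric overlap metric for segmentation; a minimal reference implementation is shown below (the ASD would additionally require surface-distance computation, which is omitted here).

```python
# Sketch: Dice similarity coefficient between a predicted and a ground-truth 3-D mask.
import numpy as np

def dice_coefficient(pred: np.ndarray, gt: np.ndarray, eps: float = 1e-7) -> float:
    pred, gt = pred.astype(bool), gt.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    return (2.0 * inter + eps) / (pred.sum() + gt.sum() + eps)
```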

Ensemble learning of deep CNN models and two stage level prediction of Cobb angle on surface topography in adolescents with idiopathic scoliosis.

Hassan M, Gonzalez Ruiz JM, Mohamed N, Burke TN, Mei Q, Westover L

PubMed · Jun 1, 2025
This study employs Convolutional Neural Networks (CNNs) as feature extractors with appended regression layers for the non-invasive prediction of Cobb Angle (CA) from Surface Topography (ST) scans in adolescents with Idiopathic Scoliosis (AIS). The aim is to minimize radiation exposure during critical growth periods by offering a reliable, non-invasive assessment tool. The efficacy of various CNN-based feature extractors (DenseNet121, EfficientNetB0, ResNet18, SqueezeNet, and a modified U-Net) was evaluated on a dataset of 654 ST scans using a regression analysis framework for accurate CA prediction. The dataset comprised 590 training and 64 testing scans. Performance was evaluated using Mean Absolute Error (MAE), Root Mean Square Error (RMSE), and accuracy in classifying scoliosis severity (mild, moderate, severe) based on CA measurements. The EfficientNetB0 feature extractor outperformed the other models, demonstrating strong performance on the training set (R = 0.96, R² = 0.93) and achieving an MAE of 6.13° and an RMSE of 7.5° on the test set. For scoliosis severity classification, it achieved high precision (84.62%) and specificity (95.65% for mild cases and 82.98% for severe cases), highlighting its clinical applicability in AIS management. The regression-based approach using EfficientNetB0 as a feature extractor presents a significant advancement for accurately determining CA from ST scans, offering a promising tool for improving scoliosis severity categorization and management in adolescents.
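
The "feature extractor with appended regression layers" design can be sketched with torchvision's EfficientNetB0 backbone as below; the width of the regression head, the dropout rate, and the ImageNet initialization are assumptions, since the abstract does not specify them.

```python
# Sketch: EfficientNetB0 backbone with an appended regression head predicting a single
# Cobb angle value. Head width, dropout, and ImageNet weights are assumptions.
import torch
import torch.nn as nn
from torchvision import models

class CobbAngleRegressor(nn.Module):
    def __init__(self):
        super().__init__()
        backbone = models.efficientnet_b0(weights="IMAGENET1K_V1")
        backbone.classifier = nn.Identity()   # expose the 1280-d pooled features
        self.backbone = backbone
        self.head = nn.Sequential(            # appended regression layers
            nn.Linear(1280, 256), nn.ReLU(),
            nn.Dropout(0.2),
            nn.Linear(256, 1),                # predicted Cobb angle in degrees
        )

    def forward(self, x):                     # x: (B, 3, H, W) surface-topography image
        return self.head(self.backbone(x)).squeeze(-1)
```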

Healthcare resource utilization for the management of neonatal head shape deformities: a propensity-matched analysis of AI-assisted and conventional approaches.

Shin J, Caron G, Stoltz P, Martin JE, Hersh DS, Bookland MJ

PubMed · Jun 1, 2025
Overuse of radiography studies and underuse of conservative therapies for cranial deformities in neonates is a known inefficiency in pediatric craniofacial healthcare. This study sought to establish whether the introduction of artificial intelligence (AI)-generated craniometrics and craniometric interpretations into craniofacial clinical workflow improved resource utilization patterns in the initial evaluation and management of neonatal cranial deformities. A retrospective chart review of pediatric patients referred for head shape concerns between January 2019 and June 2023 was conducted. Patient demographics, final encounter diagnosis, review of an AI analysis, and provider orders were documented. Patients were divided based on whether an AI cranial deformity analysis was documented as reviewed during the index evaluation, then both groups were propensity matched. Rates of index-encounter radiology studies, physical therapy (PT), orthotic therapy, and craniofacial specialist follow-up evaluations were compared using logistic regression and ANOVA analyses. One thousand patient charts were reviewed (663 conventional encounters, 337 AI-assisted encounters). One-to-one propensity matching was performed between these groups. AI models were significantly more likely to be reviewed during telemedicine encounters and advanced practice provider (APP) visits (54.8% telemedicine vs 11.4% in-person, p < 0.0001; 12.3% physician vs 44.4% APP, p < 0.0001). All AI diagnoses of craniosynostosis versus benign deformities were congruent with final diagnoses. AI model review was associated with a significant increase in the use of orthotic therapies for neonatal cranial deformities (31.5% vs 38.6%, p = 0.0132) but not PT or specialist follow-up evaluations. Radiology ordering rates did not correlate with AI-interpreted data review. As neurosurgeons and pediatricians continue to work to limit neonatal radiation exposure and contain healthcare costs, AI-assisted clinical care could be a cheap and easily scalable diagnostic adjunct for reducing reliance on radiography and encouraging adherence to established clinical guidelines. In practice, however, providers appear to default to preexisting diagnostic biases and underweight AI-generated data and interpretations, ultimately negating any potential advantages offered by AI. AI engineers and specialty leadership should prioritize provider education and user interface optimization to improve future adoption of validated AI diagnostic tools.
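
The one-to-one propensity matching mentioned above generally estimates each encounter's probability of being AI-assisted from covariates and then pairs AI-assisted encounters with the nearest conventional encounters. The sketch below is a generic greedy caliper-matching implementation; the covariates, caliper width, and matching algorithm used in the study are not stated in the abstract and are assumptions here.

```python
# Sketch: greedy one-to-one propensity matching with a caliper. Covariates, caliper
# width, and matching algorithm are assumptions, not the study's specification.
import numpy as np
from sklearn.linear_model import LogisticRegression

def propensity_match(X, treated, caliper=0.05):
    """X: (n, p) covariates; treated: (n,) 1 = AI-assisted encounter, 0 = conventional."""
    ps = LogisticRegression(max_iter=1000).fit(X, treated).predict_proba(X)[:, 1]
    available = set(np.flatnonzero(treated == 0).tolist())
    pairs = []
    for i in np.flatnonzero(treated == 1):
        cands = [c for c in available if abs(ps[i] - ps[c]) <= caliper]
        if cands:
            best = min(cands, key=lambda c: abs(ps[i] - ps[c]))
            pairs.append((i, best))
            available.remove(best)
    return pairs                              # matched (AI-assisted, conventional) index pairs
```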

Artificial Intelligence for Teaching Case Curation: Evaluating Model Performance on Imaging Report Discrepancies.

Bartley M, Huemann Z, Hu J, Tie X, Ross AB, Kennedy T, Warner JD, Bradshaw T, Lawrence EM

PubMed · Jun 1, 2025
To assess the feasibility of using a large language model (LLM) to identify valuable radiology teaching cases through report discrepancy detection. This retrospective study included after-hours head CT and musculoskeletal radiograph exams from January 2017 to December 2021. The discrepancy level between the trainee's preliminary interpretation and the final attending report was annotated on a 5-point scale. RadBERT, an LLM pretrained on a vast corpus of radiology text, was fine-tuned for discrepancy detection. For comparison and to ensure the robustness of the approach, Mixtral 8×7B, Mistral 7B, and Llama 2 were also evaluated. The models' performance in detecting discrepancies was evaluated using a randomly selected hold-out test set. A subset of discrepant cases identified by the LLM was compared to a random case set by recording clinical parameters and discrepant pathology and by evaluating possible educational value. The F1 statistic was used for model comparison. Pearson's chi-squared test was employed to assess discrepancy prevalence and score between groups (significance set at p<0.05). The fine-tuned LLM achieved an overall accuracy of 90.5%, with a specificity of 95.5% and a sensitivity of 66.3% for discrepancy detection. Model sensitivity improved significantly with higher discrepancy scores: 49% (34/70) for score 2, 67% (47/62) for score 3, and 81% (35/43) for score 4/5 (p<0.05 compared to score 2). The LLM-curated set showed a significant increase in the prevalence of all discrepancies and of major discrepancies (scores 4 or 5) compared with a random case set (p<0.05 for both). Evaluation of the clinical characteristics from both the random and discrepant case sets demonstrated a broad mix of pathologies and discrepancy types. An LLM can detect trainee report discrepancies, including both higher- and lower-scoring discrepancies, and may improve case set curation for resident education as well as serve as a trainee oversight tool.
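
Fine-tuning a BERT-style radiology language model for discrepancy classification typically follows the standard Hugging Face sequence-classification recipe sketched below; the checkpoint identifier, the binary label scheme, and the hyperparameters are placeholders, not the settings used in the paper.

```python
# Sketch: standard sequence-classification fine-tuning with Hugging Face transformers.
# The checkpoint id is a placeholder, not necessarily the weights used in the paper;
# labels are simplified to binary (discrepant vs. non-discrepant).
from transformers import (AutoTokenizer, AutoModelForSequenceClassification,
                          Trainer, TrainingArguments)

checkpoint = "path/to/radiology-bert"         # placeholder checkpoint identifier
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForSequenceClassification.from_pretrained(checkpoint, num_labels=2)

def tokenize(batch):                          # batch: {"text": [...], "label": [...]}
    return tokenizer(batch["text"], truncation=True, padding="max_length", max_length=512)

args = TrainingArguments(output_dir="discrepancy-model", num_train_epochs=3,
                         per_device_train_batch_size=8, learning_rate=2e-5)
# trainer = Trainer(model=model, args=args,
#                   train_dataset=...,        # tokenized report text (elided)
#                   eval_dataset=...)
# trainer.train()
```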

Prediction of Lymph Node Metastasis in Lung Cancer Using Deep Learning of Endobronchial Ultrasound Images With Size on CT and PET-CT Findings.

Oh JE, Chung HS, Gwon HR, Park EY, Kim HY, Lee GK, Kim TS, Hwangbo B

PubMed · Jun 1, 2025
Echo features of lymph nodes (LNs) influence target selection during endobronchial ultrasound-guided transbronchial needle aspiration (EBUS-TBNA). This study evaluates deep learning's diagnostic capabilities on EBUS images for detecting mediastinal LN metastasis in lung cancer, emphasising the added value of integrating a region of interest (ROI), LN size on CT, and PET-CT findings. We analysed 2901 EBUS images from 2055 mediastinal LN stations in 1454 lung cancer patients. ResNet18-based deep learning models were developed to classify images of true positive malignant and true negative benign LNs diagnosed by EBUS-TBNA using different inputs: original images, ROI images, and CT size and PET-CT data. Model performance was assessed using the area under the receiver operating characteristic curve (AUROC) and other diagnostic metrics. The model using only original EBUS images showed the lowest AUROC (0.870) and accuracy (80.7%) in classifying LN images. Adding ROI information slightly increased the AUROC (0.896) without a significant difference (p = 0.110). Further adding CT size resulted in a minimal change in AUROC (0.897), while adding PET-CT (original + ROI + PET-CT) showed a significant improvement (0.912, p = 0.008 vs. original; p = 0.002 vs. original + ROI + CT size). The model combining original and ROI EBUS images with CT size and PET-CT findings achieved the highest AUROC (0.914, p = 0.005 vs. original; p = 0.018 vs. original + ROI + PET-CT) and accuracy (82.3%). Integrating an ROI, LN size on CT, and PET-CT findings into the deep learning analysis of EBUS images significantly enhances the diagnostic capability of models for detecting mediastinal LN metastasis in lung cancer, with the integration of PET-CT data having a substantial impact.
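
One way to combine an EBUS image branch with tabular CT-size and PET-CT inputs, in the spirit of the best-performing model above, is late fusion of a ResNet18 embedding with the tabular features; the sketch below assumes this layout, which the abstract does not spell out.

```python
# Sketch: late fusion of a ResNet18 EBUS-image embedding with tabular inputs (LN size
# on CT, PET-CT positivity). This fusion layout is an assumption; the abstract does
# not describe the exact architecture.
import torch
import torch.nn as nn
from torchvision import models

class EbusFusionNet(nn.Module):
    def __init__(self, n_tabular: int = 2):
        super().__init__()
        self.cnn = models.resnet18(weights="IMAGENET1K_V1")
        self.cnn.fc = nn.Identity()           # 512-d image embedding
        self.classifier = nn.Sequential(
            nn.Linear(512 + n_tabular, 64), nn.ReLU(),
            nn.Linear(64, 2),                 # benign vs. malignant lymph node
        )

    def forward(self, image, tabular):        # image: (B, 3, H, W); tabular: (B, n_tabular)
        feats = self.cnn(image)
        return self.classifier(torch.cat([feats, tabular], dim=1))
```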