
PUUMA (Placental patch and whole-Uterus dual-branch U-Mamba-based Architecture): Functional MRI Prediction of Gestational Age at Birth and Preterm Risk

Diego Fajardo-Rojas, Levente Baljer, Jordina Aviles Verdera, Megan Hall, Daniel Cromb, Mary A. Rutherford, Lisa Story, Emma C. Robinson, Jana Hutter

arXiv preprint · Sep 8, 2025
Preterm birth is a major cause of mortality and lifelong morbidity in childhood. Its complex and multifactorial origins limit the effectiveness of current clinical predictors and impede optimal care. In this study, a dual-branch deep learning architecture (PUUMA) was developed to predict gestational age (GA) at birth using T2* fetal MRI data from 295 pregnancies, encompassing a heterogeneous and imbalanced population. The model integrates both global whole-uterus and local placental features. Its performance was benchmarked against linear regression on cervical length measurements obtained by experienced clinicians from anatomical MRI, and against other deep learning architectures. GA-at-birth predictions were assessed using mean absolute error; accuracy, sensitivity, and specificity were used to assess preterm classification. Both the fully automated MRI-based pipeline and the cervical length regression achieved comparable mean absolute errors (3 weeks) and good sensitivity (0.67) for detecting preterm birth, despite pronounced class imbalance in the dataset. These results provide a proof of concept for automated prediction of GA at birth from functional MRI and underscore the value of whole-uterus functional imaging in identifying at-risk pregnancies. Additionally, we demonstrate that manual, high-definition cervical length measurements derived from MRI, not currently routine in clinical practice, offer valuable predictive information. Future work will focus on expanding the cohort size and incorporating additional organ-specific imaging to improve generalisability and predictive performance.
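As a rough illustration of the dual-branch idea described above, the sketch below fuses features from a global whole-uterus volume and a local placental patch to regress GA at birth. Plain 3D convolutional encoders stand in for the U-Mamba branches; all module names, input sizes, and dimensions are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

class SimpleEncoder3D(nn.Module):
    """Tiny 3D conv encoder standing in for a U-Mamba branch (illustrative only)."""
    def __init__(self, out_dim=64):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(1, 8, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool3d(2),
            nn.Conv3d(8, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool3d(1),
        )
        self.proj = nn.Linear(16, out_dim)

    def forward(self, x):
        h = self.features(x).flatten(1)
        return self.proj(h)

class DualBranchGARegressor(nn.Module):
    """Fuse global (whole-uterus) and local (placental patch) features to predict GA at birth."""
    def __init__(self):
        super().__init__()
        self.uterus_branch = SimpleEncoder3D(64)
        self.placenta_branch = SimpleEncoder3D(64)
        self.head = nn.Sequential(nn.Linear(128, 32), nn.ReLU(), nn.Linear(32, 1))

    def forward(self, uterus_vol, placenta_patch):
        fused = torch.cat([self.uterus_branch(uterus_vol),
                           self.placenta_branch(placenta_patch)], dim=1)
        return self.head(fused).squeeze(1)  # predicted GA at birth, in weeks

model = DualBranchGARegressor()
ga_pred = model(torch.randn(2, 1, 32, 64, 64), torch.randn(2, 1, 16, 32, 32))
print(ga_pred.shape)  # torch.Size([2])
```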

Breast Cancer Detection in Thermographic Images via Diffusion-Based Augmentation and Nonlinear Feature Fusion

Sepehr Salem, M. Moein Esfahani, Jingyu Liu, Vince Calhoun

arXiv preprint · Sep 8, 2025
Data scarcity hinders deep learning for medical imaging. We propose a framework for breast cancer classification in thermograms that addresses this using a Diffusion Probabilistic Model (DPM) for data augmentation. Our DPM-based augmentation is shown to be superior to both traditional methods and a ProGAN baseline. The framework fuses deep features from a pre-trained ResNet-50 with handcrafted nonlinear features (e.g., Fractal Dimension) derived from U-Net segmented tumors. An XGBoost classifier trained on these fused features achieves 98.0% accuracy and 98.1% sensitivity. Ablation studies and statistical tests confirm that both the DPM augmentation and the nonlinear feature fusion are critical, statistically significant components of this success. This work validates the synergy between advanced generative models and interpretable features for creating highly accurate medical diagnostic tools.
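A minimal sketch of the fusion-and-classification step described above: deep embeddings from a pre-trained ResNet-50 are concatenated with handcrafted nonlinear features and fed to an XGBoost classifier. The diffusion-based augmentation and U-Net segmentation are not shown; the feature dimensions and data here are placeholders, not the authors' pipeline.

```python
import numpy as np
import torch
import torch.nn as nn
from torchvision import models
from xgboost import XGBClassifier

# Deep feature extractor: ResNet-50 with its classification head removed.
backbone = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
backbone.fc = nn.Identity()          # outputs 2048-d embeddings
backbone.eval()

def deep_features(images):           # images: (N, 3, 224, 224) tensor
    with torch.no_grad():
        return backbone(images).numpy()

# Placeholder data: thermogram crops plus hypothetical handcrafted nonlinear
# features (e.g. fractal dimension) computed elsewhere from segmented tumours.
images = torch.randn(8, 3, 224, 224)
handcrafted = np.random.rand(8, 5)
labels = np.array([0, 1] * 4)

fused = np.hstack([deep_features(images), handcrafted])  # simple feature-level fusion
clf = XGBClassifier(n_estimators=200, max_depth=4, eval_metric="logloss")
clf.fit(fused, labels)
print(clf.predict_proba(fused[:3]))
```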

Validation of a CT-brain analysis tool for measuring global cortical atrophy in older patient cohorts

Sukhdeep Bal, Emma Colbourne, Jasmine Gan, Ludovica Griffanti, Taylor Hanayik, Nele Demeyere, Jim Davies, Sarah T Pendlebury, Mark Jenkinson

arXiv preprint · Sep 8, 2025
Quantification of brain atrophy currently requires visual rating scales, which are time-consuming, so automated brain image analysis is warranted. We validated our automated deep learning (DL) tool measuring the Global Cerebral Atrophy (GCA) score against trained human raters, and its associations with age and cognitive impairment, in representative older (>65 years) patients. CT-brain scans were obtained from patients in acute medicine (ORCHARD-EPR), acute stroke (OCS studies) and a legacy sample. Scans were divided into training, optimisation and testing sets in a 60/20/20 ratio. CT images were assessed by two trained raters (rater-1: 864 scans, rater-2: 20 scans). Agreement between DL tool-predicted GCA scores (range 0-39) and the visual ratings was evaluated using mean absolute error (MAE) and Cohen's weighted kappa. Among 864 scans (ORCHARD-EPR=578, OCS=200, legacy scans=86), the MAE between the DL tool and rater-1 GCA scores was 3.2 overall, 3.1 for ORCHARD-EPR, 3.3 for OCS and 2.6 for the legacy scans, and half had DL-predicted GCA error between -2 and 2. Inter-rater agreement was kappa=0.45 between the DL tool and rater-1 and 0.41 between the tool and rater-2, whereas it was lower (0.28) between rater-1 and rater-2. There was no difference in GCA scores between the DL tool and the two raters (one-way ANOVA, p=0.35), or in mean GCA scores between the DL tool and rater-1 (paired t-test, t=-0.43, p=0.66), the tool and rater-2 (t=1.35, p=0.18), or between rater-1 and rater-2 (t=0.99, p=0.32). DL-tool GCA scores correlated with age and cognitive scores (both p<0.001). Our DL CT-brain analysis tool measured the GCA score accurately and without user input in real-world scans acquired from older patients. Our tool will enable extraction of standardised quantitative measures of atrophy at scale for use in health data research and will act as a proof of concept towards a point-of-care clinically approved tool.
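A minimal sketch of the agreement metrics used above, assuming paired integer GCA scores on the 0-39 scale. The abstract does not specify the kappa weighting scheme, so linear weighting is assumed here; the scores are synthetic placeholders.

```python
import numpy as np
from sklearn.metrics import mean_absolute_error, cohen_kappa_score

# Hypothetical paired scores on the 0-39 GCA scale.
rater_scores = np.random.randint(0, 40, size=100)
tool_scores = np.clip(rater_scores + np.random.randint(-4, 5, size=100), 0, 39)

mae = mean_absolute_error(rater_scores, tool_scores)
# The abstract reports a weighted kappa; linear weighting is an assumption here.
kappa = cohen_kappa_score(rater_scores, tool_scores, weights="linear")
print(f"MAE = {mae:.2f}, weighted kappa = {kappa:.2f}")
```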

Machine Learning Uncovers Novel Predictors of Peptide Receptor Radionuclide Therapy Eligibility in Neuroendocrine Neoplasms.

Sipka G, Farkas I, Bakos A, Maráz A, Mikó ZS, Czékus T, Bukva M, Urbán S, Pávics L, Besenyi Z

PubMed · Sep 8, 2025
Background: Neuroendocrine neoplasms (NENs) are a diverse group of malignancies in which somatostatin receptor expression can be crucial in guiding therapy. We aimed to evaluate the effectiveness of [99mTc]Tc-EDDA/HYNIC-TOC SPECT/CT in differentiating neuroendocrine tumor histology, selecting candidates for radioligand therapy, and identifying correlations between somatostatin receptor expression and non-imaging parameters in metastatic NENs. Methods: This retrospective study included 65 patients (29 women, 36 men, mean age 61) with metastatic neuroendocrine neoplasms confirmed by histology, follow-up, or imaging, comprising 14 poorly differentiated carcinomas and 51 well-differentiated tumors. Somatostatin receptor SPECT/CT results were assessed visually and semiquantitatively, with mathematical models incorporating histological, oncological, immunohistochemical, and laboratory parameters, followed by biostatistical analysis. Results: Of 392 lesions evaluated, the majority were metastases in the liver, lymph nodes, and bones. Mathematical models estimated somatostatin receptor expression accurately (70-83%) based on clinical parameters alone. Key factors included tumor origin, oncological treatments, and the immunohistochemical marker CK7. Associations were found between age, grade, disease extent, and markers (CEA, CA19-9, AFP). Conclusions: Our findings suggest that [99mTc]Tc-EDDA/HYNIC-TOC SPECT/CT effectively evaluates somatostatin receptor expression in NENs. Certain immunohistochemical and laboratory parameters, beyond recognized factors, show potential prognostic value, supporting individualized treatment strategies.

Predicting Rejection Risk in Heart Transplantation: An Integrated Clinical-Histopathologic Framework for Personalized Post-Transplant Care

Kim, D. D., Madabhushi, A., Margulies, K. B., Peyster, E. G.

medRxiv preprint · Sep 8, 2025
Background: Cardiac allograft rejection (CAR) remains the leading cause of early graft failure after heart transplantation (HT). Current diagnostics, including histologic grading of endomyocardial biopsy (EMB) and blood-based assays, lack accurate predictive power for future CAR risk. We developed a predictive model integrating routine clinical data with quantitative morphologic features extracted from routine EMBs to demonstrate the precision-medicine potential of mining existing data sources in post-HT care. Methods: In a retrospective cohort of 484 HT recipients with 1,188 EMB encounters within 6 months post-transplant, we extracted 370 quantitative pathology features describing lymphocyte infiltration and stromal architecture from digitized H&E-stained slides. Longitudinal clinical data comprising 268 variables, including lab values, immunosuppression records, and prior rejection history, were aggregated per patient. Using the XGBoost algorithm with rigorous cross-validation, we compared models based on four different data sources: clinical-only, morphology-only, cross-sectional-only, and fully integrated longitudinal data. The top predictors informed the derivation of a simplified Integrated Rejection Risk Index (IRRI), which relies on just 4 clinical and 4 morphology risk factors. Model performance was evaluated by AUROC, AUPRC, and time-to-event hazard ratios. Results: The fully integrated longitudinal model achieved superior predictive accuracy (AUROC 0.86, AUPRC 0.74). IRRI stratified patients into risk categories with distinct future CAR hazards: high-risk patients showed a markedly increased CAR risk (HR=6.15, 95% CI: 4.17-9.09), while low-risk patients had significantly reduced risk (HR=0.52, 95% CI: 0.33-0.84). This performance exceeded models based on only cross-sectional or single-domain data, demonstrating the value of multi-modal, temporal data integration. Conclusions: By integrating longitudinal clinical and biopsy morphologic features, IRRI provides a scalable, interpretable tool for proactive CAR risk assessment. This precision-based approach could support risk-adaptive surveillance and immunosuppression management strategies, offering a promising pathway toward safer, more personalized post-HT care with the potential to reduce unnecessary procedures and improve outcomes.
Clinical Perspective. What is new?
- Current tools for cardiac allograft monitoring detect rejection only after it occurs and are not designed to forecast future risk. This leads to missed opportunities for early intervention, avoidable patient injury, unnecessary testing, and inefficiencies in care.
- We developed a machine learning-based risk index that integrates clinical features, quantitative biopsy morphology, and longitudinal temporal trends to create a robust predictive framework.
- The Integrated Rejection Risk Index (IRRI) provides highly accurate prediction of future allograft rejection, identifying both high- and low-risk patients up to 90 days in advance, a capability entirely absent from current transplant management.
What are the clinical implications?
- Integrating quantitative histopathology with clinical data provides a more precise, individualized estimate of rejection risk in heart transplant recipients.
- This framework has the potential to guide post-transplant surveillance intensity, immunosuppressive management, and patient counseling.
- Automated biopsy analysis could be incorporated into digital pathology workflows, enabling scalable, multicenter application in real-world transplant care.
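A rough sketch of the model-comparison step described above: an XGBoost classifier evaluated by cross-validated AUROC and AUPRC across clinical-only, morphology-only, and integrated feature sets. The feature blocks and sample sizes are placeholders, not the study's data, and the longitudinal aggregation and IRRI derivation are not reproduced.

```python
import numpy as np
from xgboost import XGBClassifier
from sklearn.model_selection import cross_validate

# Hypothetical feature blocks: clinical variables and quantitative biopsy morphology.
n = 300
clinical = np.random.rand(n, 20)
morphology = np.random.rand(n, 30)
y = np.random.randint(0, 2, size=n)        # future rejection within the horizon

feature_sets = {
    "clinical_only": clinical,
    "morphology_only": morphology,
    "integrated": np.hstack([clinical, morphology]),
}

for name, X in feature_sets.items():
    clf = XGBClassifier(n_estimators=300, max_depth=3, eval_metric="logloss")
    scores = cross_validate(clf, X, y, cv=5,
                            scoring=["roc_auc", "average_precision"])
    print(name,
          "AUROC %.2f" % scores["test_roc_auc"].mean(),
          "AUPRC %.2f" % scores["test_average_precision"].mean())
```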

Enabling micro-assessments of skills in the simulated setting using temporal artificial intelligence-models.

Bang Andersen I, Søndergaard Svendsen MB, Risgaard AL, Sander Danstrup C, Todsen T, Tolsgaard MG, Friis ML

PubMed · Sep 7, 2025
Assessing skills in simulated settings is resource-intensive and lacks validated metrics. Advances in AI offer the potential for automated competence assessment, addressing these limitations. This study aimed to develop and validate a machine learning AI model for automated evaluation during simulation-based thyroid ultrasound (US) training. Videos from eight experts and 21 novices performing thyroid US on a simulator were analyzed. Frames were processed into sequences of 1, 10, and 50 seconds. A convolutional neural network with a pre-trained ResNet-50 base and a long short-term memory layer analyzed these sequences. The model was trained to distinguish competence levels (competent=1, not competent=0) using fourfold cross-validation, with performance metrics including precision, recall, F1 score, and accuracy. Bayesian updating and adaptive thresholding assessed performance over time. The AI model effectively differentiated expert and novice US performance. The 50-second sequences achieved the highest accuracy (70%) and F1 score (0.76). Experts showed significantly longer durations above the threshold (15.71 s) compared to novices (9.31 s, p = .030). A long short-term memory-based AI model provides near real-time, automated assessments of competence in US training. Utilizing temporal video data enables detailed micro-assessments of complex procedures, which may enhance interpretability and be applied across various procedural domains.
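A minimal sketch of the frame-sequence architecture described above: per-frame ResNet-50 embeddings pooled by an LSTM into a single competence probability. Hidden sizes, sequence length, and the sigmoid output are illustrative assumptions; the Bayesian updating and adaptive thresholding are not shown.

```python
import torch
import torch.nn as nn
from torchvision import models

class TemporalCompetenceClassifier(nn.Module):
    """Frame-wise ResNet-50 features pooled by an LSTM into a competence score."""
    def __init__(self, hidden=128):
        super().__init__()
        backbone = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
        backbone.fc = nn.Identity()                # 2048-d per-frame embedding
        self.backbone = backbone
        self.lstm = nn.LSTM(2048, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)           # competent (1) vs not (0)

    def forward(self, clips):                      # clips: (B, T, 3, H, W)
        b, t = clips.shape[:2]
        frames = clips.flatten(0, 1)               # (B*T, 3, H, W)
        feats = self.backbone(frames).view(b, t, -1)
        _, (h_n, _) = self.lstm(feats)
        return torch.sigmoid(self.head(h_n[-1])).squeeze(1)

model = TemporalCompetenceClassifier()
probs = model(torch.randn(2, 10, 3, 224, 224))     # e.g. two 10-frame sequences
print(probs)
```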

Machine Learning Model for Selection of Cementless Total Knee Arthroplasty Candidates Utilizing Patient and Radiographic Parameters.

Duncan AE, Malkani AL, Stoltz MJ, Ahmed N, Mullick M, Whitaker JE, Swiergosz A, Smith LS, Dourado A

PubMed · Sep 7, 2025
The use of cementless total knee arthroplasty (TKA) has increased significantly over the past decade. However, there are no objective criteria or consensus on parameters for patient selection for cementless TKA. The purpose of this study was to develop a machine learning model based on patient and radiographic parameters that could identify patients indicated for cementless TKA. We developed an explainable recommendation model using multiple patient and radiographic parameters (BMI, age, gender, and Hounsfield unit [HU] measurements of tibial bone density from CT). The predictive model was trained on medical, operative, and radiographic data of 217 patients who underwent primary TKA. HU density measurements of four quadrants of the proximal tibia were obtained at regions of interest on preoperative CT scans and were incorporated into the model as a surrogate for bone mineral density. The model employs Local Interpretable Model-agnostic Explanations (LIME) in combination with bagging ensemble techniques for artificial neural networks. Model testing on the 217-patient cohort included 22 cemented and 38 cementless TKA cases. The model correctly identified 19 cemented patients (sensitivity: 86.4%) and 37 cementless patients (specificity: 97.4%), with an AUC of 0.94. Use of cementless TKA has grown significantly, yet there are currently no standard radiographic criteria for patient selection. Our machine learning model demonstrated 97.4% specificity and should improve with more training data. Future improvements will include incorporating additional cases and developing automated HU extraction techniques.
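A rough sketch of the ensemble idea described above, assuming a bagging ensemble of small multilayer perceptrons over tabular inputs (BMI, age, gender, and proximal-tibia HU values) with sensitivity and specificity computed from the confusion matrix. The LIME explanation layer and the real feature set are not reproduced; all data here are placeholders.

```python
import numpy as np
from sklearn.ensemble import BaggingClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import confusion_matrix

# Hypothetical tabular inputs: BMI, age, sex (0/1) and four proximal-tibia HU values.
X = np.random.rand(217, 7)
y = np.random.randint(0, 2, size=217)      # 1 = cementless candidate, 0 = cemented

ensemble = BaggingClassifier(
    estimator=MLPClassifier(hidden_layer_sizes=(32,), max_iter=500),
    n_estimators=10,
)
ensemble.fit(X, y)

tn, fp, fn, tp = confusion_matrix(y, ensemble.predict(X)).ravel()
print("sensitivity %.3f  specificity %.3f" % (tp / (tp + fn), tn / (tn + fp)))
```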

Prediction of Pulmonary Ground-Glass Nodule Progression State on Initial Screening CT Using a Radiomics-Based Model.

Jin L, Liu Z, Sun Y, Gao P, Ma Z, Ye H, Liu Z, Dong X, Sun Y, Han J, Lv L, Guan D, Li M

PubMed · Sep 7, 2025
Diagnosing pulmonary ground-glass nodules (GGNs) on chest CT imaging remains challenging in clinical practice, and different stages of GGNs may require different clinical treatments. Hence, we sought to predict the progressive state of pulmonary GGNs (absorption or persistence) to support accurate clinical treatment and decision-making. We retrospectively enrolled 672 patients (absorption group: 299; control group: 373) from two medical centres between January 2017 and March 2023. Clinical information and radiomic features extracted from regions of interest on chest CT imaging were collected for all patients. All patients were randomly divided into training and test sets at a ratio of 7:3. Three models were constructed to identify GGN progression: a Rad-score model (Model 1), a clinical-factor model (Model 2), and a combined clinical-factor and Rad-score model (Model 3). In the test dataset, two radiologists (each with over 8 years of experience in chest imaging) evaluated the models' performance. Receiver operating characteristic curves, accuracy, sensitivity, and specificity were analysed. In the test set, the areas under the curve (AUCs) of Model 1 and Model 2 were 0.907 [0.868-0.946] and 0.918 [0.880-0.955], respectively. Model 3 achieved the best predictive performance, with an AUC of 0.959 [0.936-0.982], an accuracy of 0.881, a sensitivity of 0.902, and a specificity of 0.856. The intraclass correlation coefficient of Model 3 (0.86) exceeded that of the radiologists (0.83 and 0.71). We developed and validated a radiomics-based machine-learning method that achieved good performance in predicting the progressive state of GGNs on initial computed tomography. The model may improve follow-up management of GGNs.
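A minimal sketch of the three-model comparison described above, using logistic regression as a stand-in classifier: a Rad-score-only model, a clinical-factor-only model, and a combined model, each scored by test-set AUC. Feature counts and data are placeholders, not the study's radiomics pipeline.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

# Hypothetical inputs: a per-nodule Rad-score and a few clinical factors.
n = 672
rad_score = np.random.rand(n, 1)
clinical = np.random.rand(n, 4)
y = np.random.randint(0, 2, size=n)        # 1 = persistent GGN, 0 = absorbed

rad_tr, rad_te, clin_tr, clin_te, y_tr, y_te = train_test_split(
    rad_score, clinical, y, test_size=0.3, random_state=0)

models = {
    "Model 1 (Rad-score)": (rad_tr, rad_te),
    "Model 2 (clinical)": (clin_tr, clin_te),
    "Model 3 (combined)": (np.hstack([clin_tr, rad_tr]), np.hstack([clin_te, rad_te])),
}
for name, (X_tr, X_te) in models.items():
    clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
    print(name, "AUC %.3f" % roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1]))
```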

Early postnatal characteristics and differential diagnosis of choledochal cyst and cystic biliary atresia.

Tian Y, Chen S, Ji C, Wang XP, Ye M, Chen XY, Luo JF, Li X, Li L

PubMed · Sep 7, 2025
Choledochal cysts (CC) and cystic biliary atresia (CBA) present similarly in early infancy but require different treatment approaches. While CC surgery can be delayed until 3-6 months of age in asymptomatic patients, CBA requires intervention within 60 days to prevent cirrhosis. We aimed to develop a diagnostic model for early differentiation between these conditions. A total of 319 patients with hepatic hilar cysts (< 60 days old at surgery) treated at three hospitals between 2011 and 2022 were retrospectively analyzed. Clinical features, including biochemical markers and ultrasonographic measurements, were compared between the CC (n = 274) and CBA (n = 45) groups. Least absolute shrinkage and selection operator (LASSO) regression identified key diagnostic features, and 11 machine learning models were developed and compared. The CBA group showed higher levels of total bile acid, total bilirubin, direct bilirubin, γ-glutamyl transferase, aspartate aminotransferase, and alanine aminotransferase, while the longitudinal and transverse diameters of the cysts were larger in the CC group. The multilayer perceptron model demonstrated optimal performance, with 95.8% accuracy, 92.9% sensitivity, 96.3% specificity, and an area under the curve of 0.990. Decision curve analysis confirmed its clinical utility. Based on the model, we developed user-friendly diagnostic software for clinical implementation. Our machine learning approach differentiates CC from CBA in early infancy using routinely available clinical parameters. Early, accurate diagnosis facilitates timely surgical intervention for CBA cases, potentially improving patient outcomes.
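A rough sketch of the "LASSO feature selection followed by a multilayer perceptron" pipeline described above, using an L1-penalised logistic regression inside SelectFromModel as the selection step. Feature counts, the max_features cap, and the data are illustrative assumptions, not the study's variables.

```python
import numpy as np
from sklearn.feature_selection import SelectFromModel
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Hypothetical inputs: biochemical markers plus cyst diameters from ultrasound.
X = np.random.rand(319, 10)
y = np.random.randint(0, 2, size=319)          # 1 = CBA, 0 = CC

# Keep the strongest L1-weighted features, then classify with an MLP.
pipe = make_pipeline(
    StandardScaler(),
    SelectFromModel(
        LogisticRegression(penalty="l1", solver="liblinear", C=1.0),
        max_features=6, threshold=-np.inf,
    ),
    MLPClassifier(hidden_layer_sizes=(16,), max_iter=1000),
)
pipe.fit(X, y)
print(pipe.predict_proba(X[:3]))
```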

Multi-task learning for classification and prediction of adolescent idiopathic scoliosis based on fringe-projection three-dimensional imaging.

Feng CK, Chen YJ, Dinh QT, Tran KT, Liu CY

PubMed · Sep 6, 2025
This study aims to address the limitations of radiographic imaging and single-task learning models in adolescent idiopathic scoliosis assessment by developing a noninvasive, radiation-free diagnostic framework. A multi-task deep learning model was trained using structured back surface data acquired via fringe projection three-dimensional imaging. The model was designed to simultaneously predict the Cobb angle, curve type (thoracic, lumbar, mixed, none), and curve direction (left, right, none) by learning shared morphological features. The multi-task model achieved a mean absolute error (MAE) of 2.9° and a root mean square error (RMSE) of 6.9° for Cobb angle prediction, outperforming the single-task baseline (5.4° MAE, 12.5° RMSE). It showed strong correlation with radiographic measurements (R = 0.96, R² = 0.91). For curve classification, it reached 89% sensitivity in lumbar and mixed types, and 80% and 75% sensitivity for right and left directions, respectively, with an 87% positive predictive value for right-sided curves. The proposed multi-task learning model demonstrates that jointly learning related clinical tasks allows for the extraction of more robust and clinically meaningful geometric features from surface data. It outperforms traditional single-task approaches in both accuracy and stability. This framework provides a safe, efficient, and non-invasive alternative to X-ray-based scoliosis assessment and has the potential to support real-time screening and long-term monitoring of adolescent idiopathic scoliosis in clinical practice.
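A minimal sketch of the multi-task setup described above: a shared encoder over back-surface descriptors with a Cobb-angle regression head and two classification heads (curve type and direction), trained with a summed loss. The input representation, layer sizes, and equal loss weighting are assumptions, not the authors' fringe-projection model.

```python
import torch
import torch.nn as nn

class MultiTaskScoliosisNet(nn.Module):
    """Shared encoder with one regression head and two classification heads."""
    def __init__(self, in_features=2048):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(in_features, 256), nn.ReLU(),
                                     nn.Linear(256, 128), nn.ReLU())
        self.cobb_head = nn.Linear(128, 1)       # Cobb angle (degrees)
        self.type_head = nn.Linear(128, 4)       # thoracic / lumbar / mixed / none
        self.dir_head = nn.Linear(128, 3)        # left / right / none

    def forward(self, x):
        z = self.encoder(x)
        return self.cobb_head(z).squeeze(1), self.type_head(z), self.dir_head(z)

model = MultiTaskScoliosisNet()
surface = torch.randn(8, 2048)                   # flattened surface descriptors (illustrative)
cobb_true = torch.rand(8) * 40
type_true = torch.randint(0, 4, (8,))
dir_true = torch.randint(0, 3, (8,))

cobb_pred, type_logits, dir_logits = model(surface)
loss = (nn.functional.mse_loss(cobb_pred, cobb_true)
        + nn.functional.cross_entropy(type_logits, type_true)
        + nn.functional.cross_entropy(dir_logits, dir_true))
loss.backward()
print(loss.item())
```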