Page 5 of 139 · 1390 results

Machine learning and quantitative computed tomography radiomics prediction of postoperative functional recovery in paraplegic dogs.

Low D, Rutherford S

pubmed · Oct 2 2025
To develop a computed tomography (CT)-radiomics-based machine-learning algorithm for prediction of functional recovery in paraplegic dogs with acute intervertebral disc extrusion (IVDE). Multivariable prediction model development. Paraplegic dogs with acute IVDE: 128 deep-pain-positive and 86 deep-pain-negative (DPN). Radiomics features from noncontrast CT were combined with deep-pain perception in an extreme gradient boosting algorithm using an 80:20 train-test split. Model performance was assessed on the independent test set (Test<sub>full</sub>) and on the test set of DPN dogs (Test<sub>DPN</sub>). Deep-pain perception alone served as the control. Recovery of ambulation was recorded in 165/214 dogs (77.1%) after decompressive surgery. The model had an area under the receiver operating characteristic curve (AUC) of .9118 (95% CI: .8366-.9872), accuracy of 86.1% (95% CI: 74.4%-95.4%), sensitivity of 82.4% (95% CI: 68.6%-93.9%), and specificity of 100.0% (95% CI: 100.0%-100.0%) on Test<sub>full</sub>, and an AUC of .7692 (95% CI: .6250-.9000), accuracy of 72.7% (95% CI: 50.0%-90.9%), sensitivity of 53.8% (95% CI: 25.0%-80.0%), and specificity of 100.0% (95% CI: 100.0%-100.0%) on Test<sub>DPN</sub>. Deep-pain perception alone had an AUC of .8088 (95% CI: .7273-.8871), accuracy of 69.8% (95% CI: 55.8%-83.7%), sensitivity of 61.8% (95% CI: 45.5%-77.4%), and specificity of 100.0% (95% CI: 100.0%-100.0%), which differed from the model's performance (p = .02). Noncontrast CT-based radiomics provided prognostic information in dogs with severe spinal cord injury secondary to acute intervertebral disc extrusion. The model outperformed deep-pain perception alone in identifying dogs that recovered ambulation following decompressive surgery. Radiomics features from noncontrast CT, when integrated into a multimodal machine-learning algorithm, may be useful as an assistive tool for surgical decision making.
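The modeling setup described above — radiomics features concatenated with a binary clinical feature and fed to a gradient-boosted classifier with an 80:20 split — can be sketched as follows. The data are synthetic and scikit-learn's `GradientBoostingClassifier` stands in for the extreme gradient boosting implementation; feature counts and signal strength are illustrative assumptions, not the study's data.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic stand-ins: 214 dogs, 50 radiomics features, plus one binary
# clinical feature (deep-pain perception), mirroring the study design.
n_dogs, n_radiomics = 214, 50
X_radiomics = rng.normal(size=(n_dogs, n_radiomics))
deep_pain = rng.integers(0, 2, size=(n_dogs, 1)).astype(float)

# Concatenate imaging and clinical features into one design matrix.
X = np.hstack([X_radiomics, deep_pain])

# Synthetic outcome loosely coupled to deep-pain status so there is signal.
y = (deep_pain.ravel() + rng.normal(scale=0.5, size=n_dogs) > 0.5).astype(int)

# 80:20 train-test split, as in the abstract.
X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=0
)

model = GradientBoostingClassifier(random_state=0).fit(X_tr, y_tr)
auc = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])
print(f"test AUC: {auc:.3f}")
```

On this toy data the clinical feature dominates; with real radiomics features the imaging channel is what lifts performance above the clinical-only control.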

AortaDiff: A Unified Multitask Diffusion Framework For Contrast-Free AAA Imaging

Yuxuan Ou, Ning Bi, Jiazhen Pan, Jiancheng Yang, Boliang Yu, Usama Zidan, Regent Lee, Vicente Grau

arxiv preprint · Oct 1 2025
While contrast-enhanced CT (CECT) is standard for assessing abdominal aortic aneurysms (AAA), the required iodinated contrast agents pose significant risks, including nephrotoxicity, patient allergies, and environmental harm. To reduce contrast agent use, recent deep learning methods have focused on generating synthetic CECT from non-contrast CT (NCCT) scans. However, most adopt a multi-stage pipeline that first generates images and then performs segmentation, which leads to error accumulation and fails to leverage shared semantic and anatomical structures. To address this, we propose a unified deep learning framework that generates synthetic CECT images from NCCT scans while simultaneously segmenting the aortic lumen and thrombus. Our approach integrates conditional diffusion models (CDM) with multi-task learning, enabling end-to-end joint optimization of image synthesis and anatomical segmentation. Unlike previous multitask diffusion models, our approach requires no initial predictions (e.g., a coarse segmentation mask), shares both encoder and decoder parameters across tasks, and employs a semi-supervised training strategy to learn from scans with missing segmentation labels, a common constraint in real-world clinical data. We evaluated our method on a cohort of 264 patients, where it consistently outperformed state-of-the-art single-task and multi-stage models. For image synthesis, our model achieved a PSNR of 25.61 dB, compared to 23.80 dB from a single-task CDM. For anatomical segmentation, it improved the lumen Dice score to 0.89 from 0.87 and the challenging thrombus Dice score to 0.53 from 0.48 (nnU-Net). These segmentation enhancements led to more accurate clinical measurements, reducing the lumen diameter MAE to 4.19 mm from 5.78 mm and the thrombus area error to 33.85% from 41.45% when compared to nnU-Net. Code is available at https://github.com/yuxuanou623/AortaDiff.git.
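The semi-supervised joint objective — an image-synthesis error plus a segmentation term that is simply dropped for scans lacking labels — can be illustrated with a toy computation. The MSE/Dice combination and the weighting here are illustrative assumptions, not the paper's exact losses.

```python
import numpy as np

def dice_loss(pred, target, eps=1e-6):
    """Soft Dice loss for a binary mask (perfect overlap -> loss 0)."""
    inter = (pred * target).sum()
    return 1.0 - (2.0 * inter + eps) / (pred.sum() + target.sum() + eps)

def joint_loss(synth, cect, seg_pred, seg_label, has_label, w_seg=1.0):
    """Synthesis MSE plus a segmentation Dice term that is masked out
    for scans without segmentation labels (semi-supervised training)."""
    loss = float(np.mean((synth - cect) ** 2))
    if has_label:
        loss += w_seg * dice_loss(seg_pred, seg_label)
    return loss

# Toy 4x4 "scans": one with a segmentation label, one without.
synth = np.zeros((4, 4))
cect = np.zeros((4, 4))
mask = np.zeros((4, 4))
mask[1:3, 1:3] = 1.0

labeled = joint_loss(synth, cect, mask, mask, has_label=True)
unlabeled = joint_loss(synth, cect, mask, None, has_label=False)
print(labeled, unlabeled)
```

Because both tasks share the encoder and decoder, gradients from the synthesis term still update the whole network on unlabeled scans — which is what lets the model learn from data with missing segmentations.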

Machine learning combined with CT-based radiomics predicts the prognosis of oesophageal squamous cell carcinoma.

Liu M, Lu R, Wang B, Fan J, Wang Y, Zhu J, Luo J

pubmed · Oct 1 2025
This retrospective study aims to develop a machine learning model integrating preoperative CT radiomics and clinicopathological data to predict 3-year recurrence and recurrence patterns in postoperative oesophageal squamous cell carcinoma. Tumour regions were segmented using 3D-Slicer, and radiomic features were extracted via Python. LASSO regression selected prognostic features for model integration. Clinicopathological data included tumour length, lymph node positivity, differentiation grade, and neurovascular infiltration. A machine learning model was then established by combining the selected imaging features with the clinicopathological data, and its performance was validated. A nomogram was constructed for survival prediction, and risk stratification was performed using the predictions of the machine learning model and the nomogram. Survival analysis was performed for stage-based patient subgroups across risk stratifications to identify cohorts benefiting from adjuvant therapy. Patients were randomly divided 7:3 into a training cohort of 368 patients and a validation cohort of 158 patients. LASSO regression selected 6 features for recurrence prediction and 9 for recurrence-pattern prediction. Among 526 patients (mean age 63; 427 males), the model achieved high accuracy in predicting recurrence (training cohort AUC: 0.826 [logistic regression]/0.820 [SVM]; validation cohort: 0.830/0.825) and recurrence patterns (training: 0.801/0.799; validation: 0.806/0.798). Risk stratification based on the machine learning model and nomogram predictions revealed that adjuvant therapy significantly improved disease-free survival in stage II-III patients with predicted recurrence and low predicted survival (HR 0.372, 95% CI: 0.206-0.669; p < 0.001). Machine learning models exhibit excellent performance in predicting recurrence after surgery for oesophageal squamous cell carcinoma.
Radiomic features from contrast-enhanced CT imaging can predict the prognosis of patients with oesophageal squamous cell carcinoma, helping clinicians stratify risk and identify patient populations that could benefit from adjuvant therapy, thereby aiding medical decision-making. Prognostic models for oesophageal squamous cell carcinoma are lacking in current research. The prediction model we developed achieves high accuracy by combining radiomics features with clinicopathologic data, supports risk stratification, and aids clinical decision-making through its predictive outcomes.
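The LASSO step used above works by shrinking uninformative coefficients exactly to zero, leaving a sparse prognostic signature. A minimal sketch on synthetic data — the feature counts and effect sizes are illustrative, not the study's:

```python
import numpy as np
from sklearn.linear_model import LassoCV
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)

# Synthetic radiomics matrix: 526 patients x 100 candidate features,
# with only 6 truly prognostic features (numbers are illustrative).
n, p, k_true = 526, 100, 6
X = rng.normal(size=(n, p))
beta = np.zeros(p)
beta[:k_true] = 1.5
y = X @ beta + rng.normal(scale=1.0, size=n)

# Standardize, then let cross-validated LASSO shrink noise features to zero.
X_std = StandardScaler().fit_transform(X)
lasso = LassoCV(cv=5, random_state=0).fit(X_std, y)

selected = np.flatnonzero(lasso.coef_ != 0)
print(f"{selected.size} features retained")
```

The surviving features are what get combined with clinicopathological variables in the downstream classifier and nomogram.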

Human Readers versus AI-Based Systems in ASPECTS Scoring for Acute Ischemic Stroke: A Systematic Review and Meta-Analysis with Region-Specific Guidance.

Azzam AY, Hadadi I, Al-Shahrani LM, Shanqeeti UA, Alqurqush NA, Alsehli MA, Alali RS, Tammar RS, Morsy MM, Essibayi MA

pubmed · Oct 1 2025
The Alberta Stroke Program Early CT Score (ASPECTS) is widely used to evaluate early ischemic changes and guide thrombectomy decisions in acute stroke patients. However, significant interobserver variability in manual ASPECTS assessment presents a challenge. Recent advances in artificial intelligence have enabled the development of automated ASPECTS scoring systems; however, their performance relative to expert interpretation remains insufficiently studied. We conducted a systematic review and meta-analysis following PRISMA 2020 guidelines. We searched multiple scientific databases for studies comparing automated and manual ASPECTS on non-contrast computed tomography (NCCT). Interobserver reliability was assessed using pooled intraclass correlation coefficients (ICCs). Subgroup analyses were performed across software types, reference standards, time windows, and computed tomography-based factors. Eleven studies with a total of 1,976 patients were included. Automated ASPECTS demonstrated good reliability against reference standards (ICC: 0.72), comparable to expert readings (ICC: 0.62). RAPID ASPECTS performed highest (ICC: 0.86), especially for high-stakes decision-making. AI advantages were most pronounced with thin-slice CT (≤2.5 mm; +0.16), intermediate time windows (120-240 min; +0.16), and higher NIHSS scores (p = 0.026). AI-driven ASPECTS systems perform comparably to, and in some scenarios better than, human readers in detecting early ischemic changes. Strategic utilization focused on high-impact scenarios and region-specific performance patterns offers better diagnostic accuracy, shorter interpretation times, and more informed treatment selection in acute stroke care.
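The reliability metric used above, the ICC, has several forms; one common choice for agreement between raters is the two-way random-effects, absolute-agreement, single-rater ICC(2,1). A sketch of that computation from a ratings matrix (this illustrates the statistic itself, not the meta-analytic pooling):

```python
import numpy as np

def icc_2_1(ratings):
    """Two-way random-effects, absolute-agreement, single-rater ICC(2,1)
    from an (n_subjects x k_raters) matrix of scores."""
    ratings = np.asarray(ratings, dtype=float)
    n, k = ratings.shape
    grand = ratings.mean()
    row_means = ratings.mean(axis=1)
    col_means = ratings.mean(axis=0)
    ssr = k * ((row_means - grand) ** 2).sum()   # between-subjects SS
    ssc = n * ((col_means - grand) ** 2).sum()   # between-raters SS
    sst = ((ratings - grand) ** 2).sum()
    sse = sst - ssr - ssc                        # residual SS
    msr = ssr / (n - 1)
    msc = ssc / (k - 1)
    mse = sse / ((n - 1) * (k - 1))
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

# Two readers scoring ASPECTS (0-10) on five scans, near-perfect agreement.
scores = [[10, 10], [8, 8], [6, 7], [9, 9], [4, 4]]
print(round(icc_2_1(scores), 3))
```

Identical columns give ICC = 1; disagreement on even one scan pulls the value below 1, which is why single-point reader discrepancies matter so much in small validation sets.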

Intelligent extraction of CT image landmarks for improving cam-type femoroacetabular impingement assessment.

Tayyebinezhad S, Fatehi M, Arabalibeik H, Ghadiri H

pubmed · Oct 1 2025
Femoroacetabular impingement (FAI) with cam-type morphology is a common hip disorder that can result in groin pain and eventually osteoarthritis. Pre-operative assessment is based on parameters obtained from x-ray or computed tomography (CT) scans, namely the alpha angle (AA) and femoral head-neck offset (FHNO). The goal of our study was to develop a computer-aided detection (CAD) system to automatically select the hip region and measure diagnostic parameters from CT scans, overcoming the tedious and time-consuming process of subjectively selecting CT image slices. 271 cases of ordinary abdominopelvic CT examination were collected retrospectively from two hospitals between 2018 and 2022, each equipped with a distinct CT scanner. First, a convolutional neural network (CNN) was designed to select hip-region slices among abdominopelvic CT scan image series. This CNN was trained using 80 CT scans divided into 50%, 20%, and 30% for training, validation, and testing groups, respectively. Second, the most appropriate oblique slice passing through the femoral head-neck complex was selected, and AA and FHNO landmarks were calculated using image-processing algorithms. For ground truth, the best oblique slice for each hip was selected manually and its parameters measured. CT hip-region selection using the CNN yielded 99.34% accuracy. Pearson correlation coefficients between manual and automatic measurements were 0.964 for AA and 0.856 for FHNO. The results of this study are promising for the future development of a CAD software application for screening CT scans to aid physicians in assessing FAI. Question: Femoroacetabular impingement is a common, underdiagnosed hip disorder requiring time-consuming image-based measurements. Can AI improve the efficiency and consistency of its radiologic assessment?
Findings: Automated slice selection and landmark detection using a hybrid AI method improved measurement efficiency and accuracy, with minimal bias confirmed through Bland-Altman analysis. Clinical relevance: An AI-based method enables faster, more consistent evaluation of cam-type femoroacetabular impingement in routine CT images, supporting earlier identification and reducing dependency on operator experience in clinical workflows.
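Geometrically, the alpha angle measured above is the angle at the femoral head centre between the head-neck axis and the point where the anterior contour first leaves the best-fit circle of the head. A minimal sketch with hypothetical landmark coordinates (the landmark names and values are illustrative, not the paper's pipeline):

```python
import numpy as np

def alpha_angle(head_center, neck_point, deviation_point):
    """Alpha angle in degrees: the angle at the femoral head centre between
    the head-neck axis and the point where the anterior contour first
    deviates from the best-fit circle of the femoral head."""
    v1 = np.asarray(neck_point, float) - np.asarray(head_center, float)
    v2 = np.asarray(deviation_point, float) - np.asarray(head_center, float)
    cos = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))
    return float(np.degrees(np.arccos(np.clip(cos, -1.0, 1.0))))

# Hypothetical landmarks on an oblique slice (pixel units).
head_center, neck_point, deviation_point = (0.0, 0.0), (10.0, 0.0), (3.0, 8.0)
print(f"alpha angle: {alpha_angle(head_center, neck_point, deviation_point):.1f} deg")
```

Once the CAD system places those three landmarks automatically, the angle computation itself is deterministic — which is why landmark localization accuracy drives the reported correlations.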

Current and novel approaches for critical care management of aneurysmal subarachnoid hemorrhage.

Zoumprouli A, Carden R, Bilotta F

pubmed · Oct 1 2025
This review highlights recent advancements and evidence-based approaches in the critical care management of aneurysmal subarachnoid hemorrhage (aSAH), focusing on developments from the past 18 months. It addresses key challenges [rebleeding prevention, delayed cerebral ischemia (DCI), hydrocephalus, transfusion strategies, and temperature management], emphasizing multidisciplinary care and personalized treatment. Recent studies underscore the importance of systolic blood pressure control (<160 mmHg) to reduce rebleeding risk before aneurysm securing. Novel prognostic tools, including the modified 5-item frailty index and quantitative imaging software, show promise in improving outcome prediction. Prophylactic lumbar drainage may reduce DCI and improve neurological outcomes, while milrinone and computed tomography perfusion-guided therapies are being explored for vasospasm management. Transfusion strategies suggest a hemoglobin threshold of 9 g/dl may optimize outcomes. Temperature management remains contentious, but consensus recommends maintaining normothermia (36.0-37.5 °C) with continuous monitoring. Advances in aSAH care emphasize precision medicine, leveraging technology [e.g. Artificial intelligence (AI), quantitative imaging], and multidisciplinary collaboration. Key unresolved questions warrant multicenter trials to validate optimal blood pressure, transfusion, and temperature targets alongside emerging therapies for DCI.

Graph neural network model using radiomics for lung CT image segmentation.

Faizi MK, Qiang Y, Shagar MMB, Wei Y, Qiao Y, Zhao J, Urrehman Z

pubmed · Oct 1 2025
Early detection of lung cancer is critical for improving treatment outcomes, and automatic lung image segmentation plays a key role in diagnosing lung-related diseases such as cancer, COVID-19, and respiratory disorders. Challenges include overlapping anatomical structures, complex pixel-level feature fusion, and the intricate morphology of lung tissues, all of which impede segmentation accuracy. To address these issues, this paper introduces GEANet, a novel framework for lung segmentation in CT images. GEANet utilizes an encoder-decoder architecture enriched with radiomics-derived features and incorporates Graph Neural Network (GNN) modules to capture the complex heterogeneity of tumors. A boundary refinement module further improves image reconstruction and boundary delineation accuracy. The framework utilizes a hybrid loss function combining Focal Loss and IoU Loss to address class imbalance and enhance segmentation robustness. Experimental results on benchmark datasets demonstrate that GEANet outperforms eight state-of-the-art methods across various metrics, achieving superior segmentation accuracy while maintaining computational efficiency.
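The hybrid Focal + IoU loss mentioned above can be sketched in NumPy. The particular weighting and hyperparameters (gamma, alpha, the 0.5 mix) are illustrative assumptions, not GEANet's published settings:

```python
import numpy as np

def focal_loss(p, y, gamma=2.0, alpha=0.25, eps=1e-7):
    """Binary focal loss: down-weights easy pixels to counter class imbalance."""
    p = np.clip(p, eps, 1 - eps)
    pt = np.where(y == 1, p, 1 - p)          # prob assigned to the true class
    a = np.where(y == 1, alpha, 1 - alpha)   # class-balancing weight
    return float(np.mean(-a * (1 - pt) ** gamma * np.log(pt)))

def iou_loss(p, y, eps=1e-7):
    """Soft IoU loss: 1 - intersection/union over the predicted mask."""
    inter = (p * y).sum()
    union = p.sum() + y.sum() - inter
    return float(1.0 - (inter + eps) / (union + eps))

def hybrid_loss(p, y, w=0.5):
    """Weighted combination, as in GEANet's Focal + IoU formulation."""
    return w * focal_loss(p, y) + (1 - w) * iou_loss(p, y)

# Toy 8x8 ground-truth mask; compare a perfect and an inverted prediction.
y = np.zeros((8, 8))
y[2:6, 2:6] = 1.0
good = hybrid_loss(y.copy(), y)   # near-zero loss
bad = hybrid_loss(1.0 - y, y)     # large loss
print(good, bad)
```

The focal term handles the pixel-level class imbalance (most lung CT pixels are background), while the IoU term directly optimizes region overlap.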

From Concept to Code: AI-Powered CODE-ICH Transforming Acute Neurocritical Response for Hemorrhagic Strokes

Salman S, Corro R, Menser T, Sanghavi D, Kramer C, Moreno Franco P, Freeman WD

medrxiv preprint · Oct 1 2025
Background: Intracerebral hemorrhage (ICH) is among the most devastating forms of stroke, characterized by high early mortality and limited time-sensitive treatment protocols compared to ischemic stroke. The absence of standardized emergency response frameworks and the shortcomings of conventional scoring systems highlight the urgent need for innovation in neurocritical care. Objective: This paper introduces and evaluates the CODE-ICH framework, along with two AI-powered tools, HEADS-UP and SAHVAI, designed to transform acute ICH management through real-time detection, volumetric analysis, and predictive modeling. Methods: We describe the development and implementation of HEADS-UP, a cloud-based AI system for early ICH detection in underserved populations, and SAHVAI, a convolutional neural network-based tool for subarachnoid hemorrhage volume quantification. These tools were integrated into a novel paging and workflow system at a comprehensive stroke center to facilitate ultra-early intervention. Results: SAHVAI achieved 99.8% accuracy in volumetric analysis and provided 2D, 3D, and 4D visualization of hemorrhage progression. HEADS-UP enabled rapid triage and transfer, reducing reliance on subjective interpretation. Together, these tools operationalized the "time is brain" principle for hemorrhagic stroke and supported proactive, data-driven care in the neuro-intensive care unit (NICU). Conclusion: CODE-ICH, HEADS-UP, and SAHVAI represent a paradigm shift in hemorrhagic stroke care, delivering scalable, explainable, and multimodal AI solutions that enhance clinical decision-making, minimize delays, and promote equitable access to neurocritical care.
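Volumetric quantification of the kind SAHVAI reports reduces, at its core, to counting segmented voxels and scaling by voxel size. A minimal sketch — the mask, spacing, and function name are illustrative, not the tool's internals:

```python
import numpy as np

def hemorrhage_volume_ml(mask, spacing_mm):
    """Volume of a segmented hemorrhage in millilitres, given a binary
    voxel mask and (z, y, x) voxel spacing in mm (1 mL = 1000 mm^3)."""
    voxel_mm3 = float(np.prod(spacing_mm))
    return float(mask.sum()) * voxel_mm3 / 1000.0

# Toy mask: a 10x10x10-voxel bleed at 1.0 x 0.5 x 0.5 mm spacing.
mask = np.zeros((64, 64, 64), dtype=np.uint8)
mask[20:30, 20:30, 20:30] = 1
vol = hemorrhage_volume_ml(mask, (1.0, 0.5, 0.5))
print(f"{vol:.2f} mL")  # 1000 voxels x 0.25 mm^3 = 0.25 mL
```

The hard part, of course, is producing the mask; once the CNN segments the bleed, tracking volume across serial scans (the "4D" visualization) is repeated application of this calculation.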

PixelPrint4D: A 3D Printing Method of Fabricating Patient-Specific Deformable CT Phantoms for Respiratory Motion Applications.

Im JY, Micah N, Perkins AE, Mei K, Geagan M, Roshkovan L, Noël PB

pubmed · Oct 1 2025
Respiratory motion poses a significant challenge for clinical workflows in diagnostic imaging and radiation therapy. Many technologies such as motion artifact reduction and tumor tracking have been developed to compensate for its effect. To assess these technologies, respiratory motion phantoms (RMPs) are required as preclinical testing environments, for instance, in computed tomography (CT). However, current CT RMPs are highly simplified and do not exhibit realistic tissue structures or deformation patterns. With the rise of more complex motion compensation technologies such as deep learning-based algorithms, there is a need for more realistic RMPs. This work introduces PixelPrint4D, a 3D printing method for fabricating lifelike, patient-specific deformable lung phantoms for CT imaging. A 4DCT dataset of a lung cancer patient was acquired. The volumetric image data of the right lung at end inhalation was converted into 3D printer instructions using the previously developed PixelPrint software. A flexible 3D printing material was used to replicate variable densities voxel-by-voxel within the phantom. The accuracy of the phantom was assessed by acquiring CT scans of the phantom at rest and under various levels of compression. These phantom images were then compiled into a pseudo-4DCT dataset and compared to the reference patient 4DCT images. Metrics used to assess the phantom structural accuracy included mean attenuation errors, a 2-sample 2-sided Kolmogorov-Smirnov (KS) test on histograms, and the structural similarity index (SSIM). The phantom deformation properties were assessed by calculating displacement errors of the tumor and throughout the full lung volume, attenuation change errors, and Jacobian errors, as well as the relationship between Jacobian and attenuation changes. The phantom closely replicated patient lung structures, textures, and attenuation profiles.
SSIM was measured as 0.93 between the patient and phantom lung, suggesting a high level of structural accuracy. Furthermore, the phantom exhibited realistic nonrigid deformation patterns. The mean tumor motion errors in the phantom were ≤0.7 ± 0.6 mm in each orthogonal direction. Finally, the relationship between attenuation and local volume changes in the phantom had a strong correlation with that of the patient, with analysis of covariance yielding P = 0.83 and f = 0.04, suggesting no significant difference between the phantom and patient. PixelPrint4D facilitates the creation of highly realistic RMPs, exceeding the capabilities of existing models to provide enhanced testing environments for a wide range of emerging CT technologies.
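Two of the structural-accuracy metrics above — mean attenuation error and the two-sample KS statistic on HU histograms — are straightforward to compute from matched volumes. A sketch with synthetic lung-like attenuation values (the distributions and error magnitudes are illustrative, not the study's data):

```python
import numpy as np

def mean_attenuation_error(phantom_hu, patient_hu):
    """Mean absolute attenuation error (HU) between matched volumes."""
    return float(np.mean(np.abs(phantom_hu - patient_hu)))

def ks_statistic(a, b):
    """Two-sample Kolmogorov-Smirnov statistic: max distance between ECDFs."""
    a, b = np.sort(np.ravel(a)), np.sort(np.ravel(b))
    grid = np.concatenate([a, b])
    cdf_a = np.searchsorted(a, grid, side="right") / a.size
    cdf_b = np.searchsorted(b, grid, side="right") / b.size
    return float(np.max(np.abs(cdf_a - cdf_b)))

rng = np.random.default_rng(2)
patient = rng.normal(-700, 100, size=10_000)        # lung-like HU values
phantom = patient + rng.normal(0, 10, size=10_000)  # small fabrication error

print(mean_attenuation_error(phantom, patient), ks_statistic(phantom, patient))
```

A small KS statistic indicates the phantom's HU histogram closely matches the patient's, complementing the voxel-wise attenuation error.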

AI-Driven CBCT Analysis for Surgical Decision-Making and Mucosal Damage Prediction in Sinus Lift Surgery for Patients with Low RBH.

Deng Y, He Y, Liu C, Gao Z, Yu S, Cao S, Li C, Zhu Q, Ma P

pubmed · Oct 1 2025
Decision-making for maxillary sinus floor elevation (MSFE) surgery in patients with low residual bone height (<4 mm) presents significant challenges, particularly in selecting surgical approaches and predicting intraoperative mucosal perforation. Traditional methods rely heavily on physician experience, lack standardization and objectivity, and often fail to meet the demands of precision medicine. This study aims to build an intelligent decision-making system based on deep learning to optimize surgical selection and predict the risk of mucosal perforation, providing clinicians with a reliable auxiliary tool. This study retrospectively analysed the cone-beam computed tomography imaging data of 79 patients who underwent MSFE and constructed a three-dimensional (3D) deep-learning model based on the overall CT data of the patients for surgical procedure selection and prediction of mucosal perforation. The model innovatively introduced the Convolutional Block Attention Module mechanism and depthwise separable convolution technology to enhance the model's ability to capture spatial features and computational efficiency. The model was rigorously trained and validated on multiple datasets, with visualization achieved through attention heatmaps to improve interpretability. The modified EfficientNet model achieved an F1 score of 0.6 in the procedure decision task of MSFE. For predicting mucosal perforation, the improved ResNet model achieved an accuracy of 0.8485 and an F1-score of 0.7273 on the mixed dataset. In the experimental group, the improved ResNet model achieved an accuracy of 0.8235, a recall of 0.7619, and an F1-score of 0.7302. In the control group, the model also maintained stable performance, with an F1-score of 0.6483. Overall, the 3D convolutional model enhanced the accuracy and stability of mucosal perforation prediction by leveraging the spatial features of cone-beam computed tomography imaging, demonstrating a certain degree of generalization capability. 
This study is the first to construct a deep learning-based 3D intelligent decision-making model for MSFE. These findings confirm the model's effectiveness in surgical decision-making and in predicting the risk of mucosal perforation. The system provides an objective decision-making basis for clinicians, improves the standardization level of complex case management, and demonstrates potential for clinical application.
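The channel-attention branch of the Convolutional Block Attention Module the authors add can be sketched in a few lines of NumPy: average- and max-pooled channel descriptors pass through a shared two-layer MLP, are summed, squashed with a sigmoid, and rescale each channel. Shapes and weights here are illustrative, and CBAM's spatial-attention branch is not shown:

```python
import numpy as np

def channel_attention(feat, w1, w2):
    """CBAM-style channel attention on a (C, H, W) feature map."""
    avg = feat.mean(axis=(1, 2))   # (C,) average-pooled descriptor
    mx = feat.max(axis=(1, 2))     # (C,) max-pooled descriptor
    mlp = lambda v: w2 @ np.maximum(w1 @ v, 0.0)   # shared MLP with ReLU
    weights = 1.0 / (1.0 + np.exp(-(mlp(avg) + mlp(mx))))  # sigmoid, (C,)
    return feat * weights[:, None, None]           # rescale each channel

rng = np.random.default_rng(3)
C, H, W, r = 8, 4, 4, 2                 # r = channel reduction ratio
feat = rng.normal(size=(C, H, W))
w1 = rng.normal(size=(C // r, C)) * 0.1  # reduction layer
w2 = rng.normal(size=(C, C // r)) * 0.1  # expansion layer

out = channel_attention(feat, w1, w2)
print(out.shape)
```

Because the sigmoid outputs lie in (0, 1), attention can only attenuate channels, never amplify them — the module reweights which channels the downstream layers attend to.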