Page 32 of 1611605 results

Deep learning-based clinical decision support system for intracerebral hemorrhage: an imaging-based AI-driven framework for automated hematoma segmentation and trajectory planning.

Gan Z, Xu X, Li F, Kikinis R, Zhang J, Chen X

PubMed · Jul 1 2025
Intracerebral hemorrhage (ICH) remains a critical neurosurgical emergency with high mortality and long-term disability. Despite advancements in minimally invasive techniques, procedural precision remains limited by hematoma complexity and resource disparities, particularly in underserved regions where 68% of global ICH cases occur. Therefore, the authors aimed to introduce a deep learning-based decision support and planning system to democratize surgical planning and reduce operator dependence. A retrospective cohort of 347 patients (31,024 CT slices) from a single hospital (March 2016-June 2024) was analyzed. The framework integrated nnU-Net-based hematoma and skull segmentation, CT reorientation via ocular landmarks (mean angular correction 20.4° [SD 8.7°]), safety zone delineation with dual anatomical corridors, and trajectory optimization prioritizing maximum hematoma traversal and critical structure avoidance. A validated scoring system was implemented for risk stratification. With the artificial intelligence (AI)-driven system, the automated segmentation accuracy reached clinical-grade performance (Dice similarity coefficient 0.90 [SD 0.14] for hematoma and 0.99 [SD 0.035] for skull), with strong interrater reliability (intraclass correlation coefficient 0.91). For trajectory planning of supratentorial hematomas, the system achieved a low-risk trajectory in 80.8% (252/312) and a moderate-risk trajectory in 15.4% (48/312) of patients, while replanning was required due to high-risk designations in 3.8% of patients (12/312). This AI-driven system demonstrated robust efficacy for supratentorial ICH, addressing 60% of prevalent hemorrhage subtypes. While limitations remain in infratentorial hematomas, this novel automated hematoma segmentation and surgical planning system could be helpful in assisting less-experienced neurosurgeons with limited resources in primary healthcare settings.
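The Dice similarity coefficients reported above (0.90 for hematoma, 0.99 for skull) measure voxel-wise overlap between a predicted and a reference segmentation mask. A minimal NumPy sketch with toy 2D masks (illustrative only, not the study's nnU-Net pipeline):

```python
import numpy as np

def dice_coefficient(pred: np.ndarray, truth: np.ndarray) -> float:
    """Dice similarity coefficient between two binary masks."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    total = pred.sum() + truth.sum()
    if total == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return float(2.0 * intersection / total)

# Toy example: two overlapping 2D masks (hypothetical data).
a = np.zeros((4, 4), dtype=bool); a[1:3, 1:3] = True  # 4 "voxels"
b = np.zeros((4, 4), dtype=bool); b[1:3, 1:4] = True  # 6 "voxels"
print(round(dice_coefficient(a, b), 3))  # 2*4/(4+6) = 0.8
```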

Generation of synthetic CT-like imaging of the spine from biplanar radiographs: comparison of different deep learning architectures.

Bottini M, Zanier O, Da Mutten R, Gandia-Gonzalez ML, Edström E, Elmi-Terander A, Regli L, Serra C, Staartjes VE

PubMed · Jul 1 2025
This study compared two deep learning architectures-generative adversarial networks (GANs) and convolutional neural networks combined with implicit neural representations (CNN-INRs)-for generating synthetic CT (sCT) images of the spine from biplanar radiographs. The aim of the study was to identify the most robust and clinically viable approach for this potential intraoperative imaging technique. A spine CT dataset of 216 training and 54 validation cases was used. Digitally reconstructed radiographs (DRRs) served as 2D inputs for training both models under identical conditions for 170 epochs. Evaluation metrics included the Structural Similarity Index Measure (SSIM), peak signal-to-noise ratio (PSNR), and cosine similarity (CS), complemented by qualitative assessments of anatomical fidelity. The GAN model achieved a mean SSIM of 0.932 ± 0.015, PSNR of 19.85 ± 1.40 dB, and CS of 0.671 ± 0.177. The CNN-INR model demonstrated a mean SSIM of 0.921 ± 0.015, PSNR of 21.96 ± 1.20 dB, and CS of 0.707 ± 0.114. Statistical analysis revealed significant differences for SSIM (p = 0.001) and PSNR (p < 0.001), while CS differences were not statistically significant (p = 0.667). Qualitative evaluations consistently favored the GAN model, which produced more anatomically detailed and visually realistic sCT images. This study demonstrated the feasibility of generating spine sCT images from biplanar radiographs using GAN and CNN-INR models. While neither model achieved clinical-grade outputs, the GAN architecture showed greater potential for generating anatomically accurate and visually realistic images. These findings highlight the promise of sCT image generation from biplanar radiographs as an innovative approach to reducing radiation exposure and improving imaging accessibility, with GANs emerging as the more promising avenue for further research and clinical integration.
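The PSNR and cosine similarity used to compare the GAN and CNN-INR outputs can be computed directly from image arrays. A minimal NumPy sketch on synthetic images (the `data_range` value and noise level are illustrative assumptions, not the study's setup):

```python
import numpy as np

def psnr(reference: np.ndarray, test: np.ndarray, data_range: float) -> float:
    """Peak signal-to-noise ratio in dB."""
    mse = np.mean((reference - test) ** 2)
    return float(10.0 * np.log10(data_range ** 2 / mse))

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between flattened images."""
    a, b = a.ravel(), b.ravel()
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

rng = np.random.default_rng(0)
ref = rng.uniform(0.0, 1.0, size=(64, 64))          # synthetic "ground truth"
noisy = ref + rng.normal(0.0, 0.01, size=ref.shape)  # synthetic "sCT"
print(psnr(ref, noisy, data_range=1.0))  # ~40 dB for sigma = 0.01
print(cosine_similarity(ref, noisy))     # close to 1 for small noise
```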

Does alignment alone predict mechanical complications after adult spinal deformity surgery? A machine learning comparison of alignment, bone quality, and soft tissue.

Sundrani S, Doss DJ, Johnson GW, Jain H, Zakieh O, Wegner AM, Lugo-Pico JG, Abtahi AM, Stephens BF, Zuckerman SL

PubMed · Jul 1 2025
Mechanical complications are a vexing occurrence after adult spinal deformity (ASD) surgery. While achieving ideal spinal alignment in ASD surgery is critical, alignment alone may not fully explain all mechanical complications. The authors sought to determine which combination of inputs produced the most sensitive and specific machine learning model to predict mechanical complications using postoperative alignment, bone quality, and soft tissue data. A retrospective cohort study was performed in patients undergoing ASD surgery from 2009 to 2021. Inclusion criteria were a fusion ≥ 5 levels, sagittal/coronal deformity, and at least 2 years of follow-up. The primary exposure variables were 1) alignment, evaluated in both the sagittal and coronal planes using the L1-pelvic angle ± 3°, L4-S1 lordosis, sagittal vertical axis, pelvic tilt, and coronal vertical axis; 2) bone quality, evaluated by the T-score from a dual-energy x-ray absorptiometry scan; and 3) soft tissue, evaluated by the paraspinal muscle-to-vertebral body ratio and fatty infiltration. The primary outcome was mechanical complications. Seven machine learning models, each also incorporating demographic data, were trained on all combinations of the three domains (alignment, bone quality, and soft tissue). The positive predictive value (PPV) was calculated for each model. Of 231 patients (24% male) undergoing ASD surgery with a mean age of 64 ± 17 years, 147 (64%) developed at least one mechanical complication. The model with alignment alone performed poorly, with a PPV of 0.85. However, the model combining alignment, bone quality, and soft tissue achieved a high PPV of 0.90, a sensitivity of 0.67, and a specificity of 0.84. Moreover, the model with alignment alone failed to predict 15 of every 100 complications, whereas the model with all three domains missed only 10 of 100. These results support the notion that not every mechanical failure is explained by alignment alone.
The authors found that a combination of alignment, bone quality, and soft tissue provided the most accurate prediction of mechanical complications after ASD surgery. While achieving optimal alignment is essential, additional data including bone and soft tissue are necessary to minimize mechanical complications.
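The PPV, sensitivity, and specificity figures above all derive from a 2 × 2 confusion matrix. A minimal sketch with hypothetical counts (not the study's data):

```python
def confusion_metrics(tp: int, fp: int, tn: int, fn: int) -> dict:
    """Positive predictive value, sensitivity, and specificity from counts."""
    return {
        "ppv": tp / (tp + fp),          # of predicted positives, how many are real
        "sensitivity": tp / (tp + fn),  # of real positives, how many are caught
        "specificity": tn / (tn + fp),  # of real negatives, how many are cleared
    }

# Hypothetical counts for illustration only.
m = confusion_metrics(tp=90, fp=10, tn=84, fn=16)
print(m)  # ppv 0.90, sensitivity ≈ 0.85, specificity ≈ 0.89
```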

Orbital CT deep learning models in thyroid eye disease rival medical specialists' performance in optic neuropathy prediction in a quaternary referral center and revealed impact of the bony walls.

Kheok SW, Hu G, Lee MH, Wong CP, Zheng K, Htoon HM, Lei Z, Tan ASM, Chan LL, Ooi BC, Seah LL

PubMed · Jul 1 2025
To develop and evaluate orbital CT deep learning (DL) models for optic neuropathy (ON) prediction in patients diagnosed with thyroid eye disease (TED), using partial versus entire and 2D versus 3D images as input. Patients with TED ± ON diagnosed at a quaternary-level practice who underwent orbital CT between 2002 and 2017 were included. DL models were developed using annotated CT data and used to evaluate the hold-out test set. ON classification performance was compared between the models and medical specialists, and saliency maps were applied to randomized cases. Of 252 orbits in 126 TED patients (mean age, 51 years; 81 women), 36 had clinically confirmed ON. With 2D image input for ON prediction, the models achieved (a) a sensitivity of 89% and AUC of 0.86 on entire coronal orbital apex annotations including the bony walls, and (b) a specificity of 92% and AUC of 0.79 on partial annotations of the axial lateral orbital wall only. ON classification performance was similar (p = 0.58) between the DL model and medical specialists. DL models trained on 2D CT annotations rival medical specialists in ON classification, with the potential to objectively enhance clinical triage for sight-saving intervention and to incorporate model variants in the workflow to harness their differential performance metrics.

¹⁸F-FDG dose reduction using deep learning-based PET reconstruction.

Akita R, Takauchi K, Ishibashi M, Kondo S, Ono S, Yokomachi K, Ochi Y, Kiguchi M, Mitani H, Nakamura Y, Awai K

PubMed · Jul 1 2025
A deep learning-based image reconstruction (DLR) algorithm that reduces statistical noise has been developed for PET/CT imaging. It may allow the administered dose of ¹⁸F-FDG to be reduced, minimizing radiation exposure while maintaining diagnostic quality. This retrospective study evaluated whether the injected ¹⁸F-FDG dose could be reduced by applying DLR to PET images. To this end, we compared the quantitative image quality metrics and the false-positive rate between DLR with a reduced ¹⁸F-FDG dose and ordered-subsets expectation maximization (OSEM) with a standard dose. This study included 90 oncology patients who underwent ¹⁸F-FDG PET/CT, divided into 3 groups of 30 patients each: group A (¹⁸F-FDG dose per body weight: 2.00-2.99 MBq/kg; PET image reconstruction: DLR), group B (3.00-3.99 MBq/kg; DLR), and group C (standard-dose group; 4.00-4.99 MBq/kg; OSEM). The evaluation used the signal-to-noise ratio (SNR), target-to-background ratio (TBR), and false-positive rate. DLR yielded significantly higher SNRs in groups A and B than in group C (p < 0.001). There was no significant difference in TBR between groups A and C or between groups B and C (p = 0.983 and 0.605, respectively). In group B, more than 80% of patients weighing less than 75 kg had at most one false-positive result. In contrast, in group B patients weighing 75 kg or more, as well as in group A, fewer than 80% of patients had at most one false-positive result. Our findings suggest that the injected ¹⁸F-FDG dose can be reduced to 3.0 MBq/kg in patients weighing less than 75 kg by applying DLR. Compared with the dose recommended in the European Association of Nuclear Medicine (EANM) guidelines for 90 s per bed position (4.7 MBq/kg), this represents a dose reduction of 36%. Further optimization of DLR algorithms is required to maintain comparable diagnostic accuracy in patients weighing 75 kg or more.
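The 36% figure follows directly from the EANM-recommended 4.7 MBq/kg and the study's proposed 3.0 MBq/kg; a one-line sketch:

```python
def dose_reduction_percent(standard: float, reduced: float) -> float:
    """Percentage reduction of injected activity (same units, e.g. MBq/kg)."""
    return 100.0 * (standard - reduced) / standard

# EANM-recommended 4.7 MBq/kg vs. the 3.0 MBq/kg suggested by the study.
print(round(dose_reduction_percent(4.7, 3.0)))  # 36
```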

Association between antithrombotic medications and intracranial hemorrhage among older patients with mild traumatic brain injury: a multicenter cohort study.

Benhamed A, Crombé A, Seux M, Frassin L, L'Huillier R, Mercier E, Émond M, Millon D, Desmeules F, Tazarourte K, Gorincour G

PubMed · Jul 1 2025
To measure the association between antithrombotic (AT) medications (anticoagulants and antiplatelets) and the risk of traumatic intracranial hemorrhage (ICH) in older adults with mild traumatic brain injury (mTBI). We conducted a retrospective multicenter study across 103 emergency departments affiliated with a teleradiology company dedicated to emergency imaging between 2020 and 2022. Older adults (≥65 years old) with mTBI who underwent a head computed tomography scan were included. Natural language processing models were used to label the free-text emergency physician forms and radiology reports, and a multivariable logistic regression model was used to measure the association between AT medications and the occurrence of ICH. A total of 5948 patients [median age 84.6 (74.3-89.1) years, 58.1% female] were included, of whom 781 (13.1%) had an ICH. Among them, 3177 (53.4%) patients were treated with at least one AT agent. No AT medication was associated with a higher risk of ICH: antiplatelet agents, odds ratio 0.98, 95% confidence interval (0.81-1.18); direct oral anticoagulants, 0.82 (0.60-1.09); and vitamin K antagonists, 0.66 (0.37-1.10). Conversely, a high-level fall [1.68 (1.15-2.4)], a Glasgow Coma Scale score of 14 [1.83 (1.22-2.68)], a cutaneous head impact [1.5 (1.17-1.92)], vomiting [1.59 (1.18-2.14)], amnesia [1.35 (1.02-1.79)], and a suspected fracture of the skull vault [9.3 (14.2-26.5)] or of the facial bones [1.34 (1.02-1.75)] were associated with a higher risk of ICH. This study found no association between AT medications and an increased risk of ICH among older patients with mTBI, suggesting that routine neuroimaging in this population may offer limited benefit and that additional variables should be considered in the imaging decision.
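The odds ratios and confidence intervals above come from a multivariable logistic regression: each OR is the exponential of a fitted coefficient, and the 95% CI comes from the coefficient's standard error. A minimal sketch with an illustrative coefficient (not fitted to the study's data):

```python
import math

def odds_ratio_ci(beta: float, se: float, z: float = 1.96) -> tuple:
    """Odds ratio and 95% CI from a logistic-regression coefficient."""
    return (math.exp(beta),
            math.exp(beta - z * se),
            math.exp(beta + z * se))

# Illustrative coefficient and standard error (hypothetical values).
or_, lo, hi = odds_ratio_ci(beta=0.52, se=0.18)
print(f"OR {or_:.2f} (95% CI {lo:.2f}-{hi:.2f})")  # OR 1.68 (95% CI 1.18-2.39)
```

An OR whose CI spans 1.0 (as for all three AT classes in the abstract) indicates no statistically significant association.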

Improving Tuberculosis Detection in Chest X-Ray Images Through Transfer Learning and Deep Learning: Comparative Study of Convolutional Neural Network Architectures.

Mirugwe A, Tamale L, Nyirenda J

PubMed · Jul 1 2025
Tuberculosis (TB) remains a significant global health challenge, as current diagnostic methods are often resource-intensive, time-consuming, and inaccessible in many high-burden communities, necessitating more efficient and accurate diagnostic methods to improve early detection and treatment outcomes. This study aimed to evaluate the performance of 6 convolutional neural network architectures-Visual Geometry Group-16 (VGG16), VGG19, Residual Network-50 (ResNet50), ResNet101, ResNet152, and Inception-ResNet-V2-in classifying chest x-ray (CXR) images as either normal or TB-positive. The impact of data augmentation on model performance, training times, and parameter counts was also assessed. The dataset of 4200 CXR images, comprising 700 labeled as TB-positive and 3500 as normal cases, was used to train and test the models. Evaluation metrics included accuracy, precision, recall, F1-score, and area under the receiver operating characteristic curve. The computational efficiency of each model was analyzed by comparing training times and parameter counts. VGG16 outperformed the other architectures, achieving an accuracy of 99.4%, precision of 97.9%, recall of 98.6%, F1-score of 98.3%, and area under the receiver operating characteristic curve of 98.25%. This superior performance is significant because it demonstrates that a simpler model can deliver exceptional diagnostic accuracy while requiring fewer computational resources. Surprisingly, data augmentation did not improve performance, suggesting that the original dataset's diversity was sufficient. Models with large numbers of parameters, such as ResNet152 and Inception-ResNet-V2, required longer training times without yielding proportionally better performance. Simpler models like VGG16 offer a favorable balance between diagnostic accuracy and computational efficiency for TB detection in CXR images. 
These findings highlight the need to tailor model selection to task-specific requirements, providing valuable insights for future research and clinical implementations in medical image classification.
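The F1-score reported for VGG16 is the harmonic mean of its precision and recall; plugging in the reported values reproduces it up to rounding:

```python
def f1_score(precision: float, recall: float) -> float:
    """Harmonic mean of precision and recall."""
    return 2 * precision * recall / (precision + recall)

# Reported VGG16 precision (97.9%) and recall (98.6%).
print(round(f1_score(0.979, 0.986), 3))  # 0.982 (the paper reports 98.3%)
```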

How I Do It: Three-Dimensional MR Neurography and Zero Echo Time MRI for Rendering of Peripheral Nerve and Bone.

Lin Y, Tan ET, Campbell G, Breighner RE, Fung M, Wolfe SW, Carrino JA, Sneag DB

PubMed · Jul 1 2025
MR neurography sequences provide excellent nerve-to-background soft tissue contrast, whereas a zero echo time (ZTE) MRI sequence provides cortical bone contrast. By demonstrating the spatial relationship between nerves and bones, a combination of rendered three-dimensional (3D) MR neurography and ZTE sequences provides a roadmap for clinical decision-making, particularly for surgical intervention. In this article, the authors describe the method for fused rendering of peripheral nerve and bone by combining nerve and bone structures from 3D MR neurography and 3D ZTE MRI, respectively. The described method includes scanning acquisition, postprocessing that entails deep learning-based reconstruction techniques, and rendering techniques. Representative case examples demonstrate the steps and clinical use of these techniques. Challenges in nerve and bone rendering are also discussed.

GAN-based Denoising for Scan Time Reduction and Motion Correction of ¹⁸F FP-CIT PET/CT: A Multicenter External Validation Study.

Han H, Choo K, Jeon TJ, Lee S, Seo S, Kim D, Kim SJ, Lee SH, Yun M

PubMed · Jul 1 2025
AI-driven scan time reduction is rapidly transforming medical imaging, with benefits such as improved patient comfort and enhanced efficiency. A dual contrastive learning generative adversarial network (DCLGAN) was developed to predict full-time PET scans from shorter, noisier scans, addressing the challenges of imaging patients with movement disorders. ¹⁸F FP-CIT PET/CT data from 391 patients with suspected parkinsonism were used [250 for training/validation and 141 for testing (hospital A)]. Ground truth (GT) images were reconstructed from 15-minute scans, while denoised images (DIs) were generated from 1-, 3-, 5-, and 10-minute scans. Image quality was assessed using the normalized root mean square error (NRMSE), peak signal-to-noise ratio (PSNR), structural similarity index measure (SSIM), visual analysis, and clinical metrics such as BPND and ISR for the diagnosis of non-neurodegenerative Parkinson disease (NPD), idiopathic PD (IPD), and atypical PD (APD). External validation used data from 2 hospitals with different scanners (hospital B: 1-, 3-, 5-, and 10-min scans; hospital C: 1-, 3-, and 5-min scans). In addition, motion artifact reduction was evaluated using the Dice similarity coefficient (DSC). In hospital A, NRMSE, PSNR, and SSIM values improved with scan duration, with the 5-minute DIs achieving optimal quality (NRMSE 0.008, PSNR 42.13, SSIM 0.98). Visual analysis rated DIs from scans ≥3 minutes as adequate or better. The mean BPND differences (95% CI) for the DIs were 0.19 (-0.01, 0.40), 0.11 (-0.02, 0.24), 0.08 (-0.03, 0.18), and 0.01 (-0.06, 0.07), with the CI widths decreasing significantly. The ISRs with the highest effect sizes for differentiating NPD, IPD, and APD (PP/AP, PP/VS, PC/VP) remained stable after denoising. External validation showed that the 10-minute DIs (hospital B) and 1-minute DIs (hospital C) reached the benchmarks of hospital A's image quality metrics, with similar trends in the visual analysis and BPND CIs. Furthermore, motion artifact correction in 9 patients yielded DSC improvements from 0.89 to 0.95 in striatal regions. The DL model can generate high-quality ¹⁸F FP-CIT PET images from shorter scans, enhancing patient comfort, minimizing motion artifacts, and maintaining diagnostic precision. Our study also provides insight into how image quality metrics can be used to determine the appropriate scan duration for scanners with varying sensitivities.
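The NRMSE used to grade the denoised images has several common normalizations (by dynamic range, mean, or Euclidean norm of the reference); a NumPy sketch normalizing by the reference dynamic range, which is an assumption since the study does not state its convention:

```python
import numpy as np

def nrmse(reference: np.ndarray, test: np.ndarray) -> float:
    """Root-mean-square error normalized by the reference dynamic range."""
    rmse = np.sqrt(np.mean((reference - test) ** 2))
    return float(rmse / (reference.max() - reference.min()))

# Synthetic images (hypothetical data, not PET reconstructions).
rng = np.random.default_rng(1)
ref = rng.uniform(0.0, 1.0, size=(32, 32))
deno = ref + rng.normal(0.0, 0.008, size=ref.shape)
print(nrmse(ref, deno))  # on the order of 0.008 for this noise level
```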

Computed Tomography Advancements in Plaque Analysis: From Histology to Comprehensive Plaque Burden Assessment.

Catapano F, Lisi C, Figliozzi S, Scialò V, Politi LS, Francone M

PubMed · Jul 1 2025
Advancements in coronary computed tomography angiography (CCTA) facilitated the transition from traditional histological approaches to comprehensive plaque burden assessment. Recent updates in the European Society of Cardiology (ESC) guidelines emphasize CCTA's role in managing chronic coronary syndrome by enabling detailed monitoring of atherosclerotic plaque progression. Limitations of conventional CCTA, such as spatial resolution challenges in accurately characterizing plaque components like thin-cap fibroatheromas and necrotic lipid-rich cores, are addressed with photon-counting detector CT (PCD-CT) technology. PCD-CT offers enhanced spatial resolution and spectral imaging, improving the detection and characterization of high-risk plaque features while reducing artifacts. The integration of artificial intelligence (AI) in plaque analysis enhances diagnostic accuracy through automated plaque characterization and radiomics. These technological advancements support a comprehensive approach to plaque assessment, incorporating hemodynamic evaluations, morphological metrics, and AI-driven analysis, thereby enabling personalized patient care and improved prediction of acute clinical events.
