Page 201 of 396 (3955 results)

Artificial intelligence image analysis for Hounsfield units in preoperative thoracolumbar CT scans: an automated screening for osteoporosis in patients undergoing spine surgery.

Feng E, Jayasuriya NM, Nathani KR, Katsos K, Machlab LA, Johnson GW, Freedman BA, Bydon M

PubMed · Jul 1 2025
This study aimed to develop an artificial intelligence (AI) model for automatically detecting Hounsfield unit (HU) values at the L1 vertebra in preoperative thoracolumbar CT scans. This model serves as a screening tool for osteoporosis in patients undergoing spine surgery, offering an alternative to traditional bone mineral density measurement methods like dual-energy x-ray absorptiometry. The authors utilized two CT scan datasets, comprising 501 images, which were split into training, validation, and test subsets. The nnU-Net framework was used for segmentation, followed by an algorithm to calculate HU values from the L1 vertebra. The model's performance was validated against manual HU calculations by expert raters on 56 CT scans. Statistical measures included the Dice coefficient, Pearson correlation coefficient, intraclass correlation coefficient (ICC), and Bland-Altman plots to assess the agreement between AI and human-derived HU measurements. The AI model achieved a high Dice coefficient of 0.91 for vertebral segmentation. The Pearson correlation coefficient between AI-derived HU and human-derived HU values was 0.96, indicating strong agreement. ICC values for interrater reliability were 0.95 and 0.94 for raters 1 and 2, respectively. The mean difference between AI and human HU values was 7.0 HU, with limits of agreement ranging from -21.1 to 35.2 HU. A paired t-test showed no significant difference between AI and human measurements (p = 0.21). The AI model demonstrated strong agreement with human experts in measuring HU values, validating its potential as a reliable tool for automated osteoporosis screening in spine surgery patients. This approach can enhance preoperative risk assessment and perioperative bone health optimization. Future research should focus on external validation and inclusion of diverse patient demographics to ensure broader applicability.
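The agreement statistics reported above (Pearson correlation, mean difference, and Bland-Altman limits of agreement) follow directly from paired HU measurements. A minimal sketch with made-up paired values, not the study's data:

```python
import math

def bland_altman(ai_hu, human_hu):
    """Mean difference and 95% limits of agreement (mean ± 1.96 SD of the differences)."""
    diffs = [a - h for a, h in zip(ai_hu, human_hu)]
    n = len(diffs)
    mean_d = sum(diffs) / n
    sd = math.sqrt(sum((d - mean_d) ** 2 for d in diffs) / (n - 1))
    return mean_d, mean_d - 1.96 * sd, mean_d + 1.96 * sd

def pearson_r(x, y):
    """Pearson correlation coefficient between two paired samples."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Illustrative paired L1 HU values (synthetic, not from the study)
ai = [152.0, 138.5, 120.3, 165.2, 99.8]
human = [149.1, 131.0, 118.7, 160.4, 95.2]
mean_d, lo, hi = bland_altman(ai, human)
r = pearson_r(ai, human)
```

The study's reported values (mean difference 7.0 HU, limits -21.1 to 35.2 HU, r = 0.96) would come out of exactly these formulas applied to the 56 paired scans.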

PROTEUS: A Physically Realistic Contrast-Enhanced Ultrasound Simulator-Part I: Numerical Methods.

Blanken N, Heiles B, Kuliesh A, Versluis M, Jain K, Maresca D, Lajoinie G

PubMed · Jul 1 2025
Ultrasound contrast agents (UCAs) have been used as vascular reporters for the past 40 years. The ability to enhance vascular features in ultrasound images with engineered lipid-shelled microbubbles has enabled breakthroughs such as the detection of tissue perfusion or super-resolution imaging of the microvasculature. However, advances in the field of contrast-enhanced ultrasound are hindered by experimental variables that are difficult to control in a laboratory setting, such as complex vascular geometries, the lack of ground truth, and tissue nonlinearities. In addition, the demand for large datasets to train deep learning-based computational ultrasound imaging methods calls for the development of a simulation tool that can reproduce the physics of ultrasound wave interactions with tissues and microbubbles. Here, we introduce a physically realistic contrast-enhanced ultrasound simulator (PROTEUS) consisting of four interconnected modules that account for blood flow dynamics in segmented vascular geometries, intravascular microbubble trajectories, ultrasound wave propagation, and nonlinear microbubble scattering. The first part of this study describes the numerical methods that enabled this development. We demonstrate that PROTEUS can generate contrast-enhanced radio-frequency (RF) data in various vascular architectures across the range of medical ultrasound frequencies. PROTEUS offers a customizable framework to explore novel ideas in the field of contrast-enhanced ultrasound imaging. It is released as an open-source tool for the scientific community.

GAN-based Denoising for Scan Time Reduction and Motion Correction of 18F FP-CIT PET/CT: A Multicenter External Validation Study.

Han H, Choo K, Jeon TJ, Lee S, Seo S, Kim D, Kim SJ, Lee SH, Yun M

PubMed · Jul 1 2025
AI-driven scan time reduction is rapidly transforming medical imaging, with benefits such as improved patient comfort and enhanced efficiency. A Dual Contrastive Learning Generative Adversarial Network (DCLGAN) was developed to predict full-time PET scans from shorter, noisier scans, addressing the challenges of imaging patients with movement disorders. 18F FP-CIT PET/CT data from 391 patients with suspected Parkinsonism were used [250 training/validation, 141 testing (hospital A)]. Ground truth (GT) images were reconstructed from 15-minute scans, while denoised images (DIs) were generated from 1-, 3-, 5-, and 10-minute scans. Image quality was assessed using normalized root mean square error (NRMSE), peak signal-to-noise ratio (PSNR), structural similarity index measure (SSIM), visual analysis, and clinical metrics such as BPND and ISR for the diagnosis of non-neurodegenerative Parkinson disease (NPD), idiopathic PD (IPD), and atypical PD (APD). External validation used data from two hospitals with different scanners (hospital B: 1-, 3-, 5-, and 10-min; hospital C: 1-, 3-, and 5-min). In addition, motion artifact reduction was evaluated using the Dice similarity coefficient (DSC). In hospital A, NRMSE, PSNR, and SSIM values improved with scan duration, with the 5-minute DIs achieving optimal quality (NRMSE 0.008, PSNR 42.13, SSIM 0.98). Visual analysis rated DIs from scans ≥3 minutes as adequate or higher. The mean BPND differences (95% CI) for each DI were 0.19 (-0.01, 0.40), 0.11 (-0.02, 0.24), 0.08 (-0.03, 0.18), and 0.01 (-0.06, 0.07), with the CIs narrowing significantly as scan duration increased. ISRs with the highest effect sizes for differentiating NPD, IPD, and APD (PP/AP, PP/VS, PC/VP) remained stable post-denoising. External validation showed that 10-minute DIs (hospital B) and 1-minute DIs (hospital C) reached the benchmarks of hospital A's image quality metrics, with similar trends in visual analysis and BPND CIs. Furthermore, motion artifact correction in 9 patients yielded DSC improvements from 0.89 to 0.95 in striatal regions. The DL model can generate high-quality 18F FP-CIT PET images from shorter scans, enhancing patient comfort, minimizing motion artifacts, and maintaining diagnostic precision. Our study also provides insights into how image quality assessment metrics can be used to determine the appropriate scan duration for scanners with varying sensitivities.
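The NRMSE and PSNR figures above have standard definitions. A minimal sketch of both, assuming NRMSE is normalized by the reference image's dynamic range and PSNR uses the reference maximum as peak (normalization conventions vary, and the paper's exact choices are not stated here):

```python
import math

def nrmse(ref, img):
    """Normalized RMSE: RMSE divided by the reference dynamic range (one common convention)."""
    mse = sum((r - x) ** 2 for r, x in zip(ref, img)) / len(ref)
    return math.sqrt(mse) / (max(ref) - min(ref))

def psnr(ref, img, peak=None):
    """Peak signal-to-noise ratio in dB; peak defaults to the reference maximum (an assumption)."""
    mse = sum((r - x) ** 2 for r, x in zip(ref, img)) / len(ref)
    if peak is None:
        peak = max(ref)
    return 10 * math.log10(peak ** 2 / mse)
```

Applied to a GT image and a DI flattened to value lists, these yield numbers on the same scale as the reported NRMSE 0.008 and PSNR 42.13 dB.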

An efficient attention Densenet with LSTM for lung disease detection and classification using X-ray images supported by adaptive R2-Unet-based image segmentation.

Betha SK, Dev DR, Sunkara K, Kodavanti PV, Putta A

PubMed · Jul 1 2025
Lung diseases represent one of the most prevalent health challenges globally, necessitating accurate diagnosis to improve patient outcomes. This work presents a novel deep learning-aided lung disease classification framework comprising three key phases: image acquisition, segmentation, and classification. Initially, chest X-ray images are taken from standard datasets. The lung regions are segmented using an Adaptive Recurrent Residual U-Net (AR2-UNet), whose parameters are optimised using the Enhanced Pufferfish Optimisation Algorithm (EPOA) to enhance segmentation accuracy. The segmented images are processed using an Attention-based DenseNet with Long Short-Term Memory (ADNet-LSTM) for robust categorisation. Experimental results demonstrate that the proposed model achieves the highest classification accuracy of 93.92%, significantly outperforming several baseline models, including ResNet at 90.77%, Inception at 89.55%, DenseNet at 89.66%, and Long Short-Term Memory (LSTM) at 91.79%. Thus, the proposed framework offers a dependable and efficient solution for lung disease detection, supporting clinicians in early and accurate diagnosis.

How I Do It: Three-Dimensional MR Neurography and Zero Echo Time MRI for Rendering of Peripheral Nerve and Bone.

Lin Y, Tan ET, Campbell G, Breighner RE, Fung M, Wolfe SW, Carrino JA, Sneag DB

PubMed · Jul 1 2025
MR neurography sequences provide excellent nerve-to-background soft tissue contrast, whereas a zero echo time (ZTE) MRI sequence provides cortical bone contrast. By demonstrating the spatial relationship between nerves and bones, a combination of rendered three-dimensional (3D) MR neurography and ZTE sequences provides a roadmap for clinical decision-making, particularly for surgical intervention. In this article, the authors describe the method for fused rendering of peripheral nerve and bone by combining nerve and bone structures from 3D MR neurography and 3D ZTE MRI, respectively. The described method includes scanning acquisition, postprocessing that entails deep learning-based reconstruction techniques, and rendering techniques. Representative case examples demonstrate the steps and clinical use of these techniques. Challenges in nerve and bone rendering are also discussed.

Spondyloarthritis Research and Treatment Network (SPARTAN) Clinical and Imaging Year in Review 2024.

Ferrandiz-Espadin R, Liew JW

PubMed · Jul 1 2025
Diagnostic delay remains a critical challenge in axial spondyloarthritis (axSpA). This review highlights key clinical and imaging research from 2024 that addresses this persistent issue, with a focus on the evolving roles of MRI, artificial intelligence (AI), and updated Canadian management recommendations. Multiple studies published in 2024 emphasized the continued problem of diagnostic delay in axSpA. Studies support the continued use of sacroiliac joint MRI as a central diagnostic tool for axSpA, particularly in patients with chronic back pain and associated conditions like uveitis, psoriasis (PsO), or inflammatory bowel disease. AI-based tools for interpreting sacroiliac joint MRIs demonstrated moderate agreement with expert assessments, offering a potential solution to variability and limited access to expert musculoskeletal radiology. These innovations may support earlier diagnosis and reduce misclassification. Innovative models of care, including patient-initiated telemedicine visits, reduced in-person visit frequency without compromising clinical outcomes in patients with stable axSpA. Updated Canadian treatment guidelines introduced more robust data on Janus kinase (JAK) inhibitors and offered stronger support for tapering biologics in patients with sustained low disease activity or remission, while advising against abrupt discontinuation. This clinical and imaging year in review covers challenges and innovations in axSpA, emphasizing the need for early access to care and the development of tools to support prompt diagnosis and sustained continuity of care.

Improving Tuberculosis Detection in Chest X-Ray Images Through Transfer Learning and Deep Learning: Comparative Study of Convolutional Neural Network Architectures.

Mirugwe A, Tamale L, Nyirenda J

PubMed · Jul 1 2025
Tuberculosis (TB) remains a significant global health challenge, as current diagnostic methods are often resource-intensive, time-consuming, and inaccessible in many high-burden communities, necessitating more efficient and accurate approaches to improve early detection and treatment outcomes. This study aimed to evaluate the performance of six convolutional neural network architectures (Visual Geometry Group-16 [VGG16], VGG19, Residual Network-50 [ResNet50], ResNet101, ResNet152, and Inception-ResNet-V2) in classifying chest x-ray (CXR) images as either normal or TB-positive. The impact of data augmentation on model performance, training times, and parameter counts was also assessed. A dataset of 4200 CXR images, comprising 700 labeled as TB-positive and 3500 as normal cases, was used to train and test the models. Evaluation metrics included accuracy, precision, recall, F1-score, and area under the receiver operating characteristic curve (AUC). The computational efficiency of each model was analyzed by comparing training times and parameter counts. VGG16 outperformed the other architectures, achieving an accuracy of 99.4%, precision of 97.9%, recall of 98.6%, F1-score of 98.3%, and AUC of 98.25%. This superior performance is significant because it demonstrates that a simpler model can deliver exceptional diagnostic accuracy while requiring fewer computational resources. Surprisingly, data augmentation did not improve performance, suggesting that the original dataset's diversity was sufficient. Models with large numbers of parameters, such as ResNet152 and Inception-ResNet-V2, required longer training times without yielding proportionally better performance. Simpler models like VGG16 offer a favorable balance between diagnostic accuracy and computational efficiency for TB detection in CXR images. These findings highlight the need to tailor model selection to task-specific requirements, providing valuable insights for future research and clinical implementation in medical image classification.
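The accuracy, precision, recall, and F1 figures above all derive from confusion-matrix counts. A minimal sketch (the counts used in the demo call are illustrative for a 700-positive / 3500-negative split, not the study's actual confusion matrix):

```python
def classification_metrics(tp, fp, fn, tn):
    """Accuracy, precision, recall, and F1 from binary confusion-matrix counts."""
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return accuracy, precision, recall, f1

# Hypothetical counts for a 4200-image test at roughly the reported operating point
acc, prec, rec, f1 = classification_metrics(tp=690, fp=15, fn=10, tn=3485)
```

With the heavy 1:5 class imbalance here, accuracy alone can look inflated, which is why the abstract also reports precision, recall, F1, and AUC.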

Association between antithrombotic medications and intracranial hemorrhage among older patients with mild traumatic brain injury: a multicenter cohort study.

Benhamed A, Crombé A, Seux M, Frassin L, L'Huillier R, Mercier E, Émond M, Millon D, Desmeules F, Tazarourte K, Gorincour G

PubMed · Jul 1 2025
To measure the association between antithrombotic (AT) medications (anticoagulants and antiplatelets) and the risk of traumatic intracranial hemorrhage (ICH) in older adults with mild traumatic brain injury (mTBI). We conducted a retrospective multicenter study across 103 emergency departments affiliated with a teleradiology company dedicated to emergency imaging between 2020 and 2022. Older adults (≥65 years old) with mTBI who underwent a head computed tomography scan were included. Natural language processing models were used to label the free texts of emergency physician forms and radiology reports, and a multivariable logistic regression model was used to measure the association between AT medications and the occurrence of ICH. A total of 5948 patients [median age 84.6 (74.3-89.1) years, 58.1% female] were included, of whom 781 (13.1%) had an ICH. Of the included patients, 3177 (53.4%) were treated with at least one AT agent. No AT medication was associated with a higher risk of ICH: antiplatelet odds ratio 0.98, 95% confidence interval (0.81-1.18); direct oral anticoagulant 0.82 (0.60-1.09); and vitamin K antagonist 0.66 (0.37-1.10). Conversely, a high-level fall [1.68 (1.15-2.4)], a Glasgow Coma Scale score of 14 [1.83 (1.22-2.68)], a cutaneous head impact [1.5 (1.17-1.92)], vomiting [1.59 (1.18-2.14)], amnesia [1.35 (1.02-1.79)], a suspected skull vault fracture [9.3 (14.2-26.5)], and a suspected facial bone fracture [1.34 (1.02-1.75)] were associated with a higher risk of ICH. This study found no association between AT medications and an increased risk of ICH among older patients with mTBI, suggesting that routine neuroimaging in this population may offer limited benefit and that additional variables should be considered in the imaging decision.
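The odds ratios above are adjusted estimates from a multivariable logistic regression, which cannot be reproduced from the abstract alone. For intuition, the unadjusted version of the same quantity, an odds ratio with a Wald 95% confidence interval from a 2×2 table, is a short computation (the demo counts are illustrative, not the study's):

```python
import math

def odds_ratio_ci(a, b, c, d):
    """Unadjusted odds ratio with Wald 95% CI from a 2x2 table:
    a = exposed with outcome, b = exposed without outcome,
    c = unexposed with outcome, d = unexposed without outcome."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE of log(OR)
    lo = math.exp(math.log(or_) - 1.96 * se)
    hi = math.exp(math.log(or_) + 1.96 * se)
    return or_, lo, hi

# Hypothetical counts: ICH vs. no ICH among AT-treated vs. untreated patients
or_, lo, hi = odds_ratio_ci(a=410, b=2767, c=371, d=2400)
```

A CI that straddles 1.0, as all three AT medication CIs above do, is what "no association with a higher risk" means here.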

¹⁸F-FDG dose reduction using deep learning-based PET reconstruction.

Akita R, Takauchi K, Ishibashi M, Kondo S, Ono S, Yokomachi K, Ochi Y, Kiguchi M, Mitani H, Nakamura Y, Awai K

PubMed · Jul 1 2025
A deep learning-based image reconstruction (DLR) algorithm that reduces statistical noise has been developed for PET/CT imaging. It may reduce the administered dose of ¹⁸F-FDG and minimize radiation exposure while maintaining diagnostic quality. This retrospective study evaluated whether the injected ¹⁸F-FDG dose could be reduced by applying DLR to PET images. To this aim, we compared the quantitative image quality metrics and the false-positive rate between DLR with a reduced ¹⁸F-FDG dose and Ordered Subsets Expectation Maximization (OSEM) with a standard dose. This study included 90 oncology patients who underwent ¹⁸F-FDG PET/CT. They were divided into 3 groups (30 patients each): group A (¹⁸F-FDG dose per body weight [BW]: 2.00-2.99 MBq/kg; PET image reconstruction: DLR), group B (3.00-3.99 MBq/kg; DLR), and group C (standard dose group; 4.00-4.99 MBq/kg; OSEM). The evaluation was performed using the signal-to-noise ratio (SNR), target-to-background ratio (TBR), and false-positive rate. DLR yielded significantly higher SNRs in groups A and B than in group C (p < 0.001). There was no significant difference in the TBR between groups A and C, or between groups B and C (p = 0.983 and 0.605, respectively). In group B, more than 80% of patients weighing less than 75 kg had at most one false-positive result. In contrast, in group B patients weighing 75 kg or more, as well as in group A, fewer than 80% of patients had at most one false-positive result. Our findings suggest that the injected ¹⁸F-FDG dose can be reduced to 3.0 MBq/kg in patients weighing less than 75 kg by applying DLR. Compared to the recommended dose in the European Association of Nuclear Medicine (EANM) guidelines for 90 s per bed position (4.7 MBq/kg), this represents a dose reduction of 36%. Further optimization of DLR algorithms is required to maintain comparable diagnostic accuracy in patients weighing 75 kg or more.
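The SNR, TBR, and 36% dose-reduction figures above can be sanity-checked with simple definitions. A minimal sketch, assuming SNR = ROI mean / ROI standard deviation and TBR = target mean / background mean (the paper's exact ROI recipes may differ):

```python
import math

def snr(roi_values):
    """SNR as ROI mean over ROI standard deviation (one common PET convention)."""
    n = len(roi_values)
    mean = sum(roi_values) / n
    sd = math.sqrt(sum((v - mean) ** 2 for v in roi_values) / (n - 1))
    return mean / sd

def tbr(target_mean, background_mean):
    """Target-to-background ratio."""
    return target_mean / background_mean

# Dose reduction relative to the EANM 90 s/bed reference of 4.7 MBq/kg
reduction = (4.7 - 3.0) / 4.7  # the 36% quoted in the abstract
```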

Orbital CT deep learning models in thyroid eye disease rival medical specialists' performance in optic neuropathy prediction in a quaternary referral center and reveal the impact of the bony walls.

Kheok SW, Hu G, Lee MH, Wong CP, Zheng K, Htoon HM, Lei Z, Tan ASM, Chan LL, Ooi BC, Seah LL

PubMed · Jul 1 2025
To develop and evaluate orbital CT deep learning (DL) models for optic neuropathy (ON) prediction in patients diagnosed with thyroid eye disease (TED), using partial versus entire, 2D versus 3D images as input. Patients with TED ± ON diagnosed at a quaternary-level practice who underwent orbital CT between 2002 and 2017 were included. DL models were developed using annotated CT data and used to evaluate the hold-out test set. ON classification performances were compared between the models and medical specialists, and saliency maps were applied to randomized cases. 36/252 orbits in 126 TED patients (mean age, 51 years; 81 women) had clinically confirmed ON. With 2D image input for ON prediction, our models achieved (a) sensitivity 89% and AUC 0.86 on the entire coronal orbital apex including bony walls, and (b) specificity 92% and AUC 0.79 on partial annotations of the axial lateral orbital wall only. ON classification performance was similar (p = 0.58) between the DL model and medical specialists. DL models trained on 2D CT annotations rival medical specialists in ON classification, with the potential to objectively enhance clinical triage for sight-saving intervention and to incorporate model variants in the workflow to harness their differential performance metrics.
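The AUCs above summarize ranking performance: an AUC is the probability that a randomly chosen ON-positive orbit scores higher than a randomly chosen negative one. A minimal sketch of the equivalent Mann-Whitney computation (scores are illustrative, not model outputs from the study):

```python
def auc_mann_whitney(pos_scores, neg_scores):
    """AUC as the fraction of positive/negative score pairs where the positive
    wins (ties count half) — the Mann-Whitney U statistic, normalized."""
    wins = 0.0
    for p in pos_scores:
        for n in neg_scores:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(pos_scores) * len(neg_scores))

# Hypothetical predicted ON probabilities for positive and negative orbits
auc = auc_mann_whitney([0.91, 0.75, 0.62], [0.40, 0.55, 0.18, 0.33])
```

This O(n·m) pairwise form is fine for test sets of this size (36 positive, 216 negative orbits); sorting-based implementations are faster for large n.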