Page 33 of 161 · 1610 results

Association between antithrombotic medications and intracranial hemorrhage among older patients with mild traumatic brain injury: a multicenter cohort study.

Benhamed A, Crombé A, Seux M, Frassin L, L'Huillier R, Mercier E, Émond M, Millon D, Desmeules F, Tazarourte K, Gorincour G

pubmed logopapers · Jul 1 2025
To measure the association between antithrombotic (AT) medications (anticoagulant and antiplatelet) and the risk of traumatic intracranial hemorrhage (ICH) in older adults with mild traumatic brain injury (mTBI). We conducted a retrospective multicenter study across 103 emergency departments affiliated with a teleradiology company dedicated to emergency imaging between 2020 and 2022. Older adults (≥65 years old) with mTBI who underwent a head computed tomography scan were included. Natural language processing models were used to label the free texts of emergency physician forms and radiology reports, and a multivariable logistic regression model to measure the association between AT medications and the occurrence of ICH. A total of 5948 patients [median age 84.6 (74.3-89.1) years, 58.1% females] were included, of whom 781 (13.1%) had an ICH. Among them, 3177 (53.4%) patients were treated with at least one AT agent. No AT medication was associated with a higher risk of ICH: antiplatelet odds ratio 0.98, 95% confidence interval (0.81-1.18); direct oral anticoagulant 0.82 (0.60-1.09); and vitamin K antagonist 0.66 (0.37-1.10). Conversely, a high-level fall [1.68 (1.15-2.4)], a Glasgow Coma Scale score of 14 [1.83 (1.22-2.68)], a cutaneous head impact [1.5 (1.17-1.92)], vomiting [1.59 (1.18-2.14)], amnesia [1.35 (1.02-1.79)], a suspected skull vault fracture [19.3 (14.2-26.5)], or a suspected facial bone fracture [1.34 (1.02-1.75)] were associated with a higher risk of ICH. This study found no association between AT medications and an increased risk of ICH among older patients with mTBI, suggesting that routine neuroimaging in this population may offer limited benefit and that additional variables should be considered in the imaging decision.
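The association measures above can be illustrated with a minimal sketch. The study fitted a multivariable logistic regression; the function below computes only the simpler unadjusted odds ratio with a Woolf (log-scale) 95% confidence interval, and the counts in the example are invented for illustration, not taken from the study.

```python
import math

def odds_ratio_ci(exposed_event, exposed_no_event,
                  unexposed_event, unexposed_no_event, z=1.96):
    """Unadjusted odds ratio with a Woolf (log-scale) confidence
    interval from a 2x2 exposure/outcome table."""
    a, b = exposed_event, exposed_no_event
    c, d = unexposed_event, unexposed_no_event
    or_ = (a * d) / (b * c)
    # Standard error of log(OR) under the Woolf approximation.
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

# Hypothetical counts (NOT the study's data): 400/2777 ICH on
# antiplatelets vs 381/2390 ICH without.
or_, lo, hi = odds_ratio_ci(400, 2777, 381, 2390)
print(round(or_, 2), round(lo, 2), round(hi, 2))
```

A CI that straddles 1, as in the study's antiplatelet estimate 0.98 (0.81-1.18), is read as no detectable association at the 5% level.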

Improving Tuberculosis Detection in Chest X-Ray Images Through Transfer Learning and Deep Learning: Comparative Study of Convolutional Neural Network Architectures.

Mirugwe A, Tamale L, Nyirenda J

pubmed logopapers · Jul 1 2025
Tuberculosis (TB) remains a significant global health challenge, as current diagnostic methods are often resource-intensive, time-consuming, and inaccessible in many high-burden communities, necessitating more efficient and accurate diagnostic methods to improve early detection and treatment outcomes. This study aimed to evaluate the performance of 6 convolutional neural network architectures-Visual Geometry Group-16 (VGG16), VGG19, Residual Network-50 (ResNet50), ResNet101, ResNet152, and Inception-ResNet-V2-in classifying chest x-ray (CXR) images as either normal or TB-positive. The impact of data augmentation on model performance, training times, and parameter counts was also assessed. The dataset of 4200 CXR images, comprising 700 labeled as TB-positive and 3500 as normal cases, was used to train and test the models. Evaluation metrics included accuracy, precision, recall, F1-score, and area under the receiver operating characteristic curve. The computational efficiency of each model was analyzed by comparing training times and parameter counts. VGG16 outperformed the other architectures, achieving an accuracy of 99.4%, precision of 97.9%, recall of 98.6%, F1-score of 98.3%, and area under the receiver operating characteristic curve of 98.25%. This superior performance is significant because it demonstrates that a simpler model can deliver exceptional diagnostic accuracy while requiring fewer computational resources. Surprisingly, data augmentation did not improve performance, suggesting that the original dataset's diversity was sufficient. Models with large numbers of parameters, such as ResNet152 and Inception-ResNet-V2, required longer training times without yielding proportionally better performance. Simpler models like VGG16 offer a favorable balance between diagnostic accuracy and computational efficiency for TB detection in CXR images. 
These findings highlight the need to tailor model selection to task-specific requirements, providing valuable insights for future research and clinical implementations in medical image classification.
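The reported scores can be cross-checked with the standard binary-classification formulas; the sketch below uses only the stdlib, and the confusion counts in the example are invented. Note that the study's precision (97.9%) and recall (98.6%) imply an F1 of about 98.2%, consistent with the reported 98.3% after rounding.

```python
def classification_metrics(tp, fp, fn, tn):
    """Accuracy, precision, recall and F1 from binary confusion counts."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    return accuracy, precision, recall, f1

# Consistency check on the paper's reported precision/recall pair.
p, r = 0.979, 0.986
f1 = 2 * p * r / (p + r)
print(round(f1 * 100, 1))  # ≈ 98.2
```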

How I Do It: Three-Dimensional MR Neurography and Zero Echo Time MRI for Rendering of Peripheral Nerve and Bone.

Lin Y, Tan ET, Campbell G, Breighner RE, Fung M, Wolfe SW, Carrino JA, Sneag DB

pubmed logopapers · Jul 1 2025
MR neurography sequences provide excellent nerve-to-background soft tissue contrast, whereas a zero echo time (ZTE) MRI sequence provides cortical bone contrast. By demonstrating the spatial relationship between nerves and bones, a combination of rendered three-dimensional (3D) MR neurography and ZTE sequences provides a roadmap for clinical decision-making, particularly for surgical intervention. In this article, the authors describe the method for fused rendering of peripheral nerve and bone by combining nerve and bone structures from 3D MR neurography and 3D ZTE MRI, respectively. The described method includes scanning acquisition, postprocessing that entails deep learning-based reconstruction techniques, and rendering techniques. Representative case examples demonstrate the steps and clinical use of these techniques. Challenges in nerve and bone rendering are also discussed.

GAN-based Denoising for Scan Time Reduction and Motion Correction of 18F FP-CIT PET/CT: A Multicenter External Validation Study.

Han H, Choo K, Jeon TJ, Lee S, Seo S, Kim D, Kim SJ, Lee SH, Yun M

pubmed logopapers · Jul 1 2025
AI-driven scan time reduction is rapidly transforming medical imaging, with benefits such as improved patient comfort and enhanced efficiency. A Dual Contrastive Learning Generative Adversarial Network (DCLGAN) was developed to predict full-time PET scans from shorter, noisier scans, addressing the challenges of imaging patients with movement disorders. 18F FP-CIT PET/CT data from 391 patients with suspected Parkinsonism were used [250 training/validation, 141 testing (hospital A)]. Ground truth (GT) images were reconstructed from 15-minute scans, while denoised images (DIs) were generated from 1-, 3-, 5-, and 10-minute scans. Image quality was assessed using normalized root mean square error (NRMSE), peak signal-to-noise ratio (PSNR), structural similarity index measure (SSIM), visual analysis, and clinical metrics such as BPND and ISR for the diagnosis of non-neurodegenerative Parkinson disease (NPD), idiopathic PD (IPD), and atypical PD (APD). External validation used data from 2 hospitals with different scanners (hospital B: 1-, 3-, 5-, and 10-min; hospital C: 1-, 3-, and 5-min). In addition, motion artifact reduction was evaluated using the Dice similarity coefficient (DSC). In hospital A, NRMSE, PSNR, and SSIM values improved with scan duration, with the 5-minute DIs achieving optimal quality (NRMSE 0.008, PSNR 42.13, SSIM 0.98). Visual analysis rated DIs from scans ≥3 minutes as adequate or better. The mean BPND differences (95% CI) for each DI were 0.19 (-0.01, 0.40), 0.11 (-0.02, 0.24), 0.08 (-0.03, 0.18), and 0.01 (-0.06, 0.07), with the CIs narrowing as scan duration increased. ISRs with the highest effect sizes for differentiating NPD, IPD, and APD (PP/AP, PP/VS, PC/VP) remained stable after denoising. External validation showed that 10-minute DIs (hospital B) and 1-minute DIs (hospital C) reached the benchmarks of hospital A's image quality metrics, with similar trends in visual analysis and BPND CIs.
Furthermore, motion artifact correction in 9 patients yielded DSC improvements from 0.89 to 0.95 in striatal regions. The DL model is capable of generating high-quality 18F FP-CIT PET images from shorter scans to enhance patient comfort, minimize motion artifacts, and maintain diagnostic precision. Furthermore, our study provides insights into how image quality assessment metrics can be used to determine the appropriate scan duration for different scanners with varying sensitivities.
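Two of the quality metrics used above, NRMSE and PSNR, are straightforward to compute; a minimal stdlib sketch on 1D signals follows (SSIM is omitted, as it requires windowed local statistics). The signal values are illustrative, not PET data.

```python
import math

def nrmse(ref, img):
    """Root mean square error normalized by the reference's dynamic range."""
    mse = sum((r - x) ** 2 for r, x in zip(ref, img)) / len(ref)
    return math.sqrt(mse) / (max(ref) - min(ref))

def psnr(ref, img, peak=None):
    """Peak signal-to-noise ratio in decibels against a reference signal."""
    mse = sum((r - x) ** 2 for r, x in zip(ref, img)) / len(ref)
    peak = max(ref) if peak is None else peak
    return 10 * math.log10(peak ** 2 / mse)

ref = [0.0, 50.0, 100.0, 150.0, 200.0]
noisy = [2.0, 48.0, 103.0, 149.0, 198.0]
print(round(nrmse(ref, noisy), 4), round(psnr(ref, noisy), 1))
```

Lower NRMSE and higher PSNR indicate a denoised image closer to the full-time reference, which is why both improve with scan duration in the study.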

Computed Tomography Advancements in Plaque Analysis: From Histology to Comprehensive Plaque Burden Assessment.

Catapano F, Lisi C, Figliozzi S, Scialò V, Politi LS, Francone M

pubmed logopapers · Jul 1 2025
Advancements in coronary computed tomography angiography (CCTA) have facilitated the transition from traditional histological approaches to comprehensive plaque burden assessment. Recent updates in the European Society of Cardiology (ESC) guidelines emphasize CCTA's role in managing chronic coronary syndrome by enabling detailed monitoring of atherosclerotic plaque progression. Limitations of conventional CCTA, such as spatial resolution challenges in accurately characterizing plaque components like thin-cap fibroatheromas and necrotic lipid-rich cores, are addressed with photon-counting detector CT (PCD-CT) technology. PCD-CT offers enhanced spatial resolution and spectral imaging, improving the detection and characterization of high-risk plaque features while reducing artifacts. The integration of artificial intelligence (AI) in plaque analysis enhances diagnostic accuracy through automated plaque characterization and radiomics. These technological advancements support a comprehensive approach to plaque assessment, incorporating hemodynamic evaluations, morphological metrics, and AI-driven analysis, thereby enabling personalized patient care and improved prediction of acute clinical events.

Patient-specific deep learning tracking for real-time 2D pancreas localisation in kV-guided radiotherapy.

Ahmed AM, Madden L, Stewart M, Chow BVY, Mylonas A, Brown R, Metz G, Shepherd M, Coronel C, Ambrose L, Turk A, Crispin M, Kneebone A, Hruby G, Keall P, Booth JT

pubmed logopapers · Jul 1 2025
In pancreatic stereotactic body radiotherapy (SBRT), accurate motion management is crucial for the safe delivery of high doses per fraction. Intra-fraction tracking with magnetic resonance imaging guidance for gated SBRT has shown potential for improved local control. Visualisation of the pancreas (and surrounding organs) remains challenging in intra-fraction kilo-voltage (kV) imaging, requiring implanted fiducials. In this study, we investigate patient-specific deep-learning approaches to track the gross tumour volume (GTV), pancreas head, and whole pancreas in intra-fraction kV images. Conditional generative adversarial networks were trained and tested on data from 25 patients enrolled in an ethics-approved pancreatic SBRT trial for contour prediction on intra-fraction 2D kV images. Labelled digitally reconstructed radiographs (DRRs) were generated from contoured planning computed tomography scans (CT-DRRs) and cone-beam CTs (CBCT-DRRs). A population model was trained using CT-DRRs of 19 patients. Two patient-specific model types were created for six additional patients by fine-tuning the population model using CBCT-DRRs (CBCT-models) or CT-DRRs (CT-models) acquired in exhale breath-hold. Model predictions on unseen triggered kV images from the corresponding six patients were evaluated against projected contours using the Dice similarity coefficient (DSC), centroid error (CE), average Hausdorff distance (AHD), and Hausdorff distance at the 95th percentile (HD95). The mean ± 1 SD (standard deviation) DSCs were 0.86 ± 0.09 (CBCT-models) and 0.78 ± 0.12 (CT-models). For AHD and CE, the CBCT-models predicted contours within 2.0 mm ≥90.3% of the time, while HD95 was within 5.0 mm ≥90.0% of the time, with a prediction time of 29.2 ± 3.7 ms per contour. The patient-specific CBCT-models outperformed the CT-models and predicted the three contours with a 90th-percentile error ≤2.0 mm, indicating the potential for clinical real-time application.
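Two of the geometric accuracy metrics above, centroid error and percentile Hausdorff distance, can be sketched on 2D point sets. This is an illustrative stdlib implementation, not the study's evaluation code; real contours would have many more points.

```python
import math

def centroid_error(pts_a, pts_b):
    """Euclidean distance between the centroids of two 2D point sets."""
    def centroid(pts):
        return (sum(p[0] for p in pts) / len(pts),
                sum(p[1] for p in pts) / len(pts))
    return math.dist(centroid(pts_a), centroid(pts_b))

def hausdorff_percentile(pts_a, pts_b, q=95):
    """Symmetric Hausdorff distance at the q-th percentile of the
    pooled nearest-neighbour distances (HD95 when q=95)."""
    def directed(src, dst):
        return [min(math.dist(p, r) for r in dst) for p in src]
    d = sorted(directed(pts_a, pts_b) + directed(pts_b, pts_a))
    idx = min(len(d) - 1, round(q / 100 * (len(d) - 1)))
    return d[idx]

# A unit square translated by (3, 4): centroid error is exactly 5.
square = [(0, 0), (0, 1), (1, 0), (1, 1)]
shifted = [(x + 3, y + 4) for x, y in square]
print(centroid_error(square, shifted))  # 5.0
```

Percentile variants like HD95 are preferred over the plain Hausdorff maximum because they are less sensitive to single outlier contour points.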

Development and validation of an MRI spatiotemporal interaction model for early noninvasive prediction of neoadjuvant chemotherapy response in breast cancer: a multicentre study.

Tang W, Jin C, Kong Q, Liu C, Chen S, Ding S, Liu B, Feng Z, Li Y, Dai Y, Zhang L, Chen Y, Han X, Liu S, Chen D, Weng Z, Liu W, Wei X, Jiang X, Zhou Q, Mao N, Guo Y

pubmed logopapers · Jul 1 2025
The accurate and early evaluation of response to neoadjuvant chemotherapy (NAC) in breast cancer is crucial for optimizing treatment strategies and minimizing unnecessary interventions. While deep learning (DL)-based approaches have shown promise in medical imaging analysis, existing models often fail to comprehensively integrate spatial and temporal tumor dynamics. This study aims to develop and validate a spatiotemporal interaction (STI) model based on longitudinal MRI data to predict pathological complete response (pCR) to NAC in breast cancer patients. This study included retrospective and prospective datasets from five medical centers in China, collected from June 2018 to December 2024. These datasets were assigned to the primary cohort (including training and internal validation sets), external validation cohorts, and a prospective validation cohort. DCE-MRI scans from both pre-NAC (T0) and early-NAC (T1) stages were collected for each patient, along with surgical pathology results. A Siamese network-based STI model was developed, integrating spatial features from tumor segmentation with temporal dependencies using a transformer-based multi-head attention mechanism. This model was designed to simultaneously capture spatial heterogeneity and temporal dynamics, enabling accurate prediction of NAC response. The STI model's performance was evaluated using the area under the ROC curve (AUC), the area under the precision-recall curve (AP), accuracy, sensitivity, and specificity. Additionally, the I-SPY1 and I-SPY2 datasets were used for Kaplan-Meier survival analysis and to explore the biological basis of the STI model, respectively. The prospective cohort was registered with the Chinese Clinical Trial Registration Centre (ChiCTR2500102170). A total of 1044 patients were included in this study, with the pCR rate ranging from 23.8% to 35.9%. The STI model demonstrated good performance in the early prediction of NAC response in breast cancer.
In the external validation cohorts, the AUC values were 0.923 (95% CI: 0.859-0.987), 0.892 (95% CI: 0.821-0.963), and 0.913 (95% CI: 0.835-0.991), all outperforming the single-timepoint T0 or T1 models, as well as models with spatial information added (all p < 0.05, Delong test). Additionally, the STI model significantly outperformed the clinical model (p < 0.05, Delong test) and radiologists' predictions. In the prospective validation cohort, the STI model identified 90.2% (37/41) of non-pCR and 82.6% (19/23) of pCR patients, reducing misclassification rates by 58.7% and 63.3% compared to radiologists. This indicates that these patients might benefit from treatment adjustment or continued therapy in the early NAC stage. Survival analysis showed a significant correlation between the STI model and both recurrence-free survival (RFS) and overall survival (OS) in breast cancer patients. Further investigation revealed that favorable NAC responses predicted by the STI model were closely linked to upregulated immune-related genes and enhanced immune cell infiltration. Our study established a novel noninvasive STI model that integrates the spatiotemporal evolution of MRI before and during NAC to achieve early and accurate pCR prediction, offering potential guidance for personalized treatment. This study was supported by the National Natural Science Foundation of China (82302314, 62271448, 82171920, 81901711), Basic and Applied Basic Research Foundation of Guangdong Province (2022A1515110792, 2023A1515220097, 2024A1515010653), Medical Scientific Research Foundation of Guangdong Province (A2023073, A2024116), Science and Technology Projects in Guangzhou (2023A04J1275, 2024A03J1030, 2025A03J4163, 2025A03J4162); Guangzhou First People's Hospital Frontier Medical Technology Project (QY-C04).
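The transformer-style fusion of T0 and T1 features described above rests on scaled dot-product attention. Below is a minimal single-head sketch in pure Python; the learned projections and multi-head machinery of the actual model are omitted, and the query/key/value roles for the two timepoints are an illustrative assumption.

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of scores."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def attention(query, keys, values):
    """Single-head scaled dot-product attention: the query (e.g. a T1
    feature vector) attends over keys/values (e.g. T0 and T1 features)."""
    d = len(query)
    scores = [sum(q * k for q, k in zip(query, key)) / math.sqrt(d)
              for key in keys]
    weights = softmax(scores)
    fused = [sum(w * v[i] for w, v in zip(weights, values))
             for i in range(len(values[0]))]
    return fused, weights

fused, weights = attention([1.0, 0.0],
                           [[1.0, 0.0], [0.0, 1.0]],
                           [[1.0, 0.0], [0.0, 1.0]])
print(weights)  # the key matching the query receives the larger weight
```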

A Contrast-Enhanced Ultrasound Cine-Based Deep Learning Model for Predicting the Response of Advanced Hepatocellular Carcinoma to Hepatic Arterial Infusion Chemotherapy Combined With Systemic Therapies.

Han X, Peng C, Ruan SM, Li L, He M, Shi M, Huang B, Luo Y, Liu J, Wen H, Wang W, Zhou J, Lu M, Chen X, Zou R, Liu Z

pubmed logopapers · Jul 1 2025
Recently, a hepatic arterial infusion chemotherapy (HAIC)-associated combination therapeutic regimen, comprising HAIC and systemic therapies (molecular targeted therapy plus immunotherapy) and referred to as HAIC combination therapy, has demonstrated promising anticancer effects. Identifying individuals who may benefit from HAIC combination therapy could contribute to improved treatment decision-making for patients with advanced hepatocellular carcinoma (HCC). This dual-center study was a retrospective analysis of prospectively collected data from advanced HCC patients who underwent HAIC combination therapy and pretreatment contrast-enhanced ultrasound (CEUS) evaluations from March 2019 to March 2023. Two deep learning models, AE-3DNet and 3DNet, along with a time-intensity curve-based model, were developed for predicting therapeutic responses from pretreatment CEUS cine images. Diagnostic metrics, including the area under the receiver-operating-characteristic curve (AUC), were calculated to compare the performance of the models. Survival analysis was used to assess the relationship between predicted responses and prognostic outcomes. AE-3DNet was built on top of 3DNet, with the innovative incorporation of spatiotemporal attention modules to enhance its capacity for dynamic feature extraction. A total of 326 patients were included, 243 of whom formed the internal validation cohort, which was utilized for model development and fivefold cross-validation, while the rest formed the external validation cohort. Objective response (OR) and non-objective response (non-OR) were observed in 63% (206/326) and 37% (120/326) of the participants, respectively. Among the three efficacy prediction models assessed, AE-3DNet performed best, with AUC values of 0.84 and 0.85 in the internal and external validation cohorts, respectively. AE-3DNet's predicted response survival curves closely resembled actual clinical outcomes.
The AE-3DNet deep learning model, developed from pretreatment CEUS cine images, performed satisfactorily in predicting the response of advanced HCC to HAIC combination therapy, and may serve as a promising tool for guiding combined therapy and individualized treatment strategies. Trial Registration: NCT02973685.
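The time-intensity curve (TIC) baseline mentioned above typically reduces a CEUS cine to a handful of curve descriptors. The sketch below computes three common ones (the abstract does not specify the study's exact TIC features, and the sample values are invented):

```python
def tic_features(times, intensities):
    """Peak enhancement, time-to-peak, and trapezoidal area under a
    contrast time-intensity curve sampled at the given times."""
    peak = max(intensities)
    ttp = times[intensities.index(peak)]
    auc = sum((t1 - t0) * (i0 + i1) / 2
              for t0, t1, i0, i1 in zip(times, times[1:],
                                        intensities, intensities[1:]))
    return peak, ttp, auc

# Invented wash-in/wash-out samples: intensity peaks at t = 10 s.
t = [0, 5, 10, 15, 20]
i = [0.0, 20.0, 60.0, 40.0, 30.0]
print(tic_features(t, i))  # (60.0, 10, 675.0)
```

Such hand-crafted descriptors are exactly what end-to-end models like AE-3DNet aim to supersede by learning spatiotemporal features directly from the cine.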

Deep Learning Models for CT Segmentation of Invasive Pulmonary Aspergillosis, Mucormycosis, Bacterial Pneumonia and Tuberculosis: A Multicentre Study.

Li Y, Huang F, Chen D, Zhang Y, Zhang X, Liang L, Pan J, Tan L, Liu S, Lin J, Li Z, Hu G, Chen H, Peng C, Ye F, Zheng J

pubmed logopapers · Jul 1 2025
The differential diagnosis of invasive pulmonary aspergillosis (IPA), pulmonary mucormycosis (PM), bacterial pneumonia (BP), and pulmonary tuberculosis (PTB) is challenging due to overlapping clinical and imaging features. Manual CT lesion segmentation is time-consuming; deep-learning (DL)-based segmentation models offer a promising solution, yet disease-specific models for these infections remain underexplored. We aimed to develop and validate dedicated CT segmentation models for IPA, PM, BP, and PTB to enhance diagnostic accuracy. Methods: Retrospective multi-centre data (115 IPA, 53 PM, 130 BP, 125 PTB) were used for training/internal validation, with 21 IPA, 8 PM, 30 BP, and 31 PTB cases for external validation. Expert-annotated lesions served as ground truth. An improved 3D U-Net architecture was employed for segmentation, with preprocessing steps including normalisation, cropping, and data augmentation. Performance was evaluated using Dice coefficients. Results: Internal validation achieved Dice scores of 78.83% (IPA), 93.38% (PM), 80.12% (BP), and 90.47% (PTB). External validation showed slightly reduced but robust performance: 75.09% (IPA), 77.53% (PM), 67.40% (BP), and 80.07% (PTB). The PM model demonstrated exceptional generalisability, scoring 83.41% on IPA data. Cross-validation revealed mutual applicability, with the IPA/PTB models achieving >75% Dice for each other's lesions. BP segmentation showed lower but clinically acceptable performance (>72%), likely due to complex radiological patterns. Disease-specific DL segmentation models exhibited high accuracy, particularly for PM and PTB. While the IPA and BP models require refinement, all demonstrated cross-disease utility, suggesting immediate clinical value for preliminary lesion annotation. Future efforts should enhance datasets and optimise models for intricate cases.
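The Dice coefficient used throughout the evaluation above compares a predicted mask against the expert annotation; a minimal sketch on flattened binary masks (illustrative, not the study's code):

```python
def dice(mask_a, mask_b):
    """Dice similarity coefficient between two flattened binary masks:
    2 * |A ∩ B| / (|A| + |B|). Returns 1.0 when both masks are empty."""
    inter = sum(1 for a, b in zip(mask_a, mask_b) if a and b)
    total = sum(mask_a) + sum(mask_b)
    return 2 * inter / total if total else 1.0

pred = [1, 1, 1, 0, 0, 0]
truth = [0, 1, 1, 1, 0, 0]
print(dice(pred, truth))  # 2*2/(3+3) ≈ 0.667
```

A Dice of 0.75 thus means the overlap is three-quarters of the average mask size, which contextualises the external-validation scores quoted above.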

Synthetic Versus Classic Data Augmentation: Impacts on Breast Ultrasound Image Classification.

Medghalchi Y, Zakariaei N, Rahmim A, Hacihaliloglu I

pubmed logopapers · Jul 1 2025
The effectiveness of deep neural networks (DNNs) for ultrasound image analysis depends on the availability and accuracy of the training data. However, large-scale data collection and annotation, particularly in medical fields, are often costly and time-consuming, especially when healthcare professionals are already burdened with clinical responsibilities. Ensuring that a model remains robust across different imaging conditions, such as variations in ultrasound devices and manual transducer operation, is crucial in ultrasound image analysis. Data augmentation is a widely used solution, as it increases both the size and diversity of datasets, thereby enhancing the generalization performance of DNNs. With the advent of generative networks such as generative adversarial networks (GANs) and diffusion-based models, synthetic data generation has emerged as a promising augmentation technique. However, comprehensive studies comparing classic and generative augmentation methods are lacking, particularly in ultrasound-based breast cancer imaging, where variability in breast density, tumor morphology, and operator skill poses significant challenges. This study aims to compare the effectiveness of classic and generative network-based data augmentation techniques in improving the performance and robustness of breast ultrasound image classification models. Specifically, we seek to determine whether the computational intensity of generative networks is justified in data augmentation. This analysis will provide valuable insights into the role and benefits of each technique in enhancing the diagnostic accuracy of DNNs for breast cancer diagnosis. The code for this work will be available at: https://github.com/yasamin-med/SCDA.git.
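Classic augmentation of the kind compared above usually means simple geometric transforms. A minimal sketch on a row-major 2D image follows; real pipelines would also include rotations by arbitrary angles, scaling, cropping, and intensity jitter, and would operate on framework tensors rather than nested lists.

```python
def hflip(img):
    """Mirror a row-major 2D image left-to-right."""
    return [row[::-1] for row in img]

def vflip(img):
    """Mirror a row-major 2D image top-to-bottom."""
    return img[::-1]

def rot90(img):
    """Rotate a row-major 2D image 90 degrees counter-clockwise."""
    return [list(row) for row in zip(*img)][::-1]

img = [[1, 2],
       [3, 4]]
print(hflip(img))  # [[2, 1], [4, 3]]
print(rot90(img))  # [[2, 4], [1, 3]]
```

Each transform yields a new labelled training sample at negligible cost, which is the baseline the GAN- and diffusion-based synthetic generation methods are weighed against.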