Page 370 of 7327315 results

Peng Z, Wang Y, Qi Y, Hu H, Fu Y, Li J, Li W, Li Z, Guo W, Shen C, Jiang J, Yang B

pubmed papers, Aug 6 2025
To establish and validate the utility of computed tomography (CT) radiomics for the prognosis of patients with non-small cell lung cancer (NSCLC). Overall, 215 patients with a pathologic diagnosis of NSCLC were included; chest CT images and clinical data were collected before treatment, and follow-up was conducted to assess brain metastasis and survival. Radiomics characteristics were extracted from the chest CT lung-window images of each patient, key characteristics were screened, the radiomics score (Radscore) was calculated, and radiomics, clinical, and combined models were constructed using clinically independent predictive factors. A nomogram was constructed based on the final combined model to visualize prediction results. Predictive efficacy was evaluated using the concordance index (C-index), and survival (Kaplan-Meier) and calibration curves were drawn to further evaluate predictive efficacy. The training set included 151 patients (43 with brain metastasis and 108 without), and the validation set included 64 patients (18 with brain metastasis and 46 without). Multivariate analysis revealed that lymph node metastasis, lymphocyte percentage, and neuron-specific enolase (NSE) were independent predictors of brain metastasis in patients with NSCLC. The areas under the curve (AUC) of these models were 0.733, 0.836, and 0.849, respectively, in the training set and 0.739, 0.779, and 0.816, respectively, in the validation set. Multivariate Cox regression analysis revealed that the number of brain metastases, distant metastases elsewhere, and C-reactive protein levels were independent predictors of postoperative survival in patients with brain metastases (P < 0.05). The calibration curve showed that the predicted values of the prognostic prediction model agreed well with the actual values.
The model based on CT radiomics characteristics can effectively predict NSCLC brain metastasis and its prognosis and provide guidance for individualized treatment of NSCLC patients.
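The Radscore described here is typically a penalized linear combination of the screened radiomic features. The sketch below illustrates that form with hypothetical feature names, coefficients, and intercept; the paper's actual feature selection and weights are not reported in this abstract.

```python
# Hypothetical LASSO-style Radscore: intercept + weighted sum of the
# selected radiomic features. Names and coefficients are illustrative only.
coef = {"glcm_entropy": 0.42, "shape_sphericity": -0.31, "firstorder_mean": 0.18}
intercept = -0.05

def radscore(features: dict) -> float:
    """Radiomics score for one patient: intercept + sum(coef_i * feature_i)."""
    return intercept + sum(coef[k] * features[k] for k in coef)

patient = {"glcm_entropy": 1.2, "shape_sphericity": 0.8, "firstorder_mean": 0.5}
print(round(radscore(patient), 4))
```

In practice, the Radscore then enters the combined model alongside clinical predictors (here, lymph node metastasis, lymphocyte percentage, and NSE) to build the nomogram.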

Abdel-Salam M, Houssein EH, Emam MM, Samee NA, Gharehchopogh FS, Bacanin N

pubmed papers, Aug 6 2025
Intracerebral hemorrhage (ICH) is a life-threatening condition caused by bleeding in the brain, with high mortality rates, particularly in the acute phase. Accurate diagnosis through medical image segmentation plays a crucial role in early intervention and treatment. However, existing segmentation methods, such as region-growing, clustering, and deep learning, face significant limitations when applied to complex images like ICH, especially in multi-threshold image segmentation (MTIS). As the number of thresholds increases, these methods often become computationally expensive and exhibit degraded segmentation performance. To address these challenges, this paper proposes an Elite-Adaptive-Turbulent Hiking Optimization Algorithm (EATHOA), an enhanced version of the Hiking Optimization Algorithm (HOA), specifically designed for high-dimensional and multimodal optimization problems like ICH image segmentation. EATHOA integrates three novel strategies: Elite Opposition-Based Learning (EOBL) for improving population diversity and exploration, Adaptive k-Average-Best Mutation (AKAB) for dynamically balancing exploration and exploitation, and a Turbulent Operator (TO) for escaping local optima and enhancing the convergence rate. Extensive experiments were conducted on the CEC2017 and CEC2022 benchmark functions to evaluate EATHOA's global optimization performance, where it consistently outperformed other state-of-the-art algorithms. The proposed EATHOA was then applied to solve the MTIS problem in ICH images at six different threshold levels. EATHOA achieved peak values of PSNR (34.4671), FSIM (0.9710), and SSIM (0.8816), outperforming recent methods in segmentation accuracy and computational efficiency. These results demonstrate the superior performance of EATHOA and its potential as a powerful tool for medical image analysis, offering an effective and computationally efficient solution for the complex challenges of ICH image segmentation.
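The MTIS objective that such an optimizer searches over is commonly Otsu's between-class variance generalized to several thresholds. A minimal sketch of that fitness function on a toy histogram follows; EATHOA itself is not reproduced here, only the objective a metaheuristic of this kind would maximize.

```python
import numpy as np

def between_class_variance(hist, thresholds):
    """Otsu-style fitness for multi-threshold segmentation: the optimizer
    searches for the threshold vector that maximizes this value."""
    p = hist / hist.sum()                      # normalized gray-level histogram
    levels = np.arange(len(hist))
    bounds = [0, *sorted(thresholds), len(hist)]
    mu_total = (p * levels).sum()
    fitness = 0.0
    for lo, hi in zip(bounds[:-1], bounds[1:]):
        w = p[lo:hi].sum()                     # class probability
        if w > 0:
            mu = (p[lo:hi] * levels[lo:hi]).sum() / w
            fitness += w * (mu - mu_total) ** 2
    return fitness

# Toy bimodal histogram: a threshold between the two modes scores higher
hist = np.array([10, 30, 10, 0, 0, 10, 30, 10], dtype=float)
print(between_class_variance(hist, [4]) > between_class_variance(hist, [2]))
```

As the number of thresholds grows, the search space grows combinatorially, which is why exhaustive search degrades and population-based optimizers become attractive.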

Ratcliffe, C., Taylor, P. N., de Bezenac, C., Das, K., Biswas, S., Marson, A., Keller, S. S.

medrxiv preprint, Aug 6 2025
Introduction: Structural neuroimaging analyses require research-quality images acquired with costly MRI acquisitions. Isotropic (3D-T1) images are desirable for quantitative analyses; however, a routine compromise in the clinical setting is to acquire anisotropic (2D-T1) analogues for qualitative visual inspection. Machine learning (ML)-based software has shown promise in addressing some of the limitations of 2D-T1 scans in research applications, yet its efficacy in quantitative research is generally poorly understood. Pathology-related abnormalities of the subcortical structures that are overlooked on visual inspection have previously been identified in idiopathic generalised epilepsy (IGE) through quantitative morphometric analyses. As such, IGE biomarkers present a suitable model in which to evaluate the applicability of image preprocessing methods. This study therefore explores subcortical structural biomarkers of IGE, first in our silver-standard 3D-T1 scans, then in 2D-T1 scans that were either untransformed, resampled using a classical interpolation approach, or synthesised with a resolution- and contrast-agnostic ML model (the latter of which is compared to a separate model). Methods: 2D-T1 and 3D-T1 MRI scans were acquired during the same scanning session for 33 individuals with drug-responsive IGE (mean age 32.16 ± SD 14.20, male n = 14) and 42 individuals with drug-resistant IGE (31.76 ± 11.12, 17), all diagnosed at the Walton Centre NHS Foundation Trust, Liverpool, alongside 39 age- and sex-matched healthy controls (32.32 ± 8.65, 16). The untransformed 2D-T1 scans were resampled into isotropic images using NiBabel (res-T1) and preprocessed into synthetic isotropic images using SynthSR (syn-T1).
For the 3D-T1, 2D-T1, res-T1, and syn-T1 images, the recon-all command from FreeSurfer 8.0.0 was used to create parcellations of 174 anatomical regions (equivalent to the 174 regional parcellations provided as part of the DL+DiReCT pipeline), defined by the aseg and Destrieux atlases, and FSL run_first_all was used to segment subcortical surface shapes. The new ML FreeSurfer pipeline, recon-all-clinical, was also tested on the 2D-T1, 3D-T1, and res-T1 images. As a model comparison for SynthSR, the DL+DiReCT pipeline was used to provide segmentations of the 2D-T1 and res-T1 images, including estimates of regional volume and thickness. Spatial overlap and intraclass correlations between the morphometrics of the eight resulting parcellations were first determined; then, subcortical surface shape abnormalities associated with IGE were identified by comparing the FSL run_first_all outputs of patients with controls. Results: When standardised to the metrics derived from the 3D-T1 scans, cortical volume and thickness estimates trended lower for the 2D-T1, res-T1, syn-T1, and DL+DiReCT outputs, whereas subcortical volume estimates were more coherent. Dice coefficients revealed an acceptable spatial similarity between the cortices of the 3D-T1 scans and the other images overall, and similarity was higher in the subcortical structures. Intraclass correlation coefficients were consistently lowest when metrics were computed for model-derived inputs, and estimates of thickness were less similar to the ground truth than those of volume. For the people with epilepsy, the 3D-T1 scans showed significant surface deflations across various subcortical structures when compared to healthy controls. Analysis of the 2D-T1 scans enabled the reliable detection of a subset of subcortical abnormalities, whereas analyses of the res-T1 and syn-T1 images were more prone to false-positive results.
Conclusions: Resampling and ML image synthesis methods do not currently attenuate partial volume effects resulting from low through-plane resolution in anisotropic MRI scans; instead, quantitative analyses using 2D-T1 scans should be interpreted with caution, and researchers should consider the potential implications of preprocessing. The recon-all-clinical pipeline is promising, but requires further evaluation, especially when considered as an alternative to the classical pipeline.
Key Points:
- Surface deviations indicative of regional atrophy and hypertrophy were identified in people with idiopathic generalised epilepsy.
- Partial volume effects are likely to attenuate subtle morphometric abnormalities, increasing the likelihood of erroneous inference.
- Priors in synthetic image creation models may render them insensitive to subtle biomarkers.
- Resampling and machine-learning-based image synthesis are not currently replacements for research-quality acquisitions in quantitative MRI research.
- The results of studies using synthetic images should be interpreted in a separate context to those using untransformed data.
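The spatial-overlap comparison reported above uses the Dice coefficient. A minimal volumetric-Dice sketch on toy binary masks (not the study's data) shows the quantity being computed between two parcellations of the same structure:

```python
import numpy as np

def dice(a: np.ndarray, b: np.ndarray) -> float:
    """Dice coefficient between two binary masks: 2|A ∩ B| / (|A| + |B|)."""
    a, b = a.astype(bool), b.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

# Toy masks: each labels 3 voxels, 2 of which overlap -> Dice = 4/6
a = np.array([[1, 1, 0], [1, 0, 0]])
b = np.array([[1, 1, 0], [0, 0, 1]])
print(dice(a, b))
```

In a study like this one, the coefficient would be computed per region between the 3D-T1 parcellation and each alternative image's parcellation, then summarized across subjects.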

Hou B, Du H

pubmed papers, Aug 6 2025
Magnetic Resonance Imaging (MRI) is widely utilized in medical imaging due to its high resolution and non-invasive nature. However, the prolonged acquisition time significantly limits its clinical applicability. Although traditional compressed sensing (CS) techniques can accelerate MRI acquisition, they often lead to degraded reconstruction quality under high undersampling rates. Deep learning-based methods, including CNN- and GAN-based approaches, have improved reconstruction performance, yet are limited by their local receptive fields, making it challenging to effectively capture long-range dependencies. Moreover, these models typically exhibit high computational complexity, which hinders their efficient deployment in practical scenarios. To address these challenges, we propose a lightweight Multi-scale Context-Aware Generative Adversarial Network (MCA-GAN), which enhances MRI reconstruction through dual-domain generators that collaboratively optimize both k-space and image-domain representations. MCA-GAN integrates several lightweight modules, including Depthwise Separable Local Attention (DWLA) for efficient local feature extraction, Adaptive Group Rearrangement Block (AGRB) for dynamic inter-group feature optimization, Multi-Scale Spatial Context Modulation Bridge (MSCMB) for multi-scale feature fusion in skip connections, and Channel-Spatial Multi-Scale Self-Attention (CSMS) for improved global context modeling. Extensive experiments conducted on the IXI, MICCAI 2013, and MRNet knee datasets demonstrate that MCA-GAN consistently outperforms existing methods in terms of PSNR and SSIM. Compared to SepGAN, the latest lightweight model, MCA-GAN achieves a 27.3% reduction in parameter size and a 19.6% reduction in computational complexity, while attaining the shortest reconstruction time among all compared methods. Furthermore, MCA-GAN exhibits robust performance across various undersampling masks and acceleration rates. 
Cross-dataset generalization experiments further confirm its ability to maintain competitive reconstruction quality, underscoring its strong generalization potential. Overall, MCA-GAN improves MRI reconstruction quality while significantly reducing computational cost through a lightweight architecture and multi-scale feature fusion, offering an efficient and accurate solution for accelerated MRI.
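The undersampled acquisition model that reconstruction networks of this kind start from can be sketched as masked k-space sampling followed by a zero-filled inverse FFT, which is the degraded baseline a dual-domain generator improves on. The mask pattern and rate below are illustrative, not those used in the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

def undersample_zero_filled(image, mask):
    """Retrospective CS-MRI model: keep k-space samples where mask == 1,
    then reconstruct by zero-filled inverse FFT (magnitude image)."""
    kspace = np.fft.fftshift(np.fft.fft2(image))
    return np.abs(np.fft.ifft2(np.fft.ifftshift(kspace * mask)))

image = rng.random((64, 64))
mask = (rng.random((64, 64)) < 0.25).astype(float)   # ~4x random undersampling
mask[28:36, :] = 1.0                                  # fully sample low frequencies
recon = undersample_zero_filled(image, mask)
print(recon.shape)
```

A dual-domain approach like the one described would refine both the masked k-space and this zero-filled image estimate, rather than operating in the image domain alone.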

Skalidis I, Sayah N, Benamer H, Amabile N, Laforgia P, Champagne S, Hovasse T, Garot J, Garot P, Akodad M

pubmed papers, Aug 6 2025
Integration of AI and XR in TAVR is revolutionizing the management of severe aortic stenosis by enhancing diagnostic accuracy, risk stratification, and pre-procedural planning. Advanced algorithms now facilitate precise electrocardiographic, echocardiographic, and CT-based assessments that reduce observer variability and enable patient-specific risk prediction. Immersive XR technologies, including augmented, virtual, and mixed reality, improve spatial visualization of complex cardiac anatomy and support real-time procedural guidance. Despite these advancements, standardized protocols, regulatory frameworks, and ethical safeguards remain necessary for widespread clinical adoption.

Yang T, Wang Y, Zhu G, Liu W, Cao J, Liu Y, Lu F, Yang J

pubmed papers, Aug 6 2025
Efficient and accurate preoperative assessment of the right-sided heart structural complex (RSHSc) is crucial for planning transcatheter tricuspid valve replacement (TTVR). However, current manual methods remain time-consuming and inconsistent. To address this unmet clinical need, this study aimed to develop and validate TRI-PLAN, the first fully automated, deep learning (DL)-based framework for pre-TTVR assessment. A total of 140 preprocedural computed tomography angiography (CTA) scans (63,962 slices) from patients with severe tricuspid regurgitation (TR) at two high-volume cardiac centers in China were retrospectively included. The patients were divided into a training cohort (n = 100), an internal validation cohort (n = 20), and an external validation cohort (n = 20). TRI-PLAN was built around a dual-stage right heart assessment network (DRA-Net) that segments the RSHSc and localizes the tricuspid annulus (TA), followed by automated measurement of key anatomical parameters and right ventricular ejection fraction (RVEF). Performance was comprehensively evaluated in terms of accuracy, interobserver benchmark comparison, clinical usability, and workflow efficiency. TRI-PLAN achieved expert-level segmentation accuracy (volumetric Dice 0.952/0.955; surface Dice 0.934/0.940), precise localization (standard deviation 1.18/1.14 mm), excellent measurement agreement (ICC 0.984/0.979), and reliable RVEF evaluation (R = 0.97, bias < 5%) across the internal and external cohorts. In addition, TRI-PLAN obtained a direct acceptance rate of 80% and reduced total assessment time from 30 min manually to under 2 min (>95% time saving). TRI-PLAN provides an accurate, efficient, and clinically applicable solution for pre-TTVR assessment, with strong potential to streamline TTVR planning and enhance procedural outcomes.

Dadashkarimi, M.

medrxiv preprint, Aug 6 2025
Dynamic Positron Emission Tomography (PET) scans offer rich spatiotemporal data for detecting malignancies, but their high dimensionality and noise pose significant challenges. We introduce a novel framework, the Equivariant Spatiotemporal Transformer with MDL-Guided Feature Selection (EST-MDL), which integrates group-theoretic symmetries, Kolmogorov complexity, and Minimum Description Length (MDL) principles. By enforcing spatial and temporal symmetries (e.g., translations and rotations) and leveraging MDL for robust feature selection, our model achieves improved generalization and interpretability. Evaluated on three real-world PET datasets (LUNG-PET, BRAIN-PET, and BREAST-PET), our approach achieves AUCs of 0.94, 0.92, and 0.95, respectively, outperforming CNNs, Vision Transformers (ViTs), and Graph Neural Networks (GNNs) in AUC, sensitivity, specificity, and computational efficiency. This framework offers a robust, interpretable solution for malignancy detection in clinical settings.

Al-Mashhadani, M., Ajaz, F., Guraya, S. S., Ennab, F.

medrxiv preprint, Aug 6 2025
Background: Large Language Models (LLMs) represent an ever-emerging and rapidly evolving generative artificial intelligence (AI) modality with promising developments in the field of medical education. LLMs can provide automated feedback services to medical trainees (i.e. medical students, residents, fellows, etc.) and may serve a role in medical imaging education. Aim: This systematic review aims to comprehensively explore the current applications and educational outcomes of LLMs in providing automated feedback on medical imaging reports. Methods: This study employs a comprehensive systematic review strategy, involving an extensive search of the literature (PubMed, Scopus, Embase, and Cochrane), data extraction, and synthesis of the data. Conclusion: This systematic review will highlight best practices for LLM use in automated feedback on medical imaging reports and guide further development of these models.

Pouyan Navard, Yasemin Ozkut, Srikar Adhikari, Elaine Situ-LaCasse, Josie Acuña, Adrienne Yarnish, Alper Yilmaz

arxiv preprint, Aug 5 2025
Retinal detachment (RD) is a vision-threatening condition that requires timely intervention to preserve vision. Macular involvement, i.e. whether the macula is still intact (macula-intact) or detached (macula-detached), is the key determinant of visual outcomes and treatment urgency. Point-of-care ultrasound (POCUS) offers a fast, non-invasive, cost-effective, and accessible imaging modality widely used in diverse clinical settings to detect RD. However, ultrasound image interpretation is limited by a lack of expertise among healthcare providers, especially in resource-limited settings. Deep learning offers the potential to automate ultrasound-based assessment of RD. However, no machine learning (ML) ultrasound algorithms are currently available for clinical use to detect RD, and no prior research has assessed macular status using ultrasound in RD cases, an essential distinction for surgical prioritization. Moreover, no public dataset currently supports macular-based RD classification using ultrasound video clips. We introduce Eye Retinal DEtachment ultraSound (ERDES), the first open-access dataset of ocular ultrasound clips labeled for (i) presence of retinal detachment and (ii) macula-intact versus macula-detached status. The dataset is intended to facilitate the development and evaluation of machine learning models for detecting retinal detachment. We also provide baseline benchmarks using multiple spatiotemporal convolutional neural network (CNN) architectures. All clips, labels, and training code are publicly available at https://osupcvlab.github.io/ERDES/.

Jayasuriya NM, Feng E, Nathani KR, Delawan M, Katsos K, Bhagra O, Freedman BA, Bydon M

pubmed papers, Aug 5 2025
Bone health is a critical determinant of spine surgery outcomes, yet many patients undergo procedures without adequate preoperative assessment due to limitations in current bone quality assessment methods. This study aimed to develop and validate an artificial intelligence-based algorithm that predicts Vertebral Bone Quality (VBQ) scores from routine MRI scans, enabling improved preoperative identification of patients at risk for poor surgical outcomes. This study utilized 257 lumbar spine T1-weighted MRI scans from the SPIDER challenge dataset. VBQ scores were calculated through a three-step process: selecting the mid-sagittal slice, measuring vertebral body signal intensity from L1-L4, and normalizing by cerebrospinal fluid signal intensity. A YOLOv8 model was developed to automate region-of-interest placement and VBQ score calculation. The system was validated against manual annotations from 47 lumbar spine surgery patients, with performance evaluated using precision, recall, mean average precision, intraclass correlation coefficient, Pearson correlation, RMSE, and mean error. The YOLOv8 model demonstrated high accuracy in vertebral body detection (precision: 0.9429, recall: 0.9076, mAP@0.5: 0.9403, mAP@[0.5:0.95]: 0.8288). Strong interrater reliability was observed, with ICC values of 0.95 (human-human) and 0.88 and 0.93 (human-AI). Pearson correlations for VBQ scores between human and AI measurements were 0.86 and 0.9, with RMSE values of 0.58 and 0.42, respectively. The AI-based algorithm accurately predicts VBQ scores from routine lumbar MRIs. This approach has potential to enhance early identification and intervention for patients with poor bone health, leading to improved surgical outcomes. Further external validation is recommended to ensure generalizability and clinical applicability.
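The three-step VBQ calculation the pipeline automates reduces to a simple ratio. A minimal sketch follows, assuming the conventional median-over-L1-L4 definition of VBQ (the abstract does not state the exact aggregation) and using hypothetical ROI intensities:

```python
import statistics

def vbq_score(vertebral_si, csf_si):
    """VBQ as described: vertebral body signal intensity across L1-L4
    (aggregated here by median, per the common definition),
    normalized by cerebrospinal fluid signal intensity."""
    return statistics.median(vertebral_si) / csf_si

l1_l4 = [310.0, 325.0, 300.0, 290.0]   # hypothetical ROI mean intensities, L1-L4
csf = 120.0                             # hypothetical CSF ROI mean intensity
print(round(vbq_score(l1_l4, csf), 3))
```

The detection model's role is only to place these ROIs automatically on the mid-sagittal slice; the score itself is this normalization, which is what makes the AI output directly comparable to manual measurements.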
