Page 39 of 352 · 3516 results

Artificial intelligence with feature fusion empowered enhanced brain stroke detection and classification for disabled persons using biomedical images.

Alsieni M, Alyoubi KH

PubMed · Aug 9, 2025
Brain stroke is an illness that affects almost every age group, particularly people over 65. There are two major kinds of stroke: ischemic and hemorrhagic. Blockage of brain vessels causes an ischemic stroke, while rupture of blood vessels in or around the brain causes a hemorrhagic stroke. Prompt diagnosis of brain stroke can substantially improve patient outcomes. Recognizing strokes from medical imaging is crucial for early diagnosis and treatment planning. However, access to advanced imaging methods is limited, particularly in developing countries, making it challenging to assess brain stroke appropriately in disabled patients. Hence, the development of more accurate, faster, and more reliable diagnostic models for the timely recognition and efficient treatment of ischemic stroke is greatly needed. Artificial intelligence technologies, primarily deep learning (DL), have been widely employed in medical imaging for automated detection. This paper presents an Enhanced Brain Stroke Detection and Classification using Artificial Intelligence with Feature Fusion Technologies (EBSDC-AIFFT) model, which aims to improve diagnostic accuracy for individuals with disabilities using biomedical images. Initially, the image pre-processing stage involves resizing, normalization, data augmentation, and data splitting to enhance image quality. The EBSDC-AIFFT model then combines the Inception-ResNet-v2 model, the convolutional block attention module (CBAM)-ResNet18 method, and the multi-axis vision transformer technique for feature extraction. Finally, a variational autoencoder (VAE) model performs the classification. The performance of the EBSDC-AIFFT technique is validated on the brain stroke CT image dataset.
The comparison study of the EBSDC-AIFFT technique demonstrated a superior accuracy value of 99.09% over existing models.
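The feature-fusion step described above, concatenating embeddings from several backbones before a single classifier, can be sketched as follows (the feature dimensions and array contents here are illustrative assumptions, not values from the paper):

```python
import numpy as np

# Hypothetical per-image feature vectors from the three backbones named in
# the abstract; the dimensions are assumed for illustration only.
f_inception = np.ones((1, 1536))   # Inception-ResNet-v2 features
f_resnet18  = np.ones((1, 512))    # CBAM-ResNet18 features
f_maxvit    = np.ones((1, 768))    # multi-axis vision transformer features

# Feature fusion by concatenation along the feature axis.
fused = np.concatenate([f_inception, f_resnet18, f_maxvit], axis=1)
```

The fused vector (here 1 x 2816) would then feed a downstream classifier such as the VAE stage.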

AI tumor delineation for all breathing phases in early-stage NSCLC.

DelaO-Arevalo LR, Sijtsema NM, van Dijk LV, Langendijk JA, Wijsman R, van Ooijen PMA

pubmed logopapersAug 9 2025
Accurate delineation of the Gross Tumor Volume (GTV) and the Internal Target Volume (ITV) in early-stage lung tumors is crucial in Stereotactic Body Radiation Therapy (SBRT). Traditionally, the ITVs, which account for breathing motion, are generated by manually contouring GTVs across all breathing phases (BPs), a time-consuming process. This research aims to streamline this workflow by developing a deep learning algorithm to automatically delineate GTVs in all four-dimensional computed tomography (4D-CT) BPs for early-stage non-small cell lung cancer (NSCLC) patients. A dataset of 214 early-stage NSCLC patients treated with SBRT was used. Each patient had a 4D-CT scan containing ten reconstructed BPs. The data were divided into a training set (75%) and a testing set (25%). Three models, SwinUNETR, Dynamic UNet (DynUNet), and a hybrid combining both (Swin+Dyn), were trained and evaluated using the Dice Similarity Coefficient (DSC), the 3 mm Surface Dice Similarity Coefficient (SDSC), and the 95th percentile Hausdorff distance (HD95). The best-performing model was used to delineate GTVs in all test-set BPs, and ITVs were created using two methods: all 10 phases, and only the maximum inspiration/expiration phases. These ITVs were compared to the ground-truth ITVs. The Swin+Dyn model achieved the highest performance, with a test-set SDSC of 0.79 ± 0.14 for GTV 50%. For the ITVs, the SDSC was 0.79 ± 0.16 using all 10 BPs and 0.77 ± 0.14 using 2 BPs. At the voxel level, the Swin+Dyn network achieved a sensitivity of 0.75 ± 0.14 and precision of 0.84 ± 0.10 for the 2-phase ITV, and a sensitivity of 0.79 ± 0.12 and precision of 0.80 ± 0.11 for the 10-phase ITV. The Swin+Dyn algorithm, trained on the maximum-expiration CT scan, effectively delineated gross tumor volumes in all breathing phases, and the resulting ITVs showed good agreement with the ground truth (surface DSC = 0.79 ± 0.16 using all 10 BPs; 0.77 ± 0.14 using 2 BPs).
The proposed approach could reduce delineation time and inter-performer variability in the tumor contouring process for NSCLC SBRT workflows.
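The Dice-type overlap metrics reported above reduce to a simple ratio over binary masks. A minimal sketch of the volumetric DSC follows (the surface variant additionally restricts the comparison to boundary voxels within a tolerance, which is omitted here; the toy masks are invented):

```python
import numpy as np

def dice_coefficient(pred: np.ndarray, truth: np.ndarray) -> float:
    """Volumetric Dice similarity coefficient between two binary masks."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    overlap = np.logical_and(pred, truth).sum()
    denom = pred.sum() + truth.sum()
    return 2.0 * overlap / denom if denom else 1.0  # both empty: perfect match

# Toy 1D "masks": 3 voxels overlap, with 4 predicted and 4 true voxels in
# total, so DSC = 2*3 / (4 + 4) = 0.75.
pred = np.array([1, 1, 1, 1, 0, 0])
truth = np.array([0, 1, 1, 1, 1, 0])
```

The same formula applies unchanged to 3D CT masks, since the sums run over all voxels regardless of array shape.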

Supporting intraoperative margin assessment using deep learning for automatic tumour segmentation in breast lumpectomy micro-PET-CT.

Maris L, Göker M, De Man K, Van den Broeck B, Van Hoecke S, Van de Vijver K, Vanhove C, Keereman V

PubMed · Aug 9, 2025
Complete tumour removal is vital in curative breast cancer (BCa) surgery to prevent recurrence. Recently, [18F]FDG micro-PET-CT of lumpectomy specimens has shown promise for intraoperative margin assessment (IMA). To aid interpretation, we trained a 2D Residual U-Net to delineate invasive carcinoma of no special type in micro-PET-CT lumpectomy images. We collected 53 BCa lamella images from 19 patients with true histopathology-defined tumour segmentations. Group five-fold cross-validation yielded a Dice similarity coefficient of 0.71 ± 0.20 for segmentation. An ensemble model was then generated to segment tumours and predict margin status. Comparing predicted and true histopathological margin status in a separate set of 31 micro-PET-CT lumpectomy images from 31 patients achieved an F1 score of 84%, closely matching the mean performance of seven physicians who manually interpreted the same images. This model represents an important step towards a decision-support system that enhances micro-PET-CT-based IMA in BCa, facilitating its clinical adoption.
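The F1 score used to compare model and physician margin calls is the harmonic mean of precision and recall over binary predictions; a minimal stand-alone sketch (the labels below are invented for illustration, not study data):

```python
def f1_score_binary(y_true, y_pred):
    """F1 = harmonic mean of precision and recall for binary labels."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return 2 * precision * recall / (precision + recall) if precision + recall else 0.0

# Invented margin calls (1 = positive margin): tp=2, fp=1, fn=1,
# so precision = recall = 2/3 and F1 = 2/3.
f1 = f1_score_binary([1, 1, 0, 0, 1], [1, 0, 0, 1, 1])
```

Because F1 ignores true negatives, it is a natural choice when, as here, correctly flagging positive margins matters most.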

LWT-ARTERY-LABEL: A Lightweight Framework for Automated Coronary Artery Identification

Shisheng Zhang, Ramtin Gharleghi, Sonit Singh, Daniel Moses, Dona Adikari, Arcot Sowmya, Susann Beier

arXiv preprint · Aug 9, 2025
Coronary artery disease (CAD) remains the leading cause of death globally, with computed tomography coronary angiography (CTCA) serving as a key diagnostic tool. However, coronary arterial analysis using CTCA, such as identifying artery-specific features from computational modelling, is labour-intensive and time-consuming. Automated anatomical labelling of coronary arteries offers a potential solution, yet the inherent anatomical variability of coronary trees presents a significant challenge. Traditional knowledge-based labelling methods fall short in leveraging data-driven insights, while recent deep-learning approaches often demand substantial computational resources and overlook critical clinical knowledge. To address these limitations, we propose a lightweight method that integrates anatomical knowledge with rule-based topology constraints for effective coronary artery labelling. Our approach achieves state-of-the-art performance on benchmark datasets, providing a promising alternative for automated coronary artery labelling.

Neurobehavioral mechanisms of fear and anxiety in multiple sclerosis.

Meyer-Arndt L, Rust R, Bellmann-Strobl J, Schmitz-Hübsch T, Marko L, Forslund S, Scheel M, Gold SM, Hetzer S, Paul F, Weygandt M

PubMed · Aug 9, 2025
Anxiety is a common yet often underdiagnosed and undertreated comorbidity in multiple sclerosis (MS). While altered fear processing is a hallmark of anxiety in other populations, its neurobehavioral mechanisms in MS remain poorly understood. This study investigates the extent to which neurobehavioral mechanisms of fear generalization contribute to anxiety in MS. We recruited 18 persons with MS (PwMS) and anxiety, 36 PwMS without anxiety, and 23 healthy persons (HPs). Participants completed a functional MRI (fMRI) fear generalization task to assess fear processing and diffusion-weighted MRI for graph-based structural connectome analyses. Consistent with findings in non-MS anxiety populations, PwMS with anxiety exhibit fear overgeneralization, perceiving non-threatening stimuli as threatening. A machine learning model trained on HPs in a multivariate pattern analysis (MVPA) cross-decoding approach accurately predicts behavioral fear generalization in both MS groups using whole-brain fMRI fear response patterns. Regional fMRI prediction and graph-based structural connectivity analyses reveal that fear response activity and structural network integrity of partially overlapping areas, such as the hippocampus (for fear stimulus comparison) and anterior insula (for fear excitation), are crucial for MS fear generalization. Reduced network integrity in such regions is a direct indicator of MS anxiety. Our findings demonstrate that MS anxiety is substantially characterized by fear overgeneralization. The fact that a machine learning model trained to associate fMRI fear response patterns with fear ratings in HPs predicts fear ratings from fMRI data across MS groups using an MVPA cross-decoding approach suggests that generic fear processing mechanisms substantially contribute to anxiety in MS.

Trustworthy Medical Imaging with Large Language Models: A Study of Hallucinations Across Modalities

Anindya Bijoy Das, Shahnewaz Karim Sakib, Shibbir Ahmed

arXiv preprint · Aug 9, 2025
Large Language Models (LLMs) are increasingly applied to medical imaging tasks, including image interpretation and synthetic image generation. However, these models often produce hallucinations: confident but incorrect outputs that can mislead clinical decisions. This study examines hallucinations in two directions: image-to-text, where LLMs generate reports from X-ray, CT, or MRI scans, and text-to-image, where models create medical images from clinical prompts. We analyze errors such as factual inconsistencies and anatomical inaccuracies, evaluating outputs using expert-informed criteria across imaging modalities. Our findings reveal common patterns of hallucination in both interpretive and generative tasks, with implications for clinical reliability. We also discuss factors contributing to these failures, including model architecture and training data. By systematically studying both image understanding and generation, this work provides insights into improving the safety and trustworthiness of LLM-driven medical imaging systems.

Quantitative radiomic analysis of computed tomography scans using machine and deep learning techniques accurately predicts histological subtypes of non-small cell lung cancer: A retrospective analysis.

Panchawagh S, Halder A, Haldule S, Sanker V, Lalwani D, Sequeria R, Naik H, Desai A

PubMed · Aug 9, 2025
Non-small cell lung cancer (NSCLC) histological subtypes impact treatment decisions. While pre-surgical histopathological examination is ideal, it is not always feasible. CT radiomic analysis shows promise in predicting NSCLC histological subtypes. This study aimed to predict NSCLC histological subtypes with machine learning and deep learning models trained on radiomic features. A total of 422 lung CT scans from The Cancer Imaging Archive (TCIA) were analyzed. Primary neoplasms were segmented by expert radiologists. Using PyRadiomics, 2446 radiomic features were extracted; after feature selection, 179 features remained. Machine learning models such as logistic regression (LR), support vector machine (SVM), random forest (RF), XGBoost, LightGBM, and CatBoost were employed, alongside a deep neural network (DNN) model. RF demonstrated the highest accuracy at 78% (95% CI: 70%-84%) and an AUC-ROC of 94% (95% CI: 90%-96%). LightGBM, XGBoost, and CatBoost had AUC-ROC values of 95%, 93%, and 93%, respectively. The DNN's AUC was 94.4% (95% CI: 94.1%-94.6%). Logistic regression performed worst. For histological subtype prediction, random forest, boosting models, and the DNN were superior. Quantitative radiomic analysis with machine learning can accurately determine NSCLC histological subtypes. Random forest, ensemble models, and DNNs show significant promise for pre-operative NSCLC classification, which can streamline therapy decisions.
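The radiomics-plus-classifier pipeline described above, minus the image feature extraction, can be sketched with synthetic features standing in for PyRadiomics output (all data below are randomly generated assumptions, not the TCIA cohort, and the AUC is for the toy data only):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
# Synthetic "radiomic" feature matrix: 200 lesions x 20 features, where
# feature 0 weakly separates two invented histological subtypes.
X = rng.normal(size=(200, 20))
y = (X[:, 0] + 0.5 * rng.normal(size=200) > 0).astype(int)

# Train/test split and a random forest classifier, mirroring the best model
# reported in the abstract.
X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.25, random_state=0, stratify=y)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
auc = roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1])
```

In a real pipeline, X would come from PyRadiomics feature extraction on the segmented lesions, with a feature-selection step before classification.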

Fusion-Based Brain Tumor Classification Using Deep Learning and Explainable AI, and Rule-Based Reasoning

Melika Filvantorkaman, Mohsen Piri, Maral Filvan Torkaman, Ashkan Zabihi, Hamidreza Moradi

arXiv preprint · Aug 9, 2025
Accurate and interpretable classification of brain tumors from magnetic resonance imaging (MRI) is critical for effective diagnosis and treatment planning. This study presents an ensemble-based deep learning framework that combines MobileNetV2 and DenseNet121 convolutional neural networks (CNNs) using a soft voting strategy to classify three common brain tumor types: glioma, meningioma, and pituitary adenoma. The models were trained and evaluated on the Figshare dataset using a stratified 5-fold cross-validation protocol. To enhance transparency and clinical trust, the framework integrates an Explainable AI (XAI) module employing Grad-CAM++ for class-specific saliency visualization, alongside a symbolic Clinical Decision Rule Overlay (CDRO) that maps predictions to established radiological heuristics. The ensemble classifier achieved superior performance compared to individual CNNs, with an accuracy of 91.7%, precision of 91.9%, recall of 91.7%, and F1-score of 91.6%. Grad-CAM++ visualizations revealed strong spatial alignment between model attention and expert-annotated tumor regions, supported by Dice coefficients up to 0.88 and IoU scores up to 0.78. Clinical rule activation further validated model predictions in cases with distinct morphological features. A human-centered interpretability assessment involving five board-certified radiologists yielded high Likert-scale scores for both explanation usefulness (mean = 4.4) and heatmap-region correspondence (mean = 4.0), reinforcing the framework's clinical relevance. Overall, the proposed approach offers a robust, interpretable, and generalizable solution for automated brain tumor classification, advancing the integration of deep learning into clinical neurodiagnostics.
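The soft-voting step described above simply averages each backbone's class probabilities before taking the argmax; a minimal sketch with invented softmax outputs (not the paper's numbers):

```python
import numpy as np

def soft_vote(prob_a: np.ndarray, prob_b: np.ndarray) -> np.ndarray:
    """Average per-class probabilities from two classifiers, then argmax."""
    return ((prob_a + prob_b) / 2.0).argmax(axis=1)

# Invented softmax outputs for 2 MRI slices over the 3 tumor classes
# (glioma, meningioma, pituitary) from the two backbone CNNs.
p_mobilenet = np.array([[0.6, 0.3, 0.1],
                        [0.2, 0.5, 0.3]])
p_densenet  = np.array([[0.4, 0.5, 0.1],
                        [0.1, 0.3, 0.6]])
labels = soft_vote(p_mobilenet, p_densenet)  # class index per slice
```

Note that on the first slice the two networks disagree on the top class, and the averaged probabilities resolve the conflict, which is the main appeal of soft voting over hard majority voting.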

FoundBioNet: A Foundation-Based Model for IDH Genotyping of Glioma from Multi-Parametric MRI

Somayeh Farahani, Marjaneh Hejazi, Antonio Di Ieva, Sidong Liu

arXiv preprint · Aug 9, 2025
Accurate, noninvasive detection of isocitrate dehydrogenase (IDH) mutation is essential for effective glioma management. Traditional methods rely on invasive tissue sampling, which may fail to capture a tumor's spatial heterogeneity. While deep learning models have shown promise in molecular profiling, their performance is often limited by scarce annotated data. In contrast, foundation deep learning models offer a more generalizable approach for glioma imaging biomarkers. We propose a Foundation-based Biomarker Network (FoundBioNet) that utilizes a SWIN-UNETR-based architecture to noninvasively predict IDH mutation status from multi-parametric MRI. Two key modules are incorporated: Tumor-Aware Feature Encoding (TAFE) for extracting multi-scale, tumor-focused features, and Cross-Modality Differential (CMD) for highlighting subtle T2-FLAIR mismatch signals associated with IDH mutation. The model was trained and validated on a diverse, multi-center cohort of 1705 glioma patients from six public datasets. Our model achieved AUCs of 90.58%, 88.08%, 65.41%, and 80.31% on independent test sets from EGD, TCGA, Ivy GAP, RHUH, and UPenn, consistently outperforming baseline approaches (p <= 0.05). Ablation studies confirmed that both the TAFE and CMD modules are essential for improving predictive accuracy. By integrating large-scale pretraining and task-specific fine-tuning, FoundBioNet enables generalizable glioma characterization. This approach enhances diagnostic accuracy and interpretability, with the potential to enable more personalized patient care.

Dense breasts and women's health: which screenings are essential?

Mota BS, Shimizu C, Reis YN, Gonçalves R, Soares Junior JM, Baracat EC, Filassi JR

PubMed · Aug 9, 2025
This review synthesizes current evidence regarding optimal breast cancer screening strategies for women with dense breasts, a population at increased risk due to decreased mammographic sensitivity. A systematic literature review was performed in accordance with PRISMA criteria, covering MEDLINE, EMBASE, CINAHL Plus, Scopus, and Web of Science until May 2025. The analysis examines advanced imaging techniques such as digital breast tomosynthesis (DBT), contrast-enhanced spectral mammography (CESM), ultrasound, and magnetic resonance imaging (MRI), assessing their effectiveness in addressing the shortcomings of traditional mammography in dense breast tissue. The review rigorously evaluates the incorporation of risk stratification models, such as the BCSC, in customizing screening regimens, in conjunction with innovative technologies like liquid biopsy and artificial intelligence-based image analysis for improved risk prediction. A key emphasis is placed on the heterogeneity in international screening guidelines and the challenges in translating research findings to diverse clinical settings, particularly in resource-constrained environments. The discussion includes ethical implications regarding compulsory breast density notification and the possibility of intensifying disparities in health care. The review ultimately encourages the development of evidence-based, context-specific guidelines that facilitate equitable access to effective breast cancer screening for all women with dense breasts.