
Intelligent health model for medical imaging to guide laymen using neural cellular automata.

Sharma SK, Chowdhary CL, Sharma VS, Rasool A, Khan AA

PubMed · May 20, 2025
A layman in health systems is a person who has no knowledge of health data, i.e., X-ray, MRI, and CT scan images and health examination reports. The motivation behind the proposed invention is to make medical images understandable to laymen. The health model is trained using a neural network approach that analyses user health examination data, predicts the type and level of the disease, and advises precautions to the user. Cellular Automata (CA) technology has been integrated with the neural networks to segment the medical image. The CA analyzes the medical image pixel by pixel and generates a robust threshold value that enables efficient segmentation and accurate identification of abnormal spots in the medical image. The proposed method was trained and evaluated on more than 10,000 medical images taken from various open datasets. Text analysis measures, i.e., BLEU, ROUGE, and WER, are used in the research to validate the produced report. BLEU and ROUGE compute a similarity that indicates how close the generated text report is to the original report. The BLEU and ROUGE scores of the experimented images, approximately 0.62 and 0.90 respectively, indicate that the produced report is very close to the original report, and the WER score of 0.14 indicates that the generated report contains the most relevant words. In summary, the proposed research provides laymen with a useful medical report identifying the disease and appropriate precautions.
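
The report-validation metrics named above can be computed with standard open-source packages; a minimal sketch, assuming the nltk, rouge-score, and jiwer libraries and two illustrative one-sentence reports (not the study's data or code):

```python
# Sketch: scoring a generated report against a reference report with BLEU,
# ROUGE, and WER, as the abstract describes. Not the authors' code; assumes
# the nltk, rouge-score, and jiwer packages are installed.
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction
from rouge_score import rouge_scorer
import jiwer

reference = "mild opacity in the lower left lung consistent with early pneumonia"
generated = "mild opacity in the left lower lung suggesting early pneumonia"

# BLEU: n-gram precision of the generated report against the reference.
bleu = sentence_bleu([reference.split()], generated.split(),
                     smoothing_function=SmoothingFunction().method1)

# ROUGE-L: longest-common-subsequence overlap (recall-oriented).
rouge = rouge_scorer.RougeScorer(["rougeL"]).score(reference, generated)["rougeL"].fmeasure

# WER: word-level edit distance; lower is better.
wer = jiwer.wer(reference, generated)

print(f"BLEU={bleu:.2f}  ROUGE-L={rouge:.2f}  WER={wer:.2f}")
```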

Diagnostic value of fully automated CT pulmonary angiography in patients with chronic thromboembolic pulmonary hypertension and chronic thromboembolic disease.

Lin Y, Li M, Xie S

PubMed · May 20, 2025
To evaluate the value of employing artificial intelligence (AI)-assisted CT pulmonary angiography (CTPA) in patients with chronic thromboembolic pulmonary hypertension (CTEPH) and chronic thromboembolic disease (CTED). A single-center, retrospective analysis of 350 sequential patients with right heart catheterization (RHC)-confirmed CTEPH, CTED, and normal controls was conducted. Parameters such as the main pulmonary artery diameter (MPAd), the ratio of MPAd to ascending aorta diameter (MPAd/AAd), the ratio of right to left ventricle diameter (RVd/LVd), and the ratio of RV to LV volume (RVv/LVv) were evaluated using automated AI software and compared with manual analysis. Reliability was assessed through an intraclass correlation coefficient (ICC) analysis, and diagnostic accuracy was determined using receiver-operating characteristic (ROC) curves. Compared with the CTED and control groups, CTEPH patients were significantly more likely to have elevated automated CTPA metrics (all p < 0.001). Automated MPAd, MPAd/AAd, and RVv/LVv correlated strongly with mPAP (r = 0.952, 0.904, and 0.815, respectively; all p < 0.001). The automated and manual CTPA analyses showed strong concordance. For distinguishing CTEPH from CTED, the optimal area under the curve (AU-ROC) was 0.939 (CI: 0.908-0.969); for CTEPH versus controls, 0.970 (CI: 0.953-0.988); and for CTED versus controls, 0.782 (CI: 0.724-0.840). Automated AI-driven CTPA analysis provides a dependable approach for evaluating patients with CTEPH, CTED, and normal controls, demonstrating excellent consistency and efficiency. Question: Guidelines do not advocate applying CTEPH treatment protocols to patients with CTED, so early detection of the condition is crucial. Findings: Automated CTPA analysis was feasible in 100% of patients, showed good agreement with manual analysis, and would add information for early detection and identification. Clinical relevance: Automated AI-driven CTPA analysis provides a reliable approach with excellent consistency and efficiency; additionally, these noninvasive imaging findings may aid in treatment stratification and in determining optimal intervention directed by RHC.
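
The ROC analysis described above reduces to scoring each automated metric against the diagnostic label; a minimal sketch with synthetic MPAd values (all numbers, group sizes, and the cutoff rule are illustrative assumptions, not the study's data):

```python
# Sketch: evaluating one automated CTPA metric (e.g., MPAd) as a classifier
# for CTEPH vs. CTED with an ROC curve, in the spirit of the analysis the
# abstract describes. Illustrative values only; not the study's data or code.
import numpy as np
from sklearn.metrics import roc_auc_score, roc_curve

rng = np.random.default_rng(0)
mpad_cteph = rng.normal(34.0, 4.0, 60)   # hypothetical MPAd (mm) in CTEPH
mpad_cted  = rng.normal(28.0, 4.0, 60)   # hypothetical MPAd (mm) in CTED

values = np.concatenate([mpad_cteph, mpad_cted])
labels = np.concatenate([np.ones(60), np.zeros(60)])  # 1 = CTEPH

auc = roc_auc_score(labels, values)
fpr, tpr, thresholds = roc_curve(labels, values)
# Youden's J picks the threshold maximizing sensitivity + specificity - 1.
best = np.argmax(tpr - fpr)
print(f"AUC={auc:.3f}, optimal MPAd cutoff ~ {thresholds[best]:.1f} mm")
```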

Mask of Truth: Model Sensitivity to Unexpected Regions of Medical Images.

Sourget T, Hestbek-Møller M, Jiménez-Sánchez A, Junchi Xu J, Cheplygina V

PubMed · May 20, 2025
The development of larger models for medical image analysis has led to increased performance. However, it has also reduced our ability to explain and validate model decisions. Models can exploit non-relevant parts of images, known as spurious correlations or shortcuts, to obtain high performance on benchmark datasets yet fail in real-world scenarios. In this work, we challenge the capacity of convolutional neural networks (CNNs) to classify chest X-rays and eye fundus images when clinically relevant parts of the image are masked out. We show that all models trained on the PadChest dataset, irrespective of the masking strategy, obtain an area under the curve (AUC) above random. Moreover, models trained on full images perform well on images without the region of interest (ROI), even better than on images containing only the ROI. We also reveal a possible spurious correlation in the Chákṣu dataset, although its performances are more aligned with the expectations of an unbiased model. We go beyond performance analysis by using the explainability method SHAP and by analyzing embeddings. We also asked a radiology resident to interpret chest X-rays under different masking conditions to complement our findings with clinical knowledge.
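
The core masking experiment can be reproduced with a few lines of array manipulation; a minimal sketch, assuming a trained classifier and a known ROI bounding box (both hypothetical here, not the paper's implementation):

```python
# Sketch: the masking experiment the abstract describes -- black out a
# clinically relevant region (a hypothetical bounding box here) or keep only
# that region, then compare classifier AUC across the three image variants.
import numpy as np

def mask_roi(image: np.ndarray, box: tuple) -> np.ndarray:
    """Return a copy of `image` with the ROI box (y0, y1, x0, x1) zeroed out."""
    y0, y1, x0, x1 = box
    out = image.copy()
    out[y0:y1, x0:x1] = 0
    return out

def keep_only_roi(image: np.ndarray, box: tuple) -> np.ndarray:
    """Return a copy of `image` that is zero everywhere except the ROI box."""
    y0, y1, x0, x1 = box
    out = np.zeros_like(image)
    out[y0:y1, x0:x1] = image[y0:y1, x0:x1]
    return out

img = np.random.rand(224, 224)   # stand-in for a chest X-ray
box = (60, 160, 50, 180)         # hypothetical ROI bounding box
no_roi, roi_only = mask_roi(img, box), keep_only_roi(img, box)

# With a trained model `predict(images) -> scores` and labels `y`, the paper's
# comparison is then: AUC(full) vs. AUC(no_roi) vs. AUC(roi_only); an
# above-random AUC on `no_roi` images signals a shortcut outside the ROI.
```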

Detection of maxillary sinus pathologies using deep learning algorithms.

Aktuna Belgin C, Kurbanova A, Aksoy S, Akkaya N, Orhan K

PubMed · May 20, 2025
Deep learning, a subset of machine learning, is widely utilized in medical applications. Identifying maxillary sinus pathologies before surgical interventions is crucial for ensuring successful treatment outcomes. Cone beam computed tomography (CBCT) is commonly employed for maxillary sinus evaluations due to its high resolution and lower radiation exposure. This study aims to assess the accuracy of artificial intelligence (AI) algorithms in detecting maxillary sinus pathologies from CBCT scans. A dataset comprising 1000 maxillary sinuses (MS) from 500 patients was analyzed using CBCT. Sinuses were categorized based on the presence or absence of pathology, followed by segmentation of the maxillary sinus. Manual segmentation masks were generated using the semiautomatic software ITK-SNAP, which served as the reference for comparison. A convolutional neural network (CNN)-based machine learning model was then implemented to automatically segment maxillary sinus pathologies from CBCT images. To evaluate segmentation accuracy, metrics such as the Dice similarity coefficient (DSC) and intersection over union (IoU) were computed by comparing AI-generated results with the manual segmentations. The automated segmentation model achieved a Dice score of 0.923, a recall of 0.979, an IoU of 0.887, an F1 score of 0.970, and a precision of 0.963. This study successfully developed an AI-driven approach for segmenting maxillary sinus pathologies in CBCT images. The findings highlight the potential of this method for rapid and accurate clinical assessment of maxillary sinus conditions using CBCT imaging.
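
The two overlap metrics used for evaluation are simple set ratios between the predicted and reference masks; a minimal NumPy sketch with toy 2D masks (illustrative, not the study's code):

```python
# Sketch: Dice similarity coefficient (DSC) and intersection over union (IoU)
# between a predicted and a manual (reference) binary segmentation mask.
import numpy as np

def dice(pred: np.ndarray, ref: np.ndarray) -> float:
    """DSC = 2|P ∩ R| / (|P| + |R|)."""
    inter = np.logical_and(pred, ref).sum()
    return 2.0 * inter / (pred.sum() + ref.sum())

def iou(pred: np.ndarray, ref: np.ndarray) -> float:
    """IoU = |P ∩ R| / |P ∪ R|."""
    inter = np.logical_and(pred, ref).sum()
    union = np.logical_or(pred, ref).sum()
    return inter / union

# Toy 2D masks; in practice these would be 3D CBCT label volumes.
pred = np.zeros((64, 64), dtype=bool); pred[10:40, 10:40] = True
ref  = np.zeros((64, 64), dtype=bool); ref[15:45, 15:45] = True
print(f"DSC={dice(pred, ref):.3f}  IoU={iou(pred, ref):.3f}")
```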

An explainable AI-driven deep neural network for accurate breast cancer detection from histopathological and ultrasound images.

Alom MR, Farid FA, Rahaman MA, Rahman A, Debnath T, Miah ASM, Mansor S

PubMed · May 20, 2025
Breast cancer represents a significant global health challenge, which makes it essential to detect breast cancer early and accurately to improve patient prognosis and reduce mortality rates. However, traditional diagnostic processes relying on manual analysis of medical images are inherently complex and subject to inter-observer variability, highlighting the urgent need for robust automated breast cancer detection systems. While deep learning has demonstrated potential, many current models struggle with limited accuracy and lack of interpretability. This research introduces the Deep Neural Breast Cancer Detection (DNBCD) model, an explainable AI-based framework that utilizes deep learning methods for classifying breast cancer using histopathological and ultrasound images. The proposed model employs DenseNet121 as a foundation, integrating customized Convolutional Neural Network (CNN) layers, including GlobalAveragePooling2D, Dense, and Dropout layers, along with transfer learning to achieve both high accuracy and interpretability for breast cancer diagnosis. The proposed DNBCD model integrates several preprocessing techniques, including image normalization and resizing, and augmentation techniques to enhance the model's robustness, and addresses class imbalance using class weights. It employs Grad-CAM (Gradient-weighted Class Activation Mapping) to offer visual justifications for its predictions, increasing trust and transparency among healthcare providers. The model was assessed using two benchmark datasets: Breakhis-400x (B-400x) and the Breast Ultrasound Images Dataset (BUSI), containing 1820 and 1578 images, respectively. We systematically divided the datasets into training (70%), testing (20%), and validation (10%) sets, ensuring efficient model training and evaluation, and obtained accuracies of 93.97% on the B-400x dataset (benign and malignant classes) and 89.87% on the BUSI dataset (benign, malignant, and normal classes) for breast cancer detection. Experimental results demonstrate that the proposed DNBCD model significantly outperforms existing state-of-the-art approaches, with potential uses in clinical environments. We also made all the materials publicly accessible for the research community at: https://github.com/romzanalom/XAI-Based-Deep-Neural-Breast-Cancer-Detection .
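
The described backbone-plus-head design maps directly onto a few Keras calls; a minimal sketch of a DenseNet121 transfer-learning classifier using the layer types the abstract names, where the layer widths, dropout rate, and optimizer settings are assumptions rather than the authors' values:

```python
# Sketch: DenseNet121 transfer learning with the layer types the abstract
# names (GlobalAveragePooling2D, Dense, Dropout). Hyperparameters are
# illustrative assumptions, not the published DNBCD configuration.
import tensorflow as tf
from tensorflow.keras import layers, models
from tensorflow.keras.applications import DenseNet121

def build_dnbcd_like(num_classes: int, input_shape=(224, 224, 3)) -> tf.keras.Model:
    base = DenseNet121(include_top=False, weights="imagenet",
                       input_shape=input_shape)
    base.trainable = False  # transfer learning: freeze pretrained features
    x = layers.GlobalAveragePooling2D()(base.output)
    x = layers.Dense(256, activation="relu")(x)
    x = layers.Dropout(0.5)(x)
    out = layers.Dense(num_classes, activation="softmax")(x)
    return models.Model(base.input, out)

model = build_dnbcd_like(num_classes=3)  # BUSI: benign, malignant, normal
model.compile(optimizer="adam", loss="categorical_crossentropy",
              metrics=["accuracy"])
# Class imbalance can be handled via class weights, as the abstract notes, e.g.:
# model.fit(train_ds, validation_data=val_ds,
#           class_weight={0: 1.0, 1: 2.3, 2: 1.5})  # hypothetical weights
```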

Enhancing pathological myopia diagnosis: a bimodal artificial intelligence approach integrating fundus and optical coherence tomography imaging for precise atrophy, traction and neovascularisation grading.

Xu Z, Yang Y, Chen H, Han R, Han X, Zhao J, Yu W, Yang Z, Chen Y

PubMed · May 20, 2025
Pathological myopia (PM) has emerged as a leading cause of global visual impairment; early detection and precise grading of PM are crucial for timely intervention. The atrophy, traction and neovascularisation (ATN) system is applied to define PM progression and stages with precision. This study focuses on constructing a comprehensive PM image dataset comprising both fundus and optical coherence tomography (OCT) images and developing a bimodal artificial intelligence (AI) classification model for ATN grading in PM. This single-centre retrospective cross-sectional study collected 2760 colour fundus photographs and matching OCT images of PM from January 2019 to November 2022 at Peking Union Medical College Hospital. Ophthalmology specialists labelled and inspected all paired images using the ATN grading system. The AI model used a ResNet-50 backbone and a multimodal multi-instance learning module to enhance interaction across instances from both modalities. Performance comparisons among single-modality fundus, OCT and bimodal AI models were conducted for ATN grading in PM. The bimodal model, dual-deep learning (dual-DL), demonstrated superior accuracy in both detailed multiclassification and biclassification of PM, in line with our observations from instance attention-weight activation maps. The area under the curve for severe PM using dual-DL was 0.9635 (95% CI 0.9380 to 0.9890), compared with 0.9359 (95% CI 0.9027 to 0.9691) for the OCT-only model and 0.9268 (95% CI 0.8915 to 0.9621) for the fundus-only model. Our novel bimodal AI multiclassification model for PM ATN staging proves accurate and beneficial for public health screening and the prompt referral of PM patients.
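
A bimodal, attention-pooled classifier along the lines the abstract sketches can be assembled from standard PyTorch components; the following is a generic sketch, not the paper's multimodal multi-instance learning module, and all layer sizes and pooling choices are assumptions:

```python
# Sketch: fundus + OCT grading with ResNet-50 backbones and attention-based
# pooling over instances from both modalities. Generic illustration only.
import torch
import torch.nn as nn
from torchvision.models import resnet50

class BimodalATNGrader(nn.Module):
    def __init__(self, num_grades: int):
        super().__init__()
        # Drop the final fc layer; keep the 2048-d pooled features.
        self.fundus_enc = nn.Sequential(*list(resnet50(weights="DEFAULT").children())[:-1])
        self.oct_enc = nn.Sequential(*list(resnet50(weights="DEFAULT").children())[:-1])
        self.attn = nn.Linear(2048, 1)          # per-instance attention score
        self.head = nn.Linear(2048, num_grades)

    def forward(self, fundus, oct_slices):
        # fundus: (B, 3, H, W); oct_slices: (B, N, 3, H, W) -- N OCT B-scans.
        f = self.fundus_enc(fundus).flatten(1).unsqueeze(1)    # (B, 1, 2048)
        B, N = oct_slices.shape[:2]
        o = self.oct_enc(oct_slices.flatten(0, 1)).flatten(1)  # (B*N, 2048)
        o = o.view(B, N, -1)                                   # (B, N, 2048)
        inst = torch.cat([f, o], dim=1)                        # (B, 1+N, 2048)
        w = torch.softmax(self.attn(inst), dim=1)              # attention weights
        pooled = (w * inst).sum(dim=1)                         # weighted fusion
        return self.head(pooled)
```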

Deep-Learning Reconstruction for 7T MP2RAGE and SPACE MRI: Improving Image Quality at High Acceleration Factors.

Liu Z, Patel V, Zhou X, Tao S, Yu T, Ma J, Nickel D, Liebig P, Westerhold EM, Mojahed H, Gupta V, Middlebrooks EH

PubMed · May 20, 2025
Deep learning (DL) reconstruction has been successful in realizing otherwise impracticable acceleration factors and improving image quality at conventional MRI field strengths; however, there has been limited application to ultra-high-field MRI. The objective of this study was to evaluate the performance of a prototype DL-based image reconstruction technique in 7T MRI of the brain utilizing MP2RAGE and SPACE acquisitions, in comparison with conventional compressed sensing (CS) and controlled aliasing in parallel imaging (CAIPIRINHA) reconstructions. This retrospective study involved 60 patients who underwent 7T brain MRI between June 2024 and October 2024, comprising 30 patients with MP2RAGE data and 30 patients with SPACE FLAIR data. Each set of raw data was reconstructed with DL-based reconstruction and conventional reconstruction. Image quality was independently assessed by two neuroradiologists using a 5-point Likert scale covering overall image quality, artifacts, sharpness, structural conspicuity, and noise level. Inter-observer agreement was determined using top-box analysis. Contrast-to-noise ratio (CNR) and noise levels were quantitatively evaluated and compared using the Wilcoxon signed-rank test. DL-based reconstruction resulted in a significant increase in overall image quality and a reduction in subjective noise level for both MP2RAGE and SPACE FLAIR data (all P<0.001), with no significant differences in image artifacts (all P>0.05). Compared to standard reconstruction, DL-based reconstruction yielded an increase in CNR of 49.5% [95% CI 33.0-59.0%] for MP2RAGE data and 90.6% [95% CI 73.2-117.7%] for SPACE FLAIR data, along with a decrease in noise of 33.5% [95% CI 23.0-38.0%] for MP2RAGE data and 47.5% [95% CI 41.9-52.6%] for SPACE FLAIR data. DL-based reconstruction of 7T MRI significantly enhanced image quality compared to conventional reconstruction without introducing image artifacts. The achievable high acceleration factors have the potential to substantially improve image quality and resolution in 7T MRI. CAIPIRINHA = Controlled Aliasing In Parallel Imaging Results IN Higher Acceleration; CNR = contrast-to-noise ratio; CS = compressed sensing; DL = deep learning; MNI = Montreal Neurological Institute; MP2RAGE = Magnetization-Prepared 2 Rapid Acquisition Gradient Echoes; SPACE = Sampling Perfection with Application-Optimized Contrasts using Different Flip Angle Evolutions.
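
The quantitative comparison amounts to computing a CNR per reconstruction and testing paired per-patient differences; a minimal sketch with synthetic values (the specific CNR definition and all numbers are illustrative assumptions, not the study's data):

```python
# Sketch: CNR per reconstruction, compared across patients with the Wilcoxon
# signed-rank test, as the abstract describes. Illustrative code and data only.
import numpy as np
from scipy.stats import wilcoxon

def cnr(signal_roi: np.ndarray, background_roi: np.ndarray) -> float:
    """One common CNR definition: |mean difference| / background std."""
    return abs(signal_roi.mean() - background_roi.mean()) / background_roi.std()

rng = np.random.default_rng(1)
# Hypothetical paired per-patient CNR values for the two reconstructions.
cnr_standard = rng.normal(10.0, 2.0, 30)
cnr_dl = cnr_standard * rng.normal(1.5, 0.1, 30)  # ~50% CNR gain, as reported

stat, p = wilcoxon(cnr_dl, cnr_standard)          # paired, nonparametric
gain = 100 * (cnr_dl / cnr_standard - 1).mean()
print(f"Mean paired CNR gain ~ {gain:.0f}%, Wilcoxon p = {p:.2g}")
```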

Neuroimaging Characterization of Acute Traumatic Brain Injury with Focus on Frontline Clinicians: Recommendations from the 2024 National Institute of Neurological Disorders and Stroke Traumatic Brain Injury Classification and Nomenclature Initiative Imaging Working Group.

Mac Donald CL, Yuh EL, Vande Vyvere T, Edlow BL, Li LM, Mayer AR, Mukherjee P, Newcombe VFJ, Wilde EA, Koerte IK, Yurgelun-Todd D, Wu YC, Duhaime AC, Awwad HO, Dams-O'Connor K, Doperalski A, Maas AIR, McCrea MA, Umoh N, Manley GT

PubMed · May 20, 2025
Neuroimaging screening and surveillance are among the first frontline diagnostic tools leveraged in the acute assessment (first 24 h postinjury) of patients suspected of having traumatic brain injury (TBI). While imaging, in particular computed tomography, is used almost universally in emergency departments worldwide to evaluate possible features of TBI, there is currently no agreed-upon reporting system, standard terminology, or framework to contextualize brain imaging findings with other available medical, psychosocial, and environmental data. In 2023, the NIH National Institute of Neurological Disorders and Stroke convened six working groups of international experts in TBI to develop a new framework for nomenclature and classification. The goal of this effort was to propose a more granular system of injury classification that incorporates recent progress in imaging biomarkers, blood-based biomarkers, and injury and recovery modifiers to replace the commonly used Glasgow Coma Scale-based diagnosis groups of mild, moderate, and severe TBI, which have shown relatively poor diagnostic, prognostic, and therapeutic utility. Motivated by prior efforts to standardize the nomenclature for pathoanatomic imaging findings of TBI for research and clinical trials, along with more recent studies supporting the refinement of the originally proposed definitions, the Imaging Working Group sought to update and expand this application specifically for consideration of use in clinical practice. Here we report the recommendations of this working group to enable the translation of structured imaging common data elements to the standard of care. These leverage recent advances in imaging technology, electronic medical record (EMR) systems, and artificial intelligence (AI), along with input from key stakeholders, including patients with lived experience, caretakers, providers across medical disciplines, radiology industry partners, and policymakers. The working group recommended that (1) the definitions of key imaging features used for this system of classification be updated and further refined as new evidence of the underlying pathology driving the signal change is identified; (2) an efficient, integrated tool embedded in the EMR imaging reporting system be developed in collaboration with industry partners; (3) this tool include AI-generated, evidence-based feature clusters with diagnostic, prognostic, and therapeutic implications; and (4) a "patient translator" be developed in parallel to assist patients and families in understanding these imaging features. In addition, important disclaimers would be provided regarding known limitations of current technology, such as resolution and sequence parameter considerations, until such time as they are overcome. The end goal is a multifaceted TBI characterization model incorporating clinical, imaging, blood biomarker, and psychosocial and environmental modifiers to better serve patients not only acutely but also through the postinjury continuum in the days, months, and years that follow TBI.

AI-powered integration of multimodal imaging in precision medicine for neuropsychiatric disorders.

Huang W, Shu N

PubMed · May 20, 2025
Neuropsychiatric disorders have complex pathological mechanisms, pronounced clinical heterogeneity, and a prolonged preclinical phase, which presents challenges for early diagnosis and the development of precise intervention strategies. With the development of large-scale multimodal neuroimaging datasets and the advancement of artificial intelligence (AI) algorithms, the integration of multimodal imaging with AI techniques has emerged as a pivotal avenue for early detection and for tailoring individualized treatments for neuropsychiatric disorders. To support these advances, in this review we outline multimodal neuroimaging techniques, AI methods, and strategies for multimodal data fusion. We highlight applications of multimodal, neuroimaging-based AI in precision medicine for neuropsychiatric disorders and discuss challenges to clinical adoption, their emerging solutions, and future directions.

Pancreas segmentation in CT scans: A novel MOMUNet based workflow.

Juwita J, Hassan GM, Datta A

PubMed · May 20, 2025
Automatic pancreas segmentation in CT scans is crucial for various medical applications, including early diagnosis and computer-assisted surgery. However, existing segmentation methods remain suboptimal due to significant pancreas size variations across slices and severe class imbalance caused by the pancreas's small size and CT scanner movement during imaging. Traditional computer vision techniques struggle with these challenges, while deep learning-based approaches, despite their success in other domains, still face limitations in pancreas segmentation. To address these issues, we propose a novel three-stage workflow that enhances segmentation accuracy and computational efficiency. First, we introduce External Contour Cropping (ECC), a background cleansing technique that mitigates class imbalance. Second, we propose a Size Ratio (SR) technique that restructures the training dataset based on the relative size of the target organ, improving the robustness of the model against anatomical variations. Third, we develop MOMUNet, an ultra-lightweight segmentation model with only 1.31 million parameters, designed for optimal performance on limited computational resources. Our proposed workflow achieves an improvement in Dice Score (DSC) of 2.56% over state-of-the-art (SOTA) models on the NIH-Pancreas dataset and 2.97% on the MSD-Pancreas dataset. Furthermore, applying the proposed model to another small organ, colon cancer segmentation in the MSD-Colon dataset, yielded a DSC of 68.4%, surpassing the SOTA models. These results demonstrate the effectiveness of our approach in significantly improving segmentation accuracy for small abdominal organs, including the pancreas and colon, making deep learning more accessible for low-resource medical facilities.
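
The foreground-background imbalance described above is commonly countered with overlap-based training objectives; a minimal soft Dice loss sketch in PyTorch, shown as a standard technique rather than the paper's MOMUNet implementation:

```python
# Sketch: soft Dice loss, a standard remedy for the foreground/background
# class imbalance that small organs such as the pancreas create.
# Generic technique for illustration; not the paper's MOMUNet code.
import torch

def soft_dice_loss(logits: torch.Tensor, target: torch.Tensor,
                   eps: float = 1e-6) -> torch.Tensor:
    """logits, target: (B, 1, H, W); target is a binary organ mask."""
    prob = torch.sigmoid(logits)
    inter = (prob * target).sum(dim=(1, 2, 3))
    denom = prob.sum(dim=(1, 2, 3)) + target.sum(dim=(1, 2, 3))
    dice = (2 * inter + eps) / (denom + eps)
    return 1 - dice.mean()  # minimize 1 - DSC

# Usage with a hypothetical model and batch:
#   loss = soft_dice_loss(model(ct_slice), pancreas_mask)
```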