Page 34 of 46453 results

Performance of multimodal prediction models for intracerebral hemorrhage outcomes using real-world data.

Matsumoto K, Suzuki M, Ishihara K, Tokunaga K, Matsuda K, Chen J, Yamashiro S, Soejima H, Nakashima N, Kamouchi M

pubmed logopapersMay 21 2025
We aimed to develop and validate multimodal models integrating computed tomography (CT) images, text, and tabular clinical data to predict poor functional outcomes and in-hospital mortality in patients with intracerebral hemorrhage (ICH). These models were designed to assist non-specialists in emergency settings with limited access to stroke specialists. A retrospective analysis of 527 patients with ICH admitted to a Japanese tertiary hospital between April 2019 and February 2022 was conducted. Deep learning techniques were used to extract features from three-dimensional CT images and unstructured data, which were then combined with tabular data to develop an L1-regularized logistic regression model to predict poor functional outcomes (modified Rankin Scale score 3-6) and in-hospital mortality. Model performance was evaluated by assessing discrimination metrics, calibration plots, and decision curve analysis (DCA) using temporal validation data. The multimodal model utilizing both imaging and text data, such as medical interviews, exhibited the highest performance in predicting poor functional outcomes. In contrast, the model that combined imaging with tabular data, including physiological and laboratory results, demonstrated the best predictive performance for in-hospital mortality. These models exhibited high discriminative performance, with areas under the receiver operating characteristic curve (AUROCs) of 0.86 (95% CI: 0.79-0.92) and 0.91 (95% CI: 0.84-0.96) for poor functional outcomes and in-hospital mortality, respectively. Calibration was satisfactory for predicting poor functional outcomes but requires refinement for mortality prediction. The models performed similarly to or better than conventional risk scores, and DCA curves supported their clinical utility. Multimodal prediction models have the potential to aid non-specialists in making informed decisions regarding ICH cases in emergency departments as part of clinical decision support systems.
Enhancing real-world data infrastructure and improving model calibration are essential for successful implementation in clinical practice.
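The discrimination metric reported above (AUROC) can be computed directly from predicted probabilities via the rank-based Mann-Whitney formulation. A minimal sketch, with all patient labels and scores hypothetical:

```python
def auroc(y_true, y_score):
    """AUROC as the probability a random positive outranks a random negative."""
    pos = [s for s, y in zip(y_score, y_true) if y == 1]
    neg = [s for s, y in zip(y_score, y_true) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Hypothetical predictions for six patients (1 = poor functional outcome)
labels = [0, 0, 1, 0, 1, 1]
scores = [0.10, 0.45, 0.40, 0.20, 0.80, 0.90]
print(round(auroc(labels, scores), 3))  # → 0.889
```

One lower-scored positive (0.40) is outranked by one negative (0.45), so the area falls just short of 1.0.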

Large medical image database impact on generalizability of synthetic CT scan generation.

Boily C, Mazellier JP, Meyer P

pubmed logopapersMay 21 2025
This study systematically examines the impact of training database size on the generalizability of deep learning models for synthetic medical image generation. Specifically, we employ a Cycle-Consistency Generative Adversarial Network (CycleGAN) with softly paired data to synthesize kilovoltage computed tomography (kVCT) images from megavoltage computed tomography (MVCT) scans. Unlike previous works, which were constrained by limited data availability, our study uses an extensive database comprising 4,000 patient CT scans, an order of magnitude larger than prior research, allowing for a more rigorous assessment of the role of database size in medical image translation. We quantitatively evaluate the fidelity of the generated synthetic images using established image similarity metrics, including Mean Absolute Error (MAE) and Structural Similarity Index Measure (SSIM). Beyond assessing image quality, we investigate the model's capacity for generalization by analyzing its performance across diverse patient subgroups, considering factors such as sex, age, and anatomical region. This approach enables a more granular understanding of how dataset composition influences model robustness.
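The two fidelity metrics named above are straightforward to compute. A minimal numpy sketch with toy arrays standing in for kVCT slices; note the SSIM here is a simplified single-window version (library implementations such as scikit-image use sliding local windows):

```python
import numpy as np

def mae(a, b):
    """Mean absolute error between two images."""
    return float(np.mean(np.abs(a - b)))

def global_ssim(a, b, data_range=1.0):
    """Simplified global SSIM; production code uses local sliding windows."""
    c1, c2 = (0.01 * data_range) ** 2, (0.03 * data_range) ** 2
    mu_a, mu_b = a.mean(), b.mean()
    var_a, var_b = a.var(), b.var()
    cov = ((a - mu_a) * (b - mu_b)).mean()
    return float(((2 * mu_a * mu_b + c1) * (2 * cov + c2)) /
                 ((mu_a ** 2 + mu_b ** 2 + c1) * (var_a + var_b + c2)))

rng = np.random.default_rng(0)
kvct = rng.random((64, 64))                              # toy "real" slice
synthetic = np.clip(kvct + 0.01 * rng.standard_normal((64, 64)), 0, 1)
print(mae(kvct, synthetic), global_ssim(kvct, synthetic))
```

Identical images yield MAE 0 and SSIM 1; small perturbations move both metrics only slightly.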

Coronary Computed Tomographic Angiography to Optimize the Diagnostic Yield of Invasive Angiography for Low-Risk Patients Screened With Artificial Intelligence: Protocol for the CarDIA-AI Randomized Controlled Trial.

Petch J, Tabja Bortesi JP, Sheth T, Natarajan M, Pinilla-Echeverri N, Di S, Bangdiwala SI, Mosleh K, Ibrahim O, Bainey KR, Dobranowski J, Becerra MP, Sonier K, Schwalm JD

pubmed logopapersMay 21 2025
Invasive coronary angiography (ICA) is the gold standard in the diagnosis of coronary artery disease (CAD). Being invasive, it carries rare but serious risks including myocardial infarction, stroke, major bleeding, and death. A large proportion of elective outpatients undergoing ICA have nonobstructive CAD, highlighting the suboptimal use of this test. Coronary computed tomographic angiography (CCTA) is a noninvasive option that provides similar information with less risk and is recommended as a first-line test for patients with low-to-intermediate risk of CAD. Leveraging artificial intelligence (AI) to appropriately direct patients to ICA or CCTA based on the predicted probability of disease may improve the efficiency and safety of diagnostic pathways. The CarDIA-AI (Coronary computed tomographic angiography to optimize the Diagnostic yield of Invasive Angiography for low-risk patients screened with Artificial Intelligence) study aims to evaluate whether AI-based risk assessment for obstructive CAD implemented within a centralized triage process can optimize the use of ICA in outpatients referred for nonurgent ICA. CarDIA-AI is a pragmatic, open-label, superiority randomized controlled trial involving 2 Canadian cardiac centers. A total of 252 adults referred for elective outpatient ICA will be randomized 1:1 to usual care (directly proceeding to ICA) or to triage using an AI-based decision support tool. The AI-based decision support tool was developed using referral information from over 37,000 patients and uses a light gradient boosting machine model to predict the probability of obstructive CAD based on 42 clinically relevant predictors, including patient referral information, demographic characteristics, risk factors, and medical history.
Participants in the intervention arm will have their ICA referral forms and medical charts reviewed, with select details entered into the decision support tool, which recommends CCTA or ICA based on the patient's predicted probability of obstructive CAD. All patients will receive the selected imaging modality within 6 weeks of referral and will subsequently be followed for 90 days. The primary outcome is the proportion of patients diagnosed via ICA with normal or nonobstructive CAD, compared between the intervention and control groups using a 2-sided z test. Secondary outcomes include the number of angiograms avoided and the diagnostic yield of ICA. Recruitment began on January 9, 2025, and is expected to conclude in mid to late 2025. As of April 14, 2025, we have enrolled 81 participants. Data analysis will begin once data collection is completed. We expect to submit the results for publication in 2026. CarDIA-AI will be the first randomized controlled trial using AI to optimize patient selection for CCTA versus ICA, potentially improving diagnostic efficiency, avoiding unnecessary complications of ICA, and improving health care resource usage. ClinicalTrials.gov NCT06648239; https://clinicaltrials.gov/study/NCT06648239/. DERR1-10.2196/71726.
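The triage step described above ultimately reduces to routing a referral on a model-predicted probability. A minimal sketch of that decision logic; the threshold is illustrative only and is not the trial's actual operating point, and the real model consumes 42 predictors rather than a single probability:

```python
def triage(p_obstructive_cad, threshold=0.5):
    """Route a referral from the model's predicted probability of obstructive CAD.

    Low predicted probability -> noninvasive CCTA first; high -> direct ICA.
    The 0.5 threshold is a placeholder for illustration.
    """
    return "ICA" if p_obstructive_cad >= threshold else "CCTA"

for p in (0.12, 0.48, 0.50, 0.91):
    print(f"p={p:.2f} -> {triage(p)}")
```

In the trial itself the tool's recommendation is applied after chart review by staff, not autonomously.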

Exchange of Quantitative Computed Tomography Assessed Body Composition Data Using Fast Healthcare Interoperability Resources as a Necessary Step Toward Interoperable Integration of Opportunistic Screening Into Clinical Practice: Methodological Development Study.

Wen Y, Choo VY, Eil JH, Thun S, Pinto Dos Santos D, Kast J, Sigle S, Prokosch HU, Ovelgönne DL, Borys K, Kohnke J, Arzideh K, Winnekens P, Baldini G, Schmidt CS, Haubold J, Nensa F, Pelka O, Hosch R

pubmed logopapersMay 21 2025
Fast Healthcare Interoperability Resources (FHIR) is a widely used standard for storing and exchanging health care data. At the same time, image-based artificial intelligence (AI) models for quantifying relevant body structures and organs from routine computed tomography (CT)/magnetic resonance imaging scans have emerged. The missing link, and a necessary step in advancing personalized medicine, is the incorporation of measurements delivered by AI models into an interoperable and standardized format. Incorporating image-based measurements and biomarkers into FHIR profiles can standardize data exchange, enabling timely, personalized treatment decisions and improving the precision and efficiency of patient care. This study aims to present the synergistic incorporation of CT-derived body organ and composition measurements with FHIR, delineating an initial paradigm for storing image-based biomarkers. This study integrated the results of the Body and Organ Analysis (BOA) model into FHIR profiles to enhance the interoperability of image-based biomarkers in radiology. The BOA model was selected as an exemplary AI model due to its ability to provide detailed body composition and organ measurements from CT scans. The FHIR profiles were developed based on 2 primary observation types: Body Composition Analysis (BCA Observation) for quantitative body composition metrics and Body Structure Observation for organ measurements. These profiles were structured to interoperate with a specially designed Diagnostic Report profile, which references the associated Imaging Study, ensuring a standardized linkage between image data and derived biomarkers. To ensure interoperability, all labels were mapped to SNOMED CT (Systematized Nomenclature of Medicine - Clinical Terms) or RadLex terminologies using specific value sets.
The profiles were developed using FHIR Shorthand (FSH) and SUSHI, enabling efficient definition and implementation guide generation, ensuring consistency and maintainability. In this study, 4 BOA profiles, namely, Body Composition Analysis Observation, Body Structure Volume Observation, Diagnostic Report, and Imaging Study, have been presented. These FHIR profiles, which cover 104 anatomical landmarks, 8 body regions, and 8 tissues, enable the interoperable usage of the results of AI segmentation models, providing a direct link between image studies, series, and measurements. The BOA profiles provide a foundational framework for integrating AI-derived imaging biomarkers into FHIR, bridging the gap between advanced imaging analytics and standardized health care data exchange. By enabling structured, interoperable representation of body composition and organ measurements, these profiles facilitate seamless integration into clinical and research workflows, supporting improved data accessibility and interoperability. Their adaptability allows for extension to other imaging modalities and AI models, fostering a more standardized and scalable approach to using imaging biomarkers in precision medicine. This work represents a step toward enhancing the integration of AI-driven insights into digital health ecosystems, ultimately contributing to more data-driven, personalized, and efficient patient care.
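To make the linkage concrete, here is an illustrative FHIR Observation instance for a single AI-derived organ volume, built as a plain Python dict. The profile URL and resource references are placeholders, not the published BOA artifacts; only the SNOMED CT and UCUM systems are standard:

```python
import json

# Illustrative Observation carrying one AI-derived organ measurement.
# Profile URL and Patient/ImagingStudy references are hypothetical.
observation = {
    "resourceType": "Observation",
    "meta": {"profile": [
        "https://example.org/fhir/StructureDefinition/body-structure-volume"
    ]},
    "status": "final",
    "code": {"coding": [{
        "system": "http://snomed.info/sct",
        "code": "10200004",            # SNOMED CT: Liver structure
        "display": "Liver structure",
    }]},
    "subject": {"reference": "Patient/example"},
    "derivedFrom": [{"reference": "ImagingStudy/example-ct"}],
    "valueQuantity": {
        "value": 1480.5, "unit": "mL",
        "system": "http://unitsofmeasure.org", "code": "mL",
    },
}
print(json.dumps(observation, indent=2)[:120])
```

The `derivedFrom` reference is what ties the measurement back to the originating Imaging Study, mirroring the linkage the BOA profiles formalize.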

Lung Nodule-SSM: Self-Supervised Lung Nodule Detection and Classification in Thoracic CT Images

Muniba Noreen, Furqan Shaukat

arxiv logopreprintMay 21 2025
Lung cancer remains among the deadliest types of cancer in recent decades, and early lung nodule detection is crucial for improving patient outcomes. The limited availability of annotated medical imaging data remains a bottleneck in developing accurate computer-aided diagnosis (CAD) systems. Self-supervised learning can help leverage large amounts of unlabeled data to develop more robust CAD systems. With the recent advent of transformer-based architectures and their ability to generalize to unseen tasks, there has been an effort within the healthcare community to adapt them to various medical downstream tasks. Thus, we propose a novel "LungNodule-SSM" method, which utilizes self-supervised learning with DINOv2 as a backbone to enhance lung nodule detection and classification without annotated data. Our methodology has two stages: first, the DINOv2 model is pre-trained on unlabeled CT scans to learn robust feature representations; second, these features are fine-tuned using transformer-based architectures for lesion-level detection and accurate lung nodule diagnosis. The proposed method has been evaluated on the challenging LUNA16 dataset, consisting of 888 CT scans, and compared with SOTA methods. Our experimental results show the superiority of our proposed method with an accuracy of 98.37%, demonstrating its effectiveness in lung nodule detection. The source code, datasets, and pre-processed data can be accessed using the link: https://github.com/EMeRALDsNRPU/Lung-Nodule-SSM-Self-Supervised-Lung-Nodule-Detection-and-Classification/tree/main

FasNet: a hybrid deep learning model with attention mechanisms and uncertainty estimation for liver tumor segmentation on LiTS17.

Singh R, Gupta S, Almogren A, Rehman AU, Bharany S, Altameem A, Choi J

pubmed logopapersMay 21 2025
Liver cancer, especially hepatocellular carcinoma (HCC), remains one of the most fatal cancers globally, emphasizing the critical need for accurate tumor segmentation to enable timely diagnosis and effective treatment planning. Traditional imaging techniques, such as CT and MRI, rely on manual interpretation, which can be both time-intensive and subject to variability. This study introduces FasNet, an innovative hybrid deep learning model that combines ResNet-50 and VGG-16 architectures, incorporating channel and spatial attention mechanisms alongside Monte Carlo Dropout to improve segmentation precision and reliability. FasNet leverages ResNet-50's robust feature extraction and VGG-16's detailed spatial feature capture to deliver superior liver tumor segmentation accuracy. The channel and spatial attention mechanisms selectively focus on the most relevant features and spatial regions, improving segmentation accuracy and reliability, while Monte Carlo Dropout estimates uncertainty and adds robustness, which is critical for high-stakes medical applications. Tested on the LiTS17 dataset, FasNet achieved a Dice Coefficient of 0.8766 and a Jaccard Index of 0.8487, surpassing several state-of-the-art methods. These results position FasNet as a powerful diagnostic tool, offering precise and automated liver tumor segmentation that aids in early detection and treatment, ultimately enhancing patient outcomes.
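The Monte Carlo Dropout idea mentioned above (keep dropout active at inference, average several stochastic forward passes, and read the per-output standard deviation as uncertainty) can be sketched with a toy numpy "network"; the real FasNet applies this to CNN feature maps, and all shapes and weights here are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(42)

def stochastic_forward(x, w, drop_p=0.5):
    """One forward pass with dropout left ON (inverted-dropout scaling)."""
    mask = rng.random(w.shape) >= drop_p
    logits = x @ (w * mask / (1 - drop_p))
    return 1 / (1 + np.exp(-logits))      # per-"pixel" foreground probability

x = rng.standard_normal((1, 16))          # toy feature vector, not real CT features
w = rng.standard_normal((16, 4))          # toy weights for 4 output "pixels"

T = 100                                   # number of stochastic passes
samples = np.stack([stochastic_forward(x, w) for _ in range(T)])
mean_prob = samples.mean(axis=0)          # segmentation estimate
uncertainty = samples.std(axis=0)         # higher std = less trustworthy output
print(mean_prob.round(3), uncertainty.round(3))
```

Outputs whose probability swings widely across passes get a high standard deviation, flagging regions a clinician should not trust blindly.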

Pancreas segmentation in CT scans: A novel MOMUNet based workflow.

Juwita J, Hassan GM, Datta A

pubmed logopapersMay 20 2025
Automatic pancreas segmentation in CT scans is crucial for various medical applications, including early diagnosis and computer-assisted surgery. However, existing segmentation methods remain suboptimal due to significant pancreas size variations across slices and severe class imbalance caused by the pancreas's small size and CT scanner movement during imaging. Traditional computer vision techniques struggle with these challenges, while deep learning-based approaches, despite their success in other domains, still face limitations in pancreas segmentation. To address these issues, we propose a novel, three-stage workflow that enhances segmentation accuracy and computational efficiency. First, we introduce External Contour Cropping (ECC), a background cleansing technique that mitigates class imbalance. Second, we propose a Size Ratio (SR) technique that restructures the training dataset based on the relative size of the target organ, improving the robustness of the model against anatomical variations. Third, we develop MOMUNet, an ultra-lightweight segmentation model with only 1.31 million parameters, designed for optimal performance on limited computational resources. Our proposed workflow achieves an improvement in Dice score (DSC) of 2.56% over state-of-the-art (SOTA) models on the NIH-Pancreas dataset and 2.97% on the MSD-Pancreas dataset. Furthermore, applying the proposed model to another small-organ task, colon cancer segmentation in the MSD-Colon dataset, yielded a DSC of 68.4%, surpassing the SOTA models. These results demonstrate the effectiveness of our approach in significantly improving segmentation accuracy for small abdominal organs, including the pancreas and colon, making deep learning more accessible for low-resource medical facilities.
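The background-cropping idea behind ECC (discard background voxels so the foreground class is less outnumbered) can be sketched as a bounding-box crop around a foreground mask. A minimal numpy sketch with a toy mask; the paper's actual ECC operates on the body's external contour rather than a simple box:

```python
import numpy as np

def crop_to_foreground(image, mask, margin=2):
    """Crop image to the bounding box of nonzero mask pixels, plus a margin."""
    ys, xs = np.nonzero(mask)
    y0, y1 = max(ys.min() - margin, 0), min(ys.max() + margin + 1, image.shape[0])
    x0, x1 = max(xs.min() - margin, 0), min(xs.max() + margin + 1, image.shape[1])
    return image[y0:y1, x0:x1]

image = np.arange(100.0).reshape(10, 10)   # toy CT slice
mask = np.zeros((10, 10), dtype=bool)
mask[4:7, 3:6] = True                      # toy body region
cropped = crop_to_foreground(image, mask)
print(cropped.shape)                       # (7, 7)
```

Shrinking the field of view this way raises the foreground-to-background ratio before training, which is exactly the imbalance the workflow targets.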

Neuroimaging Characterization of Acute Traumatic Brain Injury with Focus on Frontline Clinicians: Recommendations from the 2024 National Institute of Neurological Disorders and Stroke Traumatic Brain Injury Classification and Nomenclature Initiative Imaging Working Group.

Mac Donald CL, Yuh EL, Vande Vyvere T, Edlow BL, Li LM, Mayer AR, Mukherjee P, Newcombe VFJ, Wilde EA, Koerte IK, Yurgelun-Todd D, Wu YC, Duhaime AC, Awwad HO, Dams-O'Connor K, Doperalski A, Maas AIR, McCrea MA, Umoh N, Manley GT

pubmed logopapersMay 20 2025
Neuroimaging screening and surveillance are among the first frontline diagnostic tools leveraged in the acute assessment (first 24 h postinjury) of patients suspected to have traumatic brain injury (TBI). While imaging, in particular computed tomography, is used almost universally in emergency departments worldwide to evaluate possible features of TBI, there is no currently agreed-upon reporting system, standard terminology, or framework to contextualize brain imaging findings with other available medical, psychosocial, and environmental data. In 2023, the NIH-National Institute of Neurological Disorders and Stroke convened six working groups of international experts in TBI to develop a new framework for nomenclature and classification. The goal of this effort was to propose a more granular system of injury classification that incorporates recent progress in imaging biomarkers, blood-based biomarkers, and injury and recovery modifiers to replace the commonly used Glasgow Coma Scale-based diagnosis groups of mild, moderate, and severe TBI, which have shown relatively poor diagnostic, prognostic, and therapeutic utility. Motivated by prior efforts to standardize the nomenclature for pathoanatomic imaging findings of TBI for research and clinical trials, along with more recent studies supporting the refinement of the originally proposed definitions, the Imaging Working Group sought to update and expand this application specifically for consideration of use in clinical practice. Here we report the recommendations of this working group to enable the translation of structured imaging common data elements to the standard of care. These leverage recent advances in imaging technology, electronic medical record (EMR) systems, and artificial intelligence (AI), along with input from key stakeholders, including patients with lived experience, caretakers, providers across medical disciplines, radiology industry partners, and policymakers.
It was recommended that (1) there would be updates to the definitions of key imaging features used for this system of classification and that these should be further refined as new evidence of the underlying pathology driving the signal change is identified; (2) there would be an efficient, integrated tool embedded in the EMR imaging reporting system developed in collaboration with industry partners; (3) this would include AI-generated evidence-based feature clusters with diagnostic, prognostic, and therapeutic implications; and (4) a "patient translator" would be developed in parallel to assist patients and families in understanding these imaging features. In addition, important disclaimers would be provided regarding known limitations of current technology until such time as they are overcome, such as resolution and sequence parameter considerations. The end goal is a multifaceted TBI characterization model incorporating clinical, imaging, blood biomarker, and psychosocial and environmental modifiers to better serve patients not only acutely but also through the postinjury continuum in the days, months, and years that follow TBI.

Expert-guided StyleGAN2 image generation elevates AI diagnostic accuracy for maxillary sinus lesions.

Zeng P, Song R, Chen S, Li X, Li H, Chen Y, Gong Z, Cai G, Lin Y, Shi M, Huang K, Chen Z

pubmed logopapersMay 20 2025
The progress of artificial intelligence (AI) research in dental medicine is hindered by data acquisition challenges and imbalanced distributions. These problems are especially apparent when planning to develop AI-based diagnostic or analytic tools for various lesions, such as maxillary sinus lesions (MSL), including mucosal thickening and polypoid lesions. Traditional unsupervised generative models struggle to simultaneously control image realism, diversity, and lesion-type specificity. This study establishes an expert-guided framework to overcome these limitations and elevate AI-based diagnostic accuracy. A StyleGAN2 framework was developed for generating clinically relevant MSL images (such as mucosal thickening and polypoid lesions) under expert control. The generated images were then integrated into training datasets to evaluate their effect on ResNet50's diagnostic performance. Here we show: 1) Both lesion subtypes achieve satisfactory fidelity metrics, with structural similarity indices (SSIM > 0.996) and maximum mean discrepancy values (MMD < 0.032), and clinical validation scores close to those of real images; 2) Integrating baseline datasets with synthetic images significantly enhances diagnostic accuracy for both internal and external test sets, particularly improving area under the precision-recall curve (AUPRC) by approximately 8% and 14% for mucosal thickening and polypoid lesions in the internal test set, respectively. The StyleGAN2-based image generation tool effectively addressed data scarcity and imbalance through high-quality MSL image synthesis, consequently boosting diagnostic model performance. This work not only facilitates AI-assisted preoperative assessment for maxillary sinus lift procedures but also establishes a methodological framework for overcoming data limitations in medical image analysis.
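The maximum mean discrepancy (MMD) fidelity metric cited above compares the distributions of real and synthetic feature vectors; a small value means the generator's outputs are statistically close to real images. A minimal numpy sketch of the MMD² estimate with an RBF kernel, using toy feature vectors in place of real image embeddings:

```python
import numpy as np

def mmd_rbf(X, Y, gamma=1.0):
    """Biased MMD^2 estimate between sample sets X and Y with an RBF kernel."""
    def k(A, B):
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-gamma * d2)
    return float(k(X, X).mean() + k(Y, Y).mean() - 2 * k(X, Y).mean())

rng = np.random.default_rng(1)
real = rng.standard_normal((50, 8))              # toy "real image" features
close = real + 0.05 * rng.standard_normal((50, 8))   # well-matched synthetics
far = rng.standard_normal((50, 8)) + 3.0             # clearly mismatched set
print(mmd_rbf(real, close), mmd_rbf(real, far))
```

Matched distributions drive the estimate toward zero, which is why MMD < 0.032 is read as high synthesis fidelity.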

Detection of maxillary sinus pathologies using deep learning algorithms.

Aktuna Belgin C, Kurbanova A, Aksoy S, Akkaya N, Orhan K

pubmed logopapersMay 20 2025
Deep learning, a subset of machine learning, is widely utilized in medical applications. Identifying maxillary sinus pathologies before surgical interventions is crucial for ensuring successful treatment outcomes. Cone beam computed tomography (CBCT) is commonly employed for maxillary sinus evaluations due to its high resolution and lower radiation exposure. This study aims to assess the accuracy of artificial intelligence (AI) algorithms in detecting maxillary sinus pathologies from CBCT scans. A dataset comprising 1000 maxillary sinuses (MS) from 500 patients was analyzed using CBCT. Sinuses were categorized based on the presence or absence of pathology, followed by segmentation of the maxillary sinus. Manual segmentation masks were generated using the semiautomatic software ITK-SNAP, which served as a reference for comparison. A convolutional neural network (CNN)-based machine learning model was then implemented to automatically segment maxillary sinus pathologies from CBCT images. To evaluate segmentation accuracy, metrics such as the Dice similarity coefficient (DSC) and intersection over union (IoU) were utilized by comparing AI-generated results with human-generated segmentations. The automated segmentation model achieved a Dice score of 0.923, a recall of 0.979, an IoU of 0.887, an F1 score of 0.970, and a precision of 0.963. This study successfully developed an AI-driven approach for segmenting maxillary sinus pathologies in CBCT images. The findings highlight the potential of this method for rapid and accurate clinical assessment of maxillary sinus conditions using CBCT imaging.
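The DSC and IoU figures reported above follow directly from voxel overlap counts between the AI-generated and reference segmentations. A minimal numpy sketch on toy binary masks:

```python
import numpy as np

def dice(pred, gt):
    """Dice similarity coefficient: 2|A∩B| / (|A| + |B|)."""
    inter = np.logical_and(pred, gt).sum()
    return 2.0 * inter / (pred.sum() + gt.sum())

def iou(pred, gt):
    """Intersection over union: |A∩B| / |A∪B|."""
    inter = np.logical_and(pred, gt).sum()
    return inter / np.logical_or(pred, gt).sum()

gt = np.zeros((8, 8), dtype=bool); gt[2:6, 2:6] = True      # 16-pixel reference
pred = np.zeros((8, 8), dtype=bool); pred[3:7, 2:6] = True  # shifted one row
print(dice(pred, gt), iou(pred, gt))  # 0.75 0.6
```

For the same overlap, IoU is always the stricter of the two, which is why the study's IoU (0.887) sits below its Dice score (0.923).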