
Deep Learning for Automated Prediction of Sphenoid Sinus Pneumatization in Computed Tomography.

Alamer A, Salim O, Alharbi F, Alsaleem F, Almuqbil A, Alhassoon K, Alsunaydih F

PubMed · May 22, 2025
The sphenoid sinus is an important access point for transsphenoidal surgeries, but variations in its pneumatization may complicate surgical safety. Deep learning can be used to identify these anatomical variations. We developed a convolutional neural network (CNN) model for the automated prediction of sphenoid sinus pneumatization patterns in computed tomography (CT) scans. This model was tested on mid-sagittal CT images. Two radiologists labeled all CT images into four pneumatization patterns: conchal (type I), presellar (type II), sellar (type III), and postsellar (type IV). We then augmented the training set to address the limited size and imbalanced nature of the data. The initial dataset included 249 CT images, divided into training (n = 174) and test (n = 75) datasets. The training dataset was augmented to 378 images. Following augmentation, the overall diagnostic accuracy of the model improved from 76.71% to 84%, with an area under the curve (AUC) of 0.84, indicating very good diagnostic performance. Subgroup analysis showed excellent results for type IV, with the highest AUC of 0.93, perfect sensitivity (100%), and an F1-score of 0.94. The model also performed robustly for type I, achieving an accuracy of 97.33% and high specificity (99%). These metrics highlight the model's potential for reliable clinical application. The proposed CNN model demonstrates very good diagnostic accuracy in identifying various sphenoid sinus pneumatization patterns, particularly excelling in type IV, which is crucial for endoscopic sinus surgery due to its higher risk of surgical complications. By assisting radiologists and surgeons, this model enhances the safety of transsphenoidal surgery, highlighting its value, novelty, and applicability in clinical settings.
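
The dataset-expansion step can be sketched with simple flip-based augmentation; the authors' actual transform set is not specified in the abstract, and flips of mid-sagittal anatomy are used here purely for illustration:

```python
def hflip(img):
    """Horizontal flip of a 2D grayscale image (list of rows)."""
    return [row[::-1] for row in img]

def vflip(img):
    """Vertical flip."""
    return img[::-1]

def augment(dataset):
    """Expand (image, label) pairs with flipped copies.

    Flips preserve the pneumatization class label, so each
    original image yields two extra labeled samples.
    """
    out = []
    for img, label in dataset:
        out.append((img, label))
        out.append((hflip(img), label))
        out.append((vflip(img), label))
    return out

# Toy 2x2 "image" with class label 3 (sellar)
data = [([[1, 2], [3, 4]], 3)]
augmented = augment(data)
print(len(augmented))  # 3 samples from 1 original
```

In practice, class-imbalanced data like this (few conchal cases, many sellar/postsellar) is usually augmented more aggressively for the minority classes.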

Customized GPT-4V(ision) for radiographic diagnosis: can large language model detect supernumerary teeth?

Aşar EM, İpek İ, Bilge K

PubMed · May 21, 2025
With the growing capabilities of language models like ChatGPT to process text and images, this study evaluated their accuracy in detecting supernumerary teeth on periapical radiographs. A customized GPT-4V model (CGPT-4V) was also developed to assess whether domain-specific training could improve diagnostic performance compared to standard GPT-4V and GPT-4o models. One hundred eighty periapical radiographs (90 with and 90 without supernumerary teeth) were evaluated using GPT-4V, GPT-4o, and a fine-tuned CGPT-4V model. Each image was assessed separately with the standardized prompt "Are there any supernumerary teeth in the radiograph above?" to avoid contextual bias. Three dental experts scored the responses using a three-point Likert scale for positive cases and a binary scale for negatives. Chi-square tests and ROC analysis were used to compare model performances (p < 0.05). Among the three models, CGPT-4V exhibited the highest accuracy, detecting supernumerary teeth correctly in 91% of cases, compared to 77% for GPT-4o and 63% for GPT-4V. The CGPT-4V model also demonstrated a significantly lower false positive rate (16%) than GPT-4V (42%). A statistically significant difference was found between CGPT-4V and GPT-4o (p < 0.001), while no significant difference was observed between GPT-4V and CGPT-4V or between GPT-4V and GPT-4o. Additionally, CGPT-4V successfully identified multiple supernumerary teeth in radiographs where present. These findings highlight the diagnostic potential of customized GPT models in dental radiology. Future research should focus on multicenter validation, seamless clinical integration, and cost-effectiveness to support real-world implementation.
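
The reported accuracy comparison can be checked with a 2x2 chi-square test; the correct/incorrect counts below are hypothetical, chosen only to match the stated 91% and 77% accuracies over 180 images per model:

```python
import math

def chi2_2x2(a, b, c, d):
    """Pearson chi-square (no continuity correction) for a 2x2 table
    [[a, b], [c, d]], plus its p-value (df = 1).

    For one degree of freedom, the chi-square survival function
    equals erfc(sqrt(x / 2)), so no external stats library is needed.
    """
    n = a + b + c + d
    stat = n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))
    p = math.erfc(math.sqrt(stat / 2))
    return stat, p

# Hypothetical correct/incorrect counts out of 180 images per model:
# CGPT-4V 164/180 (~91%) vs GPT-4o 139/180 (~77%)
stat, p = chi2_2x2(164, 16, 139, 41)
print(round(stat, 2), p < 0.001)
```

With these illustrative counts the difference is significant at p < 0.001, consistent with the comparison the authors report between CGPT-4V and GPT-4o.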

Exchange of Quantitative Computed Tomography Assessed Body Composition Data Using Fast Healthcare Interoperability Resources as a Necessary Step Toward Interoperable Integration of Opportunistic Screening Into Clinical Practice: Methodological Development Study.

Wen Y, Choo VY, Eil JH, Thun S, Pinto Dos Santos D, Kast J, Sigle S, Prokosch HU, Ovelgönne DL, Borys K, Kohnke J, Arzideh K, Winnekens P, Baldini G, Schmidt CS, Haubold J, Nensa F, Pelka O, Hosch R

PubMed · May 21, 2025
Fast Healthcare Interoperability Resources (FHIR) is a widely used standard for storing and exchanging health care data. At the same time, image-based artificial intelligence (AI) models for quantifying relevant body structures and organs from routine computed tomography (CT)/magnetic resonance imaging scans have emerged. The missing link, simultaneously a needed step in advancing personalized medicine, is the incorporation of measurements delivered by AI models into an interoperable and standardized format. Incorporating image-based measurements and biomarkers into FHIR profiles can standardize data exchange, enabling timely, personalized treatment decisions and improving the precision and efficiency of patient care. This study aims to present the synergistic incorporation of CT-derived body organ and composition measurements with FHIR, delineating an initial paradigm for storing image-based biomarkers. This study integrated the results of the Body and Organ Analysis (BOA) model into FHIR profiles to enhance the interoperability of image-based biomarkers in radiology. The BOA model was selected as an exemplary AI model due to its ability to provide detailed body composition and organ measurements from CT scans. The FHIR profiles were developed based on 2 primary observation types: Body Composition Analysis (BCA Observation) for quantitative body composition metrics and Body Structure Observation for organ measurements. These profiles were structured to interoperate with a specially designed Diagnostic Report profile, which references the associated Imaging Study, ensuring a standardized linkage between image data and derived biomarkers. To ensure interoperability, all labels were mapped to SNOMED CT (Systematized Nomenclature of Medicine - Clinical Terms) or RadLex terminologies using specific value sets. 
The profiles were developed using FHIR Shorthand (FSH) and SUSHI, enabling efficient definition and implementation guide generation, ensuring consistency and maintainability. In this study, 4 BOA profiles, namely, Body Composition Analysis Observation, Body Structure Volume Observation, Diagnostic Report, and Imaging Study, have been presented. These FHIR profiles, which cover 104 anatomical landmarks, 8 body regions, and 8 tissues, enable the interoperable usage of the results of AI segmentation models, providing a direct link between image studies, series, and measurements. The BOA profiles provide a foundational framework for integrating AI-derived imaging biomarkers into FHIR, bridging the gap between advanced imaging analytics and standardized health care data exchange. By enabling structured, interoperable representation of body composition and organ measurements, these profiles facilitate seamless integration into clinical and research workflows, supporting improved data accessibility and interoperability. Their adaptability allows for extension to other imaging modalities and AI models, fostering a more standardized and scalable approach to using imaging biomarkers in precision medicine. This work represents a step toward enhancing the integration of AI-driven insights into digital health ecosystems, ultimately contributing to more data-driven, personalized, and efficient patient care.
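
A BCA Observation instance of the kind these profiles define might look like the following JSON, built here in Python; the profile URL and coding values are placeholders, not the actual BOA implementation guide artifacts:

```python
import json

# Minimal sketch of a body-composition Observation in FHIR R4 JSON.
# The profile URL and the SNOMED CT code below are placeholders.
observation = {
    "resourceType": "Observation",
    "meta": {"profile": [
        "https://example.org/fhir/StructureDefinition/bca-observation"
    ]},
    "status": "final",
    "code": {
        "coding": [{
            "system": "http://snomed.info/sct",
            "code": "EXAMPLE-CODE",  # placeholder for a tissue concept
            "display": "Visceral adipose tissue",
        }]
    },
    "subject": {"reference": "Patient/example"},
    "valueQuantity": {
        "value": 1234.5,
        "unit": "mL",
        "system": "http://unitsofmeasure.org",
        "code": "mL",
    },
    # Link back to the source imaging study, as the BOA profiles do
    # via their Diagnostic Report / Imaging Study references
    "derivedFrom": [{"reference": "ImagingStudy/example"}],
}

payload = json.dumps(observation)
print(json.loads(payload)["valueQuantity"]["value"])
```

The point of the profiles is exactly this linkage: a quantitative AI measurement (`valueQuantity`) bound to a standard terminology code and traceable to the imaging study it was derived from.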

ÆMMamba: An Efficient Medical Segmentation Model With Edge Enhancement.

Dong X, Zhou B, Yin C, Liao IY, Jin Z, Xu Z, Pu B

PubMed · May 21, 2025
Medical image segmentation is critical for disease diagnosis, treatment planning, and prognosis assessment, yet the complexity and diversity of medical images pose significant challenges to accurate segmentation. While Convolutional Neural Networks capture local features and Vision Transformers excel at global context, both struggle with efficient long-range dependency modeling. Inspired by Mamba's state space modeling efficiency, we propose ÆMMamba, a novel multi-scale feature extraction framework built on the Mamba backbone network. ÆMMamba integrates several innovative modules: the Efficient Fusion Bridge (EFB) module, which employs a bidirectional state-space model and attention mechanisms to fuse multi-scale features; the Edge-Aware Module (EAM), which enhances low-level edge representation using Sobel-based edge extraction; and the Boundary Sensitive Decoder (BSD), which leverages inverse attention and residual convolutional layers to handle cross-level complex boundaries. ÆMMamba achieves state-of-the-art performance across 8 medical segmentation datasets. On polyp segmentation datasets (Kvasir, ClinicDB, ColonDB, EndoScene, ETIS), it records the highest mDice and mIoU scores, outperforming methods like MADGNet and Swin-UMamba, with a standout mDice of 72.22 on ETIS, the most challenging dataset in this domain. For lung and breast segmentation, ÆMMamba surpasses competitors such as H2Former and SwinUnet, achieving Dice scores of 84.24 on BUSI and 79.83 on COVID-19 Lung. On the LGG brain MRI dataset, ÆMMamba attains an mDice of 87.25 and an mIoU of 79.31, outperforming all compared methods. The source code will be released at https://github.com/xingbod/eMMamba.
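
The Sobel-based edge extraction used by the EAM can be sketched in plain Python; this is the standard 3x3 Sobel operator, not the paper's full edge-aware module:

```python
def sobel_magnitude(img):
    """Gradient magnitude via 3x3 Sobel kernels on a 2D grayscale
    image (list of rows); border pixels are left at 0 for brevity."""
    kx = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]  # horizontal gradient
    ky = [[-1, -2, -1], [0, 0, 0], [1, 2, 1]]  # vertical gradient
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = sum(kx[j][i] * img[y + j - 1][x + i - 1]
                     for j in range(3) for i in range(3))
            gy = sum(ky[j][i] * img[y + j - 1][x + i - 1]
                     for j in range(3) for i in range(3))
            out[y][x] = (gx * gx + gy * gy) ** 0.5
    return out

# A vertical step edge: columns 0-1 dark, columns 2-3 bright
img = [[0, 0, 1, 1]] * 4
mag = sobel_magnitude(img)
print(mag[1][1], mag[1][2])  # strong response on both sides of the step
```

In the EAM, such gradient maps would act as a low-level prior that is fused with learned features, sharpening organ and lesion boundaries.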

Coronary Computed Tomographic Angiography to Optimize the Diagnostic Yield of Invasive Angiography for Low-Risk Patients Screened With Artificial Intelligence: Protocol for the CarDIA-AI Randomized Controlled Trial.

Petch J, Tabja Bortesi JP, Sheth T, Natarajan M, Pinilla-Echeverri N, Di S, Bangdiwala SI, Mosleh K, Ibrahim O, Bainey KR, Dobranowski J, Becerra MP, Sonier K, Schwalm JD

PubMed · May 21, 2025
Invasive coronary angiography (ICA) is the gold standard in the diagnosis of coronary artery disease (CAD). Being invasive, it carries rare but serious risks, including myocardial infarction, stroke, major bleeding, and death. A large proportion of elective outpatients undergoing ICA have nonobstructive CAD, highlighting the suboptimal use of this test. Coronary computed tomographic angiography (CCTA) is a noninvasive option that provides similar information with less risk and is recommended as a first-line test for patients with low-to-intermediate risk of CAD. Leveraging artificial intelligence (AI) to appropriately direct patients to ICA or CCTA based on the predicted probability of disease may improve the efficiency and safety of diagnostic pathways. The CarDIA-AI (Coronary computed tomographic angiography to optimize the Diagnostic yield of Invasive Angiography for low-risk patients screened with Artificial Intelligence) study aims to evaluate whether AI-based risk assessment for obstructive CAD implemented within a centralized triage process can optimize the use of ICA in outpatients referred for nonurgent ICA. CarDIA-AI is a pragmatic, open-label, superiority randomized controlled trial involving 2 Canadian cardiac centers. A total of 252 adults referred for elective outpatient ICA will be randomized 1:1 to usual care (directly proceeding to ICA) or to triage using an AI-based decision support tool. The AI-based decision support tool was developed using referral information from over 37,000 patients and uses a light gradient boosting machine model to predict the probability of obstructive CAD based on 42 clinically relevant predictors, including patient referral information, demographic characteristics, risk factors, and medical history.
Participants in the intervention arm will have their ICA referral forms and medical charts reviewed, and select details entered into the decision support tool, which recommends CCTA or ICA based on the patient's predicted probability of obstructive CAD. All patients will receive the selected imaging modality within 6 weeks of referral and will be subsequently followed for 90 days. The primary outcome is the proportion of normal or nonobstructive CAD diagnosed via ICA and will be assessed using a 2-sided z test to compare the patients referred for cardiac investigation with normal or nonobstructive CAD diagnosed through ICA between the intervention and control groups. Secondary outcomes include the number of angiograms avoided and the diagnostic yield of ICA. Recruitment began on January 9, 2025, and is expected to conclude in mid to late 2025. As of April 14, 2025, we have enrolled 81 participants. Data analysis will begin once data collection is completed. We expect to submit the results for publication in 2026. CarDIA-AI will be the first randomized controlled trial using AI to optimize patient selection for CCTA versus ICA, potentially improving diagnostic efficiency, avoiding unnecessary complications of ICA, and improving health care resource usage. ClinicalTrials.gov NCT06648239; https://clinicaltrials.gov/study/NCT06648239/. DERR1-10.2196/71726.
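
The triage step in the intervention arm reduces to a thresholded routing decision on the model's predicted probability; a sketch, with a purely illustrative 0.30 threshold (the trial's actual operating point is not stated in the protocol abstract):

```python
def triage(prob_obstructive_cad, threshold=0.30):
    """Recommend an imaging modality from a model's predicted
    probability of obstructive CAD.

    The 0.30 threshold is illustrative only; CarDIA-AI's real
    decision support tool uses its own validated operating point.
    """
    return "ICA" if prob_obstructive_cad >= threshold else "CCTA"

# Hypothetical referral queue: patient ID -> predicted probability
referrals = {"pt-001": 0.12, "pt-002": 0.85, "pt-003": 0.22}
plan = {pid: triage(p) for pid, p in referrals.items()}

# Number of invasive angiograms avoided (a secondary outcome)
avoided = sum(1 for m in plan.values() if m == "CCTA")
print(plan, avoided)
```

The primary outcome then compares, between arms, how often ICA finds normal or nonobstructive coronaries, i.e., how often the invasive test was unnecessary.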

Mammography-based artificial intelligence for breast cancer detection, diagnosis, and BI-RADS categorization using multi-view and multi-level convolutional neural networks.

Tan H, Wu Q, Wu Y, Zheng B, Wang B, Chen Y, Du L, Zhou J, Fu F, Guo H, Fu C, Ma L, Dong P, Xue Z, Shen D, Wang M

PubMed · May 21, 2025
We developed an artificial intelligence system (AIS) using multi-view multi-level convolutional neural networks for breast cancer detection, diagnosis, and BI-RADS categorization support in mammography. Twenty-four thousand eight hundred sixty-six breasts from 12,433 Asian women between August 2012 and December 2018 were enrolled. The study consisted of three parts: (1) evaluation of AIS performance in malignancy diagnosis; (2) stratified analysis of BI-RADS 3-4 subgroups with AIS; and (3) reassessment of BI-RADS 0 breasts with AIS assistance. We further evaluated the AIS by conducting a counterbalance-designed AI-assisted study, in which ten radiologists read 1302 cases with and without AIS assistance. The area under the receiver operating characteristic curve (AUC), sensitivity, specificity, accuracy, and F1 score were measured. The AIS yielded AUC values of 0.995, 0.933, and 0.947 for malignancy diagnosis in the validation set, testing set 1, and testing set 2, respectively. Within BI-RADS 3-4 subgroups with pathological results, AIS downgraded 83.1% of false-positives into benign groups, and upgraded 54.1% of false-negatives into malignant groups. AIS also successfully assisted radiologists in identifying 7 out of 43 malignancies initially diagnosed with BI-RADS 0, with a specificity of 96.7%. In the counterbalance-designed AI-assisted study, the average AUC across ten readers significantly improved with AIS assistance (p = 0.001). AIS can accurately detect and diagnose breast cancer on mammography and further serve as a supportive tool for BI-RADS categorization. An AI risk assessment tool employing deep learning algorithms was developed and validated for enhancing breast cancer diagnosis from mammograms, to improve risk stratification accuracy, particularly in patients with dense breasts, and serve as a decision support aid for radiologists. The false positive and negative rates of mammography diagnosis remain high. The AIS can yield a high AUC for malignancy diagnosis.
The AIS also plays an important role in stratifying BI-RADS categories.
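
A multi-view fusion step like the one the AIS performs can be illustrated as follows; mean fusion and the BI-RADS cutoffs are hypothetical stand-ins for the paper's learned multi-level network:

```python
def fuse_views(view_scores):
    """Fuse per-view malignancy probabilities into one breast-level
    score. Mean fusion is an illustrative choice; the paper's
    multi-view multi-level CNN learns its own fusion."""
    return sum(view_scores.values()) / len(view_scores)

def to_birads_support(score):
    """Map a fused score to a coarse BI-RADS-style suggestion.
    All cutoffs here are hypothetical."""
    if score < 0.02:
        return "BI-RADS 1-2 (benign)"
    if score < 0.10:
        return "BI-RADS 3 (probably benign)"
    if score < 0.95:
        return "BI-RADS 4 (suspicious)"
    return "BI-RADS 5 (highly suggestive)"

# Per-view scores for one breast (craniocaudal and mediolateral oblique)
scores = {"L-CC": 0.31, "L-MLO": 0.45}
fused = fuse_views(scores)
print(round(fused, 2), to_birads_support(fused))
```

Standard-view mammography produces two projections per breast, which is why a view-fusion stage is central to breast-level decisions.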

Systematic review on the impact of deep learning-driven worklist triage on radiology workflow and clinical outcomes.

Momin E, Cook T, Gershon G, Barr J, De Cecco CN, van Assen M

PubMed · May 21, 2025
To perform a systematic review on the impact of deep learning (DL)-based triage for reducing diagnostic delays and improving patient outcomes in peer-reviewed and pre-print publications. A search was conducted of primary research studies focused on DL-based worklist optimization for diagnostic imaging triage published on multiple databases from January 2018 until July 2024. Extracted data included study design, dataset characteristics, workflow metrics including report turnaround time and time-to-treatment, and patient outcome differences. Further analysis across clinical settings and integration modalities was performed using nonparametric statistics. Risk of bias was assessed with the Risk Of Bias In Non-randomized Studies of Interventions (ROBINS-I) checklist. A total of 38 studies from 20 publications, involving 138,423 images, were analyzed. Workflow interventions concerned pulmonary embolism (n = 8), stroke (n = 3), intracranial hemorrhage (n = 12), and chest conditions (n = 15). Patients in the post-DL-triage group had shorter median report turnaround times: a mean difference of 12.3 min (IQR: -25.7, -7.6) for pulmonary embolism, 20.5 min (IQR: -32.1, -9.3) for stroke, 4.3 min (IQR: -8.6, 1.3) for intracranial hemorrhage and 29.7 min (IQR: -2947.7, -18.3) for chest diseases. Sub-group analysis revealed that reductions varied per clinical environment and relative prevalence rates but were the highest when algorithms actively stratified and reordered the radiological worklist, with reductions of -43.7% in report turnaround time compared to -7.6% from widget-based systems (p < 0.01). DL-based triage systems had comparable report turnaround time improvements, especially in outpatient and high-prevalence settings, suggesting that AI-based triage holds promise in alleviating radiology workloads. Question Can DL-based triage address lengthening imaging report turnaround times and improve patient outcomes across distinct clinical environments?
Findings DL-based triage improved report turnaround time across disease groups, with higher reductions reported in high-prevalence or lower acuity settings. Clinical relevance DL-based workflow prioritization is a reliable tool for reducing diagnostic imaging delay for time-sensitive disease across clinical settings. However, further research and reliable metrics are needed to provide specific recommendations with regards to false-negative examinations and multi-condition prioritization.
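
The review's central workflow metric, report turnaround time (TAT) reduction, can be computed as below; the minute values are hypothetical, not drawn from any included study:

```python
from statistics import median

def tat_reduction(pre_minutes, post_minutes):
    """Median report turnaround time before and after DL triage,
    plus the relative reduction in percent, mirroring how the
    review summarizes per-study workflow improvements."""
    pre, post = median(pre_minutes), median(post_minutes)
    return pre, post, 100.0 * (pre - post) / pre

# Hypothetical turnaround times (minutes) for one intervention
pre = [55, 60, 72, 80, 95]
post = [38, 44, 47, 52, 61]
pre_m, post_m, pct = tat_reduction(pre, post)
print(pre_m, post_m, round(pct, 1))
```

Percent reduction is the quantity behind the review's -43.7% (worklist reordering) versus -7.6% (widget-based) comparison.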

Deep Learning with Domain Randomization in Image and Feature Spaces for Abdominal Multiorgan Segmentation on CT and MRI Scans.

Shi Y, Wang L, Qureshi TA, Deng Z, Xie Y, Li D

PubMed · May 21, 2025
<i>"Just Accepted" papers have undergone full peer review and have been accepted for publication in <i>Radiology: Artificial Intelligence</i>. This article will undergo copyediting, layout, and proof review before it is published in its final version. Please note that during production of the final copyedited article, errors may be discovered which could affect the content.</i> Purpose To develop a deep learning segmentation model that can segment abdominal organs on CT and MR images with high accuracy and generalization ability. Materials and Methods In this study, an extended nnU-Net model was trained for abdominal organ segmentation. A domain randomization method in both the image and feature space was developed to improve the generalization ability under cross-site and cross-modality settings on public prostate MRI and abdominal CT and MRI datasets. The prostate MRI dataset contains data from multiple health care institutions with domain shifts. The abdominal CT and MRI dataset is structured for cross-modality evaluation, training on one modality (eg, MRI) and testing on the other (eg, CT). This domain randomization method was then used to train a segmentation model with enhanced generalization ability on the abdominal multiorgan segmentation challenge (AMOS) dataset to improve abdominal CT and MR multiorgan segmentation, and the model was compared with two commonly used segmentation algorithms (TotalSegmentator and MRSegmentator). Model performance was evaluated using the Dice similarity coefficient (DSC). Results The proposed domain randomization method showed improved generalization ability on the cross-site and cross-modality datasets compared with the state-of-the-art methods. The segmentation model using this method outperformed two other publicly available segmentation models on data from unseen test domains (Average DSC: 0.88 versus 0.79; <i>P</i> < .001 and 0.88 versus 0.76; <i>P</i> < .001). 
Conclusion The combination of image and feature domain randomizations improved the accuracy and generalization ability of deep learning-based abdominal segmentation on CT and MR images. © RSNA, 2025.
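
Image-space domain randomization of the kind described can be sketched as a random photometric perturbation applied during training; the perturbation family here (random gamma plus global scaling) is an assumption, since the abstract does not detail the transforms used:

```python
import random

def randomize_intensity(img, rng):
    """Image-space domain randomization sketch: apply a random gamma
    and a random global scale to a 2D image with values in [0, 1],
    so the segmentation model sees varied intensity distributions
    (as it would across scanners, sites, or CT vs MRI)."""
    gamma = rng.uniform(0.7, 1.5)
    scale = rng.uniform(0.9, 1.1)
    return [[min(1.0, scale * (v ** gamma)) for v in row] for row in img]

rng = random.Random(0)  # seeded for reproducibility
img = [[0.2, 0.8], [0.5, 1.0]]
aug = randomize_intensity(img, rng)
print(all(0.0 <= v <= 1.0 for row in aug for v in row))
```

Feature-space randomization, the paper's second ingredient, applies analogous perturbations to intermediate network activations rather than input pixels.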

An Ultrasound Image-Based Deep Learning Radiomics Nomogram for Differentiating Between Benign and Malignant Indeterminate Cytology (Bethesda III) Thyroid Nodules: A Retrospective Study.

Zhong L, Shi L, Li W, Zhou L, Wang K, Gu L

PubMed · May 21, 2025
Our objective is to develop and validate a deep learning radiomics nomogram (DLRN) based on preoperative ultrasound images and clinical features for predicting the malignancy of thyroid nodules with indeterminate cytology (Bethesda III). Between June 2017 and June 2022, we conducted a retrospective study of 194 patients with surgically confirmed indeterminate cytology (Bethesda III) at our hospital. The training and internal validation cohorts comprised 155 and 39 patients, respectively (a 7:3 ratio). To facilitate external validation, we selected an additional 80 patients from each of the remaining two medical centers. Utilizing preoperative ultrasound data, we obtained imaging markers encompassing both deep learning and manually extracted radiomic features. After feature selection, we developed a comprehensive diagnostic model to evaluate the predictive value for Bethesda III benign and malignant cases. The model's diagnostic accuracy, calibration, and clinical applicability were systematically assessed. The results showed that the prediction model, which integrated 512 deep transfer learning (DTL) features extracted from the pre-trained ResNet34 network, ultrasound radiomics, and clinical features, exhibited superior stability in distinguishing between benign and malignant indeterminate thyroid nodules (Bethesda III). In the validation set, the AUC was 0.92 (95% CI: 0.831-1.000), and the accuracy, sensitivity, specificity, precision, and recall were 0.897, 0.882, 0.909, 0.882, and 0.882, respectively. The comprehensive multidimensional data model based on deep transfer learning, ultrasound radiomics features, and clinical characteristics can effectively distinguish benign and malignant indeterminate thyroid nodules (Bethesda III), providing valuable guidance for treatment selection in patients with indeterminate thyroid nodules.
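
The AUC reported for the nomogram is the probability that a malignant nodule receives a higher score than a benign one; a minimal empirical computation (the scores below are made up for illustration):

```python
def auc(scores_benign, scores_malignant):
    """Empirical AUC: probability a malignant case scores higher
    than a benign one, counting ties as half a win. This rank-based
    definition is equivalent to the area under the ROC curve."""
    wins = 0.0
    for m in scores_malignant:
        for b in scores_benign:
            if m > b:
                wins += 1.0
            elif m == b:
                wins += 0.5
    return wins / (len(scores_malignant) * len(scores_benign))

benign = [0.1, 0.2, 0.35, 0.4]
malignant = [0.3, 0.7, 0.8, 0.9]
print(auc(benign, malignant))
```

A value of 0.92, as reported for the DLRN, means a randomly chosen malignant Bethesda III nodule outranks a randomly chosen benign one 92% of the time.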

Synthesizing [<sup>18</sup>F]PSMA-1007 PET bone images from CT images with GAN for early detection of prostate cancer bone metastases: a pilot validation study.

Chai L, Yao X, Yang X, Na R, Yan W, Jiang M, Zhu H, Sun C, Dai Z, Yang X

PubMed · May 21, 2025
[<sup>18</sup>F]FDG PET/CT scan combined with [<sup>18</sup>F]PSMA-1007 PET/CT scan is commonly conducted for detecting bone metastases in prostate cancer (PCa). However, it is expensive and may expose patients to more radiation hazards. This study explores deep learning (DL) techniques to synthesize [<sup>18</sup>F]PSMA-1007 PET bone images from CT bone images for the early detection of bone metastases in PCa, which may reduce additional PET/CT scans and relieve the burden on patients. We retrospectively collected paired whole-body (WB) [<sup>18</sup>F]PSMA-1007 PET/CT images from 152 patients with clinical and pathological diagnosis results, including 123 PCa cases and 29 cases of benign lesions. The average age of the patients was 67.48 ± 10.87 years, and the average lesion size was 8.76 ± 15.5 mm. The paired low-dose CT and PET images were preprocessed and segmented to construct the WB bone structure images. The 152 subjects were randomly stratified into training, validation, and test groups in a 92:41:19 split. Two generative adversarial network (GAN) models, Pix2pix and CycleGAN, were trained to synthesize [<sup>18</sup>F]PSMA-1007 PET bone images from paired CT bone images. The performance of the two synthesis models was evaluated using the quantitative metrics of mean absolute error (MAE), mean squared error (MSE), peak signal-to-noise ratio (PSNR), and the structural similarity index metric (SSIM), as well as the target-to-background ratio (TBR). The results of DL-based image synthesis indicated that synthesizing [<sup>18</sup>F]PSMA-1007 PET bone images from low-dose CT bone images is highly feasible. The Pix2pix model performed better, with an SSIM of 0.97, PSNR of 44.96, MSE of 0.80, and MAE of 0.10. The TBRs of bone metastasis lesions calculated on DL-synthesized PET bone images were highly correlated with those of real PET bone images (Pearson's r > 0.90) and showed no significant differences at the p < 0.05 level.
It is feasible to generate synthetic [<sup>18</sup>F]PSMA-1007 PET bone images from CT bone images by using DL techniques with reasonable accuracy, which can provide information for early detection of PCa bone metastases.
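
The image-quality metrics used to score the synthesized PET bone images (MAE, MSE, PSNR) can be computed directly; a sketch on a toy image pair, assuming intensities normalized to [0, 1]:

```python
import math

def mae_mse_psnr(real, synth, peak=1.0):
    """Pixelwise MAE, MSE, and PSNR between a real and a synthesized
    image (2D lists of floats in [0, peak])."""
    diffs = [r - s for rr, sr in zip(real, synth) for r, s in zip(rr, sr)]
    n = len(diffs)
    mae = sum(abs(d) for d in diffs) / n
    mse = sum(d * d for d in diffs) / n
    # PSNR is undefined (infinite) for identical images
    psnr = float("inf") if mse == 0 else 10.0 * math.log10(peak * peak / mse)
    return mae, mse, psnr

real = [[0.0, 0.5], [1.0, 0.25]]
synth = [[0.1, 0.5], [0.9, 0.25]]
mae, mse, psnr = mae_mse_psnr(real, synth)
print(round(mae, 3), round(mse, 3), round(psnr, 1))
```

SSIM, the fourth reported metric, additionally compares local luminance, contrast, and structure windows rather than raw pixel differences, so it is not reducible to this per-pixel form.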