Daily proton dose re-calculation on deep-learning corrected cone-beam computed tomography scans.

Vestergaard CD, Muren LP, Elstrøm UV, Stolarczyk L, Nørrevang O, Petersen SE, Taasti VT

PubMed · May 22, 2025
Synthetic CT (sCT) generation from cone-beam CT (CBCT) must maintain stable performance and allow for accurate dose calculation across all treatment fractions to effectively support adaptive proton therapy. This study evaluated a 3D deep-learning (DL) network for sCT generation for prostate cancer patients over the full treatment course. Data from 25 prostate cancer patients were used to train the DL network, and data from six further patients were used to test it. Patients in the test set had a planning CT, 39 CBCT images, and at least one repeat CT (reCT) used for replanning. The generated sCT images were compared to fan-beam planning and reCT images in terms of (i) CT number accuracy and stability within spherical regions of interest (ROIs) in the bladder, prostate, and femoral heads, (ii) proton range calculation accuracy through single-spot plans, and (iii) dose trends in target coverage over the treatment course (one patient). The sCT images demonstrated image quality comparable to CT while preserving the CBCT anatomy. The mean CT numbers on the sCT and CT images were comparable; for the prostate ROI, for example, they ranged from 29 HU to 59 HU on sCT and from 36 HU to 50 HU on CT. The largest median proton range difference was 1.9 mm. Proton dose calculations showed excellent target coverage (V95% ≥ 99.6%) for the high-dose target. The DL network effectively generated high-quality sCT images with CT numbers, proton range, and dose characteristics comparable to fan-beam CT. Its robustness against intra-patient variations makes it a feasible tool for adaptive proton therapy.
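
The CT-number comparison described above reduces to averaging Hounsfield units inside a spherical ROI on both image sets. Below is a minimal Python sketch of that check; the array shapes, voxel spacing, ROI placement, and HU values are illustrative assumptions, not the study's data.

```python
import numpy as np

def spherical_roi_mask(shape, center_vox, radius_mm, spacing_mm):
    """Boolean mask of a sphere, given a voxel-space center and a radius in mm."""
    zz, yy, xx = np.indices(shape)
    dz = (zz - center_vox[0]) * spacing_mm[0]
    dy = (yy - center_vox[1]) * spacing_mm[1]
    dx = (xx - center_vox[2]) * spacing_mm[2]
    return dz**2 + dy**2 + dx**2 <= radius_mm**2

rng = np.random.default_rng(0)
ct = rng.normal(40.0, 15.0, size=(64, 128, 128))   # stand-in HU volume (planning CT)
sct = ct + rng.normal(0.0, 5.0, size=ct.shape)     # stand-in sCT with small HU deviations

mask = spherical_roi_mask(ct.shape, center_vox=(32, 64, 64),
                          radius_mm=10.0, spacing_mm=(2.0, 1.0, 1.0))
print(f"mean HU, CT : {ct[mask].mean():6.1f}")
print(f"mean HU, sCT: {sct[mask].mean():6.1f}")
print(f"difference  : {sct[mask].mean() - ct[mask].mean():6.1f}")
```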

Deep Learning for Automated Prediction of Sphenoid Sinus Pneumatization in Computed Tomography.

Alamer A, Salim O, Alharbi F, Alsaleem F, Almuqbil A, Alhassoon K, Alsunaydih F

PubMed · May 22, 2025
The sphenoid sinus is an important access point for transsphenoidal surgeries, but variations in its pneumatization may complicate surgical safety. Deep learning can be used to identify these anatomical variations. We developed a convolutional neural network (CNN) model for the automated prediction of sphenoid sinus pneumatization patterns in computed tomography (CT) scans. This model was tested on mid-sagittal CT images. Two radiologists labeled all CT images into four pneumatization patterns: conchal (type I), presellar (type II), sellar (type III), and postsellar (type IV). We then augmented the training set to address the limited size and imbalanced nature of the data. The initial dataset included 249 CT images, divided into training (n = 174) and test (n = 75) datasets. The training dataset was augmented to 378 images. Following augmentation, the overall diagnostic accuracy of the model improved from 76.71% to 84%, with an area under the curve (AUC) of 0.84, indicating very good diagnostic performance. Subgroup analysis showed excellent results for type IV, with the highest AUC of 0.93, perfect sensitivity (100%), and an F1-score of 0.94. The model also performed robustly for type I, achieving an accuracy of 97.33% and high specificity (99%). These metrics highlight the model's potential for reliable clinical application. The proposed CNN model demonstrates very good diagnostic accuracy in identifying various sphenoid sinus pneumatization patterns, particularly excelling in type IV, which is crucial for endoscopic sinus surgery due to its higher risk of surgical complications. By assisting radiologists and surgeons, this model enhances the safety of transsphenoidal surgery, highlighting its value, novelty, and applicability in clinical settings.
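
For readers who want the shape of such a classifier, here is a compact PyTorch sketch of a four-class CNN over mid-sagittal slices. The architecture, input size, and class ordering are assumptions for illustration; the paper does not publish its exact network.

```python
import torch
import torch.nn as nn

CLASSES = ["conchal", "presellar", "sellar", "postsellar"]  # types I-IV

class PneumatizationCNN(nn.Module):
    def __init__(self, n_classes=len(CLASSES)):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),          # global pooling -> (B, 64, 1, 1)
        )
        self.classifier = nn.Linear(64, n_classes)

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

model = PneumatizationCNN()
slice_batch = torch.randn(4, 1, 224, 224)     # grayscale mid-sagittal slices
logits = model(slice_batch)
print(logits.softmax(dim=1))                  # per-class probabilities
```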

Render-FM: A Foundation Model for Real-time Photorealistic Volumetric Rendering

Zhongpai Gao, Meng Zheng, Benjamin Planche, Anwesa Choudhuri, Terrence Chen, Ziyan Wu

arXiv preprint · May 22, 2025
Volumetric rendering of Computed Tomography (CT) scans is crucial for visualizing complex 3D anatomical structures in medical imaging. Current high-fidelity approaches, especially neural rendering techniques, require time-consuming per-scene optimization, limiting clinical applicability due to computational demands and poor generalizability. We propose Render-FM, a novel foundation model for direct, real-time volumetric rendering of CT scans. Render-FM employs an encoder-decoder architecture that directly regresses 6D Gaussian Splatting (6DGS) parameters from CT volumes, eliminating per-scan optimization through large-scale pre-training on diverse medical data. By integrating robust feature extraction with the expressive power of 6DGS, our approach efficiently generates high-quality, real-time interactive 3D visualizations across diverse clinical CT data. Experiments demonstrate that Render-FM achieves visual fidelity comparable or superior to specialized per-scan methods while drastically reducing preparation time from nearly an hour to seconds for a single inference step. This advancement enables seamless integration into real-time surgical planning and diagnostic workflows. The project page is: https://gaozhongpai.github.io/renderfm/.
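
As a rough illustration of the feed-forward idea (regressing splat parameters directly from a volume in one pass instead of optimizing per scene), here is a toy PyTorch sketch. The parameter layout, Gaussian count, and network are generic placeholders, not the Render-FM architecture or its 6DGS parameterization.

```python
import torch
import torch.nn as nn

N_GAUSSIANS = 1024
PARAMS_PER_GAUSSIAN = 14  # placeholder: position(3)+scale(3)+rotation(4)+opacity(1)+color(3)

class VolumeToSplats(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv3d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv3d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool3d(4),                      # -> (B, 32, 4, 4, 4)
        )
        self.head = nn.Linear(32 * 4 ** 3, N_GAUSSIANS * PARAMS_PER_GAUSSIAN)

    def forward(self, vol):
        feat = self.encoder(vol).flatten(1)               # single forward pass
        return self.head(feat).view(-1, N_GAUSSIANS, PARAMS_PER_GAUSSIAN)

ct_volume = torch.randn(1, 1, 64, 64, 64)   # toy CT volume
splats = VolumeToSplats()(ct_volume)
print(splats.shape)                          # torch.Size([1, 1024, 14])
```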

A Deep Learning Vision-Language Model for Diagnosing Pediatric Dental Diseases

Pham, T.

medRxiv preprint · May 22, 2025
This study proposes a deep learning vision-language model for the automated diagnosis of pediatric dental diseases, with a focus on differentiating between caries and periapical infections. The model integrates visual features extracted from panoramic radiographs using methods of non-linear dynamics and textural encoding with textual descriptions generated by a large language model. These multimodal features are concatenated and used to train a 1D-CNN classifier. Experimental results demonstrate that the proposed model outperforms conventional convolutional neural networks and standalone language-based approaches, achieving high accuracy (90%), sensitivity (92%), precision (92%), and an AUC of 0.96. This work highlights the value of combining structured visual and textual representations in improving diagnostic accuracy and interpretability in dental radiology. The approach offers a promising direction for the development of context-aware, AI-assisted diagnostic tools in pediatric dental care.
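
A minimal PyTorch sketch of the fusion step the abstract describes: an image-feature vector and a text-embedding vector are concatenated and classified with a 1D CNN. The feature dimensions and the two-class setup (caries vs. periapical infection) are assumptions for illustration.

```python
import torch
import torch.nn as nn

IMG_DIM, TXT_DIM = 256, 384          # assumed feature sizes

class FusionCNN1D(nn.Module):
    def __init__(self, n_classes=2):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=5, padding=2), nn.ReLU(),
            nn.MaxPool1d(2),
            nn.Conv1d(16, 32, kernel_size=5, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),
        )
        self.fc = nn.Linear(32, n_classes)

    def forward(self, img_feat, txt_feat):
        # concatenate modalities, treat the fused vector as a 1-channel sequence
        fused = torch.cat([img_feat, txt_feat], dim=1).unsqueeze(1)  # (B, 1, D)
        return self.fc(self.conv(fused).flatten(1))

img_feat = torch.randn(8, IMG_DIM)   # e.g., textural / nonlinear-dynamics features
txt_feat = torch.randn(8, TXT_DIM)   # e.g., embedded LLM-generated description
print(FusionCNN1D()(img_feat, txt_feat).shape)  # torch.Size([8, 2])
```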

Customized GPT-4V(ision) for radiographic diagnosis: can large language model detect supernumerary teeth?

Aşar EM, İpek İ, Bilge K

PubMed · May 21, 2025
With the growing capabilities of language models like ChatGPT to process text and images, this study evaluated their accuracy in detecting supernumerary teeth on periapical radiographs. A customized GPT-4V model (CGPT-4V) was also developed to assess whether domain-specific training could improve diagnostic performance compared to standard GPT-4V and GPT-4o models. One hundred eighty periapical radiographs (90 with and 90 without supernumerary teeth) were evaluated using GPT-4V, GPT-4o, and a fine-tuned CGPT-4V model. Each image was assessed separately with the standardized prompt "Are there any supernumerary teeth in the radiograph above?" to avoid contextual bias. Three dental experts scored the responses using a three-point Likert scale for positive cases and a binary scale for negatives. Chi-square tests and ROC analysis were used to compare model performances (p < 0.05). Among the three models, CGPT-4V exhibited the highest accuracy, detecting supernumerary teeth correctly in 91% of cases, compared to 77% for GPT-4o and 63% for GPT-4V. The CGPT-4V model also demonstrated a significantly lower false positive rate (16%) than GPT-4V (42%). A statistically significant difference was found between CGPT-4V and GPT-4o (p < 0.001), while no significant difference was observed between GPT-4V and CGPT-4V or between GPT-4V and GPT-4o. Additionally, CGPT-4V successfully identified multiple supernumerary teeth in radiographs where present. These findings highlight the diagnostic potential of customized GPT models in dental radiology. Future research should focus on multicenter validation, seamless clinical integration, and cost-effectiveness to support real-world implementation.
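
The per-image evaluation loop is straightforward to reproduce with the OpenAI Python client; the sketch below submits one radiograph with the study's standardized prompt. The model name and file path are placeholders, and a fine-tuned CGPT-4V would need its own model identifier.

```python
import base64
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def ask_model(image_path: str, model: str = "gpt-4o") -> str:
    """Send one radiograph plus the study's standardized prompt, return the reply."""
    with open(image_path, "rb") as f:
        b64 = base64.b64encode(f.read()).decode()
    resp = client.chat.completions.create(
        model=model,
        messages=[{
            "role": "user",
            "content": [
                {"type": "text",
                 "text": "Are there any supernumerary teeth in the radiograph above?"},
                {"type": "image_url",
                 "image_url": {"url": f"data:image/png;base64,{b64}"}},
            ],
        }],
    )
    return resp.choices[0].message.content

# hypothetical file path for illustration
print(ask_model("periapical_001.png"))
```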

Exchange of Quantitative Computed Tomography Assessed Body Composition Data Using Fast Healthcare Interoperability Resources as a Necessary Step Toward Interoperable Integration of Opportunistic Screening Into Clinical Practice: Methodological Development Study.

Wen Y, Choo VY, Eil JH, Thun S, Pinto Dos Santos D, Kast J, Sigle S, Prokosch HU, Ovelgönne DL, Borys K, Kohnke J, Arzideh K, Winnekens P, Baldini G, Schmidt CS, Haubold J, Nensa F, Pelka O, Hosch R

PubMed · May 21, 2025
Fast Healthcare Interoperability Resources (FHIR) is a widely used standard for storing and exchanging health care data. At the same time, image-based artificial intelligence (AI) models for quantifying relevant body structures and organs from routine computed tomography (CT)/magnetic resonance imaging scans have emerged. The missing link, and a necessary step in advancing personalized medicine, is the incorporation of measurements delivered by AI models into an interoperable and standardized format. Incorporating image-based measurements and biomarkers into FHIR profiles can standardize data exchange, enabling timely, personalized treatment decisions and improving the precision and efficiency of patient care. This study aims to present the synergistic incorporation of CT-derived body organ and composition measurements with FHIR, delineating an initial paradigm for storing image-based biomarkers. This study integrated the results of the Body and Organ Analysis (BOA) model into FHIR profiles to enhance the interoperability of image-based biomarkers in radiology. The BOA model was selected as an exemplary AI model due to its ability to provide detailed body composition and organ measurements from CT scans. The FHIR profiles were developed based on 2 primary observation types: Body Composition Analysis (BCA Observation) for quantitative body composition metrics and Body Structure Observation for organ measurements. These profiles were structured to interoperate with a specially designed Diagnostic Report profile, which references the associated Imaging Study, ensuring a standardized linkage between image data and derived biomarkers. To ensure interoperability, all labels were mapped to SNOMED CT (Systematized Nomenclature of Medicine - Clinical Terms) or RadLex terminologies using specific value sets. The profiles were developed using FHIR Shorthand (FSH) and SUSHI, enabling efficient definition and implementation guide generation, ensuring consistency and maintainability. In this study, four BOA profiles are presented: Body Composition Analysis Observation, Body Structure Volume Observation, Diagnostic Report, and Imaging Study. These FHIR profiles, which cover 104 anatomical landmarks, 8 body regions, and 8 tissues, enable the interoperable usage of the results of AI segmentation models, providing a direct link between image studies, series, and measurements. The BOA profiles provide a foundational framework for integrating AI-derived imaging biomarkers into FHIR, bridging the gap between advanced imaging analytics and standardized health care data exchange. By enabling structured, interoperable representation of body composition and organ measurements, these profiles facilitate seamless integration into clinical and research workflows, supporting improved data accessibility and interoperability. Their adaptability allows for extension to other imaging modalities and AI models, fostering a more standardized and scalable approach to using imaging biomarkers in precision medicine. This work represents a step toward enhancing the integration of AI-driven insights into digital health ecosystems, ultimately contributing to more data-driven, personalized, and efficient patient care.
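
To make the idea concrete, here is a hedged sketch of what a single body-composition measurement could look like as a FHIR R4 Observation, built as JSON from Python. The profile URL, SNOMED CT code, resource references, and values are placeholders, not the published BOA profiles.

```python
import json

# Minimal FHIR R4 Observation for one AI-derived body-composition metric.
# All identifiers below are illustrative placeholders.
observation = {
    "resourceType": "Observation",
    "meta": {"profile": [
        "https://example.org/fhir/StructureDefinition/bca-observation"  # placeholder profile
    ]},
    "status": "final",
    "category": [{"coding": [{
        "system": "http://terminology.hl7.org/CodeSystem/observation-category",
        "code": "imaging",
    }]}],
    "code": {"coding": [{
        "system": "http://snomed.info/sct",
        "code": "00000000",                 # placeholder SNOMED CT concept
        "display": "Example tissue volume",
    }]},
    "subject": {"reference": "Patient/example"},
    "derivedFrom": [{"reference": "ImagingStudy/example-ct"}],  # links back to the scan
    "valueQuantity": {
        "value": 1234.5, "unit": "cm3",
        "system": "http://unitsofmeasure.org", "code": "cm3",
    },
}
print(json.dumps(observation, indent=2))
```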

ÆMMamba: An Efficient Medical Segmentation Model With Edge Enhancement.

Dong X, Zhou B, Yin C, Liao IY, Jin Z, Xu Z, Pu B

PubMed · May 21, 2025
Medical image segmentation is critical for disease diagnosis, treatment planning, and prognosis assessment, yet the complexity and diversity of medical images pose significant challenges to accurate segmentation. While Convolutional Neural Networks capture local features and Vision Transformers excel in the global context, both struggle with efficient long-range dependency modeling. Inspired by Mamba's State Space Modeling efficiency, we propose ÆMMamba, a novel multi-scale feature extraction framework built on the Mamba backbone network. ÆMMamba integrates several innovative modules: the Efficient Fusion Bridge (EFB) module, which employs a bidirectional state-space model and attention mechanisms to fuse multi-scale features; the Edge-Aware Module (EAM), which enhances low-level edge representation using Sobel-based edge extraction; and the Boundary Sensitive Decoder (BSD), which leverages inverse attention and residual convolutional layers to handle cross-level complex boundaries. ÆMMamba achieves state-of-the-art performance across 8 medical segmentation datasets. On polyp segmentation datasets (Kvasir, ClinicDB, ColonDB, EndoScene, ETIS), it records the highest mDice and mIoU scores, outperforming methods like MADGNet and Swin-UMamba, with a standout mDice of 72.22 on ETIS, the most challenging dataset in this domain. For lung and breast segmentation, ÆMMamba surpasses competitors such as H2Former and SwinUnet, achieving Dice scores of 84.24 on BUSI and 79.83 on COVID-19 Lung. On the LGG brain MRI dataset, ÆMMamba attains an mDice of 87.25 and an mIoU of 79.31, outperforming all compared methods. The source code will be released at https://github.com/xingbod/eMMamba.
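
The Sobel-based edge extraction attributed to the EAM can be sketched as a fixed, non-learnable convolution. The PyTorch snippet below shows the general mechanism under that assumption; it is not the released implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SobelEdge(nn.Module):
    """Per-channel Sobel gradient magnitude with fixed (non-learnable) kernels."""
    def __init__(self):
        super().__init__()
        gx = torch.tensor([[-1., 0., 1.], [-2., 0., 2.], [-1., 0., 1.]])
        gy = gx.t()
        # (2, 1, 3, 3): one output channel per gradient direction
        self.register_buffer("kernel", torch.stack([gx, gy]).unsqueeze(1))

    def forward(self, x):                        # x: (B, C, H, W)
        b, c, h, w = x.shape
        flat = x.reshape(b * c, 1, h, w)         # apply kernels to each channel
        grad = F.conv2d(flat, self.kernel, padding=1)   # (B*C, 2, H, W)
        mag = grad.pow(2).sum(dim=1, keepdim=True).sqrt()
        return mag.reshape(b, c, h, w)           # edge magnitude per channel

edges = SobelEdge()(torch.randn(2, 8, 32, 32))
print(edges.shape)  # torch.Size([2, 8, 32, 32])
```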

Coronary Computed Tomographic Angiography to Optimize the Diagnostic Yield of Invasive Angiography for Low-Risk Patients Screened With Artificial Intelligence: Protocol for the CarDIA-AI Randomized Controlled Trial.

Petch J, Tabja Bortesi JP, Sheth T, Natarajan M, Pinilla-Echeverri N, Di S, Bangdiwala SI, Mosleh K, Ibrahim O, Bainey KR, Dobranowski J, Becerra MP, Sonier K, Schwalm JD

PubMed · May 21, 2025
Invasive coronary angiography (ICA) is the gold standard in the diagnosis of coronary artery disease (CAD). Being invasive, it carries rare but serious risks including myocardial infarction, stroke, major bleeding, and death. A large proportion of elective outpatients undergoing ICA have nonobstructive CAD, highlighting the suboptimal use of this test. Coronary computed tomographic angiography (CCTA) is a noninvasive option that provides similar information with less risk and is recommended as a first-line test for patients with low-to-intermediate risk of CAD. Leveraging artificial intelligence (AI) to appropriately direct patients to ICA or CCTA based on the predicted probability of disease may improve the efficiency and safety of diagnostic pathways. The CarDIA-AI (Coronary computed tomographic angiography to optimize the Diagnostic yield of Invasive Angiography for low-risk patients screened with Artificial Intelligence) study aims to evaluate whether AI-based risk assessment for obstructive CAD implemented within a centralized triage process can optimize the use of ICA in outpatients referred for nonurgent ICA. CarDIA-AI is a pragmatic, open-label, superiority randomized controlled trial involving 2 Canadian cardiac centers. A total of 252 adults referred for elective outpatient ICA will be randomized 1:1 to usual care (directly proceeding to ICA) or to triage using an AI-based decision support tool. The AI-based decision support tool was developed using referral information from over 37,000 patients and uses a light gradient boosting machine model to predict the probability of obstructive CAD based on 42 clinically relevant predictors, including patient referral information, demographic characteristics, risk factors, and medical history. Participants in the intervention arm will have their ICA referral forms and medical charts reviewed, and select details entered into the decision support tool, which recommends CCTA or ICA based on the patient's predicted probability of obstructive CAD. All patients will receive the selected imaging modality within 6 weeks of referral and will be subsequently followed for 90 days. The primary outcome is the proportion of normal or nonobstructive CAD diagnosed via ICA and will be assessed using a 2-sided z test to compare the patients referred for cardiac investigation with normal or nonobstructive CAD diagnosed through ICA between the intervention and control groups. Secondary outcomes include the number of angiograms avoided and the diagnostic yield of ICA. Recruitment began on January 9, 2025, and is expected to conclude in mid to late 2025. As of April 14, 2025, we have enrolled 81 participants. Data analysis will begin once data collection is completed. We expect to submit the results for publication in 2026. CarDIA-AI will be the first randomized controlled trial using AI to optimize patient selection for CCTA versus ICA, potentially improving diagnostic efficiency, avoiding unnecessary complications of ICA, and improving health care resource usage. ClinicalTrials.gov NCT06648239; https://clinicaltrials.gov/study/NCT06648239/. DERR1-10.2196/71726.
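
The triage logic described in the protocol, a gradient-boosted classifier over referral features with a probability-based routing rule, can be sketched with LightGBM as follows. The features, labels, and threshold are synthetic assumptions, not the trial's model.

```python
import numpy as np
import lightgbm as lgb

rng = np.random.default_rng(42)
X = rng.normal(size=(5000, 42))                 # 42 referral-derived predictors (synthetic)
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(size=5000) > 1).astype(int)

# light gradient boosting machine, as the protocol names; hyperparameters assumed
model = lgb.LGBMClassifier(n_estimators=200, learning_rate=0.05)
model.fit(X, y)

THRESHOLD = 0.5                                 # assumed decision cut-off
new_referrals = rng.normal(size=(3, 42))
prob_obstructive = model.predict_proba(new_referrals)[:, 1]
for p in prob_obstructive:
    route = "ICA" if p >= THRESHOLD else "CCTA"
    print(f"predicted probability of obstructive CAD {p:.2f} -> recommend {route}")
```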

Mammography-based artificial intelligence for breast cancer detection, diagnosis, and BI-RADS categorization using multi-view and multi-level convolutional neural networks.

Tan H, Wu Q, Wu Y, Zheng B, Wang B, Chen Y, Du L, Zhou J, Fu F, Guo H, Fu C, Ma L, Dong P, Xue Z, Shen D, Wang M

PubMed · May 21, 2025
We developed an artificial intelligence system (AIS) using multi-view multi-level convolutional neural networks for breast cancer detection, diagnosis, and BI-RADS categorization support in mammography. Twenty-four thousand eight hundred sixty-six breasts from 12,433 Asian women between August 2012 and December 2018 were enrolled. The study consisted of three parts: (1) evaluation of AIS performance in malignancy diagnosis; (2) stratified analysis of BI-RADS 3-4 subgroups with AIS; and (3) reassessment of BI-RADS 0 breasts with AIS assistance. We further evaluated the AIS by conducting a counterbalance-designed AI-assisted study, in which ten radiologists read 1302 cases with and without AIS assistance. The area under the receiver operating characteristic curve (AUC), sensitivity, specificity, accuracy, and F1 score were measured. The AIS yielded AUC values of 0.995, 0.933, and 0.947 for malignancy diagnosis in the validation set, testing set 1, and testing set 2, respectively. Within BI-RADS 3-4 subgroups with pathological results, AIS downgraded 83.1% of false-positives into benign groups, and upgraded 54.1% of false-negatives into malignant groups. AIS also successfully assisted radiologists in identifying 7 out of 43 malignancies initially diagnosed with BI-RADS 0, with a specificity of 96.7%. In the counterbalance-designed AI-assisted study, the average AUC across ten readers significantly improved with AIS assistance (p = 0.001). AIS can accurately detect and diagnose breast cancer on mammography and further serve as a supportive tool for BI-RADS categorization. An AI risk assessment tool employing deep learning algorithms was developed and validated for enhancing breast cancer diagnosis from mammograms, to improve risk stratification accuracy, particularly in patients with dense breasts, and serve as a decision support aid for radiologists. The false-positive and false-negative rates of mammography diagnosis remain high. The AIS can yield a high AUC for malignancy diagnosis. The AIS is valuable in supporting BI-RADS categorization.
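
A minimal sketch of a multi-view design in PyTorch: a shared backbone encodes the four standard mammographic views (L-CC, L-MLO, R-CC, R-MLO) and the concatenated features feed a malignancy head. View handling and dimensions are illustrative assumptions, not the published AIS.

```python
import torch
import torch.nn as nn

class MultiViewNet(nn.Module):
    def __init__(self):
        super().__init__()
        # shared backbone applied to each view independently
        self.backbone = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),      # -> (B, 32) per view
        )
        self.head = nn.Linear(32 * 4, 1)                # 4 views fused by concatenation

    def forward(self, views):                           # views: (B, 4, 1, H, W)
        feats = [self.backbone(views[:, i]) for i in range(4)]
        return torch.sigmoid(self.head(torch.cat(feats, dim=1)))

batch = torch.randn(2, 4, 1, 256, 256)                  # two patients, four views each
print(MultiViewNet()(batch))                            # malignancy probabilities
```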

Systematic review on the impact of deep learning-driven worklist triage on radiology workflow and clinical outcomes.

Momin E, Cook T, Gershon G, Barr J, De Cecco CN, van Assen M

PubMed · May 21, 2025
To perform a systematic review on the impact of deep learning (DL)-based triage for reducing diagnostic delays and improving patient outcomes in peer-reviewed and preprint publications. A search was conducted of primary research studies focused on DL-based worklist optimization for diagnostic imaging triage published in multiple databases from January 2018 until July 2024. Extracted data included study design, dataset characteristics, workflow metrics including report turnaround time and time-to-treatment, and patient outcome differences. Further analysis between clinical settings and integration modality was investigated using nonparametric statistics. Risk of bias was assessed with the Risk Of Bias In Non-randomized Studies of Interventions (ROBINS-I) checklist. A total of 38 studies from 20 publications, involving 138,423 images, were analyzed. Workflow interventions concerned pulmonary embolism (n = 8), stroke (n = 3), intracranial hemorrhage (n = 12), and chest conditions (n = 15). Patients in the post-DL-triage group had shorter median report turnaround times: a mean reduction of 12.3 min (IQR: -25.7, -7.6) for pulmonary embolism, 20.5 min (IQR: -32.1, -9.3) for stroke, 4.3 min (IQR: -8.6, 1.3) for intracranial hemorrhage, and 29.7 min (IQR: -2947.7, -18.3) for chest diseases. Subgroup analysis revealed that reductions varied by clinical environment and relative prevalence rates but were highest when algorithms actively stratified and reordered the radiological worklist, with reductions of -43.7% in report turnaround time compared to -7.6% from widget-based systems (p < 0.01). DL-based triage systems had comparable report turnaround time improvements, especially in outpatient and high-prevalence settings, suggesting that AI-based triage holds promise in alleviating radiology workloads.
Question: Can DL-based triage address lengthening imaging report turnaround times and improve patient outcomes across distinct clinical environments?
Findings: DL-based triage improved report turnaround time across disease groups, with higher reductions reported in high-prevalence or lower-acuity settings.
Clinical relevance: DL-based workflow prioritization is a reliable tool for reducing diagnostic imaging delay for time-sensitive disease across clinical settings. However, further research and reliable metrics are needed to provide specific recommendations with regard to false-negative examinations and multi-condition prioritization.
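
The review's strongest effect came from systems that actively reorder the worklist by predicted urgency. The toy Python sketch below shows that integration pattern as a priority queue; the scores and accession numbers are fabricated placeholders.

```python
import heapq

# (model_score, accession_number); higher score = more urgent exam
exams = [(0.12, "ACC-1001"), (0.91, "ACC-1002"),
         (0.47, "ACC-1003"), (0.88, "ACC-1004")]

# heapq is a min-heap, so negate scores to pop the most urgent exam first
queue = [(-score, acc) for score, acc in exams]
heapq.heapify(queue)

print("reading order:")
while queue:
    neg_score, acc = heapq.heappop(queue)
    print(f"  {acc} (suspicion score {-neg_score:.2f})")
```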