
Assessing the value of artificial intelligence-based image analysis for pre-operative surgical planning of neck dissections and iENE detection in head and neck cancer patients.

Schmidl B, Hoch CC, Walter R, Wirth M, Wollenberg B, Hussain T

PubMed · May 30 2025
Accurate preoperative detection and analysis of lymph node metastasis (LNM) in head and neck squamous cell carcinoma (HNSCC) is essential for the surgical planning and execution of a neck dissection and may directly affect the morbidity and prognosis of patients. Additionally, predicting extranodal extension (ENE) from pre-operative imaging could be particularly valuable in HPV-positive oropharyngeal squamous cell carcinoma, enabling more accurate patient counseling and allowing the decision to favor primary chemoradiotherapy over immediate neck dissection when appropriate. Currently, radiological images are evaluated by radiologists and head and neck oncologists; automated image interpretation is not part of the current standard of care. Therefore, the value of preoperative image recognition by artificial intelligence (AI) with the large language model (LLM) ChatGPT-4V was evaluated in this exploratory study based on neck computed tomography (CT) images of HNSCC patients with cervical LNM and corresponding images without LNM. The objectives of this study were, first, to assess preoperative rater accuracy by comparing clinician assessments of imaging-detected extranodal extension (iENE) and the extent of neck dissection to AI predictions, and second, to evaluate pathology-based accuracy by comparing AI predictions to final histopathological outcomes. 45 preoperative CT scans were retrospectively analyzed: 15 cases in which a selective neck dissection (sND) was performed, 15 cases with ensuing modified radical neck dissection (mrND), and 15 cases without LNM (sND). Of note, image analysis was based on three single images provided to both ChatGPT-4V and the head and neck surgeons acting as reviewers. Final pathological characteristics were available in all cases, as all HNSCC patients had undergone surgery. ChatGPT-4V was tasked with assessing the extent of LNM on the preoperative CT scans, recommending the extent of neck dissection, and detecting iENE. Its diagnostic performance was reviewed independently by two head and neck surgeons, and its accuracy, sensitivity, and specificity were assessed. ChatGPT-4V reached a sensitivity of 100% and a specificity of 34.09% in identifying the need for a radical neck dissection based on neck CT images. The sensitivity and specificity of detecting iENE were 100% and 34.15%, respectively. Both human reviewers achieved higher specificity. Notably, ChatGPT-4V also recommended an mrND and detected iENE on CT images without any cervical LNM. In this exploratory study of 45 preoperative neck CT scans obtained before neck dissection, ChatGPT-4V substantially overestimated the degree and severity of lymph node metastasis in head and neck cancer. While these results suggest that ChatGPT-4V is not yet a tool that adds value to surgical planning in head and neck cancer, its unparalleled speed of analysis and well-founded reasoning suggest that AI tools may provide added value in the future.
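For readers who want to sanity-check the reported operating point, here is a minimal Python sketch recomputing sensitivity and specificity from a confusion matrix; the per-case labels are hypothetical stand-ins chosen to roughly reproduce the published figures, not the study's actual data.

```python
# Hypothetical labels approximating the study: 15 mrND cases, 30 sND/no-LNM cases.
# 1 = mrND/iENE-positive, 0 = negative.
from sklearn.metrics import confusion_matrix

y_true = [1] * 15 + [0] * 30
y_pred = [1] * 15 + [1] * 20 + [0] * 10   # an over-calling rater, as observed for ChatGPT-4V

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
sensitivity = tp / (tp + fn)   # 100% -> no true positives missed
specificity = tn / (tn + fp)   # low -> many false mrND recommendations
print(f"sensitivity={sensitivity:.2%}, specificity={specificity:.2%}")
```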

Deep learning without borders: recent advances in ultrasound image classification for liver diseases diagnosis.

Yousefzamani M, Babapour Mofrad F

PubMed · May 30 2025
Liver diseases are among the top global health burdens. Non-invasive diagnostics that cause no discomfort to the patient have become increasingly important, and among them ultrasound is the most widely used. Deep learning, in particular convolutional neural networks (CNNs), has transformed the classification of liver diseases by automating the analysis of images that are difficult to interpret. This review summarizes recent progress in deep learning techniques for the classification of liver diseases using ultrasound imaging. It evaluates various models, from CNNs to hybrids such as CNN-Transformer, for detecting fatty liver, fibrosis, and liver cancer, among other conditions. Challenges in generalizing data and models across different clinical environments are also discussed. Deep learning holds great promise for the automatic diagnosis of liver diseases, and most models have achieved high accuracy across clinical studies. Despite this promise, challenges related to generalization remain. Future hardware developments and access to high-quality clinical data should further improve the performance of these models and secure their role in the diagnosis of liver diseases.
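As a rough illustration of the CNN-Transformer hybrid pattern the review covers, here is a minimal PyTorch sketch in which a small CNN extracts local texture features from a B-mode ultrasound image and a Transformer encoder models global context before classification; all layer sizes are illustrative assumptions, not taken from any specific reviewed model.

```python
import torch
import torch.nn as nn

class CNNTransformerClassifier(nn.Module):
    def __init__(self, n_classes=3, dim=128):
        super().__init__()
        self.cnn = nn.Sequential(  # local feature extractor
            nn.Conv2d(1, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, dim, 3, stride=2, padding=1), nn.ReLU(),
        )
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=4, batch_first=True)
        self.transformer = nn.TransformerEncoder(layer, num_layers=2)
        self.head = nn.Linear(dim, n_classes)

    def forward(self, x):                          # x: (B, 1, H, W) grayscale ultrasound
        f = self.cnn(x)                            # (B, dim, H/8, W/8) local features
        tokens = f.flatten(2).transpose(1, 2)      # (B, N, dim) patch tokens
        ctx = self.transformer(tokens).mean(dim=1) # global context, pooled
        return self.head(ctx)                      # logits over disease classes

logits = CNNTransformerClassifier()(torch.randn(2, 1, 224, 224))
```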

Bias in Artificial Intelligence: Impact on Breast Imaging.

Net JM, Collado-Mesa F

PubMed · May 30 2025
Artificial intelligence (AI) in breast imaging has garnered significant attention given the numerous reports of improved efficiency and accuracy and its potential to bridge the gap between expanding imaging volumes and limited physician resources. While AI models are developed with specific data points, on specific equipment, and in specific populations, the real-world clinical environment is dynamic and patient populations are diverse, which can limit generalizability and widespread adoption of AI in clinical practice. Implementation of AI models into clinical practice therefore requires focused attention on the potential for AI bias to affect outcomes. The following review presents the concept, sources, and types of AI bias to be considered when implementing AI models and offers strategies to mitigate AI bias in practice.
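One concrete mitigation step consistent with the review's recommendations is auditing a model per subgroup rather than trusting aggregate metrics. The sketch below, with entirely hypothetical data and a hypothetical `density` field, shows how a subgroup sensitivity audit might look.

```python
import pandas as pd

# Hypothetical audit table: model predictions alongside a subgroup attribute.
df = pd.DataFrame({
    "y_true":  [1, 0, 1, 1, 0, 0, 1, 0],
    "y_pred":  [1, 0, 0, 1, 0, 1, 1, 0],
    "density": ["dense", "dense", "dense", "fatty",
                "fatty", "dense", "fatty", "fatty"],
})

for group, g in df.groupby("density"):
    tp = ((g.y_true == 1) & (g.y_pred == 1)).sum()
    fn = ((g.y_true == 1) & (g.y_pred == 0)).sum()
    sens = tp / (tp + fn) if (tp + fn) else float("nan")
    print(f"{group}: sensitivity={sens:.2f} (n={len(g)})")  # flags subgroup gaps
```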

Phantom-Based Ultrasound-ECG Deep Learning Framework for Prospective Cardiac Computed Tomography.

Ganesh S, Lindsey BD, Tridandapani S, Bhatti PT

PubMed · May 30 2025
We present the first multimodal deep learning framework combining ultrasound (US) and electrocardiography (ECG) data to predict cardiac quiescent periods (QPs) for optimized computed tomography angiography (CTA) gating. The framework integrates a 3D convolutional neural network (CNN) for US data and an artificial neural network (ANN) for ECG data. A dynamic heart motion phantom, replicating diverse cardiac conditions, including arrhythmias, was used to validate the framework. Performance was assessed across varying QP lengths, cardiac segments, and motions to simulate real-world conditions. The multimodal US-ECG 3D CNN-ANN framework demonstrated improved QP prediction accuracy compared to single-modality ECG-only gating, achieving 96.87% accuracy compared to 85.56%, including scenarios involving arrhythmic conditions. Notably, the framework shows higher accuracy for longer QP durations (100-200 ms) than for shorter durations (<100 ms), while still outperforming single-modality methods, which often fail to detect shorter quiescent phases, especially in arrhythmic cases. Consistently outperforming single-modality approaches, it achieves reliable QP prediction across cardiac regions, including the whole phantom, the interventricular septum, and cardiac wall regions. Analysis of QP prediction accuracy across cardiac segments demonstrated an average accuracy of 92% in clinically relevant echocardiographic views, highlighting the framework's robustness. Combining US and ECG data in a multimodal framework improves QP prediction accuracy under variable cardiac motion, particularly in arrhythmic conditions. Since even small errors in cardiac CTA can result in non-diagnostic scans, multimodal gating may improve diagnostic scan rates in patients with high and variable heart rates and arrhythmias.
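To make the fusion architecture concrete, here is a minimal PyTorch sketch of the described pattern: a 3D CNN encodes an ultrasound clip, a small ANN encodes the ECG segment, and the concatenated features predict a quiescent-period logit. All shapes and layer sizes are illustrative assumptions, not the authors' exact design.

```python
import torch
import torch.nn as nn

class QPNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.us_cnn = nn.Sequential(                 # 3D CNN over (time, H, W)
            nn.Conv3d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv3d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool3d(1), nn.Flatten(),   # -> (B, 32)
        )
        self.ecg_ann = nn.Sequential(                # ANN for a fixed-length ECG window
            nn.Linear(256, 64), nn.ReLU(),
            nn.Linear(64, 32), nn.ReLU(),
        )
        self.head = nn.Linear(32 + 32, 1)            # fused features -> QP logit

    def forward(self, us, ecg):                      # us: (B,1,T,H,W), ecg: (B,256)
        fused = torch.cat([self.us_cnn(us), self.ecg_ann(ecg)], dim=1)
        return self.head(fused)

logit = QPNet()(torch.randn(2, 1, 16, 64, 64), torch.randn(2, 256))
```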

End-to-end 2D/3D registration from pre-operative MRI to intra-operative fluoroscopy for orthopedic procedures.

Ku PC, Liu M, Grupp R, Harris A, Oni JK, Mears SC, Martin-Gomez A, Armand M

PubMed · May 30 2025
Soft tissue pathologies and bone defects are not easily visible in intra-operative fluoroscopic images; we therefore develop an end-to-end MRI-to-fluoroscopy registration framework aiming to enhance intra-operative visualization for surgeons during orthopedic procedures. The proposed framework uses deep learning to segment MRI scans and generate synthetic CT (sCT) volumes. These sCT volumes are then used to produce digitally reconstructed radiographs (DRRs), enabling 2D/3D registration with intra-operative fluoroscopic images. The framework's performance was validated through simulation and cadaver studies for core decompression (CD) surgery, focusing on the registration accuracy of the femoral and pelvic regions. The framework achieved a mean translational registration accuracy of 2.4 ± 1.0 mm and a rotational accuracy of 1.6 ± 0.8° for the femoral region in cadaver studies. The method successfully enabled intra-operative visualization of necrotic lesions that were not visible on conventional fluoroscopic images, marking a significant advancement in image guidance for femoral and pelvic surgeries. The MRI-to-fluoroscopy registration framework offers a novel approach to image guidance in orthopedic surgery, using MRI exclusively without the need for CT scans. This approach enhances the visualization of soft tissues and bone defects, reduces radiation exposure, and provides a safer, more effective alternative for intra-operative surgical guidance.
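The DRR step can be illustrated with a toy example: given an sCT volume in Hounsfield units, a radiograph-like image is obtained by integrating attenuation along rays. The NumPy sketch below uses a simple parallel-beam projection (an axis sum) with an assumed HU-to-attenuation mapping; the actual framework would use a perspective projector calibrated to the fluoroscope geometry.

```python
import numpy as np

def parallel_drr(sct_hu: np.ndarray, axis: int = 0) -> np.ndarray:
    """Project a CT volume (in HU) into a 2D radiograph-like image."""
    mu = np.clip((sct_hu + 1000.0) / 1000.0, 0.0, None)  # rough HU -> attenuation
    line_integral = mu.sum(axis=axis)                    # parallel-beam ray sums
    drr = 1.0 - np.exp(-0.02 * line_integral)            # Beer-Lambert style mapping
    return (drr - drr.min()) / (np.ptp(drr) + 1e-8)      # normalize to [0, 1]

# Stand-in volume; a real pipeline would pass the deep-learning-generated sCT here.
sct = np.random.randint(-1000, 1500, size=(64, 128, 128)).astype(float)
drr = parallel_drr(sct)
```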

Bidirectional Projection-Based Multi-Modal Fusion Transformer for Early Detection of Cerebral Palsy in Infants.

Qi K, Huang T, Jin C, Yang Y, Ying S, Sun J, Yang J

PubMed · May 30 2025
Periventricular white matter injury (PWMI) is the most frequent magnetic resonance imaging (MRI) finding in infants with cerebral palsy (CP). We aim to detect CP and identify subtle, sparse PWMI lesions in infants under two years of age with immature brain structures. Based on the observation that the responsible lesions are located within five target regions, we first construct a multi-modal dataset of 243 cases, including mask annotations of the five target regions delineating anatomical structures on T1-weighted imaging (T1WI), lesion masks on T2-weighted imaging (T2WI), and subject categories (CP or non-CP). We then develop a bidirectional projection-based multi-modal fusion transformer (BiP-MFT), which incorporates a Bidirectional Projection Fusion Module (BPFM) to integrate the features of the five target regions on T1WI images with the lesion features on T2WI images. Our BiP-MFT achieves subject-level classification accuracy of 0.90, specificity of 0.87, and sensitivity of 0.94, surpassing the best results of nine comparative methods by 0.10, 0.08, and 0.09 in classification accuracy, specificity, and sensitivity, respectively. Our BPFM outperforms eight compared feature fusion strategies using Transformer and U-Net backbones on our dataset. Ablation studies on the dataset annotations and model components confirm the effectiveness of our annotation method and the soundness of the model design. The proposed dataset and code are available at https://github.com/Kai-Qi/BiP-MFT.
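A generic way to realize bidirectional fusion between two modality-specific token streams is paired cross-attention, sketched below in PyTorch: T1WI region tokens attend to T2WI lesion tokens and vice versa before the two streams are merged. This is an illustrative stand-in, not the authors' exact BPFM.

```python
import torch
import torch.nn as nn

class BidirectionalFusion(nn.Module):
    def __init__(self, dim=128, heads=4):
        super().__init__()
        self.t1_to_t2 = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.t2_to_t1 = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.mix = nn.Linear(2 * dim, dim)

    def forward(self, t1_tokens, t2_tokens):   # (B, N1, dim), (B, N2, dim)
        a, _ = self.t1_to_t2(t1_tokens, t2_tokens, t2_tokens)  # T1 queries T2
        b, _ = self.t2_to_t1(t2_tokens, t1_tokens, t1_tokens)  # T2 queries T1
        fused = torch.cat([a.mean(1), b.mean(1)], dim=-1)      # pool and concat
        return self.mix(fused)                                 # (B, dim) fused feature

out = BidirectionalFusion()(torch.randn(2, 5, 128), torch.randn(2, 9, 128))
```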

Using AI to triage patients without clinically significant prostate cancer using biparametric MRI and PSA.

Grabke EP, Heming CAM, Hadari A, Finelli A, Ghai S, Lajkosz K, Taati B, Haider MA

PubMed · May 30 2025
To train and evaluate a machine learning triaging tool that identifies MRIs negative for clinically significant prostate cancer, and to compare it against non-MRI models. 2895 MRIs were collected from two sources (1630 internal, 1265 public) in this retrospective study. Risk models compared were: Prostate Cancer Prevention Trial Risk Calculator 2.0, Prostate Biopsy Collaborative Group Calculator, PSA density, U-Net segmentation, and U-Net combined with clinical parameters. The reference standard was histopathology or negative follow-up. Performance metrics were calculated by simulating a triaging workflow, compared to a radiologist interpreting all exams, on a test set of 465 patients. Sensitivity and specificity differences were assessed using the McNemar test. Differences in PPV and NPV were assessed using the Leisenring, Alonzo, and Pepe generalized score statistic. Equivalence test p-values were adjusted within each measure using Benjamini-Hochberg correction. Triaging using U-Net with clinical parameters reduced radiologist workload by 12.5%, with a sensitivity decrease from 93 to 90% (p = 0.023) and a specificity increase from 39 to 47% (p < 0.001). This simulated workload reduction was greater than triaging with risk calculators (3.2% and 1.3%, p < 0.001), and comparable to PSA density (8.4%, p = 0.071) and U-Net alone (11.6%, p = 0.762). Both U-Net triaging strategies increased PPV (+2.8%, p = 0.005, clinical; +2.2%, p = 0.020, non-clinical), unlike non-U-Net strategies (p > 0.05). NPV remained equivalent for all scenarios (p > 0.05). Clinically informed U-Net triaging correctly ruled out 20 (13.4%) radiologist false positives (12 PI-RADS 3, 8 PI-RADS 4). Of the eight (3.6%) false negatives, two were misclassified by the radiologist. No misclassified case was interpreted as PI-RADS 5. Prostate MRI triaging using machine learning could reduce radiologist workload by 12.5% with a 3% sensitivity decrease and an 8% specificity increase, outperforming triaging using non-imaging-based risk models. Further prospective validation is required.
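The paired comparison reported here (triage workflow vs. radiologist on the same patients) is the natural setting for McNemar's test, which uses only the discordant pairs. A minimal sketch with hypothetical counts:

```python
import numpy as np
from statsmodels.stats.contingency_tables import mcnemar

# 2x2 table of paired outcomes on the same patients (hypothetical counts):
# rows = radiologist correct/incorrect, cols = triage workflow correct/incorrect.
table = np.array([[400, 12],
                  [20,  33]])

result = mcnemar(table, exact=True)  # exact binomial test on the discordant cells
print(f"statistic={result.statistic}, p-value={result.pvalue:.3f}")
```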

Advantages of deep learning reconstruction algorithm in ultra-high-resolution CT for the diagnosis of pancreatic cystic neoplasm.

Sofue K, Ueno Y, Yabe S, Ueshima E, Yamaguchi T, Masuda A, Sakai A, Toyama H, Fukumoto T, Hori M, Murakami T

PubMed · May 30 2025
This study aimed to evaluate the image quality and clinical utility of a deep learning reconstruction (DLR) algorithm in ultra-high-resolution computed tomography (UHR-CT) for the diagnosis of pancreatic cystic neoplasms (PCNs). This retrospective study included 45 patients with PCNs between March 2020 and February 2022. Contrast-enhanced UHR-CT images were obtained and reconstructed using DLR and hybrid iterative reconstruction (IR). Image noise and contrast-to-noise ratio (CNR) were measured. Two radiologists assessed the diagnostic performance of the imaging findings associated with PCNs using a 5-point Likert scale. Diagnostic performance metrics, including sensitivity, specificity, and area under the receiver operating characteristic curve (AUROC), were calculated. Quantitative and qualitative features were compared between CT with DLR and with hybrid IR. Interobserver agreement for qualitative assessments was also analyzed. DLR significantly reduced image noise and increased CNR compared to hybrid IR for all objects (p < 0.001). Radiologists rated DLR images as superior in overall quality, lesion delineation, and vessel conspicuity (p < 0.001). DLR produced higher AUROC values for diagnostic imaging findings (ductal communication: 0.887‒0.938 vs. 0.816‒0.827; enhanced mural nodule: 0.843‒0.916 vs. 0.785‒0.801), although DLR did not directly improve sensitivity, specificity, or accuracy. Interobserver agreement for qualitative assessments was higher with DLR (κ = 0.69‒0.82 vs. 0.57‒0.73). DLR improved image quality and diagnostic performance by effectively reducing image noise and improving lesion conspicuity in the diagnosis of PCNs on UHR-CT, and it provided greater diagnostic confidence in the assessment of imaging findings associated with PCNs.
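The CNR comparison follows the usual ROI-based definition, |mean_lesion - mean_background| / SD_noise. The sketch below uses synthetic ROI samples (hypothetical values) to show why lower noise at equal contrast raises CNR, which is the pattern the study reports for DLR over hybrid IR.

```python
import numpy as np

def cnr(roi_lesion: np.ndarray, roi_background: np.ndarray) -> float:
    noise_sd = roi_background.std(ddof=1)   # noise estimated in a homogeneous ROI
    return abs(roi_lesion.mean() - roi_background.mean()) / noise_sd

rng = np.random.default_rng(0)
lesion_dlr, bg_dlr = rng.normal(60, 8, 500), rng.normal(40, 8, 500)    # lower noise
lesion_ir,  bg_ir  = rng.normal(60, 14, 500), rng.normal(40, 14, 500)  # noisier IR
print(f"CNR DLR={cnr(lesion_dlr, bg_dlr):.2f} vs hybrid IR={cnr(lesion_ir, bg_ir):.2f}")
```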

GLIMPSE: Generalized Locality for Scalable and Robust CT.

Khorashadizadeh A, Debarnot V, Liu T, Dokmanic I

PubMed · May 30 2025
Deep learning has become the state-of-the-art approach to medical tomographic imaging. A common approach is to feed the result of a simple inversion, for example the backprojection, to a multiscale convolutional neural network (CNN) that computes the final reconstruction. Despite good results on in-distribution test data, this often overfits certain large-scale structures and generalizes poorly to out-of-distribution (OOD) samples. Moreover, the memory and computational complexity of multiscale CNNs scale unfavorably with image resolution, making them impractical at realistic clinical resolutions. In this paper, we introduce GLIMPSE, a local coordinate-based neural network for computed tomography that reconstructs a pixel value by processing only the measurements associated with the neighborhood of that pixel. GLIMPSE significantly outperforms successful CNNs on OOD samples while achieving comparable or better performance on in-distribution test data, and it maintains a memory footprint almost independent of image resolution: 5 GB of memory suffices to train on 1024 × 1024 images, which is orders of magnitude less than CNNs require. GLIMPSE is fully differentiable and can be used plug-and-play in arbitrary deep learning architectures, enabling feats such as correcting miscalibrated projection orientations.
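The core idea, reconstructing one pixel from only the nearby sinogram samples, can be sketched compactly: for a pixel (x, y), the relevant measurements lie along the sinusoid s = x·cosθ + y·sinθ, so one gathers a small detector window around that curve at each angle and regresses the pixel value with an MLP. The PyTorch sketch below is an illustrative reading of that idea, not the paper's exact architecture.

```python
import torch
import torch.nn as nn

n_angles, n_det, k = 180, 256, 2                      # k: half-width of local window
thetas = torch.linspace(0, torch.pi, n_angles)
sino = torch.randn(n_angles, n_det)                   # stand-in for real measurements

def local_features(x, y):
    """Gather the 2k+1 detector bins around the pixel's sinusoid at each angle."""
    s = x * torch.cos(thetas) + y * torch.sin(thetas)    # (n_angles,), coords in [-1, 1]
    center = ((s + 1) / 2 * (n_det - 1)).round().long()  # map to detector bins
    idx = (center[:, None] + torch.arange(-k, k + 1)).clamp(0, n_det - 1)
    return sino.gather(1, idx).flatten()                 # (n_angles * (2k+1),)

mlp = nn.Sequential(nn.Linear(n_angles * (2 * k + 1), 256), nn.ReLU(),
                    nn.Linear(256, 1))                   # local-patch pixel regressor
pixel_value = mlp(local_features(torch.tensor(0.3), torch.tensor(-0.1)))
```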

Diagnostic Efficiency of an Artificial Intelligence-Based Technology in Dental Radiography.

Obrubov AA, Solovykh EA, Nadtochiy AG

PubMed · May 30 2025
We present the results of developing Dentomo, an artificial intelligence model based on two neural networks. The model includes a database and a knowledge base harmonized with SNOMED CT, which allow it to process and interpret cone beam computed tomography (CBCT) scans of the dental system: in particular, to identify and classify teeth and to identify CT signs of pathology and previous treatment. Based on these data, the artificial intelligence can draw conclusions, generate medical reports, systematize the data, and learn from the results. The diagnostic effectiveness of Dentomo was evaluated. The first results of the study demonstrate that this model based on neural networks and artificial intelligence is a valuable tool for analyzing CBCT scans in clinical practice and optimizing the dentist's workflow.
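The report-generation step, mapping per-tooth network findings to SNOMED CT concepts, can be sketched as a simple lookup from finding labels to coded terms. The finding names and codes below are illustrative placeholders, not Dentomo's actual knowledge base.

```python
# Hypothetical finding-to-concept table; codes are placeholders for illustration only.
FINDING_TO_SNOMED = {
    "caries":        ("80967001", "Dental caries (disorder)"),
    "missing_tooth": ("000000000", "Missing tooth (finding)"),
}

detections = [                       # hypothetical per-tooth network output
    {"tooth": "16", "finding": "caries"},
    {"tooth": "21", "finding": "missing_tooth"},
]

def to_report(dets):
    """Render coded findings as report lines."""
    lines = []
    for d in dets:
        code, term = FINDING_TO_SNOMED[d["finding"]]
        lines.append(f"Tooth {d['tooth']}: {term} (SNOMED CT {code})")
    return "\n".join(lines)

print(to_report(detections))
```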