
Automated Computer Vision Methods for Image Segmentation, Stereotactic Localization, and Functional Outcome Prediction of Basal Ganglia Hemorrhages.

Kashkoush A, Davison MA, Achey R, Gomes J, Rasmussen P, Kshettry VR, Moore N, Bain M

PubMed · May 30, 2025
Basal ganglia intracranial hemorrhage (bgICH) morphology is associated with postoperative functional outcomes. We hypothesized that bgICH spatial representation modeling could be automated for functional outcome prediction after minimally invasive surgical (MIS) evacuation. A training set of 678 computed tomography head and computed tomography angiography images from 63 patients was used to train key-point detection and instance segmentation convolutional neural network-based models for anatomic landmark identification and bgICH segmentation. Anatomic landmarks included the bilateral orbital rims at the globe's maximum diameter and the posterior-most aspect of the tentorial incisura, which were used to define a universal stereotactic reference frame across patients. The models were tested using volumetric computed tomography head/computed tomography angiography scans from 45 patients who underwent MIS bgICH evacuation with modified Rankin Scale scores recorded within one year after surgery. bgICH volumes were highly correlated (R2 = 0.95, P < .001) between manual (median 39 mL) and automatic (median 38 mL) segmentation methods, with an absolute median difference of 2 mL (IQR: 1-6 mL). Median localization accuracy (distance between automated and manually designated coordinate frames) was 4 mm (IQR: 3-6 mm). Landmark coordinates were highly correlated in the x- (medial-lateral), y- (anterior-posterior), and z- (rostral-caudal) axes for all three landmarks (R2 range = 0.95-0.99, P < .001 for all). Functional outcome (modified Rankin Scale 4-6) was predicted with similar performance using automated (area under the receiver operating characteristic curve = 0.81, 95% CI: 0.67-0.94) and manually (area under the receiver operating characteristic curve = 0.84, 95% CI: 0.72-0.96) constructed spatial representation models (P = .173). Computer vision models can accurately replicate manual bgICH segmentation and stereotactic localization, and can prognosticate functional outcomes after MIS bgICH evacuation.
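The three landmarks above anchor a patient-independent coordinate frame. As a minimal sketch of that idea (the origin and axis conventions here are assumptions for illustration, not the authors' published definition), an orthonormal frame can be built from the landmark coordinates and used to express hemorrhage locations across patients:

```python
import numpy as np

def reference_frame(orbit_left, orbit_right, incisura):
    """Build an orthonormal patient-specific frame from three landmarks.

    Landmarks are (x, y, z) points in scanner coordinates: the bilateral
    orbital rims at the globes' maximum diameter and the posterior-most
    tentorial incisura. The axis choices here are illustrative assumptions.
    """
    orbit_left, orbit_right, incisura = map(np.asarray, (orbit_left, orbit_right, incisura))
    origin = (orbit_left + orbit_right) / 2.0        # midpoint of orbital rims
    x_axis = orbit_right - orbit_left                # medial-lateral
    x_axis /= np.linalg.norm(x_axis)
    y_vec = incisura - origin                        # roughly anterior-posterior
    y_axis = y_vec - x_axis * np.dot(y_vec, x_axis)  # Gram-Schmidt orthogonalization
    y_axis /= np.linalg.norm(y_axis)
    z_axis = np.cross(x_axis, y_axis)                # rostral-caudal completes the frame
    R = np.stack([x_axis, y_axis, z_axis])           # rows = frame axes

    def to_frame(points):
        """Map scanner-space points into the landmark-defined frame."""
        return (np.atleast_2d(points) - origin) @ R.T

    return to_frame

# Example: express a hemorrhage centroid in the patient frame (coordinates in mm).
to_frame = reference_frame((-35.0, 60.0, 10.0), (35.0, 60.0, 10.0), (0.0, -40.0, 0.0))
print(to_frame((10.0, 20.0, 15.0)))
```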

GLIMPSE: Generalized Locality for Scalable and Robust CT.

Khorashadizadeh A, Debarnot V, Liu T, Dokmanic I

PubMed · May 30, 2025
Deep learning has become the state-of-the-art approach to medical tomographic imaging. A common approach is to feed the result of a simple inversion, for example the backprojection, to a multiscale convolutional neural network (CNN) which computes the final reconstruction. Despite good results on in-distribution test data, this often results in overfitting to certain large-scale structures and poor generalization on out-of-distribution (OOD) samples. Moreover, the memory and computational complexity of multiscale CNNs scale unfavorably with image resolution, making them impractical at realistic clinical resolutions. In this paper, we introduce GLIMPSE, a local coordinate-based neural network for computed tomography which reconstructs a pixel value by processing only the measurements associated with the neighborhood of that pixel. GLIMPSE significantly outperforms successful CNNs on OOD samples while achieving comparable or better performance on in-distribution test data, and it maintains a memory footprint almost independent of image resolution: 5 GB of memory suffices to train on 1024 × 1024 images, orders of magnitude less than CNNs require. GLIMPSE is fully differentiable and can be used plug-and-play in arbitrary deep learning architectures, enabling feats such as correcting miscalibrated projection orientations.
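A minimal sketch of the core idea, assuming details the abstract does not specify (neighborhood size, network shape): each pixel is reconstructed independently from a small sinogram patch plus its coordinate, so training memory scales with the pixel batch rather than the full image:

```python
import torch
import torch.nn as nn

class LocalPixelReconstructor(nn.Module):
    """Sketch of a GLIMPSE-style local reconstruction network (assumed details).

    For each target pixel, only the sinogram samples whose rays pass near that
    pixel are gathered and fed, together with the pixel coordinate, to a small
    MLP that regresses the pixel value. Neighborhood size and MLP shape are
    illustrative assumptions, not the paper's configuration.
    """
    def __init__(self, n_angles: int, k_neighbors: int = 9, hidden: int = 256):
        super().__init__()
        in_dim = n_angles * k_neighbors + 2          # local measurements + (x, y)
        self.mlp = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, local_measurements, pixel_xy):
        # local_measurements: (batch, n_angles, k_neighbors) sinogram patch per pixel
        # pixel_xy: (batch, 2) normalized pixel coordinates
        feats = torch.cat([local_measurements.flatten(1), pixel_xy], dim=1)
        return self.mlp(feats)                       # (batch, 1) reconstructed values

# Memory scales with the batch of pixels, not the image resolution, so high
# resolutions can be trained by sampling pixel batches.
model = LocalPixelReconstructor(n_angles=180)
pixels = model(torch.randn(4096, 180, 9), torch.rand(4096, 2))
print(pixels.shape)  # torch.Size([4096, 1])
```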

Advantages of deep learning reconstruction algorithm in ultra-high-resolution CT for the diagnosis of pancreatic cystic neoplasm.

Sofue K, Ueno Y, Yabe S, Ueshima E, Yamaguchi T, Masuda A, Sakai A, Toyama H, Fukumoto T, Hori M, Murakami T

PubMed · May 30, 2025
This study aimed to evaluate the image quality and clinical utility of a deep learning reconstruction (DLR) algorithm in ultra-high-resolution computed tomography (UHR-CT) for the diagnosis of pancreatic cystic neoplasms (PCNs). This retrospective study included 45 patients with PCNs between March 2020 and February 2022. Contrast-enhanced UHR-CT images were obtained and reconstructed using DLR and hybrid iterative reconstruction (IR). Image noise and contrast-to-noise ratio (CNR) were measured. Two radiologists assessed the diagnostic performance of the imaging findings associated with PCNs using a 5-point Likert scale. Diagnostic performance metrics, including sensitivity, specificity, and area under the receiver operating characteristic curve (AUROC), were calculated, and quantitative and qualitative features were compared between CT with DLR and hybrid IR. Interobserver agreement for qualitative assessments was also analyzed. DLR significantly reduced image noise and increased CNR compared with hybrid IR for all measured objects (p < 0.001). Radiologists rated DLR images as superior in overall quality, lesion delineation, and vessel conspicuity (p < 0.001). DLR produced higher AUROC values for diagnostic imaging findings (ductal communication: 0.887-0.938 vs. 0.816-0.827; enhanced mural nodule: 0.843-0.916 vs. 0.785-0.801), although DLR did not directly improve sensitivity, specificity, or accuracy. Interobserver agreement for qualitative assessments was higher with DLR (κ = 0.69-0.82 vs. 0.57-0.73). DLR improved image quality and diagnostic performance by effectively reducing image noise and improving lesion conspicuity in the diagnosis of PCNs on UHR-CT, and it afforded greater diagnostic confidence in the assessment of imaging findings associated with PCNs.
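For reference, the image noise and CNR compared here are simple ROI statistics. A sketch under a common definition (the study's exact ROI placement and noise definition are not specified):

```python
import numpy as np

def roi_stats(image, mask):
    """Mean and standard deviation of HU values inside a boolean ROI mask."""
    vals = image[mask]
    return vals.mean(), vals.std()

def contrast_to_noise_ratio(image, lesion_mask, background_mask):
    """CNR = |mean_lesion - mean_background| / noise_background (one common form)."""
    m_les, _ = roi_stats(image, lesion_mask)
    m_bkg, sd_bkg = roi_stats(image, background_mask)
    return abs(m_les - m_bkg) / sd_bkg

# Synthetic example: DLR-style denoising lowers the background SD (image
# noise) and hence raises the CNR.
rng = np.random.default_rng(0)
img = rng.normal(40.0, 12.0, size=(256, 256))        # noisy background (HU)
img[100:140, 100:140] += 60.0                         # embedded "lesion"
lesion = np.zeros_like(img, dtype=bool); lesion[100:140, 100:140] = True
background = np.zeros_like(img, dtype=bool); background[10:50, 10:50] = True
print(f"CNR = {contrast_to_noise_ratio(img, lesion, background):.2f}")
```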

Using AI to triage patients without clinically significant prostate cancer using biparametric MRI and PSA.

Grabke EP, Heming CAM, Hadari A, Finelli A, Ghai S, Lajkosz K, Taati B, Haider MA

PubMed · May 30, 2025
To train and evaluate a machine learning triaging tool that identifies MRIs negative for clinically significant prostate cancer, and to compare it against non-MRI models. In this retrospective study, 2895 MRIs were collected from two sources (1630 internal, 1265 public). The risk models compared were: Prostate Cancer Prevention Trial Risk Calculator 2.0, Prostate Biopsy Collaborative Group Calculator, PSA density, U-Net segmentation, and U-Net combined with clinical parameters. The reference standard was histopathology or negative follow-up. Performance metrics were calculated by simulating a triaging workflow, compared against a radiologist interpreting all exams, on a test set of 465 patients. Sensitivity and specificity differences were assessed using the McNemar test. Differences in PPV and NPV were assessed using the Leisenring, Alonzo, and Pepe generalized score statistic. Equivalence test p-values were adjusted within each measure using Benjamini-Hochberg correction. Triaging using U-Net with clinical parameters reduced radiologist workload by 12.5%, with a sensitivity decrease from 93 to 90% (p = 0.023) and a specificity increase from 39 to 47% (p < 0.001). This simulated workload reduction was greater than triaging with the risk calculators (3.2% and 1.3%, p < 0.001), and comparable to PSA density (8.4%, p = 0.071) and U-Net alone (11.6%, p = 0.762). Both U-Net triaging strategies increased PPV (+2.8%, p = 0.005, clinical; +2.2%, p = 0.020, nonclinical), unlike the non-U-Net strategies (p > 0.05). NPV remained equivalent for all scenarios (p > 0.05). Clinically informed U-Net triaging correctly ruled out 20 (13.4%) radiologist false positives (12 PI-RADS 3, 8 PI-RADS 4). Of the eight (3.6%) false negatives, two were misclassified by the radiologist. No misclassified case was interpreted as PI-RADS 5. Prostate MRI triaging using machine learning could reduce radiologist workload by 12.5% with a 3% sensitivity decrease and an 8% specificity increase, outperforming triaging using non-imaging-based risk models. Further prospective validation is required.
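A hedged sketch of the simulated triaging workflow and the paired McNemar comparison (boolean arrays and the synthetic data are illustrative; the study's exact pipeline is not reproduced here):

```python
import numpy as np
from statsmodels.stats.contingency_tables import mcnemar

def simulate_triage(ai_negative, radiologist_pos, truth_pos):
    """Simulate a triage workflow: AI-negative exams are ruled out without
    radiologist review; all remaining exams receive the radiologist's call.
    Arrays are boolean, one entry per exam."""
    triaged_call = np.where(ai_negative, False, radiologist_pos)
    workload_reduction = ai_negative.mean()
    sens = triaged_call[truth_pos].mean()
    spec = (~triaged_call[~truth_pos]).mean()
    return triaged_call, workload_reduction, sens, spec

def mcnemar_pvalue(call_a, call_b, truth_pos):
    """Paired McNemar test on sensitivity: discordant calls among true positives."""
    a, b = call_a[truth_pos], call_b[truth_pos]
    table = [[np.sum(a & b), np.sum(a & ~b)],
             [np.sum(~a & b), np.sum(~a & ~b)]]
    return mcnemar(table, exact=True).pvalue

# Synthetic demo on 465 exams.
rng = np.random.default_rng(1)
truth = rng.random(465) < 0.3
rad = truth | (rng.random(465) < 0.4)     # sensitive but unspecific reader
ai_neg = rng.random(465) < 0.2            # exams the AI would rule out
call, wl, sens, spec = simulate_triage(ai_neg, rad, truth)
print(f"workload -{wl:.1%}, sens {sens:.2f}, spec {spec:.2f}, "
      f"McNemar p={mcnemar_pvalue(rad, call, truth):.3g}")
```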

Bidirectional Projection-Based Multi-Modal Fusion Transformer for Early Detection of Cerebral Palsy in Infants.

Qi K, Huang T, Jin C, Yang Y, Ying S, Sun J, Yang J

PubMed · May 30, 2025
Periventricular white matter injury (PWMI) is the most frequent magnetic resonance imaging (MRI) finding in infants with cerebral palsy (CP). We aim to detect CP and identify subtle, sparse PWMI lesions in infants under two years of age with immature brain structures. Because the responsible lesions are characteristically located within five target regions, we first construct a multi-modal dataset of 243 cases that includes mask annotations of the five target regions delineating anatomical structures on T1-weighted imaging (T1WI), lesion masks on T2-weighted imaging (T2WI), and categories (CP or non-CP). Furthermore, we develop a bidirectional projection-based multi-modal fusion transformer (BiP-MFT), incorporating a Bidirectional Projection Fusion Module (BPFM) for integrating features between the five target regions on T1WI images and lesions on T2WI images. Our BiP-MFT achieves subject-level classification accuracy of 0.90, specificity of 0.87, and sensitivity of 0.94, surpassing the best results of nine comparative methods by 0.10, 0.08, and 0.09 in classification accuracy, specificity, and sensitivity, respectively. Our BPFM outperforms eight compared feature fusion strategies using Transformer and U-Net backbones on our dataset. Ablation studies on the dataset annotations and model components confirm the effectiveness of our annotation method and the soundness of the model design. The proposed dataset and code are available at https://github.com/Kai-Qi/BiP-MFT.
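The abstract does not detail the BPFM internals; as a rough illustration of bidirectional cross-modal fusion, two cross-attention passes between T1WI region tokens and T2WI lesion tokens might look like this (dimensions and the attention formulation are assumptions, not the paper's module):

```python
import torch
import torch.nn as nn

class BidirectionalFusion(nn.Module):
    """Illustrative sketch of bidirectional cross-modal fusion.

    The actual BPFM in BiP-MFT projects features between T1WI region features
    and T2WI lesion features; here we approximate the idea with two
    cross-attention passes (T1->T2 and T2->T1).
    """
    def __init__(self, dim: int = 128, heads: int = 4):
        super().__init__()
        self.t1_from_t2 = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.t2_from_t1 = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm1 = nn.LayerNorm(dim)
        self.norm2 = nn.LayerNorm(dim)

    def forward(self, t1_tokens, t2_tokens):
        # t1_tokens: (B, N1, dim) anatomical-region features from T1WI
        # t2_tokens: (B, N2, dim) lesion features from T2WI
        t1_up, _ = self.t1_from_t2(t1_tokens, t2_tokens, t2_tokens)
        t2_up, _ = self.t2_from_t1(t2_tokens, t1_tokens, t1_tokens)
        return self.norm1(t1_tokens + t1_up), self.norm2(t2_tokens + t2_up)

fusion = BidirectionalFusion()
t1, t2 = fusion(torch.randn(2, 5, 128), torch.randn(2, 32, 128))
print(t1.shape, t2.shape)  # (2, 5, 128) (2, 32, 128)
```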

Federated Foundation Model for GI Endoscopy Images

Alina Devkota, Annahita Amireskandari, Joel Palko, Shyam Thakkar, Donald Adjeroh, Xiajun Jiang, Binod Bhattarai, Prashnna K. Gyawali

arXiv preprint · May 30, 2025
Gastrointestinal (GI) endoscopy is essential for identifying GI tract abnormalities, detecting diseases in their early stages, and improving patient outcomes. Although deep learning has shown success in supporting GI diagnostics and decision-making, these models require curated datasets with labels that are expensive to acquire. Foundation models offer a promising solution by learning general-purpose representations that can be fine-tuned for specific tasks, overcoming data scarcity. Developing foundation models for medical imaging holds significant potential, but the sensitive and protected nature of medical data presents unique challenges: foundation model training typically requires extensive datasets, and while hospitals generate large volumes of data, privacy restrictions prevent direct data sharing, making such training infeasible in most scenarios. In this work, we propose a federated learning (FL) framework for training foundation models for gastroendoscopy imaging, enabling data to remain within local hospital environments while contributing to a shared model. We explore several established FL algorithms, assessing their suitability for training foundation models without relying on task-specific labels, and conduct experiments in both homogeneous and heterogeneous settings. We evaluate the trained foundation model on three critical downstream tasks--classification, detection, and segmentation--and demonstrate improved performance across all tasks, highlighting the effectiveness of our approach in a federated, privacy-preserving setting.
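A minimal sketch of one FL baseline such a framework might build on, plain FedAvg (the paper evaluates several established FL algorithms; all names and shapes here are illustrative):

```python
import copy
import torch

def federated_average(global_model, client_states, client_sizes):
    """One FedAvg round (sketch): size-weighted average of client weights.

    Plain FedAvg shown here is the simplest baseline for federated training;
    the paper's algorithm choices may differ.
    """
    total = sum(client_sizes)
    avg_state = copy.deepcopy(client_states[0])
    for key in avg_state:
        weighted = torch.stack([
            state[key].float() * (n / total)
            for state, n in zip(client_states, client_sizes)
        ]).sum(dim=0)
        avg_state[key] = weighted.to(avg_state[key].dtype)
    global_model.load_state_dict(avg_state)
    return global_model

# Each hospital trains locally (e.g., with a self-supervised objective) and
# shares only weights, never images; the server aggregates and redistributes.
server_model = torch.nn.Linear(4, 2)
local_states = [torch.nn.Linear(4, 2).state_dict() for _ in range(3)]
federated_average(server_model, local_states, client_sizes=[100, 250, 150])
```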

Assessing the value of artificial intelligence-based image analysis for pre-operative surgical planning of neck dissections and iENE detection in head and neck cancer patients.

Schmidl B, Hoch CC, Walter R, Wirth M, Wollenberg B, Hussain T

PubMed · May 30, 2025
Accurate preoperative detection and analysis of lymph node metastasis (LNM) in head and neck squamous cell carcinoma (HNSCC) is essential for the planning and execution of a neck dissection and may directly affect patient morbidity and prognosis. Additionally, predicting extranodal extension (ENE) from preoperative imaging could be particularly valuable in oropharyngeal HPV-positive squamous cell carcinoma, enabling more accurate patient counseling and supporting the decision to favor primary chemoradiotherapy over immediate neck dissection when appropriate. Currently, radiological images are evaluated by radiologists and head and neck oncologists, and automated image interpretation is not part of the standard of care. This exploratory study therefore evaluated preoperative image interpretation by artificial intelligence (AI), using the large language model (LLM) ChatGPT-4V, on neck computed tomography (CT) images of HNSCC patients with cervical LNM and corresponding images without LNM. The objectives were, first, to assess preoperative rater accuracy by comparing clinician assessments of imaging-detected extranodal extension (iENE) and of the required extent of neck dissection against AI predictions, and second, to evaluate pathology-based accuracy by comparing AI predictions against final histopathological outcomes. Forty-five preoperative CT scans were retrospectively analyzed: 15 cases in which a selective neck dissection (sND) was performed, 15 cases with ensuing radical neck dissection (mrND), and 15 cases without LNM (sND). Of note, image analysis was based on three single images provided to both ChatGPT-4V and the head and neck surgeons acting as reviewers. Final pathological characteristics were available in all cases, as all HNSCC patients had undergone surgery. ChatGPT-4V was tasked with assessing the extent of LNM on the preoperative CT scans, recommending the extent of neck dissection, and detecting iENE. Its diagnostic performance was reviewed independently by two head and neck surgeons, and its accuracy, sensitivity, and specificity were assessed. ChatGPT-4V reached a sensitivity of 100% and a specificity of 34.09% in identifying the need for a radical neck dissection based on neck CT images; for detecting iENE, sensitivity and specificity were 100% and 34.15%, respectively. Both human reviewers achieved higher specificity. Notably, ChatGPT-4V also recommended an mrND and detected iENE on CT images without any cervical LNM. In this exploratory study of 45 preoperative neck CT scans, ChatGPT-4V substantially overestimated the degree and severity of lymph node metastasis in head and neck cancer. While these results suggest that ChatGPT-4V does not yet add value for surgical planning in head and neck cancer, its unparalleled speed of analysis and well-founded reasoning suggest that AI tools may offer added value in the future.
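The reported evaluation reduces to confusion-matrix arithmetic; a small sketch with synthetic placeholder labels shows how an overly positive caller reproduces the reported pattern of perfect sensitivity with low specificity:

```python
import numpy as np

def binary_metrics(pred_pos, truth_pos):
    """Accuracy, sensitivity, and specificity from boolean arrays.

    Illustrates the evaluation above: model recommendations (e.g., "mrND
    needed" or "iENE present") compared against histopathological ground
    truth. The arrays below are synthetic placeholders, not study data.
    """
    pred_pos, truth_pos = np.asarray(pred_pos), np.asarray(truth_pos)
    tp = np.sum(pred_pos & truth_pos)
    tn = np.sum(~pred_pos & ~truth_pos)
    fp = np.sum(pred_pos & ~truth_pos)
    fn = np.sum(~pred_pos & truth_pos)
    return {
        "accuracy": (tp + tn) / len(truth_pos),
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
    }

# A caller that flags most cases positive: perfect sensitivity, poor specificity.
truth = np.array([True] * 15 + [False] * 30)
pred = np.array([True] * 15 + [True] * 20 + [False] * 10)
print(binary_metrics(pred, truth))   # sensitivity 1.0, specificity ~0.33
```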

Deploying a novel deep learning framework for segmentation of specific anatomical structures on cone-beam CT.

Yuce F, Buyuk C, Bilgir E, Çelik Ö, Bayrakdar İŞ

PubMed · May 30, 2025
Cone-beam computed tomography (CBCT) imaging plays a crucial role in dentistry, and automatic prediction of anatomical structures on CBCT images could enhance diagnostic and planning procedures. This study aims to segment anatomical structures automatically on CBCT images using a deep learning algorithm. CBCT images from 70 patients were analyzed. Anatomical structures were annotated by two dentomaxillofacial radiologists using a regional segmentation tool within an annotation software package. Each volumetric dataset comprised 405 slices, with the relevant anatomical structures marked in each slice. The 70 DICOM images were converted to NIfTI format, with seven reserved for testing and the remaining 63 used for training. Training used nnUNetv2 with an initial learning rate of 0.01, decreasing by 0.00001 at each epoch, for 1000 epochs. Statistical analysis included accuracy, Dice score, precision, and recall. The segmentation model achieved an accuracy of 0.99 for the nasal fossa, maxillary sinus, nasopalatine canal, mandibular canal, foramen mentale, and foramen mandible, with corresponding Dice scores of 0.85, 0.98, 0.79, 0.73, 0.78, and 0.74, respectively. Precision values ranged from 0.73 to 0.98. Maxillary sinus segmentation exhibited the highest performance, while mandibular canal segmentation showed the lowest. The results demonstrate high accuracy and precision across most structures, with varying Dice scores reflecting the consistency of segmentation. Overall, our segmentation model exhibits robust performance in delineating anatomical features in CBCT images, with promising potential applications in dental diagnostics and treatment planning.
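The reported Dice, precision, and recall are standard overlap metrics on binary masks; a self-contained sketch:

```python
import numpy as np

def segmentation_metrics(pred, gt):
    """Dice, precision, and recall for binary segmentation masks.

    Masks are boolean arrays of identical shape (any dimensionality,
    e.g., a CBCT volume), per segmented structure.
    """
    pred, gt = np.asarray(pred, bool), np.asarray(gt, bool)
    intersection = np.sum(pred & gt)
    dice = 2 * intersection / (pred.sum() + gt.sum())
    precision = intersection / pred.sum()
    recall = intersection / gt.sum()
    return dice, precision, recall

# Toy volume: a predicted box slightly offset from the ground-truth box.
gt = np.zeros((64, 64, 64), bool); gt[20:40, 20:40, 20:40] = True
pred = np.zeros_like(gt); pred[22:42, 20:40, 20:40] = True
print("Dice %.3f, precision %.3f, recall %.3f" % segmentation_metrics(pred, gt))
```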

Artificial Intelligence for Assessment of Digital Mammography Positioning Reveals Persistent Challenges.

Margolies LR, Spear GG, Payne JI, Iles SE, Abdolell M

PubMed · May 30, 2025
Mammographic breast cancer detection depends on high-quality positioning, which is traditionally assessed and monitored subjectively. This study used artificial intelligence (AI) to evaluate positioning on digital screening mammograms and to identify and quantify unmet mammography positioning quality (MPQ) criteria. Data were collected within an IRB-approved collaboration; in total, 126 367 digital mammography studies (553 339 images) were processed. Unmet MPQ criteria, including exaggeration, portion cutoff, posterior tissue missing, nipple not in profile, too high on the image receptor, inadequate pectoralis length, sagging, and posterior nipple line (PNL) length difference, were evaluated using MPQ AI algorithms. The occurrence and rank order of unmet MPQ criteria were compared between the two health systems. Altogether, 163 759 and 219 785 unmet MPQ criteria were identified at the two health systems, respectively. Neither the rank order nor the probability distribution of the unmet MPQ criteria differed significantly between health systems (P = .844 and P = .92, respectively). The three most common unmet MPQ criteria were short PNL length on the craniocaudal (CC) view, inadequate pectoralis muscle, and excessive exaggeration on the CC view. The percentages of unmet positioning criteria out of the total potential unmet positioning criteria were 8.4% (163 759/1 949 922) at health system 1 and 7.3% (219 785/3 030 129) at health system 2. AI identified a similar distribution of unmet MPQ criteria in the two health systems' daily work. Knowledge of commonly unmet MPQ criteria can facilitate improvement of mammography quality through tailored education strategies.
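A sketch of how two systems' unmet-criteria distributions might be compared (the study's exact statistical procedure is not given here; Spearman rank correlation and a chi-square test are stand-ins, and the per-criterion counts are hypothetical, chosen only to sum to the reported totals):

```python
import numpy as np
from scipy import stats

# Hypothetical per-criterion unmet counts for the eight MPQ criteria at the
# two health systems. Only the column totals (163,759 and 219,785) come from
# the study; the per-criterion split is invented for illustration.
criteria = ["exaggeration", "portion cutoff", "posterior tissue missing",
            "nipple not in profile", "too high on receptor",
            "inadequate pectoralis", "sagging", "PNL length difference"]
system1 = np.array([30000, 12000, 15000, 9000, 6000, 45000, 8000, 38759])
system2 = np.array([41000, 16000, 20000, 12500, 8000, 60000, 11000, 51285])

# Agreement in the rank order of criteria between the systems.
rho, p_rank = stats.spearmanr(system1, system2)

# Agreement of the probability distributions over criteria.
chi2, p_dist, dof, expected = stats.chi2_contingency(np.stack([system1, system2]))

print(f"most frequent unmet criterion: {criteria[int(np.argmax(system1))]}")
print(f"rank correlation rho={rho:.2f} (p={p_rank:.3f}); distribution test p={p_dist:.3g}")
```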

A Study on Predicting the Efficacy of Posterior Lumbar Interbody Fusion Surgery Using a Deep Learning Radiomics Model.

Fang L, Pan Y, Zheng H, Li F, Zhang W, Liu J, Zhou Q

PubMed · May 30, 2025
This study seeks to develop a combined model integrating clinical data, radiomics, and deep learning (DL) features for predicting the efficacy of posterior lumbar interbody fusion (PLIF) surgery. A retrospective review was conducted of 461 patients who underwent PLIF for degenerative lumbar disease, partitioned into a training set (n=368) and a test set (n=93) in an 8:2 ratio. Clinical, radiomics, and DL models were constructed using logistic regression and random forest, and a combined model was established by integrating the three. All radiomics and DL features were extracted from sagittal T2-weighted images using 3D Slicer software, and the least absolute shrinkage and selection operator (LASSO) method selected the optimal radiomics and DL features for model building. In addition to analyzing the original region of interest (ROI), we expanded the ROI mask by varying margins to determine the optimal ROI. Model performance was evaluated using the receiver operating characteristic (ROC) curve and the area under the ROC curve (AUC), and differences in AUC were compared with the DeLong test. Among the clinical characteristics, patient age, body weight, and preoperative intervertebral distance at the surgical segment were risk factors affecting the fusion outcome. The radiomics model based on MRI with a 10 mm mask expansion showed excellent performance (training set AUC = 0.814, 95% CI: 0.761-0.866; test set AUC = 0.749, 95% CI: 0.631-0.866). Among the single models, the DL model had the best diagnostic performance, with AUC values of 0.995 (95% CI: 0.991-0.999) for the training set and 0.803 (95% CI: 0.705-0.902) for the test set. The combined model of clinical, radiomics, and DL features had the best overall performance, with AUC values of 0.993 (95% CI: 0.987-0.999) for the training set and 0.866 (95% CI: 0.778-0.955) for the test set. The proposed clinical-radiomics-DL combined model can effectively predict the postoperative efficacy of PLIF surgery and has good clinical applicability.
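A hedged sketch of one common way to assemble such a combined model: LASSO-selected radiomics/DL features feed single-modality classifiers whose predicted probabilities are stacked by a logistic regression (all names, shapes, and data below are illustrative, not the study's pipeline):

```python
import numpy as np
from sklearn.linear_model import LassoCV, LogisticRegression
from sklearn.ensemble import RandomForestClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.metrics import roc_auc_score

# Synthetic stand-in data with the study's training-set size.
rng = np.random.default_rng(0)
n_train, n_feat = 368, 120
X_rad = rng.normal(size=(n_train, n_feat))   # radiomics/DL features
X_clin = rng.normal(size=(n_train, 3))       # e.g., age, weight, disc height
y = (X_rad[:, 0] + X_clin[:, 0] + rng.normal(size=n_train)) > 0

# LASSO-based feature selection on the radiomics block.
lasso = make_pipeline(StandardScaler(), LassoCV(cv=5)).fit(X_rad, y)
keep = lasso[-1].coef_ != 0
X_rad_sel = X_rad[:, keep] if keep.any() else X_rad

# Single-modality models.
clin_model = LogisticRegression().fit(X_clin, y)
rad_model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_rad_sel, y)

# Combined model stacks the two probability outputs.
P = np.column_stack([clin_model.predict_proba(X_clin)[:, 1],
                     rad_model.predict_proba(X_rad_sel)[:, 1]])
combined = LogisticRegression().fit(P, y)
print("training AUC:", roc_auc_score(y, combined.predict_proba(P)[:, 1]))
```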