Page 25 of 46453 results

Comparative analysis of natural language processing methodologies for classifying computed tomography enterography reports in Crohn's disease patients.

Dai J, Kim MY, Sutton RT, Mitchell JR, Goebel R, Baumgart DC

PubMed · May 30, 2025
Imaging is crucial for assessing disease extent, activity, and outcomes in inflammatory bowel disease (IBD). Artificial intelligence (AI) image interpretation requires automated exploitation of studies at scale as an initial step. Here we evaluate natural language processing (NLP) for classifying Crohn's disease (CD) on CTE. From our population-representative IBD registry, CTE reports of a sample of CD patients (male: 44.6%, median age: 50, IQR 37-60) and controls (n = 981 each) were extracted and split into training (n = 1568), development (n = 196), and testing (n = 198) datasets, each with around 200 words per report and balanced label counts. Predictive classification was evaluated with CNN, Bi-LSTM, BERT-110M, LLaMA-3.3-70B-Instruct, and DeepSeek-R1-Distill-LLaMA-70B models. Our custom IBDBERT, fine-tuned on expert IBD knowledge (ACG, AGA, and ECCO guidelines), outperformed rule- and rationale-extraction-based classifiers (accuracy 88.6% with pre-tuning learning rate 0.00001, AUC 0.945) in predictive performance, while LLaMA, but not DeepSeek, achieved overall superior results (accuracy 91.2% vs. 88.9%, F1 0.907 vs. 0.874).
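The abstract compares learned classifiers against rule-based baselines for labeling CTE reports. As a minimal sketch of the rule-based end of that spectrum (the keyword list below is illustrative, not taken from the paper), a report can be flagged by matching classic CD imaging findings:

```python
# Hedged sketch of a rule-based CTE report classifier of the kind the
# abstract compares against. The finding keywords are our illustrative
# choices, not the paper's actual rule set.
CD_FINDINGS = [
    "mural thickening", "mural hyperenhancement", "stricture",
    "fistula", "comb sign", "mesenteric fat stranding",
]

def classify_report(text: str) -> str:
    """Label a CTE report 'CD' if any illustrative finding keyword appears."""
    lower = text.lower()
    return "CD" if any(k in lower for k in CD_FINDINGS) else "control"
```

Such baselines fail on negated or hedged findings ("no evidence of stricture"), which is one reason the fine-tuned transformer models in the study outperform them.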

Automated Computer Vision Methods for Image Segmentation, Stereotactic Localization, and Functional Outcome Prediction of Basal Ganglia Hemorrhages.

Kashkoush A, Davison MA, Achey R, Gomes J, Rasmussen P, Kshettry VR, Moore N, Bain M

PubMed · May 30, 2025
Basal ganglia intracranial hemorrhage (bgICH) morphology is associated with postoperative functional outcomes. We hypothesized that bgICH spatial representation modeling could be automated for functional outcome prediction after minimally invasive surgical (MIS) evacuation. A training set of 678 computed tomography head and computed tomography angiography images from 63 patients was used to train key-point detection and instance segmentation convolutional neural network-based models for anatomic landmark identification and bgICH segmentation. Anatomic landmarks included the bilateral orbital rims at the globe's maximum diameter and the posterior-most aspect of the tentorial incisura, which were used to define a universal stereotactic reference frame across patients. Convolutional neural network models were tested using volumetric computed tomography head/computed tomography angiography scans from 45 patients who underwent MIS bgICH evacuation with modified Rankin Scale scores recorded within one year after surgery. bgICH volumes were highly correlated (R2 = 0.95, P < .001) between manual (median 39 mL) and automatic (median 38 mL) segmentation methods. The absolute median difference between groups was 2 mL (IQR: 1-6 mL). Median localization accuracy (distance between automated and manually designated coordinate frames) was 4 mm (IQR: 3-6). Landmark coordinates were highly correlated in the x- (medial-lateral), y- (anterior-posterior), and z-axes (rostral-caudal) for all 3 landmarks (R2 range = 0.95-0.99, P < .001 for all). Functional outcome (modified Rankin Scale 4-6) was predicted with similar model performance using automated (area under the receiver operating characteristic curve = 0.81, 95% CI: 0.67-0.94) and manually (area under the receiver operating characteristic curve = 0.84, 95% CI: 0.72-0.96) constructed spatial representation models (P = .173).
Computer vision models can accurately replicate manual bgICH segmentation and stereotactic localization, and can prognosticate functional outcomes after MIS bgICH evacuation.
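Building a patient-specific stereotactic frame from the three landmarks the study uses (two orbital rims and the tentorial incisura) amounts to standard vector geometry. A minimal sketch, with our own choice of axis conventions (origin at the inter-rim midpoint; the paper may orient its frame differently):

```python
import math

def _sub(a, b):
    return [a[i] - b[i] for i in range(3)]

def _cross(a, b):
    return [a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0]]

def _unit(v):
    n = math.sqrt(sum(x * x for x in v))
    return [x / n for x in v]

def reference_frame(rim_left, rim_right, incisura):
    """Build an orthonormal frame from three landmark points (mm):
    origin = midpoint of the orbital rims, x = medial-lateral (rim to rim),
    y = anterior-posterior (toward the incisura, orthogonalized against x),
    z = their cross product (rostral-caudal). Axis conventions are ours."""
    origin = [(rim_left[i] + rim_right[i]) / 2 for i in range(3)]
    x = _unit(_sub(rim_right, rim_left))
    y0 = _sub(incisura, origin)
    d = sum(x[i] * y0[i] for i in range(3))       # component of y0 along x
    y = _unit([y0[i] - d * x[i] for i in range(3)])  # Gram-Schmidt step
    z = _cross(x, y)
    return origin, x, y, z
```

Expressing every hemorrhage in such a frame is what lets spatial representations be compared across patients, and the 4 mm median localization error reported above is the frame-to-frame distance between automated and manual landmark placements.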

Diagnostic Efficiency of an Artificial Intelligence-Based Technology in Dental Radiography.

Obrubov AA, Solovykh EA, Nadtochiy AG

PubMed · May 30, 2025
We present the results of developing Dentomo, an artificial intelligence model based on two neural networks. The model includes a database and a knowledge base harmonized with SNOMED CT that allow processing and interpreting the results of cone beam computed tomography (CBCT) scans of the dental system, in particular identifying and classifying teeth and identifying CT signs of pathology and previous treatments. Based on these data, the artificial intelligence can draw conclusions, generate medical reports, systematize the data, and learn from the results. The diagnostic effectiveness of Dentomo was evaluated. The first results of the study demonstrate that the model based on neural networks and artificial intelligence is a valuable tool for analyzing CBCT scans in clinical practice and optimizing the dentist's workflow.

GLIMPSE: Generalized Locality for Scalable and Robust CT.

Khorashadizadeh A, Debarnot V, Liu T, Dokmanic I

PubMed · May 30, 2025
Deep learning has become the state-of-the-art approach to medical tomographic imaging. A common approach is to feed the result of a simple inversion, for example the backprojection, to a multiscale convolutional neural network (CNN) that computes the final reconstruction. Despite good results on in-distribution test data, this often results in overfitting certain large-scale structures and poor generalization on out-of-distribution (OOD) samples. Moreover, the memory and computational complexity of multiscale CNNs scale unfavorably with image resolution, making them impractical at realistic clinical resolutions. In this paper, we introduce GLIMPSE, a local coordinate-based neural network for computed tomography which reconstructs a pixel value by processing only the measurements associated with the neighborhood of that pixel. GLIMPSE significantly outperforms successful CNNs on OOD samples, while achieving comparable or better performance on in-distribution test data and maintaining a memory footprint almost independent of image resolution; 5 GB of memory suffices to train on 1024 × 1024 images, orders of magnitude less than multiscale CNNs require. GLIMPSE is fully differentiable and can be used plug-and-play in arbitrary deep learning architectures, enabling feats such as correcting miscalibrated projection orientations.
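GLIMPSE itself is a trained coordinate-based network; the sketch below only illustrates the core geometric idea of gathering the measurements associated with one pixel's neighborhood. We assume a parallel-beam geometry with unit detector spacing and the detector centered at bin n_det/2 (these conventions are ours, not the paper's):

```python
import math

def local_measurements(sino, angles, x, y, half_width=2):
    """For each projection angle, gather the sinogram samples in a small
    window around the detector bin geometrically associated with pixel
    (x, y). In parallel-beam CT the ray through (x, y) at angle theta hits
    the detector at t = x*cos(theta) + y*sin(theta).
    sino: list of rows, one per angle; out-of-range bins are zero-padded."""
    n_det = len(sino[0])
    feats = []
    for row, theta in zip(sino, angles):
        t = x * math.cos(theta) + y * math.sin(theta)  # detector coordinate
        c = int(round(t + n_det / 2))                  # nearest bin index
        for k in range(c - half_width, c + half_width + 1):
            feats.append(row[k] if 0 <= k < n_det else 0.0)
    return feats
```

In GLIMPSE a small network maps such a local feature vector (plus the pixel coordinate) to the reconstructed pixel value, which is why its memory footprint is nearly independent of image resolution: no full-image feature maps are ever materialized.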

Advantages of deep learning reconstruction algorithm in ultra-high-resolution CT for the diagnosis of pancreatic cystic neoplasm.

Sofue K, Ueno Y, Yabe S, Ueshima E, Yamaguchi T, Masuda A, Sakai A, Toyama H, Fukumoto T, Hori M, Murakami T

PubMed · May 30, 2025
This study aimed to evaluate the image quality and clinical utility of a deep learning reconstruction (DLR) algorithm in ultra-high-resolution computed tomography (UHR-CT) for the diagnosis of pancreatic cystic neoplasms (PCNs). This retrospective study included 45 patients with PCNs between March 2020 and February 2022. Contrast-enhanced UHR-CT images were obtained and reconstructed using DLR and hybrid iterative reconstruction (IR). Image noise and contrast-to-noise ratio (CNR) were measured. Two radiologists assessed the diagnostic performance of the imaging findings associated with PCNs using a 5-point Likert scale. Diagnostic performance metrics, including sensitivity, specificity, and area under the receiver operating characteristic curve (AUROC), were calculated. Quantitative and qualitative features were compared between CT with DLR and hybrid IR. Interobserver agreement for qualitative assessments was also analyzed. DLR significantly reduced image noise and increased CNR compared to hybrid IR for all objects (p < 0.001). Radiologists rated DLR images as superior in overall quality, lesion delineation, and vessel conspicuity (p < 0.001). DLR produced higher AUROC values for diagnostic imaging findings (ductal communication: 0.887‒0.938 vs. 0.816‒0.827; enhanced mural nodule: 0.843‒0.916 vs. 0.785‒0.801), although DLR did not directly improve sensitivity, specificity, or accuracy. Interobserver agreement for qualitative assessments was higher with DLR (κ = 0.69‒0.82 vs. 0.57‒0.73). DLR improved image quality and diagnostic performance by effectively reducing image noise and improving lesion conspicuity in the diagnosis of PCNs on UHR-CT, and provided greater diagnostic confidence in the assessment of imaging findings associated with PCNs.
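The contrast-to-noise ratio used to compare DLR against hybrid IR is a simple ROI statistic; a common definition (the paper may use a variant, e.g. a different noise reference region) divides the absolute difference of mean attenuation by the background noise:

```python
import statistics

def cnr(lesion_hu, background_hu):
    """Contrast-to-noise ratio: absolute difference of mean attenuation (HU)
    between lesion and background ROIs, divided by background noise, taken
    here as the population SD of the background ROI. One common convention;
    exact ROI/noise definitions vary between studies."""
    noise = statistics.pstdev(background_hu)
    return abs(statistics.fmean(lesion_hu) - statistics.fmean(background_hu)) / noise
```

Because DLR suppresses the noise term in the denominator while preserving mean ROI attenuation, the CNR rises even when the underlying contrast is unchanged, which matches the quantitative result reported above.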

Assessing the value of artificial intelligence-based image analysis for pre-operative surgical planning of neck dissections and iENE detection in head and neck cancer patients.

Schmidl B, Hoch CC, Walter R, Wirth M, Wollenberg B, Hussain T

PubMed · May 30, 2025
Accurate preoperative detection and analysis of lymph node metastasis (LNM) in head and neck squamous cell carcinoma (HNSCC) is essential for the surgical planning and execution of a neck dissection and may directly affect the morbidity and prognosis of patients. Additionally, predicting extranodal extension (ENE) on pre-operative imaging could be particularly valuable in oropharyngeal HPV-positive squamous cell carcinoma, enabling more accurate patient counseling and allowing the decision to favor primary chemoradiotherapy over immediate neck dissection when appropriate. Currently, radiological images are evaluated by radiologists and head and neck oncologists, and automated image interpretation is not part of the current standard of care. Therefore, the value of preoperative image recognition by artificial intelligence (AI) with the large language model ChatGPT-4V was evaluated in this exploratory study based on neck computed tomography (CT) images of HNSCC patients with cervical LNM and corresponding images without LNM. The objective of this study was, first, to assess preoperative rater accuracy by comparing clinician assessments of imaging-detected extranodal extension (iENE) and the extent of neck dissection to AI predictions, and second, to evaluate pathology-based accuracy by comparing AI predictions to final histopathological outcomes. 45 preoperative CT scans were retrospectively analyzed in this study: 15 cases in which a selective neck dissection (sND) was performed, 15 cases with ensuing radical neck dissection (mrND), and 15 cases without LNM (sND). Of note, image analysis was based on three single images provided to both ChatGPT-4V and the head and neck surgeons as reviewers. Final pathological characteristics were available in all cases, as all HNSCC patients had undergone surgery.
ChatGPT-4V was tasked with assessing the extent of LNM on the preoperative CT scans, recommending the extent of neck dissection, and detecting iENE. The diagnostic performance of ChatGPT-4V was reviewed independently by two head and neck surgeons, and its accuracy, sensitivity, and specificity were assessed. In this study, ChatGPT-4V reached a sensitivity of 100% and a specificity of 34.09% in identifying the need for a radical neck dissection based on neck CT images. The sensitivity and specificity of detecting iENE were 100% and 34.15%, respectively. Both human reviewers achieved higher specificity. Notably, ChatGPT-4V also recommended a mrND and detected iENE on CT images without any cervical LNM. In this exploratory study of 45 preoperative neck CT scans before a neck dissection, ChatGPT-4V substantially overestimated the degree and severity of lymph node metastasis in head and neck cancer. While these results suggest that ChatGPT-4V may not yet be a tool providing added value for surgical planning in head and neck cancer, the unparalleled speed of analysis and well-founded reasoning provided suggest that AI tools may provide added value in the future.
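The sensitivity/specificity pattern reported here (100% / ~34%) is the signature of an over-caller: nearly every case is flagged positive. A minimal confusion-matrix sketch (illustrative labels, 1 = radical dissection indicated or iENE present):

```python
def sens_spec(y_true, y_pred):
    """Sensitivity = TP/(TP+FN), specificity = TN/(TN+FP) for binary labels.
    Labels are illustrative: 1 = positive finding, 0 = negative."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    return tp / (tp + fn), tn / (tn + fp)
```

A rater who calls almost everything positive trivially reaches 100% sensitivity while specificity collapses toward zero, which is why the low specificity, not the perfect sensitivity, is the clinically decisive number in this study.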

Deploying a novel deep learning framework for segmentation of specific anatomical structures on cone-beam CT.

Yuce F, Buyuk C, Bilgir E, Çelik Ö, Bayrakdar İŞ

PubMed · May 30, 2025
Cone-beam computed tomography (CBCT) imaging plays a crucial role in dentistry, and automatic prediction of anatomical structures on CBCT images could enhance diagnostic and planning procedures. This study aims to predict anatomical structures automatically on CBCT images using a deep learning algorithm. CBCT images from 70 patients were analyzed. Anatomical structures were annotated by two dentomaxillofacial radiologists using a regional segmentation tool within annotation software. Each volumetric dataset comprised 405 slices, with the relevant anatomical structures marked in each slice. The 70 DICOM images were converted to NIfTI format, with seven reserved for testing and the remaining sixty-three used for training. Training utilized nnUNetv2 with an initial learning rate of 0.01, decreasing by 0.00001 at each epoch, and was conducted for 1000 epochs. Statistical analysis included accuracy, Dice score, precision, and recall. The segmentation model achieved an accuracy of 0.99 for the nasal fossa, maxillary sinus, nasopalatine canal, mandibular canal, mental foramen, and mandibular foramen, with corresponding Dice scores of 0.85, 0.98, 0.79, 0.73, 0.78, and 0.74, respectively. Precision values ranged from 0.73 to 0.98. Maxillary sinus segmentation exhibited the highest performance, while mandibular canal segmentation showed the lowest. The results demonstrate high accuracy and precision across most structures, with varying Dice scores indicating the consistency of segmentation. Overall, our segmentation model exhibits robust performance in delineating anatomical features in CBCT images, with promising potential applications in dental diagnostics and treatment planning.
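The gap between the uniform 0.99 accuracy and the spread of Dice scores (0.73-0.98) is explained by how the two metrics treat small structures: accuracy is dominated by the huge background, while Dice only rewards overlap of the labeled voxels. The standard Dice definition, sketched over voxel-index sets:

```python
def dice(pred, truth):
    """Dice similarity coefficient between two voxel label sets:
    2|A∩B| / (|A| + |B|). Sets hold voxel indices labeled as the structure."""
    inter = len(pred & truth)
    return 2 * inter / (len(pred) + len(truth))
```

For a thin structure like the mandibular canal, a boundary error of one or two voxels removes a large fraction of the intersection, so its Dice (0.73) lags the bulky maxillary sinus (0.98) even though both reach 0.99 voxel-wise accuracy.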

Multi-spatial-attention U-Net: a novel framework for automated gallbladder segmentation on CT images.

Lou H, Wen X, Lin F, Peng Z, Wang Q, Ren R, Xu J, Fan J, Song H, Ji X, Wang H, Sun X, Dong Y

PubMed · May 30, 2025
This study aimed to construct a novel model, Multi-Spatial Attention U-Net (MSAU-Net), by incorporating our proposed Multi-Spatial Attention (MSA) block into the U-Net for automated segmentation of the gallbladder on CT images. The gallbladder dataset consists of CT images of 152 retrospectively collected liver cancer patients and corresponding ground truth delineated by experienced physicians. Our proposed MSAU-Net model was implemented in two versions: V1, with one Multi-Scale Feature Extraction and Fusion (MSFEF) module in each MSA block, and V2, with two parallel MSFEF modules in each MSA block. The performances of V1 and V2 were evaluated and compared with four other derivatives of U-Net or state-of-the-art models, quantitatively using seven commonly used metrics and qualitatively by comparison against experienced physicians' assessments. The MSAU-Net V1 and V2 models both outperformed the comparative models across most quantitative metrics, with better segmentation accuracy and boundary delineation. The optimal number of MSA blocks was three for V1 and two for V2. Qualitative evaluations confirmed that they produced results closer to physicians' annotations. External validation revealed that MSAU-Net V2 exhibited better generalization capability. MSAU-Net V1 and V2 both exhibited outstanding performance in gallbladder segmentation, demonstrating strong potential for clinical application. The MSA block enhances spatial information capture, improving the model's ability to segment small and complex structures with greater precision. These advantages position MSAU-Net V1 and V2 as valuable tools for broader clinical adoption.
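The paper's MSA block is not specified in this abstract; as a generic illustration of what spatial attention means (our simplification, not the authors' architecture), a per-location weight can be derived from channel-pooled features and used to rescale the feature map:

```python
import math

def spatial_attention(fmap):
    """Generic spatial-attention sketch (illustrative only, not the paper's
    MSA block): average over channels at each spatial location, squash with
    a sigmoid to get a per-location weight in (0, 1), then rescale every
    channel by that weight. fmap: [channels][height][width] nested lists."""
    C, H, W = len(fmap), len(fmap[0]), len(fmap[0][0])
    out = [[[0.0] * W for _ in range(H)] for _ in range(C)]
    for i in range(H):
        for j in range(W):
            pooled = sum(fmap[c][i][j] for c in range(C)) / C
            w = 1 / (1 + math.exp(-pooled))      # attention weight
            for c in range(C):
                out[c][i][j] = fmap[c][i][j] * w
    return out
```

Locations with strong pooled activation keep their features while weak locations are suppressed, which is the mechanism by which attention sharpens the boundaries of small structures such as the gallbladder.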

Radiomics-based differentiation of upper urinary tract urothelial and renal cell carcinoma in preoperative computed tomography datasets.

Marcon J, Weinhold P, Rzany M, Fabritius MP, Winkelmann M, Buchner A, Eismann L, Jokisch JF, Casuscelli J, Schulz GB, Knösel T, Ingrisch M, Ricke J, Stief CG, Rodler S, Kazmierczak PM

PubMed · May 30, 2025
To investigate a non-invasive radiomics-based machine learning algorithm to differentiate upper urinary tract urothelial carcinoma (UTUC) from renal cell carcinoma (RCC) prior to surgical intervention. Preoperative computed tomography venous-phase datasets from patients who underwent procedures for histopathologically confirmed UTUC or RCC were retrospectively analyzed. Tumor segmentation was performed manually, and radiomic features were extracted according to the International Image Biomarker Standardization Initiative. Features were normalized using z-scores, and a predictive model was developed using the least absolute shrinkage and selection operator (LASSO). The dataset was split into a training cohort (70%) and a test cohort (30%). A total of 236 patients [30.5% female, median age 70.5 years (IQR: 59.5-77), median tumor size 5.8 cm (range: 4.1-8.2 cm)] were included. For differentiating UTUC from RCC, the model achieved a sensitivity of 88.4% and specificity of 81% (AUC: 0.93, radiomics score cutoff: 0.467) in the training cohort. In the validation cohort, the sensitivity was 80.6% and specificity 80% (AUC: 0.87, radiomics score cutoff: 0.601). Subgroup analysis of the validation cohort demonstrated robust performance, particularly in distinguishing clear cell RCC from high-grade UTUC (sensitivity: 84%, specificity: 73.1%, AUC: 0.84) and high-grade from low-grade UTUC (sensitivity: 57.7%, specificity: 88.9%, AUC: 0.68). Limitations include the need for independent validation in future randomized controlled trials (RCTs). Machine learning-based radiomics models can reliably differentiate between RCC and UTUC on preoperative CT imaging. With a suggested performance benefit compared to conventional imaging, this technology might be added to the current preoperative diagnostic workflow. Local ethics committee no. 20-179.
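The pipeline described (z-score normalization, then a sparse LASSO-derived linear score thresholded at a cutoff) can be sketched in a few lines. The weights and cutoff below are placeholders for whatever LASSO selects, not the paper's fitted coefficients:

```python
import statistics

def zscore(values):
    """Normalize one radiomic feature across the cohort to zero mean,
    unit (population) SD, as described in the methods."""
    mu, sd = statistics.fmean(values), statistics.pstdev(values)
    return [(v - mu) / sd for v in values]

def radiomics_score(features, weights, intercept=0.0):
    """Linear radiomics score of the kind LASSO produces: most weights are
    exactly zero (feature selection), the rest combine normalized features.
    Comparing the score to a cutoff (e.g. the reported 0.467 in training)
    yields the UTUC-vs-RCC call. Weights here are placeholders."""
    return intercept + sum(w * f for w, f in zip(weights, features))
```

The sparsity of LASSO is what makes such models auditable: only a handful of the hundreds of IBSI features carry nonzero weight, and those can be inspected individually.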

Imaging-based machine learning to evaluate the severity of ischemic stroke in the middle cerebral artery territory.

Xie G, Gao J, Liu J, Zhou X, Zhao Z, Tang W, Zhang Y, Zhang L, Li K

PubMed · May 30, 2025
This study aims to develop an imaging-based machine learning model for evaluating the severity of ischemic stroke in the middle cerebral artery (MCA) territory. This retrospective study included 173 patients diagnosed with acute ischemic stroke (AIS) in the MCA territory from two centers, with 114 in the training set and 59 in the test set. In the training set, the Spearman correlation coefficient and multiple linear regression were utilized to analyze the correlation between patients' pre-treatment CT imaging features and the National Institutes of Health Stroke Scale (NIHSS) score. Subsequently, the optimal machine learning algorithm was determined by comparing seven different algorithms and used to construct an imaging-based prediction model for stroke severity (severe vs. non-severe). Finally, the model was validated in the test set. Correlation analysis found that CT imaging features such as infarction side, basal ganglia involvement, dense MCA sign, and infarction volume were independently associated with NIHSS score (P < 0.05). Logistic regression was determined to be the optimal method for constructing the prediction model for stroke severity. The areas under the receiver operating characteristic curve of the model in the training set and test set were 0.815 (95% CI: 0.736-0.893) and 0.780 (95% CI: 0.646-0.914), respectively, with accuracies of 0.772 and 0.814. An imaging-based machine learning model can effectively evaluate the severity (severe or non-severe) of ischemic stroke in the MCA territory. Trial registration: not applicable.
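The feature-screening step pairs each imaging feature with the NIHSS score via Spearman's rank correlation. A minimal implementation of the classic no-ties formula, rho = 1 - 6*Σd²/(n(n²-1)) (ties would require the rank-average variant, omitted here for brevity):

```python
def spearman(x, y):
    """Spearman rank correlation between two equal-length sequences,
    assuming no tied values (the no-ties closed form):
    rho = 1 - 6 * sum(d_i^2) / (n * (n^2 - 1)), d_i = rank difference."""
    def ranks(v):
        order = sorted(range(len(v)), key=lambda i: v[i])
        r = [0] * len(v)
        for rank, i in enumerate(order, 1):
            r[i] = rank
        return r
    rx, ry = ranks(x), ranks(y)
    n = len(x)
    d2 = sum((a - b) ** 2 for a, b in zip(rx, ry))
    return 1 - 6 * d2 / (n * (n * n - 1))
```

Rank correlation is a sensible screen here because NIHSS is ordinal: it detects any monotone association between a feature (e.g. infarction volume) and severity without assuming linearity.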