Concordance between single-slice abdominal computed tomography-based and bioelectrical impedance-based analysis of body composition in a prospective study.

Fehrenbach U, Hosse C, Wienbrandt W, Walter-Rittel T, Kolck J, Auer TA, Blüthner E, Tacke F, Beetz NL, Geisel D

PubMed | Jun 19 2025
Body composition analysis (BCA) is a recognized indicator of patient frailty. Apart from the established bioelectrical impedance analysis (BIA), computed tomography (CT)-derived BCA is increasingly being explored. The aim of this prospective study was to directly compare BCA obtained from BIA and CT. A total of 210 consecutive patients scheduled for CT, including a high proportion of cancer patients, were prospectively enrolled. Immediately prior to the CT scan, all patients underwent BIA. CT-based BCA was performed using a single-slice AI tool for automated detection and segmentation at the level of the third lumbar vertebra (L3). The BIA-based parameters body fat mass (BFM_BIA) and skeletal muscle mass (SMM_BIA) and the CT-based parameters subcutaneous and visceral adipose tissue area (SATA_CT, VATA_CT) and total abdominal muscle area (TAMA_CT) were determined. Indices were calculated by normalizing the BIA and CT parameters to the patient's weight (body fat percentage (BFP_BIA) and body fat index (BFI_CT)) or height (skeletal muscle index (SMI_BIA) and lumbar skeletal muscle index (LSMI_CT)). The parameters representing fat (BFM_BIA and SATA_CT + VATA_CT) and those representing muscle tissue (SMM_BIA and TAMA_CT) showed strong correlations in female (fat: r = 0.95; muscle: r = 0.72; p < 0.001) and male (fat: r = 0.91; muscle: r = 0.71; p < 0.001) patients. Linear regression analysis was statistically significant (fat: R² = 0.73 (female) and 0.74 (male); muscle: R² = 0.56 (female) and 0.56 (male); p < 0.001), showing that BFI_CT and LSMI_CT allowed prediction of BFP_BIA and SMI_BIA for both sexes. CT-based BCA strongly correlates with BIA results and yields quantitative results for BFP and SMI comparable to the existing gold standard.
Question: CT-based body composition analysis (BCA) is moving further into clinical focus, but validation against established methods is lacking.
Findings: Fully automated CT-based BCA correlates very strongly with guideline-accepted bioelectrical impedance analysis (BIA).
Clinical relevance: BCA is moving further into clinical focus to improve the assessment of patient frailty and individualize therapies accordingly. Comparability with the established BIA strengthens the value of CT-based BCA and supports its translation into clinical routine.
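
The indices above follow from simple normalizations. A minimal Python sketch, assuming the conventional definitions of these indices rather than the paper's exact formulas:

```python
# Hypothetical helper functions illustrating the weight- and height-normalized
# indices described above; the exact definitions are assumed, not quoted.

def body_fat_percentage(bfm_kg: float, weight_kg: float) -> float:
    """BFP_BIA: BIA fat mass as a percentage of body weight."""
    return 100.0 * bfm_kg / weight_kg

def skeletal_muscle_index(smm_kg: float, height_m: float) -> float:
    """SMI_BIA: BIA skeletal muscle mass normalized to height squared (kg/m^2)."""
    return smm_kg / height_m ** 2

def body_fat_index(sata_cm2: float, vata_cm2: float, weight_kg: float) -> float:
    """BFI_CT: total L3 fat area (SATA + VATA) normalized to weight (cm^2/kg)."""
    return (sata_cm2 + vata_cm2) / weight_kg

def lumbar_skeletal_muscle_index(tama_cm2: float, height_m: float) -> float:
    """LSMI_CT: L3 total abdominal muscle area normalized to height squared."""
    return tama_cm2 / height_m ** 2

print(body_fat_percentage(25.0, 80.0))    # 31.25 (%)
print(skeletal_muscle_index(28.0, 1.75))  # ~9.14 (kg/m^2)
```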

Optimized YOLOv8 for enhanced breast tumor segmentation in ultrasound imaging.

Mostafa AM, Alaerjan AS, Aldughayfiq B, Allahem H, Mahmoud AA, Said W, Shabana H, Ezz M

PubMed | Jun 19 2025
Breast cancer significantly affects people's health globally, making early and accurate diagnosis vital. While ultrasound imaging is safe and non-invasive, its manual interpretation is subjective. This study explores machine learning (ML) techniques to improve breast ultrasound image segmentation, comparing models trained on combined versus separate classes of benign and malignant tumors. The YOLOv8 object detection algorithm is applied to the image segmentation task, aiming to capitalize on its robust feature detection capabilities. We utilized a dataset of 780 ultrasound images categorized into benign and malignant classes to train several deep learning (DL) models: UNet, UNet with DenseNet-121, VGG16, VGG19, and an adapted YOLOv8. These models were evaluated in two experimental setups: training on a combined dataset and training on separate datasets for the benign and malignant classes. Performance metrics such as the Dice Coefficient, Intersection over Union (IoU), and mean Average Precision (mAP) were used to assess model effectiveness. The study demonstrated substantial improvements in model performance when training on separate classes: the UNet model's F1-score increased from 77.80% to 84.09% and its Dice Coefficient from 75.58% to 81.17%, while the adapted YOLOv8 model's F1-score improved from 93.44% to 95.29% and its Dice Coefficient from 82.10% to 84.40%. These results highlight the advantage of specialized model training and the potential of using advanced object detection algorithms for segmentation tasks. This research underscores the significant potential of specialized training strategies and innovative model adaptations in medical imaging segmentation, ultimately contributing to better patient outcomes.
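
The authors' adapted YOLOv8 is their own, but the general workflow of training a YOLOv8 segmentation model is straightforward with the ultralytics package. A minimal sketch with placeholder dataset paths and hyperparameters (not the paper's settings):

```python
# Sketch of YOLOv8 segmentation training; "breast_ultrasound.yaml" and the
# hyperparameters are placeholders, not the authors' configuration.
from ultralytics import YOLO

model = YOLO("yolov8n-seg.pt")  # pretrained segmentation weights

# Train on a combined dataset, or run twice with class-specific dataset YAMLs
# (e.g. benign-only / malignant-only) to mirror the paper's two setups.
model.train(data="breast_ultrasound.yaml", epochs=100, imgsz=640)

metrics = model.val()  # reports mask mAP and related metrics on the val split
```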

Artificial Intelligence Language Models to Translate Professional Radiology Mammography Reports Into Plain Language - Impact on Interpretability and Perception by Patients.

Pisarcik D, Kissling M, Heimer J, Farkas M, Leo C, Kubik-Huch RA, Euler A

PubMed | Jun 19 2025
This study aimed to evaluate the interpretability and patient perception of AI-translated mammography and sonography reports, focusing on comprehensibility, follow-up recommendations, and conveyed empathy, using a survey. In this observational study, three fictional mammography and sonography reports with BI-RADS categories 3, 4, and 5 were created. These reports were repeatedly translated into plain language by three different large language models (LLMs: ChatGPT-4, ChatGPT-4o, Google Gemini). In a first step, the best of these repeatedly translated reports for each BI-RADS category and LLM was selected by two experts in breast imaging, considering factual correctness, completeness, and quality. In a second step, female participants compared and rated the translated reports regarding comprehensibility, follow-up recommendations, conveyed empathy, and the additional value of each report using a survey with Likert scales. Statistical analysis included cumulative link mixed models and the Plackett-Luce model for ranking preferences. Forty females participated in the survey. GPT-4 and GPT-4o were rated significantly higher than Gemini across all categories (P<.001). Participants >50 years of age rated the reports significantly higher than participants 18-29 years of age (P<.05). Higher education predicted lower ratings (P=.02). No prior mammography increased scores (P=.03), and AI experience had no effect (P=.88). Ranking analysis showed GPT-4o as the most preferred (P=.48), followed by GPT-4 (P=.37), with Gemini ranked last (P=.15). Patient preference differed among the AI-translated radiology reports. Compared to a traditional report using radiological language, AI-translated reports add value for patients, enhance comprehensibility and empathy, and therefore hold the potential to improve patient communication in breast imaging.
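
For readers unfamiliar with the Plackett-Luce model used in the ranking analysis: it assigns each item a positive "worth," and the probability of an observed ranking is the product, over positions, of the chosen item's worth divided by the total worth still remaining. A toy Python sketch (reusing the reported preference shares purely as illustrative worths, not as fitted parameters):

```python
def plackett_luce_prob(ranking: list[str], worth: dict[str, float]) -> float:
    """P(ranking) = product over positions of worth(chosen) / sum(remaining worths)."""
    prob = 1.0
    remaining = list(ranking)
    for item in ranking:
        prob *= worth[item] / sum(worth[r] for r in remaining)
        remaining.remove(item)
    return prob

# Illustrative worths only (borrowed from the reported preference shares).
worth = {"GPT-4o": 0.48, "GPT-4": 0.37, "Gemini": 0.15}
print(plackett_luce_prob(["GPT-4o", "GPT-4", "Gemini"], worth))  # ~0.34
```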

A fusion-based deep-learning algorithm predicts PDAC metastasis based on primary tumour CT images: a multinational study.

Xue N, Sabroso-Lasa S, Merino X, Munzo-Beltran M, Schuurmans M, Olano M, Estudillo L, Ledesma-Carbayo MJ, Liu J, Fan R, Hermans JJ, van Eijck C, Malats N

PubMed | Jun 19 2025
Diagnosing the presence of metastasis in pancreatic cancer is pivotal for patient management and treatment, with contrast-enhanced CT scans (CECT) as the cornerstone of diagnostic evaluation. However, this diagnostic modality requires a multifaceted approach. We aimed to develop a convolutional neural network (CNN)-based model (PMPD, Pancreatic cancer Metastasis Prediction Deep-learning algorithm) to predict the presence of metastases based on CECT images of the primary tumour. CECT images in the portal venous phase of 335 patients with pancreatic ductal adenocarcinoma (PDAC) from the PanGenEU study and The First Affiliated Hospital of Zhengzhou University (ZZU) were randomly divided into training and internal validation sets by applying fivefold cross-validation. Two independent external validation datasets, 143 patients from the Radboud University Medical Center (RUMC) included in the PANCAIM study (RUMC-PANCAIM) and 183 patients from the PREOPANC trial of the Dutch Pancreatic Cancer Group (PREOPANC-DPCG), were used to evaluate the results. The area under the receiver operating characteristic curve (AUROC) for the internally tested model was 0.895 (0.853-0.937) in the PanGenEU set and 0.779 (0.741-0.817) in the ZZU set. In the external validation sets, the mean AUROC was 0.806 (0.787-0.826) for RUMC-PANCAIM and 0.761 (0.717-0.804) for PREOPANC-DPCG. When stratified by metastasis site, the PMPD model achieved average AUROCs of 0.901-0.927 in the PanGenEU, 0.782-0.807 in the ZZU, and 0.761-0.820 in the PREOPANC-DPCG sets. A PMPD-derived Metastasis Risk Score (MRS) (HR: 2.77, 95% CI 1.99 to 3.86, p=1.59e-09) outperformed the resectability status from the National Comprehensive Cancer Network guideline and the CA19-9 biomarker in predicting overall survival. The MRS also showed potential to predict the development of metastasis (AUROC: 0.716 within 3 months, 0.645 within 6 months). This study represents a pioneering utilisation of a high-performance deep-learning model to predict extrapancreatic organ metastasis in patients with PDAC.
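
The fivefold cross-validation split described for the training/internal validation cohorts can be set up generically with scikit-learn. A hedged sketch with dummy labels (the patient-level CNN training itself is the paper's own pipeline):

```python
# Generic fivefold cross-validation scaffold; labels are random placeholders.
import numpy as np
from sklearn.model_selection import StratifiedKFold

patient_ids = np.arange(335)                # PanGenEU + ZZU cohort size
labels = np.random.randint(0, 2, size=335)  # 1 = metastasis present (dummy)

skf = StratifiedKFold(n_splits=5, shuffle=True, random_state=42)
for fold, (train_idx, val_idx) in enumerate(skf.split(patient_ids, labels)):
    print(f"fold {fold}: {len(train_idx)} train / {len(val_idx)} validation")
    # train the CNN on train_idx and evaluate AUROC on val_idx here
```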

Deep learning detects retropharyngeal edema on MRI in patients with acute neck infections.

Rainio O, Huhtanen H, Vierula JP, Nurminen J, Heikkinen J, Nyman M, Klén R, Hirvonen J

PubMed | Jun 19 2025
In acute neck infections, magnetic resonance imaging (MRI) shows retropharyngeal edema (RPE), a prognostic imaging biomarker for a severe course of illness. This study aimed to develop a deep learning-based algorithm for the automated detection of RPE. We developed a two-part deep neural network using axial T2-weighted water-only Dixon MRI images from 479 patients with acute neck infections, annotated by radiologists at both the slice and patient levels. First, a convolutional neural network (CNN) classified individual slices; second, an algorithm classified patients based on a stack of slices. Model performance was compared with the radiologists' assessment as the reference standard. Accuracy, sensitivity, specificity, and the area under the receiver operating characteristic curve (AUROC) were calculated. The proposed CNN was compared with InceptionV3, and the patient-level classification algorithm was compared with traditional machine learning models. Of the 479 patients, 244 (51%) were positive and 235 (49%) negative for RPE. Our model achieved accuracy, sensitivity, specificity, and AUROC of 94.6%, 83.3%, 96.2%, and 94.1% at the slice level, and 87.4%, 86.5%, 88.2%, and 94.8% at the patient level, respectively. The proposed CNN was faster than InceptionV3 but equally accurate. Our patient classification algorithm outperformed traditional machine learning models. A deep learning model based on weakly annotated data and computationally manageable training achieved high accuracy for automatically detecting RPE on MRI in patients with acute neck infections. Our automated method for detecting relevant MRI findings was efficiently trained and might be easily deployed in practice to study clinical applicability. This approach might improve early detection of patients at high risk for a severe course of acute neck infections. Deep learning automatically detected retropharyngeal edema on MRI in acute neck infections. Areas under the receiver operating characteristic curve were 94.1% at the slice level and 94.8% at the patient level. The proposed convolutional neural network was lightweight and required only weakly annotated data.
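
The two-stage design (slice-level CNN, then a patient-level decision over the slice stack) can be illustrated with a simple aggregation rule. The max-over-slices rule below is an assumption for illustration; the paper's actual patient-level algorithm may differ:

```python
# Hypothetical patient-level aggregation over per-slice CNN probabilities.
import numpy as np

def patient_prediction(slice_probs: np.ndarray, threshold: float = 0.5) -> bool:
    """Call a patient RPE-positive if any slice probability exceeds the threshold."""
    return bool(slice_probs.max() > threshold)

# e.g. per-slice CNN outputs for one patient's axial stack
probs = np.array([0.02, 0.11, 0.64, 0.58, 0.07])
print(patient_prediction(probs))  # True
```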

Development and validation of an AI-driven radiomics model using non-enhanced CT for automated severity grading in chronic pancreatitis.

Chen C, Zhou J, Mo S, Li J, Fang X, Liu F, Wang T, Wang L, Lu J, Shao C, Bian Y

PubMed | Jun 19 2025
To develop and validate the chronic pancreatitis CT severity model (CATS), an artificial intelligence (AI)-based tool leveraging automated 3D segmentation and radiomics analysis of non-enhanced CT scans for objective severity stratification in chronic pancreatitis (CP). This retrospective study encompassed patients with recurrent acute pancreatitis (RAP) and CP from June 2016 to May 2020. A 3D convolutional neural network segmented the non-enhanced CT scans, and 1843 radiomic features were extracted to calculate a radiomics score (Rad-score). CATS was formulated using multivariable logistic regression and validated in a subsequent cohort from June 2020 to April 2023. Overall, 2054 patients with RAP and CP were included in the training (n = 927), validation (n = 616), and external test (n = 511) sets. CP grade I and II patients accounted for 300 (14.61%) and 1754 (85.39%), respectively. The Rad-score significantly correlated with the acinus-to-stroma ratio (p = 0.023; OR, -2.44). The CATS model demonstrated high discriminatory performance in differentiating CP severity grades, achieving areas under the curve (AUC) of 0.96 (95% CI: 0.94-0.98) and 0.88 (95% CI: 0.81-0.90) in the validation and test cohorts, respectively. CATS-predicted grades correlated with exocrine insufficiency (all p < 0.05) and showed significant prognostic differences (all p < 0.05). CATS outperformed radiologists in detecting calcifications, identifying all of the minute calcifications that radiologists missed. CATS, developed using non-enhanced CT and AI, accurately predicts CP severity, reflects disease morphology, and forecasts short- to medium-term prognosis, offering a significant advancement in CP management.
Question: Existing CP severity assessments rely on semi-quantitative CT evaluations and multi-modality imaging, leading to inconsistency and inaccuracy in early diagnosis and prognosis prediction.
Findings: The AI-driven CATS model, using non-enhanced CT, achieved high accuracy in grading CP severity and correlated with histopathological fibrosis markers.
Clinical relevance: CATS provides a cost-effective, widely accessible tool for precise CP severity stratification, enabling early intervention, personalized management, and improved outcomes without contrast agents or invasive biopsies.
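
The Rad-score construction (many radiomic features reduced to a single score via a logistic model) is a standard radiomics pattern. A minimal sketch with random placeholder features, not the paper's pipeline:

```python
# Placeholder radiomics-to-Rad-score sketch; features and labels are random.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 1843))  # 200 patients x 1843 radiomic features
y = rng.integers(0, 2, size=200)  # 1 = CP grade II (dummy labels)

clf = LogisticRegression(penalty="l2", max_iter=1000).fit(X, y)
rad_score = clf.decision_function(X)  # continuous Rad-score per patient
print(rad_score[:5])
```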

Data extraction from free-text stroke CT reports using GPT-4o and Llama-3.3-70B: the impact of annotation guidelines.

Wihl J, Rosenkranz E, Schramm S, Berberich C, Griessmair M, Woźnicki P, Pinto F, Ziegelmayer S, Adams LC, Bressem KK, Kirschke JS, Zimmer C, Wiestler B, Hedderich D, Kim SH

PubMed | Jun 19 2025
To evaluate the impact of an annotation guideline on the performance of large language models (LLMs) in extracting data from stroke computed tomography (CT) reports. The performance of GPT-4o and Llama-3.3-70B in extracting ten imaging findings from stroke CT reports was assessed on two datasets from a single academic stroke center. Dataset A (n = 200) was a stratified cohort including various pathological findings, whereas dataset B (n = 100) was a consecutive cohort. Initially, an annotation guideline providing clear data extraction instructions was designed based on a review of cases with inter-annotator disagreements in dataset A. For each LLM, data extraction was performed under two conditions: with the annotation guideline included in the prompt and without it. GPT-4o consistently demonstrated superior performance over Llama-3.3-70B under identical conditions, with micro-averaged precision ranging from 0.83 to 0.95 for GPT-4o and from 0.65 to 0.86 for Llama-3.3-70B. Across both models and both datasets, incorporating the annotation guideline into the LLM input resulted in higher precision, while recall largely remained stable. In dataset B, the precision of GPT-4o and Llama-3.3-70B improved from 0.83 to 0.95 and from 0.87 to 0.94, respectively. Overall classification performance with and without the annotation guideline differed significantly in five out of six conditions. GPT-4o and Llama-3.3-70B show promising performance in extracting imaging findings from stroke CT reports, although GPT-4o consistently outperformed Llama-3.3-70B. We also provide evidence that well-defined annotation guidelines can enhance LLM data extraction accuracy. Annotation guidelines can improve the accuracy of LLMs in extracting findings from radiological reports, potentially optimizing data extraction for specific downstream applications. LLMs have utility in data extraction from radiology reports, but the role of annotation guidelines remains underexplored. Data extraction accuracy from stroke CT reports by GPT-4o and Llama-3.3-70B improved when well-defined annotation guidelines were incorporated into the model prompt. Well-defined annotation guidelines can improve the accuracy of LLMs in extracting imaging findings from radiological reports.
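
The two prompting conditions are easy to picture in code. A hedged sketch using the OpenAI Python client; the guideline text, instruction wording, and output format are placeholders, since the study's actual prompts are not reproduced here:

```python
# Illustrative prompt construction for the with/without-guideline conditions.
from openai import OpenAI

client = OpenAI()
GUIDELINE = "..."  # the annotation guideline text (placeholder)

def extract_findings(report: str, use_guideline: bool) -> str:
    system = "Extract the ten imaging findings from this stroke CT report as JSON."
    if use_guideline:
        system += "\nFollow these annotation rules:\n" + GUIDELINE
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "system", "content": system},
                  {"role": "user", "content": report}],
    )
    return response.choices[0].message.content
```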

Artificial intelligence in imaging diagnosis of liver tumors: current status and future prospects.

Hori M, Suzuki Y, Sofue K, Sato J, Nishigaki D, Tomiyama M, Nakamoto A, Murakami T, Tomiyama N

PubMed | Jun 19 2025
Liver cancer remains a significant global health concern, ranking as the sixth most common malignancy and the third leading cause of cancer-related deaths worldwide. Medical imaging plays a vital role in managing liver tumors, particularly hepatocellular carcinoma (HCC) and metastatic lesions. However, the large volume and complexity of imaging data can make accurate and efficient interpretation challenging. Artificial intelligence (AI) is recognized as a promising tool to address these challenges. Therefore, this review aims to explore the recent advances in AI applications in liver tumor imaging, focusing on key areas such as image reconstruction, image quality enhancement, lesion detection, tumor characterization, segmentation, and radiomics. Among these, AI-based image reconstruction has already been widely integrated into clinical workflows, helping to enhance image quality while reducing radiation exposure. While the adoption of AI-assisted diagnostic tools in liver imaging has lagged behind other fields, such as chest imaging, recent developments are driving their increasing integration into clinical practice. In the future, AI is expected to play a central role in various aspects of liver cancer care, including comprehensive image analysis, treatment planning, response evaluation, and prognosis prediction. This review offers a comprehensive overview of the status and prospects of AI applications in liver tumor imaging.

Multitask Deep Learning for Automated Segmentation and Prognostic Stratification of Endometrial Cancer via Biparametric MRI.

Yan R, Zhang X, Cao Q, Xu J, Chen Y, Qin S, Zhang S, Zhao W, Xing X, Yang W, Lang N

PubMed | Jun 19 2025
Endometrial cancer (EC) is a common gynecologic malignancy; accurate assessment of key prognostic factors is important for treatment planning. To develop a deep learning (DL) framework based on biparametric MRI for automated segmentation and multitask classification of EC key prognostic factors, including grade, stage, histological subtype, lymphovascular space invasion (LVSI), and deep myometrial invasion (DMI). Retrospective. A total of 325 patients with histologically confirmed EC were included: 211 training, 54 validation, and 60 test cases. T2-weighted imaging (T2WI, FSE/TSE) and diffusion-weighted imaging (DWI, SS-EPI) sequences at 1.5 and 3 T. The DL model comprised tumor segmentation and multitask classification. Manual delineation on T2WI and DWI served as the reference standard for segmentation. Separate models were trained using T2WI alone, DWI alone, and combined T2WI + DWI to classify the dichotomized key prognostic factors. Performance was assessed in the validation and test cohorts. For DMI, the combined model's performance was compared with visual assessment by four radiologists (with 1, 4, 7, and 20 years of experience), each of whom independently reviewed all cases. Segmentation was evaluated using the Dice similarity coefficient (DSC), Jaccard similarity coefficient (JSC), 95th-percentile Hausdorff distance (HD95), and average surface distance (ASD). Classification performance was assessed using the area under the receiver operating characteristic curve (AUC). Model AUCs were compared using DeLong's test. p < 0.05 was considered significant. In the test cohort, DSCs were 0.80 (T2WI) and 0.78 (DWI), and JSCs were 0.69 for both. HD95 and ASD were 7.02/1.71 mm (T2WI) versus 10.58/2.13 mm (DWI). The classification framework achieved AUCs of 0.78-0.94 (validation) and 0.74-0.94 (test). For DMI, the combined model performed comparably to the radiologists (p = 0.07-0.84). The unified DL framework demonstrates strong EC segmentation and classification performance, with high accuracy across multiple tasks. Evidence Level: 3. Technical Efficacy: Stage 3.
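
For reference, the Dice similarity coefficient used to score the segmentations is DSC = 2|A ∩ B| / (|A| + |B|). A small self-contained check on toy binary masks:

```python
import numpy as np

def dice(pred: np.ndarray, target: np.ndarray, eps: float = 1e-8) -> float:
    """DSC = 2|A intersect B| / (|A| + |B|) for binary masks."""
    intersection = np.logical_and(pred, target).sum()
    return float(2.0 * intersection / (pred.sum() + target.sum() + eps))

a = np.array([[1, 1, 0], [0, 1, 0]])
b = np.array([[1, 0, 0], [0, 1, 1]])
print(dice(a, b))  # 2*2 / (3+3) ≈ 0.667
```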

Ensuring integrity in dental education: Developing a novel AI model for consistent and traceable image analysis in preclinical endodontic procedures.

Ibrahim M, Omidi M, Guentsch A, Gaffney J, Talley J

PubMed | Jun 19 2025
Academic integrity is crucial in dental education, especially during practical exams assessing competencies. Traditional oversight may not detect sophisticated methods of academic dishonesty such as radiograph substitution or tampering. This study aimed to develop and evaluate a novel artificial intelligence (AI) model utilizing a Siamese neural network to detect inconsistencies in radiographic images taken for root canal treatment (RCT) procedures in preclinical endodontic courses, thereby enhancing educational integrity. A Siamese neural network was designed to compare radiographs from different RCT procedures. The model was trained on 3390 radiographs, with data augmentation applied to improve generalizability. The dataset was split into training, validation, and testing subsets. Performance metrics included accuracy, precision, sensitivity (recall), and F1-score. Cross-validation and hyperparameter tuning optimized the model. Our AI model achieved an accuracy of 89.31%, a precision of 76.82%, a sensitivity of 84.82%, and an F1-score of 80.50%. The optimal similarity threshold was 0.48, where maximum accuracy was observed. The confusion matrix indicated a high rate of correct classifications, and cross-validation confirmed the model's robustness with a standard deviation of 1.95% across folds. The AI-driven Siamese neural network effectively detects radiographic inconsistencies in preclinical RCT procedures. Implemented as an objective tool, this novel model can uphold academic integrity in dental education, enhance the fairness and reliability of assessments, promote a culture of honesty amongst students, and reduce the administrative burden on educators.
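
The core Siamese idea (a shared encoder applied to both radiographs, followed by a similarity comparison against a threshold) can be sketched in PyTorch. The toy encoder below is a stand-in, not the authors' architecture; only the reported threshold of 0.48 comes from the abstract:

```python
# Toy Siamese comparison; architecture and input sizes are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SiameseNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
            nn.Flatten(), nn.Linear(32, 64),
        )

    def forward(self, x1, x2):
        # Shared encoder for both inputs, then cosine similarity in [-1, 1].
        return F.cosine_similarity(self.encoder(x1), self.encoder(x2))

net = SiameseNet()
a, b = torch.randn(1, 1, 128, 128), torch.randn(1, 1, 128, 128)
consistent_pair = net(a, b).item() > 0.48  # pairs above threshold are consistent
print(consistent_pair)
```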