Digitalizing English-language CT Interpretation for Positive Haemorrhage Evaluation Reporting: the DECIPHER study.

Bloom B, Haimovich A, Pott J, Williams SL, Cheetham M, Langsted S, Skene I, Astin-Chamberlain R, Thomas SH

pubmed · Jul 25 2025
Identifying whether there is a traumatic intracranial bleed (ICB+) on head CT is critical for clinical care and research. Free-text CT reports are unstructured and must therefore undergo time-consuming manual review. Existing artificial intelligence classification schemes are not optimised for the emergency department endpoint of classifying reports as ICB+ or ICB-. We sought to assess three methods for classifying CT reports: a text classification (TC) programme, a commercial natural language processing programme (Clinithink) and a generative pretrained transformer large language model (Digitalizing English-language CT Interpretation for Positive Haemorrhage Evaluation Reporting (DECIPHER)-LLM). Primary objective: determine the diagnostic classification performance of the dichotomous categorisation of each of the three approaches. Secondary objective: determine whether the LLM could achieve a substantial reduction in CT report review workload while maintaining 100% sensitivity. Anonymised radiology reports of head CT scans performed for trauma were manually labelled as ICB+/-. Training and validation sets were randomly created to train the TC and natural language processing models. Prompts were written to train the LLM. 898 reports were manually labelled. Sensitivity and specificity (95% CI) were 87.9% (76.7% to 95.0%) and 98.2% (96.3% to 99.3%) for TC, 75.9% (62.8% to 86.1%) and 96.2% (93.8% to 97.8%) for Clinithink, and 100% (93.8% to 100%) and 97.4% (95.3% to 98.8%) for DECIPHER-LLM (with the probability-of-ICB threshold set at 10%). With the DECIPHER-LLM threshold of 10% used to identify CT reports requiring manual evaluation, the number of reports requiring manual classification fell by an estimated 385/449 cases (85.7% (95% CI 82.1% to 88.9%)) while maintaining 100% sensitivity. DECIPHER-LLM outperformed the other tested free-text classification methods.
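The workload-reduction step reduces to a simple probability threshold: reports the LLM scores at or above 10% probability of ICB+ go to manual review, and the rest are auto-cleared. Below is a minimal sketch of that triage logic with hypothetical report scores; the paper does not publish its prompt or scoring interface, so the function and values are illustrative only.

```python
def triage_reports(icb_probs, threshold=0.10):
    """Split reports into manual-review and auto-cleared sets by the
    LLM-estimated probability of intracranial bleed (threshold per DECIPHER)."""
    review = [i for i, p in enumerate(icb_probs) if p >= threshold]
    cleared = [i for i, p in enumerate(icb_probs) if p < threshold]
    return review, cleared

# hypothetical LLM scores for five CT reports
review, cleared = triage_reports([0.02, 0.91, 0.07, 0.45, 0.10])
print(review, cleared)  # [1, 3, 4] [0, 2]
```

Sensitivity stays at 100% as long as every true ICB+ report scores at or above the threshold, which is the property the study verifies on its labelled set.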

A novel approach for breast cancer detection using a Nesterov accelerated Adam optimizer with an attention mechanism.

Saber A, Emara T, Elbedwehy S, Hassan E

pubmed · Jul 25 2025
Image-based automatic breast tumor detection has become a significant research focus, driven by recent advancements in machine learning (ML) algorithms. Traditional disease detection methods often involve manual feature extraction from images, a process requiring extensive expertise from specialists and pathologists. This labor-intensive approach is not only time-consuming but also impractical for widespread application. However, advancements in digital technologies and computer vision have enabled convolutional neural networks (CNNs) to learn features automatically, thereby overcoming these challenges. This paper presents a deep neural network model based on the MobileNet-V2 architecture, enhanced with a convolutional block attention mechanism for identifying tumor types in ultrasound images. The attention module improves the MobileNet-V2 model's performance by highlighting disease-affected areas within the images. The proposed model refines features extracted by MobileNet-V2 using the Nesterov-accelerated Adaptive Moment Estimation (Nadam) optimizer. This integration enhances convergence and stability, leading to improved classification accuracy. The proposed approach was evaluated on the BUSI ultrasound image dataset. Experimental results demonstrated strong performance, achieving an accuracy of 99.1%, sensitivity of 99.7%, specificity of 99.5%, precision of 97.7%, and an area under the curve (AUC) of 1.0 using an 80-20 data split. Additionally, under 10-fold cross-validation, the model achieved an accuracy of 98.7%, sensitivity of 99.1%, specificity of 98.3%, precision of 98.4%, F1-score of 98.04%, and an AUC of 0.99.
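The described pipeline (MobileNet-V2 backbone, a convolutional-block-style attention gate, Nadam optimizer) can be approximated in a few lines of Keras. This is a sketch under assumptions, not the authors' code: the attention block implements only a simplified channel-attention gate rather than full CBAM, and the three-class head assumes the BUSI normal/benign/malignant labels.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

def channel_attention(x, ratio=8):
    # Simplified CBAM-style channel gate: a shared MLP over average- and
    # max-pooled descriptors, sigmoid-scaled back onto the feature map.
    ch = x.shape[-1]
    mlp = tf.keras.Sequential([layers.Dense(ch // ratio, activation="relu"),
                               layers.Dense(ch)])
    gate = layers.Activation("sigmoid")(
        layers.Add()([mlp(layers.GlobalAveragePooling2D()(x)),
                      mlp(layers.GlobalMaxPooling2D()(x))]))
    return layers.Multiply()([x, layers.Reshape((1, 1, ch))(gate)])

base = tf.keras.applications.MobileNetV2(include_top=False, weights=None,
                                         input_shape=(224, 224, 3))
x = channel_attention(base.output)          # emphasize disease-affected regions
x = layers.GlobalAveragePooling2D()(x)
outputs = layers.Dense(3, activation="softmax")(x)  # BUSI: normal / benign / malignant
model = models.Model(base.input, outputs)
model.compile(optimizer=tf.keras.optimizers.Nadam(learning_rate=1e-4),
              loss="sparse_categorical_crossentropy", metrics=["accuracy"])
```

Nadam is Adam with Nesterov momentum folded into the update, which is the convergence-and-stability benefit the abstract attributes to the optimizer choice.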

Deep learning-based image classification for integrating pathology and radiology in AI-assisted medical imaging.

Lu C, Zhang J, Liu R

pubmed · Jul 25 2025
The integration of pathology and radiology in medical imaging has emerged as a critical need for advancing diagnostic accuracy and improving clinical workflows. Current AI-driven approaches for medical image analysis, despite significant progress, face several challenges, including handling multi-modal imaging, imbalanced datasets, and the lack of robust interpretability and uncertainty quantification. These limitations often hinder the deployment of AI systems in real-world clinical settings, where reliability and adaptability are essential. To address these issues, this study introduces a novel framework, the Domain-Informed Adaptive Network (DIANet), combined with an Adaptive Clinical Workflow Integration (ACWI) strategy. DIANet leverages multi-scale feature extraction, domain-specific priors, and Bayesian uncertainty modeling to enhance interpretability and robustness. The proposed model is tailored for multi-modal medical imaging tasks, integrating adaptive learning mechanisms to mitigate domain shifts and imbalanced datasets. Complementing the model, the ACWI strategy ensures seamless deployment through explainable AI (XAI) techniques, uncertainty-aware decision support, and modular workflow integration compatible with clinical systems like PACS. Experimental results demonstrate significant improvements in diagnostic accuracy, segmentation precision, and reconstruction fidelity across diverse imaging modalities, validating the potential of this framework to bridge the gap between AI innovation and clinical utility.
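The abstract does not specify how DIANet's Bayesian uncertainty modeling is implemented; one common approximation consistent with the description is Monte Carlo dropout, sketched below as an assumption rather than the paper's confirmed mechanism.

```python
import numpy as np
import tensorflow as tf

def mc_dropout_predict(model, x, n_samples=20):
    """Monte Carlo dropout: run the model repeatedly with dropout active
    (training=True) and report the mean prediction plus its spread as a
    per-output uncertainty estimate."""
    preds = np.stack([model(x, training=True).numpy() for _ in range(n_samples)])
    return preds.mean(axis=0), preds.std(axis=0)
```

A high standard deviation flags predictions a clinician should review, which is the kind of uncertainty-aware decision support the ACWI strategy describes.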

A DCT-UNet-based framework for pulmonary airway segmentation integrating label self-updating and terminal region growing.

Zhao S, Wu Y, Xu J, Li M, Feng J, Xia S, Chen R, Liang Z, Qian W, Qi S

pubmed · Jul 25 2025
Background:
Intrathoracic airway segmentation in computed tomography (CT) is important for quantitative and qualitative analysis of various chronic respiratory diseases and for bronchial surgery navigation. However, the airway tree's morphological complexity, incomplete labels resulting from annotation difficulty, and intra-class imbalance between main and terminal airways limit segmentation performance.
Methods:
Three methodological improvements are proposed to deal with these challenges. Firstly, we design a DCT-UNet to better capture information both from neighbouring voxels and from voxels within a larger spatial region. Secondly, an airway label self-updating (ALSU) strategy is proposed to iteratively update the reference labels and overcome the problem of incomplete labels. Thirdly, a deep learning-based terminal region growing (TRG) is adopted to extract terminal airways. Extensive experiments were conducted on two internal datasets and three public datasets.
Results:
Compared to the counterparts, the proposed method achieves higher Branch Detected, Tree-length Detected, Branch Ratio, and Tree-length Ratio scores (ISICDM2021 dataset: 95.19%, 94.89%, 166.45%, and 172.29%; BAS dataset: 96.03%, 95.11%, 129.35%, and 137.00%). Ablation experiments show the effectiveness of the three proposed solutions. Our method was applied to an in-house Chronic Obstructive Pulmonary Disease (COPD) dataset; the measures of branch count, tree length, endpoint count, airway volume, and airway surface area differ significantly between COPD severity stages.
Conclusions:
The proposed methods can segment more terminal bronchi and a greater length of airway; even some bronchi that are real but missed in the manual annotation can be detected. Potential application significance has been demonstrated in characterizing COPD airway lesions and severity stages.
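As an illustration of the terminal region growing idea, the sketch below grows a 6-connected region outward from seed voxels on a probability map. The paper's TRG is deep-learning based and more involved, so treat this as an assumed simplification of the concept.

```python
import numpy as np
from collections import deque

def region_grow(prob, seeds, thr=0.5):
    """Simple 6-connected 3D region growing on an airway probability map,
    starting from terminal-airway seed voxels given as (z, y, x) tuples."""
    grown = np.zeros(prob.shape, dtype=bool)
    q = deque(seeds)
    for s in seeds:
        grown[s] = True
    offsets = [(1, 0, 0), (-1, 0, 0), (0, 1, 0), (0, -1, 0), (0, 0, 1), (0, 0, -1)]
    while q:
        z, y, x = q.popleft()
        for dz, dy, dx in offsets:
            n = (z + dz, y + dy, x + dx)
            if (all(0 <= n[i] < prob.shape[i] for i in range(3))
                    and not grown[n] and prob[n] >= thr):
                grown[n] = True
                q.append(n)
    return grown
```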

Clinical application of a deep learning system for automatic mandibular alveolar bone quantity assessment and suggested treatment options using CBCT cross-sections.

Rashid MO, Gaghor S

pubmed · Jul 25 2025
Assessing the dimensions of available bone throughout hundreds of cone-beam computed tomography cross-sectional images of the edentulous area is time-consuming, focus-demanding, and prone to variability and mistakes. This study aims at a clinically applicable artificial intelligence-based automation system that assesses available bone quantity and suggests possible surgical and nonsurgical treatment options in a real-time manner. YOLOv8-seg, a single-stage convolutional neural network detector, was used to segment the mandibular alveolar bone and the inferior alveolar canal from cross-sectional images of a custom dataset. Measurements from the segmented masks of the bone and canal were calculated mathematically and compared with manual measurements from 2 different operators, and the time taken for the measurement task was compared. Bone-dimension classifications with 25 treatment options were suggested automatically by the system and validated by a team of specialists. The YOLOv8 model segmented the anatomical structures accurately, with a precision of 0.951, recall of 0.915, mAP50 of 0.952, Intersection over Union of 0.871, and Dice similarity coefficient of 0.911. The artificial intelligence-based system performed the segmentation 2001 times faster than the human operators. A statistically significant difference between the system and the operators was recorded for height measurements and for measurement time. The system's recommendations matched the clinicians' assessments in 94% of cases (83/88); a Cohen κ of 0.89 indicated near-perfect agreement. The YOLOv8 model is an effective tool that segments dental structures with high accuracy and balanced computational requirements; despite the challenges presented, the system can become clinically applicable with future improvements, delivering implant planning reports that are less time-consuming to produce and, most importantly, accurate at the specialist level.
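For readers unfamiliar with the detector used, single-stage instance segmentation with YOLOv8 follows the standard Ultralytics API. The weights file and image below are placeholders, since the study trained on a private CBCT dataset.

```python
from ultralytics import YOLO

# Placeholder weights; the study fine-tuned on its own CBCT cross-sections.
model = YOLO("yolov8s-seg.pt")
results = model.predict("cbct_cross_section.png")  # hypothetical input image
for r in results:
    if r.masks is not None:
        # (instances, H, W) binary masks, e.g. alveolar bone and the inferior
        # alveolar canal in the study's setting
        print(r.masks.data.shape)
```

Bone height and width can then be derived from the mask extents multiplied by the image's pixel spacing, which is one plausible way mask-derived measurements are compared against manual ones.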

Agentic AI in radiology: Emerging Potential and Unresolved Challenges.

Dietrich N

pubmed · Jul 24 2025
This commentary introduces agentic artificial intelligence (AI) as an emerging paradigm in radiology, marking a shift from passive, user-triggered tools to systems capable of autonomous workflow management, task planning, and clinical decision support. Agentic AI models may dynamically prioritize imaging studies, tailor recommendations based on patient history and scan context, and automate administrative follow-up tasks, offering potential gains in efficiency, triage accuracy, and cognitive support. While not yet widely implemented, early pilot studies and proof-of-concept applications highlight promising utility across high-volume and high-acuity settings. Key barriers, including limited clinical validation, evolving regulatory frameworks, and integration challenges, must be addressed to ensure safe, scalable deployment. Agentic AI represents a forward-looking evolution in radiology that warrants careful development and clinician-guided implementation.

SUP-Net: Slow-time Upsampling Network for Aliasing Removal in Doppler Ultrasound.

Nahas H, Yu ACH

pubmed · Jul 24 2025
Doppler ultrasound modalities, which include spectral Doppler and color flow imaging, are frequently used tools for flow diagnostics because of their real-time point-of-care applicability and high temporal resolution. When these modalities are implemented using pulse-echo sensing and phase-shift estimation principles, the pulse repetition frequency (PRF) sets the maximum detectable velocity. If the PRF must be set below the Nyquist requirement due to imaging requirements or hardware constraints, aliasing errors or spectral overlap may corrupt the estimated flow data. To solve this issue, we have devised a deep learning-based framework, powered by a custom slow-time upsampling network (SUP-Net), that leverages spatiotemporal characteristics to upsample the received ultrasound signals across pulse echoes acquired using high-frame-rate ultrasound (HiFRUS). Our framework infers high-PRF signals from signals acquired at low PRF, thereby improving the quality of Doppler flow estimation. SUP-Net was trained and evaluated on in vivo femoral acquisitions from 20 participants and was applied recursively to resolve scenarios with excessive aliasing across a range of PRFs. We report the successful reconstruction of slow-time signals whose frequency content exceeds the Nyquist limit once and twice. By operating on the fundamental slow-time signals, our framework can resolve aliasing-related artifacts in several downstream modalities, including color Doppler and pulse wave Doppler.
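The PRF-velocity trade-off the paper targets follows from the standard pulsed-Doppler Nyquist relation v_max = c·PRF/(4·f0). A quick worked example with assumed scan parameters shows why inferring a higher effective PRF directly raises the detectable velocity:

```python
c, f0 = 1540.0, 5e6  # assumed sound speed (m/s) and transmit frequency (Hz)
for prf in (2e3, 4e3, 8e3):
    v_nyq = c * prf / (4 * f0)  # maximum unaliased velocity
    print(f"PRF {prf/1e3:.0f} kHz -> max unaliased velocity {v_nyq*100:.1f} cm/s")
# PRF 2 kHz -> 15.4 cm/s; 4 kHz -> 30.8 cm/s; 8 kHz -> 61.6 cm/s
```

Each doubling of the effective PRF doubles the unaliased velocity range, which is why recursive application of the network resolves progressively deeper aliasing.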

Vox-MMSD: Voxel-wise Multi-scale and Multi-modal Self-Distillation for Self-supervised Brain Tumor Segmentation.

Zhou Y, Wu J, Fu J, Yue Q, Liao W, Zhang S, Zhang S, Wang G

pubmed · Jul 24 2025
Many deep learning methods have been proposed for brain tumor segmentation from multi-modal Magnetic Resonance Imaging (MRI) scans, which is important for accurate diagnosis and treatment planning. However, supervised learning needs a large amount of labeled data to perform well, and the time-consuming, expensive annotation process or a small training set will limit a model's performance. To deal with these problems, self-supervised pre-training is an appealing solution due to its ability to learn, from a set of unlabeled images, features that transfer to small downstream datasets. However, existing methods often overlook the utilization of multi-modal information and multi-scale features. Therefore, we propose a novel Self-Supervised Learning (SSL) framework that fully leverages multi-modal MRI scans to extract modality-invariant features for brain tumor segmentation. First, we employ a Siamese Block-wise Modality Masking (SiaBloMM) strategy that creates more diverse model inputs for image restoration to simultaneously learn contextual and modality-invariant features. Meanwhile, we propose Overlapping Random Modality Sampling (ORMS) to sample voxel pairs with multi-scale features for self-distillation, enhancing the voxel-wise representation that is important for segmentation tasks. Experiments on the BraTS 2024 adult glioma segmentation dataset showed that, with a small amount of labeled data for fine-tuning, our method improved the average Dice by 3.80 percentage points. In addition, when transferred to three other small downstream datasets with brain tumors from different patient groups, our method improved the average Dice by 3.47 percentage points and outperformed several existing SSL methods. The code is available at https://github.com/HiLab-git/Vox-MMSD.
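Below is a minimal sketch of block-wise modality masking, assuming a (modalities, D, H, W) volume. It illustrates the SiaBloMM input-corruption idea in its simplest form and is not the authors' implementation (their released code is linked above).

```python
import numpy as np

def blockwise_modality_mask(vol, block=16, p=0.5, rng=None):
    """Zero out random blocks independently per modality channel, so the
    restoration target forces the model to borrow context from the other
    modalities (an assumed simplification of SiaBloMM)."""
    rng = rng or np.random.default_rng()
    out = vol.copy()
    C, D, H, W = vol.shape
    for c in range(C):
        for z in range(0, D, block):
            for y in range(0, H, block):
                for x in range(0, W, block):
                    if rng.random() < p:
                        out[c, z:z+block, y:y+block, x:x+block] = 0.0
    return out
```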

Deep Learning-Driven High Spatial Resolution Attenuation Imaging for Ultrasound Tomography (AI-UT).

Liu M, Kou Z, Wiskin JW, Czarnota GJ, Oelze ML

pubmed · Jul 24 2025
Ultrasonic attenuation can be used to characterize tissue properties of the human breast. Both quantitative ultrasound (QUS) and ultrasound tomography (USCT) can provide attenuation estimates. However, limitations have been identified for both approaches. In QUS, generating attenuation maps involves separating the whole image into data blocks. The optimal data block size is around 15 to 30 pulse lengths, which dramatically decreases the spatial resolution of attenuation imaging. In USCT, attenuation is often estimated with a full wave inversion (FWI) method, which is affected by background noise. In order to achieve a high-resolution attenuation image with low variance, a deep learning (DL) based method was proposed. In the approach, RF data from 60 angular views acquired with the QTI Breast Acoustic CT™ Scanner served as the input and attenuation images as the output. To improve image quality for the DL method, the spatial correlation between speed of sound (SOS) and attenuation was used as a constraint in the model. The results indicated that including the SOS structural information improved the performance of the model. With a higher spatial resolution attenuation image, further segmentation of the breast can be achieved. The structural information and actual attenuation values provided by DL-generated attenuation images were validated against values from the literature and the SOS-based segmentation map. The information provided by DL-generated attenuation images can be used as an additional biomarker for breast cancer diagnosis.
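One plausible way to impose SOS structure as a constraint is an auxiliary loss term that discourages attenuation edges where the SOS map is smooth. The sketch below is hypothetical; the paper does not publish its loss formulation.

```python
import tensorflow as tf

def sos_constrained_loss(attn_true, attn_pred, sos_edges, lam=0.1):
    """Hypothetical composite loss: attenuation fidelity plus a structural
    term penalizing attenuation gradients where the SOS edge map is near 0.
    All tensors are rank-4: [batch, H, W, 1]; sos_edges is scaled to [0, 1]."""
    mse = tf.reduce_mean(tf.square(attn_true - attn_pred))
    dy, dx = tf.image.image_gradients(attn_pred)
    pred_edges = tf.abs(dy) + tf.abs(dx)
    structure_term = tf.reduce_mean(pred_edges * (1.0 - sos_edges))
    return mse + lam * structure_term
```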

Minimal Ablative Margin Quantification Using Hepatic Arterial Versus Portal Venous Phase CT for Colorectal Metastases Segmentation: A Dual-center, Retrospective Analysis.

Siddiqi NS, Lin YM, Marques Silva JA, Laimer G, Schullian P, Scharll Y, Dunker AM, O'Connor CS, Jones KA, Brock KK, Bale R, Odisio BC, Paolucci I

pubmed · Jul 24 2025
To compare the predictive value of minimal ablative margin (MAM) quantification using tumor segmentation on intraprocedural contrast-enhanced hepatic arterial (HAP) versus portal venous phase (PVP) CT on local outcomes following percutaneous thermal ablation of colorectal liver metastases (CRLM). This dual-center retrospective study included patients undergoing thermal ablation of CRLM with intraprocedural preablation and postablation contrast-enhanced CT imaging between 2009 and 2021. Tumors were segmented in both HAP and PVP CT phases using an artificial intelligence-based auto-segmentation model and reviewed by a trained radiologist. The MAM was quantified using a biomechanical deformable image registration process. The area under the receiver operating characteristic curve (AUROC) was used to compare the prognostic value for predicting local tumor progression (LTP). Among 81 patients (mean age, 60 ± 13 years; 53 men), 151 CRLMs were included. During 29.4 months of median follow-up, LTP was noted in 24/151 (15.9%). Median tumor volumes on HAP and PVP CT were 1.7 mL and 1.2 mL, respectively, with respective median MAMs of 2.3 and 4.0 mm (both P < 0.001). The AUROC for 1-year LTP prediction was 0.78 (95% CI: 0.70-0.85) on HAP and 0.84 (95% CI: 0.78-0.91) on PVP (P = 0.002). During CT-guided percutaneous thermal ablation, MAM measured based on tumors segmented on PVP images conferred a higher predictive accuracy of ablation outcomes among CRLM patients than those segmented on HAP images, supporting the use of PVP rather than HAP images for segmentation during ablation of CRLMs.
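The AUROC comparison amounts to scoring LTP risk by the (negated) margin size, since a smaller minimal ablative margin implies higher progression risk. A toy version with hypothetical data:

```python
import numpy as np
from sklearn.metrics import roc_auc_score

ltp = np.array([1, 0, 0, 1, 0, 0])                   # hypothetical 1-year LTP labels
mam_pvp = np.array([1.0, 5.2, 4.1, 0.0, 6.3, 3.8])   # hypothetical PVP margins (mm)

# Smaller margin -> higher predicted risk, so negate the margin as the score.
print(roc_auc_score(ltp, -mam_pvp))  # 1.0 on this toy data
```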