Acute Management of Nasal Bone Fractures: A Systematic Review and Practice Management Guideline.

Paliwoda ED, Newman-Plotnick H, Buzzetta AJ, Post NK, LaClair JR, Trandafirescu M, Gildener-Leapman N, Kpodzo DS, Edwards K, Tafen M, Schalet BJ

PubMed | Jul 10, 2025
Nasal bone fractures represent the most common facial skeletal injury, compromising both function and aesthetics. This Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA)-based review analyzed 23 studies published within the past 5 years, selected from 998 records retrieved from PubMed, Embase, and Web of Science. Data from 1780 participants were extracted, focusing on diagnostic methods, surgical techniques, anesthesia protocols, and long-term outcomes. Ultrasound and artificial intelligence-based algorithms improved diagnostic accuracy, while telephone triage helped limit in-person encounters to those that were necessary. Navigation-assisted reduction, ballooning, and septal reduction with polydioxanone plates improved outcomes. Anesthetic approaches ranged from local nerve blocks to general anesthesia with intraoperative administration of lidocaine, alongside techniques to manage pain from postoperative nasal pack removal. Long-term follow-up demonstrated improved quality of life, breathing function, and aesthetic satisfaction with timely and individualized treatment. This review highlights the trend toward personalized, technology-assisted approaches in nasal fracture management and identifies areas for future research.

BSN with Explicit Noise-Aware Constraint for Self-Supervised Low-Dose CT Denoising.

Wang P, Li D, Zhang Y, Chen G, Wang Y, Ma J, He J

PubMed | Jul 10, 2025
Although supervised deep learning methods have made significant advances in low-dose computed tomography (LDCT) image denoising, these approaches typically require pairs of low-dose and normal-dose CT images for training, which are often unavailable in clinical settings. Self-supervised deep learning (SSDL) has great potential to remove the dependence on paired training datasets. However, existing SSDL methods are limited by the assumption that neighboring noise is independent, making them ineffective for handling the spatially correlated noise in LDCT images. To address this issue, this paper introduces a novel SSDL approach, named Noise-Aware Blind Spot Network (NA-BSN), for high-quality LDCT imaging that mitigates the dependence on the neighboring-noise-independence assumption. NA-BSN achieves high-quality image reconstruction without reference to clean data through an explicit noise-aware constraint mechanism applied during self-supervised learning. Specifically, it is experimentally observed and theoretically proven that the l1 norm of CT images in a downsampled space follows a descending trend as the radiation dose increases, and this relationship is used to construct the explicit noise-aware constraint in the BSN architecture for self-supervised LDCT image denoising. Various clinical datasets were adopted to validate the performance of the presented NA-BSN method. Experimental results reveal that NA-BSN significantly reduces spatially correlated CT noise and retains crucial image details in various complex scenarios, such as different types of scanning machines, scanning positions, dose-level settings, and reconstruction kernels.
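
The reported dose-l1 relationship lends itself to a compact illustration. Below is a minimal sketch, not the authors' code, of how an explicit noise-aware penalty on the l1 norm of the denoised output in a downsampled space might be combined with a blind-spot reconstruction loss; the function names, pooling choice, and loss weighting are assumptions.

```python
# Minimal sketch (assumed form, not the paper's implementation) of an
# explicit noise-aware constraint: penalize the l1 norm of the denoised
# output in a downsampled space, exploiting the reported observation that
# this norm decreases as the effective dose rises (i.e., as noise falls).
import torch
import torch.nn.functional as F

def noise_aware_l1(denoised: torch.Tensor, scale: int = 4) -> torch.Tensor:
    """l1 norm of a B x 1 x H x W image batch in a downsampled space."""
    down = F.avg_pool2d(denoised, kernel_size=scale)  # downsample
    return down.abs().mean()                          # mean absolute value as l1 proxy

# Combined with a blind-spot network loss, weighted by a hypothetical lambda_na:
# loss = bsn_reconstruction_loss(pred, noisy_input) + lambda_na * noise_aware_l1(pred)
```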

Research on a deep learning-based model for measurement of X-ray imaging parameters of atlantoaxial joint.

Wu Y, Zheng Y, Zhu J, Chen X, Dong F, He L, Zhu J, Cheng G, Wang P, Zhou S

PubMed | Jul 10, 2025
To construct a deep learning-based SCNet model that automatically measures X-ray imaging parameters related to atlantoaxial subluxation (AAS) in cervical open-mouth view radiographs, and to evaluate the accuracy and reliability of the model. A total of 1973 cervical open-mouth view radiographs were collected from the picture archiving and communication systems (PACS) of two hospitals (Hospitals A and B). Among them, 365 images from Hospital A were randomly selected as the internal test dataset for evaluating the model's performance, and the remaining 1364 images from Hospital A were used as the training and validation datasets for constructing the model and tuning its hyperparameters, respectively. The 244 images from Hospital B were used as an external test dataset to evaluate the robustness and generalizability of the model. The model identified and marked landmarks in the images for four parameters: lateral atlanto-dental space (LADS), atlas lateral mass inclination (ALI), lateral mass width (LW), and axis spinous process deviation distance (ASDD). The landmark measurements on the internal and external test datasets were compared with the mean values of manual measurements by three radiologists as the reference standard. Percentage of correct keypoints (PCK), intra-class correlation coefficient (ICC), mean absolute error (MAE), Pearson correlation coefficient (r), mean square error (MSE), root mean square error (RMSE), and Bland-Altman plots were used to evaluate the performance of the SCNet model. (1) Within a 2 mm distance threshold, the PCK of the SCNet model's predicted landmarks was 98.6-99.7% on the internal test dataset and 98.0-100% on the external test dataset. (2) In the internal test dataset, for the parameters LADS, ALI, LW, and ASDD, there were strong correlation and consistency between the SCNet model predictions and the manual measurements (ICC = 0.80-0.96, r = 0.86-0.96, MAE = 0.47-2.39 mm/°, MSE = 0.38-8.55 mm²/°², RMSE = 0.62-2.92 mm/°). (3) The same four parameters also showed strong correlation and consistency between SCNet and manual measurements in the external test dataset (ICC = 0.81-0.91, r = 0.82-0.91, MAE = 0.46-2.29 mm/°, MSE = 0.29-8.23 mm²/°², RMSE = 0.54-2.87 mm/°). The SCNet model constructed in this study can accurately identify atlantoaxial vertebral landmarks in cervical open-mouth view radiographs and automatically measure AAS-related imaging parameters, and the independent external test set demonstrates a degree of robustness and generalization capability on radiographs that meet acquisition standards.
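
For readers unfamiliar with the headline metric, PCK at a 2 mm threshold can be computed as below. This is a generic sketch of the standard definition; the array shapes and pixel-spacing handling are assumptions rather than the study's code.

```python
import numpy as np

def pck(pred: np.ndarray, gt: np.ndarray, spacing_mm: float, thresh_mm: float = 2.0) -> float:
    """Percentage of correct keypoints: fraction of predicted landmarks that
    fall within thresh_mm of the reference landmarks.
    pred, gt: (N, 2) arrays of pixel coordinates; spacing_mm: pixel size in mm."""
    dist_mm = np.linalg.norm(pred - gt, axis=1) * spacing_mm
    return float((dist_mm <= thresh_mm).mean())

# Example (hypothetical spacing): pck(model_landmarks, radiologist_mean_landmarks, spacing_mm=0.14)
```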

Multiparametric ultrasound techniques are superior to AI-assisted ultrasound for assessment of solid thyroid nodules: a prospective study.

Li Y, Li X, Yan L, Xiao J, Yang Z, Zhang M, Luo Y

PubMed | Jul 10, 2025
To evaluate the diagnostic performance of multiparametric ultrasound (mpUS) and AI-assisted B-mode ultrasound (AI-US) for solid thyroid nodules, and their potential, relative to B-mode alone, to reduce unnecessary biopsies. This prospective study enrolled 226 solid thyroid nodules (145 malignant and 81 benign on pathology) from 189 patients (35 men and 154 women; age range, 19-73 years; mean age, 45 years). Each nodule was examined using B-mode, microvascular flow imaging (MVFI), elastography with the elasticity contrast index (ECI), and an AI system. Image data were recorded for each modality. Ten readers with different experience levels independently evaluated the B-mode images of each nodule and made a "benign" or "malignant" diagnosis, both blinded and unblinded to the AI reports. The most accurate ECI value and MVFI mode were selected and combined with the dichotomous predictions of all readers. Descriptive statistics and AUCs were used to evaluate the diagnostic performance of mpUS and AI-US. Triple mpUS with B-mode, MVFI, and ECI exhibited the highest diagnostic performance (average AUC = 0.811 vs. 0.677 for B-mode, p = 0.001), followed by AI-US (average AUC = 0.718, p = 0.315). Triple mpUS significantly reduced the unnecessary biopsy rate by up to 12% (p = 0.007). AUC and specificity were significantly higher for triple mpUS than for AI-US (both p < 0.05). Compared to AI-US, triple mpUS (B-mode, MVFI, and ECI) exhibited better diagnostic performance for thyroid cancer and significantly reduced the unnecessary biopsy rate. AI systems are expected to take advantage of multi-modal information to facilitate diagnosis.
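
The per-strategy AUC comparison at the heart of the study can be sketched as follows; the labels and score variables are illustrative placeholders, not the authors' analysis code.

```python
# Illustrative computation of the two headline AUCs from per-nodule
# pathology labels (1 = malignant) and continuous suspicion scores.
from sklearn.metrics import roc_auc_score

def compare_strategies(y_true, mpus_scores, ai_us_scores):
    auc_mpus = roc_auc_score(y_true, mpus_scores)    # triple mpUS: 0.811 reported
    auc_ai_us = roc_auc_score(y_true, ai_us_scores)  # AI-US: 0.718 reported
    return auc_mpus, auc_ai_us
```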

Recurrence prediction of invasive ductal carcinoma from preoperative contrast-enhanced computed tomography using deep convolutional neural network.

Umezu M, Kondo Y, Ichikawa S, Sasaki Y, Kaneko K, Ozaki T, Koizumi N, Seki H

PubMed | Jul 10, 2025
Predicting the risk of breast cancer recurrence is crucial for guiding therapeutic strategies, including enhanced surveillance and the consideration of additional treatment after surgery. In this study, we developed a deep convolutional neural network (DCNN) model to predict recurrence within six years after surgery using preoperative contrast-enhanced computed tomography (CECT) images, which are widely available and effective for detecting distant metastases. This retrospective study included preoperative CECT images from 133 patients with invasive ductal carcinoma. The images were classified into recurrence and no-recurrence groups using ResNet-101 and DenseNet-201. Classification performance was evaluated using the area under the receiver operating characteristic curve (AUC) with leave-one-patient-out cross-validation. At the optimal threshold, the classification accuracies of ResNet-101 and DenseNet-201 were 0.73 and 0.72, respectively. The median (interquartile range) AUC of DenseNet-201 (0.70 [0.69-0.72]) was significantly higher than that of ResNet-101 (0.68 [0.66-0.68]) (p < 0.05). These results suggest the potential of preoperative CECT-based DCNN models to predict breast cancer recurrence without the need for additional invasive procedures.
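
Leave-one-patient-out cross-validation, used here to keep all images from one patient out of each training fold, can be sketched with scikit-learn's LeaveOneGroupOut; the model factory and feature arrays are placeholders, not the study's pipeline.

```python
import numpy as np
from sklearn.model_selection import LeaveOneGroupOut

def lopo_scores(model_factory, X, y, patient_ids):
    """Pool per-image recurrence scores across folds, one held-out patient per fold."""
    scores = np.zeros(len(y), dtype=float)
    for train_idx, test_idx in LeaveOneGroupOut().split(X, y, groups=patient_ids):
        model = model_factory()                  # fresh classifier per fold
        model.fit(X[train_idx], y[train_idx])
        scores[test_idx] = model.predict_proba(X[test_idx])[:, 1]
    return scores                                # e.g., feed into roc_auc_score at the end
```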

Data Extraction and Curation from Radiology Reports for Pancreatic Cyst Surveillance Using Large Language Models.

Choubey AP, Eguia E, Hollingsworth A, Chatterjee S, D'Angelica MI, Jarnagin WR, Wei AC, Schattner MA, Do RKG, Soares KC

PubMed | Jul 10, 2025
Manual curation of radiographic features in pancreatic cyst registries for data abstraction and longitudinal evaluation is time-consuming and limits widespread implementation. We examined the feasibility and accuracy of using large language models (LLMs) to extract clinical variables from radiology reports. This single-center retrospective study included patients under surveillance for pancreatic cysts. Nine radiographic elements used to monitor cyst progression were included: cyst size and main pancreatic duct (MPD) size (continuous variables); number of lesions (multi-class); and MPD dilation ≥5 mm, branch duct dilation, presence of a solid component, calcific lesion, pancreatic atrophy, and pancreatitis (categorical). A large language model (GPT-4, OpenAI) was employed to extract the elements of interest with a zero-shot prompting approach, enabling annotation without any training data. A manually annotated institutional cyst database served as the ground truth for comparison. Overall, 3198 longitudinal scans from 991 patients were included. GPT-4 extracted the selected radiographic elements with high accuracy. Among categorical variables, accuracy ranged from 97% for solid component to 99% for calcific lesions. For the continuous variables, accuracy ranged from 92% for cyst size to 97% for MPD size; however, Cohen's kappa was higher for cyst size (0.92) than for MPD size (0.82). The lowest accuracy (81%) was noted for the multi-class variable, number of cysts. LLMs can accurately extract and curate data from radiology reports for pancreatic cyst surveillance and can be reliably used to assemble longitudinal databases. Future applications of this work may potentiate the development of artificial intelligence-based surveillance models.
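
A zero-shot extraction call of the kind described might look like the sketch below, using the OpenAI Python client; the prompt wording, field names, and JSON schema are assumptions, since the study's actual prompts are not reproduced in the abstract.

```python
# Hypothetical zero-shot extraction of the nine radiographic elements.
import json
from openai import OpenAI

FIELDS = ("cyst_size_mm, mpd_size_mm, lesion_count, mpd_dilation_ge_5mm, "
          "branch_duct_dilation, solid_component, calcific_lesion, "
          "pancreatic_atrophy, pancreatitis")

def extract_cyst_features(report_text: str) -> dict:
    client = OpenAI()
    prompt = (f"From the following radiology report, return a JSON object with "
              f"keys [{FIELDS}]; use null for any element not mentioned.\n\n{report_text}")
    resp = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": prompt}],
        temperature=0,  # deterministic annotation; no training examples supplied
    )
    return json.loads(resp.choices[0].message.content)
```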

Artificial Intelligence for Low-Dose CT Lung Cancer Screening: Comparison of Utilization Scenarios.

Lee M, Hwang EJ, Lee JH, Nam JG, Lim WH, Park H, Park CM, Choi H, Park J, Goo JM

PubMed | Jul 10, 2025
BACKGROUND. Artificial intelligence (AI) tools for evaluating low-dose CT (LDCT) lung cancer screening examinations are used predominantly for assisting radiologists' interpretations. Alternate utilization scenarios (e.g., use of AI as a prescreener or backup) warrant consideration. OBJECTIVE. The purpose of this study was to evaluate the impact of different AI utilization scenarios on diagnostic outcomes and interpretation times for LDCT lung cancer screening. METHODS. This retrospective study included 366 individuals (358 men, 8 women; mean age, 64 years) who underwent LDCT from May 2017 to December 2017 as part of an earlier prospective lung cancer screening trial. Examinations were interpreted by one of five readers, who reviewed their assigned cases in two sessions (with and without a commercial AI computer-aided detection tool). These interpretations were used to reconstruct simulated AI utilization scenarios: as an assistant (i.e., radiologists interpret all examinations with AI assistance), as a prescreener (i.e., radiologists only interpret examinations with a positive AI result), or as backup (i.e., radiologists reinterpret examinations when AI suggests a missed finding). A group of thoracic radiologists determined the reference standard. Diagnostic outcomes and mean interpretation times were assessed. Decision-curve analysis was performed. RESULTS. Compared with interpretation without AI (recall rate, 22.1%; per-nodule sensitivity, 64.2%; per-examination specificity, 88.8%; mean interpretation time, 164 seconds), AI as an assistant showed a higher recall rate (30.3%; p < .001), lower per-examination specificity (81.1%), and no significant change in per-nodule sensitivity (64.8%; p = .86) or mean interpretation time (161 seconds; p = .48); AI as a prescreener showed a lower recall rate (20.8%; p = .02) and mean interpretation time (143 seconds; p = .001), higher per-examination specificity (90.3%; p = .04), and no significant difference in per-nodule sensitivity (62.9%; p = .16); and AI as a backup showed an increased recall rate (33.6%; p < .001), per-examination sensitivity (66.4%; p < .001), and mean interpretation time (225 seconds; p = .001), with lower per-examination specificity (79.9%; p < .001). Among scenarios, only AI as a prescreener demonstrated higher net benefit than interpretation without AI; AI as an assistant had the least net benefit. CONCLUSION. Different AI implementation approaches yield varying outcomes. The findings support use of AI as a prescreener as the preferred scenario. CLINICAL IMPACT. An approach whereby radiologists only interpret LDCT examinations with a positive AI result can reduce radiologists' workload while preserving sensitivity.
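
The three simulated scenarios can be reconstructed from per-examination reads along the lines of the sketch below; the boolean fields and the use of the with-AI read as a proxy for reinterpretation in the backup scenario are assumptions, not the study's exact reconstruction logic.

```python
from dataclasses import dataclass

@dataclass
class Exam:
    rad_alone_positive: bool    # radiologist read without AI
    rad_with_ai_positive: bool  # radiologist read with AI assistance
    ai_positive: bool           # standalone AI result

def recall_rate(exams: list, scenario: str) -> float:
    def recalled(e: Exam) -> bool:
        if scenario == "assistant":    # radiologists interpret all exams with AI
            return e.rad_with_ai_positive
        if scenario == "prescreener":  # radiologists see only AI-positive exams
            return e.ai_positive and e.rad_alone_positive
        if scenario == "backup":       # AI flag triggers reinterpretation
            return e.rad_alone_positive or (e.ai_positive and e.rad_with_ai_positive)
        raise ValueError(scenario)
    return sum(recalled(e) for e in exams) / len(exams)
```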

GH-UNet: group-wise hybrid convolution-VIT for robust medical image segmentation.

Wang S, Li G, Gao M, Zhuo L, Liu M, Ma Z, Zhao W, Fu X

PubMed | Jul 10, 2025
Medical image segmentation is vital for accurate diagnosis. While U-Net-based models are effective, they struggle to capture long-range dependencies in complex anatomy. We propose GH-UNet, a Group-wise Hybrid Convolution-ViT model within the U-Net framework, to address this limitation. GH-UNet integrates a hybrid convolution-Transformer encoder for both local detail and global context modeling, a Group-wise Dynamic Gating (GDG) module for adaptive feature weighting, and a cascaded decoder for multi-scale integration. Both the encoder and GDG are modular, enabling compatibility with various CNN or ViT backbones. Extensive experiments on five public datasets and one private dataset show that GH-UNet consistently achieves superior performance. On ISIC2016, it surpasses H2Former with 1.37% and 1.94% gains in Dice and IoU, respectively, while using only 38% of the parameters and 49.61% of the FLOPs. The code is freely accessible at https://github.com/xiachashuanghua/GH-UNet.
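
A plausible minimal form of group-wise dynamic gating is sketched below in PyTorch, purely to illustrate the idea of adaptive per-group feature weighting; the actual GDG design should be taken from the paper's repository, not from this sketch.

```python
import torch
import torch.nn as nn

class GroupGate(nn.Module):
    """Per-group scalar gates predicted from globally pooled features (illustrative)."""
    def __init__(self, channels: int, groups: int = 4):
        super().__init__()
        assert channels % groups == 0
        self.groups = groups
        self.fc = nn.Sequential(nn.Linear(channels, groups), nn.Sigmoid())

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, _, _ = x.shape
        gates = self.fc(x.mean(dim=(2, 3)))                       # B x groups, in (0, 1)
        gates = gates.repeat_interleave(c // self.groups, dim=1)  # expand to B x C
        return x * gates.view(b, c, 1, 1)                         # reweight channel groups
```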

MRI sequence focused on pancreatic morphology evaluation: three-shot turbo spin-echo with deep learning-based reconstruction.

Kadoya Y, Mochizuki K, Asano A, Miyakawa K, Kanatani M, Saito J, Abo H

PubMed | Jul 10, 2025
Background: Higher-resolution magnetic resonance imaging sequences are needed for the early detection of pancreatic cancer. Purpose: To compare the quality of our novel T2-weighted, high-contrast, thin-slice imaging sequence with improved spatial resolution and deep learning-based reconstruction (three-shot turbo spin-echo with deep learning-based reconstruction [3S-TSE-DLR]) for imaging the pancreas against three conventional sequences (half-Fourier acquisition single-shot turbo spin-echo [HASTE], fat-suppressed 3D T1-weighted [FS-3D-T1W] imaging, and magnetic resonance cholangiopancreatography [MRCP]). Material and Methods: Pancreatic images of 50 healthy volunteers acquired with 3S-TSE-DLR, HASTE, FS-3D-T1W imaging, and MRCP were compared by two diagnostic radiologists. A 5-point scale was used to assess motion artifacts, pancreatic margin sharpness, and the ability to identify the main pancreatic duct (MPD) on 3S-TSE-DLR, HASTE, and FS-3D-T1W imaging; the ability to identify the MPD on MRCP was also evaluated. Results: Artifact scores (the higher the score, the fewer the artifacts) were significantly higher for 3S-TSE-DLR than for HASTE, and significantly lower for 3S-TSE-DLR than for FS-3D-T1W imaging, for both radiologists. Sharpness scores were significantly higher for 3S-TSE-DLR than for HASTE and FS-3D-T1W imaging, for both radiologists. The rate of identification of the MPD was significantly higher for 3S-TSE-DLR than for FS-3D-T1W imaging for both radiologists, and significantly higher for 3S-TSE-DLR than for HASTE for one radiologist. The rate of identification of the MPD did not differ significantly between 3S-TSE-DLR and MRCP. Conclusion: 3S-TSE-DLR provides better image sharpness than conventional sequences, identifies the MPD as well as or better than HASTE, and shows identification performance comparable to that of MRCP.
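
The abstract reports paired, per-volunteer 5-point scores compared between sequences; a common analysis for such paired ordinal data is the Wilcoxon signed-rank test, sketched below (the study's actual statistical test is not stated in the abstract).

```python
from scipy.stats import wilcoxon

def compare_reader_scores(scores_seq_a, scores_seq_b):
    """scores_seq_a/b: per-volunteer 5-point scores for two sequences from one reader."""
    stat, p = wilcoxon(scores_seq_a, scores_seq_b)
    return stat, p  # e.g., 3S-TSE-DLR sharpness vs. HASTE sharpness
```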

Non-invasive identification of TKI-resistant NSCLC: a multi-model AI approach for predicting EGFR/TP53 co-mutations.

Li J, Xu R, Wang D, Liang Z, Li Y, Wang Q, Bi L, Qi Y, Zhou Y, Li W

PubMed | Jul 10, 2025
To investigate the value of a multi-model approach based on preoperative CT scans for predicting EGFR/TP53 co-mutation status. We retrospectively included 2171 patients with non-small cell lung cancer (NSCLC) who had pre-treatment computed tomography (CT) scans and epidermal growth factor receptor (EGFR) gene sequencing results from West China Hospital between January 2013 and April 2024. A deep learning model was built to predict EGFR/tumor protein 53 (TP53) co-mutation status. Model performance was evaluated by the area under the curve (AUC) and Kaplan-Meier analysis. We further compared the multi-dimensional model with three single-dimensional models separately, and we explored the value of combining clinical factors with machine-learning factors. Additionally, we investigated 546 patients with 56-panel next-generation sequencing and low-dose computed tomography (LDCT) to explore the biological underpinnings of the radiomics features. In our cohort of 2171 patients (1153 men, 1018 women; median age, 60 years), single-dimensional models were developed using data from 1055 eligible patients. The multi-dimensional model utilizing a Random Forest classifier achieved superior performance, yielding the highest AUC of 0.843 for predicting EGFR/TP53 co-mutations in the test set. The multi-dimensional model demonstrates promising potential for non-invasive prediction of EGFR and TP53 co-mutations, facilitating early and informed clinical decision-making in NSCLC patients at risk of treatment resistance.
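
The best-performing configuration, a Random Forest over fused feature blocks, can be sketched as follows; the feature block names, composition, and hyperparameters are placeholders, not the study's settings.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score

def fit_comutation_model(clin_tr, rad_tr, deep_tr, y_tr,
                         clin_te, rad_te, deep_te, y_te) -> float:
    """Fuse clinical, radiomic, and deep-feature blocks, then fit a Random Forest."""
    X_tr = np.hstack([clin_tr, rad_tr, deep_tr])   # multi-dimensional fusion
    X_te = np.hstack([clin_te, rad_te, deep_te])
    clf = RandomForestClassifier(n_estimators=500, random_state=0)
    clf.fit(X_tr, y_tr)
    return roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1])  # 0.843 reported on test set
```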