
Li M, Zhao J, Liu H, Jin B, Cui X, Wang D

PubMed · Aug 23 2025
Accurate age estimation is essential for assessing pediatric developmental stages and for forensic applications. Conventionally, pediatric age is estimated clinically from bone age on wrist X-rays. However, recent advances in deep learning have enabled other radiological modalities to serve as promising complements. This study explores the effectiveness of deep learning for pediatric age estimation using chest X-rays. We developed a ResNet-based deep neural network model, enhanced with a Coordinate Attention mechanism, to predict pediatric age from chest X-rays. A dataset comprising 128,008 images was retrospectively collected from two large tertiary hospitals in Shanghai. Mean Absolute Error (MAE) and Mean Absolute Percentage Error (MAPE) were employed as the main evaluation metrics across age groups. Further analysis was conducted using Spearman correlation and heatmap visualizations. The model achieved an MAE of 5.86 months for males and 5.80 months for females on the internal validation set. On the external test set, the MAE was 7.40 months for males and 7.29 months for females. The Spearman correlation coefficient exceeded 0.98, indicating a strong positive correlation between predicted and true age. Heatmap analysis revealed that the model focused mainly on the spine, mediastinum, heart, and great vessels, with additional attention to surrounding bones. We constructed a large dataset of pediatric chest X-rays and developed a neural network model integrating Coordinate Attention for age prediction. Experiments demonstrated the model's robustness and showed that chest X-rays can be used effectively for accurate pediatric age estimation. By linking pediatric chest X-rays with age data through deep learning, this approach provides additional support for predicting children's age and can aid in screening for abnormal growth and development.
This study explores whether deep learning could leverage chest X-rays for pediatric age prediction. Trained on over 120,000 images, the model shows high accuracy on internal and external validation sets. This method provides a potential complement for traditional bone age assessment and could reduce radiation exposure.
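The study's two headline metrics, MAE and MAPE, are standard and easy to state precisely. The sketch below shows their definitions; the ages are made up for illustration and are not the study's data:

```python
def mae(y_true, y_pred):
    """Mean Absolute Error: average absolute deviation, in the target's units (months here)."""
    return sum(abs(t - p) for t, p in zip(y_true, y_pred)) / len(y_true)

def mape(y_true, y_pred):
    """Mean Absolute Percentage Error: absolute error as a percentage of the true value."""
    return 100.0 * sum(abs(t - p) / t for t, p in zip(y_true, y_pred)) / len(y_true)

# Hypothetical ages in months, for illustration only
true_ages = [24.0, 60.0, 120.0]
pred_ages = [26.0, 57.0, 121.0]
print(mae(true_ages, pred_ages))              # (2 + 3 + 1) / 3 = 2.0
print(round(mape(true_ages, pred_ages), 2))   # 4.72
```

Note that MAPE weights errors by the reciprocal of true age, so the same absolute error counts more heavily for younger children.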

Dagli MM, Sussman JH, Gujral J, Budihal BR, Kerr M, Yoon JW, Ozturk AK, Cahill PJ, Anari J, Winkelstein BA, Welch WC

PubMed · Aug 23 2025
Adolescent idiopathic scoliosis (AIS) affects a significant portion of the adolescent population, leading to severe spinal deformities if untreated. Diagnosis, surgical planning, and assessment of outcomes are determined primarily by the Cobb angle on anteroposterior spinal radiographs. Screening for scoliosis enables early interventions and improved outcomes. However, screenings are often conducted through school entities where a trained radiologist may not be available to accurately interpret the imaging results. In this study, we developed an artificial intelligence tool utilizing a keypoint region-based convolutional neural network (KR-CNN) for automated thoracic Cobb angle (TCA) measurement. The KR-CNN was trained on 609 whole-spine radiographs of AIS patients and validated using our institutional AIS registry, which included 83 patients who underwent posterior spinal fusion with both preoperative and postoperative anteroposterior X-ray images. The KR-CNN model demonstrated superior performance metrics, including a mean absolute error (MAE) of 2.22, mean squared error (MSE) of 9.1, symmetric mean absolute percentage error (SMAPE) of 4.29, and intraclass correlation coefficient (ICC) of 0.98, outperforming existing methods. This method will enable fast and accurate screening for AIS and assessment of postoperative outcomes and provides a development framework for further automation and validation of spinopelvic measurements.
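Of the metrics reported here, SMAPE is the least standardized; the sketch below uses one common formulation (definitions vary across papers, and the abstract does not specify which was used). The Cobb angles are invented for illustration:

```python
def smape(y_true, y_pred):
    """Symmetric MAPE: absolute error over the mean of |true| and |pred|, in percent.
    One common formulation; other papers use slightly different denominators."""
    n = len(y_true)
    return 100.0 / n * sum(
        abs(p - t) / ((abs(t) + abs(p)) / 2.0)
        for t, p in zip(y_true, y_pred)
    )

# Hypothetical manual vs. automated Cobb angles, in degrees
manual    = [45.0, 30.0, 60.0]
automated = [47.0, 29.0, 58.0]
print(round(smape(manual, automated), 2))  # 3.71
```

Unlike plain MAPE, SMAPE is bounded (0-200% under this formulation) and treats over- and under-estimation symmetrically.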

Zhang J, Lv R, Chen W, Du G, Fu Q, Jiang H

PubMed · Aug 23 2025
Early and accurate brain tumor classification is vital for clinical diagnosis and treatment. Although Convolutional Neural Networks (CNNs) are widely used in medical image analysis, they often struggle to focus adequately on critical information and have limited feature extraction capabilities. To address these challenges, this study proposes a novel Residual Network based on Multi-dimensional Attention and Pinwheel Convolution (Res-MAPNet) for Magnetic Resonance Imaging (MRI)-based brain tumor classification. Res-MAPNet is built on two key modules: the Coordinated Local Importance Enhancement Attention (CLIA) module and the Pinwheel-Shaped Attention Convolution (PSAConv) module. CLIA combines channel attention, spatial attention, and direction-aware positional encoding to focus on lesion areas. PSAConv enhances spatial feature perception through asymmetric padding and grouped convolution, expanding the receptive field for better feature extraction. The proposed model was evaluated on two public brain tumor datasets, classifying images into glioma, meningioma, pituitary tumor, and no tumor. The experimental results show that the proposed model achieves 99.51% accuracy on the three-class task and 98.01% accuracy on the four-class task, outperforming existing mainstream models. Ablation studies validate the effectiveness of CLIA and PSAConv, which improve accuracy by 4.41% and 4.45% over the ConvNeXt baseline, respectively. This study provides an efficient and robust solution for brain tumor computer-aided diagnosis systems with potential for clinical application.
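The abstract does not give CLIA's exact formulation, but the channel-plus-spatial attention pattern it builds on can be illustrated in a few lines. The NumPy sketch below is a rough, generic illustration of that pattern only; the array sizes, pooling choices, and sigmoid gating are assumptions, not the paper's design:

```python
import numpy as np

def channel_attention(x):
    # x: (C, H, W). Squeeze spatial dims, gate each channel with a sigmoid weight.
    w = 1.0 / (1.0 + np.exp(-x.mean(axis=(1, 2))))   # shape (C,)
    return x * w[:, None, None]

def spatial_attention(x):
    # Gate each spatial location by its channel-mean response.
    w = 1.0 / (1.0 + np.exp(-x.mean(axis=0)))        # shape (H, W)
    return x * w[None, :, :]

rng = np.random.default_rng(0)
x = rng.standard_normal((8, 16, 16))                  # toy feature map
y = spatial_attention(channel_attention(x))
print(y.shape)  # (8, 16, 16)
```

Real modules of this kind learn the gating weights with small convolutions or fully connected layers rather than deriving them from raw means; the point here is only the multiplicative re-weighting along channel and spatial axes.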

Chen KH, Lin YH, Wu S, Shih NW, Meng HC, Lin YY, Huang CR, Huang JW

PubMed · Aug 23 2025
Low-dose computed tomography (LDCT) is the most effective tool for early detection of lung cancer. With advances in artificial intelligence, various Computer-Aided Diagnosis (CAD) systems are now used in clinical practice. For radiologists dealing with huge volumes of CT scans, CAD systems are helpful. However, the development of these systems depends on precisely annotated datasets, which are currently limited. Although several lung imaging datasets exist, few publicly available datasets provide segmentation annotations on LDCT images. To address this gap, we developed a dataset based on NLST LDCT images with pixel-level annotations of lung lesions. The dataset includes LDCT scans from 605 patients and 715 annotated lesions, comprising 662 lung tumors and 53 lung nodules. Lesion volumes range from 0.03 cm<sup>3</sup> to 372.21 cm<sup>3</sup>, with 500 lesions smaller than 5 cm<sup>3</sup>, mostly located in the right upper lung. A 2D U-Net model trained on the dataset achieved an IoU of 0.95 on the training set. This dataset enhances the diversity and usability of lung cancer annotation resources.
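The IoU score used to evaluate the U-Net measures overlap between predicted and reference masks. A minimal sketch with toy binary masks (not the dataset's annotations):

```python
def iou(mask_a, mask_b):
    """Intersection over Union of two binary masks, given as flat lists of 0/1."""
    inter = sum(1 for a, b in zip(mask_a, mask_b) if a and b)
    union = sum(1 for a, b in zip(mask_a, mask_b) if a or b)
    return inter / union if union else 1.0

# Toy 5-pixel masks for illustration
pred  = [1, 1, 0, 0, 1]
truth = [1, 0, 0, 1, 1]
print(iou(pred, truth))  # 2 / 4 = 0.5
```

Since the reported 0.95 IoU is on the training set, it describes fit rather than generalization; held-out performance would need separate evaluation.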

Deng S, Huang D, Han X, Zhang H, Wang H, Mao G, Ao W

PubMed · Aug 23 2025
To explore the efficacy of a deep learning (DL) model in predicting perineural invasion (PNI) in prostate cancer (PCa) by conducting multiparametric MRI (mpMRI)-based tumor heterogeneity analysis. This retrospective study included 397 patients with PCa from two medical centers. The patients were divided into training, internal validation (in-vad), and independent external validation (ex-vad) cohorts (n = 173, 74, and 150, respectively). mpMRI-based habitat analysis, comprising T2-weighted imaging, diffusion-weighted imaging, and apparent diffusion coefficient sequences, was performed followed by DL, deep feature selection, and filtration to compute a radscore. Subsequently, six models were constructed: one clinical model, four habitat models (habitats 1, 2, 3, and whole-tumor), and one combined model. Receiver operating characteristic curve analysis was performed to evaluate the models' ability to predict PNI. The four habitat models exhibited robust performance in predicting PNI, with area under the curve (AUC) values of 0.862-0.935, 0.802-0.957, and 0.859-0.939 in the training, in-vad, and ex-vad cohorts, respectively. The clinical model had AUC values of 0.832, 0.818, and 0.789 in the training, in-vad, and ex-vad cohorts, respectively. The combined model outperformed the clinical and habitat models, with AUC, sensitivity, and specificity values of 0.999, 1, and 0.955 for the training cohort. Decision curve analysis and clinical impact curve analysis indicated favorable clinical applicability and utility of the combined model. DL models constructed through mpMRI-based habitat analysis accurately predict the PNI status of PCa.
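The AUC values reported above summarize each model's ROC curve; AUC can equivalently be computed as the probability that a randomly chosen positive case receives a higher score than a randomly chosen negative one. The sketch below uses that rank-based formulation, with invented labels and radscores for illustration:

```python
def auc(labels, scores):
    """AUC as the probability that a random positive outranks a random negative
    (ties count half). Equivalent to the area under the ROC curve."""
    pos = [s for l, s in zip(labels, scores) if l == 1]
    neg = [s for l, s in zip(labels, scores) if l == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Hypothetical PNI labels (1 = invasion) and model radscores
labels = [1, 1, 0, 0, 1, 0]
scores = [0.9, 0.7, 0.4, 0.6, 0.8, 0.2]
print(auc(labels, scores))  # 1.0: every positive outscores every negative
```

This pairwise form is O(n²) but makes the interpretation of values like 0.999 concrete: almost every PNI-positive patient was scored above almost every PNI-negative patient.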

Wu C, Zhang X, Zhang Y, Hui H, Wang Y, Xie W

PubMed · Aug 23 2025
In this study, as a proof of concept, we initiate the development of a Radiology Foundation Model, termed RadFM. We consider three perspectives: dataset construction, model design, and thorough evaluation, summarized as follows: (i) we contribute 4 multimodal datasets with 13M 2D images and 615K 3D scans; combined with a vast collection of existing datasets, this forms our training dataset, termed the Medical Multi-modal Dataset (MedMD); (ii) we propose an architecture that integrates text input with 2D or 3D medical scans and generates responses for diverse radiologic tasks, including diagnosis, visual question answering, report generation, and rationale diagnosis; (iii) beyond evaluation on 9 existing datasets, we propose a new benchmark, RadBench, comprising three tasks that aim to assess foundation models comprehensively. We conduct both automatic and human evaluations on RadBench. RadFM outperforms previously accessible multimodal foundation models, including GPT-4V. Additionally, we adapt RadFM to diverse public benchmarks, surpassing various existing SOTAs.

Jiang H, Zhao A, Yang Q, Yan X, Wang T, Wang Y, Jia N, Wang J, Wu G, Yue Y, Luo S, Wang H, Ren L, Chen S, Liu P, Yao G, Yang W, Song S, Li X, He K, Huang G

PubMed · Aug 23 2025
Carotid ultrasound requires skilled operators due to small vessel dimensions and high anatomical variability, exacerbating sonographer shortages and diagnostic inconsistencies. Prior automation attempts, including rule-based approaches with manual heuristics and reinforcement learning trained in simulated environments, demonstrate limited generalizability and fail to complete real-world clinical workflows. Here, we present UltraBot, a fully learning-based autonomous carotid ultrasound robot, achieving human-expert-level performance through four innovations: (1) A unified imitation learning framework for acquiring anatomical knowledge and scanning operational skills; (2) A large-scale expert demonstration dataset (247,000 samples, 100 × scale-up), enabling embodied foundation models with strong generalization; (3) A comprehensive scanning protocol ensuring full anatomical coverage for biometric measurement and plaque screening; (4) The clinical-oriented validation showing over 90% success rates, expert-level accuracy, up to 5.5 × higher reproducibility across diverse unseen populations. Overall, we show that large-scale deep learning offers a promising pathway toward autonomous, high-precision ultrasonography in clinical practice.

Koetzier LR, Hendriks P, Heemskerk JWT, van der Werf NR, Selles M, van der Molen AJ, Smits MLJ, Goorden MC, Burgmans MC

PubMed · Aug 23 2025
Effective thermal ablation of liver tumors requires precise monitoring of the ablation zone. Computed tomography (CT) thermometry can non-invasively monitor lethal temperatures but suffers from metal artifacts caused by ablation equipment. This study assesses spectral CT thermometry's applicability during microwave ablation, comparing the reproducibility, precision, and accuracy of attenuation-based versus physical density-based thermometry. Furthermore, it identifies optimal metal artifact reduction (MAR) methods: O-MAR, deep learning-MAR, spectral CT, and combinations thereof. Four gel phantoms embedded with temperature sensors underwent a 10- minute, 60 W microwave ablation imaged by dual-layer spectral CT scanner in 23 scans over time. For each scan attenuation-based and physical density-based temperature maps were reconstructed. Attenuation-based and physical density-based thermometry models were tested for reproducibility over three repetitions; a fourth repetition focused on accuracy. MAR techniques were applied to one repetition to evaluate temperature precision in artifact-corrupted slices. The correlation between CT value and temperature was highly linear with an R-squared value exceeding 96 %. Model parameters for attenuation-based and physical density-based thermometry were -0.38 HU/°C and 0.00039 °C<sup>-1</sup>, with coefficients of variation of 2.3 % and 6.7 %, respectively. Physical density maps improved temperature precision in presence of needle artifacts by 73 % compared to attenuation images. O-MAR improved temperature precision with 49 % compared to no MAR. Attenuation-based thermometry yielded narrower Bland-Altman limits-of-agreement (-7.7 °C to 5.3 °C) than physical density-based thermometry. Spectral physical density-based CT thermometry at 150 keV, utilized alongside O-MAR, enhances temperature precision in presence of metal artifacts and achieves reproducible temperature measurements with high accuracy.
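Because the attenuation model is linear, the reported sensitivity of about -0.38 HU/°C lets a temperature change be read directly from a change in CT number. A minimal sketch of that conversion; the ΔHU value is invented for illustration, and a real implementation would calibrate the slope per scanner and material:

```python
# Attenuation-based CT thermometry slope reported in the study: CT number
# falls by ~0.38 HU for each 1 degC of heating in the gel phantom.
SLOPE_HU_PER_DEGC = -0.38

def temperature_change(delta_hu):
    """Estimate the temperature change (degC) from a change in CT number (HU)."""
    return delta_hu / SLOPE_HU_PER_DEGC

# A hypothetical drop of 15.2 HU relative to baseline implies ~40 degC of heating.
print(temperature_change(-15.2))  # ~40.0
```

The same pattern applies to the physical density model, with the 0.00039 °C<sup>-1</sup> coefficient acting on relative density instead of HU.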

Deana C, Biasucci DG, Aspide R, Bagatto D, Brasil S, Brunetti D, Saitta T, Vapireva M, Zanza C, Longhitano Y, Bignami EG, Vetrugno L

PubMed · Aug 23 2025
Intracranial hypertension (IH) is a life-threatening complication that may occur after acute brain injury. Early recognition of IH allows prompt interventions that improve outcomes. Although invasive intracranial monitoring is considered the gold standard for the most severely injured patients, scarce availability of resources, the need for advanced skills, and the potential for complications often limit its utilization. On the other hand, several non-invasive methods for evaluating acutely brain-injured patients for elevated intracranial pressure have been investigated. Clinical examination and neuroradiology represent the cornerstone of patient evaluation in the intensive care unit (ICU). However, multimodal neuromonitoring, employing various widely used tools such as brain ultrasound, automated pupillometry, and skull micro-deformation recordings, increases the possibility of continuous or semi-continuous intracranial pressure monitoring. Furthermore, artificial intelligence (AI) has been investigated as a tool to predict elevated intracranial pressure, shedding light on new diagnostic and treatment horizons with the potential to improve patient outcomes. This narrative review, based on a systematic literature search, summarizes the best available evidence on the use of non-invasive monitoring tools and methods for the assessment of intracranial pressure.

Yuan L, Chen Q, Al-Hallaq H, Yang J, Yang X, Geng H, Latifi K, Cai B, Wu QJ, Xiao Y, Benedict SH, Rong Y, Buchsbaum J, Qi XS

PubMed · Aug 23 2025
To evaluate organs-at-risk (OAR) segmentation variability across eight commercial AI-based segmentation software packages using independent multi-institutional datasets, and to provide recommendations for clinical practices utilizing AI segmentation. 160 planning CT image sets from four anatomical sites (head-and-neck, thorax, abdomen, and pelvis) were retrospectively pooled from three institutions. Contours for 31 OARs generated by the software were compared to clinical contours using multiple accuracy metrics, including the Dice similarity coefficient (DSC), the 95th percentile Hausdorff distance (HD95), and surface DSC, as well as relative added path length (RAPL) as an efficiency metric. A two-factor analysis of variance was used to quantify variability in contouring accuracy across software platforms (inter-software) and patients (inter-patient). Pairwise comparisons were performed to categorize the software into performance groups, and inter-software variations (ISV) were calculated as the average performance differences between the groups. Significant inter-software and inter-patient contouring accuracy variations (p<0.05) were observed for most OARs. The largest ISVs in DSC in each anatomical region were cervical esophagus (0.41), trachea (0.10), spinal cord (0.13), and prostate (0.17). Among the organs evaluated, 7 had mean DSC >0.9 (e.g., heart, liver), 15 had DSC ranging from 0.7 to 0.89 (e.g., parotid, esophagus), and the remaining organs (e.g., optic nerves, seminal vesicles) had DSC <0.7. 16 of the 31 organs (52%) had RAPL less than 0.1. Our results reveal significant inter-software and inter-patient variability in the performance of AI segmentation software. These findings highlight the need for thorough software commissioning, testing, and quality assurance across disease sites, patient-specific anatomies, and image acquisition protocols.
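DSC, the primary overlap metric above, weights the intersection twice against the total foreground of both masks, which makes it more forgiving than IoU for the same masks. A minimal sketch with toy binary masks (not the study's contours):

```python
def dice(mask_a, mask_b):
    """Dice similarity coefficient of two binary masks, given as flat lists of 0/1."""
    inter = sum(1 for a, b in zip(mask_a, mask_b) if a and b)
    total = sum(mask_a) + sum(mask_b)
    return 2.0 * inter / total if total else 1.0

# Toy 6-pixel AI contour vs. clinical ground truth, for illustration
ai_contour  = [1, 1, 1, 0, 0, 1]
clinical_gt = [1, 1, 0, 0, 1, 1]
print(dice(ai_contour, clinical_gt))  # 2*3 / (4+4) = 0.75
```

HD95 complements DSC by measuring boundary distance (the 95th percentile of surface-to-surface distances), catching contour outliers that overlap metrics can hide.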