
ResLink: A Novel Deep Learning Architecture for Brain Tumor Classification with Area Attention and Residual Connections

Sumedha Arya, Nirmal Gaud

arXiv preprint · Aug 24 2025
Brain tumors pose significant health challenges due to their potential to impair critical neurological functions. Early and accurate diagnosis is crucial for effective treatment. In this research, we propose ResLink, a novel deep learning architecture for brain tumor classification using CT scan images. ResLink integrates novel area attention mechanisms with residual connections to enhance feature learning and spatial understanding for spatially rich image classification tasks. The model employs a multi-stage convolutional pipeline, incorporating dropout, regularization, and downsampling, followed by a final attention-based refinement for classification. Trained on a balanced dataset, ResLink achieves a high accuracy of 95% and demonstrates strong generalizability. This research demonstrates the potential of ResLink in improving brain tumor classification, offering a robust and efficient technique for medical imaging applications.
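
As a rough illustration of the core idea (not the authors' code), a residual block whose pixel queries attend over average-pooled spatial "areas" might look like the following PyTorch sketch; the pooling window, head count, and layer layout are assumptions:

```python
import torch
import torch.nn as nn

class AreaAttentionBlock(nn.Module):
    """Residual block with a simple area attention: pixel queries attend
    over average-pooled spatial 'areas' rather than individual pixels.
    Hypothetical sketch -- not the ResLink implementation."""
    def __init__(self, channels: int, area: int = 4):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.BatchNorm2d(channels),
            nn.ReLU(inplace=True),
        )
        self.pool = nn.AvgPool2d(area)  # coarse "areas" serve as keys/values
        self.attn = nn.MultiheadAttention(channels, num_heads=4, batch_first=True)

    def forward(self, x):
        h = self.conv(x)
        b, c, H, W = h.shape
        q = h.flatten(2).transpose(1, 2)               # (B, H*W, C) pixel queries
        kv = self.pool(h).flatten(2).transpose(1, 2)   # (B, n_areas, C)
        out, _ = self.attn(q, kv, kv)
        out = out.transpose(1, 2).reshape(b, c, H, W)
        return x + out                                  # residual connection

x = torch.randn(2, 64, 32, 32)
print(AreaAttentionBlock(64)(x).shape)  # torch.Size([2, 64, 32, 32])
```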

Prediction of functional outcomes in aneurysmal subarachnoid hemorrhage using pre-/postoperative noncontrast CT within 3 days of admission.

Yin P, Wang J, Zhang C, Tang Y, Hu X, Shu H, Wang J, Liu B, Yu Y, Zhou Y, Li X

PubMed · Aug 24 2025
Aneurysmal subarachnoid hemorrhage (aSAH) is a life-threatening condition, and accurate prediction of functional outcomes is critical for optimizing patient management within the initial 3 days of presentation. However, existing clinical scoring systems and imaging assessments do not fully capture clinical variability in predicting outcomes. We developed a deep learning model integrating pre- and postoperative noncontrast CT (NCCT) imaging with clinical data to predict 3-month modified Rankin Scale (mRS) scores in aSAH patients. Using data from 1850 patients across four hospitals, we constructed and validated five models: preoperative, postoperative, stacking imaging, clinical, and fusion models. The fusion model significantly outperformed the others (all p<0.001), achieving a mean absolute error of 0.79 and an area under the curve of 0.92 on the external test set. These findings demonstrate that this integrated deep learning model enables accurate prediction of 3-month outcomes and may serve as a prognostic support tool early in aSAH care.
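
The abstract does not specify the fusion architecture; a minimal sketch of a late-fusion head that concatenates pre-/post-operative NCCT embeddings with clinical variables and regresses the 3-month mRS might look like this (feature dimensions are illustrative assumptions):

```python
import torch
import torch.nn as nn

class FusionHead(nn.Module):
    """Hypothetical late-fusion head: concatenates pre-/post-operative NCCT
    embeddings with clinical variables and regresses the 3-month mRS score.
    Dimensions are assumptions for illustration, not the paper's design."""
    def __init__(self, img_dim=512, clin_dim=16):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(2 * img_dim + clin_dim, 256),
            nn.ReLU(),
            nn.Dropout(0.3),
            nn.Linear(256, 1),  # continuous mRS estimate (0-6)
        )

    def forward(self, pre_feat, post_feat, clinical):
        return self.mlp(torch.cat([pre_feat, post_feat, clinical], dim=1))

head = FusionHead()
pre, post = torch.randn(4, 512), torch.randn(4, 512)
clin = torch.randn(4, 16)
print(head(pre, post, clin).shape)  # torch.Size([4, 1])
```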

Diagnostic value of artificial intelligence-based software for the detection of pediatric upper extremity fractures.

Mollica F, Metz C, Anders MS, Wismayer KK, Schmid A, Niehues SM, Veldhoen S

PubMed · Aug 23 2025
Fractures in children are common in emergency care, and accurate diagnosis is crucial to avoid complications affecting skeletal development. Limited access to pediatric radiology specialists emphasizes the potential of artificial intelligence (AI)-based diagnostic tools. This study evaluates the performance of the AI software BoneView® for detecting fractures of the upper extremity in children aged 2-18 years. A retrospective analysis was conducted using radiographic data from 826 pediatric patients presenting to the university's pediatric emergency department. Independent assessments by two experienced pediatric radiologists served as the reference standard. The diagnostic accuracy of the AI tool compared to the reference standard was evaluated, and performance parameters such as sensitivity, specificity, and positive and negative predictive values were calculated. The AI tool achieved an overall sensitivity of 89% and specificity of 91% for detecting fractures of the upper extremities. Significantly poorer performance compared to the reference standard was observed for the shoulder, elbow, hand, and fingers, while no significant difference was found for the wrist, clavicle, upper arm, and forearm. The software performed best for wrist fractures (sensitivity: 96%; specificity: 94%) and worst for elbow fractures (sensitivity: 87%; specificity: 65%). The assessed software provides diagnostic support in pediatric emergency radiology. While its overall performance is robust, limitations in specific anatomical regions underscore the need for further training of the underlying algorithms. The results suggest that AI can complement clinical expertise but should not replace radiological assessment.
Question: There is no comprehensive analysis of an AI-based tool for the diagnosis of pediatric fractures focusing on the upper extremities.
Findings: The AI-based software demonstrated solid overall diagnostic accuracy in the detection of upper limb fractures in children, with performance differing by anatomical region.
Clinical relevance: AI-based fracture detection can support pediatric emergency radiology, especially where expert interpretation is limited. However, further algorithm training is needed for certain anatomical regions and for detecting associated findings such as joint effusions to maximize clinical benefit.
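
For reference, the reported performance parameters follow directly from a 2x2 confusion matrix; a small sketch with made-up counts (not the study's data) that roughly reproduces the reported wrist performance:

```python
def diagnostic_metrics(tp, fp, tn, fn):
    """Standard 2x2 diagnostic performance measures."""
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "ppv": tp / (tp + fp),  # positive predictive value
        "npv": tn / (tn + fn),  # negative predictive value
    }

# Illustrative counts only: ~96% sensitivity / ~94% specificity.
print(diagnostic_metrics(tp=96, fp=6, tn=94, fn=4))
```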

Predicting pediatric age from chest X-rays using deep learning: a novel approach.

Li M, Zhao J, Liu H, Jin B, Cui X, Wang D

PubMed · Aug 23 2025
Accurate age estimation is essential for assessing pediatric developmental stages and for forensic purposes. Conventionally, pediatric age is clinically estimated from bone age using wrist X-rays. However, recent advances in deep learning enable other radiological modalities to serve as a promising complement. This study aims to explore the effectiveness of deep learning for pediatric age estimation using chest X-rays. We developed a ResNet-based deep neural network model enhanced with a Coordinate Attention mechanism to predict pediatric age from chest X-rays. A dataset comprising 128,008 images was retrospectively collected from two large tertiary hospitals in Shanghai. Mean Absolute Error (MAE) and Mean Absolute Percentage Error (MAPE) were employed as the main evaluation metrics across age groups. Further analysis was conducted using Spearman correlation and heatmap visualizations. The model achieved an MAE of 5.86 months for males and 5.80 months for females on the internal validation set. On the external test set, the MAE was 7.40 months for males and 7.29 months for females. The Spearman correlation coefficient was above 0.98, indicating a strong positive correlation between predicted and true age. Heatmap analysis revealed that the model mainly focused on the spine, mediastinum, heart, and great vessels, with additional attention given to surrounding bones. We successfully constructed a large dataset of pediatric chest X-rays and developed a neural network model integrated with Coordinate Attention for age prediction. Experiments demonstrated the model's robustness and showed that chest X-rays can be effectively utilized for accurate pediatric age estimation. By integrating pediatric chest X-rays with age data using deep learning, we can provide more support for predicting children's age, thereby aiding in the screening of abnormal growth and development. This study explores whether deep learning can leverage chest X-rays for pediatric age prediction. Trained on over 120,000 images, the model shows high accuracy on internal and external validation sets. This method provides a potential complement to traditional bone age assessment and could reduce radiation exposure.
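
The two headline metrics are straightforward to compute; a short sketch with toy values (not study data):

```python
import numpy as np

def mae_mape(y_true_months, y_pred_months):
    """Mean Absolute Error (in months) and Mean Absolute Percentage Error,
    the two evaluation metrics used in the study."""
    y_true = np.asarray(y_true_months, dtype=float)
    err = np.abs(np.asarray(y_pred_months, dtype=float) - y_true)
    return err.mean(), 100 * (err / y_true).mean()

# Toy example: predictions off by roughly 6 months on average.
true_age = np.array([24, 60, 120, 180])
pred_age = np.array([30, 55, 126, 174])
mae, mape = mae_mape(true_age, pred_age)
print(f"MAE = {mae:.2f} months, MAPE = {mape:.2f}%")
```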

An Efficient Dual-Line Decoder Network with Multi-Scale Convolutional Attention for Multi-organ Segmentation

Riad Hassan, M. Rubaiyat Hossain Mondal, Sheikh Iqbal Ahamed, Fahad Mostafa, Md Mostafijur Rahman

arXiv preprint · Aug 23 2025
Proper segmentation of organs-at-risk is important for radiation therapy, surgical planning, and diagnostic decision-making in medical image analysis. While deep learning-based segmentation architectures have made significant progress, they often fail to balance segmentation accuracy with computational efficiency. Most current state-of-the-art methods either prioritize performance at the cost of high computational complexity or compromise accuracy for efficiency. This paper addresses this gap by introducing an efficient dual-line decoder segmentation network (EDLDNet). The proposed method features a noisy decoder, which learns to incorporate structured perturbation at training time for better model robustness, yet at inference time only the noise-free decoder is executed, leading to lower computational cost. Multi-Scale Convolutional Attention Modules (MSCAMs), Attention Gates (AGs), and Up-Convolution Blocks (UCBs) are further utilized to optimize feature representation and boost segmentation performance. By leveraging multi-scale segmentation masks from both decoders, we also utilize a mutation-based loss function to enhance the model's generalization. Our approach outperforms SOTA segmentation architectures on four publicly available medical imaging datasets. EDLDNet achieves SOTA performance with an 84.00% Dice score on the Synapse dataset, surpassing baseline models such as UNet by 13.89% in Dice score while significantly reducing Multiply-Accumulate Operations (MACs) by 89.7%. Compared to recent approaches like EMCAD, EDLDNet not only achieves a higher Dice score but also maintains comparable computational efficiency. The outstanding performance across diverse datasets establishes EDLDNet's strong generalization, computational efficiency, and robustness. The source code, pre-processed data, and pre-trained weights will be available at https://github.com/riadhassan/EDLDNet .
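
A minimal sketch of the dual-decoder trick described above, supervising a noisy auxiliary decoder during training but running only the clean decoder at inference, with placeholder layers standing in for the actual EDLDNet modules:

```python
import torch
import torch.nn as nn

class DualDecoderNet(nn.Module):
    """Sketch of the dual-line decoder idea: an auxiliary 'noisy' decoder is
    supervised during training, but only the clean decoder runs at inference.
    Encoder/decoder internals are placeholders, not the EDLDNet modules."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Conv2d(1, 16, 3, padding=1)
        self.clean_decoder = nn.Conv2d(16, 2, 1)
        self.noisy_decoder = nn.Conv2d(16, 2, 1)

    def forward(self, x):
        feat = self.encoder(x)
        if self.training:
            noisy = self.noisy_decoder(feat + 0.1 * torch.randn_like(feat))
            return self.clean_decoder(feat), noisy  # both masks feed the loss
        return self.clean_decoder(feat)             # noisy path costs nothing

net = DualDecoderNet()
net.train()
clean, noisy = net(torch.randn(1, 1, 64, 64))
net.eval()
pred = net(torch.randn(1, 1, 64, 64))
print(clean.shape, pred.shape)
```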

Generating Synthetic Contrast-Enhanced Chest CT Images from Non-Contrast Scans Using Slice-Consistent Brownian Bridge Diffusion Network

Pouya Shiri, Xin Yi, Neel P. Mistry, Samaneh Javadinia, Mohammad Chegini, Seok-Bum Ko, Amirali Baniasadi, Scott J. Adams

arXiv preprint · Aug 23 2025
Contrast-enhanced computed tomography (CT) imaging is essential for diagnosing and monitoring thoracic diseases, including aortic pathologies. However, contrast agents pose risks such as nephrotoxicity and allergic-like reactions. The ability to generate high-fidelity synthetic contrast-enhanced CT angiography (CTA) images without contrast administration would be transformative, enhancing patient safety and accessibility while reducing healthcare costs. In this study, we propose the first bridge diffusion-based solution for synthesizing contrast-enhanced CTA images from non-contrast CT scans. Our approach builds on the Slice-Consistent Brownian Bridge Diffusion Model (SC-BBDM), leveraging its ability to model complex mappings while maintaining consistency across slices. Unlike conventional slice-wise synthesis methods, our framework preserves full 3D anatomical integrity while operating in a high-resolution 2D fashion, allowing seamless volumetric interpretation under a low memory budget. To ensure robust spatial alignment, we implement a comprehensive preprocessing pipeline that includes resampling, registration using the Symmetric Normalization method, and a sophisticated dilated segmentation mask to extract the aorta and surrounding structures. We create two datasets from the Coltea-Lung dataset: one containing only the aorta and another including both the aorta and heart, enabling a detailed analysis of anatomical context. We compare our approach against baseline methods on both datasets, demonstrating its effectiveness in preserving vascular structures while enhancing contrast fidelity.
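
For intuition, the forward process of a Brownian bridge diffusion interpolates between the source and target images under a bridge variance schedule; a sketch assuming the standard BBDM schedule m_t = t/T with variance 2*s*m_t*(1 - m_t), not the SC-BBDM code:

```python
import torch

def brownian_bridge_forward(x0, y, t, T=1000, s=1.0):
    # Bridge interpolation: pinned to x0 at t=0 and to y at t=T, with
    # variance 2*s*m*(1-m) that vanishes at both endpoints.
    m = t / T
    var = 2.0 * s * m * (1.0 - m)
    return (1.0 - m) * x0 + m * y + var.sqrt() * torch.randn_like(x0)

x0 = torch.randn(1, 1, 128, 128)  # stand-in non-contrast slice
y = torch.randn(1, 1, 128, 128)   # stand-in contrast-enhanced slice
xt = brownian_bridge_forward(x0, y, t=torch.tensor(500.0))
print(xt.shape)  # torch.Size([1, 1, 128, 128])
```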

Spectral computed tomography thermometry for thermal ablation: applicability and needle artifact reduction.

Koetzier LR, Hendriks P, Heemskerk JWT, van der Werf NR, Selles M, van der Molen AJ, Smits MLJ, Goorden MC, Burgmans MC

PubMed · Aug 23 2025
Effective thermal ablation of liver tumors requires precise monitoring of the ablation zone. Computed tomography (CT) thermometry can non-invasively monitor lethal temperatures but suffers from metal artifacts caused by ablation equipment. This study assesses the applicability of spectral CT thermometry during microwave ablation, comparing the reproducibility, precision, and accuracy of attenuation-based versus physical density-based thermometry. Furthermore, it identifies optimal metal artifact reduction (MAR) methods: O-MAR, deep learning-MAR, spectral CT, and combinations thereof. Four gel phantoms embedded with temperature sensors underwent a 10-minute, 60 W microwave ablation imaged with a dual-layer spectral CT scanner in 23 scans over time. For each scan, attenuation-based and physical density-based temperature maps were reconstructed. Attenuation-based and physical density-based thermometry models were tested for reproducibility over three repetitions; a fourth repetition focused on accuracy. MAR techniques were applied to one repetition to evaluate temperature precision in artifact-corrupted slices. The correlation between CT value and temperature was highly linear, with an R-squared value exceeding 96%. Model parameters for attenuation-based and physical density-based thermometry were -0.38 HU/°C and 0.00039 °C⁻¹, with coefficients of variation of 2.3% and 6.7%, respectively. Physical density maps improved temperature precision in the presence of needle artifacts by 73% compared to attenuation images. O-MAR improved temperature precision by 49% compared to no MAR. Attenuation-based thermometry yielded narrower Bland-Altman limits of agreement (-7.7 °C to 5.3 °C) than physical density-based thermometry. Spectral physical density-based CT thermometry at 150 keV, utilized alongside O-MAR, enhances temperature precision in the presence of metal artifacts and achieves reproducible temperature measurements with high accuracy.
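
Given the reported linear model parameters, converting CT maps to temperature maps is a one-line affair; a sketch using those coefficients (the reference temperature and the density sign convention are assumptions):

```python
import numpy as np

# Reported model parameters: -0.38 HU/degC (attenuation-based) and
# 0.00039 1/degC (relative physical-density change per degree).
HU_PER_DEGC = -0.38
RHO_COEFF_PER_DEGC = 0.00039

def temp_from_attenuation(hu_map, hu_ref, t_ref=37.0):
    """Attenuation-based CT thermometry: linear HU-to-temperature mapping."""
    return t_ref + (np.asarray(hu_map) - hu_ref) / HU_PER_DEGC

def temp_from_density(rho_map, rho_ref, t_ref=37.0):
    """Physical density-based thermometry (sign convention assumed:
    density falls as tissue heats and expands)."""
    rel_change = (np.asarray(rho_map) - rho_ref) / rho_ref
    return t_ref - rel_change / RHO_COEFF_PER_DEGC

# A voxel that dropped 9.5 HU from baseline reads ~25 degC above reference.
print(temp_from_attenuation(hu_map=40.5, hu_ref=50.0))  # 62.0
```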

Epicardial and paracardial adipose tissue quantification in short-axis cardiac cine MRI using deep learning.

Zhang R, Wang X, Zhou Z, Ni L, Jiang M, Hu P

PubMed · Aug 23 2025
Epicardial and paracardial adipose tissues (EAT and PAT) are two types of fat depots around the heart that play important roles in cardiac physiology. Manual quantification of EAT and PAT from cardiac MR (CMR) is time-consuming and prone to human bias. Leveraging cardiac motion, we aimed to develop deep learning neural networks for automated segmentation and quantification of EAT and PAT in short-axis cine CMR. A modified U-Net, equipped with modules for multi-resolution convolution, motion information extraction, feature fusion, and dual attention, was developed. Ablation studies were performed to verify the efficacy of each module, and the performance of different networks was compared. The final network incorporating all modules achieved segmentation Dice indices of 77.72% ± 2.53% and 77.18% ± 3.54% for EAT and PAT, respectively, significantly higher than the baseline U-Net and the highest among the compared networks. With our model, the coefficients of determination between predicted and reference EAT and PAT volumes were 0.8550 and 0.8025, respectively. Our proposed network can provide accurate and rapid quantification of EAT and PAT on routine short-axis cine CMR, which can potentially aid cardiologists in clinical settings.
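
For reference, the Dice index used to score the EAT and PAT masks can be computed as follows (a generic sketch, not the authors' evaluation code):

```python
import torch

def dice_index(pred_mask, gt_mask, eps=1e-6):
    """Dice similarity coefficient between two binary masks of equal shape."""
    pred, gt = pred_mask.float(), gt_mask.float()
    inter = (pred * gt).sum()
    return (2 * inter + eps) / (pred.sum() + gt.sum() + eps)

a = torch.randint(0, 2, (1, 128, 128))
b = torch.randint(0, 2, (1, 128, 128))
print(f"Dice = {dice_index(a, b).item():.4f}")
```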

A deep learning model for distinguishing pseudoprogression and tumor progression in glioblastoma based on pre- and post-operative contrast-enhanced T1 imaging.

Li J, Liu R, Xing Y, Yin Q, Su Q

PubMed · Aug 23 2025
Accurately distinguishing pseudoprogression (PsP) from tumor progression (TuP) in patients with glioblastoma (GBM) is crucial for treatment and prognosis. This study develops a deep learning (DL) prognostic model using pre- and post-operative contrast-enhanced T1-weighted (CET1) magnetic resonance imaging (MRI) to forecast the likelihood of PsP or TuP following standard GBM treatment. Brain MRI data and clinical characteristics from 110 GBM patients were divided into a training set (n = 68) and a validation set (n = 42). Pre-operative and post-operative CET1 images were used individually and in combination. A Vision Transformer (ViT) model was built using expert-segmented tumor images to extract DL features. Several mainstream convolutional neural network (CNN) models (DenseNet121, Inception_v3, MobileNet_v2, ResNet18, ResNet50, and VGG16) were built for comparative evaluation. Principal Component Analysis (PCA) and Least Absolute Shrinkage and Selection Operator (LASSO) regression were used to select significant features, which were classified using a Multi-Layer Perceptron (MLP). Model performance was evaluated with Receiver Operating Characteristic (ROC) curves. A multimodal model additionally incorporated DL features and clinical characteristics. The optimal input for predicting TuP versus PsP was the combination of pre- and post-operative CET1 tumor regions. The CET1-ViT model achieved an area under the curve (AUC) of 95.5% and accuracy of 90.7% on the training set, and an AUC of 95.2% and accuracy of 96.7% on the validation set, outperforming the mainstream CNN models. The multimodal model showed superior performance, with AUCs of 98.6% and 99.3% on the training and validation sets, respectively. We developed a DL model based on pre- and post-operative CET1 imaging that can effectively forecast PsP versus TuP in GBM patients, offering potential for evaluating treatment responses and providing early indications of tumor progression.
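
The PCA, LASSO, and MLP stages map naturally onto a scikit-learn pipeline; a sketch with random stand-in features and illustrative hyperparameters (component count, alpha, and hidden size are assumptions, not the paper's settings):

```python
import numpy as np
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.feature_selection import SelectFromModel
from sklearn.linear_model import Lasso
from sklearn.neural_network import MLPClassifier

# Random stand-ins for the ViT deep features (patients x features) and
# PsP-vs-TuP labels; dimensions are illustrative only.
rng = np.random.default_rng(0)
X = rng.normal(size=(110, 768))
y = rng.integers(0, 2, size=110)

clf = Pipeline([
    ("scale", StandardScaler()),
    ("pca", PCA(n_components=50)),
    ("lasso", SelectFromModel(Lasso(alpha=0.001))),  # LASSO feature selection
    ("mlp", MLPClassifier(hidden_layer_sizes=(64,), max_iter=500)),
])
clf.fit(X, y)
print(f"training accuracy: {clf.score(X, y):.3f}")
```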

Towards expert-level autonomous carotid ultrasonography with large-scale learning-based robotic system.

Jiang H, Zhao A, Yang Q, Yan X, Wang T, Wang Y, Jia N, Wang J, Wu G, Yue Y, Luo S, Wang H, Ren L, Chen S, Liu P, Yao G, Yang W, Song S, Li X, He K, Huang G

PubMed · Aug 23 2025
Carotid ultrasound requires skilled operators due to small vessel dimensions and high anatomical variability, exacerbating sonographer shortages and diagnostic inconsistencies. Prior automation attempts, including rule-based approaches with manual heuristics and reinforcement learning trained in simulated environments, demonstrate limited generalizability and fail to complete real-world clinical workflows. Here, we present UltraBot, a fully learning-based autonomous carotid ultrasound robot, achieving human-expert-level performance through four innovations: (1) A unified imitation learning framework for acquiring anatomical knowledge and scanning operational skills; (2) A large-scale expert demonstration dataset (247,000 samples, 100 × scale-up), enabling embodied foundation models with strong generalization; (3) A comprehensive scanning protocol ensuring full anatomical coverage for biometric measurement and plaque screening; (4) The clinical-oriented validation showing over 90% success rates, expert-level accuracy, up to 5.5 × higher reproducibility across diverse unseen populations. Overall, we show that large-scale deep learning offers a promising pathway toward autonomous, high-precision ultrasonography in clinical practice.
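
As a toy illustration of the behavior-cloning idea behind such a system (the observation embedding, architecture, and 6-DoF action space are assumptions, not UltraBot's design):

```python
import torch
import torch.nn as nn

class ScanPolicy(nn.Module):
    """Toy imitation-learning policy: maps an ultrasound image embedding to
    a 6-DoF probe motion, trained to mimic expert demonstrations."""
    def __init__(self, obs_dim=256, act_dim=6):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(obs_dim, 128), nn.ReLU(),
                                 nn.Linear(128, act_dim))

    def forward(self, obs):
        return self.net(obs)

policy = ScanPolicy()
opt = torch.optim.Adam(policy.parameters(), lr=1e-4)
obs = torch.randn(32, 256)            # one batch of demonstration states
expert_action = torch.randn(32, 6)    # corresponding expert probe motions
loss = nn.functional.mse_loss(policy(obs), expert_action)  # imitation loss
opt.zero_grad()
loss.backward()
opt.step()
print(f"behavior-cloning loss: {loss.item():.4f}")
```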