Page 427 of 4494481 results

AmygdalaGo-BOLT: an open and reliable AI tool to trace boundaries of human amygdala

Zhou, Q., Dong, B., Gao, P., Jintao, W., Xiao, J., Wang, W., Liang, P., Lin, D., Zuo, X.-N., He, H.

bioRxiv preprint, May 13, 2025
Each year, thousands of brain MRI scans are collected to study structural development in children and adolescents. However, the amygdala, a particularly small and complex structure, remains difficult to segment reliably, especially in developing populations where its volume is even smaller. To address this challenge, we developed AmygdalaGo-BOLT, a boundary-aware deep learning model tailored for human amygdala segmentation. It was trained and validated using 854 manually labeled scans from pediatric datasets, with independent samples used to ensure performance generalizability. The model integrates multiscale image features, spatial priors, and self-attention mechanisms within a compact encoder-decoder architecture to enhance boundary detection. Validation across multiple imaging centers and age groups shows that AmygdalaGo-BOLT closely matches expert manual labels, improves processing efficiency, and outperforms existing tools in accuracy. This enables robust and scalable analysis of amygdala morphology in developmental neuroimaging studies where manual tracing is impractical. To support open and reproducible science, we publicly release both the labeled datasets and the full source code.
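The abstract reports close agreement with expert manual labels; in segmentation work this is conventionally quantified with the Dice similarity coefficient. A minimal sketch, illustrative only (the paper's released code defines its own evaluation):

```python
def dice_coefficient(pred, truth):
    """Dice similarity between two binary masks given as sets of voxel indices.

    Dice = 2|P ∩ T| / (|P| + |T|); 1.0 is perfect overlap, 0.0 is none.
    """
    pred, truth = set(pred), set(truth)
    if not pred and not truth:
        return 1.0  # both masks empty: conventionally defined as perfect agreement
    return 2 * len(pred & truth) / (len(pred) + len(truth))
```

For example, masks {1, 2, 3, 4} and {3, 4, 5, 6} share two of eight total voxels, giving a Dice of 0.5.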

Development of a deep learning method for phase retrieval image enhancement in phase contrast microcomputed tomography.

Ding XF, Duan X, Li N, Khoz Z, Wu FX, Chen X, Zhu N

PubMed, May 13, 2025
Propagation-based imaging (one method of X-ray phase contrast imaging) with microcomputed tomography (PBI-µCT) offers the potential to visualise low-density materials, such as soft tissues and hydrogel constructs, which are difficult to identify with conventional absorption-based contrast µCT. Conventional µCT reconstruction produces edge-enhanced contrast (EEC) images, which preserve sharp boundaries but are susceptible to noise and do not provide consistent grey-value representation for the same material. Meanwhile, phase retrieval (PR) algorithms can convert edge-enhanced contrast to area contrast to improve the signal-to-noise ratio (SNR) and contrast-to-noise ratio (CNR), but usually result in over-smoothing, creating inaccuracies in quantitative analysis. To alleviate these problems, this study developed a deep learning-based method called edge view enhanced phase retrieval (EVEPR), which strategically integrates the complementary spatial features of denoised EEC and PR images, and further applied this method to segment hydrogel constructs in vitro and ex vivo. EVEPR used paired denoised EEC and PR images to train a deep convolutional neural network (CNN) on a dataset-to-dataset basis. The CNN was trained to preserve important high-frequency details, such as edges and boundaries from the EEC images, together with the area contrast of the PR images. The CNN-predicted result showed enhanced area contrast beyond conventional PR algorithms while improving SNR and CNR. The enhanced CNR, in particular, allowed the images to be segmented more efficiently. EVEPR was applied to in vitro and ex vivo PBI-µCT images of low-density hydrogel constructs. The enhanced visibility and consistency of the hydrogel constructs were essential for segmenting such materials, which usually exhibit extremely poor contrast. The EVEPR images allowed for more accurate segmentation with reduced manual adjustments. The efficiency in segmentation allowed for the generation of a sizeable database of segmented hydrogel scaffolds, which was used in conventional data-driven segmentation applications. EVEPR was demonstrated to be a robust post-processing method capable of significantly enhancing image quality by training a CNN on paired denoised EEC and PR images. This method not only addresses the common issues of over-smoothing and noise susceptibility in conventional PBI-µCT image processing but also enables efficient and accurate in vitro and ex vivo processing of low-density materials.
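SNR and CNR, the two image-quality metrics the study optimizes, have simple definitions; a small sketch using common conventions (mean signal over background noise, and region contrast over background noise), which may differ in detail from the paper's exact formulas:

```python
from statistics import mean, pstdev

def snr(signal_px, background_px):
    """Signal-to-noise ratio: mean signal intensity over background standard deviation."""
    return mean(signal_px) / pstdev(background_px)

def cnr(region_a_px, region_b_px, background_px):
    """Contrast-to-noise ratio: absolute mean difference between two regions,
    normalized by the background noise level."""
    return abs(mean(region_a_px) - mean(region_b_px)) / pstdev(background_px)
```

In practice the pixel lists would come from regions of interest drawn on the EEC or PR reconstructions.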

Rethinking femoral neck anteversion assessment: a novel automated 3D CT method compared to traditional manual techniques.

Xiao H, Yibulayimu S, Zhao C, Sang Y, Chen Y, Ge Y, Sun Q, Ming Y, Bei M, Zhu G, Song Y, Wang Y, Wu X

PubMed, May 13, 2025
To evaluate the accuracy and reliability of a novel automated 3D CT-based method for measuring femoral neck anteversion (FNA) compared to three traditional manual methods. A total of 126 femurs from 63 full-length CT scans (35 men and 28 women; average age: 52.0 ± 14.7 years) were analyzed. The automated method used a deep learning network for femur segmentation, landmark identification, and anteversion calculation, with results generated based on two axes: Auto_GT (using the greater trochanter-to-intercondylar notch center axis) and Auto_P (using the piriformis fossa-to-intercondylar notch center axis). These results were validated through manual landmark annotation. The same dataset was assessed using three conventional manual methods: the Murphy, Reikeras, and Lee methods. Intra- and inter-observer reliability were assessed using intraclass correlation coefficients (ICCs), and pairwise comparisons analyzed correlations and differences between methods. The automated methods produced consistent FNA measurements (Auto_GT: 17.59 ± 9.16° vs. Auto_P: 17.37 ± 9.17° on the right; 15.08 ± 9.88° vs. 14.84 ± 9.90° on the left). Intra-observer ICCs ranged from 0.864 to 0.961, and inter-observer ICCs between Auto_GT and the manual methods were high, except for the Lee method. No significant differences were observed between the two automated methods or between the automated and manual verification methods. Moreover, strong correlations (R > 0.9, p < 0.001) were found between Auto_GT and the manual methods. The novel automated 3D CT-based method demonstrates strong reproducibility and reliability for measuring femoral neck anteversion, with performance comparable to traditional manual techniques. These results indicate its potential utility for preoperative planning, postoperative evaluation, and computer-assisted orthopedic procedures.
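As an illustration of the geometry behind an FNA measurement, one can project the neck axis and a distal reference axis onto the axial plane and take the angle between them; the axes below are hypothetical stand-ins for the paper's greater-trochanter and piriformis-fossa based axes:

```python
import math

def femoral_neck_anteversion(neck_axis, reference_axis):
    """Anteversion angle (degrees) between two 3-D axes projected onto the
    axial (xy) plane, with z taken along the femoral shaft.

    The landmark-derived axes here are illustrative; the paper constructs its
    axes from specific anatomical landmarks identified by a deep network.
    """
    ax, ay = neck_axis[0], neck_axis[1]            # drop z: project onto axial plane
    bx, by = reference_axis[0], reference_axis[1]
    dot = ax * bx + ay * by
    na, nb = math.hypot(ax, ay), math.hypot(bx, by)
    # clamp to [-1, 1] to guard against floating-point drift before acos
    return math.degrees(math.acos(max(-1.0, min(1.0, dot / (na * nb)))))
```

An axis tilted 17° anteriorly relative to the reference in the axial plane yields an anteversion of 17°, regardless of the axes' z components.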

Artificial intelligence for chronic total occlusion percutaneous coronary interventions.

Rempakos A, Pilla P, Alexandrou M, Mutlu D, Strepkos D, Carvalho PEP, Ser OS, Bahbah A, Amin A, Prasad A, Azzalini L, Ybarra LF, Mastrodemos OC, Rangan BV, Al-Ogaili A, Jalli S, Burke MN, Sandoval Y, Brilakis ES

PubMed, May 13, 2025
Artificial intelligence (AI) has become pivotal in advancing medical care, particularly in interventional cardiology. Recent AI developments have proven effective in guiding advanced procedures and complex decisions. The authors review the latest AI-based innovations in the diagnosis of chronic total occlusions (CTO) and in determining the probability of success of CTO percutaneous coronary intervention (PCI). Neural networks and deep learning strategies were the most commonly used algorithms, and the models were trained and deployed using a variety of data types, such as clinical parameters and imaging. AI holds great promise in facilitating CTO PCI.

An automated cascade framework for glioma prognosis via segmentation, multi-feature fusion and classification techniques.

Hamoud M, Chekima NEI, Hima A, Kholladi NH

PubMed, May 13, 2025
Glioma is one of the most lethal types of brain tumors, accounting for approximately 33% of all diagnosed brain tumor cases. Accurate segmentation and classification are crucial for precise glioma characterization, emphasizing early detection of malignancy, effective treatment planning, and prevention of tumor progression. Magnetic Resonance Imaging (MRI) serves as a non-invasive imaging modality that allows detailed examination of gliomas without exposure to ionizing radiation. However, manual analysis of MRI scans is impractical, time-consuming, subjective, and requires specialized expertise from radiologists. To address this, computer-aided diagnosis (CAD) systems have evolved into powerful tools to support neuro-oncologists in the brain cancer screening process. In this work, we present a glioma classification framework based on 3D multi-modal MRI segmentation using the CNN models SegResNet and Swin UNETR, the latter of which incorporates transformer mechanisms to enhance segmentation performance. MRI images undergo preprocessing with a Gaussian filter and skull stripping to improve tissue localization. Key textural features are then extracted from segmented tumor regions using the Gabor Transform, Discrete Wavelet Transform (DWT), and deep features from ResNet50. These features are fused, normalized, and classified using a Support Vector Machine (SVM) to distinguish between Low-Grade Glioma (LGG) and High-Grade Glioma (HGG). Extensive experiments on benchmark datasets, including BRATS2020 and BRATS2023, demonstrate the effectiveness of the proposed approach. Our model achieved Dice scores of 0.815 for Tumor Core, 0.909 for Whole Tumor, and 0.829 for Enhancing Tumor. For classification, the framework attained 97% accuracy, 94% precision, 96% recall, and a 95% F1-score. These results highlight the potential of the proposed framework to provide reliable support for radiologists in the early detection and classification of gliomas.
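The reported accuracy, precision, recall, and F1 follow the standard confusion-matrix definitions; a minimal sketch (treating HGG as the positive class is an assumption, not stated in the abstract):

```python
def classification_metrics(tp, fp, fn, tn):
    """Accuracy, precision, recall, and F1 from binary confusion-matrix counts.

    tp/fp/fn/tn: true positives, false positives, false negatives, true negatives.
    """
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    precision = tp / (tp + fp)          # of predicted positives, fraction correct
    recall = tp / (tp + fn)             # of actual positives, fraction found
    f1 = 2 * precision * recall / (precision + recall)  # harmonic mean
    return accuracy, precision, recall, f1
```

With counts (40, 10, 0, 50) this gives accuracy 0.9, precision 0.8, and perfect recall.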

DEMAC-Net: A Dual-Encoder Multiattention Collaborative Network for Cervical Nerve Pathway and Adjacent Anatomical Structure Segmentation.

Cui H, Duan J, Lin L, Wu Q, Guo W, Zang Q, Zhou M, Fang W, Hu Y, Zou Z

PubMed, May 13, 2025
Currently, cervical anesthesia is performed using three main approaches: superficial cervical plexus block, deep cervical plexus block, and intermediate plexus nerve block. However, each technique carries inherent risks and demands significant clinical expertise. Ultrasound imaging, known for its real-time visualization capabilities and accessibility, is widely used in both diagnostic and interventional procedures. Nevertheless, accurate segmentation of small and irregularly shaped structures, such as the cervical and brachial plexuses, remains challenging due to image noise, complex anatomical morphology, and limited annotated training data. This study introduces DEMAC-Net, a dual-encoder multiattention collaborative network, to significantly improve the segmentation accuracy of these neural structures. By precisely identifying the cervical nerve pathway (CNP) and adjacent anatomical tissues, DEMAC-Net aims to assist clinicians, especially those less experienced, in effectively guiding anesthesia procedures and accurately identifying optimal needle insertion points. Consequently, this improvement is expected to enhance clinical safety, reduce procedural risks, and streamline decision-making efficiency during ultrasound-guided regional anesthesia. DEMAC-Net combines a dual-encoder architecture with the Spatial Understanding Convolution Kernel (SUCK) and the Spatial-Channel Attention Module (SCAM) to extract multi-scale features effectively. Additionally, a Global Attention Gate (GAG) and inter-layer fusion modules refine relevant features while suppressing noise. A novel dataset, the Neck Ultrasound Dataset (NUSD), was introduced, containing 1,500 annotated ultrasound images across seven anatomical regions. Extensive experiments were conducted on both NUSD and the public BUSI dataset, comparing DEMAC-Net to state-of-the-art models using metrics such as the Dice Similarity Coefficient (DSC) and Intersection over Union (IoU).
On the NUSD dataset, DEMAC-Net achieved a mean DSC of 93.3%, outperforming existing models. For external validation on the BUSI dataset, it demonstrated superior generalization, achieving a DSC of 87.2% and a mean IoU of 77.4%, surpassing other advanced methods. Notably, DEMAC-Net displayed consistent segmentation stability across all tested structures. The proposed DEMAC-Net significantly improves segmentation accuracy for small nerves and complex anatomical structures in ultrasound images, outperforming existing methods in terms of accuracy and computational efficiency. This framework holds great potential for enhancing ultrasound-guided procedures, such as peripheral nerve blocks, by providing more precise anatomical localization, ultimately improving clinical outcomes.
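For any single mask pair, DSC and IoU are algebraically interchangeable via IoU = DSC / (2 − DSC); note the identity holds per image, not for dataset means, so the reported 87.2% DSC and 77.4% mean IoU need not match it exactly. A quick sketch:

```python
def iou_from_dsc(dsc):
    """Convert a Dice score to IoU for the same mask pair: IoU = DSC / (2 - DSC)."""
    return dsc / (2.0 - dsc)

def dsc_from_iou(iou):
    """Inverse mapping: DSC = 2 * IoU / (1 + IoU)."""
    return 2.0 * iou / (1.0 + iou)
```

Because the mapping is monotone, method rankings by DSC and by IoU agree on a per-image basis.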

Segmentation of renal vessels on non-enhanced CT images using deep learning models.

Zhong H, Zhao Y, Zhang Y

PubMed, May 13, 2025
To evaluate the possibility of performing renal vessel reconstruction on non-enhanced CT images using deep learning models. CT scans from 177 patients, covering the non-enhanced, arterial, and venous phases, were selected. These data were randomly divided into a training set (n = 120), validation set (n = 20), and test set (n = 37). In the training and validation sets, a radiologist marked the right renal arteries and veins on the non-enhanced CT images, using the contrast phases as references. Trained deep learning models were tested and evaluated on the test set. A radiologist also performed renal vessel reconstruction on the test set without the contrast phase reference, and the results were used for comparison. Reconstruction using the arterial and venous phases served as the gold standard. Without the contrast phase reference, both the radiologist and the model could accurately identify the main trunks of the renal artery and vein. Accuracy was 91.9% vs. 97.3% (model vs. radiologist) for the artery and 91.9% vs. 100% for the vein; these differences were not statistically significant. The model had difficulty identifying accessory arteries, where its accuracy was significantly lower than the radiologist's (44.4% vs. 77.8%, p = 0.044). The model also had lower accuracy for accessory veins, but the difference was not significant (64.3% vs. 85.7%, p = 0.094). Deep learning models could accurately recognize the main trunks of the right renal artery and vein, with accuracy comparable to that of radiologists. Although the current model still had difficulty recognizing small accessory vessels, further training and model optimization could address these limitations.
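The abstract reports p-values for paired accuracy comparisons without naming the test; one plausible choice for paired radiologist-vs-model outcomes on the same cases is an exact McNemar test on the discordant counts. The function below is an illustration, not the study's actual analysis:

```python
from math import comb

def mcnemar_exact(b, c):
    """Exact (binomial) McNemar test for paired binary outcomes.

    b = cases the model got right and the radiologist got wrong,
    c = cases the radiologist got right and the model got wrong.
    Returns the two-sided p-value under H0 that discordance is symmetric.
    """
    n, k = b + c, min(b, c)
    if n == 0:
        return 1.0  # no discordant pairs: no evidence of asymmetry
    p = 2 * sum(comb(n, i) for i in range(k + 1)) / 2 ** n
    return min(1.0, p)  # two-sided doubling can exceed 1 when b == c
```

The exact form is preferable here because accessory-vessel counts are small, where the chi-squared approximation is unreliable.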

A Deep Learning-Driven Framework for Inhalation Injury Grading Using Bronchoscopy Images

Yifan Li, Alan W Pang, Jo Woon Chong

arXiv preprint, May 13, 2025
Inhalation injuries are difficult to diagnose and grade clinically because traditional methods, such as the Abbreviated Injury Score (AIS), rely on subjective assessments and correlate weakly with clinical outcomes. This study introduces a novel deep learning-based framework for grading inhalation injuries from bronchoscopy images, using the duration of mechanical ventilation as an objective metric. To address the scarcity of medical imaging data, we propose enhanced StarGAN, a generative model that integrates Patch Loss and SSIM Loss to improve the quality and clinical relevance of synthetic images. The augmented dataset generated by enhanced StarGAN significantly improved classification performance when evaluated using the Swin Transformer, achieving an accuracy of 77.78%, an 11.11% improvement over the original dataset. Image quality was assessed using the Fréchet Inception Distance (FID), where enhanced StarGAN achieved the lowest FID of 30.06, outperforming baseline models. Burn surgeons confirmed the realism and clinical relevance of the generated images, particularly the preservation of bronchial structures and color distribution. These results highlight the potential of enhanced StarGAN in addressing data limitations and improving classification accuracy for inhalation injury grading.
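SSIM, used here as a training loss, compares luminance, contrast, and structure statistics between two images; a simplified single-window sketch (real SSIM, and presumably the paper's loss, averages the same expression over a local sliding window):

```python
from statistics import mean

def global_ssim(x, y, data_range=255.0, k1=0.01, k2=0.03):
    """Simplified SSIM computed from whole-image statistics on flat pixel lists.

    This global variant illustrates the luminance/contrast/structure terms of
    SSIM; it is not the windowed formulation used in practice.
    """
    c1, c2 = (k1 * data_range) ** 2, (k2 * data_range) ** 2  # stabilizers
    mx, my = mean(x), mean(y)
    vx = mean((xi - mx) ** 2 for xi in x)                    # variance of x
    vy = mean((yi - my) ** 2 for yi in y)                    # variance of y
    cov = mean((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    return ((2 * mx * my + c1) * (2 * cov + c2)) / ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2))
```

As a loss, one would minimize 1 − SSIM so that higher structural similarity lowers the training objective.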

Deep Learning-Derived Cardiac Chamber Volumes and Mass From PET/CT Attenuation Scans: Associations With Myocardial Flow Reserve and Heart Failure.

Hijazi W, Shanbhag A, Miller RJH, Kavanagh PB, Killekar A, Lemley M, Wopperer S, Knight S, Le VT, Mason S, Acampa W, Rosamond T, Dey D, Berman DS, Chareonthaitawee P, Di Carli MF, Slomka PJ

PubMed, May 13, 2025
Computed tomography (CT) attenuation correction scans are an intrinsic part of positron emission tomography (PET) myocardial perfusion imaging using PET/CT, but anatomic information is rarely derived from these ultralow-dose CT scans. We aimed to assess the association between deep learning-derived cardiac chamber volumes (right atrial, right ventricular, left ventricular, and left atrial) and mass (left ventricular) from these scans with myocardial flow reserve and heart failure hospitalization. We included 18 079 patients with consecutive cardiac PET/CT from 6 sites. A deep learning model estimated cardiac chamber volumes and left ventricular mass from computed tomography attenuation correction imaging. Associations between deep learning-derived CT mass and volumes with heart failure hospitalization and reduced myocardial flow reserve were assessed in a multivariable analysis. During a median follow-up of 4.3 years, 1721 (9.5%) patients experienced heart failure hospitalization. Patients with 3 or 4 abnormal chamber volumes were 7× more likely to be hospitalized for heart failure compared with patients with normal volumes. In adjusted analyses, left atrial volume (hazard ratio [HR], 1.25 [95% CI, 1.19-1.30]), right atrial volume (HR, 1.29 [95% CI, 1.23-1.35]), right ventricular volume (HR, 1.25 [95% CI, 1.20-1.31]), left ventricular volume (HR, 1.27 [95% CI, 1.23-1.35]), and left ventricular mass (HR, 1.25 [95% CI, 1.18-1.32]) were independently associated with heart failure hospitalization. In multivariable analyses, left atrial volume (odds ratio, 1.14 [95% CI, 1.0-1.19]) and ventricular mass (odds ratio, 1.12 [95% CI, 1.6-1.17]) were independent predictors of reduced myocardial flow reserve. Deep learning-derived chamber volumes and left ventricular mass from computed tomography attenuation correction were predictive of heart failure hospitalization and reduced myocardial flow reserve in patients undergoing cardiac PET perfusion imaging. 
This anatomic data can be routinely reported along with other PET/CT parameters to improve risk prediction.
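The risk stratification by number of abnormal chamber volumes can be sketched as a simple per-patient count against normative upper limits; the chamber names and thresholds below are hypothetical placeholders, not the paper's reference values:

```python
def count_abnormal_chambers(volumes_ml, upper_limits_ml):
    """Count how many of a patient's chamber volumes exceed normative upper limits.

    volumes_ml / upper_limits_ml: dicts keyed by chamber name (e.g. "LA", "RA",
    "LV", "RV"). The study links 3-4 abnormal chambers to a roughly 7x higher
    rate of heart failure hospitalization.
    """
    return sum(1 for name, v in volumes_ml.items() if v > upper_limits_ml[name])
```

In a reporting pipeline, patients with counts of 3 or 4 would be flagged as the highest-risk stratum.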

An incremental algorithm for non-convex AI-enhanced medical image processing

Elena Morotti

arXiv preprint, May 13, 2025
Solving non-convex regularized inverse problems is challenging due to their complex optimization landscapes and multiple local minima. However, these models remain widely studied as they often yield high-quality, task-oriented solutions, particularly in medical imaging, where the goal is to enhance clinically relevant features rather than merely minimizing global error. We propose incDG, a hybrid framework that integrates deep learning with incremental model-based optimization to efficiently approximate the $\ell_0$-optimal solution of imaging inverse problems. Built on the Deep Guess strategy, incDG exploits a deep neural network to generate effective initializations for a non-convex variational solver, which refines the reconstruction through regularized incremental iterations. This design combines the efficiency of Artificial Intelligence (AI) tools with the theoretical guarantees of model-based optimization, ensuring robustness and stability. We validate incDG on TpV-regularized optimization tasks, demonstrating its effectiveness in medical image deblurring and tomographic reconstruction across diverse datasets, including synthetic images, brain CT slices, and chest-abdomen scans. Results show that incDG outperforms both conventional iterative solvers and deep learning-based methods, achieving superior accuracy and stability. Moreover, we confirm that training incDG without ground truth does not significantly degrade performance, making it a practical and powerful tool for solving non-convex inverse problems in imaging and beyond.
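The refine-after-warm-start idea can be illustrated with a toy 1-D denoising problem: a smoothed total-variation surrogate stands in for the paper's nonconvex TpV penalty, and any starting array stands in for the Deep Guess network output:

```python
def refine(x0, b, lam=0.1, eps=1e-3, iters=300, step=0.05):
    """Gradient refinement of a warm-started 1-D reconstruction.

    Minimizes 0.5 * ||x - b||^2 + lam * sum_i sqrt((x[i+1] - x[i])^2 + eps),
    a smoothed total-variation surrogate (a stand-in for the paper's TpV term).
    x0 plays the role of the deep-network initialization; here any sequence works.
    """
    x = list(x0)
    n = len(x)
    for _ in range(iters):
        g = [xi - bi for xi, bi in zip(x, b)]        # data-fidelity gradient
        for i in range(n - 1):                       # smoothed-TV gradient
            d = x[i + 1] - x[i]
            gi = lam * d / (d * d + eps) ** 0.5
            g[i] -= gi
            g[i + 1] += gi
        x = [xi - step * gi for xi, gi in zip(x, g)]
    return x
```

The refinement trades a small data-fidelity penalty for a lower regularization term, smoothing the warm start while staying close to the measurements; in incDG this loop would be the model-based stage run after the network's initial guess.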