
Denoising of high-resolution 3D UTE-MR angiogram data using lightweight and efficient convolutional neural networks.

Tessema AW, Ambaye DT, Cho H

pubmed · May 22 2025
High-resolution magnetic resonance angiography (~50 μm<sup>3</sup> MRA) data plays a critical role in the accurate diagnosis of various vascular disorders. However, it is very challenging to acquire and is susceptible to artifacts and noise, which limit its ability to visualize smaller blood vessels and necessitate substantial noise reduction. Among many techniques, the BM4D filter is a state-of-the-art denoising method, but it comes with high computational cost, particularly for high-resolution 3D MRA data. In this research, five different optimized convolutional neural networks were used to denoise contrast-enhanced UTE-MRA data in a supervised learning approach. Since noise-free MRA data is difficult to acquire, images denoised with the BM4D filter were used as ground truth; the work focused mainly on reducing computational cost and inference time for denoising high-resolution UTE-MRA data. All five models generated denoised data closely matching the ground truth, with different computational footprints. Among them, the nested-UNet model produced images nearly identical to the ground truth, achieving SSIM, PSNR, and MSE of 0.998, 46.12, and 3.38e-5 with 3× faster inference than the BM4D filter. The more heavily optimized UNet and attention-UNet models produced images comparable to nested-UNet's while running 8.8× and 7.1× faster than the BM4D filter, respectively. In conclusion, using highly optimized networks, we show that high-resolution UTE-MRA data can be denoised with significantly shorter inference time, even with limited datasets from animal models, potentially making high-resolution 3D UTE-MRA data less computationally burdensome to process.
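The reported SSIM, PSNR, and MSE are standard image-quality metrics computed against the BM4D-denoised ground truth. A minimal numpy sketch of the MSE/PSNR part (the exact evaluation code is not given in the abstract; the synthetic volumes below are illustrative):

```python
import numpy as np

def mse(ref: np.ndarray, test: np.ndarray) -> float:
    """Mean squared error between two same-shaped volumes."""
    return float(np.mean((ref.astype(np.float64) - test.astype(np.float64)) ** 2))

def psnr(ref: np.ndarray, test: np.ndarray, data_range: float = 1.0) -> float:
    """Peak signal-to-noise ratio in dB for a given intensity range."""
    return float(10.0 * np.log10(data_range ** 2 / mse(ref, test)))

# Example on synthetic volumes normalized to [0, 1]
rng = np.random.default_rng(0)
gt = rng.random((16, 16, 16))
noisy = np.clip(gt + rng.normal(0.0, 0.01, gt.shape), 0.0, 1.0)
print(round(psnr(gt, noisy), 1))
```

With noise of standard deviation 0.01 on a unit intensity range, the PSNR lands around 40 dB, in the same regime as the values reported above.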

HealthiVert-GAN: A Novel Framework of Pseudo-Healthy Vertebral Image Synthesis for Interpretable Compression Fracture Grading.

Zhang Q, Chuang C, Zhang S, Zhao Z, Wang K, Xu J, Sun J

pubmed · May 22 2025
Osteoporotic vertebral compression fractures (OVCFs) are prevalent in the elderly population, typically assessed on computed tomography (CT) scans by evaluating vertebral height loss. This assessment helps determine the fracture's impact on spinal stability and the need for surgical intervention. However, the absence of pre-fracture CT scans and standardized vertebral references leads to measurement errors and inter-observer variability, while irregular compression patterns further challenge the precise grading of fracture severity. While deep learning methods have shown promise in aiding OVCF screening, they often lack interpretability and sufficient sensitivity, limiting their clinical applicability. To address these challenges, we introduce a novel vertebra synthesis-height loss quantification-OVCF grading framework. Our proposed model, HealthiVert-GAN, utilizes a coarse-to-fine synthesis network designed to generate pseudo-healthy vertebral images that simulate the pre-fracture state of fractured vertebrae. This model integrates three auxiliary modules that leverage the morphology and height information of adjacent healthy vertebrae to ensure anatomical consistency. Additionally, we introduce the Relative Height Loss of Vertebrae (RHLV) as a quantification metric, which divides each vertebra into three sections to measure height loss between pre-fracture and post-fracture states, followed by fracture severity classification using a Support Vector Machine (SVM). Our approach achieves state-of-the-art classification performance on both the Verse2019 dataset and an in-house dataset, and it provides cross-sectional distribution maps of vertebral height loss. This practical tool enhances diagnostic accuracy in clinical settings and assists in surgical decision-making.
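RHLV, as described, measures per-section height loss between the synthesized pre-fracture vertebra and the observed fractured one. A hedged sketch of that metric (section order, units, and the function name are assumptions, not the authors' code); the three resulting values would then feed the SVM severity classifier:

```python
import numpy as np

def rhlv(pre_heights, post_heights):
    """Relative Height Loss of Vertebrae per section (e.g. anterior, middle, posterior).

    pre_heights:  heights measured on the pseudo-healthy (synthesized) vertebra
    post_heights: heights measured on the fractured vertebra
    """
    pre = np.asarray(pre_heights, dtype=float)
    post = np.asarray(post_heights, dtype=float)
    return (pre - post) / pre

# Example: mild anterior wedging (heights in mm, purely illustrative)
loss = rhlv([24.0, 25.0, 26.0], [18.0, 23.0, 25.5])
print(loss.round(3))
```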

An X-ray bone age assessment method for hands and wrists of adolescents in Western China based on feature fusion deep learning models.

Wang YH, Zhou HM, Wan L, Guo YC, Li YZ, Liu TA, Guo JX, Li DY, Chen T

pubmed · May 22 2025
The epiphyses of the hand and wrist serve as crucial indicators for assessing skeletal maturity in adolescents. This study aimed to develop a deep learning (DL) model for bone age (BA) assessment using hand and wrist X-ray images, addressing the challenge of classifying BA in adolescents. The results of this DL-based classification were then compared and analyzed with those obtained from manual assessment. A retrospective analysis was conducted on 688 hand and wrist X-ray images of adolescents aged 11.00-23.99 years from western China, which were randomly divided into training, validation, and test sets. The BA assessment results were initially analyzed and compared using four DL network models: InceptionV3, InceptionV3 + SE + Sex, InceptionV3 + Bilinear, and InceptionV3 + Bilinear + SE + Sex, to identify the DL model with the best classification performance. Subsequently, the results of the top-performing model were compared with those of manual classification. The study findings revealed that the InceptionV3 + Bilinear + SE + Sex model exhibited the best performance, achieving classification accuracies of 96.15% and 90.48% for the training and test sets, respectively. Furthermore, based on the InceptionV3 + Bilinear + SE + Sex model, classification accuracies were calculated for four age groups (< 14.0 years, 14.0 years ≤ age < 16.0 years, 16.0 years ≤ age < 18.0 years, ≥ 18.0 years), with notable accuracies of 100% for the age groups 16.0 years ≤ age < 18.0 years and ≥ 18.0 years. The BA classification, utilizing the feature fusion DL network model, holds significant reference value for determining the age of criminal responsibility of adolescents, particularly at the critical legal age boundaries of 14.0, 16.0, and 18.0 years.
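The best model combines InceptionV3 features with bilinear pooling, squeeze-and-excitation attention, and a sex input. As an isolated illustration of the bilinear part, here is one common bilinear-pooling formulation in numpy (dimensions are illustrative, and this is not the authors' implementation):

```python
import numpy as np

def bilinear_pool(feat_a: np.ndarray, feat_b: np.ndarray) -> np.ndarray:
    """Bilinear pooling of two feature maps of shape (C, H*W).

    Returns a flattened, signed-sqrt and L2-normalized descriptor of size C_a * C_b.
    """
    n = feat_a.shape[1]
    b = feat_a @ feat_b.T / n               # (C_a, C_b) outer-product pooling
    v = b.ravel()
    v = np.sign(v) * np.sqrt(np.abs(v))     # signed square root
    return v / (np.linalg.norm(v) + 1e-12)  # L2 normalization

rng = np.random.default_rng(0)
fa = rng.normal(size=(8, 49))   # e.g. 8 channels over a 7x7 spatial grid
fb = rng.normal(size=(4, 49))
desc = bilinear_pool(fa, fb)
print(desc.shape)
```

The pooled descriptor captures pairwise channel interactions, which is what makes bilinear models attractive for fine-grained tasks like epiphysis maturity grading.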

Deep learning-based model for difficult transfemoral access prediction compared with human assessment in stroke thrombectomy.

Canals P, Garcia-Tornel A, Requena M, Jabłońska M, Li J, Balocco S, Díaz O, Tomasello A, Ribo M

pubmed · May 22 2025
In mechanical thrombectomy (MT), extracranial vascular tortuosity is among the main determinants of procedure duration and success. Currently, no rapid and reliable method exists to identify the anatomical features precluding fast and stable access to the cervical vessels. A retrospective sample of 513 patients was included in this study. Patients underwent first-line transfemoral MT following anterior circulation large vessel occlusion stroke. Difficult transfemoral access (DTFA) was defined as impossible common carotid catheterization or time from groin puncture to first carotid angiogram >30 min. A machine learning model based on 29 anatomical features automatically extracted from head-and-neck computed tomography angiography (CTA) was developed to predict DTFA. Three experienced raters independently assessed the likelihood of DTFA on a reduced cohort of 116 cases using a Likert scale as a benchmark for the model, using preprocedural CTA as well as automatic 3D vascular segmentation separately. Among the study population, 11.5% of procedures (59/513) presented DTFA. Six different features from the aortic, supra-aortic, and cervical regions were included in the model. Cross-validation resulted in an area under the receiver operating characteristic (AUROC) curve of 0.76 (95% CI 0.75 to 0.76) for DTFA prediction, with high sensitivity for impossible access identification (0.90, 95% CI 0.81 to 0.94). The model outperformed human assessment in the reduced cohort [F1-score (95% CI) by experts with CTA: 0.43 (0.37 to 0.50); experts with 3D segmentation: 0.50 (0.46 to 0.54); and model: 0.70 (0.65 to 0.75)]. A fully automatic model for DTFA prediction was developed and validated. The presented method improved expert assessment of difficult access prediction in stroke MT. Derived information could be used to guide decisions regarding arterial access for MT.
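The DTFA label itself is a simple rule from the study definition above; a small sketch (argument names are assumptions):

```python
from typing import Optional

def is_dtfa(catheterization_possible: bool,
            puncture_to_angiogram_min: Optional[float]) -> bool:
    """Difficult transfemoral access (DTFA) per the study definition:
    impossible common carotid catheterization, or more than 30 minutes
    from groin puncture to the first carotid angiogram."""
    if not catheterization_possible:
        return True
    return puncture_to_angiogram_min is not None and puncture_to_angiogram_min > 30.0

print(is_dtfa(True, 12.0), is_dtfa(True, 42.0), is_dtfa(False, None))
```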

Render-FM: A Foundation Model for Real-time Photorealistic Volumetric Rendering

Zhongpai Gao, Meng Zheng, Benjamin Planche, Anwesa Choudhuri, Terrence Chen, Ziyan Wu

arxiv preprint · May 22 2025
Volumetric rendering of Computed Tomography (CT) scans is crucial for visualizing complex 3D anatomical structures in medical imaging. Current high-fidelity approaches, especially neural rendering techniques, require time-consuming per-scene optimization, limiting clinical applicability due to computational demands and poor generalizability. We propose Render-FM, a novel foundation model for direct, real-time volumetric rendering of CT scans. Render-FM employs an encoder-decoder architecture that directly regresses 6D Gaussian Splatting (6DGS) parameters from CT volumes, eliminating per-scan optimization through large-scale pre-training on diverse medical data. By integrating robust feature extraction with the expressive power of 6DGS, our approach efficiently generates high-quality, real-time interactive 3D visualizations across diverse clinical CT data. Experiments demonstrate that Render-FM achieves visual fidelity comparable or superior to specialized per-scan methods while drastically reducing preparation time from nearly an hour to seconds for a single inference step. This advancement enables seamless integration into real-time surgical planning and diagnostic workflows. The project page is: https://gaozhongpai.github.io/renderfm/.

DP-MDM: detail-preserving MR reconstruction via multiple diffusion models.

Geng M, Zhu J, Hong R, Liu Q, Liang D, Liu Q

pubmed · May 22 2025
<i>Objective.</i> Magnetic resonance imaging (MRI) is critical in medical diagnosis and treatment by capturing detailed features, such as subtle tissue changes, which help clinicians make precise diagnoses. However, the widely used single diffusion model has limitations in accurately capturing more complex details. This study aims to address these limitations by proposing an efficient method to enhance the reconstruction of detailed features in MRI. <i>Approach.</i> We present a detail-preserving reconstruction method that leverages multiple diffusion models (DP-MDM) to extract structural and detailed features in the k-space domain, which complements the image domain. Since high-frequency information in k-space is more systematically distributed around the periphery, compared to the irregular distribution of detailed features in the image domain, this systematic distribution allows for more efficient extraction of detailed features. To further reduce redundancy and enhance model performance, we introduce virtual binary masks with adjustable circular center windows that selectively focus on high-frequency regions. These masks align with the frequency distribution of k-space data, enabling the model to focus more efficiently on high-frequency information. The proposed method employs a cascaded architecture, where the first diffusion model recovers low-frequency structural components, with subsequent models enhancing high-frequency details during the iterative reconstruction stage. <i>Main results.</i> Experimental results demonstrate that DP-MDM achieves superior performance across multiple datasets. On the <i>T1-GE brain</i> dataset with 2D random sampling at <i>R</i> = 15, DP-MDM achieved 35.14 dB peak signal-to-noise ratio (PSNR) and 0.8891 structural similarity (SSIM), outperforming other methods. The proposed method also showed robust performance on the <i>Fast-MRI</i> and <i>Cardiac MR</i> datasets, achieving the highest PSNR and SSIM values. <i>Significance.</i> DP-MDM significantly advances MRI reconstruction by balancing structural integrity and detail preservation. It not only enhances diagnostic accuracy through improved image quality but also offers a versatile framework that can potentially be extended to other imaging modalities, thereby broadening its clinical applicability.
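The virtual binary masks are described as adjustable circular center windows that split k-space into a low-frequency structural region and a peripheral high-frequency detail region. A numpy sketch of such a mask (the radius and image size are illustrative, not the paper's settings):

```python
import numpy as np

def circular_center_mask(shape, radius):
    """Binary mask that is 1 inside a circle around the k-space center."""
    h, w = shape
    yy, xx = np.ogrid[:h, :w]
    dist = np.sqrt((yy - h / 2) ** 2 + (xx - w / 2) ** 2)
    return (dist <= radius).astype(np.float32)

img = np.random.default_rng(0).random((64, 64))
kspace = np.fft.fftshift(np.fft.fft2(img))   # put the zero frequency at the center
mask = circular_center_mask(kspace.shape, radius=8)
low = kspace * mask           # central low-frequency (structural) region
high = kspace * (1 - mask)    # peripheral high-frequency (detail) region
print(int(mask.sum()))
```

Because the mask is binary, the two regions partition k-space exactly, which is what lets one cascaded model handle structure and later models handle detail.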

ActiveNaf: A novel NeRF-based approach for low-dose CT image reconstruction through active learning.

Zidane A, Shimshoni I

pubmed · May 22 2025
CT imaging provides essential information about internal anatomy; however, conventional CT imaging delivers radiation doses that can become problematic for patients requiring repeated imaging, highlighting the need for dose-reduction techniques. This study aims to reduce radiation doses without compromising image quality. We propose an approach that combines Neural Attenuation Fields (NAF) with an active learning strategy to better optimize CT reconstructions given a limited number of X-ray projections. Our method uses a secondary neural network to predict the Peak Signal-to-Noise Ratio (PSNR) of 2D projections generated by NAF from a range of angles in the operational range of the CT scanner. This prediction serves as a guide for the active learning process in choosing the most informative projections. In contrast to conventional techniques that acquire all X-ray projections in a single session, our technique iteratively acquires projections. The iterative process improves reconstruction quality, reduces the number of required projections, and decreases patient radiation exposure. We tested our methodology on spinal imaging using a limited subset of the VerSe 2020 dataset. We compare image quality metrics (PSNR3D, SSIM3D, and PSNR2D) to the baseline method and find significant improvements. Our method achieves the same quality with 36 projections as the baseline method achieves with 60. Our findings demonstrate that our approach achieves high-quality 3D CT reconstructions from sparse data, producing clearer and more detailed images of anatomical structures. This work lays the groundwork for advanced imaging techniques, paving the way for safer and more efficient medical imaging procedures.
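The active-learning step uses a secondary network's predicted per-angle PSNR of NAF-rendered projections to choose which projections to acquire next. A toy selection loop, assuming that a lower predicted PSNR marks a more informative angle (the abstract does not spell out the exact criterion, so that ranking is an assumption):

```python
import numpy as np

def select_next_angles(candidate_angles, predicted_psnr, k=3):
    """Pick the k candidate angles whose rendered projections have the
    lowest predicted PSNR, i.e. where the current reconstruction is worst."""
    order = np.argsort(predicted_psnr)       # ascending predicted PSNR
    return [candidate_angles[i] for i in order[:k]]

angles = list(range(0, 180, 10))             # candidate gantry angles in degrees
rng = np.random.default_rng(1)
pred = rng.uniform(20, 40, size=len(angles)) # stand-in for the PSNR-prediction net
print(select_next_angles(angles, pred, k=3))
```

In the full method this selection would alternate with NAF retraining, so each acquisition round targets the views the current reconstruction handles worst.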

Radiomics-Based Early Triage of Prostate Cancer: A Multicenter Study from the CHAIMELEON Project

Vraka, A., Marfil-Trujillo, M., Ribas-Despuig, G., Flor-Arnal, S., Cerda-Alberich, L., Jimenez-Gomez, P., Jimenez-Pastor, A., Marti-Bonmati, L.

medrxiv preprint · May 22 2025
Prostate cancer (PCa) is the most commonly diagnosed malignancy in men worldwide. Accurate triage of patients based on tumor aggressiveness and staging is critical for selecting appropriate management pathways. While magnetic resonance imaging (MRI) has become a mainstay in PCa diagnosis, most predictive models rely on multiparametric imaging or invasive inputs, limiting generalizability in real-world clinical settings. This study aimed to develop and validate machine learning (ML) models using radiomic features extracted from T2-weighted MRI, alone and in combination with clinical variables, to predict ISUP grade (tumor aggressiveness), lymph node involvement (cN), and distant metastasis (cM). A retrospective multicenter cohort from three European sites in the Chaimeleon project was analyzed. Radiomic features were extracted from prostate zone segmentations and lesion masks, following standardized preprocessing and ComBat harmonization. Feature selection and model optimization were performed using nested cross-validation and Bayesian tuning. Hybrid models were trained using XGBoost and interpreted with SHAP values. The ISUP model achieved an AUC of 0.66, while the cN and cM models reached AUCs of 0.77 and 0.80, respectively. The best-performing models consistently combined prostate zone radiomics with clinical features such as PSA, PIRADSv2 and ISUP grade. SHAP analysis confirmed the importance of both clinical and texture-based radiomic features, with entropy and non-uniformity measures playing central roles in all tasks. Our results demonstrate the feasibility of using T2-weighted MRI and zonal radiomics for robust prediction of aggressiveness, nodal involvement and distant metastasis in PCa. This fully automated pipeline offers an interpretable, accessible and clinically translatable tool for first-line PCa triage, with potential integration into real-world diagnostic workflows.
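The pipeline's nested cross-validation can be sketched with scikit-learn; here grid search stands in for Bayesian tuning and GradientBoostingClassifier for XGBoost (both substitutions are simplifications, and the data below is synthetic):

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import GridSearchCV, cross_val_score

# Synthetic stand-in for radiomic + clinical features
X, y = make_classification(n_samples=200, n_features=20, random_state=0)

# Inner loop tunes hyperparameters; the outer loop gives an unbiased
# performance estimate, since tuning never sees the outer test folds.
inner = GridSearchCV(
    GradientBoostingClassifier(random_state=0),
    param_grid={"max_depth": [2, 3]},
    cv=3,
    scoring="roc_auc",
)
outer_auc = cross_val_score(inner, X, y, cv=5, scoring="roc_auc")
print(round(outer_auc.mean(), 2))
```

The key point of the nesting is that the reported AUCs (0.66-0.80 in the study) come from folds the tuning procedure never touched.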

FLAMeS: A Robust Deep Learning Model for Automated Multiple Sclerosis Lesion Segmentation

Dereskewicz, E., La Rosa, F., dos Santos Silva, J., Sizer, E., Kohli, A., Wynen, M., Mullins, W. A., Maggi, P., Levy, S., Onyemeh, K., Ayci, B., Solomon, A. J., Assländer, J., Al-Louzi, O., Reich, D. S., Sumowski, J. F., Beck, E. S.

medrxiv preprint · May 22 2025
Background and Purpose: Assessment of brain lesions on MRI is crucial for research in multiple sclerosis (MS). Manual segmentation is time-consuming and inconsistent. We aimed to develop an automated MS lesion segmentation algorithm for T2-weighted fluid-attenuated inversion recovery (FLAIR) MRI. Methods: We developed FLAIR Lesion Analysis in Multiple Sclerosis (FLAMeS), a deep learning-based MS lesion segmentation algorithm based on the nnU-Net 3D full-resolution U-Net and trained on 668 FLAIR 1.5 and 3 tesla scans from persons with MS. FLAMeS was evaluated on three external datasets: MSSEG-2 (n=14), MSLesSeg (n=51), and a clinical cohort (n=10), and compared to SAMSEG, LST-LPA, and LST-AI. Performance was assessed qualitatively by two blinded experts and quantitatively by comparing automated and ground truth lesion masks using standard segmentation metrics. Results: In a blinded qualitative review of 20 scans, both raters selected FLAMeS as the most accurate segmentation in 15 cases, with one rater favoring FLAMeS in two additional cases. Across all testing datasets, FLAMeS achieved a mean Dice score of 0.74, a true positive rate of 0.84, and an F1 score of 0.78, consistently outperforming the benchmark methods. For other metrics, including positive predictive value, relative volume difference, and false positive rate, FLAMeS performed similarly to or better than the benchmark methods. Most lesions missed by FLAMeS were smaller than 10 mm3, whereas the benchmark methods missed larger lesions in addition to smaller ones. Conclusions: FLAMeS is an accurate, robust method for MS lesion segmentation that outperforms other publicly available methods.
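Dice, the headline segmentation metric here, compares predicted and ground-truth lesion masks voxel by voxel. A minimal numpy sketch (the masks below are toy examples, not study data):

```python
import numpy as np

def dice(pred: np.ndarray, truth: np.ndarray) -> float:
    """Dice similarity coefficient between two binary masks."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    inter = np.logical_and(pred, truth).sum()
    denom = pred.sum() + truth.sum()
    return 2.0 * inter / denom if denom else 1.0

a = np.zeros((8, 8), dtype=bool); a[2:6, 2:6] = True   # 16-voxel square
b = np.zeros((8, 8), dtype=bool); b[3:7, 3:7] = True   # shifted square, partial overlap
print(round(dice(a, b), 3))
```

Note that Dice penalizes small lesions heavily: missing one tiny lesion can drop the score more than slightly under-segmenting a large one, which is consistent with the observation that FLAMeS's misses were mostly lesions under 10 mm3.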

An Interpretable Deep Learning Approach for Autism Spectrum Disorder Detection in Children Using NASNet-Mobile.

K VRP, Hima Bindu C, Devi KRM

pubmed · May 22 2025
Autism spectrum disorder (ASD) is a multifaceted neurodevelopmental disorder featuring impaired social interaction and communication abilities, with restrictive or repetitive behaviour. Though incurable, early detection and intervention can reduce the severity of symptoms. Structural magnetic resonance imaging (sMRI) can improve diagnostic accuracy, facilitating early diagnosis to offer more tailored care. With the emergence of deep learning (DL), neuroimaging-based approaches for ASD diagnosis have gained attention. However, many existing models lack interpretability in their diagnostic decisions. The prime objective of this work is to classify ASD precisely and to interpret the classification process so as to discern the major features appropriate for predicting the disorder. The proposed model employs the neural architecture search network-mobile (NASNet-Mobile) model for ASD detection, integrated with an explainable artificial intelligence (XAI) technique, local interpretable model-agnostic explanations (LIME), for increased transparency of ASD classification. The model is trained on sMRI images of two age groups taken from the Autism Brain Imaging Data Exchange I (ABIDE-I) dataset. The proposed model yielded an accuracy of 0.9607, F1-score of 0.9614, specificity of 0.9774, sensitivity of 0.9451, negative predictive value (NPV) of 0.9429, positive predictive value (PPV) of 0.9783, and a diagnostic odds ratio of 745.59 for the 2-11 years age group compared with the 12-18 years group. These results are superior to those of other state-of-the-art models, InceptionV3 and SqueezeNet.
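The reported sensitivity, specificity, PPV, NPV, and diagnostic odds ratio all derive from a 2×2 confusion matrix. A minimal sketch (the counts below are illustrative, not the paper's):

```python
def diagnostic_metrics(tp: int, fp: int, tn: int, fn: int) -> dict:
    """Standard binary diagnostic metrics from confusion-matrix counts."""
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "ppv": tp / (tp + fp),
        "npv": tn / (tn + fn),
        # Diagnostic odds ratio: odds of a positive test among the diseased
        # divided by odds of a positive test among the healthy.
        "dor": (tp * tn) / (fp * fn),
    }

m = diagnostic_metrics(tp=90, fp=2, tn=88, fn=5)
print(round(m["dor"], 1))
```

The DOR grows very quickly as false positives and false negatives approach zero, which is why values in the hundreds, like the 745.59 reported above, correspond to sensitivity and specificity both in the mid-to-high 90s.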