
Deep Learning Image Reconstruction (DLIR) Algorithm to Maintain High Image Quality and Diagnostic Accuracy in Quadruple-low CT Angiography of Children with Pulmonary Sequestration: A Case Control Study.

Li H, Zhang Y, Hua S, Sun R, Zhang Y, Yang Z, Peng Y, Sun J

PubMed · May 22, 2025
CT angiography (CTA) is a commonly used clinical examination to detect abnormal arteries and diagnose pulmonary sequestration (PS). Reducing the radiation dose, contrast medium dosage, and injection pressure in CTA, especially in children, has long been an important research topic, but few studies have been validated against pathology. The current study aimed to evaluate the diagnostic accuracy of a quadruple-low CTA protocol (4L-CTA: low tube voltage, radiation dose, contrast medium, and injection flow rate) with deep learning image reconstruction (DLIR) in children with PS, in comparison with routine-protocol CTA reconstructed with adaptive statistical iterative reconstruction-V (ASIR-V). 53 patients (1.50±1.36 years) suspected of PS were enrolled to undergo chest 4L-CTA using 70 kVp tube voltage with a radiation dose of 0.90 mGy in volumetric CT dose index (CTDIvol) and a contrast medium dose of 0.8 ml/kg injected over 16 s. Images were reconstructed using DLIR. Another 53 patients (1.25±1.02 years) scanned with a routine-dose protocol were used for comparison, with images reconstructed using ASIR-V. The contrast-to-noise ratio (CNR) and edge-rise distance (ERD) of the aorta were calculated. Subjective overall image quality and artery visualization were evaluated using a 5-point scale (5, excellent; 3, acceptable). All patients underwent surgery after CT, and the sensitivity and specificity for diagnosing PS were calculated. 4L-CTA reduced radiation dose by 51%, contrast dose by 47%, injection flow rate by 44%, and injection pressure by 44% compared to routine CTA (all p<0.05). Both groups had satisfactory subjective image quality and achieved 100% in both sensitivity and specificity for diagnosing PS. Compared to routine CTA, 4L-CTA had a 27% lower CNR (p<0.05) but a similar ERD, which reflects spatial resolution (p>0.05). 4L-CTA revealed small arteries with a diameter of 0.8 mm. DLIR enables 4L-CTA in children with PS, delivering significant radiation and contrast dose reductions while maintaining image quality, visualization of small arteries, and high diagnostic accuracy.
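The dose arithmetic and the CNR comparison reported above are easy to reproduce. Below is a minimal Python sketch of a common CNR definition and the percentage reductions; the routine-protocol CTDIvol is back-calculated from the reported 51% reduction, and all HU statistics are illustrative placeholders.

```python
import numpy as np

def cnr(vessel_mean_hu, background_mean_hu, background_sd_hu):
    """Contrast-to-noise ratio: vessel-to-background contrast divided by
    image noise (a common definition; the paper's exact formula may differ)."""
    return (vessel_mean_hu - background_mean_hu) / background_sd_hu

# Dose-reduction arithmetic. 0.90 mGy is the reported 4L-CTA CTDIvol; the
# routine value below is inferred from the reported 51% reduction.
low_ctdi = 0.90                       # mGy, from the abstract
routine_ctdi = low_ctdi / (1 - 0.51)  # ~1.84 mGy, back-calculated
print(f"radiation dose reduction: {100 * (1 - low_ctdi / routine_ctdi):.0f}%")

# Example CNR comparison with illustrative HU statistics.
print(cnr(350.0, 50.0, 15.0))  # routine-protocol example
print(cnr(350.0, 50.0, 20.5))  # ~27% lower CNR at higher noise
```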

Deep Learning-Based Multimodal Feature Interaction-Guided Fusion: Enhancing the Evaluation of EGFR in Advanced Lung Adenocarcinoma.

Xu J, Feng B, Chen X, Wu F, Liu Y, Yu Z, Lu S, Duan X, Chen X, Li K, Zhang W, Dai X

PubMed · May 22, 2025
The aim of this study is to develop a deep learning-based multimodal feature interaction-guided fusion (DL-MFIF) framework that integrates macroscopic information from computed tomography (CT) images with microscopic information from whole-slide images (WSIs) to predict epidermal growth factor receptor (EGFR) mutations of primary lung adenocarcinoma in patients with advanced-stage disease. Data from 396 patients with lung adenocarcinoma across two medical institutions were analyzed. The data from 243 cases were divided into a training set (n=145) and an internal validation set (n=98) in a 6:4 ratio, and data from an additional 153 cases from another medical institution were included as an external validation set. All cases included CT scan images and WSIs. To integrate multimodal information, we developed the DL-MFIF framework, which leverages deep learning techniques to capture the interactions between radiomic macrofeatures derived from CT images and microfeatures obtained from WSIs. Compared to other classification models, the DL-MFIF model achieved significantly higher area under the curve (AUC) values. Specifically, the model outperformed others on both the internal validation set (AUC=0.856, accuracy=0.750) and the external validation set (AUC=0.817, accuracy=0.708). Decision curve analysis (DCA) demonstrated that the model provided superior net benefits (range 0.15-0.87). DeLong's test on the external validation set confirmed the statistical significance of the results (P<0.05). The DL-MFIF model demonstrated excellent performance in evaluating EGFR mutation status in patients with advanced lung adenocarcinoma. This model effectively aids radiologists in accurately classifying EGFR mutations in patients with primary lung adenocarcinoma, thereby improving treatment outcomes for this population.
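The abstract does not detail the fusion mechanism, so the following is only a generic sketch of feature-interaction-guided fusion via cross-attention between CT macrofeature tokens and WSI microfeature tokens; the dimensions, token counts, and attention choice are assumptions, not the published DL-MFIF architecture.

```python
import torch
import torch.nn as nn

class CrossModalFusion(nn.Module):
    """Generic cross-attention fusion of CT macrofeature tokens and WSI
    microfeature tokens (illustrative sketch, not the published model)."""

    def __init__(self, dim=256, heads=4):
        super().__init__()
        self.ct_attends_wsi = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.wsi_attends_ct = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.head = nn.Linear(2 * dim, 1)  # logit: EGFR mutant vs. wild-type

    def forward(self, ct_tokens, wsi_tokens):
        # ct_tokens: (B, N_ct, dim); wsi_tokens: (B, N_wsi, dim)
        ct_ctx, _ = self.ct_attends_wsi(ct_tokens, wsi_tokens, wsi_tokens)
        wsi_ctx, _ = self.wsi_attends_ct(wsi_tokens, ct_tokens, ct_tokens)
        fused = torch.cat([ct_ctx.mean(dim=1), wsi_ctx.mean(dim=1)], dim=-1)
        return self.head(fused)

logit = CrossModalFusion()(torch.randn(2, 16, 256), torch.randn(2, 64, 256))
```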

ActiveNaf: A novel NeRF-based approach for low-dose CT image reconstruction through active learning.

Zidane A, Shimshoni I

PubMed · May 22, 2025
CT imaging provides essential information about internal anatomy; however, conventional CT delivers radiation doses that can become problematic for patients requiring repeated imaging, highlighting the need for dose-reduction techniques. This study aims to reduce radiation dose without compromising image quality. We propose an approach that combines Neural Attenuation Fields (NAF) with an active learning strategy to better optimize CT reconstructions from a limited number of X-ray projections. Our method uses a secondary neural network to predict the Peak Signal-to-Noise Ratio (PSNR) of 2D projections generated by NAF from angles across the scanner's operational range. This prediction guides the active learning process in choosing the most informative projections. In contrast to conventional techniques that acquire all X-ray projections in a single session, our technique acquires projections iteratively. The iterative process improves reconstruction quality, reduces the number of required projections, and decreases patient radiation exposure. We tested our methodology on spinal imaging using a limited subset of the VerSe 2020 dataset. We compared image quality metrics (PSNR3D, SSIM3D, and PSNR2D) against the baseline method and found significant improvements: our method achieves with 36 projections the same quality the baseline achieves with 60. Our findings demonstrate that the approach achieves high-quality 3D CT reconstructions from sparse data, producing clearer and more detailed images of anatomical structures. This work lays the groundwork for advanced imaging techniques, paving the way for safer and more efficient medical imaging procedures.
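The core acquisition loop is straightforward to sketch: a predictor scores candidate angles by expected PSNR, and the lowest-scoring (least well reconstructed) angles are acquired next. The toy predictor and batch size below are placeholders for the paper's secondary network and acquisition schedule.

```python
import numpy as np

def select_next_angles(candidates, psnr_predictor, k=3):
    """Choose the k candidate angles with the lowest predicted PSNR, i.e.
    where the current NAF reconstruction is expected to be weakest and a new
    projection is most informative."""
    scores = np.array([psnr_predictor(a) for a in candidates])
    return [candidates[i] for i in np.argsort(scores)[:k]]

# Illustrative acquisition loop: start sparse, add batches up to a budget.
acquired = list(np.linspace(0.0, 180.0, 6, endpoint=False))
candidates = [a for a in np.arange(0.0, 180.0, 3.0) if a not in acquired]
toy_psnr = lambda a: 30.0 + 5.0 * np.cos(np.radians(2.0 * a))  # dummy predictor

while len(acquired) < 36:  # 36 projections matched the 60-projection baseline
    batch = select_next_angles(candidates, toy_psnr, k=3)
    acquired += batch
    candidates = [a for a in candidates if a not in batch]
    # ...here one would re-fit NAF on `acquired` and update the PSNR predictor...
```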

Denoising of high-resolution 3D UTE-MR angiogram data using lightweight and efficient convolutional neural networks.

Tessema AW, Ambaye DT, Cho H

PubMed · May 22, 2025
High-resolution magnetic resonance angiography (~50 μm³ MRA) data plays a critical role in the accurate diagnosis of various vascular disorders. However, it is very challenging to acquire and is susceptible to artifacts and noise, which limit its ability to visualize smaller blood vessels and necessitate substantial noise reduction. The BM4D filter is a state-of-the-art denoising technique, but it carries a high computational cost, particularly for high-resolution 3D MRA data. In this research, five different optimized convolutional neural networks were used to denoise contrast-enhanced UTE-MRA data in a supervised learning approach. Since noise-free MRA data is challenging to acquire, images denoised with the BM4D filter served as ground truth, and the research focused mainly on reducing the computational cost and inference time of denoising high-resolution UTE-MRA data. All five models generated denoised data closely matching the ground truth, with different computational footprints. Among them, the nested-UNet model produced images nearly identical to the ground truth, achieving SSIM, PSNR, and MSE of 0.998, 46.12, and 3.38e-5 with 3× faster inference than the BM4D filter. The more heavily optimized UNet and attention-UNet models produced images nearly identical to those of nested-UNet while running 8.8× and 7.1× faster than the BM4D filter, respectively. In conclusion, using highly optimized networks, we have shown that high-resolution UTE-MRA data can be denoised with significantly shorter inference time, even with limited datasets from animal models. This can make high-resolution 3D UTE-MRA data less computationally burdensome to process.
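For reference, the three reported metrics can be computed with standard tooling; a minimal sketch using scikit-image on synthetic volumes (standing in for BM4D-filtered ground truth and a network output) might look like this.

```python
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

# Synthetic stand-ins for the BM4D-filtered ground truth and a CNN output;
# real UTE-MRA volumes would be loaded here instead.
rng = np.random.default_rng(0)
gt = rng.random((64, 64, 64)).astype(np.float32)
pred = np.clip(gt + rng.normal(0.0, 0.01, gt.shape).astype(np.float32), 0.0, 1.0)

mse = float(np.mean((gt - pred) ** 2))
psnr = peak_signal_noise_ratio(gt, pred, data_range=1.0)
ssim = structural_similarity(gt, pred, data_range=1.0)  # N-D SSIM over the volume
print(f"MSE={mse:.2e}  PSNR={psnr:.2f} dB  SSIM={ssim:.4f}")
```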

Predicting Depression in Healthy Young Adults: A Machine Learning Approach Using Longitudinal Neuroimaging Data.

Zhang A, Zhang H

PubMed · May 22, 2025
Accurate prediction of depressive symptoms in healthy individuals can enable early intervention and reduce both individual and societal costs. This study aimed to develop predictive models for depression in young adults using machine learning (ML) techniques and longitudinal data from the Beck Depression Inventory, structural MRI (sMRI), and resting-state functional MRI (rs-fMRI). Feature selection methods, including the least absolute shrinkage and selection operator (LASSO), Boruta, and VSURF, were applied to identify MRI features associated with depression. Support vector machine and random forest algorithms were then used to construct prediction models. Eight MRI features were identified as predictive of depression, spanning the orbital gyrus, superior frontal gyrus, middle frontal gyrus, parahippocampal gyrus, cingulate gyrus, and inferior parietal lobule. The overlaps and differences between the selected features and the brain regions showing significant between-group differences in t-tests suggest that ML provides a unique perspective on the neural changes associated with depression. Six pairs of prediction models demonstrated varying performance, with accuracies ranging from 0.68 to 0.85 and areas under the curve (AUC) ranging from 0.57 to 0.81. The best-performing model achieved an accuracy of 0.85 and an AUC of 0.80, highlighting the potential of combining sMRI and rs-fMRI features with ML for early depression detection, while also revealing the risk of overfitting in small-sample, high-dimensional settings. Further research is needed to (1) replicate the findings in independent, larger datasets to address potential overfitting and (2) apply different advanced ML techniques and multimodal data fusion to improve model performance.
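A minimal sketch of one of the described pipelines, LASSO-based feature selection feeding a support vector machine, is shown below using scikit-learn on synthetic data; the feature counts and hyperparameters are illustrative, not the study's configuration.

```python
import numpy as np
from sklearn.feature_selection import SelectFromModel
from sklearn.linear_model import LassoCV
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Synthetic stand-in: 120 subjects x 500 MRI features, binary depression label.
rng = np.random.default_rng(0)
X = rng.normal(size=(120, 500))
y = (X[:, :8].sum(axis=1) + rng.normal(size=120) > 0).astype(int)  # 8 informative features

pipe = make_pipeline(
    StandardScaler(),
    SelectFromModel(LassoCV(cv=5, max_iter=5000)),  # LASSO-driven feature selection
    SVC(kernel="rbf"),                              # SVM classifier
)
auc = cross_val_score(pipe, X, y, cv=5, scoring="roc_auc").mean()
print(f"cross-validated AUC: {auc:.2f}")
```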

Daily proton dose re-calculation on deep-learning corrected cone-beam computed tomography scans.

Vestergaard CD, Muren LP, Elstrøm UV, Stolarczyk L, Nørrevang O, Petersen SE, Taasti VT

PubMed · May 22, 2025
Synthetic CT (sCT) generation from cone-beam CT (CBCT) must maintain stable performance and allow accurate dose calculation across all treatment fractions to effectively support adaptive proton therapy. This study evaluated a 3D deep-learning (DL) network for sCT generation in prostate cancer patients over the full treatment course. Data from 25 and 6 prostate cancer patients were used to train and test the DL network, respectively. Patients in the test set had a planning CT, 39 CBCT images, and at least one repeat CT (reCT) used for replanning. The generated sCT images were compared to fan-beam planning and reCT images in terms of i) CT number accuracy and stability within spherical regions-of-interest (ROIs) in the bladder, prostate, and femoral heads, ii) proton range calculation accuracy through single-spot plans, and iii) dose trends in target coverage over the treatment course (one patient). The sCT images demonstrated image quality comparable to CT while preserving the CBCT anatomy. Mean CT numbers on the sCT and CT images were comparable; for the prostate ROI, for example, they ranged from 29 HU to 59 HU on sCT and from 36 HU to 50 HU on CT. The largest median proton range difference was 1.9 mm. Proton dose calculations showed excellent target coverage (V95% ≥ 99.6%) for the high-dose target. The DL network effectively generated high-quality sCT images with CT numbers, proton range, and dose characteristics comparable to fan-beam CT. Its robustness against intra-patient variations makes it a feasible tool for adaptive proton therapy.
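The ROI-based CT number comparison in i) can be illustrated with a short sketch; the volumes, HU values, and ROI geometry below are synthetic placeholders.

```python
import numpy as np

def spherical_roi_mean(volume_hu, center, radius_vox):
    """Mean CT number (HU) inside a spherical ROI, the kind of statistic used
    to compare sCT and fan-beam CT in the bladder, prostate, and femoral heads."""
    z, y, x = np.ogrid[:volume_hu.shape[0], :volume_hu.shape[1], :volume_hu.shape[2]]
    cz, cy, cx = center
    mask = (z - cz) ** 2 + (y - cy) ** 2 + (x - cx) ** 2 <= radius_vox ** 2
    return float(volume_hu[mask].mean())

# Illustrative comparison on synthetic volumes (HU values are placeholders).
rng = np.random.default_rng(0)
ct = rng.normal(45.0, 10.0, (64, 64, 64))   # "fan-beam CT" patch
sct = ct + rng.normal(0.0, 3.0, ct.shape)   # "sCT" with small residual error
print(spherical_roi_mean(ct, (32, 32, 32), 8), spherical_roi_mean(sct, (32, 32, 32), 8))
```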

Deep Learning for Automated Prediction of Sphenoid Sinus Pneumatization in Computed Tomography.

Alamer A, Salim O, Alharbi F, Alsaleem F, Almuqbil A, Alhassoon K, Alsunaydih F

PubMed · May 22, 2025
The sphenoid sinus is an important access point for transsphenoidal surgeries, but variations in its pneumatization may complicate surgical safety. Deep learning can be used to identify these anatomical variations. We developed a convolutional neural network (CNN) model for the automated prediction of sphenoid sinus pneumatization patterns in computed tomography (CT) scans, tested on mid-sagittal CT images. Two radiologists labeled all CT images into four pneumatization patterns: conchal (type I), presellar (type II), sellar (type III), and postsellar (type IV). The initial dataset included 249 CT images, divided into training (n = 174) and test (n = 75) sets; the training set was then augmented to 378 images to address its limited size and class imbalance. Following augmentation, the overall diagnostic accuracy of the model improved from 76.71% to 84%, with an area under the curve (AUC) of 0.84, indicating very good diagnostic performance. Subgroup analysis showed excellent results for type IV, with the highest AUC of 0.93, perfect sensitivity (100%), and an F1-score of 0.94. The model also performed robustly for type I, achieving an accuracy of 97.33% and high specificity (99%). These metrics highlight the model's potential for reliable clinical application. The proposed CNN model demonstrates very good diagnostic accuracy in identifying sphenoid sinus pneumatization patterns, particularly excelling in type IV, which is crucial for endoscopic sinus surgery due to its higher risk of surgical complications. By assisting radiologists and surgeons, this model can enhance the safety of transsphenoidal surgery, highlighting its value, novelty, and applicability in clinical settings.
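As a rough illustration of how a 174-image training set might be augmented to 378 while addressing class imbalance, consider the sketch below; the flip/shift/jitter recipe and class counts are assumptions, since the paper's exact augmentation pipeline is not given here.

```python
import numpy as np

def augment(image, rng):
    """One plausible augmentation for a 2D CT slice: random flip, small
    translation, and mild intensity jitter (illustrative only)."""
    out = image.copy()
    if rng.random() < 0.5:
        out = np.fliplr(out)
    out = np.roll(out, shift=int(rng.integers(-5, 6)), axis=0)  # vertical shift
    out = np.roll(out, shift=int(rng.integers(-5, 6)), axis=1)  # horizontal shift
    return out + rng.normal(0.0, 0.01, out.shape)               # noise jitter

# Oversample minority pneumatization types until classes are balanced.
rng = np.random.default_rng(0)
classes = {"I": [np.zeros((224, 224))] * 10, "IV": [np.zeros((224, 224))] * 60}
target = max(len(v) for v in classes.values())
for label, imgs in classes.items():
    while len(imgs) < target:
        imgs.append(augment(imgs[int(rng.integers(len(imgs)))], rng))
```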

Customized GPT-4V(ision) for radiographic diagnosis: can large language model detect supernumerary teeth?

Aşar EM, İpek İ, Bilge K

PubMed · May 21, 2025
With the growing capabilities of language models like ChatGPT to process text and images, this study evaluated their accuracy in detecting supernumerary teeth on periapical radiographs. A customized GPT-4V model (CGPT-4V) was also developed to assess whether domain-specific training could improve diagnostic performance compared to standard GPT-4V and GPT-4o models. One hundred eighty periapical radiographs (90 with and 90 without supernumerary teeth) were evaluated using GPT-4V, GPT-4o, and a fine-tuned CGPT-4V model. Each image was assessed separately with the standardized prompt "Are there any supernumerary teeth in the radiograph above?" to avoid contextual bias. Three dental experts scored the responses using a three-point Likert scale for positive cases and a binary scale for negatives. Chi-square tests and ROC analysis were used to compare model performances (p < 0.05). Among the three models, CGPT-4V exhibited the highest accuracy, detecting supernumerary teeth correctly in 91% of cases, compared to 77% for GPT-4o and 63% for GPT-4V. The CGPT-4V model also demonstrated a significantly lower false positive rate (16%) than GPT-4V (42%). A statistically significant difference was found between CGPT-4V and GPT-4o (p < 0.001), while no significant difference was observed between GPT-4V and CGPT-4V or between GPT-4V and GPT-4o. Additionally, CGPT-4V successfully identified multiple supernumerary teeth in radiographs where present. These findings highlight the diagnostic potential of customized GPT models in dental radiology. Future research should focus on multicenter validation, seamless clinical integration, and cost-effectiveness to support real-world implementation.
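The chi-square comparison between models can be reproduced approximately from the reported percentages; the counts below are rounded reconstructions over the 180 radiographs, for illustration only.

```python
from scipy.stats import chi2_contingency

# Correct/incorrect counts reconstructed from the reported accuracies over the
# 180 radiographs (rounded; for illustration only, not the paper's raw data).
cgpt4v = [164, 16]  # ~91% correct
gpt4o = [139, 41]   # ~77% correct
chi2, p, dof, expected = chi2_contingency([cgpt4v, gpt4o])
print(f"chi2={chi2:.2f}, dof={dof}, p={p:.4g}")
```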

Exchange of Quantitative Computed Tomography Assessed Body Composition Data Using Fast Healthcare Interoperability Resources as a Necessary Step Toward Interoperable Integration of Opportunistic Screening Into Clinical Practice: Methodological Development Study.

Wen Y, Choo VY, Eil JH, Thun S, Pinto Dos Santos D, Kast J, Sigle S, Prokosch HU, Ovelgönne DL, Borys K, Kohnke J, Arzideh K, Winnekens P, Baldini G, Schmidt CS, Haubold J, Nensa F, Pelka O, Hosch R

PubMed · May 21, 2025
Fast Healthcare Interoperability Resources (FHIR) is a widely used standard for storing and exchanging health care data. At the same time, image-based artificial intelligence (AI) models for quantifying relevant body structures and organs from routine computed tomography (CT)/magnetic resonance imaging scans have emerged. The missing link, and a necessary step toward personalized medicine, is incorporating the measurements delivered by AI models into an interoperable, standardized format. Incorporating image-based measurements and biomarkers into FHIR profiles can standardize data exchange, enabling timely, personalized treatment decisions and improving the precision and efficiency of patient care. This study aims to present the synergistic incorporation of CT-derived body organ and composition measurements with FHIR, delineating an initial paradigm for storing image-based biomarkers. This study integrated the results of the Body and Organ Analysis (BOA) model into FHIR profiles to enhance the interoperability of image-based biomarkers in radiology. The BOA model was selected as an exemplary AI model due to its ability to provide detailed body composition and organ measurements from CT scans. The FHIR profiles were developed based on 2 primary observation types: Body Composition Analysis (BCA Observation) for quantitative body composition metrics and Body Structure Observation for organ measurements. These profiles were structured to interoperate with a specially designed Diagnostic Report profile, which references the associated Imaging Study, ensuring a standardized linkage between image data and derived biomarkers. To ensure interoperability, all labels were mapped to SNOMED CT (Systematized Nomenclature of Medicine - Clinical Terms) or RadLex terminologies using specific value sets. The profiles were developed using FHIR Shorthand (FSH) and SUSHI, enabling efficient definition and implementation guide generation, ensuring consistency and maintainability. In this study, 4 BOA profiles, namely, Body Composition Analysis Observation, Body Structure Volume Observation, Diagnostic Report, and Imaging Study, have been presented. These FHIR profiles, which cover 104 anatomical landmarks, 8 body regions, and 8 tissues, enable interoperable use of the results of AI segmentation models, providing a direct link between image studies, series, and measurements. The BOA profiles provide a foundational framework for integrating AI-derived imaging biomarkers into FHIR, bridging the gap between advanced imaging analytics and standardized health care data exchange. By enabling structured, interoperable representation of body composition and organ measurements, these profiles facilitate seamless integration into clinical and research workflows, supporting improved data accessibility and interoperability. Their adaptability allows for extension to other imaging modalities and AI models, fostering a more standardized and scalable approach to using imaging biomarkers in precision medicine. This work represents a step toward enhancing the integration of AI-driven insights into digital health ecosystems, ultimately contributing to more data-driven, personalized, and efficient patient care.
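To make the idea concrete, here is a minimal, hypothetical FHIR Observation carrying one AI-derived body-composition value; the code, display text, and references are placeholders rather than the published BOA profiles or their SNOMED CT / RadLex value sets.

```python
import json

# Minimal, hypothetical FHIR Observation for one body-composition value.
# The SNOMED code, display text, and references are placeholders.
observation = {
    "resourceType": "Observation",
    "status": "final",
    "category": [{"coding": [{
        "system": "http://terminology.hl7.org/CodeSystem/observation-category",
        "code": "imaging"}]}],
    "code": {"coding": [{
        "system": "http://snomed.info/sct",
        "code": "placeholder-snomed-code",
        "display": "Skeletal muscle volume (hypothetical mapping)"}]},
    "valueQuantity": {"value": 142.7, "unit": "cm3",
                      "system": "http://unitsofmeasure.org", "code": "cm3"},
    "derivedFrom": [{"reference": "ImagingStudy/example-ct-study"}],
}
print(json.dumps(observation, indent=2))
```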

ÆMMamba: An Efficient Medical Segmentation Model With Edge Enhancement.

Dong X, Zhou B, Yin C, Liao IY, Jin Z, Xu Z, Pu B

PubMed · May 21, 2025
Medical image segmentation is critical for disease diagnosis, treatment planning, and prognosis assessment, yet the complexity and diversity of medical images pose significant challenges to accurate segmentation. While convolutional neural networks capture local features and Vision Transformers excel at global context, both struggle to model long-range dependencies efficiently. Inspired by the efficiency of Mamba's state space modeling, we propose ÆMMamba, a novel multi-scale feature extraction framework built on the Mamba backbone network. ÆMMamba integrates several innovative modules: the Efficient Fusion Bridge (EFB) module, which employs a bidirectional state-space model and attention mechanisms to fuse multi-scale features; the Edge-Aware Module (EAM), which enhances low-level edge representation using Sobel-based edge extraction; and the Boundary Sensitive Decoder (BSD), which leverages inverse attention and residual convolutional layers to handle complex cross-level boundaries. ÆMMamba achieves state-of-the-art performance across 8 medical segmentation datasets. On polyp segmentation datasets (Kvasir, ClinicDB, ColonDB, EndoScene, ETIS), it records the highest mDice and mIoU scores, outperforming methods like MADGNet and Swin-UMamba, with a standout mDice of 72.22 on ETIS, the most challenging dataset in this domain. For lung and breast segmentation, ÆMMamba surpasses competitors such as H2Former and SwinUnet, achieving Dice scores of 84.24 on BUSI and 79.83 on COVID-19 Lung. On the LGG brain MRI dataset, ÆMMamba attains an mDice of 87.25 and an mIoU of 79.31, outperforming all compared methods. The source code will be released at https://github.com/xingbod/eMMamba.
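The Sobel-based edge extraction that the Edge-Aware Module builds on can be sketched in a few lines of PyTorch; this is just the fixed-kernel gradient-magnitude operation, not the full EAM.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SobelEdge(nn.Module):
    """Fixed-kernel Sobel edge extraction: the gradient-magnitude map the
    Edge-Aware Module is described as building on (sketch, not the full EAM)."""

    def __init__(self):
        super().__init__()
        gx = torch.tensor([[-1.0, 0.0, 1.0], [-2.0, 0.0, 2.0], [-1.0, 0.0, 1.0]])
        self.register_buffer("kx", gx.reshape(1, 1, 3, 3))
        self.register_buffer("ky", gx.t().reshape(1, 1, 3, 3))

    def forward(self, x):  # x: (B, 1, H, W) single-channel feature map
        ex = F.conv2d(x, self.kx, padding=1)
        ey = F.conv2d(x, self.ky, padding=1)
        return torch.sqrt(ex ** 2 + ey ** 2 + 1e-6)  # edge magnitude

edges = SobelEdge()(torch.randn(2, 1, 64, 64))
```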
