Deep learning enables fast and accurate quantification of MRI-guided near-infrared spectral tomography for breast cancer diagnosis.

Feng J, Tang Y, Lin S, Jiang S, Xu J, Zhang W, Geng M, Dang Y, Wei C, Li Z, Sun Z, Jia K, Pogue BW, Paulsen KD

pubmed · May 29, 2025
The utilization of magnetic resonance (MR) imaging to guide near-infrared spectral tomography (NIRST) shows significant potential for improving the specificity and sensitivity of breast cancer diagnosis. However, the efficiency and accuracy of NIRST image reconstruction have been limited by the complexities of light propagation modeling and MRI image segmentation. To address these challenges, we developed and evaluated a deep learning-based approach for MR-guided 3D NIRST image reconstruction (DL-MRg-NIRST). Using a network trained on synthetic data, the DL-MRg-NIRST system reconstructed images from data acquired during 38 clinical imaging exams of patients with breast abnormalities. Statistical analysis of the results demonstrated a sensitivity of 87.5%, a specificity of 92.9%, and a diagnostic accuracy of 89.5% in distinguishing pathologically defined benign from malignant lesions. Additionally, the combined use of MRI and DL-MRg-NIRST diagnoses achieved an area under the receiver operating characteristic (ROC) curve of 0.98. Remarkably, the DL-MRg-NIRST image reconstruction process required only 1.4 seconds, significantly faster than state-of-the-art MR-guided NIRST methods.
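
For context on how diagnostic figures like these are typically derived, here is a minimal sketch of sensitivity, specificity, accuracy, and ROC-AUC computation from per-lesion scores. The labels, scores, and the 0.5 threshold are illustrative assumptions, not data from the study.

```python
# Hedged sketch: binary diagnostic metrics from hypothetical per-lesion scores.
import numpy as np
from sklearn.metrics import roc_auc_score, confusion_matrix

y_true = np.array([1, 0, 1, 1, 0, 0, 1, 0])      # pathology ground truth (1 = malignant)
y_score = np.array([0.9, 0.2, 0.7, 0.8, 0.4, 0.1, 0.6, 0.3])  # toy model outputs
y_pred = (y_score >= 0.5).astype(int)            # assumed operating threshold

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
sensitivity = tp / (tp + fn)                     # true-positive rate
specificity = tn / (tn + fp)                     # true-negative rate
accuracy = (tp + tn) / (tp + tn + fp + fn)
auc = roc_auc_score(y_true, y_score)             # area under the ROC curve

print(f"Sens {sensitivity:.3f}  Spec {specificity:.3f}  "
      f"Acc {accuracy:.3f}  AUC {auc:.3f}")
```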

Manual and automated facial de-identification techniques for patient imaging with preservation of sinonasal anatomy.

Ding AS, Nagururu NV, Seo S, Liu GS, Sahu M, Taylor RH, Creighton FX

pubmed · May 29, 2025
Facial recognition of reconstructed computed tomography (CT) scans poses patient privacy risks, necessitating reliable facial de-identification methods. Current methods obscure sinuses, turbinates, and other anatomy relevant for otolaryngology. We present a facial de-identification method that preserves these structures, along with two automated workflows for large-volume datasets. A total of 20 adult head CTs from the New Mexico Decedent Image Database were included. Using 3D Slicer, we performed a seed-growing technique to label the skin around the face. This label was dilated bidirectionally to form a 6-mm mask that obscures facial features. This technique was then automated using: (1) segmentation propagation that deforms an atlas head CT and corresponding mask to match other scans and (2) a deep learning model (nnU-Net). Accuracy of these methods against manually generated masks was evaluated with Dice scores and modified Hausdorff distances (mHDs). Manual de-identification resulted in facial match rates of 45.0% (zero-fill), 37.5% (deletion), and 32.5% (re-face). Dice scores for automated face masks using segmentation propagation and nnU-Net were 0.667 ± 0.109 and 0.860 ± 0.029, respectively, with mHDs of 4.31 ± 3.04 mm and 1.55 ± 0.71 mm. Match rates after de-identification using segmentation propagation (zero-fill: 42.5%; deletion: 40.0%; re-face: 35.0%) and nnU-Net (zero-fill: 42.5%; deletion: 35.0%; re-face: 30.0%) were comparable to manual masks. We present a simple facial de-identification approach for head CTs, as well as automated methods for large-scale implementation. These techniques show promise for preventing patient identification while preserving underlying sinonasal anatomy, but further studies using live patient photographs are necessary to fully validate their effectiveness.
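
As a rough illustration of the masking and evaluation steps described above, the sketch below grows a binary skin label by a fixed physical radius and scores a mask against a reference with a Dice coefficient. The arrays, the 1-mm isotropic spacing, and the outward-only dilation are simplifying assumptions (the paper dilates bidirectionally and also reports modified Hausdorff distances).

```python
# Hedged sketch: physical-radius mask dilation and Dice scoring on toy data.
import numpy as np
from scipy import ndimage

def dilate_mm(mask: np.ndarray, radius_mm: float, spacing: tuple) -> np.ndarray:
    """Approximate a physical-radius dilation by repeated unit dilations."""
    iters = max(1, int(round(radius_mm / min(spacing))))
    return ndimage.binary_dilation(mask, iterations=iters)

def dice(a: np.ndarray, b: np.ndarray) -> float:
    """Dice similarity coefficient between two binary masks."""
    inter = np.logical_and(a, b).sum()
    return 2.0 * inter / (a.sum() + b.sum())

# toy skin label on a 1-mm isotropic grid; a real pipeline would start from
# a seed-grown skin segmentation of the head CT
skin = np.zeros((64, 64, 64), dtype=bool)
skin[32, 20:44, 20:44] = True
face_mask = dilate_mm(skin, radius_mm=6.0, spacing=(1.0, 1.0, 1.0))
print(dice(face_mask, face_mask))  # 1.0 against itself; compare to a manual mask in practice
```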

Deep learning reconstruction for improved image quality of ultra-high-resolution brain CT angiography: application in moyamoya disease.

Ma Y, Nakajima S, Fushimi Y, Funaki T, Otani S, Takiya M, Matsuda A, Kozawa S, Fukushima Y, Okuchi S, Sakata A, Yamamoto T, Sakamoto R, Chihara H, Mineharu Y, Arakawa Y, Nakamoto Y

pubmed · May 29, 2025
To investigate vessel delineation and image quality of ultra-high-resolution (UHR) CT angiography (CTA) reconstructed using deep learning reconstruction (DLR) optimised for brain CTA (DLR-brain) in moyamoya disease (MMD), compared with DLR optimised for body CT (DLR-body) and hybrid iterative reconstruction (Hybrid-IR). This retrospective study included 50 patients with suspected or diagnosed MMD who underwent UHR brain CTA. All images were reconstructed using DLR-brain, DLR-body, and Hybrid-IR. Quantitative analysis focussed on moyamoya perforator vessels in the basal ganglia and periventricular anastomosis. For these small vessels, edge sharpness, peak CT number, vessel contrast, full width at half maximum (FWHM), and image noise were measured and compared. Qualitative analysis was performed by visual assessment to compare vessel delineation and image quality. DLR-brain significantly improved edge sharpness, peak CT number, vessel contrast, and FWHM, and significantly reduced image noise compared with DLR-body and Hybrid-IR (P < 0.05). DLR-brain significantly outperformed the other algorithms in the visual assessment (P < 0.001). DLR-brain provided superior visualisation of small intracranial vessels compared with DLR-body and Hybrid-IR in UHR brain CTA.
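
One of the quantitative metrics above, full width at half maximum (FWHM), can be estimated from a 1-D intensity profile drawn across a vessel. The sketch below uses a synthetic Gaussian profile in place of real CTA data; the half-maximum definition and the linear-interpolation scheme are common choices, not necessarily the study's exact implementation.

```python
# Hedged sketch: FWHM of a cross-vessel intensity profile via interpolation.
import numpy as np

def fwhm(profile: np.ndarray, spacing_mm: float) -> float:
    """FWHM from linearly interpolated half-maximum crossings."""
    peak, base = profile.max(), profile.min()
    half = base + 0.5 * (peak - base)
    above = np.where(profile >= half)[0]
    left, right = above[0], above[-1]
    # fractional crossing positions on the rising and falling flanks
    fl = left - (profile[left] - half) / (profile[left] - profile[left - 1])
    fr = right + (profile[right] - half) / (profile[right] - profile[right + 1])
    return (fr - fl) * spacing_mm

x = np.linspace(-5, 5, 101)
profile = 300 * np.exp(-x**2 / (2 * 0.8**2)) + 40  # toy vessel, true FWHM ~1.88 mm
print(f"FWHM = {fwhm(profile, spacing_mm=0.1):.2f} mm")
```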

Multimodal medical image-to-image translation via variational autoencoder latent space mapping.

Liang Z, Cheng M, Ma J, Hu Y, Li S, Tian X

pubmed · May 29, 2025
Medical image translation has become an essential tool in modern radiotherapy, providing complementary information for target delineation and dose calculation. However, current approaches are constrained by their modality-specific nature, requiring separate model training for each pair of imaging modalities. This limitation hinders the efficient deployment of comprehensive multimodal solutions in clinical practice. To develop a unified image translation method using variational autoencoder (VAE) latent space mapping, which enables flexible conversion between different medical imaging modalities to meet clinical demands. We propose a three-stage approach to construct a unified image translation model. Initially, a VAE is trained to learn a shared latent space for various medical images. A stacked bidirectional transformer is subsequently utilized to learn the mapping between different modalities within the latent space under the guidance of the image modality. Finally, the VAE decoder is fine-tuned to improve image quality. Our internal dataset comprised paired imaging data from 87 head and neck cases, with each case containing cone beam computed tomography (CBCT), computed tomography (CT), MR T1c, and MR T2w images. The effectiveness of this strategy is quantitatively evaluated on our internal dataset and a public dataset by the mean absolute error (MAE), peak signal-to-noise ratio (PSNR), and structural similarity index (SSIM). Additionally, the dosimetry characteristics of the synthetic CT images are evaluated, and subjective quality assessments of the synthetic MR images are conducted to determine their clinical value. The VAE with the Kullback‒Leibler (KL)-16 image tokenizer demonstrates superior image reconstruction ability, achieving a Fréchet inception distance (FID) of 4.84, a PSNR of 32.80 dB, and an SSIM of 92.33%. In synthetic CT tasks, the model shows greater accuracy in intramodality translations than in cross-modality translations, as evidenced by an MAE of 21.60 ± 8.80 Hounsfield units (HU) in the CBCT-to-CT task and 45.23 ± 13.21 HU and 47.55 ± 13.88 HU in the MR T1c- and T2w-to-CT tasks, respectively. For the cross-contrast MR translation tasks, the results are very close, with mean PSNR and SSIM values of 26.33 ± 1.36 dB and 85.21% ± 2.21% for the T1c-to-T2w translation and 26.03 ± 1.67 dB and 85.73% ± 2.66% for the T2w-to-T1c translation. Dosimetric results indicate that all gamma pass rates for synthetic CTs are higher than 99% for photon intensity-modulated radiation therapy (IMRT) planning. However, the subjective quality assessment scores for synthetic MR images are lower than those for real MR images. The proposed three-stage approach successfully develops a unified image translation model that can effectively handle a wide range of medical image translation tasks. This flexibility and effectiveness make it a valuable tool for clinical applications.
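
The evaluation metrics named above (MAE, PSNR, SSIM) can be computed as in the following sketch, which uses synthetic arrays in place of real and synthetic CT volumes; the HU data range and noise level are assumptions.

```python
# Hedged sketch: image-similarity metrics on toy "real CT" vs. "synthetic CT" arrays.
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

rng = np.random.default_rng(0)
ct_real = rng.uniform(-1000, 1000, (128, 128)).astype(np.float32)       # toy CT in HU
ct_synth = ct_real + rng.normal(0, 20, ct_real.shape).astype(np.float32)  # toy prediction

mae = np.abs(ct_real - ct_synth).mean()                                  # in HU
psnr = peak_signal_noise_ratio(ct_real, ct_synth, data_range=2000)       # assumed HU span
ssim = structural_similarity(ct_real, ct_synth, data_range=2000)

print(f"MAE {mae:.1f} HU  PSNR {psnr:.2f} dB  SSIM {ssim:.3f}")
```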

The use of imaging in the diagnosis and treatment of thromboembolic pulmonary hypertension.

Szewczuk K, Dzikowska-Diduch O, Gołębiowski M

pubmed · May 29, 2025
Chronic thromboembolic pulmonary hypertension (CTEPH) is a potentially life-threatening condition, classified as group 4 pulmonary hypertension (PH), caused by stenosis or occlusion of the pulmonary arteries due to unresolved thromboembolic material. The prognosis for untreated CTEPH patients is poor, as the condition leads to elevated pulmonary artery pressure and right heart failure. Early and accurate diagnosis of CTEPH is crucial because it remains the only form of PH that is potentially curable. However, the diagnosis of CTEPH is often challenging and frequently delayed or missed. This review discusses the current role of multimodal imaging in diagnosing CTEPH, guiding clinical decision-making, and monitoring post-treatment outcomes. The characteristic findings, strengths, and limitations of various imaging modalities, such as computed tomography, ventilation-perfusion lung scintigraphy, digital subtraction pulmonary angiography, and magnetic resonance imaging, are evaluated. Additionally, the role of artificial intelligence in improving the diagnosis and treatment outcomes of CTEPH is explored. Optimal patient assessment and therapeutic decision-making should ideally be conducted in specialized centers by a multidisciplinary team, utilizing data from imaging, pulmonary hemodynamics, and patient comorbidities.

Menopausal hormone therapy and the female brain: Leveraging neuroimaging and prescription registry data from the UK Biobank cohort.

Barth C, Galea LAM, Jacobs EG, Lee BH, Westlye LT, de Lange AG

pubmed · May 29, 2025
Menopausal hormone therapy (MHT) is generally thought to be neuroprotective, yet results have been inconsistent. Here, we present a comprehensive study of MHT use and brain characteristics in females from the UK Biobank. 19,846 females with magnetic resonance imaging data were included. Detailed MHT prescription data from primary care records were available for 538. We tested for associations between the brain measures (i.e., gray/white matter brain age, hippocampal volumes, white matter hyperintensity volumes) and MHT user status, age at first and last use, duration of use, formulation, route of administration, dosage, type, and active ingredient. We further tested for the effects of a history of hysterectomy ± bilateral oophorectomy among MHT users and examined associations by APOE ε4 status. Current MHT users, but not past users, showed older gray and white matter brain age, with a difference of up to 9 months, and smaller hippocampal volumes compared to never-users. Longer duration of use and older age at last use post-menopause were associated with older gray and white matter brain age, larger white matter hyperintensity volume, and smaller hippocampal volumes. MHT users with a history of hysterectomy ± bilateral oophorectomy showed younger gray matter brain age relative to MHT users without such history. We found no associations by APOE ε4 status or with other MHT variables. Our results indicate that population-level associations between MHT use and female brain health might vary depending on duration of use and past surgical history. The authors received funding from the Research Council of Norway (LTW: 223273, 249795, 273345, 298646, 300768), the South-Eastern Norway Regional Health Authority (CB: 2023037, 2022103; LTW: 2018076, 2019101), the European Research Council under the European Union's Horizon 2020 research and innovation program (LTW: 802998), the Swiss National Science Foundation (AMGdL: PZ00P3_193658), the Canadian Institutes for Health Research (LAMG: PJT-173554), the Treliving Family Chair in Women's Mental Health at the Centre for Addiction and Mental Health (LAMG), womenmind at the Centre for Addiction and Mental Health (LAMG, BHL), the Ann S. Bowers Women's Brain Health Initiative (EGJ), and the National Institutes of Health (EGJ: AG063843).
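
A hedged sketch of the kind of association test described above: an ordinary least squares regression of a brain measure (here a brain-age gap) on MHT user status with age as a covariate. The data frame, effect size, and covariate set are simulated for illustration; the study's actual models and covariates are not reproduced here.

```python
# Hedged sketch: association test between a brain measure and MHT status (simulated data).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 500
df = pd.DataFrame({
    "age": rng.uniform(45, 80, n),
    "mht_current": rng.integers(0, 2, n),   # 1 = current user (toy assignment)
})
# simulate a small positive shift in brain-age gap for current users
df["brain_age_gap"] = 0.4 * df["mht_current"] + rng.normal(0, 2, n)

model = smf.ols("brain_age_gap ~ mht_current + age", data=df).fit()
print(model.summary().tables[1])            # coefficient table, incl. mht_current
```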

Predicting abnormal fetal growth using deep learning.

Mikołaj KW, Christensen AN, Taksøe-Vester CA, Feragen A, Petersen OB, Lin M, Nielsen M, Svendsen MBS, Tolsgaard MG

pubmed · May 29, 2025
Ultrasound assessment of fetal size and growth is the mainstay of monitoring fetal well-being during pregnancy, as being small for gestational age (SGA) or large for gestational age (LGA) poses significant risks for both the fetus and the mother. This study aimed to enhance the prediction accuracy of abnormal fetal growth. We developed a deep learning model trained on a dataset of 433,096 ultrasound images derived from 94,538 examinations of 65,752 patients. The deep learning model detected both SGA (70% vs. 58%) and LGA (55% vs. 41%) significantly better than the current clinical standard, the Hadlock formula (p < 0.001). Additionally, the model estimates were significantly less biased across all demographic and technical variables than those of the Hadlock formula. Incorporating key anatomical features such as cortical structures, liver texture, and skin thickness likely accounted for the improved prediction accuracy.
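
To make detection rates like those above concrete, the sketch below labels fetuses as SGA (<10th percentile) or LGA (>90th percentile) and computes a method's detection rate as its sensitivity for each label. The percentile thresholds are conventional definitions, and the arrays are invented for illustration.

```python
# Hedged sketch: SGA/LGA labeling by percentile and per-label detection rates.
import numpy as np

def detection_rate(true_flag: np.ndarray, pred_flag: np.ndarray) -> float:
    """Fraction of true cases the method flags (sensitivity)."""
    return (true_flag & pred_flag).sum() / true_flag.sum()

# toy data: actual and predicted weight percentiles for 10 fetuses
actual_pct = np.array([3, 50, 8, 95, 40, 97, 5, 60, 92, 15])
pred_pct   = np.array([7, 45, 12, 90, 35, 99, 4, 55, 80, 20])

sga_true, sga_pred = actual_pct < 10, pred_pct < 10
lga_true, lga_pred = actual_pct > 90, pred_pct > 90
print(f"SGA detection {detection_rate(sga_true, sga_pred):.0%}, "
      f"LGA detection {detection_rate(lga_true, lga_pred):.0%}")
```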

Enhanced Pelvic CT Segmentation via Deep Learning: A Study on Loss Function Effects.

Ghaedi E, Asadi A, Hosseini SA, Arabi H

pubmed · May 29, 2025
Effective radiotherapy planning requires precise delineation of organs at risk (OARs), but the traditional manual method is laborious and subject to variability. This study explores using convolutional neural networks (CNNs) for automating OAR segmentation in pelvic CT images, focusing on the bladder, prostate, rectum, and femoral heads (FHs) as an efficient alternative to manual segmentation. Utilizing the Medical Open Network for AI (MONAI) framework, we implemented and compared U-Net, ResU-Net, SegResNet, and Attention U-Net models and explored different loss functions to enhance segmentation accuracy. Our study involved 240 patients for prostate segmentation and 220 patients for the other organs. The models' performance was evaluated using metrics such as the Dice similarity coefficient (DSC), Jaccard index (JI), and the 95th percentile Hausdorff distance (95thHD), benchmarking the results against expert segmentation masks. SegResNet outperformed all models, achieving DSC values of 0.951 for the bladder, 0.829 for the prostate, 0.860 for the rectum, 0.979 for the left FH, and 0.985 for the right FH (p < 0.05 vs. U-Net and ResU-Net). Attention U-Net also excelled, particularly for bladder and rectum segmentation. Experiments with loss functions on SegResNet showed that Dice loss consistently delivered optimal or equivalent performance across OARs, while DiceCE slightly enhanced prostate segmentation (DSC = 0.845, p = 0.0138). These results indicate that advanced CNNs, especially SegResNet, paired with optimized loss functions, provide a reliable, efficient alternative to manual methods, promising improved precision in radiotherapy planning.
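
Since the study uses the MONAI framework and compares Dice-based losses, a minimal usage sketch of MONAI's DiceLoss and DiceCELoss on toy tensors may help; the tensor shapes, class count, and loss settings here are assumptions, not the study's configuration.

```python
# Hedged sketch: comparing MONAI's Dice and DiceCE losses on toy 3D tensors.
import torch
from monai.losses import DiceLoss, DiceCELoss

num_classes = 5                                         # e.g., background + 4 pelvic OARs
logits = torch.randn(2, num_classes, 32, 32, 32)        # raw network output (B, C, D, H, W)
labels = torch.randint(0, num_classes, (2, 1, 32, 32, 32))  # integer label masks

dice_loss = DiceLoss(to_onehot_y=True, softmax=True)
dice_ce_loss = DiceCELoss(to_onehot_y=True, softmax=True)

print("Dice loss:  ", dice_loss(logits, labels).item())
print("DiceCE loss:", dice_ce_loss(logits, labels).item())
```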

Deep Learning CAIPIRINHA-VIBE Improves and Accelerates Head and Neck MRI.

Nitschke LV, Lerchbaumer M, Ulas T, Deppe D, Nickel D, Geisel D, Kubicka F, Wagner M, Walter-Rittel T

pubmed · May 29, 2025
The aim of this study was to evaluate image quality for contrast-enhanced (CE) neck MRI with a deep learning-reconstructed VIBE sequence at acceleration factors (AF) 4 (DL4-VIBE) and 6 (DL6-VIBE). Patients referred for neck MRI were examined on a 3-Tesla scanner in this prospective, single-center study. Four CE fat-saturated (FS) VIBE sequences were acquired in each patient: Star-VIBE (4:01 min), VIBE (2:05 min), DL4-VIBE (0:24 min), and DL6-VIBE (0:17 min). Image quality was evaluated by three radiologists on a 5-point Likert scale covering overall image quality, muscle contour delineation, conspicuity of mucosa and pharyngeal musculature, FS uniformity, and motion artifacts. Objective image quality was assessed with signal-to-noise ratio (SNR), contrast-to-noise ratio (CNR), and quantification of metal artifacts. A total of 68 patients (60.3% male; mean age 57.4 ± 16 years) were included in this study. DL4-VIBE was superior for overall image quality, delineation of muscle contours, differentiation of mucosa and pharyngeal musculature, vascular delineation, and motion artifacts. Notably, DL4-VIBE exhibited exceptional FS uniformity (p < 0.001). SNR and CNR were superior for DL4-VIBE compared to all other sequences (p < 0.001). Metal artifacts were least pronounced in the standard VIBE, followed by DL4-VIBE (p < 0.001). Although DL6-VIBE was inferior to DL4-VIBE, it demonstrated improved FS homogeneity, delineation of pharyngeal mucosa, and CNR compared to Star-VIBE and VIBE. DL4-VIBE significantly improves image quality for CE neck MRI in a fraction of the scan time of conventional sequences.
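
The objective metrics above (SNR, CNR) are typically computed from region-of-interest (ROI) statistics, as in this sketch; the ROI values and the background-noise convention are illustrative assumptions, not the study's measurement protocol.

```python
# Hedged sketch: ROI-based SNR and CNR from simulated signal and noise samples.
import numpy as np

rng = np.random.default_rng(2)
tissue = rng.normal(300, 15, 500)      # ROI in enhancing tissue
muscle = rng.normal(120, 15, 500)      # ROI in reference muscle
background = rng.normal(0, 10, 500)    # air/background ROI used to estimate noise

noise_sd = background.std()
snr = tissue.mean() / noise_sd
cnr = abs(tissue.mean() - muscle.mean()) / noise_sd
print(f"SNR {snr:.1f}, CNR {cnr:.1f}")
```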