
An Optimized Framework of QSM Mask Generation Using Deep Learning: QSMmask-Net.

Lee G, Jung W, Sakaie KE, Oh SH

PubMed | Jun 1, 2025
Quantitative susceptibility mapping (QSM) provides the spatial distribution of magnetic susceptibility within tissues through sequential steps: phase unwrapping and echo combination, mask generation, background field removal, and dipole inversion. Accurate mask generation is crucial, as masks that exclude regions outside the brain and contain no holes are necessary to minimize errors and streaking artifacts during QSM reconstruction. Variations in susceptibility values can arise from different mask generation methods, highlighting the importance of optimizing mask creation. In this study, we propose QSMmask-net, a deep neural network-based method for generating precise QSM masks. QSMmask-net achieved the highest Dice score compared with other mask generation methods. Mean susceptibility values using QSMmask-net masks showed the lowest differences from manual masks (ground truth) in simulations and healthy controls (no significant difference, p > 0.05). Linear regression analysis confirmed a strong correlation with manual masks for hemorrhagic lesions (slope = 0.9814 ± 0.007, intercept = 0.0031 ± 0.001, R<sup>2</sup> = 0.9992, p < 0.05). We demonstrate that the choice of mask generation method can affect susceptibility value estimation. QSMmask-net reduces the labor required for mask generation while providing mask quality comparable to manual methods. The proposed method enables users without specialized expertise to create optimized masks, potentially broadening the applicability of QSM.
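
The mask properties the abstract emphasizes (exclude non-brain regions, no holes) can be illustrated with a conventional, non-learned baseline. The sketch below is not QSMmask-net; it is a minimal threshold-plus-morphology mask generator in Python, with the threshold fraction and array names chosen as assumptions for illustration.

```python
# Illustrative sketch (not the authors' QSMmask-Net): a simple threshold-based
# brain mask with hole filling, the kind of baseline a learned mask replaces.
# Assumes `magnitude` is a 3D numpy array from a multi-echo GRE acquisition.
import numpy as np
from scipy import ndimage

def threshold_mask(magnitude: np.ndarray, frac: float = 0.1) -> np.ndarray:
    """Binary brain mask: threshold, keep largest component, fill holes."""
    mask = magnitude > frac * magnitude.max()
    labels, n = ndimage.label(mask)
    if n > 1:
        sizes = ndimage.sum(mask, labels, range(1, n + 1))
        mask = labels == (np.argmax(sizes) + 1)   # largest connected component
    return ndimage.binary_fill_holes(mask)        # holes cause streaking artifacts

# Example with synthetic data
mag = np.random.rand(64, 64, 32)
mask = threshold_mask(mag)
print(mask.shape, mask.dtype)
```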

Toward Noninvasive High-Resolution In Vivo pH Mapping in Brain Tumors by <sup>31</sup>P-Informed deepCEST MRI.

Schüre JR, Rajput J, Shrestha M, Deichmann R, Hattingen E, Maier A, Nagel AM, Dörfler A, Steidl E, Zaiss M

PubMed | Jun 1, 2025
The intracellular pH (pH<sub>i</sub>) is critical for understanding various pathologies, including brain tumors. While conventional pH<sub>i</sub> measurement through <sup>31</sup>P-MRS suffers from low spatial resolution and long scan times, <sup>1</sup>H-based APT-CEST imaging offers higher resolution with shorter scan times. This study aims to directly predict <sup>31</sup>P-pH<sub>i</sub> maps from CEST data using a fully connected neural network. Fifteen tumor patients were scanned on a 3-T Siemens PRISMA scanner and underwent <sup>1</sup>H-based CEST and T1 measurements as well as <sup>31</sup>P-MRS. A neural network was trained voxel-wise on CEST and T1 data to predict <sup>31</sup>P-pH<sub>i</sub> values, using data from 11 patients for training and 4 for testing. The predicted pH<sub>i</sub> maps were additionally down-sampled to the original <sup>31</sup>P-pH<sub>i</sub> resolution to calculate the RMSE and analyze the correlation, while the higher-resolution predictions were compared with conventional CEST metrics. The results demonstrated a general correspondence between the predicted deepCEST pH<sub>i</sub> maps and the measured <sup>31</sup>P-pH<sub>i</sub> in test patients. However, slight discrepancies were also observed, with an RMSE of 0.04 pH units in tumor regions. High-resolution predictions revealed tumor heterogeneity and features not visible in conventional CEST data, suggesting that the model captures unique pH information and is not simply a T1 segmentation. The deepCEST pH<sub>i</sub> neural network exploits the pH sensitivity hidden in APT-CEST data and offers pH<sub>i</sub> maps with higher spatial resolution and shorter scan time compared with <sup>31</sup>P-MRS. Although this approach is constrained by the limitations of the acquired data, it can be extended with additional CEST features in future studies, offering a promising approach for 3D pH imaging in a clinical environment.
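
As a rough illustration of the voxel-wise fully connected network described above, the sketch below maps a per-voxel feature vector (CEST offsets plus T1) to a single pH value in PyTorch. The feature count, layer widths, and loss target are assumptions, not the authors' configuration.

```python
# Minimal sketch of a voxel-wise regressor in the spirit of deepCEST: a fully
# connected network maps per-voxel CEST spectral features plus T1 to one pH
# value. Feature count and layer sizes are illustrative assumptions.
import torch
import torch.nn as nn

class VoxelwisePHNet(nn.Module):
    def __init__(self, n_features: int = 57):   # e.g. CEST offsets + T1
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_features, 128), nn.ReLU(),
            nn.Linear(128, 64), nn.ReLU(),
            nn.Linear(64, 1),                    # predicted pH_i per voxel
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)

model = VoxelwisePHNet()
voxels = torch.randn(1024, 57)                   # batch of voxels
pred_ph = model(voxels)
loss = nn.functional.mse_loss(pred_ph, torch.full_like(pred_ph, 7.0))
print(pred_ph.shape, float(loss))
```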

Fully automated image quality assessment based on deep learning for carotid computed tomography angiography: A multicenter study.

Fu W, Ma Z, Yang Z, Yu S, Zhang Y, Zhang X, Mei B, Meng Y, Ma C, Gong X

PubMed | Jun 1, 2025
To develop and evaluate the performance of a fully automated model based on deep learning and a multiple logistic regression algorithm for image quality assessment (IQA) of carotid computed tomography angiography (CTA) images. This study retrospectively collected 840 carotid CTA images from four tertiary hospitals. Three radiologists independently assessed image quality using a 3-point Likert scale, based on the degree of noise, vessel enhancement, arterial vessel contrast, vessel edge sharpness, and overall diagnostic acceptability. An automated assessment model was developed using a training dataset of 600 carotid CTA images. The assessment steps were: (i) selection of objective representative slices; (ii) use of a 3D Res U-net to extract objective indices from the representative slices; and (iii) use of single objective indices and multiple indices combined to develop logistic regression models for IQA. In the internal and external test datasets (n = 240), model performance was evaluated using sensitivity, specificity, precision, F-score, accuracy, and the area under the receiver operating characteristic curve (AUC), and the IQA results of the models were compared with the radiologists' consensus. The representative slices were determined based on the same-length model. The multi-index model performed excellently in the internal and external test datasets, with AUCs of 0.98 and 0.97. Agreement between the model and the radiologists' consensus reached 91.8% (95% CI: 87.0-96.5) and 92.6% (95% CI: 86.9-98.4) in the internal and external test datasets, respectively. The fully automated multi-index model showed performance equivalent to the subjective perceptions of radiologists, with greater efficiency for IQA.
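
The multi-index step, combining several objective quality indices in a logistic regression classifier, can be sketched as follows with scikit-learn. The four placeholder features stand in for indices such as noise and vessel edge sharpness; they are random data, not the 3D Res U-net outputs used in the study.

```python
# Sketch of the multi-index step only: combining objective quality indices
# (e.g. noise, vessel enhancement, contrast, edge sharpness) in a logistic
# regression classifier. Features are random placeholders.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
X = rng.normal(size=(600, 4))         # 4 objective indices per exam
y = (X @ np.array([1.0, 1.2, 0.8, 0.5]) + rng.normal(scale=0.5, size=600)) > 0

clf = LogisticRegression().fit(X, y)  # multi-index model
auc = roc_auc_score(y, clf.predict_proba(X)[:, 1])
print(f"training AUC: {auc:.2f}")
```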

Artificial intelligence in pediatric osteopenia diagnosis: evaluating deep network classification and model interpretability using wrist X-rays.

Harris CE, Liu L, Almeida L, Kassick C, Makrogiannis S

PubMed | Jun 1, 2025
Osteopenia is a bone disorder that causes low bone density and affects millions of people worldwide. Diagnosis of this condition is commonly achieved through clinical assessment of bone mineral density (BMD). State-of-the-art machine learning (ML) techniques, such as convolutional neural networks (CNNs) and transformer models, have gained increasing popularity in medicine. In this work, we employ six deep networks for osteopenia vs. healthy bone classification using X-ray imaging from the pediatric wrist dataset GRAZPEDWRI-DX. We apply two explainable AI techniques to analyze and interpret visual explanations for network decisions. Experimental results show that deep networks are able to effectively learn osteopenic and healthy bone features, achieving high classification accuracy rates. Among the six evaluated networks, DenseNet201 with transfer learning yielded the top classification accuracy at 95.2%. Furthermore, visual explanations of CNN decisions provide valuable insight into the black-box inner workings and present interpretable results. Our evaluation of deep network classification results highlights their capability to accurately differentiate between osteopenic and healthy bones in pediatric wrist X-rays. The combination of high classification accuracy and interpretable visual explanations underscores the promise of incorporating machine learning techniques into clinical workflows for the early and accurate diagnosis of osteopenia.
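
A plausible transfer-learning setup of the kind described (DenseNet201 with a replaced classification head) is sketched below in PyTorch/torchvision; the frozen backbone, head size, and input handling are assumptions rather than the paper's exact configuration.

```python
# Sketch of DenseNet201 transfer learning for binary (osteopenic vs. healthy)
# classification; weights, head size and training details are assumptions.
import torch
import torch.nn as nn
from torchvision import models

model = models.densenet201(weights=models.DenseNet201_Weights.IMAGENET1K_V1)
for p in model.parameters():          # freeze the pretrained feature extractor
    p.requires_grad = False
model.classifier = nn.Linear(model.classifier.in_features, 2)  # new 2-class head

x = torch.randn(4, 3, 224, 224)       # batch of wrist radiographs (grayscale replicated to RGB)
logits = model(x)
print(logits.shape)                   # torch.Size([4, 2])
```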

Broadening the Net: Overcoming Challenges and Embracing Novel Technologies in Lung Cancer Screening.

Czerlanis CM, Singh N, Fintelmann FJ, Damaraju V, Chang AEB, White M, Hanna N

PubMed | Jun 1, 2025
Lung cancer is one of the leading causes of cancer-related mortality worldwide, with most cases diagnosed at advanced stages where curative treatment options are limited. Low-dose computed tomography (LDCT) for lung cancer screening (LCS) of individuals selected based on age and smoking history has shown a significant reduction in lung cancer-specific mortality. The number needed to screen to prevent one death from lung cancer is lower than that for breast cancer, cervical cancer, and colorectal cancer. Despite the substantial impact on reducing lung cancer-related mortality and proof that LCS with LDCT is effective, uptake of LCS has been low and LCS eligibility criteria remain imperfect. While LCS programs have historically faced patient recruitment challenges, research suggests that there are novel opportunities to both identify and improve screening for at-risk populations. In this review, we discuss the global obstacles to implementing LCS programs and strategies to overcome barriers in resource-limited settings. We explore successful approaches to promote LCS through robust engagement with community partners. Finally, we examine opportunities to enhance LCS in at-risk populations not captured by current eligibility criteria, including never smokers and individuals with a family history of lung cancer, with a focus on early detection through novel artificial intelligence technologies.

Deep learning-based MRI reconstruction with Artificial Fourier Transform Network (AFTNet).

Yang Y, Zhang Y, Li Z, Tian JS, Dagommer M, Guo J

PubMed | Jun 1, 2025
Deep complex-valued neural networks (CVNNs) provide a powerful way to leverage complex number operations and representations and have succeeded in several phase-based applications. However, previous networks have not fully explored the impact of complex-valued networks in the frequency domain. Here, we introduce a unified complex-valued deep learning framework - the Artificial Fourier Transform Network (AFTNet) - which combines domain-manifold learning and CVNNs. AFTNet can be readily used to solve image inverse problems involving domain transformation, especially accelerated magnetic resonance imaging (MRI) reconstruction and other applications. While conventional methods typically utilize magnitude images or treat the real and imaginary components of k-space data as separate channels, our approach directly processes raw k-space data in the frequency domain using complex-valued operations. This allows a mapping between the frequency (k-space) and image domains to be determined through cross-domain learning. We show that AFTNet achieves superior accelerated MRI reconstruction compared to existing approaches. Furthermore, our approach can be applied to other tasks, such as denoised magnetic resonance spectroscopy (MRS) reconstruction, and to datasets with various contrasts. The AFTNet presented here is a valuable preprocessing component for different preclinical studies and provides an innovative alternative for solving inverse problems in imaging and spectroscopy. The code is available at: https://github.com/yanting-yang/AFT-Net.
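
The core idea, taking raw complex k-space as input, applying complex-valued operations, and returning an image-domain result, can be illustrated with PyTorch complex tensors as below. This is not the AFTNet architecture (see the linked repository); the layer definition and sizes are assumptions.

```python
# Minimal sketch of complex k-space in, image domain out, using PyTorch
# complex tensors and the FFT. Illustration only, not AFTNet itself.
import torch
import torch.nn as nn

class ComplexLinear(nn.Module):
    """Complex-valued linear layer built from two real layers: (A + iB)(a + ib)."""
    def __init__(self, n: int):
        super().__init__()
        self.A = nn.Linear(n, n)
        self.B = nn.Linear(n, n)

    def forward(self, z: torch.Tensor) -> torch.Tensor:
        a, b = z.real, z.imag
        return torch.complex(self.A(a) - self.B(b), self.A(b) + self.B(a))

n = 64
layer = ComplexLinear(n)
kspace = torch.randn(1, n, n, dtype=torch.complex64)  # undersampled k-space stand-in
refined = layer(kspace)                               # learned refinement in k-space
image = torch.fft.ifft2(refined).abs()                # transform to image domain
print(image.shape)
```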

Tailoring ventilation and respiratory management in pediatric critical care: optimizing care with precision medicine.

Beauchamp FO, Thériault J, Sauthier M

PubMed | Jun 1, 2025
Critically ill children admitted to the intensive care unit frequently need respiratory care to support lung function. Mechanical ventilation is a complex field with multiple parameters to set. The development of precision medicine will allow clinicians to personalize respiratory care and improve patient outcomes. Lung and diaphragmatic ultrasound, electrical impedance tomography, neurally adjusted ventilatory assist ventilation, as well as the use of monitoring data in machine learning models, are increasingly used to tailor care. Each modality offers insights into different aspects of the patient's respiratory system function and enables treatment to be adjusted to better support the patient's physiology. Precision medicine in respiratory care has been associated with decreased ventilation time, increased extubation and ventilation weaning success, and an increased ability to identify phenotypes to guide treatment and predict outcomes. This review focuses on the use of precision medicine in the setting of pediatric acute respiratory distress syndrome, asthma, bronchiolitis, extubation readiness trials and ventilation weaning, ventilator-associated pneumonia, and other respiratory tract infections. Precision medicine is revolutionizing respiratory care and will decrease complications associated with ventilation. More research is needed to standardize its use and better evaluate its impact on patient outcomes.

Integrating anatomy and electrophysiology in the healthy human heart: Insights from biventricular statistical shape analysis using universal coordinates.

Van Santvliet L, Zappon E, Gsell MAF, Thaler F, Blondeel M, Dymarkowski S, Claessen G, Willems R, Urschler M, Vandenberk B, Plank G, De Vos M

PubMed | Jun 1, 2025
A cardiac digital twin is a virtual replica of a patient-specific heart, mimicking its anatomy and physiology. A crucial step in building a cardiac digital twin is anatomical twinning, where the computational mesh of the digital twin is tailored to the patient-specific cardiac anatomy. In a number of studies, the effect of anatomical variation on clinically relevant functional measurements, such as electrocardiograms (ECGs), has been investigated using computational simulations. While such a simulation environment provides researchers with a carefully controlled ground truth, the impact of anatomical differences on functional measurements in real-world patients remains understudied. In this study, we develop a biventricular statistical shape model and use it to quantify the effect of biventricular anatomy on ECG-derived and demographic features, providing novel insights for the development of digital twins of cardiac electrophysiology. To this end, a dataset comprising high-resolution cardiac CT scans from 271 healthy individuals, including athletes, is utilized. Furthermore, a novel, universal, ventricular coordinate-based method is developed to establish lightweight shape correspondence. The performance of the shape model is rigorously established, focusing on its dimensionality reduction capabilities and its training data requirements. The most important variability in healthy ventricles captured by the model is their size, followed by their elongation. These anatomical factors are found to correlate significantly with ECG-derived and demographic features. Additionally, a comprehensive synthetic cohort is made available, featuring ready-to-use biventricular meshes with fiber structures and anatomical region annotations. These meshes are well suited for electrophysiological simulations.
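
The statistical-shape-model core, PCA over corresponding mesh vertex coordinates, can be sketched as follows; the correspondence step (here assumed already solved via universal ventricular coordinates) is omitted and the data are synthetic placeholders.

```python
# Sketch of a statistical shape model: PCA over flattened, corresponding mesh
# vertex coordinates, then sampling mode weights to synthesize new anatomies.
import numpy as np
from sklearn.decomposition import PCA

n_subjects, n_vertices = 271, 5000
shapes = np.random.rand(n_subjects, n_vertices * 3)   # (x, y, z) per vertex

pca = PCA(n_components=10).fit(shapes)                 # shape modes
print("variance explained by first 2 modes:",
      pca.explained_variance_ratio_[:2])               # e.g. size, elongation

# Synthesize a new anatomy by sampling mode weights around the mean shape
weights = np.random.normal(scale=np.sqrt(pca.explained_variance_))
new_shape = pca.mean_ + weights @ pca.components_
print(new_shape.reshape(n_vertices, 3).shape)
```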

Artificial intelligence in fetal brain imaging: Advancements, challenges, and multimodal approaches for biometric and structural analysis.

Wang L, Fatemi M, Alizad A

PubMed | Jun 1, 2025
Artificial intelligence (AI) is transforming fetal brain imaging by addressing key challenges in diagnostic accuracy, efficiency, and data integration in prenatal care. This review explores AI's application in enhancing fetal brain imaging through ultrasound (US) and magnetic resonance imaging (MRI), with a particular focus on multimodal integration to leverage their complementary strengths. By critically analyzing state-of-the-art AI methodologies, including deep learning frameworks and attention-based architectures, this study highlights significant advancements alongside persistent challenges. Notable barriers include the scarcity of diverse and high-quality datasets, computational inefficiencies, and ethical concerns surrounding data privacy and security. Special attention is given to multimodal approaches that integrate US and MRI, combining the accessibility and real-time imaging of US with the superior soft tissue contrast of MRI to improve diagnostic precision. Furthermore, this review emphasizes the transformative potential of AI in fostering clinical adoption through innovations such as real-time diagnostic tools and human-AI collaboration frameworks. By providing a comprehensive roadmap for future research and implementation, this study underscores AI's potential to redefine fetal imaging practices, enhance diagnostic accuracy, and ultimately improve perinatal care outcomes.

Deep learning for liver lesion segmentation and classification on staging CT scans of colorectal cancer patients: a multi-site technical validation study.

Bashir U, Wang C, Smillie R, Rayabat Khan AK, Tamer Ahmed H, Ordidge K, Power N, Gerlinger M, Slabaugh G, Zhang Q

PubMed | Jun 1, 2025
To validate a liver lesion detection and classification model using staging computed tomography (CT) scans of colorectal cancer (CRC) patients. A UNet-based deep learning model was trained on 272 public liver tumour CT scans and tested on 220 CRC staging CTs acquired from a single institution (2014-2019). Performance metrics included lesion detection rates by size (<10 mm, 10-20 mm, >20 mm), segmentation accuracy (Dice similarity coefficient, DSC), volume measurement agreement (Bland-Altman limits of agreement, LOAs; intraclass correlation coefficient, ICC), and classification accuracy (malignant vs benign) at patient and lesion levels (detected lesions only). The model detected 743 out of 884 lesions (84%), with detection rates of 75%, 91.3%, and 96% for lesions <10 mm, 10-20 mm, and >20 mm, respectively. The median DSC was 0.76 (95% CI: 0.72-0.80) for lesions <10 mm, 0.83 (95% CI: 0.79-0.86) for 10-20 mm, and 0.85 (95% CI: 0.82-0.88) for >20 mm. Bland-Altman analysis showed a mean volume bias of -0.12 cm<sup>3</sup> (LOAs: -1.68 to +1.43 cm<sup>3</sup>), and the ICC was 0.81. Lesion-level classification showed 99.5% sensitivity, 65.7% specificity, 53.8% positive predictive value (PPV), 99.7% negative predictive value (NPV), and 75.4% accuracy. Patient-level classification had 100% sensitivity, 27.1% specificity, 59.2% PPV, 100% NPV, and 64.5% accuracy. The model demonstrates strong lesion detection and segmentation performance, particularly for sub-centimetre lesions. Although classification accuracy was moderate, the 100% NPV suggests strong potential as a CRC staging screening tool. Future studies will assess its impact on radiologist performance and efficiency.
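
Two of the reported agreement metrics, the Dice similarity coefficient and the Bland-Altman bias with 95% limits of agreement, are computed below as they are conventionally defined; the masks and volumes are synthetic placeholders, not the study's data.

```python
# Sketch of two standard agreement metrics: Dice similarity coefficient between
# predicted and reference lesion masks, and Bland-Altman bias with 95% limits
# of agreement for lesion volumes. Inputs are synthetic placeholders.
import numpy as np

def dice(pred: np.ndarray, ref: np.ndarray) -> float:
    pred, ref = pred.astype(bool), ref.astype(bool)
    denom = pred.sum() + ref.sum()
    return 2.0 * np.logical_and(pred, ref).sum() / denom if denom else 1.0

def bland_altman(v_pred: np.ndarray, v_ref: np.ndarray):
    diff = v_pred - v_ref
    bias = diff.mean()
    half_width = 1.96 * diff.std(ddof=1)
    return bias, (bias - half_width, bias + half_width)

pred = np.random.rand(64, 64, 64) > 0.7
ref = np.random.rand(64, 64, 64) > 0.7
vols_pred, vols_ref = np.random.rand(50) * 20, np.random.rand(50) * 20
print(f"DSC = {dice(pred, ref):.2f}")
print("bias, LOAs (cm^3):", bland_altman(vols_pred, vols_ref))
```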