
Deep learning-based MRI reconstruction with Artificial Fourier Transform Network (AFTNet).

Yang Y, Zhang Y, Li Z, Tian JS, Dagommer M, Guo J

PubMed · Jun 1, 2025
Deep complex-valued neural networks (CVNNs) provide a powerful way to leverage complex number operations and representations and have succeeded in several phase-based applications. However, previous networks have not fully explored the impact of complex-valued networks in the frequency domain. Here, we introduce a unified complex-valued deep learning framework - Artificial Fourier Transform Network (AFTNet) - which combines domain-manifold learning and CVNNs. AFTNet can be readily used to solve image inverse problems in domain transformation, especially for accelerated magnetic resonance imaging (MRI) reconstruction and other applications. While conventional methods typically utilize magnitude images or treat the real and imaginary components of k-space data as separate channels, our approach directly processes raw k-space data in the frequency domain, utilizing complex-valued operations. This allows for a mapping between the frequency (k-space) and image domain to be determined through cross-domain learning. We show that AFTNet achieves superior accelerated MRI reconstruction compared to existing approaches. Furthermore, our approach can be applied to various tasks, such as denoised magnetic resonance spectroscopy (MRS) reconstruction and datasets with various contrasts. The AFTNet presented here is a valuable preprocessing component for different preclinical studies and provides an innovative alternative for solving inverse problems in imaging and spectroscopy. The code is available at: https://github.com/yanting-yang/AFT-Net.
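
To illustrate the complex-valued, cross-domain idea described in this abstract, the sketch below shows a learnable complex-valued linear map applied to 1D k-space data in PyTorch. It is an assumption-laden toy example (layer names, sizes, and the synthetic k-space signal are all illustrative), not the AFTNet implementation; see the linked repository for the authors' code.

```python
import torch
import torch.nn as nn

class ComplexLinear(nn.Module):
    """Learnable complex-valued linear map: y = (Wr + i*Wi) @ (xr + i*xi)."""
    def __init__(self, n_in, n_out):
        super().__init__()
        self.wr = nn.Linear(n_in, n_out, bias=False)  # real part of the weights
        self.wi = nn.Linear(n_in, n_out, bias=False)  # imaginary part of the weights

    def forward(self, xr, xi):
        # Complex multiplication carried out with real-valued operations.
        yr = self.wr(xr) - self.wi(xi)
        yi = self.wr(xi) + self.wi(xr)
        return yr, yi

# Toy example: map a 1D "k-space" signal towards the image domain with learned weights.
n = 64
layer = ComplexLinear(n, n)
kspace = torch.fft.fft(torch.randn(1, n))        # synthetic complex k-space data
img_r, img_i = layer(kspace.real, kspace.imag)   # learned cross-domain mapping
print(img_r.shape, img_i.shape)                  # torch.Size([1, 64]) each
```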

Tailoring ventilation and respiratory management in pediatric critical care: optimizing care with precision medicine.

Beauchamp FO, Thériault J, Sauthier M

PubMed · Jun 1, 2025
Critically ill children admitted to the intensive care unit frequently need respiratory care to support lung function. Mechanical ventilation is a complex field with multiple parameters to set. The development of precision medicine will allow clinicians to personalize respiratory care and improve patient outcomes. Lung and diaphragmatic ultrasound, electrical impedance tomography, neurally adjusted ventilatory assist ventilation, as well as the use of monitoring data in machine learning models, are increasingly used to tailor care. Each modality offers insight into a different aspect of the patient's respiratory function and enables treatment to be adjusted to better support the patient's physiology. Precision medicine in respiratory care has been associated with decreased ventilation time, increased extubation and ventilation weaning success, and an improved ability to identify phenotypes to guide treatment and predict outcomes. This review focuses on the use of precision medicine in pediatric acute respiratory distress syndrome, asthma, bronchiolitis, extubation readiness trials and ventilation weaning, ventilator-associated pneumonia, and other respiratory tract infections. Precision medicine is revolutionizing respiratory care and will decrease complications associated with ventilation. More research is needed to standardize its use and better evaluate its impact on patient outcomes.

Integrating anatomy and electrophysiology in the healthy human heart: Insights from biventricular statistical shape analysis using universal coordinates.

Van Santvliet L, Zappon E, Gsell MAF, Thaler F, Blondeel M, Dymarkowski S, Claessen G, Willems R, Urschler M, Vandenberk B, Plank G, De Vos M

PubMed · Jun 1, 2025
A cardiac digital twin is a virtual replica of a patient-specific heart, mimicking its anatomy and physiology. A crucial step in building a cardiac digital twin is anatomical twinning, where the computational mesh of the digital twin is tailored to the patient-specific cardiac anatomy. A number of studies have used computational simulations to investigate the effect of anatomical variation on clinically relevant functional measurements such as electrocardiograms (ECGs). While such a simulation environment provides researchers with a carefully controlled ground truth, the impact of anatomical differences on functional measurements in real-world patients remains understudied. In this study, we develop a biventricular statistical shape model and use it to quantify the effect of biventricular anatomy on ECG-derived and demographic features, providing novel insights for the development of digital twins of cardiac electrophysiology. To this end, a dataset comprising high-resolution cardiac CT scans from 271 healthy individuals, including athletes, is utilized. Furthermore, a novel, universal, ventricular coordinate-based method is developed to establish lightweight shape correspondence. The performance of the shape model is rigorously established, focusing on its dimensionality reduction capabilities and its training data requirements. The most important mode of variability in healthy ventricles captured by the model is size, followed by elongation. These anatomical factors are found to significantly correlate with ECG-derived and demographic features. Additionally, a comprehensive synthetic cohort is made available, featuring ready-to-use biventricular meshes with fiber structures and anatomical region annotations. These meshes are well-suited for electrophysiological simulations.
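
A statistical shape model of the kind described above is commonly built by applying principal component analysis to corresponded mesh vertex coordinates. The sketch below shows that core dimensionality-reduction step with scikit-learn; the random meshes and component count are placeholders, not the authors' data or pipeline.

```python
import numpy as np
from sklearn.decomposition import PCA

# Toy data: n_subjects meshes, each with n_points corresponded 3D vertices.
rng = np.random.default_rng(0)
n_subjects, n_points = 50, 500
meshes = rng.normal(size=(n_subjects, n_points, 3))

# Flatten each mesh into a single shape vector (x1, y1, z1, x2, ...).
X = meshes.reshape(n_subjects, -1)

# Fit the shape model: the mean shape plus principal modes of variation.
pca = PCA(n_components=10)
scores = pca.fit_transform(X)                 # per-subject shape coefficients
mean_shape = pca.mean_.reshape(n_points, 3)

# Reconstruct a shape from its first two modes (e.g. "size" and "elongation").
coeffs = np.zeros(10)
coeffs[:2] = scores[0, :2]
recon = (pca.mean_ + coeffs @ pca.components_).reshape(n_points, 3)
print(pca.explained_variance_ratio_[:3])      # variance explained by leading modes
```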

Artificial intelligence in fetal brain imaging: Advancements, challenges, and multimodal approaches for biometric and structural analysis.

Wang L, Fatemi M, Alizad A

PubMed · Jun 1, 2025
Artificial intelligence (AI) is transforming fetal brain imaging by addressing key challenges in diagnostic accuracy, efficiency, and data integration in prenatal care. This review explores AI's application in enhancing fetal brain imaging through ultrasound (US) and magnetic resonance imaging (MRI), with a particular focus on multimodal integration to leverage their complementary strengths. By critically analyzing state-of-the-art AI methodologies, including deep learning frameworks and attention-based architectures, this study highlights significant advancements alongside persistent challenges. Notable barriers include the scarcity of diverse and high-quality datasets, computational inefficiencies, and ethical concerns surrounding data privacy and security. Special attention is given to multimodal approaches that integrate US and MRI, combining the accessibility and real-time imaging of US with the superior soft tissue contrast of MRI to improve diagnostic precision. Furthermore, this review emphasizes the transformative potential of AI in fostering clinical adoption through innovations such as real-time diagnostic tools and human-AI collaboration frameworks. By providing a comprehensive roadmap for future research and implementation, this study underscores AI's potential to redefine fetal imaging practices, enhance diagnostic accuracy, and ultimately improve perinatal care outcomes.

Deep learning for liver lesion segmentation and classification on staging CT scans of colorectal cancer patients: a multi-site technical validation study.

Bashir U, Wang C, Smillie R, Rayabat Khan AK, Tamer Ahmed H, Ordidge K, Power N, Gerlinger M, Slabaugh G, Zhang Q

PubMed · Jun 1, 2025
To validate a liver lesion detection and classification model using staging computed tomography (CT) scans of colorectal cancer (CRC) patients. A UNet-based deep learning model was trained on 272 public liver tumour CT scans and tested on 220 CRC staging CTs acquired from a single institution (2014-2019). Performance metrics included lesion detection rates by size (<10 mm, 10-20 mm, >20 mm), segmentation accuracy (Dice similarity coefficient, DSC), volume measurement agreement (Bland-Altman limits of agreement, LOAs; intraclass correlation coefficient, ICC), and classification accuracy (malignant vs benign) at patient and lesion levels (detected lesions only). The model detected 743 of 884 lesions (84%), with detection rates of 75%, 91.3%, and 96% for lesions <10 mm, 10-20 mm, and >20 mm, respectively. The median DSC was 0.76 (95% CI: 0.72-0.80) for lesions <10 mm, 0.83 (95% CI: 0.79-0.86) for 10-20 mm, and 0.85 (95% CI: 0.82-0.88) for >20 mm. Bland-Altman analysis showed a mean volume bias of -0.12 cm³ (LOAs: -1.68 to +1.43 cm³), and the ICC was 0.81. Lesion-level classification showed 99.5% sensitivity, 65.7% specificity, 53.8% positive predictive value (PPV), 99.7% negative predictive value (NPV), and 75.4% accuracy. Patient-level classification had 100% sensitivity, 27.1% specificity, 59.2% PPV, 100% NPV, and 64.5% accuracy. The model demonstrates strong lesion detection and segmentation performance, particularly for lesions larger than 10 mm, although sub-centimetre lesions remain more challenging. Although classification accuracy was moderate, the 100% NPV suggests strong potential as a CRC staging screening tool. Future studies will assess its impact on radiologist performance and efficiency.
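
For reference, the two agreement metrics reported above, the Dice similarity coefficient and Bland-Altman limits of agreement, can be computed as in the following NumPy sketch. The masks and per-lesion volumes are hypothetical examples, not the study data.

```python
import numpy as np

def dice(pred, gt):
    """Dice similarity coefficient between two binary masks."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    return 2.0 * inter / (pred.sum() + gt.sum())

def bland_altman(vol_pred, vol_gt):
    """Mean bias and 95% limits of agreement between paired volume measurements."""
    diff = np.asarray(vol_pred) - np.asarray(vol_gt)
    bias = diff.mean()
    half_width = 1.96 * diff.std(ddof=1)
    return bias, (bias - half_width, bias + half_width)

# Hypothetical masks and per-lesion volumes (cm^3).
pred = np.zeros((64, 64), dtype=np.uint8); pred[20:40, 20:40] = 1
gt   = np.zeros((64, 64), dtype=np.uint8); gt[22:42, 22:42] = 1
print("DSC:", round(dice(pred, gt), 3))
print("Bland-Altman bias and LOAs:", bland_altman([1.2, 3.4, 0.8], [1.3, 3.1, 0.9]))
```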

Virtual monochromatic image-based automatic segmentation strategy using deep learning method.

Chen L, Yu S, Chen Y, Wei X, Yang J, Guo C, Zeng W, Yang C, Zhang J, Li T, Lin C, Le X, Zhang Y

PubMed · Jun 1, 2025
The image quality of single-energy CT (SECT) limits the accuracy of automatic segmentation. Dual-energy CT (DECT) may improve automatic segmentation, yet the performance and optimal strategy have not been investigated thoroughly. Based on DECT-generated virtual monochromatic images (VMIs), this study proposed a novel deep learning model (MIAU-Net) and evaluated its segmentation performance on head organs-at-risk (OARs). VMIs from 40 keV to 190 keV were retrospectively generated at intervals of 10 keV from the DECT scans of 46 patients. Images with expert delineations were used for training, validating, and testing MIAU-Net for automatic segmentation. The performance of MIAU-Net was compared with the existing U-Net, Attention U-Net, nnU-Net, and TransFuse methods based on the Dice similarity coefficient (DSC). Correlation analysis was performed to evaluate and optimize the impact of different virtual energies on segmentation accuracy. Using MIAU-Net, the average DSCs across all virtual energy levels were 93.78%, 81.75%, 84.46%, 92.85%, 94.40%, and 84.75% for the brain stem, optic chiasm, lens, mandible, eyes, and optic nerves, respectively, higher than previous publications using SECT. MIAU-Net achieved the highest average DSC (88.84%) with the fewest parameters (14.54 M) among all tested models. The results suggest that 60-80 keV is the optimal VMI energy range for soft-tissue delineation, while 100 keV is optimal for skeletal segmentation. This work proposed and validated a novel deep learning model for automatic segmentation based on DECT, suggesting potential advantages and OAR-specific optimal energies when using VMIs for automatic delineation.
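
The energy-selection step described above, choosing the VMI level that maximizes segmentation accuracy per organ, reduces to a small analysis loop like the one sketched below. The per-energy DSC table here is hypothetical and stands in for the study's per-organ results.

```python
import numpy as np

energies = np.arange(40, 200, 10)              # VMI levels from 40 to 190 keV
organs = ["brain stem", "lens", "mandible"]

# Hypothetical mean DSC per organ at each energy level (rows: organs).
rng = np.random.default_rng(1)
dsc = 0.85 + 0.05 * rng.random((len(organs), len(energies)))

# Pick the energy level with the best DSC for each organ.
for organ, row in zip(organs, dsc):
    best_kev = energies[int(np.argmax(row))]
    print(f"{organ}: best DSC {row.max():.3f} at {best_kev} keV")

# Simple per-organ correlation of DSC with energy, mirroring the correlation analysis.
for organ, row in zip(organs, dsc):
    r = np.corrcoef(energies, row)[0, 1]
    print(f"{organ}: Pearson r(DSC, keV) = {r:.2f}")
```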

Driving Knowledge to Action: Building a Better Future With Artificial Intelligence-Enabled Multidisciplinary Oncology.

Loaiza-Bonilla A, Thaker N, Chung C, Parikh RB, Stapleton S, Borkowski P

PubMed · Jun 1, 2025
Artificial intelligence (AI) is transforming multidisciplinary oncology at an unprecedented pace, redefining how clinicians detect, classify, and treat cancer. From earlier and more accurate diagnoses to personalized treatment planning, AI's impact is evident across radiology, pathology, radiation oncology, and medical oncology. By leveraging vast and diverse data-including imaging, genomic, clinical, and real-world evidence-AI algorithms can uncover complex patterns, accelerate drug discovery, and help identify optimal treatment regimens for each patient. However, realizing the full potential of AI also necessitates addressing concerns regarding data quality, algorithmic bias, explainability, privacy, and regulatory oversight-especially in low- and middle-income countries (LMICs), where disparities in cancer care are particularly pronounced. This study provides a comprehensive overview of how AI is reshaping cancer care, reviews its benefits and challenges, and outlines ethical and policy implications in line with ASCO's 2025 theme, <i>Driving Knowledge to Action.</i> We offer concrete calls to action for clinicians, researchers, industry stakeholders, and policymakers to ensure that AI-driven, patient-centric oncology is accessible, equitable, and sustainable worldwide.

An explainable adaptive channel weighting-based deep convolutional neural network for classifying renal disorders in computed tomography images.

Loganathan G, Palanivelan M

PubMed · Jun 1, 2025
Renal disorders are a significant public health concern and a cause of mortality related to renal failure. Manual diagnosis is subjective, labor-intensive, and depends on the expertise of nephrologists in renal anatomy. To improve workflow efficiency and enhance diagnostic accuracy, we propose an automated deep learning model, called EACWNet, which incorporates an adaptive channel weighting-based deep convolutional neural network and explainable artificial intelligence. The proposed model categorizes renal computed tomography images into classes such as cyst, normal, tumor, and stone. The adaptive channel weighting module utilizes both global and local contextual insights to refine the final feature-map channel weights through the integration of a scale-adaptive channel attention module in the higher convolutional blocks of the VGG-19 backbone employed in the proposed method. The efficacy of the EACWNet model has been assessed using a publicly available renal CT image dataset, attaining an accuracy of 98.87% and demonstrating a 1.75% improvement over the backbone model. However, the model exhibits class-wise precision variation, achieving higher precision for the cyst, normal, and tumor classes but lower precision for the stone class due to its inherent variability and heterogeneity. Furthermore, the model's predictions have been analyzed with explainable artificial intelligence methods such as local interpretable model-agnostic explanations (LIME) to better visualize and understand the predictions.
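
A channel attention module of the general kind mentioned above can be sketched as a squeeze-and-excitation-style block in PyTorch. This is an illustrative stand-in for the idea of adaptively reweighting feature-map channels, not the EACWNet or scale-adaptive module itself; the class name and reduction ratio are assumptions.

```python
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    """SE-style channel weighting: rescales feature-map channels using global context."""
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)            # global context per channel
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),                              # per-channel weights in (0, 1)
        )

    def forward(self, x):
        b, c, _, _ = x.shape
        w = self.fc(self.pool(x).view(b, c)).view(b, c, 1, 1)
        return x * w                                   # adaptively weighted channels

# Example: reweight a VGG-19-style feature map of shape (batch, 512, 14, 14).
feat = torch.randn(2, 512, 14, 14)
print(ChannelAttention(512)(feat).shape)               # torch.Size([2, 512, 14, 14])
```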

DeepValve: The first automatic detection pipeline for the mitral valve in Cardiac Magnetic Resonance imaging.

Monopoli G, Haas D, Singh A, Aabel EW, Ribe M, Castrini AI, Hasselberg NE, Bugge C, Five C, Haugaa K, Forsch N, Thambawita V, Balaban G, Maleckar MM

PubMed · Jun 1, 2025
Mitral valve (MV) assessment is key to diagnosing valvular disease and to addressing its serious downstream complications. Cardiac magnetic resonance (CMR) has become an essential diagnostic tool in MV disease, offering detailed views of the valve structure and function, and overcoming the limitations of other imaging modalities. Automated detection of the MV leaflets in CMR could enable rapid and precise assessments that enhance diagnostic accuracy. To address this gap, we introduce DeepValve, the first deep learning (DL) pipeline for MV detection using CMR. Within DeepValve, we tested three valve detection models: a keypoint-regression model (UNET-REG), a segmentation model (UNET-SEG) and a hybrid model based on keypoint detection (DSNT-REG). We also propose metrics for evaluating the quality of MV detection, including Procrustes-based metrics (UNET-REG, DSNT-REG) and customized Dice-based metrics (UNET-SEG). We developed and tested our models on a clinical dataset comprising 120 CMR images from patients with confirmed MV disease (mitral valve prolapse and mitral annular disjunction). Our results show that DSNT-REG delivered the best regression performance, accurately localizing the landmarks. UNET-SEG achieved satisfactory Dice and customized Dice scores, also accurately predicting valve location and topology. Overall, our work represents a critical first step towards automated MV assessment using DL in CMR, paving the way for improved clinical assessment in MV disease.
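
A Procrustes-based comparison of predicted and reference valve landmarks, as mentioned above, can be computed with SciPy as in the sketch below. The landmark coordinates are hypothetical and the exact metric definition used in the paper may differ.

```python
import numpy as np
from scipy.spatial import procrustes

# Hypothetical 2D landmark sets (e.g. annulus and leaflet-tip points) in image coordinates.
reference = np.array([[10.0, 12.0], [20.0, 15.0], [30.0, 11.0], [25.0, 25.0]])
predicted = reference + np.random.default_rng(0).normal(scale=0.5, size=reference.shape)

# Procrustes analysis removes translation, scale, and rotation before comparing shapes;
# `disparity` is the residual sum of squared differences (lower is better).
_, _, disparity = procrustes(reference, predicted)
print(f"Procrustes disparity: {disparity:.4f}")
```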

Advancing Intracranial Aneurysm Detection: A Comprehensive Systematic Review and Meta-analysis of Deep Learning Models Performance, Clinical Integration, and Future Directions.

Delfan N, Abbasi F, Emamzadeh N, Bahri A, Parvaresh Rizi M, Motamedi A, Moshiri B, Iranmehr A

PubMed · Jun 1, 2025
Cerebral aneurysms pose a significant risk to patient safety, particularly when ruptured, emphasizing the need for early detection and accurate prediction. Traditional diagnostic methods, reliant on clinician-based evaluations, face challenges in sensitivity and consistency, prompting the exploration of deep learning (DL) systems for improved performance. This systematic review and meta-analysis assessed the performance of DL models in detecting and predicting intracranial aneurysms compared to clinician-based evaluations. Imaging modalities included CT angiography (CTA), digital subtraction angiography (DSA), and time-of-flight MR angiography (TOF-MRA). Data on lesion-wise sensitivity, specificity, and the impact of DL assistance on clinician performance were analyzed. Subgroup analyses evaluated DL sensitivity by aneurysm size and location, and interrater agreement was measured using Fleiss' κ. DL systems achieved an overall lesion-wise sensitivity of 90% and specificity of 94%, outperforming human diagnostics. Clinician specificity improved significantly with DL assistance, increasing from 83% to 85% in the patient-wise scenario and from 93% to 95% in the lesion-wise scenario. Similarly, clinician sensitivity also showed notable improvement with DL assistance, rising from 82% to 96% in the patient-wise scenario and from 82% to 88% in the lesion-wise scenario. Subgroup analysis showed DL sensitivity varied with aneurysm size and location, reaching 100% for aneurysms larger than 10 mm. Additionally, DL assistance improved interrater agreement among clinicians, with Fleiss' κ increasing from 0.668 to 0.862. DL models demonstrate transformative potential in managing cerebral aneurysms by enhancing diagnostic accuracy, reducing missed cases, and supporting clinical decision-making. However, further validation in diverse clinical settings and seamless integration into standard workflows are necessary to fully realize the benefits of DL-driven diagnostics.
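
The interrater-agreement statistic reported above, Fleiss' κ, is available in statsmodels; the sketch below shows how it is computed from a case-by-category count table. The toy rating matrix (four readers rating each case as aneurysm present or absent) is hypothetical.

```python
import numpy as np
from statsmodels.stats.inter_rater import fleiss_kappa

# Rows are cases; columns count how many of the 4 readers chose each category
# ("aneurysm present" vs "absent") for that case. Each row sums to the number of raters.
ratings = np.array([
    [4, 0],
    [3, 1],
    [0, 4],
    [2, 2],
    [4, 0],
    [1, 3],
])
print(f"Fleiss' kappa: {fleiss_kappa(ratings):.3f}")
```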