Page 80 of 1331324 results

Fetal origins of adult disease: transforming prenatal care by integrating Barker's Hypothesis with AI-driven 4D ultrasound.

Andonotopo W, Bachnas MA, Akbar MIA, Aziz MA, Dewantiningrum J, Pramono MBA, Sulistyowati S, Stanojevic M, Kurjak A

PubMed | May 26, 2025
The fetal origins of adult disease hypothesis, widely known as Barker's Hypothesis, proposes that adverse fetal environments significantly increase the risk of developing chronic diseases, such as diabetes and cardiovascular conditions, in adulthood. Recent advancements in 4D ultrasound (4D US) and artificial intelligence (AI) technologies offer a promising avenue for improving prenatal diagnostics and validating this hypothesis. These innovations provide detailed insights into fetal behavior and neurodevelopment, linking early developmental markers to long-term health outcomes. This study synthesizes contemporary developments in AI-enhanced 4D US, focusing on its roles in detecting fetal anomalies, assessing neurodevelopmental markers, and evaluating congenital heart defects. The integration of AI with 4D US allows real-time, high-resolution visualization of fetal anatomy and behavior, surpassing the diagnostic precision of traditional methods. Despite these advancements, challenges such as algorithmic bias, data diversity, and real-world validation persist and require further exploration. Findings demonstrate that AI-driven 4D US improves diagnostic sensitivity and accuracy, enabling earlier detection of fetal abnormalities and optimization of clinical workflows. By providing a more comprehensive understanding of fetal programming, these technologies substantiate the links between early-life conditions and adult health outcomes, as proposed by Barker's Hypothesis. The integration of AI and 4D US has the potential to revolutionize prenatal care, paving the way for personalized maternal-fetal healthcare. Future research should focus on addressing current limitations, including ethical concerns and accessibility challenges, to promote equitable implementation. Such advancements could significantly reduce the global burden of chronic diseases and foster healthier generations.

Research-based clinical deployment of artificial intelligence algorithm for prostate MRI.

Harmon SA, Tetreault J, Esengur OT, Qin M, Yilmaz EC, Chang V, Yang D, Xu Z, Cohen G, Plum J, Sherif T, Levin R, Schmidt-Richberg A, Thompson S, Coons S, Chen T, Choyke PL, Xu D, Gurram S, Wood BJ, Pinto PA, Turkbey B

PubMed | May 26, 2025
A critical barrier to the deployment and utilization of artificial intelligence (AI) algorithms in radiology practice is the integration of algorithms directly into the clinical Picture Archiving and Communication System (PACS). Here, we sought to integrate an AI-based pipeline for prostate organ and intraprostatic lesion segmentation within a clinical PACS environment to enable point-of-care utilization under a prospective clinical trial scenario. A previously trained, publicly available AI model for segmentation of intraprostatic findings on multiparametric magnetic resonance imaging (mpMRI) was converted into a containerized environment compatible with MONAI Deploy Express. An inference server and a dedicated clinical PACS workflow were established within our institution to evaluate real-time use of the AI algorithm. PACS-based deployment was prospectively evaluated in two phases: first, in a consecutive cohort of patients undergoing diagnostic imaging at our institution, and second, in a consecutive cohort of patients undergoing biopsy based on mpMRI findings. The AI pipeline was executed from within the PACS environment by the radiologist, and AI findings were imported into clinical biopsy planning software for target definition. Metrics analyzing deployment success, timing, and detection performance were recorded and summarized. In phase 1, clinical PACS deployment was successfully executed in 57/58 cases, and results were obtained within one minute of activation (median 33 s [range 21-50 s]). Comparison with expert radiologist annotation demonstrated stable model performance relative to independent validation studies. In phase 2, 40/40 cases were successfully executed via PACS deployment and results were imported for biopsy targeting. Cancer detection rates were 82.1% for targets detected by both AI and the radiologist, 47.8% for targets proposed by AI and accepted by the radiologist, and 33.3% for targets identified by the radiologist alone.
Integration of novel AI algorithms requiring multiparametric input into a clinical PACS environment is feasible, and model outputs can be used for downstream clinical tasks.
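As a simple illustration, the per-category cancer detection rates reported above are ratios of cancer-positive targets to biopsied targets in each category. The counts below are hypothetical, chosen only to reproduce the reported percentages; the abstract does not state the underlying numerators and denominators.

```python
def detection_rate(cancers: int, targets: int) -> float:
    """Fraction of biopsied targets that yielded cancer."""
    return cancers / targets

# Hypothetical counts that reproduce the reported rates
rates = {
    "AI + radiologist": detection_rate(23, 28),   # ~82.1%
    "AI only, accepted": detection_rate(11, 23),  # ~47.8%
    "radiologist alone": detection_rate(3, 9),    # ~33.3%
}
```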

Optimizing MRI sequence classification performance: insights from domain shift analysis.

Mahmutoglu MA, Rastogi A, Brugnara G, Vollmuth P, Foltyn-Dumitru M, Sahm F, Pfister S, Sturm D, Bendszus M, Schell M

PubMed | May 26, 2025
MRI sequence classification becomes challenging in multicenter studies due to variability in imaging protocols, which leads to unreliable metadata and requires labor-intensive manual annotation. While numerous automated MRI sequence identification models are available, they frequently suffer from domain shift, which detrimentally impacts their accuracy. This study addresses domain shift, particularly from adult to pediatric MRI data, by evaluating the effectiveness of pre-trained models under these conditions. This retrospective, multicenter study explored the ability of a pre-trained convolutional model (ResNet) and a CNN-transformer hybrid model (MedViT) to handle domain shift. The study involved training ResNet-18 and MedViT models on an adult MRI dataset and testing them on a pediatric dataset, with expert domain knowledge adjustments applied to account for differences in sequence types. The MedViT model demonstrated superior performance compared to ResNet-18 and benchmark models, achieving an accuracy of 0.893 (95% CI 0.880-0.904). Expert domain knowledge adjustments further improved the MedViT model's accuracy to 0.905 (95% CI 0.893-0.916), showcasing its robustness in handling domain shift. Advanced neural network architectures like MedViT, combined with expert domain knowledge of the target dataset, significantly enhance the performance of MRI sequence classification models under domain shift conditions. By combining the strengths of CNNs and transformers, hybrid architectures offer enhanced robustness for reliable automated MRI sequence classification in diverse research and clinical settings.
Question Domain shift between adult and pediatric MRI data limits deep learning model accuracy, requiring solutions for reliable sequence classification across diverse patient populations.
Findings The MedViT model outperformed ResNet-18 in pediatric imaging; expert domain knowledge adjustment further improved accuracy, demonstrating robustness across diverse datasets.
Clinical relevance This study enhances MRI sequence classification by leveraging advanced neural networks and expert domain knowledge to mitigate domain shift, boosting diagnostic precision and efficiency across diverse patient populations in multicenter environments.
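The "expert domain knowledge adjustment" described above amounts to mapping target-domain (pediatric) sequence labels into the source (adult) label space before scoring the classifier. A minimal sketch, assuming hypothetical sequence names; the actual label taxonomy is not given in the abstract.

```python
# Hypothetical mapping: collapse pediatric-protocol sequence names
# into the adult training label space before scoring.
PEDIATRIC_TO_ADULT = {
    "t2_tse": "T2w",
    "t1_mprage": "T1w",
    "flair_3d": "FLAIR",
    "dwi_trace": "DWI",
}

def adjusted_accuracy(predicted, ground_truth, mapping=PEDIATRIC_TO_ADULT):
    """Map ground-truth labels into the training label space, then score."""
    mapped = [mapping.get(label, label) for label in ground_truth]
    correct = sum(p == g for p, g in zip(predicted, mapped))
    return correct / len(mapped)
```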

FROG: A Fine-Grained Spatiotemporal Graph Neural Network With Self-Supervised Guidance for Early Diagnosis of Alzheimer's Disease.

Zhang S, Wang Q, Wei M, Zhong J, Zhang Y, Song Z, Li C, Zhang X, Han Y, Li Y, Lv H, Jiang J

PubMed | May 26, 2025
Functional magnetic resonance imaging (fMRI) has demonstrated significant potential in the early diagnosis and study of the pathological mechanisms of Alzheimer's disease (AD). To fit subtle cross-spatiotemporal interactions and learn pathological features from fMRI, we propose a fine-grained spatiotemporal graph neural network with self-supervised learning (SSL) for the diagnosis and biomarker extraction of early AD. First, considering the spatiotemporal interaction of the brain, we designed two masks that leverage the spatial correlation and temporal repeatability of fMRI. Then, temporal gated inception convolution and graph scalable inception convolution were proposed for the spatiotemporal autoencoder to enhance subtle cross-spatiotemporal variation and learn noise-suppressed signals. Furthermore, a spatiotemporal scalable cosine error with high selectivity for signal reconstruction was designed in the SSL strategy to guide the autoencoder to fit fine-grained pathological features in an unsupervised manner. A total of 5,687 samples from four cross-population cohorts were involved. The accuracy of our model was 5.1% higher than that of state-of-the-art models, including four AD diagnostic models, four SSL strategies, and three multivariate time series models. The neuroimaging biomarkers were precisely localized to abnormal brain regions and correlated significantly with cognitive scales and biomarkers (p < 0.001). Moreover, AD progression was reflected in the mask reconstruction error of our SSL strategy. These results demonstrate that our model can effectively capture spatiotemporal and pathological features, providing a novel and relevant framework for the early diagnosis of AD based on fMRI.
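The two masks described above can be sketched as binary masks over an ROI-by-time fMRI matrix: one hides whole ROIs (exploiting spatial correlation across regions), the other hides whole timepoints (exploiting temporal repeatability). This is a minimal illustration of the masking idea, not the authors' implementation; the masking ratios are assumed.

```python
import numpy as np

def spatiotemporal_masks(n_rois, n_timepoints, spatial_ratio=0.3,
                         temporal_ratio=0.3, rng=None):
    """Build two boolean masks over an (ROI, time) matrix: False marks
    values hidden from the autoencoder during self-supervised training."""
    rng = np.random.default_rng(rng)
    # Spatial mask: hide a random subset of whole ROIs (all timepoints)
    spatial = np.ones((n_rois, n_timepoints), dtype=bool)
    hidden_rois = rng.choice(n_rois, int(spatial_ratio * n_rois), replace=False)
    spatial[hidden_rois, :] = False
    # Temporal mask: hide a random subset of whole timepoints (all ROIs)
    temporal = np.ones((n_rois, n_timepoints), dtype=bool)
    hidden_t = rng.choice(n_timepoints, int(temporal_ratio * n_timepoints),
                          replace=False)
    temporal[:, hidden_t] = False
    return spatial, temporal
```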

Segmentation of the Left Ventricle and Its Pathologies for Acute Myocardial Infarction After Reperfusion in LGE-CMR Images.

Li S, Wu C, Feng C, Bian Z, Dai Y, Wu LM

PubMed | May 26, 2025
Because it is associated with a higher incidence of left ventricular dysfunction and complications, segmentation of the left ventricle and its related pathological tissues (microvascular obstruction and myocardial infarction) from late gadolinium enhancement cardiac magnetic resonance (LGE-CMR) images is critically important. However, the main challenges are a lack of datasets, diverse lesion shapes and locations, extreme class imbalance, and severe overlap of intensity distributions. We first release an LGE-CMR benchmark dataset, LGE-LVP, containing 140 patients with left ventricular myocardial infarction and concomitant microvascular obstruction. Then, a progressive deep learning model, LVPSegNet, is proposed to segment the left ventricle and its pathologies via adaptive region-of-interest extraction, sample augmentation, curriculum learning, and multiple receptive field fusion to address these challenges. Comprehensive comparisons with state-of-the-art models on internal and external datasets demonstrate that the proposed model performs best on both geometric and clinical metrics and most closely matches clinicians' performance. Overall, the released LGE-LVP dataset and the proposed LVPSegNet offer a practical solution for automated segmentation of the left ventricle and its pathologies. The dataset and source code will be released via https://github.com/DFLAG-NEU/LVPSegNet.
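A standard remedy for the extreme class imbalance noted above is a Dice-based objective, which scores region overlap independently of class frequency. A minimal soft Dice sketch follows; the abstract does not state which loss LVPSegNet actually uses.

```python
import numpy as np

def dice_score(pred, target, eps=1e-6):
    """Soft Dice overlap between a predicted and a reference mask.
    Values near 1 mean strong overlap; the epsilon keeps the ratio
    defined when both masks are empty."""
    pred = np.asarray(pred, dtype=float).ravel()
    target = np.asarray(target, dtype=float).ravel()
    intersection = (pred * target).sum()
    return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)
```

A Dice *loss* is simply `1 - dice_score(...)`, often combined with cross-entropy in imbalanced segmentation tasks.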

Training a deep learning model to predict the anatomy irradiated in fluoroscopic x-ray images.

Guo L, Trujillo D, Duncan JR, Thomas MA

PubMed | May 26, 2025
Accurate patient dosimetry estimates from fluoroscopically guided interventions (FGIs) are hindered by limited knowledge of the specific anatomy that was irradiated. Current methods use data reported by the equipment to estimate the patient anatomy exposed during each irradiation event. We propose a deep learning algorithm to automatically match 2D fluoroscopic images with corresponding anatomical regions in computational phantoms, enabling more precise patient dose estimates. Our method involves two main steps: (1) simulating 2D fluoroscopic images, and (2) developing a deep learning algorithm to predict anatomical coordinates from these images. For step (1), we utilized DeepDRR for fast and realistic simulation of 2D x-ray images from 3D computed tomography datasets, generating a diverse set of simulated fluoroscopic images from various regions with different field sizes. In step (2), we employed a Residual Neural Network (ResNet) architecture combined with metadata processing to integrate patient-specific information (age and gender) and learn the transformation between 2D images and specific anatomical coordinates in each representative phantom. For the modified ResNet model, we defined an allowable error range of ± 10 mm. The proposed method achieved over 90% of predictions within ± 10 mm, with strong alignment between predicted and true coordinates confirmed by Bland-Altman analysis. Most errors were within ± 2%, with outliers beyond ± 5% occurring primarily in the Z-coordinates of infant phantoms due to their limited representation in the training data. These findings highlight the model's accuracy and its potential for precise spatial localization, while emphasizing the need for improved performance in specific anatomical regions. In this work, a comprehensive simulated 2D fluoroscopy image dataset was developed, addressing the scarcity of real clinical datasets and enabling effective training of deep learning models.
The modified ResNet successfully achieved precise prediction of anatomical coordinates from the simulated fluoroscopic images, enabling more accurate patient-specific dosimetry.
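The evaluation criteria described above, the fraction of coordinate predictions falling within ± 10 mm and Bland-Altman limits of agreement, can be computed as follows. This is a generic sketch of those two metrics, not the authors' evaluation code.

```python
import numpy as np

def within_tolerance(pred, true, tol=10.0):
    """Fraction of predictions whose error is within +/- tol (in mm)
    on every coordinate axis; pred and true are (N, 3) arrays."""
    ok = np.all(np.abs(pred - true) <= tol, axis=-1)
    return float(ok.mean())

def bland_altman(pred, true):
    """Bias and 95% limits of agreement between paired measurements."""
    diff = (np.asarray(pred) - np.asarray(true)).ravel()
    bias = diff.mean()
    sd = diff.std(ddof=1)
    return bias, bias - 1.96 * sd, bias + 1.96 * sd
```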

Applications of artificial intelligence in abdominal imaging.

Gupta A, Rajamohan N, Bansal B, Chaudhri S, Chandarana H, Bagga B

PubMed | May 26, 2025
The rapid advancements in artificial intelligence (AI) carry the promise to reshape abdominal imaging by offering transformative solutions to challenges in disease detection, classification, and personalized care. AI applications, particularly those leveraging deep learning and radiomics, have demonstrated remarkable accuracy in detecting a wide range of abdominal conditions, including but not limited to diffuse liver parenchymal disease, focal liver lesions, pancreatic ductal adenocarcinoma (PDAC), renal tumors, and bowel pathologies. These models excel at automating tasks such as segmentation, classification, and prognostication across modalities like ultrasound, CT, and MRI, often surpassing traditional diagnostic methods. Despite these advancements, widespread adoption remains limited by challenges such as data heterogeneity, lack of multicenter validation, reliance on retrospective single-center studies, and the "black box" nature of many AI models, which hinder interpretability and clinician trust. The absence of standardized imaging protocols and reference gold standards further complicates integration into clinical workflows. To address these barriers, future directions emphasize collaborative multicenter efforts to generate diverse, standardized datasets, integration of explainable AI frameworks into existing picture archiving and communication systems, and the development of automated, end-to-end pipelines capable of processing multi-source data. Targeted clinical applications, such as early detection of PDAC, improved segmentation of renal tumors, and refined risk stratification in liver diseases, show potential to sharpen diagnostic accuracy and therapeutic planning. Ethical considerations such as data privacy and regulatory compliance, together with interdisciplinary collaboration, are essential for successful translation into clinical practice.
AI's transformative potential in abdominal imaging lies not only in complementing radiologists but also in fostering precision medicine by enabling faster, more accurate, and patient-centered care. Overcoming current limitations through innovation and collaboration will be pivotal in realizing AI's full potential to improve patient outcomes and redefine the landscape of abdominal radiology.

Multimodal integration of longitudinal noninvasive diagnostics for survival prediction in immunotherapy using deep learning.

Yeghaian M, Bodalal Z, van den Broek D, Haanen JBAG, Beets-Tan RGH, Trebeschi S, van Gerven MAJ

PubMed | May 26, 2025
Immunotherapies have revolutionized the landscape of cancer treatment. However, our understanding of response patterns in advanced cancers treated with immunotherapy remains limited. By leveraging routinely collected, noninvasive, longitudinal, and multimodal data with artificial intelligence, we could unlock the potential to transform immunotherapy for cancer patients, paving the way for personalized treatment approaches. In this study, we developed a novel artificial neural network architecture, the multimodal transformer-based simple temporal attention (MMTSimTA) network, building upon a combination of recent successful developments. We integrated pre- and on-treatment blood measurements, prescribed medications, and CT-based organ volumes from a large pan-cancer cohort of 694 patients treated with immunotherapy to predict mortality at 3, 6, 9, and 12 months. Different variants of our extended MMTSimTA network were implemented and compared with baseline methods incorporating intermediate and late fusion-based integration. The strongest prognostic performance was demonstrated by a variant of the MMTSimTA model, with areas under the curve (AUCs) of 0.84 ± 0.04, 0.83 ± 0.02, 0.82 ± 0.02, and 0.81 ± 0.03 for 3-, 6-, 9-, and 12-month survival prediction, respectively. Our findings show that integrating noninvasive longitudinal data using our novel architecture yields improved multimodal prognostic performance, especially in short-term survival prediction. Our study demonstrates that multimodal longitudinal integration of noninvasive data using deep learning may offer a promising approach for personalized prognostication in immunotherapy-treated cancer patients.
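"Simple temporal attention" over longitudinal visits can be sketched as attention-weighted pooling of per-timepoint feature vectors: a scoring vector produces a softmax weight per visit, and the weighted sum becomes the patient representation. The vector `w` below stands in for learned parameters; this is a minimal illustration under those assumptions, not the published MMTSimTA layer.

```python
import numpy as np

def simple_temporal_attention(x, w):
    """Attention-weighted pooling over time.
    x: (T, d) features from T longitudinal visits; w: (d,) scoring vector.
    Returns a single (d,) pooled representation."""
    scores = x @ w                       # one scalar score per visit, (T,)
    scores = scores - scores.max()       # shift for numerical stability
    alpha = np.exp(scores) / np.exp(scores).sum()  # softmax weights, (T,)
    return alpha @ x                     # weighted sum of visit features
```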

Predicting Surgical Versus Nonsurgical Management of Acute Isolated Distal Radius Fractures in Patients Under Age 60 Using a Convolutional Neural Network.

Hsu D, Persitz J, Noori A, Zhang H, Mashouri P, Shah R, Chan A, Madani A, Paul R

PubMed | May 26, 2025
Distal radius fractures (DRFs) represent up to 20% of the fractures seen in the emergency department. Delays to surgery of more than 14 days are associated with poorer functional outcomes and increased health care utilization and costs. At our institution, the average time to surgery is more than 19 days because of the separation of surgical and nonsurgical care pathways and a lengthy referral process. To address this challenge, we aimed to create a convolutional neural network (CNN) capable of automating DRF x-ray analysis and triage. We hypothesized that this model would accurately predict whether an acute isolated DRF in a patient under the age of 60 years would be treated surgically or nonsurgically at our institution based on the radiographic input. We included 163 patients under the age of 60 years who presented to the emergency department between 2018 and 2023 with an acute isolated DRF and who were referred for clinical follow-up. Radiographs taken within 4 weeks of injury were collected in posterior-anterior and lateral views and preprocessed for model training. The surgeons' decision to treat surgically or nonsurgically at our institution was the reference standard for assessing the model's prediction accuracy. We included 723 posterior-anterior and lateral radiographic pairs (385 surgical and 338 nonsurgical) for model training. The best-performing model (seven CNN layers, one fully connected layer, an image input size of 256 × 256 pixels, and a 1.5× weighting for volarly displaced fractures) achieved 88% accuracy and 100% sensitivity. True positive (100%), true negative (72.7%), false positive (27.3%), and false negative (0%) rates were calculated. After training on institution-specific indications, a CNN-based algorithm can predict with 88% accuracy whether an acute isolated DRF in a patient under the age of 60 years will be treated surgically or nonsurgically.
By promptly identifying patients who would benefit from expedited surgical treatment pathways, this model can reduce referral times.
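The figures reported above relate as standard confusion-matrix summaries: sensitivity = TP/(TP+FN), specificity = TN/(TN+FP), accuracy = (TP+TN)/total. The counts below are hypothetical, chosen only to approximate the reported 100% sensitivity, 72.7% specificity, and 88% accuracy; the abstract does not give the test-set counts.

```python
def triage_metrics(tp, fn, tn, fp):
    """Sensitivity, specificity, and accuracy from confusion-matrix counts
    for the surgical-vs-nonsurgical triage task."""
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    accuracy = (tp + tn) / (tp + fn + tn + fp)
    return sensitivity, specificity, accuracy

# Hypothetical counts approximating the reported performance
sens, spec, acc = triage_metrics(tp=80, fn=0, tn=48, fp=18)
```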

AI in Orthopedic Research: A Comprehensive Review.

Misir A, Yuce A

PubMed | May 26, 2025
Artificial intelligence (AI) is revolutionizing orthopedic research and clinical practice by enhancing diagnostic accuracy, optimizing treatment strategies, and streamlining clinical workflows. Recent advances in deep learning have enabled the development of algorithms that detect fractures, grade osteoarthritis, and identify subtle pathologies in radiographic and magnetic resonance images with performance comparable to expert clinicians. These AI-driven systems reduce missed diagnoses and provide objective, reproducible assessments that facilitate early intervention and personalized treatment planning. Moreover, AI has made significant strides in predictive analytics by integrating diverse patient data, including gait and imaging features, to forecast surgical outcomes, implant survivorship, and rehabilitation trajectories. Emerging applications in robotics, augmented reality, digital twin technologies, and exoskeleton control promise to further transform preoperative planning and intraoperative guidance. Despite these promising developments, challenges such as data heterogeneity, algorithmic bias, the "black box" nature of many models, and the need for robust validation remain. This comprehensive review synthesizes current developments, critically examines limitations, and outlines future directions for integrating AI into musculoskeletal care.